CHANGES
=======

17.0.1
------

* Allow 'network' in RequestContext service\_catalog
* Check for multiattach before removing connections
* Pass user context to virt driver when detaching volume
* Handle spawning error on unshelving
* Imported Translations from Zanata
* compute: Cleans up allocations after failed resize
* Update noVNC deployment docs to mention non-US keymap fix in 1.0.0
* [placement] Add functional tests for traits API
* [placement] Add sending global request ID in put (3)

17.0.0
------

* libvirt: disconnect volume from host during detach
* Ensure attachment\_id always exists for block device mapping
* Add functional test for deleting BFV server with old attach flow
* Update plugs Contrail methods to work with privsep
* Only pull associated \*sharing\* providers
* Fix error handling in compute API for multiattach errors
* Detach volumes when deleting a BFV server pre-scheduling
* Add functional recreate test of deleting a BFV server pre-scheduling
* Clean up ports and volumes when deleting ERROR instance
* Add functional tests to ensure BDM removal on delete
* Store block device mappings in cell0
* Drop extra loop which modifies Cinder volume status
* Check quota before creating volume snapshots
* Add the ability to get absolute limits from Cinder
* Add resource\_class to fields in ironic node cache
* Lazy-load instance attributes with read\_deleted=yes
* unquiesce instance on volume snapshot failure

17.0.0.0rc2
-----------

* Add admin guide doc on volume multiattach support
* Cleanup the manage-volumes admin doc
* Don't JSON encode instance\_info.traits for ironic
* Use correct arguments in task inits
* Bindep does not catch missing libpcre3-dev on Ubuntu
* Fix docs for IsolatedHostsFilter
* Handle volume-backed instances in IsolatedHostsFilter
* Add regression test for BFV+IsolatedHostsFilter failure
* doc: fix the link for the evacuate cli
* Make bdms querying in multi-cell use scatter-gather and ignore down cell
* VGPU: Modify the example of vgpu white\_list set
* Refine waiting for vif plug events during \_hard\_reboot
* Update UPPER\_CONSTRAINTS\_FILE for stable/queens
* Update .gitreview for stable/queens

17.0.0.0rc1
-----------

* Encode libvirt domain XML in UTF-8
* Clean up reservations in migrate\_task call path
* Compute RPC client bump to 5.0
* Bump compute RPC API to version 5.0
* Fixed auto-convergence option name in doc
* Workaround glanceclient bug when CONF.glance.api\_servers not set
* Remove a duplicate colon
* TrivialFix: Add a blankline
* XenAPI: Provide support matrix and doc for VGPU
* Add a prelude release note for the 17.0.0 Queens GA
* Address comments from I51adbbdf13711e463b4d25c2ffd4a3123cd65675
* Add late server group policy check to rebuild
* Add regression test for bug 1735407
* Fix wrong link for "Manage Flavors" in CPU topologies doc
* Make sure that we have usable input for graphical console
* fix misspelling of 'projectUser'
* Test case: new standard resource class unusable
* Clarify CONF.scheduler.max\_attempts
* Add release note for Aggregate[Core|Ram|Disk]Filter change
* placement doc: Conflict caveat for DELETE APIs
* Trivial fix a missleading comment
* Provide support matrix and doc for VGPU
* doc: update the GPU passthrough HPC feature entry
* [placement] Add sending global request ID in put (2)
* [placement] Add sending global request ID in put (1)
* [placement] Add sending global request ID in post
* Update cells v2 layout doc caveats for Queens
* Zuul: Remove project name
* Doc: Nix os-traits link from POST resource\_classes
* docs: Add booting from an encrypted volume
* libvirt: fix native luks encryption failure to find volume\_id
* Don't wait for vif plug events during \_hard\_reboot
* Don't rely on parse.urlencode in url comparisons
* Reset the \_RC\_CACHE between tests
* Fix invalid UUIDs in test\_compute.py
* Fix the wrong description
* doc: placement upgrade notes for queens
* Add functional tests for traits-based scheduling
* Cleanup launch instance and manage IPs docs
* Migrate "launch instance" user guide docs
* Pass limit to /allocation\_requests
* doc: mark the max microversions for queens
* Updated from global requirements
* trivial: Fix few policy doc
* Query all cells for service version in \_validate\_bdm
* add "--until-complete" option for nova-manage db archive\_deleted\_rows
* Mention required traits in the flavors user docs
* Fix nits in support traits changes
* Log options at debug when starting API services under wsgi
* set\_{aggregates|traits}\_for\_provider: tolerate set
* ProviderTree.get\_provider\_uuids: Top-down ordering
* SchedulerReportClient.\_delete\_provider
* ComputeDriver.update\_provider\_tree()
* report client: get\_provider\_tree\_and\_ensure\_root
* trivial: Fix typos in release notes
* Use util.validate\_query\_params in list\_traits
* Add functional tests for virt driver get\_traits() method
* Implement get\_traits() for the ironic virt driver
* Add get\_traits() method to ComputeDriver
* [placement] Separate API schemas (resource\_provider)
* Fix invalid UUIDs in remaining tests
* ProviderTree.new\_child: parent is either uuid or name
* Add server filters whitelist in server api-ref
* reno for notification-transformation-queens
* Add the nova-multiattach job
* Collapse duplicate error handling in rebuild\_instance
* Rollback instance.image\_ref on failed rebuild
* Updated from global requirements
* SchedulerReportClient.set\_aggregates\_for\_provider
* Fix a comment in a notification functional test
* Bumping functional test job timeouts
* Remove deprecated policy items from fake\_policy
* Reduce policy deprecation warnings in test runs
* Fix the incorrect RST convention
* Fix SUSE Install Guide: Placement port
* Log the events we timed out waiting for while plugging vifs
* Reduce complexity of \_from\_db\_object

17.0.0.0b3
----------

* Ironic: Get IP address for volume connector
* Add release note for QEMU native LUKS decryption
* Fix missing 'if\_notifications\_enabled' decorator
* Fix missing marker functions
* Fix bug case by none token context
* Transform instance.resize\_prep notification
* Move remaining uses of parted to privsep
* Avoid suspending guest with attached vGPUs
* placement: enable required traits from the flavor extra specs
* placement: using the dict format for the allocation in claim\_resources
* Update VMWare vSphere link address
* Handle TZ change in iso8601 >=0.1.12
* Updated from global requirements
* Fix the order of target host checks
* Add the Nova libvirt StorPool attachment driver
* Expand on when you might want to set --max-count for map\_instances
* libvirt: pass the mdevs when rebooting the guest
* Set server status to ERROR if rebuild failed
* libvirt: QEMU native LUKS decryption for encrypted volumes
* Replace curly quotes with straight quotes
* Fix 'all\_tenants' & 'all\_projects' type in api-ref
* Use neutron port\_list when filtering instance by ip
* Start moving users of parted to privsep
* Add PowerVM to feature-classification
* Fix update\_cell to ignore existing identical cells
* Change compute RPC to use alternates for resize
* Report Client: PUT empty (not None) JSON data
* Send traits to ironic on server boot
* PowerVM Driver: SEA
* Recreate mediated devices on reboot
* [api] Allow multi-attach in compute api
* doc: Document TLS security setup for noVNC proxy
* placement: support traits in allocation candidates API
* api-ref: Fix parameter type in server-migrations.inc
* Transform instance-evacuate notification
* [placement] Add sending global request ID in delete (3)
* Add index(instance\_uuid, updated\_at) on instance\_actions table
* Fix 500 in test\_resize\_server\_negative\_invalid\_state
* Generalize DB conf group copying
* Track tree-associated providers in report client
* ProviderTree.populate\_from\_iterable
* Raise on API errors getting aggregates/traits
* Updated from global requirements
* Remove redundant swap\_volume tests
* Track associated sharing RPs in report client
* SchedulerReportClient.set\_traits\_for\_provider
* ProviderTree.data => ProviderData
* Cleanup redundant want\_version assignment
* Fix format in flavors.rst
* libvirt: Introduce disk encryption config classes
* libvirt: Collocate encryptor and volume driver calls
* libvirt: create vGPU for instance
* Deduplicate service status notification samples
* libvirt: don't attempt to live snapshot paused instances
* Pass multiattach flag to reserve\_block\_device\_name
* Handle swapping to a multiattach volume
* [libvirt] Allow multiple volume attachments
* trivial: Remove crud from 'conf.py'
* Fix openstackdocstheme options for api-ref
* Updated from global requirements
* [placement] Add functional tests for resource class API
* correct referenced url in comments
* Transform instance.resize\_confirm notification
* placement: \_get\_trees\_matching\_all\_resources()
* Account for deprecation of personality files
* PowerVM driver: ovs vif
* add \_has\_provider\_trees() utility function
* func tests for nested providers in alloc candidate
* Deduplicate aggregate notification samples
* Fix accumulated nits
* Make sure that functional test triggered on sample changes
* Add taskflow to requirements
* Updated from global requirements
* Enable py36 unit tests in tox
* Stop globally caching host states in scheduler HostManager
* make unit tests compatible with os-vif 1.8.0
* Remove unnecessary execute permissions in files
* [placement] Fix resource provider delete
* Transform rescue/unrescue instance notifications
* conf: Do not inherit image signature props with snapshots
* Track provider traits in report client
* Fix missing rps in allocation candidates
* Add aggregates check in allocation candidates
* Fix accumulated nits in refactor series
* Test helper: validate provider summaries
* Revert "Deduplicate service status notification samples"
* console: Provide an RFB security proxy implementation
* console: introduce the VeNCrypt RFB authentication scheme
* console: introduce framework for RFB authentication
* console: Send bytes to sockets
* Update links in documents
* Add a warning in 'nova-manage cell\_v2 delete\_cell'
* Modify the test case of get\_disk\_mapping\_rescue\_with\_config
* Rename block\_device\_info\_get\_root
* Increase notification wait timeout in functional tests
* [placement] Add sending global request ID in delete (2)
* Fix comment in MigrationSortContext
* Add index(updated\_at) on migrations table
* Add pagination and Changes-since filter support for os-migrations
* Deduplicate service status notification samples
* Add exception to no-upcall note of cells doc
* Fix typo in release note
* Add cross cell sort support for get\_migrations
* libvirt: add tests to check multipath in iscsi/fc volume connectors
* libvirt: test to make sure volume\_use\_multipath is properly used
* libvirt: use 'host-passthrough' as default on AArch64
* Add reference to policy sample
* Add an additional description for 'token\_ttl'
* Updated from global requirements
* Qualify the Placement 1.15 release note
* Add migration db and object pagination support
* Add regression test for resize failing during retries
* Fix race condition in retrying migrations
* libvirt: Provide VGPU inventory for a single GPU type
* Fix OpenStack capitalization
* Update FAQs about listing hosts in cellv2
* Add ConsoleAuthToken object
* Optionalize instance\_uuid in console\_auth\_token\_get\_valid()
* Add index on token\_hash and instance\_uuid for console\_auth\_tokens
* Add access\_url\_base to console\_auth\_tokens table
* Add debug output for selected page size
* Use method validate\_integer from oslo.utils
* conf: hyperv: fix a comment typo
* Remove a duplicate line in a unit test
* Use volume shared\_targets to lock during attach/detach
* Handle no allocations during migrate
* Add regression test for resizing failing when using CachingScheduler
* zuul: Move legacy jobs to project
* Imported Translations from Zanata
* log test: use fixtures.StandardLogging in setUp
* Fix up formatting for deprecate-api-extensions-policies release note
* Fix documentation nits in set\_and\_clear\_allocations
* Document lack of side-effects in AllocationList.create\_all()
* VMware: add support for different firmwares
* hyper-v: Deprecates support for Windows / Hyper-V Server 2012
* Use UEFI as the default boot for AArch64
* Don't log a warning for InstanceNotFound in detach\_interface
* manager: more detailed info of unsupported compute driver
* Add test for assignment of uuid to a deleted BDM
* Fix fake libvirt XML generation for disks
* Handle glance exception during rotating instance backup
* Move aggregates from report client to ProviderTree
* Only call numa\_fit\_instance\_to\_host if necessary
* Expose BDM uuid to drivers
* DriverBlockDevice: make subclasses inherit \_proxy\_as\_attr
* Add an online migration for BDM.uuid
* Address nits in I46d483f9de6776db1b025f925890624e5e682ada
* Add support for getting volume details with a specified microversion
* XenAPI: Unit tests must mock os\_xenapi calls
* Revert "Modify \_poll\_shelved\_instances periodic task call \_shelve\_offload\_instance()"
* Remove 'nova-manage host' and 'nova-manage agent'
* Remove 'nova-manage logs' command
* conf: Remove deprecated 'remap\_vbd\_dev' option
* api-ref: Fix incorrect parameter name
* [placement] Add sending global request ID in delete
* trivial: conf: libvirt: remove a redundant space
* Fix the formatting for 2.58 in the compute REST API history doc
* trivial: Modify signature of \_filter\_non\_requested\_pfs
* Add PCI NUMA policies
* Document testing guide for new API contributions
* trivial: use cn instead of rp
* Updated from global requirements
* Test allocation candidates: multiple aggregates
* Fix functional tests for USE\_NEUTRON
* Make conductor pass and use host\_lists
* Don't try to delete build request during a reschedule
* libvirt: don't log snapshot success unless it actually happens
* Add retry\_on\_deadlock decorator to action\_event\_start
* conf: libvirt: Cleanup CPU modelling related options
* Remove dead parameter from '\_create\_domain\_and\_network'
* Handle images with no data
* tests: Use correct response type in tests
* Remove the inherits parameter for the Resource object
* Remove the LoadedExtensionInfo object
* Initialize osprofiler in WSGI application
* doc: update supported drivers for cpu topology
* Do not set allocation.id in AllocationList.create\_all()
* [placement] Fix getting placement request ID
* [placement] Enable limiting GET /allocation\_candidates
* Pass RequestSpec to ConductorTaskAPI.build\_instances
* Fix an error in \_get\_host\_states when deleting a compute node
* Provide example for placement last-modified header of now
* objects: Add PCI NUMA policy fields
* Workaround missing RequestSpec.project\_id when moving an instance
* Use instance.project\_id when creating request specs for old instances
* Fix duplicate allocation candidates
* trivial: conf: libvirt: fix a typo
* Remove extensions module
* Fix 4 doc typos
* Fix false positive server group functional tests
* Updated from global requirements
* XenAPI: create vGPU for instance
* update\_cell allows more than once cell to have the same db/transport url
* [placement] Add x-openstack-request-id in API ref
* [placement] Separate API schemas (allocation\_candidate)
* [placement] Separate API schemas (allocation)
* Implement set\_and\_clear\_allocations in report client
* Make BlockDeviceMapping object support uuid
* Add uuid column to BlockDeviceMapping
* Remove unused argument from LibvirtDriver.\_disconnect\_volume
* Removed unused argument from LibvirtDriver.\_connect\_volume
* Fix unit test failures when direct IO not supported
* [placement] Separate API schemas (resource\_class)
* Updated from global requirements
* Deduplicate functional test code
* Aggregate ops on ProviderTree
* Implement query param schema for migration index
* Make request\_spec.spec MediumText
* Fix the formatting for 2.56 in the compute REST API history doc
* Delete the TypeAffinityFilter
* live-mig: keep disk device address same
* Traits ops on ProviderTree
* SchedulerReportClient.\_get\_providers\_in\_aggregates
* [placement] Separate API schemas (inventory)
* [placement] Separate API schemas (aggregate)
* [placement] Separate API schemas (trait)
* [placement] Separate API schemas (usage)
* Fix the bug report link of API Guide
* Extract instance allocation removal code
* Test alloc\_cands with one RP shared between two RPs
* Test alloc\_cands with non overlapping sharing RPs
* handle traits with sharing providers
* Fix possible TypeError in VIF.fixed\_ips
* Add pagination and changes-since for instance-actions
* Updated common create server sample request because of microversion 2.57
* Fix some typos in nova doc
* Retry \_trait\_sync on deadlock
* Remove unnecessary connector stash in attachment\_update
* Pass mountpoint to volume attachment\_create with connector
* Pass bdms to versioned notifications during finish\_revert\_resize
* Update and complete volume attachments during resize
* Pass mountpoint to volume attachment\_update
* Don't persist could-be-stale InstanceGroup fields in RequestSpec
* Update nova-status and docs for nova-compute requiring placement 1.14
* Wait for live\_migration\_rollback.end notification
* Some nit fix in multi\_cell\_list
* Raise MarkerNotFound if BuildRequestList.get\_by\_filters doesn't find marker
* Move flushing block devices to privsep
* Convert ext filesystem resizes to privsep
* [placement] Add info about last-modified to contrib docs
* [placement] Add cache headers to placement api requests
* Stabilize test\_live\_migration\_abort func test
* doc: add note about fixing admin-only APIs without a microversion
* Deprecate file injection
* VMware: implement get\_inventory() driver method
* VMware: expose max vCPUs and max memory per ESX host
* VMware: fix memory stats
* api-ref: Fix a description for 'guest\_format'
* Move the claim\_resources method to scheduler utils
* Change RPC for select\_destinations()
* Re-use existing ComputeNode on ironic rebalance
* placement: skip authentication on root URI
* Add instance action db and obj pagination support
* Update Instance action's updated\_at when action event updated
* Make live migration hold resources with a migration allocation
* Add instance action record for snapshot instances
* Add quiesce and unquiesce in support matrix
* libvirt: throw NotImplementedError if qga is not responsive when setting password
* [placement] Fix API reference for microversion 1.14
* Unmap compute nodes when deleting host mapping
* Follow up on removing old-style quotas code
* Add API and nova-manage tests that use the NoopQuotaDriver
* Add instance action record for backup instances
* Don't launch guestfs in a thread pool if guestfs.debug is enabled
* Remove confusing comment in compute\_node\_get API method
* [placement] add name to resource provider create error
* Improve error message on invalid BDM fields
* doc: link in some Sydney summit content
* trivial: more suitable log in set\_admin\_password
* Add support for listing hosts in cellv2
* [placement] Add 'Location' parameters in API ref
* [placement] Object changes to support last-modified headers

17.0.0.0b2
----------

* Implement new attach Cinder flow
* Add new style volume attachment support to block\_device.py
* SchedulerReportClient.\_get\_providers\_in\_tree
* Modify select\_destinations() to return objects and alts
* Move the to\_dict() method to the Selection object
* Return Selection objects from the scheduler driver
* Refactor the code to check for sufficient hosts
* Fix 'force' parameter in os-quota-sets PUT schema
* Reformat \_get\_all\_with\_shared
* Updated from global requirements
* Deprecate configurable Hide Server Address Feature
* XenAPI: update the picture in Xen hypervisor document
* Deprecate API extensions policies
* Avoid stashed connector lookup for new style detach
* placement: update client to set parent provider
* Scheduler set\_inventory\_for\_provider does nested
* placement: adds REST API for nested providers
* placement: allow filter providers in tree
* XenAPI: Don't use nicira-iface-id for XenServer VIF
* archive\_deleted\_instances is not atomic for insert/delete
* Remove the unused request\_id filter from api-paste.ini
* Add a new check to volume attach
* Add instance action record for shelve\_offload instances
* Modify \_poll\_shelved\_instances periodic task call \_shelve\_offload\_instance()
* Add Selection objects
* Fix doubling allocations on rebuild
* Add PowerVM to compute\_driver options
* Updated from global requirements
* Fix wrong argument order in functional test
* [placement] Fix an error message in API validation
* Transform instance.resize\_revert notification
* Mention API behavior change when over quota limit
* [placement] Fix foreign key constraint error
* [placement] Add aggregate link note in API ref
* Fail fast if changing image on a volume-backed server rebuild
* Get original image\_id from volume for volume-backed instance rebuild
* Add regression test for rebuilding a volume-backed server
* ProviderTree.get\_provider\_uuids()
* Fix cellsv1 messaging test
* Make \_Provider really private
* Split instance\_list into instance and multi\_cell
* Genericify the instance\_list stuff
* Remove 'nova-manage account' and 'nova-manage project'
* Remove 'nova-manage shell' command
* Updated from global requirements
* Fixes 'Not enough available memory' log message
* Only log not correcting allocation once per period
* Add description for resource class creation
* Trivial: Nix duplicate PlacementFixture() in test
* [placement] re-use existing conf with auth token middleware
* Fix disk size during live migration with disk over-commit
* Use ksa adapter for keystone conf & requests
* Downgrade log for keystone verify client fail
* [placement]Enhance doc for placement allocation list
* Update description of Rebuild in server\_concepts.rst
* Use oslo\_db Session in resource\_provider.py
* VMware: Handle concurrent registrations of the VC extension
* Proper error handling by \_ensure\_resource\_provider
* Refactor placement version check
* Nix log translations from scheduler.client.report
* Remove old-style quotas code
* Remove direct usage of glance.generate\_image\_url
* remove glance usage inside compute
* Assert that we restrict cold migrations to the same cell
* [placement] Fix format in placement API ref
* Enable cold migration with target host(2/2)
* qemu-img do not use cache=none if no O\_DIRECT support
* remove reserve\_quota\_delta
* Raise specific exception when swapping migration allocations fails
* Fix ValueError when loading old pci device record
* Updated from global requirements
* Remove the objects for describing the extension for v2.1 API
* Remove the objects which related to the old v2 API implementation
* Updated from global requirements
* Save updated libvirt domain XML after swapping volume
* placement: add nested resource providers
* Deprecate the IronicHostManager
* Fix some incorrect option references for scheduler filters
* Remove deprecated TrustedFilter
* Fix NoneType error when [service\_user] is misconfigured
* check query param for server groups function
* Deduplicate instance.create notification samples
* Nits from Ic3ab7d60e4ac12b767fe70bef97b327545a86e74
* [placement] Fix GET PUT /allocations nits
* [placement] POST /allocations to set allocations for >1 consumers
* Add instance action record for lock/unlock instances
* XenAPI: provide vGPU inventory in compute node
* XenAPI: get vGPU stats from hypervisor
* Add 'all\_tenants' for GET sec group api ref
* Update the documentation links
* Add instance action record for attach/detach/swap volumes
* Add regression test for rebuild with new image doubling allocations
* Refined fix for validating image on rebuild
* Address nits from service create/destroy notification review
* Versioned notifications for service create and delete
* Remove unnecessary self.flags and ConfPatcher
* Implement query param schema for delete assisted vol
* Add ProviderSummary.resource\_class\_names @property
* required traits for no sharing providers
* Fix invalid minRam error message
* finish refactor AllocCandidates.\_get\_by\_filters()
* PowerVM support matrix update
* Fix the format file name
* Simplify BDM boot index checking
* Remove unused global variables
* Updated from global requirements
* Implement query param schema for flavor index
* Implement query param schema for fping index
* Implement query param schema for sec group APIs
* Finish stestr migration
* Fix incorrect known vcpuset when CPUPinningUnknown raised
* Enable cold migration with target host(1/2)
* Update server query section in the API concept doc
* [placement] Add 'CUSTOM\_' prefix description in API ref
* [placement] Fix parameter order in placement API ref
* Remove 'nova-manage quota refresh' command
* Api-guide: Address TODOs in user\_concepts section
* Update server status api guide
* Api guide:add Server Consoles
* Update Metadata api section of api guide
* Implement query param schema for simple\_tenant\_usage
* Transform instance-live\_migration\_pre notification
* Use FakeLiveMigrateDriver in notification test
* Change live\_migrate tests to use fakedriver
* Test resource allocation during soft delete
* factor out compute service start in ServerMovingTest
* Moving more utils to ProviderUsageBaseTestCase
* Don't overwrite binding-profile
* Fix TypeError of \_get\_project\_id when project\_id is None
* Regenerate and pass configdrive when rebuild Ironic nodes
* Update bindep.txt for doc builds
* [placement] Symmetric GET and PUT /allocations/{consumer\_uuid}
* Service token is not experimental
* Use ksa adapter for neutron client
* Get auth from context for glance endpoint
* vgpu: add enabled white list
* cleanup mapping/reqspec after archive instance
* Fix the usage of instance.snapshot notification sample
* Update document related to host aggregate
* api-ref: Add a description of 'key\_name' in rebuild
* api-ref: Fix an example in "Delete Assisted Volume Snapshot"
* Use the RequestSpec when getting scheduler\_hints in compute
* Add migration\_get\_by\_uuid in db api
* Add instance action record for attach/detach interface
* placement: Document request headers in api-ref
* Deduplicate keypair notification samples
* Include project\_id and user\_id in AllocationList.get\_all\_by\_consumer\_id
* Clean up exception caught in \_validate\_and\_build\_base\_options
* Implement query param schema for volume, snapshot API
* Implement query param schema for quota set APIs
* api-ref: fix the type on the block\_device\_mapping\_v2 parameter
* placement: Document \`in:\` prefix for ?member\_of=
* libvirt: Re-initialise volumes, encryptors, and vifs on hard reboot
* VMware: serial console log (completed)
* PowerVM Driver: config drive
* Fix TypeError in nova-manage db archive\_deleted\_rows
* Remove setting of version/release from releasenotes
* Fix the formatting for the 2.54 microversion REST API version history
* hyper-v: Do not allow instances with pinned CPUs to spawn
* Updated from global requirements
* Add microversion to allow setting flavor description
* Fix docstring for GET /os-migrations and related DB API
* Add a note about versioned notification samples being per-release
* Document the real behavior of notify\_on\_state\_change
* Use NoDBTestCase for powervm driver tests
* create allocation request for single provider
* build alloc request resources for shared resources
* build ProviderSummary objects in sep function
* begin refactor AllocCandidates.\_get\_by\_filters()
* Add security release note for OSSA-2017-005
* Add error message on metadata API
* api-ref: make a note about os:scheduler\_hints being a top-level key
* doc: fix link to creating unit tests in contributor guide
* Validate new image via scheduler during rebuild
* Add FlavorPayload.description for versioned notifications
* placement: AllocCands.get\_by\_{filters => requests}
* Deduplicate server\_group samples
* Correct log message when removing a security group
* Updated from global requirements
* Enable reset keypair while rebuilding instance
* Test allocation\_candidates with only sharing RPs
* Test alloc candidates with same RC in cn & shared
* rt: Make resource tracker always invoking get\_inventory()
* Revert "Don't overwrite binding-profile"
* Cleanup build\_request\_spec
* Refactor test\_allocation\_candidates
* block\_device\_mapping\_v2.bus\_type is missing from api-ref
* Remove incorrect comment about instance.locked
* Don't overwrite binding-profile
* Do not use “-y” for package install
* [placement] set accept to application/json if accept not set
* [placement] Fix a wrong redirection in placement doc
* Handle InstanceNotFound when setting password via metadata
* Extract allocation candidates functional tests
* Deduplicate instance.reboot notification samples
* Deduplicate instance.live\_migration notification samples
* Deduplicate instance.interface\_attach samples
* Deduplicate instance.power-off notification samples
* Transform instance-live\_migration\_abort notification
* Deduplicated instance.(un)pause notification samples
* Factor out duplicated notification sample data (2)
* Move last\_bytes into the path module
* Fix test\_get\_volume\_config method
* Fix missing versioned notification sample
* Clean up allocations if instance deleted during build
* Avoid deleting allocations for instances being built
* libvirt: remove old code in post\_live\_migration\_at\_destination
* Using --option ARGUMENT
* Add Flavor.description attribute
* Modify incorrect debug meaasge in \_inject\_data
* Avoid redundant security group queries in GET /servers/{id}/os-security-groups
* Update contributor microversion doc for compute
* Updated from global requirements
* Granularize resources\_from\_{flavor|request\_spec}
* Parse granular resources/traits from extra\_specs
* placement: Parse granular resources & traits
* RequestGroup class for placement & consumers
* Factor out duplicated notification sample data
* libvirt: Don't VIR\_MIGRATE\_NON\_SHARED\_INC without migrate\_disks
* libvirt: do unicode conversion for error messages
* Fix return type in FilterScheduler.\_legacy\_find\_hosts
* Implement power\_off/power\_on for the FakeDriver
* Remove instance.keypairs migration code
* conf: Validate '[api] vendordata\_providers' options
* conf: Remove 'vendordata\_driver' opt
* Trivial grammar fix
* Fix warning on {'cell\_id': 1} is an invalid UUID
* Move contrail vif plugging to privsep
* Move plumgrid vif plugging to privsep
* Move midonet vif plugging to privsep
* Move infiniband vif plugging to privsep
* Remove compatibility method from FlavorPayload
* placement: Contributor doc microversion checklist
* libvirt: do not remove inst\_base when volume-backed during resize
* Refactor claim\_resources() to use retries decorator
* Make put\_allocations() retry on concurrent update
* [placement] avoid case issues microversions in gabbits
* Fix format in live-migration-usage.rst
* Don't update RT in \_allocate\_network
* Transform keypair.import notification
* api-ref: document caveats with scheduler hints
* add whereto for testing redirect rules
* rp: break functions out of \_set\_traits()
* Use Migration object in ComputeManagerMigrationTestCase
* check query param for used\_limits function
* VMware: add support for graceful shutdown of instances
* Pass requested\_destination in filter\_properties
* Functional regression test for evacuate with a target
* Fix indent in configuring-migrations.rst
* XenAPI: resolve VBD unplug failure with VM\_MISSING\_PV\_DRIVERS error
* libvirt: properly decode error message from qemu guest agent
* Use ksa adapter for placement conf & requests
* Only filter/weigh hosts once if scheduling a single instance
* Update placement api-ref: allocations link in 1.11
* rt: Implement XenAPI get\_inventory() method
* Fix instance lookup in hide\_server\_addresses extension
* libvirt: remove extraneous retry assignment in cleanup method
* libvirt: Don't disregard cache mode for instance boot disks
* Fix live migration grenade ceph setup
* Pass the correct image to build\_request\_spec in conductor.rebuild\_instance
* rp: remove \_HasAResourceProvider mixin
* rp: move RP.\_set\_traits() to module scope
* rp: Remove RP.get\_traits() method
* [placement] Limit number of attempts to delete allocations
* [placement] Allow \_set\_allocations to delete allocations
* conf: Move additional nova-net opts to 'network'
* Do not attempt volume swap when guest is stopped/suspended
* Convert IVS VIF plugging / unplugging to privsep
* Move blkid calls to privsep
* trivial: Rename 'policy\_check' -> 'policy'
* test: Store the OutputStreamCapture fixture
* Accept all standard resource classes in flavor extra specs
* Fix AttributeError in BlockDeviceMapping.obj\_load\_attr
* Move project\_id and user\_id to Allocation object
* VGPU: Define vgpu resource class
* Make migration uuid hold allocations for migrating instances
* Fix wrapping of neutron forbidden error
* Import user-data page from openstack-manuals
* Import the config drive docs from openstack-manuals
* Move kpartx calls to privsep
* Move nbd commands to privsep
* Move loopback setup and removal to privsep
* Move the idmapshift binary into privsep
* Include /resource\_providers/uuid/allocations link
* xenapi: cached images should be cleaned up by time
* Add test so we remember why CUSTOM\_ prefix added
* Move xend existence probes to privsep
* Move shred to privsep
* Add alternate hosts
* Implement query param schema for host index
* conf: Remove deprecated 'null\_kernel' opt
* Adds 'sata' as a valid disk bus for qemu and kvm hypervisors
* propagate OSError to MigrationPreCheckError
* Trivial: fix spelling of allocation\_request
* Transform instance.trigger\_crash\_dump notification
* Add debug information to metadata requests

17.0.0.0b1
----------

* placement: integrate ProviderTree to report client
* [Trivial] Fix up a docstring
* Remove duplicate error info
* [placement] Clean up TODOs in allocations.yaml gabbit
* Add attachment\_get to refresh\_connection\_info
* Add 'delete\_host' command in 'nova-manage cell\_v2'
* Keep updating allocations for Ironic
* docs: Explain the flow of the "serial console" feature
* Send Allocations to spawn
* Move lvm handling to privsep
* Cleanup mount / umount and associated rmdir calls
* Update live migration to use v3 cinder api
* placement: set/check if inventory change in tree
* Move restart\_compute\_service to a common place
* Fix nova-manage commands that do not exist
* fix cleaning up evacuated instances
* doc: Fix command output in scheduler document
* Refactor resource tracker to account for migration allocations
* Revert allocations by migration uuid
* Split get\_allocations\_for\_instance() into useful bits
* Regenerate context during targeting
* Pick ironic nodes without VCPU set
* Don't use mock.patch.stopall
* Move test\_uuid\_sentinels to NoDBTestCase
* [placement] Confirm that empty resources query causes 400
* [placement] add coverage for update of standard resource class
* api-ref: add warning about force evacuate for ironic
* Add snapshot id to the snapshot notifications
* Reproduce bug 1721652 in the functional test env
* Add 'done' to migration\_get\_in\_progress\_by\_host\_and\_node filter
* Update "SHUTOFF" description in API guide
* api-ref: fix server status values in GET /servers docs
* Fix connection info refresh for reboot
* rp: rework AllocList.get\_all\_by\_consumer\_id()
* rp: fix up AllocList.get\_by\_resource\_provider\_uuid
* rp: remove ability to delete 1 allocation record
* rp: remove dead code in Allocation.\_create\_in\_db()
* rp: streamline InventoryList.get\_all\_by\_rp\_uuid()
* rp: remove CRUD operations on Inventory class
* Make expected notifications output easier to read in tests
* Elevate existing RequestContext to get bandwidth usage
* Fix target\_cell usage for scatter\_gather\_cells
* Nix bug msg from ConfGroupForServiceTypeNotFound
* nova-manage map\_instances is not using the cells info from the API database
* Updated from global requirements
* Update cinder in RequestContext service catalog
* Target context for build notification in conductor
* Don't fix protocol-less glance api\_servers anymore
* Move user\_data max length check to schema
* Remove unnecessary BDM destroy during instance delete
* rp: Move RP.\_get|set\_aggregates() to module scope
* rp: de-ORM ResourceProvider.get\_by\_uuid()
* use already loaded BDM in instance.create
* use already loaded BDM in instance. (2)
* use already loaded BDM in instance.
* Remove dead code of api.fault notification sending
* Fix sending legacy instance.update notification
* doc: Rework man pages
* Fix typo in test\_prep\_resize\_errors\_migration
* Fix minor input items from previous patches
* nova.utils.get\_ksa\_adapter()
* De-duplicate \_numa\_get\_flavor\_XXX\_map\_list
* hardware: Flatten functions
* Update libvirt volume drivers to use os-brick constants
* Always put 'uuid' into sort\_keys for stable instance lists
* Fix instance\_get\_by\_sort\_filters() for multiple sort keys
* Deprecate allowed\_direct\_url\_schemes and nova.image.download.modules
* Add error notification for instance.interface\_attach
* Note TrustedFilter deprecation in docs
* Make setenv consistent for unit, func, and api-samples
* Blacklist test\_extend\_attached\_volume from cells v1 job
* Pre-create migration object
* Remove metadata/system\_metadata filter handling from get\_all
* fix unstable shelve offload functional tests
* TrivialFix: Fix the incorrect test case
* stabilize test\_resize\_server\_error\_and\_reschedule\_was\_failed
* api-ref: note that project\_id filter only works with all\_tenants
* Avoid redundant BDM lookup in check\_can\_live\_migrate\_source
* Only query BDMs once in API during rebuild
* Make allocation cleanup honor new by-migration rules
* Modernize set\_vm\_state\_and\_notify
* Remove system\_metadata loading in Instance.\_load\_flavor
* Stop joining on system\_metadata when listing instances
* Remove old compat code from servers ViewBuilder.\_get\_metadata
* Remove unused get\_all\_instance\_\*metadata methods
* doc: Add documentation for cpu\_realtime, cpu\_realtime\_mask
* Remove 400 as expected error
* Remove doc todo related to bug/1506667
* api-ref: add note about rebuild not replacing volume-backed root disk
* api-ref: remove redundant preserve\_ephemeral mention from rebuild docs
* [placement] gabbi tests for shared custom resource class
* Update RT aggregate map less frequently
* libvirt: add method to configure migration speed
* Set migration object attributes for source/dest during live migrate
* Refactor duplicate code for looking up the compute node name
* Fix CellDatabases fixture swallowing exceptions
* Use improved instance\_list module in compute API
* Fix a pagination logic bug in test\_bug\_1689692
* Add hints to what the Migration attribute values are
* Move cell0 marker test to Cellsv1DeprecatedTestMixIn
* Ensure instance can migrate when launched concurrently
* console: introduce basic framework for security proxying
* [placement] Update the placement deployment instructions
* Move allocation manipulation out of drop\_move\_claim()
* Do not monkey patch eventlet in unit tests
* Do not setup conductor in BaseAPITestCase
* Make etree.tostring() emit unicode everywhere
* Fix inconsistency of 'NOTE:' description
* Don't shell out to mkdir, use ensure\_tree()
* Read from console ptys using privsep
* Move ploop commands to privsep
* Set group\_members when converting to legacy request spec
* Support qemu >= 2.10
* Fix policy check performance in 2.47+
* doc: make host aggregates examples more discoverable
* Remove dest node allocations during live migration rollback
* Fix race in delete allocation in ServerMovingTests
* xenapi: pass migrate\_data to recover\_method if live migrate fails
* \_rollback\_live\_migration in live-migration seqdiag
* Log consumer uuid when retrying claims in the scheduler
* Add recreate test for live migrate rollback not cleaning up dest allocs
* Add slowest command to tox.ini
* Make TestRPC inherit from the base nova TestCase
* Ensure errors\_out\_migration errors out migration
* use context mgr in instance.delete
* Implement query param schema for GET hypervisor(2.33)
* Remove SCREEN\_LOGDIR from devstack install setting
* Fix --max-count handling for nova-manage cell\_v2 map\_instances
* Set the Pike release version for scheduler RPC
* Add functional for live migrate delete
* Fix IoOpsFilter test case class name
* Add get\_node\_uuid() helper to ResourceTracker
* Live Migration sequence diagram
* Deprecate idle\_timeout in api\_database
* cleanup test-requirements
* Add 400 as error code for resource class delete
* Implement query param schema for agent index
* fix nova accepting invalid availability zone name with ':'
* check query param for service's index function
* Remove useless periodic task that expires quota reservations
* Add attachment\_get call to volume/cinder\_api
* Add functional migrate force\_complete test
* Copy some tests to a cellsv1 mixin
* Add get\_instance\_objects\_sorted()
* Make 'fault' a valid joined query field for Instance
* Change livesnapshot to true by default
* docs: Rename cellsv2\_layout -> cellsv2-layout
* Add datapath type information to OVS vif objects
* libvirt: Make 'get\_domain' private
* Fix 500 if list servers called with empty regex pattern
* Vzstorage: synchronize volume connect
* Add \_wait\_for\_action\_fail\_completion to InstanceHelperMixin
* Remove allocations when unshelve fails on host
* Updated from global requirements
* Add instance.interface\_detach notification
* Add default configuration files to data\_files
* Remove method "\_get\_host\_ref\_from\_name"
* Add a regression test for bug 1718455
* Add recreate test for unshelve offloaded instance spawn fail
* Add PowerVM hypervisor configuration doc
* Add tests to validate instance\_list handles faults correctly
* Add fault-filling into instance\_get\_all\_by\_filters\_sort()
* Support pagination in instance\_list
* Add db.instance\_get\_by\_sort\_filters()
* Make instance\_list honor global query limit
* Add base implementation for efficient cross-cell instance listing
* Fix hyperlinks in document
* api-ref: fix default sort key when listing servers
* Add instance.interface\_attach notification
* libvirt: bandwidth param should be set in guest migrate
* Updated from global requirements
* Add connection pool size to vSphere settings
* Add live.migration.force.complete to the legacy notification whitelist
* Restore '[vnc] vnc\_\*' option support
* neutron: handle binding:profile=None during migration
* doc: Add documentation for emulator\_thread\_policy
* doc: Split flavors docs into admin and user guides
* VMware: Factor out relocate\_vm()
* remove re-auth logic for ironic client wrapper
* hyperv: report disk\_available\_least field
* Allow shuffling hosts with the same best weight
* Enable custom certificates for keystone communication
* Fix the ocata config-reference URLs
* Fix a typo
* Account for compute.metrics.update in legacy notification whitelist
* use unicode in tests to avoid SQLA warning
* Move libvirts dmcrypt support to privsep
* Squash dacnet\_admin privsep context
* Squash dac\_admin privsep context
* Move the dac\_admin privsep code to a new location
* Use symbolic names for capabilities, expand sys\_admin context
* stabilize test\_resize\_server\_error\_and\_reschedule\_was\_failed
* Updated from global requirements
* Drop support for the Cinder v2 API
* Remove 400 as expected error
* Set error state after failed evacuation
* Add @targets\_cell for live\_migrate\_instance method in conductor
* [placement] Removing versioning from resource\_provider objects
* doc: rename the Indices and Tables section
* doc: Further cleanup of doc contributor guide
* [placement] Unregister the ResourceProvider object
* [placement] Unregister the ResourceProviderList object
* [placement] Unregister the Inventory object
* [placement] Unregister the InventoryList object
* [placement] Unregister the Allocation object
* [placement] Unregister the AllocationList object
* [placement] Unregister the Usage object
* [placement] Unregister the UsageList object
* [placement] Unregister the ResourceClass object
* [placement] Unregister the ResourceClassList object
* [placement] Unregister the Trait object
* [placement] Unregister the TraitList object
* Add '\_has\_qos\_queue\_extension' function
* Add '\_has\_dns\_extension' function
* Assume neutron auto\_allocate extension's enabled
* Add single quotes for posargs on jobs
* Add nova-manage db command for ironic flavor migrations
* enhance api-ref for os-server-external-events
* Have one list of reboot task\_states
* Call terminate\_connection when shelve\_offloading
* Revert "Enable test\_iscsi\_volume in live migration job"
* Target context when setting instance to ERROR when over quota
* Cleanup running of osprofiler tests
* Fix test runner config issues with os-testr 1.0.0
* Fix missed chown call
* Updated from global requirements
* Tweak connection\_info translation for the new Cinder attach/detach API
* Add attachment\_complete call to volume/cinder.py
* Remove dest node allocation if evacuate MoveClaim fails
* Add a test to make sure failed evacuate cleans up dest allocation
* Add recreate test for evacuate claim failure
* Create allocations against forced dest host during evacuate
* fake\_notifier: Refactor wait\_for\_versioned\_notification
* Transform instance.resize.error notifications
* Update docs to include standardization of VM diagnostics
* Refactor ServerMovingTests for non-move tests
* Remove deprecated keymgr code
* Move execs of tee to privsep
* Add ComputeNodeList.get\_by\_hypervisor\_type()
* Split out the core of the ironic flavor migration
* Fix binary name
* Revert "Revert "Fix AZ related API docs""
AZ related API docs"" * [placement] Correct a comment in \_set\_allocations * Remove Xen networking plugin * Revert "Fix AZ related API docs" * [placement] correct error on bad resource class in allocation * api-ref: note the microversions for GET /resource\_providers query params * doc: fix flavor notes * Fix AZ related API docs * Transform aggregate.remove\_host notification * Transform servergroup.delete notification * Transform aggregate.add\_host notification * Cleanup unused get\_iscsi\_initiator * Remove two testing stubs which aren't really needed * Typo error about help resource\_classes.inc * Transform servergroup.create notification * Set regex flag on ostestr command for osprofiler tests * Transform keypair.delete notification * Move execs of touch to privsep * Move libvirt usages of chown to privsep * Enable test\_iscsi\_volume in live migration job * Refactor out claim\_resources\_on\_destination into a utility * Fix broken URLs * Ensure instance mapping is updated in case of quota recheck fails * Track which cell each instance is created in and use it consistently * Make ConductorTaskTestCase run with 2 cells * xenapi: Exception Error logs shown in Citrix XenServer CI * Update contributor guide for Queens * Allow setting up multiple cells in the base TestCase * Fix test\_rpc\_consumer\_isolation for oslo.messaging 5.31.0 * Fix broken link * First attempt at adding a privsep user to nova itself * Provide hints when nova-manage db sync fails to sync cell0 * Add release note for force live migration allocations * Handle exception on adding secgroup * doc: Add configuration index page * doc: Add user index page * spelling mistake * Fix ValueError if invalid max\_rows passed to db purge * Remove usage of kwarg retry\_on\_request in API * Add release note for requiring shred 8.22 or above * Make xen unit tests work with os-xenapi>=0.3.0 * Skip more racy rebuild failing tests with cells v1 * Add some inline code docs tracing the cold migrate flow * Mark LXC as missing for swap volume support * Remove compatibility code for flavors * rbd: Remove unnecessary 'encode' calls * Updated from global requirements * Pass config object to oslo\_reports * Replace http with https for doc links in nova * Put base policy rules at first * Amend uuid4 hacking rule * conf: Rename two VNC options * Correct examples in "Manage Compute services" documentation * Handle deleted instances when refreshing the info\_cache * Remove qpid description in doc * Replace dd with shred for zeroing lvm volumes * Update docs for \_destroy\_evacuated\_instances * doc: link to versioned notification samples from main index * doc: link to placement api-ref and history docs from main index * doc: fix online\_data\_migrations option in upgrades doc * Add recreate test for forced host evacuate not setting dest allocations * add online\_data\_migrations to nova docs * Glance download: only fsync files * Functional test for regression bug #1713783 * doc: fix show-hide sample in notification devref * Default the service version in the notification tests * api-ref: add warnings about forcing the host for live migrate/evacuate * HyperV: Perform proper cleanup after failed instance spawns * [placement] Update user doc with api-ref link * [placement] api-ref GET /traits name:startswith * Add video type virtio for AArch64 * Document tagged attach in the feature support matrix * [placement] Require at least one resource class in allocation * Enhance doc for nova services * Update doc to indicate nova-network deprecated * Updated 
* [placement] Add test for empty resources in allocation
* Refactor LiveMigrationTask.\_find\_destination
* Cleanup allocations on invalid dest node during live migration
* Hyper-V: Perform proper cleanup after cold migration
* Test InstanceNotFound handling in 'nova usage'
* Typo fix in admin doc ssh-configuration.html
* iso8601.is8601.Utc No Longer Exists
* Fix nova assisted volume snapshots
* Fix \_delete\_inventory log message in report client
* Add functional recreate test for live migration pre-check fails
* doc: Remove deprecated call to sphinx.util.compat
* Remove unneeded attributes from context
* Updates to scheduling workflow doc
* Add uuid online migration for migrations
* Add uuid to migration object and migrate-on-load
* Add uuid to migration table
* Add placeholder migrations for Pike backports
* Clarify the field usage guidelines
* Optimize MiniDNS for fewer syscalls
* [Trivial] docstrings, typos, minor refactoring
* Update PCI passthrough doc for moved options
* tests: De-duplicate some graphics tests
* Reduce code complexity - linux\_net.py
* Refactor init\_instance:resume\_guests\_state
* conf: Allow users to unset 'keymap' options
* Change default for [notifications]/default\_publisher\_id to $host
* Deprecate CONF.monkey\_patch
* Add device tag support info in support matrix
* Prevent blank line at start of migration placeholders
* Remove useless error handling in prep\_resize
* De-duplicate two delete\_allocation\_for\_\* methods
* Move hash ring initialization to init\_host() for ironic
* Fix bug on vmware driver attach volume failed
* fix a typo in format\_cpu\_spec doc
* Cleanup allocations in failed prep\_resize
* Add functional test for rescheduling during a migration
* Remove allocation when booting instance rescheduled or aborted
* Fix sample configuration generation for compute-related options
* Add formatting to scheduling activity diagram
* Monkey patch the blockdiag extension
* docs: Document the scheduler workflow
* Updated from global requirements
* Delete instance allocations when the instance is deleted
* How about not logging errors every time we shelve offload?
* Add missing tests for \_remove\_deleted\_instances\_allocations
* nova-manage: Deprecate 'cell' commands
* Add missing unit tests for FilterScheduler.\_get\_all\_host\_states
* api-ref: fix key\_name note formatting
* Assume neutron port\_binding extensions enabled
* libvirt: Fix getting a wrong guest object
* pci: Validate behavior of empty devname
* Tests: Add cleanup of 'instances' directory
* Remove the section about extensions from the API concept doc
* Restrict live migration to same cell
* Remove source node allocation after live migration completes
* Allocate resources on forced dest host during live migration
* Add language for compute node configuration
* trivial: Remove some single use function from utils
* Add functional live migrate test
* Add functional force live migrate test
* doc: Address review comments for main index
* trivial: Remove dead function, variable
* tests: Remove useless test
* Remove plug\_ovs\_hybrid, unplug\_ovs\_hybrid
* Correct statement in api-ref
* Fix a typo in code comment
* Refactor libvirt.utils.execute() away
* Fix quobyte test\_validate\_volume\_no\_mtab\_entry
* Updated from global requirements
* update comment for dropping support
* Move common definition into common layer
* Remove host filter for \_cleanup\_running\_deleted\_instances periodic task
* Fix contributor documentation
* replace chance with filter scheduler in func tests
* Clean up resources at shelve offload
* test shelve and shelve offload with placement
* Amend the code review guide for microversion API
* delete allocation of evacuated instance
* Make scheduler.utils.merge\_resources ignore zero values
* Fix a wrong link
* Fix reporting inventory for provisioned nodes in the Ironic driver
* Avoid race in test\_evacuate
* Reset client session when placement endpoint not found
* Update api doc with latest updates in api framework
* doc: Extend nfv feature matrix with pinning/NUMA
* Always use application/json accept header in report client
* Fix messages in functional tests
* Handle addition of new nodes/instances in ironic flavor migration
* Skip test\_rebuild\_server\_in\_error\_state for cells v1
* test server evacuation with placement
* doc: add superconductor up-call caveat for cross\_az\_attach=False
* doc: add another up-call caveat for cells v2 for xenapi aggregates
* Update reno for stable/pike
* Deprecate bare metal filters

16.0.0.0rc1
-----------

* Remove "dhcp\_options\_for\_instance"
* Clarifying node\_uuid usage in ironic driver
* doc: address review comments in stable-api guide updates
* Resource tracker compatibility with Ocata and Pike
* placement: avoid returning duplicated alloc\_reqs when no sharing rp
* Imported Translations from Zanata
* [placement] Make placement\_api\_docs.py failing
* [placement] Add api-ref for allocation\_candidates
* Clarify that vlan feature means nova-network support
* [placement] Add api-ref for RP usages
* Remove ram/disk sched filters from default list
* Remove provider allocs in confirm/revert resize
* placement: refactor healing of allocations in RT
* remove log message with potential stale info
* doc: Address review comments for contributor index
* Require Placement 1.10 in nova-status upgrade check
* Mark Chance and Caching schedulers as deprecated
* [placement] Add api-ref for usages
* Clean up \*most\* ec2 / euca2ools references
* Add documentation for documentation contributions
* Structure cli page
* doc: Import configuration reference
* Add release note for shared storage known issue
* Improve stable-api doc with current API state
* update policy UT fixtures
* Bulk import all config reference figures
* rework index intro to describe nova
* Mark max microversion for Pike in history doc
* Add a prelude section for Pike
* doc: provide more details on scheduling with placement
* Add functional test for local delete allocations
* Document service layout for consoles with cells
* Add For Operators section to front page
* Create For End Users index section
* doc: code review considerations for online data migrations
* Add track\_instance\_changes note in disable\_group\_policy\_check\_upcall
* Cleanup release note about ignoring allow\_same\_net\_traffic
* no instance info cache update if instance deleted
* Add format\_dom for PCI device addresses
* doc: Add additional content to admin guide
* Create reference subpage
* Raise NoValidHost if no allocation candidates
* Fix all >= 2 hit 404s
* Handle ironicclient failures in Ironic driver
* Fix migrate single instance when it was created concurrently
* trivial: Remove files from 'tools'
* trivial: Remove "vif" script
* tools/xenserver: Remove 'cleanup\_sm\_locks'
* Test resize with too big flavor
* [placement] Add api-ref for RP allocations
* placement: filtering the resource provider id when delete trait association
* Updated from global requirements
* Add resource utilities to scheduler utils
* Add Contributor Guide section page
* Fix getting instance bdms in multiple cells
* Update install guide to clearly define between package installs
* doc: Import administration guide
* doc: Import installation guide
* Complete dostring of live\_migration related methods
* Add a caveat section about cellsv2 upcalls
* doc: Start using oslo\_policy.sphinxext
* policies: Fix Sphinx issues
* doc: Start using oslo\_config.sphinxext
* doc: Rework README to reflect new doc URLs
* doc: Remove dead files
* nova-manage: Deprecate '--version' parameters
* imagebackend: cleanup constructor args to Rbd
* Sum allocations in the scheduler when resizing to the same host
* doc: Make use of definition lists, literals
* hardware offload support for openvswitch
* reflow rpc doc to 80 columns
* fix list rendering in policy-enforcement
* Fix scope of errors\_out\_migration in finish\_resize
* Fix scope of errors\_out\_migration in resize\_instance
* Split Compute.errors\_out\_migration into a separate contextmanager
* fix list rendering in cells
* fix list rendering in aggregates
* Fix list rendering in bdm doc
* fix list rendering in rpc doc
* Fix list rendering in code-review.rst
* Fix whitespace in rest\_api\_version\_history
* Fix lists in process doc
* [placement] Avoid error log on 405 response
* Keep the code consistent
* Filter out stale migrations in resource audit
* Test resize to same host with placement api
* fix rpc broken rst comment
* sort redirectmatch lines
* add top 404 redirect
* [placement] Require at least one allocation when PUT
* Add redirect for api-microversion-history doc
* Fix 409 handling in report client when deleting inventory
* Detach device from live domain even if not found on persistent
* Cleanup unnecessary logic in os-volume\_attachments controller code
* Adopt new pypowervm power\_off APIs
* placement: remove existing allocs when set allocs
* Additional assertions to resize tests
* Accept any scheduler driver entrypoint
* add redirects for existing broken docs urls
* Add some more cellsv2 doc goodness
* Test resize with placement api
* Deprecate cells v1
* Add release note for PUT /os-services/\* for non-compute services
* Updated from global requirements
* Don't warn on expected network-vif-unplugged events
Don't warn on expected network-vif-unplugged events * do not pass proxy env variables by tox * Show quota detail when inject file quota exceeds * rootwrap.d cleanup mislabeled files * always show urls in list\_cells * api-ref: requested security groups are not applied to pre-existing ports * api-ref: fix security\_groups response parameter in os-security-groups * Clean variable names and docs around neutron allocate\_for\_instance * explain payload inheritance in notification devref * Update SSL cert used in testing * Remove RamFilter and DiskFilter in default filter * Enhance support matrix document * remove extension param and usage * Add description on maximum placement API version * Updated from global requirements * Add cinder keystone client opts to config reference * Updated from global requirements * fix test\_rebuild\_server\_exc instability * [placement] quash unicode warning with shared provider * add a redirect for the old cells landing page * Remove unnecessary code 16.0.0.0b3 ---------- * claim resources in placement API during schedule() * placement: account for move operations in claim * add description about key\_name * doc: add FAQ entry for cells v1 config options * Add oslo\_concurrency=INFO to default log levels for nova-manage * stabilize test\_create\_delete\_server functional test * Ensure we unshelve in the cell the instance is mapped * Fix example in \_serialize\_allocations\_for\_consumer * deprecate \`\`wsgi\_log\_format\`\` config variable * Move the note about '/os-volume\_boot' to the correct place * Remove the useless fake ExtensionManager from API unittests * Netronome SmartNIC Enablement * Updated from global requirements * Enhance support matrix document * add cli to support matrix * add a retry on DBDeadlock to \_set\_allocations() * docstring and unused code removal * libvirt: Post-migration, set cache value for Cinder volume(s) * use os\_traits.MISC\_SHARES\_VIA\_AGGREGATE * style-only: s/context/ctx/ * Instance remains in migrating state forever * Add helper method for waiting migrations in functional tests * Improve assertJsonEqual error reporting * Translate the return value of attachment\_create and \_update * Move the last\_bytes util method to libvirt * Do not import nova.conf into nova/exception.py * Set IronicNodeState.uuid in \_update\_from\_compute\_node * Add VIFHostDevice support to libvirt driver * Remove redundant free\_vcpus logging in \_report\_hypervisor\_resource\_view * Remove the useless extension block\_device\_mapping\_v1 object * Remove the useless FakeExt * Remove the code related to extension loading from APIRouterV21 * Add 'updated\_at' field to InstancePayload in notifications * Use wsgi-intercept in OSAPIFixture * API ref: associate floating IP requires Active status * Suppress some test warnings * Use enum value instead of string service name * rename binary to source in versioned notifications * Trim the fat from InstanceInfo * [placement] Use wsgi\_intercept in PlacementFixture * Replaces uuid.uuid4 with uuidutils.generate\_uuid() * Ironic: Support boot from Cinder volume * [placement] Flush RC\_CACHE after each gabbit sequence * Stop using mox stubs in cast\_as\_call.py * Add online migration to move quotas to API database * Migrate Ironic Flavors * Add tags to instance.create Notification * request\_log addition for running under uwsgi * Stop using mox stubs in test\_console\_auth\_tokens.py * Increase cpu time for image conversion * Remove an unnecessary argument in \_prep\_resize * Updated from global requirements * 
Using plain routes for the microversions test * Updated from global requirements * Updated from global requirements * placement: add retry tight loop claim\_resources() * Dump versioned notifications when test\_create\_delete\_server * retry on authentication failure in api\_client * Change default policy to view quota details * Implement interface attach/detach in ironic virt driver * Update policy description for 'instance\_actions' * Update ironic feature matrix * Updated from global requirements * doc: Switch to openstackdocstheme * Don't cast cinderclient microversions to float * Remove the unittest for plugin framework * Use plain routes list for versions instead of stevedore * Removed unused 'wrap' property * Make Quotas object favor the API database * Remove check\_detach * Remove improper LOG.exception() calls in placement * VMware: Handle missing volume vmdk during detach * Use \_error\_out\_instance\_on\_exception in finish\_resize * placement: proper JOIN order for shared resources * placement: alloc candidates only shared resources * Allow wrapping of closures * Updated from global requirements * provide interface-scoped nameserver information * Only setup iptables for metadata if using nova-net * Fix and optimize external\_events for multiple cells * Add policy granularity to the Flavors API * Deprecate useless quota\_usage\_refresh from nova-manage * add dict of allocation requests to select\_dests() * Handle None returned from get\_allocation\_candidates due to connect failure * Updated from global requirements * Update URL home-page in documents according to document migration * api-ref: Fix an expand button in os-quota-sets * Correct the description of 'disable-log-reason' api-ref * Consider instance flavor resource overrides in allocations * Do not mention that tags are case sensitive in docs * api-ref: fix max\_version for deprecated os-quota-class-sets parameters * Handle uuids in os-hypervisors API * Use uuid for id in os-services API * Remove 'reserved' count from used limits * Make security\_group\_rules use check\_deltas() for quota * Make key\_pairs use check\_deltas() for quota * Count instances to check quota * Use plain routes list for extension\_info instead of stevedore * Use plain routes list for os-snapshots instead of stevedore * Use plain routes list for os-volume-attachments instead of stevedore * doc: Populate the 'user' section * doc: Populate the 'reference' section * doc: Populate the 'contributor' section * doc: Populate the 'configuration' section * Add log info in scheduler to mark start of scheduling * [placement] Add api-ref for allocations * [placement] Add api-ref for RP traits * [placement] Add api-ref for traits * Remove translation of log messages * Fix indentation in policy doc * conf: remove \*\_topic config opts * Stop using mox stubs in test\_remote\_consoles.py * api-ref: Verify parameters in os-migrations.inc * Use URIOpt * Convert HostState.limits['numa\_topology'] to primitive * Log compute node uuid when the record is created * Remove key\_manager.api\_class hack * Update policy descriptions for base * Consistent policies * Support tag instances when boot(4/4) * Fix instance evacuation with PCI devices * [placement] fix 500 error when allocating to bad class * [placement] Update allocation-candidates.yaml for gabbi 1.35 * fix test\_volume\_swap\_server instability * XenAPI: Fix ValueError in test\_slave\_asks\_master\_to\_add\_slave\_to\_pool * api-ref: mention disk size limitations in resize flavor * [placement] cover deleting a 
custom resource class in use * [placement] cover deleting standard trait * Updated from global requirements * fix unshelve notification test instability * scheduler: isolate \_get\_sorted\_hosts() * Set wsgi.keep\_alive=False globally for tests * Dump tracked version notifications when swap volume tests fail * Default reservations=None in Cells v1 and conductor APIs * Avoid false positives of Jinja2 in Bandit scan * Updated from global requirements * Remove 'create\_rule\_default' * Use oslo.policy DocumentedRuleDefault * trivial: Remove unnecessary function * doc: Populate the 'cli' section * Fix the releasenote and api-ref for quota-class API * Fix typo * Stop counting hw\_video:ram\_max\_mb against quota * Add ability to signal and perform online volume size change * api-ref: mark instance action events parameter as optional * Add BDM to InstancePayload * placement: add claim\_resources() to report client * doc: Enable pep8 on doc generation code * doc: Remove dead plugin * Use plain routes list for os-volumes instead of stevedore * Use plain routes list for os-baremetal-nodes endpoint instead of stevedore * Use plain routes list for os-security-group-default-rules instead of stevedore * Use plain routes list for os-security-group-rules instead of stevedore * Use plain routes list for server-security-groups instead of stevedore * Use plain routes list for os-security-groups instead of stevedore * Use plain routes list for image-metadata instead of stevedore * Use plain routes list for images instead of stevedore * Remove the test for the route '/resources.:(format)' * doc: Use consistent author, section for man pages * Use plain routes list for os-networks instead of stevedore * doc: Remove cruft from conf.py * Fix wrong log param * Query deleted instance records during \_destroy\_evacuated\_instances * Skip boot from encrypted volume on Xen+libvirt * improve notification short-circuit * Use PCIAddressField in oslo.versionedobjects * Fix quota class set APIs * api-ref: Add X-Openstack-Request-Id description * Fix a missing classifier * api-ref: Add missing parameters in limits.inc * api-ref: Fix parameters in server-security-groups * Stop using deprecated 'message' attribute in Exception * Adjust error msg for ImageNUMATopologyAsymmetric * placement: scheduler uses allocation candidates * Trivial: Remove unnecessary format specifier * Handle Cinder 3.27 style attachments in swap\_volume * Support tag instances when boot(3/4) * Remove reverts\_task\_state decorator from swap\_volume * Pre-load instance.device\_metadata in InstanceMetadata * Updated from global requirements * [placement] Improve allocation\_candidates coverage * xenapi: avoid unnecessary BDM query when building device metadata * Add release note for xenapi virt device tagging support * Make notification publisher\_id consistent * Modify some comments for virt driver * Fix parameters and description for os-volume\_attachments * Remove nova.api.extensions.server.extensions usage * Fix error message when support matrix entry is missing a driver * Fix comment for API binary name in WSGIService * Fix arguments in calling \_delete\_nic\_metadata * Fix incorrect docstrings in neutron network API * Add 'networks' quota in quota sample files * Reset the traits sync flag in the placement fixtures * Add api-ref for os-quota-class-set APIs * trivial: Use valid UUIDs in test\_admin\_password * placement: filter usage records by resource provider id * Fix 'project-id' 'user-id' as required in server group * Reduce (notification) test
duplication * Use plain routes list for os-cells endpoint instead of stevedore * Hyper-V: fix live migration with CSVs * placement: support GET /allocation\_candidates * Handle keypair not found from metadata server using cells * Don't delete neutron port when attach failed * Removes getfattr from Quobyte Nova driver * libvirt: update the logic to configure volume with scsi controller * libvirt: update logic to configure device for scsi controller * Updated from global requirements * conf: fix netconf, my\_ip and host are unclear * Remove wsdl\_location configuration option * hyperv: Fixes log message in livemigrationops * hyperv: stop serial console workers while deleting vm files * hyperv: Fixes Generation 2 VMs volume boot order * Ensure the JSON-Schema covers the legacy v2 API * API support for tagged device attachment * Delete disk metadata when detaching volume * Add scatter gather utilities for cells * Sanitize instance in InstanceMetadata to avoid un-pickleable context * remove the very old unmaintained wsgi scripts * Extract custom resource classes from flavors * Fix the log information argument mistake * Remove mox from nova.tests.unit.virt.xenapi.test\_vm\_utils.py * Handle version for PUT and POST in PlacementFixture * Add a reset for traits DB sync * Strengthen the warning on the old broken WSGI script * Add key\_name field to InstancePayload * Add keypairs field to InstanceCreatePayload * api-ref: Fix missing parameters in API Versions * placement: refactor driver select\_destinations() * Updated from global requirements * VStorage: changed default log path * Add python 3.5 in classifier * Delete nic metadata when detaching interface * Remove mox from nova.tests.unit.api.openstack.compute.test\_limits * Add get\_count\_by\_vm\_state() to InstanceList object * move resources\_from\_request\_spec() to utils * return 400 Bad Request when empty string resources * placement: adds ProviderTree for nested resources * Add missing microversion documentation * Remove mox in test\_availability\_zone.py * Stop using mox stubs in test\_keypairs.py * Plumbing for tagged nic attachment * Remove code that produces warning in modern Python * Plumbing for tagged volume attachment * Fix misuse of assertIsNone * Simplify a condition * Support paging over compute nodes with a uuid marker * Update api-ref to indicate swap param * \_schedule\_instances() supporting a RequestSpec object * Removes potentially bad exit value from accepted list in Quobyte volume driver * Switch Nova Quobyte volume driver to mount via systemd-run * Clean up volumes on boot failure * Explain why API services are filtered out of GET /os-services * Fix redundant BDM lookups during rebuild * Delete all inventory has its own method DELETE * Remove translation of log messages * hypervisor\_hostname must match get\_available\_nodes * Fix using targeted cell context when finding services in cells * [doc] Updated sqlalchemy URL in migrate readme * placement: separate normalize\_resources\_qs\_param * Updated from global requirements * Use more specific asserts in tests * Transform instance.soft\_delete notifications * Fix the note at the end of allocate\_for\_instance * Count floating ips to check quota * Add FloatingIPList.get\_count\_by\_project() * Count fixed ips to check quota * Add FixedIPList.get\_count\_by\_project() * Count security groups to check quota * Add SecurityGroupList.get\_counts() * Count networks to check quota * Provide a hint when \_verify\_response fails * Provide error message in MismatchError for 
api-samples tests * placement: produce set of allocation candidates * Reduce code duplication * Use plain routes list for os-remote-consoles instead of stevedore * Remove multiple create from stevedore * Use plain routes list for os-tenant-networks instead of stevedore * Use plain routes list for os-cloudpipe endpoint instead of stevedore * Use plain routes list for os-quota-classes endpoint instead of stevedore * Consolidate index and detail methods in HypervisorsController * Handle uuid in HostAPI.compute\_node\_get * api-ref: fix unshelve asynchronous postconditions typo * add missing notification samples to dev ref * Fix regression preventing reporting negative resources for overcommit * Add separate instance.create payload type * placement: Add GET /usages to placement API * placement project\_id, user\_id in PUT /allocations * api-ref: fix hypervisor\_hostname description for Ironic * Updated from global requirements * Provide original fault message when BFV fails * Add PowerVM to nova support matrix * remove null\_safe\_int from module scope * Fix a wrong comment * Stop caching compute nodes in the request * Centralize compute\_node\_search\_by\_hypervisor in os-hypervisors * api-ref: cleanup PUT /os-hypervisors/statistics docs * Make compute\_node\_statistics() work across cells * Only auto-disable new nova-compute services * Cleanup the plethora of libvirt live migration options * [placement] Update placement devref to modern features * Make all timestamps formats equal * Transform keypair.create notification * remove mox from unit/virt/vmwareapi/test\_driver\_api.py * XenAPI: device tagging * Updated from global requirements * api-ref: fix misleading description in PUT /os-services/disable * Remove service control from feature support matrix * Indicate Hyper-v supports fibre channel in support matrix * Use CONF.host for powervm nodename * Pull out code that builds VIF in \_build\_network\_info\_model * Use plain routes list for os-server-groups endpoint instead of stevedore * Use plain routes list for user\_data instead of stevedore * remove get\_nw\_info\_for\_instance from compute.utils * remove ugly local import * Add missing query filter params for GET /os-services API * XenAPI: Create linux bridge in dest host during live migration * Remove translation of log messages * Count server group members to check quota * Add bool\_from\_string for force-down action * Remove old service version check for mitaka * Clarify conf/compute.py help text for ListOpts * Use plain routes list for block\_device\_mapping instead of stevedore * Use plain routes list for os-consoles, os-console-auth-tokens endpoint instead of stevedore * [placement] Increase test coverage * Remove unused variable * pci: add uuid field to PciDevice object * libvirt: dump debug info when interface detach times out * Amend api-ref for multiple networks request * Remove translation of log messages * Calculate stopped instance's disk sizes for disk\_available\_least * Transform instance.live\_migration\_rollback notification * Add InstanceGroup.\_remove\_members\_in\_db 16.0.0.0b2 ---------- * Fix lookup of instance mapping in metadata set-password * libvirt: Extract method \_guest\_add\_spice\_channel * libvirt: Extract method \_guest\_add\_memory\_balloon * libvirt: Extract method \_guest\_add\_watchdog\_action * libvirt: Extract method \_guest\_add\_pci\_devices * libvirt: Extract method \_guest\_add\_video\_device * libvirt: fix alternative\_device\_name for detaching interfaces * [placement] Add api-ref for 
aggregates * Add docstring for test\_limit\_check\_project\_and\_user\_zero\_values * Skip microversion discovery check for update/delete volume attachments * Use 3.27 microversion when creating new style volume attachments * Use microversions for new style volume attachments * libvirt: handle missing rbd\_secret\_uuid from old connection info * Log a warning if there is only one cell when listing instances * [placement] Use util.extract\_json in allocations handler * [placement] Disambiguate resource provider conflict message * raise exception if creating Virtuozzo container with swap disk * Convert additional disassociate tests to mock * Remove useless API tests * Remove \*\*kwargs passing in payload \_\_init\_\_ * Prefer non-PCI host nodes for non-PCI instances * Add PCIWeigher * XenAPI: Remove bittorrent.py which is already deprecated * Count server groups to check quota * Default to 0 when merging values in limit check * api-ref: fix type for hypervisor\_marker * Fix html\_last\_updated\_fmt for Python3 * nfs fix for xenial images * Remove unused CONF import from placement/auth.py * xen: pass Xen console in cmdline * Add earliest-version tags for stable branch renos * Fix the race condition with novnc * Add service\_token for nova-glance interaction * Adopts keystoneauth with glance client * placement: use separate tables for projects/users * Move rebuild notification tests into separate method * contrail: add vrouter VIF plugin type support * Fix cell0 naming when QS params on the connection * libvirt: Check if domain is persistent before detaching devices * Fix device metadata service version check for multiple cells * Remove cells topic configuration option * Add get\_minimum\_version\_all\_cells() helper for service * libvirt: rearrange how scsi controller is defined * libvirt: set full description of the controller used by disk * libvirt: update LibvirtConfigGuestDeviceAddress to provide XML * Use plain routes list for os-services endpoint instead of stevedore * use plain routes list for os-virtual-interfaces * use plain routes list for hypervisor endpoint instead of stevedore * Use plain routes list for hosts endpoint instead of stevedore * Use plain routes list for os-fping endpoint * Use plain routes list for instance actions endpoint * Use plain routes list for server ips endpoint * XenAPI: use os-xenapi 0.2.0 in nova * Add InstanceGroupList.get\_counts() * Reset the \_TRAITS\_SYNCED global in Traits tests * Revert "Remove Babel from requirements.txt" * Avoid unnecessary lazy-loads in mutated\_migration\_context * libvirt: log vm and task state when vif plugging times out * Send out notifications when instance tags changed * Catch neutronclient.NotFound on floating deletion * Move notifications/objects/test\_base.py * Fixed some nits for microversion 2.48 * code comments incorrectness * Remove Babel from requirements.txt * Sync os-traits to Traits database table * Support tag instances when boot(2/4) * ComputeDriver.get\_info not limited to inst name * Replace messaging.get\_transport with get\_rpc\_transport * Be more tolerant of keystone catalog configuration * Send request\_id on glance calls * Updated from global requirements * [placement] Add api-ref for resource classes * Standardization of VM diagnostics info API * Remove unused exceptions * Refactor a test method including 7 test cases * Fix missing marker functions * Completed implementation of instance diagnostics for Xen * Updated from global requirements * Use VIR\_DOMAIN\_BLOCK\_REBASE\_COPY\_DEV when rebasing *
show flavor info in server details * placement: Specific error for inventory in use * Updated from global requirements * Add database migration and model for consumers * add new test fixture flavor with extra\_specs * Updated from global requirements * Connecting Nova to DRBD storage nodes directly * Update server create networks API reference description for tags * libvirt: fix call args to destroy() during live migration rollback * Pass a list of instance UUIDs to scheduler * Fix call to driver\_detach in remove\_volume\_connection * Use plain routes list for server diagnostics endpoint * Use plain routes list for os-server-external-events endpoint * Use plain routes list for server-migrations endpoint instead of stevedore * Use plain routes list for server-tags instead of stevedore * Use plain routes list for os-interface endpoint instead of stevedore * Remove usage of parameter enforce\_type * placement: test for agg association not sharing * placement: test non-shared out of inventory * placement: tests for non-shared with shared * placement: shared resources when finding providers * Fix live migration devstack hook for multicell environment * Target cell on local delete * Updated from global requirements * Fix default\_availability\_zone docs * Send request\_id on neutron calls * Update policy description for os-volumes * Fix doc job with correct ref link * Remove oslo.config deprecated parameter enforce\_type * Completely remove mox from unit/network/test\_linux\_net.py * Add configuration options for certificate validation * Do not rely on dogpile internals for mocks * XenAPI: nova-compute cannot restart after manually deleting VM * Add policy description for os-networks * Changing deleting stale allocations warning to debug * Replace diagnostics objects with Nova diagnostics objects * Added nova objects for instance diagnostics * [placement] adjust resource provider links by microversion * Add \`img\_hide\_hypervisor\_id\` image property * Catch InstanceNotFound when deleting allocations * Remove mox from nova/tests/unit/virt/xenapi/test\_xenapi.py[1] * [placement] Add api-ref for DELETE resource provider * [placement] Add api-ref for PUT resource provider * [placement] Add api-ref for GET resource provider * [placement] Add api-ref for POST resource provider * [placement] Add api-ref for DELETE RP inventory * [placement] Add api-ref for PUT RP inventory * [placement] Add api-ref for GET RP inventory * [placement] Add api-ref for DELETE RP inventories * [placement] Add api-ref for PUT RP inventories * Add check\_deltas() and limit\_check\_project\_and\_user() to Quotas * Enhancement comments on CountableResource * Deprecate TypeAffinityFilter * [placement] Add api-ref for GET RP inventories * Optimize creating security\_group * Limit the min length of string for integer JSON-Schema * Avoid lazy-loading instance.id when cross\_az\_attach=False * Use plain routes list for os-migrations endpoint instead of stevedore * Updated from global requirements * Migrate to oslo request\_id middleware - mv 2.46 * Ensure the value of filter parameter is unicode * XenAPI: Deprecate nicira-iface-id for XenServer VIF * Don't run ssh validation in cells v1 job * Fix MarkerNotFound when paging and marker was found in cell0 * Add recreate functional test for regression bug 1689692 * cinder: add attachment\_update method * cinder: add attachment\_create method * Use targeted context when burying instances in cell0 * Send request\_id on cinder calls * Remove unused migrate\_data kwarg from virt driver
destroy() method * Fix the display of updated\_at time when using memcache driver * Check instance existing before check in mapping * Remove mox from unit/cells/test\_cells\_messaging.py * make sure to rebuild claim on recreate * Nix redundant dict in set\_inventory\_for\_provider * PowerVM Driver: SSP ephemeral disk support * Avoid lazy-load error when getting instance AZ * Handle conflict from neutron when addFloatingIP fails * re-Allow adding computes with no ComputeNodes to aggregates * Libvirt volume driver for Veritas HyperScale * Make the method to put allocations public * Don't delete allocation if instance being scheduled * Exclude deleted service records when calling hypervisor statistics * Modify incorrect comment on return\_reservation\_id * Remove incorrect comments in multiple\_create * Have nova.context use super from\_context * Handle new hosts for updating instance info in scheduler * [Trivial] Hyper-V: accept Glance vhdx images * Add strict option to discover\_hosts * make route and controller in alpha sequence * [placement] Fix placement-api-ref check tool * Use plain routes list for limits endpoint instead of stevedore * Updated from global requirements * Handle uuid in HostAPI.\_find\_service * doc: add cells v2 FAQ on mapping instances * doc: add cells v2 FAQ on refreshing global cells cache * doc: start a FAQs section for cells v2 * De-complicate some of the instance delete path * doc: add links to summit videos on cells v2 * Make target\_cell() yield a new context * Move to proper target\_cell calling convention * Updated from global requirements * Repair links in Nova documentation * api-ref: Fix parameter order in os-services.inc * fix typo * Deprecate unused policy from policy doc * trivial: Remove dead code * convert unicode to string before we connect to rados * Use plain routes list for os-quota-sets endpoint instead of stevedore * Use plain routes list for os-certificates endpoint instead of stevedore * Remove mox from cells/test\_cells\_rpc\_driver.py * api-ref: Example verification for servers-actions.inc * Updated from global requirements * nova-manage: Deprecate 'log' commands * nova-manage: Deprecate 'host' commands * nova-manage: Deprecate 'project', 'account' commands * libvirt: remove glusterfs volume driver * libvirt: remove scality volume driver * Deprecate scheduler trusted filter * XenAPI: remove hardcoded dom0 plugin version in unit test * Change log level from ERROR to DEBUG for NotImplemented * Skip policy rules on attach\_network for none network allocation * Skip ceph in grenade live migration job due to restart failures * Correct \_ensure\_console\_log\_for\_instance implementation * Cache database and message queue connection objects * Correct the error message for query parameter validation * correctly log port id in neutron api * Fix uuid replacement in aggregate notification test * Remove DeviceIsBusy exception * Catch exception.OverQuota when creating image for volume-backed instance * Add policy description for os-host * Libvirt support for tagged volume attachment * Libvirt support for tagged nic attachment * Updated from global requirements * Add policy description for 'os-hide-server-addresses' * Add policy description for os-fixed-ips * Add policy description for networks\_associate * Add policy description for server\_usage * Modify the description of flat\_injected in nova.conf * Add policy description for multinic * Add policy description for 'limits' * Use plain routes list for server-password endpoint instead of stevedore *
libvirt: expand checks for SubclassSignatureTestCase * fix InvalidSharedStorage exception message * Fix decoding of encryption key passed to dmcrypt * Make compute auto-disable itself if builds are failing * Make discover\_hosts only query for unmapped ComputeNode records * api-ref: Fix examples for add/removeFixedIp action * Fix a typo in code comment * Updated from global requirements * [BugFix] Change the parameter of the exception error message * Handle special characters in database connection URL netloc * fix typo in parameter type definition * Move null\_safe funcs to module level * do not log error for missing \_save\_tags * Add more description to policies in the keypairs.py * Add description to policies in extended\_status and extended\_volumes * Address comments when moving volume detach to block\_device.py * Updated from global requirements * Add a functional test for 'removeFloatingIp' action * Correct the wording about filter options * libvirt: Fix races with nfs volume mount/umount * libvirt: Pass instance to connect\_volume and disconnect\_volume * Remove the can\_host column * Totally freeze the extension\_info API * Trivial fix typo in document * Add missing rootwrap filter for cryptsetup * Add Cinder v3 detach to shutdown\_instance * Make NovaException format errors fatal for tests * Fix unit test exception KeyErrors * [BugFix] Release the memory quota for video ram when deleting an instance * Remove the rebuild extension help methods * service: use restart\_method='mutate' for all services * Verify project id for flavor access calls * Add a convenience attribute for reportclient * Add uuid to service.update notification payload * objects: add ComputeNode.get\_by\_uuid method * objects: add Service.get\_by\_uuid method * db api: add service\_get\_by\_uuid * Add online data migration for populating services.uuid * placement: get providers sharing capacity * Remove cloudpipe APIs * Replace newton to release\_name in upgrade.rst * Fix a typo * neutron: retrieve physical network name from a multi-provider network * Use six.text\_type() when logging Instance object * Fix typo in wsgi applications release note * Catching OverQuota Exception * Add description to policies in extended\_az and extend\_ser\_attrs * Add policy description for os-quota-classes * Add policy description for instance actions * Add policy description for fping * Updated from global requirements * Ensure sample policy help text correctly wrapped * Add policy description for extensions * Use plain routes list for server-metadata endpoint instead of stevedore * Transform instance.volume\_detach notification * Transform instance.volume\_attach.error notification * Transform instance.volume\_attach notification * Fix units for description of "flavor\_swap" parameter * Don't lazy-load flavor.projects during destroy() * devref and reno for nova-{api,metadata}-wsgi scripts * Add pbr-installed wsgi application for metadata api * Update devref with vendordata changes * remove unused functions * Use systemctl to restart services * Remove nova-cert leftovers * Add policy description for image size * Add policy description for instance-usage-audit-log * Add policy description for Servers IPs * Add policy description for config\_drive * XenAPI: update support matrix to support detach interface * Remove unnecessary execute permissions * Use plain routes list for os-fixed-ips endpoint instead of stevedore * Use plain routes list for os-availability-zone endpoint instead of stevedore * Use plain routes list for 
os-assisted-volume-snapshots endpoint * Use plain routes list for os-agents endpoint instead of stevedore * Use plain routes list for os-floating-ip-dns endpoint instead of stevedore * Add compute\_nodes\_uuid\_idx unique index * Use plain routes list for os-floating-ips-bulk endpoint instead of stevedore * Use plain routes list for os-floating-ip-pools endpoint instead of stevedore * Use plain routes list for os-floating-ips endpoint instead of stevedore * api-ref: Fix unnecessary description in servers-admin-action * api-ref: Fix parameters in servers-action-console-output * api-ref: Use 'note' directive * use plain routes list for os-simple-tenant-usage * Use plain routes list for os-instance-usage-audit-log endpoint instead of stevedore * Support tag instances when boot(1) * Add Cinder v3 detach call to \_terminate\_volume\_connections * placement: implement get\_inventory() for libvirt * nova-manage: Deprecate 'agent' commands * Add reserved\_host\_cpus option * Update description to policies in remaining flavor APIs * Add description to policies in migrations.py * Trivial fix: fix broken links * Remove nova-cert * Fixed a broken link in API Plugins document * Stop using mox in unit/virt/xenapi/image/test\_utils.py * Add ability to query for ComputeNodes by their mapped value * Add ComputeNode.mapped field * Updated from global requirements * Add a note to \*\_allocation\_ratio options about Ironic hardcode * Remove legacy v2.0 code from test\_flavor\_access * Do not log live migration success when it actually failed * Expose StandardLogging fixture for use * Add Cinder v3 detach to local\_cleanup * Don't check for file type in \_find\_base\_file * Rename \_handle\_base\_image to \_mark\_in\_use * Add context comments to \_handle\_base\_image * Add mock check and fix uuid's use in test * Revert "Prevent delete cell0 in nova-manage command" * Improve comment for PCI port binding update * Parse algorithm from cipher for ephemeral disk encryption * Add description to policies in floating\_ip files * Add description to policies in migrate\_server.py * Remove all discoverable policy rules * PowerVM Driver: console * Update doc/source/process.rst * 2.45: Remove Location header from createImage and createBackup responses * Clean up ClientRouter debt * api-ref: move createBackup to server-actions * Deprecate Multinic, floatingip action and os-virtual-interface API * Register osapi\_compute when nova-api is wsgi * disable keepalive for functional tests * Use plain routes list for '/os-aggregates' endpoint instead of stevedore * Use plain routes list for '/os-keypairs' endpoint instead of stevedore * Use plain routes list for flavors-access endpoint instead of stevedore * Use plain routes list for flavors-extraspecs endpoint instead of stevedore * Use plain routes list for flavor endpoint instead of stevedore[1] * Use plain routes list for '/servers' endpoint instead of stevedore * encryptors: Switch to os-brick encryptor classes * Fix unnecessary code block in a release note * Remove redundant code * api-ref: Fix a parameter description in servers.inc * api-ref: Parameter verification for servers-actions (4/4) * api-ref: Parameter verification for servers-actions (3/4) * Refactor a test method including 3 test cases * Sort CellMappingList.get\_all() for safety * Add workaround to disable group policy check upcall * Make server groups api aware of multiple cells for membership * libvirt: remove redundant and broken iscsi volume test * Remove BuildRequest.block\_device\_mapping clone workaround
* libvirt: Always disconnect\_volume after rebase failures * Rework descriptions in os-hypervisors * Trivial Fix a typo * api-ref: Parameter verification for servers-actions (2/4) * Updated from global requirements * PowerVM Driver: spawn/destroy #4: full flavor * Remove archaic reference to QEMU errors during post live migration * Tell people that the nova-cells man page is for cells v1 * Add release note and update cell install guide for multi-cell limitations * PowerVM Driver: spawn/destroy #3: TaskFlow * Allow CONTENT\_LENGTH to be present but empty * libvirt: Remove is\_job\_complete polling after pivot * Adding auto\_disk\_config field to InstancePayload * add tags field to instance.update notification * Add description to policies in hypervisors.py * Explicitly define enum type as string in schema * PowerVM Driver: power\_on/off and reboot * Using max api version in notification sample test * PowerVM Driver: spawn/destroy #2: functional * Warn the user about orphaned extra records during keypair migration * Deprecate os-hosts API * Update resource tracker to PUT custom resource classes * [placement] Idempotent PUT /resource\_classes/{name} * Update detach to use V3 Cinder API * conf: Move 'floating\_ips' opts into 'network' * conf: Deprecate 'default\_floating\_pool' * conf: Add neutron.default\_floating\_pool * libvirt: Use config types to parse XML for root disk * libvirt: Add missing tests for utils.find\_disk * libvirt: Use config types to parse XML for instance disks * Updated from global requirements * Mock timeout in test\_\_get\_node\_console\_with\_reset\_wait\_timeout * Add test ensure all the microversions are sequential in placement API * fix overridden error * fix typos * Add interfaces functional negative tests * Remove unused os-pci API * Fix mitaka online migration for PCI devices * Fix port update exception when unshelving an instance with PCI devices * Fix docstring in \_validate\_requested\_port\_ids * Fix the evacuate API without json-schema validation in 2.13 * api-ref: Fix response code and parameters in evacuate * Remove json-schema extension variable for resize * Update etherpad url * Use deepcopy when process filters in db api * Add regression test for server filtering by tags bug 1682693 * remove unused parameter in rpc call * Remove usage of parameter enforce\_type * Remove test\_init\_nonexist\_schedulerdriver * Spelling error "paramenter" * api-ref: Parameter verification for servers-actions (1/4) * Revert "Make server\_groups determine deleted-ness from InstanceMappingList" 16.0.0.0b1 ---------- * Fix hypervisors api missing HostMappingNotFound handlers * Updated from global requirements * Fix HTTP 500 raised for getConsoleLog for stopped instance * Remove backend dependency for key types * Fix libvirt group selection in live migration test * Update network metadata type field for IPv6 * Add description to policies in servers.py * Add description to policies in security\_groups.py * api-ref: Nova Update Compute services Link * api-ref: Fix parameters in os-hosts.inc * Add uuid to Service model * Modify PciPassthroughFilter to accept lists * Deprecate CONF.api.allow\_instance\_snapshots * Read NIC features in libvirt * Fix api-ref for create servers response * placement: Add Traits API to placement service * Remove aggregate uuid generation on load from DB * Document and provide useful error message for volume-backed backup * PowerVM Driver: spawn/delete #1: no-ops * Refactor: Move post method to APIValidationTestCase base class * remove log translation 
tags from nova.cells * Get BDMs when we need to in \_handle\_cell\_delete * Remove dead db api code * Add description to policies in server\_password.py * remove flake8-import-order * Expand help text for [libvirt]/disk\_cachemodes * Updated from global requirements * Add description to policies in remote\_consoles.py * api-ref: fix os-extended-volumes:volumes\_attached in servers responses * Image meta min\_disk should be int in fake\_request\_spec * Optimize the link address * Add description to policies in quota\_sets.py * Fix joins in instance\_get\_all\_by\_host * Fix test\_instance\_get\_all\_by\_host * Remove the stevedore extension point for server create * Remove the json-schema extension point of server create * Remove the extension check for os-networks in servers API * Make server\_groups determine deleted-ness from InstanceMappingList * Add get\_by\_instance\_uuids() to InstanceMappingList * Remove Mitaka-era service version check * Teach HostAPI about cells * Make scheduler target cells to get compute node instance info * Deprecate the Cinder API v2 support * Limit exposure of network device types to the guest * Remove a fallacy in scheduler.driver config option help text * [placement] Allow PUT and POST without bodies * Use physical utilisation for cached images * Remove config opts for extension black/white list * Remove the usage of extension black/white list opt in scheduler hints * Cleanup wording on compute service version checks in API * Fix test\_no\_migrations\_have\_downgrade * Perform old-style local delete for shelved offloaded instances * Regression test for local delete with an attached volume * Set size/status during image create with FakeImageService * Commit usage decrement after destroying instance * Add regression test for quota decrement bug 1678326 * Short-circuit local delete path for cells v2 and InstanceNotFound * api-ref: make it clear that os-cells is for cells v1 * Add description to policies in security\_group\_default\_rules.py * Remove the usage of extension black/white list opt in user data * Add empty flavor object info in server api-ref * placement: Enable attach traits to ResourceProvider * docs: update description for AggregateInstanceExtraSpecsFilter * nova-net: remove get\_instance\_nw\_info from API subclass * API: accept None as content-length in HTTP requests * Switch from pip\_missing\_reqs to pip\_check\_reqs * nova-manage: Deprecate 'shell' commands * doc: Separate the releasenotes guide from the code-review section * Distinguish between cells v1 and v2 in upgrades doc * Use HostAddressOpt for opts that accept IP and hostnames * Stop using ResourceProvider in scheduler and RT * Updated from global requirements * Remove unnecessary tearDown function in testcase * Ensure reservation\_expire actually expires reservations * Remove unnecessary duplicated NOTE * Add description to policies in server\_diagnostics.py * Add description to policies in server\_external\_events.py * Add server-action-removefloatingip.json file and update servers-actions.inc * api-ref: networks is mandatory in Create Server * Trivial: Remove unused method * Make metadata doc more readable * Remove the usage of extension black/white list opt in AZ * Remove the usage of extension black/white list opt in config drive * Remove the usage of extension black/white list opts in multi-create * Remove the usage of extension black/white list opts in BDM tests * Rename the model object ResourceProviderTraits to ResourceProviderTrait * Short circuit notifications when not 
enabled * Add description to policies in services.py * compute: Move detach logic from manager into driver BDM * doc: Move code-review under developer policies * Add description to policies in servers\_migrations.py * Remove mox from nova/tests/unit/consoleauth/test\_consoleauth.py * Remove unnecessary setUp function in testcase * api-ref: Fix wrong HTTP response codes * Make conductor ask scheduler to limit migrates to same cell * Updated from global requirements * Consolidate unit tests for shelve API * Remove \_wait\_for\_state\_change() calls from notification (action)tests * Fix calling super function in setUp method * Remove namespace check in creating traits * Add description for /consoles * Ensure instance is in active state after notification test * Add description to policies in used\_limits * Add description to policies in lock\_server.py * Add description to policies in server\_metadata.py * Add description to policies in evacuate.py and rescue.py * Add description to policies in server\_groups.py * Use cursive for signature verification * Fix api-ref for adminPass behavior * Fix 'server' and 'instance' occurrence in api-ref * Add description to policies in flavor\_extra\_specs.py * code comment redundant * Add exclusion list for tempest for a libvirt+xen job * Add description to policies in cells\_scheduler.py * Add description to policies in aggregates.py * Add description to policies in pause\_server.py * Add description to policies in simple\_tenant\_usage.py * Add description to policies in keypairs.py * Remove unused policy rule in admin\_actions.py * Add description to policies in admin\_actions * Add description to policies in certificates.py * libvirt: Remove dead code * Add description to policies in console\_output.py * tox: Stop calling config/policy generators twice * There is a error on annotation about related options * Remove mox from nova.tests.unit.objects.test\_instance.py * fixed typos and reword stable api doc * Fix some reST field lists in docstrings * Add description to nova/policies/shelve.py * [placement] Split api-ref topics per file * Add description to policies in tenant\_networks.py * placement: Add Trait and TraitList objects * Remove legacy regeneration of RequestSpec in MigrationTask * remove i18n log markers from nova.api.\* * [placement] add api-ref for GET /resource\_providers * Structure for simply managing placement-api-ref * 'uplug' word spelling mistake * Make xenapi driver compatible with assert\_can\_migrate * Remove mox from nova/tests/unit/api/openstack/compute/test\_virtual\_interfaces.py * Remove mox from nova/tests/unit/api/openstack/compute/test\_quotas.py * Remove mox from nova/tests/unit/api/openstack/compute/test\_migrations.py * Fix wrong unit test about config option enabled\_apis * Do not attempt to load osinfo if we do not have os\_distro * Add confirm resized server functional negative tests * remove mox from unit/api/openstack/compute/test\_disk\_config.py * Revert "libvirt: Pass instance to connect\_volume and ..." 
* Add description to policies in virtual\_interfaces.py * Add description to policies in availability\_zone * Add description to policies in suspend\_server.py * api-ref: fix description of volumeAttachment for attach/swap-volume * Get instance availability\_zone without hitting the api db * Set instance.availability\_zone whenever we schedule * [placement] Don't use floats in microversion handling * tests: fix uefi testcases * libvirt: make emulator threads run on the reserved pCPU * libvirt: return a CPU overhead if isolate emulator threads requested * virt: update overhead to take into account vCPUs * numa: update numa usage to include reserved CPUs * numa: take into account cpus reserved * numa: fit instance NUMA node with cpus reserved onto host NUMA node * remove mox from unit/api/openstack/compute/test\_flavor\_manage.py * remove mox from unit/compute/test\_compute\_utils.py * api-ref: Complete all the verifications of remote consoles * remove mox from unit/virt/xenapi/image/test\_bittorrent.py * Fix some reST field lists in docstrings * Add lan9118 as valid nic for hw\_vif\_model property for qemu * Add description to policies in deferred\_delete.py * Add description to policies in create\_backup.py * Add description to policies in consoles.py * Add description to policies in cloudpipe.py * Add description to policies in console\_auth\_tokens.py * Add description to policies in baremetal\_nodes.py * conf: Final cleanups in conf/network * conf: Deprecate 'allow\_same\_net\_traffic' * libvirt: Ignore 'allow\_same\_net\_traffic' for port filters * conf: Deprecate 'use\_ipv6' * netutils: Ignore 'use\_ipv6' for network templates * Add check for invalid inventory amounts * Add check for invalid allocation amounts * Remove the Allocation.create() method * Add release note for CVE-2017-7214 * Add description to policies in cells.py * Tests: remove .testrepository/times.dbm in tox.ini (functional) * Pre-add functional tests stub to notification testing * libvirt: conditionally set script path for ethernet vif types * Add description to policies in agents.py * Add description to policies in admin\_password.py * libvirt: mark some Image backend methods as abstract * Add description to policies in assisted\_volume\_snapshots.py * Change os-server-tags default policy * Ironic: hardcode min\_unit for standard resources to 1 * Refactor: remove \_items() in nova/api/openstack/compute/attach\_interfaces.py * delete more i18n log markers * remove log translation from nova.api.metadata * update i18n guide for nova * Add description to policies in attach\_interfaces.py * Add description to policies in volumes\_attachments.py * Add description to policies in volumes.py * Fix rest\_api\_version\_history (2.40) * fix os-volume\_attachments policy checks * libvirt: Ignore 'use\_ipv6' for port filters * conf: Fix indentation in conf/netconf * Remove unused VIFModel.\_get\_legacy method * Add helper method to add additional data about policy rule * DELETE all inventory for a resource provider * nova-status: don't coerce version numbers to floats for comparison * remove mox from unit/api/openstack/compute/test\_flavors.py * Improve descriptions for hostId, host, and hypervisor\_hostname * compute: Only destroy BDMs after successful detach call * Remove old oslo.messaging transport aliases * Updated from global requirements * do not include context to exception notification * Add api-ref for filter/sort whitelist * Fix functional regression/recreate test for bug 1671648 * api-ref: fix description in
os-services * flake8: Specify 'nova' as name of app * objects: Add attachment\_id to BlockDeviceMapping * db: Add attachment\_id to block\_device\_mapping * Updated from global requirements * Clarify os-stop API description * remove flake8-import-order for test requirements * Avoid lazy-loading projects during flavor notification * libvirt: add debug logging in detach\_device\_with\_retry * Transform instance.reboot.error notification * Transform instance.reboot notifications * remove hacking rule that enforces log translation * doc: configurable versioned notifications topics * Replace obsolete vanity openstack.org URLs * Add populate\_retry to schedule\_and\_build\_instances * Add a functional regression/recreate test for bug 1671648 * virt: implement get\_inventory() for Ironic * Fix the help for the disk\_weight\_multiplier option * Add a note about force\_hosts only ever having a single value * Make os-availability-zones know about cells * Introduce fast8 tox target * Duplicate JSON line ending check to pep8 * trivial: Remove \r\n line endings from JSON sample * [placement] Raising http codes on old microversion * Updated from global requirements * doc: add some documentation around quotas * Make versioned notifications topics configurable * Use proper user and tenant in the owner section of libvirt.xml * Prevent delete cell0 in nova-manage command * Refactor InstancePayload creation * nova-status: require placement >= 1.4 * Temporarily untarget context when deleting from cell0 * Decrement quota usage when deleting an instance in cell0 * VMware: use WithRetrieval in ds\_util module * VMware: use WithRetrieval in get\_network\_with\_the\_name * Remove VMware driver \_get\_vm\_ref\_from\_uuid method * trivial: Add a note about 'cells\_api' * Add description for Image location in snapshot * Typo fix in releasenotes: deprecate network options * api-ref: Fix parameters and examples in aggregate API * Transform instance.rebuild.error notification * Transform instance.rebuild notification * Add regression test for bug 1670627 * No API cell up-call to delete consoleauth tokens * Add identity helper property to CellMapping * Correctly set up deprecation warning * Add cell field to Destination object * Change MQ targeting to honor only what is in the context * Remove duplicate attributes in sample files * api-ref: Fix keypair API parameters * Fix missing instance.delete notification * conf: Fix formatting of network options * Teach simple\_tenant\_usage about cells * Teach os-migrations about cells * Teach os-aggregates about cells * Stop using mox in unit/virt/disk/test\_api.py * Avoid using fdatasync() when fetching images * Fix API doc about server attributes (2.3 API) * Refactor cell loading in compute/api * Target cell in super conductor operations * Ensure image conversion flushes output data to disk * fdatasync() downloaded images before use * conf: fix default values reporting infra worker * Error message should not include SQL command * Make consoleauth target the proper cell * Enlighten server tags API about cells * Update docstrings for legacy notification methods * conf: Deprecate most 'network' option * Use Cinder API v3 as default * get\_model method missing for Ploop image * trivial: Standardize naming of variables * trivial: Standardize indentation of test\_vif * autospec the virt driver mock in test\_resource\_tracker * Add functional test for bad res class in set\_inventory\_for\_provider * Remove unused placement\_database config options * libvirt: pass log\_path to 
\_create\_pty\_device for non-kvm/qemu * virt: add get\_inventory() virt driver API method * conf: remove console\_driver opt * Use flake8-import-order * numa: add numa constraints for emulator threads policy * Remove mox from nova.tests.unit.api.openstack.compute.test\_block\_device\_mapping * Revert "Add some metadata logging to root cause ssh failure" * Add comment to instance\_destroy() * Remove GlanceImageService * Use Sphinx 1.5 warning-is-error * Add warning on setting secure\_proxy\_ssl\_header * handle uninited fields in notification payload * Fix api-ref with Sphinx 1.5 * Imported Translations from Zanata * Reno for additional-notification-fields-for-searchlight * Default firewall\_driver to nova.virt.firewall.NoopFirewallDriver * Handle conflicts for os-assisted-volume-snapshots * Remove mox from nova.tests.unit.api.openstack.compute.test\_create\_backup * Log with cell.uuid if cell.name is not set * Updated from global requirements * re-orphan flavor after rpc deserialization * Stop using mox stubs in nova.tests.unit.api.openstack.compute.test\_serversV21 * Skip unit tests for SSL + py3 * Add functional test for ip filtering with regex * Add resize server functional negative tests * conf: resolved final todos in libvirt conf * Only create vendordata\_dynamic ksa session if needed * Check for 204 case in DynamicVendorData * Add some metadata logging to root cause ssh failure * Remove unused variable * Remove domains \*-log-\* from compile\_catalog * Updated from global requirements * Updated from global requirements * [placement] Add Traits related table to the api database * Remove mox from nova/tests/unit/db/test\_db\_api.py * Complete verification of servers-action-fixed-ip.inc * Remove mox in nova/tests/unit/compute/test\_shelve.py (3) * libvirt: Pass instance to connect\_volume and disconnect\_volume * Stop using mox in compute/test\_hypervisors.py * Add device\_id when creating ports * Make compute/api instance get set target cell on context * Remove mox from nova.tests.unit.virt.xenapi.test\_vmops[1] * Tests: remove .testrepository/times.dbm in tox.ini * Updated from global requirements * Remove invalid tests-py3 whitelist item * Ignore deleted services in minimum version calculation * Add RPC version aliases for Ocata * Remove mox from nova/tests/unit/test\_configdrive2.py * Remove usage of config option verbose * Remove check\_attach * Handle VolumeBDMIsMultiAttach in os-assisted-volume-snapshots * api/metadata/vendordata\_dynamic: don't import Requests for its constants * Fix typos detected by toolkit misspellings * remove a TODO as all set for tags * Clean up metadata param in doc * Remove extension in API layer * Correct some spelling errors * Fix typo in config drive support matrix docs * doc: Don't put comments inside toctree * Fix doc generation warnings * Remove run\_tests.sh * Fix spice channel type * Updated from global requirements * libvirt: drop MIN\_LIBVIRT\_HUGEPAGE\_VERSION * libvirt: drop MIN\_LIBVIRT\_NUMA\_VERSION * libvirt: drop MIN\_QEMU\_NUMA\_HUGEPAGE\_VERSION * libvirt: Fix misleading error in Ploop imagebackend * More usage of ostestr and cleanup an unused dependency * Ensure that instance directory is removed after success migration/resize * api-ref: Body verification for os-hypervisors.inc * Make conductor create InstanceAction in the proper cell * Allow nova-status to work with custom ca for placement * libvirt: Handle InstanceNotFound exception * Make scheduler get hosts from all cells * Make servers API use cell-targeted context * Make 
CellDatabases fixture work over RPC * Use the keystone session loader in the placement reporting * Verify project\_id when quotas are checked * Remove mox from nova/tests/unit/virt/vmwareapi/test\_vif.py * conf: Fix invalid rST comments * Revert "fix usage of opportunistic test cases with enginefacade" * Placement api: set custom json\_error\_formatter in resource\_class * Enable coverage report * Make server\_external\_events cells-aware * Remove service version check for Ocata/Newton placement decisions * Remove a dead cinder v1 check * Raise correct error instead of 'class exists' in Placement API * Remove mox from nova/tests/unit/objects/test\_service.py * Skip soft-deleted records in 330\_enforce\_mitaka\_online\_migrations * Stop using mox from tests/unit/test\_service.py * Update placement\_dev with info about new decorator * Remove unused logging import * Deprecate xenserver.vif\_driver config option and change default * Fix live migrate with XenServer * Fix novncproxy for python3 * Remove mox stubs in api/openstack/compute/test\_server\_reset\_state.py * Fix some typo errors * Enable defaults for cell\_v2 update\_cell command * Remove dead code: \_safe\_destroy\_instance\_residue * Updated from global requirements * Make eventlet hub use a monotonic clock * Tolerate WebOb===1.7.1 * Tolerate jsonschema==2.6.0 * Stop using mox in test\_compute\_cells.py * Stop using mox in virt/xenapi/image/test\_glance.py * Remove mox from unit/api/openstack/compute/test\_aggregates.py * Remove mox from api/openstack/compute/test\_deferred\_delete.py * Typo fix: degredation => degradation * api-ref: Fix deprecated proxy API parameters * api-ref: note that boot ignores bdm:device\_name * Skip test\_stamp\_pattern in cells v1 job * Fix misuse of assertTrue * Fix improper prompt when updating RC with an existing one's name * Remove mox from nova/tests/unit/virt/vmwareapi/test\_configdrive.py * Placement api: set custom json\_error\_formatter in root * Cleanup some issues with CONF.placement.os\_interface * Placement api: set custom json\_error\_formatter in aggregate and usage * Fix suggested database migration command * Placement api: set custom json\_error\_formatter in resource\_provider * api-ref: Fix network\_label parameter type * Fix incorrect example for querying resource for RP * Use ListOfIntegersField in oslo.versionedobjects * libvirt: drop MIN\_QEMU\_PPC64\_VERSION * libvirt: drop MIN\_LIBVIRT\_AUTO\_CONVERGE\_VERSION * libvirt: drop MIN\_QEMU\_DISCARD\_VERSION * libvirt: drop MIN\_LIBVIRT\_HYPERV\_TIMER\_VERSION * libvirt: drop MIN\_LIBVIRT\_UEFI\_VERSION * libvirt: drop MIN\_LIBVIRT\_FSFREEZE\_VERSION * libvirt: drop MIN\_LIBVIRT\_BLOCKJOB\_RELATIVE\_VERSION * Bump minimum required libvirt/qemu versions for Pike * api-ref: fix instance action 'message' description * Placement api: set custom json\_error\_formatter in inventory * conf/libvirt: remove invalid TODOs * conf/compute: remove invalid TODOs * Remove straggling use of main db flavors in cellsv1 code * Add Cells V1 -> Cells V2 step-by-step example * nova-manage: Update deprecation timeline * Enable global hacking checks and removed local checks * Update hacking version * Use min parameter to restrict live-migration config options * Fix typo in nova/network/neutronv2/api.py * libvirt: wait for interface detach from the guest * libvirt: fix and break up \_test\_attach\_detach\_interface * api-ref: mark id as optional in POST /flavors * Fix nova-manage cell\_v2 metavar strings * Remove unused validation code from block\_device * Prepare for
* Placement api: set custom json_error_formatter in allocations
* [3/3]Replace six.iteritems() with .items()
* conf: Deprecate 'firewall_driver'
* conf: Deprecate 'ipv6_backend'
* libvirt: set vlan tag for macvtap on SR-IOV VFs
* Removed unnecessary parentheses and fixed formatting
* Fix the spelling mistake in host.py
* Allow None for block_device_mapping_v2.boot_index
* Edits for Cells V2 step-by-step examples
* api-ref: fix delete server async postcondition docs
* libvirt: check if we can quiesce before volume-backed snapshot
* Default live_migration_progress_timeout to off
* libvirt: Remove redundant bdm serial mangling and saving during swap_volume
* Consider startup scenario in _get_compute_nodes_in_db
* libvirt: Introduce Guest.get_config method
* libvirt: Parse basic attributes of LibvirtConfigGuest from xml
* libvirt: Parse filesystem elements of guest config
* libvirt: Parse virt_type attribute of LibvirtConfigGuest from xml
* libvirt: Parse os attributes of LibvirtConfigGuest from xml
* libvirt: Remove unused disk_info parameter
* libvirt: Simplify internal usage of get_instance_disk_info
* Stop failed live-migrates getting stuck migrating
* Stop _undefine_domain erroring if domain not found
* tests: fix vlan test type from int to str
* Add an update_cell command to nova-manage
* allocations.consumer_id is not used in query
* api-ref: document the 'tenant_id' query parameter
* TrivialFix: replace list comprehension with 'for'
* Reserve migration placeholders for Ocata backports
* Update the upgrades part of devref
* Cleanup the caches when deleting a resource provider
* vomiting
* Clarify the deployment of placement for cellsv1 users
* conf: remove deprecated image url options
* conf: add min parameter to scheduler opts
* Add step-by-step examples for Cells V2 setup
* Add nodename to _claim_test log messages
* Update reno for stable/ocata

15.0.0.0rc1
-----------

* Add placement request id to log when GET or POST rps
* Add placement request id to log when GET aggregates
* Add more debug logging on RP inventory delete failures
* Add more debug logging on RP inventory update failures
* Delete a compute node's resource provider when node is deleted
* Remove mox from unit/virt/libvirt/test_imagebackend.py (end)
* Mark compute/placement REST API max microversions for Ocata
* Add release note for filter/sort whitelist
* Clarify the language in the apache wsgi sample
* Stop swap allocations being wrong due to MB vs GB
* Clarify the [cells] config option help
* Add offset & limit docs & tests
* Report reserved_host_disk_mb in GB not KB
* Fix access_ip_v4/6 filters params for servers filter
* Fix typo in cells v2 ocata reno
* doc: add upgrade notes to the placement devref
* Simplify uses of assert_has_calls
* Fix typo in help for discover_hosts_in_cells_interval
* Handle NotImplementedError in _process_instance_vif_deleted_event
* Fix the terminated_at field in the server query params schema
* Add release note for nova-status upgrade check CLI
* Add prelude section for Ocata
* Collected release notes for Ocata CellsV2
* reno for notification-transformation-ocata
* Allow scheduler to run cell host discovery periodically
* doc: update the man page entry for nova-manage db sync
* doc: refer to the cell_v2 man pages from the cells v2 doc
* doc: add some detail to the map_cell0 man page
* Remove pre-cellsv2 short circuit in instance get
* Continue processing build requests even if one is gone already
* Allow placement endpoint interface to be set
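
A recurring theme in the entries above is the Python 3 porting work, e.g. "[3/3]Replace six.iteritems() with .items()". A minimal sketch of that pattern (the dictionary here is illustrative):

    # Before: Python-2-only iteration via six
    # import six
    # for key, value in six.iteritems(metadata):
    #     ...

    # After: .items() works on both Python 2 and 3 (a list on py2,
    # a view object on py3 -- either way it is fine to iterate)
    metadata = {'availability_zone': 'nova', 'hostname': 'vm-1'}
    for key, value in metadata.items():
        print(key, value)
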
* Ensure build request exists before creating instance
* placement-api: fix ResourceProviderList query
* tests: Remove duplicate NumaHostInfo
* tests: Combine multiple NUMA-generation functions
* tests: Don't reinvent __init__
* Explain how allow_resize_to_same_host is useful
* nova-status: relax the resource providers check
* Read instances from API cell for cells v1
* [placement] Use modern attributes of oslo_context
* Fix map_cell_and_hosts help
* Fresh resource provider in RT must have generation 0
* libvirt: Limit destroying disks during cleanup to spawn
* Use is_valid_cidr and is_valid_ipv6_cidr from oslo_utils
* Ignore IOError when creating 'console.log'
* Fix unspecified behavior on GET /servers/detail?tenant_id=X as admin
* Remove unused exceptions from nova.exception
* nova-manage docs: cell_v2 delete_cell
* nova-manage docs: cell_v2 list_cells
* nova-manage docs: cell_v2 discover_hosts
* nova-manage docs: cell_v2 create_cell
* nova-manage docs: cell_v2 verify_instance
* nova-manage docs: cell_v2 map_cell_and_hosts
* Fix tag attribute disappearing in 2.33 and 2.37
* Scheduler calling the Placement API
* Block starting compute unless placement conf is provided
* Added instance.reboot.error to the legacy notifications
* Avoid redundant call to update_resource_stats from RT
* api-ref: Fix path parameters in os-hypervisors.inc
* libvirt: fix vCPU usage reporting for LXC/QEMU guests
* Adding vlans field to Device tagging metadata
* libvirt: expose virtual interfaces with vlans to metadata
* objects: vlan field to NetworkInterfaceMetadata object
* Move instance creation to conductor
* Updated from global requirements
* Fix server group functional test by using all filters
* Hyper-V PCI Passthrough
* Change exponential function to linear
* Fixed indentation in virt/libvirt/driver.py
* Cache boot time roles for vendordata
* Optionally make dynamic vendordata failures fatal
* Use a service account to make vendordata requests
* libvirt: ephemeral disk support for virtuozzo containers

15.0.0.0b3
----------

* ironic: Add trigger crash dump support to ironic driver
* Only warn about hostmappings during ocata upgrade
* nova-manage docs: cell_v2 map_instances
* nova-manage docs: cell_v2 map_cell0
* nova-manage docs: cell_v2 simple_cell_setup
* Add new configuration option live_migration_scheme
* Fix race condition in instance.update sample test
* libvirt: Use the mirror element to detect job completion
* libvirt: Mock is_job_complete in test_driver
* adding debug info for pinning calculation
* PCI: Check pci_requests object is empty before passing to support_requests
* Ironic: Add soft power off support to Ironic driver
* Add sort_key white list for server list/detail
* Trivial-fix: replace "json" with "yaml" in policy README
* Release PCI devices on drop_move_claim()
* objects: add new field cpuset_reserved in NUMACell
* Make api_samples tests use simple cell environment
* Assign mac address to vf netdevice when using macvtap port
* conf: Deprecate 'console_driver'
* libvirt: avoid generating script with empty path
* placement: minor refactor _allocate_for_instance()
* placement: report client handle InventoryInUse
* Multicell support for instance listing
* scheduler: Don't modify RequestSpec.numa_topology
* Fix and add some notes to the cells v2 first time setup doc
* Add deleting log when config drive was imported to rbd
* Updated from global requirements
* Amend the PlacementFixture
* Prevent compute crash on discovery failure
* Ironic: Add soft reboot support to ironic driver
* os-vif: convert libvirt driver to use os-vif for fast path vhostuser
* Updated from global requirements
* Add a PlacementFixture
* Set access_policy for messaging's dispatcher
* libvirt: make coherent logs when reboot succeeds
* Add ComputeNodeList.get_all_by_uuids method
* Fix typo in 216_havana.py
* placement: create aggregate map in report client
* Support Ironic interface attach/detach in nova virt
* Generate necessary network metadata for ironic port groups
* Ensure we mark baremetal links as phy links
* os-vif-util: set vif_name for vhostuser ovs os-vif port
* Move migration_downtime_steps to libvirt/migration
* libvirt: fix nova being unable to delete the instance with nvram
* Remove mox in libvirt destroy tests
* VMWare: Move constant power state strings to the constant.py
* Remove references to Python 3.4
* hyperv: make sure to plug OVS VIFs after resize/migrate
* Strict pattern match query parameters
* Raise InvalidInput exception
* Fix Nova to allow using cinder v3 endpoint
* [py35] Fixes to get more tempest tests working
* Move to tooz hash ring implementation
* api-ref: Fix a parameter in os-availability-zone.inc
* objects: remove cpu_topology from __init__ of InstanceNUMATopology
* Integrate OSProfiler and Nova
* Remove mox from unit/virt/libvirt/test_imagebackend.py (5)
* Enable virt.vmwareapi test cases on Python
* Enable virt.test_virt_drivers.AbstractDriverTestCase on Python 3
* Port compute.test_user_data.ServersControllerCreateTest to Python 3
* Add revert resized server functional negative tests
* XenAPI: Fix vif plug problem during VM rescue/unrescue
* Handle oslo.serialization type error and binascii error
* Remove invalid URL in gabbi tests
* nova-manage cell_v2 map_cell0 exit 0
* Add query parameters white list for server list/detail
* nova-manage docs: add cells commands prep
* Add --verbose option to discover_hosts command
* Add more details when test_create_delete_server_with_instance_update fails
* Updated from global requirements
* Add some cellsv2 setup docs
* Fix the generated cell0 default database name
* rt: use a single ResourceTracker object instance
* Add nova-manage cell_v2 delete_cell command
* Add InstanceMappingList.get_by_cell_id
* Create HostMappingList object
* Add nova-manage cell_v2 list_cells command
* Add nova-manage cell_v2 create_cell command
* Add rudimentary CORS support to placement API
* libvirt: workaround findmnt behaviour change
* api-ref: Fix parameters whose values are 'null'
* Fix broken link in doc
* api-ref: Fix parameters and response in os-quota-sets.inc
* Remove nova-manage image from man pages
* Updated from global requirements
* Fixes to get all functional tests working on py35
* [placement] Add a bit about extraction plans to placement_dev
* [placement] Add an "Adding a Handler" section to placement_dev
* [placement] placement_dev info for testing and gabbi
* [placement] placement_dev info for microversion handling
* Updated from global requirements
* placement: validate member_of values are uuids
* Make metadata server know about cell mappings
* Remove redundant arg check in nova-manage cell_v2 verify_instance
* Expose a REST API for a specific list of RPs
* copy pasta error
* Set sysinfo_serial="none" in LibvirtDriverTestCase
* [py35] Fixes to get rally scenarios working
* Fix missing RP generation update
* Add service_token for nova-neutron interaction
* rt: explicitly pass compute node to _update()
* Make unit tests work with os-vif 1.4.0
* Updated from global requirements
* libvirt: make live migration possible with Virtuozzo
* Small improvements to placement.rst
* Better black list for py35 tests
* Fix class type error in attach_interface() function
* Hyper-V: Adds vNUMA implementation
* Don't bypass cellsv1 replication if cellsv2 maps are in place
* Adds Hyper-V OVS ViF driver
* docs - Connect to placement service & retries
* Improve flavor sample in notification sample tests
* xenapi: support the hotplug of a neutron port
* Update notification for flavor
* Add service_token for nova-cinder interaction
* Make allocate_for_instance take consistent args
* XenAPI Remove useless files when using os-xenapi lib
* XenAPI Use os-xenapi lib for nova
* Make placement client keep trying to connect
* releasenotes: Add missing releasenote for encryption provider constants
* Stop using mox stubs in test_attach_interfaces.py
* Remove mox from api/openstack/compute/test_floating_ip_dns.py
* Remove mox in nova/tests/unit/compute/test_shelve.py (end)
* Remove mox in unit/api/openstack/test_wsgi.py
* Document testing process for zero downtime upgrade
* Remove mox in nova/tests/unit/compute/test_shelve.py (2)
* Notifications for flavor operations
* Add debug possibility for nova-manage command
* conf: Deprecate yet more nova-net options
* conf: Resolve formatting issues with 'quota'
* [2/3]Replace six.iteritems() with .items()
* Port xenapi test_vm_utils to Python 3
* docs: sort the Architecture Concepts index
* Make the SingleCellSimple fixture a little more comprehensive
* Fix non-parameterized service id in hypervisors sample tests
* Fix TypeError in _update_from_compute_node race
* Trivial indentation fix
* Add missing CLI commands in support-matrix.ini
* tests: Replace use of CONF with monkey patching
* correct misleading wording
* Fix a typo in documents
* Don't translate exceptions w/ no message
* Fix ksa mocking in test_cinderclient_unsupported_v1
* [placement] fix typo in call to create auth middleware
* HTTP interface for resource providers by aggregates
* Return uuid attribute for aggregates
* Update docstring of _schema_validation_helper
* api-ref: use the examples with paging links
* Port libvirt.test_vif to Python 3
* Port libvirt.test_firewall to Python 3
* Move quota options to a config group
* Handle Unauthorized exception in report client's safe_connect()
* Remove mox from unit/virt/libvirt/test_imagebackend.py (4)
* Remove mox from unit/virt/libvirt/test_imagebackend.py (3)
* Remove mox from unit/virt/libvirt/test_imagebackend.py (2)
* Do not post allocations that are zero
* Remove mox from unit/compute/test_compute_api.py (1)
* Add aggregate notification related enum values
* Transform aggregate.delete notification
* Transform aggregate.create notification
* Added missing decorator for instance.create.error
* Enable Neutron by default
* Port virt.libvirt.test_imagebackend to Python 3
* move gate hooks to gate/
* tools: Remove 'colorizer'
* tools: Remove 'with_venv'
* tools: Remove 'install_venv', 'install_venv_common'
* tools: Remove 'clean-vlans'
* tools: Remove 'enable-pre-commit-hook'
* Use JSON-Schema to validate query parameters for keypairs API
* Adds support for versioned schema validation for query parameters
* Remove mox from api/openstack/compute/test_extended_hypervisors.py
* Stop using mox in compute/test_server_actions.py
* Remove mox from unit/api/openstack/compute/test_cloudpipe.py
* Add support matrix for attach and detach interfaces
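
The query-parameter validation entries above ("Use JSON-Schema to validate query parameters for keypairs API" and the versioned schema validation support) follow the pattern sketched below using the jsonschema library; the schema shown is illustrative, not nova's actual keypairs schema:

    import jsonschema

    # Hypothetical query-string schema: each parameter arrives as a
    # list of strings, mirroring how WSGI frameworks expose the query.
    QUERY_SCHEMA = {
        'type': 'object',
        'properties': {
            'limit': {'type': 'array',
                      'items': {'type': 'string', 'pattern': '^[0-9]+$'}},
            'marker': {'type': 'array', 'items': {'type': 'string'}},
        },
        'additionalProperties': False,
    }

    def validate_query(params):
        # Raises jsonschema.exceptions.ValidationError on bad input,
        # which an API layer would translate into an HTTP 400 response.
        jsonschema.validate(params, QUERY_SCHEMA)

    validate_query({'limit': ['10']})     # passes
    # validate_query({'bogus': ['x']})    # would raise ValidationError
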
* Make last remaining unit tests work with Neutron by default
* Make test_metadata pass with CONF.use_neutron=True by default
* Make test_nova_manage pass with CONF.use_neutron=True by default
* Stub out os_vif.unplug in libvirt instance destroy tests
* Make test_attach_interfaces work with use_neutron=True by default
* Make test_floating_ip* pass with CONF.use_neutron=True by default
* Make several API unit tests pass with CONF.use_neutron=True by default
* Make test_server_usage work with CONF.use_neutron=True by default
* Make test_security_group_default_rules work with use_neutron=True by default
* Make test_tenant_networks pass with CONF.use_neutron=True by default
* Make test_security_groups work with CONF.use_neutron=True by default
* Make test_virtual_interfaces work with CONF.use_neutron=True by default
* Make test_user_data and test_multiple_create work with use_neutron=True
* Make test_quota work with CONF.use_neutron=True by default
* Make test_compute pass with CONF.use_neutron=True by default
* api-ref: Fix parameters in os-server-groups.inc
* Remove mox in test_block_device_mapping_v1.py
* placement: Do not save 0-valued inventory
* Add 'disabled' to WatchdogAction field
* Remove more deprecated nova-manage commands
* Make servers api view load instance fault from proper cell
* Add support for setting boot order in Hyper-V
* Create schema generation for NetworkModel
* conf: added notifications group
* Missing usage next links in api-ref
* [placement] start a placement_dev doc
* Stop handling differences in registerCloseCallback
* Enable TestOSAPIFixture.test_responds_to_version on Python 3
* pci: Clarify SR-IOV ports vs direct passthrough ports
* nova-status: check for compute resource providers
* doc: add recommendation for delete notifications
* Move FlavorPayload to a separate file
* Remove Rules.load_json warning
* Handle unicode when dealing with duplicate aggregate errors during migration
* Handle unicode when dealing with duplicate flavors during online migrations
* Actually test online flavor migrations
* Remove unused init_only kwarg from wsgi app init
* api-ref: add notes about POST/DELETE errors for os-tenant-networks
* Remove unnecessary attrs from TenantNetworksDeprecationTest
* api-ref: microversion 2.40 overview
* Fix assertion in test_instance_fault_get_by_instance
* Add more fields in InstancePayload
* api-ref: cleanup os-server-groups 'policies' parameter description
* objects: add new field cpu_emulator_threads_policy
* Support filtering resource providers by aggregate membership
* Resource tracker doesn't free resources on confirm resize
* Stop using mox stubs in nova/tests/unit/cells
* Add release note to PCI passthrough whitelist regex support
* api-ref: Fix parameter type in servers-admin-action.inc
* Port security group related tests to Python 3
* Add create image functional negative tests
* Don't apply multi-queue to SRIOV ports
* Avoid multiple initializations of Host class
* placement: correct improper test case inheritance
* Remove mox in tests/unit/objects/test_instance_info_cache
* Port compute unit tests to Python 3
* Fix urllib.urlencode issue in functional tests on Python 3
* Trivial fix typo
* Enable network.test_neutronv2.TestNeutronv2 on Python 3
* Enable compute.test_compute_mgr.ComputeManagerUnitTestCase on Python 3
* Port api.openstack.compute.test_disk_config to Python 3
* Updated from global requirements
* Ignore 404s when deleting allocation records
* nova-status: return 255 for unexpected errors
* VMware: Update supported OS types for ESX 6.5
* Replace "Openstack" with "OpenStack"
* Use bdm destination type allowed values hard coded
* Fix BDM JSON-Schema validation
* [TrivialFix] Fix comment and function name typo error
* [TrivialFix] Fix comment typo error
* Fix python3 issues with devstack
* [1/3]Replace six.iteritems() with .items()
* Fix typo
* Fix misleading port delete description
* conf: remove deprecated barbican options
* conf: Remove 'virt' file
* Trivial fix typos in api-ref
* make 2.31 microversion wording better
* Add soft delete wrinkle to api-ref
* Add document update for get console usage
* Trivial: add ability to define action description
* Added missed "raises:" docstrings into numa_get_constraints() method
* Removes unnecessary utf-8 encoding
* Port test_matchers.TestDictMatches.test__str__ to Python 3
* Skip network.test_manager.LdapDNSTestCase on Python 3
* Remove mox in tests/unit/objects/test_security_group
* Remove v2.40 from URL string in usage API docs
* nova-status: add basic placement status checking
* nova-status: check for cells v2 upgrade readiness
* Add nova-status upgrade check command framework
* rt: remove fluff from test_resource_tracker
* rt: pass the nodename to public methods
* conf: make 'default' upper case
* conf: move a few console opts to xenserver group
* conf: remove deprecated ironic options
* conf: refactor conf_fixture.py
* Add unit test for extract_snapshot with compression enabled
* Refactor the code to add generic schema validation helper
* Updated from global requirements
* Fix error if free_disk_gb is None in CellStateManager
* nova-manage: squash oslo_policy debug logging
* Pre-load info_cache when handling external events and handle NotFound
* Make nova-manage cell_v2 discover_hosts tests use DBs
* Fix nova-manage cell_v2 discover_hosts RequestContext
* Make nova-manage emit a traceback when things blow up
* XenAPI: Remove ovs_integration_bridge default value
* rt: pass nodename to internal methods
* Failing test (mac osx) - test_cache_ephemeral
* Catch VolumeEncryptionNotSupported during spawn
* Updated from global requirements
* Fix exception message formatting error in test
* osapi_max_limit -> max_limit
* Add more detail to help text for reclaim_instance_interval option
* Added PRSM to HVType class to support PR/SM hypervisor
* conf: Deprecate more nova-net options

15.0.0.0b2
----------

* [test]Change fake image info to fit instance xml
* Cleanup Newton Release Notes
* Port libvirt.storage.test_rbd to Python 3
* VMware: ensure that provider networks work for type 'portgroup'
* libvirt: Stop misusing NovaException
* Fix the file permissions of test_compute_mgr.py
* Add detail to cellsv2-related release notes
* Revert "Use liberty-eol tag for liberty release notes"
* Fix some release notes in preparation for the o-2 beta release
* Add schedule_and_build_instances conductor method
* libvirt: Detach volumes from a domain before detaching any encryptors
* libvirt: Flatten 'get_domain' function
* fakelibvirt: Remove unused functions
* libvirt: Remove slowpath listing of instances
* Only return latest instance fault for instances
* Remove dead begin/end code from InstanceUsageAuditLogController
* Use liberty-eol tag for liberty release notes
* api-ref: Fix description of os-instance-usage-audit-log
* conf: fix formatting in base
* Stop allowing tags as empty string
* libvirt: remove hack for dom.vcpus() returning None
* Add Python 3.5 functional tests in tox.ini
* Simple tenant usage pagination
* Modify mistake of scsi adapter type class
* Remove the EC2 compatible API tags filter related code
* Port virt vmwareapi tests to Python 3
* Mark sibling CPUs as 'used' for cpu_thread_policy = 'isolated'
* Added missed "raises:" docstrings into numa_get_constraints() method
* Changed NUMACell to InstanceNUMACell in test_stats.py
* TrivialFix: changed log message
* api-ref: Fix 'id' (attachment_id) parameters
* Move tags validation code to json schema
* Let nova-manage cell_v2 commands use transport_url from CONF
* Make test_create_delete_server_with_instance_update deterministic
* restore locking in notification tests
* Remove mox from unit/compute/test_compute_api.py(2)
* Deprecate compute options
* Remove support for the Cinder v1 API
* Make simple_cell_setup fully idempotent
* Corrects the type of a base64 encoded string
* Fix instructions for running simple_cell_setup
* Quiet unicode warnings in functional test_resource_provider
* conf: Detail the 'injected_network_template' opt
* Add more description for rx and tx param
* move rest_api_version_history.rst to compute layer
* Enhance PCI passthrough whitelist to support regex
* Better wording for microversion 2.36
* Port test_servers to py3
* Catch InstanceNotFound exception
* Remove mox in tests/unit/objects/test_compute_node
* Refactor REGEX filters to eliminate 500 errors
* Fix crashing during guest config with pci_devices=None
* Provide an online data migration to cleanup orphaned build requests
* Add SecurityGroup.identifier to prefer uuid over name
* Setup CellsV2 environment in base test
* conf: add warning for vm's max delete attempts
* Cleanup after any failed libvirt spawn
* Guestfs handle no passwd or group in image
* Return 400 when name is more than 255 characters
* Check that all JSON files don't have \r\n in line
* Enable test_bdm.BlockDeviceMappingEc2CloudTestCase on Python 3
* network id is uuid instead of id
* fix for auth during live-migration
* Don't trace on ImageNotFound in delete_image_on_error
* Cascade deletes of RP aggregate associations
* Make resource provider objects not remotable
* Bump prlimit cpu time for qemu from 2 to 8
* test: drop unused config option fake_manager
* conf: Remove config option compute_manager
* Extend get_all_by_filters to support resource criteria
* Port test_virt_drivers to Python 3
* Don't use 'updated_at' to check service's status
* libvirt: Fix initialising of LVM ephemeral disks
* Remove extra ^M from json file
* Port virt.disk.mount.test_nbd to Python 3
* Remove unnecessary comment of BDM validation
* Update ironic driver get_available_nodes docstring
* api-ref: note that os-virtual-interfaces is nova-network only
* Fix up non-cells-aware context managers in test_db_api
* Add SingleCellSimple fixture
* [proxy-api] microversion 2.39 deprecates image-metadata proxy API
* Make RPCFixture support multiple connections
* tests: avoid starting compute service twice in sriov functional test
* tests: generate correct pci addresses for fake pci devices
* Fix nova-serialproxy when registering cli options
* Updated from global requirements
* Revert "reduce pep8 requirements to just hacking"
* conf: Improve help text for network options
* conf: Deprecate all nova-net related opts
* libvirt: Mock imagebackend template funcs in ImageBackendFixture
* libvirt: Combine injection info in InjectionInfo
* Fix misuse of assertTrue
* Return 400 when name is more than 200 characters
* Replace the assertEqual(None,A) with assertIsNone(A)
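
The entry just above replaces assertEqual(None, A) with assertIsNone(A); a minimal illustration of why the latter reads better (the test name and lookup are illustrative):

    import unittest

    class ExampleTest(unittest.TestCase):
        def test_lookup_miss(self):
            result = {}.get('missing-key')
            # Discouraged: self.assertEqual(None, result) -- it hides
            # the intent and newer test tooling flags it.
            # Preferred: states exactly what is being checked.
            self.assertIsNone(result)

    if __name__ == '__main__':
        unittest.main()
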
* Rename few tests as per new config options
* Handle MarkerNotFound from cell0 database
* Removed unused ComputeNode create/update_inventory methods
* Fix a typo in a comment in microversion history
* Handle ImageNotFound exception during instance backup
* Add a CellDatabases test fixture
* Pass context as kwarg instead of positional arg to get_engine
* Transform instance.snapshot notifications
* libvirt: virtlogd: use virtlogd for char devices
* libvirt: create consoles in an understandable/extensible way
* Add more logging when deleting orphan node
* libvirt: Add comments in _hard_reboot
* Update cors-to-versions-pipeline release note
* Unify the comparison of hw_qemu_guest_agent
* Add metadata functional negative tests
* Require cellsv2 setup before migrating to Ocata
* Improving help text for xenapi_vmops_opts
* convert libvirt driver to use os-vif for vhost-user with ovs
* Handle ComputeHostNotFound when listing hypervisors
* Improve the error message for failed RC deletion
* refactor: move down ``dev_number`` in xenapi
* Fix placement API version history 1.1 title
* placement: Build the list of standard classes once
* placement: REST API for resource classes
* Add a retry loop to ResourceClass creation
* conf: Remove deprecated service manager opts
* support polling free notification testing
* conf: Standardize formatting of virt
* Updated from global requirements
* Remove invalid tests for config option osapi_compute_workers
* placement: adds ResourceClass.save()
* Add CORS filter to versions pipeline
* Create hyperv fake images under proper directory
* Some improvements to the process doc
* libvirt: Improve _is_booted_from_volume implementation
* libvirt: Delete duplicate check when live-migrating
* Add block_device_mapping_v2.uuid to api-ref
* Correct the sorting of datetimes for migrations
* Fix pci_alias that includes white spaces
* Raise DeviceNotFound detaching volume from persistent domain
* Always use python2.7 for docs target
* objects: Removes base code that already exists in o.vo
* libvirt: Don't re-resize disks in finish_migration()
* libvirt: Never copy a swap disk during cold migration
* libvirt: Rename Backend snapshot and image
* libvirt: Cleanup test_create_configdrive
* libvirt: Test disk creation in test_hard_reboot
* libvirt: Rewrite _test_finish_migration
* guestfs: Don't report exception if there's read access to kernel
* Fix for live-migration job
* Handle maximum limit in schema for int and float type parameters
* Port compute.test_extended_ip* to Python 3
* Remove more tests from tests-py3.txt
* Support detach interface with same MAC from instance
* placement: adds ResourceClass.destroy()
* Make test_shelve work with CONF.use_neutron=True by default
* Restrict test_compute_cells to nova-network
* Make test_compute_mgr work with CONF.use_neutron=True by default
* Make test_compute_api work with CONF.use_neutron=True by default
* Make nova.tests.unit.virt pass with CONF.use_neutron=True by default
* Make xenapi tests work with CONF.use_neutron=True by default
* Make libvirt unit tests work with CONF.use_neutron=True by default
* Make vmware unit tests work with CONF.use_neutron=True
* Explicitly use nova-network in nova-network network tests
* Make test_serversV21 tests work with neutron by default
* neutron: handle no_allocate in create_pci_requests_for_sriov_ports
* Add a releasenote for bug#1633518
* libvirt: prefer cinder rbd auth values over nova.conf
* libvirt: cleanup network volume driver auth config
* Fix wait for detach code to handle 'disk not found error'
* [api-ref] Minor text clean-up, formatting
* Convert live migration uri back to string
* conf: improve libvirt lvm
* conf: Trivial fix of indentation in 'api'
* config options: improve libvirt utils
* Never pass boolean deleted to instance_create()
* Port xenapi test_xenapi to Python 3
* Port libvirt test_driver to Python 3
* conf: Deprecate 'torrent_' options
* hacking: Use uuidutils or uuidsentinel to generate UUID
* Replace uuid4() with uuidsentinel
* Replace uuid4() with uuidsentinel
* Replace uuid4() with uuidsentinel
* Add os-start/stop functional negative tests
* Port ironic unit tests to Python 3
* Port test_keypairs to Python 3
* Port test_metadata to Python 3
* Fix expected_attrs kwarg in server_external_events
* Check deleted flag in Instance.create()
* Revert "Revert "Make n-net refuse to start unless using CellsV1""
* Revert "Log a warning when starting nova-net in non-cellsv1 deployments"
* Default deleted if the instance from BuildRequest does not have it
* docs: cleanup wording for 'SOFT_DELETED' in api-guide
* libvirt: Acquire TCP ports for console during live migration
* conf: Deprecate 'remap_vbd_dev' option
* conf: Convert StrOpt -> PortOpt
* Check Config Options Consistency for xenserver.py
* Add description for 2.9 microversion
* Remove AdminRequired usage in flavor
* Optional name in Update Server description in api-ref
* List support for force-completing a live migration in Feature support matrix
* Remove mox from nova/tests/unit/compute/test_virtapi.py
* Remove mox from nova/tests/unit/virt/test_virt.py
* Catch ImageNotAuthorized during boot instance
* Remove require_admin_context
* remove NetworkDuplicated exception
* InstanceGroupPolicyNotFound not used anymore
* UnsupportedBDMVolumeAuthMethod is not used
* Port virt.xenapi.client.test_session to Python 3
* vif: allow for creation of multiqueue taps in vrouter
* conf: Move api options to a group
* [scheduler][tests]: Fix incorrect aggr mock values
* objects: Move 'vm_mode' to 'fields.VMMode'
* objects: Move 'hv_type' to 'fields.HVType'
* objects: Move 'cpumodel' to 'fields.CPU*'
* objects: Move 'arch' to 'fields.Architecture'
* Show team and repo badges on README
* Remove config option snapshot_name_template
* Remove deprecated compute_available_monitors option
* Improve help text for interval_opts
* config options: improve libvirt remotefs
* Improve consistency in libvirt
* Fix root_device_name for Xen
* Move tag schema to parameter_types.py
* Remove tests from tests-py3.txt
* hardware: Flatten functions
* add host to vif.py set_config_* functions
* linux_net: allow for creation of multiqueue taps
* Fix notification doc generator
* Config options: improve libvirt help text (2)
* Placement api: Add informative message to 404 response
* Remove sata bus for virtuozzo hypervisor
* Fix a typo in nova/api/openstack/compute/volumes.py
* Fix race in test_volume_swap_server_with_error
* libvirt: Call host connection callbacks asynchronously
* conf: remove deprecated cert_topic option
* Return build_requests instead of instances
* conf: remove deprecated exception option
* doc: Add guideline about notification payload
* Port libvirt test_imagecache to Python 3
* Port test_serversV21 to Python 3
* encryptors: Introduce encryption provider constants
* Add TODO for returning a 202 from the volume attach API
* Fix typo in image_meta.py & checks.py & flavor.py
* Refactor two nearly useless secgroup tests
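
The uuidsentinel entries above refer to a test helper that hands out a stable, named UUID per attribute, so tests can say uuids.instance instead of hard-coding UUID strings. A self-contained sketch of how such a helper can work (this implementation is illustrative, not nova's exact module):

    import uuid

    class UUIDSentinels(object):
        """Return the same random UUID each time a given attribute
        name is accessed, e.g. uuids.instance or uuids.compute_node."""

        def __init__(self):
            self._sentinels = {}

        def __getattr__(self, name):
            # Only called for names not found normally; private names
            # are rejected so internal lookups fail loudly.
            if name.startswith('_'):
                raise AttributeError(name)
            return self._sentinels.setdefault(name, str(uuid.uuid4()))

    uuids = UUIDSentinels()
    assert uuids.instance == uuids.instance       # stable per name
    assert uuids.instance != uuids.compute_node   # distinct per name
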
* Transform instance.finish_resize notifications
* Remove redundant VersionedObject Fields
* Transform instance.create.error notification
* Transform instance.create notification
* api-ref: add missing os-server-groups parameters
* libvirt: prepare domain XML update for serial ports
* [placement] increase gabbi coverage of handlers.resource_provider
* [placement] increase gabbi coverage of handlers.inventory
* [placement] increase gabbi coverage of handlers.allocation
* libvirt: do not return serial address if disabled on destination
* Remove mox from api/openstack/compute/test_fping.py
* Add index on instances table across project_id and updated_at
* Complete verification for os-floating-ips
* libvirt: handle os-brick InvalidConnectorProtocol on init
* placement: adds ResourceClass.get_by_name()
* placement: adds ResourceClass.create()
* Improve help text for libvirt options
* Use byte string or utf8 depending on python version for wsgi
* Separate CRUD policy for server_groups
* Stop using mox stubs in nova/tests/unit/virt/disk
* Remove the description of compute_api_class option
* Remove mox in virt/xenapi/image/test_bittorrent.py
* Add context param to confirm_migration virt call
* Use pick_context_manager throughout DB APIs
* Database poison note
* tests: verify cpu pinning with prefer policy
* api-ref: Body verification for os-simple-tenant-usage.inc
* remove additional param
* Fix typo for 'infomation'
* Remove unused code in nova/api/openstack/wsgi.py
* conf: remove deprecated cells driver option
* Fix detach_interface() call from external event handler
* Implement get and set aggregates in the placement API
* Add {get_,set_}aggregates to objects.ResourceProvider
* Log a warning when starting nova-net in non-cellsv1 deployments
* Revert "Make n-net refuse to start unless using CellsV1"
* HyperV: use os-brick for volume related operations
* INFO level logging should be useful in resource tracker
* hyper-v: wait for neutron vif plug events
* Remove mox in nova/tests/unit/api/openstack/compute (1)
* Use available port binding constants
* Rename PCS to Virtuozzo in error message
* [PY3] byte/string conversions and enable PY3 test
* Fix mock arg list order in test_driver.py
* Add handling for 2 exceptions in force_delete
* Fix typo error in libvirt.py help
* Updated from global requirements
* Introduce PowerVMLiveMigrateData
* Make n-net refuse to start unless using CellsV1
* Store security groups in RequestSpec
* api-ref: body verification for abort live migration
* Fix data error in api samples doc

15.0.0.0b1
----------

* Fix typo error in servers.py
* Fix typo error in allocations.yaml
* Refactor console checks in live migration process
* Remove mox in tests/unit/objects/test_pci_device
* Add microversion cap information
* No return for flavor destroy
* neutron: actually populate list in populate_security_groups
* Clarify the approval process of specless blueprints
* Add uuid field to SecurityGroup object
* api-ref: body verification for force_complete server migration
* api-ref: body verification for show server migration
* api-ref: body verification for list server migrations
* api-ref: example verification for server-migrations
* api-ref: parameter verification for server-migrations
* api-ref: method verification for server-migrations
* [placement] Enforce min_unit, max_unit and step_size
* Remove ceph install/config functions from l-m hook
* Ceph bits for live-migration job
* Avoid unnecessary db_calls in objects.Instance._from_db_object()
* placement: genericize on resource providers
* api-ref: fix server_id in metadata docs
* Add the initial documentation for the placement API
* API Ref: update server_id params
* conf: fix formatting in wsgi
* Transform requested secgroup names to uuids
* conf: fix formatting in availability_zone
* libvirt: Cleanup spawn tests
* Rename security_group parameter in compute.API:create
* Change database poison warning to an exception
* Fix database poison warnings, part 25
* Updated from global requirements
* Correct wrong max_unit in placement inventory
* Add flavor extra_spec info link to api_ref
* Fix database poison warnings in resource providers
* Placement api: 404 responses do not indicate what was not found
* Instance obj_clone leaves metadata as changed
* Add a no-op wait method to NetworkInfo
* Move driver_dict_from_config to libvirt driver
* Create schema generation for AddressBase
* conf: Improve help text for ldap_dns_opts
* conf: Fix indentation of network
* Fix config option types
* libvirt: Fix incorrect libvirt library patching in tests
* libvirt: refactor console device creation methods
* libvirt: read rotated "console.log" files
* libvirt: change get_console_output as prep work for bp/libvirt-virtlogd
* Updated from global requirements
* api-ref: Fix a 'port' parameter in os-consoles.inc
* Update nova api.auth tests to work with newer oslo.context
* Remove ironic instance resize from support matrix doc
* [placement] add a placement_aggregates table to api_db
* libvirt: remove py26 compat code in "get_console_output"
* Change RPC post_live_migration_at_destination from cast to call
* libvirt: add migration flag VIR_MIGRATE_PERSIST_DEST
* Revert MTU hacks for bug 1623876
* Pass MTU into os-vif Network object
* Updated from global requirements
* api-ref: fix addFloatingIp action docs
* Fix a TypeError in notification_sample_base.py
* Add functional api_samples test for addFloatingIp action
* Fix qemu-img convert image incompatibility in alpine linux
* migration.source_compute should be unchanged after finish_revert_resize
* Add explicit dependency on testscenarios
* Updated from global requirements
* cors: update default configuration in config
* api-ref: remove user_id from keypair list response and fix 2.10
* Don't parse PCI whitelist every time neutron ports are created
* conf: Remove deprecated 'compute_stats_class' opt
* conf: Remove extraneous whitespace
* hardware: Split '_add_cpu_pinning_constraint'
* libvirt: Delete the last_device of find_disk_dev_for_disk_bus
* EventReporterStub
* Catch all local/catch-all addresses for IPv6
* placement: add ResourceClass and ResourceClassList
* placement: raise exc when resource class not found
* fix connection context manager in rc cache
* pci: remove pci device from claims and allocations when freeing it
* PCI: Fix PCI with fully qualified address
* Log warning when user sets improper config option value
* libvirt: fix incorrect host cpus given to emulator threads when RT
* Transform instance.shutdown notifications
* encryptors: Workaround mangled passphrases
* Fix cold migration with qcow2 ephemeral disks
* Updated from global requirements
* config options: Improve help for SPICE
* Remove manual handling of old context variables
* api-ref: cleanup bdm.delete_on_termination field
* api-ref: document the power_state enum values
* libvirt: Pass Host instead of Driver to volume drivers
* conf: Attempt to resolve TODOs in scheduler.py
* conf: Remove 'scheduler_json_config_location'
* Remove unreachable code
* [api-ref] Fix path parameter console_id
* doc: add a note about conditional support for xenserver change password
* Replace admin check with policy check in placement API
* Fix import statement order
* Fix database poison warnings, part 24
* libvirt: sync time on resumed from suspend instances
* Fix database poison warnings, part 23
* Add RPC version aliases for Newton
* Transform instance.unpause notifications
* Catch NUMA related exceptions in create server API method
* Notification object version test depends on SCHEMA
* Updated from global requirements
* Virt: add context to attach and detach interface
* Imported Translations from Zanata
* Stop using mox stubs in test_shelve.py
* Fix SAWarning in TestResourceProvider
* Transform instance.unshelve notifications
* TrivialFix: Fixed typo in 'MemoryPageSizeInvalid' exception name in docstrings
* Make build_requests.instance MediumText
* Use six.wraps
* Transform instance.resume notifications
* Transform instance.shelve_offload notifications
* api-ref: fix image GET response example
* Fix exception raised in exception wrapper
* Add missing compat routine for Usage object
* Updated from global requirements
* Transform instance.power_off notifications
* conf: Removed TODO note and updated desc
* Set 'last_checked' flag when starting to check scheduler file
* Remove bandit.yaml in favor of defaults
* Pre-add instance actions to avoid merge conflicts
* Add swap volume notifications (error)
* libvirt: add supported vif types for virtuozzo virt_type
* fix testcase test_check_can_live_migrate_dest_fills_listen_addrs
* doc: Integrate oslo_policy.sphinxpolicygen
* Using get() method to prevent KeyError
* tests: verify pci passthrough with numa
* tests: Adding functional tests to cover VM creation with sriov
* [placement] Add support for a version_handler decorator
* pci: in free_device(), compare by device id and not reference
* Mention API V2 should no longer be used
* doc: Update libvirt-numa guide
* Remove deprecated nova-manage vm list command
* Remove block_migration from LM rollback
* PCI: Avoid looping over PCI devices twice
* Update docs for serial console support
* Remove conductor local api:s and 'use_local' config option
* Cleanup before removal of conductor local apis
* compute: fixes python 3 related unit tests
* XenAPI: Fix VM live-migrate with iSCSI SR volume
* Fix the scope of cm in ServersTestV219
* Explicitly name commands target environments
* _run_pending_deletes does not need info_cache/security_groups
* Updated from global requirements
* hardware: Standardized flavor/image meta extraction
* Tests: improve assertJsonEqual diagnostic message
* api-ref: Fix wrong parameters in os-volumes.inc
* Remove mox from unit/virt/libvirt/test_imagebackend.py (1)
* Send events to all relevant hosts if migrating
* Catch error and log warning when not able to update mtimes
* Clarify what changed with scheduler_host_manager
* Add related options to floating ip config options
* Correct bug in microversion headers in placement
* Ironic Driver: override get_serial_console()
* Updated from global requirements
* Drop deprecated support for hw_watchdog_action flavor extra spec
* Remove watchdog_actions module
* Removal of tests with different result depending on testing env
* Add debug to tox environment
* Document experimental pipeline in Nova CI
* Update rolling upgrade steps from upgrades documentation
* Add migrate_uri for invoking the migration
* Fix bug in "nova/tests/unit/virt/test_virt_drivers.py" for os-vif
"nova/tests/unit/virt/test\_virt\_drivers.py" for os-vif * Remove redundant req setting * Changed the name of the standard resource classes * placement: change resource class to a StringField * Remove nova/openstack/\* from .coveragerc * Remove deprecated nova-all binary * Fix issue with not removing rbd rescue disk * Require WebOb>=1.6.0 * conf: Remove deprecated \`\`use\_glance\_v1\`\` * Adding hugepage and NUMA support check for aarch64 * hacking: Use assertIs(Not), assert(True|False) * Use more specific asserts in tests * Add quota related tables to the api database * doc: add dev policy about no new metrics monitors * Always use python2.7 for functional tests * doc: note the future of out of tree support * docs: update the Public Contractual API link * Remove \_set\_up\_controller() from attach tests * Add InvalidInput handling for attach-volume * placement: add cache for resource classes * placement: add new resource\_classes table * hardware: Rework docstrings * doc: Comment on latin1 vs utf8 charsets * Improve help text for libvirt options * block\_device: Make refresh\_conn\_infos py3 compatible * Add swap volume notifications (start, end) * Add a hacking rule for string interpolation at logging * Stop using mox stubs in test\_snapshots.py * Stop using mox from compute/test\_multiple\_create.py * Don't attempt to escalate nova-manage privileges * Improve help text for upgrade\_levels options * Remove dead link from notification devref * Stop using mox stubs in test\_evacuate.py * Tests: fix a typo * ENOENT error on '/dev/log' * Patch mkisofs calls * conf: Group scheduler options * conf: Move consoleauth options to a group * Fix exception due to BDM race in get\_available\_resource() * Delete traces of in-progress snapshot on VM being deleted * Add error handling for delete-volume API * Catch DevicePathInUse in attach\_volume * Enable release notes translation * Fix drop\_move\_claim() on revert resize * Updated from global requirements * Fix API doc for os-console-auth-tokens * tests: avoid creation of instances dir in the working directory * config options: improve libvirt imagebackend * libvirt: fix DiskSmallerThanImage when block migrate ephemerals * Remove unnecessary credential sanitation for logging * Replace uuid4() with uuidsentinel * Change log level to debug for migrations pairing * Remove the duplicated test function * Move get\_instance() calls from try-except block * Allow running db archiving continuously * Add some extra logging around external event handling * Fix a typo in driver.py * Avoid Forcing the Translation of Translatable Variables * Fix database poison warnings, part 21 * libvirt: Fix BlockDevice.wait\_for\_job when qemu reports no job * Stop using mox from compute/test\_used\_limits.py * Updated from global requirements * Remove mox from tests/unit/conductor/tasks/test\_live\_migrate.py(3) * Remove mox from tests/unit/conductor/tasks/test\_live\_migrate.py(2) * Remove mox from tests/unit/conductor/tasks/test\_live\_migrate.py(1) * Fix calling super function in setUp method * refresh instances\_path when shared storage used * Prevent us from sleeping during DB retry tests * Fix error status code on update-volume API * conf: Trivial cleanup of console.py * conf: Trivial cleanup of compute.py * conf: Trivial cleanup of 'cells' * conf: Deprecate all topic options * Updated from global requirements * Disable 'supports\_migrate\_to\_same\_host' HyperV driver capability * Fix periodic-nova-py{27,35}-with-oslo-master * Report actual request\_spec when 
* Make db archival return a meaningful result code
* Remove the sample policy file
* libvirt/guest.py: Update docstrings of block device methods
* Fix small RST markup errors
* [Trivial] fixes tiny RST markup error
* Add get_context helper method
* Use gabbi inner_fixtures for better error capture
* Hyper-V: Fixes os_type image property requirement
* conf: Cleanup of glance.py
* conf: Move PCI options to a PCI group
* Add Apache 2.0 license to source file
* Updated from global requirements
* Make releasenotes reminder detect added and untracked notes
* [placement] reorder middleware to correct logging context
* Fixes RST markup error to create a code-box
* libvirt: support user password settings in virtuozzo
* Removing duplicates from columns_to_join list
* Ignore BuildRequest during an instance reschedule
* Remove stale pyc files when running the cover job
* Add a post-test-hook to run the archive command
* [placement] ensure that allow headers are native strings
* Fix a few typos in API reference
* Fix typo on api-ref parameters
* Fix typo in comment
* Remove mox in nova/tests/unit/compute/test_shelve.py (1)
* Let schema validate image metadata type and key lengths
* Remove scheduled_at attribute from instances table
* Fix database poison warnings, part 22
* Archive instance-related rows when the parent instance is deleted
* Unwind circular import issue with api / utils
* Fix database poison warnings, part 18
* Remove context object in oslo.log method
* libvirt: pick future min libvirt/qemu versions
* Improve consistency in serial_console
* conf: Improve consistency in scheduler opts
* Move notification_format and delete rpc.py
* config options: improve libvirt smbfs
* Fix database poison warnings, part 17
* Updated from global requirements
* Fix database poison warnings, part 16
* Hyper-V: Adds Hyper-V UEFI Secure Boot
* Stop overwriting thread local context in ClientRouter
* Cleanup some redundant USES_DB_SELF usage
* Fix database poison warnings, part 20
* Fix database poison warnings, part 19
* use proper context in libvirt driver unit test
* Renamed parameter names in config.py
* [placement] Allow both /placement and /placement/ to work
* numa: Fixes NUMA topology related unit tests
* VMware: Do not check if folder already exists in vCenter
* libvirt: fixes python 3 related unit tests
* Clean up stdout/stderr leakage in cmd testing
* Capture stdout for test_wsgi:test_debug
* Add destroy method to the RequestSpec object
* Remove last sentence
* VMware: Enforce minimum vCenter version of 5.5
* test: Remove unused method _test_get_test_network_info
* Determine disk_format for volume-backed snapshot from schema
* Fix database poison warnings, part 15
* Fix CONTAINER_FORMATS_ALL to have ova instead of vmdk
* Config options consistency of ephemeral_storage.py
* docs: Clarify sections & note on filter scheduler
* Fixes python 3 unit tests
* Add Hyper-V storage QoS support
* Add blocker migration to ensure newton online migrations
* hacking: Always use 'assertIs(Not)None'
* Hyper-V: fix image handling when shared storage is being used
* Annotate online db migrations with cycle added
* properly capture logging during db functional tests
* [placement] 404 responses do not cause exception logs
* Fix pep8 E501 line too long
* Remove unused code
* Replace uuid4() with generate_uuid() from oslo_utils
* Return instance of Guest from method write_instance_config
* Mock.side_effects does not exist, use Mock.side_effect instead
* Remove redundant str typecasting
* VMware: deprecate wsdl_location conf option
* Remove nova.image.s3 and configs
* Remove internal_id attribute from instances table
* Fix stdout leakage during opportunistic db tests
* Updated from global requirements
* Improve help text for glance options
* libvirt: ignore conflict when defining network filters
* Add placeholder DB migrations for Ocata
* Remove PCI parent_addr online migration
* Make nova-manage online migrations more verbose
* Fix check_config_option_in_central_place
* Skip malformed cookies
* Fix database poison warnings, part 14
* Standardize output capture for nova-manage tests
* Work around tests that don't use nova.test as a base
* Don't print to stdout when executing hacking checks
* Make test logging setup fixture disable future setup
* Fix typo in docstring in test_migrations.py
* Remove support for deprecated driver import
* conf: Add 'deprecated_reason' to osapi opts
* Add hacking checks for xrange()
* Using assertIsNone() instead of assertEqual(None)
* move os_vif.initialize() to nova-compute start
* Add deprecated_since parameter
* [placement] Manage log and other output in gabbi fixture
* Reduce duplication and complexity in format_dom
* Fix invalid exception mock for InvalidNUMANodesNumber
* libvirt: fix serial console not correctly defined after live-migration
* Add more description when deleting a service
* trivial: Rewrap guide at 79 characters
* plugins/xenserver: Add '.py' extension
* conf: Fix opt indentation for scheduler.py
* conf: Reorder scheduler opts
* Updated from global requirements
* Revert "Set 'serial' to new volume ID in swap volumes"
* [placement] Adjust the name of the gabbi tests
* placement: refactor instance translate function
* Move wsgi-intercept to test-requirements.txt
* Add missing slash to dir path
* Expand feature classification matrix with gate checks
* [placement] Stringify class and provider uuid in error
* [api-ref] Correct parameter type
* Remove default=None for config options
* libvirt: cleanup never used migratable flag checking
* Remove unnecessary setUp and tearDown
* Remove unused parameters
* Remove duplicate key from dictionary
* Updated from global requirements
* placement: refactor translate from node to dict
* stub out instances_path in unit tests
* Add a new release note
* XenAPI: add unit test for plugin test_pluginlib_nova.py
* Add link ref to nova api concept doc
* libvirt: Use the recreated disk.config.rescue during a rescue
* Add members in InstanceGroup object members field
* Updates URL and removes trailing characters
* Stop ovn networking failing on mtu
* Update reno for stable/newton
* Don't pass argument sqlite_db in method set_defaults

14.0.0.0rc1
-----------

* Override MTU for os_vif attachments
* Fix object assumption in remove_deleted_instances()
* Add is_cell0 helper method
* Set a bigger TIMEOUT_SCALING_FACTOR value for migration tests
* Update minimum requirement for netaddr
* [placement] consolidate json handling in util module
* Fix unnecessary string interpolation
* Handle TypeError when disabling host service
* Fix an error in archiving 'migrations' table
* Remove deprecated flag in neutron.py
* Clean up allocation when updating available resources
* [placement] Mark HTTP error responses for translation
* [placement] prevent a KeyError in webob.dec.wsgify
* Body Verification of api-ref os-volume-attachments.inc
* Add functional regression test for bug 1595962
* Use tempest tox with regex first
* libvirt: add ps2mouse in choice for pointer_model
* Doc fix for Nova API Guide, added missing word
* conf: Make list->dict conversion more specific
* Revert "tox: Don't create '.pyc' files"
* Improve help text for xenapi_session_opts
* Improve help text for service options
* Correct image.inc for heading
* Complete verification for os-cloudpipe.inc
* Use assertEqual() instead of assertDictEqual()
* Fix typo of stevedore
* [placement] functional test for report client
* Add regression test for immediate server name update
* Fixed suspend for PCI passthrough
* libvirt: Rewrite test_rescue and test_rescue_config_drive
* Guard against failed cache refresh during inventory
* More conservative allocation updates
* [placement] Correct serialization of inventory collections
* Switching expression order within if condition
* Correct sort_key and sort_dir parameter for flavor
* Correct address, version parameter in ips.inc
* Use to_policy_values for policy credentials
* Doc fix for Nova API Guide, fixed wording
* Nova shelve creates duplicated images in cells
* More conservative inventory updates
* Fix server group name on api-ref
* Update BuildRequest if instance currently being scheduled
* Fix reno for removal of nova-manage service command
* Add note about display_name in _populate_instance_names
* Extended description for sync_power_state_pool_size option
* Use recursive obj_reset_changes in BuildRequest
* HyperV: ensure config drives are copied as well during resizes
* [placement] make PUT inventory consistent with GET
* Fill destination check data with VNC/SPICE listen addresses
* Revert "libvirt: move graphic/serial consoles check to pre_live_migration"
* Fix MonitorMetric obj_make_compatible
* Using assertIsNotNone() instead of assertIsNot(None,)
* [api-ref] fix availability_zone for server create
* Fix SafeConfigParser DeprecationWarning in Python 3.2
* Set 'serial' to new volume ID in swap volumes
* Fix policy tests for project_id enforcement
* neutron: don't trace on port not found when unbinding ports
* Remove RateLimitFault class
* Rate limit is removed, update doc accordingly
* Fix a typo from ID to Id
* context: change the name 'rule' to 'action' in context.can
* Add description for v2.20 changes in api-ref
* Add sync_power_state_pool_size option
* Additional logging for placement API
* Fix resizing in imagebackend.cache()
* [placement] cleanup some incorrect comments
* Updated from global requirements
* Compute: ensure that InvalidDiskFormat is handled correctly
* Add keypairs_links into resp
* Add hypervisor_links into hypervisor v2.33
* Throw exception if numa_nodes is not set to an integer greater than 0
* Add reserved param for v2.4
* Add more description on v2.9 history
* libvirt: inject files when config drive is not requested
* Pin maximum API version of microversion
* XenAPI: resolve the fetch_bandwidth failure
* Fix api-ref doc for server-rebuild
* [api-ref] Update configuration file
* fix broken link in api-ref
* Trivial fix: remove unused var in parameters
* Trivial fix a typo
* Increase BDM column in build_requests table
* VMware: Refactor the image transfer
* Pass GENERATE_HASHES to the tox test environment
* [placement] add two ways to GET allocations
* Handle ObjectActionError during cells instance delete
* [placement] Add some tests ensuring unicode resource provider info
* cleanup: separate the creation of a local root into its own method
* standardize release note page ordering
* Remove misleading warning message
* Add deprecated_reason for use_usb_tablet option
* db: retry on deadlocks while adding an instance
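
The entry just above ("db: retry on deadlocks while adding an instance") relies on oslo.db's retry decorator. A hedged sketch of that pattern -- the function body below only simulates a deadlock; it is not nova's actual instance-create code:

    from oslo_db import api as oslo_db_api
    from oslo_db import exception as db_exc

    attempts = {'count': 0}

    # Retry the decorated call (with backoff) when the database layer
    # raises a deadlock, instead of failing the request outright.
    @oslo_db_api.wrap_db_retry(max_retries=5, retry_on_deadlock=True)
    def instance_create():
        attempts['count'] += 1
        if attempts['count'] < 3:
            # Simulate the DB reporting a deadlock on early attempts.
            raise db_exc.DBDeadlock()
        return 'created'

    print(instance_create())  # succeeds on the third attempt
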
* virt: handle unicode when logging LifecycleEvents
* Ensure ResourceProvider/Inventory created before adding Allocations record
* Libvirt: Correct PERF_EVENTS_CPU_FLAG_MAPPING
* Enable py3 tests for unit.api.openstack.compute.test_console_output
* Implement setup_networks_on_host for Neutron networks
* Add tests for safe_connect decorator
* libvirt: improve logging for shared storage check
* Cleanup allocation todo items
* [placement] Allow inventory to violate allocations
* Refresh info_cache after deleting floating IP
* Remove deprecated configuration option network_device_mtu
* Example & Parameter verification of os-security-group-default-rules.inc
* [placement] clean up some nits in the requestlog middleware
* correctly join the usage to inventory for capacity accounting
* Annotate db models that have moved to the nova_api db
* Stop using mox in virt/libvirt/test_imagecache.py
* Stop using mox in unit/fake_processutils.py
* [api-ref]: Correcting server_groups_list parameter's type
* Fix race condition bug during live_snapshot
* ironic: Rename private methods for instance info
* [placement] Fix misleading comment in wsgi loader
* Remove mox from api/openstack/compute/test_networks.py
* Remove mox from api/openstack/compute/test_rescue.py
* Remove mox from api/openstack/compute/test_image_size.py
* Remove mox from api/openstack/compute/test_extended_ips.py
* Remove mox from nova/tests/unit/virt/xenapi/test_driver.py
* Remove mox from unit/api/openstack/compute/test_hide_server_addresses.py
* fixing block_device_mapping_v2 data_type
* Updated from global requirements
* Add bigswitch command to compute rootwrap filters
* libvirt: add hugepages support for Power
* Fix incorrect description in nova-api.log about quota check
* Removed enum duplication from nova.compute
* Remove unused conf

14.0.0.0b3
----------

* Remove deprecated cinder options
* Simple instance allocations from resource tracker
* Add support for allocations in placement API
* Add create_all and delete_all for AllocationList
* Pull from cell0 and build_requests for instance list
* Remove hacked test that fails with latest os-brick
* Report compute node inventories through placement
* Delete BuildRequest regardless of service_version
* Fix service version lookups
* Remove BuildRequest when scheduling fails
* Run cell0 db migrations during nova-manage simple_cell_setup
* Move cell message queue switching and add caching
* Add basic logging to placement api
* Fixed indentation
* Update placement config reno
* Ignore generated merged policy files
* Register keystone opts for placement sample config
* Remove deprecated neutron options
* ironic_host_manager: fix population of instances info on start
* Eliminate additional DB queries in nova lists
* Remove the incomplete wsgi script placement-api.py
* ironic_host_manager: fix population of instances info on schedule
* rt: ensure resource provider records exist from RT
* Allow linear packing of cores
* Return 400 error for non-existing snapshot_id
* create placement API wsgi entry point
* Fix qemu version check
* Documentation for the vendordata reboot
* Add more vd2 unit tests
* Add a TODO and add info to a releasenote
* [placement] remove a comment that is no longer a todo
* Make api-ref bug link point to nova
* Api-ref: Improve os-migrateLive input parameters
* Fix a typo in the driver.py file
* New discover command to add new hosts to a cell
* Clean up instance mappings, build requests on quota failure
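
"Add tests for safe_connect decorator" above refers to the scheduler report client's guard that turns placement connectivity failures into a logged warning instead of an exception. A generic sketch of such a decorator (the names and the exception type are illustrative, not nova's exact implementation):

    import functools
    import logging

    LOG = logging.getLogger(__name__)

    class ConnectFailure(Exception):
        """Stand-in for the connection errors the real decorator catches."""

    def safe_connect(func):
        @functools.wraps(func)
        def wrapper(self, *args, **kwargs):
            try:
                return func(self, *args, **kwargs)
            except ConnectFailure:
                # Swallow the failure so an unreachable placement API
                # degrades service rather than crashing the caller.
                LOG.warning('placement API unreachable in %s', func.__name__)
        return wrapper

    class ReportClient(object):
        @safe_connect
        def get_inventory(self, rp_uuid):
            raise ConnectFailure()  # simulate an unreachable endpoint

    print(ReportClient().get_inventory('some-rp-uuid'))  # -> None
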
* Do not allow overcommit ratios to be negative
* Updated from global requirements
* Use StableObjectJsonFixture from o.vo
* test_keypairs_list_for_different_users for v2.10
* Fix using filter() to meet python2,3
* Emit warning when using 'user_id' in policy rule
* Adds nova-policy-check cmd
* Reduce code complexity - api.py
* Use cls in class method instead of self: _delete_domain is a class method, so cls should be used instead of self
* Revert "Optional separate database for placement API"
* Changed exception catching order
* Add BuildRequestList object
* In InventoryList.find() raise NotFound if invalid resource class
* Updated from global requirements
* Imported Translations from Zanata
* TrivialFix: Remove unused cfg import
* Add oslopolicy script runs to the docs tox target
* Add entry_point for oslo policy scripts
* Tests: use fakes.HTTPRequest in compute tests
* Remove conversion from dict to object from xenapi live_migration
* Hyper-V: properly handle shared storage during migrations
* TrivialFix: Remove unused logging import
* Hyper-V: properly handle UNC instance paths
* Get ready for os-api-ref sphinx theme change
* Update link in general purpose feature matrix
* List system dependencies for running common tests
* [api-ref]: Update link reference
* Abort on HostNotCompatibleWithFixedIpsClient
* Add warning if metadata_proxy_shared_secret is not configured
* devspec: remove unused dev_count in devspec
* TrivialFix: removed useless storing of sample directory
* [api-guide]: Update reference links
* Fix link reference in Nova API version
* Provide more duplicate VLAN network error info
* Correct microversions URL in api_plugins.rst
* Create Instance from BuildRequest if not in a cell
* Added todo for deletion of LiveMigrateData.detect_implementation usage
* driver.pre_live_migration migrate_data is always an object
* Manage db sync command for cell0
* Updated common create server sample request because of microversion 2.37
* Remove TODO for service version caching
* removed db_exc.DBDuplicateEntry in bw_usage_update
* Add online migration to move instance groups to API database
* Remove locals() for formatting strings
* Hyper-V: update live migrate data object
* Config options consistency of notifications.py
* Add networks to quota's update json-schema when network quota enabled
* rt: isolate report and query sched client tests
* rt: remove ComputeNode.create_inventory
* rt: rename test_tracker -> test_resource_tracker
* rt: remove old test_resource_tracker.py
* Updated from global requirements
* Remove deprecated security_group_api config option
* Added min_version field to 'host_status' in 'api-ref'
* Make InstanceGroup object favor the API database
* Doc: Update PCI configuration options
* Don't maintain user_id and project_id in context
* Add support for usages in the placement API
* Add a Usage and UsageList object
* Add support for inventories to placement API
* Check capacity and allocations when changing Inventory
* Add release note to warn about os-brick lock dir
* config options: improve help for netconf
* Config options consistency for consoleauth.py
* Support Identity v3 when connecting to Ironic
* Copy edit feature classification
* don't report network limits after 2.35
* Adding details in general purpose feature matrix [1]
* Improve placement API 404 and 405 response tests
* doc: fix disk=0 use case in flavor doc
* Config options: improve libvirt help text (1)
* Dump json for nova.network.model.Model objects
* Improve error message for empty cached_nwinfo
* Return HTTP 400 on list for invalid status
* Move some flavor fakes closer to where they are being used
* Replace flavors.get_all_flavors_sorted_list() with object call
* Refactor and objectify flavor fakes used in api tests
* Fix 'No data to report' error
* Change api-site to v2.1 format
* Refuse to run simple_cell_setup on CellsV1
* In placement API send microversion header on error
* libvirt: Improve mocking of imagebackend disks
* Updated flags for XVP config options
* Add unit tests for nova.virt.firewall.IpTablesFirewallDriver (Part 4)
* [libvirt] Remove live_migration_flag & block_migration_flag
* placement: add filtering by attrs to resource_providers
* Add support for resource_providers urls
* Remove nova/api/validator.py
* Updated from global requirements
* Change default value of live_migration_tunnelled to False
* Remove code duplication in enums
* [vncproxy] log for closing web is misleading
* Return None in get_instance_id_by_floating_address
* Make simple_cell_setup work when multiple nodes are present
* Add REST API support for get me a network
* plugins/xenserver: Resolve PEP8 issues
* Fix migration list + MigrationList operation
* rt: Create multiple resize claim unit test
* rt: Refactor unit test for trackable migrations
* VIF: add in missing translation
* Clean imports in code
* Fix neutron security group tests for 5.1.0 neutronclient
* modify description of "Inject guest networking config"
* os-vif: do not set Route.interface if None
* Check opt consistency for neutron.py
* Improve help text for compute manager options
* Make simple_cell_setup idempotent
* Add cell_v2 verify_instance command
* Remove unnecessary debug logs of normal API ops
* Replace mox with mock in test_validate_bdm
* Replace mox with mock in test_cinder
* Allow Nova Quotas to be Disabled
* Allow authorization by user_id for server evacuate
* Allow authorization by user_id for server update
* Allow authorization by user_id for server delete
* Allow authorization by user_id for server changePassword action
* Update binding:profile for SR-IOV ports on resize-revert
* Verified deprecation status for vnc options
* Add tests for user_id policy enforcement on trigger_crash_dump
* Allow authorization by user_id for server shelve action
* Allow authorization by user_id for force_delete server
* Allow authorization by user_id for server resize action
* Allow authorization by user_id for server pause action
* Add tests for user_id policy enforcement on stop
* Fix consistency in crypto conf
* Add placement API web utility methods
* Improve help text for XenServer Options
* Improve help text for xenapi_vm_utils_opts
* network: fix handling of linux-bridge in os-vif conversion
* Fix consistency in API conf
* Improve consistency in WSGI opts
* Add unit tests for nova.virt.firewall.IpTablesFirewallDriver (Part 3)
* Improve help text for xenapi_opts
* Maintain backwards compat for listen opts
* Allow authorization by user_id for server rescue action
* Allow authorization by user_id for server rebuild
* Allow authorization by user_id for server suspend action
* Allow authorization by user_id for server lock action
* Optional separate database for placement API
* Replace fake_utils by using Fixture
* virt/image: add a space between two words in output message
* config options: improve help text of database (related) options (2/2)
* config options: improve help text of database (related) options (1/2)
* Remove hacking check [N347] for config options
* Skipping test_volume_backed_live_migration for live_migration job
* rt: New unit test for rebuild_claim()
* List instances for secgroup without joining on rules
* Improve help text for vmwareapi_opts
* Updated from global requirements
* vnc host options need to support hostnames
* Removed flag "check_opt_group_and_type" from pci.py
* Removed flag "check_opt_group_and_type"
* libvirt: convert over to use os-vif for Linux Bridge & OVS
* Remove left over conf placeholders
* libvirt: Rename import of nova.virt.disk.api in driver
* Fix server operations' policies to admin only
* Add support for vd2 user context to other drivers
* api-ref: Example verification for os-simple-tenant-usage.inc
* Remove unused exception: ImageNotFoundEC2
* Fix opt description for s3.py
* virt/hardware: Check for threads when "required"
* Improve consistency in VNC opts
* Improve help text for compute_opts
* Config options: Improve help text for console options
* Config options: Consistency check for remote_debug options
* docs: update code-review guide for config options
* Add separate create/delete policies to attach_interface
* Fix handling of status in placement API json_error_formatter
* Use constraints for all tox environments
* Move JSON linting to pep8
* HyperV: remove instance snapshot lock
* rt: Move monitor unit tests into test_tracker
* rt: Move unit tests for update usage for instance
* rt: Move unit tests for update mig usage
* rt: Remove useless unit test in resource tracker
* rt: Remove dup tests in test_resource_tracker
* rt: Remove incorrect unit test of resize revert
* rt: Refactor test_dupe_filter unit test
* rt: Remove duplicate unit test for missing mig ctx
* rt: Refactor resize claim abort unit test
* rt: Refactor resize_claim unit test
* Set enforce_type=True in method flags
* Use constraints for releasenotes
* Add some logging and a comment for shelve/unshelve operations
* Run shelve/shelve_offload_instance in a semaphore
* Check opt consistency for api.py
* Allow empty CPU info of hypervisors in API response
* Config options consistency of rdp.py
* Improve consistency in workarounds opts
* Refresh README and its docs links
* Correct InventoryList model references
* instance.name should be blank if instance.id is not set
* Cells: Handle delete with BuildRequest
* Add NoopConductorFixture
* Make notification objects use flavor capacity attributes
* Fix busted release notes
* config options: Improve help for conductor
* Config options: base path configuration
* PCI: Fix network calls order on finish_revert_resize()
* Remove deprecated legacy_api config options
* Config Options: Improve help text for Ipv6 options
* Update tags for Image file url from filesystems config option
* Check options consistency in hyperv.py
* Improve help text for floating ips options
* config options: Improve help for base
* Improve consistency in API
* cleanup: some update xml cases in test_migration
* Use stashed volume connector in _local_cleanup_bdm_volumes
* Ironic: allow multiple compute services
* api-ref: Parameter verification for os-simple-tenant-usage.inc
* Ironic: report node.resource_class
* network: introduce helper APIs for dealing with os-vif objects
* ironic: Cleanup instance information when spawn fails
* update wording around pep8 exceptions
* Remove backward compatibility with pre-grizzly releases
* use the HostPortGroupSpec.vswitchName instead of HostPortGroup.vswitch.split
* Replace functions 'Dict.get' and 'del' with 'Dict.pop'
* Updated from global requirements
* Strict ImageRef validation to UUID only
* Add the ability to configure glanceclient debug logging
* Deprecate cert option
* Merged barbican and key_manager conf files into one
* Config options consistency of pci.py
* config option: rename libvirt iscsi_use_multipath
* Fix require thread policy for multi-NUMA computes
* Allocate PCI devices on migration
* TrivialFix: Fixed a typo in nova/test.py
* Updated from global requirements
* Improve help text of image_file_url
* Ironic: enable multitenant networking
* libvirt: Remove some unnecessary mocking in test_driver
* libvirt: Pass object to _create_images_and_backing in test
* libvirt: Reset can_fallocate in test setUp()
* libvirt: Create console.log consistently
* Fixed invalid UUIDs in unit tests
* Remove deprecated manager option in cells.py
* Refactor deallocate_fixed tests to use one mock approach instead of three
* Improve consistency in virt opts
* Updated header flag in SSL opts
* Updated from global requirements
* Don't cache RPC pin when service_version is 0
* Imported Translations from Zanata
* Remove white space between print and ()
* Flavor: correct confusing error message about flavorRef
* Consistency changes for osapi config options
* Fixed typos in nova: compute, console and conf dir
* Add objects.ServiceList.get_all_computes_by_hv_type
* Add InstanceList.get_uuids_by_host() call
* Conf options: updated flags for novnc
* Address feedback on cell-aggregate-api-db patches
* Updated from global requirements
* Add data migration methods for Aggregate
* Config options: Consistency check for quota options
* Add server name verification in instance search
* Fix typo in DeviceDetachFailed exception message
* Straddle python-neutronclient 5.0 for testing
* Initialise oslo.privsep early in main
* Cells: Simple setup/migration command
* Aggregate create and destroy work against API db
* Make Aggregate.save work with the API db
* Improve help text for vmware
* Config options consistency of exceptions.py
* Help text for the mks options
* Trivial option fixes
* Properly quote IPv6 address in RsyncDriver
* rbd_utils: wrap blocking calls in tpool.Proxy()
* Resolve PCI devices on the host during Guest boot-up
* Fixed typos in nova, nova/api, nova/cells directory
* Fix misspellings
* Trivial: add 'DEPRECATED' for os-certificates API ref
* Mention proxy API deprecation microversion in api-ref
* xenserver: fix an output format error in cleanup_smp_locks
* Add log for instance without host field set
* Improve consistency in crypto
* Deprecate barbican options
* Improve consistency in flavors
* Improve the help text for the guestfs options
* Reminder that release notes are built from commits
* Add initial framing of placement API
* Add missing ComputeHostNotFound exception in live-migration
* Free new pci_devices on revert-resize
* Use oslo_config new type PortOpt for port options
* Updated from global requirements
* Remove unused imports in api/openstack/fakes.py
* Add docs about microversion testing in Tempest
* Remove leftover list_opts entry points
* Remove nova.cache_utils oslo.config.opts entrypoint
* Remove nova.network namespace from nova-config-generator.conf
* Remove neutronv2.api oslo.config.opt entry point
* Follow up on Update binding:profile for SR-IOV ports
* Improve consistency in servicegroup opts
* Improve help text for cloudpipe
* Remove the useless version calculation for proxy api deprecated version
* numa: remove the redundant check for hw_cpu/hw_mem list
* Add support for oslo.context 2.6.0
* Update tags for Cache config option
* Remove unused validation code for quota_sets
* Revert "Don't assert exact to_dict output"
* cleanup_live_migration_destination_check spacing
* Default image.size to 0 when extracting v1 image attributes
* Add details to general purpose feature matrix
* Adding functional tests for 2.3 microversion
* compute: Skip driver detach calls for non local instances
* libvirt: Fix invalid test data
* libvirt: Fix fake _disk_info data in LibvirtDriverTestCase
* Don't set empty kernel_id and ramdisk_id to glance image
* Config options consistency for cell.py
* Refuse to have negative console ttls
* Option Consistency for availability_zone.py
* Add a small debug line to show selection location
* Fix wrong override value of config option vswitch_name
* Fix wrong override value of config option proxyclient_address
* Call release_dhcp via RPC to ensure correct host
* Adjust MySQL access with eventlet
* Improve consistency in cert
* Updated from global requirements
* rt: don't log pci_devices twice when updating resources
* Config options consistency for configdrive.py
* Remove deprecated ironic.api_version config option
* Improve the help text for compute timeout_opts
* Deprecate the nova-manage commands that rely on nova-network
* Improve consistency in xenserver
* Add the 'min' param to IntOpts where applicable
* Remove unused config option 'fake_call'
* Make Aggregate metadata functions work with API db
* Use deprecated_reason for network quota options
* "nova list-extensions" not showing summary for all
* Fix typos in deprecates-proxy-apis release note
* Enable deferred IP on Neutron ports
* Improve help text for XenServer pool opts
* remove config option iqn_prefix
* Deprecate os-certificates
* Update RequestSpec nested flavor when a resize comes in
* New style vendordata support
* Add metadata server fixture
* Improve help text for quota options
* Improve help text for consoleauth config options
* Bump Microversion to 2.36 for Proxy API deprecation
* api: use 'if else' instead of 'try exception' to get password value
* Add better help to rdp options
* Adding details in general purpose feature matrix
* Enables Py34 tests for unit.api.openstack.compute.test_server_actions
* Filter network related limits from limits API
* Filter network related quotas out of quotas API
* Deprecate Baremetal and fping API
* Deprecate volumes related APIs
* Deprecate SecurityGroup related proxy API
* Deprecated floating ip related proxy APIs
* Complete verification of os-instance-actions.inc
* Check opt group and type for nova.conf.service.py
* Fix links to network APIs from api-ref
* Add comment about how status field changed
* Fix database poison warnings, part 13
* Deprecate network quota configuration
* Verify os-aggregates.inc on sample files
* Cleanup: validate option at config read level
* Add missing %s in print message
* api-ref: unify the no response output in delete operation
* Return 400 when SecurityGroupCannotBeApplied is raised
* network: handle forbidden exception from neutron
* Avoid update resource if compute node not updated
* Document update_task_state for ComputeDriver.snapshot
* Config Option consistency for crypto.py
* Fix database poison warnings, part 12
* Don't check cinder volume states during attach
* Clean up test_check_attach_availability_zone_differs
* Fix database poison warnings, part 11
* Fix opt description and indentation for flavors.py
* Remove redundant flag value check
* Improve help context of ironic options
* Update instance node on rebuild only when it is a recreate
* Remove unneeded bounds-checking code
* Improve the help text for the linuxnet options (4)
* Don't assert exact to_dict output
* Fix database poison warnings, part 10
* config options: help text for enable_guestfs_debug_opts
* Fix database poison warnings, part 9
* Improve help text of s3 options
* Remove deprecated config option volume_api_class
* Fix inappropriate notification send
* libvirt: Fix signature and behaviour of fake get_disk_backing_file
* libvirt: Pass path to Image base class
* Remove max_size argument to images.fetch and fetch_to_raw
* Update tox.ini: Constraints are possible for api* jobs
* Separate api-ref for list security groups by server
* Deprecate FixedIP related proxy APIs
* Deprecated networks related proxy APIs
* Check option descriptions and indentations for configdriver.py
* Make Aggregate host operations work against API db
* libvirt: open RBD in read-only mode for read-only operations
* Remove unnecessary code added for ec2 deprecation
* Enhance notification doc generation with samples
* Deprecate Images Proxy APIs
* Correct the network config option help text
* config options: improve help for noVNC
* Replace deprecated LOG.warn with LOG.warning
* Fixed typos in api-ref and releasenotes directory
* Fix invalid import order and remove import *
* Improve the help text for the network options (4)
* Add async param to local conductor live_migrate_instance
* libvirt: update guest time after suspend
* libvirt: Modify the interface address object assignment
* Update binding:profile for SR-IOV ports
* Port nova test_serversV21.Base64ValidationTest to Python 3
* Refactor instance action notification sample test
* Config option update tasks for availability_zone
* Expand initial feature classification lists
* Add prototype feature classification matrix
* [libvirt] Live migration fails when config_drive_format=iso9660
* Modify docstring of numa_get_reserved_huge_pages method
* Use constraints for coverage job
* Remove compute host from all host aggregates when compute service is deleted
* Fix incorrect cellid numbering for NUMA memnode
* Fix opt description for cells.py
* Fix host mapping saving
* Example and body verification of os-quota-sets.inc
* Remove deprecated network_api_class option
* neutron: destroy VIFs if allocating ports fails
* Validate pci_passthrough_whitelist when starting n-cpu
* Rename compute manager _check_dev_name to _add_missing_dev_names
* Remove unused context argument to _default_block_device_names()
* Fix typo in AdminPasswordController

14.0.0.0b2
----------

* Use from_environ when creating a context
* Pass kwargs through to base context
* Fix opt description and check deprecate status for hyperv.py
* VMware: Enable disk.EnableUUID=True in vmx
* hyper-v: device tagging
* Add release notes for notification transformation
* Assert reservation_id in notification sample test
* Remove redundant DEPRECATED tag from help messages
* Fix PUT server tag 201 to return empty content
* Clean up helper methods in ResourceProvider
* Transform instance.restore notifications
* neutron: delete VIFs when deallocating networking
* Add VirtualInterface.destroy()
* Make notifications module use flavor capacity attributes
* Make ironic driver use flavor fields instead of legacy ones
* Make xenapi driver use flavor fields instead of legacy ones
* Make libvirt driver use flavor fields instead of legacy ones
* Make hyperv driver use flavor fields instead of legacy ones
* Make vmware driver use flavor fields instead of legacy ones
* Bump service version for BuildRequest deletion
* Stop instance build if BuildRequest deleted
* Add block_device_mappings to BuildRequest
* Improve help text of flavors config options
* Improve help text for cinder config options
* Microversion 2.35 adds keypairs pagination support
* Fix up legacy resource fields in simple-tenant-usage
* Use flavor attributes instead of deprecated instance resources
* Typo fix: remove multiple whitespace
* network: handle unauthorized exception from neutron
* Fix the broken links
* 'limit' and 'marker' support for db_api and keypair_obj
* Improve help text for exceptions
* Improve help text for compute running_deleted_opts
* rest api version bumped for async pre live migration checks
* Add user_id request parameter in os-keypairs list
* Revert "Detach volume after deleting instance with no host"
* Don't overwrite MarkerNotFound error message
* tox: Use conditional targets
* tox: Don't create '.pyc' files
* Improve help text for allocation_ratio_opts
* Release note for vzstorage volume driver
* Fix typo in _update_usage_from_migrations
* Transform instance.resize notifications
* Refactors nova.cmd utils
* Replace DOS line ending with UNIX
* migration volume failed for invalid type
* api-ref: fix wrong description about response example in os-hypervisor
* api-ref: body verification of os-agents
* Fix wrong JSON format in API samples
* Implement ResourceProvider.destroy()
* Add Allocation and AllocationList objects
* Deprecate nova-manage vm list command
* Remove live-migration from nova-manage man page
* Deprecate the quota_driver config option
* Allow irrelevant, self-defined specs in ComputeCapacityFilter
* Transform instance.pause notifications
* Fix opt description for scheduler.py
* Verify "needs:check_deprecation_status" for serial_console.py
* API: catch InstanceNotReady exception
* Transform instance.shelve notifications
* Replace unicode with six.text_type
* Added support for new block device format in vmops
* XenAPI: add unit test for plugin bandwidth
* api-ref: unify the delete response information
* Add nova-manage quota_usage_refresh command
* Quota changes for the nova-manage quota_usage_refresh command
* Remove DictCompat from SecurityGroup
* Replace use of eval with ast.literal_eval
* libvirt: fix missed test in migration
* Improve the help text for the network options (3)
* Correct reraising of exception
* api-ref: Parameter verification for servers-actions.inc Part 1
* Body verification of os-interface.inc
* Parameter verification of os-instance-actions.inc
* xvp: change the default xvp conf path to CONF.xvp group
* libvirt: code flow problem in wait_for_job
* Clean up service version history comments
* Add a ResourceProviderList object
* Refactor block_device_mapping handling during boot
* Remove spaces around keyword argument
* Use ovo in test_obj_make_compatible()
* Improve the help text for the network options (2)
* Update mutable-config reno with LM timeout params
* Added better error messages during (un)pinning CPUs
* Remove duplicate policy test
* Complete verification for os-virtual-interfaces
* api-ref: os-volumes.inc
* Enable python34 tests for nova.tests.unit.pci.test_manager and test_stats
* api-ref: merge multiple create to servers.inc
* Improve the help text for configdrive options
* Revert "Remove manual creation of console.log"
* Fix invalid import order
* Fix invalid import order
* Fix invalid import order
* config options: improve help for notifications
* Fix invalid import order
* Fix invalid import order
* Remove unused itype parameter from get migration context
* Do not try to backport when db has older object version
* Detach volume after deleting instance with no host
* Transform instance.suspend notifications
* Hacking check for _ENFORCER.enforce()
* Remove final use of _ENFORCER.enforce
* Hacking check for policy registration
* Extract _update_ports_for_instance
* Extract port create from allocate_for_instance
* Improve help text for resource tracker options
* Transform instance.power_on notifications
* Add a py35 environment to tox
* api-ref: add note about os-certificates API
* XenAPI: UT: Always mock logging configuration
* Fix api_validation for Python 3
* api-ref: verify assisted-volume-snapshots.inc
* Delete duplicate code in test_compute_mgr.py
* Port test_hacking to Python 3
* Fix comment for version 1.15 ComputeNodeList
* Microversion 2.33 adds pagination support for hypervisors
* VMware: create vif with resource limitations
* policy: clean-up
* Make VIF.address unique with port id for neutron
* Device tagging metadata API support
* trivial: remove unnecessary mock from servers API test
* Return HTTP 200 on list for invalid status
* Complete verification for os-floating-ips-bulk
* Transform instance.update notification
* Pre-add instance actions to avoid merge conflicts
* Transform instance.delete notifications
* XenAPI: Add UT for independent compute option
* Log DB exception if VIF creation fails
* Fixes compute API unit tests for python3
* Reduce complexity in _stub_allocate_for_instance
* Reorder allocate_for_instance preamble
* Make _validate_requested_network_ids return a dict
* Extract _validate_requested_network_ids
* Create _validate_requested_port_ids
* Extract _filter_hypervisor_macs
* Always call port_update in allocate_for_instance
* Device tagging API support
* Mapping power_state from integer to string
* Compute manager device tagging support
* trivial: comment about vif object address field
* Example verification for os-fixed-ips.inc
* Revert "Detach volume after deleting instance with no host"
* policy: Replaces 'authorize' in nova-api (part 5)
* libvirt: add todo about bdms in _build_device_metadata
* libvirt: virtuozzo instance rescue mode support
* api-ref: os-certificates.inc
* policy: Replaces 'authorize' in nova-api (part 4)
* Make LM timeout params mutable
* Help text for the ephemeral storage options
* Config Options: Improve help text for debugger
* Make Ironic options definitions consistent
* Fix some typos
* Add namespace oslo.db.concurrency in nova-config-generator.conf
* Remove mox in tests/unit/objects/test_quotas
* Remove network information from IOVisor vif
* Add automatic switching to postcopy mode when migration is not progressing
* Extend live-migration-force-complete to use postcopy if available
* Add a test utility for checking mock calls with objects
* Remove invalid test for config option scheduler_host_manager
* Complete verification for api-ref os-flavor-extra-specs
* policy: Replaces 'authorize' in nova-api (part 3)
* libvirt: Add migration support for perf event support
* Libvirt driver implementation of device tagging
* Add policy sample generation
* Cleanup instance device metadata object code
* libvirt: virtuozzo instance resize support
* Fix test_ipv6 and simplify to_global()
* Remove Russian from unit/image/test_glance.py
* Py3: fix serial console output
* _security_group_get_by_names cleanup
* Add reminder comments for compute rpcapi version bump
* Update get_instance_diagnostics for instance objects
* Improve help text for wsgi options
* Don't immediately null host/node when shelving
* Evaluate 'task_state' in resource (de)allocation
* Add new configuration option to turn auto converge on/off
* Add new configuration option to turn postcopy on/off
* Improve nova.rpc conf options documentation
* Fix spelling mistake
* Add ability to select specific tests for py34
* Remove mox from unit/compute/test_compute.py (4)
* Remove mox from unit/compute/test_compute.py (end)
* Remove mox from unit/compute/test_compute.py (11)
* Remove mox from unit/compute/test_compute.py (10)
* Remove mox from unit/compute/test_compute.py (9)
* Remove mox from unit/compute/test_compute.py (8)
* Remove mox from unit/compute/test_compute.py (7)
* Remove mox from unit/compute/test_compute.py (6)
* Remove mox from unit/compute/test_compute.py (5)
* UT: cleanup typo in libvirt test_config
* Remove mox from unit/compute/test_compute.py (3)
* Remove mox from unit/compute/test_compute.py (2)
* Remove mox from unit/compute/test_compute.py (1)
* Improve image signature verification failure notification
* libvirt: attach configdrive after instance XML
* libvirt: add nova volume driver for vzstorage
* Moving test helpers to a common place
* On port update check port binding worked
* Refactor to create _ensure_no_port_binding_failure
* policy: Replaces 'authorize' in nova-api (part 2)
* XenAPI: Add option for running nova independently from hypervisor
* XenAPI: Stream config drive to XAPI
* XenAPI: Perform disk operations in dom0
* Port test_ipv6 to py3 and simplify to_global()
* api-ref: Example verification for os-agents.inc
* Allow monitor plugins to set own metric object
* api-ref: correct the order of APIs in server-tags
* Remove unused LOG
* Remove unnecessary __init__
* Release notes: fix typos
* Make print py3 compatible
* libvirt: fix disk size calculation for VZ container instances
* Fix error message for VirtualInterfaceUnplugException
* libvirt: Add boot ordering to individual disks
* image_meta: Add hw_rescue_device and hw_rescue_bus
* collapse servers.ViewBuilderV21 into servers.ViewBuilder
* remove personality extension
* remove preserve-ephemeral rebuild extension
* remove access_ips extension
* Bump the service version for get-me-a-network support
* neutron: handle 'auto' network request in allocate_for_instance
* Add unit tests for nova.virt.firewall.IpTablesFirewallDriver (Part 2)
* libvirt: split out code for recovering after migration tasks
* libvirt: split out code for processing migration tasks
* libvirt: split off code for updating migration stats in the DB
* libvirt: split off code for updating live migration downtime
* api-ref: verify images.inc
* libvirt: split out code for determining if migration should abort
* libvirt: split out code for detecting live migration job type
* policy: Replaces 'authorize' in nova-api (part 1)
* Check if flavor.vcpus is more than MAX_TAP_QUEUES
* policy: Add defaults in code (part 6)
* objects: Add devices_metadata to instance object
* objects: new InstanceDeviceMetadata object
* db: add a device_metadata column to instance_extra
* libvirt: add perf event support when creating an instance
* Improve help text of crypto.py
* objects: adding an update method to virtual_interface
* Rename driver method check_can_live_migrate_destination_cleanup
* api-ref: added docs for microversion 2.26
* policy: Add defaults in code (part 5)
* policy: Add defaults in code (part 4)
* policy: Add defaults in code (part 3)
* policy: Add defaults in code (part 2)
* add ploop support into qemu-img info
* policy: Add defaults in code (part 1)
* Handle UnableToAutoAllocateNetwork in _build_and_run_instance
* Add note about preserve_ephemeral limitations
* Add console auth tokens db api methods
* Remove mox from unit/virt/libvirt/volume/*.py
* Port cinder unit tests to Python 3
* Port test_pipelib and test_policy to Python 3
* Adding missing log translation hints
* Add instance groups tables to the API database
* Make live migration checks async
* Check for None max_count for Python 3 compat
* Updated from global requirements
* fix developer docs on API
* libvirt: virtlogd: use "log" element in char devices
* Fix ConsoleAuthTokens to work for all console types
* remove os-disk-config part 4
* remove os-disk-config part 3
* remove load_standard_extensions method
* Modify "policy.conf" to "policy.json"
* Ensures that progress_watermark and progress_time are updated
* Add a note for policy enforcement by user_id
* XenAPI: Support neutron security group
* Added instance actions for conductor
* Stop using mox stubs in nova/tests/unit/test_metadata.py
* remove support for legacy v2 generator extensions
* Remove duplicate unit test resource tracker
* Prevent instance disk overcommit against itself
* api-ref: parameter verification os-agents
* make failures on api_samples more clear
* api-ref, os-services.inc
* api-ref: docs for microversion v2.28
* Update dhcp_opts on both create and update
* api-ref: Improve os-instance_usage_audit_log samples
* Add ironic mac address when updating and creating
* pci: Deprecate is_new from pci requests
* Enhance notification sample test base
* Handle multiple samples per versioned notification
* Transform wrap_exception notification to versioned format
* XenAPI: OVS agent updates the wrong port with Neutron
* Stop using mox from unit/fake_server_actions.py
* objects: you want'em
* libvirt: enhance method to return pointer_model from image prop
* Improve help text for service group options
* Updated from global requirements
* Skip network allocation if 'none' is requested
* Separate notification object version test
* [typo] replaced comupte to compute in test
* api-ref, os-availability-zone.inc
* Config: no need to set default=None
* Add delete_, update_ and add_ inventory to ResourceProvider
* libvirt: fix typos in comments
* Remove the nova.compute.resources entrypoint
* Re-deprecate use_usb_tablet config option
* Log the network when neutron won't apply security groups
* api-ref: parameter verification os-fixed-ips
* Add CellMappingList object
* Add console auth tokens table and model
* live migration check source failed caused bdm.device_path lost
* Use is_valid_ipv4 from oslo.utils
* Include exception in _try_deallocate_network error log
* Remove mox from tests/unit/virt/test_imagecache.py
* Fix docstring nits from ResourceProvider.set_inventory() review
* fix errors in revert resize api docs
* Add set_inventory() method on ResourceProvider
* Improve the help text for cells options (8)
* VMware: Fix bug of TypeError when getting reference of VCenter cluster is None
* XenAPI: Integers returned from XAPI are actually strings
* Remove virt.block_device._NoLegacy exception
* rename libvirt has_default_ephemeral
* Remove ec2_code from exception
* Add specific lazy-load method for instance.tags
* Don't attempt to lazy-load tags on a deleted instance
* Pre-load tags when showing server details
* Policy-in-code servers rules
* Fix image meta which is sent to glance v2
* Extract update_port call into method
* Refactor to create _populate_mac_address
* Rename _populate_mac_address adding pci
* Rename created_port to created_port_id
* Flip allocate_for_instance create or update if
* libvirt: cleanup baselineCPU return value checking
* Updated from global requirements
* Remove mox from tests/unit/objects/test_aggregate.py
* Handle keypair not found from metadata server
* Skip network validation if explicitly requesting no networks
* nova-net: handle 'auto' network request in allocate_for_instance
* neutron: validate auto-allocate is available
* Add helpers to NetworkRequest(List) objects for auto/none cases
* Remove api_rate_limit config option
* Tear down of os-disk-config part 2
* Tear down os-disk-config part 1
* Disallow instance tag set for invalid instance states
* Make instance as second arg in compute api calls
* TrivialFix: Remove extra comma from json
* Skip NFS and Ceph in live migration job test run
* Added missed response to test_server_tags
* api-ref: console types
* api-ref: add version 2.3 parameters to servers
* Remove extra expected error code (413) from image metadata
* Use instance object instead of db record
* Publish proxy APIs deprecation in api ref doc
* Fix outdated parameter network_info description in virt/driver
* api-ref: Fix parameters in os-instance-usage-audit-log
* Remove python code validation specific to legacy_v2
* Remove DictCompat from instance_info_cache
* Remove redundant test in test_resource_tracker
* nova shared storage: rbd is always shared storage
* Modify the disk bus and device name for Aarch64
* Remove mox from unit/compute/test_compute_mgr.py (end)
* Remove mox in tests/unit/objects/test_instance_faults
* Remove mox from unit/compute/test_compute_mgr.py (6)
* Remove mox from unit/compute/test_compute_mgr.py (8)
* Remove mox from unit/compute/test_compute_mgr.py (7)
* Trivial-Fix: Fix typos
* Fix some typos
* Remove mox from unit/compute/test_compute_mgr.py (5)
* Remove mox from unit/compute/test_compute_mgr.py (4)
* Remove mox from unit/compute/test_compute_mgr.py (3)
* Remove mox from unit/compute/test_compute_mgr.py (2)
* Updated from global requirements
* Make Aggregate.get_by_uuid use the API db
* api-ref: parameter verification for os-aggregates
* Improve help text for neutron_opts
* remove processing of blacklist/whitelist/corelist extensions
* fix OS-SCH-HNT:scheduler_hints location in sample
* Fix reno from hyper-v-remotefx
* Yield the thread when verifying image's signature
* Remove invalid test methods for config option port_range
* libvirt: Prevent block live migration with tunnelled flag
* Trivial: remove non-existing py3 test from tests-py3.txt
* Make host as second arg in compute api calls
* Stop using mox stubs in tests/unit/fake_notifier
* Remove unused _get_flags method from integrated_helpers
* Enable all extension for all remaining sample tests
* tox.ini: Remove unnecessary comments in api-ref target
* Stop using mox stubs in nova/tests/unit
* Updated from global requirements
* Raise exception if BuildRequest deleted twice
* Replace mox with mock for xenapi vm_utils.lookup
* Detach volume after deleting instance with no host
* pci: Allow updating pci_requests in instance_extra
* Change default fake_server status to ACTIVE
* Fix update inventory for multiple providers
* Default to using glance v2
* Enable all extension for remaining server API tests
* Enable all extension for server API tests part-1
* Remove mox from unit/compute/test_compute_mgr.py (1)
* Fixes py3 unit tests for nova.tests.unit.test_block_device.*
* Reno for mutable-config
* Remove invalid test of config option default_notification_level
* Improve the help text for cells options (7)
* test: pass enable_pass as kwarg in test_evacuate
* Remove config option config_drive_format's invalid value test
* test: remove invalid test method in libvirt/test_imagebackend
* xenapi: Remove invalid values for config option image_compression_level
* Remove mox from api/openstack/compute/test_pci.py
* Stop using mox from openstack/compute/test_cells.py
* Enable all extension for server actions sample tests
* Enable all extension for Flavor API sample tests
* Fix resource tracking for instances with no numa topology
* Clarified "user" to plural type
* Revert "Optimize _cleanup_incomplete_migrations periodic task"
* Remove unused authorizer methods
* Remove legacy v2 policy rules
* Add unit tests for nova.virt.firewall.IpTablesFirewallDriver (Part 1)
* Make create_inventory() handle name change
* Add ResourceProvider.save()
* Remove the skip_policy_check flags
* api-ref: verify keypairs
* Make Xenplugin work with glance v2 api
* Trivial: version history 2.30 is not indented as others
* Do not register notification objects
* Move notification objects to a separate package
* Move notification related code to separate package
* Adjust field types and defaults on Inventory
* Add InventoryList.find() method
* Add a get_by_uuid for aggregates
* Imported Translations from Zanata
* get rid of the old _vhd methods
* Make Hyper-V work with glance v2 api
* Stop using mox stubs in stub_out_key_pair_funcs
* Remove v2 extension setting from functional tests
* Add name and generation to ResourceProvider object
* Remove duplicate test of DELETED instances
* Added support for new block device format in Hyper-V
* Enable mutable config in Nova
* Improve help text for availability zones options
* tests: make XMLMatches work with Python3
* Catch PciRequestAliasNotDefined exception
* api-ref: parameter verification for os-hypervisors
* xen: skip two more racy mox py34 test classes
* libvirt: handle reserved pages size
* Fix nova-compute start failure when reserved_huge_pages has a value
* Make the base options definitions consistent
* virt: set address space & CPU time limits when running qemu-img
* Remove manual creation of console.log
* Fix imagecache.get_cache_fname() to work in python3
* Remove policy checkpoints for SecurityGroupAPI and NetworkAPI
* Remove policy checkpoints from ComputeAPI
* Stop using mox from objects/test_instance.py (3)
* Stop using mox from objects/test_instance.py (2)
* Stop using mox from objects/test_instance.py (1)
* Fix wrong patch of unittest in unit/test_metadata.py
* Remove code referencing inventory table in cell DB
* Handle SetAdminPasswdNotSupported raised by libvirt driver
* Prevent boot if ephemeral disk size > flavor value
* [libvirt] Incorrect parameters passed to migrateToURI3
* Revert inventory/allocation child DB linkage
* Only chown console log in rescue
* Don't chown a config disk which already exists
* Don't overwrite config disk when using Rbd
* Add 'update' method to GlanceImageServiceV2
* Add 'create' method to GlanceImageServiceV2
* Add 'detail' method to GlanceImageServiceV2
* Add 'delete' method to GlanceImageServiceV2
* Add 'download' method to GlanceImageServiceV2
* Add 'show' method to GlanceImageServiceV2
* Split the glance API path based on config
* Remove image_meta
* add "needs:*" tags to the config option modules
* api-ref method verification for os-cells
* API change for verifying the scheduler when live migrating
* Stop using mox stubs in volume/encryptors/test_base.py
* Introduce a CONF flag to determine glance client version
* fix a typo in comment
* Fix white spaces in api-ref
* Updated from global requirements
* virt/hardware: Add diagnostic logs for scheduling
* Use assertNotIn instead of assertTrue(all(A != B))
* Use assert(Not)Equal instead of assertTrue(A == X)
* Use assertLess(Equal) instead of assertTrue(A > X)
* Use assertGreater(A, X) instead of assertTrue(A > X)
* Fall back to flat config drive if not found in rbd
* libvirt: Fix the content of "disk.config" lost after migrate/resize
* remove /v2.1/{tenant_id} from all urls
* Remove "or 'reserved'" from _create_volume_bdm
* pci: Move PCI devices and PCI requests into migration context
* Updated from global requirements
* Fixes invalid uuid usages in test_neutronv2
* Clarify message for Invalid/Bad Request exception
* Cancelled live migrations are not in progress
* set wrap_width for config generator to 80
* API change for verifying the scheduler when evacuating
* Fix invalid uuid warnings in virt testcases

14.0.0.0b1
----------

* Remove mox from nova/tests/unit/virt/libvirt/test_utils.py
* Fix multipath iSCSI encrypted volume attach failure
* libvirt: add "get_job_info" to Guest's object
* Modify 'an network' to 'a network'
* Remove legacy v2 API code completely
* Remove the usage of RateLimitingMiddleware
* Remove unused inner_app_v21 and ext_mgr
* Remove legacy API code from sample tests
* Remove InstanceUsageAuditLogTest for legacy API
* Change instance_claim parameter from instance_ref to instance
* Make AggregateList.get_ return API & cell db items
* Make Aggregate.get operation favor the API db
* Add aggregates tables to the API db
* Microversion 2.28 changes cpu_info string to JSON object
* libvirt: Skip CPU compatibility check for emulated guests
* Specify the default cdrom type "scsi" for AARCH64
* Remove mox from nova/tests/unit/test_iptables_network.py
* Updated from global requirements
* pci: Make sure PF is 'available' when last VF is freed
* pci: related updates are done without DB lookups
* pci: make sure device relationships are kept in memory
* Remove mox from nova/tests/unit/virt/libvirt/test_vif.py
* verify api-ref os-migrations.inc
* Nova UTs broken due to modifying loopingcall global var
* Remove mox from unit/api/openstack/compute/test_consoles.py
* Stop using mox from virt/libvirt/storage/test_lvm.py
* Update functional tests for fixtures 3
* Stop using mox in test_firewall
* Add tests to attach/detach vols for shelved server
* Remove unused _vlan_is_disabled test flag
* libvirt: New configuration classes to parse device address element
* Fixed clean up process in confirm_resize() after resize/cold migration
* VMware: remove dead code in test_get_vm_create_spec()
* Remove mox from compute/test_scheduler_hints.py
* Updated from global requirements
* Remove normal API operation logs from API layer
* Remove unused LOG from v2.1 API code
* Adds RemoteFX support to the Hyper-V driver
* libvirt: fix serial ports lost after hard-reboot
* Stop using mox stubs in test_server_usage.py
* Remove mox from compute/test_instance_usage_audit_log.py
* api-ref: os-consoles.inc
* Add proxy middleware to application pipeline
* api-ref: Example verification for os-interface.inc
* Remove redundant orphan instances unit test
* Remove duplicate migration RT unit tests
* Redundant test of CPU resources in test_tracker
* Remove duplicate test of RT.stats.current_workload
* Remove duplicate test of claim context manager
* Remove pointless "additive claims" unit test
* Remove oversubscribe test in test_resource_tracker
* api: Improve the _check_multiple* function names readability
* api-ref verify servers-action-deferred-delete.inc
* Fix the order of expected error codes
* Remove DictCompat from NetworkRequest
* api-ref: Add a sample test for os-interface
* Use oslo_log instead of logging
* Verify requested_destination in the scheduler
* Add requested_destination field to RequestSpec
* Remove mox from compute/test_extended_ips_mac.py
* Ironic nodes with instance_uuid are not available
* Updated from global requirements
* Fixes python 3 urllib quote / unquote usage
* Make compute nodes update their own inventory records
* Remove unused WsgiLimiter
* Remove unused args from RateLimitingMiddleware
* Remove unused use_no_auth from wsgi_app_v21()
* Fix incorrectly named vmwareapi test
* Make Inventory and ResourceProvider objects use the API DB instead
* Rename ImageCacheManager._list_base_images to _scan_base_images
* Remove all references to image_popularity from image cache
* Remove image cache image verification
* Fix test_age_and_verify_swap_images
* api and availability_zone opt definition consistent
* Rename Image.check_image_exists to Image.exists()
* Remove mox from api/openstack/compute/test_console_output.py
* Remove mox from api/openstack/compute/test_config_drive.py
* VMware: set service status based on vc connection
* Return 400 HTTP error for invalid flavor attributes
* Get transport_url from config in Cells v2 cell map utility
* Support for both microversion headers
* Fix unit test after the replace of key manager
* Fix "KeyError: u'instance_id'" in string format operation
* Save all instance extras in a single db call
* Remove APIRouter of legacy v2 API code
* Remove legacy v2 API tests which use wsgi_app()
* limits.inc example verification
* Remove duplicate unit test in test_tracker
* Remove delete stubs in test_resource_tracker
* Remove service crud from test_resource_tracker
* Remove conductor from test_resource_tracker
* Remove StatsDicTestCase from test_resource_tracker
* rt-unit: Replace hard-coded strings with constants
* Remove useless test of incorrect stats value
* Remove RT duplicate unit test for PCI stats
* Remove more duplicate RT unit tests
* Removes test_claim_saves_numa_topology()
* objects: added 'os_secure_boot' property to ImageMetaProps object
* Trivial: Fixes serial console minor nits
* Revert "glance:add helper method to get client version"
* Add length check in comparing object lists
* Update Support Matrix
* Improve the help text for the rdp options
* No disable reason defined for new services
* api-ref: limits.inc validate parameters
* Make it possible to build docs with python3
* Updated from global requirements
* remove db2 support from tree
* Adds Hyper-V imagecache cleanup
* raise exception ComputeHostNotFound if host is not found
* Skip instance name templating in API cell
* Add http_proxy_to_wsgi to api-paste
* Stop using mox stubs in test_pipelib.py
* api-ref: Parameter verification for os-interface.inc
* devspec: remove unused VIRTFN_RE and re
* Remove duplicate test of set inst host/node
* Remove SchedulerClientTrackerTestCase
* Move unit tests of set_instance_host_and_name()
* Remove MissingComputeNodeTestCase for res tracker
* Remove tests for missing get_available_resource()
* api-ref, os-fping.inc
* Pass OS_DEBUG to the tox test environment
* Hyper-V: Implement nova rescue
* Add resource provider tables to the api database
* HyperV: Nova serial console access support
* Let setup.py compile_catalog process all language files
* use_neutron_default_nets: StrOpt -> BoolOpt
* api-ref: Add fault parameter details
* be more explicit that rate limits are gone in v2.1
* Warn when using null cache backend
* Enable 'null' value for user_data in V2.1 API
* Updated from global requirements
* fix Quota related error return incorrect problem
* Add online migration to move keypairs from main to API database
* Completed migrations are not "in progress"
* Make flavor-manage api call destroy with Flavor object
* Move is_volume_backed_instance to compute.utils
* Updated from global requirements
* api-ref: verify flavors.inc
* Fix use of invalid assert calls
* Config options: remove import_opts from cloudpipe section
* Enables Py34 tests for unit.api.openstack.compute.test_server_tags
* Fix the versions API for api-ref
* Update link for hypervisor support matrix message
* api-ref: complete verification of baremetal api
* Keep BuildRequest db entry around longer
* Drop fields from BuildRequest object and model
* Resize API operation passing down original RequestSpec
* Augment release note for import_object_ns removal
* pci: add safe-guard to __eq__ of PciDevice
* deprecate config option "fatal_exception_format_errors"
* config options: centralize exception options
* libvirt: Add serial ports to the migration data object
* Hyper-V: Fixes disk overhead claim issue
* Config options: move set default opt of db section to centralized place
* [Trivial] Fix a grammar error in comments
* api-ref: Example verification for servers-action-shelve.inc
* [Ironic] Correct check for ready to deploy
* api-ref: Fix parameters in servers-action-shelve.inc
* api-ref: parameter verification for os-server-groups
* api-ref: servers-action-evacuate.inc
* remove FlavorCreateFailed exception
* Add tests for floating_ip private functions
* Trivial: remove os-security-groups needs:method_verification line
* Add RC file for excluding tempest tests for LVM job
* Move config options from nova/api directory (5)
* libvirt: add method to configure max downtime when migrating
* libvirt: add "abort_job" to Guest's object
* libvirt: add method "migrate" to Guest's object
* Only attempt to inject files if the injection disk exists
* Remove deprecated option libvirt.remove_unused_kernels
* Rename Raw backend to Flat
* deprecate s3 image service config options
* Cold migrate using the RequestSpec object
* Add a RequestSpec generation migration script
* Enables Py34 tests for unit.compute.test_compute
* Fixes invalid uuid usages in functional tests
* Make neutronapi get_floating*() methods return objects
* Switch api unit tests to use v2.1 API
* Remove mox used in tests/unit/api/openstack/compute/test_server_start_stop
* Remove marker from nova-manage cells_v2 map_instances UI
* api-ref: complete verification for os-flavor-access
* Make some build_requests columns nullable
* Add message queue switching through RequestContext
* trivial: remove unused argument from a method
* baseproxy: stop requiring CONF.verbose
* Cleanup validation logic in _get_requested_networks
* api-ref: complete verification of servers-action-crash-dump.inc
* migrate to os-api-ref
* api-ref: image.inc - Update method validation
* config options: centralize section "database" + "api_database"
* api-ref: parameter verification for os-quota-sets
* Fix network mtu in network_metadata
* Add a note about egress rules to os-security-group-rules api-ref
* ironic: fix call to _cleanup_deploy on config drive failure
* Follow-up for the API config option patch
* api-ref: reorder parameters.yaml
* Network: fix typo
* Add online migration to store keypairs with instances
* Make Keypair object favor the API database
* api-ref: ips.inc example verification
* Fix spelling mistake in libvirt
* Body Verification of os-aggregates.inc
* Move placement api request logging to middleware
* conf: Move cloudpipe options to a group
* conf: Address nits in I92a03cb
* Fix corrupt "host_aggregates_map" in host_manager
* Fix spelling mistake
* api-ref: Example verification for os-volume_attachments.inc
* api-ref: Parameter verification for os-volume_attachments.inc
* Remove fake_imagebackend.Raw and cleanup dependent tests
* Remove unused arguments to images.fetch and images.fetch_to_raw
* api-ref: finish validation for os-server-external-events.inc
* report info if parameters are out of order
* Method verification of os-floating-ips-bulk.inc
* api-ref: os-volumes.inc method verification
* config options: move s3 related options
* deprecate "default_flavor" config option
* config options: centralize default flavor option
* Return HTTP 400 on boot for invalid availability zone
* Config options: remove import_opts from completed section
* Fix migration query with unicode status
* Config options: centralize cache options
* Change 5 space indent to 4 spaces
* Remove deprecated "memcached_server" in Default section
* Updated from global requirements
* Add a functional test for instance fault message with retry
* api-ref: complete verification for extensions resource
* live-migration ceph: fix typo in ruleset parsing
* api-ref: os-floating-ip-dns.inc method verification
* api-ref: Method verification for servers-actions
* Eager load keypairs in instance metadata
* Complete method verification of os-networks
* Method verification of os-security-group-default-rules
* virt: reserved number of mempages on compute host
* deprecate "file transfer" feature for Glance images
* centralized conf: nova/network/rpcapi.py
* Config options: centralize remotefs libvirt options (end)
* Config options: centralize smbfs libvirt options (16)
* imagebackend: Check that the RBD image exists before trying to cleanup
* Rewrite _cleanup_resize and finish_migration unit tests to use mock instead of mox
* Remove mox in test_volume_snapshot_create_outer_success
* api-ref: Method verification for os-volume_attachments.inc
* Improve the help text for the API options (4)
* Improve the help text for the API options (3)
* api-ref: ips.inc parameter verification
* Add Keypairs to the API database
* Create Instances with keypairs
* Method verification for server-action-deferred-delete
* method verification for server-action-remote-consoles
* method verification of os-server-external-events
* method verification of os-instance-usage-audit-log
* Add keypairs to Instance object
* Complete method verification of os-baremetal-nodes.inc
* api-ref: parameter validation for os-security-group-rules
* Fixed missing variable
* api-ref: Method verification for os-floating-ips
* force_live_migration remove redundant check
* pci: create PCI tracker in RT._init_compute_node
* Fix race condition for live-migration-force-complete
* api-ref: servers-action-shelve.inc
* Added fault response parameter to Show Server Details API
* pci: Remove unused 'all_devs' method
* Corrected the typo
* Denormalize personality extension
* method verification of os-assisted-volume-snapshots
* api-ref: os-certificates.inc method verification
* Complete method verification of os-cloudpipe.inc
* Fix service version to update the DB
* method verification for servers-action-fixed-ip
* Added new exception to handle CinderClientException
* Drop paramiko < 2 compat code
* Config options: centralize scality libvirt options (15)
* Compute: Adds driver disk_gb instance overhead estimation
* config options: move image_file_url download options
* crypto: Add support for Paramiko 2.x
* Denormalize extensions for clarity
* Complete method verification of os-fping
* Complete method verification of os-security-group-rules
* Fix invalid uuid warnings
* Correct some misspelled words in nova
* Remove 404 for list and details actions of servers
* Improve the help text for the API options (2)
* Improve the help text for the API options (1)
* Complete method verification of os-migrations
* Move config options from nova/api directory (4)
* api-ref: perform all 4 phases of verification for action console output
* api-ref: add url parameter to expand all sections
* api-ref: complete verification for diagnostics.inc
* api-ref: update parameter validation on servers
* Complete method verification of os-tenant-networks
* trivial: removed unused networks var from os-tenant-networks:create
* Complete method verification of os-security-groups
* Move config options from nova/api directory (3)
* Move config options from nova/api directory (2)
* Move config options from nova/api directory (1)
* api-ref: method verification and fixes for servers.inc
* Instance mapping save, properly load cell mapping
* Fix exception when vcpu_pin_set is set to ""
* config: remove deprecated ironic.client_log_level
* Complete method verification of os-quotas
* Complete method verification of os-servers-admin
* Complete method verification of os-shelve
* Add api-sample test for showing quota detail
* Remove legacy v2 tests which use APIRouter
* pci: eliminate DB lookup PCI requests during claim
* pci: pass in instance PCI requests to claim
* Remove rate_limit param in builder
* Remove comment on v3 API
* Not talking about V2 API code in review doc guide
* Add keypairs to instance_extra
* Trivial: No need to exclude TestMoveClaim from py34 tests
* Remove 400 as expected error
* Cleaned up request and response formats page
* Complete method verification of os-agents
* update servers policy in code to use formats
* Complete method verification of os-fixed-ips
* Consolidate image_href to image uuid validation code
* Fix TestNeutronv2.test_deallocate_for_instance_2* race failures
* Centralize config option for nova/network/driver.py
* Don't raise error when filtering on custom metadata
* Config options: centralize quobyte libvirt options (14)
* Config options: centralize volume nfs libvirt options (13)
* Config options: centralize volume net libvirt options (12)
* Config options: centralize iser libvirt options (11)
* Config options: centralize iscsi libvirt options (10)
* Config options: centralize glusterfs libvirt options (9)
* Config options: centralize aoe vol libvirt options (8)
* Config options: centralize volume libvirt options (7)
* Config options: centralize vif libvirt options (6)
Config options: centralize utils libvirt options (5) * Config options: centralize lvm libvirt options (4) * Remove legacy v2 unit tests[q-v] * Remove legacy v2 unit tests[f-n] * Remove Limits dependency of legacy v2 API code * Remove mox in unit/virt/xenapi/test\_agent.py * Set migration status to 'error' on live-migration failure * Add pycrypto explicitly * Centralize vif,xenpool & vol\_utils config options * Config options: centralize imagecache libvirt options (3) * Config options: centralize imagebackend libvirt options (2) * Remove the legacy v2 API entry from api-paste.ini * Update stable API doc to indicate code removal * Config options: centralize driver libvirt options (1) * UEFI - instance terminates after boot * Fix unit tests for v2.1 API * Remove legacy v2 unit tests[a-e] * Config options: Centralize servicegroup options * libvirt: release serial console ports when destroying guests * Remove mox from tests/unit/network/test\_api.py * Remove legacy v2 API functional tests * fix wrong key name in test code * Remove the legacy v2 API test scenarios from API sample tests * Remove 413 expect in servers.py * Remove core extension list * rt: remove unused image\_meta parameter * Fail to start nova-api if no APIs were able to be started * Test that nova-api ignores paste failures, but continues on * libvirt: introduces module to handle domain xml migration * Trivial: dead code * Fix database poison warnings, part 8 * docs: link to Laski's cells talk from the Austin summit * compute: Retain instance metadata for 'evacuate' on shared storage * Archive instance\_actions and instance\_actions\_event * Add os-interface functional negative tests * api-ref: verify os-server-groups.inc * Avoid unnecessary \_get\_power\_state call * Remove mox in test\_certificates.py * api-ref: verify limits body * api-ref: body verification of ips.inc * Change message format of Forbidden * Updated from global requirements * api-ref verify of servers-admin-action.inc * pci: Allow assigning pci devices in pci device list * Fix typo in support-matrix.ini: re(set)=>(re)set * Add ability to filter migrations by instance uuid * Wrong mocks, wrong mock order * verify api-ref metadata.inc * verify api-ref os-server-password.inc * Updated from global requirements * Fix database poison warnings, part 7 * Declare nova.virt namespace * [doc] fix 5 typos * Make compute rpcapi 'live\_migration' backward compatible * Replace key manager with Castellan * Deprecate Nova Network * verify api-ref os-instance-usage-audit-log.inc * Only reset dns\_name when unbinding port if DNS is integrated * Changed the storage size from GB to GiB * Remove unused FAKE\_UUID variables * Deprecated the concept of extensions in v2.1 * Fix database poison warnings, part 6 * Fix database poison warnings, part 5 * Avoid unconditional warnings in nova-consoleauth * libvirt: remove version checks for hyperv PV features * libvirt: remove version checks for libvirt disk discard feature * libvirt: remove version checks for block job handling * libvirt: remove version checks for PCI device detach * libvirt: remove version checks for live snapshot feature * libvirt: add explicit check for min required QEMU version * libvirt: increase min required libvirt to 1.2.1 * network: Fix nova boot with multiple security-groups * Updated config description on live snapshot * Fix NoSuchOptError when referring to conf.neutron.auth\_plugin * api-ref host verification (os-hosts.inc) * api-ref verify os-floating-ip-pools.inc * Complete Verification of server-metadata *
Complete method Verification of os-hypervisors * Fix invalid uuid warnings in compute api testcases * Fix invalid uuid warnings * complete Method Verification of aggregates * Complete Method Verification of ips * Fix resize to same host failing when using anti-affinity group * Complete method Verification of consoles * Config options: Centralize netconf options * Remove 413 as expected error code * Complete Verification of os-server-password * Complete Verification of os-hosts * Add links to API guide to describe links * Complete Method Verification of os-interface * Complete Method Verification of flavor-access * Complete Verification of os-virtual-interfaces * Complete Method Verification of os-instance-actions * Complete Verification of os-flavor-extra-specs * Fix database poison warnings, part 4 * Complete Method Verification of flavor * Complete Method Verification of server group * Trivial: fix mock decorator order * Add test for nova-compute and nova-network main database blocks * Prevent nova-api from dying if enabled\_apis is wrong * Complete Method Verification of keypair * Complete Method Verification of availability-zone * Complete Method Verification of simple tenant usage * remove the use of import\_object\_ns * Fixed typo in word "were" * Complete Method Verification of os-services * Complete Method Verification of server diag * Remove mox in tests/unit/compute/test\_host\_api.py * Config options: completing centralize neutron options * Add instances into dict when handling exception * Complete Method Verification of limits * Improve the help text for the compute rpcapi option * Move config options from nova/compute/rpcapi.py file * Updated from global requirements * deprecate nova-all * Remove unused base\_options param from \_get\_image\_defined\_bdms * Change BuildRequest to contain a serialized instance * Split out part of map\_cell\_and\_hosts to return a uuid * Add manage command for cell0 * Config options: centralize section "ssl" * config options: centralize security\_group\_api opt * Imported Translations from Zanata * Stop using mox stubs in test\_multinic.py * libvirt: deprecate use\_usb\_tablet in favor of pointer\_model * Config options: Centralize neutron metadata options * add tags to files for the content verification phase * Config options: Centralize compute options * Add 415 to list of exceptions for microversions devref * Added validation for rescue image ref * Final warnings removals for api-ref * Clean port dns\_name in case of port detach * Fix remaining json reference warnings * Add validations for volume\_size and destination\_type * Remove duplicate api ref for os-networks/actions * Fix all remaining sample file paths * Stop using mox stubs in test\_access\_ips.py * Stop using mox stubs in test\_admin\_password.py * libvirt - Add log if libguestfs can't read host kernel * Fix sample file path for 4 files * Fix invalid uuid warnings in objects testcases * Fix invalid uuid warnings in server-group unit tests * Create image for suspended instance booted from volume * Fix content and sample file for keypair, migration, networks * Fix sample file path for os-i\* API * Fix the parameters for os-agents API * Fix sample file path for fixed, floating ips API * Fix sample path for aggregate, certificate, console * Add remaining image API ref * Fix the schema of assisted\_volume\_snapshots * config options: conductor live migrate options * xenapi: Fix xmlrpclib marshalling error * fix samples references in security group files * fix samples references in os-services * Fix api
samples references in 3 more files * Fix reverse\_upsize\_quota\_delta attempt to look up deleted flavors * Fix api ref for os-hosts, os-quota-sets and os-fping * Fix api ref for os-cells, os-cloudpipe and server-action-shelve * Fix api sample references in 2 more files * Updated from global requirements * hardware: thread policy default value applied even if specified * Fix api ref for ips, limits, metadata and agent * virt: use more realistic fake network / VIF data * Fix json response example heading in api ref * Fix database poison warnings, part 3 * Remove 40X and 50X from Normal response codes * Specify normal status code on os-baremetal-nodes * Remove unused rotation param from \_do\_snapshot\_instance * Remove unused filter\_class\_names kwarg from get\_filtered\_hosts * Remove deprecated ability to load scheduler\_host\_manager from path * Fix "Creates an aggregate" parameters * Unavailable hosts have no resources for use * HyperV: Add SerialConsoleOps class * HyperV: Add serial console handler class * HyperV: Add serial console proxy * fix samples references for 2 files * Update servers.inc to be as accurate as api-site * Fix database poison warnings, part 2 * Fix "Creates an agent build" parameters * Update get\_by\_project\_id on InstanceMappingList * Clean up cell handling in nova-manage cell\_v2 map\_instances * Properly clean up BDMs when \_provision\_instances fails * clean up versions.inc reference document * Collection of CSS fixes * Fixes unexpectedly passing functional test * move sphinx h3 to '-' instead of '^' * fix blockquote font size * Add 'Show All' / 'Hide All' toggle * use 'required' instead of 'optional' for parameters * Fix css references to the glyphicons font * Initial use of microversion\_parse * Changed an HTTP exception to return proper code * Compute API: omit disk/container formats when creating images of snapshots * Fix formatting of rst in parameters.yaml * Add instance/instance\_uuid to build\_requests table * network: make nova handle port\_security\_enabled=False * BaseCoreFilter docstring and formatting improved * Fix NoMoreNetworks functional test traces * Fix typo in nova release notes * Updated from global requirements * Fix generation of Guru Meditation Report * Fix invalid uuid warnings in cell api testcases * cleanup some issues in parameters.yaml * Import RST files for documentation * add combined parameters.yaml file * claims: Do not assume image-meta is a dict * Fix nova opts help info * Fix doc build if git is absent * Add checks for driver attach\_interfaces capability * Updated from global requirements * Add AllServicesCurrent fixture * Improve the help text for the linuxnet options (3) * Improve the help text for the linuxnet options (2) * Fix signature of copy\_image * libvirt: remove live migrate workaround for an unsupported ver * libvirt: move graphic/serial consoles check to pre\_live\_migration * Fix invalid uuid warnings in api testcases * Minor updates to the how\_to\_get\_involved docs * Put more into compute.api.\_populate\_instance\_for\_create * Remove unused parameter from \_get\_requested\_instance\_group * Improved test coverage * Check API versions intersects * virt/hardware: Fix 'isolate' case on non-SMT hosts * Migrate compute node resource information to Inventory objects * Drop compute node uuid online migration code * increase error handling for dirty files * config options: centralize 'spice' options * Fix max concurrent builds's unlimited semaphore * VMware: add in context for log messages * XenAPI: specify
block size for writing config drive * Fix database poison warnings * Make swap-volume an admin-only API by default * Updated from global requirements * Improve the help text for the linuxnet options (1) * Config options: Centralize network options * Config options: centralize base path configuration * Add new NeutronFloatingIP object * Add "\_\_repr\_\_" method to class "Service" * remove alembic from requirements.txt * Config options: centralize section "xvp" * Imported Translations from Zanata * Updated from global requirements * allow samples testing for PUT to not have a body * libvirt: delete the last file link in \_supports\_direct\_io() * db: retry instance\_info\_cache\_update() on deadlock * Moved tags filtering tests to TestInstanceTagsFiltering test case * Move config options from nova/network/linux\_net.py * Remove nova-manage service subcommand * config options: centralize quota options * DB API changes for the nova-manage quota\_usage\_refresh command * Improve the help text for the network options (1) * Fix typo in compute node mega join comments * Add api-ref/build/\* to .gitignore * Improve help text for the network object options * Config options: Centralize console options * Config options: Centralize notification options * Remove mox from tests/unit/network/security\_group/test\_neutron\_driver.py * Added server tags support in nova-api * Added server tags controller * Added db API layer to add instance tag-list filtering support * Improve 'workarounds' conf options documentation * Config options: centralize "configdrive" options * config options: centralize baseproxy cli options * Check if an exception has a code on it before reading the code * Fix import statement order in nova/rpc.py * Document our policy on fixing v2.0 API bugs * Config options: Centralize neutron options * Remove mox from tests/unit/compute/test\_compute\_xen.py * Fix typo in comments of affinity and anti-affinity * Fix up online\_data\_migrations manage command to be consistent * Adds missing discoverable rules in policy.json * Config options: Centralize ipv6 options * config options: centralize xenserver vmops opts * Config options: Centralize xenapi driver options * config options: centralize xenserver vm\_utils opts * Remove flavor seeding from the base migration * Rely on devstack to skip rescue tests for cells v1 * Replace topic with topics for messaging.Notifier * Updated from global requirements * Fix test for empty policy rules * Improve 'monkey\_patch' conf options documentation * conf: Remove 'destroy\_after\_evacuate' * config options: Move crypto options into a group * config options: centralize section: "crypto" * config options: Centralise 'monkeypatch' options * config options: Centralise 'utils' options * doc: clean up oslo-incubator related stuff * config option generation doesn't work with a generator * Add link to the latest nova.conf example * Change the nova tempest blacklist to use idempotent ids * HyperV: Refactor livemigr, avoiding getting disk paths remotely * Remove DictCompat from mapping objects * Enhance value check for option notify\_on\_state\_change * Fix flavor migration tests and edge case found * config options: Centralize upgrade\_levels section * config options: Centralize mks options * Remove DictCompat from S3 object * config options: Centralize vmware section * config options: centralize section "service" * Define context.roles using base class * TrivialFix: removed unnecessary cycle in servicegroup/test\_api.py * Handle pre-migration flavor creation failures
in the crusty old API * config options: centralize section "guestfs" * config options: centralize section "workarounds" * config options: Centralize 'nova.rpc' options * Cleanup NovaObjectDictCompat from BandwidthUsage * config options: fix the missed cli options of novncproxy * Add metadata objects for device tagging * Nuke cliutils from oslo-incubator * libvirt: pci detach devices should use dev.address * Fix stale file handle error in resource tracker * Updated from global requirements * config options: Centralize xenapi torrent options * Fix: unable to delete instance when cinder is down * Block flavor creation until main database is empty * Further hack up the n.t.unit.db.fakes module of horribleness * Add flavor migration routine * Make Flavor create() and destroy() work against API DB * Move config options from nova/objects/network.py * Add tag column to vifs and bdm * Remove extensible resource tracking * Fix error message of nova baremetal-node-delete * Enhanced error handling for rest\_parameters parser * Fix 'not supported' error message * config options: Centralise 'image\_file\_url' options * neutron: Update the port with a MAC address for PFs * Remove mox from tests/unit/network/test\_rpcapi.py * Remove mox from tests/unit/objects/test\_migration.py * The 'record' option of the WebSocketProxy should be a string * config options: centralize section: "glance" * Move resource provider staticmethods to proxies * Add Service.get\_minimum\_version\_multi() for multiple binaries * remove the ability to disable v2.1 * Make git clean actually remove covhtml * Set 'libvirt.sysinfo\_serial' to 'none' in RealTimeServersTest * Make compute\_node\_statistics() use new schema * remove glance deprecated config * Config options: Centralize consoleauth options * config options: centralize section "cloudpipe" * After migrating an in-use volume the BDM information is lost * Allow updating resources per single node * pci: Add utility method for getting the MAC addr 13.0.0 ------ * Imported Translations from Zanata * VMware: Use Port Group and Key in binding details * Config options: Centralize resource tracker options * Fixed incorrect behavior of xenapi driver * Remove DictCompat from ComputeNode * config options: Centralise 'virt.imagecache' options * neutron: pci\_request logic considers 'direct-physical' vnic type * config options: remove the scheduler import\_opt()s * Improve the help text for hyperv options (3) * Improve the help text for hyperv options (2) * Improve the help text for hyperv options (1) * Imported Translations from Zanata * Remove a redundant 'that' * Cleanup NovaObjectDictCompat from NumaTopology * Fix detach SR-IOV when using LibvirtConfigGuestHostdevPCI * Stop using mox in test\_security\_groups * Cleanup the exception LiveMigrationWithOldNovaNotSafe * Add sample API content * Create api-ref docs site * Config options: Centralize debugger options * config options: centralize section: "keymgr" * libvirt: fix ivs test to use the ivs vif object * libvirt: pass a real instance object into vif plug/unplug methods * Add a vnic type for PF passthrough and a new libvirt vif driver * libvirt: live\_migration\_flags/block\_migration\_flags default to 0 * Imported Translations from Zanata * config options: Centralize xenapi options * Populate instance\_mappings during boot * libvirt: exercise vif driver 'plug' method in tests * config options: centralize xenserver options * Fix detach SR-IOV when using LibvirtConfigGuestHostdevPCI * Reduce number of db calls during image cache manager periodic
task * Imported Translations from Zanata * Update cells blacklist regex for test\_server\_basic\_ops * Update cells blacklist regex for test\_server\_basic\_ops * Remove mox from tests/functional/api\_sample\_tests/test\_cells.py * Remove mox from tests/unit/api/openstack/compute/test\_baremetal\_nodes.py * Config options: Centralize ldapdns options * Add NetworkRequestList.from\_tuples helper * Stop providing force\_hosts to the scheduler for move ops * Enforce migration tests for api database * Objectify test\_flavors and test\_flavors\_extra\_specs * Allow ironic driver to specify cafile * trivial: Fix alignment of wsgi options * config options: Remove 'wsgi\_' prefix from opts * VMware: Always update image size for sparse image * VMware: create temp parent directory when booting sparse image * VMware: Use datastore copy when the image is already in vSphere * Imported Translations from Zanata * Fix typos in document * Removes some redundant words * Stop providing force\_hosts to the scheduler for move ops * Include CellMapping in InstanceMapping object * Make flavor extra\_specs operations work against the API DB * Make Flavor access routines work against API database * Clarify the \`\`use\_neutron\`\` option upgrade notes 13.0.0.0rc2 ----------- * Imported Translations from Zanata * Try to repopulate instance\_group if it is None * Try to repopulate instance\_group if it is None * modify duplicate // to / in doc * change host to host\_migration * Fixup test\_connection\_switch functional test * Fix SAWarning in \_flavor\_get\_by\_flavor\_id\_from\_db * Update 'os-hypervisors.inc' in api-ref * Fix os-server-groups.inc * cinder: accommodate v1 cinder client in detach call * Move config options from nova/network/manager.py * Change adminPass for several server actions * Fix os-virtual-interfaces and flavors api-ref * Make FlavorList.get\_all() return results from the API and main DBs * Objectify some tests in test\_compute and test\_flavors * Objectify test\_instance\_type\_extra\_specs * Add a DatabasePoisonFixture * config options: Use OptGroup for listing options * Live migration failure in API leaves VM in MIGRATING state * Fix flavor-access and flavor-extras api-ref * Fix diagnostics, extensions api ref * Fix typo 'mappgins' to 'mappings' * Imported Translations from Zanata * Fix hosts and az api samples * Change "libvirt.xml" back to the original after doing unrescue * Fix missing os-service related reference * Add 'binary' and 'disable-reason' into os-service * Remove unused argument v3mode * Clean up the TestGlanceClientWrapper retry tests * stop setting mtu when plugging vhost-user ports * config options: Move wsgi options into a group * Rewrite 'test\_filter\_schedule\_skipping' method using Mock * Remove stub\_compute config options * Added missing "=" in debug message * libvirt: serial console ports count upper limit needs to be checked * Imported Translations from Zanata * Return 400 on boot for invalid image metadata * Fix JSON format of server\_concepts * Remove /v1.1 endpoint from api-guide * config options: centralize section: "rdp" * Fixes hex decoding related unit tests * Fix conversion of config disks to qcow2 during resize/migration * xenapi: Fix when auto block\_migration in the API * xenapi: Fix up passing of sr\_uuid\_map * xenapi: Fix the live-migrate aggregate check * Add rebuild action descriptions in support-matrix * Config options: centralize section "hyperv" * Removal of unnecessary \`import\_opt\`s for centralized config options * Imported Translations from
Zanata * Fixes bug with notify\_decorator bad getattr default value * config options: centralize section "monitors" * config options: Centralise floating ip options * Fix API Error on hypervisor-uptime API * VMware: make the opaque network attachment more robust * Add functional test for v2.7 * avoid microversion header in functional test * Add backrefs to api db models * Update reno for stable/mitaka * stop setting mtu when plugging vhost-user ports * Removes redundant object fields * Blacklist TestOSAPIFixture.test\_responds\_to\_version in python3 * Fix conversion of config disks to qcow2 during resize/migration * Remove auto generated module api documentation * Imported Translations from Zanata * Mark 2.25 as Mitaka maximum API version * Add a hacking check for test method closures * Make Flavor.get operations prefer the API database * xenapi: Fix when auto block\_migration in the API * xenapi: Fix up passing of sr\_uuid\_map * Update to openSUSE versions * xenapi: Fix the live-migrate aggregate check * Error on API Guide warnings * Add Newton sanity check migration * Add placeholder migrations for Mitaka backports * Update .gitreview for stable/mitaka * Set RPC version aliases for Mitaka 13.0.0.0rc1 ----------- * Fix reno reverts that are still shown * Wait for device to be mapped * Add a prelude section for Mitaka relnotes * Fix reno for RC1 * libvirt: Fix ssh driver to prevent prompting * Support-matrix of vmware for chap is wrong * Imported Translations from Zanata * Allocate free bus for new SCSI controller * config options: centralize cinder options * Add os-brick rootwrap filter for privsep * Fix retry mechanism for generator results * Add a cell and host mapping utility to nova-manage * Add release note for policy sample file update * Fix vmware quota extra specs reno formatting * Avoid lazy-loads of ec2\_ids on Instance * Replace deprecated LOG.warn with LOG.warning * libvirt: Allow use of live snapshots with RBD snapshot/clone * Typo fix in documentation * Redundant parentheses removed * Trivial: Use exact docstring for quota module * Replace deprecated LOG.warn with LOG.warning * Revert "virt: reserved hugepages on compute host" * Make tuple actually a tuple * xenapi: Image cache cannot be disabled * VMware: enable a resize of instance with no root disk * fixed typo in word "OpenStack" * hyper-v: Copies back files on failed migration * Add functional test for OverQuota * Translate OverLimit exceptions in Cinder calls * Add regression test for Cinder 403 forwarding * register the config generator default hook with the right name * pci - Claim devices outside of Claim constructor * Get instance security\_groups from already fetched instance * Use migrate\_data.block\_migration instead of block\_migration * Fix pre\_live\_migration result processing from legacy computes * Add reno for disco driver * linux\_net: use new exception for ovs-vsctl failures * Ensure resource tracker is updated for deleted instances * VMware: use datacenter path to fetch image * libvirt: check for optional LibvirtLiveMigrateData attrs before loading * Change SpawnIsSynchronous fixture return * Report instance-actions for live migration force complete API * Add release notes for security fixes in 13.0.0 mitaka GA * API: Raise HTTPNotFound when console is not available in get\_console\_output * libvirt: Comment non-obvious security implications of migrate code * Update the doc of notification * fixed log warning in sqlalchemy/api.py * Add include\_disabled parameter to service\_get\_all\_by\_binary * Imported
Translations from Zanata * Set personality/injected\_files to empty list if not specified * Fix processing of libvirt disk.info in non-disk-image cases * pci: avoid parsing whitelist repeatedly * Add Forbidden to caught cinder exceptions * Missing info\_cache.save() in db sqlalchemy api * tests: Add some basic compute\_api tests for attaching volumes * Clean up networks with SR-IOV binding on reschedule * virt: refactor method compute\_driver\_matches * Make force\_ and ignore\_hosts comparisons case insensitive * xenapi: fix when tar exits early during download * Address nits in I83a5f06ad * Fix config generation for Neutron auth options * Remove an unused method in FakeResourceTracker * Rework 'limited' and 'get\_limit\_and\_marker' * plugins/xenserver: Resolve PEP8 issues * Remove unused variable and redundant code path * Soft delete instance group member when deleting instance * VMware: Refactor the formatting of instance metadata * Remove sizelimit.py in favor of oslo\_middleware.sizelimit * libvirt: make snapshots call suspend() instead of reimplementing it * Use generic wrapper for cinder exceptions * Add ppc64le architecture to some libvirt unit tests * Add Database fixture to sync to a specific version * Drop the use of magic openstack project\_id * Aggregate object fixups * Address nits in Ia2296302 * Remove duplicated oslo.log configuration setup * libvirt: Always copy or recreate disk.info during a migration * nova-manage: Print, not raise, exceptions * virt: reserved hugepages on compute host * XenAPI: Resolve Nova/Neutron race condition * Don't use locals() and globals(), use a dict instead * update the deprecated \`\`security\_group\_api\`\` and \`\`network\_api\_class\`\` * [Ironic] Match vif-pif mac address before setting 'vif\_port\_id' * Correct the wrong usage of 'format' jsonschema keyword in servers API * Add ComputeNode and Aggregate UUID operations to nova-manage online migrations * Extend FakeCryptoCertificate.cert\_not\_valid\_after to 2 hours * Revert "functional: Grab the service version from the module" * libvirt: Fix resize of instance with deleted glance image * Reno for libvirt libosinfo with OS * Fix hyperv use of deprecated network\_api\_class * Fix v2.12 microversion REST API history doc * Add docstrings for nova.network.base\_api.get\_vifs\_by\_instance * Style improvements * Reno for Ironic api\_version opt deprecation * Release notes: online\_data\_migrations nova-manage command * nova-manage: Declare a PciDevice online migration script * test\_fields: Remove all 'Enum' subclass tests * Make test cases in test\_crypto.py derive from NoDBTestCase * Ironic: remove backwards compatibility code * Ironic: Use ironicclient native retries for connection errors * RT: aborting claims clears instance host and NUMA info * Provide correct connector for evacuate terminate * Reset instance progress when LM finishes * Forbid new legacy notification event\_type * VMware: Remove VMwareHTTPReadFile * API: Mapping ConsoleTypeInvalid exception to HTTPBadRequest * VMware: remove deprecation warnings from oslo\_versionedobjects * Reject empty-named AZ in aggregate metadata * add checking for new image metadata property 'hw\_cpu\_realtime\_mask' * Remove unused methods in nova/utils.py * Fix string interpolations at logging calls * Generate better validation error message when using name regexes * Return 400 for os-virtual-interfaces when using Neutron * Dump metric exception text to logs * Updated from global requirements * Use SensitiveStringField for BlockDeviceMapping.connection\_info *
Add index on instances table across deleted/created\_at columns * Tweak the resize\_confirm\_window help text * Enable rebuild tests in cellsv1 job * libvirt: clean up help text for live\_migration\_inbound\_addr option * Add release note for nova using neutron mtu value for vif plugging * deprecate security\_group\_api config option * update tests for use\_neutron=True; fix exposed bugs * deprecate \`\`volume\_api\_class\`\` and \`\`network\_api\_class\`\` * deprecate \`\`compute\_stats\_class\`\` config option * Deprecate the \`\`vendordata\_driver\`\` config option * Deprecate db\_driver config option * deprecate manager class options * remove default=None for config options * Check 'destination\_type' instead of 'source\_type' in \_check\_and\_transform\_bdm * Documentation fix regarding triggering crash dump * Use db connection from RequestContext during queries * Ironic: Clean up if configdrive build fails * Revert "Generate better validation error message when using name regexes" * Add unit tests for live\_migration\_cleanup\_flags * Replaced unittest and unittest2 with testtools * Sample nova.conf file has missing/duplicated config options 13.0.0.0b3 ---------- * Fix missing unit in HostState.\_\_repr\_\_() * Make InstanceMappings.cell\_id nullable * Create BuildRequest object during boot process * Add BuildRequest object * Api\_version\_request.matches does not accept a string or None * Added Keystone and RequestID headers to CORS middleware * Generate better validation error message when using name regexes * XenAPI: introduce unit test for XenAPI plugins * Abstract a driver API for triggering crash dump * Fix evacuate support with Nova cells v1 * libvirt: don't attempt to get baseline cpu features if host cpu model is None * Ensure there are no unreferenced closures in tests * libvirt: set libvirt.sysinfo\_serial='none' for virt driver tests * libvirt: Add ppc to supported arch for NUMA * Use new inventory schema in all compute\_node gets * Remove unused libvirt \_get\_all\_block\_devices and \_get\_interfaces * Use new inventory schema in compute\_node\_get\_all() * Deprecate nova.hooks * Adjust resource-providers models for resource-pools * Fix Cells RPC API by accepting a RequestSpec arg * API: Improve os-migrateLive input parameters * Allow block\_migration and disk\_over\_commit to be None * Update time is not updated when metadata of aggregate is updated * complete the removal of api\_version from rest client parameters * objects: add HyperVLiveMigrateData stub * functional: Grab the service version from the module * Added missing '-' to the rest api history doc * Gracefully handle cancelling all events more than once * Cleanup service.kill calls in functional tests * Do not use constraints for venv * VMware: Use actual VM state instead of using the instance vm\_state * Do not pass call\_xenapi unmarshallable type * check max\_net\_count against min\_count when booting * objects: Allow instance to reset the NUMA topology * Mark 'network\_device\_mtu' as deprecated * Add service binary/host to 'service is down' log for context * Abort an ongoing live migration * Add new APIs and deprecate old API for migrations * Deprecate conductor manager option * Xen: Calculate block\_migration if it's None * Libvirt: Calculate block\_migration if it's None * NUMATopologyFilter raises exception and does not continue to filter next node * Updated from global requirements * Add specific method to lazy-load instance.pci\_devices * Move logging outside of LibvirtConfigObject.to\_xml * Update the help
for deprecated glance host/port/protocol options * Added missing execution of the test * Add build\_requests database table and model * Make db.aggregate\_get a reader not a writer * Remove an unnecessary variable in a unit test * Remove duplicate test case flavor\_create * Don't lazy-load instance.services if the instance is deleted * Add functional regression test for list deleted instances on v2.16 * Use constant\_time\_compare from oslo.utils * Remove APIRouterV3 * reduce pep8 requirements to just hacking * fix usage of opportunistic test cases with enginefacade * add regression test for bug #1541691 * Creates flavor\* tables in API database * Add test for unshelve in the conductor API * add a place for functional test to block specific regressions * make microversion a client level construct for tests * Allocate uuids for aggregates as they are created or loaded * bug and tests in 'instance\_info\_cache' * fix typo in comment * Fix conductor to \*really\* pass the Spec obj * Updated from global requirements * Catch iscsi VolumeDeviceNotFound when detaching * Add note about using OS-EXT-\* prefix for attribute naming * Remove use of \`list\` as variable name * resource-provider versioned objects * Fix networking exceptions in ComputeTestCase * Fix online\_data\_migrations() not passing context * Fix two bugs in online\_data\_migrations() * Make online\_data\_migrations do smaller batches in unlimited case * Use MTU value from Neutron in OVS/LB VIF wiring * tox: Remove 'oslo.versionedobjects' dependency * Fix API Guide doc * Add functional regression test for bug 1552888 * Fix an unnecessary interpolation * Change wording of microversion bump about 503 * Validate subs in api samples base class to improve error handling * Add a column for uuid to aggregate\_hosts * Hyper-V: Removes pointless check in livemigrationops * XenAPI: Fix VIF plug and unplug problem * Update ComputeNode values with disk allocation ratios in the RT * Update HostManager and DiskFilter to use ComputeNode disk ratio * Add disk\_allocation\_ratio to ComputeNode * config options: Centralise 'virt.disk' options * config options: Centralise 'virt.netutils' options * Improve 'virt.firewall' conf options documentation * config options: Centralise 'virt.firewall' options * Improve 'virt.images' conf options documentation * config options: Centralise 'virt.images' options * Update wrong comment * Fix misuse of assertTrue in console and virt tests * Failed migration shouldn't be reported as in progress * Fix missing unit in debug info * always use python2.7 for pep8 * servicegroup: remove the zookeeper driver * Hacking: check for deprecated os.popen() * Log successful reverts\_task\_state calls * Hyper-V: os\_win related updates * Partial revert of ec2 removal patch * Fixed leaked UnexpectedMethodCallErrors in test\_compute * Unshelve using the RequestSpec object * Provide ReqSpec to live-migrate conductor task * Fix cell capacity when compute nodes are down * Fix misleading test name * Default "discoverable" policies to "@" * build smaller name regexes for validation * Add reno for block live migration with cinder volumes * Remove support for integer ids in compute\_api.get * Add annotation to the kill() method * Add missing os types: suseGuest64/suseGuest * Hypervisor support matrix: add feature "trigger crash dump" * Update example policy.json to remove "" policies * Fixed argument order in remove\_volume\_connection * Add better help text to scheduler options (7) * Add better help text to scheduler options (6) * RT:
Decrease usage for offloaded instances * Allow saving empty pci\_device\_pools in ComputeNode object * Add StableObjectJsonFixture and use it in our base test class * nova-manage: Add hooks for running data-migration scripts * always use pip constraints * Update instance host in post live migration even when exception occurs * Use imageutils from oslo.utils * Remove duplicate key from dictionary * reset task\_state after select\_destinations failed * Pass bdm info to \_get\_instance\_disk\_info method * Fix create snapshot failure on VMs with SRIOV * Reorder name normalization for DNS * Allocate UUID for compute node * rpc.init() is being called twice per test * Use instance hostname for Neutron DNS unit tests * objects: Rename PciDevice \_migrate\_parent\_addr method * Use assertRaises() to check specific exception * libvirt: make live\_migration\_uri flag dependent on virt\_type * Remove unused CONF imports * Add /usr/local/{sbin,bin} to rootwrap exec\_dirs * write live migration progress detail to DB in migration monitor * Add migration progress detail in DB * Tolerate installation of pycryptodome * neutron: handle attach interface case with no networks * Move Disk allocation ratio to ResourceTracker * Updated from global requirements * HyperV: Fix vm disk path issue * Removal of unnecessary \`import\_opt\`s for cells config options * Fix 500 error for showing deleted flavor details * Fix \_compare\_result type handling comparison * neutron: remove redundant request.network\_id assignment * Fix reported ppc64le bug on video selection * Improve 'virt.driver' conf options documentation * Improve unit tests for instance multiple create * Change populate\_security\_groups to return a SecurityGroupList * Fix error message in imagebackend * config options: Centralise 'virt.driver' options * Avoid lazy-loading flavor during usage audit * resource\_providers, allocations and inventories models * Revert "Add new test\_rebuild\_instance\_with\_volume to cells exclude list" * Update the CONF import path for VNC * Improve 'vnc' conf options documentation * Remove discoverable policy from server:migrations resource * Improve the help text for cells options (6) * Improve the help text for cells options (5) * Improve the help text for cells options (4) * Improve the help text for cells options (3) * Improve the help text for cells options (2) * Allow block live migration of an instance with attached volumes * Implement an indexed ResourceClass Enum object * Add check to limit maximum value of max\_rows * Fix spelling mistake * Add methods for RequestContext to switch db connection * virt: osinfo will report once if libosinfo is not loaded * Replace eventlet-based raw socket client with requests * Add a tool for reserving migration placeholders during release time * libvirt: check for interface when detach\_interface fails * libvirt: implement LibvirtConfigGuestInterface.parse\_dom * Filter APIs out from services list * Config options: centralize options in conductor api * Improve the help text for cells options (1) * VMware: add release notes for the limits * Get a ReqSpec in evacuate API and pass it to scheduler * Fixes cells py3 unit tests * Fixes network py3 unit tests * Fixes Python 3 unit tests for nova.compute * Add new test\_rebuild\_instance\_with\_volume to cells exclude list * Add some obvious detail to nw\_info warning log * Fix fallocate test on newer util-linux * Remove \_create\_local function * Trivial logic cleanup in libvirt pre\_live\_migration * Return HTTP 400 for invalid
server-group uuid * Properly inject network\_data.json in configdrive * enginefacade: remove 'get\_session' and 'get\_api\_session' * enginefacade: 'request\_spec' object * Add new API to force live migration to complete * Add new DB API method to retrieve migration for instance * Imported Translations from Zanata * Updated from global requirements * Sync L3Driver, NullL3 interface with LinuxNetL3 * Top 100 slow tests: api.openstack.compute.test\_api * Top 100 slow tests: api.openstack.compute.test\_versions * Top 100 slow tests: legacy\_v2.test\_servers * Top 100 slow tests: api.openstack.compute.test\_flavor\* * Top 100 slow tests: api.openstack.compute.test\_image\_size * Top 100 slow tests: api.openstack.compute.test\_volumes * Confusing typo fixed * doc: all\_tenants query option incorrectly identified as non-admin * Update driver support matrix for Ironic * parametrize max\_api\_version in tests * libvirt: Race condition leads to instance in error * Avoid lazy-loads in metadata requests * Join flavor when re-querying instance for floating ip association * Allow all api\_samples tests to be run individually * Make os-instance-action read deleted instances * enginefacade: 'flavor' * Updated from global requirements * Use instance hostname for Neutron DNS * libvirt: Make behavior of os\_require\_quiesce consistent * Split-network-plane-for-live-migration * Database not needed for most cells messaging tests * libvirt: use osinfo when configuring the disk bus * libvirt: use osinfo when configuring network model * Database not needed for test class: ConsoleAPITestCase * Database not needed for test class: ConductorImportTest * virt: adjusting the osinfo tests to use fakelibosinfo * Database not needed for RPC serializer tests * Database not needed for most crypto tests * Database not needed for most nova manage tests * ebtables/libvirt workaround * Test that new tables don't use soft deletes * Use instance in setup\_networks\_on\_host * enginefacade: test\_db\_api cleanup, missed decorators * Database not needed for test class: PciGetInstanceDevs * Add test coverage to functional api tests \_compare\_result method * Remove and deprecate conductor provider\_fw\_rule\_get\_all() * Remove prelude from disk-weight-sch reno * Enable volume operations for shelved instances * Gracefully handle a deleting instance during rebuild * remove the unnecessary param of set\_vm\_state\_and\_notify * tests: adding fake libosinfo module * config options: Centralise 'vnc' options * config options: Make noVNC proxy into vnc group * Improve 'pci' conf options documentation * config options: centralize section "wsgi" * libvirt: deprecate live/block\_migration\_flag opts * Tidy up scheduler\_evolution.rst * config options: add hacking check for help text length * xrange() is renamed to range() in Python 3 * Do not use "file" builtin, but "open" instead * Fix some word spellings in messages * No need to have ironicclient parameter in methods * Add a TODO to make ComputeNode.cpu\_info non-nullable * Fix missing marker functions in nova/pci * Adding volume operations for shelved instances * Optimize Instance.create() for optional extra fields * Optimize servers path by pre-joining numa\_topology * Trivial: Remove a duplicated word * Update the home-page * Add better help text to scheduler options (5) * Switch to oslo.cache lib * Remove all remaining references to Quantum * doc: remove detail about extensions * Add description for trigger crash dump * Object: Give more helpful error message in TestServiceVersion *
Spread allocations of fixed ips * Updated from global requirements * Stop using mox (scheduler) * Fix xvpvncproxy config path when running n-xvnc * Optimize the instance fetched by floating\_ips API * Improve efficiency of Migration.instance property * Prevent \_heal\_instance\_info\_cache() periodic lazy-loads * Revert "Added new scheduler filter: AggregateTypeExtraSpecsAffinityFilter" * Remove unused provider firewall rules functionality in nova * enginefacade: 'instance\_tags' * Apply scheduler limits to Exact\* filters * Fix typos in nova/scheduler and nova/virt * Replace exit() by sys.exit() * Trivial: Fix a typo in test\_policy.py * neutronv2: Allow Neutron to specify OVS/LB bridge * HyperV: do not log twice with different levels * Replace stubs.Set with stub\_out (db) * Add a disk space weight-based scheduler * Fix up live-migration method docstrings * Libvirt: Support ovs fp plug in vhostuser vif * xenapi: simplify swap\_xapi\_host() * Allow sending the migrate data objects over the wire * Added new scheduler filter: AggregateTypeExtraSpecsAffinityFilter * Replace "all\_mappings" variable by "block\_device\_mappings" * Add better help text to scheduler options (4) * Migrate from keystoneclient to keystoneauth * fast exit dhcpbridge on 'old' * Ironic: Lightweight fetching of nodes * Fix RequestSpec \_from\_db\_object * doc: Ask reviewers to reject new legacy notifications * Generate doc for versioned notifications * doc: add devref about versioned notifications * Adds json sample for the versioned notifications * relocate os\_compute\_api:servers:discoverable * libvirt: convert to use instance.image\_meta property * Updated from global requirements * doc: fix malformed api sample * Persist the request spec during an instance boot * Revise the compute\_upgrade\_levels\_auto release note * Adding guard on None value for some helper methods * Return HTTP 400 if volume size is not defined * API: Rearrange HTTPBadRequest raising in \_resize * remove the wrong param from fake\_db\_migration initialization * Enable all extensions for server PUT API sample tests * Config options: centralize options in availability\_zones * We now require gettext for dev environments * Revert "Pass host when call attach to Cinder" * update feature support matrix documentation * Config options: centralize section "cells" * Use uuidsentinel in host\_status test * remove unused tpl * Return 409 instead of 503 on cidr conflict * releasenotes: Note on CPU thread pinning support * Use extra\_data\_func to get fingerprints of objects * Use stevedore for scheduler driver * Use stevedore for scheduler host manager * Enables conductor py3 unit tests * REST API changes for user settable server description * Use get\_notification\_transport() for notifications * Stop using stubs.Set in vmwareapi unit tests * Add tests for nova.rpc module * libvirt: check min required qemu/kvm versions on ppc64/ppc64le * VMware: Handle image size correctly for OVA and streamOptimized images * enginefacade: 'instance\_group' * enginefacade: 'floating\_ip' * enginefacade: 'compute\_node' * enginefacade: 'service' * Hyper-V: Trace original exception before converting exception * Fixed incorrect names/comments for API version 2.18 * Remove mox from tests/unit/objects/test\_keypair.py * API: Remove unexpected from errors get\_console\_output * Updated from global requirements * Fix docstrings for sphinx * Make project\_id optional in v2.1 urls * remove unused tpl file * Log retries at INFO level per guidelines * make logic clearer about template
selection * Add ITRI DISCO os-brick connector for libvirt * Fix misleading comment of pci\_stats * cleanup: remove python 2.6 compat assertNotIsInstance * Add better help text to scheduler options (3) * Add better help text to scheduler options (2) * Add better help text to scheduler options (1) * Note in HypervisorSupportMatrix for Libvirt/LXC shutdown kernel bug * Ceph for live-migration job * enginefacade: 'security\_group' * enginefacade: 'instance' * enginefacade: 'fixed\_ip' * enginefacade: 'quota' and 'reservation' * Python3: Replace dict.iteritems with six.iteritems * Updated from global requirements * Object: Fix wrong usage of migrate\_data\_obj * \_can\_fallocate should throw a warning instead of an error * VMware: no longer convert image meta from dict to object * cleanup: add comments about the pre/post extension processing * cleanup: remove custom serializer support * Add description for server query * remove docs about format extensions * Remove catching of ComputeHostNotFound exception * Return empty object list instead of [] * cleanup: remove configurable action\_peek * libvirt: use native AIO mode for cinder volumes * libvirt: use native AIO mode for image backends * Issue an info log msg when port quota is exceeded * Validate translations * Imported Translations from Zanata 13.0.0.0b2 ---------- * doc: add client interactive guideline for microversions * doc: add version discovery guideline in api concept doc * doc: completes microversion use-cases in api concept doc * Fix indents of servers-detail-resp.json * libvirt: make snapshot use RBD snapshot/clone when available * Improve the help text for the cert options * cleanup: remove infrastructure for content/type deserializer * Pass host when call attach to Cinder * Pass attachment\_id to Cinder when detaching a volume * libvirt: Fix/implement revert-resize for RBD-backed images * Added super() call in some of the Model's children * enginefacade: 'ec2\_instance' and 'instance\_fault' * cleanup: collapse wsgi serializer test hierarchy * Add service status notification * cleanup: remove wsgi serialize/deserialize decorators * enginefacade: 'block\_device\_mapping' * Fix invalid import order * Add a REST API to trigger crash dump in an instance * libvirt: adding a class to retrieve hardware properties * virt: introduce libosinfo library to set hardware policy * pci: changing the claiming and allocation logic for PF/VF assignment * pci: store context when creating pci devices * Make emitting versioned notifications configurable * Add infra for versioned notifications * Make sure that we always have a parent\_addr set * change set\_stubs to use stub\_out in vmwareapi/stubs.py * Add note to ComputeNode.numa\_topology * Reno for lock policy * Clean up nova/conf/scheduler.py * Reno for Xen rename * config options: Make xvp proxy into vnc group * XenAPI: Fix race on rotate\_xen\_guest\_logs * Add exception handling in \_cleanup\_allocated\_network * hardware: check whether realtime capable in API * Remove releasenotes/build between releasenotes runs * Add python3\* packages to development quickstart guide * Make sure full stack trace is logged on RT update failure * Changed filter\_by() to filter() during filtering instances in db API * config options: Centralise PCI options * HyperV: Set disk serial number for attached volumes * Use "regex" of StrOpt to check option "port\_range" * enable uefi boot * VMware: convert to use instance.image\_meta property * Config drive: convert to use
instance.image\_meta property * Use of six.PY3 should be forward compatible * Add host\_status attribute for servers/detail and servers/{server\_id} * Revert "Workaround reno reverts by accepting warnings" * Adds release notes for soft affinity feature * libvirt: handle migrate\_data as object in cleanup method * Create filter\_properties earlier in boot request * Parse availability\_zone in API * Add object and database support for host\_status API * Workaround reno reverts by accepting warnings * ports & networks gather should validate existence * objects: add virtual 'image\_meta' property to Instance object * compute: convert manager to use nova.objects.ImageMeta * Replace stubs.Set with stub\_out (os) * Fix Mock assert\_called\_once\_with() usage * ServerGroupsV213SampleJsonTest should actually test v2.13 * Move config options from nova/cert directory * Remove dead code from reserve\_block\_device\_name rpcapi * Adapt the code to the new get\_by\_volume BDM functions * Fix undetected races when getting BDMs by volume id * Fix instance not destroyed after successful evacuation * Use TimeFixture from oslo\_utils in functional tests * Fix indexing of dict.keys() in python3 * libvirt: add a new live\_migration\_tunnelled config * libvirt: force config related migration flags * libvirt: force use of direct vs p2p migration * libvirt: force use/non-use of NON\_SHARED\_INC flag * libvirt: parse live migration flags at startup * enginefacade: 'aggregate' * Add helper shim for getting items * hacking: check for common double word typos * Fix backing file detection in libvirt live snapshot * trivial: Add additional logs for NUMA scheduling * Add 'hw:cpu\_threads\_policy=isolate' scheduling * Replaces itertools.izip with six.moves.zip * Clean up network resources when reschedule fails * Replace stubs.Set with stub\_out (fakes) * Add maximum microversions for each release * Remove "or 'reserved'" condition from reserve\_block\_device\_name * live-migration hook ansible 2.0 compatibility * update min tox version to 2.0 * pci: adding support to specify a device\_type in pci requests * Block flaky python34 test: vmwareapi.test\_configdrive.ConfigDriveTestCase * Actually pass the migration data object down to the virt drivers * nova conf single point of entry: fix error message * Fix sphinx warnings from signature\_utils * Sets binding:profile to empty dict when unbinding port * Use timedelta.total\_seconds instead of calculating * Use stub\_out and mock to remove mox: part 3 * Replaces \_\_builtin\_\_ with six.moves.builtins * Remove mm-ctl from network.filters * Add mm-ctl to compute.filters * Add reviewing point related to REST API * Stop using mox stubs in nova.tests.unit.console * pci: do not filter out any SRIOV Physical Functions * objects: update the old location parent\_addr only if it has a value * Add xenapi support for XenapiLiveMigrateData objects * Fixes Hyper-V unit tests for latest os\_win release * Add 'hw:cpu\_thread\_policy=require' scheduling * add "hw\_firmware\_type" image metadata * Docstring change for consistency * Add tests for metadata functions * libvirt: fix TypeError calling \_live\_migration\_copy\_disk\_paths * Add DiskFormat as Enum in fields * Remove DictCompat from EC2 objects * Remove DictCompat from DNSDomain * Add description on how to run ./run\_test.sh -8 * Propagate qemu-img errors to compute manager * Change assertEqual(True/False) to assertTrue/False * objects: adding a parent\_addr field to the PciDevice object * Add caching of service\_min\_versions in the
conductor * Scheduler: enforce max attempts at service startup * Fix unit tests on Mac OS X * Stop using mox stubs in nova.tests.unit.api.openstack.compute.test\_services * libvirt: add discard support for attached volumes * Remove DictCompat from CellMapping * Remove NovaObjectDictCompat from Aggregate * XenAPI: Cope with more Cinder backends * single point of entry for sample config generation * Remove Deprecated EC2 and ObjectStore impl/tests * libvirt: add realtime support * Imported Translations from Zanata * libvirt: update the min required version to 0.10.2 * Remove null AZ tests from API tests * Replace stubs.Set with stub\_out (functional tests) * Updated from global requirements * doc: minor corrections to the API version docco * Refactor \_load\_support\_matrix * Fix format conversion in libvirt snapshot * Fix format detection in libvirt snapshot * api: add soft-affinity policies for server groups * scheduler: fill RequestSpec.instance\_group.members * scheduler: add soft-(anti-)affinity weighers * Implements proper UUID format for compute/test\_stats\* * Add image signature verification * Convert nova.tests.unit.image.fake.stub\_out\_image\_service to use stub\_out * Block more flaky py34 tests * Replace deprecated library function os.popen() with subprocess * Remove mox and Stubs from tests/unit/pci/test\_manager.py * Correct the code description * Fix advice for new contribs * libvirt: better error for bad live migration flag * Add argument to support-matrix sphinx extension * Wrong URL reported by the run\_tests.sh message * Make use of 'InstanceNUMACell.cpu\_policy' field * Add 'cpu\_policy' and 'cpu\_thread\_policy' fields * Add 'CPUThreadAllocationPolicy' enum field * Blacklist flaky tests and add warning * Modify Scheduler RPC API to use RequestSpec obj * Implements proper UUID format for test\_compute\_mgr * Remove get\_lock method and policy action * libvirt: sort block\_device\_list in volume\_in\_mapping log * Stop explicitly running test discovery for py34 * introduce \`\`stub\_out\`\` method to base test class * Cleanup NovaObjectDictCompat from security\_group\_rule * Remove useless header that does not need microversion * Implements proper UUID format for test\_compute * Move Process and Mentoring pages to devref * Document restrictions for working on cells v1 * api-guide: add a doc on users * Assignment (from method with no return) removed * remove use of \_get\_regexes in samples tests * Improve 'virt' conf options documentation * config options: Centralise 'virt.hardware' options * Get list of disks to copy early to avoid multiple DB hits * Remove non-unicode bind param warnings * Fix typo, ReST -> REST * Wrong spelling of defined * libvirt: fix typo in test\_init\_host\_migration\_flags * docs: update refs to mitaka release schedule * doc: add how to arrange order of scheduler filters * libvirt: only get instance.flavor if needed in get\_disk\_mapping * Replace backtick with apostrophe in lazy-loading debug log * libvirt: fix TypeError in find\_disk\_dev\_for\_disk\_bus * Fix RPC revision log entry for 4.6 * signature\_utils: move to explicit image metadata * Unreferenced mocks are listed in the wrong order * remove API v1.1 from testing * remove /v1.1 from default paste.ini * libvirt: verify cpu bw policy capability for host * Implements proper UUID format for test\_compute\_cells and test\_compute\_utils * Add the missing return value in the comment * Updated from global requirements * xen: block BootableTestCase from py34 testing * Modify conductor to use RequestSpec
object * db: query to retrieve all pci devices by parent address * db: adding columns to PciDevice table * Replace except Exception with specific exception * pci: minor fix to exception message format * Python 3 deprecated the logger.warn method in favor of warning * Check added for mandatory parameter size in schema * Remove redundant driver initialization in test * enginefacade: 'instance\_metadata' * Misspelling in messages * Add lock to host-state consumption * Add lock to scheduler host state updating * Allow virt driver to define binding:host\_id * [python3] Webob request body should be bytes * Replace copy.deepcopy of RequestContext with copy.copy * DriverBlockDevice must receive a BDM object, not a dict * Misspelling in message * Wrong usage of "a" * Remove unused logging import and LOG global var * Reduce the number of db/rpc calls to get instance rules * Use is\_supported() to check microversion * SameHostFilter should fail if host does not have instances * VMware: add method for getting hosts attached to datastore * Trivial: Fix wrong comment in service version * signature\_utils: handle ECC curve unavailability * Updated from global requirements * tests: Remove duplicate check * enginefacade: 'bw\_usage', 'vol\_usage' and 's3\_image' * VMware: improve instance names on VC * VMware: add in folder support on VC * VMware: cleanup unit test global variable * signature\_utils: refactor the list of ECC curves * Nuke EC2 API from api-paste and remove wsgi support * Remove cruft for things o.vo handles * Make scheduler\_hints schema allow list of id * Change logging level for 'oslo\_db' * Remove unused compute\_api in ServerUsageController * network: Don't repopulate instance info cache from Neutron ports * Fix doc comment for get\_available\_resource * objects: lazy-load instance.security\_groups more efficiently * VMware: cleanup unit tests * Use SpawnIsSynchronousFixture in most unit tests * Use stub\_out and mock to remove mox: part 1 * Disable the in tree EC2 API by default * deprecate old glance config options * remove temporary GlanceEndpoint object * convert GlanceClientWrapper to endpoint * Use stub\_out and mock to remove mox: part 2 * Add a compute API to trigger crash dump in instance * Make libvirt driver return migrate data objects for source and dest checks * Use TimeFixture from oslo\_utils to override time in tests * enginefacade: 'vif' and 'task\_log' * review guide: add location details for config options * libvirt: add list\_guests wrapper to Host's object * remove vestigial XML\_NS\_V11 variable * remove unused EXTENSION\_DESERIALIZE\_\* constants * config options: Centralise 'virt.ironic' options * remove unused pipeline\_factory\_v3 alias * remove unused methods from integrated\_helpers test class * remove unused extends\_name attribute * Add upload/download vhd2 interfaces * Replace unicode with six.text\_type * conductor: fix unbound local variable request\_spec * Use just ids in all request templates for flavors/images * extract non instance methods * remove unused trigger\_handler * remove unused update\_dhcp\_hostfile\_with\_text method * remove nova-cert from most functional tests * enginefacade: 'migration' * XenAPI: Fix race in rotate\_xen\_guest\_logs * libvirt: introduce "pause" to Guest's object * libvirt: introduce "shutdown" to Guest's object * libvirt: introduce "snapshot" to Guest's object * libvirt: introduce thaw filesystems * libvirt: introduce freeze filesystems * libvirt: replace direct libvirt call to AbortJobBlock * Allow updating 'v2.1' links
in sample files * Do not update links for 'versions' tests * centralized conf: compute/ephemeral\_storage\_encryption * Add instance.save() when handling reboot in init instance * Add transitional support for migrate data objects to compute manager * Implements proper UUID format for few objects tests * Filter by leased=False when allocating fixed IPs * Increase information in nova-net warnings * docs: add concept guide for certificate * Fix reclaim\_instance\_interval < 0 never deleting instances completely * Updated from global requirements * Add placeholders for config options * Implements proper UUID format for the fake\_network * Refresh stale volume BDMs in terminate\_connection * Block requests 2.9.0 * Implements proper UUID format for the test\_compute\_api * Remove onSharedStorage from evacuate API * Fix CPU pinning for odd number of CPUs w hyperthreading * hardware: stop using instance cell topology in CPU pinning logic * Check context before returning cached value * deprecate run\_tests.sh * remove archaic references to XML in api * simplify the request / response format document * Add signature\_utils module * Remove XML description from extension concept * remove ctype from classes * Remove cells service from api samples that don't test cells * Add uuidsentinel test module * Remove the wrong usage of api\_major\_version in api sample tests * Updated from global requirements * Fix wrong method name in doc filter\_scheduler * doc: update threading.rst * Makes GET extension info sample tests run for v2 also * update api\_samples code to use better variables * Remove incorrect comments about file injection * Remove a restriction on injection files * Remove unnecessary log when searching servers * Deprecated tox -downloadcache option removed * rework warning messages for extension whitelist/blacklist * Make sure bdm.volume\_id is set after auto-creating volumes * Replace safe\_utils.getcallargs with inspect.getcallargs * Fix wrap\_exception to get all arguments for payload * Add hypervisor, aggregates, migration description * retool xen glance plugin to work with urls * always create clients with GlanceEndpoint * Implement GlanceEndpoint object * Clean up glance url handling * Use RequestSpec in the ChanceScheduler * Modify left filters for RequestSpec * Modify NUMA, PCI and num\_instances filters for RequestSpec * Improve inject\_nmi() in libvirt driver and add tests * Report compute-api bugs against nova * XenAPI: Expose labels for ephemeral disks * Fix use of safeutils.getcallargs * Cache SecurityGroupAPI results from neutron multiplexer * Remove the executable bit from several python files * Optimize \_cleanup\_incomplete\_migrations periodic task * [Py34] api.openstack.compute.legacy\_v2.test\_servers.Base64ValidationTest * [Py34] api.openstack.test\_faults.TestFaultWrapper * [Py34] Enable api.openstack.test\_wsgi unit test * default host to service name instead of uuid * Remove start\_service calls from the test case * Add SIGHUP handlers for compute rpcapi to console and conductor * Cache the automatic version pin to avoid repeated lookups * virt: allow for direct mounting of LocalBlockImages * Use testscenarios to set attributes directly * update API samples to use endpoints * Updated from global requirements * Add project-id and user-id when listing server-groups * Fixes Python 3 compatibility for filter results * Remove duplicate default=None for option compute\_available\_monitors * Disable IPv6 on bridge devices * Don't load deleted instances * Improve Filter Scheduler doc
clarity * libvirt: report pci Type-PF type even when VFs are disabled * Remove deprecated neutron auth options * Fix capitalization of IP * Add separate section for configuring guest os * Add separate section for extra specs and image properties * Add a note about fixing "db type could not be determined" with py34 * neutron: skip test\_deallocate\_for\_instance\_2\* in py34 job * tighten regex on objectify * Replace os.path.join() for URLs * Add hv testing for ImageMetaProps.\_legacy\_property\_map * Edit the text to be more native-English sounding * docs: add test strategy and feature classification * Fix the endpoint of /v2 on concept doc * Drop JSON decoding for supported\_instances * docs: update old stuff in version section * Scheduler: honor the glance metadata for hypervisor details * Implements proper UUID format for the ComputeAPITestCase * docs: add microversions description in the concept doc * Make admin consistent * Add more concepts for servers * Make "ReSTful service" consistent * Add retry logic for detaching device using LibVirt * Fix Exception message consistency with input protocol * Remove SQLite BigInteger/Integer translation logic * xen: Drop JSON for supported\_instances * vmware: Drop JSON for supported\_instances * ironic: Drop JSON for supported\_instances * hyperv: Drop JSON for supported\_instances * libvirt: Drop JSON for supported\_instances * Drop JSON for stats in virt API * Replaces izip\_longest with six.moves.zip\_longest * Fixes dict keys and items references for Python 3 * Scheduler: correct control flow when forcing host * Replaces longs with ints * neutron: only get port id when listing ports in validate\_networks * neutron: only list ports if there is a quota limit when validating * Add reviewing point related to REST API * Revert "Enable options for oslo.reports" * Fix wrong CPU metric value in metrics\_filter * Reset the compute\_rpcapi in Compute manager on SIGHUP * Remove the unused sginfo rootwrap filter * docs: ensure third party tests pass before +2 * Config options: centralize section "scheduler" * add api-samples tox target * Remove Instance object flavor helper methods only used in tests * Remove unnecessary extra instance saves during resize * docs: using the correct format and real world example for fault message * VMware: cleanup ExtraSpecs * Remove HTTPRequestEntityTooLarge usage in test * Enables py3 unit tests for libvirt.host module * Replaces \_\_builtin\_\_ with six.moves.builtins * Converting nova.virt.hyperv to py3 * Hyper-V: removes \*Utils modules and unit tests * docs: update services description for concept guide * docs: remove duplicated section about error handling * Remove useless element in migrate\_server schema * Optimize "open" method with context manager * trivial: Add some logs to 'numa\_topology\_filter' * Updated from global requirements * Docs: update the concept guide for Host topics * Cleanup of compute api reboot method * Hyper-V: adds os-win library * Remove description about image from faults section * api-guide: add note about users * Updated from global requirements * xenapi: Add helper function and unit tests for client session * Config options: centralize section "scheduler" * Ironic: Workaround to mitigate bug #1341420 * Libvirt: Support fp plug in vhostuser vif * Remove version from setup.cfg 13.0.0.0b1 ---------- * Add note for automatic determination of compute\_rpc version by service * Add note for Virtuozzo supporting snapshots * Add note for HyperV 2008 drop of support * Imported Translations from
Zanata * Add note for removing conductor RPC API v2 * Add note for dropping InstanceV1 objects * Add note for force\_config\_drive opt change * Add note for deprecating local conductor * Revert "Detach volume after deleting instance with no host" * force releasenotes warnings to be treated as errors * Fix reno warning for API DB relnote * Adding a new vnic\_type for Ironic/Neutron/Nova integration * Use o.vo DictOfListOfStringsField * libvirt: remove todo note not useful anymore * Modify metric-related filters for RequestSpec * Modify default filters for RequestSpec * servicegroup: stop zombie service due to exception * Add persistence to the RequestSpec object * Updated from global requirements * add hacking check for config options location * Correct some nits for moving servers in concept doc * use graduated oslo.policy * TrivialFix: remove 'deleted' flag * Make server concept guide use 'server' consistently * api-guide: fix up navigation bar * Use version convert methods from oslo\_utils.versionutils * docs: reorder move servers text * docs: add clarifications to move servers * Change some wording on server\_concepts.rst * Cleanup unused test code in test\_scheduler.py * Modify Aggregate filters for RequestSpec * Add code-review devref for release notes * Hyper-V: refines the exceptions raised in the driver * Use o.vo FlexibleBooleanField * docs: describe migration and other movement concepts * Double 'an' in message * Unify on \_schedule\_instances * Add review guideline to microversion API * Remove the TestRemoteObject class * Catch FixedIpNotFoundForAddress when creating server * doc: add server status to concept.rst * docs: update the concept guide shelve actions * Fixed incorrect name of 'tag' and 'tag-any' filters * Fix resource tracker VCPU counting * Add relnote for change in default setting * use NoDBTestCase for KeypairPolicyTest * doc: change policies.rst to indicate API links * Remove useless code in \_poll\_volume\_usage function * Neutron: add logging context * Remove unused param of CertificatesController * Add user data into general concept * Fix a typo in api-guide doc * Make some classes inherit from NoDBTestCase * XenAPI: Workaround for 6.5 iSCSI bug * NFS setup for live-migration job * Fix ebtables-version release note * config options: enhance help text of section "serial\_console" * Updating nova config-reference doc * Updated from global requirements * Prevent redundant instance.update notifications * VMware: fix docstring for cluster management * api: remove re-declared type in migrate schema * enginefacade: 'agent' and 'action' * config options: centralize section "serial\_console" * Replaced private field in get\_session/engine with public method * SR-IOV: Improve the vnic type check in the neutron api * Simplified boolean variable check * update connect\_volume test * Enable options for oslo.reports * Reverse sort tables before archiving * scheduler: fix incorrect log message * Updated from global requirements * Add release note for API DB migration requirements * Replaced deprecated timeutils methods * Multinode job for live-migration * Use o.vo VersionPredicateField * Use flavor instead of flavour * Corrected a few grammatical nitpicks * Add more 'actions' for server concepts doc * libvirt: mlnx\_direct vif type removal * xen: mask passwords in volume connection\_data dict * Updated from global requirements * Use --concurrent with ebtables * Removed extra spaces from double line strings * Change test function name to make more sense * Change Invalid exception to
a specified exception * Add 'lxd' to the list of recognized hypervisors * Add microversions schema unit test for None * Clean up legacy multi-version test constructs * Fix Nova's indirection fixture override * Remove skips for resize tests from tempest-dsvm-cells-rc * Modify Affinity filter for RequestSpec * Prepare filters for using RequestSpec object * Use ServiceList object rather than direct db call * Add relnote for ERT deprecation * Remove IN-predicate warnings * docs: update the API faults concept guide * Deprecate nova-manage service subcommand * Double detach volume causes server fault * Use JSON format instead of json format * Network: add in missing translation * cells is a sad panda about scheduler hints * VMware: expand support for Opaque networks * Fix is\_volume\_backed\_instance() for unset image\_ref * Add \_LE to LOG.error statement in nova/service * Add service records for nova-api services * Added method is\_supported to check API microversions * enginefacade: 'host\_mapping' * Removes support for Hyper-V Server 2008 R2 * Fix the bug of "Error spelling of 'explicitely'" * Claims: fix log message * Fix paths for api-guide build * Remove flavors.get\_flavor() only used in tests * VMware: Raise DiskNotFound for missing disk device * Remove two unneeded db lookups during delete of a resizing instance * Fix pci\_stats logging in resource tracker * live-mig: Mark migration as failed on fail to schedule * Move the Migration set-status-if-exists pattern to a method * Don't track migrations in 'accepted' state * live-migrate: Change the status Migration is created with * compute: split check\_can\_live\_migrate\_destination * Replace N block\_device\_mapping queries with 1 * Add "unreleased" release notes page * Add reno for release notes management * XenAPI: Correct hypervisor type in Horizon's admin view * Fix typo in test\_post\_select\_populate * Rearranges to create new Compute API Guide * Added CORS support to Nova * Aggregate Extra Specs Filter should return if extra\_specs is empty * cells: skip 5 networking scenario tests that use floating IPs * force\_config\_drive: StrOpt -> BoolOpt * Updated from global requirements * Add test coverage for both types of not-found-ness in neutronclient for floating * Fix impotent \_poll\_shelved\_instances tests * Fix race in \_poll\_shelved\_instances task * Handle a NeutronClientException 404 Error for floating ips * Handle DB failures in servicegroup DB driver * Hook for live-migration job * Omit RescheduledException in instance\_fault.message * Remove duplicate server.kill on test shutdown * make the driver.Scheduler an abstract class * Fix a spelling mistake in the log * objects: remove remote\_object\_calls from \_BaseTestCase * Repair and rename test\_is\_volume\_backed\_instance\_no\_bdms() * Use ObjectVersionChecker fixture from oslo.versionedobjects * VMware: add in vif resource limitations * Untie subobject versions * Block oslo.messaging 2.8.0 * Split up test\_is\_volume\_backed\_instance() into five functions * Avoid the dual-naming confusion * enginefacade: 'provider\_fw', 'console\_pool' and 'console' * enginefacade: 'network' * clean up regex in tempest-dsvm-cells-rc * skip lock\_unlock\_server test for cells * ScalityVolume: fix how remote FS mount is detected * OpenStack typo * Remove duplicate keys in policy.json * Add missing policy rules * devref: Don't suggest decorating private method * VMware: use a constant for 'iscsi' * Config drive: make use of an instance object * Fix attribute error when cloning raw images in
Ceph * Properly log BlockDeviceMappingList in \_create\_block\_device\_mapping * Exclude all BDM checks for cells * glance: add helper method to get client version * enginefacade: 'dnsdomain' and 'ec2' * enginefacade: 'certificate' and 'pci\_device' * enginefacade: 'key\_pair' and 'cell' * enginefacade: 'instance\_mapping' * enginefacade: 'cell\_mapping' * enginefacade: 'instance\_info' and 'instance\_extra' * Use EngineFacade from oslo\_db.enginefacade * VMware: fix trivial indentations * Remove flavors.get\_all\_flavors() only used in tests * Make lock policy default to admin or owner * libvirt: Fix a typo in test cases * Deprecate local conductor mode * Deprecate Extensible Resource Tracker * Change image to instance in comment * VMware: use oslo\_config new type PortOpt * Remove vcpu resource from extensible resource tracker * Add logging to snapshot\_volume\_backed method * Remove unnecessary destroy call from Ironic virt driver * cells: add debug logging to bdm\_update\_or\_create\_at\_top * Drop Instance v1.x support * Check prefix with startswith() instead of slicing * Add debug logging for when boot sequence is invalid in \_validate\_bdm * remove the redundant policy check for SecurityGroupsOutputController * virt: add constraint to handle realtime policy * libvirt: add cpu scheduler priority config * libvirt: rework membacking config to support future features * Do not mask original spawn failure if shutdown\_instance fails * Point to cinder options in nova block alloc docs * Fix booting failure with unlimited project quota * Remove useless get\_instance\_faults() * Remove "Can't resolve label reference" warnings * Remove reservation\_id from the logs when a schedule fails * Use RequestSpec object in HostManager * Use RequestSpec object in the FilterScheduler * Add ppcle architectures to libvirt blockinfo * Deprecated: failIf * Imported Translations from Zanata * Remove obj\_relationships from objects * Delete dead test code * Add tempest-dsvm-lxc-rc * Mark set-admin-password as complete for libvirt in support matrix * Hypervisor support matrix: define pause & unpause * Revert "Implement online schema migrations" * Fix the os-extended-volumes key reference in the REST API history docs * Remove get\_all method from servicegroup API * Remove SoftDeleteMixin from NovaBase * libvirt: support snapshots with parallels virt\_type * Use oslo.config choices kwarg with StrOpt for servicegroup\_driver * Imported Translations from Zanata * Add -constraints sections for CI jobs * Add "vnc" option group for sample nova.conf file * Updated from global requirements * Expands python34 unit tests list * Fix missing obj\_make\_compatible() for ImageMetaProps object * Fix error handling in nova.cmd.baseproxy * Change 'ec2-api' stackforge url to openstack url * Fixes Python 3 str issue in ConfigDrive creation * Revert "Store correct VirtCPUTopology" * Enable all extensions for image API sample tests * Add tags to .gitignore * Updated from global requirements * Add a nova functional test for the os-server-groups GET API with all\_projects parameter * Image meta: treat legacy vmware adapter type values * Attempt rollback live migrate at dest even if network dealloc fails * hacking check for contextlib.nested for py34 support * Print number of rows archived per table in db archive\_deleted\_rows * Updated from global requirements * Fix more inconsistency between Nova-Net and Neutron * Fix metadata service security-groups when using Neutron * Remove redundant deps in tox.ini * Add some tests for map\_dev *
Clean up tests for dropping obj\_relationships * Fix up Service object for manifest-based backports * Fix service\_version minimum calculation for compute RPC * docs: add the scheduler evolution plans * Revert "virt: Use preexec\_fn to ulimit qemu-img info call" * Updated from global requirements * Ensure Glance image 'size' attribute is 0, not 'None' * Ignore errorcode=4 when executing \`cryptsetup remove\` command * libvirt: Don't attempt to convert initrd images * Revert "Fixes Python 3 str issue in ConfigDrive creation" * Monkey patch nova-ec2 api * Compute: remove unused parameter 12.0.0 ------ * Omnibus stable/liberty fix * Drop outdated sqlite downgrade script * Updated from global requirements * Fix Status-Line in HTTP response * Imported Translations from Zanata * Default ConvertedException code to 500 * Updated from global requirements * VMware: fix bug for config drive when inventory folder is used * Fix a typo * code-review guidelines: add checklist for config options * Add a code-review guideline document * virt: Use preexec\_fn to ulimit qemu-img info call * Clean up some InstanceV1 stuff in the tests * Updated from global requirements * Replaces contextlib.nested with test.nested * Sync cliutils from oslo-incubator * Make archive\_deleted\_rows\_for\_table private 12.0.0.0rc2 ----------- * load consoleauth\_topic option before using it * Revert "[libvirt] Move cleanup of imported files to imagebackend" * Add more documentation for RetryFilter * Fix InstanceV1 backports to use context * Imported Translations from Zanata * Add test of claim context manager abort * Log DBReferenceError in archive\_deleted\_rows\_for\_table * Use DBReferenceError in archive\_deleted\_rows\_for\_table * Add testresources used by oslo.db fixture * Remove unused context parameter from db.archive\_deleted\_rows\* methods * xenapi\_device\_id integer, expected string * Fix InstanceV1 backports to use context * Drop unused obj\_to\_primitive() override * Updated from global requirements * libvirt: remove unnecessary else in blockinfo.get\_root\_info * Make test cases in test\_test.py use NoDBTest * XenAPI: Fix unit tests for python34 * docs: re-organise the API concept docs * VMware: specify chunk size when reading image data * Make ConsoleauthTestCase inherit from NoDBTest * Change a test class of consoleauth to no db test * Imported Translations from Zanata * Catch 3 InvalidBDM related exc when booting instance * Move create vm states to svg diagram * Ironic: Fix bad capacity reporting if instance\_info is unset * Revert "[libvirt] Move cleanup of imported files to imagebackend" * Honor until\_refresh config when creating default security group * remove sphinxcontrib-seqdiag * [Py34] nova.tests.unit.api.openstack.test\_common * [Py34] Enable api.openstack.test\_mapper unit test * [Py34] Enable test\_legacy\_v2\_compatible\_wrapper * Extend the ServiceTooOld exception with more data * Make service create/update fail if version is too old * Allow automatic determination of compute\_rpc version by service * Add get\_minimum\_version() to Service object and DB API * Correct memory validation for live migration * devref: error message changes don't need a microversion * Replace f.func\_name and f.func\_code with f.\_\_name\_\_ and f.\_\_code\_\_ * Imported Translations from Zanata * Add a note about the 500->404 not requiring a microversion * Ensure Nova metrics are derived from a set of metrics * Updated from global requirements * Fixes Python 3 str issue in ConfigDrive creation * Make secgroup rules refresh with
refresh\_instance\_security\_rules() * Remove unused refresh\_security\_group\_members() call * Imported Translations from Zanata * Check DBReferenceError foreign key in Instance.save * Fix Instance unit test for DBReferenceError * Ironic: Fix bad capacity reporting if instance\_info is unset * libvirt: check if ImageMeta.disk\_format is set before accessing it * libvirt: check if ImageMeta.disk\_format is set before accessing it * Rollback is needed if initialize\_connection times out * Updated from global requirements * Add Pillow to test-requirements.txt * VMware: raise NotImplementedError for live migration methods * xapi-tools: fixes cache cleaner script * Cleanup of Translations * Add Pillow to test-requirements.txt * Update rpc version aliases for liberty * Remove orphaned code related to extended\_volumes * Add checkpoint logging when terminating an instance * Add checkpoint logging when building an instance in compute manager * Removed unused method from compute/rpcapi * Remove unused read-only cell code * Change warn to debug logs when migration context is missing * Use os-testr for py34 tox target * Add sample config file to nova docs * Remove lazy-loading property compute\_task\_api from compute api * Remove conductor 2.x RPC API * Reserve 10 migrations for backports * Use StrOpt's parameter choices to restrict option auth\_strategy * vmware: set default value in fake \_db\_content when creating objects * Avoid needless list copy in 'scheduler\_host\_subset\_size' case * libvirt: Log warning for wrong migration flag config options * Slightly better translation friendly formatting * Identify more py34 tests that already pass * rebuild: Apply migration context before calling the driver * hardware: improve parse\_cpu\_spec to handle exclusion range * Correct Instance type check to work with InstanceV1 * Imported Translations from Zanata * Correct Instance type check to work with InstanceV1 * Only create volumes with instance.az if cinder.cross\_az\_attach is False * Fix the help text of monkey\_patch config param * Rollback of live-migration fails with the NFS driver * Set TrustedFilter as experimental * doc: gmr: Update instructions to generate GMR error reports * rebuild: Apply migration context before calling the driver * Fix MetricWeigher to use MonitorMetricList * VMware: update log to be warning * Add more help text to the cinder.cross\_az\_attach option * Cleanup of Translations * Revert "Deprecate cinder.cross\_az\_attach option" * Fix some spelling typos in manual * Fix NoneType error when calling MetricsWeigher * wsgi: removing semicolon * Fix logging\_sample.conf to use oslo\_log formatter * Remove unused \_check\_string\_length() * Deprecate cinder.cross\_az\_attach option * Neutron: update cells when saving info\_cache * Fix MetricWeigher to use MonitorMetricList 12.0.0.0rc1 ----------- * Imported Translations from Zanata * Detach volume after deleting instance with no host * Remove unnecessary call to info\_cache.delete * Filter leading/trailing spaces for name field in v2.1 compat mode * Give instance default hostname if hostname is empty * If rescue fails, set instance to ERROR * Add some devref for AZs * Change parameter name in utility function * RT: track evacuation migrations * rebuild: RPC sends additional args and claims are done * Cells: Limit instances pulled in \_heal\_instances * Open Mitaka development * Fix order of arguments in assertEqual * devref: update the nova architecture doc * Imported Translations from Zanata * Fix quota update in
init\_instance on nova-compute restart * net: explicitly set mac on linux bridge * live-migration: Logs exception if operation failed * libvirt: add unit tests for the designer utility methods * Add test cases for some classes in objects.fields * Change ignore-errors to ignore\_errors * libvirt: fix direct OVS plugging * claims: move a debug msg to a warn on missing migration * Fix order of arguments in assertEqual * Remove duplicate VALID\_NAME\_REGEX * Pep8 didn't check api/openstack/common.py * Updated from global requirements * libvirt: Add unit tests for methods * Devref: Document why conductor has a task api/manager * Imported Translations from Zanata * Fix nova configuration options description * libvirt: on snapshot delete, use qemu-img to blockRebase if VM is stopped * Allow filtering using unicode characters * Updated from global requirements * Imported Translations from Zanata * Test both NoAuthMiddleware and NoAuthMiddlewareV3 * Remove redundant variable 'context' * Add 'OS-EXT-VIF-NET:net\_id' for v21 compatible mode * libvirt: Add NUMA cell count to cpu\_info * Xenapi: Don't access image\_meta.id when booting from a volume * Imported Translations from Zanata * Fix typo in HACKING.rst * Remove comment in wrong place * Fix string formatting in api/metadata/vendordata\_json.py * Raise exception.Migration earlier in REST API layer * Remove "shelved\_image\_id" key from instance system metadata * Only set access\_ip\_\* when instance goes ACTIVE * VMware: fix typo in comment * RT: Migration resource tracking uses migration context * compute: migrate/resize paths properly handle stashed numa\_topology * Claims: Make sure move claims create a migration context record * libvirt: update live\_migration\_monitor to use Guest * VMware: create method for getting datacenter from datastore * Use APIRouterV21 instead of APIRouterV3 for v2.1 unittests * Remove TestOpenStackClientV3 from nova functional tests * Rename all the ViewBuilderV3 to ViewBuilderV21 * libvirt: Split out resize\_image logic from create\_image * Reuse method to convert key to passphrase * Creating instance fails when injecting ssh key in cells mode * Fix the usage output of the nova-idmapshift command * Make test\_revoke\_cert\_project\_not\_found\_chdir\_fails deterministic * Reduce the number of Instance.get\_by\_uuid calls * Remove 'v3' from comments in Nova API code * xapi: cleanup volume sr on live migration rollback * Hyper-V: Implements attach\_interface and detach\_interface method * Remove unnecessary 'context' param from quotas reserve method call * VMware: Replace get\_dynamic\_properties with get\_object\_properties\_dict * VMware: Replace get\_dynamic\_property with get\_object\_property * Return empty PciDevicePoolList obj instead of None * libvirt: add debug logging for lxc teardown paths * Add API schema for different\_cell filter * Add microversion bump exception for scheduler-hint * Use six.text\_type instead of str in serialize\_args * Set vif and allocated when associating fixed ip * Fix ScaleIO commands in rootwrap filters * Add missing information to docstring * Add microversion rule when adding attr to request * Check unknown event name when creating external server event * Don't expect meta attributes in object\_compat that aren't in the db obj * CONF.allow\_resize\_to\_same\_host should check only once in controller * Updated from global requirements * Fix debug log format in object\_backport\_versions() * Add version 3.0 of conductor RPC interface * Remove and deprecate conductor object\_backport() *
Invalidate AZ cache when the instance AZ information is different * Consolidate code to get the correct availability zone of an instance * Fix order of arguments in assertEqual * Ironic: Call unprovision for nodes in DEPLOYING state * libvirt: use guest as parameter for get serial ports * Separate API schemas for v2.0 compatible API * api: allow any scheduler hints * API: Handle InstanceUnknownCell exceptions * Updated from global requirements * Add some explanation for the instance AZ field * Remove 'v3' from extension code * Remove more 'v3' references from the code * Sorting and pagination params used as filters * Freeze v1 Instance and InstanceList schema hashes * Imported Translations from Transifex * Remove unused parameter overwrite in elevated * Add missing delete policies in the sample file * Fix a few typos * ironic: convert driver to use nova.objects.ImageMeta * objects: convert config drive to use ImageMeta object * VMware: ensure that instance is deleted when volume is missing * libvirt: Rsync compression removed * xenapi: Support extra tgz images with only a single VHD * Hyper-V: Fixes snapshotting nonexistent VM issue * Hyper-V: Adds RDPConsoleOps unit tests * Rectify spelling mistake in nova * libvirt: Add a finish log * Remove old unused baremetal rootwrap filters * Relax restrictions on server name * network\_request\_obj: Clean up outdated code * Object: Fix KeyError when loading instance from db * Add os-brick's scsi\_id command to rootwrap * Expose keystoneclient's session and auth plugin loading parameters * Remove and deprecate conductor compute\_node\_create() * Drop unused conductor manager vol\_usage\_update() mock * Add constraint target to tox.ini * nova-net: fix missing log variable in deallocate\_fixed\_ip * Provide working SQLA\_VERSION attribute * Don't "lock" the DB on expand dry run * New sensible network bandwidth quota values in Nova tests * Fix Cells gate test by modifying the regressions regex * Add functional test for server group * Reject cell names that include '!', '.'
and '@' for Nova API * Hyper-V: Adds HyperVDriver unit tests * claims: Remove compat code with instance dicts * Add Instance and InstanceList v2.0 objects * Teach conductor to do manifest-based object\_class\_action() things * Make the conductor fixture use version manifests * Update objects test infrastructure for multiple versions * Refactor Instance tests to use objects.Instance * Fix an issue with NovaObjectRegistry hook * Pull out the common bits of InstanceList into \_BaseInstanceList * Pull out the common bits of Instance into \_BaseInstance * Clarify max\_local\_block\_devices config option usage * Allow autodetection of volume device path * Remove the blacklisted nova-cells shelve tests * Update from global requirements * objects: Hook migration object into Instance * Fix order of arguments in assertEqual * Fix order of arguments in assertEqual * Detach and terminate conn if Cinder attach fails * [libvirt] Move cleanup of imported files to imagebackend * hyperv: convert driver to use nova.objects.ImageMeta 12.0.0.0b3 ---------- * Add notes explaining vmware's suds usage * Adds instance\_uuid index for instance\_system\_metadata * Handle nova-compute failure during a soft reboot * Fix mistake in UT: test\_detach\_unattached\_volume * Fix RequestSpec.instance\_group hydration * Remove unused root\_metadata method of BlockDeviceMappingList * Add JSON-Schema note to api\_plugins.rst * Compute: update finish\_revert\_resize log to have some context * Revert "Remove references to suds" * Fix API directories on the doc * Fix incomplete error message of quota exceeded * Add secgroup param checks for Neutron * Implement manifest-based backports * Delete orphaned instance files from compute nodes * Fixed incorrect keys in cpu\_pinning * api: deprecate the api v2 extension configuration * Remove the v3 word from help message of api\_rate\_limit option * Use the same pci\_requests field for all filters and HostManager * objects: Add MigrationContext object * Don't query database with an empty list of tags for creation * Remove duplicate NullHandler test fixture * Add migration policy to upgrades devref * Add warning log when deprecated v2 and v3 code get used * Update ComputeNode values with allocation ratios in the RT * Update HostManager and filters to use ComputeNode ratios * Add cpu\_allocation\_ratio and ram\_allocation\_ratio to ComputeNode * VMware: adds support for rescue image * filter pre\_assigned\_dev\_names when finding disk dev * Fix order of arguments in assertEqual * Fix order of arguments in assertEqual * Fix order of arguments in assertEqual * rt: Rewrite abort and update\_usage tests * Cleanup RT \_instance\_in\_resize\_state() * Compute: be consistent with logs about NotImplemented methods * VMware: pass network info to config drive * Remove/deprecate conductor instance\_update() * Make compute manager instance updates use objects * xenapi: add necessary timeout check * Fix permission issue of server group API * Make query to quota usage table order preserved * Change v3 to v21 for devref api\_plugins.rst * Remove duplicate exception * Don't trace on InstanceInfoCacheNotFound when refreshing network info\_cache * Cells: Improve block device mapping update/create calls * Rm openstack/common/versionutils from setup.cfg * Add a warning in the microversion docs around the usage of 'latest' * Fix exception message mistake in WSGI service * Replace "vol" variable by "bdm" * Remove v3 references in unit test 'contrib' * Removed unused dependency: discover * Rename tests so
that they are run * Adds unit tests to test\_common.py * db: Add the migration\_context to the instance\_extra table * tests: Make test\_claims use Instance object * api: use v2.1 only in api-paste.ini * VMware: Update to return the correct ESX iqn * Pass block\_device\_info when deleting an encrypted lvm * Handle neutron exception on bad floating ip create request * API: remove unused parameter * Consider that all scheduler calls are IO Ops * Add RequestSpec methods for primitiving into dicts * Add a note about the 400 response not requiring a microversion * api: deprecate the concept of extensions in v2.1 * Fix precedence of image bdms over image mappings * Cells: remove redundant check if cells are enabled * Strip the extra properties out when using legacy v2 compatible middleware * Remove unused sample files from /doc dir * Expose VIF net-id attribute in os-virtual-interfaces * libvirt: take account of disks in migration data size * Add deprecated\_for\_removal param for deprecated neutron\_ops * Use compatibility methods from oslo * compute: Split the rebuild\_instance method * Allow for migration object to be passed to \_move\_claim * rt: move filtering of migration by type lower in the call stack * rt: generalize claim code to be useful for other move actions * libvirt: make guest return power state * libvirt: move domain info to guest * Xen: import migrated ephemeral disk based on previous size * cleanup NovaObjectDictCompat from external\_event * cleanup NovaObjectDictCompat from agent * Catch invalid id input in service\_delete * Convert percent metrics back into the [0, 1] range * Cleanup for merging v2 and v2.1 functional tests * Remove doc/source/api and doc/build before building docs * Fixes a typo on nova.tests.unit.api.ec2.test\_api.py * Add a note about the 403 response not requiring a microversion * Pre-load expected attrs that the view builder needs for server details * Remove 'Retry-After' in server create and resize * Remove debug log message in SG API constructor * Updated from global requirements * Refactor test cases for live-migrate error case * Fixes Bug "destroy\_vm fails with HyperVException" * libvirt: refactor \_create\_domain\_setup\_lxc to use Image.get\_model * Set task\_state=None when booting instance failed * libvirt: Fix snapshot delete for network disk type for blockRebase op * [Ironic] Don't count available resources of deployed ironic node * Catch OverQuota in volume create function * Don't allow instance to overcommit against itself * n-net: add more debug logging to release\_fixed\_ip * Fix scheduler code to use monitor metric objects * objects: add missing enum values to DiskBus field * Move objects registration in tests directory * xenapi: convert driver to use nova.objects.ImageMeta * libvirt: convert driver to use nova.objects.ImageMeta * Updated from global requirements * VMware: Delete vmdk UUID during volume detach * Move common sample files methods in test base class * Share server POST sample file for microversion too * Fix remote\_consoles microversion 2.8 not to run on /v3 * Remove merged sample tests and file for v2 tests * Move "versions" functional tests in v2.1 tests * Nil out inst.host and inst.node when build fails * Fix link's href to consider osapi\_compute\_link\_prefix * Fix abnormal quota usage after restore by admin * Specify current directory using new cwd param in processutils.execute * Remove and deprecate unused conductor method vol\_usage\_update() * Replace conductor proxying calls with the new VolumeUsage object * Add a
VolumeUsage object * Updated from global requirements * Move CPU and RAM allocation ratios to ResourceTracker * Pull the all\_tenants search\_opts checking code into a common utility * Gate on nova.conf.sample generation * libvirt: use proper disk\_info in \_hard\_reboot * Update obj\_reset\_changes signatures to match * libvirt: only get bdm in \_create\_domain\_setup\_lxc if booted from volume * libvirt: \_create\_domain\_setup\_lxc needs to default disk mapping as a dict * libvirt: add docstring for \_get\_instance\_disk\_info * Add rootwrap daemon mode support * Removed duplicated keys in dictionary * Xenapi: Correct misaligned partitioning * libvirt: Remove duplicated check code for config option sysinfo\_serial * Test cases for better handling of SSH key comments * Allow compute monitors in different namespaces * cleanup NovaObjectDictCompat from hv\_spec * cleanup NovaObjectDictCompat from quota * Correct a wrong docstring * Create RequestSpec object * Clarify API microversion docs around handling 500 errors * libvirt: Fix KeyError during LXC instance boot * Xenapi: Handle missing aggregate metadata on startup * Handle NotFound exceptions while processing network-changed events * Added processing /compute URL * libvirt: enable live migration with serial console * Remove the useless require\_admin\_context decorator * Correct expected error code for os-resetState action * libvirt: add helper methods for getting guest devices/disks * compute: improve exceptions related to disk size checks * Improve error logs for start/stop of locked instance * pci: Remove nova.pci.device module * pci: Remove objects.InstancePCIRequests.save() * Remove unused db.security\_group\_rule\_get\_by\_security\_group\_grantee() * Revert "Make nova-network use conductor for security groups refresh" * Make compute\_api.trigger\_members\_refresh() issue a single db call * Fix cells use of legacy bdms during local instance delete operations * Hyper-V: Fixes serial port issue on Windows Threshold * Consolidate initialization of instance snapshot metadata * Fix collection of metadata for a snapshot of a volume-backed instance * Remove unnecessary ValueError exception * Update log's level when backing up a volume-backed instance * The API unit tests for serial console use http instead of ws * Drop scheduler RPC 3.x support * Move quota delta reserve methods from api to utils * nova.utils.\_get\_root\_helper() should be public * Host manager: add in missing log hints * Removing extension "OS-EXT-VIF-NET" from v2.1 extension-list * nova-manage: fix typo in docstring about managing * hyper-v: mock time.sleep in test\_rmtree * Remove tie between system\_metadata and extra.flavor * Fixes Hyper-V boot from volume fails when using ephemeral disk * Re-write way of comparing APIVersionRequests * Store "null api version" as 0.0 * add docstring to virt driver interface (as-is) [1 of ?]
* Remove last of the plugins/v3 from unit tests * Rename classes containing 'v3' to 'v21' * Move the v2 api\_sample functional tests * Updated from global requirements * Add logging when filtering returns nothing * libvirt: cleanup() serial\_consoles after instance failure * Don't query database with an empty list of tags for IN clause * Libvirt: Make live\_migration\_bandwidth help msg more meaningful * Move V2.1 API unittest to top level directory * Neutron: Check port binding status * Move legacy v2 api sample tests * conductor: update comments for rpc and use object * Load flavor when getting instances for simple-tenant-usage * Make pagination tolerate a deleted marker * Updated from global requirements * Cleanup HTTPRequest for security\_groups test * Add api samples impact to microversion devref * Use min and max on IntOpt option types * Add hacking check for eventlet.spawn() * Updated from global requirements * neutron: filter None port\_ids from ports list in \_unbind\_ports * VMware: treat deletion exception with attached volumes * VMware: ensure that get\_info raises the correct exception * Allow resize root\_gb to 0 for volume-backed instances * Limit parallel live migrations in progress * Validate quota class\_name * Move V2 API unittests under legacy\_v2 directory * Updated from global requirements * Replace get\_cinder\_client\_version in cinder.py * Avoid querying for Service in resource tracker * Remove/deprecate unused parts of the compute node object * Make ComputeNode.service\_id nullable to match db schema * Add missing rules in policy.json * Add V2.1 API tests parity with V2 API tests * Fixed indentation * Simplify interface for creating snapshot of volume-backed instance * Add instance action events for live migration * Remove 'v3' directory for v2.1 json-schemas * Move v2.1 code to the main compute directory - remove v3 step3 * libvirt: qemu-img convert should be skipped when migrating * Add version counter to Service object * Fix the peer review link in the 'Patches and Reviews' policy section * Handle port delete initiated by neutron * Don't check flavor disk size when booting from volume * libvirt: make instance compulsory in blockinfo APIs * xapi: ensure pv driver info is present prior to live-migration * Move existing V2 to legacy\_v2 - step 2 * Move existing V2 to legacy\_v2 * Return v2 version info with v2 legacy compatible wrapper * Ironic: Add numa\_topology to get\_available\_resource return values * Fix three typos on nova/pci directory * Imported Translations from Transifex * pci: Use PciDeviceList for PciDevTracker.pci\_devs * pci: Remove get\_pci\_devices\_filter() method * pci: Move whitelist filtering inside PCI tracker * libvirt: call host.get\_capabilities after checking for bad numa versions * libvirt: log when BAD\_LIBVIRT\_NUMA\_VERSIONS detected * Use string substitution before raising exception * Hyper-V: deprecates support for Windows / Hyper-V Server 2008 R2 * VMware: Do not untar OVA on the file system * Add hacking check for greenthread.spawn() * Ironic: Use ironicclient native retries for Conflict in ClientWrapper * Prevent (un)pinning unknown CPUs * libvirt: use instance UUID with exception InstanceNotFound * Fix notify\_decorator errors * VMware: update supported vsphere 6.0 os types * libvirt: convert Scality vol driver to LibvirtBaseFileSystemVolumeDriver * libvirt: convert Quobyte driver to LibvirtBaseFileSystemVolumeDriver * pci: Use fields.Enum type for PCI device type * pci: Use fields.Enum type for PCI device status * More specific
error messages on building BDM * Ensure test\_models\_sync() works with new Alembic releases * Hyper-V: Adds VolumeOps unit tests * Hyper-V: Adds MigrationOps unit tests * Suppress non-image properties for image metadata from volume * Add non-negative integer and float fields * Fix DeprecationWarning when using BaseException.message * Added support for specifying units to hw:mem\_page\_size * Compute: use instance object for refresh\_instance\_security\_rules * libvirt: convert GPFS volume driver to LibvirtBaseFileSystemVolumeDriver * Updated from global requirements * Add os-brick based LibvirtVolumeDriver for ScaleIO * docs: add link to liberty summit session on v2.1 API * Refactor unit test for InstanceGroup objects * Don't pass the service catalog when making glance requests * libvirt: check min required qemu/libvirt versions on s390/s390x * libvirt: ensure LibvirtConfigGuestDisk parses readonly/shareable flags * libvirt: set caps on maximum live migration time * libvirt: support management of downtime during migration * cleanup NovaObjectDictCompat from numa object * Fix test\_relationships() for subobject versions * libvirt: don't open connection in driver constructor * Skip SO\_REUSEADDR tests on BSD * \_\_getitem\_\_ method not returning value * Compute: replace incorrect instance object with dict * Fix live-migrations usage of the wrong connector information * Honour nullability constraints of Glance schema in ImageMeta * Change docstring in test to comment * libvirt: convert GlusterFS driver to LibvirtBaseFileSystemVolumeDriver * libvirt: convert SMBFS vol driver to LibvirtBaseFileSystemVolumeDriver * libvirt: convert NFS volume driver to LibvirtBaseFileSystemVolumeDriver * Introduce LibvirtBaseFileSystemVolumeDriver * Add test to check relations at or below current * Add documentation for the nova-cells command * libvirt: Rsync remote FS driver was added * Clean the deprecated noauth middleware * Add os\_brick-based VolumeDriver for HGST connector * libvirt: add os\_admin\_user to use with set admin password * Fixed incorrect behaviour of method \_check\_instance\_exists * Squashing down update method * Fix the wrong file name for legacy v2 compatible wrapper functional test * Add scenario for API sample tests with legacy v2 compatible wrapper * Skip additionalProperties checks when LegacyV2CompatibleWrapper enabled * Libvirt: correct libvirt reference url link when live-migration failed * libvirt: enable virtio-net multiqueue * Replacing unichr() with six.unichr() and reduce with six.moves.reduce() * Fix resource leaking when consume\_from\_instance raises exception * Add documentation for the nova-idmapshift command * RBD: Reading rbd\_default\_features from ceph.conf * New nova API call to mark nova-compute down * libvirt: move LibvirtISCSIVolumeDriver into its own module * libvirt: move LibvirtNETVolumeDriver into its own module * libvirt: move LibvirtISERVolumeDriver into its own module * libvirt: move LibvirtNFSVolumeDriver into its own module * allow live migration in case of a booted from volume instance * Handle MessageTimeout to MigrationPreCheckError * Create a new dictionary for type\_data in VMwareAPIVMTestCase class * resource tracker style pci resource management * Added missed '-' to the rest\_api\_version\_history.rst * Imported Translations from Transifex * Remove db layer hard-code permission checks for keypair * Fix a couple dead links in docs * cleanup NovaObjectDictCompat from virt\_cpu\_topology * Adding user\_id handling to keypair index, show and
create api calls * Updated from global requirements * Remove legacy flavor compatibility code from Instance * libvirt: Fix root device name for volume-backed instances * Fix a few typos in nova code and docs * Helper script for running under Apache2 * Raise NovaException for missing/empty machine-id * Fixed random failing of test\_describe\_instances\_with\_filters\_tags * libvirt: enhance libvirt to set admin password * libvirt: rework quiesce to not share "sensitive" information * Metadata: support proxying loadbalancers * formely is not correct * Remove 'scheduled\_at' - DB cleanup * Remove unnecessary executable permission * Neutron: add in API method for updating VNIC index * Xen: convert image auto\_disk\_config value to bool before comparing * Make BaseProxyTestCase.test\_proxy deterministic wrt traffic/verbose * Cells: Handle instance\_destroy\_at\_top failure * cleanup NovaObjectDictCompat from virtual\_interface * Fix test mock that abuses objects * VMware: map one nova-compute to one VC cluster * VMware: add serial port device * Handle SSL termination proxies for version list * Use urlencode instead of dict\_to\_query\_str function * libvirt: move LibvirtSMBFSVolumeDriver into its own module * libvirt: move LibvirtAOEVolumeDriver into its own module * libvirt: move LibvirtGlusterfsVolumeDriver into its own module * libvirt: move LibvirtFibreChannelVolumeDriver into its own module * VMware: set create\_virtual\_disk\_spec method as local * Retry live migration on pre-check failure * Handle config drives being stored on rbd * Change List objects to use obj\_relationships * Fixes delayed instance lifecycle events issue * libvirt-vif: Allow to configure a script on bridge interface * Include DiskFilter in the default list * Adding support for InfiniBand SR-IOV vif type * VMware: Add support for swap disk * libvirt: Add logging for dm-crypt error conditions * Service group drivers forced\_down flag utilization * libvirt: Replace stubs with mocks for test\_dmcrypt * clarify docs on 2.9 API change * Remove db layer hard-code permission checks for instance\_get\_all\_hung\_in\_rebooting * Undo tox -e docs pip install sphinx workaround * Set autodoc\_index\_modules=True so tox -e docs builds module docs again * Allow NUMA based reporting for Monitors * libvirt: don't add filesystem disk to parallels containers unconditionally * objects: add hw\_vif\_multiqueue\_enabled image property * Prepare for unicode enums from Oslo * rootwrap: remove obsolete filters for baremetal * Create class hierarchy for tasks in conductor * return more details on assertJsonEqual fail * Fix IronicHostManager to skip get\_by\_host() call * Store correct VirtCPUTopology * Add documentation for block device mapping * Show 'locked' information in server details * VMware: add resource limits for disk * VMware: store extra\_specs object * VMware: Resource limits for memory * VMware: create common object for limits, reservations and shares * VMware: add support for cores per socket * Add DiskNotFound and VolumeNotFound test * Don't check rotation at compute level * Instance destroyed if ironic node in CLEANWAIT * Ironic: Better handle InstanceNotFound on destroy() * Fix overloading of block device on boot by device name * tweak graphviz formatting for readability * libvirt: rename parallels driver to virtuozzo * libvirt: Add macvtap as virtual interface (vif) type to Nova's libvirt driver * cells: document upgrade limitations/assumptions * rebuild: make sure server is shut down before volumes are detached * Implement
compare-and-swap for instance update * docs: add a placeholder link to mentoring docs * libvirt: Kill rsync/scp processes before deleting instance * Updated from global requirements * Add console allowed origins setting * libvirt: move the LibvirtScalityVolumeDriver into its own module * libvirt: move the LibvirtGPFSVolumeDriver into its own module * libvirt: move the LibvirtQuobyteVolumeDriver into the quobyte module * libvirt: move volume/remotefs/quobyte modules under volume subdir * Add missing policy for limits extension * Move to using ovo's remotable decorators * Base NovaObject on VersionedObject * Document when we should have a microversion * libvirt: do relative block rebase only with non-null base * Add DictOfListOfStrings type of field * Get py34 subunit.run test discovery to work * Enable python34 tests for nova/tests/unit/scheduler/test\*.py * libvirt: mark NUMA huge page mappings as shared access * libvirt: Add a driver API to inject an NMI * virt: convert hardware module to use nova.objects.ImageMeta 12.0.0.0b2 ---------- * Replace openssl calls with cryptography lib * libvirt: move lvm/dmcrypt/rbd\_utils modules under storage subdir * Fix Instance object usage in test\_extended\_ips tests * Fix test\_extended\_server\_attributes for proper Instance object usage * Fix test\_security\_groups to use Instance object properly * Refactor test\_servers to use instance objects * Switch to using os-brick * Updated from global requirements * VMware: remove redundant check for block devices * Remove unused decorator on attach/detach volume * libvirt: test capability for supports\_migrate\_to\_same\_host * Added removing of tags from instance after its deletion * Remove unused import of the my\_ip option from the manager * Scheduler: enhance debug messages for multitenancy aggregates * VMware: Handle missing vmdk during volume detach * Running microversion v2.6 sample tests under '/v2' endpoint * VMware: implement get\_mks\_console() * Add MKS protocol for remote consoles * Add MKS console support * libvirt: improve logging in the driver.py code * Fix serializer supported version reporting in object\_backport * Updated from global requirements * Revert "Add error message to failed block device transform" * tox: make it possible to run pep8 on current patch only * Fix seven typos on nova documentation * Add two fields to ImageMetaProps object * Check flavor type before adding tenant access * Switch to the oslo\_utils.fileutils * hypervisor support matrix: fix snapshot for libvirt Xen * libvirt: implement get\_device\_name\_for\_instance * libvirt: Always default device names at boot * Remove unused import of the compute\_topic option from the DB API * Remove unused call to \_get\_networks\_by\_uuids() * libvirt: fix disk I/O QOS support with RBD * Updated from global requirements * Remove unnecessary oslo namespace import checks * VMware: Fixed redeclared CONF = cfg.CONF * Execute \_poll\_shelved\_instances only if shelved\_offload\_time is > 0 * Switch to oslo.reports * Support Network objects in set\_network\_host * Fix Filter Schedulers doc to refer to all\_filters * Fixup uses of mock in hyperv tests * Cleanup log lines in nova.image.glance * Revert "Add config drive support for Virtuozzo containers" * Virt: fix debug log messages * Virt: use flavor object and not flavor dict * Add VersionPredicate type of field * Remove unnecessary method in FilterScheduler * Use utf8\_bin collation on the flavor extra-specs table in MySQL * docs: be clear between current vs future plans * cleanup
NovaObjectDictCompat subclassing from pci\_device * libvirt: make unit tests concise by setting up guest object * libvirt: introduce method to wait for block device job * Decouple instance object tests from the api fakes module * Fixed typos in self parameter * Hyper-V: restart serial console workers after instance power change * Only work with ipv4 subnet metadata if one exists * Do not import using oslo namespace * Refresh instance info cache within lock * Remove db layer hard-code permission checks for fixed\_ip\_associate\_\* * Add middleware to filter out Microversions http headers * Correct backup\_type param description * Fix a request body template for secgroup tests * Images: fix invalid exception message * Updated from global requirements * rebuild: fix rebuild of server with volume attached * objects: send PciDeviceList 1.2 to all code that can handle it * Fix libguestfs failure in test\_can\_resize\_need\_fs\_type\_specified * Fix the incorrect PciDeviceList version number * objects: Don't import CellMapping from the objects module * Deprecate the osapi\_v3.enabled option * Remove conductor api from resource tracker * Fix test\_tracker object mocks * Fix Python 3 issues in nova.utils and nova.tests * Remove db layer hard-code permission checks for instance\_get\_all\_by\_host\_and\_not\_type * Support all\_tenants search\_opts for neutron * libvirt: remove broken oslo\_config choices option * Convert instance\_type to object in prep\_resize * VMware: clean up exceptions * Revert "Remove useless db call instance\_get\_all\_hung\_in\_rebooting" * VMware: Use virtual disk size instead of image size * Remove db layer hard-code permission checks for provider\_fw\_rule\_\* * Remove db layer hard-code permission checks for archive\_deleted\_rows\* * Revert "Implement compare-and-swap for instance update" * Add tool to build a doc latex pdf * make test\_save\_updates\_numa\_topology stable across python versions * Update HACKING.rst for running tests and building docs * Cleanup quota\_class unittest with appropriate request context * Remove db layer hard-code permission checks for quota\_class\_create/update * Remove db layer hard-code permission checks for quota\_class\_get\_all\_by\_name * Improve functional test base for microversion * Remove db layer hard-code permission checks for reservation\_expire * Introducing new forced\_down field for a Service object * Use stevedore for loading monitor extensions * libvirt: Remove dead code path in method clear\_volume * Switch to oslo.service library * Include project\_id in instance metadata * Convert test\_compute\_utils to use Instance object * Fix for mock-1.1.0 * Port crypto to Python 3 * Add HostMapping object * Remove useless db call instance\_get\_all\_hung\_in\_rebooting * Cleanup unused method fake\_set\_snapshot\_id * Handle KeyError when volume encryption is not supported * Expose Neutron network data in metadata service * Build Neutron network data for metadata service * Implement compare-and-swap for instance update * Added method exists to the Tag object * Add DB2 support * compute: rename ResizeClaim to MoveClaim * Fix a little spelling mistake in the comment * Remove db layer hard-code permission checks for quota\_create/update * Fix the typo from \_pre\_upgrade\_294 to \_pre\_upgrade\_295 for tests/unit/db/test\_migration * Ironic: check the configuration item api\_max\_retries * Modified test scenario for microversion 2.4 * Add some notifications to the evacuate path * Make evacuate leave a record for the source compute host to
process * Fix incorrect enum in Migration object and DB model * Refactoring of the os-services module * libvirt: update docstring in blockinfo module for disk\_info * Ignore bridge already exists error when creating bridge * libvirt: handle rescue flag first in blockinfo.get\_disk\_mapping * libvirt: update volume delete snapshot to use Guest * libvirt: update live snapshot to use Guest object * libvirt: update swap volume to use Guest * libvirt: introduce GuestBlock to wrap around Block API * libvirt: rename GuestVCPUInfo to VCPUInfo * libvirt: save the memory state of guest * removed unused method \_get\_default\_deleted\_value * Remove flavor migration from db\_api and nova-manage * Rework monitor plugin interface and API * Adds MonitorMetric object * virt: add get\_device\_name\_for\_instance to the base driver class * libvirt: return whether a domain is persistent * Cells: fix indentation for configuration variable declaration * VMware: add unit tests for vmops attach and detach interface * Remove unneeded OS\_TEST\_DBAPI\_ADMIN\_CONNECTION * Switch from MySQL-python to PyMySQL * virt: fix picking CPU topologies based on desired NUMA topology * Port test\_exception to Python 3 * devref: virtual machine states and transitions * Consolidate the APIs for getting consoles * Remove db layer hard-code permission checks for floating\_ip\_dns * Fix typo in model doc string * virt: Fix AttributeError for raw image format * log meaningful error message on download exception * Updated from global requirements * Add bandit for security static analysis testing * Handle unexpected clear events call * Make on\_shared\_storage optional in compute manager * snapshot: Add device\_name to the snapshot bdms * compute: Make swap\_volume with resize updates BDM size * Make Nova better at keeping track of volume sizes in BDM * API: make sure a blank volume with no size is rejected * Ironic: Improve driver logs * Drop MANIFEST.in - it's not needed with PBR * Libvirt: Define system\_family for libvirt guests * Convert RT compute\_node to be a ComputeNode object * glance:check the num\_retries option * tests: Move test\_resource\_tracker to Instance objects * Remove compat\_instance() * Enable python34 tests for nova/tests/unit/objects/test\*.py * Soft delete system\_metadata when destroy instance * Remove python3 specific test-requirements file * Try luksFormat up to 3 times in case the device is in use * rootwrap: update ln --symbolic filter for FS and FC type volume drivers * Add wording to error message in TestObjectVersions.test\_relationships * Close temporary files in virt/disk/test\_api.py * Add BlockDeviceType enum field * Add BlockDeviceDestinationType enum field * Add BlockDeviceSourceType enum field * Avoid recursion in object relationships test * tests: move a test to the proper class in test\_resource\_tracker * Remove db layer hard-code permission checks for network\_set\_host * Block subtractive operations in migrations for Kilo and beyond * Remove db layer hard-code permission checks for network\_disassociate * libvirt: Correct domxml node name * Test relationships of List objects * libvirt: configuration for interface driver options * Fix Python 3 issues in nova.db.sqlalchemy * Update test\_db\_api for oslo.db 2.0 * Fix is\_image\_extendable() thinko * Validate maximum limit for quota * utils: ignore block device mapping in system metadata * libvirt: add in missing doc string for hypervisor\_version * Remove useless policy rule from fake\_policy.py * Replace ascii art architecture diagram 
* Adds MonitorMetricTypeField enum field
* Unfudge tox -e genconfig wrt missing versionutils module
* virt: update docstrings
* hypervisor support matrix: add feature "evacuate"
* XenAPI: Refactor rotate\_xen\_guest\_logs to avoid races
* hypervisor support matrix: add feature "serial console"
* hypervisor support matrix: add CLI commands to features
* Fix typos detected by toolkit misspellings
* hypervisor support matrix: fix "evacuate" for s390 and hyper-v
* Make live migration create a migration object record
* Cells: add instance cell registration utility to nova-manage
* fix typos in docs
* Logging corrected
* Check mac for instance before disassociating in release\_fixed\_ip
* Add the rule of separate plugin for Nova REST API in devref
* Use flavor object in compute manager

12.0.0.0b1
----------

* Changes conf.py for Sphinx build because oslosphinx now contains GA
* Fix testing object fields with missing instance rows
* Change group controller of V2 test cases
* Reduce window for allocate\_fixed\_ip / release\_fixed\_ip race in nova-net
* Make NoValidHost exceptions clearer
* Hyper-V: Fixes method retrieving free SCSI controller slot on V1
* Refactor network API 'get\_instance\_nw\_info'
* Removed extra '-' from rest\_api\_version\_history.rst
* Remove a useless variable and fix a typo in api
* VMware: convert driver to use nova.objects.ImageMeta
* Bypass ironic server not available issue
* Fix test\_create\_security\_group\_with\_no\_name
* Remove unused "id" and "rules" from secgroup body
* cells: add devstack/tempest-dsvm-cells-rc for gating
* Add common function for v2.1 API flavor\_get
* Fix comment typo
* Fix up instance flavor usage in compute and network tests
* Fix up ec2 tests for flavors on instances
* Fix up xenapi tests for instance flavors
* Fix up some bits of resource\_tracker to use instance flavors
* Register the vnc config options under group 'vnc'
* Cells: cell scheduler anti-affinity filter
* Cells: add in missing unit test for get\_by\_uuid
* VMware driver: Increasing speed of downloading image
* Hyper-V: Fix virtual hard disk detach
* Add flag to force experimental run of db contract
* Make readonly field tests use exception from oslo.versionedobjects
* Fixes "Hyper-V destroy vm fails on Windows Server 2008R2"
* Add microversion to allow server search option ip6 for non-admin
* Updated from global requirements
* VMware: Handle port group not found case
* Imported Translations from Transifex
* libvirt: use correct translation format
* Add explicit alembic dependency
* network: add more debug logging context for race bug 1249065
* Add virt resource update to ComputeNode object
* xenapi: remove bittorrent entry point lookup code
* Use oslo-config-generator instead of generate\_sample.sh
* Add unit tests for PCI utils
* Support flavor object in migrate\_disk\_and\_power\_off
* Remove usage of WritableLogger from oslo\_log
* libvirt: Don't fetch kernel/ramdisk files if you already have them
* Allow non-admin to list all tenants based on policy
* Remove redundant policy check from security\_group\_default\_rule
* Return bandwidth usage after updating
* Update version for Liberty
* neutron: remove deprecated allow\_duplicate\_networks config option
* Validate maximum limit for integer
* Improve the ability to resolve capabilities from Ironic
* Fix the wrong address ref when the fixed\_ip is invalid
* The devref for Nova stable API
* Fix wrong check when using image in local
* Fixes TypeError when libvirt version is BAD\_LIBVIRT\_CPU\_POLICY\_VERSIONS
12.0.0a0
--------

* Remove hv\_type translation shim for powervm
* cells: remove deprecated mute\_weight\_value option
* Make resize api of compute manager send flavor object
* VMware: detach cinder volume when instance destroyed
* Add unit tests for the exact filters
* test: add MatchType helper class as equivalent of mox.IsA
* Validate int using utils.validate\_integer method
* VMware: use min supported VC version in fake driver
* Updated from global requirements
* Added documentation around database upgrades
* Avoid always saving flavor info in instance
* Warn when CONF torrent\_base\_url is missing a slash
* Raise invalid input if an invalid ip is used for network to attach interface
* Hyper-V: Removes old instance dirs after live migration
* DB downgrades are no longer supported
* Add Host Mapping table to API Database
* VMware: verify vCenter server certificate
* Implement online schema migrations
* Hyper-V: Fixes live migration configdrive copy operation
* Avoid resizing disk if the disk size doesn't change
* Remove openstack/common/versionutils module
* Fix TestObjEqualPrims test object registration
* Remove references to suds
* VMware: Remove configuration check
* Remove and deprecate conductor task\_log methods
* Remove unused compute utils methods
* Make instance usage audit use the brand new TaskLog object
* Add a TaskLog object
* Updated from global requirements
* Fix noVNC console access for an IPv6 setup
* hypervisor support matrix: add status "unknown"
* VMware: typo fix in config option help
* Sync with latest oslo-incubator
* Corrected associating of floating IPs
* Minor refactor in nova.scheduler.filters.utils
* Cleanup wording for the disable\_libvirt\_livesnapshot workaround option
* Remove cell api overrides for force-delete
* libvirt: convert imagebackend to support nova.virt.image.model classes
* virt: convert disk API over to use nova.virt.image.model
* Cells: Skip initial sync of block\_device\_mapping
* Pass Down the Instance Name to Ironic Driver
* Handle InstanceNotFound when sending instance update notification
* Add an index to virtual\_interfaces.uuid
* Updated from global requirements
* Add config drive support for Virtuozzo containers
* Update formatting of microversion 2.4 documentation
* Consolidates scheduler utils tests into a single file
* Send Instance object to cells instance\_update\_at\_top
* VMware: use vCenter instead of VC
* fix "down" nova-compute service spuriously marked as "up"
* Improve formatting of rest\_api\_version\_history
* Link to microversion history in docs
* libvirt: fix live migration handling of disk\_info
* libvirt: introduce method to get domain XML
* libvirt: introduce method detach\_device to Guest object
* Remove db layer hard-code permission checks for quota\_usage\_update
* pass environment variables of proxy to tox
* Remove db layer hard-code permission checks for quota\_get\_all\_\*
* Fixed some misspellings
* Clean up Fake\_Url for unit test of flavor\_access
* Updated from global requirements
* Add AggregateTypeAffinityFilter multi values support
* volume: log which encryptor class is being used
* VMware: Don't raise exception on resize of 0 disk
* Hyper-V: sets supports\_migrate\_to\_same\_host capability
* libvirt: remove \_get\_disk\_xml to use get\_disk from Guest
* libvirt: introduce method to attach device
* libvirt: update tests to use Mock instead of MagicMock
* libvirt: Remove unnecessary JSON conversions
* objects: fix parsing of NUMA cpu/mem properties
* compute: remove get\_image\_metadata method
* compute: only use non\_inheritable\_image\_properties if snapshotting
* objects: add os\_require\_quiesce image property
* libvirt: make default\_device\_names DRY-er
* virt: Move building the block\_device\_info dict into a method
* Objects: update missing adapter types
* Add error handling for creating secgroup
* libvirt: handle code=38 + sigkill (ebusy) in destroy()
* Removed a non-conditional 'if' statement
* Map uuid db field to instance\_uuid in BandwidthUsage object
* Hyper-V: Fix missing WMI namespace issue on Windows 2008 R2
* Replace metaclass registry with explicit opt-in registry from oslo
* Fix an objects layering violation in compute/api
* Remove assertRemotes() from objects tests
* Use fields from oslo.versionedobjects
* Convert test objects to new field formats
* Begin the transition to an explicit object registry
* Set default event status to completed
* Add a hacking rule for consistent HTTP501 message
* Add and use raise\_feature\_not\_supported()
* Objects: fix typo with exception
* Remove useless volume when boot from volume failed
* Hyper-V: Lock snapshot operation using instance uuid
* Refactor show\_port() in neutron api
* Ironic: Don't report resources for nodes without instances
* libvirt: Remove unit tests for \_hard\_reboot
* Adds hostutilsv2 to HyperV
* libvirt: introduce method to delete domain config
* libvirt: introduce method to get vcpus info
* libvirt: Don't try to confine a non-NUMA instance
* Removed explicit return from \_\_init\_\_ method
* libvirt: introduce method resume to Guest object
* libvirt: introduce method poweroff to Guest object
* libvirt: make \_create\_domain return a Guest object
* Raise InstanceNotFound when save FK constraint fails
* Updated from global requirements
* Add new VIF type VIF\_TYPE\_TAP
* libvirt: Disable NUMA for broken libvirt
* Handle FlavorNotFound when augmenting migrated flavors
* virt: convert VFS API to use nova.virt.image.model
* virt: convert disk mount API to use nova.virt.image.model
* virt: introduce model for describing local image metadata
* Remove unused instance\_group\_policy db calls
* Improve compute swap\_volume logging
* libvirt: introduce method get\_guest to Host object
* libvirt: introduce a Guest to wrap around virDomain
* Remove unused exceptions
* Extract helper method to get image metadata from volume
* Fix \_quota\_reserve test setup for incompatible type checking
* Fixes referenced path in nova/doc/README.rst
* Updated from global requirements
* Handle cells race condition deleting unscheduled instance
* Compute: tidy up legacy treatment for vif types
* Allow libvirt cleanup completion when serial ports already released
* objects: define the ImageMeta & ImageMetaProps objects
* Ensure to store context in thread local after spawn/spawn\_n
* Ironic: Parse and validate Node's properties
* Hyper-V: Fix SMBFS volume attach race condition
* Remove unit\_test doc
* Make blueprints doc a reference for nova blueprints
* Remove jenkins, launchpad and gerrit docs
* Prune development.environment doc
* docs: fixup libvirt NUMA testing docs to match reality
* Fix some issues in devref for api\_microversions
* nova response code 403 on block device quota error
* Updated from global requirements
* Remove unused variables from images api
* Compute: improve logging using {} instead of dict
* snapshot: Copy some missing attrs to the snapshot bdms
* bdm: Make sure that delete\_on\_termination is a boolean
* Get rid of oslo-incubator copy of middleware
* Make nova-manage handle completely missing flavor information
* Use oslo\_config choices support
* Let soft-deleted instance\_system\_metadata be readable
* Make InstanceExternalEvent use an Enum for status
* Add error message to failed block device transform
* network: fix instance cache refresh for empty list
* Imported Translations from Transifex
* Add common function for v2 API flavor\_get
* Remove cell policy check
* VMware: replace hardcoded strings with constants
* Add missing @require\_context
* Standardize on assertJsonEqual in tests
* Tolerate iso style timestamps for cells rpc communication
* Force the value of LC\_ALL to be en\_US.UTF-8
* libvirt: disconnect\_volume does not return anything
* Remove hash seed comment from tox.ini
* Allow querying for migrations by source\_compute only
* libvirt: Do not cache number of CPUs of the hypervisor
* Create instance\_extra entry if it doesn't update
* Ignore Cinder error when shutting down instance
* Remove use of builtin name
* Hyper-V: Fixes cold migration / resize issue
* Fix cells capacity calculation for n:1 virt drivers
* VMware: Log should use uuid instead of name
* VMware: fill in instance metadata when resizing instances
* VMware: fill in instance metadata when launching instances
* Add the swap and ephemeral BDMs if needed
* Updated from global requirements
* Block oslo.vmware 0.13.0 due to a backwards incompatible change
* hypervisor support matrix: update libvirt KVM (s390x)
* Hyper-V: ensure only one log writer is spawned per VM
* Prevent access to image when filesystem resize is disabled
* Share admin password func test between v2 and v2.1
* VMware: remove dead function in vim\_util
* Fix version unit test on Python 3
* Resource tracker: remove invalid conductor call from tests
* Remove outdated TODO comment
* Disable oslo.vmware test dependency on Python 3
* Run tests with PyMySQL on Python 3
* Drop explicit suds dependency
* improve speed of some ec2 keypair tests
* Add nova object equivalence based on prims
* Cleanups for pci stats in preparation for RT using ComputeNode
* Replace dict.iteritems() with six.iteritems(dict)
* Add a maintainers file
* virt: make sure convert\_all\_volumes catches blank volumes too
* compute utils: Remove a useless context parameter
* make SchedulerV3PassthroughTestCase use NoDBTest
* Don't use dict.iterkeys()
* VMware: enforce minimum supported VC version
* Split up and improve speed of keygen tests
* Replace dict(obj.iteritems()) with dict(obj)
* libvirt: Fix cpu\_compare tests and a wrong method when logging
* Detect empty result when calling objects.BlockDeviceMapping.save()
* remove \_rescan\_iscsi from disconnect\_volume\_multipath\_iscsi
* Use six.moves.range for Python 3
* Use EnumField for instance external event name
* Revert "Detach volume after deleting instance with no host"
* Removed unused methods and classes
* Removed unused variables
* Removed unused "as e/exp/error" statements
* Resource tracker: use instance objects for claims
* Remove db layer hard-code permission checks for security\_group\_default\_rule\_destroy
* Avoid AttributeError at instance.info\_cache.delete
* Remove db layer hard-code permission checks for network\_associate
* Remove db layer hard-code permission checks for network\_create\_safe
* Pass project\_id when creating networks by os-tenant-networks
* Disassociate before deleting network in os-tenant-networks delete method
* Remove db layer hard-code permission checks for v2.1 cells
* Move unlock\_override policy enforcement into V2.1 REST API layer
* tests: libvirt: Fix test\_volume\_snapshot\_delete tests
* Add a finish log
* Add nova-idmapshift to rootwrap filters
* VMware: Missing docstring on parameter
* Update docs layout
* Add note to doc explaining scope
* Show 'reserved' status in os-fixed-ips
* Split instance event/tag correctly
* libvirt: deprecate libvirt version usage < 0.10.2
* Fix race between resource audit and cpu pinning
* Set migration\_type for existing cold migrations and resizes
* Add migration\_type to Migration object
* Add migration\_type and hidden to Migration database model
* libvirt: improve logging
* Fix pip-missing-reqs
* objects: convert HVSpec to use named enums
* objects: convert VirtCPUModel to use named enums
* Ironic: Fix delete instance when spawning
* Retry a cell delete if host constraint fails
* objects: introduce BaseEnumField to allow subclassing
* Add policy to cover snapshotting of volume backed instances
* objects: add a FlexibleBoolean field type
* Don't update RT status when setting instance to ERROR
* Delete shelved\_\* keys in n-cpu unshelve call
* Fix loading things in instance\_extra for old instances
* VMware: remove invalid comment
* neutron: log hypervisor\_macs before raising PortNotUsable
* VMware: use get\_object\_properties\_dict from oslo.vmware
* VMware: use get\_datastore\_by\_ref from oslo.vmware
* Unshelving volume backed instance fails
* Avoid useless copy in get\_instance\_metadata()
* Fix raise syntax for Python 3
* Replace iter.next() with next(iter)
* libvirt: use instance UUID with exception InstanceNotFound
* devref: add information to clarify nova scope
* Refactor a unit test to use urlencode()
* Additional cleanup after compute RPC 3.x removal
* Drop compute RPC 3.x support
* libvirt: deprecate the remove\_unused\_kernels config option
* Updated from global requirements
* libvirt: Use 'relative' flag for online snapshot's commit/rebase operations
* Remove db layer hard-code permission checks for quota\_destroy\_all\_\*
* Replace unicode with six.text\_type
* Replace dict.itervalues() with six.itervalues(dict)
* Use compute\_node consistently in ResourceTracker
* Fix the wrong comment in the test\_servers.py file
* Move ebrctl to compute.filter
* libvirt: handle NotSupportedError in compareCPU
* Hypervisor Support Matrix renders links in notes
* Update fake flavor's root and ephemeral disk size
* Code clean up db.instance\_get\_all\_by\_host()
* use block\_dev.get\_bdm\_swap\_list in compute api
* Catch SnapshotNotFound exception at os-volumes
* Rename \_CellProxy.iteritems method to items on py3
* Overwrite NovaException message
* API: remove useless expected error code from v2.1 service delete api
* Fix quota-update of instances stuck in deleting when nova-compute startup finishes
* API: remove admin requirement from certificate\_\* from db layer
* API: Add policy enforcement test cases for pci API
* API: remove admin requirement for compute\_node(get\_all/search\_by\_hypervisor) from db
* API: remove admin requirement for compute\_node\_create/update/delete from db layer
* API: remove admin requirement from compute\_node\_get\_all\_by\_\* from db layer
* Share deferred\_delete func tests between v2 and v2.1
* VMware: add support for NFS 4.1
* Compute: remove reverts\_task\_state from interface attach/detach
* VMware: ensure that the adapter type is used
* Fix failure of stopping instances during init host
* Share assisted vol snapshots test between v2 and v2.1
* Compute: use instance object for \_deleted\_old\_enough method
* API: remove instance\_get\_all\_by\_host(\_and\_node) hard-code admin check from db
* Remove db layer hard-code permission checks for service\_get\_by\_host\*
* Remove db layer hard-code permission checks for service\_get\_by\_compute\_host
* Detach volume after deleting instance with no host
* libvirt: safe\_decode xml for i18n logging
* Fix scheduler issue when multiple-create failed
* Move our ObjectListBase to subclass from the Oslo one
* Fix cinder v1 warning with cinder\_catalog\_info option reference
* Deprecate nova ironic driver's admin\_auth\_token
* Handle return code 2 from blkid calls
* Drop L from literal integer numbers for Python 3
* Libvirt: Use tpool to invoke guestfs api
* Minor edits to support-matrix doc
* hacking: remove unused variable author\_tag\_re
* Update kilo version alias
* Refactor tests that use compute's deprecated run\_instance() method
* Helper scripts for running under Apache2
* downgrade log messages for memcache server (dis)connect events
* don't report service group connection events as errors in dbdriver
* Updated from global requirements
* Switch to \_set\_instance\_obj\_error\_state in build\_and\_run\_instance
* Add SpawnFixture
* Log the actual instance.info\_cache when empty in floating ip associate
* unify libvirt driver checks for qemu
* VMware: Allow other nested hypervisors (HyperV)
* servicegroup: remove get\_all method never used as public
* libvirt: add todo note to avoid call to libvirt from the driver
* libvirt: add method to compare cpu to Host
* libvirt: add method to list pci devices to Host
* libvirt: add method to get device by name to Host
* libvirt: add method to define instance to host
* libvirt: add method to get cpu stats to host
* monitor: remove dependence on libvirt
* Clean up ComputeManager.\_get\_instance\_nw\_info
* Updated from global requirements
* Cells: Call compute api methods with instance objects
* Correct docstring info on two parameters
* Start the conversion to oslo.versionedobjects
* Cleanup conductor unused methods
* Revert "Ironic: do not destroy if node is in maintenance"
* fix network setup on evacuate
* Reschedules sometimes do not allocate networks
* Incorrect argument order passed to swap\_volume
* Mark ironic credential config as secret
* Fix missing format arg in compute manager
* objects: remove field ListOfEnumField
* Cleaning up debug messages from previous change in vmops.py
* Remove orphaned tables - iscsi\_targets, volumes
* console: token cleanup does not happen for all kinds of consoles
* Fix import order
* Skip only one host weight calculation
* Fix typo for test cases
* VMware: Isolate unit tests from requests
* Imported Translations from Transifex
* Cleanup docs landing page
* Updated from global requirements
* Add ability to inject routes in interfaces.template
* tests: make API signature test also check static function
* Make test\_version\_string\_with\_package\_is\_good work with pbr 0.11
* Fix disconnect\_volume issue when find\_multipath\_device returns None
* Updated from global requirements
* Fix assert on call count for encodeutils.safe\_decode mock
* Don't wait for an event on a resize-revert
* minor edit to policy\_enforcement.rst
* Update self with db result in InstanceInfoCache.save
* libvirt: retry to undefine network filters during \_post\_live\_migration
* Wedge DB migrations if flavor migrations are not complete
* Removed variables declared twice
* Removed variables used outside the scope in which they are declared
* libvirt: add method to get hardware info to Host
* libvirt: avoid calling listDefinedDomains during post live migration
* Remove unused db.aggregate\_metadata\_get\_by\_metadata\_key() call
* Removed 'PYTHONHASHSEED=0' from tox.ini
* Changed logic in \_compare\_result in api\_samples\_test\_base
* Convert bandwidth\_usage related timestamp to UTC native datetime
* Drop use of 'oslo' namespace package

2015.1.0
--------

* Add a method to skip cells syncs on instance.save
* Add some testing for flavor migrations with deleted things
* Add support for forcing migrate\_flavor\_data
* Virt: update shared storage log information message
* Fixed functional tests in tests\_servers to pass with random PYTHONHASHSEED
* Adds toctree to v2 section of docs
* Fixes X509 keypair creation failure
* Update rpc version aliases for kilo
* libvirt/utils.py: Remove 'encryption' flag from create\_cow\_image
* Libvirt: Correct logging information and progress during live migration
* libvirt/utils.py: Remove needless code from create\_cow\_image
* libvirt/utils.py: Clarify comment in create\_cow\_image function
* Fix documentation for scheduling filters
* libvirt: check qemu version for NUMA & hugepage support
* Add security group calls missing from latest compute rpc api version bump
* Add security group calls missing from latest compute rpc api version bump
* Make objects serialize\_args() handle datetimes in positional args
* Imported Translations from Transifex
* view hypervisor details rest api should be allowed for non-admins
* n-net: turn down log level when vif isn't found in deallocate\_fixed\_ip
* Associate floating IPs with first v4 fixed IP if none specified
* Correct the help text for the compute option
* Convert NetworkDuplicated to HTTPBadRequest for v2.1 API
* Remove comment inconsistent with code
* Remove db layer hard-code permission checks for fixed\_ip\_get\_\*
* Fixed nova-network dhcp-hostsfile update during live-migration
* Remove db layer hard-code permission checks for network\_get\_all\_by\_host
* Remove db layer hard-code permission checks for security\_group\_default\_rule\_create
* Remove db layer hard-code permission checks for floating\_ips\_bulk
* sync oslo: service child process normal SIGTERM exit
* Remove downgrade support from the cellsv2 api db
* Fix migrate\_flavor\_data() to catch instances with no instance\_extra rows
* libvirt: use importutils instead of python built-in

2015.1.0rc2
-----------

* Imported Translations from Transifex
* Updated from global requirements
* Control create/delete flavor api permissions using policy.json
* Add config option to disable handling virt lifecycle events
* Ironic: pass injected files through to configdrive
* libvirt: Allow discrete online pCPUs for pinning
* Fix migrate\_flavor\_data() to catch instances with no instance\_extra rows
* libvirt: unused imported option default\_ephemeral\_format
* libvirt: introduce new method to guest tablet device
* Fix migrate\_flavor\_data string substitution
* Object: Fix incorrect parameter set in flavor save\_extra\_specs
* Fix max\_number for migrate\_flavor\_data
* remove downgrade support from our database migrations
* Add policy check for extension\_info
* Cleanup unnecessary session creation in floating\_ip\_deallocate
* Fix inefficient transaction usage in floating\_ip\_bulk\_destroy
* Control create/delete flavor api permissions using policy.json
* Fix handling of pci\_requests in consume\_from\_instance
* Use list of requests in InstancePCIRequests.obj\_from\_db
* Add numa\_node field to PciDevicePool
* scheduler: re-calculate NUMA on consume\_from\_instance
* VMware: remove unused method
* VMware: enable configuring of console delay
* Don't query compute\_node through service object in nova-manage
* Fixed test in test\_tracker to work with random PYTHONHASHSEED
* Update rpc version aliases for kilo
* remove the CONF.allow\_migrate\_to\_same\_host
* Fix kwargs['migration'] KeyError in @errors\_out\_migration decorator
* Add equality operators to PciDeviceStats and PciDevice objects
* libvirt: Add option to ssh to prevent prompting
* Validate server group affinity policy
* VMware: use oslo.vmware methods for handling tokens
* Remove db layer hard-code permission checks for network\_get\_associated\_fixed\_ips
* tests: use numa xml automatic generation in libvirt tests
* Resource tracker: unable to restart nova compute
* Include supported version information
* Release Import of Translations from Transifex
* Fixed tests in test\_glance to pass with random PYTHONHASHSEED
* Refactored tests in test\_neutron\_driver to pass with random PYTHONHASHSEED
* refactored test in vmware test\_read\_write\_util to pass with random PYTHONHASHSEED
* fixed tests in test\_matchers to pass with random PYTHONHASHSEED
* fix for vmware test\_driver\_api to pass with random PYTHONHASHSEED
* Update hypervisor support matrix with kvm on system z
* Fix kwargs['migration'] KeyError in @errors\_out\_migration decorator
* VMware: remove unused parameter for VMOPS spawn
* libvirt: make \_get\_instance\_disk\_info conservative
* refactored tests in test\_inject to pass with random PYTHONHASHSEED
* fixed tests in test\_iptables\_network to work with random PYTHONHASHSEED
* refactored tests in test\_objects to pass with random PYTHONHASHSEED
* fixed tests in test\_instance to pass with random PYTHONHASHSEED
* Replace ssh exec calls with paramiko lib
* Fix handling of pci\_requests in consume\_from\_instance
* Use list of requests in InstancePCIRequests.obj\_from\_db
* Share hide server add tests between v2 and v2.1
* Share V2 and V2.1 images functional tests
* change the reboot rpc call to local reboot
* 'deleted' filter does not work properly
* Spelling mistakes in nova/compute/api.py
* Use kwargs from compute v4 proxy change\_instance\_metadata
* Delay STOPPED lifecycle event for all domains, not just Xen
* Use kwargs from compute v4 proxy change\_instance\_metadata
* compute: stop handling virt lifecycle events in cleanup\_host()
* Replace BareMetalDriver with IronicDriver in option help string
* tests: introduce a NUMAServersTest class
* Fix test\_set\_admin\_password\_bad\_state()
* Fix test\_attach\_interface\_failure()
* Fix test\_swap\_volume\_api\_usage()
* Resource tracker: unable to restart nova compute
* Forbid booting of QCOW2 images with virtual\_size > root\_gb
* Pass migrate\_data to pre\_live\_migration
* Fixed order of arguments during execution of live\_migrate()
* update .gitreview for stable/kilo
* Add min/max of API microversions to version API
* VMware: Fix attribute error in resize
* Release bdm constraint source and dest type
* Fix check\_can\_live\_migrate\_destination() in ComputeV4Proxy
* compute: stop handling virt lifecycle events in cleanup\_host()
* Store context in local store after spawn\_n
* Fixed incorrect dhcp\_server value during nova-network creation
* Share multiple create server tests between v2 and v2.1
* Remove power\_state.BUILDING
* libvirt: cleanup unused lifecycle event handling variables from driver
* Add min/max of API microversions to version API
* Pass migrate\_data to pre\_live\_migration
* libvirt: add debug logging to pre\_live\_migration
* Don't ignore template argument in get\_injected\_network\_template
* Refactor some service tests and make them not require db
* Remove and deprecate unused conductor service calls
* Convert service and servicegroup to objects
* Add numa\_node field to PciDevicePool
* Ironic: do not destroy if node is in maintenance
* libvirt: remove unnecessary quotes
* VMware: fix log warning
* libvirt: quit early when requested mempages are found
* VMware: validate CPU limits level
* Remove and deprecate conductor get\_ec2\_ids()
* Remove unused metadata conductor parameter
* Replace conductor get\_ec2\_ids() with new Instance.ec2\_ids attribute
* Add EC2Ids object and link to Instance object as optional attribute
* neutron: reduce complexity of allocate\_for\_instance (security\_groups)
* neutron: reduce complexity of allocate\_for\_instance (requested\_networks)
* Avoid indexing into an empty list in getcallargs
* Fixed order of arguments during execution of live\_migrate()
* Fix check\_can\_live\_migrate\_destination() in ComputeV4Proxy

2015.1.0rc1
-----------

* Add compute RPC API v4.0
* Reserve 10 migrations for backports
* Honor uuid parameter passed to nova-network create
* Update compute version alias for kilo
* Refactor nova-net cidr validation in prep for bug fix
* Fix how service objects are looked up for Cells
* websocketproxy: Make protocol validation use connection\_info
* scheduler: re-calculate NUMA on consume\_from\_instance
* Prevent scheduling new external events when compute is shutdown
* Print choices in the config generator
* Manage compute node that exposes no pci devices
* libvirt: make fakelibvirt more customizable
* Use cells.utils.ServiceProxy object within cells\_api
* Fix Enum field, which allows unrestricted values
* consoleauth: Store access\_url on token authorization
* tests: add a ServersTestBase class
* tests: enhance functional tests primitives
* libvirt: Add version check when pinning guest CPUs
* Open Liberty development
* xenapi: pull vm\_mode and auto\_disk\_config from image when rescuing
* VMware: Fix attribute error in resize
* Allow \_exec\_ebtables to parse stderr
* Fix rebuild of an instance with a volume attached
* Imported Translations from Transifex
* Stacktrace on live migration monitoring
* Add 'docker' to the list of known hypervisor types
* Respect CONF.scheduler\_use\_baremetal\_filters
* Make migration 274 idempotent so it can be backported
* Add 'suspended' lifecycle event
* Fix how the Cells API is returning ComputeNode objects
* Ironic: fix log level manipulation
* Fix serialization for Cells Responses
* libvirt: fix disablement of NUMA & hugepages on unsupported platforms
* Optimize periodic call to get\_by\_host
* Fix multipath device discovery when UFN is enabled
* Use retrying decorator from oslo\_db
* virt: Make sure block device info is persisted
* virt: Fix block\_device tests
* instance termination with update\_dns\_entries set fails
* Filter fixed IPs from requested\_networks in deallocate\_for\_instance
* Fixes \_cleanup\_rbd code to capture ImageBusy exception
* Remove old relation in Cells for ComputeNode and Service
* consoleauth: remove an instance of mutation while iterating
* Add json-schema for v2.1 fixed-ips
* Share V2 and V2.1 tenant-networks functional tests
* Share migrations tests between V2 and V2.1
* Merging instance\_actions tests between V2 and V2.1
* Share V2 and V2.1 hosts functional tests
* Add serialization of context to FakeNotifier
* Handle nova-network tuple format in legacy RPC calls
* remove usage of policy.d which isn't cached
* Update check before migrating flavor
* Expand Origin header check for serial console
* libvirt: reuse unfilter\_instance pass-through method
* No need to create APIVersionRequest every time
* Libvirt: preallocate\_images CONFIG can be arbitrary characters
* Add some tests for the error path(s) in RBD cleanup\_volumes()
* VMware: add instance to log messages
* Hyper-V: checks for existing Notes in list\_instance\_notes
* Fix incorrect statement in inline neutronv2 docs
* Imported Translations from Transifex
* VMware: Find a SCSI adapter type for attaching iSCSI disk
* Avoid MODULEPATH environment var in config generator
* Be more forgiving to empty context in notification
* Store cells credentials in transport\_url properly
* Fix API links and labels
* Stale rc.local file - vestige from cloudpipe.rst
* Remove stale test + openssl information from docs
* Add the last of the oslo libraries to hacking check
* Cancel all waiting events during compute node shutdown
* Update hypervisor support matrix for ironic wrt pause/suspend
* Scheduler: deprecate mute\_weight\_value option on weigher
* Pass instance object to add\_instance\_fault\_from\_exc
* Remove dead vmrc code
* Add vnc\_keymap support for vmware compute
* Remove compute/api.py::update()
* add ironic hypervisor type
* Removes XML MIME types from v2 API information
* API: fix typo in unit tests
* Add field name to error messages in object type checking
* Remove obsolete TODO in scheduler filters
* Expand valid server group name character set
* Raise exception when backing up volume-backed instance
* Libvirt SMB volume driver: fix volume attach
* Adds Compute API v2 docs
* PCI tracker: make O(M \* N) clean\_usage algo linear
* Fix v2.1 list-host to remove 'services' filter
* Fix incorrect http\_conflict error message
* Link to devstack guide for appropriate serial\_console instructions
* Skip socket related unit tests on OSX
* Add debug logging to quota\_reserve flow
* Fix missing cpu\_pinning request
* Hyper-V: Sets \*DataRoot paths for instances
* Refactored test in test\_neutron\_driver to pass with random PYTHONHASHSEED
* fixed tests in test\_neutronv2 to pass with random PYTHONHASHSEED
* Refactored test in linux\_net to pass with random PYTHONHASHSEED
* refactored tests in test\_wsgi to pass with random PYTHONHASHSEED
* fixed tests in test\_simple\_tenant\_usage to pass with random PYTHONHASHSEED
* Refactored test\_availability\_zone to work properly with random PYTHONHASHSEED
* fixed test in test\_disk\_config to work with random PYTHONHASHSEED
* Fixed test to work with random PYTHONHASHSEED
* Fix \_instance\_action call for resize\_instance in cells
* Add some logging in the quota.reserve flow
* Check host cpu\_info if no cpu\_model for guest
* Move ComputeNode creation at init stage in ResourceTracker
* Releasing DHCP in nova-network fixed
* Fix PCIDevicePool.to\_dict() when the object has no tags
* Convert pci\_device\_pools dict to object before passing to scheduler
* Sync from Oslo-Incubator - reload config files
* Fix v2.1 hypervisor servers to return empty list
* Add support for cleaning in Ironic driver
* Adjust resource tracker for new Ironic states
* Ironic: Remove passing Flavor's deploy\_{kernel, ramdisk}
* don't 500 on invalid security group format
* Adds cleanup on v2.2 keypair api and tests
* Set conductor use\_local flag in compute manager tests
* Use migration object in resource\_tracker
* Move suds into test-requirements.txt
* Make refresh\_instance\_security\_rules() handle non-object instances
* Add a fixture for the NovaObject indirection API
* Add missing \`shows\` to the RPC casts documentation
* Override update\_available\_resources interval
* Fix deleting first preexisting port if second was attached to instance
* Avoid loading real policy from policy.d when using fake policy fixture
* Neutron: simplify validate\_networks
* Switch to newer cirros image in docs
* Fix common misspellings
* Scheduler: update docstring to use oslo\_config
* Skip 'id' attribute to be explicitly deleted in TestCase
* Remove unused class variables in extended\_volumes
* libvirt: remove volume\_drivers config param
* Make conductor use instance object
* VMware: add VirtualVmxnet3 to the supported network types
* Fix test cases that still use v3 prefix
* Typo in oslo.i18n url
* Fix docs build break
* Updated from global requirements
* Fix typo in nova/tests/unit/test\_availability\_zones.py
* mock out build\_instances/rebuild\_instance when not used
* Make ComputeAPIIpFilterTestCase a NoDBTestCase
* Remove vol\_get\_usage\_by\_time from conductor api/rpcapi
* default tox cmd should also run 'functional' target
* VMware: Consume the oslo.vmware objects
* Release bdm constraint source and dest type
* VMware: save instance object creation in test\_vmops
* libvirt: Delay only STOPPED event for Xen domain
* Remove invalid hacking recheck for baremetal driver
* Adds Not Null constraint to KeyPair name
* Fix orphaned ports on build failure
* VMware: Fix volume relocate during detach

2015.1.0b3
----------

* Fix AggregateCoreFilter returning incorrect value
* Remove comments on API policy, remove core param
* Add policy check for consoles
* Sync from oslo-incubator
* Rename and move the v2.1 api policy into separated files
* Disable oslo\_messaging debug logging
* Make heal\_instance\_info\_cache\_interval help clearer
* Forbid booting of QCOW2 images with virtual\_size > root\_gb
* don't use oslo.messaging in mock
* BDM: Avoid saving if there were no changes
* Tidy up sentinel comparison in pop\_instance\_event
* Tidy up dict.setdefault() usage in prepare\_for\_instance\_event
* Remove duplicate InvalidBDMVolumeNotBootable
* libvirt: make default value of numa cell memory 0 when not defined
* Add the instance update calls from Compute
* Save bdm.connection\_info before calling volume\_api.attach\_volume
* Add InstanceMapping object
* Add CellMapping object
* load ram\_allocation\_ratio when asked
* Remove pci\_device.update\_device helper function
* Tox: reduce complexity level to 35
* Remove db layer hard-code permission checks for service\_get\_all
* Expand help message on some quota config options
* Test fixture for the api database
* remove duplicate calls to cfg.get()
* Remove context from remotable call signature
* Actually stop passing context to remotable methods
* Remove usage of remotable context parameter in service, tag, vif
* Remove usage of remotable context parameter in security\_group\*
* Remove usage of remotable context parameter in pci\_device, quotas
* let fake virt track resources
* doc: fix a docstext formatting
* Update unique constraint of compute\_nodes with deleted column
* Modify filters to get instance info from HostState
* Add the RPC calls for instance updates
* Implement instance update logic in Scheduler
* Log exception from deallocate\_port\_for\_instance for triage
* Remove usage of remotable context parameter in migration, network
* Remove usage of remotable context parameter in compute\_node, keypair
* Remove usage of remotable context parameter in instance\* objects
* Remove usage of remotable context parameter in fixed\_ip, flavor, floating\_ip
* Remove usage of remotable context parameter in ec2 object
* libvirt: partial fix for live-migration with config drive
* Added assertJsonEqual method to TestCase class
* VMware: Improve reporting of path test failures
* Fixed libvirt test\_cpu\_info method for random PYTHONHASHSEED compatibility
* Remove usage of remotable context parameter in bandwidth, block\_device
* Remove usage of remotable context parameter in agent, aggregate
* Remove db layer hard-code permission checks for pci
* Objects: use setattr rather than dict syntax in remotable
* Split out NovaTimestampObject
* libvirt: Resize down an instance booted from a volume
* add neutron api NotImplemented test cases for Network V2.1
* Stop using exception.message
* Remove unused oslo logging fixture
* libvirt: don't allow resizing down the default ephemeral disk
* Add api microversion unit test case for wsgi.action
* Change some comments for instance param
* Hyper-V: Adds VMOps unit tests (part 2)
* Add get\_api\_session to db api
* Use the proper database engine for nova-manage
* Add support for multiple database engines
* Virt: update fake driver to use UUID as lookup key
* VMware: use instance UUID as instance name
* VMware: update test\_vm\_util to use instance object
* Handle exception when doing detach\_interface
* Variable 'name' already declared in 'for' loop
* Handle RESIZE\_PREP status when nova compute does init\_instance
* Move policy enforcement into REST API layer for v2.1 api volume\_attachment
* Remove the elevated context when getting network
* Handles exception when unsupported virt-type given
* Fix confusing log output in nova/nova/network/linux\_net.py
* Workaround for race condition in libvirt
* remove unneeded teardown related code
* Fixed archiving of deleted records
* libvirt: Remove minidom usage in driver.py
* Stop spamming logs when creating context
* Fix ComputeNode backport for Service.obj\_make\_compatible
* Break out the child version calculation logic from obj\_make\_compatible()
* Fix PciDeviceDBApiTestCase with referential constraint checking
* Verify all quotas before updating the db
* Add shadow table empty verification
* Add @wrap\_exception() for 3 compute functions
* Remove FK on service\_id and make service\_id nullable
* Using Instance object instead of db call
* Revert "Removed useless method \_get\_default\_deleted\_value."
* Remove db layer hard-code permission checks for network\_count\_reserved\_ips
* implement user negative testing for flavor manage
* refactor policy fixtures to allow use of real policy
* libvirt: remove unnecessary flavor parameter
* Compute: no longer need to pass flavor to the spawn method
* Update some ResizeClaimTestCase tests
* Move InstanceClaimTestCase.test\_claim\_and\_audit
* Handle exception when attaching interface fails
* Deprecate Nova in-tree EC2 APIs
* cells: don't pass context to instance.save in instance\_update\_from\_api
* ensure DatabaseFixture removes db on cleanup
* objects: introduce numa topology limits objects
* Add a test that validates object backports and child object versions
* Fix ArchiveTestCase on MySQL due to differing exceptions
* VMware: fix VM rescue problem with VNC console
* VMware: Deprecation warning - map one nova-compute to one VC cluster
* compute: don't trace on InstanceNotFound in reverts\_task\_state
* Fix backporting objects with sub-objects that can look falsey
* neutron: deprecate 'allow\_duplicate\_networks' config option
* Fix Juno nodes checking service.compute\_node
* Fix typo in \_live\_migration\_cleanup\_flags method
* libvirt: add in missing translation for exception
* Move policy enforcement into REST API layer for v2.1 extended\_volumes
* Remove useless policy rules for v2.1 APIs which were removed/disabled
* Remove db layer hard-code permission checks for service\_get\_all\_by\_\*
* Fix infinite recursion caused by unnecessary stub
* Websocket Proxy should verify Origin header
* Improve 'attach interface' exception handling
* Remove unused method \_make\_stub\_method
* Remove useless get\_one() method in SG API
* Fix up join() and leave() methods of servicegroup
* network: Fix another IPv6 test for Mac
* Add InstanceList.get\_all method
* Use session with neutronclient
* Pass correct context to get\_by\_compute\_node()
* Revert "Allow force-delete irrespective of VM task\_state"
* Fix kwargs['instance'] KeyError in @reverts\_task\_state decorator
* Fix copying configdrive during live-migration on HyperV
* Move V2 sample files to respective directory
* V2 tests - Reuse server post req/resp sample file
* V2.1 tests - Reuse server post req/resp sample file
* Remove an unused config import in nova-compute
* Raise HTTPNotFound for Port/NetworkNotFound
* neutronv2: only create client once when adding/removing fixed IPs
* Stop stacktracing in \_get\_filter\_uuid
* libvirt: Fix live migration failure cleanup on ceph
* Sync with latest oslo-incubator
* Better logging of resources
* Preserve preexisting ports on server delete
* Move oslo.vmware into test-requirements.txt
* Remove db layer hard-code permission checks for network\_get\_by\_uuid
* Refactor \_regex\_instance\_filter for testing
* Add instance\_mappings table to api database
* ec2: clean up in test\_cinder\_cloud
* Remove unused method queue\_get\_for
* Remove make\_ip\_dict method which is not used
* Remove unused method delete\_subnet
* Remove unused method disable\_vlan
* Remove unused method get\_request\_extensions
* Fix wrong log output in nova/nova/tests/unit/fake\_volume.py
* Updated from global requirements
* Remove db layer hard-code permission checks for network\_get\_by\_cidr
* Add cell\_mappings table to api database
* Ban passing contexts to remotable methods
* Fix a remaining case of passing context to a remotable in scheduler
* Fix several cases of passing context to quota-related remotable methods
* Fix some cases of passing context to remotables with security groups
* Replace RPC topic-based service queries with binary-based in cells
* Replace RPC topic-based service queries with binary-based in scheduler
* Fix some straggling uses of passing context to remotable methods in tests
* VMware: remove code invoking deprecation warning
* Fix typo in nova/scheduler/filters/utils.py
* Remove db layer hard-code permission checks for network\_delete\_safe
* Don't add exception instance in LOG.exception
* Move policy enforcement into REST API layer for v2.1 servers
* Move policy enforcement into REST API layer for v2.1 api attach\_interfaces
* Remove db layer hard-code permission checks for flavor-manager
* Remove db layer hard-code permission checks for service\_delete/service\_get
* Remove db layer hard-code permission checks for service\_update
* Fix 'nova show' returning incorrect mac info
* Use controller method in all admin actions tests
* Remove db layer hard-code permission checks for flavor\_access
* Modify filters so they can look at HostState
* let us specify when samples tests need admin privs
* Updated from global requirements
* Remove cases of passing context to remotable methods in Flavor
* Remove cases of passing context to remotable methods in Instance
* Fix up PciDevice remotable context usage
* libvirt: add comment for vifs\_already\_plugged=True in finish\_migration
* neutron: check for same host in \_update\_port\_binding\_for\_instance
* Move policy enforcement into REST API layer for v2.1 security groups
* Keep instance state if lvm backend not impl
* Replace RPC topic-based service queries in nova/api with binary-based
* Remove service\_get\_by\_args from the DB API
* Remove usage of db.service\_get\_by\_args
* Make unit tests inherit from test.NoDBTestCase
* Fixed incorrect behavior of method sqlalchemy.api.\_check\_instance\_exists
* Remove db layer hard-code permission checks for migrations\_get\*
* vmware: support both hard and soft reboot
* xenapi: Fix session tests leaking state
* libvirt: Cleanup snapshot tests
* Change instance disappeared during destroy from Warning to Info
* Replace instance flavor delete hacks with proper usage
* Add delattr support to base object
* Use flavor stored with instance in vmware driver
* Use flavor stored with instance in ironic driver
* Modify AggregateAPI methods to call the Scheduler client methods
* Create Scheduler client methods for aggregates
* Add update and delete \_aggregate() method to the Scheduler RPC API
* Instantiate aggregates information when HostManager is starting
* Add equivalence operators to NUMACell and NUMAPagesTopology
* Adds x509 certificate keypair support
* Better round trip for RequestContext<->Dict conversion
* Make scheduler client reporting use ComputeNode object
* Prevent update of ReadOnlyDict
* Copy the default value for field
* neutron: add logging during nw info\_cache refresh when port is gone
* Add info for Standalone EC2 API to cut access to Nova DB
* VMware: Fix disk UUID in instance's extra config
* Update config generator to use new style list\_opts discovery
* Avoid KeyError Exception in extract\_flavor()
* Imported Translations from Transifex
* Updated from global requirements
* Move policy enforcement into REST API layer for v2.1 create backup
* Truncate encoded instance sys meta to 255 or less
* Adds keypair type in nova-api
* Switch nova.virt.vmwareapi.\* to instance dot notation
* Allow disabling the evacuate cleanup mechanism in compute manager
* Change queries for network services to use binary instead of topic
* Add Service.get\_by\_host\_and\_binary and ServiceList.get\_by\_binary
* Compute: update config drive settings on instance
* Fix docstrings for assorted methods
* Config drive: update help text for force\_config\_drive
* libvirt-numa.rst: trivial spelling fixes
* Ensure bridge is deleted with brctl delbr
* create noauth2
* enhance flavor manage functional tests
* Add API Response class for more complex testing
* Add more log info around 'not found' error
* Remove extended addresses from V2.1 update & rebuild
* Switch nova.virt.hyperv.\* to instance dot notation
* Revert instance task\_state when compareCPU fails
* Libvirt: Fix error message when unable to preallocate image
* Switch nova.virt.libvirt.\* to instance dot notation
* Add nova-manage commands for the new api database
* Add second migrate\_repo for cells v2 database migrations
* Updated from global requirements
* Force LANGUAGE=en\_US in test runs
* neutron: consolidate common unbind ports logic
* Sync oslo policy change
* Remove compute\_node field from service\_get\_by\_compute\_host
* Fix how the Service object is loading the compute\_node field
* Remove compute\_node from service\_get\_by\_cn Cells API method
* Remove want\_objects kwarg from nova.api.openstack.common.get\_instance
* Switch nova.virt.\* to use the object dot notation
* add string representation for context
* Remove db layer hard-code permission checks for migration\_create/update
* Disables pci plugin for v2.1 & microversions
* Fix logic for checking if az can be updated
* Add obj\_alternate\_context() helper
* libvirt: remove libvirt import from tests so we only use fakelibvirt
* capture stdout and logging for OSAPIfixture test
* remove unused \_authorize\_context from security\_group\_default\_rules.py
* Switch nova.context to actually use oslo.context
* Fixed incorrect indent of test\_config\_read\_only\_disk
* Fixed incorrect assertion in test\_db\_api
* Remove TranslationFixture
* Replace fanout to False for CastAsCall fixture
* Make ConsoleAuthTokensExtensionTestV21 inherit from test.NoDBTestCase
* Remove db layer hard-code permission checks for task\_log\_get\*
* Remove db layer hard-code permission checks for task\_log\_begin/end\_task
* Api: remove useless compute api from cells
* Remove db layer hard-code permission checks for service\_create
* Imported Translations from Transifex
* Change v3 import to v21 in 2.1 api unit test
* Fix NotImplementedError handling in interfaces API
* Support specifying multiple values for aggregate keys
* Remove attach/detach/swap from V2.1 extended\_volumes
* Make metadata cache time configurable
* Remove db layer hard-code permission checks for fixed\_ip\_disassociate\_all\_by\_timeout
* Move policy enforcement into REST API layer for v2.1 api assisted\_volume\_snapshots
* Fix tiny typo in api microversions doc
* Fixes Hyper-V: configdrive is not migrated to destination
* ensure that ram is >= 1 in random flavor creation
* Fixes 500 error message and traces when no free ip is left
* db: Add index on fixed\_ips updated\_at
* Display host chosen for instance by scheduler
* PYTHONHASHSEED bug fix in test\_utils
* fixed tests in test\_vm\_util to work with random PYTHONHASHSEED
* Add microversion allocation to devref
* Remove OS-EXT-IPS attributes from V2.1 server ips
* Remove 'locked\_by' from V2.1 extended server status
* Remove 'id' from V2.1 update quota\_set resp
* Fix bad exception logging
* VMware: Ensure compute\_node.hypervisor\_hostname is unique
* Inherit exceptions correctly
* Remove en\_US translation
* Move policy enforcement into REST API layer for v2.1 cloudpipe
* Move policy enforcement into REST API layer for v2.1 security\_group\_default\_rules
* linux\_net.metadata\_accept(): IPv6 support
* Enforce in REST API layer on v2.1 api remote consoles
* Remove accessips attribute from V2.1 POST server resp
* Move policy enforcement into REST API layer for v2.1 floating\_ip\_dns
* Fix bad interaction between @wsgi.extends and @wsgi.api\_version
* Enforce in REST API layer on v2.1 shelve api
* Move policy enforcement into REST API layer for v2.1 api evacuate
* Add manual version comparison to microversion devref document
* Switch to uuidutils from oslo\_utils library
* Add developer documentation for writing V2.1 API plugins
* Convert nova.compute.\* to use instance dot notation
* Better power\_state logging in \_sync\_instance\_power\_state
* Use instance objects in fping/instance\_actions/server\_metadata
* Fix misspelled words in nova
* Fix KeyErrors from incorrectly formatted NovaExceptions in unit tests
* Move policy enforcement into REST API layer for v2.1 floating ips
* Switch nova.network.\* to use instance dot notation
* Revert: Switch off oslo.\* namespace check temporarily
* Move policy enforcement into REST API layer for v2.1 networks related
* Remove db layer hard-code permission checks for v2.1 agents
* Move v2.1 virtual\_interfaces api policy enforcement into REST API layer
* fix 'Empty module name' exception attaching volume
* Use flavor stored with instance in libvirt driver
* Handle 404 in os-baremetal-nodes GET
* API: Change the API cpu\_info to be meaningful
* Updated from global requirements
* Make compute unit tests inherit from test.NoDBTestCase
* Request objects in security\_groups api extensions
* Reuse is\_int\_like from oslo\_utils
* VMware: fix network connectivity problems
* Move policy enforcement into REST API layer for v2.1 admin password
* Fix the order of base classes in migrations test cases
* Libvirt: Allow missing volumes during delete
* Move policy enforcement into REST API layer for v2.1 server\_diagnostics
* Fix wrong log when reschedule is disabled
* Replace select-for-update in fixed\_ip\_associate
* Move policy enforcement into REST API layer for v2.1 fping
* Consolidate use of api request version header
* Copy image from source host when ImageNotFound
* VMware: update get\_available\_datastores to only use clusters
* Add useful debug logging when policy checks fail
* Remove unused conductor methods
* Call notify\_usage\_exists() without conductor proxying
* Updated from global requirements
* Make notifications use BandwidthUsageList object
* libvirt: Fix migration when image doesn't exist
* Fix a typo in devref document for api\_plugin
* console: add unit tests for baseproxy
* libvirt: log host capabilities on startup
* Allow configuring proxy\_host and proxy\_port in nova.conf
* Fixes novncproxy logging.setup()
* Add descriptions to some assertBooleans
* Remove update\_store usage
* Enforce policy checking in REST API layer for v2.1 server\_password
* Add methods that convert any volume BDM to driver format
* Split scheduler weight test on ram
* Split scheduler weight test on metrics
* Split scheduler weight test on ioops
* Fix 500 when deleting a non-existing ec2 security group
* Remove backwards compat oslo.messaging entries from setup.cfg
* Change utils.vpn\_ping() to return a Boolean
* Enable retry when there are multiple force hosts/nodes
* Use oslo.log
* switch LOG.audit to LOG.info
* Catch FlavorExtraSpecsNotFound in V2 API
* tests: remove duplicate keys from dictionary
blkid rootwrap filter * Fix idempotency of migration 269 * objects: fix issue in test cases for instance numa * VMware: Accept image and block device mappings * nova flavor manage functional test * extract API fixture * Fix V2 hide server address functional tests * Remove unused touch command filter * Add a test for block\_device\_make\_list\_from\_dicts * Move policy enforcement into REST API layer for v2.1 floating\_ip\_pools * libvirt: address test comments for zfcp volume driver changes * libvirt: Adjust Nova to support FCP on System z systems * Fix BM nodes extension to deal with missing node properties * VMware: update the support matrix for security groups * Ignore 'dynamic' addr flag on gateway initialization * Adds xend to rootwrap.d/compute.filters * Create volume in the same availability zone as instance * Wrap IPv6 address in square brackets for scp/rsync * fake: fix public API signatures to match virt driver * Added retries in 'network\_set\_host' function * Use NoDBTestCase instead of TestCase * Change microversion header name * VMware: ensure that resize treats CPU limits correctly * Compute: pass flavor object to migrate\_disk\_and\_power\_off * extract method from fc volume discovery * Set instance NUMA topology on HostState * Support live-migrate of instances in PAUSED state * Fix DB access by FormatMappingTestCase * api: report progress when instance is migrating * libvirt: proper monitoring of live migration progress * libvirt: using instance like object * libvirt: convert tests from mox to mock * XenAPI: Fix data loss on resize up * Delete instance files from dest host in revert-resize * Pass the capabilities to ironic node instance\_info * No need to re-fetch instance with sysmeta * Switch nova.api.\* to use instance dot notation * Objectify calls to service\_get\_by\_compute\_host * Refactor how to remove compute nodes when service is deleted * Move policy enforcement into REST API layer for v2.1 admin actions * Contrail VIF Driver changes for Nova-Compute * libvirt: Fix slightly misleading parameter name, validate param * libvirt: cleanup setattr usage in test\_host * libvirt: add TODOs for removing libvirt attribute stubs * Expand try/except for get\_machine\_ips * Switch nova.compute.manager to use instance dot notation * libvirt: stub out VIR\_CONNECT\_LIST\_DOMAINS\_INACTIVE * libvirt: stub out VIR\_SECRET\_USAGE\_TYPE\_ISCSI for older libvirt * Change calls to service information for Hypervisors API * Add handling for offlined CPUs to the nova libvirt driver * Make compute API create() use BDM objects * Remove redundant tearDown from ArchiveTestCase * libvirt: switch LibvirtConnTestCase back to NoDBTestCase * Replace usage of LazyPluggable by stevedore driver * Don't mock time.sleep with None * Libvirt: Support ovs plug in vhostuser vif * Removed duplicate key from dictionary * Fixes Attribute Error when trying to spawn instance from vhd on HyperV * Remove computenode relationship on service\_get * Remove nested service from DB API compute\_nodes * libvirt: Use XPath instead of loop in \_get\_interfaces * fixed tests to work with random PYTHONHASHSEED * Imported Translations from Transifex * Make the method \_op\_method() public * Quiesce boot from volume instances during live snapshot * Fix "Host Aggregate" section of the Nova Developer Guide * network: Fix another IPv6 test for Mac * Pre-load default filters during scheduler initialization * Libvirt: Gracefully Handle Destroy Error For LXC * libvirt: stub VIR\_CONNECT\_LIST\_DOMAINS\_ACTIVE for older
libvirts * Fix VNC access when reverse DNS lookups fail * Remove now useless requirements wsgiref * Add JSON schema for v2.1 add network API * Handle MessagingException in unshelving instance * Compute: make use of dot notation for console access * Compute: update exception handling for spice console * Add missing api samples for floating-ips api(v2) * Move v2.1 rescue api policy enforcement into REST API layer * Move policy enforcement into REST API layer for v2.1 ips * Move policy enforcement into REST API layer for v2.1 multinic * Move policy enforcement into REST API layer for v2.1 server\_metadata * VMware: fix resize of ephemeral disks * VMware: add in a utility method for detaching devices * VMware: address instance resize problems * Fixes logic in compute\_node\_statistics * Cover ListOfObjectField for relationship test * Replace oslo-incubator with oslo\_context * Libvirt: add in unit tests for driver capabilities * Ironic: add in unit tests for driver capabilities * Tests: Don't require binding to port 4444 * libvirt: fix overly strict CPU model comparison in live migration * Libvirt: vcpu\_model support * IP filtering is not accurate when used with limit * Change how the API is getting a list of compute nodes * Change how Cells are getting the list of compute nodes * Change how HostManager is calling the service information * Move scheduler.host\_manager to use ComputeNode object * patch out nova libvirt driver event thread in tests * Change outer to inner join in fixed IP DB API func * Small cleanup in pci\_device\_update * Remove useless NotFound exception catching for v2/v2.1 fping * V2.1 cleanup: Use concrete NotFound exception instead of generic * Drop deprecated namespace for oslo.rootwrap * Add vcpu\_model to instance object * Pass instance primitive to instance\_update\_at\_top() * Adds infrastructure for microversioned api samples * Libvirt: Support for generic vhostuser vif * Pull singleton config check cruft out of SG API * hacking: Got rid of unnecessary TODO * Remove unused function in test * Remove unused function * hardware: fix reported host mempages in numa cell * objects: fix numa obj relationships * objects: remove default values for numa cell * Move policy enforcement into REST API layer for v2.1 suspend/resume server * Move policy enforcement into REST API layer for v2.1 api console-output * Move policy enforcement into REST API layer for v2.1 deferred\_delete * Move migrate-server policy enforcement into REST API * Add API schema for v2.1 tenant networks API * Move policy enforcement into REST API layer for v2.1 lock server * Libvirt: cleanup rescue lvm when unrescue * Sync simple\_tenant\_usage V2.1 exception with V2 and add test case * IP filtering can include duplicate instances * Add recursive flag to obj\_reset\_changes() * Compute: use dot convention for \_poll\_rescued\_instances * Add tests for nova-manage vm list * libvirt: add libvirt/parallels to hypervisor support matrix * Compute: update reboot\_instance to use dot instance notation * Fix incorrect compute api config indentation * libvirt: fix emulator thread pinning when doing strict CPU pinning * libvirt: rewrite NUMA topology generator to be more flexible * libvirt: Fix logically inconsistent host NUMA topology * libvirt: utils now canonicalize the image architecture property * A couple of grammar fixes in help strings * Implement api samples test for os-baremetal-nodes Part 2 * Compute: use consistent instance dot notation * Log warning if CONF.my\_ip is not found on system * libvirt: remove
\_destroy\_instance\_files shim * virt: Fix interaction between disk API tests * network: Fix IPv6 tests for Mac * Use dot notation on instance object fields in \_delete\_instance * libvirt: memnodes should be set to a list instead of None * Cleanup add\_fixed\_ip\_to\_instance tests * Cleanup test\_instance\_dns * Fix detach\_sriov\_ports to get context so it can get image metadata * Implement api samples test for os-baremetal-nodes * Fix description of parameters in nova functions * Stop making the database migration backend lazy pluggable * Updated from global requirements * Libvirt: Created Nova driver for Quobyte * Adds keypair type database migration * libvirt: Enable serial\_console feature for system z * Make tests use sha256 as openssl default digest algorithm * Improved performance of db method network\_in\_use\_on\_host * Replace select-for-update in floating\_ip\_allocate\_address * Move policy enforcement into REST API layer for v2.1 pause server * Libvirt: update log message * Update usage of exception MigrationError * Extract preserve ephemeral on rebuild from servers plugin * VMware: update get\_vm\_resize\_spec interface * VMware: Enable spawn from OVA image * Raise bad request for missing 'label' in tenant network * CWD is incorrectly set if exceptions are thrown * VMware: add disk device information to VmdkInfo * Use controller methods directly in test\_rescue * Call controller methods directly in test\_multinic * Add version specific test cases for microversion * Change v2.1 API status to CURRENT * Remove wsgi\_app usage from test\_server\_actions * Change some v2.1 extension names to v2 * Add VirtCPUModel nova objects * Add enum fieldtype field * Convert v2.1 extension\_info to show V2 API extension list * Remove compatibility check for ratelimit\_v3 * Keep instance state if ssh failed during migration * Cleanup and removal of unused code in scheduler unit tests * Fix incorrect use of mock in scheduler test * Make test re-use HTTPRequest part 5 * Refactor test\_filter\_scheduler use of fakes * consolidate set\_availability\_zones usage * Warn about zookeeper service group driver usage * Updated from global requirements * Update matrix for kvm on ppc64 * Switch off oslo.\* namespace check temporarily * Switch to using oslo\_\* instead of oslo.\* * Adjust object\_compat wrapper order * Add more tests for tenant network API * Sync with oslo-incubator * Make compute use objects usage 'best practice' * Enable BIOS bootmenu on AMI-based images 2015.1.0b2 ---------- * libvirt: fix console device for system z for log file * Fix references to non-existent "pause" section * libvirt: generate proper config for PCS containers * libvirt: add ability to add file and block based filesystem * libvirt: add ploop disks format support * Fix improper use of Stevedore * libvirt: Fail when live block migrating instance with volumes * Add notification for suspend * Add API schema for v2.1 networks API * Remove v1.1 from v2.1 extension description * Add \_LW for missing translations * Treat LOG.warning and LOG.warn the same * Add JSON schema for v2.1 'quota\_class' API * Add missing setup.cfg entry for os-user-data plugin * Add api\_version parameter for API sample test base class * Add suggestion to dev docs for debugging odd test failures * Add max\_concurrent\_builds limit configuration * Fixes Hyper-V configdrive network injection issue * Update Power State after deleting instance * Remove temporary power state variables * Make obj\_set\_defaults() more useful * Adds devref for API
Microversions * PCI NUMA filtering * Ensure publisher\_id is set correctly in notifications * libvirt: Use XPath instead of loop in \_get\_all\_block\_devices * libvirt: Use XPath instead of loop in get\_instance\_diagnostics * fix typo in rpcapi docstring * Fix conductor servicegroup joining when zk driver is used * Do not treat empty key\_name as None * Failed to discover when iscsi multipath and CHAP are both enabled * Fix network tests response code checking * Remove unused error from v2.1 create server * Fix corrupting the object repository with test instance objects * Change cell\_type values in nova-manage * Fix bad mocking of methods on Instance * Updated from global requirements * VMware: fix resume\_state\_on\_host\_boot * Fix cells rpc connection leak * Remove redundant assert of mock volume save call * Don't create block device mappings in the API cell * Add formal doc recording hypervisor feature capability matrix * Ironic: Adds config drive support * libvirt-xen: Fix block device prefix and disk bus * libvirt-xen: don't request features ACPI or APIC with PV guest * Make EC2 compatible with current AWS CLI * libvirt: remove pointless loop after live migration finishes * Remove useless argparse requirement * add asserts of DriverBlockDevice save call parameters * fix call of DriverVolumeBlockDevice save in swap\_volume * Use a workarounds group option to disable live snapshots * libvirt: Add support for --interface option in iscsiadm * Cells: Fix service\_get\_by\_compute\_host * Expand instances project\_id index to cover deleted as well * Remove unused conductor parameter from get\_host\_availability\_zone() * Fixes Hyper-V instance snapshot * Add more status when doing \_poll\_rebooting\_instances * Adds barbican keymgr wrapper * libvirt: avoid setting the memnodes when it's not a supported option * Make code compatible with v4 auth and workaround webob bug * Fix likely undesired use of redirection * Save bdm in swap\_volume * doc: document manual testing procedure for serial-console * nova net-delete network is not informative enough * Improvement in 'network\_set\_host' function * Fix typo in nova/virt/disk/vfs/localfs.py * Fix expected error in V2.1 add network API * libvirt: fix failure when attaching volume to iso instance * Add log message to is\_luks function * Access migration fields like an object in finish\_revert\_resize * Remove unused migration parameter from \_cleanup\_stored\_instance\_types * object: serialize set to list * Fix leaking exceptions from scheduler utils * Adds tests for Hyper-V LiveMigration utils * Adds tests for Hyper-V VHD utils * libvirt: fix missing block device mapping parameter * libvirt: add QEMU built-in iSCSI initiator support * Add update\_or\_create flag to BDM objects create() * Typos fixed * Remove unused method from test\_metadata * libvirt: Support iSCSI live migration for different iSCSI target * Add JSON schema for "associate\_host" API * Add migrate\_flavor\_data to nova-manage * Adds logging to ComputeCapabilitiesFilter failures * Add flavor fields to Instance object * Fix up some instance object creation issues in tests * Fix misspellings in hardware.py * VMware: add in utility methods for copying and deleting disks * Apply v2.1 API to href of version API * Revert "Raise if sec-groups and port id are provided on boot" * libvirt: always pass image\_meta when getting guest XML * libvirt: assume image\_meta is non-None in blockinfo module * libvirt: always pass image meta when getting disk info from bdm * Calling the superclass'
\_\_init\_\_ function is optional * Enforce DB model matches results of DB migrations * Add missing foreign keys for sqlite * Fix an indentation in server group api samples template * Allow instances to attach to shared external nets * Handle ironic\_client non-existent case * Cells: Record initial database split in devref * Use a workarounds option to disable rootwrap * virt: Fix images test interaction * libvirt: add parallels virt\_type * Convert nova-manage list to use Instance objects * Create a 'workarounds' config group * Updated from global requirements * don't use exec cat when we can use read * don't assert\_called\_once\_with with a real time * Network: correct VMware DVS port group name lookup * Refactor ComputeCapabilitiesFilter as bugfix preparation * libvirt: Set SCSI as the default cdrom bus on System z * Adds common policy authorizer helper functions for Nova V2.1 API * Adds skip\_policy\_check flag to Compute/Network/SecurityGroup API * Make test re-use HTTPRequest part 4 * libvirt: update uri\_whitelist in fakelibvirt.Connection * Revert "Adds keypair type database migration" * Support for ext4 as default filesystem for ephemeral disks * Raise NotFound if attach interface with invalid net id or port id * Change default value of multi\_instance\_display\_name\_template * Check for LUKS device via 'isLuks' subcommand * disk: use new vfs method and option to extend * Replace select-for-update in fixed\_ip\_associate\_pool * Remove unused content\_type\_params() * libvirt: always pass image meta when getting disk mapping * libvirt: always pass image meta when getting disk info * Add API schema for v2.1 server reboot actions * objects: fix typo in changelog of compute\_node * Add API schema for v2.1 'removeFloatingIp' * Add API schema for v2.1 'addFloatingIp' * Add parameter\_types.ip\_address for cleanup * Reply with a meaningful exception when ports are over the quota limit * Adds keypair type database migration * A minor change of CamelCase parameter * Imported Translations from Transifex * Remove N331 hacking rules * GET details REST API next link missing 'details' * Add missing indexes in SQLite and PostgreSQL * libvirt: cleanup warning log formatting in \_set\_host\_enabled * Revert temporary hack to monkey patch the fake rpc timeout * Remove H238 comment from tox.ini * libvirt: use image\_meta when looking up default device names * Fix bdm transformation for volume backed servers * Removed host\_id check in ServersController.update * Fix policy validation in JSONSchema * Adds assert\_has\_no\_errors check * Removed useless method \_get\_default\_deleted\_value * virt: make tests pass instance object to get\_instance\_disk\_info * libvirt: rename conn variable in LibvirtConnTestCase * Raise if sec-groups and port id are provided on boot * Begin using ironic's "AVAILABLE" state * Transform IPAddress to string when creating port * Break base service group driver class out from API * Remove unused \_get\_ip\_and\_port() * Updated from global requirements * Add method for getting the CPU pinning constraint * libvirt: Consider CPU pinning when booting * Make ec2/cloud.py use get\_instance\_availability\_zone() helper * HACKING.rst: Update the location of unit tests' README.rst * Remove unused method log\_db\_contents * Make use of controller method in test\_flavor\_manage * libvirt: Use XPath instead of loop in \_get\_disk\_xml * Avoid bdms db call when cleaning deleted instance * Ignore warnings from contextlib.nested * Cleanup bad JSON files * Switch to oslo.vmware API for 
reading and writing files * Make test re-use HTTPRequest part 1 * Make test re-use HTTPRequest part 2 * Make test re-use HTTPRequest part 3 * Remove HTTPRequestV3 in scheduler\_hints test * Hyper-V: Adds instance missing metrics enabling * ephemeral file names should reflect fs type and mkfs command * Reschedule queries to nova-scheduler after a timeout occurs * libvirt: remove use of utils.instance\_sys\_meta * libvirt: remove use of fake\_instance.fake\_instance\_obj * Remove redundant catch for InstanceNotFound * Add to\_dict() method to PciDevicePool object * libvirt: rename self.conn in LibvirtVolume{Snapshot|Usage}TestCase * libvirt: rename self.libvirtconnection in LibvirtDriverTestCase * libvirt: convert LibvirtConnTestCase to use fakelibvirt fixture * Remove unused network rpcapi calls * Added hacking rule for assertEqual(a in b, True/False) * Add API schema for v2.1 createImage API * Fix errors in string formatting operations * libvirt: Create correct BDM object type for conn info update * Fixes undocumented commands * Make \_get\_instance\_block\_device\_info preserve root\_device\_name * Convert tests to NoDBTestCase * Fixes Hyper-V to log a clear error message * Provide compatibility for db.compute\_node\_statistics * Update network resource when shelve offload instance * Update network resource when rescheduling instance * libvirt: Expanded test libvirt driver * Adds "file" disk driver support to Xen libvirt driver * Virt: remove unused 'host' parameter from get\_host\_uptime * Don't translate logs in tests * Don't translate exceptions in tests * disk/vfs: introduce new option to setup * disk/vfs: introduce new method get\_image\_fs * initialize objects with context in block device * Remove unused controller instance in test\_config\_drive * Fix v2.1 os-tenant-networks/networks API * Use controller methods in test\_floating\_ips * Cleanup in test\_admin\_actions * Calling controller methods directly in test\_snapshots * Add checking changePassword None in \_action\_change\_password(v2) * Add more exception handling when changing server password (v2) * Share admin\_password unit test between V2 & V2.1 * Share server\_actions unit test between V2 & V2.1 * Fix server\_groups schema on v2.1 API * Implement a safe copy.copy() operation for Nova models * clean up extension loading logging * Hyper-V: Fixes wrong hypervisor\_version * console: introduce baseproxy and update consoles cmd * libvirt: update get\_capabilities to Host class * libvirt: add get\_connection doc string in Host class * Enable check for H238 rule * Call ComputeNode instead of Service for getting the nodes * Remove mox dependency * Fix JSONFilter docs * libvirt: move \_get\_hypervisor\_\* functions to Host class * libvirt: don't turn time.sleep into a no-op in tests * Adds Hyper-V generation 2 VMs implementation * VMware: ensure that correct disk details are returned * Improve api-microversion hacking check * Add unit test for getting project quota remains * Fix py27 gate failure - test\_create\_instance\_both\_bdm\_formats * Reduce complexity of the \_get\_guest\_config method * Cleanups in preparation of flavor attributes on Instance * Add flavor column to instance\_extra table * docs: document manual testing procedure for NUMA support * Add setup/cleanup\_instance\_network\_on\_host api for neutron/nova-network * Remove useless requirements * Make get\_best\_cpu\_topology consider NUMA requested CPU topology * Make libvirt driver expose sibling info in NUMA topology * VMware: snapshot as stream-optimized
image * VMware: refactor utility functions related to VMDK * Get settable user quota maximum correctly * Add missing policy for nova in policy.json * Fix typo in nfs\_mount\_options option description * increase fake rpc POLL\_TIMEOUT to 0.1s * work around for until-failure * Fix inconsistencies in the ComputeNode object about service * Fixed incorrect initialization of availability zone tests * Revert "initialize objects with context in block device" * Fix wrong instructions for rebuilding API samples * Performance: leverage dict comprehension in PEP-0274 * Sync with latest oslo-incubator * initialize objects with context in VirtualInterface object tests * initialize objects with context in Tag object tests * initialize objects with context in Service object tests * Fixes Hyper-V boot from volume live migration * Expansion of matching XML strings logic * Xenapi: Attempt clean shutdown when deleting instance * don't use debug logs for object validation * create some unit of work logging in n-net * Make service-update work in API cells * oslo: remove useless modules * Do not use deprecated assertRaisesRegexp() * Honor shared storage on resize revert * Stub out instance action events in test\_compute\_mgr * Remove unused instance\_group\_metadata\_\* DB APIs * initialize objects with context in block device * Reduce the complexity of the create() method * speed up tests setting fake rpc polling timeout * xenapi: don't send terminating chunk on errors * Make service-delete work in API cells * Add version as request param for fake HTTPRequest * Fix OverQuota headroom KeyError in nova-network allocate\_fixed\_ip * Updated from global requirements * Make numa\_usage\_from\_instances consider CPU pinning * Cleanup in admin\_actions(v2.1api) * Cache ironic-client in ironic driver * tests: fix handling of TIMEOUT\_SCALING\_FACTOR * libvirt: remove/revert pointless logic for getVersion call * libvirt: move capabilities helper into host.py * libvirt: move domain list helpers into Host class * libvirt: move domain lookup helpers into Host class * Fix live migration RPC compatibility with older versions * Added \_get\_volume\_driver method in libvirt driver * fix wrong file path in docstring of hacking.checks * Make ec2 auth support v4 signature format * VMware: driver not handling port other than 443 * libvirt: use XPath in \_get\_serial\_ports\_from\_instance * Remove non existent rule N327 from HACKING.rst * Replace Hacking N315 with H105 * Enable W292 * Fix and re-gate on H306 * Move to hacking 0.10 * Fix nova-manage shell ipython * Make service-list output consistent * Updated from global requirements * Make V2.1 servers filtering (--tenant-id) same as V2 * Fix failure rebuilding instance after resize\_revert * Move WarningsFixture after DatabaseFixture so emit once * libvirt: Use arch.from\_host instead of platform.processor * Cells: Improve invalid hostname handling * Fix obj\_to\_primitive() expecting the dict interface methods * Remove unused XML\_WARNING variable in servers API * Guard against missing X-Instance-ID-Signature header * libvirt: not setting membacking when mempages are empty host topology * remove pylint source code annotations * Cleanup XML for api samples tests for Nova REST API * remove all traces of pylint testing infrastructure * initialize objects with context in SecurityGroupRule object tests * initialize objects with context in SecurityGroup object tests * initialize objects with context in base object tests * initialize objects with context in Migration object tests * 
initialize objects with context in KeyPair object tests * initialize objects with context in InstanceNUMATopology object tests * initialize objects with context in InstanceGroup object tests * initialize objects with context in InstanceFault object tests * Fix error message when no IP addresses are available * Update WSGI SSL IPv6 test and SSL certificates * Catch more specific exception in \_get\_power\_state * Add WarningsFixture to only emit DeprecationWarning once in a test run * Maintain the creation order for vifs * Update docstring for wrap\_exception decorator * Doc: Adds python-tox to Ubuntu dependencies * Added hacking rule for assertTrue/False(A in B) * ironic: use instance object in driver.py * Add LibvirtGPFSVolumeDriver class * Make pagination work with deleted marker * Return 500 when unexpected exception is raised during live migrate v2 * Remove unneeded LOG.exception on attach\_interface * Make LOG exception use format\_message * make IptablesRule debug calls meaningful * Switch to tempest-lib's packaged subunit-trace * Update eventlet API in libvirt driver * initialize objects with context in Instance object tests * initialize objects with context in Flavor object tests * initialize objects with context in FixedIP object tests * initialize objects with context in EC2 object tests * initialize objects with context in ComputeNode object tests * initialize objects with context in BlockDeviceMapping object tests * Nuke XML support from Nova REST API - Phase 3 * Return floating\_ip['fixed\_ip']['instance\_uuid'] from neutronv2 API * Add handling of BadRequest from Neutron * Add numa\_node to PCIDevice * Nuke XML support from Nova REST API - Phase 2 * Remove unused methods in nova utils * Use get\_my\_ipv4 from oslo.utils * Add cpu pinning check to numa\_fit\_instance\_to\_host * Add methods for calculating CPU pinning * Remove duplicated policy check at nova-network FlatManager * boot instance with same net-id for multiple --nic * XenAPI: Check image status before uploading data * XenAPI: Refactor message strings to remove locals * Cellsv2 devref addition * Nuke XML support from Nova REST API - Phase 1 * hardware: fix numa topology from image meta data * Support both list and dict for pci\_passthrough\_whitelist * libvirt: Add balloon period only if it is not None * Don't assume contents of values after aggregate\_update * Add API schema for server\_groups API * Remove unused function \_get\_flavor\_refs in flavor\_access extension * Make rebuild server schema 'additionalProperties' False * Tests with controller methods in test\_simple\_tenant\_usage * Convert wsgi call to controller in test\_virtual\_interfaces * Fix the comment of host index api * Imported Translations from Transifex * Use controller methods directly in test\_admin\_password * Drop workarounds for python2.6 * VMware: add in utility method for copying files * Remove lock files when removing libvirt images * Change log when set\_admin\_password failed * Catch InstanceInvalidState for start/stop action * Unshelving a volume backed instance doesn't work * Cache empty results in libvirt get\_volume\_connector * VMware: improve the performance of list\_instances * VMware: use power\_off\_instance instead of power\_off * VMware: refactor unit tests to use \_get\_info * libvirt: clean instance's directory when block migration fails * Remove unused scheduler driver methods * Reuse methods from netutils * VMware: make use of oslo.vmware logout * Remove unused directory nova/tests/unit/bundle * Prevent new code from using
namespaced oslo imports * Move metadata filtering logic to utils.py * Make test\_consoles directly call controller methods * Catch expected exceptions in remote console controller * Make direct call to controller test\_server\_password * Cleanup in test\_keypairs not to use wsgi\_app * Add ipv6 support to fake network models * Add host field when missing from compute\_node * Remove condition check for python2.6 in test\_glance * Cleanup in test\_availability\_zone not to use wsgi\_app * Call controller methods directly in test\_evacuate * VMware: Use datastore\_regex for disk stats * Add support for clean\_shutdown to resize in compute api layer * Fix Instance relationships in two objects * objects: remove NovaObjectDictCompat from Tag object * libvirt: introduce new helper for getting libvirt domain * libvirt: remove pointless \_get\_host\_uuid method * libvirt: pass Host object into firewall class * Cleanup in server group unit tests * Enhance EvacuateHostTestCase test cases * Call controller methods directly in test\_console\_output * Make direct call to controller in test\_console\_auth\_tokens * Populates retry info when unshelving offloaded instance * Catch NUMA related exceptions for create server v2.1 API * Remove unnecessary cleanup from ComputeAPITestCase * extract RPC setup into a fixture 2015.1.0b1 ---------- * Fix recent regression filling in flavor extra\_specs * remove detail method from LimitsController * Remove instance\_uuids from request\_spec * libvirt: remove unused get\_connection parameter from VIF driver * libvirt: sanitize use of mocking in test\_host.py * libvirt: convert test\_host.py to use FakeLibvirtFixture * libvirt: introduce a fixture for mocking out libvirt connections * Expand valid resource name character set * Set socket options in correct way * Make resize server schema 'additionalProperties' False * Make lock file use same function * Remove unused db.api.dnsdomain\_list * Remove unused db.api.instance\_get\_floating\_address * Remove unused db.api.aggregate\_host\_get\_by\_metadata\_key * Remove unused db.api.get\_ec2\_instance\_id\_by\_uuid * Join instances column before expecting it to exist * ec2: Change FormatMappingTestCase to NoDBTestCase * libvirt: enhance driver to configure guests based on hugepages * Fix ironic delete failing when flavor deleted * virt: pass instance object to block\_stats & get\_instance\_disk\_info * Add pci\_device\_pools to ComputeNode object * Handle invalid sort keys/dirs gracefully * hardware: determine whether a pagesize request is acceptable * objects: add method to verify requested hugepages * hardware: make get\_constraints return topology for hugepages * hardware: add method to return requested memory page size * Cleanup in ResourceExtension ALIAS(v2.1api) * Replace use of handle\_schedule\_error() with set\_vm\_state\_and\_notify() * Fix set\_vm\_state\_and\_notify passing SQLA objects to send\_update() * Imported Translations from Transifex * Libvirt: use strutils.bool\_from\_string * Use constant for microversions header name (cleanup) * Adds support for versioned schema validation for microversions api * Add support for microversions API special version latest * Adds API microversion response headers * Use osapi\_compute worker for api v2 service * initialize objects with context in Aggregate object tests * Replace the rest of the non-object-using test\_compute tests * Fix using anyjson in fake\_notifier * Fix a bug in \_get\_instance\_nw\_info() where we re-query for sysmeta * Corrects link to API
Reference on landing page * libvirt: disk\_bus setting is being lost when migration is reverted * libvirt: enable hyperv enlightenments for windows guests * libvirt: enhance to return avail free pages on cells * libvirt: move setting of guest features out into helper method * libvirt: add support for configuring hyperv enlightenments in XML * libvirt: change representation of guest features * libvirt: add support for hyperv timer source with windows guests * libvirt: move setting of clock out into helper method * libvirt: don't pass a module import into methods * Reject non-existent mock assert calls * VMware: remove unused method in the fake module * Use oslo db concurrency to generate nova.conf.sample * Make instance\_get\_all\_\*() functions support the smart extra.$foo columns * Make cells send Instance objects in build\_instance() * Fix spelling error in compute api * objects: fix changed fields for instance numa cell * Hyper-V: Fix volume attach issue caused by wrong constant name * Move test\_extension\_info from V3 dir to V2.1 * Make create server schema 'additionalProperties' False * Make update server schema 'additionalProperties' False * Updated from global requirements * Update devref with link to kilo priorities * Add vision of nova rest API policy improvement in devref * objects: remove dict compat support from all XXXList() objects * objects: stop conductor manager using dict field access on objects * objects: allow creation of objects without dict item compat * Remove duplicated constant DISK\_TYPE\_THIN * Hyper-V: Fix retrieving console logs on live migration * Remove FlavorExtraSpecsNotFound catch in v3 API * Add API schema for v2.1 block\_device\_mapping\_v1 * Add API schema for v2.1 block\_device\_mapping extension * VMware: Support volume hotplug * fix import of oslo.concurrency * libvirt: set guest cpu\_shares value as a multiple of guest vCPUs * Make objects use the generalized backport scheme * Fix base obj\_make\_compatible() handling ListOfObjectsField * VMware: make use of oslo.vmware pbm\_wsdl\_loc\_set * Replace stubs with mocks * Updated from global requirements * use more specific error messages in ec2 keystone auth * Add backoff to ebtables retry * Add support for clean\_shutdown to rescue in compute api layer * Add support for clean\_shutdown to shelve in compute api layer * Add support for clean\_shutdown to stop in compute api layer * Extend clean\_shutdown to the compute rpc layer * initialize objects with context in compute manager * Add obj\_as\_admin() to NovaPersistentObject * Bump major version of Scheduler RPC API to 4.0 * Use model\_query from oslo.db * Only check db/api.py for session in arguments * Small cleanup in db.sqlalchemy.api.action\_finish() * Inline \_instance\_extra\_get\_by\_instance\_uuid\_query * libvirt: Convert more tests to use instance objects * virt: Convert more tests to use instance objects * virt: delete unused 'interface\_stats' method * objects: fix version changelog in numa * libvirt: have \_get\_guest\_numa\_config return a named tuple * simplify database fixture to the features we use * extract the timeout setup as a fixture * Stop neutron.api relying on base neutron package * Move pci unit test from V3 to V2.1 * Clarify point of setting dirname in load\_standard\_extensions * Remove support for deprecated header X\_ROLE * move all conf overrides to conf\_fixture * move ServiceFixture and TranslationFixture * extract fixtures from nova.test to nova.test.fixtures * libvirt: Fix NUMA memnode assignments to host cells *
libvirt: un-cruft \_get\_guest\_numa\_config * Make scheduler filters/weighers only load once * Refactor unit tests for scheduler weights * Fix cells RPC version 1.30 compatibility with dict-based Flavors * Objects: add in missing translation * network: Separate the translatable messages into different catalogs * objects: introduce numa pages topology as an object * check the configuration num\_vbd\_unplug\_retries * Doc: minor fixes to unit testing devref * Doc: Update i18n devref * VMware: remove flag in tests indicating VC is supported * virt: use instance object for attach in block\_device * VMware: clean up unit tests * Do not compute deltas when doing migration * Modify v21 alias name for compatibility with v2 * Clean bdms and networks after deleting shelved VM * move eventlet GREENDNS override to top level * fix pep8 errors that apparently slipped in * include python-novaclient in abandon policy * replace httplib.HTTPSConnection in EC2KeystoneAuth * Re-revert "libvirt: add version cap tied to gate CI testing" * ironic: remove non-standard info in get\_available\_resource dict * hyperv: use standard architecture constants for CPU model * xenapi: fix structure of data reported for cpu\_info * ironic: delete cpu\_info data from get\_available\_resource * vmware: delete cpu\_info data from get\_available\_resource * pci: move filtering of devices up into resource tracker * Libvirt: Fsfreeze during live-snapshot of qemu/kvm instances * libvirt: Fixes live migration for volume backed instances * Updated from global requirements * Remove unused db.api.fixed\_ip\_get\_by\_address\_detailed * VMware: Remove unused \_check\_if\_folder\_file\_exists from vmops * VMware: Remove unused \_get\_orig\_vm\_name\_label from vmops * VMware: enable a cache prefix configuration parameter * Hyper-V: attach volumes via SMB * etc: replace NullHandler by Python one * Add cn\_get\_all\_by\_host and cn\_get\_by\_host\_and\_node to ComputeNode * Add host field to ComputeNode * Reject unsupported image to local BDM * Make LVM lockfile name identical to RAW and Qcow * Fix invalid read\_deleted value in \_validate\_unique\_server\_name() * Adds hacking check for api\_version decorator * Parse "networks" attribute if loading os-networks * Fixes interfaces template identification issue * VMware: support passing flavor object in spawn * Libvirt: make use of flavor passed by spawn method * Virt: change instance\_type to flavor * rename oslo.concurrency to oslo\_concurrency * Support macvtap for vif\_type being hw\_veb * downgrade 'No network configured!'
to debug log level * Remove unnecessary timeutils override cleanup * Cleanup timeutils override in tests/functional/test\_servers * Downgrade quota exceeded log messages * libvirt: Decompose plug hybrid methods in vif * Remove unused cinder code * Libvirt: normalize numa cell ids * Remove needless workaround in utils module * Check for floating IP pool in nova-network * Remove except Exception cases * Fixes multi-line strings with missing spaces * Fix incorrectly formatted log message * libvirt: check value of need\_legacy\_block\_device\_info * Fixed typo in testcase and comment * Share server access ips tests between V2 & V2.1 * Workflow documentation is now in infra-manual * Add a validation format "cidr" * Use a copy of NEW\_NETWORK for test\_networks * Adds global API version check for microversions * Implement microversion support on api methods * Fix long hostname in dnsmasq * Correctly check that the 'options' object is empty * Assert order of DB index members * Updated from global requirements * object-ify flavors manager side of the RPC * Add CPU pinning data to InstanceNUMACell object * Enforce unique instance uuid in data model * libvirt: Handle empty context on \_hard\_reboot * Move admin\_only\_action\_common out of v3 directory (cleanup) * Compute: Add build\_instance hook in compute manager * SQL scripts should not manage transactions * Clear libvirt test on LibvirtDriverTestCase * Replace \`\_\` with \`\_LW\` in all LOG.warning part 4 * Replace \`\_\` with \`\_LW\` in all LOG.warning part 3 * Convert v3/v2.1 extension info to present v2 API format * Adds NUMA CPU Pinning object modeling * objects: Add several complex field types * VMware: ephemeral disk support * Imported Translations from Transifex * Fix disconnecting necessary iSCSI sessions issue * VMware: ensure that fake VM deletion returns a task * Compute: Catch binding failed exception while init host * libvirt: Fix domain creation for LXC * Xenapi: Allow volume backed instances to migrate * Break V2 XML Support * Libvirt: SMB volume driver * libvirt: Enable console and log for system z guests * libvirt: Set guest machine type on system z * Drop support for legacy server groups * Libvirt: Don't let get\_console\_output crash on missing console file * Hyper-V: Adds VMOps unit tests (part 1) * VMware: allow selection of vSAN datastores * libvirt: enhance config memory backing to handle hugepages * VMware: support spawn of stream-optimized image * libvirt: reuse defined method to return instance numa topology * Remove the volume api related useless policy rules * Error code for creating secgroup default rule * Don't mock external locks with Semaphore * Add shelve and unshelve info into devref doc * VMware: optimize resource pool usage * Added objects Tag and TagList * libvirt: video RAM setting should be passed in kb to libvirt * Switch to moxstubout and mockpatch from oslotest * Check that volume != root device during boot by image * Imported Translations from Transifex * Make a flavorRef validation strict * Add missing indexes from 203 migration to model * Fix type of uniq\_security\_groups0project\_id0name0deleted * Correct columns covered in migrations\_instance\_uuid\_and\_status\_idx * Add debug log for url not found * Optimize 'floating\_ip\_bulk\_create' function * factor out \_setup\_logging in test.py * extract \_setup\_timeouts in test.py * Scheduler: return a namedtuple from \_get\_group\_details * Use "is\_neutron\_security\_groups" check * Fix function name mismatch in test case *
VMware: prevent exception with migrate\_disk\_and\_power\_off * Fix URL mapping of image metadata PUT request * Compute: catch correct exception when host does not exist * Fix URL mapping of server metadata PUT request * objects: move numa host and cell to objects * objects: introduce numa objects * Code cleanup: quota limit validation * Add api validation schema for image\_metadata * Correct InvalidAggregateAction translation & format * Remove blanks before ':' * Port virtual-interfaces plugin to v2.1(v3) API * Catch ComputeServiceUnavailable on v2 API * GET servers API sorting REST API updates * Add API validation schema for volume\_attachments * Changed testcase 'test\_send\_on\_vm\_change' to test vm change * VMware: associate instance with storage policy * VMware: use storage policy in datastore selection * VMware: get storage policy from flavor * Share CreateBackup unit test between V2 & V2.1 * Share suspend\_server unit test between V2 & V2.1 * Share pause\_server unit test between V2 & V2.1 * Share lock\_server unit test between V2 & V2.1 * VMware: enable VMware driver to use new BDM format * Use admin only common test case in admin action unit test cases * objects: move virt numa instance to objects * Fix v2.1 API os-simple-tenant-usage policy * Set vm state error when raising unexpected exception in live migrate * Add delete not found unit testcase for floating\_ip api * Improve error return code of floating\_ips in v2/v2.1 api * Port floating\_ips extension to v2.1 * Removing the headroom calculation from db layer * Make multiple\_create unit tests shared between v2 and v2.1 * Set API version request information on request objects * Change definition of API\_EXTENSION\_NAMESPACE to method * Adds APIVersionRequest class for API Microversions * Updated from global requirements * remove test.ReplaceModule from test.py * Added db API layer to add instance tag-list filtering support * Added db API layer for CRUD operations on instance tags * Implement 'personality' plugin for V2.1 * Fix API samples/templates of multinic-add-fixed-ip * move the integrated tests into the functional tree * Sync latest from oslo-incubator * Fix use of conf\_fixture * Make network/\* use Instance.get\_flavor() * Make metadata server use Instance.get\_flavor() * Fix use of extract\_flavor() in hyper-v driver * Check server group policy on migrate/evacuate * VMware: fix exception when multiple compute nodes are running * Add API json schema for server\_external\_event(v2.1) * Port v2 quota\_classes extension to work in v2.1(v3) framework * Share unit test case for server\_external\_events api * Add API schema for v2.1/v3 scheduler\_hints extension * Make compute/api.py::resize() use Instance.get\_flavor() * Make get\_image\_metadata() use Instance.get\_flavor() * Fix instance\_update() passing SQLA objects to send\_update() * Fix EC2 volume attachment state at attaching stage * Fixes Hyper-V agent IDE/SCSI related refactoring * dummy patch to let tox functional pass * Remove Python 2.6 classifier * Make aggregate filters use objects * hardware: clean test to use well defined fake flavor * Enable pep8 on ./tools directory * objects: Add test for instance \_save methods * Error code for creating duplicate floating\_ip\_bulk * Use HTTPRequest instead of HTTPRequestV3 for v2/v2.1 tests * objects: make instance numa topology versioned in db * Clean up in test\_server\_diagnostics unit test case * Add "x-compute-request-id" to a response header * Prevent admin role leak in context.elevated * Hyper-V: Refactors
Hyper-V VMOps unit tests * Hyper-V: Adds Hyper-V SnapshotOps tests * Introduce a .z version element for backportable objects * Adds new RT unit tests for \_sync\_compute\_node * Fix for extra\_specs KeyError * Remove old Baremetal Host Manager * Remove unused network\_api.get\_instance\_uuids\_by\_ip\_filter() * Remove unused network\_api.get\_floating\_ips\_by\_fixed\_address() * add abandon\_old\_reviews script * Remove havana compat from nova.cert.rpcapi * Retry ebtables on race * Eventlet green threads not released back to pool * Hyper-V: Adds LiveMigrationOps unit tests * Hyper-V: Removes redundant utilsfactory tests from test\_hypervapi * Hyper-V: Adds HostOps unit tests * Make nova-api use quotas object for create\_security\_group * Make nova-api use quotas object for count() and limit\_check() * Add count and limit\_check methods to quota object * Make neutronapi get networks operations return objects * Hyper-V: fix tgt iSCSI targets disconnect issue * Network object: add missing translations * Adapting pylint runner to the new message format * Cleanup v2.1 controller inheritance * Fix load sequence issue when extension is loaded 2 times * Make get\_next\_device\_name() handle an instance object * Add obj\_set\_defaults() to NovaObject * Switch to oslo.config fixture * Remove VirtNUMAHostTopology.claim\_test() method * Instances with NUMA will be packed onto hosts * Make Instance.save() update numa\_topology * objects: remove VirtPageSize from hardware.py * VMware: enable backward compatibility with existing clusters * Make notifications use Instance.get\_flavor() * Make notify\_usage\_exists() take an Instance object * Convert hardware.VirtCPUTopology to nova object * Updated from global requirements * Replace \`\_\` with \`\_LW\` in all LOG.warning part 2 * compute: rename hvtype.py to hv\_type.py * Replace \`\_\` with \`\_LW\` in all LOG.warning part 1 * Replace \`\_\` with \`\_LE\` in all LOG.exception * Use opportunistic approach for migration testing * Replace \`\_\` with \`\_LI\` in all LOG.info - part 2 * Replace \`\_\` with \`\_LI\` in all LOG.info - part 1 * Add ALL-IN operator to extra spec ops * Sync server\_external\_events v2 to v2.1 Part 2 * Sync server\_external\_events v2 to v2.1 Part 1 * Fix connecting unnecessary iSCSI sessions issue * Add API validation schema for services v2.1 plugin * Fix exception handling in \_get\_host\_metrics() * initialize objects with context in network manager tests * initialize objects with context in flavors * initialize objects with context in compute api * initialize objects with context in resource tracker * Use common get\_instance call in API plugins part 3 * Clean the test cases for service plugins * initialize objects with context in server groups api * initialize objects with context in cells * tests: update \_get\_instance\_xml to accept custom flavor object * libvirt: vif tests should use a flavor object * Compute: improve test\_compute\_utils time * Compute: improve usage of Xen driver support * libvirt: introduce new 'Host' class to manage the connection * Add CHAP credentials support * Document the upgrade plans * Move test\_hostops into nova/tests/unit * Fix get\_all API to pass search option filter to cinder api * VMware: remove ESX support for getting resource pool * objects: Makes sure Instance.\_save methods are called * Add support for fitting instance NUMA nodes onto a host * VMware: remove unnecessary brackets * Imported Translations from Transifex * Port volume\_attachments extension to v2.1 API * Only filter
once for trusted filters * Indicate whether service is down for mc driver * Port assisted-volume-snapshots extension to v2.1 * Updated from global requirements * Add custom is\_backend\_avail() method * Fixes differencing VHDX images issue on Hyper-V * Add debug log when over quota exception occurs * Fix rule not found error in sec grp default rule API * Convert service v3 plugin to v2.1 API * Decrease admin context usage in \_get\_guest\_config * Catch NotImplemented nova exceptions in API extension * Add API json schema to volumes api(v2.1) * Don't modify columns\_to\_join formal parameter in \_manual\_join\_columns * Limit tcp/udp port to be empty string in json-schema * Fix the cell API failing with string rpc\_port * Add decorator expected\_errors for security\_group extension * Fix bulk floating ip ext to show uuid and fixed\_ip * Use session in cinderclient * Make objects.Flavor.\_orig\_projects a list * Refactor more compute tests to use Instance objects * Use Instance.get\_flavor() in more places * Support instance\_extra fields in expected\_attrs on Instance object * Adds host power actions support for Hyper-V * Exceptions: finish sentence with full stop * Type conflict in trusted\_filter.py using attestation\_port default value * Get EC2 metadata localip return controller node ip * Rename private functions in db.sqla.api * Enable hard-reboot on more states * Better error message when checking volume status * libvirt: use qemu (qdisk) disk driver for Xen >= 4.2.0 * Add resource types for JSON-Schema validation * Add integer types for JSON-Schema * Revert pause/unpause state when host restarts * Extends use of ServiceProxy to more methods in HostAPI in cells * Nova devref: Fix the rpc documentation typos * Remove duplicated code in services api integrated test case * Share server\_password unit test between V2 & V2.1 * Key manager: ensure exception reason is translated * Virt: update spawn signature to pass instance\_type * Compute: set instance to ERROR if resume fails * Limit InstanceList join to system\_metadata in os-simple-tenant-usage * Pass expected\_attrs to instance\_get\_active\_by\_window\_joined * VMware: remove unused parameter (mountpoint) * Truncate encoded instance message to 255 or fewer * Only load necessary instance info for use in sync power state * Revert "Truncate encoded instance message to 255" * VMware: refactor cpu allocations * Fixes spawn issue on Hyper-V * Refine HTTP error code for os-interface * Share migrations unit test between V2 & V2.1 * Use common get\_instance call in API plugins part 2 * make get\_by\_host use slave in periodic task * Add update\_cells to BandwidthUsage.create() * Fix usage of BandwidthUsage.create() * Updated from global requirements * Hard reboot doesn't re-create instance folder * object-ify flavors api and compute/api side of RPC * Allow passing columns\_to\_join to instance\_get\_all\_by\_host\_and\_node() * Don't make a no-op DB call * Remove deprecated affinity filters * Generalize dependent object backporting * GET servers API sorting compute/instance/DB updates * Hyper-V: cleanup basevolumeutils * Specify storage IP for iscsi connector * Fix conductor processes race trying to join servicegroup (zk driver) * Remove unused db.api.floating\_ip\_set\_auto\_assigned * Remove unused db.api.flavor\_extra\_specs\_get\_item * Remove unused oslo.config import * Create instance\_extra items atomically with the instance itself * Shelve\_offload() should give guests a chance to shutdown * Fixes Hyper-V driver WMI issue on 2008 R2 * Fix
circular reference error when live migration failed * Fix live migration api getting stuck when migrating to old nova node * Remove native security group api class * libvirt: pin emulator threads to union of vCPU cpuset * libvirt: add classes for emulator thread CPU pinning configuration * libvirt: set NUMA memory allocation policy for instances * Fixed quotas double-decreasing problem * Convert v3 console plugin to v2.1 * Virt: make use of the InstanceInfo object * Virt: create an object InstanceInfo * Metadata service: make use of get\_instance\_availability\_zone * Metadata service: remove check for the instance object type * Metadata: use instance objects instead of dictionary * VMware: Fix problem transferring files with ipv6 host * VMware: pass vm\_ref to \_set\_machine\_id * VMware: pass vm\_ref to \_get\_and\_set\_vnc\_config * Add API schema for aggregates set\_metadata API * Compute: Add start notification for resume * VMware: fix regression for 'TaskInProgress' * Remove havana compat from nova.console.rpcapi * Remove havana compat from nova.consoleauth.rpcapi * Share console-auth-tokens tests between V2 & V2.1 * Raise HTTPNotFound in V2 console extension * Add 'instance-usage-audit-log' plugin for V2.1 * Truncate encoded instance message to 255 * Deduplicate some INFO and AUDIT level messages * move all tests to nova/tests/unit * Add tox -e functional * Don't touch info\_cache after refreshing it in Instance.refresh() * Drop max-complexity to 47 * Aggregate.save() shouldn't return a value * Remove useless host parameter in virt * Use real disk size to consider a resize down * Add virtual interface before adding fixed IP on nova-network * image cache clean-up to clean swap disk * Make unit test floating ips bulk faster * Remove flush\_operations in the volume usage output * Updated from global requirements * xenapi plugins must target only Python 2.4 features * libvirt: add classes for NUMA memory binding configuration * libvirt: add in missing translation for LVM migration * Config bindings: remove redundant brackets * Config drive: delete deprecated config var config\_drive\_tempdir * Refactor Ironic driver tests as per review comment * Switch default cinder API to V2 * Remove deprecated spicehtml5 options * Fix xen plugin to retry on upload failure * Log sqlalchemy exception message in migration.py * Use six.text\_type instead of unicode * XenAPI: add duration measure to log message * Quotas: remove deprecated configuration variable * Glance: remove deprecated config options * Cinder: remove deprecated configuration options * Neutron: remove deprecated config options * object: update instance numa object to handle pagesize * hardware: make cell instance topology handle memory pages * hardware: introduce VirtNUMATopologyCellInstance * hardware: fix the memory unit used in docstring * virt: introduce types VirtPageSize and VirtPagesTopology * Clearer default implementation for dhcp\_options
* Fix instance\_usage\_audit\_log test to use admin context * VMware: remove unused method \_get\_vmfolder\_ref * libvirt: safe\_decode domain.XMLDesc(0) for i18n logging * VMware: trivial fix for comment * Fix the uris in documentation * Make test\_security\_groups nose compatible * Make test\_quotas compatible with nosetests * Return HTTP 400 if using invalid fixed ip to attach interface * Fixed typos in nova.objects.base docstrings * Add note on running single tests to HACKING.rst * Use sizelimit from oslo.middleware * Use oslo.middleware * Make resource tracker always use Flavor objects * maint: Don't translate debug level logs * Make console show and delete exception msg better * Change error code of floating\_ip\_dns api(v2.1) * Make scheduler code use object with good practice * Switch Nova to use oslo.concurrency * scheduler: Remove assert on the exact number of weighers * Update docstring for check\_instance\_shared\_storage\_local * remove use of explicit lockutils invocation in tests * Delay STOPPED lifecycle event for Xen domains * Remove warning & change @periodic\_task behaviour * Fix nova-compute start issue after evacuate * Ignore DiskNotFound error on detaching volumes * Move setup\_instance\_group to conductor * Small doc fix in compute test * libvirt: introduce config to handle cells memory pages caps * Fixes DOS issue in instance list ip filter * Use 404 instead of 400 when security\_group is non-existent * Port security-group-default-rules extension into v2.1 * Port SecurityGroupRules controller into v2.1 * error if we don't run any tests * Revert "Switch Nova to use oslo.concurrency" * Updated from global requirements * Remove admin context which is not needed * Add API validation schema for disk\_config * Make test\_host\_filters a NoDBTestCase * Move group affinity filters tests to own test file * Split out metrics filter unit tests * Splits out retry filter unit tests * Split out compute filters unit tests * Update hooks from oslo-incubator copy * Split out aggregate disk filter unit tests * Split out core filter unit tests * Split out IO Ops filter unit tests * Split out num instances filter unit tests * Split and fix the type filters unit tests * Split and fix availability zone filter unit tests * Split out PCI passthrough filter unit tests * Use common get\_instance call in API plugins * Fix nova evacuate issues for RBD * DB API: Pass columns\_to\_join to instance\_get\_active\_by\_window\_joined * Read flavor even if it is already deleted * Use py27 version of assertRaisesRegexp * update retryable errors & instance fault on retry * xenapi: upload/download params consistency change * Use assertRaisesRegexp * Drop python26 support for Kilo nova * Switch Nova to use oslo.concurrency * Remove param check for backup type on v2.1 API * Set error state when unshelving an instance due to image not found * fix the error log print in encryptor \_\_init\_\_.py * Remove unused compute\_api in extend\_status * Compute: maint: adjust code to use instance object format * VMware: use instance.uuid instead of instance['uuid'] * Network: manage neutron client better in allocate\_for\_instance * Split out agg multitenancy isolation unit tests * Split agg image props isolation filter unit tests * Separate isolated hosts filter unit tests * Separate NUMA topology filter unit tests * resource-tracker: Begin refactor unit tests * Faster get\_attrname in nova/objects/base.py * Hyper-V: Skip logging out in-use targets * Compute: catch more specific exception for \_get\_instance\_nw\_info *
typo in the policy.json "rule\_admin\_api" * Fix the unittest using wrong controller for SecurityGroups V2 * host manager: Log the host generating the warning * Add API validation schema for floating\_ip\_dns * Remove \`domain\` from floating-ip-dns-create-or-update-req body * Port floating\_ip\_dns extension to v2.1 * Remove LOG outputs from v2.1 API layer * Run build\_and\_run\_instance in a separate greenthread * VMware: Improve the efficiency of vm\_util.get\_host\_name\_for\_vm * VMware: Add fake.create\_vm() * Use wsgi.response for v2.1 API * Use wsgi.response for v2.1 unrescue API * Add API schema for v2.1 "resize a server" API * Remove use of unicode on exceptions * Fix error in comments * Make pci\_requests a proper field on Instance object * libvirt: fully parse PCI vendor/product IDs to integer data type * Remove unnecessary instance.save in nova compute * api: add serial console API calls v2.1/v3 * Add API validation schema for cloudpipe api * Remove project id in ViewBuilder alternate link * Handle exception better in v2.1 attach\_interface * Cleanup of tenant network tests * Port floating\_ips\_bulk extension to v2.1 * Make v2.1 tests use wsgi\_app\_v21 and remove wsgi\_app\_v3 * Translate 'powervm' hypervisor\_type to 'phyp' for scheduling * Give a reason why NoValidHost is raised in select\_destinations * ironic: use instance object for \`\_add\_driver\_fields\` * ironic: use instance object for \`\_wait\_for\_active\` * ironic: use instance object for \`get\_info\` * ironic: use instance object for \`rebuild\` * ironic: use instance object for plug\_vifs * Revert "Replace outdated oslo-incubator middleware" * Set logging level for glanceclient to WARN * Nova should be in charge of its log defaults * Reduce the complexity of \_get\_guest\_config() * VMware: fix compute node exception when no hosts in cluster * libvirt: use instance object for detach\_volume * libvirt: use instance object for attach\_volume * libvirt: use instance object for resume\_state\_on\_host\_boot * libvirt: treat suspend instance as an object * VMware: Remove redundant fake.reset() in test\_vm\_util * VMware: add tests for spawn with config drive enabled * Adds tests for Hyper-V Network utils * Adds tests for Hyper-V Host utils * Fix order of arguments in assertEqual * Replace custom patching in \`setUp\` on HypervisorsSampleJsonTests * Console: delete code for VMRCConsole and VMRCSessionConsole * VMware: delete the driver VMwareESXDriver * Replace \`\_\` with \`\_LE\` in all LOG.error * VMware: rename vmware\_images to images * Remove useless parameter in cloudpipe api(v2/v2.1) * Moves trusted filter unit tests into own file * Port update method of cloudpipe\_update to v2.1(v3) * Clean up iSCSI multipath devices in Post Live Migration * Check fixed-cidr is within fixed-range-v4 * Porting baremetal\_nodes extension to v2.1/v3 * Port fixed\_ip extension to v2.1 * Separate filter unit tests for agg extra specs * Move JSON filter unit tests into own file * Separate compute caps filter unit tests * Separate image props filter unit tests * Separate disk filters out from test\_host\_filters * Separate and refactor RAM filter unit tests * Remove duplicate test * Reduce the complexity of stub\_out\_db\_network\_api() * Remove duplicate index from model * Remove useless join in nova.virt.vmwareapi.vm\_util * fixed typo in test name * Separate and refactor affinity filter tests * Pull extra\_specs\_ops tests from test\_host\_filters * Remove outdated docstring for XenApi driver's options * VMware: attach config drive if
booting from a volume * Remove duplicated comments in virt/storage\_users * Compute: use instance object for vm\_state * libvirt: use six.text\_type when setting text node value in guest xml * Allow strategic loading of InstanceExtra columns * Create Nova Scheduler IO Ops Weigher * Put a cap on our cyclomatic complexity * Add notification for server group operations * Clean up the naming of PCI python modules * Port os-networks-associate plugin to v2.1(v3) infrastructure * Port os-tenant-networks plugin to v2.1(v3) infrastructure * Cleanup of exception handling in network REST API plugin * Fix instance\_extra backref * Refactor compute tests to not use \_objectify() * Refactor compute and conductor tests to use objects * Fix genconfig - missed one import from oslo cleanup * Handle Forbidden error from network\_api.show\_port in os-interface:show * Replace outdated oslo-incubator middleware * VMware: Improve logging on failure due to invalid guestId * Ironic: Continue pagination when listing nodes * Fix unit test failure due to tests sharing mocks * libvirt: fully parse PCI addresses to integer data type * libvirt: remove pointless HostState class * Porting SecurityGroup related controller into v2.1 * Allow force-delete irrespective of VM task\_state * Use response.text for returning unicode EC2 metadata * Remove unused modules copied from oslo-incubator * Remove unused code in pci\_manager.get\_instance\_pci\_devs() * VMWare: Remove unused exceptions * Switch to nova's jsonutils in oslo.serialization * VMware: mark virtual machines as 'belonging' to OpenStack * XenAPI: Inform XAPI who is connecting to it * Rename cli variable in ironic driver * Add more input validation of bdm param in server creation * Return HTTP 400 when an in-use fixed ip is used to attach an interface * VMware: get\_all\_cluster\_refs\_by\_name default to {} * Minor refactor of \_setup\_instance\_group() * add InstanceGroup.get\_by\_instance\_uuid * Add instance\_group\_get\_by\_instance to db.api * Updated from global requirements * Add supported\_hv\_specs to ComputeNode object * Pass block device info in pre\_live\_migration * Use 400 instead of 422 for security\_groups v2 API * Port floating\_ip\_pools extension to v2.1 * Imported Translations from Transifex * Sync with latest oslo-incubator * Don't translate unit test logs * Optimize get\_instance\_nw\_info and remove ipam * Convert migrate requests to use joins * Use database joins for fixed ips to other objects * Keep migration status if instance is still resizing * Don't log every (friggin) migration version step during unit tests * Remove init for object list in api layer * Revise compute API schemas and add tests * Add quota rollback for deallocating fixed ip in nova-network * Update README for openstack/common * Fix libvirt watchdog support * VMware: add support for default pbm policy * Remove unused imports from neutron api * Cleanup tenant networks plugin config creation * Port os-networks plugin to v2.1(v3) infrastructure * Use reasonable timeout for rpc service\_update() * Finish objects conversion in the os-interface API 2014.2 ------ * Fix pci\_request\_id breaking the upgrade from icehouse to juno * Fix pci\_request\_id breaking the upgrade from icehouse to juno * Updated translations * vfs: guestfs logging integration * Fix broken cert revocation * Port cloudpipe extension to v2.1 * Cleanup log marker in neutronv2 api * Add 'zvm' to the list of known hypervisor types * Fix wrong exception returned in fixed\_ips v2 extension * Extend XML unicode test coverage * Remove
unnecessary debug/info logs of normal API ops * Refactor of test case of floating\_ips * Make v2.1 API tests use v2 URLs (test\_[r-v].\*) * Make v2.1 API tests use v2 URLs (test\_[f-m].\*) * Break out over-quota calculation code from quota\_reserve() * Fix image metadata returned for volumes * Log quota refresh in\_use message at INFO level for logstash * Break out over-quota processing from quota\_reserve() * Remove obsolete vmware/esx tools * Fix broken cert revocation * Remove baremetal virt driver * Update rpc version aliases for juno * VMware: Set vmPathName properly in fake driver * Port disk\_config extension for V2.1 * Allow backup operation in paused and suspended state * Update NoMoreFixedIps message description * Make separate calls to libvirt volume * Correct VERSION of NetworkRequest * Break out quota usage refresh code from quota\_reserve() * libvirt: abort init\_host method on libvirt that is too old * Mask passwords in exceptions and error messages * Support message queue clusters in inter-cell communication * neutronv2: translate 401 and 404 neutron client errors in show\_port * Log id in raise\_http\_conflict\_for\_instance\_invalid\_state() * Use image metadata from source volume of a snapshot * Fix KeyError for euca-describe-images * Optimize 'fixed\_ip\_bulk\_create' function * Remove 'get\_host\_stats' virt driver API method * Suppressed misleading log in unshelve, resize api * Imported Translations from Transifex * libvirt: add \_get\_launch\_flags helper method in unit test * Refactoring of contrib.test\_networks tests * Make v2.1 API tests use v2 URLs (test\_[a-e].\*) * Port fping extension to work in v2.1/v3 framework * Use oslo.utils * Correctly catch InstanceExists in servers create API * Fix the os\_networks display to show cidr properly * Avoid using except Exception in unit test * nova-net: add more useful logging before raising FixedIpLimitExceeded * libvirt: convert conn test case to avoid DB usage * libvirt: convert driver test suite to avoid DB usage * Mask passwords in exceptions and error messages * Disable libvirt NUMA topology support if libvirt < 1.0.4 * Resource tracker: use brackets for line wrap * VMWare: Remove unnecessary method * console: handle unsupported ws scheme in python < 2.7.4 * VMWare: Fix nova-compute crash when instance datastore not available * Disable libvirt NUMA topology support if libvirt < 1.0.4 * VMware: remove \_get\_vim() from VMwareAPISession * Compute: use an instance object in terminate\_instance * VMware: remove unnecessary deepcopy * Destroy orig VM during resize if triggered by user * Break out quota refresh check code from quota\_reserve() * move integrated api client to requests library * Fix unsafe SSL connection on TrustedFilter * Update rpc version aliases for juno * Fix the os\_networks display to show cidr properly * libvirt: convert mox to mock in test\_utils * Remove kombu as a dependency for Nova * Adds missing exception handling in resize and rebuild servers API * Remove keystoneclient requirement * Destroy orig VM during resize if triggered by user * VMware: Fix deletion of an instance with no files * console: introduce a new exception InvalidConnectionInfo * Remove the nova-manage flavor sub-command * support TRACE\_FAILONLY env variable * Ensure files are closed promptly when generating a key pair * libvirt: convert volume snapshot test case to avoid DB usage * libvirt: convert volume usage test case to avoid DB usage * libvirt: convert LibvirtNonblockingTestCase to avoid DB usage * libvirt: convert firewall
tests to avoid DB usage * libvirt: convert HostStateTestCase to avoid DB usage * libvirt: split firewall tests out into test\_firewall.py * libvirt: convert utils test case to avoid DB usage * Add VIR\_ERR\_CONFIG\_UNSUPPORTED to fakelibvirt * Remove indexes that are prefix subsets of other indexes * remove scary error message in tox * Cleanup \_convert\_block\_devices * Enhance V2 disk\_config extension Unit Test * Add developer policy about contractual APIs * Reserve 10 migrations for backports * libvirt: Make sure volumes are well detected during block migration * Remove websocketproxy workaround * Fix unsafe SSL connection on TrustedFilter 2014.2.rc1 ---------- * Remove xmlutils module * libvirt: Make sure NUMA cell memory is in Kb in XML * Fix disk\_allocation\_ratio on filter\_scheduler.rst * Remove unused method within filter\_scheduler test * Open Kilo development * Correct missing vcpupin elements for numa case * VMware: remove unused variable from tests * Imported Translations from Transifex * VMWare: Fix VM leak when deleting a VM during resizing * Log detail when attaching an interface fails * Removes unused code from wsgi \_to\_xml\_node * Fix XML UnicodeEncode serialization error * Add @\_retry\_on\_deadlock to \_instance\_update() * Remove duplicate entry from .gitignore file * console: fix bug when connection info is invalid * console: introduce a new exception InvalidToken * cmd: update the default behavior of serial console * console: make websocketproxy handle token from path * VMware: Remove tests for None in fake.\_db\_content['files'] * Fix creating bdm for failed volume attachment * libvirt: consider vcpu\_pin\_set when choosing NUMA cells * Fix hook documentation on entry\_points config * Remove local version of generate\_request\_id * fix usage of obj\_reset\_changes() call in flavor * Fix bad except clauses order * Typo in exception name - CellsUpdateProhibited * Log original error when attaching volume fails * Retry on closing of luks encrypted volume in case device is busy * VMware: Remove VMwareImage.file\_size\_in\_gb * VMware: remove unused argument from \_delete\_datastore\_file() * xenapi: deal with reboots while talking to agent * Ironic: Do not try to unplug VIF if not associated * Fix Typo in method name - parse\_Dom * Adds openSUSE support for developer documentation * VMware: Remove class orphaned by ESX driver removal * Fixes missing ec2 api address disassociate error on failure * Fixes potential reliability issue with missing CONF import * Updated from global requirements * Port extended\_ips/extended\_ips\_mac extension to V2.1 * the value of retries is wrong in \_allocate\_network * Ironic driver must wait for power state changes * Fall back to legacy live migration on config error * libvirt: log exception info when interface detach failed * libvirt: support live migration with shared instances dir * Fix SecurityGroupExists error when booting instances * Undo changes to obj\_make\_compatible * Clarify virt driver test comments & log statement * move integrated api client to requests library * VMware: Make DatastorePath hashable * Remove usage of self.\_\_dict\_\_ for message var replacement * VMware: trivial formatting fix in fake driver * VMware: Improve logging of DatastorePath in error messages * VMware: Use vm\_mode constants * Imported Translations from Transifex * Updated from global requirements * do not use unittest.TestCase for tests * Neutron: Atomic update of instance info cache * Reduce the scope of RT work while holding the big lock *
libvirt: convert CacheConcurrencyTestCase to avoid DB usage * Give context to the warning in \_sync\_power\_states * remove test\_multiprocess\_api * add time to logging in unit tests * XenAPI: clean up old snapshots before creating new ones * Return vcpu pin set as set rather than list * Fix start/stop returning active/stopped immediately in EC2 API * consistently set status as REBUILD when rebuilding * Add test case for vim header check * Add missing instance action record for start of live migration * Reduce the log level for the guestfs being missing * Sync network\_info if instance not found before \_build\_resources yield * Remove the AUDIT log message about loaded ext * Fix unset extra\_spec for a flavor * Add further debug logging for multiprocess test * Revert "libvirt: support live migrate of instances with conf drives" * Revert "libvirt: Uses correct imagebackend for configdrive" * Fixes server list filtering on metadata * Add good path test cases of osapi\_compute\_workers * Be less confusing about notification states * Remove unused py33 tox env * fix\_typo\_in\_heal\_instance\_info\_cache * Refactor test\_get\_port\_vnic\_info 2 and 3 * Revert "libvirt: reworks configdrive creation" * Make nova.compute.api return Aggregate objects * Scheduler: add log warning hints * Change test function from snapshot to backup * Fixes Hyper-V dynamic memory issue with vNUMA * Update InstanceInvalidState output * Add unit test for glanceclient ssl options * Fix broken links in devref/filter\_scheduler.rst * Change "is lazy loaded" detection method in db\_api test * Handle VolumeBDMPathNotFound in \_get\_disk\_over\_committed\_size\_total * Handle volume bdm not found in lvm.get\_volume\_size * Updated from global requirements * Address nits in I6b4123590 * Add exists check to fetch\_func\_sync in libvirt imagebackend * libvirt: avoid changing UUID when redefining nwfilters * VMware: Add support for ParaVirtualSCSIController * Fix floating\_ips\_bulk unit test name * refactor flavor manage tests in prep for object-ify flavors * refactor flavor db fakes in prep for object-ify flavors * move dict copy in prep for object-ify flavors * tests: kill worker pids as well on timeouts * Close standard fds in test child process * Mitigate performance impact of getting pci requests from DB * Return None from get\_swap() if input is not swap * Require tests for DB migrations * VMware: fix broken mock of ds\_util.mkdir * Fix KeyError for euca-describe-images * Fixes HyperV VM Console Log * Fix failure to remove the logical volume * correct \_sync\_instance\_power\_state log message * Add support for hypervisor type in IronicHostManager * Don't list entire module autoindex on docs index * Add multinic API unit test * Add plan for kilo blueprints: project priorities * make flavors use common limit and marker * Raise an exception if qemu-img fails * Libvirt: Always tear down lxc container on destroy * Mark nova-baremetal driver as deprecated in Juno, removed in K * libvirt: Unnecessary instance.save(s) called * Add progress and cell\_name into notifications * XenAPI: run vhd-util repair if VHD check fails * Get instance\_properties from request\_spec * libvirt: convert encrypted LVM test to avoid DB usage * libvirt: convert test\_dmcrypt to avoid DB usage * libvirt: convert test\_blockinfo.py to avoid DB usage * libvirt: convert test\_vif.py to avoid DB usage * libvirt: remove pointless class in util test suite * libvirt: avoid need for lockutils setup running test cases * VMware: Remove host argument to
ds\_util.get\_datastore() * Fix DB migration 254 by adding missing unittest * postgresql: use postgres db instead of template1 * Assume VNIC\_NORMAL if binding:vnic\_type not set * mock.assert\_called\_once() is not a valid method * db: Add @\_retry\_on\_deadlock to service\_update() * Update ironic states and documentation * XenAPI: improve post-snapshot coalesce detection * Catch NotImplementedError on reset\_network for xen * VMware: Fix usage of assertEqual in test\_vmops * Add more information to generic \_add\_floating\_ip exception message * bring over pretty\_tox.sh from tempest * Console: warn that the Nova VMRC console driver will be deprecated in K * virt: use compute.vm\_mode constants and validate vm mode type * compute: tweaks to vm\_mode APIs to align with arch/hvtype * Fix NUMA fit testing in claims and filter class * consolidate apirequest tests to single file * ensure that we safely encode ec2 utf8 responses * instance\_topology\_from\_instance handles request\_spec properly * NUMA \_get\_constraint auto assumed Flavor object * Imported Translations from Transifex * Fix 'force' parameter for quota-update * Update devref * default=None is unneeded in config definitions * Remove unused elevated context param from quota helper methods * Remove stale code from ObjectListBase * Split up libvirt volume's connect\_volume method * Record instance faults during boot process * ironic/baremetal: add validation of host manager/state APIs * virt: move assertPublicAPISignatures into base test class * libvirt: avoid 30 second long test in LXC mount setup * Remove all redundant \`setUp\` methods * fix up assertEqual(None...) check to catch more cases * Fix object version hash test * disk/vfs: make docstring conventional to Python * disk/vfs: ensure guestfs capabilities * NIST: increase RSA key length to 2048 bit * Fix incorrect exception when a bdm has a volume in error state * ironic: Clean LOG usage * Improve secgroup create error message * Always log the releasing, even under failure * Fix race condition in update\_dhcp * Make obj\_make\_compatible consistent * Correct baremetal/ironic consume\_from\_instance..
* Fix parsing sloppiness from iscsiadm discover * correct inverted subtraction in quota check * Add quotas for Server Groups (quota checks) * Add quotas for Server Groups (V2 API change) * check network ambiguity before external network auth * Updated from global requirements * libvirt: Consider numa\_topology when booting * Add the NUMATopologyFilter * Make HostManager track NUMA usage * API boot process sets NUMA topology for instances * Make resource tracker track NUMA usage * Hook NUMA topology checking into claims * Stash numa-related flavor extra\_spec items in system\_metadata * Fixes network\_get\_all\_by\_host to use indexes * Add plan for kilo blueprints: when is a blueprint needed * Bump FakeDriver's resource numbers * delete python bytecode before every test run * Stop using intersphinx * Don't swallow exceptions in deallocate\_port\_for\_instance * neutronv2: attempt to delete all ports * Proxy nova baremetal commands to Ironic * Increase sleeps in baremetal driver * Improve logging of external events on the compute node * virt: use compute.virttype constants and validate virt type * compute: Add standard constants for hypervisor virt types * Fix test\_create\_instance\_invalid\_key\_name * Fix \`confirmResize\` action status code in V2 * Remove unnecessary imageRef setting from tests * Add unit test for add\_floating\_ip API * Remove unused config "service\_down\_time" reference * Clarify logging in lockutils * Make sure libvirt VIR\_ERR\_NO\_DOMAIN errors are handled correctly * Adds LOG statements in multiprocess API test * Block sqlalchemy migrate 0.9.2 as it breaks all of nova * Xen: Attempt to find and clean up orphaned SR during delete * Nova-net: fix server side deallocate\_for\_instance() * Method for getting NUMA usage from an instance * Ironic: save DB calls for getting flavor * Imported Translations from Transifex * Fix 'os-interface' resource name for Nova V2.1 * Add new unit tests for PCI stats * Fixes AttributeError with api sample test failure * Fix "revertResize/confirmResize" for V2.1 API * Add unit test to os-agent API * check the block\_device\_allocate\_retries * Support SR-IOV networking in libvirt * Support SR-IOV networking in nova compute api and nova neutronv2 * Support SR-IOV networking in the PCI modules * Add request\_id in PciDevice * Replace pci\_request flavor storage with proper object usage * Adds a test for raw\_cpu\_arch in \_node\_resource * Stop stack tracing when trying to auto-stop a stopped instance * Add quotas for Server Groups (V2 API compatibility & V2.1 support) * Fixes Hyper-V volume mapping issue on reboot * Libvirt: Enable support for discard option for disk device * libvirt: set pae for Xen PVM and HVM * Add warning to periodic\_task with interval 0 * document why we disable usb\_tablet in code * Fix 'os-start/os-stop' server actions for V2.1 API * Fix 'createImage' server actions for V2.1 API * Add unit test to aggregate api * Handle exception better in v2 attach\_interface * Fix integrated test cases for assisted-volume-snapshots * libvirt: start lxc from block device * Remove exclude coverage regex from coverage job * Pass instance to set\_instance\_error\_state vs.
uuid * Add InstancePCIRequests object * Drop verbose and useless nova-api log information * Add instance\_extra\_update\_by\_uuid() to DB API * Add pci\_requests to instance\_extra table * Add claims testing to VirtNUMAHostTopology class * Expose numa\_topology to the resource tracker * libvirt: fix bug when releasing port(s) * Specify correct operation type when NVH is raised * Ironic: don't canonicalize extra\_specs data * VMware: add tests for image fetch/cache functions * VMware: spawn refactor image fetch/cache * Ironic: Fix direct use of flavor and instance module objects * Ironic driver fetches extra\_specs when needed * Maint: neutronclient exceptions from a more appropriate module * Check requirements.txt files for missing (used) requirements * Sync oslo-incubator module log * Add amd64 to arch.canonicalize() * Sync oslo lockutils to nova * libvirt: deprecate volume\_drivers config parameter * VMware: Remove get\_copy\_virtual\_disk\_spec from vmops and vm\_util * maint: various spelling fixes * Fix config generator to use keystonemiddleware * libvirt: improve unit test time * VMware: prevent race condition with VNC port allocation * VMware: Fix return type of get\_vnc\_console() * VMware: Remove VMwareVCVMOps * Network: enable instance deletion when dhcp release fails * Adds ephemeral storage encryption for LVM back-end images * Don't elevate context when rescheduling * Ironic driver backports: patch 7 * Improve Ironic driver performance: patch 6 * Import Ironic Driver & supporting files - part 5 * Import Ironic Driver & supporting files - part 4 * Import Ironic Driver & supporting files - part 3 * Import Ironic Driver & supporting files - part 2 * Import Ironic Driver & supporting files - part 1 * Add sqlite dev packages to devref env setup doc * Add user namespace support for libvirt-lxc * Move to oslo.db * api: add serial console API calls v2 * compute: add get\_serial\_console rpc and cells api calls * compute: add get\_serial\_console in manager.py * virt: add method get\_serial\_console to driver * Clean up LOG import in floating\_ips\_bulk v2 api 2014.2.b3 --------- * Update invalid state error message on reboot API * Fix race condition with vif plugging in finish migrate * Fix service groups with zookeeper * xenapi: send chunk terminator on subprocess exc * Add support for ipv6 nameservers * Remove unused oslo.config import * Support image property for config drive * warn against sorting requirements * VMware: remove unused \_get\_vmdk\_path from vmops * virt: use compute.arch constants and validate architectures * Change v3 quota-sets API to v2.1 * always set --no-hosts for dnsmasq * Allow \_poll\_bandwidth\_usage task to hit slave * Add bandwidth usage object * VMware: spawn refactor enlist image * VMware: image user functions for spawn() * Change v3 flavor\_manage API to v2.1 * Port used\_limits & used\_limits\_for\_admin into v2.1 * Add API schema for v2.1 access\_ips extension * Add API schema for v2.1 "rebuild a server" API * Add API schema for v2.1 "update a server" API * Enabled qemu memory balloon stats * Reset task state 'migrating' on nova compute restart * Pass certificate, key and cacert to glanceclient * Add a policy for handling retrospective vetoes * Adds Hyper-V soft shutdown implementation * Fix swap\_volumes * Add API schema for v2.1/v3 multiple\_create extension * Return hydrated net info from novanet add/remove\_fixed\_ip calls * Add API schema for v2.1/v3 availability\_zone extension * Add API schema for v2.1/v3 server\_metadata API * Fixes a
Hyper-V list\_instances localization issue * Adds list\_instance\_uuids to the Hyper-V driver * Change v3 admin\_actions to v2.1 * Change v3 aggregate API to v2.1 * Convert v3 ExtendedAvailabilityZone api to v2.1 * Convert v3 hypervisor plugin to v2.1 * Convert server\_usage v3 plugin to v2.1 API * Convert v3 servers return\_reservation\_id behaviour to v2.1 * the headroom information is incomplete * Port volumes extension to work in v2.1/v3 framework * vmwareapi: oslo.vmware library integration * Allow forceDelete to delete running instances * Port limits extension to work in v2.1/v3 framework * Port image-size extension to work in v2.1/v3 framework * Port v2 image\_metadata extension to work in v2.1(v3) framework * Port v2 images extension to work in v2.1(v3) framework * Convert migrate\_server v3 plugin to v2.1 * Changes V3 evacuate extension into v2.1 * console: add typed console objects * virt: setup TCP chardevice in libvirt driver * Remove snapshot\_id from \_volume\_snapshot\_create() * Check min\_ram and min\_disk when booting from volume * Add API schema for v2.1 "create a server" API * InstanceNUMATopology object create: remove uuid param * Change v3 flavor\_access to v2.1 * Convert rescue v3 plugin to v2.1 API * Change v3 security\_groups API to v2.1 * Changes V3 remote\_console extension into v2.1 * Use common get\_instance function in v2 consoles extension * Add API schema for v2.1/v3 user\_data extension * Convert v3 cells API to v2.1 * Convert v3 server metadata plugin to v2.1 * Convert multiple-create v3 plugin to v2.1 * Convert v3 flavor extraspecs plugin to v2.1 * Fix scheduler\_available\_filters help * cmd: add nova-serialproxy service * console: add serial console module * Changes V3 server\_actions extension into v2.1 * Change v3 version API to v2.1 * Change v3 shelve to v2.1 * Process power state syncs asynchronously * Made unassigned networks visible in flat networking * Add functions to set up user namespaced filesystems * Adds nova-idmapshift cli utility * Add idmap to libvirt config * Allow hard reboots when still attempting a soft reboot * Decrease number of queries while adding aggregate metadata * Adds Hyper-V serial console log * Store original state when suspending * Fix NoopQuotasDriver.get\_settable\_quotas() * Use instance objects consistently in suspend tests * Instance objects: fix indentation issue * libvirt: Add method for getting host NUMA topology * Add instance\_extra table and related objects * Change v3 availability-zone API to v2.1 * Move and generalize decorator serialize\_args to nova.objects.base * Convert v3 certificate API to v2.1 * Make neutronapi use NetworkRequest for allocate\_for\_instance() * Use NetworkRequest objects through to nova-network * Add extension block\_device\_mapping\_v1 for v2.1 * Catch BDM related InvalidBDM exceptions for server create v2.1 * Changes block\_device\_mapping extension into v2.1 * Fix rootwrap for non openstack.org iqn's * Let update\_available\_resource hit slave * Plumb NetworkRequest objects through conductor and compute RPC * Updates available resources after live migration * Convert compute/api to use NetworkRequest object and list * Refactor the servers API to use NetworkRequest * Cells: Update set\_admin\_password for objects * Remove libvirt legacy LVM code * libvirt: reworks configdrive creation * Handle non dict metadata in server metadata V2 API * Fix wrong disk type limitation for disk IO throttling * Use v2.1 URLs instead of v3 ones in API unit tests * VMware: Add in support for CPU shares in event
of resource contention * VMware: add resource limits for CPU * Refactor admin\_action plugin and test cases * Fix error in log when logging exception in guestfs.py * Remove concatenation with translated messages * Port simple\_tenant\_usage into v2.1 * Convert console\_output v3 plugin to v2.1 * GET servers API sorting enhancements common utilities * Add \_security\_group\_ensure\_default() DBAPI method * Fix instance boot when Ceph is used for ephemeral storage * Add NetworkRequest object and associated list * Remove use of str on exceptions * Fix the current state name as 'shutting-down' * Explicitly handle exception ConsoleTypeUnavailable for v2 consoles * Convert v3 server diagnostics plugin to v2.1 * Porting v3 evacuate testcases to v2 * libvirt: Uses correct imagebackend for configdrive * Add v2.1 API router and endpoint * Change v3 keypairs API to v2.1 * Backport V3 hypervisor plugin unit tests to V2 * Remove duplicated negative factors in keypair test * filter: add per-aggregate filter to configure max\_instances\_per\_host * Updated from global requirements * Mask passwords in exceptions and error messages * Make strutils.mask\_password more secure * A minor change to a comment * Check vlan parameter is valid * filter: add per-aggregate filter to configure disk\_allocation\_ratio * Deprecate cinder\_\* configuration settings * Allow attaching external networks based on configurable policy * Fix CellStateManagerFile init failure * Change v3 extended\_status to v2.1 * Fixes Hyper-V volume discovery exception message * Use default quota values in test\_quotas * libvirt: add validation of migration hostname * Add a Set and SetOfIntegers object fields * Add numa\_topology column to the compute\_node table * Preserve exception text during schedule retries * Change v3 admin-password to v2.1 * Make Object FieldType from\_primitive pass objects * Change V3 access\_ips extension into v2.1 * Update RESP message when flavor creation fails * Cleanup of V2 console output tests and add missing tests * Convert multinic v3 plugin to v2.1 * Change 'changes\_since'/'changes-since' into v2.1 style for servers * Backport v3 multinic tests to v2 * Change ViewBuilder into v2.1 for servers * Change v3 agents API to v2.1 * Change v3 attach\_interface to v2.1 * Backport V3 flavor extraspecs API unit tests to V2 * Return BadRequest instead of UnprocessableEntity for volumes API * Convert create\_backup v3 plugin to v2.1 API * Update instance state after compute service dies for rebuilt instance * Make floatingip-ip-delete atomic with neutron * Add v3 versions plugin unit test to v2 * Remove duplicated code in test\_versions * Change v3 hosts to v2.1 * Change v3 extended\_server\_attributes to v2.1 * Make test\_killed\_worker\_recover faster * Change v3 flavor\_rxtx to v2.1 * fix typo in docstring * libvirt: driver used memory tests cleanup * Avoid refreshing PCI devices on instance.save() * Updated from global requirements * Change v3 flavors to v2.1 * neutronv2: treat instance as object in deallocate\_for\_instance * Fix class name for ServerGroupAffinityFilter * Adds Hyper-V Compute Driver soft reboot implementation * Add QuotaError handling to servers rebuild API * Allow creating a flavor without specifying id * XenAPI: Remove interrupted snapshots * Fix typo in comment * Fix V2 unit tests to test hypervisor API as admin * Create compute api var at \_\_init\_\_ * libvirt: support live migrations of instances with config drives * Change v3 os-user-data extension to v2.1 * Remove duplicated code in
test\_user\_data * Convert v3 server SchedulerHints plugin to v2.1 * Convert deferred\_delete v3 plugin to v2.1 API * Backport some v3 scheduler hints API UT to v2 API * Change error status code for out of quota to be 403 instead of 413 * Correct seconds of a day from 84400 to 86400 * VMware: add adapter type constants * Fix comment typo * scheduler sends select\_destinations notifications * Fix for volume detach error when using NFS as the cinder backend * objects: Add base test for obj\_make\_compatible() * objects: Fix InstanceGroup.obj\_make\_compatible() * Restore backward compat for int/float in extra\_specs * Convert v3 config drive plugin to v2.1 * Fix missing sample files for os-aggregates * Backport v3 config\_drive API unittest to v2 API * Backport some v3 availability zones API UT to v2 API * Handle non-ascii characters in spawn exception msg * Log warning message if volume quota is exceeded * Remove \_instance\_update usage in \_build\_instance * Treat instance like an object in \_build\_instance * Remove \_instance\_update usage in \_default\_block\_device\_names * Add missing flags to fakelibvirt for migration * Adds tests for Hyper-V Volume utils * Fix ability to generate object hashes in test\_objects.py * Fix expected error details from jsonschema * Extend the docstring for obj\_make\_compatible() with examples * HyperV Driver - Fix to implement hypervisor-uptime * Port os-server-groups extension to work in v2.1/v3 framework * Fix the exception for a nonexistent flavor * Add api extension for new network fields * Use real exceptions for network create and destroy * Support reserving ips at network create time * Adds get\_instance\_disk\_info to compute drivers * Use rfc3986 library to validate URL paths and URIs * Send create.end notification even if instance is deleted * Allow three periodic tasks to hit slave * Fixes Hyper-V unit test path separator issue * Share common test settings in test\_flavor\_manage * Shelve should give guests a chance to shut down * Rescue should give guests a chance to shut down * Resize should give guests a chance to shut down * Power off commands should give guests a chance to shut down * objects: Make use of utils.convert\_version\_to\_tuple() * tests: fix test\_compute to have predictable service list * libvirt: make sysinfo serial number configurable * Fixes Hyper-V resize down exception * Make usage\_from\_instances consider current usage * VMware: ensure test case for init\_host in driver * Add some v2 agents API tests * Libvirt: Do not raise ENOENT exception * Add missing create() method on SecurityGroupRule object * Add test for get\_instance\_disk\_info to test\_virt\_drivers * Move fake\_quotas and fake\_get\_quotas into a class * Objectify association in neutronapi * Objectify last uses of direct db access in network/floating\_ips * Update migration defaults * libvirt: reduce indentation in is\_vif\_model\_valid\_for\_virt * Fixes Hyper-V boot from volume root device issue * Fixes Hyper-V vm state issue * Imported Translations from Transifex * Share unittest between v2 and v2.1 for hide\_server\_addresses extension * Check compulsory flavor create parameters exist * Treat instance like an object in \_default\_block\_device\_names * Change 'image\_ref'/'flavor\_ref' into v2 style for servers * Change 'admin\_password' into v2 style for servers extension * Image caching tests: use list comprehension * Move \_is\_mapping to more central location * Stop augmenting oslo-incubator's default log levels * Track object version relationships * Remove
final use of glance\_stubs * Removes GlanceClient stubs * Pull transfer module unit tests from glance tests * VMware: remove specific VC support from class VMwareVolumeOps * VMware: remove Host class * Image cache tests: ensure that assertEquals has expected param first * VMware: spawn refactor \_configure\_config\_drive * VMware: refactor spawn() code to build a new VM * VMware: Fix type of VM's config.hardware.device in fake * VMware: Create fake VM with given datastore * VMware: Remove references to ebs\_root from spawn() * VMware: Create VMwareImage object for image metadata * Image caching: update image caching to use objects * Report all objects with hash mismatches in a single go * Include child\_versions in object hashes * Direct-load Instance.fault when lazy-loading * VMware: Remove unused variable in test\_configdrive * Raise HTTPNotFound error from V2 cert show API * Add dict and json methods to VirtNUMATopology classes * virt: helper for processing NUMA topology configuration * Raise Not Implemented error from V2 diagnostics API * Make NovaObjectSerializer work with dicts * Updated from global requirements * neutronv2: treat instance like object in allocate\_for\_instance * nova-network: treat instance like object in allocate\_for\_instance * Treat instance like object in \_validate\_instance\_group\_policy * Treat instance like an object in \_prebuild\_instance * Treat instance like an object in \_start\_building * Add graphviz to list of distro packages to install * Fixes Hyper-V agent force\_hyperv\_utils\_v1 flag issue * ec2: Use S3ImageMapping object * ec2: Add S3ImageMapping object * Remove unused db api methods * Get EC2 snapshot mappings with nova object * Use EC2SnapshotMapping for creating mappings * Add EC2SnapshotMapping object * Fix NotImplementedError in floating-ip-list * filter: add per-aggregate filter to configure max\_io\_ops\_per\_host * Hacking: a new hacking check was added that used an existing number * Fix hacking check for jsonutils * VMware: revert deletion of cleanup\_host * Use flavor in confirm-resize to drop claim * Add new db api get functions for ec2\_snapshot * Partial oslo-incubator sync -- log.py * Add unit tests for libvirt domain creation * Fix Trusted Filter to work with Mt. 
Wilson \`vtime\` * Fix 202 responses to contain valid content * Fix EC2 instance type for a volume backed instance * libvirt: add serial ports config * Split EC2 ID validator into a validator per resource type * libvirt: do not fail instance destroy, if mount\_device is missing * libvirt: persist lxc attached volumes across reboots and power down * Resize block device after swap to larger volume * Make API name validation failure deterministic * VMware: spawn refactor add VirtualMachineConfigInfo * libvirt: Fix kwargs for \_create\_image * VMware: fix crash when VC driver boots * baremetal: Remove dependency on libvirt's fetch\_image method * libvirt: Remove unnecessary suffix defaulting * Drop instance\_group\_metadata from the database * Neutron v2 API: fix get\_floating\_ip\_pools * libvirt: Allow specification of default machine type * Fix rebuild with cells * Added hacking check for jsonutils * Consistently use jsonutils instead of specific implementation * Convert network/api.py uses of vif database functions to objects * Convert last use of direct database instance fetching from network api * libvirt: skip disk resize when resize\_instance is False * libvirt: fix \_disk\_resize to make sure converted image will be restored * Backport some v3 certificate API unittest to v2 API * Backport some v3 aggregate API unittest to v2 API * Imported Translations from Transifex * More informative nova-scheduler log after NoValidHost is caught * Remove metadata/metadetails from instance/server groups * Prepend /dev/ to root\_device\_name in get\_next\_device\_name * Lock attach\_volume * Adjust audit logs to avoid negative disk info * Convert network/api.py to use FloatingIP object * Correct some IPAddress DB interaction in objects * docs - Set pbr 'warnerrors' option for doc build * docs - Fix errors, warnings from document generation * Provide a quick way to run flake8 * Add support for select\_destinations in Scheduler client * Create a Scheduler client library * VMware: handle case when VM snapshot delete fails * Use common get\_instance function in v2 attach\_interface * Add some v2 flavor\_manage API tests * Backport v3 api unittest into v2 api for attach\_interface extension * Fix the error status code of duplicated agents * Handle ExternalNetworkAttachForbidden exception * Allow empty volumes to be created * docs - Fix errors, warnings from document generation * docs - Fix exception in docs generation * docs - Fix docstring issues in virt tree * VMware: test\_driver\_api: Use local variables in closures * VMware: Remove ds\_util.build\_datastore\_path() * Use v1 as default for cinder\_catalog\_info * Fix live-migration failure in FC multipath case * Optimize instance\_floating\_address\_get\_all * Enhance PCI whitelist * Add a missing instance=instance in compute/mgr * Correct returned HTTP status code (Use 403 instead of 413) * Fix wrong command for \_rescan\_multipath * add log exception hints in some modules * Fix extension parameters in test\_multiple\_create * Standardize logging for v3 api extensions * Standardize logging for v2 api extensions * Add ListOfDictOfNullableString field type * Enable terminate for EC2 InstanceInitiatedShutdownBehavior * Remove duplicate test of passing glance params * Convert glance unit tests to not use stubs * Add decorator expected\_errors for ips v3 extension * Return 404 instead of 501 for unsupported actions * Return 404 when floating IP pool not found * Makes versions API output deterministic * Work on document structure and doc building * Catch
NeutronClientException when showing a network * Add API schema for v2.1/v3 security\_groups extension * Add API schema for v2.1/v3 config\_drive extension * Remove pre-icehouse rpc client API compat * make sure PCI device allocation is correct * Adds tests for Hyper-V VM Utils * Make nova-api use quotas object for reservations * VMware: implement get\_host\_ip\_addr * Boot an instance with multiple vnics on same network * Optimize db.floating\_ip\_deallocate * Fixes wrong usage of mock.assert\_not\_called() * Code change for nova to support cinder client v2 * libvirt: saving the lxc rootfs device in instance metadata * Add method for deallocating networks on reschedule * DB: use assertIsNotNone for unit test * Add expire reservations in backport position * Make network/api.py use Network object for associations * Migrate test\_glance from mox to mock * Add instanceset info to StartInstance response * Adds verbosity to child cell update log messages * Removes unnecessary instructions in test\_hypervapi * Diagnostics: add validation for types * Add missed discoverable policy rules for flavor-manage v3 * Rename rbd.py to rbd\_utils.py in libvirt driver directory * Correct a maybe-typo in pci\_manager * libvirt: make guestfs methods always return list of tuples * Revert "Deallocate the network if rescheduling for Ironic" * libvirt: volume snapshot delete for network-attached disks * libvirt: parse disk backing chains from domain XML * Handle MacAddressInUseClient exception from Neutron when creating port * Updated from global requirements * Remove instance\_info\_cache\_delete() from conductor * Make spawn\_n() stub properly ignore errors in the child thread work * Fix grammar error in devref out-of-tree policy * Compute: add log exception hints * Handle NetworkAmbiguous error when booting a new instance with v3 api * Handle FloatingIpPoolNotFound exception in floating ip creation * Add policy on how patches and reviews go hand in hand * Add hacking check for explicit import of \_() * VMware: Do not read opaque type for DVS network * VMware: add in DVS VXLAN support * Network: add in a new network type - DVS * Network: interface attach and detach raised confusing exception * Deprecate metadata\_neutron\_\* configuration settings * Log cleanups for nova.network.neutron.api * Remove ESXDriver from Juno * Only get image location attributes if including locations * Use JSON instead of json in the parameter descriptions * Add a retry\_on\_deadlock to reservations\_expire * docs - Fix doc build errors with SQLAlchemy 0.9 * docs - Fix indentation for RPC APIs * docs - Prevent eventlet exception during docs generation * docs - Add an index for the command line utilities * docs - fix missing references * Change LOG.warn to LOG.debug in \_shutdown\_instance * EC2: fixed AttributeError when metadata is not found * Import Ironic scheduler filters and host manager * EndpointNotFound deleting volume backend instance * Fix nova boot failure using admin role for another tenant * docs - Fix docstring issues * Update scheduler after instance delete * Remove duplicate index from block\_device\_mapping table * Fix ownership checking in get\_networks\_by\_uuid * Raises NotImplementedError for LVM migration * Convert network/api.py fixedip calls to use FixedIP object * Convert network/api.py get calls to use Network object * Add extensible resources to resource tracker (2) * Make DriverBlockDevice save() context arg optional * Improved error logging in nova-network for allocate\_fixed\_ip() * Issue multiple SQL statements in
separate engine.execute() calls * Move check\_image\_exists out of try in \_inject\_data * Fix fake\_update in test\_update\_missing\_server * Add unit tests to cells conductor link * Revert "libvirt: add version cap tied to gate CI testing" * Use Ceph cluster stats to report disk info on RBD * Add trace logging to allocate\_fixed\_ip * Update devref setup docs for latest libvirt on ubuntu * Fix libvirt re-defining guest with wrong XML document * Improve logging when python-guestfs/libguestfs isn't working * Update dev env docs on libvirt-dev(el) requirement * Parse unicode cpu\_info as json before using it * Fix: resource tracker should report virt driver stats * Fix \_parse\_datetime in simple tenant usage extension * Add API schema for v2.1/v3 cells API * Fix attaching config drive issue on Hyper-V when migrating instances * Allow unshelving an instance booted from volume * libvirt: add support for guest NUMA topology in XML config * libvirt: remove pointless LibvirtBaseVIFDriver class * libvirt: remove 'vif\_driver' config parameter * libvirt: remove use of CONF.libvirt.virt\_type in vif.py * Handle NotImplementedError in server\_diagnostics v3 api * Remove useless check in \_add\_retry\_host * Initialize Ironic virt driver directory * Live migration is broken for NFS shared storage * Fix ImportError during docs generation * Updated from global requirements * Extend API schema for "create a server" extensions * Enable cloning for rbd-backed ephemeral disks * Add include\_locations kwarg to nova.image.API.get() * Add index for reservations on (deleted, expire) * VMWare Driver - Ignore datastore in maintenance mode * Remove outdated docstring for nova.network.manager * libvirt: remove 3 unused vif.py methods * Turn on pbr's autodoc feature * Remove api reference section in devref * Deduplicate module listings in devref * VMware: Resize operation fails to change disk size * Use library instead of CLI to clean up RBD volumes * Move libvirt RBD utilities to a new file * Properly handle SNAT for external gateways * Only use dhcp if enable\_dhcp is set on the network * Allow dhcp\_server to be set from new field * Set python hash seed to 0 in tox.ini * Make devref point to official devstack vagrant repo * Stop depending on sitepackages libvirt-python * libvirt: driver tests use non-mocked BDMs * Fix doc build errors in models.py * Make several ec2 API tests inherit from NoDBTestCase * Stub out rpc notifications in ec2 cloud unit tests * Add standard constants for CPU architectures * virt: switch order of args to assertEqual in guestfs test * virt: move disk tests into a sub-directory * virt: force TCG with libguestfs unless KVM is enabled in libvirt * Do not pass instances without host to compute API * Pass errors from detach methods back to api proc * libvirt: add tests for \_live\_snapshot and \_swap\_volume methods * libvirt: fill in metadata when launching instances * Increase min required libvirt to 0.9.11 * Roll back quota when confirm resize completes concurrently * API: Enable support for tenant option in nova absolute-limits * libvirt: removing lxc specific disk mapping * Method to filter non-root block device mappings * VMware: remove local variable * Use hypervisor hostname for compute trust level * Remove unused cell\_scheduler\_method * Fix the i18n for some warnings in compute utils * Fix FloatingIP.save() passing FixedIP object to sqlalchemy * Scheduler: throw exception if no configured affinity filter * xenapi: Attach original local disks during rescue * libvirt: remove VIF driver
classes deprecated in Icehouse * Move logs of restore state to inner logic * Clean nova.compute.resource\_tracker:\_update\_usage\_from\_instances * Fix and Gate on E265 * Log translation hint for nova.api * Fix duplicated images in test\_block\_device\_mapping * Add Hyper-V driver in the "compute\_driver" option description * reduce network downtime during live-migration * Augment oslo's default log levels with nova specific ones * Make the coding style consistent with other Controller in plugins/v3 * Fix extra metadata not being assigned to snapshot image * Add i18n log markers in disk api * VMware: improve log message for attachment of CDROM * Raise NotImplemented in default-security-group-rule api with neutron * vmwareapi: remove some unused fake vim methods * Correct image\_metadata API use of nova.image.glance * Revert "Add extensible resources to resource tracker" * Update database columns nullable to match model * Updated from global requirements * Make quotas APIv3 extension use Quotas object for create/update * Make quotas APIv2 extension use Quotas object for create/update * Add quota limit create/update methods to Quotas object 2014.2.b2 --------- * libvirt: VM diagnostics (v3 API only) * Add ibmveth model as a supported network driver for KVM * libvirt: add support for memory tuning in config * libvirt: add support for memory backing parameters * libvirt: add support for per-vCPU pinning in guest XML * libvirt: add parsing of NUMA topology in capabilities XML * handle AutoDiskConfigDisabledByImage at API layer * Roll back quota in os\_tenant\_network * Raise specific error of network IP allocation * Convert to importutils * Catch CannotResizeDisk exception when resizing to zero disk * VMware: do not cache image when root\_gb is 0 * Turn periodic tasks off in all unit tests * Rename virtutils to the more common libvirt\_utils * Check for resize path on libvirt instance delete * Return status for compute node * servers list API: support specifying multiple statuses * Deprecate scheduler prep\_resize * Updated from global requirements * Fix nova cells exiting on db failure at launch * Remove unneeded calls in test\_shelve to start instances * Correct InvalidAggregateAction reason for Xen * Handle flavor create failure better * Add valid method check for quota resources * VMware: power\_off\_instance support * Add debug log for availability zone filter * Fix typo * Fix last of direct use of object modules * Check instance state before attach/detach interface * Fix error status code for cloudpipe\_update * Fix unit tests related to cloudpipe\_update * Add API schema for v2.1/v3 reset\_server\_state API * Adjust audit logs to avoid negative mem/cpu info * Re-add H803 to flake8 ignore list * Fix nova/pci direct use of object modules * Gate on F402/pep8 * Inject expected results for IBM Power when testing bus devices * Add extensible resources to resource tracker * libvirt: define XML schema for recording nova instance metadata * Sync loopingcall from oslo * Add APIv2 support to make host optional on evacuate * Add differencing vhdx resize support in Hyper-V Driver * Imported Translations from Transifex * Add context as param to cleanup function * Downgrade the warn log in network to debug * Correct use of nova.image.glance in compute API * Keep Migration status in automatic confirm-resize * Removes useless stub of glanceclient create * Remove rescue/unrescue NotImplementedError handling * Add missing foreign key on pci\_devices.compute\_node\_id * Revert "Add missing image to instance booted from
volume" * Add debug log for pci passthrough filter * Cleanup and gate on hacking E711 and E712 rule * Keep resizing&resized instances when compute init * Commit quota when deallocate floating ip * Remove unnecessary error log in cell API * Remove stubs in favor of mock in test\_policy * Remove translation for debug message * Fix error status code for agents * Remove warn log for over quota * Use oslo.i18n * Cleanup: remove unused argument * Implement methods to modify volume metadata * Minor tweaks to hypervisor\_version to int * update ignore list for pep8 * Add decorator expected\_errors for v3 attach\_interfaces * Add instance to debug log at compute api * Don't truncate osapi\_glance\_link or osapi\_compute\_link prefixes * Add decorator expected\_errors to V3 servers core * Correctly reject request to add lists of hosts to an aggregate * Do not process events for instances without host * Fix Cells ImagePropertiesFilter can raise exceptions * libvirt: remove flawed get\_num\_instances method impl * libvirt: remove unused list\_instance\_ids method * libvirt: speed up \_get\_disk\_over\_committed\_size\_total method * Partial oslo-incubator sync * VMware: Remove unnecessary deepcopy()s in test\_configdrive * VMware: Convert vmops to use instance as an object * VMware: Trivial indentation cleanups in vmops * VMware: use datastore classes in file\_move/delete/exists, mkdir * VMware: use datastore classes get\_allowed\_datastores/\_sub\_folder * VMware: DatastorePath join() and \_\_eq\_\_() * VMware: consolidate datastore code * VMware: Consolidate fake\_session in test\_(vm|ds)\_util * Make BDM dict \_\_init\_\_ behave more like a dict * VMware: support the hotplug of a neutron port * Deallocate the network if rescheduling for Ironic * Make sure that metadata handler uses constant\_time\_compare() * Enable live migration unit test use instance object * Move volume\_clear option to where it's used * move the cloudpipe\_update API v2 extension to use objects * Avoid possible timing attack in metadata api * Move injected\_network\_template config to where it's used * Don't remove delete\_on\_terminate volumes on a reschedule * Defer raising an exception when deleting volumes * Xen: Cleanup orphan volume connections on boot failure * Adds more policy control to cells ext * shelve doesn't work on nova-cells environment * libvirt: add migrateToURI2 method to fakelibvirt * libvirt: fix recent test changes to work on libvirt < 0.9.13 * Update requirements to include decorator>=3.4.0 * Cleanup and gate on hacking E713 rule * libvirt: add version cap tied to gate CI testing * Small grammar fix in libvirt/driver.py. 
fix all occurrences * Correct exception for flavor extra spec create/update * Fixes Hyper-V SCSI slot selection * xenapi: Use netutils.get\_injected\_network\_template * Improve shared storage checks for live migration * XenAPI: VM diagnostics for v3 API * Move retry of prep\_resize to conductor instead of scheduler * Retry db.api.instance\_destroy on deadlock * Translations: add LC to all LOG.critical messages * Remove redundant code in Libvirt driver * Virt: fix typo (flavour should be flavor) * Fix and gate on H305 and H307 * Remove unused instance variables from HostState * Send compute.instance.create.end after launched\_at is set * VMware: validate that network\_info is defined * Security groups: add missing translation * Standardization of nova.image.API.download * Catch InvalidAggregateAction when deleting an aggregate * Restore ability to delete aggregate metadata * Nova-api service throws error when SIGHUP is sent * Remove cell api overrides for lock and unlock * Don't mask out HostState details in WeighedHost * vmware: VM diagnostics (v3 API only) * Use pool/volume\_name notation when deleting RBD volumes * Add instanceset info to StopInstance response * Change compute updates from periodic to on demand * Store volume backed snapshot in current tenant * libvirt+lxc: Unmount guest FS from host on error * libvirt: speed up get\_memory\_mb\_used method * libvirt: speed up get\_vcpus method * libvirt: speed up get\_all\_block\_devices method * libvirt: speed up list\_instances method * libvirt: speed up list\_instance\_uuids method * Updated from global requirements * Fix interfaces template for two interfaces and IPv6 * Fix error status code for multinic * libvirt: fix typo in fakelibvirt listAllDomains() * Refactors VIF configuration logic * Add missing test coverage for MultiplePortsNotApplicable compute/api * Make the block device mapping retries configurable * Catch image and flavor exceptions in \_build\_and\_run\_instance * Restore instance flavor info when driver finish\_migration fails * synchronize 'stop' and power state periodic task * Fix more re-definitions and enable F811/F813 in gate * Prepend '/dev/' to supplied dev names in the API * Handle over quota exception from Neutron * Remove pause/unpause NotImplementedError handling at API layer * Add test cases for 2 block\_device functions * Make compute api use util.check\_string\_length * add comment about why snapshot/backup have no lock check * VM diagnostics (v3 API only) * VM diagnostics: add serializer to Diagnostics object * VM diagnostics: add methods to class to update diagnostics * object-ify API v2 availability\_zone extension * object-ify availability\_zones * add get\_by\_metadata\_key to AggregateList object * xenapi: make boot from volume use volumeops * libvirt: Avoid Glance.show on hard\_reboot * Add host\_ip to compute node object * VMware: move fake.py to the test directory * libvirt: convert cpuset XML handling to use set instead of string * virt: add method for formatting CPU sets to strings * Fixes rbd backend image size * Prevent max\_count > 1 and specified ip address as input * Add aggregates.rst to devref index * VMware: virt unrescue method now supports objects * VMware: virt rescue method now supports objects * Remove duplicate python-pip from Fedora devref setup doc * Do not fail cell's instance deletion, if it's missing info\_cache * libvirt: more efficient method to list domains on host * vmwareapi: make method signatures match parent class * Remove duplicate keys from
* virt: split CPU spec parsing code out into helper method
* virt: move get\_cpuset\_ids into nova.virt.hardware
* Fix duplicate definitions of variables/methods
* change the firewall debugging for clarity
* VMware: consolidate common constants into one file
* Require posix\_ipc for lockutils
* hyperv: make method signatures match parent class
* Format eph disk with specified format in libvirt
* Resolve import dependency in consoleauth service
* Add 'anon' kwarg to FakeDbBlockDeviceDict class
* Make cells rpc bdm\_update\_or\_create\_at\_top use BDM objects
* Improve BlockDeviceMapping object cells awareness
* Add support for user\_id based authentication with Neutron
* VMware: add in test utility to get correct VM backing
* Change instance disappeared during destroy from Error to Warning
* VMware: Fix race in spawn() when resizing cached image
* VMware: add support for driver method instance\_exists
* Object-ify APIv3 agents extension
* Object-ify APIv2 agents extension
* Avoid re-adding iptables rules for instances that have disappeared
* libvirt: Save device\_path in connection\_info when booting from volume
* sync periodic\_task fix from incubator
* Fix virt BDM \_\_setattr\_\_ and \_\_getattr\_\_
* Handle InstanceUserDataTooLarge at api layer
* Updated from global requirements
* Mask node.session.auth.password in volume.py \_run\_iscsiadm debug logs
* Nova api service doesn't handle SIGHUP properly
* check ephemeral disk format at libvirt before use
* Avoid referencing stale instance/network\_info dicts in firewall
* Use mtu setting from table instead of flag
* Add debug log for core\_filter
* VMware: optimize VM spawn by caching the vm\_ref after creating VM
* libvirt: Add configuration of guest VCPU topology
* virt: add helper module for determining VCPU topology
* Change the comments of SOFT\_DELETED race condition
* Fix bad log message with glance client timeout
* Move the instance\_type\_id judgment to the except-block
* Update port binding when unshelve instance
* Libvirt: Added suffix to configdrive\_path required for rescue
* sync policy logging fix from incubator
* Sync process utils from olso
* Remove instance\_uuids argument to \_schedule
* Add \_\_repr\_\_ handler for NovaObjects
* Pass instance to \_reschedule rather than instance\_uuid
* Pass instance to \_set\_instance\_error\_state
* Pass instance to \_error\_out\_instance\_on\_exception
* Add APIv3 support to make host optional on evacuate
* Move rebuild to conductor and add find host logic
* VMware: validate that VM exists on backend prior to deletion
* VMware: remove duplicate key from test\_instance dict
* ConfigDriveBuilder refactor for tempdir cleanliness
* VMware: cleanup the constructors of the compute drivers
* Fix wrong lock name for operating instance external events
* VMware: remove unused parameter 'network\_info'
* VM diagnostics: introduce Diagnostics model object
* Fixes internal server error for add/remove tenant flavor access request
* add repr for event objects
* Sync oslo lockutils to nova
* Neutronv2 api does not support neutron without port quota
* Be explicit about objects in \_shutdown\_instance()
* Pass instance object into \_shutdown\_instance()
* Skip none value attributes for ec2 image bdm output
* Fixed wrong assertion in test\_vmops.py
* Remove a not used function \_get\_ip\_by\_id
* make lifecycle event logs more clear
* xenapi: make method signatures match parent class
* libvirt: make method signatures match parent class
* virt: add test helper for checking public driver API method names
* virt: fix signature of set\_admin\_password method
* virt: use context & instance as param names in migrate APIs
* virt: add get\_instance\_disk\_info to virt driver API
* vmwareapi: remove unused update\_host\_status method
* libvirt: remove hack from ensure\_filtering\_rules\_for\_instance
* libvirt: remove volume\_driver\_method API
* libvirt: add '\_' prefix to remaining internal methods
* Imported Translations from Transifex
* Fake driver: remove unused method get\_disk\_available\_least
* Baremetal driver: remove unused states
* Fix nova/network direct use of object modules
* Fix rest of API objects usage
* Fix rest of compute objects usage
* Clean conntrack records when removing floating ip
* Updated from global requirements
* Enforce task\_state is None in ec2 create\_image stop instance wait loop
* Update compute rpcapi tests to use instance object instead of dict
* Fix run\_instance() rpc method to pass instance object
* Fix error in rescue rpcapi that prevents sending objects
* add checksums to udp independent of /dev/vhost-net
* Use dot notation to access instance object fields in ec2 create\_image
* vmwareapi: remove unused fake vim logout method
* vmware: remove unused delete\_disk fake vim method
* Revert "Sync revert and finish resize on instance.uuid"
* Add test cases for block\_device
* Add assert\_called check for "brclt addif" test
* Log when nova-conductor connection established
* Xen: Remove extraneous logging of type information
* Fix agent\_id with string type in API samples files for os-agents v2
* Fix update agent return agent\_id with string for os-agents v3
* VMware: Fix fake raising the wrong exception in \_remove\_file
* VMware: refactor get\_datastore\_ref\_and\_name
* libvirt: introduce separate class for cpu tune XML config
* libvirt: test setting of CPU tuning data
* Make Evacuate API use Instance objects
* VMware: create utility function for reconfiguring a VM
* effectively disable libvirt live snapshotting
* Fix exception raised when a requested console type is disabled
* Add missing image to instance booted from volume
* Use default rpc\_response\_timeout in unit tests
* vmware: Use exc\_info when logging exceptions
* vmware: Reuse existing StorageError class
* vmware: Refactor: fold volume\_util.py into volumeops.py
* Use ebtables to isolate dhcp traffic
* Replace nova.utils.cpu\_count() with processutils.get\_worker\_count()
* Sync log and processutils from oslo
* libvirt: add '\_' prefix to host state information methods
* libvirt: add '\_' prefix to some get\_host\_\* methods
* Deprecate and remove agent\_build\_get\_by\_triple()
* Object-ify xenapi driver's use of agent\_build\_get\_by\_triple()
* Add Agent object
* Move the error check for "brctl addif"
* Add API schema for v2.1/v3 quota\_sets API
* Add API schema for v2.1/v3 flavors\_extraspecs API
* Add API schema for v2.1/v3 attach\_interfaces API
* Add API schema for v2.1/v3 remote\_consoles API
* Use auth\_token from keystonemiddleware
* Use \_set\_instance\_obj\_error\_state in compute manager set\_admin\_password
* api: remove unused function
* api: remove useless get\_actions() in consoles
* Do not allow resize to zero disk flavor
* api: remove dead code in WSGI XML serializer
* Updated from global requirements
* Standardize logging for nova.virt.libvirt
* Fix log debug statement in compute manager
* Add API schema for v2.1/v3 aggregates API
* Fix object code direct use of other object modules
* Fix the rest of direct uses of instance module objects
* Imported Translations from Transifex
* Add API schema for v2.1/v3 flavor\_manage API
* Forcibly set libvirt uri in baremetal virtual power driver
* Synced jsonutils and its dependencies
* Sync revert and finish resize on instance.uuid
* Object-ify APIv3 availability\_zone extension
* Fix bug in TestObjectVersions
* libvirt: add '\_' prefix to all get\_guest\_\*\_config methods
* libvirt: remove unused 'get\_disks' method
* Downgrade some exception LOG messages in the ec2 API
* Conductor: remove irrelevant comment
* Added statement for ... else
* Avoid traceback logs from simple tenant usage extension
* Fix detaching pci device failed
* Adds instance lock check for live migrate
* Don't follow HTTP\_PROXY when talking to localhost test server
* Correct the variable name in trusted filter
* Target host in evacuate can't be the original one
* Add API schema for v2.1/v3 hosts API
* Object-ify APIv3 flavor\_extraspecs extension
* Object-ify APIv2 flavorextraspecs extension
* Catch permission denied exception when update host
* Fix resource cleanup in NetworkManager.allocate\_fixed\_ip
* libvirt: Support snapshot creation via libgfapi
* Allow evacuate from vm\_state=Error
* xenapi: reorder volume\_utils
* Replace assertTrue/False with assertEqual/NotEqual
* Replace assert\* with more suitable asserts in tests
* Replace assertTrue/False with assertIn/NotIn
* VMware: remove unused code in vm\_util.py
* Not count disabled compute node for statistics
* Instance and volume cleanup when a build fails
* wrap\_instance\_event() shouldn't swallow return codes
* Don't replace instance object with dict in \_allocate\_network()
* Determine shared ip from table instead of flag
* Set reasonable defaults for new network values
* Adds network fields to object
* Add new fields to the networks table
* Log exception if max scheduling attempts exceeded
* Make remove\_volume\_connection() use objects
* Create lvm.py module containing helper API for LVM
* Reduce unit test times for glance
* Should not delete active snapshot when instance is terminated
* Add supported file system type check at virt layer
* Don't store duplicate policies for server\_group
* Make exception handling in get\_image\_metadata more specific
* live migrate conductor tasks to use nova.image.API
* Fix Flavor object extra\_specs and projects handling
* Drop support for scheduler 2.x rpc interface
* Drop support for conductor 1.x rpc interface
* Deprecate glance\_\* configuration settings
* Update websocketproxy to work with websockify 0.6
* XenAPI: disable/enable host will be failed when using XenServer
* Remove traces of now unused host capabilities from scheduler
* Fix BaremetalHostManager node detection logic
* Add missing stats info to BaremetalNodeState
* Replace assertTrue(not \*) with assertFalse(\*)
* Clean nova.compute.api.API:\_check\_num\_instances\_quota
* Fix the duplicated image params in a test
* Imported Translations from Transifex
* Fix "fixed\_ip" parameters in unit tests
* Removes the use of mutables as default args
* Add API schema for v2.1/v3 create\_backup API
* Catch ProcessExecutionError in revoke\_cert
* Updated from global requirements
* Sync oslo lockutils to nova
* devref policy: code is canonical source of truth for API
* Log cleanups for nova.virt.libvirt.volume
* Log cleanups for nova.virt.libvirt.imagecache
* Rename VolumeMapping to EC2VolumeMapping
* ec2: Convert to use EC2InstanceMapping object
* Add EC2InstanceMapping object for use in EC2
* Add hook for network info update
* Enhance and test exception safety in hooks
* Object-ify server\_password APIv3 extension
* Object-ify server\_password APIv2 extension
* Move the fixed\_ips APIv2 extension to use objects
* Completely object-ify the floating\_ips\_bulk V2 extension
* Add bulk create/destroy functionality to FloatingIP
* Cleanup and gate on pep8 rules that are stricter in hacking 0.9
* VMware: update file permissions and mode
* Downgrade log level when create network failed
* Updated from global requirements
* libvirt: Use VIR\_DOMAIN\_AFFECT\_LIVE for paused instances
* Initialize objects field in ObjectsListBase class
* Remove bdms from run\_instance RPC conductor call
* Sync "Prevent races in opportunistic db test cases"
* Imported Translations from Transifex
* Check the network\_info obj type before invoke wait function
* Migrate nvp-qos to generic name qos-queue
* Add test for HypervisorUnavailable on conductor
* Test force\_config\_drive as a boolean as last resort
* Add helper functions for getting local disk
* Add more logging to nova-network
* Make resize raise exception when no valid host found
* Fix doc for service list
* Handle service creation race by service workers
* Add configurable HTTP timeout to cinder API calls
* Prevent clean-up of migrating instances on compute init
* Deprecate neutron\_\* configuration settings
* Skip migrations test\_walk\_versions instead of pass
* Remove duplicate code in Objects create() function
* Fix object change detection
* Fix object leak in nova.tests.objects.test\_fields.TestObject
* Failure during termination should always leave state as error()
* Make check\_instance\_shared\_storage() use objects
* Save connection info in libvirt after volume connect
* Remove unused code from test\_compute\_cells
* libvirt: Don't pass None for image\_meta parameter in tests
* Revert "Allow admin user to get all tenant's floating IPs"
* libvirt: Remove use of db for flavor extra specs in tests
* libvirt: Close opened file explicitly
* Network: ensure that ports are 'unset' when instance is deleted
* Don't translate debug level logs in nova
* maint: Fixes wrong docstring of method get\_memory\_mb\_used
* Ensure changes to api.QUOTA\_SYNC\_FUNCTIONS are restored
* Fix the wrong dest of 'vlan' option and add new 'vlan\_start' option
* Add deprecation warning to nova baremetal virt driver
* Fixes typo error in Nova
* Attach/detach interface to paused instance with affect live flag
* Block device API missing translations for exceptions
* Enabled swap disk to be resized when resizing instance
* libvirt: return the correct instance path while cleanup\_resize
* Remove the device handling from pci device object
* Use new pci device handling code in pci\_manager
* Separate the PCI device object handling code
* xenapi: move find\_vbd\_by\_number into volume utils
* Virt: remove unnecesary return code
* Fixes hyper-v volume attach when host is AD member
* Remove variability from object change detection unit test
* Remove XML namespace from some v3 extensions

2014.2.b1
---------

* xenapi: Do not retry snapshot upload on 500
* Fix H401,H402 violations and re-enable gating
* Bump hacking to 0.9.x series
* Change listen address on libvirt live-migration
* Make get\_console\_output() use objects
* Add testing for hooks
* Handle string types for InstanceActionEvent exc\_tb serialization
* Revert "Remove broken quota-classes API"
* Revert "Remove quota-class logic from context and make unit tests pass"
* Fix cold-migrate missing retry info after scheduling
* Disable rescheduling instance when no retry info
* Fix infinitely reschedule instance due to miss retry info
* Use VIF details dictionary to get physical\_network
* Fix live\_migration method's docstring
* Add subnet routes to network\_info when Neutron is used
* fix nova test\_enforce\_http\_true unit test
* novncproxy: Setup log when start nova-novncproxy
* Make sure domain exists before referencing it
* Network: add instance to the debug statement
* V3 Pause: treat case when driver does not implement the operation
* Don't translate debug level logs in nova.virt
* Remove duplicate method
* websocketproxy: remove leftover debug output
* Remove unnecessary else block in compute manager set\_admin\_password
* Treat instance objects like objects in set\_admin\_password flow
* Move set\_admin\_password tests from test\_compute.py to api/mgr modules
* Fix a wrong comment in the code
* maint: correct docstring parameter description
* libvirt: Remove dated docstring
* Add unit tests for ipv4/ipv6 format validation
* Cleanup allocating networks when InstanceNotFound is raised
* Add test to verify ironic api contracts
* VMware: spawn refactor - phase 1 - test for spawn
* Revert "Fix migration and instance resize update order"
* Simplify filter\_scheduler.populate\_retry()
* libvirt: Use os\_command\_line when kernel\_id is set
* libvirt: Make nwfilter driver use right filterref
* libvirt: convert cpu features attribute from list to a set
* Don't log TRACE info in notify\_about\_instance\_usage
* xenapi: add tests for find\_bad\_volumes
* Revert "Remove traces of now unused host capabilities from scheduler"
* Check the length of aggregate metadata
* Add out of tree support dev policy
* Deprecate instance\_get\_by\_uuid() from conductor
* Make metadata password routines use Instance object
* Make SecurityGroupAPI use Object instead of instance\_get\_by\_uuid()
* Add development policies section to devref
* Add read\_only field attribute
* Fix api direct use of instance module objects
* Fix direct use of block\_device module objects
* Fix InstanceActionEvent traceback parameter not serializable
* Fix state mutation in cells image filter
* libvirt: split and test finish\_migration disk resize
* Use no\_timer\_check with soft-qemu
* Add missing translation support
* Update HACKING.rst to include N320
* Add tests to avoid inconsistent extension names
* VMware: spawn refactor - Datastore class
* VMware: remove dsutil.split\_datastore\_path
* VMware: spawn refactor - DatastorePath class
* Updated from global requirements
* VMware: Fix memory leaks caused by caches
* Allow user to specify image to use during rescue - V3 API changes
* VMware: create utility functions
* Check if volume is bootable when creating an instance
* VMware: remove unused parameters in imagecache
* xenapi: virt unrescue method now supports objects
* libvirt: virt unrescue method now supports objects
* libvirt: virt rescue method now supports objects
* xenapi: virt rescue method now supports objects
* Remove useless codes for server\_group
* Catch InstanceInfoCacheNotFound during build\_instances
* Do not replace the aggregate metadata when updating az
* Move oslotest to test only requirements
* libvirt: merge two utils tests files
* libvirt: remove redundant 'libvirt\_' prefix in test case names
* xenapi: refactor detach volume
* Add API schema for v2.1/v3 migrate\_server API
* Adds IVS unit tests for new VIF firewall logic
* Don't set CONF options directly in unit tests
* Fix docstring typo in need\_legacy\_block\_device\_info
* Revert "Partially remove quota-class logic from nova.quotas"
* Revert "Remove quota\_class params from rest of nova.quota"
* Revert "Remove quota\_class db API calls"
* Revert "Convert address to str in fixed\_ip\_obj.associate"
* String-convert IPAddr objects for FixedIP.attach()
* Updated from global requirements
* Run instance root device determination fix
* xenapi: tidy up volumeops tests
* Don't return from a finally block
* Support detection of fixed ip already in use
* Rewrite nova policy to use the new changes of common policy
* Treat instance objects as objects in unrescue API flow
* Treat instance objects as objects in rescue API flow
* Refactor test\_rescue\_unrescue into compute api/manager unit tests
* Sync oslo network utils
* Fix EC2 not found errors for volumes and snapshots
* xenapi: refactor volumeops attach
* xenapi: remove calls to call\_xenapi in volumeops
* xenapi: move StorageError into global exception.py
* Virt: ensure that instance\_exists uses objects
* Use objects through the run\_instance() path
* Deprecate run\_instance and remove unnecessary code
* Change conductor to cast to build\_and\_run\_instance
* Fix migration and instance resize update order
* remove cpu feature duplications in libvirt
* Add unit test trap for object change detection
* Sync periodic\_task from oslo-incubator
* VCDriver - Ignore host in Maintenance mode in stats update
* Enable flake8 F841 checking
* Imported Translations from Transifex
* Reverse order of cinder.detach() and bdm.delete()
* Correct exception info format of v3 flavor manage
* Imported Translations from Transifex
* Handle NetworkInUse exception in api layer
* Correct exception handling when create aggregate
* Properly skip coreutils readlink tests
* Record right action name while migrate
* Imported Translations from Transifex
* Fix for multiple misspelled words
* Refactor test to ensure file is closed
* VM in rescue state must have a restricted set of actions
* versions API: ignore request with a body
* xenapi: fix live-migrate with volume attached
* Add helper methods to convert disk
* XenAPI: Tolerate multiple coalesces
* Add helpers to create per-aggregate filters
* Ensure live-migrate reverts if server not running
* Raise HTTPInternalServerError when boot\_from\_volume with cinder down
* Imported Translations from Transifex
* [EC2]Correct the return status of attaching volume
* Fix security group race condition while creating rule
* VMware: spawn refactor - phase 1 - copy\_virtual\_disk
* Catch InstanceNotFound exception if migration fails
* Inject expected results for IBM Power when testing bus
* Fix InstanceActionTestCase on PostgreSQL/MySQL
* Fix ReservationTestCase on PostgreSQL
* VMware: deprecate ESX driver from virt configuration
* Add new ec2 instance db API calls
* Remove two unused db.api methods
* Fix direct use of aggregate module objects
* Fix tests/compute direct use of instance module objects
* share neutron admin auth tokens
* Fix nova image-show with queued image
* Catch missing Glance image attrs with None
* Align internal image API with volume and network
* Do not wait for neutron event if not powering on libvirt domain
* Mask block\_device\_info auth\_password in virt driver debug logs
* Remove all mostly untranslated PO files
* Payload meta\_data is empty when remove metadata
* Handle situation when key not memcached
* Fix nova/compute direct use of instance module objects
* Address issues with objects of same name
* Register objects in more services
* Imported Translations from Transifex
* Default dhcp lease time of 120s is too short
* Add VIF mac address to fixed\_ips in notifications
* Call \_validate\_instance\_group\_policy in \_build\_and\_run\_instance
* Add refresh=True to get\_available\_nodes call in build\_and\_run\_instance
* Add better coverage support under tox
* remove unneeded call to network\_api on detach\_interface
* Cells: Pass instance objects to build\_instances
* XenAPI: Add logging information for cache/download duration
* Remove spaces from SSH public key comment
* Make hacking test more accurate
* Fix security group race condition while listing and deleting rules
* On rebuild check for null image\_ref
* Add a reference to the nova developer documentation
* VMware: use default values in get\_info() when properties are missing
* VMware: uncaught exception during snapshot deletion
* Enforce query order for getting VIFs by instance
* Fix typo in comment
* Allow admin user to get all tenant's floating IPs
* Defer applying iptable changes when nova-network start
* Remove traces of now unused host capabilities from scheduler
* Add log translation hints
* Imported Translations from Transifex
* Fix CIDR values denoting hosts in PostgreSQL
* Sync common db and db/sqlalchemy
* Remove quota\_class db API calls
* Remove quota\_class params from rest of nova.quota
* Fix wrong quota calculation when deleting a resizing instance
* Ignore etc/nova/nova.conf.sample
* Fix wrong method name assert\_called\_once
* Correct pci resources log
* Downgrade log when attach interface can't find resources
* Fixes Hyper-V iSCSI target login method
* VMware: spawn refactor - phase 1 - fetch\_image
* vmware:Don't shadow builtin function type
* Partially remove quota-class logic from nova.quotas and test\_quotas
* Convert address to str in fixed\_ip\_obj.associate
* Accurate exception info in api layer for aggregate
* minor corrections to devref rpc page
* libvirt: Handle unsupported host capabilities
* Fix the duplicated extension summaries
* Imported Translations from Transifex
* Raise more information on V2 API volumes when resource not found
* Remove comments since it's pointless
* Downgrade and fix log message for floating ip already disassociated
* Fix wrong method name for test\_hacking
* Imported Translations from Transifex
* Add specific regexp for timestamps in v2 xml
* VMWare: spawn refactor - phase 1 - create\_virtual\_disk
* VMware: spawn refactor - phase 1 - power\_on\_vm
* Move tests into test\_volume\_utils
* Tidy up xenapi/volume\_utils.py
* Updated from global requirements
* VMware: Fix usage of an alternate ESX/vCenter port
* VMware: Add check for datacenter with no datastore
* Remove unused instance\_update() method from virtapi
* Make baremetal driver use Instance object for updates
* Rename quota\_injected\_file\_path\_bytes
* Imported Translations from Transifex
* Fixes arguments parsing when executing command
* Remove explicit dependency on amqplib
* Deprecate action\_event\_\*() from conductor
* Remove conductor usage from compute.utils.EventReporter
* Unit test case for more than 1 ephemeral disks in BDM
* Network: replace neutron check with decorator
* Update links in README
* Add mailmap entry
* XenAPI: Remove unneeded instance argument from image downloading
* XenAPI: adjust bittorrent settings
* Fix a minor comments error
* Code Improvement
* Fix the explanation of HTTPNotFound for cell showing v2 API
* Add Nova API Sample file & test for get keypair
* Add a docstring to hacking unit tests
* Make libvirt driver use instance object for updates
* Make vmwareapi/vmops use Instance object for updates
* Convert xenapi/vmops uses of instance\_update to objects
* Make xenapi agent code use Instance object for updates
* Check object's field
* Use Field in fixed\_ip
* Remove logging in libvirt \_connect\_auth\_cb to avoid eventlet locking
* Fix v3 API extension names for camelcase
* VMware: prevent image snapshot if no root disk defined
* Remove unnecessary cleanup in test
* Raise HTTPForbidden from os-floating-ips API rather than 404
* Improve hacking rule to avoid author markers
* Remove and block DB access in dhcpbridge
* Improve conductor error cases when unshelving
* Dedup devref on unit tests
* Shrink devref.unit\_tests, since info is in wiki
* Fix calls to mock.assert\_not\_called()
* VMware: reduce unit test times
* Fix wrong used ProcessExecutionError exception
* Clean up openstack-common.conf
* Revert "Address the comments of the merged image handler patch"
* Remove duplicated import in unit test
* Fix security group list when not defined for an instance
* Include pending task in log message on skip sync\_power\_state
* Make cells use Fault obj for create
* libvirt: Handle \`listDevices\` unsupported exception
* libvirt: Stub O\_DIRECT in test if not supported
* Deprecate instance\_fault\_create() from conductor
* Remove conductor usage from add\_instance\_fault\_from\_exc()
* Add create() method to InstanceFault object
* Remove use of service\_\* conductor calls from xenapi host.py
* Updated from global requirements
* Optimize validate\_networks to query neutron only when needed
* Remove quota-class logic from context and make unit tests pass
* VMware: spawn refactor - phase 1 - execute\_create\_vm
* xenapi: fixup agent tests
* Don't translate debug level logs in nova.spice, storage, tests and vnc
* libvirt: Refresh volume connection\_info after volume snapshot
* Fix instance cross AZ check when attaching volumes
* Raise descriptive error for over volume quota
* Fix broken version responses
* Don't translate debug level logs in objectstore, pci, rdp, servicegroup
* Don't translate debug level logs in cloudpipe, hacking, ipv6, keymgr
* Don't translate debug level logs in nova.cert, console and consoleauth
* Don't translate debug level logs in nova.cmd and nova.db
* Don't translate debug level logs in nova.objects
* Don't translate debug level logs in nova.compute
* Fix bad Mock calls to assert\_called\_once()
* VCDriver - No longer returns uptime due to multiple hosts
* Make live\_migration use instance objects
* wrap\_check\_security\_groups\_policy is already defined
* Updated from global requirements
* Don't translate debug level logs in nova.conductor
* Don't translate debug level logs in nova.cells
* Use strtime() specific timestamp regexp
* Use datetime object for fake network timestamps
* Use datetime object for stub created\_at timestamp
* Verify created\_at cloudpipe timestamp is isotime
* Verify next-available limit timestamps are isotime
* Verify created/updated timestamps are isotime
* Use timeutils.isotime() in images view builder
* Use actual fake timestamp in API templates
* Normalize API extension updated timestamp format
* Regenerate API samples for GET /extensions
* objects: remove unused utils module
* objects: restore some datetime field comments
* Add fault wrapper for rescue function
* Add x-openstack-request-id to nova v3 responses
* Remove unnecessary wrapper for 5 compute APIs
* Update block\_device\_info to contain swap and ephemeral disks
* Hacking: add rule number to HACKING.rst
* Create the image mappings BDMs earlier in the boot
* Delete in-process snapshot when deleting instance
* Imported Translations from Transifex
* Fixed many typos
* VMware: remove unneeded code
* Rename NotAuthorized exception to Forbidden
* Add warning to periodic\_task with interval 0
* Fix typo in unit tests
* Remove a bogus and unnecessary docstring
* Don't translate debug level logs in nova.api
* Don't translate debug level logs in nova.volume
* VMware: remove duplicate \_fake\_create\_session code
* libvirt: Make \`fakelibvirt.libvirtError\` match
* ec2utils: Use VolumeMapping object
* ec2: create volume mapping using nova object
* Add VolumeMapping object for use in EC2
* Add new ec2 volume db API calls
* Remove legacy block device usage in ec2 API
* Deprecate instance\_get\_active\_by\_window\_joined() from conductor
* Deprecate instance\_get\_all\_by\_filters() from conductor
* Don't translate debug level logs in nova.network
* Fix bad param name in method docstring
* Nova should pass device\_id='' instead of None to neutron.update\_port()
* Set default auth\_strategy to keystone
* Support multi-version pydevd
* replace NovaException with VirtualInterfaceCreate when neutron fails
* Spice proxy config setting to be read from the spice group in nova.conf
* xenapi: make auto\_config\_disk persist boot flag
* Deprecate compute\_unrescue() from conductor
* Deprecate instance\_destroy() from conductor
* libvirt: fix comment for get\_num\_instances
* Fix exception message being changed by nested exception
* DescribeInstances in ec2 shows wrong image-message
* Imported Translations from Transifex
* Remove unused nova.crypto.compute\_md5()
* VMware: spawn refactor - phase 1 - get\_vif\_info
* Remove comments and to-do for quota inconsistency
* Set the volume access mode during volume attach
* Fix a typo in compute/manager::remove\_volume\_connection()
* XenAPI: Use local rsync rather than remote if possible
* Delete image when backup operation failed on snapshot step
* Fix migrate\_instance\_\*() using DB for floating addresses
* Ignore errors when deleting non-existing vifs
* Use eventlet.tpool.Proxy for DB API calls
* Improve performance for checking hosts AZs
* Correct the log in conductor unshelve\_instance
* Imported Translations from Transifex
* Make instance\_exists() take an instance, not instance\_name
* Xen: Retry plugin call after connection reset
* Remove metadata's network-api dependence on the database
* Add helper method to determine disk size from instance properties
* Deprecate nova-manage flavor subcommand
* Updated from global requirements
* Imported Translations from Transifex
* VMware: remove unused variable
* Scheduler: enable scheduler hint to pass the group name
* Loosen import\_exceptions to cover all of gettextutils
* Don't translate debug level scheduler logs
* VMWare - Check for compute node before triggering destroy
* Update version aliases for rpc version control
* make ec2 errors not useless
* VMware: ensure rescue instance is deleted when instance is deleted
* Ensure info cache updates don't overwhelm cells
* Remove utils.reset\_is\_neutron() to avoid races
* Remove unnecessary call to fetch info\_cache
* Remove deprecated config option names: Juno Edition
* Don't overwrite instance object with dict in \_init\_instance()
* Add specific doc build option to tox
* Fix up import of conductor
* Use one query instead of two for quota\_usages
* VMware: Log additional details of suds faults
* Disable nova-manage network commands with Neutron V2
* Fix the explanations of HTTPNotFound for keypair's API
* remove unneeded call to network\_api on rebuild\_instance
* Deprecate network\_migrate\_instance\_\* from conductor
* Deprecate aggregate\_host\_\* operations in conductor
* Convert instance\_usage\_audit() periodic task to objects
* Return to using network\_api directly for migrations
* Make \_is\_multi\_host() use objects
* Remove unneeded call to fetch network info on shutdown
* Instance groups: add method get\_by\_hint
* Imported Translations from Transifex
* GET details REST API next link missing 'details'
* Don't explode if we fail to unplug VIFs after a failed boot
* nit: correct docstring for FilterScheduler.schedule\_run\_instance
* Revert "Fix network-api direct database hits in metadata server"
* ec2: use BlockDeviceMappingList object
* ec2: use SecurityGroup object
* ec2: get services using ServiceList object
* ec2: remove db.instance\_system\_metadata usage
* Remove nova-clear-rabbit-queues
* Allow -1 as the length of "get console output" API
* Fix AvailabilityZone check for hosts in multiple aggregates
* Move \_get\_locations to module level plus tests
* Define constants for the VIF model types
* Imported Translations from Transifex
* Make aggregate host operations use Aggregate object
* Convert poll\_rescued\_instances() periodic task to objects
* Make update\_available\_resource() use objects
* Add get\_by\_service() method to ComputeNodeList object
* Add with\_compute\_node to service\_get()
* Make \_get\_compute\_info() use objects
* Pass configured auth strategy to neutronclient
* Imported Translations from Transifex
* Make quota rollback checks more robust in conductor tests
* Updated from global requirements
* Remove duplicate code from nova.db.sqlalchemy.utils
* Downgrade the log level when automatic confirm\_resize fails
* Refactor unit tests for image service CRUD
* Finish \_delete\_instance() object conversion
* Make detach\_volume() use objects
* Add lock on API layer delete floating IP
* ec2: Convert instance\_get\_by\_uuid calls to objects
* Fix network-api direct database hits in metadata server
* Scheduler: remove test scheduling methods that are not used
* Add info\_cache as expected attribute when evacuate instance
* Make compute manager use network api method return values
* Allow user to specify image to use during rescue - V2 API changes
* Allow user to specify image to use during rescue
* Use debug level logging in unit tests, but don't save them
* Update user\_id length to match Keystone schema in volume\_usage\_cache
* Avoid the possibility of truncating disk info file
* Read deleted instances during lifecycle events
* Add RBAC policy for ec2 API security groups calls
* compute: using format\_message() to convert exception to string
* support local debug logging
* Fix bug detach volume fails with "KeyError" in EC2
* Fix straggling uses of direct-to-database queries in nova-network
* Xen: Do not resize root volumes
* Remove mention of nova-manage.conf from nova-manage.rst
* XenAPI: Add host information to glance download logs
* Check image exists before calling inject\_data
* xenapi: Cleanup tar process on glance error
* Missing catch InstanceNotFound in v3 API
* Recover from POWERING-\* state on compute manager start-up
* Remove the unused \_validate\_device\_name()
* Adds missing expected\_errors for V3 API multinic extension
* Correct test boundary for libvirt\_driver.get\_info
* Updated from global requirements
* Update docs to reflect new default filters
* Enable ServerGroup scheduler filters by default
* Revert "Use debug level logging during unit tests"
* Remove redundant tests from Qcow2TestCase
* libvirt: remove\_logical\_volumes should remove each separately
* VMware: Fixes the instance resize problem
* Fix anti-affinity server-group boot failure
* Nova utils: add in missing translation
* Add exception handling in "nova diagnostics"
* mark vif\_driver as deprecated and log warning
* Revert object-assuming changes to \_post\_live\_migration()
* Revert object-assuming changes to \_post\_live\_migration()
* libvirt: optimize pause mode support
* Check for None or timestamp in availability zone api sample
* Refactor Network API
* Require admin context for interfaces on ext network
* remove redundant copy of test\_cache\_base\_dir\_exists
* Make sure leases are maintained until release
* Add tests for remaining expected conductor exceptions
* Fix Jenkins translation jobs
* libvirt: pause mode is not supported by all drivers
* Reduce config access in scheduler
* VMWare: add power off vm before detach disk during unrescue
* Reduce logging in scheduler
* xenapi: add a test for \_get\_partitions
* Refactor network\_utils to new call\_xenapi pattern
* Sync request\_id middleware bug fix from oslo
* Make example 'entry\_points' parameter a dictionary
* Localized error exception message on delete host aggregate
* Note that XML support \*may\* be removed
* Change errors\_out\_migration decorator to work with RPC
* low hanging fruit oslo-incubator sync
* Fix description of ServerGroupAffinityFilter
* Added test cases in ConfKeyManagerTestCase to verify fixed\_key
* Moved the registration of lifecycle event handler in init\_host()
* Change NotFound to InstanceNotFound in server\_diagnostics.py
* Remove unnecessary passing of task\_state to check\_instance\_state
* Rename instance\_actions v3 to server\_actions
* Drop nova-rpc-zmq-receiver man-page
* Correct the keypairs-get-resp.json API sample file
* Make hypervisor\_version an int in fakeVirt driver
* Ensure network interfaces are in requested order
* Reserve 10 migrations for backports
* XenAPI: Calculate disk\_available\_least
* Open Juno development

2014.1.rc1
----------

* Fix getting instance events on subsequent attempts
* Improved logs for add/remove security group rules
* VMware: remove double import
* VMware: clean up VNC console handling
* Make conductor expect ActionEventNotFound for action methods
* Remove zmq-receiver from setup.cfg
* Add a note about deprecated group filters
* Fix the section name in CONTRIBUTING.rst
* Fix display of server group members
* Add new style instance group scheduler filters
* Automatically create groups that do not exist
* Add InstanceGroup.get\_by\_name()
* Remove unnecessary check for CONF.notify\_on\_state\_change
* Add nova.conf.sample to gitignore
* Use binding:vif\_details to control firewall
* Disable volume attach/detach for suspended instances
* Updated from global requirements
* Persist image format to a file, to prevent attacks based on changing it
* Add test cases for validate\_extra\_spec\_keys
* Catch InstanceInLocked exception for rescue and instance metadata APIs
* Imported Translations from Transifex
* Make 'VDI too big' more verbose
* Use osapi\_glance\_link\_prefix for image location header
* postgres incompatibility in InstanceGroup.get\_hosts()
* Add missing test for None in sqlalchemy query filter
* Use instance data instead of flavor in simple\_tenant\_usage extension
* Sync oslo imageutils, strutils to Nova
* Use correct project/user id in conductor.manager
* fix the extension of README in etc/nova
* Tell pip to install packages it sees globally
* Change exception type from HTTPBadRequest to HTTPForbidden
* Don't attempt to fill faults for instance\_list if FlavorNotFound
* Bypass the database if limit=0 for server-list requests
* Fix availability-zone option miss when creates an instance
* No longer any need to pass admin context to aggregate DB API methods
* Updated Setting up Developer Environment for Ubuntu
* Change libvirt close callback to use green thread
* Re-work how debugger CLI opts are registered
* Imported Translations from Transifex
* \_translate\_from\_glance() can cause an unnecessary HTTP request
* Add UNSHELVING and RESCUING into IoOPSFilter consideration state
* VMware: fix booting from volume
* Do not add current tenant to private flavor access
* Disable oslo.messaging debug logs
* Update vm\_mode when rebuilding instance with new image
* VMware: fix list\_instances for multi-node driver
* VMware: Add utility method to retrieve remote objects
* Use project/user from instance for quotas
* Refactors unit tests of image service detail()
* Refactors nova.image.glance unit tests for show()
* Revert deprecation warning on Neutron auth
* V2 API: remove unused imports
* Change HTTPUnprocessableEntity to HTTPBadRequest
* Rename \_post\_live\_migration instance\_ref arg
* Add a decorator decorator that checks func args
* Updated from global requirements
* Instance groups: cleanup
* Use the list when get information from libvirt
* Remove unused quota\_\* calls from conductor
* Use correct project/user for quotas
* Include proper Content-Type in the HTTP Headers
* Fix inconsistent quota usage for security group
* Handling unlimited values when updating quota
* Fix service API and cells
* Remove unnecessary stubbing in test\_services
* InvalidCPUInfo exception added to except block
* VMware: fix exception when no objects are returned
* Don't allow empty or 0 volume size for images
* Wait till message handling is done on service stop
* Remove PciDeviceList usage in pci manager
* Fix the rpc module import in the service module
* Revert "VMware Driver update correct disk usage stat"
* Catch HostBinaryNotFound exception in V2 API
* Ignore InstanceNotFound while getting console output
* Raise error on nova-api if missing subnets/fixed\_ips on networks/port
* Fix the explanations of HTTPNotFound for new APIs
* Remove the nova.config.sample file
* Refuse to block migrate instances with config drive
* Include next link when default limit is reached
* Catch NotImplementedError on Network Associate
* VMware: add a file to help config the firewall for vnc
* Change initial delay for servicegroup api reporting
* Fix KeyError if neutron security group is not TCP/UDP/ICMP and no ports
* Prevent rescheduling on block device failure
* Check if nfs/glusterfs export is already mounted
* Make compute API resize methods use Quotas objects
* Remove commented out code in test\_cinder\_cloud
* Update quantum to neutron in comment
* Add deleted\_at attribute in glance stub on delete()
* Add API sample files of "unshelve a server" API
* Remove unused method from fake\_network.py
* Don't refresh network cache for instances building or deleting
* GlanceImageService static methods to module scope
* Remove XenAPI driver deprecation warning log message
* VMware: bug fix for host operations when using VMwareVCDriver
* xenapi: boot from volume without image\_ref
* Use HTTPRequestV3 instead of HTTPRequest in v3 API tests
* Cells: Send instance object for instance\_delete\_everywhere
* Fix "computeFault" when v3 API "GET /versions/:(id)" is called
* VMware: ensure that the task completed for resize operation
* Change parameters of add\_timestamp in ComputeDriverCPUMonitor class
* Cells API calls return 501 when cells disabled
* Add version 2.0 of conductor rpc interface
* Added missing raise statement when checking the config driver format
* Make NovaObject report changed-ness of its children
* Increase volume creation max waiting time
* Remove action-args from nova-manage help
* VMware: fix rescue disk location when image is not linked clone
* Fix comment for block\_migration in nova/virt/libvirt/driver.py
* Don't import library guestfs directly
* Correct inheritance of nova.volume.cinder.API
* VMware: enable booting an ISO with root disk size 0
* Remove bad log message in get\_remote\_image\_service
* Raise NotImplementedError in NeutronV2 API
* Remove block\_device\_mapping\_destroy() from conductor API
* Make sure instance saves network\_info when we go ACTIVE
* Fix sqlalchemy utils test cases for SA 0.9.x
* Fix equal\_any() DB API helper
* Remove migration\_update() from conductor API
* Remove instance\_get() from conductor API
* Remove aggregate\_get\_by\_host() from conductor API
* add support for host driver cleanup during shutdown
* Add security\_group\_rule to objects registry
* Remove aggregate\_get() from conductor API
* Delete meaningless lines in test\_server\_metadata.py
* Imported Translations from Transifex
* Move log statement to expose actually info\_cache value
* Fix input validation for V2 API server group API extension
* Adds test for rebuild in compute api
* Specify spacing on periodic\_tasks in manager.py
* network\_info cache should be cleared before being rescheduled
* Don't sync [system\_]metadata down to cells on instance.save()
* Fixes the Hyper-V agent individual disk metrics
* VMware: remove unused code (\_delete method in vmops.py)
* Fix docstring for shelve\_offload\_instance in compute manager
* Block database access in nova-network binary
* Make nova-network use conductor for security groups refresh
* Make nova-network use quotas object
* Reverts change to default state\_path
* Fix raise\_http\_conflict\_for\_instance\_invalid\_state docstring
* Cells: Pass instance objects to update/delete\_instance\_metadata
* Don't detach root device volume
* Revert "Adding image multiple location support"
* Revert "Move libvirt RBD utilities to a new file"
* Revert "enable cloning for rbd-backed ephemeral disks"
* Add helper method for injecting data in an image
* Add helper method for checking if VM is booting from a volume
* Libvirt: Repair metadata injection into guests
* Make linux\_net use objects for last fixed ip query
* Add get\_by\_network() to FixedIPList
* Update aggregate should not allow duplicated names
* Recover from REBOOT-\* state on compute manager start-up
* VMware: raise an exception for unsupported disk formats
* VMware: ensure that deprecation does not appear for VC driver
* rename ExtensionsResource to ExtensionsController
* Ensure is\_image\_available handles V2 Glance API
* libvirt: fix blockinfo get\_device\_name helper
* Log Content-Type/Accept API request info
* Remove the docker driver
* xenapi: Speed up tests by not waiting on conductor
* Updated from global requirements
* xenapi: Fix test\_rescue test to ensure assertions are valid
* VMware: image cache aging
* Add py27local tox target
* Fix broken API os-migrations
* Catch FloatingIpNotFoundForHost exception
* Fix get\_download\_hander() typo
* Handle IpAddressGenerationClient neutron
* Delete ERROR+DELETING VMs during compute startup
* VMware: delete vm snapshot after nova snapshot
* Fix difference between mysql & psql of flavor-show
* Add version 3.0 of scheduler rpc interface
* Make libvirt wait for neutron to confirm plugging before boot
* Task cleanup\_running\_deleted\_instances can now use slave
* Do not add HPET timer config to non x86 targets
* Make test computations explicit
* Instance groups: only display valid instances for policy members
* Don't allow reboot when instance in rebooting\_hard
* VMware: add missing translations
* Fix typo and add test for refresh\_instance\_security\_rules
* Add declaration of 'refresh\_instance\_security\_rules' to virt driver
* Remove mention of removed dhcp\_options\_enabled
* Fix compute\_node stats
* Fix: Unshelving an instance uses original image
* Noted that tox is the preferred unit tester
* Updated development.environment.rst
* Use instance object instead of \_instance\_update()
* Remove compute virtapi BDM methods
* enable cloning for rbd-backed ephemeral disks
* Move libvirt RBD utilities to a new file
* Fixup debug log statements in the nova compute manager
* Use debug level logging during unit tests
* Fix debug message formatting in server\_external\_events
* VMware: VimException \_\_str\_\_ attempts to concatenate string to list
* Mark ESX driver as deprecated
* Volume operations should be blocked for non-null task state
* xenapi: fix spawn servers with ephemeral disks
* Fixes NoneType vcpu list returned by Libvirt driver
* Add conversion type to LOG.exception's string
* Remove compute API get\_instance\_bdms method
* Move run\_instance compute to BDM objects
* Move live migration callbacks to BDM objects
* Instance groups: validate policy configuration
* Add REST API for instance group api extension
* VMware: boot from iso support
* Store neutron port status in VIF model
* Correct network\_model tests and \_\_eq\_\_ operator
* Make network\_cache more robust with neutron
* Error out failed migrations
* Fix BDM legacy usage with objects
* Fix anti-affinity race condition on boot
* Initial scheduler support for instance\_groups
* Add get\_hosts to InstanceGroup object
* Add instance to instance group in compute.api
* Add add\_members to InstanceGroup object
* Remove run-time dependency on fixtures module by the nova baremetal
* Make compute manager prune instance events on delete and migrate
* Make compute manager's virtapi support waiting for events
* Add os-server-external-events V3 API
* Add os-server-external-events API
* Add external\_instance\_event() method to compute manager
* Fix invalid vim call in vim\_util.get\_dynamic\_properties()
* Rescue API handle NotImplementedError
* VMware: Add a test helper to mock the suds client
* VMware: Ensure test VM is running in rescue tests
* Move \_poll\_volume\_usage periodic task to BDM objects
* Move instance\_resize code paths to BDM objects
* Make swap\_volume code path use BDM objects
* Fix log messages typos in rebuild\_instance function
* Move detach\_volume and remove\_vol\_connection to BDM objects
* Move instance delete to new-world BDM objects
* VMware ESX: Boot from volume must not relocate vol
* Fix development environment docs for redhat-based systems
* neutron\_metadata\_proxy\_shared\_secret should not be written to log file
* VMware: create datastore utility functions
* Address the comments of the merged image handler patch
* Ignore the image name when booting from volume

2014.1.b3
---------

* Fix typo in devref
* VMware: refactor \_get\_volume\_uuid
* Add return value to some network API methods
* Fixing host\_ip configuration help message
* No longer call check\_uptodate.sh in pep8
* notifier middleware broken by oslo.messaging
* regenerate the config file to support 1.3.0a9
* Add doc update for 4 filters which is missing in filter\_scheduler.rst
* Remove 3 unnecessary variables in scheduler
* Adding image multiple location support
* Move all shelve code paths to BDM objects
* Move rebuild to BDM objects
* sync sslutils to not conflict with oslo.messaging
* Accurate comment in compute layer
* Refactor xenapi/host.py to new call\_xenapi pattern
* Add a missing space in a log message
* VMware: iscsi target discovery fails while attaching volumes
* Remove warn log in quota function on API layer
* Sync the latest DB code from oslo-incubator
* Prevent thrashing when deploying many bm instances
* Support configuring libvirt watchdog from flavors
* Add watchdog device support to libvirt driver
* Remove extra space at the end of help string
* Port libvirt copy\_image tests to mock
* Updated from global requirements
* Sync latest Guru Meditation Reports from Oslo
* Skip sqlite-specific tests if sqlite is not configured
* VMware: add in debug information for network selection
* vmwareapi:Fix nova compute service down issue when injecting pure IPv6
* Make compute use quota object existing function
* Fixes api samples for V2 os-assisted-volume-snapshots
* Raise exception if volume snapshot id not found instead of return
* Added os-security-groups prefix
* VMware Driver update correct disk usage stat
* attach/detach interface should raise exception when instance is locked
* Restore get\_available\_resource method in docker driver
* Make compute manager use InstanceInfoCache object for deletes
* Deprecate conductor instance\_type\_get() and remove from VirtAPI
* Make restore\_instance pass the Instance object to compute manager
* Use uuid instead of name for lvm backend
* Adds get\_console\_connect\_info API
* Remove log\_handler module from oslo-incubator sync
* Remove deleted module flakes from openstack-common.conf
* When a claim is rejected, explain why
* Move xenapi/agent.py to new call\_xenapi style
* xenapi plugins: Make sure subprocesses finish executing
* Update Oslo wiki link in README
* Refactor pool.py to remove calls to call\_xenapi
* Move vbd plug/unplug into session object
* xenapi: make session calls more discoverable
* Make error notifications more consistent
* Adds unit test for etc/nova/policy.json data
* Support IPv6 when booting instances
* xenapi: changes the debug log formatting
* libvirt: raises exception when attempt to resize disk down
* xenapi: stop destroy\_vdi errors masking real error
* Make resource\_tracker use Flavor object
* Make compute manager use Flavor object
* Make baremetal driver use Flavor object instead of VirtAPI
* Sync latest config file generator from oslo-incubator
* Fixes evacuate doesn't honor enable password conf for v3
* Removed copyright from empty files
* Fix the explanations of HTTPNotFound response
* VMware: support instance objects
* Add support for tenant\_id based authentication with Neutron
* Remove and recreate interface if already exists
* Prevent caller from specifying id during Aggregate.create()
* Enable flake8 H404 checking
* Imported Translations from Transifex
* Fix logic for aggregate\_metadata\_get\_by\_host\_with\_key test case
* Use oslo-common's logging fixture
* Re-Sync oslo-incubator fixtures
* Updated from global requirements
* Update pre\_live\_migration to take instance object
* Remove unused method inject\_file()
* Remove db query from deallocate\_fixed\_ip
* update deallocate\_for\_instance to take instance obj
* Update server\_diagnostics to use instance object
* Move the metrics update to get\_metrics
* Unmount the NFS and GlusterFS shares on detach
* Add a caching scheduler driver
* libvirt: image property variable already defined
* Replaces exception re-raising in Hyper-V
* Remove blank space after print
* VMware: add instance detail to detach log message
* libvirt: Enable custom video RAM setting
* Remove trailing comma from sample JSON
* Add pack\_action\_start/finish helper to InstanceAction object
* Rewrite InstanceActionEvent object testcase using mock
* Clean up \_make\_\*\_list in object models to use base.obj\_make\_list
* libvirt: remove explicit /dev/random rng default
* Document virt driver methods that take Instance objects
* Make interface attach and detach use objects
* Pass instance object to soft\_delete() and get\_info()
* libvirt: setting a correct driver name for iscsi volumes
* libvirt: host specific virtio-rng backend
* Fix HTTP methods for test\_attach\_interfaces
* Fix the calls of webob exception classes
* VMware: remove unused parameter from \_wait\_for\_task
* Downgrade the log level for floating IP associate
* Removing redundant validation for rebuild request
* VMware: add a test for driver capabilities
* Catch HostBinaryNotFound exception when updating a service
* VMware: ensure that datastore name exists prior to deleting disk
* Move compute's \_get\_instance\_volume\_block\_device\_info to BDM objects
* Use disk\_bus and device\_type in attaching volumes
* Add device bus and type to virt attach\_volume call
* Make volume attach use objects
* compute: invalid gettext message format
* VMware: fix the VNC port allocation
* VMware: fix datastore selection when token is returned
* Hyper-V log cleanups
* vmware: driver races to create instance images
* Introduce Guru Meditation Reports into Nova
* Updated from global requirements
* Revert "VMware: fix race for datastore directory existence"
* Use instance object for delete
* Update ubuntu dev env instructions
* VMware: fix race for datastore directory existence
* libvirt: adding a random number generator device to instances
* Add 'use\_slave' to instance\_get\_all\_by\_filter in conductor
* Fix the validation of flavor\_extraspecs v2 API
* Make webob.exc.HTTPForbidden return correct message
* Use image from the api in run\_instance, if present
* Remove unused variables in the xenapi.vmops module
* Describe addresses in ec2 api broken with neutron
* Cleanup v3 test\_versions
* Fix import order in log\_handler
* Emit message which merged user-supplied argument in log\_handler
* Adds service request parameter filter for V3 API os-hosts request
* Fix comment typo in nova/compute/api.py
* stop throwing deprecation warnings on init
* Remove broken quota-classes API
* VMware: fix instance lookup against vSphere
* Add a new compute API method for deleting retired services
* Fix instance\_get\_all\_by\_host to actually use slave
* Periodic task poll\_bandwidth\_usage can use slave
* Partially revert "XenAPI: Monitor the GC when coalescing"
* Mark XML as deprecated in the v2 API
* adjust version definition for v3 to be only json
* Fix option indenting in compute manager
* Adds create backup server extension for the V3 API
* Catch InstanceNotFound exceptions for V2 API instance\_actions
* Sync log.py from oslo
* Make floating\_ips module use FloatingIP for associations
* Remove \_\_del\_\_ usage in vmwareapi driver
* Fixed spelling errors in nova
* LibVirt: Disable hairpin when using Neutron
* VMware: optimize instance reference access
* Serialize the notification payload in json
* Add resource tracking to unshelve\_instance()
* Typo in the name 'libvirt\_snapshot\_compression'
* Refactor driver BDM attach() to cover all uses
* Fix assertEqual parameter order post V3 API admin-actions-split
* Fix copyright messages after admin actions split for V3 API
* Catch InstanceNotFound exceptions for V2 API virtual interfaces
* Correct the assert() order in test\_libvirt\_blockinfo
* Use disk\_bus when guessing the device name for vol
* libvirt: add virtio-scsi disk interface support
* libvirt: configuration element for virtual controller
* VMware: factor out management of controller keys and unit numbers
* Remove unused notifier and rpc modules from oslo sync
* Imported Translations from Transifex
* Remove XML support from schemas v3
* Treat port attachment failures correctly
* Add experimental warning for Cells
* Add boolean convertor to "create multiple servers" API
* VMware: prevent race for vmdk deletion
* VMware: raise more specific exceptions
* Disable IGMP snooping on hybrid Linux bridge
* libvirt: remove retval from libvirt \_set\_host\_enabled()
* VMware: remove unused class
* compute: format\_message is a method not an attribute
* MetricsWeigher: Added support of unavailable metrics
* Fix incorrect kwargs 'reason' for HTTPBadRequest
* Fix the indents of v3 API sample docs
* Refactor get\_iscsi\_initiator to a common location
* Fix compute\_node\_update() compatibility with older clients
* XenAPI: Add the mechanism to attach a pci device to a VM
* Remove underscore for the STATE\_MAP variable
* XenAPI: Add the support for updating the status of the host
* libvirt: support configurable wipe methods for LVM backed instances
* Fix InstanceNotFound error in \_delete\_instance\_files
* Ensure parent dir exists while injecting files
* Convert post\_live\_migration\_at\_destination to objects
* Convert remove\_fixed\_ip\_to\_instance to objects
* Convert add\_fixed\_ip\_to\_instance to objects
* Fix invalid facilities documented in rootwrap.conf
* VMware: improve unit test time
* Replace assertEqual(None, \*) with assertIsNone in tests
* Add comment/doc about utils.mkfs in rootwrap
* Add mkfs to the baremetal-deploy-helper rootwrap
* libvirt-volume: improve unit test time
* Move consoleauth\_manager option into nova.service and fix imports
* libvirt: improve unit test time
* Imported Translations from Transifex
* Make is\_neutron() thread-safe
* Update the mailmap
* Rewrite InstanceAction object test cases using mock
* Make floating\_ips module use FloatingIP for updates
* Make floating\_ips module use FloatingIP for (de-)allocations
* Make floating\_ips module use FloatingIP for all get queries
* Make floating\_ips module use Service object
* Make floating\_ips module use Instance object
* Make floating\_ips module use Network object
* Make floating\_ips module use FixedIP object
* Fix break in vm\_vdi\_cleaner after oslo changes
* Fixes the Hyper-V VolumeOpsTestCase base class
* libvirt: Uses available method get\_host\_state
* Add V3 api for pci support
* Update docstring for baremetal opportunistic tests
* Fix upper bound checking for flavor create parameters
* Fixed check in image cache unit test
* Count memory and disk slots once in cells state manager
* changed quantum to neutron in vif-openstack
* Convert unrescue\_instance to objects
* Don't allow compute\_node free\_disk\_gb to be None
* compute: removes unnecessary condition
* Rename Openstack to OpenStack
* Rename Openstack to OpenStack
* Support setting a machine type to enable ARMv7/AArch64 guests to boot
* Catch InstanceNotFound exceptions for V2 API floating\_ips
* Explicity teardown on error in libguestfs setup()
* Catch InstanceNotFound exceptions for V2 API deferred delete
* Replace oslo.sphinx with oslosphinx
* Change assertTrue(isinstance()) by optimal assert
* Make nova\_ipam\_lib use Network, FixedIP, and FloatingIP objects
* Make nova-network use FixedIP for timeouts
* Make nova-network use FixedIP object for updates
* Make nova-network use FixedIP object for disassociations
* Use six.moves.urllib.parse instead of urlparse
* Add "body=" argument to v3 API unit tests
* Remove unused methods
* Adds migrate server extension for V3 API
* Move policy check of start/stop to api layer
* Refactor stats to avoid bad join
* Remove @author from copyright statements
* Remove character filtering from V3 API console\_output
* DB: logging exceptions should use save\_and\_reraise
* Fix incorrect check in aggregate/az test
* xenapi: set viridian=false for linux servers
* Delete baremetal image files after deployment
* Make sure "volumeId" in req body on volume actions
* Removes console output plugin from the core list
* Using six.add\_metaclass
* Fix bad log formatting
* Remove quota classes extension from the V3 API
* Group kvm image\_meta tests for get\_disk\_bus
* Prefix private methods with \_ in docker driver
* Fix the sample and unittest params of v3 scheduler-hints
* Add a instance lookup helper to v3 plugins
* Use raw string notation for regexes in hacking checks
* Improve detection of imports in hacking check
* Renumber some nova hacking checks
* Docker cannot start a new instance because of an internal error
* libvirt: configuration element for a random number generator device
* VMware: fix instance rescue bug
* Fix run\_tests.sh lockutils when run with -d
* Adds tests to sqlachemy.api.\_retry\_on\_deadlock
* Replace detail for explanation msgs on webob exceptions
* Allow operators to customize max header size
* Prevent caller from specifying id during Migration.create()
* Prevent caller from specifying id during KeyPair.create()
* Prevent caller from specifying id during Service.create()
* Prevent caller from specifying id during ComputeNode.create()
* Clean IMAGE\_SNAPSHOT\_PENDING state on compute manager start up
* Fix trivial typo in libvirt test comment
* Refactoring metadata/base
* Removes XML namespace from V3 API test\_servers
* correct the bugs reference url in man documents
* Objectify instance\_action for cell scheduler
* Remove tox locale overrides
* libvirt: use to\_xml() in post\_live\_migration\_at\_destination
* Removes os-instance-usage-audit-log from the V3 API
* VMware: update test name
* VMware: improve unit test performance
* Fix english grammar in the quota error messages
* Removes os-simple-tenant-usage from the V3 API
* Fix a couple of unit test typos
* Add HEAD api response for test s3 server BucketHandler
* Removes XML support from security\_groups v3 API
* Hyper-V driver RDP console access support
* Make consoleauth token verification pass an Instance object
* Adds RDP console support
* Fix migrations changing the type of deleted column
* Add hpet option for time drifting
* Typo in backwards compat names for notification drivers
* Support building wheels (PEP-427)
* Fix misspellings in nova
* Disable file injection in baremetal by default
* Drop unused dump\_ SQL tables
* Convert rescue\_instance to objects
* Convert set\_admin\_password to objects
* The object\_compat decorator should come first
* Default video type to 'vga' for PowerKVM
* Sync latest db.sqlalchemy from oslo-incubator
* Guard against oversize flavor rxtx\_factor float
* Make libvirt use Flavor object instead of using VirtAPI
* Fix instance metadata tracking during resets
* Make delete\_instance\_metadata() use objects
* Break out the meat of the object hydration process
* V2 Pause: treat case when driver does not implement the operation
* VMware: fix bug for exceptions thrown in \_wait\_for\_task
* Nova Docker: Metadata service doesn't work
* nova: use RequestContextSerializer for notifications
* Fix auto instance unrescue after poll period
* Fix typos in hacking check warning numbers
* Fix exception handling miss in remote\_consoles
* Don't try to restore VM's in state ERROR
* Make it possible to disable polling for bandwidth usage
* XenAPI: Monitor the GC when coalescing
* Revert "Allow deleting instances while uuid lock is held"
* report port number for address already in use errors
* Update my mailmap
* libvirt: Adds missing tests to copy\_image
* Sync latest gettextutils from oslo-incubator
* Make change\_instance\_metadata() use objects
* Add XenAPI driver deprecation warning log message
* Adds host\_ip to hypervisor show API
* VMware: update the default 'task\_poll\_interval' time
* Fixes Hyper-V VHDX snapshot bigger than instance
* Define common "name" parameter for Nova v3 API
* Stacktrace on error from libvirt during unfilter
* Disable libvirt driver file injection by default
* Add super call to db Base class
* Fix baremetal stats type
* Fix bittorrent URL configuration option
* Fix VirtualInterfaceMacAddressException message
* Add serializer capability to fake\_notifier
* Avoid deadlock when stringifying NetworkInfo model
* Add hacking test to block cross-virt driver code usage
* Hyper-V: Change variable in debug log message
* Rename API schema modules with removing "\_schema"
* Fixed naming issue of variable in a debug statement formatting
* Use new images when spawning BM instances
* Remove get\_instance\_type and get\_active\_by\_window from nova compute API
* Make the simple\_tenant\_usage API use objects
* Add instance\_get\_active\_by\_window\_joined to InstanceList
* Update nova.conf.sample for python-keystoneclient 0.5.0
* Add ESX quality warning
* Set SCSI as the default cdrom bus for PowerKVM
* Enforce FlavorExtraSpecs Key format
* Fix scheduler\_hints parameter of v3 API
* Remove vi modelines
* VMware: Remove some unused variables
* Fix a bug in v3 API doc
* Move logging out of BDM attach method
* Add missing translation support
* libvirt: making set\_host\_enabled to be a private methods
* Remove unused variable
* Call get\_pgsql\_connection\_info from \_test\_postgresql\_opportunistically
* Port to oslo.messaging
* Sync latest config file generator from oslo-incubator
* Test guestfs without support for close\_on\_exit
* Make nova-network use FixedIP object for vif queries and bulk create
* Make nova-network use FixedIP for host and instance queries
* Make nova-network use FixedIP object for associations
* Make nova-network use FixedIP for get\_by\_address() queries
* Add FixedIP.floating\_ips dynamic property
* Add FloatingIP object implementation
* Add FixedIP Object implementation
* Deal with old versions of libguestfs
* Destroy docker container if spawn fails to set up network
* Adds suspend server extension for V3 API
* Adds pause server extension for V3 API
* Removes XML namespace definitions from V3 API plugins
* Remove XML support from migrations pci multiple\_create v3 API plugins
* Remove extra space in log message
* Allow deleting instances while uuid lock is held
* Add 'icehouse-compat' to [upgrade\_levels] compute=
* Make os-service API return correct error messages
* Make fixed\_ip\_get\_by\_address() take columns\_to\_join
* Refactor return value of fixed\_ip\_associate calls
* Make nova-network use Network object for deleting networks
* Make nova-network use Network for associations
* Make nova-network use Network object for set\_host() operation
* Make nova-network use Network object for updates
* Make nova-network use Network object for remaining "get" queries
* Make nova-network use NetworkList for remaining "all" queries
* Make nova-network use Network object for get-all-by-host query
* Make nova-network a "conductor-using service"
* Ignore 'dynamic' addr flag on bridge configuration
* Remove XML support from some v3 API plugins
* xenapi: clean up step decorator fake steps
* Use objects internally in DriverBlockDevice class
* Make snapshot\_volume\_backed use new-world objects
* Make volume\_snapshot\_{create,delete} use objects
* Move compute API is\_volume\_backed to BDM objects
* Add block device mapping objects implementation
* XenAPI: Wait for VDI on introduce
* Shelve: The snapshot should be removed when delete instance
* Revert "Allow deleting instances while uuid lock is held"
* Retry reservation commit and rollback on deadlock
* Adds lock server extension for V3 API
* Remove duplicated method in mock\_key\_mgr
* Add quality warning for non-standard libvirt configurations
* Add docker driver removal warning
* Remove V3 API XML entry points
* Remove XML support from admin\_password V3 API plugin
* Remove XML support from certificates v3 API
* Remove XML support from some v3 API plugins(e.g. services)
* Remove XML support from some extension v3 API plugins
* Remove XML support from some server v3 API plugins
* Remove XML support from quota and scheduler\_hints v3 API plugins
* Remove XML support from flavor v3 API plugins
* Revert "Fix race conditions between imagebackend and imagecache"
* Remove XML support from v3 API plugins
* Remove unused methods
* Remove trace XML from unittests
* removing xml from servers.py
* Remove xml unit tests for v3 api plugins
* Remove v3 xml API sample tests
* Adds dmcrypt utility module
* Adds ephemeral\_key\_uuid field to instance
* Error message is malformed when removing a sec group from an instance
* Do not set root device for libvirt+Xen
* Docker Set Container name to Instance ID
* Fix init of pci\_stats in resource tracker
* Catch NotImplementedError in get\_spice\_console in v2/v3 API
* Minor changes to make certificates test cases use HTTPRequestV3
* VMware: Only include connected hosts in cluster stats
* disk/api.py: refactors extends and adds missing tests
* Make nova-network use Network to create networks
* Make obj\_to\_primitive() handle netaddr types
* Add Network object
* Make service workers gracefully handle service creation race
* support stevedore >= 0.14
* Increase the default retry for iscsi connects
* Finish compacting pre-Icehouse database migrations
* Compact pre-Icehouse database migrations <= 210
* Compact pre-Icehouse database migrations <= 200
* Compact pre-Icehouse database migrations <= 190
* Fix cache lock for image not consistent
* VMware: fix image snapshot with attached volume
* Use block\_device\_info at post\_live\_migration\_at\_destination
* Update policy check on each action for certificates
* Use (# of CPUs) workers by default
* Remove policy check in db layer for aggregates
* Remove unused configurations
* VMware: fix exception when using multiple compute nodes
* Remove copyright from empty files in nova
* disk/api.py: resize2fs fails silently + adds tests
* remove 2 unused function in test\_volumes.py
* Update log message to support translations
* PCI address should be uniform
* Remove flavor-disabled related policy rules for v3 api
* Remove get\_all\_networks from nova.network.rpcapi
* Remove get\_network from nova.network.rpcapi
* Update nova.network to use DNSDomain object
* Remove some dead dnsdomain code
* Add DNSDomain object
* Add db.dnsdomain\_get\_all() method
* Update linux\_net to use VirtualInterface
* Update nova\_ipam\_lib to use VirtualInterface
* libvirt: Review of the code to use module units
* Update network.manager to use VirtualInterface
* Imported Translations from Transifex
* Updated from global requirements
* Define "supported\_instances" for fake compute
* Remove get\_vif\_by\_mac\_address from network rpcapi
* Remove unused method from network rpcapi
* Allow delete when InstanceInfoCache entry is missing
* libvirt: Fix root disk leak in live mig
* Additional check for qemu-nbd hang
* Correct host managers free disk calculation
* Correct the state for PAUSED instances on reboot
* XenAPI: Use get\_VALUE in preference to get\_record()['VALUE']
* XenAPI: Speedup get\_vhd\_parent\_uuid
* XenAPI: Report the CPU details correctly
* XenAPI: Tidy calls to get\_all\_ref\_and\_rec
* XenAPI: get\_info was very expensive
* Fix bug with not implemented virConnect.registerCloseCallback
* Make test\_poll\_volume\_usage\_with\_data more reliable
* Re-write sqlite BigInteger mapping test
* Small edits on help strings
* Make floating\_ip\_bulk\_destroy deallocate quota if not auto\_assigned
* Sync processutils from oslo-incubator
* Create common method for MTU treatment
* Move fake\_network config option to linux\_net
* libvirt: move unnecesary comment
* Sync log.py from oslo-incubator
* hyperv: Retry after WMI query fails to find dev
* vmwareapi:remove unused variables in volumeops
* Fix docstring in libvirt.driver.LibvirtDriver.get\_instance\_disk\_info()
* Hide VIR\_CONNECT\_BASELINE\_CPU\_EXPAND\_FEATURES where needed
* Make test\_different\_fname\_concurrency less racy
* VMware: improve exception logging in driver.py

2014.1.b2
---------

* Add instance faults during live\_migrate errors
* VMware: use .get() to access 'summary.accessible'
* Nova Docker driver must remove network namespace
* Added a new scheduler filter for metrics
* Sync module units from oslo
* Join pci\_devices for servers API
* VMware: fix missing datastore regex with ESX driver
* Fix the flavor\_ref type of unit tests
* Sync unhandled exception logging change from Oslo
* Fix race conditions between imagebackend and imagecache
* Add explicit discussion of dependencies to README.rst
* Add host and details column to instance\_actions\_events table
* Join pci\_devices when getting all servers in API
* Add sort() method to ObjectListBase
* Add VirtualInterface object
* VMware: Fix incorrect comment indentation
* vmwareapi: simple refactor of config drive tests
* Fix multi availability zone issue part 2
* Make exception message more friendly
* disable debug in eventlet.wsgi server
* Alphabetize core list for V3 API plugins
* Ensure MTU is set when the OVS vif driver is used
* remove redundant \_\_init\_\_() overwriting when getting ExtensionResources
* Fix bug for neutron network-name
* Fix rbd backend not working for none admin ceph user
* Set objects indirection API in network service
* Use oslo.rootwrap library instead of local copy
* Remove admin auth when getting the list of Neutron API extensions
* Fix the test parameter order for v3 evacuate test
* Add API schema for v3 evacuate API
* Remove unused code
* Take a vm out of SNAPSHOTTING after Glance error
* Corrected typo in metrics
* libvirt: handle exception while get vcpu info
* Fixed incorrect test case of test\_server\_metadata.py
* Add API schema for v3 rescue API
* Support preserve\_ephemeral in baremetal
* Show bm deploy how to preserve ephemeral content
* Add preserve\_ephemeral option to rebuild
* Fix string formatting of exception.NoUniqueMatch message
* docstring fix
* xenapi: stop server destroy on live\_migrate errors
* Ensure that exception raised in neutron are handled correctly
* Fix updating device names when defaulting
* libvirt: Fix confusing use of mox.StubOutWithMock
* Sync request\_id middleware for nova
* Calculate default security group into quota usage
* Allow run\_image\_cache\_manager\_pass to hit db slave
* Consolidate the blockdev related filters
* VMware: upload images to temporary directory
* Refactor CIDR field to use netaddr.IPNetwork
* Make nova-network use Instance objects
* Make nova-network use Service object
* Allow \_check\_instance\_build\_time to hit db slave
* Set objects indirection API in metadata service
* libvirt: Configuration element for sVirt support
* VMware: unnecessary session reconnection
* Add API schema for v3 multinic API
* API schema for v3 console\_output API
* Workers verification for WSGI service
* Remove unused dict BYTE\_MULTIPLIERS
* Optimize libvirt live migration workflow at source
* libvirt, fix test tpool\_execute\_calls\_libvirt
* Using staticmethod to mock LibvirtDriver.\_supports\_direct\_io
* Use the mangle checksum fill rule regardless to the multi\_host
* Enabled Libvirt driver to read 'os\_command\_line' from image properties
* Update nova.conf.sample
* Capture exception for JSON load in virt.storage\_users
* Ensure that headers are utf8, not unicode
* Attribute snapshot not defined in libvirt/config.py
* ec2 api should check 'max\_count'&'min\_count' para
* nova docker driver cannot find cgroup in /proc/mounts on RHEL
* VMware: fix rescue with disks are not hot-addable
* VMware: bug fix for VM rescue when config drive is configured
* Define common API parameter types
* Fixed a problem in iSCSI multipath
* Fix unhandled InvalidServerState exceptions in server start/stop
* Cells rebuild regression fix
* Fix potential fd leak
* Rename instance\_type to flavor in libvirt virt driver tests
* Rename instance\_type to flavor in vmware virt driver tests
* Improve error message in services API
* Make image props filter handle old vm\_modes
* XenAPI: Use direct IO for writing config drive
* Avoid unnecessary use of rootwrap for some network commands
* Remove unused copyright from nova.api.\_\_init\_\_
* replace type() to isinstance() in nova
* Make availability\_zone optional in create for aggregates
* libvirt: Fix infinite loop waiting for block job
* baremetal: stop deployment if block devices are not available
* Cleanup 'deleting' instances on restart
* Ignore duplicate delete requests
* Let drivers override default rebuild() behaviour
* Enable compute\_node\_update to tolerate deadlocks
* xenapi: resize up ephemeral disks
* xenapi: refactor generate\_ephemeral
* xenapi: refactor resize\_up\_root\_vdi
* Abstract add\_timestamp out of ComputeDriverCPUMonitor class
* Revert "Whitelist external netaddr requirement"
* The private method \_text\_node should be used as function
* Add finer granularity to host aggregate APIs
* Remove unused import
* Adds new method nova.utils.get\_hash\_str
* Make nova/quota use keypair objects
* VMware: update test file names
* Ensure instance action event list in order
* Docker Driver doesn't respect CPU limit
* libvirt: stop overwriting LibvirtConfigCPU in get\_host\_capabilities
* Cleanup the flake8 section of tox.ini
* Use the full string for localisation
* Don't deallocate/reallocate networks on reschedules
* Cleanup object usage in the rebuild path
* Fix test case with wrong parameter in test\_quota\_classes
* Remove unused variables in imagebackend.py
* Remove unused code in test\_attach\_interfaces.py
* Whitelist external netaddr requirement
* Better exception handling for deletes during build
* Translate the snapshot\_pending state for old instances
* Prevent Instance.refresh() from returning a new info cache
* Extends V3 os-hypervisor api for pci support
* Sync config generator from oslo-incubator
* Imported Translations from Transifex
* Remove uneeded dhcp\_opts initialization
* Update class/function name for test\_extended\_availability\_zone.py
* Allow deleting instances while uuid lock is held
* xenapi: add support for vcpu\_pin\_set
* xenapi: more info when assert\_can\_migrate fails
* fix ips to 'ips' in APIRouter
* Hyper-V:Preserve config drive image after the instance is resized
* fix log message in APIRouter
* VMware: use session.call\_method to invoke api's
* Rename instance\_type to flavor in hyper-v virt driver
* Rename instance\_type to flavor in xenapi virt driver
* Compact pre-Icehouse database migrations <= 180
* Change when exists notification is sent for rescue
* Revert change of default FS from ext3 to etx4
* Convert nova.compute.manager's \_spawn to objects
* Add alias as prefix for flavor\_rxtx v3
* Remove unused code in nova/api/ec2/\_\_init\_\_.py
* Remove unused import
* VMware: improve connection issue diagnostic
* Fixes messages logged on Glance plugin retries
* Aggregate: Hosts isolation based on image properties
* Fix for qemu-nbd hang
* Return policy error, not generic error
* Fix lxc rootfs attached two devices in some action
* Removes disk-config extension from v3 api
* Fix typo'ed deprecated flag names in libvirt.imagebackend
* Disable libguestfs' default atexit handlers
* Add API schema for v3 extended\_volumes API
* Catch InstanceIsLocked exception on server actions
* Fix inconsistent "image" value on \_get\_image()
* Add API schema for v3 keypairs API
* Add API schema for v3 flavor\_access API
* Add API schema for v3 agents API
* Add API schema for v3 admin\_password API
* Adds a PREPARED state after baremetal node power on
* Make scheduler rpcapi use object serializer
* Update log message when remove pci device
* Add unit test for ListOfStrings field in object models
* Sync oslo db.sqlalchemy.utils to nova
* Remove duplicated test
* Fixing availability-zone not take effect error
* Fix image cache periodic task concurrent access bug
* Fix interprocess locks for run\_tests.sh
* lxc: Fix a bug of baselineCPU parse failure
* platform independence for test\_virt unit tests
* Imagecache: fix docstring
* libvirt: Set "Disabled Reason" to None when enable nova compute
* Change log from ERROR to WARNING when instance absent
* VMware: clean up unnecessary help message of options
* Don't use deprecated module commands
* Add apache2 license header to appropriate files for enabling H102
* XenAPI: Allow use of clone\_vdi on all SR types
* Remove unused variables in test\_conductor.py
* Do not use contextlib.nested if only mock one function
* Remove update\_service\_capabilities from nova
* Adds user\_data extension to nova.api.v3.extensions
* Add wsgiref to requirements.txt
* pass the empty body into the controller
* Imported Translations from Transifex
* Revert recent change to ComputeNode
* sync oslo service to fix SIGHUP handling
* Fix parameter checking about quota update api
* Spelling fix resouce=>resource
* Change default ephemeral FS to ext4
* When inject admin password, no need to generate temp file
* Make \_change\_index\_columns use existing utility methods
* Fix interprocess locks when running unit-tests
* Cleanup object usage in the delete path
* Change RPC post\_live\_migration\_at\_destination from call to cast
* Pass rbd\_user id and conf path as part of RBD URI for qemu-img
* Allow some instance polling periodic tasks to hit db slave
* Sync timeutils from oslo-incubator
* Catch NotImplementedError for vnc in the api
* List NotImplementedError as a client exception for vnc
* remove vmwareapi.vmops.get\_console\_output()
* Object-ify build\_and\_run\_instance
* Retry on deadlock in instance\_metadata\_update
* use 'os\_type' in ephemeral filename only if mkfs defined
* ValueError should use '%' instead of ','
* Setting the xen vm device id on vm record
* Rename instance\_type to flavor in nova.utils and nova.compute.utils
* Rename instance\_type to flavor in nova.cloudpipe
* Serialize instance object while building request\_spec
* Make rebuild use Instance objects
* Remove deprecated config aliases
* Changed error message to match usage
* Add configurable 120s timeout ovs-vsctl calls
* Clarify rebuild\_instance's recreate parameter
* Clean swap\_volume rollback, on libvirt exception
* Image cache: move all of the variables to a common place
* baremetal: set capabilites explicitly
* Remove docker's unsupported capabilities
* Set a sane default for state\_path
* Fix incorrect exception on os-migrateLive
* barematal: Cleanup the calls to assertEqual
* Refactor time conversion helper function for objects in db api
* Fixes ConfigDrive bug on Windows
* Remove smoketests
* Revert graceful shutdown patch
* Handle InstanceUserDataMalformed in create server v2 api
* Enable remote debugging for nova
* Fix race in unit tests, which can cause gate job to fail
* Add boolean convertor to cells sync\_instances API
* Initialize iptables rules on initialization of MetadataManager
* vmwareapi: raise on get\_console\_output
* hyperv: remove get\_console\_output method
* List NotImplementedError as client exception
* api: handle NotImplementedError for console output
* Make Serializer/Conductor able to backlevel objects
* Make ec2 use Flavor object
* Move restore and rebuild operations to Flavor objects
* Add flavor access methods to Instance object
* Rename instance\_type to flavor in nova.network tree
* Stop, Rescue, and Delete should give guest a chance to shutdown
* Remove middleware ratelimits from v3 api
* Remove unused variables in neutron api interface and neutron tests
* Remove unneeded call to conductor in network interface
* Return client tokens in EC2 DescribeInstances
* Require List objects to be able to backlevel their contents
* Make Instance object compatible with older compute nodes
* Deprecate/remove scheduler select\_hosts()
* Pass Instance object to console output virt driver api
* Send Instance object to validate\_console\_port
* Pass Instance object to compute vnc rpc api
* Update vnc virt driver api to take Instance object
* Add error as not-in-progress migration status
* Don't replace instance.info\_cache on each save
* Add boolean convertors for migrate\_live API
* VMWare: bug fix for Vim exception handling
* XenAPI: Synchronize on all VBD plug/unplug per VM
* Add IPAddress field type in object models
* Fixes errors on start/stop unittest
* Use a dictionary to eliminate the inner loop in \_choose\_host\_filters()
* Correct uses of :params in docstrings
* Delete iSCSI devices after volume detached
* Prevent spoofing instance\_id from neutron to nova
* Replaces call to lvs with blockdev
* Refactor PXE DHCP Option support
* Normalize the weights instead of using raw values
* Compact pre-Icehouse database migrations <= 170
* XenAPI: Speedup host\_ref cannot change - get it once
* Updated from global requirements
* Rename instance\_type to flavor in test\_utils and nova.tests.utils
* Rename instance\_type to flavor in baremetal virt driver
* VMware: fix bug when more than one datacenter exists
* Sync oslo lockutils for "fix lockutils.lock() to make it thread-safe"
* Move calls to os.path.exists() in libvirt imagebackend
* Ensure api\_paste\_conf is an absolute path
* Log exception in \_heal\_instance\_info\_cache
* Raise better exception if duplicate security groups
* Remove the largely obsolete basepath helper
* libvirt: Custom disk\_bus setting is being lost on hard\_reboot
* Libvirt: Making the video driver element configurable
* Give migrations tests more time to run
* Remove the api\_thread\_pool option from libvirt driver
* baremetal: volume driver refactoring and tests
* Sync middleware audit, base, and notifier from oslo
* Get test\_openAuth\_can\_refuse\_None\_uri to cleanup after itself
* Hide injected\_file related quotas for V3 API
* Make obj\_from\_primitive() preserve version information
* Cells: check states on resize/rebuild updates
* Make flavor\_access extension use Flavor object
* libvirt: add a test to guard against set\_host\_enabled raising an error
* Fix UnboundLocalError in libvirt.driver.\_close\_callback
* Quota violations should not cause a stacktrace in the logs
* Enforce permissions in snapshots temporary dir
* Sync rpc fix from oslo-incubator
* Fix changes-since filter for list-servers API
* Make it possible to override test timeout value
* Imported Translations from Transifex
* libvirt: consider minimal I/O size when selecting cache type
* Setup destination disk from virt\_disk\_size
* Add Flavor object
* Add atomic flavor access creation
* Add extra\_resources field to compute\_nodes table
* Recommend the right call instead of datetime.now()
* libvirt: remove unused imports from fake libvirt utils
* VMware: fix disk extend bug when no space on datastore
* Fix monkey\_patch docstring bug
* Change unit test for availability\_zones.reset\_cache
* Make compute support monitors and store metrics
* Added a new scheduler metrics weight plugin
* LXC: Image device should be reset in mount() and teardown()
* Add shutdown option to cleanup running periodic
* xenapi: Update VM memory overhead estimation
* Misc typos in nova
* Add default arguments for Connection class
* Update Instance from database after destroy
* Libvirt: Adding video device to instances
* Configuration element for describing video drivers
* Don't log stacktrace for UnexpectedTaskStateError
* Extends V3 servers api for pci support

2014.1.b1
---------

* LOG.warn() and LOG.error() should support translation
* Minor change for typo from patch 80b11279b
* network\_device\_mtu should be IntOpt
* Fix HTTP response code for network APIs and improve error message
* Use password masking utility provided in Oslo
* Sync log.py from Oslo-incubator
* xenapi: stop hang during glance download
* Clean up test cases for compute.manager.\_check\_instance\_build\_time
* Recover from IMAGE-\* state on compute manager start-up
* Document when config options were deprecated
* VMware: Fix unhandled session failure issues
* Use utils method when getting instance metadata and system metadata
* Add status mapping for shutoff instance when resize
* Fix docstring on SnapshotController
* Fix trivial typo 'descirption'
* Compact pre-Icehouse database migrations <= 160
* Compact pre-Icehouse database migrations <= 150
* Compact pre-Icehouse database migrations <= 140
* Remove redundant body validation for createBackup
* Change evacuate test hostnames to preferable ones
* Change conductor live migrate task to use select\_destinations()
* Ensure proper notifications are sent when build finishes
* Periodic task \_heal\_instance\_info\_cache can now use slave db
* docker: access system\_metadata as a dict
* Don't overwrite marker when checking if it exists
* There is no need to set VM status to ERROR on a failed migration
* DB migration 209: Clean up child rows as well
* Cleanup ec2/metadata/osapi address/port listen config option help
* Recover from build state on compute manager start-up
* Comply with new hacking 0.8 release
* Correct network\_device\_mtu help string
* Remove last of AssertEquals
* Fix Neutron Authentication for Metadata Service
* Update help for osapi\_compute\_listen\_port
* libvirt: host update disable/enable report HTTP 400
* Catch InstanceIsLocked exception on server actions
* VMware: enable driver to work with postgres database
* Make test\_evacuate from compute API DRYer
* Fix testcase config option imports
* Fix "in" comparisons with one element tuples
* Remove \_security\_group\_chain\_name from nova/virt/firewall.py
* Remove duplicate setting of os\_type in libvirt config builder
* Fix logic in LibvirtConnTestCase.\_check\_xml\_and\_uri
* Remove unused flag 'host\_state\_interval'
* Make object compat work with more positional args
* Fix LibvirtGenericVIFDriver.get\_config() for quota
* Fix a tiny double quote matching in field obj model
* Move flags in libvirt's volume to the libvirt group
* Check Neutron port quota during validate\_networks in API
* Failure during termination should always leave state as Error(Deleting)
* Remove duplicate FlavorNotFound exception handling in server create API
* Make check more pythonic
* Make sure report\_interval is less than service\_down\_time
* Set is\_public to False by default for volume backed snapshots
* Delete instance faults when deleting instance
* Pass Instance object to spice compute rpc api
* Pass Instance object to get\_spice\_console virt api
* Remove update\_service\_capabilities from scheduler rpc api
* Remove SchedulerDependentManager
* powervm: remove powervm virt driver from nova
* libvirt: Provide a port field for GlusterFS network disks
* Add API input validation framework
* Remove duplicate BuildAbortException block
* Remove compute 2.x rpc api
* Add v3 of compute rpc API
* Fix incorrect argument position in DbQuotaDriver
* Change ConductorManager to self.db when record cold\_migrate event
* instance state will be stuck in unshelving when unshelve fails
* Fix some i18n issue in nova/compute/manager.py
* Don't gate on E125
* Supplement 'os-migrateLive' in actions list
* Corrected typo in host\_manager
* Fix a lazy-load exception in security\_group\_update()
* fakevirt: return hypervisor\_version as an int instead of a string
* Bump to sqlalchemy-migrate 0.8.2
* ComputeFilter shouldn't generate a warning for disabled hosts
* Remove cert 1.X rpc api
* Add V2 rpc api for cert
* Remove console 1.X rpc api
* Do not hide exception in update\_instance\_cache\_with\_nw\_info
* Wrong handling of Instance expected\_task\_state
* XenAPI: Fix caching of images
* Extend LibvirtConfigGuest to parse guest cpu element info
* Rename instance\_type parameter in migrate\_disk\_and\_power\_off to flavor
* convert min\_count and max\_count to type int in nova v3 api
* Add decorator expected\_errors for flavors\_extraspecs v3
* Remove nullable=True in models.py which is set by default
* baremetal: Make api validate mac address
* Use 204 instead of 202 for delete of keypairs v3
* Fix log message format issue for api
* Remove "set()" from CoreAPIMissing exception
* Move flag in libvirt's vif to the libvirt group
* Move flag in libvirt's utils to the libvirt group
* Move flags in libvirt's imagebackend to the libvirt group
* Extend the scheduler HostState for metrics from compute\_node
* docker: return hypervisor\_version as an int rather than a string
* Sync Log Levels from OSLO
* Removes check CONF.dhcp\_options\_enabled from nova
* Improved debug ability for log message of cold migration
* Adjust the order of notification for shelve instance
* Add FloatField for objects
* XenAPI: Fix config section usage
* Fix performance of Server List with Neutron for Admins
* Add context as parameter for two libvirt APIs
* Add context as parameter for resume
* xenapi: move session into new client module
* xenapi: stop key\_init timeout failing set password
* xenapi: workaround vbd.plug race
* Address infinite loop in nova compute when getting network info
* Use of logging in native thread causes deadlock connecting to libvirtd
* Add v3 api samples for shelve
* Imported Translations from Transifex
* libvirt: Fix log message when disable/enable a host
* Fix missing format specifier in ImagePropertiesFilter log message
* Sync the DB2 communication error code change from olso
* baremetal: refactor out powervm dependency
* handle migration errors
* Make compute manager \_init\_instance use native objects
* Fix for reading the xenapi\_device\_id from image metadata
* Check if reboot request type is None
* Use model\_query() instead of session.query in db.instance\_destroy
* Fix up spelling mistake
* Periodic task \_poll\_unconfirmed\_resizes can now use slave db
* Include image block device maps in info
* Sync local from oslo
* objects: declare some methods as static
* Handle UnicodeEncodeError in validate\_integer
* Remove traces of V3 personality extension from api samples
* Removes os-personalities extension from the V3 API
* VMware: add support for VM diagnostics
* Remove useless api sample template files for flavor-rxtx v3
* Fix libvirt evacuate instance on shared storage fails
* Fixes get\_vm\_storage\_paths issue for Hyper-V V2 API
* Clean up how test env variables are parsed
* Add missing argument max\_size in libvirt driver
* VMware: Always upload a snapshot as a preallocated disk
* Fix empty selector XML bug
* Libvirt:Instance resize confirm issue against NFS
* Add V2 rpc api for console
* Fix sample parameter of agent API
* VMware: fix snapshot failure when host in maintenance mode
* Clean up unused variables
* Add a driver method to toggle instance booting
* Fix cells instance\_create extra kwarg
* handle empty network info in instance cache
* Remove deprecated instance\_type alias from nova-manage
* Remove V2 API version of coverage extensions
* Remove V3 API version of coverage extension
* Update openstack/common/periodic\_task
* Use 201 instead of 200 for action create of flavor-manage v3
* Enforce metadata string type on key/value pairs
* Fixes RequestContext initialization failure
* Move flags in libvirt's imagecache to the libvirt group
* Move base\_dir\_name option to somewhere more central
* Move some libvirt specific flags into a group
* Removed unused instance object helper function
* Update openstack/common/lockutils
* Rename InstanceType exceptions to Flavor
* Added monitor (e.g. CPU) to monitor and collect data
* Conditionalise automatic enabling of disabled host
* Users with admin role in Nova should not re-auth with Neutron
* Use 400 instead of 422 for invalid input in v3 servers core
* Fix limits v3 follow API v3 rules
* Remove used\_limits extension from the V3 API
* Remove reduntant call to update\_instance\_info\_cache
* Add flavor-extra-specs to core for V3 API
* Add flavor-access to core for V3 API
* Remove unused libvirt\_ovs\_bridge flag
* Fix AttributeError(s) from get\_v4/6\_ips\_by\_interface
* Raising exception for invalid floating\_ip's ID
* libvirt: Allow delete to complete when a volume disconnect fails
* replace assertNotEquals with assertNotEqual
* Add V3 api samples for access\_ips
* Add v3 api samples for scheduler-hints
* Add v3 api samples for availability\_zone
* Add V3 API sample for server's actions
* Cache Neutron Client for Admin Scenarios
* More instance\_type -> flavor renames in db.api
* Cache compute node info in Hypervisor api
* Reverse the quota reservation in revert\_resize
* Rename virtapi.instance\_type\_get to flavor\_get
* Xenapi: Allow windows builds with xentools 6.1 and 6.2
* Make baremetal support metadata for ephemeral block-device-mapping
* Make baremetal\_deploy\_helper understand ephemeral disks
* Removed unused methods from db.api
* Fix type mismatch errors in NetworkTestCase
* VMware: Detach volume should not delete vmdk
* xenapi: Fix agent update message format
* xenapi: Fix regression issue in agent update
* Shrink the exception handling range
* Moved quota headroom calculations into quota\_reserve
* Remove dup of LibvirtISCSIVolumeDriver in LibvirtISERVolumeDriver
* Replace assertEquals with assertEqual - tests/etc
* libvirt: pass instance to a log() call in the standard way
* xenapi: Move settings to their own config section
* domainEventRegisterAny called too often
* Allow configuring the wsgi pool size
* driver tests (loose ends): replace assertEquals with assertEqual
* baremetal: replace assertEquals with assertEqual
* image tests: replace assertEquals with assertEqual
* virt root tests: replace assertEquals with assertEqual
* Remove unnecessary steps for cold snapshots
* baremetal: Make volume driver use a correct source device
* Update quota-class-set/quota-set throw 500 error
* Add log\_handler to implement the publish\_errors config option
* Imported Translations from Transifex
* Enable non-ascii characters in flavor names
* Move docker specific options into a group
* Check return code of command instead of checking stderr
* Added tests for get\_disk\_bus\_for\_disk\_dev function
* Checking existence of index before dropping
* add hints to api\_samples documentation
* xenapi: check for IP address in live migration pre check
* Remove live\_snapshot plumbing
* Remove unused local variable in test\_compute
* Make v3 admin\_password parameters consistent
* Flavor name should not contain only white spaces
* fix a typo error in test\_libvirt\_vif.py
* Remove unused local variables in test case
* Rename \_get\_vm\_state to \_get\_vm\_status
* Ensure deleted instances' status is always DELETED
* Let resource\_tracker report right migration status
* Imported Translations from Transifex
* nit: fix indentation
* Always pass context to compute driver destroy()
* Imported Translations from Transifex
* db tests: replace assertEquals with assertEqual
* compute tests: replace assertEquals with assertEqual
* Catch exception while building due to instance being deleted
* Refactor UnexpectedTaskStateError for handling of deleting instances
* Parted 'invalid option' in XenAPI driver
* Specify DB URL on command-line for schema\_diff.py
* Fix \`NoopQuotaDriver.get\_(project|user)\_quotas\` format
* Send delete.end with latest instance state
* Add missing fields in DriverBlockDevice
* Fix the boto version comparison
* Add test for class InsertFromSelect
* Process image BDM earlier to avoid duplicates
* Clean BDM when snapshoting volume-backed instances
* Remove superflous 'instances' joinedload
* Fix OLE error for HyperV
* Make the vmware pause/unpause unit tests actually test something
* Fixes the destroy() method for the Docker virt driver
* xenapi: converting XenAPIVolumeTestCase to NoDB
* Move \`diff\_dict\` to compute API
* Add compatibility for InstanceMetadata and primitives
* Issue brctl/delif only if the bridge exists
* ensure we don't boot oversized images
* Add V3 API samples for config-drive
* Remove duplicated test
* Add notification for host operation
* Sync log from oslo
* Replace assertEquals with assertEqual - tests/scheduler
* Make non-admin users can unshelve a server
* Fix interface-attach removes existing interfaces from db
* Correct exception handling
* Utilizes assertIsNone and assertIsNotNone - tests/etc
* Use elevated context in resource\_tracker.instance\_claim
* Add updates and notifications to build\_and\_run\_instance
* Add network handling to build\_and\_run\_instance
* Make unshelve use new style BDM
* Make \_get\_instance\_nw\_info() use Instance object
* Convert evacuation code to use objects
* Deprecate two security\_group-related methods from conductor
* Make metadata server use objects for Instance and Security Groups
* Replace assertEquals with assertEqual - tests/api
* Remove security\_group-related methods from VirtAPI
* Make virt/firewall use objects for Security Groups and Rules
* Drop auth\_token configs for api-paste.ini
* Add auth\_token settings to nova.conf.sample
* Use \_get\_server\_admin\_password()
* Pass volume\_api to get\_encryption\_metadata
* Comments for db.api.compute\_node\_\*() methods
* Fix migration 185 to work with old fkey names
* Adds V3 API samples for user-data
* Enforce compute:update policy in V3 API
* tenant\_id implies all\_tenants for servers list in V3 API
* Move get\_all\_tenants policy enforcement to API
* all\_tenants=0 should not return instances from all tenants
* Utilizes assertIsNone and assertIsNotNone - tests/virt
* xenapi: workaround for failing vbd detach
* xenapi: strip base\_mirror after live-migrate
* xenapi: refactor get\_all\_vdis\_in\_sr
* Remove unused expected\_sub\_attrs
* Remove useless variable from libvirt/driver.py
* Add a metadata type validation when creating vm
* Update schema\_diff.py to use 'postgresql' URLs
* Disable nova-compute on libvirt connectivity exceptions
* Make InstanceInfoCache load base attributes
* Add SecurityGroupRule object
* Add ephemeral\_mb record to bm\_nodes
* Stylistic improvement of models.ComputeNodeStat
* clean up numeric expressions in tests
* replaced e.message with unicode(e)
* Add DeleteFromSelect to avoid database's limit
* Imported Translations from Transifex
* Utilizes assertIsNone and assertIsNotNone - tests/api
* Include name/level in unit test log messages
* Remove instance\_type\* proxy methods from nova.db.api
* Add InstanceList.get\_by\_security\_group()
* Make security\_group\_rule\_get\_by\_security\_group() honor columns
* Claim IPv6 is unsupported if no interface with IPv6 configured
* Pass thru credentials to allow re-authentication
* network tests: replace assertEquals with assertEqual
* Nova-all: Replace basestring by six for python3 compatability
* clean up numeric expressions with byte constants
* Adds upper bound checking for flavor create parameters
* Remove fake\_vm\_ref from test\_vmwareapi.py
* xen tests: replace assertEquals with assertEqual
* Fix tests to work with mysql+postgres concurrently
* Enable extension access\_ips for v3 API
* Correct update extension point's check\_func for v3 server's controller
* Updates the documentation for nova unit tests
* Remove consoleauth 1.X rpc api
* consoleauth: retain havana rpc client compat
* Pull system\_metadata for notifications on instance.save()
* Allow \_sync\_power\_states periodic task to hit slave DB
* Fix power manager hangs while executing ipmitool
* Update my mailmap
* Stored metrics into compute\_nodes as a json dictionary
* Bad except clauses order causes wrong text in http response
* Add nova.db.migration.db\_initial\_version()
* Fix consoleauth check\_token for rpcapi v2
* Nova db/api.py docstring cleanups..
* Adds XML namespace example for disk config extension
* Remove multipath mapping device descriptor
* VMware: fix VM resize bug
* VMware: fix bug for reporting instance UUID's
* Remove extra space in tox.ini
* Fix migrate w/ cells
* Add tests for compute (child) cell
* Call baselineCPU for full feature list
* Change testing of same flavor resize
* Fix bad typo in cloudpipe.py
* Fix compute\_api tests for migrate
* Replace basestring by six for python3 compatability
* Add flavor-manage to core for V3 API
* Refactor unit tests code for python3 compatability
* Remove duplicates from exceptions list
* Apply six for metaclass
* Add byte unit constants
* Add block device handling to build\_and\_run\_instance
* Reply with a meaningful exception, when libvirt connection is broken
* Fix getting nwinfo for Instance obj
* Make cells info\_cache updates more tolerant
* Raise an error if module import fails
* Drop RPC securemessage.py and crypto module
* Remove deprecated libvirt VIF driver code
* nova.exception does not have a ProcessExecutionError
* Fix setting backdoor port in service start
* Sync lockutils from oslo
* Fix wrong description when updating quotas
* Expose additional status in baremetal API extension
* migrate server doesn't raise correct exception
* Make security\_group\_get() more flexible about joins
* Make Object FieldType take an object name instead of a class
* Hyper-v: Change the hyper-v error log for debug when resize failed
* Adds V3 API samples for the disk-config extension
* Utilizes assertIn - tests/etc
* Fix all scripts to honor the enabled\_ssl\_apis flag
* Updated from global requirements
* Fix i18n issue for nova/compute/manager.py
* Change tab to blank space in hypervisors-detail-resp
* Fixing ephemeral disk creation
* Merging two mkfs commands
* xenapi: ephemeral disk partition should fill disk
* Fix the ConsolesController class doc string
* xenapi: Speeding up the easy cases of test\_xenapi
* xenapi: Speeding up more tests by switching to NoDB
* Remove .pyc files before generating sample conf
* xenapi: migrate multiple ephemeral disks
* Fail quickly if file injection for boot volume
* Add obj\_make\_compatible()
* Updated from global requirements
* Make cells 'flavorid' for resizes
* Fixes unicode issue in the Hyper-V driver
* Add missing ' to extra\_specs debug message
* VMware: Fix ValueError unsupported format character in log message
* graceful-shutdown: add graceful shutdown into compute
* remove unused network module from certificates api extension
* Sync fixture module from oslo
* Fixes Invalid tag name error when using k:v tagname
* Fix tests for migration 227 to check sqlite
* Adds V3 API samples for console output
* Add V2 rpc api for consoleauth
* Update version aliases for rpc version control
* Improve object instantiation syntax in some tests
* A nicer calling convention for object instantiation
* Updates OpenStack Style Commandments link
* Updated from global requirements
* Adding support for multiple hypervisor versions
* Manage None value for the 'os\_type' property
* Add CIDR field type
* Validate parameters of agent API
* Adding Read-Only volume attaching support to Nova
* Update timeutils.py from oslo
* Fix docstring related to create\_backup API
* powervm tests: replace assertEquals with assertEqual
* Add V3 API sample for admin-password
* Remove duplicated test cases
* Add extension access\_ips for v3 API
* Ensure migration 209 works with NULL fkey values
* Cells: Fix instance deletes
* Uses oslo.imageutils
* Add testr concurrency option for run\_tests.sh
* Fix the image name of a shelved server
* xenapi: test\_driver should use NoDBTestCase
* xenapi: Speedup vm\_util and vmops tests
* xenapi: speedup test\_wait\_for\_instance\_to\_start
* Remove xenapi rpm building code
* Fixes datastore selection bug
* Fixes Hyper-V snapshot spawning issue
* Make SecurityGroup receive context
* Fix DB API mismatch with sqlalchemy API
* Remove aggregate metadata methods from conductor and virtapi
* Make XenAPI use Aggregate object
* libvirt: add missing i18n support
* Adds V3 API samples for attach-interfaces
* Make aggregate methods use new-world objects
* Add missing key attribute to AggregateList.get\_by\_host()
* Fix i18n issue for nova/virt/baremetal/virtual\_power\_driver.py
* Fix scheduler rpcapi deprecated method comment
* Send notifications on keypair create/delete
* Use \`versionutils.is\_compatible\` for Dom0 plugin
* Use \`versionutils.is\_compatible\` for Nova Objects
* Improve logging messages in libvirt driver
* xenapi: stop agent errors stopping build
* Fix NovaObject versioning attribute usage
* xenapi: removes sleep after final upload retry
* xenapi: stop using get\_all\_vdis\_in\_sr in spawn
* populate local-ipv4 address in config drive
* Harden version checking for boto
* Handle MarkerNotFound better in Flavor API
* Sanitize passwords when logging payload in wsgi
* Remove unnecessary "LOG.error()" statement
* xenapi: simplify \_migrate\_disk\_resizing\_up
* xenapi: revert on \_migrate\_disk\_resizing\_up error
* xenapi: make \_migrate\_disk\_resizing\_up use @step
* libvirt tests: replace assertEquals with assertEqual
* Use the oslo fixture module
* Port server actions unittests to V3 API Part 2
* Remove unused method \_get\_res\_pool\_ref from VMware
* Imported Translations from Transifex
* Check for None when cleaning PCI dev usage
* Fix vmwareapi driver get\_diagnostics calls
* Remove instance\_info\_cache\_update() from conductor
* compute api should throw exception if soft reboot invalid state VM
* Make a note about Object deepcopy helper
* Avoid caching quota.QUOTAS in Quotas object
* Remove transitional callable field interface
* Make the base object infrastructure use Fields
* Migrate some tests that were using callable fields
* Migrate NovaPersistentObject and ObjectListBase to Fields
* Migrate Instance object to Fields
* Utilizes assertIn - tests/api/etc
* Utilizes assertIn - tests/virt
* Utilizes assertIn - tests/api/contrib
* Utilizes assertIn - tests/api/v3
* Make scheduler disk\_filter take swap into account
* Add variable to expand for format string
* Make quota sets update type handling a bit safer
* Add test\_instance\_get\_active\_by\_window\_joined
* Fixes error on live-migration of volume-backed vm
* Migrate PciDevice object to Fields
* Migrate InstanceInfoCache object to Fields
* Migrate InstanceFault object to Fields
* Migrate Service object to Fields
* Migrate ComputeNode object to Fields
* Migrate Quotas object to Fields
* Migrate InstanceGroup object to Fields
* Migrate InstanceAction and InstanceActionEvent objects to Fields
* Move exception definitions out of db api
* Remove unused scheduler rpcapi from compute api
* Libvirt: disallow live-mig for volume-backed with local disk
* xeanpi: pass network\_info to generate\_configdrive
* Replace incorrect Null checking to return correctly
* Fix nova DB 215 migration script logic error
* Xenapi: set hostname when performing a network reset
* Fix "resource" length in project\_user\_quotas table
* Migrate SecurityGroup object to Fields
* Migrate Migration object to Fields
* VMware: fix regression attaching iscsi cinder volumes
* Remove whitespace from cfg options
* cleanup after boto 2.14 fix
* Add boto special casing for param changes in 2.14
* xenapi: simplify PV vs HVM selection logic
* fix missing host when unshelving
* Fix a typo of tabstop
* Fix error message of os-cells sync\_instances api
* Log which filter failed when on log level INFO
* Migrate KeyPair object to Fields
* Migrate Aggregate object to Fields
* Make field object support transitional call-based interface
* Add Field model and tests
* Fix conductor's object change detection
* Remove obsolete redhat-eventlet.patch
* Move is\_volume\_backed\_instance to new style BDM
* Add a get\_root\_bdm utility function
* Libvirt: allow more than one boot device
* Libvirt: make boot dev a list in GuestConfig
* Remove compute\_api\_class config option
* Libvirt: add boot\_index to block device info dicts
* Fixes Hyper-V issue with VHD file format
* Update log message for add\_host\_to\_aggregate
* Correct use of ConfigFilesNotFoundError
* hyperv tests: replace assertEquals with assertEqual
* Utilizes assertNotIn
* VMware tests: replace assertEquals with assertEqual
* Fix incorrect root partition size and compatible volume name
* Imported Translations from Transifex
* Utilize assertIsInstance
* Fix typos in nova/api code
* Make \`update\_test\` compatible with nose
* Add a custom iboot power driver for nova bm
* Fix FK violation errors in InstanceActionTestCase
* Fix test\_shadow\_tables() on PostgreSQL/MySQL
* Fix PCI devices DB API tests
* Fix DB API tests depending on the order of rows
* Use print function rather than print statement
* Update default for running\_deleted\_instance\_action
* Drop unused BM start\_console/stop\_console methods
* VMware: Network fallback in case specified one not found
* baremetal: Add missing method to volume driver
* baremetal: Use network API to get fixed IPs
* Replace decprecated method aliases in tests
* catch exception in start and stop server api
* Ensure that the netaddr import is in the 3rd party section
* Fix status code of server's action confirm\_resize for v3
* Remove duplicated method in test\_compute\_api.py
* Create flavor-access for the tenant when creating a private flavor
* Fix root disk not be detached after deleting lxc container
* fallocate image only when user has write access
* Fixes typo in ListTargets CLI in hyperv driver
* Fixes typos in nova/db code
* Fixes typos in the files in the nova folder
* Avoid clobbering {system\_,}metadata dicts passed to instance update
* Baremetal: Be more patient with IPMI and BMC
* VMware: fix bug with booting from volumes
* Fixes typos in nova/compute files
* Fixes typos in virt files
* Fix docstring for disk\_cachemodes
* Plug Vif into Midonet using Neutron port binding
* VMware: remove deprecated configuration variable
* Fix races in v3 cells extension tests
* Add V3 API samples for consoles
* Update allowvssprovider in xenstore\_data
* Fix races in cells extension tests
* Move \`utils.hash\_file\` -> \`imagecache.\_hash\_file\`
* Remove \`utils.timefunc\` function
* Remove \`utils.total\_seconds\`
* Remove \`utils.get\_from\_path\`
* Fix divergence in attach\_interfaces extensions
* Replace assert\_ with assertTrue
* Fixes several misc typos in scheduler code
* Fix libvirt test on systems with real iSCSI devices
* Reserve 10 migrations for backports
* Sync three-part RPC versions support from Oslo
* Remove unused dict functions from utils
* Avoid mutable default args in \_test\_populate\_filter\_props
* XenAPI: Add versioning for plugins
* Add Docstring to some scheduler/driver.py methods
* Libvirt: default device bus for floppy block devs
* Fix filter\_properties of unshelve API
* hyperv: Initialize target\_iqn in attach\_volume
* Log if a quota\_usages sync updates usage information

2013.2.rc1
----------

* Open Icehouse development
* baremetal: Fix misuse of "instance" parameter of attach/detach\_volume
* Fix the wrong params of attach/detach interface for compute driver
* Imported Translations from Transifex
* Adds missing entry in setup.cfg for V3 API shelve plugin
* Avoid spamming conductor logs with object exceptions
* Prefix \`utils.get\_root\_helper\` with underscore
* Remove \`utils.debug\`
* Remove \`utils.last\_octet\`
* Remove \`utils.parse\_mailmap\`
* Updated from global requirements
* Remove unecessary \`get\_boolean\` function
* Make Exception.format\_message aware of Messages
* Disable lazy gettext
* VMware: Check for the propSet attribute's existence before using
* VMware: fix bug for invalid data access
* Make rbd.libvirt\_info parent class compatible
* Host aggregate configuration throws exception
* VMware: Handle cases when there are no hosts in cluster
* VMWare: Disabling linked clone doesn't cache images
* Fixes inconsistency in flavors list with marker
* Fix indentation in virt.libvirt.blockinfo module
* Update jsonutils.py from oslo
* Fix loading instance fault in servers view
* Refactor test cases related to instance object
* Use system locale for default request language
* Update attach interface api to use new network model
* Adds V3 API specific urlmap tests
* Catch volume errors during local delete
* Fix processutils.execute errors on windows
* Fixes rescue doesn't honor enable password conf for v3
* VMware: Fix bug for root disk size
* Fix incorrect exception raised during evacuate
* Full sync of quota\_usages
* Fix log format error in lazy-load message
* xenapi: reduce impact of errors during SR.scan
* Forced scheduling should be logged as Audit not Debug
* xenapi: Resize operations could be faster
* Resource limits check sometimes enforced for forced scheduling
* Skip test if sqlite3 not installed
* Add notification for pause/unpause instance
* Make LiveMigrateTask use build\_request\_spec()
* Ensure image property not set to None in build\_request\_spec()
* Make sure periodic task sync\_power\_states continues on error
* get\_all\_flavors uses id as key to be unique
* fix the an Unexpected API Error issue in flavor API
an Unexpected API Error issue in flavor API * Adds V3 API samples for srvcs, tenant usage, server\_diagnostics * VMware: Fix SwitchNotFound error when network exists * Fix unicode string values missing in previous patch * Fix stopping instance in sync\_power\_states * Remove deprecated task states * plug\_vif raise NotImplementedError instead of pass * Check instance exists or not when evacuate * xenapi: ignore 500 errors from agent resetnetwork * Add flavor name validation when create flavor * xenapi: enforce filters after live-migration * xenapi: set vcpu cap to ensure weight is applied * Get image metadata in to\_xml for generating xml * Add notification on deleting instance without host * Fix V3 API flavor returning empty string for attributes * Fix v3 server rebuild deserializer checking with wrong access\_ip key * Windows instances require the timezone to be "localtime" * Don't wrap Glance exceptions in NovaExceptions * Update rootwrap with code from oslo * fix typo & grammar in comment 363-364 * Make Instance.refresh() extra careful about recursive loads * Log object lazy-loads * Ensure we don't end up with invalid exceptions again * Fix console db can't load attribute pool * Fix HTTP response for PortNotFound during boot (v3 API) * Fixes assertion bug in test\_cells\_weights.py * Remove \_get\_compute\_info from filter\_scheduler.py * VMware: fix bug for incorrect cluster access * Add V3 API samples for security-groups * Correct lock path for storage-registry-lock * Moved registration of lifecycle events handler in init\_host() * Rebuilding stopped instance should not set terminated\_at * Require oslo.config 1.2.0 final * Removes pre\_live\_migration need for Fixed IPs * Move call to \_default\_block\_device\_names() inside try block * Fix several flake8 issues in the plugins/xenserver code * Fix type is overwritten when UPDATE cell without type specified * Adds v3 API samples for hide server addresses and keypairs * Always filter out multicast from reflection * VMware: fix bug with booting from volume * VMware: enable VNC access without user having to enter password * Remove exceptions.Duplicate * Add v3 API samples for rescue * Added 'page\_size' param to image list * Fix SecurityGroupsOutputTest v3 security group tests * Fixes file mode bits of compute/manager.py * Adds v3 API samples for hosts extension * Only update PCI stats if they are reported from the host * xenapi: Cleanup pluginlib\_nova * Fix Instance object assumptions about joins * Bring up interface when enslaving to a bridge * v3 API samples for servers * xenapi: refactor: move UpdateGlanceImage to common * Imported Translations from Transifex * Fixes modules with wrong file mode bits in virt package * Adds v3 API samples for ips and server\_metadata extensions * Fix V3 API server metadata XML serialization * libvirt: add test case for \_hard\_reboot * Add tests for pre\_live\_migration * Adds V3 API samples for evacuate,ext-az,ext-serv-attrs * Add V3 API samples for ext-status,hypervisor,admin-actions * Code change for regex filter matching * Convert TestCases to NoDBTestCase * VMware: ensure that resource exists prior to accessing * Fixes modules with wrong file mode bits * Fixes test scripts with wrong bitmode * Update sample config generator script * Instance object incorrectly handles None info\_cache * Don't allow pci\_devices/security\_groups to be None * Allow for nested object fields that cannot be None * Object cleanups * Convert TestCases to NoDBTestCase * Convert TestCases to NoDBTestCase * Actually
fix info\_cache healing lazy load * Fixes host stats for VMWareVCDriver * libvirt: ignore false exception due to slow NFS on resize-revert * Syncs install\_venv\_common.py from oslo-incubator * Correct deleted\_at value in notification messages * VMwareVCDriver Fix sparse disk copy error on spawn * Remove unused \_instance\_update() method in compute api * Change service id to compute for compute/api.py * XenAPI raise InstanceNotFound in \_get\_vm\_opaque\_ref * Replace OpenStack LLC with OpenStack Foundation * Send notification for any updates to instance objects * Add flag to make baremetal.pxe file injection optional * Force textmode consoles on baremetal * Typo: certicates=>certificates in nova.conf * Remove print statement from test\_quotas that fails H233 check * Fix for os-availability-zone/detail returning 500 * Convert TestCases to NoDBTestCase * Fixes the usage of PowerVMFileTransferFailed class * MultiprocessWSGITest wait for workers to die bug * Prune node stats at compute node delete time * VMware: datastore regex not honoured * VMware: handle exceptions from RetrievePropertiesEx correctly * VMware: Fix volume detach failure * Remove two unused config options in baremetal * Adds API samples and unit tests for os-server-usage V3 extension * xenapi: Make rescue safer * Add V3 API samples for quota-sets/class-sets,inst-usage-audit-log * Fix problem with starting Windows 7 instances using VMware Driver * VMware: bug fix for instance deletion with attached volume * Fix migration 201 tests to actually test changes * Don't change the default attach-method * Fix snapshot failure with VMwareVCDriver * Fix quota direct DB access in compute * Add new-world Quota object * Fix use of bare list/dict types in instance\_group object * Fix non-unicode string values on objects * Add missing get\_available\_nodes() refresh arg * Make Instance.Name() not lazy-load things * Add debugging to ComputeCapabilitiesFilter * xenapi: fix pep8 violations in nova plugins * Retry on deadlock in instance\_metadata\_delete * Make virt drivers use a consistent hostname * [VMware] Fix problem transferring files with ipv6 host * VMware: Fix ensure\_vlan\_bridge to work properly with existing DVS * Fix network info injection in pure IPv6 environment * delete a non existent flavor extra spec returns 204 * Don't use ModelBase.save() inside of transaction * send the good binding to neutron after live-migration * Add linked clone related unit tests for VMware Hyper * Ensure anti affinity scheduling works * pci passthrough bug fix:hasattr does not work for dict * Fix rename q\_exc to n\_exc (from quantum to neutron) * Improve "keypair data is invalid" error message * Enable fake driver can live migration * Don't use sudo to discover ipv4 address * xenapi: Fix rescue * Fix create's response is different with requested for sec-grps V3 * Fix logging of failed baremetal commands * Add v3 API samples for os-extended-volumes * Better help for generate config * Fix hyper-v vhd real size bigger than flavor issue * Remove unused and duplicate code * Policy check for forced\_host should be before the instance is created * Remove cached console auth token after migration * Don't generate notifications when reaping running\_deleted instances * Add instance\_flavor\_id to the notification message * Edits for nova.conf.sample * xenapi: fix where root\_gb=0 causes problems * Wire in ConfKeyManager.\_generate\_hex\_key!
* Drop unused logger from keymgr/\_\_init\_\_.py * Move required keymgr classes out of nova/tests * Translate more REST API error messages * pci passthrough fails while trying to decode extra\_info * Update requirements not to boto 2.13.0 * Port server actions unittests to V3 API Part 1 * Remove unused method in scheduler driver * Ignore H803 from Hacking * Fixes misuse of assertTrue in virt test scripts * Add missing notifications for rescue/unrescue * Libvirt: volume driver set correct device type * Make v3 API versions extensions core * Make Instance.save() log missing save handlers * Don't fail if volume has no image metadata * Get image properties instead of the whole image * Remove extra 'console' key for index in extensions consoles v3 * Fix V3 API server extension point exception propagation * VMware: nova-compute crashes if VC not available * Update mailmap for jhesketh * Code change for nova support glance ipv6 address * disassociate\_address response should match ec2 * Adds V3 API samples for remote consoles, deferred delete * Fix asymmetric view of object fields * Use test.TestingException where possible * Add encryption support for volumes to libvirt * VMware: fix driver support for hypervisor uptime * Wrong arguments when calling safe\_utils.getcallargs() * Add key manager implementation with static key * Remove duplication in disk checks * Change the duplicate class name TestDictMatches in test\_matches.py * Add alias as prefix to request params for config\_drive v3 * xenapi: Add per-instance memory overhead values * Fixes misuse of assertTrue in test scripts * Remove unused and wrong code in test\_compute.py * Remove cases of 'except Exception' in tests.image * Remove \_assert\_compute\_node\_has\_enough\_memory from filter\_scheduler.py * Fix regression issues with cells target filter * Remove out of date list of jenkins jobs * Don't lose exception info * Add filter for soft-deleted instances to periodic cleanup task * Don't return query from db API * Update fedora dev env instructions * Only return requested network IDs * Ensure get\_all\_flavors returns deleted items * Fix the order of query output for postgres * Fix migration 211 to downgrade with MySQL * Removed duplicated class in exception.py * Fix console api pass tuple as topic to console rpc api * Enable test\_create\_multiple\_servers test for V3 API * VMware image clone strategy settings and overrides * Reduce DB load caused by heal instance info cache * Clean up object comparison routines in tests * Clean up duplicated change-building code in objects * disable direct mounting of qcow2 images by default * xenapi: ensure finish\_migration cleans on errors * xenapi: regroup spawn steps for better progress * xenapi: stop injecting the hostname during resize * xenapi: add tests for finish\_migration and spawn * xenapi: tidy ups to some spawn related methods * xenapi: move kernel/ramdisk methods to vm\_utils * xenapi: ensure pool based migrate is live * Fix live-migrate when source image deleted * Adds v3 API samples for limits and simple tenant usage * Return a NetworkInfo object instead of a list * Fix compute\_node\_get\_all() for Nova Baremetal * Add Neutron port check for the creation of multiple instances * Remove unused exceptions * Add V3 API samples for flavor-manage,flavor-extra-specs * Add V3 API samples for flavors,flavor-rxtx,flavor-access * Catch more accurate exception for \_lookup\_by\_name * Fixes race cond between delete and confirm resize * Fixes unexpected exception message in
ProjectUserQuotaNotFound * Fixes unexpected exception message in PciConfigInvalidWhitelist * Add missing indexes back in from 152 * Fix the bootfile\_name method call in baremetal * update .mailmap * Don't stacktrace on ImageNotFound in image\_snapshot * Fix PCIDevice ignoring missing DB attributes * Revert "Call safe\_encode() instead of str()" * Avoid errors on some actions when image not usable * Add methods to get image metadata from instance * Fix inconsistent usages for network resources * Revert baremetal v3 API extension * Fixes misuse of assertTrue in compute test scripts * add conf for number of conductor workers * xenapi: Add efficient impl of instance\_exists() 2013.2.b3 --------- * Updated from global requirements * Fix failure to emit notification on Instance.save() * MultiprocessWSGITest wait for workers to die bug * Synchronize the key manager interface with Cinder * Remove indirect dependency from requirements.txt * Clean up check for migration 213 * Add V3 API samples for instance-actions,extenions * fix conversion type missing * Enable libvirt driver to use the new BDM format * Allow block devices without device\_name * Port to oslo.messaging.Notifier API * Add expected\_errors for extension aggregates v3 * Refresh network info cache for secgroups * Port "Make flavors is\_public option .." to v3 tree * Add missing Aggregate object tests * Generalize the \_make\_list() function for objects * PCI passthrough Libvirt vm config * Add columns\_to\_join to instance\_update\_and\_get\_original * XenAPI: Allow 10GB overhead on VHD file check size * Adds ephemeral storage support for Hyper-V * Adds Hyper-V VHDX support * Create mixin class for common DB fields * Deprecate conductor migration\_get() * Change finish\_revert\_resize paths to use objects * Change finish\_resize paths to use objects * Change resize\_instance paths to use objects * VMware: Nova boot from cinder volume * VMware: Multiple cluster support using single compute service * Nova support for vmware cinder driver * Adds Hyper-V dynamic memory support * xenapi: Fix download\_handler fallback * Ensure old style images can be resized * Add nova.utils.get\_root\_helper() * Inherit base image properties on instance creation * Use utils.execute instead of subprocess * Fixes misuse of assertTrue in Cells test scripts * Remove versioning from IOVisor APIs PATH * Revert "Importing correlation\_id middleware from oslo-incubator" * update neutronclient to 2.3.0 minimum * Adds metrics collection support in Hyper-V * Port all rpcapi modules to oslo.messaging interface * Fix a gross duplication of context code in objects tests * Make compute\_api use Aggregate objects * Add Aggregate object model * Add dict and list utility functions for object typing * VMware: remove conditional suds validation * Limit instance fault messages to 255 characters * Add os-assisted-volume-snapshots extension * Scheduler rpcapi 2.9 is not backwards compatible * Adds support for Hyper-V WMI V2 namespace * Port flavormanage extension to v3 API Part 2 * Add os-block-device-mapping to v3 API * Improves Hyper-V vmutils module for subclassing * xenapi: add support for auto\_disk\_config=disabled * Check ephemeral and swap size in the API * Adds V3 API samples for cells and multinic * Increase volume created checking retries to 60 * Fix changes\_since for V3 API * Make v3 API console-output extension core * Makes v3 API keypairs extension core * Add support for API message localization * Fix typo and indent error in isolated\_hosts\_filter.py * Adds 
'instance\_type' param to build\_request\_spec * Guest-assisted-snaps libvirt implementation * Improve EC2 API error responses * Remove EC2 postfix from InvalidInstanceIDMalformedEC2 * Introduce InternalError EC2 error code * Introduce UnsupportedOperation EC2 error code * Introduce SecurityGroupLimitExceeded EC2 error code * Introduce IncorrectState EC2 error code * Introduce AuthFailure EC2 error code * Fix ArchiveTestCase on PostgreSQL * Fix AggregateDBApiTestCase on PostgreSQL and MySQL * Port Cheetah templates to Jinja2 * Libvirt: call capabilities before getVersion() * Remove \_report\_driver\_status from compute/manager.py * Interpret BDM None size field as 0 on compute side * Add test cases for resume\_state\_on\_host\_boot * Add scheduler support for PCI passthrough * Fix v3 swap volume with wrong signature * vm\_state and task\_state not updated during instance delete * VMware: use VM uuid for volume attach and detach * xenapi: support raw tgz image download * xenapi: refactor - extract image\_utils * Add block\_device\_mapping\_get\_all\_by\_instance to virtapi * Sync rpc from oslo-incubator * Fix the multi-instance quota message * Fix virtual power driver fails silently * VMware: Config Drive Support * xenapi: skip metadata updates when VM not found * Make resource\_tracker record host\_ip * Disable compute fanout to scheduler * Make image\_props\_filter use information from DB not RPC * Make compute\_capabilities\_filter use information from DB not RPC * XenAPI: More operations with LVM-based SRs * XenAPI: make\_partition fixes for Dom0 * Fix wrong method call in baremetal * powervm: make start\_lpar timeout * Disable retry filter with force\_hosts or force\_nodes * Call safe\_encode() instead of str() * Fix usage of classmethod in various places * Fix V3 API quota\_set tests using V3 url and request * Handle port over-quota when allocating network for instance * Fix warning log message typo in resource\_tracker.instance\_claim * Sync fileutils from oslo-incubator * Fix VMware fakes * DRY up use of @wrap\_exception() decorator * Remove unused fake run\_instance() method * Use ExceptionHelper to bypass @client\_exceptions * Added new hypervisor to support Docker containers * Introduce InvalidPermission.Duplicate EC2 error code * Fix and gate on H302 (import only modules) * On snapshot errors delete the image * Remove dis/associate actions from security\_groups v3 * Add volume snapshot delete API test case * Assisted snapshots compute API plumbing * Adds V3 API samples for agents, aggregates and certificates * Adds support for security\_groups for V3 API server create * powervm: Use FixedIntervalLoopingCall for polling LPAR status * xenapi: agent not inject ssh-key if cloud-init * Tenant id filter test is not correct * Add PCI device tracker to compute resource tracker * PCI devices resource tracker * PCI device auto discover * Add PCI device filters support * Avoid swallowing exceptions in network manager * Make compute\_api use Service and ComputeNode objects * Adding VIF Driver to support Mellanox Plugin * Change prep\_resize paths to use objects * Make backup and snapshot use objects * Deprecate conductor migration\_create() * Make inject\_network\_info use objects * Convert reset\_network to use instance object * Make compute\_api use objects for lock/unlock * Add REUSE\_EXT in \_swap\_volume call to blockRebase * Remove unused \_decompress\_image\_file from powervm operator class * powervm: actually remove files after migration * Fix to disallow server name with all blank
spaces (v3 API) * Add mock to test-requirements * xenapi: Improve test\_xenapi unit testing performance * Sets policy settings so V3 API extensions are discoverable * Pass objects for revert and confirm resizes * Convert \_poll\_unconfirmed\_resizes to use Migration object * Make compute\_api confirm/revert resize use objects * Make compute\_api migrate/resize paths use instance objects * Fix race when running initialize\_gateway\_device() * fix bad usage of exc\_info=True * Use implicit nullable=True in sqlalchemy model * Introduce Invalid\* EC2 error codes * Improve parameter related EC2 error codes * Disconnect from iSCSI volume sessions after live migration * Correct default ratelimits for v3 * Improve db\_sqlalchemy\_api test coverage * Safe db.api.compute\_node\_get\_all() performance improvement * Remove a couple of unused stubs * Fix Instance object issues * Adds API version discovery support for V3 * Port multiple\_create extension to V3 API * Add context information to download plugins * Adds V3 API samples for migrations * Filter network by project id * Added qemu guest agent support for qemu/kvm * PCI alias support * Add PCI stats * Raise timeout in fake RPC if no consumers found * Stub out instance\_update() in build instance tests * Mock out action event calls in build instance test * powervm: revert driver to pass for plug\_vifs * Remove capabilities.enabled from test\_host\_filters * xenapi: through-dev raw-tgz image upload to glance * Add PCI device object support * Store CONF.baremetal.instance\_type\_extra\_specs in DB * Pci Device DB support * VMware: remove redundant default=None for config options * Move live-migration control flow from scheduler to conductor * Fix v3 extensions inherit from wrong controller * Fix network creation in Vlan mode * compute rpcapi 2.29 is not backwards compatible * Fix the message of coverage directory error * Fix error messages in v3 aggregate API * compute rpcapi 2.37 is not backwards compatible * use 'exc\_info=True' instead of import traceback * Add env to make\_subprocess * Remove unused nova.common module * Adds Flavor ID validations * Imported Translations from Transifex * Add DocStrings for function allocate\_for\_instance * Removes V3 API images and image\_metadata extensions * Powervm driver now logs ssh stderr to warning * Update availability\_zone on time if it was changed * Add db.block\_device\_mapping\_get\_by\_id * Add volume snapshot APIs to driver interface * Pass the destination file name to download modules * Fix typo in baremetal docs * VMware: clean up get\_network\_with\_the\_name * Stylistic improvement of compute.api.API.update() * Removes fixed ips extension from V3 API * Libvirt: fix KeyError in set\_vif\_bandwidth\_config * Add expected\_errors for migrations v3 * Add alias as prefix to request params for user\_data v3 * Fix migrations index * Should finish allocating network before VM reaches ACTIVE * Fixes missing host in Hyper-V get\_volume\_connector * Fix various cells issues due to object changes * Document CONF.default\_flavor is for EC2 only * Revert task state when terminate\_instance fails * Revert "Make compute\_capabilities\_filter use ..." 
* Add resource tracking to build\_and\_run\_instance * Link Service.compute\_node with ComputeNode object * Add ComputeNode object implementation * Add Service object implementation * Make compute\_api use KeyPair objects * Add KeyPair object * Fix spice/vnc console api samples tests * Fix network manager tests to use correct network host * Stub out get\_console\_topic() in test\_create\_console * Stub out instance\_fault\_create() in compute tests * Fix confirm\_resize() mock in compute tests * Fix rpc calls on pre/post live migration tests * Stub out setup\_networks\_on\_host() in compute tests * maint: remove redundant disk\_cachemode validation entry * Fix unicode key of azcache can't be stored to memcache * XenAPI: SR location should default to location stored in PBD * XenAPI: Generic Fake.get\_all\_records\_where implementation * XenAPI: Return platform\_version if no product\_version * XenAPI: Support local connections * Delete expired instance console auth tokens * Fix aggregate creation/update with null or too long name * Fix live migration test for no scheduler running * Fix get\_diagnostics() test for no compute consumer * Stubout reserve\_block\_device\_name() in test * Stubout deallocate\_for\_instance() in compute tests * Stub out net API sooner in servers API test * PCI utils * Object support for instance groups * Add RBD supporting to libvirt for creating local volume * Add alias as prefix to request params for availability\_zone v3 * Remove deprecated legacy network info model in Hypervisor drivers * Correct the authorizer for extended-volumes v3 * emit warning while running flake8 without virtual env * Adds Instance UUID to rsync debug logging * Fixes sync issue for user level resources * Fix Fibre Channel attach for single WWN * nova.conf configurable gzip compression level * Stub out more net API methods floating IP DNS test * Enable CastAsCall for test\_api\_samples * Stub out attach\_volume() in test\_api\_samples * Fix remove\_fixed\_ip test with CastAsCall * Add add\_aggregate\_to\_host() to FakeDriver * Fix api samples image service stub * Add CastAsCall fixture * Enable consoleauth service during ec2 tests * Disable periodic tasks during integration tests * Use ExceptionHelper to bypass @client\_exceptions * Clean up some unused wrap\_exception() stuff * Add new compute method for building an instance * VMware: provide a coherent message to user when viewing console log * Use new BDM syntax when determining boot metadata * Allow more than one ephemeral device in the DB * Port flavormanage extension to v3 API part 1 * Correct the status code to 201 for create v3 * Pop extra keys from context in from\_dict() * Don't initialize neutronv2 state at module import * Remove instance exists check from rebuild\_instance * Remove unused variables in test\_compute\_cells * Fix fake image\_service import in v3 test\_disk\_config * Updates tools/config/README * xenapi: Added iPXE ISO boot support * Log exception details setting vm\_state to error * Fix instance metadata access in xenapi * Fix prep\_resize() stale system\_metadata issue * Implement hard reboot for powervm driver * Use the common function is\_neutron in servers.py * Make xenapi capabilities['enabled'] use service enabled * Remove duplicate test from V3 version of test\_hosts * Remove unused nova.tests.image.fake code * Remove unused fake run\_instance() method * Remove use of fake\_rabbit in Nova * libvirt: fix {attach,detach}\_interface() * Added test case in test\_migrations for migration 208 * Add flag to make 
IsolatedHostsFilter less restrictive * Add unique constraint to AggregateMetadata * Fix a typo in test\_migrations for migration 209 * Remove duplicate variable \_host\_state * enhance description of share\_dhcp\_address option * Adds missing V3 API scheduler hints testcase * [v3] Show detail of a quota in API os-quota-sets * Remove legacy network model in tests and compute manager * Remove redundant \_create\_instance method from test\_compute * Add jsonschema to Nova requirements.txt * Remove docstrings in tests * Fix scheduler prep\_resize deprecated comments * Search filters for get\_all\_system\_metadata should use lists * fix volume swap exception cases * Set VM back to its original state if cold migration failed * Enforce flavor access during instance boot * Stub out entry points in LookupTorrentURLTestCase * Port volumes swap to the new API-v3 * correct the name style issue of ExtendedServerAttributes in v3 api * Fix IVS vif to correctly delete interfaces on unplug * Adding support for iSER transport protocol * libvirt: allow passing 'os\_type' property to glance * Fixes auto confirm invalid error * Fix ratelimiting * quantum pxeboot-port support for baremetal * baremetal: Log IPMI power on/off timeouts * VMware: Added check for datastore state before selection * Boot from image destination - volume * Virt driver flag for different BDM formats * Refactor how BDMs are handled when booting * Change RPC to use new BDM format for instance boot * Make API part of instance boot use new BDM format * Add Migration object * Fix untranslated log messages in libvirt driver * Fix migration 210 tests for PostgreSQL * Handle InstanceInvalidState of soft\_delete * Don't pass RPC connection to pre\_start\_hook * VMware: Ensure Neutron networking works with VMware drivers * Unimplemented suspend/resume should not change vm state * Fix project\_user\_quotas\_user\_id\_deleted\_idx index * Allow Cinder to specify file format for NFS/GlusterFS * Add migration with missing fkeys * Implement front end rate-limiting for Cinder volume * Update mailmap * Fixup some non-unity-ness to conductor tests * Add scheduler utils unit tests * Convert admin\_actions ext tests to unit tests * Unit-ify the compute API resize tests * Raises masked AssertionError in \_test\_network\_api * Have tox install via setup.py develop * Set launch\_index to right value * Add passing a logging level to processutils.execute * Clear out service disabled reason on enable for V3 API * Fix HTTP response for PortInUse during boot (v3 API) * Adds infra for v3 API sample creation * Remove deprecated CONF.fixed\_range * Offer a paginated version of flavor\_get\_all * Port integrated tests for V3 API * Refactor integrated tests to support V2 and V3 API testing Part 2 * Refactor integrated tests to support V2 and V3 API testing * Fix cells manager RPC version * Upgrade to Hacking 0.7 * Fix logic in add\_host\_to\_aggregate() * Enforce compute:update policy in API * Removed the duplicated \_host\_state = None in libvirt driver * Sync gettextutils from oslo-incubator * Fix typo in exception message * Fix message for server name with whitespace * Demote personalities from core of API v3 as extensions os-personality * Port disk\_config API to v3 Part 2 * remove the \_action\_change\_password attribute in V3 server API * Fix exception handling in V3 API coverage extension * Remove "N309 Python 3.x incompatible construct" * Allow swap\_volume to be called by Cinder * Remove trivial cases of unused variables * Handle NeutronClientException in
secgroup create * Fix bad check for openstack versions (vendor\_data/config drive) * Make compute\_capabilities\_filter use information from DB not RPC * Make affinity\_filters use host\_ip from DB not RPC * db: Add host\_ip and supported\_instances to compute\_nodes * Add supported\_instances to get\_available\_resource to all virt drivers * libvirt: sync get\_available\_resources and get\_host\_stats * Clean up unimplemented methods in the powervm driver * Make InvalidInstanceIDMalformed an EC2 exception * Fix one port can be attached to more devices * Removed code duplication in test\_get\_server\_\*\_by\_id * Add option for QEMU Gluster libgfapi support * Moves compute.rpcapi.prep\_resize call to conductor * Fix get\_available\_resource docstrings * Fix spelling in image\_props\_filter * Fix FK violation in ConsoleTestCase * Fix ReservationTestCase on PostgreSQL * Fix instance\_group\_delete() DB API method * Fix capitalization, it's OpenStack * Add test cases to validate neutron ports * Add expected\_errors for extension quota\_classes v3 * Fix leaking of image BDMs * Moved tests for server.delete * Fix VMwareVCDriver to support multi-datastore * Fixes typo in \_\_doc\_\_ of /libvirt/blockinfo.py * User quota update should not exceed project quota * Port "Accept is\_public=None .." to v3 tree * Remove clear\_rabbit\_queues script * Don't need to init testr in run\_tests.sh * Imported Translations from Transifex * Deprecate conductor's compute\_reboot() interface * Deprecate conductor's compute\_stop() interface * Make compute\_api use InstanceAction object * Add basic InstanceAction object * Add delete() operation to InstanceInfoCache * Make compute\_api use Instance.destroy() * Add Instance.destroy() * Make compute\_api use Instance.create() * Change swap\_volume volume\_api calls to use ID * Fix H501: Do not use locals() for string formatting * fix libguestfs mount order when inspecting * Imported Translations from Transifex * powervm: add test case for get\_available\_resource * Fix to allow ipv6 in host\_ip for ESX/vSphere driver * Improve performance of driver's get\_available\_nodes * Cleanup exception handling on evacuate * Removed code for modular exponentiation, pow() already does this * Remove unsafe XML parsing * Fix typo with network manager service\_name * Remove old legacy network info model in libvirt driver * maint: remove redundant default=None for config options * Fix simultaneous timeout with smart iptables usage * xenapi: send identity headers from glance plugin * Catch ldap ImportError * xenapi: refactor - extract get\_virtual\_size * xenapi: refactor - extract get\_stream\_funct\_for * xenapi: test functions for \_stream\_disk * Check host exists before evacuate * Fix EC2 API Fault wrapper * Fix deferred delete use of objects * Remove unsafe XML parsing * Update BareMetal driver to current nova.network.model * Personality files can be injected during server rebuild * Need to allow quota values to be set to zero * Merged flavor\_disabled extension into V3 core api * Merged flavorsextraspecs extension into core API * Code dedup in test\_update\_\* * Move tests test\_update\_\* to separate class * VMware: fix rescue/unrescue instance * Add an exception when doesn't have permissions to operate vm on hyper-v * Remove dead capabilities code * Spelling correction in test\_glance.py * Enhance object inheritance * Enable no\_parent and file\_only security * Add Instance.create() * Pull out instance object handling for use by create also * Make fake\_instance handle 
security groups * Fix instance actions testing * Sync models with migrations * Wrong unique key name in 200 migration * Remove unused variable * Make NovaObject.get() avoid lazy-load when defaulting * Fix migration downgrade 146 with mysql * Remove the indexes on downgrade to work with MySQL * Downgrade MySQL to the same state it used to be * Format CIDR strings as per storage * Fix migration downgrade 147 with mysql * Fix typo in compute.rpcapi comments * Imported Translations from Transifex * Avoid extra glance v2 locations call! * xenapi: Adding BitTorrent download handler * xenapi: remove dup code in make\_step\_decorator * Retry failed instance file deletes * xenapi: retry when plugin killed by signal * Do not use context in db.sqla.api private methods * Finish DB session cleanup * Clean up session in db.sqla.api.migration\_\* methods * Clean up session in db.sqla.api.network\_\* and sec\_groups\_\* methods * Don't inject files while resizing instance * Convert CamelCase attribute naming to camel\_case for servers V3 API * Convert camelCase attribute naming to camel\_case * Add plug-in modules for direct downloads of glance locations * Allow user and admin lock of an instance * Put fault message in the correct field * Fix Instance objects with empty security groups * db: Remove deprecated assert\_unicode attribute * VlanManager creates superfluous quota reservations * xenapi: allow non rsa key injection * Add expected\_errors for extensions simple\_tenant\_usage v3 * Clean destroy for project quota * Add expected\_errors for extension Console v3 * Add expected\_errors for extension baremetal v3 * Clean up session in db.sqla.api.get\_ec2 methods * Clean up db.sqla.api.instance\_\* methods * remove improper usage of 'assert' * Support networks without gateway * Raise 404 when instance not found in admin\_actions API * Switch to Oslo-Incubator's EnvFilter rootwrap * xenapi: Moving Glance fetch code into image/glance:download\_vhd * Performs hard reboot if libvirt soft reboot raises libvirtError * xenapi: Rename imageupload image * Make nbd reservation thread-safe * Code dedup in class QuotaReserveSqlAlchemyTestCase * Fix multi availability zone issue part 1 * Fix instance\_usage\_audit\_log v3 follow REST principles * Update mailmap * Add obj\_attr\_is\_set() method to NovaObject * Add ObjectActionFailed exception and make Instance use it * Fix change detection logic in conductor * Convert pause/unpause to use objects * Make delete/soft\_delete use objects * Refactor compute API's delete to properly do local soft\_deletes * Add identity headers while calling glanceclient * xenapi: Reduce code duplication in vmops * vendor-data minor format / style cleanups * maint: remove unused exceptions * Add support for Neutron https endpoint * Add index to reservations.uuid column * Refactor EC2 API error handling code * Cleanup copy/paste in test\_quota\_sets * Make EvacuateTest DRYer * Add expected\_errors for extensions quota\_sets and hypervisors * Remove generic exception catching for admin\_actions API v3 * Demote admin-passwd from core of API v3 as extensions os-admin-password * handle auto assigned flag on allocate floating ip * Add expected\_errors for extension shelve v3 * Use cached nwinfo for secgroup rules * Sync config.generator from Oslo * Remove \* import from xenserver plugins * EC2-API: Fix ambiguous ipAddress/dnsName issue * xenapi: no image upload retry on certain errors * Add error checking around host service checking * add vendor\_data to the md service and config drive * 
Moves compute.rpcapi.prep\_resize call to scheduler.manager * Removed scheduler doc costs section * Fix formatting on scheduler documentation * Add expected\_errors for extension server\_diagnostics V3 * Fix extensions agent follow API v3 rules * XenAPI: Change the default SR to be the pool default * Fix flavor\_access extension follow API V3 rules * Add notification for live migration call * Correct status code and response for quota\_sets API v3 * Fixes for v3 API servers tests * Remove sleep from service group db and mc tests * [xenapi] Unshadow an important test case class * Fix and Gate on H303 (no wildcard imports) * Remove unreachable code * powervm: pass on unimplemented aggregate operations * Fix timing issue in SimpleTenantUsageSample test * Code dedup in virt.libvirt.test\_imagecache.test\_verify\_checksum\_\* * Move tests test\_verify\_checksum\_\* to separate class * Logging virtual size of the QCOW2 * Add expected\_errors for extension certificates v3 * Support setting block size for block devices * Set the image\_meta for the instance booted from a volume * return 429 when API rate limiting occurs * Add task\_state filter for nova list * Port server\_usage API to v3 part 2 * Port server\_usage API to v3 part 1 * Adds factory methods to load Hyper-V utils classes * Fix 2 pep8 errors in tests * Enabled hacking check for Python3 compatible print (H233) * Fix race between aggregate list and delete * Enforce authenticated connections to libvirt * Enabled the hacking warning for Py3 compatible octal literals (H232) * Remove fping plugin from V3 API * Moves scheduler.rpcapi.prep\_resize call on compute.api to conductor * Fix some Instance object class usage errors * xenapi: remove pv detection * Add expected\_errors for extension keypair and availability\_zone * Add expected\_errors for extension console\_output v3 * Fix extension hosts follow API v3 rules * Use project quota as default user quota * Adds NoAuthMiddleware for V3 * xenapi: remove propagate xenapi\_use\_agent key * Update references with new Mailing List location * MinDisk size based on the flavor's Disk size * Use RetrievePropertiesEx instead of RetrieveProperties * Speed up test BareMetalPduTestCase.test\_exec\_pdutool * Port ips-extended to API-v3 ips core API Part 2 * Disable per-user rate limiting by default * Support EC2 API wildcards for DescribeTags filters * Remove the monkey patching of \_ into the builtins * Sync lockutils from Oslo * Set lock\_path in tests * Port ips-extended to API-v3 ips core API Part 1 * Fix postgresql failures related to Data type to API-v3 fixed-ip * Bypass queries which cause a contradiction * Add basic BDM format validation in the API layer * Servers API for the new BDM format * Fixes Hyper-V issues on versions prior to 2012 * Add expected\_errors for extension instance\_actions v3 * Fix extension server\_meta follow API v3 rules * Ensure that uuid is returned with mocked instance * Code dedup in class InstanceTypeExtraSpecsTestCase * Add expected\_errors for extension cells V3 * Add expected\_errors for extension\_info V3 * Add latest oslo DB support * Add note why E712 is ignored * Clarify instance\_type vs flavor in nova-manage * Fix leaky network tests * Fix HTTP response for PortNotFound during boot * Don't pass empty image to filter on live migration * Start using hacking 0.6 * Set VM back to its original state if cold migration failed * xenapi: ensure vcpu\_weight configured correctly * Fix failing network manager unit tests * Add expected\_errors for extensions services and
server\_password v3 * Update oslo.config.generator * Fix the is\_volume\_backed\_instance check * Add support for volume swap * Fix policy failure on image\_metadata calls * Sync models for AgentBuild, Aggregates, AggregateHost tables * Imported Translations from Transifex * Make ServerXMLSerializationTest DRYer * Port migrations extension to v3 API part 2 * Port migrations extension to v3 API part 1 * xenapi: Fix console rotate script * Sync some of Instance\* models with migrations * Fix extension rescue follow API v3 rules * Per-project-user-quotas for more granularity * Add unique constraint to InstanceTypeExtraSpecs * Remove instance\_metadata\_get\_all\* from db api * Merged flavorextradata extension (ephemeral disk size) into core API * Fixed tests for flavor swap extension after merging in core API * Remove hostname param from XenApi after first boot * Cell Scheduler support for hypervisor versions * Fix flavor v3 follow API v3 rules * Sync sample config file generator with Oslo * Allow exceptions to propagate through stevedore map * Create vmware section * Sync latest rpc changes from oslo-incubator * Check instance on dest once during block migration * Revert "Add requests requirement capped <1.2.1." * Unit-ify compute\_api delete tests * Convert network API to use InfoCache object * Make InfoCache.network\_info be the network model * Make shelve pass old-world instance object to conductor * Make admin API state resets use Instance.save() * Deduplicate data in TestAddressesXMLSerialization * Move \_validate\_int\_value controller func to utils * Correct the action name for admin\_actions API v3 * Fixing dnsdomain\_get call in nova.network.manager * Raise exception if both port and fixed-ip are in requested networks * Sync eventlet\_backdoor from oslo-incubator * Fix up trivial license mismatches * Implements host uptime API call for cell setup * Ensure dates are dates, not strings * Use timeutils.utcnow() throughout the code * Add indexes to sqlite * Fix iptables rules when metadata\_host=127.0.0.1 * Sync gettextutils from oslo * Handle instance objects in conductor compute\_stop * Config drive attached as cdrom * Change EC2 client tokens to use system\_metadata * Check that the configuration file sample is up to date * Make servers::update() use Instance.save() to do the work * Make Instance.save() handle cells DB updates * Convert suspend/resume to use objects * Make compute\_api.reboot() use objects * Fix HTTP response for PortInUse during boot * Fix DB access when refreshing the network cache * Use valid IP addresses values in tests * Add ability to factor in per-instance overheads * Send updated aggregate to compute on add/rm host * Fix inconsistency between Nova-Net and Neutron * Fix parse\_transport\_url when url has query string * xenapi: no glance upload retry on 401 error * Code dedup in test\_libvirt\_vif * Raise exceptions when Spice/VNC are unavailable * xenapi: Pass string arguments to popen * Add rpcapi tests for shelving calls * Create key manager interface * Remove duplicate cells\_rpcapi test * ec2-api: Disable describing of instances using deleted tags as filter * Disable ssl layer compression for glance requests * Missed message -> msg\_fmt conversion * Refresh network cache when reassigning a floating IP in Neutron * Remove unnecessary comments for instance rebuild tests * Add missing tests for console\_\* methods * Force reopening eventlet's hub after fork * Remove project\_id from alternate image link path * Fixes wrong action comment 'lock' to 'unlock' * 
Add expected\_errors for extension extended\_volumes v3 * port BaremetalNodes API into v3 part2 * port baremetal\_nodes API into v3 part1 * Add validation of available\_zone during instance create * Move resource usage sync functions to db backend * Remove locals() from various places * Add expected\_errors for extension evacuate v3 * Add expected\_errors for extension deferred\_delete v3 * Fix accessing to '/' of metadata server without any checks to work * Fix duplicate osapi\_hide\_server\_address\_states config option * API for shelving * Fix shelve's use of system\_metadata * Fix Instance object handling of implied fields * Make Instance object properly update \*metadata * Support Client Token for EC2 RunInstances * Change get\_all\_instance\_metadata to use \_get\_instances\_by\_filters * Add a new GroupAffinityFilter * Move a migration test to MigrationTestCase * Use db.flavor\_ instead of db.instance\_type\_ * Periodic task for offloading shelved instances * Shelve/unshelve an instance * Code dedup in class ImagesControllerTest * Assert backing\_file should exist before attempting to create it * Add API-v3 merged core API into core API list * Don't ignore 'capabilities' flavor extra\_spec * Support scoped keys in aggregate extra specs filter * Fix blocking issue when powervm calculate checksum * Avoid shadowing Exception 'message' attribute * Code dedup in class TestServerActionRequestXMLDeserializer * Fix mig 186 downgrade when using sqlalchemy >= 0.8 * Move test\_stringified\_ips to InstanceTestCase * Move \*\_ec2\_\* tests in test\_db\_api to own test case * Code dedup in class ImageXMLSerializationTest * Fix malformed format string * Fix EC2 DescribeTags filter * Code dedup in test\_libvirt\_volume * Port AttachInterfaces API to v3 Part 2 * Make ServersViewBuilderTest DRYer * Move test\_security\_group\_update to SecurityGroupTestCase * Code dedup in class ServersControllerCreateTest * Code dedup in tests for server.\_action\_rebuild * Moved tests for server.\_action\_rebuild * Move bw\_usage\_\* tests in test\_db\_api to own test case * Move dnsdomain\_\* tests in test\_db\_api to own test case * Remove redundant if statements in cells.state * Move special cells logic for start/stop * Port used limits extension to v3 API Part 2 * Avoid deleting user-provided Neutron ports if VM spawn fails * Fix nic order not correct after reboot * Porting os-aggregates extensions to API v3 Part 2 * Porting os-aggregates extensions to API v3 Part 1 * Porting server metadata core API to API v3 Part 2 * Porting server metadata core api to API v3 Part 1 * Port limits core API to API-v3 Part 2 * xenapi: Only coalesce VHDs if needed * Don't attach to multiple Quantum networks by default * Load cell data from a configuration file * Fix filtering aggregate metadata by key * remove python-glanceclient cap * Remove duplicated key\_pair\* tests from test\_db\_api * Porting limits core api to API v3 Part 1 * Add missing tests for db.api.instance\_\* methods * Fix IPAddress and CIDR type decorators * Complete deletion when compute manager start-up * Port user\_data API to v3 Part 2 * Add legacy flag to get\_instance\_bdms * XenAPI: Refactor Fake to create pools, SRs and VIFs automatically * Port flavor\_rxtx extension to v3 API Part 2 * Port flavor\_rxtx extension to v3 API Part 1 * Fix aggregate\_get\_by\_host host filtering * Fix v3 hypervisor extension servers action follow REST principles * xenapi:populating hypervisor version in host state * Port attach and detach of volume-attachment into 
os-extended-volume v3 * Port deferredDelete API to v3 Part 2 * Fix status code for coverage API v3 * Port instance\_actions API to v3 Part 2 * port instance\_actions API into v3 part1 * Prompt error message when creating aggregate without aggregate name * Port used limits extension to v3 API Part 1 * Makes \_PATH\_CELL\_SEP a public global variable * port disk\_config API into v3 part1 * Imported Translations from Transifex * Remove locals() from virt directory * Handle ImageNotAuthorized exception * Port AvailabilityZone API to v3 Part 2 * port AvailabilityZone API into v3 part1 * Port service API to v3 Part 2 * Imported Translations from Transifex * Unify duplicate code for powering on an instance * Port hide srvr addresses extension to v3 API Pt2 * Sync v2/v3 console\_output API extensions * Port extended status extension to v3 API Part 2 * Port os-console-output extension to API v3 Part 2 * Changes select\_destinations to return dicts instead of objects * Better start/stop handling for cells * Make notifications properly string-convert instance datetimes * Fix default argument values on get\_all\_by\_filters() * Make db/api strip timezones for datetimes * Fix object\_compat decorator for non-kwargs * Imported Translations from Transifex * Remove unused recreate-db options from run\_test.sh * update Quantum usage to Neutron * Convert cells to use a transport URL * Fix aggregate update * Passing volume ID as id to InvalidBDMVolume exception * Handle instance being deleted while in filter scheduler * Port extended-availability-zone API into v3 part2 * Fix extensions os-remote-consoles to follow API v3 rules * Add unique constraints to AggregateHost * Unimplemented pause should not change vm state on PowerVM * Port server password extension to v3 API Part 2 * xenapi: Add disk config value to xenstore * Port hide srvr addresses extension to v3 API Pt1 * Add -U to the command line for pip * xenapi: support ephemeral disks bigger than 2TB * Cells: Make bandwidth\_update\_interval configurable * Add \_set\_instance\_obj\_error\_state() to compute manager * Update v3 servers API with objects changes * xenapi: enable attach volumes to non-running VM * Change force\_dhcp\_release default to True * Revert "Sync latest rpc changes from oslo-incubator" * Sync 10 DB models and migrations * Make compute\_api.get() use objects natively * port Host API into v3 part2 * Port admin-actions API into v3 part2 * Fix cells manager rpc api version * Allow ::/0 for IPv6 security group rules * Fix issue with pip installing oslo.config-1.2.0 * Sort output for unit tests in test\_describe\_tags before compare * Document rate limiting is per process * Properly pin pbr and d2to1 in setup.py * Add support for live\_snapshot in compute * xenapi: Stub out \_add\_torrent\_url for Vhd tests * Add Instance.get\_by\_id() query method * Fix duplicate fping\_path config option * Port images metadata functionality to v3 API Part 2 * Add unique constraint to ConsolePool * Enable core API-v3 to be optional when unit testing * Clarify flavorid vs instance\_type\_id in db * Sync db.models.Security\* and db.models.Volume\* * Sync db.models.Instance\* with migrations * Add "ExtendedVolumes" API extension * Fix misc issues with os-multinic v3 API extension * Port multinic extension to v3 API Part 2 * Port security groups extension to v3 API Part 2 * Port security groups extension to v3 API Part 1 * Add missing help messages for nova-manage command * Validate volume\_size in block\_device\_mapping * Imported Translations from 
Transifex * Fix info\_cache and bw\_usage update race * xenapi: glance plugin should close connections * Change db.api.instance\_type\_ to db.api.flavor\_ * Replace get\_instance\_metadata call in api.ec2.cloud.\_format\_instances * Add unique constraint to AgentBuild * Ensure flake8 tests run on all api code * Sync notifier change from oslo-incubator * Sync harmless changes from oslo-incubator * Sync latest rpc changes from oslo-incubator * Add missing matchmaker\_ring * Port extended-server-attributes API into v3 part2 * List migrations through Admin API * Add a VIF driver for IOVisor engine * port Service API into v3 part1 * Port admin-actions API into v3 part1 * Port fping extension to v3 API Part 2 * Disassociate fixed IPs not known to dnsmasq * Imported Translations from Transifex * Allow filters to only run once per request if their data is static * Port extended-availability-zone API into v3 part1 * Update openstack.common.config * Export just the volume metadata for the database to be populated * port Deferred\_delete API into v3 part1 * Misc fixes for v3 evacuate API extension * Imported Translations from Transifex * Baremetal ensures node is off before powering on * Remove references to deprecated DnsMasqFilter * Port user\_data API to v3 Part 1 * Update instance.node on evacuate * Fix formatting errors in documentation * Use oslo.sphinx and remove local copy of doc theme * Remove doc references to distribute * Sync install\_venv\_common from oslo * Make EC2 API request objects instead of converting them * Make instance show and index use objects * Remove conductor usage from consoleauth service * xenapi: Stub out entry points for BitTorrent tests * Fix debug message for GroupAntiAffinityFilter * Add unique constraints to Service * Add unique constraint to FixedIp * Fixed columns list in indexes * Add cinder cleanup to migrations * Change unique constraint in VirtualInterface * Changes ComputeTaskManager class to inherit base.Base * Moves populate retry logic to the scheduler utils * Exceptions raised by quantum validate\_networks result in 500 error * Fix and gate on E125 * Add object (de)serialization support to cells * Add cells get\_cell\_type() method * Add fill\_faults() batch operation to InstanceList * Make api\_samples reboot test use a plausible scenario * Fix compute\_api object handling code in cells messaging * Fix power\_state lookup in confirm\_resize * Make flavors is\_public option actually work * Imported Translations from Transifex * hyperv: Fix vmops.get\_info raises InstanceNotFound KeyError * Make instance\_update() string-convert IP addresses * Refactor compute\_api reboot tests to be unit-y * Refactors select\_destinations to return HostState objects * PowerVM resize and migrate test cases * Clear out service disabled reason on enable * Port agent API to v3 Part 2 * Fix v3 hypervisor extension search action follow REST principles * Fix resize ordering for COW VHD * Add inst\_type parameter * Store volume metadata as key/value pairs * Fixes a typo on AggregateCoreFilter documentation * xenapi: Tidy up Popen calls to avoid command injection attacks * Remove notify\_on\_any\_change option * Add unique constraints to Quota * Port images metadata functionality to v3 API Part 1 * Port scheduler hints extension to v3 API Part 2 * Adding action based authorization for keypairs * Port multinic extension to v3 API Part 1 * Port hypervisor API into v3 part2 * port Instance\_usage\_audit\_log API into v3 part2 * port Instance\_usage\_audit\_log API into v3 part1 * 
Fix metadata for create in child cell * update xen/vmware virt drivers not to hit db directly * Reduce nesting in instance\_usage\_audit * Port os-console-output extension to API v3 Part 1 * Fix to integer cast of length in console output extension * Imported Translations from Transifex * Add notifiers to both attach and detach volumes * Make test\_deferred\_delete() be deterministic * Added functionality for nova hooks pass functions * Fix compatibility with older confirm\_resize() calls * Pass instance host-id to Quantum using port bindings extension * libvirt: Fix spurious backing file existence check * Add unique constraint for security groups * powervm: make get\_host\_uptime output consistent with other virt drivers * Remove locals() from virt/vmwareapi package * Add HACKING check for db session param * Select disk driver for libvirt+Xen according to the Xen version * Port coverage API into v3 part2 * Port coverage API into v3 part1 * Fix grizzly compat issue in conductor rpc api * Xenapi shutdown should return True if vm is shutdown * Break out Compute Manager unit tests * Break out compute API unit tests * port Host API into v3 part1 * Imported Translations from Transifex * Standardize use of nova.db * Check system\_metadata type in \_populate\_instance\_for\_create * Clean up and make HACKING.rst DRYer * Sync db.models with migrations * Refactor ServerStatusTest class * Move tests db.api.instance\_\* to own class * Add tests for \`db.console\_pool\_\*()\` functions * Fix binding of SQL query params in DB utils * Make db.fakes stub out API not sqlalchemy * Reassign MAC address for vm when resize\_revert * test\_xmlutil.py covers more code in xmlutil.py * Handle UnexpectedTaskState and InstanceNotFound exceptions * Port quota classes extension to v3 API Part 2 * Ports image\_size extension to v3 API * xenapi: Add configurable BitTorrent URL fetcher * remove locals() from virt/hyperv package * Add resume state on host boot function to vmware Hyper * Port server\_diagnostics extension to v3 API Part2 * Port images functionality to v3 API Part 2 * Port cells extension to v3 API Part 2 * Notification support for host aggregate related operation * Fix vol\_usage\_update() DB API tests * Port consoles extension API into v3 part2 * Port consoles extension API into v3 part1 * Imported Translations from Transifex * New select\_destinations scheduler call * Session cleanup for db.security\_group\_\* methods * fix invalid logging * Port scheduler hints extension to v3 API Part 1 * Port config\_drive API to v3 Part 2 * Port config drive API to v3 Part 1 * Port images functionality to v3 API Part 1 * Moves scheduler.manager.\_set\_vm\_state\_and\_notify to scheduler.utils * VNC console does not work with VCDriver * Sane rest API rate limit defaults * Ignore lifecycle events for non-existent instances * Fix resizes with attached file-based volumes * Remove trivial cases of unused variables (3) * Remove locals() from compute directory * Hypervisor uptime fails if service is disabled * Fix metadata access in prep for instance objects * Sync to\_primitive() IPAddress support from Oslo * Merged flavor\_swap extension into core API * Fix typo for instance\_get\_all\_by\_filters() function * Implement get\_host\_uptime for powervm driver * Port flavor\_disabled extension to v3 API Part 2 * Fix sqlalchemy utils * Port flavor\_disabled extension to v3 API Part 1 * Port flavor\_access extension to v3 API Part 2 * Port flavor\_access extension to v3 API Part 1 * Fixes for quota\_sets v3 extension * Port
server password extension to v3 API Part 1 * Port Simple\_tenant\_usage API to v3 Part 2 * xenapi: Remove vestigial \`compile\_metrics\` code * Add update() method to NovaObject for dict compatibility * Add obj\_to\_primitive() to recursively primitiveize objects * Make sure periodic instance reclaims continues on error * Remove broken config\_drive image\_href support * Report the az based on the value in the instance table * Allow retrying network allocations separately * Imported Translations from Transifex * Better default for my\_ip if 8.8.8.8 is unreachable * Fix a couple typos in the nova.exception module * Make fake\_network tolerant of objects * Prepare fake instance stubs for objects * Make info\_cache handle when network\_info is None * Fix instance object's use of a db query method parameter * Make NovaObject support the 'in' operator * Add Instance.fault * Add basic InstanceFault model * xenapi: Make BitTorrent url more flexible * xenapi: Improve cross-device linking error message * db.compute\_node\_update: ignore values['update\_at'] * Make sure periodic cleanup of instances continues on error * Fix for failure of periodic instance cleanup * Update instance properties values in child cells to create instance * port Attach\_interface API into v3 part1 * Sync models.Console\* with migrations * Port quota API into v3 part2 * Stop creating folders in virt unit tests * Imported Translations from Transifex * Refresh volume connections when starting instances * Fix trivial mismatch of license header * Exception message of 'live migration' is not appropriate * Sync rpc from oslo-incubator * Fix types in test\_ec2\_ids\_not\_found\_are\_printable * Port quota API into v3 part1 * Skip security group code when there is no network * Sync db.models and migrations * Update pyparsing to 1.5.7 * Make InstanceList filter non-column extra attributes * Add Instance.security\_groups * Add basic SecurityGroup model * Revert XenApi virt driver should throw exception * Imported Translations from Transifex * Avoid redefining host to none in get\_instance\_nw\_info(...) * Extract live-migration scheduler logic from the scheduler driver * Fix the filtered characters list from console-log * Add invalid number checking in flavor creation api * Port quota classes extension to v3 API Part 1 * Remove usage of locals() from powervm virt package * Fix xenstore-rm race condition * Refactor db.security\_group\_get() instance join behavior * Fix serialization of iterable types * Fix orphaned instance from get\_by\_uuid() and \_from\_db\_object() * refactor security group api not to raise http exceptions * Perform additional check before live snapshotting * Do not raise NEW exceptions * Baremetal\_deploy\_helper error message formatting * Fix sys\_meta access in prep for instance object * Cells: Pass object for start/stop * Clarify the compute API is\_volume\_backed\_instance method * Add AggregateCoreFilter * Port extended-server-attributes into v3 part1 * Add AggregateRamFilter * Fix KeyError exception when scheduling to child cell * Port missing bits from httplib2 to requests * Revert "fixes nova resize bug when force\_config\_drive is set."
* Port extended status extension to v3 API Part 1 * Fix quota logging on exceptions * XenApi virt driver should throw exception on failure * Retry quota\_reserve on DBDeadlock * Handle NoMoreFixedIps in \_shutdown\_instance * Make sure instance\_type has extra\_specs * Remove locals() from nova/virt/libvirt package * Fix importing InstanceInfoCache during register\_all() * Make \_poll\_unconfirmed\_resizes() use objects * Revert "Add oslo-config-1.2.0a2 and pbr>=0.5.16 to requirements." * Preserve network order when using ConfigDrive * Revert "Initial scheduler support for instance\_groups" * fixes nova resize bug when force\_config\_drive is set * Add troubleshoot to baremetal PXE template * Sync db.models.Quota\* with migrations * Modify \_assertEqualListsOfObjects() function * Port hypervisor API into v3 part1 * Remove a layer of nesting in \_poll\_unconfirmed\_resizes() * Use InstanceList for \_heal\_instance\_info\_cache() * Remove straggling use of all-kwarg object methods * Allow scheduler manager NoValidHost exception to pass over RPC * Imported Translations from Transifex * Add oslo-config-1.2.0a2 and pbr>=0.5.16 to requirements * Remove usage of locals() for formatting from nova.scheduler.\* * Libvirt driver: normalize variable names (part1) * xenapi: script to rotate the guest logs * Clean up scheduler tests * Drop unused \_virtual\_power\_settings global * Remove junk file when ftp transfer fails * xenapi: revisit error handling around calls to agent * Remove the unused plugins framework * Added unit tests for vmware cluster driver * Adds expected\_errors decorator for API v3 * Sync oslo-incubator gettextutils * port Simple\_tenant\_usage API into v3 part1 * Remove db session hack from conductor's vol\_usage\_update() * Converts scheduler.utils.build\_request\_spec return to json primitive * Revert "Delegate authentication to quantumclient" * Retry the sfdisk command up to 3 times * No support for double nested 64 bit guest using VCDriver * Fill context on objects in lists * Setting static ip= for baremetal PXE boot * Add tests for libvirt's reboot functionality * Check the instance ID before creating it * Add missing tests for nova.db.api.instance\_system\_metadata\_\* * Add err\_msg param to baremetal\_deploy\_helper * Remove \_is\_precooked pre-cells Zones hacks * Raise max header size to accommodate large tokens * Make NovaObject support extra attributes in items() * Imported Translations from Transifex * Fix instance obj refresh() * Fix overzealous conductor test for vol\_usage\_update * Add missing tests for certificate\_\* methods * Log xml in libvirt \_create\_domain failures * Add unique constraints to Cell * Accept is\_public=None when listing all flavors * Add missing tests for cell\_\* methods * Add missing tests for nova.db.api.instance\_metadata\_\* * Don't deallocate network if destroy times out * Port server\_diagnostics extension to v3 API Part1 * Add old display name to update notification * Port fping extension to v3 API Part 1 * libvirt fix resize/migrates with swap or ephemeral * Allow reboot or rebuild from vm\_state=Error * Initial scheduler support for instance\_groups * Fix the ServerPasswordController class doc string * Imported Translations from Transifex * Cleanup certificate API extension * Enforce sqlite-specific flow in drop\_unique\_constraint * Remove unused cert db method * Fix bad vm\_state change in reboot\_instance() * Add rpc client side version control * xenapi: ensure agent check respects image flags * Drop \`bm\_pxe\_ips\` table from
baremetal database * Adding fixed\_ip in create.end notification * Improved tests for instance\_actions\_\* * Refactored tests for instance\_actions\_\* * Add missing tests for provider\_fw\_rule\_\* methods * Session cleanup for db.security\_group\_rule\_\* methods * Add tests for nova.db.api.security\_group\_rule\_\* methods * Refactors qemu image info parsing logic * Port cells extension to v3 API Part 1 * Organize limits units and per-units constants * Fix flavor extra\_specs filter not working for numbers * Replace utils.to\_bytes() with strutils.to\_bytes() * Updates nova.conf.sample * Remove bin lookup in conf sample generator * Refactor conf sample generator script * Remove unused arg from make\_class\_properties.getter method * Fix obj\_load() in NovaObject base class * Backup and restore object registry for tests * Fix the wrong reference by CONF * Port flavors core API to v3 tree * Remove usage of locals() from xenapi package * Remove trivial cases of unused variables (1) * Don't make nova-compute depend on iSCSI * Change resource links when url has no project id * Make sync\_power\_state routines use InstanceList * Enhance the validation of the quotas update * Add missing tests for compute\_node\_\* methods * Fix VMware hypervisor not honoring hw\_vif\_model image property * Remove use of locals() in db migrations * Don't advertise mute cells capabilities upwards * Allow confirm\_resize if instance is in 'deleting' status * Port certificates API to v3 Part 2 * port agent API into v3 part1 * Port certificates API to v3 Part 1 * Naming instance directory by uuid in VMware hypervisor * Revert "Fix local variable 'root\_uuid' ref before assign" * Use Python 3.x compatible octal literals * Fix and enable H403 tests * Remove usage of locals() from manager.py * Fix local variable 'root\_uuid' ref before assign * Improve the performance of migration 186 * Update to the latest stevedore * Quantum API \_get\_floating\_ip\_by\_address mismatch with Nova-Net * xenapi: remove auto\_disk\_config check during resize * xenapi: implement get\_console\_output for XCP/XenServer * Check libvirt version earlier * update\_dns() method optimization * Sync can\_send\_version() helper from oslo-incubator * Remove unused db api call * Quantumapi returns an empty network list * Add missing tests for nova.db.api.network\_\* * Cleanup overshadowing in test\_evacuate.py * Give a way to save why a service has been disabled * Cells: Add support for global cinder * Fix race conditions with xenstore * Imported Translations from Transifex * Remove explicit distribute depend * Fix assumption that port has port\_security\_enabled * Rename functions in nova.compute.flavors from instance\_type * Remove redundant architecture property update in powervm snapshot * Use an inner join on aggregate\_hosts in aggregate\_get\_by\_host * xenapi: ensure instance metadata always injected into xenstore * Nova instance group DB support * Fix to disallow server name with all blank spaces * Replace functions in utils with oslo.fileutils * Refactors get\_instance\_security\_groups to only use instance\_uuid * Create an image BDM for every instance * DB migration to the new BDM data format * Fix dangling LUN issue under load with multipath * Imported Translations from Transifex * Add missing tests for s3\_image\_\* methods * Register libvirt driver with closed connection callback * Enhance group handling in extract\_opts * Removed code duplication in conductor.api * Refactored tests for instance\_fault\_\* * Added verbose error message in tests helper
mixin * Adds v3 API extension discovery filtering * Adds support for the Indigo Virtual Switch (IVS) * Some libvirt driver lookups lack proper exception handling * Put VM UUID in live migration error notification * Fix db.models.Instance description * Fix db.models.Certificate description * Fix db.models.ComputeNodeStats description * Fix db.models.ComputeNode description * Fix db.models.Service description * BDM class and transformation functions * Remove unused method in VMware driver * Cleanup nova exception message conversion * Update analyze\_opts to work with new nova.conf sample format * Remove unused methods from VirtAPI * Make xenapi use Instance object for host\_maintenance\_mode() * Make xenapi/host use instance objects for \_uuid\_find * Use InstanceList object for init\_host * Add Instance.info\_cache * Use Instance Objects for Start/Stop * Add lists of instance objects * Add base mixin class for object lists * Add deleted flag to NovaObject base * Export volume metadata to new instances * Sending volume IO usage broken * Rename unique constraints due to new convention * Replace openstack-common with oslo in HACKING.rst * Fixes test\_config\_drive unittest * Port evacuate API to v3 Part 2 * Port evacuate API to v3 Part 1 * Speeding up scheduler tests * Port rescue API to v3 Part 2 * Port rescue API to v3 Part 1 * Handle security group quota exceeded gracefully * Adds check that the core V3 API is loaded * Call virt.driver.destroy before deallocating network * More KeypairAPI cleanups * Improve Keypair error messages in osapi * Fix Keypair exception messages * Moving more tests to appropriate locations * Skip ipv6 tests on system without ipv6 support * Keypair API test cleanup * Alphabetize v3 API extension entry point list * Add missing exception to cell\_update() * Refactors scheduler.chance.select\_hosts to raise NoValidHost * Enhance unit test code coverage for availability zone * Converts 'image' to json primitive on compute.rpcapi.prep\_resize * Import osapi\_v3/enabled option in nova/test * Regenerate missing resized backing files * Moving \`test\_misc\` tests to better locations * Allocate networks in the background * Make the datetime utility function coerce to UTC * API to get the Cell Capacity * Update rpc/impl\_qpid.py from oslo * More detailed log in failing aggregate extra filter * xenapi: Added logging for sparse copy * Make object actions pass positional arguments * Don't snat all traffic when force\_snat\_range is set * Add x-compute-request-id header when no response body * Call scheduler for run\_instance from conductor * correctly set iface-id in vmware driver * Fix a race where a soft deleted instance might be removed by mistake * Fix quota checks while resizing up by admin * Refactor libvirt driver exception handling * Avoiding multiple code loops in filter scheduler * Don't log warn if v3 API is disabled * Link to explanation of --checksum-full rule * Imported Translations from Transifex * Stop libvirt errors from outputting to stderr * Delete unused bin directory * Make instance object tolerate isotime strings * Add fake\_instance.py * Fix postgresql failures related to Data type * hardcode pbr and d2to1 versions * Silence exceptions from qpid connection.close() (from oslo) * Add Davanum to the mailmap * Fix VMwareVCdriver reporting incorrect stats * Adds ability to black/whitelist v3 API extensions * Clean up vmwareapi.network\_util.get\_network\_with\_the\_name * Imported Translations from Transifex * Normalize path for finding api\_samples dir * Add
yolanda to the mailmap * Add notes about how doc generation works * python3: Add py33 to tox.ini * Improve Python 3.x compatibility * Ports consoles API to v3 API * Fix nova-compute failing to start if quantum is down * Handle instance directories correctly for migrates * Remove unused launch\_time from instance * Launch\_at and terminated\_at on server(s) response * Fixed two minor docs niggles * Adds v3 API disable config option * Fix bug where consoleauth depended on remote conductor service * Only update cell capabilities once * Ports ips api to v3 API * Make pylint ignore nova/objects/ * Set resized instance back to original vm\_state * Add power\_on flag to virt driver finish/revert migration methods * Cosmetic fix to parameter name in DB API * compute.api call conductor ComputeTaskManager for live-migrate * Removed session from reservation\_create() * Raise exception instances not exception classes * \_s3\_create handles image being deleted * Imported Translations from Transifex * Add instance object * Add base object model * Enhance multipath parsing * Don't delete sys\_meta on instance delete * Fix volume IO usage notifications being sent too often * Add missing os.path.abspath around csrfile * Fix colorizer throwing exception when a test fails * Add db test that checks that shadow tables are up-to-date * Sync shadow table for 159 migration * Sync shadow table for 157 migration * Sync shadow table for 156 migration * Add missing tests for nova.db.api.quota\_\* methods * Add tests for some db.security\_group\_\* methods * Fix \_drop\_unique\_constraint\_in\_sqlite() function * Clean up failed image transfers in instance spawn * Make testr preserve existing OS\_\* env var values * Fix msg version type sent to cells RPC API * Verify that CONF.compute\_driver is defined * Fix EC2 RegisterImage ImageLocation starts with / * Support Cinder mount options for NFS/GlusterFS * Raise exception instances, not exception classes * Add update method for security group name and description * Cell weighing class to handle mute child cells * Add posargs support to flake8 call * Enumerate Flake8 E12x ignores * Fix and enable flake8 F823 * Fix and enable flake8 F812 * libvirt: improve the specification of network disks * Imported Translations from Transifex * In utils.tempdir, pass CONF.tempdir as an argument * Delegate authentication to quantumclient * Pull binary name from sys.argv[0] * Rename policy auth for V3 os-fixed-ips * Fix internationalization for some LOG messages * Enumerate Flake8 Fxxx ignores * Enable flake8 E721 * Removing misleading error message * No relevant message when stopping a stopped VM * Cells: Add filtering and weight support * API Extensions framework for v3 API Part 2 * fix a misleading docstring * xenapi: make the xenapi agent optional per image * Fix config drive code logical error * Add missing conversion specifier to ServiceGroupUnavailable * Deprecate compute\_api\_class option in the config * Add node as instance attribute for notification * removes project\_id/tenant\_id from v3 api urls * Set up 'compute\_task' conductor namespace * Removed superfluous eval usage * Fix log message * Sync shadow table for 179 migration * Remove copy paste from 179 migration * Sync shadow table for 175 and 176 migration * Change db \`deleted\` column type utils * Fix tests for sqlalchemy utils * Add missing tests for nova.db.api.quota\_class\_\* * Moved sample network creation out of unittest base class constructor * Add missing tests for db.api.reservation\_\* * add xml api sample tests to
os-tenant-network * Remove locals() usage from nova.virt.libvirt.utils * IPMI driver sets bootdev option persistently * update mailmap * Imported Translations from Transifex * Remove tempest hack for create/rebuild checks * Better error message on malformed request url * virt: Move generic virt tests to nova/tests/virt/ * vmwareapi: Move tests under tests/virt/vmwareapi/ * hyperv: Move tests under nova/tests/virt/hyperv * Fix UnboundLocalError in powervm lvm cleanup code * Delete a quota through admin api * Remove locals() usage from nova.virt.libvirt.volume * Importing correlation\_id middleware from oslo-incubator * Make a few places tolerant of sys\_meta being a dict * Remove locals() from scheduler filters * Rename requires files to standard names * Imported Translations from Transifex * translates empty remote\_ip\_prefix to valid cidr for nova * Reset task\_state when resetting vm\_state to ACTIVE * xenapi: Moving tests under tests/virt/xenapi/ * xenapi: Disable VDI size check when root\_gb=0 * Remove ImageTooLarge exception * Move ImageTooLarge check to Compute API * Share checks between create and rebuild * Remove path\_exists from NFS/GlusterFS drivers * Removed session from fixed\_ip\_\*() functions * Catch InstanceNotFound in instance\_actions GET * Using unicode() to handle image's properties * Adds live migration support to cells API * Raise AgentBuildNotFound on updating/destroying deleted object * Add missing tests for nova.db.api.agent\_build\_\* methods * Don't update API cell on get\_nwinfo * Optimize SecurityGroupsOutputController by len(servers) * get\_instance\_security\_groups() fails if no name on security group * libvirt: Moving tests under tests/virt/libvirt * Make it easier to add namespaced rpc APIs * baremetal: Move tests under tests/virt/baremetal * Disallow resize if image not available * powervm: Move tests under tests/virt/powervm * Sync RPC serializer changes from Oslo * Fix missing argument to logging warning call * set ERROR state when scheduler hits max attempts * Sync latest RPC changes from oslo * Add notification for live migration * Add requests requirement capped <1.2.1 * Adding tests for rebuild image checks * Add ImageNotActive check for instance rebuild * Fix error in instance\_get\_all\_by\_filters() use of soft\_deleted filter * Fix resize when instance has no image * Fixes encoding issues for nova api req body * Update run\_tests.sh to run flake8 too * Added validation for networks parameter value * Added attribute 'ip' to server search options * Make nova-api use servicegroup.API.service\_is\_up() * Add memorycache import into the oslo config * Fix require\_context() decorators * Imported Translations from Transifex * Remove locals() from nova/cells/\* * Update mailmap * Strip exec\_dirs prefix from rootwrap filters * Clean up test\_api\_samples a bit * Remove unnecessary parens in test\_volumes * Use strict=True instead of \`is\_valid\_boolstr\` * Editable default quota support * Remove usage of locals() for formatting from nova.api.\* * Switch to flake8+hacking * Fix flake8 errors in anticipation of flake8 * Don't update DB records for unchanged stats * baremetal: drop 'prov\_mac\_address' column * The vm\_state should not be modified until the task is complete * Return Customer's Quota Usage through Admin API * Use prettyxml output * Remove locals() from messages in virt/disk/api.py * 'm1.tiny' now has root\_gb=1 * Cast \`size\` to int before comparison * Don't raise unnecessary stack traces in EC2 API * Mox should clean up before stubs *
Reverse compare arguments in filters tests * Don't inject settings for dynamic network * Add ca cert file support to cinder client requests * libvirt: Catch VIR\_ERR\_NO\_DOMAIN in list\_instances * Revert "Include list of attached volumes with instance info" * Sync rpc from oslo * Remove openstack.common.version * Fix for missing multipath device name * Add missing tests for db.fixed\_ip\_\*() functions * xenapi: ensure vdi is not too big when resizing down * Cells: Don't allow active -> build * Fix whitespace issue in indent * Pass the proper admin context to update\_dhcp * Fix quantum security group driver to accept none for from/to\_port * Reverse path SNAT for DNAT floating-ip * Use Oslo's \`bool\_from\_string\` * Handle IPMI transient failures better * Improve unit tests for DB archiving * Remove "#!/usr/bin/env python" from .py files under nova/cmd * Add missing unique constraint to KeyPair model * Refactored tests for db.key\_pair\_\*() functions * Refactor nova.volume.cinder.API to reduce roundtrips with Cinder * Fix response from snapshot create stub * Hide lock\_prefix argument using synchronized\_with\_prefix() * Cleanups for create-flavor * Cleanup create flavor tests * Imported Translations from Transifex * Test for remote directory creation before shutting down instance * Fix run\_tests.sh usage of tools/colorizer.py * Move get\_table() from test\_migrations to sqlalchemy.utils * Convert Nova to use Oslo service infrastructure * Show the cause of virt driver error * Detach volume fails when using multipath iscsi * API extensions framework for v3 API * Sync service and threadgroup modules from oslo * Fix header issue for baremetal\_deploy\_helper.py * Extract getting instance's AZ into a helper module * Allow different paths for deploy-helper helpers * Show exception details for failed deploys * Imported Translations from Transifex * Check QCOW2 image size during root disk creation * Adds useful debug logging to filter\_scheduler * fix non-reporting of failures with floating IP assignment * Improve message and logging for corrupt VHD footers * Cleanup for test\_create\_server\_with\_deleted\_image * Check cached SSH connection in PowerVM driver * Allow a floating IP to be associated to a specific fixed IP * Record smoketest dependency on gFlags * Make resize/migrated shared storage aware * Imported Translations from Transifex * Add pointer to compute driver matrix wiki page * xenapi: cleanup vdi when disk too big exception raised * Update rootwrap with code from oslo * Fixes typo in server-evacuate-req.xml * Fix variable referenced before assignment in vmwareapi code * Remove invalid block\_device\_mapping volume\_size of '' * Architecture property updated in snapshot libvirt * Add sqlalchemy migration utils.create\_shadow\_table method * Add sqlalchemy migration utils.check\_shadow\_table method * Change type of cells.deleted from boolean to integer * Pass None to image if booted from volume in live migration * Raise InstanceInvalidState for double hard reboot * Removes duplicate assertEqual * Remove insecure default for signing\_dir option * Removes unnecessary check for admin context in evacuate * Fix zookeeper import and tests * Make sure that hypervisor nodename is set correctly in FakeDriver * Optimize db.instance\_floating\_address\_get\_all method * Session cleanup for db.floating\_ip\_\* methods * Optimize instance queries in compute manager * Remove duplicate gettext.install() calls * Include list of attached volumes with instance info * Catch volume create
exception * Fixes KeyError bug with network api associate * Add unit tests for VMware vif, and fix code logical error * Fix format error in claims * Fixes mock calls in Hyper-V test method * Adds instance root disk size checks during resize * Rename nova.compute.instance\_types to flavors * Convert to using newly imported processutils * Import new additions to oslo's processutils * Imported Translations from Transifex * Enable live block migration when using iSCSI volumes * Nova evacuate failed when VM is in SHUTOFF status * Transition from openstack.common.setup to pbr * Remove random print statements * Remove security\_group\_handler * Add cpuset attr to vcpu conf in libvirt xml * Imported Translations from Transifex * Remove references to LegacyFormatter in example logging.conf * libvirt: ignore NOSTATE in resume\_state\_on\_host\_boot() method * Sync oslo-incubator print statement changes * Fix stub\_instance() to include missing attributes * Add an index to compute\_node\_stats * Convert to using oslo's execute() method * Import latest log module from oslo * Being more defensive around the use\_ipv6 config option * Update hypervisor\_hostname after live migration * Make nova-network support requested nic ordering * nova coverage creates lots of empty folders * fix broken WSDL logic * Remove race condition (in FloatingIps) * Add missing tests for db.floating\_ip\_\* methods * Deprecate show\_host\_resources() in scheduler manager * Add force\_nodes to filter properties * Adds --addn-hosts to the dnsmasq arg list * Update our import of oslo's processutils * Update oslo-incubator import * Delete InstanceSystemMetadata on instance deletion * vmwareapi: Add supported\_instances to host state * xenapi: Always set other\_config for VDIs * Copy the RHEL6 eventlet workaround from Oslo * Move db.fixed\_ip\_\* tests from DbApiTestCase to FixedIpTestCase * Checks if volume can be attached * Call format\_message on InstanceTypeNotFound exception * xenapi: Don't swallow missing SR exception * Prevent rescuing a VM with a partially mounted volume * Fix key error when lpar instance creation fails * Reset migrating task state for MigrationError exceptions * Volume IO usage gets reset to 0 after a reboot / crash * Sync small and safe changes from oslo * Sync jsonutils from oslo * Fix EC2 instance bdm response * Rename \_check\_image\_size to \_get\_and\_check\_image\_metadata * Convert the cache key from unicode to a string * Catch glance image create exceptions * Update to using oslo periodic tasks implementation * Import oslo periodic tasks support * import and install gettext in vm\_vdi\_cleaner.py * Fix baremetal get\_available\_nodes * Fix attach when running as root without sysfsutils * Make \_build\_network\_info\_model testable * Fix building quantumapi network model with network list * Add the availability\_zone to the volume.usage notifications * Add delete\_net\_interface function * Performance optimization for contrib.flavorextraspecs * Small whitespace tweak * Kill off usage of locals() in the filter\_scheduler * Remove local variable only used in logging * Create instance with deleting image * Refactor work with db.instance\_type\_\* methods * Fix flakey TestS3ImageService bug * Add missing snapshot image properties for VMware hypervisor * Imported Translations from Transifex * Fix VMware hypervisor console url parameter error * Update NovaBase model per changes on oslo.db.sqlalchemy * Send an instance create error notification * Refactor \_run\_instance() to unify control flow * set bdm['volume\_id']
to None rather than delete it * Destroy conntrack table on source host during migration * Adds tests for isolated\_hosts\_filter * Fixes race condition of deleting floating ip * Imported Translations from Transifex * Wrong proxy port in nova.conf for Spice proxy * Fix missing kernel output via VNC/Spice on boot * Fix bug in db.instance\_type\_destroy * Move get\_backdoor\_port to base rpc API * Move db.instance\_type\_extra\_specs\_\* to db.instance\_type\_\* methods * Add missing test for db.instance\_type\_destroy method * Fix powervm driver resize instance error * Support FlatDHCP network for VMware hypervisor * Imported Translations from Transifex * Deprecate conductor ping method * Add an rpc API common to all services * If rescue fails don't error the instance * Make os.services.update work with cells * Fix fixed\_ip\_count\_by\_project in DB API * Add unit tests for /db/api.py#fixed\_ip\_\* * Remove unnecessary method argument * Improve Python 3.x compatibility * ec2 CreateVolumes/DescribeVolumes status mapping * Can now reboot rescued instances in xenapi * Allows xenapi 'lookup' to look for rescue mode VMs * Adds tests to xenapi.vm\_utils's 'lookup' method * Imported Translations from Transifex * Stop vm\_state reset on reboot of rescued vm * Fix hyperv copy file error being logged incorrectly * Fix ec2 CreateVolumes/DescribeVolumes status * Imported Translations from Transifex * Don't swallow PolicyNotAuthorized for resize/reboot actions * Remove unused exception and variable from scheduler * Remove unnecessary full resource audits at the end of resizes * Update the log module from oslo-incubator * Translate NoMoreFloatingIps exception * Imported Translations from Transifex * Fix up regression tester * Delete extra space in api/volumes message * Map internal S3 image state to EC2 API values * removing unused variable from a test * Translate cinder NotFound exception * Make hypervisor tests use a more accurate db * Added comments to quantum api client * Cleanup and test volume usage on volume detach * Import and convert to oslo loopingcall * Remove orphaned db method instance\_test\_and\_set * baremetal: VirtualPowerDriver uses mac addresses in bm\_interfaces * Sync rpc from oslo-incubator * Correct disk's over-committed size computation error * Imported Translations from Transifex * Allow listing fixed\_ips for a given compute host * Imported Translations from Transifex * baremetal: Change input for sfdisk * Make sure confirm\_resize finishes before setting vm\_state to ACTIVE * Completes the power\_state mapping from compute driver and manager * Make compute/manager use conductor for unrescue() * Add an extension to show the mac address of an ip in server(s) * Cleans up orphan compute\_nodes not cleaned up by compute manager * Allow for the power state interval to be configured * Imported Translations from Transifex * Fix bug in os-availability-zone extension * Remove unnecessary db call in scheduler driver live-migration code * baremetal: Change node api related to prov\_mac\_address * Don't join metadata twice in instance\_get\_all() * Imported Translations from Transifex * Don't hide stacktraces for unexpected errors in rescue * Fix issues with check\_instance\_shared\_storage * Remove "undefined name" pyflake errors * Optimize some of compute/manager's periodic tasks' DB queries * Optimize some of the periodic task database queries in n-cpu * Change DB API instance functions for selective metadata fetching * Replace metadata joins with another
query * xenapi: Make \_connect\_volume exc handler eventlet safe * Fix typo: libvir => libvirt * Remove multi scheduler * Remove unnecessary LOG initialisation * Remove unnecessary parens * Simplify random host choice * Add NOVA\_LOCALEDIR env variable * Imported Translations from Transifex * Clarify volume related exception message * Cleanup trailing whitespace in api samples * Add tenant/user id to volume usage notifications * Security groups may be unavailable * Encode consoleauth token in utf-8 to make it a str * Catch NoValidHost exception during live-migration * Evacuated instance disk not deleted * Fix a bad tearDown method in test\_quantumv2.py * Import eventlet in \_\_init\_\_.py * Raise correct exception for duplicate networks * Add an extension to show the network id of a virtual interface * Fix error message in pre\_live\_migration * Add reset function to nova coverage * Imported Translations from Transifex * nova-consoleauth fails to start when the consoleauth\_manager option is missing * set timeout for paramiko ssh connection * Define LOG globally in baremetal\_deploy\_helper * Allow describe\_instances to use tags for searches * Correct network uuid field for os-network extension * Only call getLogger after configuring logging * Add SecurityGroups API sample tests * Cannot boot vm if quantum plugin does not support L3 api * Add missing tests for instance\_type\_extra\_specs\_\* methods * Remove race condition (in InstanceTypeProjects) * Deprecate old vif drivers * Optimize resource tracker queries for instances * baremetal: Integrate provisioning and non-provisioning interfaces * Move console scripts to entrypoints * Remove deprecated Grizzly code * Fallback to conductor if types are not stashed * Imported Translations from Transifex * Resolve conflicting mac address in resize * Simplify and correct the bm partition sizes * Fix legacy\_net\_info guard * Fix SecurityGroups XML sample tests * Modify \_verify\_response to validate response codes * Fix a typo in attach\_interface error path * After migrate, catch and remove deleted instances * Grab instance for migration before updating usage * Explain why the given methods are whitelisted * libvirt: Get driver type from base image type * Guard against content being None * Limit the checks for block device becoming available * Fix \_error\_out\_instance exception handler * Raise rather than generating millions of IPs * Add unit tests for nova.volume.cinder.API * Update latest oslo.setup * baremetal: Drop unused columns in bm\_nodes * Remove print statements * Imported Translations from Transifex * Fix the python version comparison * Remove gettext.install() from nova/\_\_init\_\_.py * Sync latest gettextutils from oslo-incubator * Return 409 on creating/importing same name keypair * Delete tests.baremetal.util.new\_bm\_deployment() * Return proper error message when network conflicts * Better iptables DROP removal * Query quantum once for instance's security groups * quantum security group driver nova list shows same group * Sync in matchmaker and qpid Conf changes from oslo * improve handling of an empty dnsmasq --domain * Fix automatic confirmation of resizes for no-db-compute * 'injected\_files' should be base64 encoded * Add missing unit tests for FlavorActionController * Set default fixed\_ip quota to unlimited * Accepts aws-sdk-java timestamp format * Imported Translations from Transifex * get context from req rather than getting a new admin context * Use Cluster reference to reduce SDK calls * Fix missing punctuation in docstring
* xenapi: fix support for iso boot * Ensure only pickle-able objects live in metadata * sync oslo db/sqlalchemy module * Convert host value from unicode to a string * always quote dhcp-domain, otherwise dnsmasq can fail to start * Fix typo in the XML serialization os-services API * Add CRUD methods for tags to the EC2 API * Fix migrating instance to the same host * Rework time handling in periodic tasks * Show quota 'in\_use' and 'reserved' info * Imported Translations from Transifex * Fix quantum nic allocation when only portid is specified * Make tenant\_usage fall back to instance\_type\_id * Use format\_message on exceptions instead of str() * Add a format\_message method to the Exceptions * List AZs fails if there are disabled services * Switch nova-baremetal-deploy-helper to use sfdisk * Bring back colorizer again with error results * Imported Translations from Transifex * Adds Tilera back-end for baremetal * Always store old instance\_type during a migration * Make error msg on quantum client authentication failure more readable * Adding netmask to dnsmasq argument --dhcp-range * Add missing tests for db.instance\_type\_access\_\* methods * Remove race condition (in InstanceTypes) * Add missing tests for db.instance\_type\_\* methods * Imported Translations from Transifex * set up FakeLogger for root logger * Fix /servers/os-security-groups using quantum * NoneType exception thrown if driver live-migration check returns None * Add missing info to docstring * Include Co-authored-by entries in AUTHORS * Do not test foreign keys with SQLite version < 3.7 * Avoid using whitespace in test\_safe\_parse\_xml * xenapi: Retrieve VM uuid from xenstore * Reformat openstack-common.conf * Imported Translations from Transifex * Fixes Nova API /os-hosts missing element "zone" * disable colorizer as it swallows failures * Make iptables drop action configurable * Fixes argument order of quantumv2.api.get\_instance\_nw\_info * Make \_downsize\_quota\_delta() use stashed instance types * py2.6 doesn't support TextTestRunner resultclass * Reset ec2 image cache between S3 tests * Sync everything from oslo-incubator * Sync rpc from oslo-incubator * Don't log traceback on rpc timeout * Adds return-type in two functions' docstrings * Remove unnecessary checks in api.py * translate cinder BadRequest exception * Initialize compute manager before loading driver * Add a comment to placeholder migrations * xenapi: fix console for rescued instance * Fixes passing arbitrary conductor\_api argument * Make nova.virt.fake.FakeDriver usable in integration testing * Remove unnecessary DB call to find EC2 AZs * Remove outdated try except block in ec2 code * nova-manage vm list fails looking up 'instance\_type' * Update instance network info cache to include vif\_type * Bring back sexy colorized test results * Don't actually connect to libvirtd in unit tests * Add placeholder migrations to allow backports * Change arguments to volume\_detach() * Change type of ssh\_port option from Str to Int * xenapi: rpmbuild fixes * Set version to 2013.2 2013.1.rc1 ---------- * Fix Hyper-V instance conflicts * Add caching for ec2 mapping ids * Imported Translations from Transifex * fix add-fixed-ip with quantum * Update the network info when using quantum * List InstanceNotFound as a client exception * Refactor db.service\_destroy and db.service\_update methods * Fix console support with cells * Fix missing argument to QemuImageInfo * Add missing tests for db.virtual\_interface\_\* methods * Fix multiple fixed-ips with quantum * Add
missing tests for db.service\_\* methods * Ensure that headers are returned as strings, not integers * Enable tox use of site-packages for libvirt * Require netaddr>=0.7.6 to avoid UnboundLocalError * Pass project id in quantum driver secgroup list * Fixes PowerVM spawn failure due to missing attr supported\_instances * Fix RequestContext crashes w/ no service catalog * Prevent volume-attach/detach from instances in rescue state * Fix XenAPI performance issue * xenapi: Adding logging for migration plugin * libvirt: Tolerate existing vm(s) with cdrom(s) * Remove dead code * Remove unused virt.disk.api methods bind/unbind * Imported Translations from Transifex * Revert "Remove the usage of instance['extra\_specs' * Add standard methods to the Limits API * Store project\_id for instance actions * rstrip() strips characters, not strings * Fix use of libvirt\_disk\_prefix * Revert 1154253 causes XenServer image compat issue * Reset migrating task state for more Exceptions * Fix db archiving bug with foreign key constraints * Imported Translations from Transifex * Update migration 153 for efficiency * Don't include traceback when wrapping exceptions * Fix exception message in Networks API extension * Make conductor's quota methods pass project\_id properly * Fix: improve API error responses from os-hosts extension * Add missing API doc for networks-post-req * Make os-services API extensions consistent * Fix system\_metadata "None" and created\_at values * Add the serial to connection info for boot volumes * Do not accept invalid keys in quota-update * Add quotas for fixed ips * Makes safe xml data calls raise 400 http error instead of 500 * Fixes an iSCSI connector issue in the Hyper-V driver * Check keypair destroy result operation * Resize/Migrate refactoring fixes and test cases * Fixes Hyper-V live migration with attached volumes * Force nova to use keystone v2.0 for auth\_token * Fix issues with cells and resize * Fix copyright - from LLC to Foundation * Don't log traceback on expected console error * Generalize console error handling during build * Remove sqlalchemy calling back to DB API * Make ssh key injection work with xenapi agent * Fix use of potentially-stale instance\_type in tenant\_usage * Drop gzip flag from tar command for OVF archives * Fix reconnecting to libvirt * List ComputeHostNotFound as a client exception * Fix: Nova aggregate API throws an uncaught exception on invalid host * Clean up resources before rescheduling * nova-manage: remove unused import * Read instance resource quota info from "quota" namespace * LibvirtGenericVIFDriver update for stp * Switch to final 1.1.0 oslo.config release * Skip deleted fixed ip address for os-fixed-ips extension * Return error details to users in "dns-create-private-domain" * Lazy load CONF.quota\_driver * Fix cells instance deletion * Don't load system\_metadata when it isn't joined * List ConsoleTypeInvalid as a client exception * Make run\_instance() bail quietly if instance has been deleted * Delete instance metadata when deleting VM * Virtual Power Driver list running vms quoting error * Refactor work with session in db.block\_device\_mapping\_\* methods * Add missing tests for db.block\_device\_mapping\_\* methods * websockify 0.4 is busted * Sync rpc from oslo-incubator * Fix: nova-manage throws uncaught exception on invalid host/service * Fix more OS-DCF:diskConfig XML handling * Fix: Managers that incorrectly derive from SchedulerDependentManager * Fix nova-manage --version * Pin SQLAlchemy to 0.7.x * Deprecate
CONF.fixed\_range, do dynamic setup * Remove the usage of instance['extra\_specs'] * Fix behaviour of split\_cell\_and\_item * Fix quota issues with instance deletes * Fixes instance task\_state being left as migrating * Force resource updates to update updated\_at * Prepare services index method for use with cells * Handle vcpu counting failures gracefully * Return XML message with objectserver 404 * xenapi: Fix reboot with hung volumes * Rename LLC to Foundation * Pass migration\_ref when auto-confirming * Revert changing to FQDN for hostnames * Add numerous fixes to test\_api\_samples * Fixes instance action exception in "evacuate" API * Remove instance['instance\_type'] relationship from db api * Refactor db tests to ensure that notdb driver is used * Rewrap two lines * Server create will only process "networks" if os-networks is loaded * Fixes 'nbd device can't be released' error * Correct exception args in vfs/guestfs * Imported Translations from Transifex * Prevent nova services' coverage data from combining into nova-api's * Check if flavor id is an empty string * Simple syntax fix up * Fixes volume attach on Hyper-V with IPv6 * Add ability to control max utilization of a cell * Extended server attributes can show wrong hypervisor\_hostname * Imported Translations from Transifex * Remove uses of instance['instance\_type'] from nova/notifications * Libvirt driver creates images even without meta * Prevent rescue for volume-backed instances * Fix OS-DCF:diskconfig XML handling * Imported Translations from Transifex * Compile BigInteger to INTEGER for sqlite * Add conductor to nova-all * Make bm model's deleted column match database * Update to Quantum Client 2.2.0 * Remove uses of instance['instance\_type'] from nova/scheduler * Remove uses of instance['instance\_type'] from nova/api * Remove uses of instance['instance\_type'] from nova/network * Remove uses of instance['instance\_type'] from nova/compute * Correct substring matching of baremetal VPD node names * Fix wrong syntax for set:tag in dnsmasq startup option * Fix instance evacuate with shared storage * nova-manage: remove redundant 'dest' args * clear up method parameters for \_modify\_rules * Check CONF values \*after\* command line args are parsed * Make nova-manage db archive\_deleted\_rows more explicit * Fix for delete error in Hyper-V - missing CONF imports * add .idea folder to .gitignore; pycharm creates this folder * Make 'os-hosts/node1' case sensitivity defer to DB * Fix access\_ip\_\* race * Add MultipleCreate template and fix conflict with other templates * Update tox.ini to support RHEL 6.x * Fix instance type cleanup when doing a same-id migration * Tiny typo * Remove unnecessary setUp() and tearDown() methods * Remove duplicate API logging * Remove uses of instance['instance\_type'] from libvirt driver * Remove uses of instance['instance\_type'] from powervm driver * Remove uses of instance['instance\_type'] from xenapi driver * Fixed image filter support for vmware * Switch to oslo.config * Fix instance\_system\_metadata deleted columns * Remove parameters containing passwords from Notifications * Add missing action\_start if deleting resized inst * Fix issues with re-raising exceptions * Don't traceback in the API on invalid keypair * delete deleted image 500 bug * Moves Hyper-V options to the hyperv section * Fix 'to integer' conversion of max and min count values * Standardize ip validation across the code * Adjusts reclaim instance interval of deferred delete tests * Fix Network object encoding issue
when using qpid * Rename VMWare to VMware * Put options in a list * Bump instance updated\_at on network change * Catching InstanceNotFound exception during instance reboot * Imported Translations from Transifex * Remove completed FIXME * quantum security\_group driver queries db regression * Prevent reboot of rescued instance * Baremetal deploy helper sets ODIRECT * Read baremetal images from extra\_specs namespace * Rename source\_(group\_id/ip\_prefix) to remote\_(group\_id/ip\_prefix) * docs should indicate proper git commit limit * Improve db.sqlalchemy.api.\_validate\_unique\_server\_name method * Remove unused db calls from nova.db.api * Fixes oslo-config update for deprecated\_group * fix postgresql drop race * Compute manager should remove dead resources * Fix an error in compute api snapshot\_volume\_backed bdm code * Fixes disk size issue during image boot on Hyper-V * Updating powervm driver snapshot with update\_task\_state flow * Imported Translations from Transifex * Add ssh port and key based auth to VPD * Make ComputeManager \_running\_deleted\_instances query by uuid * Refactor compute manager \_get\_instances\_by\_driver * Fix target host variable being overwritten * Imported Translations from Transifex * Fixes live migration with attached volumes issue * Don't LOG.error on max\_depth (by default) * Set vm\_state to ERROR on net deallocate failure * validate security\_groups on server create * Fix IBM copyright strings * Implement rules\_exist method for quantum security group driver * Switch to using memorycache from oslo * Remove pylint errors for undefined GroupException members * Sync timeutils and memorycache from oslo * instance\_info\_cache\_update creates wrongly * Tone down logging while waiting for conductor * Add os-volumes extension to api samples * Regenerate nova.conf.sample * Fix ephemeral devices on LVM not getting mkfs'd * don't stack trace if long ints are passed to db * Pep8/pyflakes cleanup of deprecated\_api * Fix deprecated network api * Fixes the Hyper-V driver's method signature * Imported Translations from Transifex * Fixes a Hyper-V live migration issue * Don't use instance['instance\_type'] for scheduler filters in migration * Fallback coverage backdoor telnet connection to lo * Add instance\_type\_get() to virt api * Make compute manager revert crashed migrations on init\_host() * Adds API Sample tests for Volume Attachments * Ensure that FORWARD rule also supports DHCP * Remove duplicate options(joinedload) from aggregates db code * Shrink size of aggregate\_metadata\_get\_by\_host sql query * Remove old commented out code in sqlalchemy models * Return proper error messages while disassociating floating IP * Don't blindly skip first migration * Imported Translations from Transifex * Suppress retries on UnexpectedTaskStateErrors * Fix \`with\_data\` handling in test-migrations * BM Migration 004: Actually drop column * Actually run baremetal migration tests * Adds retry on upload\_vhd for xapi glance plugin * ec2 \_format\_security\_group() accesses db when using quantum\_driver * Remove un-needed methods * Prevent hacking.py from crashing on unexpected import exception * Bump python-quantumclient version to 2.1.2 * Improve output msgs for \_compare\_result * Add a 'hw\_' namespace to glance hardware config properties * Makes sure required powervm config options are set * Update OpenStack LLC to Foundation * Improve hacking's docstring detection * Make sure no duplicate forward rules can exist * Use min\_ram of original image for snapshot,
even with VHD * Revert IP Address column length to 39 * Additional tests for safe parsing with minidom * Make allocate\_for\_instance() return only info about ports allocated * Fix crash in quantumapi if no network or port id is specified * Unpin PasteDeploy dependency version * Unpin routes dependency version * Unpin suds dependency version * Unpin Cheetah dependency version * Allow zk driver to be imported without zookeeper * Retry floating\_ip\_fixed\_ip\_associate on deadlock * Fix hacking.py to handle 'cannot import x' * Add missing import to fakelibvirt * Migration 148: Fix drop table dependency order * Minor code optimization in \_compute\_topic * Fix hacking.py to handle parentheses in from import as * Fix redefinition of function test\_get\_host\_uptime * Migration 147: Prevent duplicate aggregate\_hosts * Rework instance actions to work with cells * Fix incorrect zookeeper group name * Sync nova with oslo DB exception cleanup * Fix broken baremetal migration tests * if reset fails, display the command that failed * Remove unused nova.db.api:instance\_get\_all\_by\_reservation * Add API Sample tests for Snapshots extension * Run libguestfs API calls in a thread pool * Change nova-dhcpbridge FLAGFILE to a list of files * Imported Translations from Transifex * Re-add run\_tests.sh --debug option * Clean unused kernels and ramdisks from image cache * Imported Translations from Transifex * Ensure macs can be serialized * Remove Print Statement * Prevent default security group deletion * libvirt: lxml behavior breaks version check * Add missing import\_opt for flat\_injected * Add processutils from oslo * Updates to OSAPI sizelimit middleware * Remove compat cfg wrapper * Fix exception handling in baremetal API * Make guestfs use same libvirt URI as Nova * Make LibvirtDriver.uri() a staticmethod * Enable VM DHCP request to reach DHCP agent * Don't set filter name if we use Noop driver * Removes unnecessary qemu-img dependency on powervm driver * Migration 146: Execute delete call * Add \`post\_downgrade\` hook for migration tests * Fix migration snake-walk * BM Migrations 2 & 3: Fix drop\_column statements * Migration 144: Fix drop index statement * Remove function redefinitions * Migration 135: Fix drop\_column statement * Add missing ec2 security group quantum mixin * Fix baremetal migration skipping * Add module prefix to exception types * Flush tokens on instance delete * Fix launching libvirt instances with swap * Spelling: compatable=>compatible * import base\_dir\_name config option into vmwareapi * Fix ComputeAPI.get\_host\_uptime * Move DB thread pooling to DB API * Use a fake coverage module instead of real one * Standardize the coverage initializations * Sync eventlet\_backdoor from oslo-incubator * Sync rpc from oslo-incubator * Fix message envelope keys * Remove race condition (in Networks) * Move some context checking code from sqlalchemy * Baremetal driver returns accurate list of instances * Identify baremetal nodes by UUID * Improve performance of baremetal list\_instances * Better error handling in baremetal spawn & destroy * Wait for baremetal deploy inside driver.spawn * cfg should be imported from oslo.config * Add Nova quantum security group proxy * Add a volume driver in Nova for Scality SOFS * Make nova security groups more pluggable * libvirt: fix volume walk of /dev/disk/by-path * Add better status to baremetal deployments * Fix handling of source\_groups with no-db-compute * Improve I/O performance for periodic tasks * Allow exit code 21 for 'iscsiadm -m session'
* Removed duplicate spawn code in PowerVM driver * Add API Sample tests for Hypervisors extension * Log lifecycle events at INFO (not ERROR) * Sync rpc from oslo-incubator * sync oslo log updates * Adding ability to specify the libvirt cache mode for disk devices * Sync latest install\_venv\_common.py * Make add-fixed-ip update nwfilter within libvirt * Refactor nwfilter parameters * ensure we run db tests in CI * More gracefully handle TimeoutException in test * Multi-tenancy isolation with aggregates * Fix pep8 issues with test\_manager.py * Fix broken logging imports * Fix hacking test to handle namespace packages * Use oslo-config-2013.1b4 * support preallocated VM images * Fix instance directory path for lxc * Add snapshot methods to fakes.py * PowerVMDiskAdapter detach/cleanup refactoring * Make ComputeTestCase.test\_state\_revert faster * Add an extension to show image size * libvirt: Use uuid for instance directory name * Support running periodic tasks immediately at startup * Fix XMLMatcher error reporting * Fix XML config tests for disk/net/cpu tuning * Add support for network adapter hotplug * Handle lifecycle events in the compute manager * Add support for lifecycle events in the libvirt driver * Enhance IPAddresses migration tests * Add basic infrastructure for compute driver async events * Fix key check in instance actions formatter * Add a safe\_minidom\_parse\_string function * Documentation cleanups for nova devref * Fix leak of loop/nbd devices in injection using localfs * Add support for instance CPU consumption control * Add support for instance disk IO control * Retry bw\_usage\_update() on innodb Deadlock * Change CIDR column size on migration version 149 * Provide way to pass rxtx factor to quantum * Fibre channel block storage support (nova changes) * Default SG rules for the Security Group "Default" * create new cidr type for data storage * Ensure rpc results are primitive types * Change all instances of the non-word "inteface" to "interface" * Remove unused nova.db.api:network\_get\_by\_bridge * Fix a typo in two comments.
networksa -> networks * Live migration with an auto selection of dest * Remove unused nova.db.api:network\_get\_by\_instance * Fix network list and show with quantum * Remove unused db calls from nova.db.sqlalchemy.api * Remove unused db calls * Small spelling fix in sqlalchemy utils * Fix \_get\_instance\_volume\_block\_device\_info call parameter * Do not use abbreviated config group names (zookeeper) * Prevent the unexpected with nova-manage network modify * Fix hacking tests on osx * Enable multipath for libvirt iSCSI Volume Driver * Add select\_hosts to scheduler manager rpc * Add and check data functions for test\_migrations 141 * fix incorrectly defined ints as strs * Remove race condition (in TaskLog) * Add generic dropper for duplicate rows * Imported Translations from Transifex * Fix typo/bug in generic UC dropper * remove intermediate libvirt downloaded images * Add support for instance vif traffic control * Add libvirt XML schema support for resource tuning parameters * Fix instance cannot be deleted after soft reboot * Correct spelling of quantum * Make pep8 tests run inside virtualenv * Remove tests for non-existing SimpleScheduler * libvirt: Fix LXC container creation * Rename 'connection' to 'driver' in libvirt HostState * Ensure there is only one instance of LibvirtDriver * Stop unit test from prompting for a sudo password * clean up missing whitespace after ':' * Push 'Error' result from event to instance action * Speed up the revert\_state test * Add image to request\_spec during resize * Ensure start time is earlier than end time in simple\_tenant\_usage * Split out body of loop in \_sync\_power\_states in compute manager * Remove dead variable assignment in compute manager * Assign unique names with os-multiple-create * Nova network needs to take care of existing alias * Delete baremetal interfaces when their parent node is deleted * Harmonize PEP8 checking between tox and run\_tests.sh * VirtualPowerDriver catches ProcessExecutionError * [xenapi] Cooperatively yield during sparse copy * Allow archiving deleted rows to shadow tables, for performance * Adds API Sample tests for FlavorAccess extension * Add an update option to run\_tests.sh * filter\_scheduler: Select from a subset of hosts * use nova-conductor for live-migration * Fix script argument parsing * Add option to make cross AZ attach configurable * relocatable roots don't handle testr args/opts * Remove a log message in test code * add config drive to api\_samples * Don't modify injected\_files inside PXE driver * Synchronize code from oslo * Canonizes IPv6 before inserting it into the db * Only dhcp the first ip for each mac address * Use connection\_info on resize * Fix add-fixed-ip and remove-fixed-ip * API extension for accessing instance\_actions * Use joinedload for system\_metadata in db * Add migration with data test for migration 151 * Correct misspelling in PowerVM comment * Add GlusterFS libvirt volume connector * Module import style checking changes * Stub additional FloatingIP methods in FlatManager * Resize/Migrate functions for PowerVM driver * Added a service heartbeat driver using Memcached * Use a more specific error when reporting invalid disk hardware * Allow VIF model to be chosen per image * Check the length of flavor name in "flavor-create" * Add API sample tests to Services extension * VMWare driver to use current nova.network.model * Add "is not" test to hacking.py * Update tools/regression\_tester * Fix passing conductor to get\_instance\_nw\_info() * Imported Translations from Transifex *
Make compute manager use conductor for stopping instances * Move allowvssprovider=false to vm-data field * Allow aggregate create to have None as the az * Forces flavorRef to be a string in servers resize api * xenapi: Remove unnecessary exception handling * Sync jsonutils from openstack-common * Simplify and optimize az server output extension * Add an extension to show the type of an ip * Ensure that only one IP address is allocated * Make the metadata paths use conductor * Fix nova-compute use of missing DBError * Adding support for AoE block storage SANs * Update docs about testing * Allow generic rules in context\_is\_admin rule in policy * Implements resize / cold migration on Hyper-V * test\_(dis)associate\_by\_non\_existing\_security\_group\_name missing stub * Make scheduler remove dead nodes from its cache * More conductor support for resizes * Allow fixed to float ping with external gateway * Add generic UC dropper * Remove locking declarator in ServiceGroup \_\_new\_\_() * Use ServiceGroup API to show node liveness * Refine PowerVM MAC address generation algorithm * Fixes a bug in attaching volumes on Hyper-V * Fix unconsumed column name warning in test\_migrations * Fix regression in non-admin simple\_usage:show * Ensure 'subunit2pyunit' is run in venv from run\_tests.sh * Fix inaccuracies in the development environment doc * preserve order of pre-existing iptables chains * Adds API Sample tests for FloatingIPDNS extension * Don't call 'vif.plug' twice during VM startup * Disallow setting /0 for network other than 0.0.0.0 * Fix spelling in comment * Imported Translations from Transifex * make vmwareapi driver pass quantum port-id to ESX * Add control-M to list of characters to strip out * Update to simplified common oslo version code * Libvirt: Implement snapshots for LVM-backed roots * Properly write non-raw LVM images on creation * Changes GA code for tracking cross-domain * Return dest\_check\_data as expected by the Scheduler * Simplify libvirt snapshot code path * fix VM power state to be NOSTATE when instance not found * Fix missing key error in libvirt.driver * Update jsonutils from oslo-incubator * Update nova/compute/api to handle instance as dict * Use joined version of db.api calls * l3.py,add\_floating\_ip: setup NAT before binding * Regenerate nova.conf.sample * Fixes a race condition on updating security group rules * Ensure that LB VIF drivers create the bridge if necessary * Remove nova.db call from baremetal PXE driver * Support for scheduler hints for VM groups * Fixed FlavorAccess serializer * Add a virtual PowerDriver for Baremetal testing * Optimize rpc handling for allocate and deallocate * Move floating ip db access to calling side * Implement ZooKeeper driver for ServiceGroup API * Added the build directory to the tox.ini pep8 ignores list * support relocatable venv roots in testing framework * Change to support custom nw filters * Allow multiple dns servers when starting dnsmasq * Clean up extended server output samples * maint: remove unused imports from bin/nova-\* * xenapi: Cleanup detach\_volume code * Access DB as dict not as attributes part 5 * Introduce support for 802.1qbg and 802.1qbh to Nova VIF model * Adds \_(prerun|check)\_134 functions to test\_migrations * Extension for rebuild-for-ha * Support hypervisor supplied macs in nova-network * Recache or rebuild missing images on hard\_reboot * Cells: Add cells support to hypervisors extension * Cells: Add cells support to instance\_usage\_audit\_log api extension * Update modules from common
required for rpc with lock detection * Fix lazy load 'system\_metadata' failed problem * Ban database access in nova-compute * Move security\_groups refreshes to conductor * Fix inject\_files for storing binary file * Add regression testing tool * Change forward\_bridge\_interface to MultiStrOpt * Imported Translations from Transifex * hypervisor-supplied-nics support in PowerVM * Default the last parameter (state) in task\_log\_get to None * Sync latest install\_venv\_common from oslo * Remove strcmp\_const\_time * Adds original copyright notice to refactored files * Update .coveragerc * Allow disk driver to be chosen per image * Refactor code for setting up libvirt disk mappings * Refactor instance usage notifications for compute manager * Flavor Extra Specs should require admin privileges * Remove unused methods * Return to skipping filters when using force\_hosts * Refactor server password metadata to avoid direct db usage * lxc: Clean up namespace mounts * Move libvirt volume driver tests to separate test case * Move libvirt NFS volume driver impl into volume.py * replace ssh-keygen -m with a python equivalent * Allow connecting to self-signed quantum endpoints * Sync latest db and importutils from oslo * Use oslo database code * Fix check instance host for instance action * Make get\_dev\_name\_for\_instance() use stashed instance\_type info * Added Postgres CI opportunistic test case * Remove remaining instance\_types query from compute/manager * Make cells\_api fetch stashed instance\_type info * Teach resource tracker about stashed instance types * Fix up instance types in sys meta for resizes * lxc: virDomainGetVcpus is not supported by driver * Fix incorrect device name being raised * VMware VC Compute Driver * Default value of monkey\_patch\_modules is broken * Adds evacuate method to compute.api * Fix import for install\_venv.py * allow disabling file injection completely * separate libvirt injection and configdrive config variables * Add API sample tests to os-network * Fix incorrect logs in network * Update HACKING.rst per recent changes * Allow for specifying nfs mount options * Add REST API to show availability\_zone of instance * Make NFS mount hashes consistent with Cinder * Parse testr output through subunit2pyunit * Imported Translations from Transifex * Optimize floating ip list to make one db query * Remove hardcoded topic strings in network manager * Reimplement is\_valid\_ipv4() * Tweakify is\_valid\_boolstr() * Fix update quota with invalid value * Make system\_metadata update in place * Mark password config options with secret * Record instance actions and events * Postgres does not like empty strings for type inet * Add 'not in' test to tools/hacking.py * Split floating ip functionality into new file * Optimize network calls by moving them to api * Fixes unhandled exception in detach\_volume * Fixes FloatingIPDNS extension 'show' method * import tools/flakes from oslo * Use conductor for instance\_info\_cache\_update * Quantum metadata handler now uses X-Forwarded-For * instance.update notifications don't always identify the service * Handle compute node not available for live migration * Fixes 'not in' operator usage * Fixes "is not" usage * Make scheduler modules pass conductor to add\_instance\_fault * Condense multiple authorizers into a single one * Extend extension\_authorizer to enable cleaner code * Remove unnecessary deserializer test * Added sample tests to FlavorExtraSpecs API * Fix rebuild with volumes attached * DRYing up volume\_in\_mapping code * 
Use \_prep\_block\_device in rebuild * xenapi: Axe unnecessary \`block\_device\_info\` params * Code cleanup for rebuild block device mapping * Fix eventlet/mysql db pooling code * Add support for compressing qcow2 snapshots * Remove deprecation notice in LibvirtBridgeDriver * Fix boto capabilities check * Add api samples to fping extension * Fix SQL Error with fixed ips under devstack/postgresql * Pass testropts in to setup.py in run\_tests.sh * Nova Hyper-V driver refactoring * Fixed grammar problems and typos in doc strings * Add option to control where bridges forward * xenapi: Add support for different image upload drivers * Removed print stmts in test cases * Fix get and update in FlavorExtraSpecs * Libvirt: Add support for live snapshots * Move task\_log functions to conductor * erase outdated comment * Keep flavor information in system\_metadata * Add instance\_fault\_create() to conductor * Adds API Sample tests for os-instance\_usage\_audit\_log extension * validate specified volumes to boot from at the API layer * Refactor libvirt volume driver classes to reduce duplication * Change ''' to """ in bin/nova-{novncproxy,spicehtml5proxy} * Pass parameter 'filter' back to model layer * Fix boot with image not active * refactored data upgrade tests in test\_migrations * Fix authorized\_keys file permissions * Finer access control in os-volume\_attachments * Stop including full service catalog in each RPC msg * Make sure there are no unused imports * Fix missing wrap\_db\_error for Session.execute() method * Use install\_venv\_common.py from oslo * Add Region name to quantum client * Removes retry of set\_admin\_password * fix nova-baremetal-manage version printing * Refactoring/cleanup of compute and db apis * Fix an error in affinity filters * Fix a typo of log message in \_poll\_unconfirmed\_resizes * Allow users to specify a tmp location via config * Avoid hard dependency on python-coverage * iptables-restore error when table not loaded * Don't warn up front about libvirt loading issues in NWFilterFirewall * Relax API restrictions around the use of reboot * Strip out Traceback from HTTP response * VMware Compute Driver OVF Support * VMware Compute Driver Host Ops * VMware Compute Driver Networking * Move policy checks to calling side of rpc * Add api-samples to multinic extension * Add system\_metadata to db.instance\_get\_active\_by\_window\_joined * Enable N302: Import modules only * clean up api\_samples documentation * Fix bad imports that cause nova-novncproxy to fail * populate dnsmasq lease db with valid leases * Support optional 4 arg for nova-dhcpbridge * Add debug log when call out to glance * Increase maximum URI size for EC2 API to 16k * VMware Compute Driver Glance improvement * Refactored run\_command for better naming * Fix rendering of FixedIpNotFoundForNetworkHost * Fix hacking N302 import only modules * Avoid db lookup in info\_from\_instance() * Fixes task\_log\_get and task\_log\_get\_all signatures * Make failures in the periodic tests more detailed * Clearer debug when test\_terminate\_sigterm fails * Skip backup files when running pep8 * Added sample tests to floating-ip-pools API * \_sync\_compute\_node should log host and nodename * Don't pass the entire list of instances to compute * VMware Compute Driver Volume Management * Bump the base rpc version of the network api to 1.7 * Remove compute api from scheduler driver * Remove network manager from compute manager * Adds SSL support for API server * Provide creating real unique constraints for columns * Add
version constraint for coverage * Correct a format string in virt/baremetal/ipmi.py * Add REST api to manage bare-metal nodes * Adding REST API to show all availability zones of a region * Fixed nova-manage argument parsing error * xenapi: Add cleanup\_sm\_locks script * Fix double reboot during resume\_state\_on\_host\_boot * Add support for memory overcommit in live-migration * Adds conductor support for instance\_get\_active\_by\_window\_joined * Make compare\_result show the difference in lists * Don't limit SSH keys generation to 1024 bits * Ensure service's servicegroup API is created first * Drop volume API * Fix for typo in xml API doc sample in nova * Avoid stuck task\_state on snapshot image failure * ensure failure to inject user files results in startup error * List servers having non-existent flavor should return empty list * Add version constraint for cinder * Remove duplicated tapdev creation code from libvirt VIF * Move helper APIs for OVS ports into linux\_net * Add 'ovs\_interfaceid' to nova network VIF model * Replace use of mkdtemp with fixtures.TempDir * Add trust level cache to trusted\_filter * Fix the wrong datatype in task\_log table * Cleanup of extract\_opts.py * Baremetal/utils should not log certain exceptions * Use setup.py testr to run testr in run\_tests.sh * Fix nova coverage * PXE driver should rmtree directories it created * Fix floating ips with external gateway * Add support for Option Groups in LazyPluggable * Fix incorrect use of context object * Unpin testtools * fix misspellings in logs, comments and tests * fix mysql race in tests * Fix get Floating ip pools action name to match with its policy * Generate coverage even if tests failed * Allow snapshots of paused and suspended instances * Update en\_US message translations * Sync latest cfg from oslo-incubator * Avoid testtools 0.9.25 * Cells: Add support for compute HostAPI() * Refactor compute\_utils to avoid db lookup * ensure zeros are written out when clearing volumes * fix service\_ref undefined problem * Add rootwrap filters for password injection with localfs * fix floating ip test that wasn't running * Prevent metadata updates until instance is active * More consistent libvirt XML handling and cleanup * pick up eventlet backdoor fix from oslo * Run\_as\_root to ensure resize2fs succeed for all image backends * libvirt: Fix typo in configdrive implementation * Refactor EC2 keypairs exception * Directly copy a file URL from glance * Remove restoring soft deleted entries part 2 * Remove restoring soft deleted entries part 1 * Use conductor in the servicegroup db driver * Add service\_update to conductor * Remove some db calls from db servicegroup driver * XenAPI: Fix volume detach * Refactor: extract method: driver\_dict\_from\_config * Cells: Fix for relaying instance info\_cache updates * Fix wrong quota reservation when deleting resizing instance * Go back to the original branch after pylint check * Ignore auto-generated files by lintstack * Add host to instance\_faults table * Clean up db network db calls for fixed and float * Remove obsolete baremetal override of MAC addresses * Fix multi line docstring tests in hacking.py * PXE driver should not accept empty kernel UUID * Use common rootwrap from oslo-incubator * Remove network\_host config option * Better instance fault message when rescheduling * libvirt: Optimize test\_connection and capabilities * don't allow crs in the code * enforce server\_id can only be uuid or int * Allow nova to use insecure cinderclient * Makes sure compute
doesn't crash on failed resume * Fix fallback when Quantum doesn't provide a 'vif\_type' * Move compute node operations to conductor * correcting for proper use of the word 'an' * Correcting improper use of the word 'an' * Save password set through xen agent * Add encryption method using an ssh public key * Make resource tracker use conductor for listing instances * Make resource tracker use conductor for listing compute nodes * Updates prerequisite packages for fedora * Expose a get\_spice\_console RPC API method * Add a get\_spice\_console method to nova.virt.ComputeDriver API * Add nova-spicehtml5proxy helper * Pull NovaWebSocketProxy class out of nova-novncproxy binary * Add support for configuring SPICE graphics with libvirt * Add support for setting up elements in libvirt config * Add common config options for SPICE graphics * Create ports in quantum matching hypervisor MAC addresses * Make nova-api logs more useful * Override floating interface on callee side * Reject user ports that have MACs the hypervisor cannot use * Remove unused import * Reduce number of iptables-save restore loops * Clean up get\_instance\_id\_by\_floating\_address * Move migration\_get\_...\_by\_host\_and\_node to conductor * Make resource tracker use conductor for migration updates * minor improvements to nova/tests/test\_metadata.py * Cells: Add some cells support to admin\_actions extension * Populate service list with availability zone and correct unit test * Correct misspelling of fake\_service\_get\_all * Add 'devname' to nova.network.model.VIF class * Use testrepository setuptools support * Cleaning up exception handling * libvirt: use tap for non-blockdevice images on Xen * Export the MAC addresses of nodes for bare-metal * Cells: Add cells API extension * More HostAPI() cleanup for cells * Break out a helper function for working with bare metal nodes * Renames the new os-networks extension * Define a hypervisor driver method for getting MAC addresses * enables admin to view instance fault "details" * Revert "Use testr setuptools commands."
* Revert "Populate service list with availability zone" * Fix typos in docstring * Fix problem with ipv6 link-local address(es) * Adds support for Quantum networking in Hyper-V * enable hacking.py self tests * Correct docstring on sizelimit middleware * sync latest log and lockutils from oslo * Fix addition of CPU features when running against legacy libvirt * Fix nova.availability\_zones docstring * Fix uses of service\_get\_all\_compute\_by\_host * VMware Compute Driver Rename * use postgresql INET datatype for storing IPs * Extract validation and provision code to separate method * Implement Quantum support for addition and removal of fixed IPs * Keep self and context out of error notification payload * Populate service list with availability zone * Add Compute API validations for block device map * Cells: Commit resize quota reservations immediately * Cells: Reduce the create\_image call depth for cells * Clean up compute API image\_create * Fix logic error in periodic task wait code * Centralize instance directory logic * Chown doesn't work on mounted vfat * instances\_path is now defined here * Convert ConfigDriveHelper to being a context manager itself * Use testr setuptools commands * Move migration\_create() to conductor * Move network call from compute API to the manager * Fix incorrect comment, and move a variable close to use * Make sure reboot\_instance uses updated instance * Cleanup reboot\_instance tests * Fix use of stale instance data in compute manager * Implements getPasswordData for ec2 * Add service\_destroy to conductor * Make nova.service get service through conductor * Add service\_create to conductor * Handle waiting for conductor in nova.service * Allow forcing local conductor * Make pinging conductor a part of conductor API * Fix some conductor manager return values * Handle directory conflicts with html output * Fix error in NovaBase.save() method * Skip domains on libvirt errors in get\_vcpu\_used() * Fix state sync logic related to the PAUSED VM state * Remove more unused opts from nova.scheduler.driver * Fix quota updating when admin deletes common user's instance * Tests for PXE bare-metal provisioning helper server * Correct the calculating of disk size when using lvm disk backend * Adding configdrive to xenapi * Validated device\_name value in block device map * Fix libvirt resume function call to get\_domain\_xml * Make it clearer that network.api.API is nova-network specific * Access instance as dict, not object in xenapi * Expand quota logging * Move logic from os-api-host into compute * Create a directory for servicegroup drivers * Move update\_instance\_info\_cache to conductor * Change ComputerDriver.legacy\_nwinfo to raise by default * Cleanup pyflakes in nova-manage * Add user/tenant shim to RequestContext * make runtests -p act more like tox * fix new N402 errors * Add host name to log message for \_local\_delete * Try out a new nova.conf.sample format * Regenerate nova.conf.sample * Make Quantum plugin fill in the 'bridge' name * Make nova network manager fill in vif\_type * Add some constants to the network model for drivers to use * Move libvirt VIF XML config into designer.py * Remove bogus 'unplug' calls from libvirt VIF test * Fix bash syntax error in run\_tests.sh * Update instance's cell\_name in API cell * Fix init\_host checking moved instances * Fix test cases in integrated.test\_multiprocess\_api * Map libvirt error to InstanceNotFound in get\_instance\_disk\_info * Fixed comment typo * Added sample tests to FlavorSwap API * Remove 
unused baremetal PXE options * Remove unused opt import in scheduler.driver * Move global service networking opts to new module * Move memcached\_servers opt into common.memorycache * Move service\_down\_time to nova.service * Move vpn\_key\_suffix into pipelib * fix N402 on tools/ * fix N402 for nova-manage * fix N402 for rest of nova * fix N402 for nova/c\* * fix N402 for nova/db * don't clear the database dicts in the tearDown method * Fixed typos in doc strings * Enhance wsgi to listen on ipv6 address * Adds a flag to allow configuring a region * Fix double reboot issue during soft reboot * Remove baremetal-compute-pxe.filters * Fix pyflakes issues in integrated tests * Adds option to rebuild instance with existing disk * Move common virt driver options to virt.driver * Move vpn\_image\_id to pipelib * Move enabled\_apis option into nova.service * Move default\_instance\_type into nova.compute * Move osapi\_compute\_unique\_server\_name\_scope to db * Move api\_class options to where they are used * Move manager options into nova.service * Move compute\_topic into nova.compute.rpcapi * fix N402 for nova/network * fix N402 for nova/scheduler * fix N402 for nova/tests * Fix N402 for nova/virt * Fix N402 for nova/api * New instance\_actions and events table, model, and api * Cope better with out of sync bm data * Import latest timeutils from oslo-incubator * Remove availability\_zones from service table * Enable Aggregate based availability zones * Sync log from oslo-incubator * Clarify the DBApi object in cells fakes * Fix lintstack check for multi-patch reviews * Adds to manager init\_host validation for instances location * Add to libvirt driver instance\_on\_disk method * add to driver option to keep disks when instance destroyed * Fix serialization in impl\_zmq * Added sample tests to FlavorRxtx API * Refresh instance metadata in-place * xenapi: Remove dead code, moves, tests * Fix baremetal VIFDriver * Adds a new tenant-centric network extension * CLI for bare-metal database sync * Move scheduler\_topic into nova.scheduler.rpcapi * Move console\_topic into nova.console.rpcapi * Move network\_topic into nova.network.rpcapi * Move cert\_topic into nova.cert.rpcapi * Move global s3 opts into nova.image.s3 * Move global glance opts into nova.image.glance * Remove unused osapi\_path option * attach/detach\_volume() take instance as a parameter * fix N401 errors, stop ignoring all N4\* errors * Add api extension to get and reset password * powervm: Implement snapshot for local volumes * Add exception handler for previously deleted flavor * Add NoopQuotaDriver * Conductor instance\_get\_all replaces \_by\_filters * Support cinderclient http retries * Sync rpc and notifier from oslo-incubator * PXE bare-metal provisioning helper server * Added sample tests to QuotaClasses API * Changed 'OpenStack, LLC' message to 'OpenStack Foundation' * Convert short doc strings to be on one line * Get instances from conductor in init\_host * Invert test stream capture logic for debugging * Upgrade WebOb to 1.2.3 * Make WebOb version specification more flexible * Refactor work with TaskLog in sqlalchemy.api * Check admin context in bm\_interface\_get\_all() * Provide a PXE NodeDriver for the Baremetal driver * Handle compute node records with no timestamp * config\_drive is missing in xml deserializer * Imported Translations from Transifex * NovaBase.delete() rename to NovaBase.soft\_delete() * libvirt: have a single source of console log file naming * Remove the global DATA * Add ping to conductor * Add two
tests for resize action in ServerActionsControllerTest * Move service\_get\_all operations to conductor * Move migration\_get\_unconfirmed\_by\_dest\_compute to conductor * Move vol\_usage methods to conductor * Add test for resize server in ComputeAPITestCase * Allow pinging own float when using fixed gateway * Use full instance in virt driver volume usage * Imported Translations from Transifex * Refactor periodic tasks * Cells: Add periodic instance healing * Timeout individual tests after one minute * Fix regression in RetryFilter * Cells: Add the main code * Adding two snapshot related task states * update version urls to working v2 urls * Add helper methods to nova.paths * Move global path opts in nova.paths * Remove unused aws access key opts * Move fake\_network opt to nova.network.manager * Allow larger encrypted password posts to metadata * Move instance\_type\_get() to conductor * Move instance\_info\_cache\_delete() to conductor * Move instance\_destroy() to conductor * Move instance\_get\_\*() to conductor * Sync timeutils changes from Oslo * Remove system\_metadata db calls from compute manager * Move block\_device\_mapping destroy operations to conductor * Clean up setting of control\_exchange default * fix floating-ip in multihost case * Invalid EC2 ids should make the entire request fail * improve libguestfs exception handling * fix resize of unpartitioned images with libguestfs * xenapi: Avoid hotplugging volumes on resize * Remove unused VMWare VIF driver abstraction * Delete pointless nova.virt.VIFDriver class * Clarify & fix docs for nova-novncproxy * Removes unused imports * Imported Translations from Transifex * Fix spelling mistakes in nova.virt * Cells: Add cells commands to nova-manage * Add remaining get\_backdoor\_port() rpc calls to coverage * Fix race in resource tracker * Move block\_device\_mapping get operations to conductor * Move block\_device\_mapping update operations to conductor * Improve baremetal driver error handling * Add unit test to update server metadata * Add unit test to revert resize server action * Add compute build/resize errors to instance faults * Add unit test for too long metadata for server rebuild action * Adds os-volume\_attachments 'volume\_id' validation * Raise BadRequest when updating 'personality' * Imported Translations from Transifex * Ensure that Quantum uses configured fixed IP * Add conditions in compute APIRouter * Imported Translations from Transifex * CRUD on flavor extra spec extension should be admin-only * Report failures to mount in localfs correctly * Add API sample tests to FixedIPs extension * baremetal power driver takes \*\*kwargs * Implement IPMI sub-driver for baremetal compute * Fix tests/baremetal/test\_driver.py * Move baremetal options to [BAREMETAL] OptGroup * Adds test for HTTPUnprocessableEntity when rebooting * Make sure the loadables path is the absolute path * Fix bug and remove update lock in db.instance\_test\_and\_set() * Periodic update of DNS entries * Fix error in test\_get\_all\_by\_multiple\_options\_at\_once() * Remove session.flush() and session.query() monkey patching * Update nova-cert man page * Allow new XML API sample file generation * Remove unused imports * spelling in test\_migrations * Imported Translations from Transifex * Check for image\_meta in libvirt.driver.spawn * Adds test for 'itemNotFound' errors in 'Delete server' * Remove improper NotFound except block in list servers * Spelling: Compatability=>Compatibility * Imported Translations from Transifex * Ensure we add a new 
line when appending to rc.local * Verify the disk file exists before running qemu-img on it * Remove lxc attaching/detaching of volumes * Teardown container rootfs in host namespace for lxc * Fix cloudpipe instances query * Ensure datetimes can be properly serialized * Imported Translations from Transifex * Database metadata performance optimizations * db.network\_delete\_safe() method performance optimization * db.security\_group\_rule\_destroy() method performance optimization * Import missing exception * Ignore double messages to associate the same ip * Imported Translations from Transifex * Database reservations methods performance optimization * Using query.soft\_delete() method instead of soft deleting by hand * Create and use subclass of sqlalchemy Query with soft\_delete() method * Remove inconsistent usage of variable from hyperv * Log last compute error when rescheduling * Removed unused imports * Make libvirt driver default to virtio for KVM/QEMU NICs * Refactor libvirt VIF classes to reduce duplicate code * Makes sure to call crypto scripts with abspath * Enable nova exception format checking in tests * Eliminate race conditions in floating association * Imported Translations from Transifex * Provide a configdrive helper which uses contextlib * Add extension to allow hiding of addresses * Add html reports to report action in coverage extension * Add API samples tests for the coverage extension * Fix \_find\_ports() for when backdoor\_port is None * Parameterize database connection in test.py * fixing the typo of the error message from nbd * add 'random\_seed' entry to instance metadata * Baremetal VIF and Volume sub-drivers * Fix revert resize failure with disk.local not found * Fix a test isolation error in compute.test\_compute * New Baremetal provisioning framework * Move baremetal database tests to fixtures * address uuid overwriting * Add get\_backdoor\_port to cert * Add get\_backdoor\_port to scheduler * Add get\_backdoor\_port to console * Make libvirt driver.listinstances return defined * Add get\_backdoor\_port to consoleauth * Export custom SMBIOS info to QEMU/KVM guests * Make configdrive.py use version.product\_string() * Allow loading of product/vendor/package info from external file * Remove obsolete VCS version info completely * Trap exception when trying to write csr * Define a product, vendor & package strings in version.py * Extract image metadata from Cinder * Add expected exception to aggregate\_metadata\_delete() * Move aggregate\_get() to conductor * Add .testrepository/ directory to gitignore * Make load\_network\_driver load passed in driver * Fix race condition of resize confirmation * libvirt: Make vif\_driver.plug() return None * Add an iptables mangle rule per-bridge for DHCP * Make NBD retry logic more generic, add retry to loop * Reliably include OS type in ephemeral filenames * Allow specification of libvirt guest interface backend driver * Fix "image\_meta" data passed in libvirt test case * Fix typos in vncserver\_listen config param help description * Traceback when user doesn't have permission * removed duplicate function definitions * network/api add\_fixed\_ip correctly passes uuid * Import cfg module in extract\_opts.py * Raise old exception instance instead of new one * Update exceptions to pass correct kwargs * Add option to make exception format errors fatal * allow for the ability to run partial coverage * Remove fake\_tests opt from test.py * Execute pygrub using nova-rootwrap in xenapi * Add DBDuplicateEntry exception for unique
constraint violations * Fix stack trace on incorrect nova-manage args * Use service fixture in DB servicegroup tests * fix instance rescue without cmdline params in xml.rescue * Added sample tests to FlavorDisabled API * Reset the IPv6 API backend when resetting the conf stack * libvirt: Skip intermediate base files with qcow2 * fix test\_nbd using stubs * Imported Translations from Transifex * Properly remove the time override in quota tests * Fix API samples generation * Move TimeOverride to the general reusable-test-helper place * Added conf support for security groups * Add accounting for orphans to resource tracker * Add more association support to network API * Remove the WillNotSchedule exception * Replace fixtures.DetailStream with fixtures.StringStream * Move network\_driver into new nova.network.driver * Move DNS manager options into network.manager * Refactor xvp console * Move agent\_build\_get\_by\_triple to conductor * Move provider\_fw\_rule\_get\_all to conductor * Move security\_group operations in VirtAPI to conductor * Retry NBD device allocation * Use testr to run nova unittests * Add a developer trap for api samples * Update command on devref doc * Fixed deleting instance booted from invalid vol * Add general mechanism for testing api coverage * Add the missing replacement text in devref doc * Allow xenapi to work with empty image metadata * Imported Translations from Transifex * Fix for broken switch for config\_drive * Fix use of osapi\_compute\_extension option in api\_samples * Remove sleep in test\_consoleauth * Fix errors in used\_limits extension * Fix poll\_rescued\_instances periodic task * Add syslogging to nova-rootwrap * Clean up run\_tests.sh * Ensure that sql\_dbpool\_enable is a boolean value * Stop nbd leaks, remove pid race * Fixes KeyError: 'sr\_uuid' when booting from volume on xenapi * Add VirtAPI tests * Move remaining aggregate operations to conductor * remove session param from instance\_get * remove session param from instance\_get\_by\_uuid * Use nova.test.TestCase as the base test class * Ensure datetimes can be properly serialized * Fixes string formatting error * Adds API Sample tests for DiskConfig extension * Fix for correctly parsing snapshot uuid in ec2api * Autodetect nbd devices * Add Jian Wen to .mailmap * Move metadata\_{host,port} to network.linux\_net * Move API extension opts to api.openstack.compute * Move osapi\_max\_limit into api.openstack.common * Move link\_prefix options into api.openstack.common * Move some opts into nova.utils * Properly scope password options * Remove the deprecated quantum v1 code and directory * add and removed fixed ip now refresh cache * Implement an XML matcher * Add support for parsing the from libvirt host capabilities * Add support for libvirt domain XML config * Add support for libvirt domain XML config * Add coverage extension to nova API * Allow rpc-silent FloatingIP exceptions in n-net * Allow conductor exceptions to pass over RPC silently * Don't leak info from libvirt LVM backed instances * Add get\_backdoor\_port to nova-conductor * Properly scope isolated hosts config opts * Move monkey patch config opts into nova.utils * Move zombie\_instance\_updated\_at\_window option * Move some options into nova.image.glance * Move cache\_images to nova.virt.xenapi.vm\_utils * Move api\_rate\_limit and auth\_strategy to nova.api * Move api\_paste\_config option into nova.wsgi * Port to argparse based cfg * Cleanup the test DNS managers * Move all temporary files into a single /tmp subdir * Modified 
sample tests to FlavorExtraData API * Fix KeyError of log message in virt/libvirt/utils.py * Allows an instance to post encrypted password * Make nova/virt use aggregate['metadetails'] * Revert "Simplify how ephemeral disks are created and named." * Fix bw\_usage\_update issue with conductor * Correctly init XenAPIDriver in vm\_vdi\_cleaner.py * Set instance\_ref['node'] in \_set\_instance\_host\_and\_node * Consider reserved count in os-user-limits extension * Make DNS drivers inherit interface * Map cinder snapshot statuses to ec2 * i18n raise Exception messages * Set default DNS driver to No-op * Access DB values as dict not as attributes. Part 4 * Use conductor for bw\_usage operations * libvirt: enable apic setting for Xen or KVM guest * Improve virt/disk/mount/nbd test coverage * Add NFS to the libvirt volume driver list * Use admin user to read Quantum port * Add vif\_type to the VIF model * Make the nbd mounter respect CONF.max\_nbd\_devices * Imported Translations from Transifex * Raise NotImplementedError in dns\_driver.DNSDriver * Unpin lxml requirements * Added sample tests to FlavorManage API * Use fixtures library for nova test fixtures * Catch ProcessExecutionError when building config drives * Fix fname concurrency tests * Imported Translations from Transifex * Make ignore\_hosts and force\_hosts work again * Run test objectstore server on arbitrary free port * Fix network manager ipv6 tests * Prevent creation of extraneous resource trackers * Remove unused bridge interfaces * Use conductor for migration\_get() * Reset node to source in finish\_revert\_resize() * Simplify how ephemeral disks are created and named * Order instance faults by created\_at and id * Sync RPC logging-related bits from oslo * Fix bugs in test\_migrations.py * Fix regression allowing quotas to be applied to projects * Improve nova-manage usability * Add new cliutils code from oslo-incubator * Update tools/flakes to work with pydoc * Fix pep8 exclude logic for 1.3.3 * Avoid vm instance shutdown when power state is NOSTATE * Fix handling of unimplemented host actions * Fix positional arg swallow decorator * Fix minidns delete\_entry to work for hostname with mixed case chars * powervm: Refactored run\_command for better naming * Sync latest openstack.common.rpc * Ensure prep\_resize arguments can be serialized * Add host to get\_backdoor\_port() for network api * Add agent build API support for list/create/delete/modify agent build * Added sample tests to extended status API * Imported Translations from Transifex * Make policy.json not filesystem location specific * Use conductor for resourcetracker instance\_update * network managers: Pass elevated cxtx to update\_dhcp * Volume backed live migration w/o shared storage * Add pyflakes option to tox * Adds API Sample tests for Quotas extension * Boot from volume without image supplied * Implements volume usage metering * Configurable exec\_dirs to find rootwrap commands * Allow newer boto library versions * Add notifications when libvirtd goes down * Make update\_service\_capabilities() accept a list of capabilities * update mailmap to add my preferred mail * Fix test suite to use MiniDNS * Add support for new WMI iSCSI initiator API * Added sample tests to deferred delete API * On confirm\_resize, update correct resource tracker * Renaming xml test class in sample tests of consoles API * remove session param from certificate\_get * improve sessions for key\_pair\_(create,destroy) * powervm: add DiskAdapter for local volumes * Access DB values as dict
not as attributes. Part 3 * Patch fake\_libvirt\_utils with fixtures.MonkeyPatch * Open test xenapi/vm\_rrd.xml relative to tests * Reset notifier\_api before each test * Reset volume\_api before cinder cloud tests * Fix rpc control\_exchange regression * Add generic customization hooks via decorator * add metadata support for overlapping networks * Split out part of compute's init\_host * Use elevated cxtx in resource\_tracker.resize\_claim * Fix test\_migrations for postgres * Add vpn ip/port setting support for CloudPipe * Access DB values as dict not as attributes. Part 2 * Enable debug in run\_tests using pdb * Add POWERVM\_STARTING state to powervm driver * Fix test\_inject\_admin\_password for OSX * Multi host DHCP networking and local DNS resolving * use file instead of tap for non-blockdevice images on Xen * use libvirt getInfo() to receive number of physical CPUs * Don't run the periodic task if ticks\_between\_runs is below zero * Fix args to AggregateError exception * Fix typo in inherit\_properties\_from\_image * Access DB values as dict not as attributes * Fix KeyError of log message in compute/api.py * Fix import problem in test\_virt\_disk\_vfs\_localfs * Remove start\_guests\_on\_host\_boot config option * Add aggregate\_host\_add and \_delete to conductor * Imported Translations from Transifex * Call plug\_vifs() for all instances in init\_host * Make compute manager use conductor for instance\_gets * Fixes HyperV compute "resume" tests * Convert datetimes for conductor instance\_update * Update migration so it supports PostgreSQL * Include 'hosts' and 'metadetails' in aggregate * Verify doc/api\_samples files along with the templates * Remove default\_image config option * Move ec2 config opts to nova.api.ec2.cloud * Move imagecache code from nova.virt.libvirt.utils * Use flags() helper method to override config in tests * RetryFilter checks 'node' as well as 'host' * Make resize and multi-node work properly together * Migration model update for multi-node resize fix * Add version to conductor migration\_update message * Validate rxtx\_factor as a float * Display errors when running nosetests * Respect the base\_dir\_name flag in imagebackend * Add exceptions to baremetal/db/api * Clean up unused methods in scheduler/driver * Provide better error message for aggregate-create * Imported Translations from Transifex * Allow multi\_host compute nodes to share dhcp ip * Add blank nova/virt/baremetal/\_\_init\_\_.py * Add migration\_update to conductor * Remove unnecessary topic argument * Add pluggable ServiceGroup monitoring APIs * Add SSL support to utils.generate\_glance\_url() * Add eventlet db\_pool use for mysql * Make compute manager use nova-conductor for instance\_update * Missing instance\_uuid in floating\_ip notifications * Make nova-dhcpbridge use CONFIG\_FILE over FLAGFILE * Rename instance\_info\_cache unique key constraints * Cleanup compute multi-node assignment of node * Imported Translations from Transifex * maint: remove an unused import from libvirt.utils * Encode consoleauth token in utf-8 to make it a str * nova-dhcpbridge should require the FLAGFILE is set * Added cpu\_info report to HyperV Compute driver * Remove stale flags unit tests * Truncate large console logs in libvirt * Move global fixture setup into nova/test.py * Complete API samples for Hosts extension * Fix HostDeserializer to enable multiple line xml * adjust rootwrap filters for recent file injection changes * Don't hard code the xen hvmloader path * Don't update arch twice when creating
server * remove db access in xen driver * Imported Translations from Transifex * Move compute\_driver into nova.virt.driver * Re-organize compute opts a bit * Move compute opts from nova.config * Add a CONTRIBUTING file * Compute doesn't set the 'host' field in instance * Xenapi: Don't resize down if not auto\_disk\_config * Cells: Re-add DB model and calls * Use more specific SecurityGroupHandler calls * Fix wait\_for\_deleted function in SmokeTests * Wrap log messages with \_() * Add methods to Host operations to fake hypervisor * Move sql options to nova.db.sqlalchemy.session * Add debug logging to disk mount modules * Remove the libguestfs disk mount API implementation * Remove img\_handlers config parameter usage * Convert file injection code to use the VFS APIs * Introduce a VFS implementation backed by the libguestfs APIs * Introduce a VFS implementation mapped to the host filesystem * Adds API for bulk creation/deletion of floating IPs * Remove obsolete config drive init.d example * Imported Translations from Transifex * Rename sql\_pool\_size to sql\_max\_pool\_size * Detect shared storage; handle base cleanup better * Allow VMs to be resumed after a hypervisor reboot * Fix non-primitive uses of instance in compute/manager * Remove extra space in exception * Adds missing index migrations by instance/status * Convert migrations.instance\_uuid to String(36) * Add missing binary * Change all tenants servers listing as policy-based * Fixes a bug in get\_info in the Hyper-V Driver * refactor: extract method: connect\_volume * Handle instances not being found in EC2 API responses * Pin pep8 to 1.3.3 * Return an error response if the specified flavor does not exist. (v4) * Send block device mappings to rebuild\_instance * Move db lookup for block device mappings * Use CONF.import\_opt() for nova.config opts * Imported Translations from Transifex * Remove nova.config.CONF * Add keystoneclient to pip-requires * Pass rpc connection to pre\_start\_hook * Fix typo: hpervisor=> hypervisor * Fix reversed args to call to \_reschedule * Add the beginnings of the nova-conductor service * remove old baremetal driver * Remove useless function quota\_usage\_create * Fix calls to private method in linux\_net * Drop unused PostgreSQL sequences from Folsom * Compact pre-Grizzly database migrations * Fix os-hosts extension can't return xml response correctly * Set node\_availability\_zone in XenAPIAggregateTestCase * Ignore editor backup files * Imported Translations from Transifex * Remove nova.flags * Remove FLAGS * Make fping extension use CONF * Use disk image path to setup lxc container * Use the auth\_token middleware from keystoneclient * improve session handling around instance\_ methods * add index to fixed\_ips * add instance\_type\_extra\_specs to instances * Change a toplevel function comment to a docstring * Ensure cat process is terminated * Add some sqlalchemy tweakables * Fixes an error reporting bug on Hyper-V * update api\_samples add os-server-start-stop * update api\_samples add os-services module * Switch to using eventlet\_backdoor from oslo * Sync eventlet\_backdoor from oslo * Added sample tests to consoles API * Fix use of 'volume' variable name * Ditch unused import and variable * Make ec2\_instance\_create db method consistent across db apis * Adds documentation for Hyper-V testing * Adds support for ConfigDriveV2 in Hyper-V * don't explode if a 413 didn't set Retry-After * Fix a couple uses of FLAGS * Remove nova.flags imports from scheduler code * Remove some unused imports
from compute/\* * Remove importing of flags from compute/\* * Remove nova.flags imports from bin/\* * Move nova shared config options to nova.config * Fix use\_single\_default\_gateway * Update api\_samples README.rst to use tox * Do not alias stdlib uuid module as uuidutils, since nova has uuidutils * Allow group='foo' in self.flags() for tests * updated api\_samples with real hypervisor\_hostname * Issue a hard shutdown if clean fails on resize up * Introduce a VFS api abstraction for manipulating disk images * Fix network RPC API backwards compat * create\_db\_entry\_for\_new\_instance did not call sgh for default * Add support for backdoor\_port to be returned with a rpc call * Refactor scheduling filters * Unpin amqplib and kombu requirements * Add module for loading specific classes * Make sure instance data is always refreshed * Move all mount classes into a subdirectory * Add support for resizes to resource tracker * Fixes create instance \*without\* config drive test * Update db entry before updating the DHCP host file * Remove gen\_uuid() * Enhance compute capability filter to check multi-level * API extension for fpinging instances * Allow controller extensions to extend update/show * Isolate tests from the environment variable http\_proxy * Handle image cache hashing on shared storage * fix flag type define error * Simplify libvirt volume testing code * Migrate floating ip addresses in multi\_host live\_migration * Add DB query to get in-progress migrations * Try hard shutdown if clean fails on resize down * Restore self.test\_instance at LibvirtConnTestCase.setUp() * Fixes usage of migrate\_instance\_start * added getter methods for quantumv2 api * fix LVM backed VM logical volumes can't be deleted * Clean up \_\_main\_\_ execution from two tests for consistency * Imported Translations from Transifex * Update uuidutils from openstack common * Remove volume.driver and volume.iscsi * Use base image for rescue instance * Make xenapi shutdown mode explicit * Fix a bug in XenAPISession's use of virtapi * Ban db import from nova/virt * Update vol mount smoketest to wait for volume * Add missing webob to exc * Add missing exception NetworkDuplicated * Fix misuse of exists() * Rename config to vconfig * Move agent\_build\_get\_by\_triple to VirtAPI * Fix \_setup\_routes() signature in APIRouter * Move libvirt specific cgroups setup code out of nova.virt.disk.api * make libvirt with Xen more workable * script for configuring a vif in Xen in non-bridged mode * Upgrade pylint version to 0.26.0 * Removes fixed\_ip\_get\_network * improve session handling around virtual\_interfaces * improve sessions for reservation * improve session handling around quotas * Remove custom test assertions * Add nova option osapi\_compute\_unique\_server\_name\_scope * Add REST API support for list/enable/disable nova services * Switch from FLAGS to CONF in nova.compute * Switch from FLAGS to CONF in tests * Get rid of pylint E0203 in filter\_scheduler.py * Updated scheduler and compute for multiple capabilities * Switch from FLAGS to CONF in nova.db * Removed two unused imports * Remove unused functions * Fixes a bug in api.metadata.base.lookup() on Windows * Fixes a bug in nova.utils, due to Windows compatibility issues * improve session handling of dnsdomain\_list * Make tox.ini run pep8/hacking checks on bin * Fix import ordering in /bin scripts * add missing opts to test\_db\_api.py * clean up dnsdomain\_unregister * Make utils.mkfs() set label when fs=swap * Another case of dictionary access * Remove
generic topic support from filter scheduler * Clarify server\_name, hostname, host * Refactor scheduling weights * update nova.conf.sample * Check instance\_type in compute capability filter * Sync latest code from oslo-incubator * Adds REST API support for Fixed IPs * Added separate bare-metal MySQL DB * Added bare-metal host manager * Remove unused volume exceptions * Adds a conf option for custom configdrive mkisofs * Fixed HyperV to get disk stats of instances drive * powervm: failed spawn should raise exception * Enable Quantum linux bridge VIF driver to use "bridge" type * Remove nova-volume DB * make diagnostics workable for libvirt with Xen * Avoid unnecessary system\_metadata db lookup * Make instance\_system\_metadata load with instance * Add some xenapi Bittorrent tests * Move security groups and firewall ops to VirtAPI * Move host aggregate operations to VirtAPI * Simplify topic handling in network rpcapi * Sync rpc from openstack-common * Send instance\_type to resize\_instance * Remove instance\_type db lookup in prep\_resize * Send all aggregate data to remove\_aggregate\_host * Fix incorrect LOG.error usage in \_compare\_cpu * Limit formatting routes when adding resources * Removes unnecessary db query for instance type * Fix verification in test\_api\_samples.py * Yield in between hash runs for the image cache manager * Remove unused function require\_instance\_exists * Refactor resource tracker claims and test logic * Remove out-of-date comment * Make HostManager.get\_all\_host\_states() return an iterator * Switch from FLAGS to CONF in nova.virt * 'BackupCreate' rotation parameter >= 0 * Corrects usage of db.api.network\_get * Switch from FLAGS to CONF in nova.console * Map NotAuthorized to 403 in floating ips extension * Decouple EC2 API from using instance id * libvirt: Regenerates xml instead of using on-disk * Imported Translations from Transifex * Fix to include error message in instance faults * Include hostname in notification payloads * Fix quota updating during soft delete and restore * Fix warnings found with pyflakes * make utils.mkfs() more general * Fixes snapshot instance failure on libvirt * Make ComputeDrivers send hypervisor\_hostname * Fixed instance deletion issue from Nova API * De-duplicate option: console\_public\_hostname * Don't verify image hashes if checksumming is disabled * Imported Translations from Transifex * Look up stuck-in-rebooting instances in manager * Use chance scheduler in EC2 tests * Send all aggregate data to add\_aggregate\_host * Send all migration data to finish\_revert\_resize * Send all migration data to revert\_resize * Fix migrations when not using multi-host network * Fix bandwidth polling exception * Fixes volume attach issue on Hyper-V * Shorten self.compute.resource\_tracker in test\_compute.py * Cleanup nova.db.sqlalchemy.api import * Use uuidutils.is\_uuid\_like for uuid validation * Add uuidutils module * Imported Translations from Transifex * Switch from FLAGS to CONF in nova.scheduler * Switch from FLAGS to CONF in nova.network * Switch from FLAGS to CONF in misc modules * Switch from FLAGS to CONF in nova.api * Switch from FLAGS to CONF in bin * Remove flags.DECLARE * Move parse\_args to nova.config * Forbid resizing instance to deleted instance types * Imported Translations from Transifex * Fix unused variables and wrong indent in test\_compute * Remove unnecessary db call from xenapi/vmops * xenapi: place boot lock when doing soft delete * Detangle soft delete and power off * Fix signing\_dir option for 
auth\_token middleware * Fix no attribute 'STD\_OUT\_HANDLE' on windows * Use elevated context in disassociate\_floating\_ip * Remove db.instance\_get\* from nova/virt * sync deprecated log method from openstack-common * move python-cinderclient to pip-requires * Tiny resource tracker cleanup * Fix Quantum v2 API method signatures * add doc to standardize session usage * improve sessions around floating\_ip\_get\_by\_address * Bump the base rpc version of the network api * Eliminates simultaneous schedule race * Introduce VirtAPI to nova/virt * Add some hooks for managers when service starts * Fix backwards compat of rpc to compute manager * xenapi: Make agent optional * Add xenapi host\_maintenance\_mode() test * refactor: extract \_attach\_mapped\_block\_devices * Make bdms primitive in rpcapi.terminate\_instance * Ability to specify a host restricted to admin * Improve EC2 describe\_security\_groups performance * Increased MAC address range to reduce conflicts * Move to a more canonicalized output from qemu-img info * Read deleted flavors when using to\_xml() * Fix copy-paste bug in block\_device\_info\_generation * Remove nova-volume scheduling support * Remove duplicate api\_paste\_config setting * Fixes hypervisor based image filtering on Hyper-V * make QuantumV2 support requested nic ordering * Add rxtx\_factor to network migration logic * Add scheduler retries for prep\_resize operations * Add call to reset quota usage * Make session.py reusable * Remove redundant code from PowerVM driver * Force earlier version of sqlalchemy * refactor: extract method vm\_ref\_or\_raise * Use env to set environ when starting dnsmasq * pep8 fixes for nova-manage * Fix VM deletion from down compute node * Remove database usage from libvirt check\_can\_live\_migrate\_destination * Clean up xenapi VM records on failed disk attaches * Remove nose detailed error reporting * Validate is-public parameter to flavor creation * refactor: extract \_terminate\_volume\_connections * improve sessions around compute\_node\_\* * Fix typo in xenapi/host.py * Remove extra print line in hacking.py * Ensures compute\_driver flag can be used by bdm * Add call to trigger\_instance[add/remove]\_security\_group\_refresh quantum * Validates Timestamp or Expiry time in EC2 requests * Add API samples to Admin Actions * Add live migration helper methods to fake hypervisor driver * Use testtools as the base testcase class * Clean up quantumv2.get\_client * Fix getattr usage * Imported Translations from Transifex * removes the nova-volume code from nova * Don't elevate context when calling run\_instance * remove session parameter from fixed\_ip\_get * Make instance\_get\_all() not require admin context * Fix compute tests abusing admin context * Fix use of elevated context for resize methods * Fix check for memory\_mb * Imported Translations from Transifex * Fix nova-network MAC collision logic * Fix rpcapi version for new methods * Remove useless return * Change hacking.py N306 to use logical\_lines * Add missing live migration methods to ComputeDriver base class * Fix hacking.py naivete regarding lines that look like imports * details the reboot behavior that a virt driver should follow * xenapi: refactor: Agent class * Send usage event on revert\_resize * Fix config-file overrides for nova-dhcpbridge * Make nova-rootwrap optional * Remove duplicated definition of is\_loaded() * Let scheduler know services' capabilities at startup * fetch\_images() method no longer needed * Fix hardcoded topic strings with constants * Save
exceptions earlier in finish\_resize * Correct \_extract\_query\_params in image.glance * Fix Broken XML Namespace Handling * More robust checking for empty requested\_networks * Imported Translations from Transifex * Rehydrate NetworkInfo in reboot\_instance() * Update common * Use cat instead of sleep for rootwrap test * Additional 2 packages for dev environment on ubuntu * Let VlanManager keep network's DNS settings * Improve the performance of quantum detection * Support for nova client list hosts with specific zone * Remove unused imports in setup.py * Fixes fake for testing without qemu-img * libvirt: persist volume attachments into config * Extend IPv6 subnets to /64 if network\_size is set smaller than /64 * Send full migration data to finish\_resize * Send full migration to confirm\_resize * Send full migration to resize\_instance * Migrate to fileutils and lockutils * update sample for common logging * Add availability zone extension to API samples test * Refactor: config drive related functions * Fix live migration volume assignment * Remove unused table options dicts * Add better log line for undefined compute\_driver * Remove database usage from libvirt imagecache module * Return empty list when listing servers with bad status value * Consistent Rollback for instance creation failures * Refactor: move find\_guest\_agent to xenapi.agent * Fix Incorrect Exception when metadata is over 255 characters * Speed up volume and routing tests * Speed up api.openstack.compute.contrib tests * Allow loading only selected extensions * Migrate network of an instance * Don't require quantumclient when running nova-api * Handle the case where we encounter a snapshot correctly * Remove deprecated root\_helper config * More specific exception handling in migration 091 * Add virt driver capabilities definition * Remove is\_admin\_context from sqlalchemy.api * Remove duplicate methods from network/rpcapi.py * SanISCSIDriver SSH execution fixes * Fix bad Log statement in nova-manage * Move mkfs from libvirt.utils to utils * Fixes bug Snapshotting LXC instance fails * Fix bug in a test for the scheduler DiskFilter * Remove mountpoint from parse\_volume\_info * limit the usage of connection\_info * Sync with latest version of openstack.common.timeutils * nova-compute sends its capabilities to schedulers ASAP * Enable custom eventlet.wsgi.server log\_format * Fix the fail-on-zero-tests case so that it is tolerant of no output * add port support when QuantumV2 subclass is used * Add trove classifiers for PyPI * Fix and enable pep8 E502, E712 * Declare vpn client option in pipelib * Fix nova-volume-usage-audit * Fix error on invalid delete\_on\_termination value * Add Server diagnostics extension api samples * Add meaningful server diagnostic information to fake hypervisor * Use instance\_exists to check existence * Fix nova-volume-usage-audit * Imported Translations from Transifex * Avoid leaking BDMs for deleted instances * Deallocate network if instance is deleted in spawn * Create Flavors without Optional Arguments * Update policies * Add DNS records on IP allocation in VlanManager * update kwargs with args in wrap\_instance\_fault * Remove ComputeDriver.update\_host\_status() * Do not call directly vmops.attach\_volume * xenapi: fix bfv behavior when SR is not attached * Use consoleauth rpcapi in nova-novncproxy * Change install\_venv to use setup.py develop * Fixes syntax error in nova.tools.esx.guest\_tools.py * Allow local rbd user and secret\_uuid configuration * Set host prior to allocating
network information * Remove db access for block devices and network info on reboot * Remove db access for block devices on terminate\_instance * Check parameter 'marker' before making request to glance * Imported Translations from Transifex * Internationalize nova-manage * Imported Translations from Transifex * Fixes live\_migration missing migrate\_data parameter in Hyper-V driver * handles empty dhcp\_domain with hostname in metadata * xenapi: Tag volumes in boot from volume case * Stops compute api import at import time * Fix imports in openstack compute tests * Make run\_tests.sh fail if no tests are actually run * Implement snapshots for raw backend * Used instance uuid rather than id in remove-fixed-ip * Migrate DHCP host info during resize * read\_deleted snapshot and volume id mappings * Make sure sleep can be found * Pass correct task\_state on snapshot * Update run\_tests.sh pep8 ignore list for pep8 1.2 * Clean up imports in test\_servers * Revert "Tell SQLite to enforce foreign keys." * Add api samples to simple tenant usage extension * Avoid RPC calls while holding iptables lock * Add util for image conversion * Add util for disk type retrieval * Fixes test\_libvirt spawn\_with\_network\_info test * Remove unneeded temp variable * Add version to network rpc API * Remove cast\_to\_network from scheduler * Tell SQLite to enforce foreign keys * Use paramiko.AutoAddPolicy for the smoketests * nova-manage doesn't validate key to update the quota * Dis-associate an auto-assigned floating IP should return proper warning * Proxy floating IP calls to quantum * Handle invalid xml request to return BadRequest * Add api-samples to Used limits extension * handle IPv6 race condition due to hairpin mode * Imported Translations from Transifex * XenAPI should only snapshot root disk * Clarify trusted\_filter conf options * Fix pep8 error in bin/nova-manage * Set instance host field after resource claim * powervm: add polling timeout for LPAR stop command * Drop claim timeouts from resource tracker * Update kernel\_id and ramdisk\_id while rebuilding instance * Add Multiple Create extension to API sample tests * Fix typo in policy docstring * Fix reserve\_block\_device\_name while attach volume * Always use bdm in instance\_block\_mapping on Xen * Centralize sent\_meta definition * Move snapshot image property inheritance * Set read\_deleted='yes' for instance\_id\_mappings * Fix XML response for return\_reservation\_id * Stop network.api import on network import * libvirt: ignore deleted domain while get block dev * xenapi: Refactor snapshots during resize * powervm: remove broken instance filtering * Add ability to download images via BitTorrent * powervm: exception handling improvements * Return proper error messages while associating floating IP * Create util for root device path retrieval * Remove dependency on python-ldap for tests * Add api samples to Certificates extension * Add nova-cert service to integrated\_helpers * Compare lists in api samples against all matches * ip\_protocol for ec2 security groups * Remove unneeded lines from aggregates extension API sample tests * Remove deprecated Folsom code: config convert * Make resource tracker use faster DB query * Remove deprecated Folsom code: bandwith\_poll\_interval * Add TestCase.stub\_module to make stubbing modules easier * Imported Translations from Transifex * Update tools hacking for pep8 1.2 and beyond * Remove outdated moduleauthor tags * remove deprecated connection\_type flag * Add aggregates extension to API samples test
* Update RPM SPEC to include new bandwidth plugin
* Remove TestCase.assertNotRaises
* Imported Translations from Transifex
* Imported Translations from Transifex
* Use self.flags() instead of manipulating FLAGS by hand
* Use test.TestCase provided self.mox and self.stubs
* Remove unnecessary setUp, tearDown and __init__ in tests
* xenapi: implement resume_state_on_host_boot
* Revert "Add full test environment."
* Synchronize docstring with actual implementation
* Num instances scheduler filter
* Add api samples to cloudpipe extension
* Fix CloudPipe extension XML serialization
* Max I/O ops per host scheduler filter
* libvirt: continue detach if instance not found
* libvirt: allows attach and detach from all domains
* Fixes csv list required for qemu-img create
* Added compute node stats to HostState
* libvirt: Improve the idempotency of iscsi detach
* Pass block_device_info to destroy in revert_resize
* Enable list with no dict objects to be sorted in api samples
* Fixes error message for flavor-create duplicate ID
* Loosen anyjson dependency to avoid clash with ceilometer
* xenapi: make it easier to recover from failed migrations
* Remove unnecessary check if migration_ref is not None
* Bump the version of SQLAlchemy in pip-requires
* optimize slightly device lookup with LXC umounts
* Support for several HA RabbitMQ servers
* xenapi: Removing legacy swap-in-image
* xenapi: increase timeout for resetnetwork agent request
* Replaced default hostname function from gethostname to getfqdn
* Fix issues deleting instances in RESIZED state
* Modified 404 error response to show specific message
* Updated code to update attach_time of a volume while detaching
* Check that an image is active before spawning instances
* Fix issues with device autoassignment in xenapi
* Deleting security group does not mark rules as deleted
* Collect more accurate bandwidth data for XenServer
* Zmq register opts fix in receiver
* Revert explicit usage of tgt-adm --conf option
* Fix booting a raw image on XenServer
* Add servers/ips api_samples tests
* LOG.exception() should only be used in exception handler
* Fix XenServer's ability to boot xen type images
* all_extensions api_samples testing for server actions
* Fixes remove_export for IetAdm
* libvirt: Fix _cleanup_resize
* Imported Translations from Transifex
* xenapi: fix undefined variable in logging message
* Spelling: ownz=>owns
* Fix NetAppCmodeISCSIDriver._get_lun_handle() method
* Integration tests virtual interfaces API extension
* Allow deletion of instance with failed vol cleanup
* Fixes snapshotting of instances booted from volume
* Move fakeldap.py from auth dir to tests
* Remove refs to ATAoE from nova docs
* Imported Translations from Transifex
* Set volume status to error if scheduling fails
* Update volume detach smoke test to check status
* Fix config opts for Storwize/SVC volume driver
* Ensure hybrid driver creates veth pair only once
* Cleanup exception handling
* Imported Translations from Transifex
* Add lun number (0) to model_update in HpSanDriver
* libvirt: return after soft reboot successfully completes
* Fixes to the SolarisISCSI Driver
* Fix live migration when volumes are attached
* Clarify dangerous use of exceptions in unit tests
* Cleanup test_api_samples:_compare_result
* Fix testContextClaimWithException
* Fix solidfire unit tests
* Stop double logging to the console
* Recreate nw_info after auto assigning floating ip
* Re-generate sample config file
* Use test.TestingException instead of duplicating it
* Fix startup with DELETED instances
* Fix solidfire option declaration
* Restore SIGPIPE default action for subprocesses
* Raise NotFound for non-existent volume snapshot create
* Catch NotFound exception in FloatingIP add/remove
* Adds API sample testing for rescue API extension
* Fix bugs in resource tracker and cleanup
* Replace builtin hash with MD5 to solve 32/64-bit issues
* Properly create and delete Aggregates
* No stack trace on bad nova aggregate-* command
* Clean up test_state_revert
* Fix aggregate_hosts.host migration for sqlite
* Call compute manager methods with instance as keyword argument
* Adds deserialization for block_device_mapping
* Fix marker pagination for /servers
* Send api.fault notification on API service faults
* Always yield to other greenthreads after database calls
* fix unused import
* Don't include auto_assigned ips in usage
* Correct IetAdm remove_iscsi_target
* Cleanup unused import in manager.py
* xapi: fix create hypervisor pool
* Bump version to 2013.1
* Add Keypairs extension to API samples test
* sample api testing for os-floating-ips extension
* Update quota when deleting volume that failed to be scheduled
* Update scheduler rpc API version
* Added script to find unused config options
* Make sure to return an empty subnet list for a network without subnet
* Fix race condition in CacheConcurrencyTestCase
* Makes scheduler hints and disk config xml correct
* Add lookup by ip via Quantum for metadata service
* Fix over rate limit error response
* Add deserialization for multiple create and az
* Fix doc/README.rst to render properly
* Add user-data extension to API samples tests
* Adds API sample testing for Extended server attributes extension
* Inherit the base image's qcow2 properties
* Correct db migration 91
* make ensure_default_security_group() call sgh
* add ability to clone images
* add get_location method for images
* Adds new volume API extensions
* Add console output extension to API samples test
* Raise BadRequest while creating server with invalid personality
* Update 'unlimited' quota value to '-1' in db
* Modified 404 error response for server actions
* Fix volume id conversion in nova-manage volume
* Improve error handling of scheduler
* Fixes error handling during schedule_run_instance
* Include volume_metadata with object on vol create
* Reset the task state after backup done
* Allows waiting timers in libvirt to raise NotFound
* Improve entity validation in volumes APIs
* Fix volume deletion when device mapper is used
* Add man pages
* Make DeregisterImage respect AWS EC2 specification
* Deserialize user_data in xml servers request
* Add api samples to Scheduler hints extension
* Include Schedule Hints deserialization to XML API
* Add admin actions extension
* Allow older versions of libvirt to delete vms
* Add security groups extension to API samples test
* Sync a change to rpc from openstack-common
* Add api_samples tests for servers actions
* Fix XML deserialization of rebuild parameters
* All security groups not returned to admins by default
* libvirt: Cleanup L2 and L3 rules when confirming vm resize
* Corrects use of instance_uuid for fixed ip
* Clean up handling of project_only in network_get
* Add README for doc folder
* Correct typo in memory_mb_limit filter property
* Add more useful logging around the unmount fail case
* Imported Translations from Transifex
* Make compute/manager.py use self.host instead of FLAGS.host
* Add a resume delete on volume manager startup
* Remove useless _get_key_name() in servers API
* Add entity body validation helper
* Add 422 unit test for servers API
* Use tmpdir and avoid leaving test files behind
* Includes sec group quota details in limits API response
* Fixes import issue on Windows
* Overload comment in generated SSH keys
* Validate keypair create request body
* Add reservations parameter when casting "create_volume" to volume manager
* Return 400 if create volume snapshot force parameter is invalid
* Fix FLAGS.volumes_dir help message
* Adds more servers list and servers details samples
* Makes key_name show in details view of servers
* Avoid VM task state revert on instance termination
* Avoid live migrate overwriting the other task_state
* Backport changes from Cinder to Nova-Volume
* Check flavor id on resize
* Rename _unplug_vifs to unplug_vifs
* PowerVM: Establish SSH connection at use time
* libvirt: Fix live block migration
* Change comment for function _destroy
* Stop fetch_ca from throwing IOError exceptions
* Add 'detaching' to volume status
* Reset task state before rescheduling
* workaround lack of quantum/nova floatingip integration
* fix rpcapi version
* Added description of operators for extra_specs
* Convert to ints in VlanManager.create_networks
* Remove unused AddressAlreadyAllocated exception
* Remove an unused import
* Make ip block splitting a bit more self documenting
* Prevent Partial terminations in EC2
* Add flag cinder_endpoint_template to volume.cinder
* Handle missing network_size in nova-manage
* Adds API sample test for Flavors Extra Data extension
* More specific lxml versions in tools/pip-requires
* Fixes snat rules in complex networking configs
* Fix flavor deletion when there is a deleted flavor
* Make size optional when creating a volume from a snapshot
* Add documentation for scheduler filters scope
* Add and fix tests for attaching volumes
* Fix auth parameter passed to libvirt openAuth() method
* xapi: Fix live block migration
* Add a criterion to sort a list of dicts in api samples
* delete a module never used
* Update SolidFire volume driver
* Adds get_available_resource to hyperv driver
* Create image of volume-backed instance via native API
* Improve floating IP delete speed
* Have device mapping use autocreated device nodes
* remove a never used import
* fix unmounting of LXC containers in the presence of symlinks
* Execute attach_time query earlier in migration 98
* Add ServerStartStop extension API test
* Set install_requires in setup.py
* Add Server Detail and Metadata tests
* xenapi: Make dom0 serialization consistent
* Refer to correct column names in migration 98
* Correct ephemeral disk cache filename
* Stop lock decorator from leaving tempdirs in tests
* Handle missing 'provider_location' in rm_export
* Nail the pip requirement at 1.1
* Fix typo in tgtadm LOG.error() call
* Call driver for attach/detach_volume
* rbd: implement create_volume_from_snapshot
* Use volume driver specific exceptions
* Fake requests in tests should be to v1
* Implement paginate query using marker in nova-api
* Simplify setting up test notifier
* Specify the conf file when creating a volume
* Generate a flavorid if needed at flavor creation
* Fix EC2 cinder volume creation as an admin user
* Allow cinder catalog match values to be configured
* Fix synchronized decorator path cleanup
* Fix and cleanup compute node stat tracking
* avoid the buffer cache when copying volumes
* Add missing argument to novncproxy websockify call
* Use lvs instead of os.listdir in _cleanup_lvm
* Fixing call to hasManagedSaveImage
* Fix typo in simple_tenant_usage tests
* Move api_samples to doc dir
* Add a tunable to control how many ARPs are sent
* Get the extension alias to compose the path to save the api samples
* Add scope to extra_specs entries
* Use bare container format by default
* Sync some updates from openstack-common
* Fix simple_tenant_usage's handling of future end times
* Yield to another greenthread when a time-consuming task finishes
* Automatically convert device names
* Fix creation of iscsi targets
* Makes sure new flavors default to is_public=True
* Optimizes flavor_access to not make a db request
* Escape ec2 XML error responses
* Skip tests in OSX due to readlink compat
* Allow admins to de-allocate any floating IPs
* Fix xml metadata for volumes api in nova-volume
* Re-attach volumes after instance resize
* Speed up creating floating ips
* Adds API sample test for limits
* Fix vmwareapi driver spawn() signature
* Fix hyperv driver spawn() signature
* Add API samples to images api
* Add method to manage 'put' requests in api-sample tests
* Add full python path to test stubbing modules for libvirt
* Rename imagebackend arguments
* Fixes sqlalchemy.api.compute_node_get_by_host
* Fix instances query for compute stats
* Allow hard reboot of a soft rebooting instance
* On rebuild, the compute.instance.exists
* Fix quota reservation expiration
* Add api sample tests for flavors endpoint
* Add extensions for flavor swap and rxtx_factor
* Address race condition from concurrent task state update
* Makes sample testing handle out of order output
* Avoid leaking security group quota reservations
* Save the original base image ref for snapshots
* Fixed boot from snapshot failure
* Update zmq context cleanup to use term
* Fix deallocate_fixed_ip invocation
* fix issues with Nova security groups and Quantum
* Clear up the .gitignore file
* Allow for deleting VMs from down compute nodes
* Update nova-rpc-zmq-receiver to load nova.conf
* FLAG rename: bandwith_poll_*=>bandwidth_poll_*
* Spelling: Persistant=>Persistent
* Fix xml metadata for volumes extension
* delete unused variables
* Clean up non-spec output in flavor extensions
* Adds api sample testing for extensions endpoint
* Makes api extension names consistent
* Fixes spawn method signature for PowerVM driver
* Spelling fix Retrive=> Retrieve
* Update requires to glanceclient >=0.5.0
* Sort API extensions by alias
* Remove scheduler RPC API version 1.x
* Add version 2.0 of the scheduler RPC API
* Remove some remnants of VSA support
* hacking: Add driver prefix recommendation
* Implements PowerVM get_available_resource method
* Add a new exception for live migration
* Assume virt disk size is consumed by instances
* External locking for image caching
* Stop using scheduler RPC API magic
* Adds api sample testing for versions
* Do not run pylint by default
* Remove compute RPC API version 1.x
* Add version 2.0 of compute RPC API
* Accept role list from either X-Roles or X-Role
* Fix PEP8 issues
* Fix KeyError when test_servers_get fails
* Update nova.conf.sample
* Fixes backwards compatible rpc schedule_run
* Include launch-index in openstack style metadata
* Port existing code to utils.ensure_tree
* Correct utils.execute() to check 0 in check_exit_code
* Add the self parameter to NoopFirewallDriver methods
* request_spec['instance_uuids'] as list in resize
* Fix column variable typo
* Add ops to aggregate_instance_extra_specs filter
* Implement project specific flavors API
* Correct live_migration rpc call in test
* Allow connecting to an ssl-based glance
* Move ensure_tree to utils
* Define default mode and device_id_string in Mount
* Update .mailmap
* Fix path to example extension implementation
* Remove test_keypair_create_quota_limit()
* Remove duplicated test_migrate_disk_and_power_off()
* Add missing import webob.exc
* Fix broken SimpleScheduler.schedule_run_instance()
* Add missing user_id in revoke_certs_by_user_and_project()
* Rename class_name to project_id
* Use the compute_rpcapi instance not the module
* Remove duplicated method VM_migrate_send
* Add missing context argument to start_transfer calls
* Remove unused permitted_instance_types
* Add lintstack error checker based on pylint
* Make pre block migration create correct disk files
* Remove unused and old methods in hyperv and powervm driver
* Trap iscsiadm error
* Check volume status before detaching
* Simplify network create logic
* Clean up network create exception handling
* Adding indexes to frequently joined database columns
* Ensure hairpin_mode is set whenever vifs is added to bridge
* Returns hypervisor_hostname in xml of extension
* Adds integration testing for api samples
* Fix deallocate_fixed_ip() call by unifying signature
* Make instance_update_and_get_original() atomic
* Remove unused flags
* Remove test_instance_update_with_instance_id test
* Remove unused instance id-to-uuid function
* Re-work the handling of firewall_driver default
* Include CommonConfigOpts options in sample config
* Re-generate nova.conf.sample
* Ensure log formats are quoted in sample conf
* Don't include hostname and IP in generated sample conf
* Allow generate_sample.sh to be run from toplevel dir
* Let admin list instances in vm_states.DELETED
* Return actual availability zones
* Provide a hint for missing EC2 image ids
* Check association when removing floating ip
* Add public network support when launching an instance
* Re-define libvirt domain on "not found" exception
* Add two prereq pkgs to nova devref env guide
* Fix hyperv Cfgs: StrOpt to IntOpt
* continue deleting instance even if quantum port delete fails
* Typo fix: existant => existent
* Fix hacking.py git checks to propagate errors
* Don't show user-data when it's not sent
* Clarify nwfilter not found error message
* Remove unused _create_network_filters()
* Adds missing assertion to FloatingIP tests
* Restore imagebackend in test_virt_drivers.py
* Add nosehtmloutput as a test dependency
* Remove unused exceptions from nova/exception.py
* Cleanup pip dependencies
* Make glance image service check base exception classes
* Add deprecated warning to SimpleScheduler
* Have compute_node_get() join 'service'
* XCP-XAPI version fix
* add availability_zone to openstack metadata
* Remove stub_network flag
* Implements sending notification on metadata change
* Code clean up
* Implement network creation in compute API
* Debugged extra_specs_ops.py
* Fix typo in call in cinder.API unreserve_volume
* xenapi: Tag nova volumes during attach_volume
* Allow network to call get_fixed_ip_by_address
* Add key_name attribute in XML servers API
* Fix is_admin check via policy
* Keep the ComputeNode model updated with usage
* Remove hard-coded 'admin' role checking and use policy instead
* Introduce ImagePropertiesFilter scheduler filter
* Return HTTP 422 on bad server update PUT request
* Makes sure instance deletion ok with deleted data
* OpenStack capitalization added to HACKING.rst
* Fix get_vnc_console race
* Fix a TypeError that occurs in _reschedule
* Make missing imports flag in hacking settable
* Makes sure tests don't leave lockfiles around
* Update FilterScheduler doc
* Disable I18N in Nova's test suites
* Remove logging in volume tests
* Refactor extra specs matching into a new module
* Fix regression in compute_capabilities filter
* Refactor ComputeCapabilitiesFilter test cases
* Revert per-user-quotas
* Remove unused imports
* Fix PEP8 issues
* Sync changes from openstack common
* Implement GET (show) in OS API keypairs extension
* Fix spelling typos
* Ignoring *.sw[op] files
* xenapi: attach root disk during rescue before boot
* Allows libvirt to set a serial number for a volume
* Adds support for serial to libvirt config disks
* Remove unused variables
* Always create the run_instance records locally
* Fix use of non-existent var pool
* Adds Hyper-V support in nova-compute (with new network_info model), including unit tests
* Update sqlite to use PoolEvents for regexp
* Remove unused function in console api
* Allow nova to guess device if not passed to attach
* Update disk config to check for 'server' in req
* Changes default behavior of ec2
* Make ComputeFilter verify compute-related instance properties
* Collect instance capabilities from compute nodes
* Move volume size validation to api layer
* Change IPtablesManager to preserve packet:byte counts
* Add get_key_pair to compute API
* Defined IMPL in global ipv6 namespace
* xenapi: remove unnecessary json decoding of injected_files
* Remove unnecessary try/finally from snapshot
* Port pre_block_migration to new image caching
* Adding port attribute in network parameter of boot
* Add support for NFS-based virtual block devices
* Remove assigned, but unused variables from nova/db/sqlalchemy/api.py
* xenapi: Support live migration without pools
* Restore libvirt block storage connections on reboot
* Added several operators on instance_type_extra_specs
* Revert to prior method of executing a libvirt hard_reboot
* Set task_state=None when finished snapshotting
* Implement get_host_uptime in libvirt driver
* continue config-drive-v2, add openstack metadata api
* Return values from wrapped functions in decorators
* Allow XML payload for volume creation
* Add PowerVM compute driver and unit tests
* Revert task_state on failed instance actions
* Fix uuid related bug in console/api
* Validate that min_count & max_count parameters are numeric
* Allow stop API to be called in Error
* Enforce quota limitations for instance resize
* Fix rpc error with live_migration
* Simple checks for instance user data
* Change time.sleep to greenthread.sleep
* Add missing self. for parent
* Rewrite image code to use python-glanceclient
* Fix rpc error with live_migration
* volumes: fix check_for_export() in non-exporting volume drivers
* Avoid {} and [] as default arguments
* Improve bw_usage_update() performance
* Update extra specs calls to use deleted: False
* Don't stuff non-db data into instance dict
* Fix type error in state comparison
* update python-quantumclient dependency to >=2.0
* Key auto_disk_config in create server off of ext
* Implement network association in OS API
* Fix TypeError conversion in API layer
* Key requested_networks off of network extension
* Key config_drive off of config-drive extension
* Make sure reservations is initialized
* import module, not type
* Config drive v2
* Don't accept key_name if not enabled
* Fix HTTP 500 on bad server create
* Default behavior should restrict admins to tenant for volumes
* remove nova code related to Quantum v1 API
* Make sure ec2 mapping raises proper exceptions
* Send host not ComputeNode into uptime RPC call
* Making security group refresh more specific
* Sync with latest version of openstack.common.cfg
* Sync some cleanups from openstack.common
* maint: compare singletons with 'is' not '=='
* Compute restart causes period of network 'blackout'
* Revert "Remove unused add_network_to_project() method"
* Add error log for live migration
* Make FaultWrapper handle exception code = None
* Don't accept scheduler_hints if not enabled
* Avoid double-reduction of quota for repeated delete
* Traceback when over allocating IP addresses
* xenapi: ensure all calls to agent get logged
* Make update_db an opt arg in scheduler manager
* Key min_count, max_count, ret_res_id off of ext
* Key availability_zone in create server off of ext
* Fix the inject_metadata_into_fs in the disk API
* Send updated instance model to schedule_prep_resize
* Create unique volumes_dir for testing
* Fix stale instances being sent over rpc
* Fix setting admin_pass in rescue command
* Key user_data in create server off of extension
* Key block_device_mapping off of volume extension
* Moves security group functionality into extension
* Adds ability to inherit wsgi extensions
* Fixes KeyError when trying to rescue an instance
* Make TerminateInstances compatible with EC2 api
* Uniqueness checks for floating ip addresses
* Driver for IBM Storwize and SVC storage
* scheduler prep_resize should not update instance['host']
* Add a 50 char git title limit test to hacking
* Fix a bug on remove_volume_connection in compute/manager.py
* Fix a bug on db.instance_get_by_uuid in compute/manager.py
* Make libvirt_use_virtio_for_bridges flag work for all drivers
* xenapi: reduce polling interval for agent
* xenapi: wait for agent resetnetwork response
* Fix invalid exception format strings
* General host aggregates part 2
* Update devref for general host aggregates
* Cleanup consoles test cases
* Return 409 error if get_vnc_console is called before VM is created
* Move results filtering to db
* Prohibit file injection writing to host filesystem
* Added updated locations for iscsiadm
* Check against unexpected method call
* Remove deprecated use of Exception.message
* Remove temporary hack from checks_instance_lock
* Remove temporary hack from wrap_instance_fault
* Fix up some instance_uuid usage
* Update vmops to access metadata as dict
* Improve external locking on Windows
* Fix traceback when detaching volumes via EC2
* Update RPC code from common
* Fixes parameter passing to tgt-admin for iscsi
* Solve possible race in semaphore creation
* Rename private methods of compute manager
* Send full instance to compute live_migration
* Add underscore in front of post_live_migration
* Send full instance to scheduler live_migration
* Send full instance to run_instance
* Use dict style access for image_ref
* Use explicit arguments in compute manager run_instance
* Remove topic from scheduler run_instance
* Use explicit args in run_instance scheduler code
* Update args to _set_vm_state_and_notify
* Reduce db access in prep_resize in the compute manager
* Remove instance_id fallback from cast_to_compute_host()
* Remove unused InstanceInfo class
* Adds per-user-quotas support for more detailed quotas management
* Remove list_instances_detail from compute drivers
* Move root_helper deprecation warning into execute
* Flavor extra specs extension use instance_type id
* Fix test_resize_xcp testcase - it never ran
* tests: avoid traceback warning in test_live_migration
* ensure_tree calls mkdir -p
* Only log deprecated config warnings once
* Handle NetworkNotFound in _shutdown_instance
* Drop AES functions and pycrypto dependency
* Simplify file hashing
* Allow loaded extensions to be checked from servers
* Make extension aliases consistent
* Remove old exception type
* Fix test classes collision
* Remove unused variables
* Fix notification logic
* Improve external lock implementation
* maint: remove an unused import in libvirt.driver
* Require eventlet >= 0.9.17
* Remove **kwargs from prep_resize in compute manager
* Updates to the prep_resize scheduler rpc call
* Migrate a notifier patch from common:
* Update list_instances to catch libvirtError
* Audit log messages in nova/compute/api.py
* Rename _self to self according to Python convention
* import missing module time
* Remove unused variables
* Handle InstanceNotFound in libvirt list_instances
* Fix broken pep8 exclude processing
* Update reset_db to call setup if _DB is None
* Migrate a logging change from common:
* Send 'create volume from snapshot' to the proper host
* Fix regression with nova-manage floating list
* Remove unused imports
* Simple refactor of some db api tests
* fix unmounting of LXC containers
* Update usage of 'ip' to handle more return codes
* Use function registration for policy checks
* Check instance lock in compute/api
* Fix a comment typo in db api
* Audit log messages in nova/compute/manager.py
* XenAPI: Add script to destroy cached images
* Fix typo in db test
* Fix issue with filtering where a value is unicode
* Avoid using logging in signal handler
* Fix traceback when using s3
* Don't pass kernel args to Xen HVM instances
* Sync w/ latest openstack common log.py
* Pass a full instance to rotate_backups()
* Remove agent_update from the compute manager
* Move tests.test_compute_utils into tests.compute
* Send a full instance in terminate_instance
* maint: don't require write access when reading files
* Fix get_diagnostics RPC arg ordering
* Fix failed iscsi tgt delete errors with new tgtadm
* Deprecate root_helper in favor of rootwrap_config
* Use instance_get instead of instance_by
* Clarify TooManyInstances exception message
* Setting root passwd no longer fails silently
* XenAPI: Fix race-condition with cached images
* Prevent instance_info_cache from being altered post instance
* Update targets information when creating target
* Avoid recursion from @refresh_cache
* Send a full instance in change_instance_metadata
* Send a full instance in unrescue_instance
* Add check exit codes for vlans
* Compute: Error out instance on rebuild and resize
* Partially revert "Remove unused scheduler functions"
* Use event.listen() instead of deprecated listeners kwarg
* Avoid associating floating IP with two instances
* Tidy up nova.image.glance
* Fix arg to get_instance_volume_block_device_info()
* Send a full instance in snapshot_instance
* Send a full instance in set_admin_password
* Send a full instance in revert_resize
* Send a full instance in rescue_instance
* Send a full instance in remove_volume_connection
* Send a full instance in rollback_live_migration_at_destination
* Send a full instance in resume_instance
* Send a full instance in resize_instance
* Send a full instance in reset_network
* Convert virtual_interfaces to using instance_uuid
* Compute: VM-Mode should use instance dict
* Fix image_type=base after snapshot
* Send a full instance in remove_fixed_ip_from_instance
* Send a full instance in rebuild_instance
* Reverts fix lp1031004
* sync openstack-common log changes with nova
* Set default keystone auth_token signing_dir loc
* Resize.end now includes the correct instance_type
* Fix rootwrapper with tgt-admin
* Use common parse_isotime in GlanceImageService
* Xen: VHD sequence validation should handle swap
* Revert "Check for selinux before setting up selinux."
* reduce debugging from utils.trycmd()
* Avoid error during snapshot of ISO booted instance
* Add a link from HACKING to wiki GitCommitMessages page
* Instance cleanups from detach_volumes
* Check for selinux before setting up selinux
* Prefer instance in reboot_instance
* maint: libvirt imagecache: remove redundant interpreter spec
* Support external gateways in VLAN mode
* Turn on base image cleanup by default
* Make compute only auto-confirm its own instances
* Fix state logic for auto-confirm resizes
* Explicitly send primitive instances via rpc
* Allow _destroy_vdis if a mapping has no VDI
* Correct host count in instance_usage_audit_log extension
* Return location header on volume creation
* Add persistent volumes for tgtd
* xenapi: Use instance uuid when calling DB API
* Fix HACKING violation in nova/api/openstack/volume/types.py
* Remove ugly instance._rescue hack
* Convert to using dict style key lookups in XenAPI
* Implements notifications for more instance changes
* Fix ip6tables support in xenapi bug 934603
* Moving where the fixed ip deallocation happens
* Sanitize xenstore keys for metadata injection
* Don't store system_metadata in xenstore
* use REDIRECT to forward local metadata request
* Only enforce valid uuids if a uuid is passed
* Send a full instance in pre_live_migration
* Send a full instance in power_on_instance and start_instance
* Send a full instance in power_off_instance and stop_instance
* Make instance_uuid backwards compat actually work
* Send a full instance via rpc for post_live_migration_at_destination
* Send a full instance via rpc for inject_network_info
* Send a full instance via rpc for inject_file
* Send a full instance via rpc for get_vnc_console
* Remove get_instance_disk_info from compute rpcapi
* Send a full instance via rpc for get_diagnostics
* Send a full instance via rpc for finish_revert_resize
* Ensure instance is moved to ERROR on suspend failure
* Avoid using 'is' operator when comparing strings
* Revert "Add additional capabilities for computes"
* Allow power_off when instance doesn't exist
* Fix resizing VDIs on XenServer >= 6
* Refactor glance image service code
* Don't import libvirt_utils in disk api
* Call correct implementation for quota_destroy_all_by_project
* Remove return values from some compute RPC methods
* Reinstate instance locked error logging
* Send a full instance via rpc for finish_resize
* Fix exception handling in libvirt attach_volume()
* Convert fixed_ips to using instance_uuid
* Trim volume type representation
* Fix a couple of PEP8 nits
* Replace subprocess.check_output with Popen
* libvirt driver: set os_type to support xen hvm/pv
* Include architecture in instance base options passed to the scheduler
* Fix typo of localhost's IP
* Enhance nova-manage to set flavor extra specs
* Send a full instance via rpc for detach_volume
* Remove unused methods from compute rpcapi
* Send a full instance via rpc for confirm_resize
* Send a full instance via rpc for check_can_live_migrate_source
* Send a full instance via rpc for check_can_live_migrate_destination
* Remove unused scheduler functions
* Send a full instance via rpc for attach_volume
* Send a full instance via rpc for add_fixed_ip_to_instance
* Send a full instance via rpc for get_console_output
* Send a full instance via rpc for suspend_instance
* Send a full instance via rpc for (un)pause_instance
* Don't use rpc to lock/unlock an instance
* Convert reboot_instance to take a full instance
* Update decorators in compute manager
* Include name in a primitive Instance
* Shrink Simple Scheduler
* Allow soft deletes from any state
* Handle NULL deleted_at in migration 112
* Add support for snapshots and volume types to netapp driver
* Inject instance metadata into xenstore
* Add missing tempfile import to libvirt driver
* Fix docstring for SecurityGroupHandlerBase
* Don't log debug auth token when using cinder
* Remove temporary variable
* Define cross-driver standardized vm_mode values
* Check for exception codes in openstack API results
* Add missing parameters to nova's cinder api
* libvirt driver: set driver name consistently
* Allow floating IP pools to be deleted
* Fixes console/vmrc_manager.py import error
* EC2 DescribeImageAttribute by kernel/ramdisk
* Xen: Add race-condition troubleshooting script
* Return 400 in get_console_output for bad length
* update compute_fill_first_cost_fn docstring
* Xen: Validate VHD footer timestamps
* Xen: Ensure snapshot is torn down on error
* Provide rootwrap filters for nova-api-metadata
* Fix a bug in compute_node_statistics
* refactor all uses of the `qemu-img info` command
* Xen: Fix snapshots when use_cow=True
* tests: remove misleading docstrings on libvirt tests
* Update NovaKeystoneContext to use jsonutils
* Use compute_driver in vmware driver help messages
* Use compute_driver in xenapi driver help messages
* Add call to get hypervisor statistics
* Adds xcp disk resize support
* Log snapshot UUID and not OpaqueRef
* Remove unused user_id and project_id arguments
* Fix wrong regex in cleanup_file_locks
* Update jsonutils from openstack-common
* Return 404 when attempting to remove a non-existent floating ip
* Implements config_drive as extension
* use boto's HTTPResponse class for versions of boto >=2.5.2
* Migrations for deleted data for previously deleted instances
* Add image_name to create and rebuild notifications
* Make it clear subnet_bits is unused in ipam case
* Remove unused add_network_to_project() method
* Adding networking rules to VMs on compute service startup
* Avoid unrecognized content-type message
* Updates migration 111 to work w/ Postgres
* fixes for nova-manage not returning a full list of fixed IPs
* Adds non_inheritable_image_properties flag
* Add git commit message validation to hacking.py
* Remove unnecessary use of with_lockmode
* Improve VDI chain logging
* Remove profane words
* Adds logging for renaming and hardlinking
* Don't create volumes if an incorrect size was given
* set correct SELinux context for injected ssh keys
* Fixes nova-manage fixed list with deleted networks
* Move libvirt disk config setup out of main get_guest_config method
* Refactor libvirt imagebackend module to reduce code duplication
* Move more libvirt disk setup into the imagebackend module
* Don't hardcode use of 'virtio' for root disk in libvirt driver
* Ensure to use 'hdN' for IDE disk device in libvirt driver
* Don't set device='cdrom' for all disks in libvirt driver
* Move setup of libvirt disk cachemode into imagebackend module
* Get rid of pointless 'suffix' parameter in libvirt imagebackend
* Revert "Attach ISO as separate disk if given proper instruction"
* Ensure VHDs in staging area are sequenced properly
* Fix error in error handler in instance_usage_audit task
* Fix SQL deadlock in quota reservations
* Ensure 413 response for security group over-quota
* fixes for nova-manage network list if network has been deleted
* Allow NoMoreFloatingIps to bubble up to FaultWrapper
* Fix cloudpipe keypair creation. Add pipelib tests
* Don't let failure to delete filesystem block deletion of instances in libvirt
* Static FaultWrapper status_to_type map
* Make flavorextradata ignore deleted flavors
* Tidy up handling of exceptions in floating_ip_dns
* Raise NotImplementedError, not NotImplemented singleton
* Fix the mis-use of NotImplemented
* Update FilterSchedulerTestCase docstring
* Remove unused testing.fake
* Make snapshot work for stopped VMs
* Split ComputeFilter up
* Show all absolute quota limits in /limits
* Info log to see which compute driver has loaded
* Rename get_lock() to _get_lock()
* Remove obsolete line in host_manager
* improve efficiency of image transfer during migration
* Remove unused get_version_from_href()
* Add debug output to RamFilter
* Fixes bare-metal spawn error
* Adds generic retries for build failures
* Fix docstring typo
* Fixes XenAPI driver import in vm_vdi_cleaner
* Display key_name only if keypairs extension is used
* Fix EC2 CreateImage no_reboot logic
* Reject EC2 CreateImage for instance-store
* EC2 DescribeImages reports correct rootDeviceType
* Support EC2 CreateImage API for boot-from-volume
* remove unused clauses[] variable
* Partially implements blueprint xenapi-live-migration
* Improved VM detection for bandwidth polling (XAPI)
* Sync jsonutils from openstack-common
* Adding granularity for quotas to list and update
* Remove VDI chain limit for migrations
* Refactoring required for blueprint xenapi-live-migration
* Add the plugin framework from common; use and test
* Catch rpc up to the common state-of-the-art
* Support requested_networks with quantum v2
* Return 413 status on over-quota in the native API
* Fix venv wrapper to clean *.pyc
* Use all deps for tools/hacking.py tests in tox
* bug 1024557
* General-host-aggregates part 1
* Attach ISO as separate disk if given proper instruction
* Extension to show usage of limited resources in /limits response
* Fix SADeprecationWarning: useexisting is deprecated
* Fix spelling in docstrings
* Fix RuntimeWarning nova_manage not found
* Exclude openstack-common from pep8 checks
* Use explicit destination user in xenapi rsync call
* Sync gettextutils fixes from openstack-common
* Sync importutils from openstack-common
* Sync cfg from openstack-common
* Add SKIP_WRITE_GIT_CHANGELOG to setup.py
* Remove unnecessary logging from API
* Sync a commit from openstack-common
* Fix typo in docstring
* Remove VDI chain limit for snapshots
* Adds snapshot_attached_here contextmanager
* Change base rpc version to 1.0 in compute rpcapi
* Use _lookup_by_name instead of _conn.lookupByName
* Use the dict syntax instead of attribute to access db objects
* Raise HTTP 500 if service catalog is not json
* Floating_ip create /31,32 shouldn't silently error
* Convert remaining network API casts to calls
* network manager returns an empty list rather than raising an exception
* add network creation call to network.api.API
* overridden VlanManager.create_networks must return a result
* When over quota for floating ips, return HTTPRequestEntityTooLarge
* Remove deprecated auth-related db code
* Fix .mailmap to generate unique AUTHORS list
* Imports base64 to fix xen file injection
* Remove deprecated auth from GlanceImageService
* Adds bootlocking to the xenserver suspend and resume
* ensure libguestfs mounts are cleaned up
* Making docs pretty!
* allows setting accessIPvs to null via update call
* Re-add nova.virt.driver import to xenapi driver
* Always attempt to delete entire floating IP range
* Adds network labels to the fixed ips in usages
* only mount guest image once when injecting files
* Remove unused find_data_files function in setup.py
* Use compute_api.get_all in affinity filters
* Refactors more snapshot code into vm_utils
* Clarifying which vm_utils functions are private
* Refactor instance_usage_audit. Add audit tasklog
* Fixes api failing to unpack metadata when using cinder
* Remove deprecated auth docs
* Raise Failure exception when setting duplicate other_config key
* Split xenapi agent code out to nova.virt.xenapi.agent
* ensure libguestfs has completed before proceeding
* flags documentation to deprecate connection_type
* refactor baremetal/proxy => baremetal/driver
* refactor xenapi/connection => xenapi/driver
* refactor vmwareapi_conn => vmwareapi/driver
* Don't block instance delete on missing block device volume
* Adds diagnostics command for the libvirt driver
* associate_floating_ip an ip already in use
* When deleting an instance, avoid freakout if iscsi device is gone
* Expose over-quota exceptions via native API
* Fix snapshots tests failing bug 1022670
* Remove deprecated auth code
* Remove deprecated auth-related api extensions
* Make pep8 test work on Mac
* Avoid lazy-loading errors on instance_type
* Fetch kernel/ramdisk images directly
* Ignore failure to delete kernel/ramdisk in xenapi driver
* Boot from volume for Xen
* Fix 'instance %s: snapshotting' log message
* Fix KeyError 'key_name' when KeyPairExists raised
* Propagate setup.py change from common
* Properly name openstack.common.exception
* Janitorial: Catch rpc up with a change in common
* Make reboot work for halted xenapi instances
* Removed a bunch of cruft files
* Update common setup code to latest
* fix metadata file injection with xen
* Switch to common notifiers
* Implements updating complete bw usage data
* Fix rpc import path in nova-novncproxy
* This patch stops metadata from being deleted when an instance is deleted
* Set the default CPU mode to 'host-model' for Libvirt KVM/QEMU guests
* Fallback to fakelibvirt in test_libvirt.py test suite
* Properly track VBD and VDI connections in xenapi fake
* modify hacking.py to not choke on the def of _()
* sort .gitignore for readability
* ignore project files for eclipse/pydev
* Add checks for retrieving deleted instance metadata for notification events
* Allow network_uuids that begin with a prefix
* Correct typo in tools/hacking.py l18n -> i18n
* Add *.egg* to .gitignore
* Remove auth-related nova-manage commands
* Remove unnecessary target_host flag in xenapi driver tests
* Remove unnecessary setUp() method in xenapi driver tests
* Finish AUTHORS transition
* Don't catch & ignore exceptions when setting up LXC container filesystems
* Ensure system metadata is sent on new image creation
* Distinguish over-quota for volume size and number
* Assign service_catalog in NovaKeystoneContext
* Fix some hacking violations in the quantum tests
* Fix missing nova.log change to nova.openstack.common.log
* Add Cinder Volume API to Nova
* Modifies ec2/cloud to be able to use Cinder
* Fix nova-rpc-zmq-receiver
* Drop xenapi session.get_imported_xenapi()
* Fix assertRaises(Exception, ...) HACKING violation
* Make it possible to store snapshots outside the /tmp directory
* Prevent file injection writing to host filesystem
* Implement nova network API for quantum API 2.0
* Expand HACKING with commit message guidelines
* Add ServiceCatalog entries to enable Cinder usage
* Pass vdi_ref to fake.create_vbd() not a string
* Switch to common logging
* use import_object_ns for compute_driver loading
* Add compatibility for CPU model config with libvirt < 0.9.10
* Sync rpc from openstack-common
* Redefine the domain's XML on volume attach/detach
* Sync jsonutils from openstack-common
* Sync iniparser from openstack-common
* Sync latest importutils from openstack-common
* Sync excutils from openstack-common
* Sync cfg from openstack-common
* Add missing gettextutils from openstack-common
* Run hacking tests as part of the gate
* Remove duplicate volume_id
* Make metadata content match the requested version of the metadata API
* Create instance in DB before block device mapping
* Get hypervisor uptime
* Refactoring code to kernel Dom0 plugin
* Ability to read deleted system metadata records
* Add check for no domains in libvirt driver
* Remove passing superfluous read_deleted argument
* Flesh out the README file with a little more useful information
* Remove unused 'get_open_port' method from libvirt utils
* deallocate_fixed_ip attempts to update deleted ip
* Dom0 plugin now returns data in proper format
* Add PEP8 checks back for Dom0 plugins
* Add missing utils declaration to RPM spec
* Fixes bug 1014194, metadata keys are incorrect for kernel-id and ramdisk-id
* Clean up cruft in nova.image.glance
* Retry against different Glance hosts
* Fix some import ordering HACKING violations
* Deal with unknown instance status
* OS API should return SHUTOFF, not STOPPED
* Implement blueprint ec2-id-compatibilty
* Add multi-process support for API services
* Allow specification of the libvirt guest CPU model per host
* Refactor Dom0 Glance plugin
* Switch libvirt get_cpu_info method over to use config APIs
* Remove tpool stub in xenapi tests
* Use setuptools-git plugin for MANIFEST
* Remove duplicate check of server_dict['name']
* Add missing nova-novncproxy to tarballs
* Add libvirt config classes for handling capabilities XML doc
* Refactor libvirt config classes for representing CPU models/features
* Fix regression in test_connection_to_primitive libvirt testcase
* Rename the instance_id column in instance_info_caches
* Rename GlanceImageService.get to download
* Use LOG.exception instead of logging.exception
* Align run_tests.py pep8 with tox
* Add hypervisor information extension
* Remove GlanceImageService.index in favor of detail
* Swap VDI now uses correct name label
* Remove image service show_by_name method
* Cleanup of image service code
* Adds default fall-through to the multi scheduler. Fixes bug 1009681
* Add missing netaddr import
* Make nova list/show behave nicely on instance_type deletion
* refactor libvirt from connection -> driver
* Switch to using new config parsing for vm_vdi_cleaner.py
* Adds missing 'owner' attribute to image
* Ignore floatingIpNotAssociated during disassociation
* Avoid casts in network manager to prevent races
* Stop nova_ipam_lib from changing the timeout setting
* Remove extra DB calls for instances from OS API extensions
* Allow single uuid to be specified for affinity
* Fix invalid variable reference
* Avoid reset on hard reboot if not supported
* Fix several PEP-8 issues
* Allow access to metadata server '/' without IP check
* Fix db calls for snapshot and volume mapping
* Removes utils.logging_error (no longer used)
* Removes utils.fetch_file (no longer used)
* Improve filter_scheduler performance
* Remove unnecessary queries for network info in notifications
* Re-factor instance DB creation
* Fix hacking.py failures
* fix libvirt get_memory_mb_total() with xen
* Migrate existing routes from flat_interface
* Add full test environment
* Another killfilter test fix for Fedora 17
* Remove unknown shutdown kwarg in call to vmops._destroy
* Refactor vm_vdi_cleaner.py connection use
* Remove direct access to glance client
* Fix import order of openstack.common
* metadata: cleanup pubkey representation
* Make tgtadm the default iscsi user-land helper
* Move rootwrap filters definition to config files
* Fixes ram_allocation_ratio based oversubscription
* Call libvirt_volume_driver with right mountpoint
* XenAPI: Fixes Bug 1012878
* update refresh_cache on compute calls to get_instance_nw_info
* vm state and task state management
* Update pylint/pep8 issues jenkins job link
* Additional CommandFilters to fix rootwrap on SLES
* Tidy up exception handling in contrib api consoles
* do sync before fusermount to avoid busyness
* Fix bug 1010581
* xenapi tests: changes size='0' to size=0
* fixes a bug in xenapi tests where a string should be int
* Minor HACKING.rst exception fix
* Make libvirt LoopingCalls actually wait()
* Add instance_id in Usage API response
* Set libvirt_nonblocking to true by default for Folsom
* Admin action to reset states
* Use rpc from openstack-common
* add nova-manage bash completion script
* Spelling fixes
* Fix bug 1014925: fix os-hosts
* Adjust the libvirt config classes' API contract for parsing
* Move libvirt version comparison code into separate function helper
* Remove two obsolete libvirt cheetah templates from MANIFEST.in
* Propose nova-novncproxy back into nova core
* Fix missing import in compute/utils.py
* Add instance details to notifications
* Xen Storage Manager: tests for xensm volume driver
* SM volume driver: DB changes and tests
* moved update cache functionality to the network api
* Handle missing server when getting security groups
* Imports cleanup
* added deprecated.warn helper method
* Enforce an instance uuid for instance_test_and_set
* Replaces functions in utils.py with openstack/common/timeutils.py
* Add CPU arch filter scheduler support
* Present correct ec2id format for volumes and snaps
* xensm: Fix xensm volume driver after uuid changes
* Cleanup instance_update so it only takes a UUID
* Updates the cache
* Add libvirt min version check
* Ensure dnsmasq accept rules are preset at startup
* Re-add private _compute_node_get call to sql api
* bug #996880 change HostNotFound in hosts to HTTPNotFound
* Unwrap httplib.HTTPConnection after WsgiLimiterProxyTest
* Log warnings instead of full exceptions for AMQP reconnects
* Add missing ack to impl_qpid
* blueprint lvm-disk-images
* Remove unused DB calls
* Update default policies for KVM guest PIT & RTC timers
* Add support for configuring libvirt VM clock and timers
* Dedupe native and EC2 security group APIs
* Add two missing indexes for instance_uuid columns
* Revert "Fix nova-manage backend_add with sr_uuid"
* Adds property to selectively enable image caching
* Remove utils.deprecated functions
* Log connection_type deprecation message as WARNING
* add unit tests for new virt driver loader
* Do not attempt to kill already-dead dnsmasq
* Only invoke .lower() on non-None protocols
* Add indexes to new instance_uuid columns
* instance_destroy now only takes a uuid
* Do not always query deleted instance_types
* Rename image to image_id
* Avoid partially finished cache files
* Fix power_state mis-use bug 1010586
* Resolve unittest error in rpc/impl_zmq
* Fix whitespace in sqlite steps
* Make eventlet backdoor play nicer with gettext
* Add user_name project_name and color option to log
* fixes bug 1010200
* Fixes affinity filters when hints is None
* implement sql-comment-string stack traces
* Finalize tox config
* Fixes bug lp:999928
* Convert consoles to use instance uuid
* Use OSError instead of ProcessExecutionError
* Replace standard json module with openstack.common.jsonutils
* Don't query nova-network on startup
* Cleans up power_off and power_on semantics
* Refactor libvirt create calls
* Fix whitespace in sqlite steps
* Update libvirt imagecache to support resizes
* separate Metadata logic away from the web service
* Fix bug 1006664: describe non existent ec2 keypair
* Make live_migration a first-class compute API
* Add zeromq driver. Implements blueprint zeromq-rpc-driver
* Fix up protocol case handling for security groups
* Prefix all nova binaries with 'nova-'
* Migrate security_group_instance_association to use a uuid to refer to instances
* Migrate instance_metadata to use a uuid to refer to instances
* Adds `disabled` field for instance-types
* More meaningful help messages for libvirt migration options
* fix the instance quota overlimit message
* fix bug lp:1009041, add option "-F" to make mkfs non-interactive
* Finally ack consumed message
* Revert "blueprint "
* Use openstack-common's policy module
* Use openstack.common.cfg.CONF
* bug #1006094 correct typo in addmethod.openstackapi.rst
* Correct use of uuid in _get_instance_volume_bdm
* Unused imports cleanup (folsom-2)
* Quantum Manager disassociate floating-ips on instance delete
* defensive coding against None inside bdm resolves bug 1007615
* Add missing import to quantum manager
* Add a comment to rpc.queue_get_for()
* Add shared_storage_test methods to compute rpcapi
* Add get_instance_disk_info to the compute rpcapi
* Add remove_volume_connection to the compute rpcapi
* blueprint
* Implements resume_state_on_host_boot for libvirt
* Fix libvirt rescue to work with whole disk images
* Finish removing xenapi.HelperBase class
* Remove network_util.NetworkHelper class
* Remove volume_util.VolumeHelper class
* Remove vm_utils.VMHelper class
* Start removing unnecessary classes from XenAPI driver
* XenAPI: Don't hardcode userdevice for VBDs
* convert virt drivers to fully dynamic loading
* Add compare_cpu to the compute rpcapi
* Add get_console_topic() to the compute rpcapi
* Add refresh_provider_fw_rules() to compute rpcapi
* Use compute rpcapi in nova-manage
* Add post_live_migration_at_destination() to compute rpcapi
* Add pre_live_migration() to the compute rpcapi
* Add rollback_live_migration_at_destination() to compute rpcapi
* Add finish_resize() to the compute rpcapi
* Add resize_instance() to the compute rpcapi
* Add finish_revert_resize() to the compute rpcapi
* Add get_console_pool_info() to the compute rpcapi
* Fix destination host for remove_volume_connection
* Don't deepcopy RpcContext
* Remove resize function from virt driver
* Cleans up extraneous volume_api calls
* Remove list_disks/list_interfaces from virt driver
* Remove duplicate words in comments
* Implement blueprint host-topic-matchmaking
* Remove unnecessary setting of XenAPI module attribute
* Prevent task_state changes during VERIFY_RESIZE
* Eliminate a race condition on instance deletes
* Make sure an exception is logged when config file isn't found
* Removing double quotes from sample config file
* Backslash continuation removal (Nova folsom-2)
* Update .gitignore
* Add a note on why quota classes are unused in Nova
* Move queue_get_for() from db to rpc
* Sample config file tool updates
* Fix instance update notification publisher id
* Use cfg's new global CONF object
* Make xenapi fake match real xenapi a bit closer
* Align ApiEc2TestCase to closer match api-paste.ini
* Add attach_time for EC2 Volumes
* fixing issue with db.volume_update not returning the volume_ref
* New RPC tests, docstring fixes
* Fix reservation_commit so it works w/ PostgreSQL
* remove dead file nova/tests/db/nova.austin.sqlite
* Fix the conf argument to get_connection_pool()
* Remove Deprecated auth from EC2
* Revert "API users should not see deleted flavors."
* Grammar fixes * Record instance architecture types * Grammar / spelling corrections * cleanup power state (partially implements bp task-management) * [PATCH] Allow [:print:] chars for security group names * Add scheduler filter for trustedness of a host * Remove nova.log usage from nova.rpc * Remove nova.context dependency from nova.rpc * \_s3\_create update only pertinent metadata * Allow adding fixed IPs by network UUID * Fix a minor spelling error * Run coverage tests via xcover for jenkins * Localize rpc options to rpc code * clean-up of the bare-metal framework * Use utils.utcnow rather than datetime.utcnow * update xen to use network\_model * fixes bug 1004153 * Bugfix in simple\_tenant\_usage API detail view * removed a dead db function register\_models() * add queue name argument to TopicConsumer * Cleanup tools/hacking using flake8 * Expose a limited networks API for users * Added a instance state update notification * Remove deprecated quota code * Update pep8 dependency to v1.1 * Nail pep8 dependencies to 1.0.1 * API users should not see deleted flavors * Add scheduler filter: TypeAffinityFilter * Add help string to option 'osapi\_max\_request\_body\_size' * Permit deleted instance types to be queried for active instances * Make validate\_compacted\_migration into general diff tool * Remove unused tools/rfc.sh * Finish quota refactor * Use utils.parse\_strtime rather than datetime.strptime * Add version to compute rpc API * Add version to scheduler rpc API * Add version to console rpc API * Remove wsgiref from requirements * More accurate rescue mode testing for XenAPI * Add tenant id in self link in /servers call for images * Add migration compaction validation tool * Enable checking for imports in alphabetical order * Include volume-usage-audit in tarballs * Fix XenServer diagnostics to provide correct details * Use cfg's new behavior of reset() clearing overrides * Sync with latest version of openstack.common.cfg * Only permit alpha-numerics and .\_- for instance type names * Use memcache to store consoleauth tokens * cert/manager.py not using crypto.fetch\_crl * Cleanup LOG.getLoggers to use \_\_name\_\_ * Imported Translations from Launchpad * Alphabetize imports in nova/tests/ * Fix Multi\_Scheduler to process host capabilities * fixed\_ip\_get\_by\_address read\_deleted from context * Fix for Quantum LinuxBridge Intf driver plug call * Add additional logging to compute filter * use a RequestContext object instead of context module * make get\_all\_bw\_usage() signature match for fake virt driver * Add unit test coverage for bug 1000261 * Moving network tests into the network folder * Add version to consoleauth rpc API * Add version to the cert rpc API * Add base support for rpc API versioning * fixes typo that completely broken Quantum/Nova integration * Make Iptables FW Driver handle dhcp\_server None * Add aliases to .mailmap for comstud and belliott * Add eventlet backdoor to facilitate troubleshooting * Update nova's copy of image metadata on rebuild * Optional timeout for servers stuck in build * Add configurable timeout to Quantum HTTP connections * Modify vm\_vdi\_cleaner to handle \`-orig\` * Add \_\_repr\_\_ to least\_cost scheduler * Bump XenServer plugin version * handle updated qemu-img info output * Rearchitect quota checking to partially fix bug 938317 * Add s3\_listen and s3\_listen\_port options * Misused and not used config options * Remove XenAPI use of eventlet tpool * Fixed compute periodic task. 
Fixes bug 973331
* get instance details results in volumes key error
* Fix bug 988034 - Quantum Network Manager - not clearing ips
* Stop using nova.exception from nova.rpc
* Make use of openstack.common.jsonutils
* Alphabetize imports in nova/api/
* Remove unused \_get\_target code from xenapi
* Implement get\_hypervisor\_hostname for libvirt
* Alphabetize imports
* Alphabetize imports in nova/virt/
* Adding notifications for volumes
* Pass 'nova' project into ConfigOpts
* fixes bug 999206
* Create an internal key pair API
* Make allocation failure a bit more friendly
* Avoid setting up DHCP firewall rules with FlatManager
* Migrate missing license info
* Imported Translations from Launchpad
* Fix libvirt Connection.get\_disks method
* Create a utf8 version of the dns\_domains table
* Setup logging, particularly for keystone middleware
* Use default qemu-img cluster size in libvirt connection driver
* Added img metadata validation. Fixes bug 962117
* Remove unnecessary stubout\_loopingcall\_start
* Actually use xenapi fake setter
* Provide a transition to new .info files
* Store image properties with instance system\_metadata
* Destroy system metadata when destroying instance
* Fix XenServer windows agent issue
* Use ConfigOpts.find\_file() to find paste config
* Remove instance Foreign Key in volumes table, replace with instance\_uuid
* Remove old flagfile support
* Removed unused snapshot\_instance method
* Report memory correctly on Xen. Fixes bug 997014
* Added image metadata to compute.instance.exists
* Update PostgreSQL sequence names for zones/quotas
* Minor help text related changes
* API does need new image\_ref on rebuild immediately
* Avoid unnecessary inst lookup in vmops \_shutdown
* implement blueprint floating-ip-notification
* Defer image\_ref update to manager on rebuild
* fix bug 977007, make nova create correct size of qcow2 disk file
* Remove unnecessary shutdown argument to \_destroy()
* Do not fail on notify when quantum and melange are out of sync
* Remove instance action logging mechanism
* httplib throws "TypeError: an integer is required" when running quantum
* fix bug 992008, we should configure public interface on compute
* A previous patch decoupled the RPC drivers from the nova.flags, breaking instance audit usage in the process.
This configures the xvpvncproxy to configure the RPC drivers properly with FLAGS so that xvpvncproxy can run
* Fix bug 983206: \_try\_convert parsing string
* pylint cleanup
* Fix devref docs
* Remove Deprecated AuthMiddleware
* Allow sitepackages on jenkins
* Replaces exceptions.Error with NovaException
* Docs for vm/task state transitions
* Fix a race with rpc.register\_opts in service.py
* Corrected a mistake in the documentation about the cost function's weight
* Remove state altering in live-migration code
* Register fake flags with rpc init function
* Generate a Changelog for Nova
* Find context arg by type rather than by name
* Default auto-increment for int primary key columns
* Adds missing copyright to migration 082
* Add instance\_system\_metadata modeling
* Use fake\_libvirt\_utils for libvirt console tests
* Fix semantics for migration test environment var
* Clean up weighted\_sum logic
* Use ConfigOpts.find\_file() to locate policy.json
* Sync to newer openstack.common.cfg
* Fix test\_mysql\_innodb
* Implement key pair quotas
* Ensure that the dom0 we're connected to is the right one
* Run ip link show in linux\_net.\_device\_exists as root
* Compact pre-Folsom database migrations
* Remove unused import
* Pass context to notification drivers when we can
* Use save\_and\_reraise\_exception() from common
* Fix innodb tests again
* Convert Volume and Snapshot IDs to use UUID
* Remove unused images
* Adding 'host' info to volume-compute connection information
* Update common.importutils from openstack-common
* Provide better quota error messages
* Make kombu support optional for running unit tests
* Fix nova.tests.test\_nova\_rootwrap on Fedora 17
* Xen has to create its own tap device if using libvirt and QuantumLinuxBridgeVIFDriver
* Fix test\_migrations to work with python 2.6
* Update api-paste.ini to remove unused settings
* Fix test\_launcher\_app to ensure service actually got started
* Minor refactor of servers viewbuilder
* A previous patch decoupled the RPC drivers from the nova.flags, breaking instance audit usage in the process.
This configures the instance audit usage to configure the RPC drivers properly with FLAGS so that the job can run
* Allow blank passwords in changePassword action
* Allow blank adminPass on server create
* Return a BadRequest on bad flavors param values
* adjust logging levels for utils.py
* Update integration tests to listen on 127.0.0.1
* Log instance consistently
* Create name\_label local variable for logging message
* Remove hack for xenapi driver tests
* Migrate block\_device\_mapping to use instance uuids
* Remove unnecessary return statements
* Clean up ElementTree usage
* Adds better bookending and robustness around the instance audit usage generation
* Pass instance to resize\_disk() to fix exception
* Minor spelling fix
* Removes RST documentation and moves it to openstack-manuals
* Trivial spelling fix
* Remove workaround for sqlalchemy-migration < 0.6.4
* Remove unnecessary references to resize\_confirm\_window flag
* Fix InnoDB migration bug in migrate script 86
* Use openstack.common.importutils
* Ignore common code in coverage calculations
* Use additional task states during resize
* Add libvirt get\_console\_output tests: pty and file
* Keep uuid with bandwidth usage tracking to handle the case where a MAC address could be recycled between instances
* Added name validation for rebuild of a server
* Make KillFilter handle 'deleted' w/o rstrip
* Fix instance delete notifications
* Disconnect stale instance VDIs when starting nova-compute
* Fix timeout in EC2 CloudController.create\_image()
* Add additional capabilities for computes
* Move image checksums into a generic file
* Add instance to several log messages
* Imports to human alphabetical order
* Fixes bug 989271, fixes launched\_at date on notifications
* Enable InnoDB checking
* make all mysql tables explicitly innodb
* Use instance\_get\_by\_uuid since we're looking up a UUID
* Use nova\_uuid attribute instead of trying to parse out name\_label
* Add a force\_config\_drive flag
* Fix 986922
* Improvement for the correct query extraction
* Fixes bug 983024
* Make updating hostId raise BadRequest
* Disallow network creation when label > 255. Fixes bug 965008
* Introduced \_atomic\_restart\_dhcp(). Fixes Bug 977875
* Make the filename that image hashes are written to configurable
* Xen: Pass session to destroy\_vdi
* Add instance logging to vmware\_images.py
* Add instance logging to vmops.py
* fix bug #980452: set net.ipv4.ip\_forward=1 on network
* Log instance
* Log instance information for baremetal
* Include instance in log message
* Log instance
* Ensure all messages include instance
* Add instance to log messages
* Include instance in log message
* Refactor nova.rpc config handling
* Don't leak RPC connections on timeouts or other exceptions
* Small cleanup to attach\_volume logging
* Implements EC2 DescribeAddresses by specific PublicIp
* Introduced flag base\_dir\_name.
Fixes bug 973194
* Set a more reasonable default RPC thread pool size
* Number of missing imports should always be shown
* Typo fix in bin/instance-usage-audit
* Improved tools/hacking.py
* Scope coverage report generation to nova module
* Removes unnecessary code in \_run\_instance
* Validate min\_ram/min\_disk on rebuild
* Adding context to usage notifications
* Making \`usage\_from\_instance\` private
* Remove \_\_init\_\_.py from locale dir
* Fixes bug 987335
* allow power state "BLOCKED" for live migrations if using Xen via libvirt
* Exclude xenapi plugins from pep8/hacking checks
* Imported Translations from Launchpad
* Remove unnecessary power state translation messages
* Add instance logging
* Use utils.save\_and\_reraise\_exception
* Removing XenAPI class variable, use session instead
* Log instance consistently
* Keep nova-manage commands sorted
* Log instances consistently
* Moves \`usage\_from\_instance\` into nova.compute.utils
* Log instance
* nova.virt.xenapi\_conn -> nova.virt.xenapi.connection
* Remove unused time keyword arg
* Remove unused variable
* support a configurable libvirt injection partition
* Refactor instance image property inheritance out to a method
* Refactor availability zone handling out to a method
* Include name being searched for in exception message
* Be more tolerant of deleting failed builds
* Logging updates in IptablesFirewallDriver
* Implement security group quotas
* Do not allow blank adminPass attribute on set password
* Make rebuilds with an empty name raise BadRequest
* Updates launched\_at in the finish and revert\_migration calls
* Updated instance state on resize error
* Reformat docstrings in n/c/a/o/servers as per HACKING
* fix bug 982360, multi ip block for dmz\_cidr
* Refactor checking instance count quota
* Small code cleanup for config\_disk handling
* Refactors kernel and ramdisk handling into their own method
* Improve instance logging in compute/manager
* Add deleted\_at to instance usage notification
* Simplify \_get\_vm\_opaque\_ref in xenapi driver
* Test unrescue works as well
* Remove unused variable
* Port types and extra specs to volume api
* Make exposed methods clearer in xenapi.vmops
* Fix error message to report correct operation
* Make run\_tests.sh just a little bit less verbose
* Log more information when sending notifications
* xenapi\_conn -> xenapi.connection
* Renamed current\_audit\_period function to last\_completed\_audit\_period to clarify its purpose
* QuantumManager will start dnsmasq during startup. Fixes bug 977759
* Fixed metadata validation err.
Fixes bug 965102
* Remove python-novaclient dependency from nova
* Extend instance UUID logging
* Remove references to RemoteError in os-networks
* Fix errors in os-networks extension
* Removes dead code around start\_tcp in Server
* Improve grammar throughout nova
* Improved localization testing
* Log kwargs on a failed String Format Operation
* Standardize quota flag format
* Remove nova Direct API
* migration\_get\_all\_unconfirmed() now uses lowercase "finished". Fixes bug 977719
* Run tools/hacking.py instead of pep8 mandatory
* Delete fixed\_ips when network is deleted
* Remove unnecessary --repeat option for pep8
* Create compute.api.BaseAPI for compute APIs to use
* Give all VDIs a reasonable name-label and name-description
* Remove last two remaining hyperV references
* bug 968452
* Add index to fixed\_ips.address
* Use 'root' instead of 'os' in XenAPI driver
* Information about DifferentHostFilter and SameHostFilter added
* HACKING fixes, sqlalchemy fix
* Add test to check extension timestamps
* Fixes bug 952176
* Update doc to mention nova tool for type creation
* Change Diablo document reference to trunk
* Imported Translations from Launchpad
* Cloudpipe tap vpn not always working
* Allow instance logging to use just a UUID
* Add the serialization of exceptions for RPC calls
* Cleanup xenapi driver logging messages to include instance
* Stop libvirt test from deleting instances dir
* Move product\_version to XenAPISession
* glance plugin no longer takes num\_retries parameter
* Remove unused user\_id and project\_id parameters to fetch\_image()
* Cleanup \_make\_plugin\_call()
* Push id generation into \_make\_agent\_call()
* Remove unused path argument for \_make\_agent\_call()
* Remove unused xenstore methods
* Combine call\_xenapi and call\_xenapi\_request
* Fixed bug 962840, added a test case
* Use -1 end-to-end for unlimited quotas
* fix bug where nova ignores glance host in imageref
* Remove unused \_parse\_xmlrpc\_value
* Fix traceback in image cache manager
* Fixes regression in release\_dhcp
* Use thread local storage from openstack.common
* Extend FilterScheduler documentation
* Add validation on quota limits (negative numbers)
* Get unit tests functional in OS X
* Make sure cloudpipe extension can retrieve network
* Treat -1 quotas as unlimited
* Auto-confirming resizes would bail on exceptions
* Grab the vif directly on release instead of lookup
* Corrects an AttributeError in the quota API
* Allow unprivileged RADOS users to access rbd volumes
* Remove nova.rpc.impl\_carrot
* Sync openstack.common.cfg from openstack-common
* add libvirt\_inject\_key flag, fix bug #971640
* Do not fail to build a snapshot if base image is not found
* fix TypeError with unstarted threads in nova-network
* remove unused flag: baremetal\_injected\_network\_template baremetal\_uri baremetal\_allow\_project\_net\_traffic
* Imported Translations from Launchpad
* fixed postgresql flavor-create
* Add rootwrap for touch
* Ensure floating ips are recreated on reboot
* Handle instances being missing while listing floating IPs
* Allow snapshots in error state to be deleted
* Ensure a functional database connection
* Add a faq to vnc docs
* adjust logging levels for linux\_net
* Handle not found in check for disk availability
* Accept metadata ip so packets aren't snatted
* bug 965335
* Export user id as password to keystone when using noauth
* Check that DescribeInstance works with deleted image
* Check that volume has no snapshots before deletion
* Fix libvirt rescue
* Check vif exists before
releasing ip
* Make kombu failures retry on IOError
* Adds middleware to limit request body sizes
* Add validation for OSAPI server name length
* adjust logging levels for libvirt error conditions
* Fix exception type in \_get\_minram\_mindisk\_params
* fixed bug lp:968019, fix network manager init floating ip problem
* When dnsmasq fails to HUP log an error
* Update KillFilter to handle 'deleted' exe's
* Fix disassociate query to remove foreign keys
* Touch in use image files when they're checked
* Base image signature files are not images
* Support timestamps as prefixes for traceback log lines
* get\_instance\_uuids\_by\_ip\_filter to QM
* Updated docstrings in /tools as per HACKING
* Minor xenapi driver cleanups
* Continue on to the next tenant\_id on 400 codes
* Fix marker behavior for flavors
* Remove auth\_uri, already have auth\_host, auth\_port
* A missing checksum does not mean the image is corrupt
* Default scheduler to spread-first
* Reduce the image cache manager periodic interval
* Handle Forbidden and NotAuthenticated glance exc
* Destroy src and dest instances when deleting in RESIZE\_VERIFY
* Allow self-referential groups to be created
* Fix unrescue in invalid state
* Clean up the shared storage check (#891756)
* Don't set instance ACTIVE until it's really active
* Fix traceback when sending invalid data
* Support sql\_connection\_debug to get SQL diagnostic information
* Improve performance of safe\_log()
* Fix 'nova-manage config convert'
* Add another libvirt get\_guest\_config() test case
* Fix libvirt global name 'xml\_info' is not defined
* Clean up read\_deleted support in host aggregates code
* ensure atomic manipulation of libvirt disk images
* Import recent openstack-common changes
* makes volume versions display properly
* Reordered the alphabet
* Add periodic\_fuzzy\_delay option
* Add a test case for generation of libvirt guest config
* Convert libvirt connection class to use config APIs for CPU comparisons
* Introduce a class for storing libvirt CPU configuration
* Convert libvirt connection class to use config APIs for guests
* Convert libvirt connection class to use config APIs for filesystem devices
* Introduce a class for storing libvirt snapshot configuration
* Move NIC devices back after disk devices
* Convert libvirt connection class to use config APIs for disk devices
* Convert libvirt connection class to use config APIs for input devices
* Convert libvirt connection class to use config APIs for serial/console devices
* Convert libvirt connection class to use config APIs for graphics
* Convert libvirt vif classes over to use config API
* Convert libvirt volume classes over to use config API
* Delete the test\_preparing\_xml\_info libvirt test
* Introduce a set of classes for storing libvirt guest configuration
* Send a more appropriate error response for 403 in osapi
* Use key in locals() that actually exists
* Fix launching of guests where instances\_path is on GlusterFS
* Volumes API now uses underscores for attrs
* Remove unused certificate SQL calls
* Assume migrate module missing \_\_version\_\_ is old
* Remove tools/nova-debug
* Inlining some single-use methods in XenAPI vmops
* Change mycloud.com to example.com (RFC2606)
* Remove useless dhcp\_domain flags in EC2
* Correctly handle QuotaError in EC2 API
* Avoid unplugging VBDs for rescue instances
* Imported Translations from Launchpad
* Rollback create\_disks handles StorageError exception
* Capture SIGTERM and shut down python services cleanly
* Fixed status validation.
Fixes bug 960884
* Clarify HACKING's shadow built-in guidance
* Strip auth token from log output
* Fail-fast for invalid read\_deleted values
* Only shut down rescue instance if it's not already shut down
* Modify nova.wsgi.start() to check backlog parameter
* Fix unplug\_vbd to retry a configurable number of times
* Don't send snapshot requests through the scheduler
* Implement quota classes
* Fixes bug 949038
* Open Folsom
* Fixes bug 957708
* Improvements/corrections to vnc docs
* Allow rate limiting to be disabled via flag
* Improve performance of generating dhcp leases
* Fix lxc console regression
* Strip out characters that should be escaped from console output
* Remove unnecessary data from xenapi test
* Correct accessIPv6 error message
* Stop notifications from old leases
* Fix typo in server diagnostics extension
* Stub-implement floating-ip functions on FlatManager
* Update etc/nova.conf.sample for ship
* Make sqlite in-memory-db usable for unit tests
* Fix run/terminate race conditions
* Workaround issue with greenthreads and lockfiles
* allow the compute service to start with missing libvirt disks
* Destroy rescue instance if main instance is destroyed
* Tweak security port validation for ICMP
* Debug messages for host filters
* various cleanups
* Remove Virtual Storage Array (VSA) code
* Re-instate security group delete test case
* db api: Remove check for security groups reference
* Allow proper instance cleanup if state == SHUTOFF
* Use getLogger for nova-all
* Stop setting promisc on bridge
* Fix OpenStack Capitalization
* Remove improper use of redirect for hairpin mode
* Fix OpenStack Capitalization
* HACKING fixes, TODO authors
* Keep context for logging intact in greenthreads
* fix timestamps to match documented ec2 api
* Include babel.cfg in tarballs
* Fix LXC volume attach issue
* Make extended status not admin-only by default
* Add ssl and option to pass tenant to s3 register
* Remove broken bin/\*spool\* tools
* Allow errored volumes to be deleted
* Fix up docstring
* libvirt/connection.py: Set console.log permissions
* nonblocking libvirt mode using tpool
* metadata speed - revert logic changes, just caching
* Refix mac change to work around libvirt issue
* Update transfer\_vhd to handle unicode correctly
* Fixes bug 954833 by adding the execute bit to the xenhost xenapi plugin
* Cleanup flags
* fix bug 954488
* Fix backing file cp/resize race condition
* Use a FixedIp subquery to find networks by host
* Changes remove\_fixed\_ip to pass the instance host
* Map image ids to ec2 ids in metadata service
* Remove date\_dhcp\_on\_disassociate comment and docs
* Make fixed\_ip\_disassociate\_all\_by\_timeout work
* Refactor glance id<->internal id conversion for s3
* Sort results from describe\_instances in EC2 API
* virt/firewall: NoopFirewallDriver::instance\_filter\_exists must return True
* fix nova-manage floating delete
* fixed list warn when ip allocated to missing inst
* Removes default use of obsolete ec2 authorizer
* Additional extensions no longer break unit-tests
* Use cPickle and not just pickle
* Move (cast|call)\_compute\_message methods back into compute API class
* Fix libvirt get\_console\_output for Python < 2.7
* doc/source/conf.py: Fix man page building
* Update floating auto assignment to use the model
* Make nova-manage syslog check /var/log/messages
* improve speed of metadata
* Fix linux\_net.py interface-driver loading
* Change default of running\_deleted\_instance\_action
* Nuke some unused SQL api calls
* Avoid nova-manage floating create /32
* Add a serializer for os-quota-sets/defaults
* Import nova.exception so exception can be used
* refactoring code, check connection in Listener. refer to Bug #943031
* Fix live-migration in multi\_host network
* add convert\_unicode to sqlalchemy connection arguments
* Fixes xml representation of ext\_srv\_attr extension
* Sub in InstanceLimitExceeded in overLimit message
* Remove update lockmode from compute\_node\_get\_by\_host
* Set 'dhcp\_server' in \_teardown\_network\_on\_host
* Bug #922356 QuantumManager does not initiate unplug on the linux\_net driver
* Clean up setup and teardown for dhcp managers
* Display owner in ec2 describe images
* EC2 KeyName validation
* Fix issues with security group auths without ports
* Replaced use of webob.Request.str\_GET
* Allow soft\_reboot to work from more states:
* Make snapshots with qemu-img instead of libvirt
* Use utils.temporary\_chown to ensure permissions get reset
* Add VDI chain cleanup script
* Reduce duplicated code in xenapi
* Since 'net' is of nova.network.model.VIF class and 'ips' is an empty list, net needs to be pulled from hydrated nw\_info.fixed\_ips(), and appended to ips
* Fix nova-manage backend\_add with sr\_uuid
* Update values in test\_flagfile to be different
* Switch all xenapi async plugin calls to be sync
* Hack to fixup absolute pybasedir in nova.conf.sample
* fixup ldapdns default config
* Use cache='none' for all disks
* Update cfg from openstack-common
* Add pybasedir and bindir options
* Simplify & unify console handling for libvirt drivers
* Cleanup XenAPI tests
* fix up nova-manage man page
* Don't use glance when verifying images
* Fixes os-volume/snapshot delete
* Use a high number for our default mac addresses
* Simplify unnecessary XenAPI Async calls to be synchronous
* Remove an obsolete FIXME comment
* Fixing image snapshots server links
* Wait for rescue VM shutdown to complete before destroying it
* Renaming user friendly fault name for HTTP 409
* Moving nova/network tests to more logical home
* Change a fake class's variable to something other than id
* Increase logging for xenapi plugin glance uploads
* Deprecate carrot rpc code
* Improve vnc proxy docs
* Require a more recent version of glance
* Make EC2 API a bit more user friendly
* Add kwargs to RequestContext \_\_init\_\_
* info\_cache is related to deleted instance
* Handle kwargs in deallocate\_fixed\_ip for FlatDHCP
* Add a few missing tests regarding exception codes
* Checks image virtual size before qemu-img resize
* Set logdir to a tempdir in test\_network
* Set lock\_path to a tempdir in TestLockCleanup
* Exceptions unpacking rpc messages shouldn't hang the daemon
* Use sqlalchemy reflection in migration 080
* Late load rabbit\_notifier in test\_notifier
* boto shouldn't be required for production deploys
* Don't use ec2 IDs in scheduler driver
* pyflakes cleanups on libvirt/connection.py
* Validate VDI chain before moving into SR
* Fix racey snapshots
* Don't swallow snapshot exceptions
* allow block migration to talk to glance/keystone
* Remove cruft and broken code from nova-manage
* Update paste file to use service tenant
* Further cleanup of XenAPI
* Fix XML namespaces for limits extensions and versions
* Remove the feature from UML/LXC guests
* setup.py: Fix doc building
* Add adjustable offset to audit\_period
* nova-manage: allow use of /32 IP range
* Clear created attributes when tearing down tests
* Fix multi\_host column name in setup\_networks..
* HACKING fixes, all but sqlalchemy
* Remove trailing whitespaces in regular file
* remove undocumented, unused mpi 'extension' to ec2 metadata
* Minor clarifications for the help strings in nova config options
* Don't use \_ for variable name
* Make test\_compute console tests more robust
* test\_compute stubs same thing multiple times
* Ignore InstanceNotFound when trying to set instance to ERROR
* Cleans up the create\_conf tool
* Fix bug 948611. Fix 'nova-manage logs errors'
* api-paste.ini: Add /1.0 to default urlmap
* Adds nova-manage command to convert a flagfile
* bug 944145: race condition causes VM's state to be SHUTOFF
* Cleanup some test docstrings
* Cleans up a bunch of unused variables in XenAPI
* Shorten FLAGS.rpc\_response\_timeout
* Reset instance to ACTIVE when no hosts found
* Replaces pipelines with flag for auth strategy
* Setup and teardown networks during migration
* Better glance exception handling
* Distinguish rootwrap Authorization vs Not found errors
* Bug #943178: aggregate extension lacks documentation
* Rename files/dirs from 'rabbit' to 'rpc'
* Change references to RabbitMQ to include Qpid
* Avoid running code that uses logging in a thread
* No longer ignoring man/novamanage
* Fixing incorrect use of instance keyword in logging
* Fix rst formatting and cross-references
* Provide a provider for boto.utils
* Only pass image uuids to compute api rebuild
* Finally fix the docs venv bug
* Get rid of all of the autodoc import errors
* Rename DistributedScheduler as FilterScheduler
* Allows new style config to be used for --flagfile
* Add support for lxc consoles
* Fix references to novncproxy\_base\_url in docs
* Add assertRaises check to tools/hacking.py as N202
* fix restructuredtext formatting in docstrings that show up in the developer guide
* Raise 409 when rescuing instance in RESCUE mode
* Log a certain rare instance termination exception
* Update fixed\_ip\_associate to not use relationships
* Remove unnecessary code in test setUp/tearDown
* Imported Translations from Launchpad
* Only raw string literals should be used with \_()
* assertRaises(Exception, ...)
considered harmful
* Added docs on MySQL queries blocking main thread
* Fix test\_attach\_volume\_raise\_exception
* Fix test\_unrescue to actually test unrescue
* bug #941794 VIF and intf drivers for Quantum Linux Bridge plugin
* Ensures that we don't exceed iptables chain max
* Allows --flat\_interface flag to override db
* Use self.mox instead of creating a new self.mocker
* Fix test\_migrate\_disk\_and\_power\_off\_exception
* fakes.fake\_data\_store doesn't exist, so don't reset it
* populate glance 'name' field through ec2-register
* Remove unused \_setup\_other\_managers method from test case
* Remove unused test\_obj parameter to setUp()
* Use stubout instead of manually stubbing out os.path.exists
* Remove superfluous \_\_init\_\_ from test case
* Use test.TestCase instead of manually managing stubout
* Handle InstanceNotFound during server update
* Use stubout instead of manually stubbing out versions.VERSIONS
* Remove unused session variable in test setup
* Cleanup swap in \_create\_vm undo
* Do not invoke kill dnsmasq if no pid file was found
* Fixes for ec2 images
* Retry download\_vhd with different glance host each time
* Display error for invalid CIDR
* Remove empty setUp/tearDown methods
* Call super class tearDown correctly
* Fixes bug 942556 and bug 944105
* update copyright, add version information to footer
* Refactor spawn to use UndoManager
* Fail gracefully when the db doesn't speak unicode
* Remove unnecessary setting up and down of mox and stubout
* Remove unnecessary variables from tests
* Ensure image status filter matches glance format
* fix for bug 821252. Smarter default scheduler
* blueprint sphinx-doc-cleanup bug 944381
* Adds soft-reboot support to libvirt
* Minor cleanup based on HACKING
* libvirt driver calls unplug() twice on vm reboot
* Add missing format string type on some exception messages
* Fixing a request-id header bug
* Test creating a server with metadata key too long
* Fixes lp931801 and a key\_error
* notifications for delete, snapshot and resize
* Ensure that context read\_deleted is only one of 'no', 'yes' or 'only'
* register Cell model, not Zone model
* Option expose IP instead of dnshost in ec2 desc'
* Fix \_sync\_power\_states to obtain correct 'state'
* Ensures that keypair names are only alphanumeric
* Cast vcpu\_weight to string before calling xen api
* Add missing filters for new root commands
* Destroy VM before VDIs during spawn cleanup
* Include hypervisor\_hostname in the extended server attributes
* Remove old ratelimiting code
* Perform image show early in the resize process
* Adds netapp volume driver
* Fixes bug 943188
* Remove unused imports and variables from OS API
* Return empty list when volume not attached
* Be consistent with disabling periodic tasks
* Cast volume-related ids to str
* Fix for bug 942896: Make sure network['host'] is set
* Allow xvd\* to be supplied for volume in xenapi
* Initialize progress to 0 for build and resize
* Fix issue starting nova-compute w/ XenServer
* Provide retry-after guidance on throttled requests
* Use constant time string comparisons for auth
* Rename zones table to cells and Instance.zone\_name to cell\_name
* Ensure temporary file gets cleaned up after test
* Fixes bug 942549
* Use assertDictMatch to keep 2.6 unit tests passing
* Handle case where instance['info\_cache'] is None
* sm volume driver: fix backend adding failure
* sm vol driver: Fix regression in sm\_backend\_conf\_update
* TypeError API exceptions get logged incorrectly
* Add NoopFirewallDriver
* Add utils.tempdir()
context manager for easy temp dirs
* Check all migrations have downgrade in test\_misc
* Remove monkey patching in carrot RPC driver
* Call detach\_volume when attach fails
* Do not hit the network\_api every poll
* OS X Support fixed, bug 942352
* Make scheduler filters more pluggable
* Adds temporary chown to sparse\_copy
* make nova-network usable with Python < 2.6.5
* Re-adds ssl to kombu configuration and adds flags that are needed to pass through to kombu
* Remove unused import
* Make sure detail view works for volume snapshots
* Imported Translations from Launchpad
* Decode nova-manage args into unicode
* Cleanup .rescue files in libvirt driver unrescue
* Fixes cloudpipe extension to work with keystone
* Add missing directive to tox.ini
* Update EC2KeystoneAuth to grab tenant 'id'
* Monkey patch migrate < 0.7.3
* Fixes bug lp#940734 - Adding manager import so AuthMiddleware works
* Clean stale lockfiles on service startup: fixes bug 785955
* Fix nova-manage floating create docs
* Fix MANIFEST.in to include missing files
* Example config\_drive init script, label the config drive
* fix unicode triggered failure in AuthManager
* Fix bug 900864: Quantum Manager flag for IP injection
* Include launch\_index when creating instances
* Copy data when migration dst is on a different FS
* bigger-than-unit test for cleanup\_running\_deleted\_instances
* Nova options tool enhancements
* Add hypervisor\_hostname to compute\_nodes table and use it in XenServer
* Fixes error if Melange returns no networks
* Print error if nova-manage should be run as root
* Don't delete security group in use from OS API
* nova-network can't deallocate ips from deleted instances
* Making link prefixes support https
* Prevent infinite loop in PublishErrorsHandler
* blueprint host-aggregates: host maintenance - xenapi implementation
* bug 939480
* libvirt vif-plugging fixes.
Fixes bug 939252, bug 939254
* Speeding up resize down with sparse\_copy
* Remove network\_api fallback for info\_cache from APIs
* Improve unit test coverage per bug/934566
* Return 40x for flavor.create duplicate
* refactor a conditional for testing and understanding
* Disable usb tablet support for LXC
* Add Nexenta volume driver
* Improve unit test coverage per bug/934566
* nova-manage: Fix 'fixed list'
* Add lun number to provider\_location in create\_volume \* Fixes bug 938876
* Fix WeightedHost
* Fix instance stop in EC2 create\_image
* blueprint host-aggregates: improvements and clean-up
* Move get\_info to taking an instance
* Support fixed\_ip range that is a subnet of the network block
* xenapi: nova-volume support for multiple luns
* Fix error that causes 400 in flavor create
* Makes HTTP Location Header return as utf-8 as opposed to Unicode
* blueprint host-aggregates: host maintenance
* blueprint host-aggregates: xenapi implementation
* Rework base file checksums
* Avoid copying file if dst is a directory
* Add 'nova-manage export auth'
* Alter output format of volume types resources
* Scheduler notifications added
* Don't store connection pool in RpcContext
* Fix vnc docs: novaclient now supports vnc consoles
* Clarify use of deprecated md5 library
* Extract get\_network in quantum manager
* Add exception SnapshotIsBusy to be handled as VolumeIsBusy
* Exception cleanup
* Stop ignoring E202
* Support tox-based unittests
* Add attaching state for Volumes
* Fix quantum get\_all\_networks() signature (lp#936797)
* Escape apostrophe in utils.xhtml\_escape() (lp#872450)
* Backslash continuations (nova.api.openstack)
* Fix broken method signature
* Handle OSError which can be thrown when removing tmpdir. Fixes bug 883326
* Update api-paste.ini with new auth\_token settings
* Imported Translations from Launchpad
* Don't tell Qpid to reconnect in a busy loop
* Don't inherit controllers from each other; we don't want the methods of our parent
* Improve unit test coverage per bug/934566
* Setting access ip values on server create
* nova.conf sample tool
* Imported Translations from Launchpad
* Add support for admin\_password to LibVirt
* Add ephemeral storage to flavors api
* Resolve bug/934566
* Partial fix for bug 919051
* fix pre\_block\_migration() interaction with libvirt cache
* Query directly for just the ip
* bug 929462: compile\_diagnostics in xenapi erroneously catches XenAPI.Failure
* Use new style instance logging in compute api
* Fix traceback running instance-usage-audit
* Actual fix for bug 931608
* Support non-UTC timestamps in changes-since filter
* Add additional information to servers output
* Adding traceback to async faults
* Pulls the main components out of deallocate
* Add JSONFormatter
* Allow file logging config
* LOG.exception does not take an exc\_info keyword
* InstanceNotFound exceptions for terminate\_instance now log a warning instead of throwing exceptions
* bug 933620: Error during ComputeManager.\_poll\_bandwidth\_usage
* Make database downgrade work
* Run ovs-ofctl as root
* 077\_convert\_to\_utf8: Convert \*all\* FK tables early
* Fix bug 933147: Security group trigger notifications
* Fixes nova-volume support for multiple luns
* Normalize odd date formats
* Remove all uniqueness constraints in migration 76
* Add RPC serialization checking, fix exposed problems
* Don't send a SQLAlchemy model over rpc
* Adds back e2fsck exit code checking
* Syncs vncviewer mouse cursor when connected to Windows VMs
* Backslash continuations (nova.tests)
* The
security\_group name should be an XML attribute
* Core modifications for future zones service
* Remove instance\_get stubs from server action tests
* removed unused method and added another test
* Enables hairpin\_mode for virtual bridge ports, allowing NAT reflection
* Removed zones from api and distributed scheduler
* Fix bug 929427
* Tests for a melange\_ipam\_lib, which is missing tests
* Create a flag for force\_to\_raw for images
* Resolve bug/927714 -- get instance names from db
* Fix API extensions documentation, bug 931516
* misc networking fixes
* Print friendly message if no floating IPs exist
* Catch httplib.HTTPException as well
* Expand Quantum Manager Unit Tests + Associated Fixes
* bw\_usage takes a MAC address now
* Adding tests for NovaException printing
* fix a syntax error in libvirt.attach\_volume() with lxc
* Prevent Duplicate VLAN IDs
* tests: fix LdapDNS to allow running test\_network in isolation
* Fix the description of the --vnc\_enabled option
* Different exit code in new versions of iscsiadm
* improve injection diagnostics when nbd unavailable. Bug 755854
* remove unused nwfilter methods and tests
* LOG.exception only works while in an exception handler
* \_() works best with string literals
* Remove unnecessary constructors for exceptions
* Don't allow EC2 removal of security group in use
* improve the stale libvirt images handling fix. Bug 801412
* Added resize support for Libvirt/KVM
* Update migration 076 so it supports PostgreSQL
* Replace ApiError with new exceptions
* Simple way of returning per-server security groups
* Declare deprecated auth flag before it's used
* e2fsck needs -y
* Standardize logging declaration and use
* Changing nova-manage error message
* Fix WADL/PDF docs referenced in describedby links
* bug 931604: improve how xenapi RRD records are retrieved
* Resolve bug/931794 -- add uuid to fake
* Use new style instance logging in compute manager
* clean pyc files before running unit tests
* Adding logging for 500 errors
* typo fix
* run\_tests.sh fix
* get\_user behavior in ldapdriver
* Fsck disk before removing journal
* Don't query database with an empty list for IN clause
* Use stubs in libvirt/utils get\_fs\_info test
* Adding (-x | --stop) option back to runner.py
* Remove duplicate variable
* Fixing a unicode related metadata bug
* bug 931356: nova-manage prints libvirt related warnings if libvirt isn't installed
* Make melange\_port an integer
* remove a private duplicate function
* Changes for supporting fast cloning on Xenserver. Implements blueprint fast-cloning-for-xenserver 1. use\_cow\_images flag is reused for xenserver to check if copy on write images should be used. 2. image-id is used to tag an image which has already been streamed from glance. 3. If cow is true, when an instance of an image is created for the first time on a given xenserver, the image is streamed from glance and a copy on write disk is created for the instance. 4. For subsequent instance creation requests (of the same image), a copy on write disk is created from the base image that is already present on the host. 5. If cow is false, when an instance of an image is created for the first time on a host, the image is streamed from glance and its copy is made to create a virtual disk for the instance. 6. For subsequent instance creation requests, a copy of disk is made for creating the disk for the instance. 7. Snapshot creation code was updated to handle cow=true. Now there can be up to 3 disks in the chain. The base disk needs to be uploaded too. 8.
Also added a cache\_images flag. Depending on whether the flag is turned on or not, images will be cached on the host
* Completes fix for LP #928910 - libvirt performance
* Add some more comments to \_get\_my\_ip()
* remove unused and buggy function from S3ImageService
* Fix minor typo in runner.py
* Remove relative imports from scheduler/filters
* Converting db tables to utf8
* remove all instance\_type db lookups from network
* Remedies LP Bug #928910 - Use libvirt lookupByName() to check existence
* Force imageRef to be a string
* Retry on network failure for melange GET requests
* Handle network api failures more gracefully
* Automatic confirmation of resizes on libvirt
* Fix exception by passing timeout as None
* Extend glance retries to show() as well
* Disable ConfigParser interpolation (lp#930270)
* fix FlatNetworkTestCase.test\_get\_instance\_nw\_info
* remove unused and buggy function from baremetal proxy
* Remove unused compute\_service from images controller
* Backslash continuations (nova.virt.baremetal)
* fixed bug 928749
* Log instance id consistently inside the firewall code
* Remove the last of the gflags shim layer
* Fix disk\_config typo
* Pass instance to log messages
* Fix logging in xenapi vmops
* Ensures that hostId's are unique
* Fix confirm\_resize policy handling
* optimize libvirt image cache usage
* bug 929428: pep8 validation on all xapi plugins
* Move translations to babel locations
* Get rid of distutils.extra
* Backslash continuations (network, scheduler)
* Remove unnecessary use of LoopingCall in nova/virt/xenapi/vm\_utils.py
* Stop using LoopingCall in nova.virt.xenapi\_conn:wait\_for\_task()
* Handle refactoring of libvirt image caching
* linux\_net: Also ignore shell error 2 from ip addr
* Consistently update instance in nova/compute/manager.py
* Use named logger when available
* Fix deprecated warning
* Add support for LXC volumes
* Added ability to load specific extensions
* Add flag to include link local in port security
* Allow e2fsck to exit with 1
* Removes constraints from instance and volume types
* Handle service failures during finish\_resize gracefully
* Set port security for all allocated ips
* Move connection pool back into impl\_kombu/qpid
* pep8 check on api-paste.ini when using devstack
* Allows test\_virt\_drivers to work when run alone
* Add an alias to the ServerStartStop extension
* tests.integrated fails with devstack
* Backslash continuations (nova.virt)
* Require newer versions of SA and SA-Migrate
* Optimizes ec2 keystone usage and handles errors
* Makes sure killfilter doesn't raise ValueError
* Fixes volume snapshotting issues and tests
* Backslash continuations (misc.)
* nova-rootwrap: wait() for return code before exit
* Fix bug 921814: changes handling of adminPass in API
* Send image properties to Glance
* Check return code instead of output for iscsiadm
* Make swap default to vdb if there is no ephemeral
* Handle --flagfile by converting to .ini style
* Update cfg from openstack-common
* Fix xvpvncproxy error in nova-all (lp#928489)
* Update MANIFEST.in to account for moved schemas
* Remove ajaxterm from Nova
* Adding the request id to response headers. Again
* Update migration to work when data already exists
* Fix support for --flagfile argument
* Implements blueprint heterogeneous-tilera-architecture-support
* Add nova/tests/policy.json to tarball
* Fix quantum client filters
* Store the correct tenant\_id/project\_id
* don't show blank endpoint headers
* Pass in project\_id in ext.
authorizer
* Fix \_poll\_bandwidth\_usage if no network on vif
* Fix nova.virt.firewall debugging message to use UUID
* Fix debugging log message to print instance UUID
* mkfs takes vfat, not fat32
* Pass partition into libvirt file injection
* bug 924266: connection\_type and firewall\_driver flags mismatch
* bug 927507: fix quantum manager get\_port\_by\_attachment
* Fix broken flag in test\_imagecache
* Don't write a dns directive if there are no dns records in /etc/network/interfaces
* Imported Translations from Launchpad
* Backslash continuations (nova.db)
* Add initiator to initialize\_connection
* Allows nova to read files as root
* Re-run nova-manage under sudo if unable to read conffile
* Fix status transition when reverting resize
* Adds flags for href prefixes
* X\_USER is deprecated in favor of X\_USER\_ID
* Move cfg to nova.openstack.common
* Use Keystone Extension Syntax for EC2 Creds
* Remove duplicate instances\_path option
* Delete swap VDI if not used
* Raise ApiError in response to InstanceTypeNotFound
* Rename inst in \_create\_image, and pass instance to log msgs
* Fix bug #924093
* Make sure tenant\_id is populated
* Fix for bug 883310
* Increased coverage of nova/auth/dbdriver.py to 100%. Fixes 828609
* Make crypto use absolute imports
* Remove duplicate logging\_debug\_format option
* blueprint nova-image-cache-management phase1
* Set rescue instance hostnames appropriately
* Throw a user error on creating duplicate keypairs. Fixes bug 902162
* Fixes uuid lookup in virtual interfaces extension
* Add comments to injected keys and network config
* Remove hard coded m1.tiny behavior
* Fix disassociation of fixed IPs when using FlatManager
* Provides flag override for vlan interface
* remove auto fsck feature from file injection. Bug 826794
* DRYing up Volume/Compute APIRouters
* Excise M2Crypto!
* Add missing dev. Fixes LP: #925607
* Capture bandwidth usage data before resize
* Get rid of DeprecationWarning during db migration
* Don't block forever for rpc.(multi)call response
* Optionally disable file locking
* Avoid weird test error when mox is missing
* fix stale libvirt images on download failure. Bug 801412
* cleanup test case to use integers not strings
* Respect availability\_zone parameter in nova api
* Fix admin password skip check
* Add support for pluggable l3 backends
* Improve dom0 and template VM avoidance
* Remove Hyper-V support
* Fix logging to log correct filename and line numbers
* Support custom routes for extensions
* Make parsing of usage stats from XS more robust
* lockfile.FileLock already appends .lock
* Ties quantum, melange, and nova network model
* Make sure multiple calls to \_get\_session() aren't nested
* bug 921087: i18n-key and local-storage hard-coded in xenapi
* optimize libvirt raw image handling. Bug 924970
* Boto 2.2.x fails.
Capping pip-requires at 2.1.1
* fixed bug 920856
* Expand policies for admin\_actions extension
* Correct checking existence of security group rule
* Optionally pass an instance uuid to log methods
* remove unsupported ec2 extensions
* Fix VPN ping packet length
* Use single call in ExtendedStatus extension
* Add mkswap to rootwrap
* Use "display\_name" in "nova-manage vm list"
* Fix broken devref docs
* Allow for auditing of API calls
* Use os.path.basename() instead of string splitting
* Remove utils.runthis()
* Empty connection pool after test\_kombu
* Clear out RPC connection pool before exit
* Be more explicit about emptying connection pool
* fixes melange ipam lib
* bug 923798: On XenServer the DomU firewall driver fails with NotImplementedError
* Return instancesSet in TerminateInstances ec2 api
* Fix multinode libvirt volume attachment lp #922232
* Bug #923865: (xenapi driver) instance creation fails if no guest agent is available for admin password configuration
* Implementation of new Nova Volume driver for SolidFire ISCSI SAN
* Handle keypair delete when not found
* Add 'all\_tenants' filter to GET /servers
* Use name filter in GlanceImageService show\_by\_name
* Raise 400 if bad keypair data is provided
* Support file injection on boot w/ Libvirt
* Refactor away the flags.DEFINE\_\* helpers
* Instances to be created with a bookmark link
* fix \`nova-manage image convert\` exception
* Added validation of name when creating a new keypair
* Ignore case in policy role checks
* Remove session arg from sm\_backend\_conf\_update
* Remove session arguments from db.api
* Add a note explaining why unhandled exceptions shouldn't be returned to users
* Remove fetching of networks that weren't created via nova-manage
* uses the instance uuid in libvirt by introducing a new variable 'uuid' for the used template instead of using a random uuid in libvirt
* Fixing a rebuild race condition bug
* Fixes bug 914418
* Remove LazySerializationMiddleware
* Bug #921730: plugins/xenserver/xenapi/etc/xapi.d/plugins/objectstore no longer in use
* Adding live migration server actions
* bug 921931: fix Quantum Manager VM launch race condition
* Fix authorization checks for simple\_usage.show
* Simplify somewhat complicated reduce() into sum()
* Ignore connection\_type when no instances exist
* Add authorization checks to flavormanage extension
* rootwrap: Fix KillFilter matching
* Fix uptime calculation in simple\_usage
* Fixing rebuilds on libvirt, seriously
* Don't pass filter\_properties to managers
* Fixing rebuilds on libvirt
* Fix bug 921715 - 'nova x509-create-cert' fails
* Return 403 instead of 401 when policies reject
* blueprint host-aggregates: OSAPI extensions
* blueprint host-aggregates: OSAPI/virt integration, via nova.compute.api
* Fixes bug 921265 - 'nova-manage flavor create|list'
* Remove unused flags.Help\*Flag
* Convert vmwareapi code to UNIX style line endings
* Blueprint xenapi-provider-firewall and Bug #915403
* Adds extension for retrieving certificates
* Add os-start/os-stop server actions to OSAPI
* Create nova cert worker for x509 support
* Bug #916312: nova-manage network modify --network flag is inconsistent
* Remove unused nova/api/mapper.py
* Add nova.exception.InvalidRPCConnectionReuse
* Add support for Qpid to nova.rpc
* Add HACKING compliance testing to run\_test.sh
* Remove admin\_only ext attr in favor of authz
* usage: Fix time filtering
* Add an API extension for creating+deleting flavors
* extensions: Allow registering actions for create + delete
* Explicitly encode string
to utf8 before passing to ldap
* Make a bunch of dcs into single-entry lists
* Abstract out \_exact\_match\_filter()
* Adds a bandwidth filter DB call
* KVM and XEN Disk Management Parity
* Tweak api-paste.ini to prepare for a devstack change
* Remove deprecated serialization code
* Add affinity filters updated to use scheduler\_hints and have non-douchey names
* Do not output admin\_password in debug logs
* Handle error in associate floating IP (bug 845507)
* Brings back keystone middleware
* Remove sensitive info from rpc logging
* Error out instance on set password failure
* Fixed limiting for flavors
* Adds availability zone filter
* Fixes nova-manage fixed list
* API version check cleanups
* ComputeNode Capacity support
* blueprint host-aggregates: maintenance operations to host OSAPI exts
* Add a specific filter for kill commands
* Fix environment passing in DnsmasqFilter
* Cleanups for rootwrap module
* Fix 'nova-manage config list'
* Add context and request spec to filter\_properties
* Allow compute manager prep\_resize to accept kwargs
* Adds isolated hosts filter
* Make start\_instance cast directly to compute host
* Refactor compute api messaging calls to compute manager
* Refactor test\_scheduler into unit tests
* Forgot to update chance scheduler for ignore\_hosts change
* Add SchedulerHints compute extension
* Add floating IP support to Quantum Manager
* Support filter based on CPU core (over)allocation
* bug 917397
* Add option to force hosts to scheduler
* Change the logic for deleting a dns\_domains record
* Handle FlavorNotFound on server list w/ filter
* ERROR out instance if unrescue fails
* Fix xenapi rescue without swap
* Pull out ram\_filter into a separate filter
* pass filter\_properties into scheduling requests for run\_instance
* Fixes bug #919390 - Block Migration fails when keystone is in use
* Fix nova-manage floating list (fixes bug 918804)
* Imported Translations from Launchpad
* scheduler host\_manager needs service for filters
* Allow Quantum Manager to run in "Flat" mode
* aws/ec2 api validation
* Fix for bug 918502
* Remove deprecated extension code
* Validating image id for rebuild
* More cleanup of Imports to match HACKING
* chmod nova-logspool
* nova/network: pass network\_uuid to linuxnet\_interface\_driver and vif driver
* Clean up crypto.py
* Fix missing imports and bad call caught by pyflakes
* Clarify error messages for admin passwords
* Log uuid when instances fail to spawn
* Removed references to FLAGS.floating\_ip\_dns\_domains
* Removed some vestigial default args from DNS drivers
* Allow config of vncserver\_proxyclient\_address
* Rename 'zone' to 'domain.'
* disk\_config extension now uses OS prefix
* Do not write passwords to verbose logs.
bug 916167
* Automatically clean up DNS when a floating IP is deallocated
* Fix disassociating of auto assigned floating ips
* Cleanup Imports to match HACKING guidelines
* Added an LDAP/PowerDNS driver
* Add dns domain manipulation to nova
* fixes bug lp914962
* Fixed bug 912701
* Fix bug #917615
* Separate scheduler host management
* Set instance\_ref property when creating snapshots
* Implements blueprint vnc-console-cleanup
* Rebuild/Resize support for disk-config
* Allow instances in 'BUILD' state to be deleted
* Stop allowing blank image names on snapshot/backup
* Only update if there are networks to update
* Drop FK constraint if it exists in migration 064
* Fix an error that prevents message from getting substituted
* blueprint host-aggregates
* Add missing scripts to setup.py (lp#917676)
* Fixes bug 917128
* Clean up generate fingerprint
* Add policy checking to nova.network.api.API
* Add default policy rule
* Super is not so super
* Fixed the log line
* Add tests for volume list and detail through new volume api, and fix error that the tests caught
* Typofix for impl\_kombu
* Refactoring logging \_log function
* Update some extensions (1)
* DECLARE osapi\_compute\_listen\_port for auth manager
* Increase robustness of image filtering by server
* Update some extensions (2)
* Implement BP untie-nova-network-models
* Add ipv4 and ipv6 validation
* greenlet version inconsistency
* Add policy checks to Volume.API
* Remove unused extension decorator require\_admin
* Fix volume api typo
* Convert nova.volume.api.API to use volume objects
* Remove a whole bunch of unused imports
* have all quota errors return an http 413
* This import is not used
* Refactor request and action extensions
* Prefixing the request id with 'req-' to decrease confusion when looking at logs
* Fixing a bug that was causing the logging to display the context info for the wrong user.
bug: 915608
* Modify the fake ldap driver to fix compatibility
* Create an instance DNS record based on instance UUID
* Implements blueprint separate-nova-volumeapi
* Implement more complete kombu reconnecting
* First implementation of bp/live-migration-resource-calc
* Remove 'status' from default snapshot properties
* Clean up disk\_format mapping in xenapi.vm\_utils
* Remove skipping of 2 tests
* Make authz failures use proper response code
* Remove compute.api.API.add\_network\_to\_project
* Adds test for local.py
* Fix policy import in nova.compute.api
* Remove network\_api from Servers Controller
* minor fix in comment
* Updates linux\_net to ignore some shell errors
* Add policy checks to Compute.API
* Ensure nova is compatible with WebOb 1.2+
* improve handling of the img\_handlers config list
* Unbreak start instance and fixes bug 905270
* catch InstanceInvalidState in more places
* Fix some cfg test case naming conflicts
* Remove 'location' from GlanceImageService
* Makes common/cfg.py raise AttributeError
* Call to instance\_info\_cache\_delete to use uuid
* Bug #914907: register\_models in db/sqlalchemy/models.py references non-existent ExportDevice
* Update logging in compute manager to use uuids
* Do not overwrite project\_id from request params
* Add optional revision field to version number
* Imported Translations from Launchpad
* nova-manage floating ip fixes
* Add a modify function to the floating ip dns api
* Adding the request id to response headers
* Add @utils.deprecated()
* Blueprint xenapi-security-groups
* Fix call to compute\_api.resize from \_migrate
* Fix metadata mapping in s3.\_s3\_parse\_manifest
* Fix libguestfs operation with specified partitions
* fix reboot\_instance typo
* Fix bad test cases in smoketest
* fix bug 914049: private key in log
* Don't overwrite local context on elevated
* Bug 885267: Fix GET /servers during instance delete
* Adds support for floating ip pools
* Adds simple policy engine support
* Refactors utils.load\_cached\_file
* Serialization, deserialization, and response code decorators
* Isolate certain images on certain hosts
* Workaround bug 852095 without importing mox
* Bug #894683: nova.service does not handle attribute specific exceptions and client hangs
* Bug #912858: test\_authors\_up\_to\_date does not deal with capitalized names properly
* Adds workaround check for mox in to\_primitive
* preload cache table and keep it up to date
* Use instance\_properties in resize
* Ensure tests are python 2.6 compatible
* Return 409s instead of 500s when deleting certain instances
* Update HACKING.rst
* Tell users what is about to be installed via sudo
* Fix LP912092
* Remove small unneeded code from impl\_kombu
* Add missing space between XML attributes
* Fix except format to match HACKING
* Set VLAN MTU size when creating the vlan interface
* Add instance\_name field to console detail command which will give the caller the necessary information to actually connect
* Fix spelling of variable
* Remove install\_requires processing
* Send event notifications for suspend and resume
* Call mkfs with the correct order of arguments
* Fix bug 901899
* Fix typo in nova/rootwrap/compute.py.
Fixes LP: #911880
* Make quantum\_use\_dhcp falsifiable
* Fixing name not defined
* PEP8 type comparison cleanup
* Add cloudpipe/vpn api to openstack api contrib
* Every string does not need to be internationalized
* Adds running\_deleted\_instance\_reaper task
* libvirt: implements boot from ISO images
* Unused db.api cleanup
* use name gateway\_v6 instead of gateway6
* PEP8 remove direct type comparisons
* Install a good version of pip in the venv
* Bug #910045: UnboundLocalError when failing to get metrics from XenAPI hosts
* re-raising exceptions fix
* use dhcp\_lease\_time for dnsmasq. Fix bug 894218
* Clean up pylint errors in top-level files
* Ensure generated passwords meet minimum complexity
* Fixing novaclient\_converter NameError
* Bug 820059: bin/nova-manage.py VpnCommands.spawn calls non-existent method VpnCommands.\_vpn\_for - fixed
* Bug 751229: Floating address range fixed
* Brings some more files up to HACKING standards
* Ensure queue is declared durable so messages aren't dropped
* Create notification queues as durable
* Adding index to instances project\_id column
* Add an API for associating floating IPs with DNS entries
* 'except:' to 'except Exception:' as per HACKING
* Adds EC2 ImportKeyPair API support
* Take the availability zone from the instance if available
* Update glance Xen plugin w/ purge props header
* Converting zones into true extension
* Converting /users to admin extension
* Add a DECLARE for dhcp\_domain flag to metadata handler
* Support local target for Solaris, use 'safe' command-line processing
* Add 'os-networks' extension
* Converting accounts resource to admin extension
* Add exit\_code, stdout, stderr etc to ProcessExecutionException
* Fixes LP bug #907898
* Switch extension namespace
* Refactor Xen Vif drivers. Fixes LP907850
* Remove code in migration 064 to drop an fkey that does not exist. Fixes LP bug #907878
* Help clarify rpc API with docs and a bit of code
* Use SQLAlchemy to drop foreign key in DB migrate
* Move createBackup server action into extension
* Bug#898257 support handling images with libguestfs
* Bug#898257 abstract out disk image access methods
* Move 'actions' subresource into extension
* Make os-server-diagnostics extension admin-only
* Remove unneeded broken test case
* Fix spelling typos in comments
* Allow accessIPv4 and accessIPv6 on rebuild action
* Move 'diagnostics' subresource to admin extension
* Cleaning up imports in compute and virt
* Cleaning up imports in nova.api
* Make reroute\_compute use functools.wraps.
Fixes LP bug #906945 * Removing extra code from servers controller * Generate instance faults when instance errors * Clarify NoValidHost messages * Fix one last bug in os-console-output extension * Fix os-console-output extension integration * Set Location header in server create and rebuild actions * Consistently use REBUILDING vm\_state * Improve the minidns tests to handle zone matching * Remove unused FLAGS.block\_size * Make UUID format checking more correct * Set min\_ram and min\_disk on snapshot * Add support for port security to QuantumManager * Add a console output action to servers * Creating mechanism that loads Admin API extensions * Document return type from utils.execute() * Renamed the instance\_dns\_driver to dns\_driver for more general use * Specify -t rsa when calling ssh-keygen * create\_export and ensure\_export should pass up the return value, to update the database * Imported Translations from Launchpad * avoid error and trace on dom.vcpus() in lxc * Properly passes arg to run\_iscsiadm to fix logout * Makes disassociate by timeout work with multi-host * Call get\_instance\_nw\_info with elevated context, as documented in nova/network/manager.py * Adds missing joinedload for vif loading * missing comments about extensions to ec2 * Pull resource extensions into APIRouter * IPAM drivers aren't homogenous bug 903230 * use env to find 'false'. Fix for OS X * Fix scheduler error handler * Starting work on exposing service functionality * Bugfix for lp904932 * Ensure fkey is dropped before removing instance\_id * Fixes bug 723235 * nova.virt.libvirt.firewall: set static methods * Expose Asynchronous Fault entity in the OSAPI * Fix nova-manage flags declaration * Remove useless flags declaration * Remove useless input\_chain flags * Make XenAPI agent configuration synchronous * Switch disk\_config extension to use one DB query * Update utils.execute so that check\_exit\_code handles booleans. Fixes LP bug #904560 * Rename libvirt\_uri to uri * Make libvirt\_uri a property * Making pep8 output less verbose * Refactors handling of detach volume * Fixes bug 887402 * Bug 902626 * Make various methods static * Pass additional information from nova to Quantum * Refactor vm\_state and task\_state checking * Updates OVS rules applied to IPv4 VIFs * Follow-on to I665f402f to convert rxtx\_quota to rxtx\_factor in nova-manage and a couple of tests * Make sure the rxtx\_cap is used to set qos info * Fix some errors found by pychecker * Fix tgtadm off by one error. Fixes bug #871278 * Sanitize EC2 manifests and image tarballs * floating-ip: return UUID of instance rather than ID * Renaming instance\_actions.instance\_id column to instance\_uuid. blueprint: internal-uuids * Fix for bug 902175 * fixed typos. 
removed an unused import * VM state management and error states * Added support for creating nova volume snapshots using OS API * Fix error when subnet doesn't have a cidr set * bug 899767: fix vif-plugging with live migration * Fixing snapshot failure task\_state * Imported Translations from Launchpad * Moves find config to utils because it is useful * fixed\_ips by vif does not raise * Add templates for selected resource extensions * Fix network forwarding rule initialization in QuantumManager * \_check\_image\_size returns are consistent * Fixed the perms on the linux test case file so that nose will run it * Add preparation for asynchronous instance faults * Add templates for selected resource extensions * Use more informative message when violating quota * Log it when we get a lock * removing TODO as we support Windows+XenServer and have no plans to support quiesce or VSS at the moment * Adds network model and network info cache * Rename .nova-venv to .venv * revert using git for novaclient * Port nova.flags to cfg * Make cfg work on python 2.6 * Relax novaclient and remove redis dependency * Relax dependency on boto 1.9b and nova-adminclient * Make QuantumManager no longer depend on the projects table * Imported Translations from Launchpad * Fix for bug 901459 * Updated the test runner module with a sys.path insert so that tests run in and outside a virtual environment * Add ability to see deleted and active records * Set instance['host'] to the original host value on revert resize * Fix race condition in XenAPI when using .get\_all * Clean up snapshot metadata * Handle the 'instance' half of blueprint public-and-private-dns * Refactors periodic tasks to use a decorator * Add new cfg module * Remove extra\_context support in Flags * A more secure root-wrapper alternative * Remove bzr related code in tests/test\_misc * Change cloudServersFault to computeFault * Update associate\_floating\_ip to use instance objs * vm\_state:=error on driver exceptions during resize * Use system M2Crypto package on Oneiric, bug 892271 * Update compute manager so that finish\_revert\_resize runs on the source compute host. Fixes bug #900849 * First steps towards consolidating testing infrastructure * Remove some remnants of ChangeLog and vcsversion.py generation * Pass '-r' option to 'collie cluster status' * Remove remnants of babel i18n infrastructure * Fixes a typo preventing attaching RBD volumes * Remove autogenerated pot file * Make admin\_password keyword in compute manager run\_instance method match what we send in the compute API. Fixes bug #900591 * remove duplicate netaddr in nova/utils * cleanup: remove .bzrignore * add index to instance\_uuid column in instances * Add missing documentation for shared folder issue with unit tests and Python lock file * Updated nova-manage to work with uuid images. Fixes bug 899299 * Add availability\_zone to the refresh list * Document nova-tarball Jenkins job * Adds extension documentation for some but not all extensions * Add templates for selected resource extensions * EC2 rescue/unrescue is broken, bug 899225 * Better exception handling during run\_instance * Implement resize down for XenAPI * Fix for EC2 API part of bug 897164 * Remove some unused imports from db * Replacing instance id's in xenapi.vmops and the xen plugin with instance uuids. The only references to instance id's left are calls to the wait\_for\_task() method. I will address that in another branch. 
blueprint: internal-uuids * Convert get\_lock in compute to use uuids * Fix to correctly report memory on Linux 3.X * Replace more cases of instance ids with uuids * Make run\_instance only support instance uuids * Updates simple scheduler to allow strict availability\_zone scheduling * Remove VIF<->Network FK dependency * Adds missing image\_meta to rescue's spawn() calls * Bug #898290: iSCSI volume backend treats FLAGS.host as a hostname * split rxtx\_factor into network and instance\_type * Fixing get\_info method implementations in virt drivers to accept instance\_name instead of instance\_id. The abstract class virt.ComputeDriver defines get\_info as: def get\_info(self, instance\_name). blueprint: internal-uuids * Fixes bug 767947 * Remove unused ec2.action\_args * Fix typo: priviledges -> privileges * Bug #896997: nova-vncproxy's flash socket policy port is not configurable * Convert compute manager delete methods to objects * Removing DOS line endings in vmwareapi\_conn.py * reboot & rebuild to use uuids in compute manager * Fix for bug 887712 * Add NAT/gateway support to QuantumManager * Fix QuantumManager update\_dhcp calls * Fix RPC responses to allow None response correctly * Use uuids for compute manager agent update * power\_on/power\_off in compute manager to use uuids * Use uuids for file injection * removed logic of throwing exception if no floating ip * Adding an install\_requires to the setup call. Now you can pip install nova on a naked machine * Removing obsolete bzr-related clauses in setup.py * Makes rpc\_allocate\_fixed\_ip return properly * Templatize extension handling * start/stop in compute manager to use uuids * Updating {add,remove}\_fixed\_ip\_from\_instance in compute.api and compute.manager to use instance uuid instead of instance id. blueprint internal-uuids * Use instance uuids for consoles and diagnostics * Fixes bug 888649 * Fix Bug #891718 * Bug #897091: "nova actions" fails with HTTP 400 / TypeError if a server action has been performed * Bug #897054: stack crashes with AttributeError on e.reason if the server returns an error * Refactor a few things inside the xenapi unit tests * New docs: unit tests, Launchpad, Gerrit, Jenkins * Fix trivial fourth quote in docstring * Fix deprecation warnings * Fix for bug 894431 * Remove boot-from-volume unreachable code path (#894172) * reset/inject network info in compute to use uuid * Updating set\_admin\_password in compute.api and compute.manager to use instance uuids instead of instance ids. Blueprint internal-uuids * rescue/unrescue in compute manager to use uuids * Updated development environment docs * Call df with -k instead of -B1 * Make fakelibvirt python2.6 compatible * Clean up compute api * Updating attach/detach in compute.api and compute.manager to use instance uuid instead of instance id. blueprint internal-uuids * Change compute API.update() to take object+params * Use XMLDictSerializer for resource extensions * Updating {add,remove}\_security\_group in compute.api to use instance uuids instead of instance ids. 
blueprint internal-uuids * Extend test\_virt\_driver to also test libvirt driver * poll\_rebooting\_instances passes an instance now * Revert "Fixes bug 757033" * Put instances in ERROR state when scheduler fails * Converted README to RST format * Workaround xenstore race conditions * Fix a minor memory leak * Implement schedule\_prep\_resize() * Fixes bug 886263 * snapshot/backup in compute manager to use uuids * Fixes bug 757033 * Converting tests to use v2 * lock/unlock in compute manager to use uuids * suspend/resume in compute manager to use uuids * Refactor metadata code out of ec2/cloud.py * pause/unpause in compute manager to use uuids * Creating new v2 namespace in nova.api.openstack * Add a "libvirt\_disk\_prefix" flag to libvirt driver * Added RST docs on how to use gettext * Refactoring/cleanup of some view builders * Convert remaining calls to use instance objects * Make run instances respect availability zone * Replacing disk config extension to match spec * Makes sure gateways forward properly * Convert security\_group calls to use instance objs * Remove hostname update() logic in compute.API * Fixes bug 890206 * Follow hostname RFCs * Reference Ron Pedde's cleanup script for DevStack * Remove contrib/nova.sh and other stale docs * Separate metadata api into its own service * Add logging, error handling to the xenstore lib * Converting lock/unlock to use instance objects * Deepcopy optparse defaults to avoid re-appending multistrings (#890489) * install\_venv: apply eventlet patch correctly with python 2.7 (#890461) * Fix multistring flags default handling (#890489) * Fixing image create in S3ImageService * Defining volumes table to allow FK constraint * Converting network methods to use instance objects * Handle null ramdisk/kernel in euca-describe-images * Bind engine to metadata in migration 054 * Adding downgrade for migration 57 plus test * Log the URL to an image\_ref and not just the ID * Converting attach\_volume to use instance object * Converting rescue/unrescue to use instance objects * Converting inject\_file to use instance objects * Bug #888719: openvswitch-nova runs after firstboot scripts * Bug #888730: vmwareapi suds debug logging very verbose * Converting consoles calls to use instance objects * Converting fixed ip calls to use instance objects * Convert pause/unpause, sus/res to use instance obj * fix rebuild sha1 not string error * Verify security group parameters * Converting set password to use instance objects * Converting snapshot/backup to use instance objects * Refactor of QuotaError * Fix a notification bug when creating instances * Converting metadata calls to use instance objects * nova-manage: exit with status 1 if an image registration fails * Converting start and stop to use instance objects * Converting delete to use instance objects * Capture exceptions happening in API layer * Removed some old cruft * Add more error handling to glance xenapi plugin * Fixes bug 871877 * Replace libvirt driver's use of libxml2 with ElementTree * Extend fake image service to let it hold image data * Bug #887805 Error during report\_driver\_status(): 'LibvirtConnection' object has no attribute '\_host\_state' * More spelling fixes inside of nova * Fixes LP878319 * Fix exception reraising in volume manager * Adding Chuck Short to .mailmap * Undefine libvirt saved instances * Split compute api/manager tests within module * Workaround for eventlet bug with unit tests in RHEL6.1 * Apply M2Crypto fix for all Fedora-based distributions * Fix failing libvirt test (bug 
888250) * Spelling fixes in nova/api comments * Get MAC addresses from Melange * Refactor logging\_error into utils * Converting rebuild to use instance objects * Converting resize to use instance objects * Converting reboot to use instance objects * Reducing the number of compute calls to Glance * Remove duplicate method (2) * Move tests for extensions to contrib directory * Remove duplicate method * Remove debugging print * Adds extended status information via the Admin API to the servers calls * Wait until the instance is booted before setting VCPU\_params * changes logging reference in zone\_manager.py * Exception cleanup in scheduler * Fixing create\_vbd call per VolumeHelper refactoring * Switch glance XenAPI plugin to use urllib2 * Blueprint lasterror * Move failed instances to error state * Adding task\_states.REBOOTING\_HARD * Set task state to UPDATING\_PASSWORD when needed * Clean up docstrings for faults.Fault and its usage * Fix typo in docstring * Add DHCP support to the QuantumManager and break apart dhcp/gateway * Change network delete to delete by uuid or cidr * Bug #886353: Faults raised by OpenStack API Resource handlers fail to be reported properly * Define faults.Fault.\_\_str\_\_ * Speed up tests a further 35 seconds * Removing duplicate kernel/ramdisk check in OSAPI * Remove unnecessary image list in OSAPI * Add auto-reloading JSON config file support to scheduler * Change floating-snat to float-snat * Allows non-admin users to use simple scheduler * Skip libvirt tests when libvirt not present * Correcting libvirt tests that were failing * Fix for launchpad bug #882568 * Gracefully handle Xen resize failure * Don't update database before resize * fix bug 816630 * Set nova-manage to executable. Fixes LP885778 * Fixing immediate delete after boot on Libvirt * exception.KeypairNotFound usage correction * Add local storage of context for logging * Reserve memory/disk for dom0/host OS * Speed up tests yet another 45 seconds * APIs should not wait on scheduler for builds in single zone deployment * Added some documentation to db.api module docstring * Updated rst docs to include threading model * Adds documentation for Xen Storage Manager * Xen Storage Manager Volume Driver * Drop extra XML tests and remove \_json suffix from names * Fix empty group\_id to be considered invalid * Stop nova-ajax-console-proxy configuring its own logging * Bug 884863: nova logs everything to syslog twice * Log the exception when we get one * Use fat32 for Windows, linux-swap for Linux swap partitions * Fix KeyError when passed unknown format of time * flatten distributed scheduler * Bug #884534: nova-ajax-console-proxy crashes on shutdown * Bug 884527: ajax\_console\_proxy\_port needs to be an integer * Too much information is returned from POST /servers * Disable SQLite synchronous mode during tests * Creating uuid -> id mapping for S3 Image Service * Fix 'begining' typo in system usage data bug 884307 * Fixes lp883279 * Log original dropped exception when a new exception occurs * Fix lp:861160 -- newly created network has no uuid * Bug #884018: "stack help" prints stacktrace if it cannot connect to the server * Optional --no-site-packages in venv * fixes bug 883233. 
Added to Authors. fix typo in scheduler/driver.py assert\_compute\_node\_has\_enough\_memory * Updated NoAuth to account for requests ending in / * Retry failed SQL connections (LP #876663) * Removed autogenerated API .rst files * Fix to a documentation generation script * Added code to libvirt backend to report state info * Adding bulk create fixed ips. The true issue here is the creation of IPs in the DB that are not currently used (we are building the entire block). This fix is just a bandaid, but it does cut ~25 seconds off of the quantum tests on my laptop * Fix overzealous use of faults.Fault() wrapper * Revert how APIs get IP address info for instances * Support server uuids with security groups * Support using server uuids when accessing consoles * Adding support for retrying glance image downloads * Fix deletion of instances without fixed ips * Speed up test suite by 20 seconds * Removed callback concept on VM driver methods: * Fix file injection for OSAPI rebuilds. Fixes 881649 * Replaces all references to nova.db.api with nova.db * venv: update distribute as well as pip * Fix undefined glance\_host in get\_glance\_client * Fix concurrency of XenAPI sessions * Server metadata must support server uuids * Add .gitreview config file for gerrit * Convert instancetype.flavorid to string * Make sure networks returned from get\_instance\_nw\_info have a label * Use UUIDs instead of IDs for OSAPI servers * Improve the liveness checking for services * Refactoring of extensions * Moves a-zone scheduling into simple scheduler * Adds ext4 and reiserfs to \_mount\_filesystem() * Remove nova dependency on vconfig on Linux * Upgrade pip in the venv when we build it * Fixes bug 872459 * Repartition and resize disk when marked as managed * Remove dead DB API call * Only log instance actions once if instance action logging is enabled (now disabled by default) * Start switching from gflags to optparse * Don't leak exceptions out to users * Fix EC2 test\_cloud timing issues * Redirects requests from /v#.# to /v#.#/ * Chain up to superclass tearDown in ServerActionsTest * Updated RST docs: bzr/launchpad -> git/github * Refactoring nova.tests.api.openstack.test\_flavors * Refactoring image and server metadata api tests * Refactoring nova.tests.api.openstack.test\_servers * Refactoring nova.tests.api.openstack.test\_images * Utility script that makes enforcing PEP8 within git's pre-commit hook as easy as possible * Add XML templates * Remove OSAPI v1.0 * Remove unused flag\_overrides from TestCase * Cancel any clean\_reboot tasks before issuing the hard\_reboot * Makes snapshots work for amis. Fixes bug 873156 * Xenapi driver can now generate swap from instance\_type * Adds the ability to automatically issue a hard reboot to instances that have been stuck in a 'rebooting' state for longer than a specified window * Remove redundant, dead code * Added vcpu\_weight to models * Updated links in the README that were out of date * Add INPUT chain rule for EC2 metadata requests (lp:856385) * Allow the user to choose either ietadm or tgtadm (lp:819997) * Remove VolumeDriver.sync\_exec method (lp:819997) * Adds more usage data to Nova's usage notifications * Fixes bug 862637 -- make instance\_name\_template more flexible * Update EC2 get\_metadata calls to search 'deleted': False. Fixes nova smoke\_tests!!! 
* Use new ip addr del syntax * Updating HACKING to make split up imports into three blocks * Remove RateLimitingMiddlewareTest * Remove AoE, Clean up volume code * Adds vcpu\_weight column to instance\_types table and uses this value when building XenServer instances * Further changes to the cleaner * Remove duplicated functions * Reference orphaned\_instance instead of instance * Continue to the next iteration of the loop if an instance is not found * Explicit errors on confirm/revertResize failures * Include original exception in ClassNotFound exception * Enable admin access to EC2 API server * Make sure unknown extensions return 404 * Handle pidfile exception for dnsmasq * Stop returning correct password on api calls * Restructure host filtering to be easier to use * Add support for header version parameter to specify API version * Set error state on spawn error + integration test * Allow db schema downgrades * moved floating ip db access and sanity checking from network api into network manager; added floating ip get by fixed address; added fixed\_ip\_get; moved floating ip testing from osapi into the network tests where they belong * Adds a script that can automatically delete orphaned VDIs. Also had to move some flags around to avoid circular imports * Improve access check on images * Updating image progress to be more granular. Before, the image progress had only 2 states, 0 and 100. Now it can be 0, 25, 50 or 100 * Deallocate ip if build fails * Ensure non-default FLAGS.logfile\_mode is properly converted to an octet * Moving admin actions to extension * Fixes bug 862633 -- OS api consoles create() broken * Adds the tenant id to the create images response Location header. Fixes bug 862672 * Fixes bug 862658 -- ec2 metadata issue getting IPs * Added ==1.0.4 version specifier to kombu in pip-requires to ensure tests pass in a clean venv * bug lp845714 * install\_venv: pip install M2Crypto doesn't work on Fedora * install\_venv: add support for distro specific code * install\_venv: remove versioned M2Crypto dependency * install\_venv: don't use --no-site-packages with virtualenv * install\_venv: pass the --upgrade argument to pip install * install\_venv: refactor out pip\_install helper * Replace socat with netcat * api.ec2.admin unit tests * Fixes Bug #861293: nova.auth.signer.Signer now honors the SignatureMethod parameter for SHA1 when creating signatures * Enforce snapshot cleanup * bug 861310 * Change 'recurse\_zones' to 'local\_zone\_only' * Fixes euca-describe-instances failing or not showing IPs * Fixed bug lp850602. Adding backing file copy operation on kvm block migration * Add nova-all to run all services * Snapshots/backups can no longer happen simultaneously. Tests included * Accept message as sole argument to NovaException * Use latest version of SQLAlchemy * Fix 047 migration with SQLAlchemy 0.7.2 * Beef up nova/api/direct.py tests * Signer no longer fails if hashlib.sha256 is not available. test\_signer unit test added * Make snapshots private by default * use git config's review.username for rfc.sh * Raise InsufficientFreeMemory * Adding run\_test.sh artifacts to .gitignore * Make sure options is set before checking managed\_disk setting. Fixes bug 860520 * compute\_api create\*() and schedulers refactoring * Removed db\_pool complexities from nova.db.sqlalchemy.session. Fixes bug 838581 * Ensure minRam and minDisk are always integers * Call endheaders when auth\_token is None. 
Fixes bug 856721 * Catch ImageNotFound on image delete in OSAPI * Fix the grantee group loading for source groups * Add next links to images requests * put fully qualified domain name in local-hostname * Removing old code that snuck back in * Makes sure to recreate gateway for moved ip * Fix some minor issues due to premature merge of original code * \* Rework osapi to use network API not FK backref \* Fixes lp854585 * Allow tenant networks to be shared with domain 0 * Use ovs-vsctl iface-to-br to look up the bridge associated with the given VIF. This avoids assuming that vifX.Y is attached to xenbrY, which is untrue in the general case * Made jenkins email pruning more resilient * Fixing bug 857712 * Adds disk config * Adding xml schema validation for /versions resource * Fix bug 856664: overLimit errors now return 413 * Don't use GitPython for authors check * Fix outstanding pep8 errors for a clean trunk * Add minDisk and minRam to OSAPI image details * Fix rfc.sh's check for the project * Add rfc.sh to help with gerrit workflow * This patch adds flavor filtering, specifically the ability to filter on minRam, minDisk, or both, per the 1.1 OSAPI spec * Add next links for server lists in OSAPI 1.1. This adds servers\_links to the json responses, and an extra atom:link element to the servers node in the xml response * Update exception.wrap\_exception so that all exceptions (not just Error and NovaException types) get logged correctly * Merging trunk * Adding OSAPI tests for flavor filtering * This patch adds instance progress which is used by the OpenStack API to indicate how far along the current executing action is (BUILD/REBUILD, MIGRATION/RESIZE) * Merging trunk * Fixes lp:855115 -- issue with disassociating floating ips * Renumbering instance progress migration * Fixing tests * Keystone support in Nova across Zones * trunk merge fixup * Fix keys in ec2 conversion to make sure not to use unicode * Adds an 'alternate' link to image views per 3.10 and 3.11 of http://docs.openstack.org/cactus/openstack-compute/developer/openstack-compute-api-1.1/content/LinksReferences.html * Typo * Fixing tests * Fixing tests * make sure kwargs are strings and not unicode * Merging trunk * Adding flavor filtering * Instance deletions in OpenStack are immediate. 
This can cause data to be lost accidentally * Makes sure ips are moved on the bridge for nodes running dnsmasq so that the gateway ip is always first * pep8 * add tests and fix bug when no ip was set * fix diverged branch * migration conflict fixed * clean up based on cerberus review * clean up based on cerberus review * Remove keystone middlewares * fix moving of ips on flatdhcp bridge * Merged trunk * merged trunk * update floating ips tests * floating ip could have no project and we should allow access * actions on floating IPs in other projects for non-admins should not be allowed * floating\_ip\_get\_by\_address should check user's project\_id * Pep8 fixes * Merging trunk * Refactoring instance\_type\_get\_all * remove keystone url flag * merge trunk, fix conflicts * remove keystone * Include 'type' in XML output * Minor cleanup * Added another unit test * Fixed unit tests with some minor refactoring * Fix the display of swap units in nova manage * Refactored alternate link generation * pep8 fixes * Added function to construct a glance URL and unit test * merge from trunk * convert images that are not 'raw' to 'raw' during caching to node * show swap in Mb in nova manage * Address Soren's comments: \* clean up temp files if an ImageUnacceptable is going to be raised Note, a qemu-img execution error will not clean up the image, but I think that's reasonable. We leave the image on disk so the user can easily investigate. \* Change final 2 arguments to fetch\_to\_raw to not start with an \_ \* use 'env' utility to change environment variables LC\_ALL and LANG so that qemu-img output parsing is not locale dependent. Note, I considered the following, but found using 'env' more readable out, err = utils.execute('sh', '-c', 'export LC\_ALL=C LANG=C && exec "$@"', 'qemu-img', 'info', path) * Add iptables filter rules for dnsmasq (lp:844935) * create disk.local the same way ephemerals are created (LP: #851145) * merge with trunk r1601 * fix call to gettext * Fixed --uuid network command in nova-manage to desc to "uuid" instead of "net\_uuid" * removes warning set forth in d3 for deprecated setting of bridge automagically * Update migration 047 to dynamically lookup the name of the instance\_id fkey before dropping it. We can't hard code the name of the fkey since we didn't name it explicitly on create * added to authors cuz trey said I can't patch otherwise! * Fixed --uuid network command in nova-manage to desc to "uuid" instead of "net\_uuid" * merged with trunk * Update migration 047 to dynamically lookup the name of the instance\_id fkey before dropping it. We can't hard code the name of the fkey since we didn't name it explicitly on create * oops, add project\_id and 'servers' to next links * Fixes migration for MySQL to drop the FK on the right table * Reverted some changes to instance\_get\_all\_by\_filters() that were added in rev 1594. An additional argument for filtering on instance uuids is not needed, as you can add 'uuid: uuid\_list' into the filters dictionary. Just needed to add 'uuid' as an exact\_match\_filter. This restores the filtering to do a single DB query * fix syntax error in exception, remove "Dangerous!" comment * merged trunk and resolved conflict * run the alter on the right table * fix unrelated pep8 issue in trunk * use dictionary format for exception message * fix a test where list order was assumed * Removed the extra code added to support filtering instances by instance uuids. Instead, added 'uuid' to the list of exact\_filter\_match names. 
Updated the caller to add 'uuid: uuid\_list' to the filters dictionary, instead of passing it in as another argument. Updated the ID to UUID mapping code to return a dictionary, which allows the caller to be more efficient... It removes an extra loop there. A couple of typo fixes * Reworked the export command to be nova-manage shell export --filename=somefile * Adds the ability to automatically confirm resizes after the \`resize\_confirm\_window\` (0/disabled by default) * use '\_(' for exception messages * PEP8 cleanup * convert images that are not 'raw' to 'raw' during caching to node * now raising instead of setting bridge to br100 and warning as was noted * \* Remove the foreign key and backrefs tying vif<->instance \* Update instance filtering to pass ip related filters to the network manager \* move/update tests * Adds an optional flag to force dhcp releases on instance termination. This allows ips to be reused without having to wait for the lease to time out * remove urllib import * Fixing case where OSAPI server create would return 500 on malformed body * Fix the issue with the new dnsmasq where it tries and fails to bind to ipv6 addresses * Merging trunk * Renaming progress migration to 47 * merge with trunk * Added unit test * Corrected the status in DB call * don't try to listen on ipv6 addresses, or new dnsmasq goes boom * make our own function instead of using urllib.urlencode since we apparently don't support urlencoded strings yet * Merged trunk * remove unused import * merge the sknurt * remove the polymorph * Fix typo in comment * Fixes the handling of snapshotting in libvirt driver to actually use the proper image type instead of using raw for everything. Also cleans up an unneeded flag. Based on doude's initial work * merge with trunk * removing extra newline * catching AttributeError and adding tests * Remove vestigial db call for fixed\_ips * Fixes the user credentials for installing a config-drive from imageRef * Some Linux systems can also be slow to start the guest agent. This branch extends the Windows agent timeout to apply to all systems * remove extra line * get the interface using the network and instance * flag typo * add an optional flag to force dhcp release using dnsmasq-utils * Fix user\_id, project\_id reference for config\_drive with imageRefs * Fix a bug that would make spawning new instances fail if no port/protocol is given (for rules granting access for other security groups) * When swap is specified as block device mapping, its size becomes 0 wrongly. This patch makes it set the correct size according to instance\_type * Fix pep8 issues * fixed grant user, added stdout support * This changes the interpretation of 'swap' for an instance-type to be in MB rather than GB * Fixing list prepend * Merging trunk * create disk.local the same way ephemerals are created (LP: #851145) * Fix failing test * Authorize to start a LXC instance without key, network file to inject or metadata * Update the v1.0 rescue admin action and the v1.1 rescue extension to generate 'adminPass'. Fixes an issue where rescue commands were broken on XenServer. 
lp#838518 * pep8 * merge the trunks * update tests to return fake\_nw\_info that is valid for the pre\_live\_migrate * make sure to raise since the tests require it * Pep8 Fix * Update test\_volumes to use FLAGS.password\_length * Zero out the progress when beginning a resize * Adding migration progress * Only log migration info if it exists * remove getting fixed\_ips directly from the db * removed unused import * Fixes libvirt rescue to use the same strategy as xen. Use a new copy of the base image as the rescue image. It leaves the original rescue image flags in, so a hand-picked rescue image can still be used if desired * Fixing tests, PEP8 failures * fix permissions * Add a FakeVirDomainSnapshot and return it from snapshotCreateXML. Fixes libvirt snapshot tests * merge the trunks * Merged trunk * I am using the iputils-arping package to send the arping command. You will need to install this package on the network nodes using: apt-get install iputils-arping * Removed sudo from the arguments * Add a FakeVirDomainSnapshot and return it from snapshotCreateXML. Fixes libvirt snapshot tests * merge from trunk * Make sure grantee\_group is eagerly loaded * Merged trunk * compute/api: swap size issue * Update exception.wrap\_exception so that all exceptions (not just Error and NovaException types) get logged correctly * Removes the on-disk internal libvirt snapshot after it has been uploaded to glance * cleaned up * remove debugging * Merging trunk * Allowing resizes to the same machine * trunk merge * updates Exception.NoMoreFixedIps to subclass NovaException instead of Error * NoMoreFixedIps now subclasses NovaException instead of Error * merge trunk * was trying to create the FK when it should have been dropping * pep8 * well since sqlalchemy-migrate and sqlalchemy can't agree on what the FK is called, we fall back on just manually dropping it * tests working again * the table is the table for the reason its a table * uhh dialect doesn't exist, beavis * update comment * if no public-key is given (--key), do not show public-keys in metadata service * it merges the trunk; or else it gets the conflicts again * exceptions properly passed around now * merge with trunk at revno 1573 * add the fake\_network Manager to prevent rpc calls * This makes the OS api extension for booting from volumes work. 
The \_get\_view\_builder method was replaced in the parent class, but the BootFromVolume controller was not updated to use the new method * remove unneeded imports and skips * pep8 fixes * Added a unit test * pass-through all other parameters in next links as well * update for the id->uuid flip * Merged trunk * Adding flavor extra data extension * Merged trunk * fix test * revert last change * Added virt-level support for polling unconfirmed resizes * build the query with the query builder * Removing toprettyxml from OSAPI xml serialization in favor of toxml * use uuids everywhere possible * make sure to use the uuid * update db api for split filtering searches * update tests * delete the internal libvirt snapshot after it is saved to glance * cleanup prints in tests * cleanup prints in tests * Add a simple test for the OS boot from volume api * get rid of debugs * Merged from trunk and resolved conflicts * Execute arping command using run\_as\_root=True instead of sudo * Return three rules for describe\_security\_groups if a rule refers to a foreign group, but does not specify protocol/port * pep8 issues * added xml support for servers\_list in response with tests * Merged trunk * added servers\_links in v1.1 with tests * added build\_list to servers controllers and view builder and kept all old tests passing * The 1.1 API specifies that two vendor content types are allowed in addition to the standard JSON and XML content types * pep8 * tests are back * PEP8 fix * Adding progress * In the unlikely case of an instance losing a host, make sure we still delete the instance when a forceDelete is done * 0 for the instance id is False ;) * Cleanup state management to use vm\_state instead of task\_state. Add schedule\_delete() method so delete() actually does what it says it does * merge trunk * write out xml for rescue * fix up the filtering so it does not return duplicates if both the network and the db filters match * fix rescue to use the base image, reset firewall rules, and accept network\_info * make sure to pass in the context * move the FakeNetworkManager into fake\_network * Fix issue where floating ips don't get recreated when a network host reboots * ip tests were moved to networking * add tests * fix typo * allow matching on fixed\_ip without regex and don't break so all results are reported * add case where vif may not have an instance\_id associated with it * fix typo * Initial pass at automatically confirming resizes after a given window * Use the correct method to get a builder * merge trunks * pep8 * move ip filtering over to the network side * fix pep8 whitespace error * add necessary fields to flavor.rng schema * get all the vifs * get all the vifs * make sure we are grabbing out just the ids * flavor\_elem.setAttribute -> flavor\_elem.set, flavor -> flavor\_dict * minor changes to credentials for the correct format * Don't report the wrong content type if a mapped type doesn't exist * add stubs for future tests that need to be written * Test both content types for JSON and XML * Remove unnecessary vendor content types now that they are mapped to standard content types automatically * Add copyright * Map vendor content types to their standard content type before serializing or deserializing. 
This is so we don't have to litter the code with both types when they are treated identically * exporting auth to keystone (users, projects/tenants, roles, credentials) * make xml-api tests pass * update variable name after merge: flavor\_node -> flavor\_elem * resolve conflicts / merge with trunk revno 1569 * Fixes an issue where 'invalid literal for int' would occur when listing images after making a v1.1 server snapshot (with a UUID) * fixed tests * removing toprettyxml * add attributes to xml api * Remove debugging * Update test\_libvirt so that flags and fakes are used instead of mocks for utils.import\_class and utils.import\_object. Fixes #lp849329 * fix the test so that it fakes out the network * fix white space for pep8 * fix test\_extensions test to know of new extension FlavorExtraData * add extension description for FlavorExtraData * Adding migration for instance progress * Make tests pass * no need for the instance at all or compute * bump the migration * remove unused import, make call to network api to get vifs for the instance * merge the trunk * skip a bunch of tests for the moment since we will need to rework them * remove the vif joins, some dead code, and the ability to take in some instances for filtering * allow passing in of instances already * run the instances filter through the network api first, then through the db * add get\_vifs\_by\_instance and stub get\_instance\_ids\_by\_ip\_filter * change vifs to rpc call and add instance ids by ip * Multi-NIC support for vmwareapi virt driver in nova. Does injection of Multi-NIC information to instances with Operating system flavors Ubuntu, Windows and RHEL. vmwareapi virt driver now relies on calls to network manager instead of nova db calls for network configuration information of instance. Re-organized VMWareVlanBridgeDriver and added session parameter to methods to use existing session. Also removed session creation code as session comes as argument. Added check for flat\_inject flag before attempting an inject operation * last of the api.openstack.test\_images merge fixes * pep8 fixes * trunk merge * makes sure floating addresses are associated with host on associate so they come back * Deprecate aoe in preparation for removal in essex * Only allow up to 15 chars for a Windows hostname * pep8 * deprecate aoe * Fix instance rebooting (lp847604) by correcting a malformed cast in compute.api and an incorrect method signature in the libvirt driver * Fix mismerge * make tests pass * This patch teaches virt/libvirt how to format filesystem on ephemeral device depending on os\_type so that the behaviour matches with EC2's. Such behaviour isn't explicitly described in the documentation, but it is confirmed by checking real EC2 instances. This patch introduces the option virt\_mkfs as a multistring. Its format is --virt\_mkfs=<os\_type>=<mkfs command>. When creating ephemeral device, format it according to the option depending on os\_type. This addresses the bugs, https://bugs.launchpad.net/nova/+bug/827598 https://bugs.launchpad.net/nova/+bug/828357 * Test new vendor content types as well * Only allow up to 15 chars for a Windows hostname * Split accept tests to better match the name of the test * Remove debugging print * Inject hostname to xenstore upon creation * Update test\_libvirt so that flags and fakes are used instead of mocks for utils.import\_class and utils.import\_object. 
Fixes #lp849329 * interpret 'swap' to be in MB, not in GB * Actually test expected matches received * Test new content-types * This branch changes XML Serializers and their tests to use lxml.etree instead of minidom * add additional data to flavor's ViewBuilder * Inject hostname to xenstore upon creation * drop the virtual\_interfaces key back to instances * - remove translation of non-recognized attributes to user metadata, now just ignored - ensure all keys are defined in image dictionaries, defaulting to None if glance client doesn't provide one - remove BaseImageService - reorganize some GlanceImageService tests * And again * Update MANIFEST.in to match directory moves from rev1559 * we're back * Update MANIFEST.in to match directory moves from rev1559 * Moving tests/test\_cloud.py to tests/api/ec2/test\_cloud.py. They are EC2-specific tests, so this makes sense * Same as last time * Made tests for version links more robust * PEP8 cleanup * PEP8 cleanup * PEP8 cleanups * zone manager tests working * fixing import * working on getting tests back * relocating ec2 tests * merging trunk; resolving conflicts * Correctly map image statuses from Glance to OSAPI v1.1 * pep8 fixes in nova/db/sqlalchemy/api.py and nova/virt/disk.py * Add support for vendor content types * pep8 fixes * merging trunk; resolving conflicts * Update GlanceClient, GlanceImageService, and Glance Xen plugin to work with Glance keystone * Fix typo (woops) * pep8 fix * Some arches don't have dmidecode, check to see if libvirt is capable of running rather than getInfo of the arch it's running on * merging parent branch lp:~rackspace-titan/nova/glance-client-keystone * adding tests for deleted and pending\_delete statuses * Fixes rogue usage of sudo that crept in * fixups * remove unused dep * add test for method sig * parent merge * migration move * bug fixes * merging trunk * Fixes shutdown of lxc containers * Make quoting consistent * Fix rogue usage of 'sudo' bypassing the run\_as\_root=True method * trunk merge * region name * tweaks * fix for lp847604 to unbreak instance rebooting * use 'qemu-img resize' rather than 'truncate' to grow image files * When vpn=true in allocate ip, it attempts to allocate the ip that is reserved in the network. Unfortunately fixed\_ip\_associate attempts to ignore reserved ips. This fix allows filtering reserved ip addresses only when vpn=True * Do not require --bridge\_interface for FlatDHCPManager (lp:844944) * Makes nova-vncproxy listen for requests on the queue like it did before the bin files were refactored * Update GlanceClient, GlanceImageService, and Glance Xen plugin to work with Glance keystone * api/ec2/ebs: make metadata return correct swap and ephemeral0 * api/ec2: make get\_metadata() return correct mappings * virt/libvirt: format ephemeral device and add fs label when formatting ext3 fs * Fix spelling mistake * Stock zones follow a fill-first methodology: the current zone is filled with instances before other zones are considered. This adds a flag to nova to select a spread-first methodology. The implementation is simply adding a random.shuffle() prior to sorting the list of potential compute hosts by weights * Pass reboot\_type (either HARD or SOFT) to the virt layers from the API * merging trunk * fixing image status mapping * don't need random in abstract\_scheduler.py anymore.. 
* pull-up from trunk; move spread\_first into base\_scheduler.py * trunk merge * adding auth tokens to child zone calls * Add comment to document why random.shuffle() works * Merged trunk * Make whitespace consistent * Use triple quotes for docstrings to be consistent * Remove the unnecessary sudo from qemu-img as it is unneeded and doesn't work with our current packaging * Remove changes\_since and key\_name from basic server entity * Merged trunk * remove extra line for pep8 * remove unnecessary qemu-img flag, use base image type by default * shorten comment to < 79 chars * merged rbp * remove sudo from qemu-img commands * adds a fake\_network module to tests to generate sensible network info for tests. It does not require using the db * Adding a can\_read\_deleted filter back to db.api.instance\_get\_all\_by\_filters that was removed in a recent merge * removing key\_name and config\_drive from non-detailed server entity * Authorize to start a LXC instance without key, network file to inject or metadata * Open Essex (switch version to 2012.1) * Last Diablo translations for Nova * Open Essex (switch version to 2012.1) * Last Diablo translations * pep 8 * Fixing security groups stuff * put key into meta-data, not top level 'data' * metadata key is 'public-keys', not 'keys' * fix for lp844364: fix check for fixed\_ip association in os-floating-ips * if no public-key is given (--key), do not show public-keys in metadata service * NetworkManager's add\_fixed\_ip\_to\_instance calls \_allocate\_fixed\_ips without vpn or requested\_networks parameters. If vpn or requested\_networks is not provided to the \_allocate\_fixed\_ips method, it throws an exception. This issue is fixed now * Merged trunk * First pass at adding reboot\_type to reboot codepath * child zone queries working with keystone now * Added docstring to explain usage of reserved keyword argument * One more bug fix to make zones work in trunk. Basic problem is that in novaclient using the 1.0 OSAPI, servers.create() takes an ipgroups argument, but when using the 1.1 OSAPI, it doesn't, which means booting instances in child zones won't work with OSAPI v1.0. This fix works around that by using keyword arguments for all the arguments after the flavor, and dropping the unused ipgroups argument * Fixes the reroute\_compute decorator in the scheduler API so that it properly: * make check for fixed\_ip association more defensive * Fix lp:844155 * Changing a behavior of update\_dhcp() to write out a dhcp options file. This options file makes dnsmasq offer a default gateway only to NICs of a VM belonging to the network that the first NIC of the VM belongs to. So, the first NIC of the VM must be connected to a network in which a correct default gateway exists. 
By means of this, VMs will not get incorrect default gateways * merged trunk * merging trunk * merging trunk * merged trunk * Make weigh\_hosts() return a host per instance, instead of just a list of hosts * converting fix to just address ec2; updating test * Do not attempt to mount the swap VDI for file injection * Add a NOTE() * Merged trunk * Use .get instead * Do not attempt to mount the swap VDI for file injection * pull-up from trunk * pull-up from trunk * pull-up from trunk * adding can\_read\_deleted back to db api * Clean up shutdown of lxc containers * Cleanup some more comments * Cleanup some comments * fixes vncproxy service listening on rabbit * added tests for failure cases talking with zones * This code contains a new NetworkManager class that can leverage Quantum + Melange * comment fix * typo trying to raise InstanceNotFound when all zones returned nothing * create a new exception ZoneRequestError to use for returning errors when zone requests couldn't complete * pep8 fix for tests/api/openstack/test\_servers.py which is an issue in trunk * catch exceptions from novaclient when talking to child zones. store them and re-raise if no other child zones return any results. If no exceptions are raised but no results are returned, raise a NotFound exception * added test to cover case where no local hosts are available but child hosts are * remove the short circuit in abstract scheduler when no local hosts are available * fix for lp844364: improve check for fixed\_ip association * Ensure restore and forceDelete don't do anything unless the server is waiting to be reclaimed * actually shuffle the weighted\_hosts list.. * Check task\_state for queued delete * spread-first strategy * Make sure instance is deleted before allowing restore or forceDelete * Add local hostname to fix Authors test * delete\_instance\_interval -> reclaim\_instance\_interval * PEP8 cleanup * Restart compute with a lower periodic\_interval to make test run faster * merge trunk * properly handle the id resetters * removed vestige * pull-up from trunk * fix a couple of typos in the added unit test * modified unit tests, set use\_single\_default\_gateway flag to True wherever needed instead of setting it in the init method * exclude net tag from host\_dhcp if use\_single\_default\_gateway flag is set to false * forgot \_id * had used wrong variable * Fixes a case where if a VIF is returned with a NULL network it might not be able to be deleted. Added test case for that fix * Fix for LP Bug #837867 * weigh\_hosts() needs to return a list of hosts for the instances, not just a list of hosts * Merged trunk * Set flat\_injected to False by default * changed the fixed\_ip\_generator * PEP8 cleanup * Wait longer for all agents, not just Windows * merged trunk * updated floating\_ip generation * Tests for deferred delete, restore and forceDelete * An AMI image without ramdisk image should start * Added use\_single\_default\_gateway to switch from multiple default gateways to single default gateway * Fixed unit test * reverting change to GlanceImageService.\_is\_image\_available * At present, the os servers.detail api does not return server.user\_id or server.tenant\_id. 
This is problematic, since the servers.detail api defaults to returning all servers for all users of a tenant, which makes it impossible to tell which user is associated with which server * reverting xenapi change * Micro-fix; "exception" was misspelled as "exceptions" * Fix a misspelling of "exception" * revert changes to display description * merged trunk * novaclient v1\_0 has an ipgroups argument, but novaclient v1\_1 doesn't * Set flat\_injected to False by default * Fixes an issue where 'invalid literal for int' would occur when listing images after making a v1.1 server snapshot (with a UUID) * further cleanup * Default to 0 seconds (off) * PEP8 cleanups * Include new extension * Implement deferred delete of instances * trunk merge * cleaning up tests * zone name not overwritten * Update the v1.0 rescue admin action and the v1.1 rescue extension to generate 'adminPass'. Fixes an issue where rescue commands were broken on XenServer. lp#838518 * fix a mix-up of dataset and expected values in a small test * fix a mistaken deletion in ensure\_floating\_forward * revert codes for db * correct a method to collect instances from db; add interface data to test * added me to Authors * merging trunk * format for pep8 * format for pep8 * implement unit test for linux\_net * Adjust test\_api to account for multiple rules getting returned for a single set rule * Clean up security groups after use * Make a security group rule that references another security group return ipPermission for each of tcp, udp, and icmp * Multi-NIC support for vmwareapi virt driver in nova. Does injection of Multi-NIC information to instances with Operating system flavors Ubuntu, Windows and RHEL. vmwareapi virt driver now relies on calls to network manager instead of nova db calls for network configuration information of instance. Ensure the port group is properly associated with the vlan\_interface specified in case of VLAN networking for instances. Re-organized VMWareVlanBridgeDriver and added session parameter to methods to use existing session. Also removed session creation code as session comes as argument. Added check for flat\_inject flag before attempting an inject operation. Removed stale code from vmwareapi stubs. Also updated some comments to be more meaningful. Did pep8 and pylint checks. Tried to improve pylint score for newly added lines of code * Fix bug #835919: output an options file for dnsmasq so as not to offer a default gateway on the second vif * Accidentally added instance to security group twice in the test. 
Fixed * Minor cleanup * Fixing xml serialization of limits resource * correct floating ip id to increment in fake\_network * Add iptables filter rules for dnsmasq * Merged trunk * Change non E ascii character * Launchpad automatic translations update * Instance record is not inserted in db if the security group passed to the RunInstances API doesn't exist * Added unit tests to check instance record is not inserted in db when security groups passed to the instances do not exist * removed unneeded import * rick nits * alex meade issues * Added list of security groups to the newly added extension (Createserverext) for the Create Server and Get Server detail responses * default description to name * use 'qemu-img resize' rather than 'truncate' to grow image files * remove extra description stuff * fix pep8 violation * feedback from jk0's review, including removing a lot of spaces from docstrings * revert description changes, use metadata['description'] if it is set to populate field in db * merged trunk * change db migrate script again to match other similar scripts * Fix for LP Bug #839269 * move networks declarations within upgrade/downgrade methods * more review cleanup * remove import of 'fake' from nova manager, now that we've moved that to test\_quantum.py * Fixes a small bug which causes filters to not work at all. Also reworks a bit of exception handling to allow the exception related to the bug to propagate up * Email error again. Tired * Email error * Fixed review comments * Add documentation comment * pull-up from trunk * Forgot to handle return value * Add tests for flags 'snapshot\_image\_format' * Update snapshot image metadata 'disk\_format' * Add flag 'snapshot\_image\_format' to select the disk format of the snapshot image generated with the libvirt driver * missing migration * Email contact error * Update Authors file * Merged trunk * Correct tests associated * Fix protocol-less security groups * Adding feedparser to pip-requires * Removing xml functions that are no longer called * Launchpad automatic translations update * Glance can now perform its own authentication/authorization checks when we're using keystone * import filters in scheduler/host\_filter.py so default\_host\_filter gets added to FLAGS; rework SchedulerManager() to only catch missing 'schedule\_' attribute and report other missing attributes * move content of quantum/fake.py to test\_quantum.py in unit testing class (most original content has been removed anyway) * melange testing cleanup, localization cleanup * remove references to MelangeIPAMTest, as they cannot be used yet * Deleted debug messages * Resolved conflicts and fixed pep8 errors * Fix a few references to state\_description that slipped through * added unit tests and cleanup of import statements * renamed fake\_network\_info.py * trunk merge * moved cidr\_v6 back * Probably shouldn't leave that commented out * Added test for NULL network * Fixed lp835242 * Fixes for minor network manager issues centered around deleting/accessing instances which don't have network information set * remove extra references to state\_description * pull-up from trunk * merge unit test from Chris MacGown * Adds test for image.glance.GlanceImageService.\_is\_image\_available * - implements changes-since for servers resource - default sort is now created\_at desc for instances * undo change in setting q\_tenant\_id in quantum\_manager.create\_network * additional review cleanup * docstring cleanup * merging trunk * Fixes NotFound exceptions to show the proper 
instance id in the ec2 api * typo * more review cleanup * another commit from brad * add specific exceptions for quantum client. Fix doc-strings in client.py * merge brad's changes that address most review feedback * fix for lp838583 - fixes bug in os-floating-ips view code that prevents instance\_id from being returned for associated addresses * Accept keypair when you launch a new server. These properties would be stored along with the other server properties in the database (like they are currently for ec2 api) * Launchpad automatic translations update * merge trunk, fix tests * fix for lp838583 - return instance\_id for associated floating\_ips, add test * removing unnecessary imports * remove BaseImageService * pep8 * move GlanceImageService tests to proper module; remove translation of non-standard image attributes to properties; ensure all image properties are available, defaulting to None if not provided * merge trunk * Add comment for an uncommon failure case that we need to fix * Fix for LP Bug #838466 * Correctly yield images from glance client through image service * Simple usage extension for nova. Uses db to calculate tenant\_usage for specified time periods * Fix for LP Bug #838251 * merge trunk, fix conflict * Validates that user-data is b64 encoded * Updated VersionsAtomSerializer.index to use lxml.etree to generate atom feed * remove extra test * merged trunk * Fixed and improved the way instance "states" are set. Instead of relying solely on the power\_state of a VM, there are now explicitly defined VM states and VM task states which respectively define the current state of the VM and the task which is currently being performed by the VM * Updating test for xml to use lxml * expect key\_name attribute in 1.1 * change to use \_get\_key\_name to retrieve the key * Implements lp:798876 which is 'switch carrot to kombu'. Leaves carrot as the default for now... decision will be made later to switch the default to kombu after further testing. There's a lot of code duplication between carrot and kombu, but I left it that way in preparation for ripping carrot out later and to keep minimal changes to carrot * Disassociated previously associated floating ips when calling network\_api.associate\_floating\_ip. 
Disassociated previously associated floating ips when calling network\_api.associate\_floating\_ip. Also guard against double-association in the network.manager * adding support for limiting in image service; updating tests with fixture ids and marker support * trunk merge * merging trunk * fix keypairs stubs * add explicit message for NoMoreFloatingIps exception * fix for chris behrens' comment - move tenant\_id => project\_id mapping to compute.api.get\_all * moved key\_name per review * zone\_add fixed to support zone name * kludge for kombu 1.1.3 memory transport bug * merged trunk * Removed extraneous import and s/vm\_state.STOP/vm\_states.STOPPED/ * Merged trunk * Code cleanup * Use feedparser to parse the generated atom feeds in the tests for the versions resource * add test to verify 400 response when out of addresses * switched default to kombu per vishy * use kombu.connection.BrokerConnection vs kombu.connection.Connection so that older versions of kombu (1.0.4) work as well as newer * fix FloatingIpAlreadyInUse to use correct string pattern, convert ApiErrors to 400 responses * Fix for LP Bug #782364 * Fix for LP Bug #782364 * more logging info to help identify bad payloads * Removed test\_parallel\_builds in the XenAPI tests due to it frequently hanging indefinitely * logging change when rpc pool creates new connection * pep8 fix * make default carrot again and delay the import in rpc/\_\_init\_\_.py * Removed debug messages * Fix for LP Bug #837534 * add kombu to pip-requires and contrib/nova.sh * restore old way FLAGS.rpc\_backend worked.. no short name support for consistency * fix remaining tests * Update RequestContext so that it correctly sets self.is\_admin from the roles array. Additionally add a bit of code to ignore case as well * pep8, fix fakes * fix a bunch of direct usages of db in compute api * make two functions instead of fast flag and add compute api commands instead of hitting db directly * fixing bug * fixing short-circuit condition * yielding all the images * merged trunk * changing default sort to created\_at * The exception 'RamdiskNotFoundForImage' is no longer used * With OS API, if the property 'ramdisk\_id' isn't set on the AMI image, Nova cannot instantiate it. With EC2 API, the AMI image can be instantiated * adding an assert * Use getCapabilities rather than getInfo() since some versions of libvirt don't provide dmi information * supporting changes-since * Fix a bad merge on my part, this fixes rebuilds\! * disassociate floating ips before re-associating, and prevent re-association of already associated floating ips in manager * Update RequestContext so that it correctly sets self.is\_admin from the roles array. Additionally add a bit of code to ignore case as well * Merged trunk * remove unneeded connection= in carrot Consumer init * pep8 fix for test\_rpc\_common.py * fix ajax console proxy for new create\_consumer method * doc string cleanup * created nova/tests/test\_rpc\_common.py which contains a rpc test base class so we can share tests between the rpc implementations * ditched rpc.create\_consumer(conn) interface... instead you now do conn.create\_consumer() * Update the EC2 ToToken middleware to use eventlet.green.httplib instead of httplib2.
Fixes issues where the JSON request body wasn't getting sent to Keystone * remove brackets from mailmap entry * access db directly in network manager's delete\_network method, so stubbed test call works correctly * more logging info to help identify bad payloads * In the XenAPI simulator, set VM.domid, when creating the instance initially, and when starting the VM * remove 'uuid' param for nova-manage network delete that I had added previously * add alias to mailmap * update file name for db migrate script after merge (again) * update file name for db migrate script after merge * merged trunk * Fixes this bug by removing the test. The test has no asserts and seems to be raising more problems than it could solve * Removed test\_parallel\_builds * Merged trunk * Increased migration number * Fixes lp:813864 by removing the broken assert. The assert was a check for isinstance of 'int' that should have been 'long'. But it doesn't appear this assert really belongs, anyway * Merged trunk * Adds assertIn and assertNotIn support to TestCase for compatibility with python 2.6. This is a very minimal addition which doesn't require unittest2 * support the extra optional arguments for msg to assertIn and assertNotIn * removed broken assert for abstract\_scheduler * pep8 fixes * fix for assertIn and assertNotIn use which was added in python 2.7. this makes things work on 2.6 still * merge trunk * restore fixed\_ip\_associate\_pool in nova/db/sqlalchemy.py to its original form before this branch. Figured out how to make unit tests pass without requiring that this function changes * remove unused rpc connections in test\_cloud and test\_adminapi * carrot consumer thread fix * add carrot/kombu tests... small thread fix for kombu * add doc-strings for all major modules * remove fake IPAM lib, since qmanager must now access nova DB directly * Update the EC2 ToToken middleware to use eventlet.green.httplib instead of httplib2. Fixes issues where the JSON request body wasn't getting sent to Keystone * fix nova/tests/test\_test.py * fix nova-ajax-console-proxy * fix test\_rpc and kombu stuff * always set network\_id in virtual\_interfaces table, otherwise API commands that show IP addresses get confused * start to rework some consumer stuff * update melange ipam lib to use network uuid, not bridge * fix issue with setting 'Active' caused by Quantum API changes. Other misc fixes * Bug #835952: pep8 failures do not cause the tests to fail * Start domids at 1, not 0, to avoid any confusion with dom0 * use 'uuid' field in networks table rather than 'bridge'. Specify project\_id when creating instance in unit test * Bug #835964: pep8 violations in IPv6 code * In the XenAPI simulator, set VM.domid, when creating the instance initially, and when starting the VM * Bug #835952: pep8 failures do not cause the tests to fail * Bug #835964: pep8 violations in IPv6 code
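The assertIn/assertNotIn entries above backport two Python 2.7 assertions so the suite still runs on python 2.6 without pulling in unittest2; a minimal sketch of such a shim (the extra \*args/\*\*kwargs carry the optional msg argument through)::

    import unittest

    class TestCase(unittest.TestCase):
        # python 2.7's assertIn/assertNotIn, reimplemented on top of
        # assertTrue so python 2.6 can run the same tests
        def assertIn(self, a, b, *args, **kwargs):
            self.assertTrue(a in b, *args, **kwargs)

        def assertNotIn(self, a, b, *args, **kwargs):
            self.assertTrue(a not in b, *args, **kwargs)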
Virtual Storage Array (VSA) feature. - new Virtual Storage Array (VSA) objects / OS API extensions / APIs / CLIs - new schedulers for selecting nodes with particular volume capabilities - new special volume driver - report volume capabilities - some fixes for volume types * fix FALGS typo * changes a few double quotes to be single, as the rest in the vicinity are * Default rabbit max\_retries to forever. Modify carrot code to handle retry backoffs and obey max\_retries = forever. Fix some kombu issues from cut-n-paste. Service should make sure to close the RPC connection * Updated VersionsXMLSerializer and corresponding tests to use lxml * v1.0 of server create injects first user's keypair * add tests to verify NotFound exceptions are wrapped with the proper ids * use db layer for aggregation * merged trunk * flag for kombu connection backoff on retries * more fixes * more work done to restore original rpc interfaces * merge changes from brad due to recent quantum API changes * Minor changes based on recent quantum changes * start of kombu implementation, keeping the same RPC interfaces * double quotes to single * changed format string in nova-manage * removed self.test ip and \_setup\_networking from libvirt * updated libvirt test * merge trunk * stubbed some stuff in test\_libvirt * removed create\_volumes, added log & doc comment about experimental code * reverted CA files * couple of pep8s * Tiny tweaks to the migration script * updated fake values * updated fake values * Merged trunk and fixed conflicts * updated fake values * updated fake values * forgot ) * update libvirt tests * Update compute API and manager so that the image\_ref is set before spawning the rebuilt instance. Fixes issue where rebuild didn't actually change the image\_id * added debug prints for scheduler * update libvirt * updated instance type fake model * added vcpus to instance flavor test model * added memory\_mb to instance flavor test model * forgot test print statements * misplaced comma.. * Update compute API and manager so that the image\_ref is set before spawning the rebuilt instance. Fixes issue where rebuild didn't actually change the image\_id * Add brad to Authors file * replace accidental deletion in nova-manage * rearrange imports * fix for quantum api changes, change nova-manage to have quantum\_list command * merge brad's fixes * add priority for static networks * driver: added vsa\_id parameter for SN call * merged with rev.1499 * cosmetic cleanup * Updated server and image XML serializers to take advantage of the addresses and metadata serializers * VSA code redesign. Drive types completely replaced by Volume types * merged trunk * Just a couple of small changes I needed to get the migrations working with SQLAlchemy 0.7.x on Fedora 16 * Minor fixes * check log file's mode prior to calling chmod * The fix for run\_iscsiadm in rev 1489 changed the call to use a tuple because values were being passed as tuples. Unfortunately a few calls to the method were still passing strings * Add a set of generic tests for the virt drivers.
Update a bit of documentation to match reality * updated LimitsXMLSerializer to use etree and supply the xml declaration * merge underlying fix for testing * merged trunk * updated additional limits test * pep8 * pass all commands to run\_iscsiadm as a tuple * altered fake network model * Updated limits serialization tests to use etree and added limits schema * Test fixup after last review feedback commit * Fix glance image authorization check now that glance can do authorization checks on its own; use correct image service when looking for ramdisk, etc.; fix a couple of PEP8 errors * forgot a return * review feedback * Fixed integrated.test\_xml to be more robust * typo * fixed a couple of syntax errors * Add bug reference * updated tests * updated libvirt tests to use fake\_network\_info * Bumped migration number * Merged trunk * Review feedback * pep8 * DRYed up code by moving \_to\_xml into XMLDictSerializer * updated addresses serializer to use etree instead of minidom * Added addresses schema * updated addresses xml serialization tests to use etree instead of minidom * Updated ServerXMLSerializer to use etree instead of minidom * added unit tests to instance\_types for rainy day paths * Reverted two mistakes when looking over full diff * Updated MetadataXMLSerializer to use etree instead of minidom * Added: - volume metadata - volume types - volume types extra\_specs * Added schemas. Updated metadata tests to use etree instead of minidom * Servers with metadata will now boot on xenserver with flat\_injected==False * moved import up * Verify resize needs to be set * changing comment * fixing bug * merged trunk * Updated ImagesXMLSerializer to use etree instead of minidom * Set error state when migration prep fails * Removed invalid test * Removed RESIZE-CONFIRM hack * Set state to RESIZING during resizing * Merged trunk * Another attempt at fixing hanging test * Once a network is associated with a project, I can't delete this network with 'nova-manage network delete'. As you know, I can delete the network by scrubbing the project with 'nova-manage project scrub', but that is too heavy-handed. The cause of this problem is that there is no command to modify network attributes * Update paste config so that EC2 admin API defaults to noauth * merged with volume types (based on rev.1490). no code rework yet * merged with volume\_types. no code refactoring yet * merged with nova 1490 * added new tables to list of DBs in migration.py * removes french spellings to satisfy american developers * added virtio flag; associate address for VSA; cosmetic changes. Prior to volume\_types merge * stub\_instance fix from merge conflict * moved import to the top * fixing inappropriate rubyism in test code * Added fix for parallel build test * Fixed silly ordering issue which was causing tons of test failures * merged trunk * change snapshot msg too * forgot to add new extension to test\_extensions * Add me to Authors * added Openstack APIs for volume types & extradata * Add comments for associate/dissociate logic * Updated ImageXMLSerialization tests to use etree instead of minidom. Fixed incorrect server entity ids in tests * Merged from trunk * Add names to placeholders of formatting * The notifiers API was changed to take a list of notifiers.
Some people might want to use more than one notifier so hopefully this will be accepted into trunk * use dict.get for user\_id, project\_id, and display\_description in servers view as suggested by ed leaf, so that not all tests require these fields * Updated flavors xml serialization to use lxml instead of minidom * merge trunk, fix tests * fix more tests * Removed unused imports * Updated FlavorsXMLSerialization tests to use etree and validation instead of minidom * Merged from trunk * split test\_modify() into specific unit tests * Added DELETED status to OSAPI just in case * Fixes iscsiadm commands to run properly * Fixed issue where we were setting the state to DELETED before it's actually deleted * merged with rev.1488 * Merged trunk and fixed conflicts * added volume type search by extra\_spec * Fix for trying rebuilds when instance is not active * Fixed rebuild naming issue and reverted other fix which didn't fix anything * Attempt to fix issue when deleting an instance when it's still in BUILD * Fix default hostname generator so that it won't use underscores, and use minus signs instead * merged with 1487 * pep8 compliant * Merged from trunk * - rebuilds are functional again - OSAPI v1.1 rebuild will accept adminPass or generate a new one, returning it in a server entity - OSAPI v1.0 will generate a new password, but it doesn't communicate it back to the user * Fix flag override in unit test * merged with rev.1485 * add rainy day test for to\_global; fixed to\_global to catch the correct error from incorrect mac addresses * Let's be more elegant * similar to lp828614: add rainy day test and fix exception catch to use AddrFormatError * check log file mode prior to chmod * added unit tests for version.py * Merged trunk * Fix for migrations * Conversion to SQLAlchemy-style * dict formatting * Commit without test data in migration * Commit with test data in migration * Do not require --bridge\_interface for FlatDHCPManager * Fix quotas migration failure * Fix flavorid migration failure * fixed indentation * adding xml serialization and handling instance not found * removing extraneous imports * pep8 * Thou shalt not use underscores in hostnames * Catch exception for instances that aren't there * pep8 fixes * Couple of fixes to the review feedback changes * Launchpad automatic translations update * Address code review feedback from Rick and Matt * removing print statement * added volume metadata APIs (OS & volume layers), search volume by metadata & other * Update paste config so that EC2 admin API defaults to noauth * cleanup * updating tests * fix iscsiadm command * Fix pep8 * Merged from trunk * added volume\_types APIs * Fix not found exceptions to properly use ec2\_ips for not found * Stub out the DB in unit test. Fix 'nova-manage network modify' to use db.network\_update() * rebuilds are functional again * Adds a use\_deprecated\_auth flag to make sure creds generated using nova-manage commands will work with noauth * Merged from upstream * Fixed some pep8 and pylint issues * Forgot to set the flag for the test
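On the ipv6 'rainy day' entries above: the fix is to trap netaddr's specific AddrFormatError instead of a broader exception when a caller hands in a malformed MAC. A minimal sketch (the function name and message are illustrative)::

    import netaddr

    def parse_mac(mac):
        # netaddr raises AddrFormatError for a malformed MAC, so catch
        # exactly that rather than a blanket except
        try:
            return netaddr.EUI(mac)
        except netaddr.AddrFormatError:
            raise TypeError('Bad mac for to_global_ipv6: %s' % mac)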
I added a notifications decorator for each API call using monkey\_patching. By this merge, users can get API call notifications from any module * Fixes bug that causes 400 status code when an instance wasn't attached to a network * fix for rc generation using noauth * Fixed doc string * Merged from upstream * Switched list\_notifier to log an exception each time notify is called, for each notification driver that failed to import * updating tests * merging trunk * Fixed some docstrings. Added default publisher\_id flag * Removed blank line * Merged with trunk * Fixed typo and docstring and example class name * Updated migration number * Move use\_ipv6 into flags. It's used in multiple places (network manager and the OSAPI) and should be defined at the top level * Merged trunk * PEP8 fixes * 'use the ipv6' -- 'use ipv6' * Move use\_ipv6 into flags. It's used in multiple places (network manager and the OSAPI) and should be defined at the top level * Refresh translations * This branch does the final tear out of AuthManager from the main code. The NoAuth middlewares (active by default) allow a user to specify any user and project id through headers (os\_api) or access key (ec2\_api) * Implements first-pass of config-drive that adds a vfat format drive to a vm when config\_drive is True (or an image id) * Launchpad automatic translations update * pulling all qmanager changes into a branch based on trunk, as they were previously stacked on top of melange * Moved migration and fixed tests from upstream * Merged trunk * Added the fixes suggested by Eric Windisch from cloudscaling * removing unnecessary thing * merge trunk, resolve conflicts, fix tests * unindented per review, added a note about auth v2 * Our goal is to add an optional parameter to the Create server OS 1.0 and 1.1 API to achieve the following objectives: * fixing exception logging * Fixes bug 831627 where nova-manage does not exit when given a non-existent network address * Move documentation from nova.virt.fake into nova.virt.driver * initial cut on volume type APIs * fix pep8 issue * Change parameters of 'nova-manage network modify'. Move common test code into a private method * Merged from trunk, resolved conflicts and fixed broken unit tests due to changes in the extensions which now include ProjectMapper * xml deserialization, and test fixes * syntax * update test\_network test\_get\_instance\_nw\_info() * remove extra spaces * Fixed conflict with branch * merged trunk * The FixedIpCommandsTestCase in test\_nova\_manage previously accessed the database. This branch stubs out the database for these tests, lowering their run time from 104 secs -> .02 secs total * some readability fixes per ja feedback * fix comment * Update a few doc strings. Address a few pep8 issues. Add nova.tests.utils which provides a couple of handy methods for testing stuff * Make snapshot raise InstanceNotRunning when the instance isn't running * change NoAuth to actually use a tenant and user * Added Test Code, doc string, and fixed pip-requires * Merged trunk * Ensure that reserve and unreserve exit when an address is not found * Simple usage extension for nova.
Uses db to calculate tenant\_usage for specified time periods * Stubbed out the database in order to improve tests * logging as exception rather than error * Merged from upstream * Changed list\_notifier to call sys.exit if a notification driver could not be found * merged trunk * implemented tenant ids to be included in request uris * Add a generic set of tests for hypervisor drivers * Upstream merge * Added ability to detect import errors in list\_notifier if one or more drivers could not be loaded * Fix pep8 * delete debug code * Fixes for a number of tests * Use 'vm\_state' instead of 'state' in instance filters query * Merged with Dan to fix some EC2 cases * Add 'nova-manage network modify' command * Fixes/updates to make test\_cloud pass * Fix scheduler and integrated tests * Update migration number * Merged with Dan * Merged task\_state -> task\_states and fixed test\_servers test * Update virt/fake to correct power state issue * fix test\_servers tests * update test\_security\_group tests that have been added * Merged trunk * Renamed task\_state to task\_states * Ec2 API updates * merge with trunk * Fixing merge conflicts * Launchpad automatic translations update * Adds accessIPv4 and accessIPv6 to servers requests and responses as per the current spec * adding import * Fixes utils.to\_primitive (again) to handle modules, builtins and whatever other crap might be hiding in an object * fixing bug lp:830817 * added test for bad project\_id ... although it may not be used * added exception catch and test for bad project\_id * added exception catch for bad prefix and matching test * added exception catch and test for bad prefix * comment strings * added unit tests for versions.py * Added OS APIs to associate/disassociate security groups to/from instances * add/remove security groups to/from the servers as server actions * lp:828610 * removed leftover netaddr import * added rainy day test for ipv6 tests. fixed ipv6.to\_global to trap correct exception * Merged from trunk * pep8 * improve test coverage for instance types / flavors * Launchpad automatic translations update * Assorted fixes to os-floating-ips to make it play nicely with an in-progress novaclient implementation, as well as some changes to make it more consistent with other os rest apis. Changes include: * finished fake network info, removed testing shims * updated a maths * updated a maths * Merged trunk * Lots of modifications surrounding the OSAPI to remove any mention of dealing with power states and exclusively using vm\_states and task\_state modules. Currently there are still a number of tests failing, but this is a stopping place for today * who cares * added return * Merged from trunk and fixed review comments * fixed formatting string * typo * typo * typo * typo * typo * typo * added fake network info * Fixed review comments * Fixed typo * better handle malformed input, and add associated tests * Fixed typo * initial commit * Fixed NoneType returned bug * merged trunk * Updated accessIPv4 and accessIPv6 to always be in a servers response * Fixed mistake on merge * tweak to comment * Merged with trunk * a few tweaks - remove unused member functions, add comment
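The utils.to\_primitive entries describe degrading gracefully when a notification payload contains modules, builtins, or other objects that won't JSON-encode; a simplified sketch of that approach (the depth limit and fallbacks are illustrative, not nova's exact rules)::

    import inspect

    def to_primitive(value, depth=0):
        # reduce an arbitrary object graph to JSON-expressible types
        if depth > 3:
            return '?'
        if isinstance(value, (bool, int, float, str, type(None))):
            return value
        if isinstance(value, dict):
            return dict((k, to_primitive(v, depth + 1))
                        for k, v in value.items())
        if isinstance(value, (list, tuple)):
            return [to_primitive(v, depth + 1) for v in value]
        if inspect.ismodule(value) or inspect.isbuiltin(value):
            # modules and builtins carry no useful state; stringify them
            return str(value)
        if hasattr(value, '__dict__'):
            return to_primitive(vars(value), depth + 1)
        return str(value)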
incorporate feedback from brian waldon and brian lamar. Move associate/disassociate to server actions * merge from trunk * pep8 * Finished changing ServerXMLSerializationTest to use XML validation and lxml * Added monkey patching notification code function * Updated test\_show in ServerXMLSerializationTest to use XML validation * vm\_state --> vm\_states * Next round of prep for keystone integration * merge from trunk * Removes the incorrect hard-coded filter path * Revert irrelevant changes that accidentally crept into this patch :( * add tenant\_id to api. without tenant\_id, admins can't tell which servers belong to which tenants when retrieving lists * Merged from trunk * Fixes primitive with builtins, modules, etc * fix test\_virtual interfaces for tenant\_id stuff * fix test\_rescue tests for tenant\_id changes * Fix unit test for the change of 'nova-manage network list' format * Add copyright notices * merged trunk * Define FLAGS.default\_local\_format. By default it's None, to match current expected \_create\_local * Fix config\_drive migration, per Matt Dietz * updated migration number * merge with trunk * Bump migration number * pep8 * Start improving documentation * Added uuid column in virtual\_interfaces table, and an OpenStack extension API for virtual interfaces to expose these IDs. Also set this UUID as one of the external IDs in the OVS vif driver * Move documentation from nova.virt.fake to nova.virt.driver * add key\_name/data support to server stub * add user\_id and description. without user\_id, there is no way for a tenant to tell which user created the server. description should be added for ec2 parity * merge * Bugfix for lp 828429. It's still not clear to me exactly how this code path is actually invoked when nova is used, so I'm looking for input on whether we should be adding a test case for this, removing the code as unused, etc. Thanks * remove security groups, improve exception handling, add tests * Merged trunk * merged trunk * Currently, rescue/unrescue is only available over the admin API. Non-admin tenants also need to be able to access this functionality. This patch adds rescue functionality over an API extension * Makes all of the binary services launch using the same strategy.  \* Removes helper methods from utils for loading flags and logging  \* Changes service.serve to use Launcher  \* Changes service.wait to actually wait for all the services to exit  \* Changes nova-api to explicitly load flags and logging and use service.serve \* Fixes the annoying IOError when /etc/nova/nova.conf doesn't exist * tests pass * Fixes issue where ServersXMLSerializer was missing a method for update actions * follow same pattern as userdata (not metadata approach) * rename the test method * Updated docs for the recent scheduler class changes * Passes empty string instead of None to MySQLdb driver if the DB password isn't set * merged trunk * added volume metadata. Fixed test\_volume\_types\_extra\_specs * declare the use\_forwarded\_for flag * merge trunk * Fixes lp828207 * Added unit test * allow specification of key pair/security group info via metadata * Fixed bug in which DescribeInstances was returning deleted instances. Added tests for pertinent api methods * Accept binary user\_data in radix-64 format when you launch a new server using OSAPI. This user\_data would be stored along with the other server properties in the database.
Once the VM instance boots you can query for the user-data to do any custom installation of applications/servers or do some specific job like setting up a networking route table * added unittests for volume\_extra\_data * Removed extra parameter from the call to \_provision\_resource\_locally() * resolve conflicts after upstream merge * Change the call name * Cleanup the '\_base' directory in libvirt tests * Oops * Review feedback * Added 'update' method to ServersXMLSerializer * Added more unit testcases for userdata functionality * Remove instances.admin\_pass column * merged trunk * Merged with trunk * typo * updated PUT to servers/id to handle accessIPv4 and accessIPv6 * DB password should be an empty string for MySQLdb * first cut on types & extra-data (only DB work, no tests) * merge from trunk * Better docstring for \_unrescue() * Review feedback * Need to pass the action * Updated the distributed scheduler docs with the latest changes to the classes * Syntax error * Moved compute calls to their own handler * Remove old comment * Don't send 'injected\_files' and 'admin\_pass' to db.update * fix docstrings in new api bins * one more * fix typo * remove signal handling and clean up service.serve * add separate api binaries * more cleanup of binaries per review * Changed the filter specified in \_ask\_scheduler\_to\_create\_instance() to None, since the value isn't used when creating an instance * Minor housecleaning * Fix to return 413 for over limit exceptions with instances, metadata and personality * Refactored a little and updated unit test * minor cleanup * dhcpbridge: add better error if NETWORK\_ID is not set, convert locals() to static dict * Added the fix for the missing parameter for the call to create\_db\_entry\_for\_new\_instance() * Updated a number of items to pave the way for new states * Corrected the hardcoded filter path. Also simplified the filter matching code in host\_filter.py * Added rescue mode extension * Fixed issue where accessIP was added in non-detail responses * Updated ServersXMLSerializer to allow accessIPv4 and accessIPv6 in XML responses * Merged trunk * Added accessIPv4 and accessIPv6 to servers view builder. Updated compute api to handle accessIPv4 and 6 * Fixed several logical errors in the scheduling process. Renamed the 'ZoneAwareScheduler' to 'AbstractScheduler', since the zone-specific designation is no longer relevant. Created a BaseScheduler class that has basic filter\_hosts() and weigh\_hosts() capabilities. Moved the filters out of one large file and into a 'filters' subdirectory of nova/scheduler * Merged trunk * Adds the enabled status of a host when XenServer reports its host's capabilities. This allows the scheduler to ignore hosts whose enabled is False when considering where to place a new instance * merge trunk and fix unit test errors * in dhcpbridge, only grab network id from env if needed * bug #828429: remove references to interface in nova-dhcpbridge * pep8 * remove extra reference in pipelib * clean up fake auth from server actions test * fix integration tests * make admin context the default, clean up pipelib * merged trunk * Merged with trunk and fixed broken testcases * merged with nova-1450 * nova-manage VSA print & forced update\_cap changes; fixed bug with report capabilities; added IP address to VSA APIs; added instances to APIs * Make all services use the same launching strategy * Updated compute manager/API to use vm/task states.
Updated vm/task states to cover a few more cases I encountered * Updated server create XML deserializer to account for accessIPv4 and accessIPv6 * Added the host 'enabled' status to the host\_data returned by the plugin * Added accessip to models pep8 * Added migration for accessIPv4 and accessIPv6 * Fixed broken unit testcases * Initial instance states migration * pep8 fix * fix some naming inconsistencies, make associate/disassociate PUTs * Add NetworkCommandsTestCase into unit test of nova-manage * very minor cleanup * Undo an unnecessary change * Merged trunk * Pep8 fixes * Split set state into vm, task, and power state functions * Add modules for task and vm states * Updated tests to correctly use the tenant id * DB object was being cast to dict() in API code. This did not work as intended and logic has been updated to reflect a more accurate way of getting information out of DB objects * merge from trunk * Cleaned up the extension metadata API data * Updated get\_updated time * Cleaned up the file * Fixed vif test to match the JSON key change * Added XML support and changed JSON output keys * Added virtual interfaces API test * Removed serverId from the response * Merged trunk * Merged Dan's branch to add VIF uuid to VIF drivers for Quantum * Removed a change from faults.py that was not required * Changed return code to 413 for metadata, personality and instance quota issues * Append the project\_id to the SERVER-MANAGEMENT-URL header for v1.1 requests. Also, ensure that the project\_id is correctly parsed from the request * add new vif uuid for OVS vifplug for libvirt + xenserver * Remove instances.admin\_pass column * merge trunk * all tests passing * fix unit tests * Resolved conflicts and merged with trunk * Added uuid for networks and made changes to the Create server API format to accept network as uuid instead of id * I'm taking Thierry at his word that I should merge early and merge often :) * Fixes issue with exceptions getting eaten in image/s3.py if there is a failure during register. The variables referenced with locals() were actually out of scope * Allow local\_gb size to be 0. libvirt uses local\_gb as a secondary drive, but XenServer uses it as the root partition's size. Now we support both * Merged trunk * merge from trunk * make project\_id authorization work properly, with test * Use netaddr's subnet features to calculate subnets * make delete more consistent * Review feedback * Updated note * Allow local\_gb to be 0; PEP8 fixes * Updated ViewBuilderV10 as per feedback * \* Added search instance by metadata. \* instance\_get\_all\_by\_filters should filter deleted * This branch implements a nova api extension which allows you to manage and update tenant/project quotas * test improvements per peer review * fixing pep8 issue * defaults are now referred to using a tenant * fixing up the show quotas tests, and extension * making get project quotas require context which has access to the project/tenant * fixing pep8 issues again * fixing spacing issues * cleaning up a few things from pyflakes * fixing pep8 errors * refactoring tests to not use authmanager, and now returning 403 when non admin user tries to update quotas * removed index, and separated out defaults into its own action * merging test\_extensions.py * another trunk merge
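The "Use netaddr's subnet features" entry above replaces hand-rolled CIDR math with the library's subnet iterator; for example, carving a supernet into fixed-size networks::

    from netaddr import IPNetwork

    # split 192.168.0.0/16 into four /24 networks:
    # 192.168.0.0/24, 192.168.1.0/24, 192.168.2.0/24, 192.168.3.0/24
    subnets = list(IPNetwork('192.168.0.0/16').subnet(24, count=4))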
another trunk merge... a new change made it into nova before the code was merged * Cleanup the '\_base' directory in libvirt tests * Small bug fix...don't cast DB objects to dicts * merge from trunk * Updated the EC2 metadata controller so that it returns the correct value for instance-type metadata * Fix test\_metadata tests * merge the trunk * Merged with upstream * Added list\_notifier, a driver for the notifier api which calls a list of other drivers * merge with trunk * Refactored the HostFilterScheduler and LeastCostScheduler classes so that they can be combined into a single class that can do both host filtering and host weighting, allowing subclasses to override those processes as needed. Also renamed the ZoneAwareScheduler to AbstractScheduler, for two reasons: one, the 'zone-aware' designation was necessary when the zone code was being developed; now that it is part of nova, it is not an important distinction. Second, the 'Abstract' part clearly indicates that this is a class that is not designed to be used directly, but rather as the basis for specific scheduler subclasses * cosmetic change in test\_extensions. Avoids constant merge conflicts between proposals with new extensions * Validate the size of VHD files in OVF containers * Include vif UUID in the network info dictionary * Added uuid to allocate\_mac\_address * Fixed the naming of the extension * redux of floating ip api * Merged trunk * Merged trunk * log the full exception so we don't lose traceback through eventlet * fix error logging in s3.py * pep8 cleanup * Merged trunk * Removed newly added userdatarequesthandler for OS API, there is no need to add this handler since the existing Ec2 API metadatarequesthandler does the same job * got tests passing with logic changes * pep8 * pep8 * add note * have the tests call create\_networks directly * allow for finding a network that fits the size, also format string correctly * adding sqlalchemy api tests for test\_instance\_get\_all\_by\_filter to ensure it doesn't return deleted instances * added cloud unit test for describe\_instances to ensure it doesn't return deleted instances * return the created networks * pep8 fix * merge trunk * Adding kvm-block-migration feature * i hate these exceptions where it should just return an empty list * fix typo where I forgot a comma * merge trunk, remove \_validate\_cidrs and replace functionality with a double for loop * fix bug in which DescribeInstances in EC2 api was returning deleted instances
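The list\_notifier entries describe a fan-out driver: notify() is forwarded to every configured backend, and a backend that failed to import is logged on each call instead of aborting the rest. A minimal sketch of that shape (flag handling omitted; names are illustrative)::

    import logging

    LOG = logging.getLogger(__name__)
    drivers = []  # would be populated from a list_notifier_drivers flag

    def notify(message):
        # fan one notification out to every driver; a failing driver
        # is logged but does not stop the others
        for driver in drivers:
            try:
                driver.notify(message)
            except Exception:
                LOG.exception('notifier %s failed', driver)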
We don't have source for open-wrt in the source tree, so we shouldn't use the images. Since the images are only there for uploading smoketests, they are now replaced with random images * Make response structure for list floating ips conform with rest of openstack api * put tenant\_id back in places where it was * This branch allows the standard inclusion of a body param which most http clients will send along with a POST request * Libvirt has some autogenerated network info that is breaking ha network * making body default to none * pep8 fix * Adding standard inclusion of a body param which most http clients will send along with a POST request * Fixed merging issue * Merged with trunk * Updated rate limiting tests to use tenants * Corrected names in TODO/FIXME * remove openwrt image * Fix the tests when libvirt actually exists * Merged trunk * Add durable flag for rabbit queues * Fixed merge conflict * merged trunk * Merged trunk * DRYed up constructors * make list response for floating ip match other apis * fix missing 'run\_as\_root' from bad merge * Added ability to boot VM from install ISO. System detects an image of type iso. Image is streamed to a VDI and mounted to the VM. Blank disk allocated to VM based on instance type * Add source-group filtering * added logic to make network creation (IPv4 only) validation a bit smarter: - detects if the cidr is already in use - detects if any existing smaller networks are within the range of requested cidr(s) - detects if splitting a supernet into # of num\_networks && network\_size will fit - detects if requested cidr(s) are within range of already existing supernet (larger cidr) * fix so InvalidPortRange exception shows up in euca2ools instead of UnknownError when euca-authorize is specified w/ invalid port # * Changes requests with an invalid server action to return an HTTP 400 instead of a 501 * Currently OS API doesn't accept an availability zone parameter so there is no way to instruct the scheduler (SimpleScheduler) to launch a VM instance on a specific host of a specified zone * typo fix * Fix v1.1 /servers/ PUT request to match API documentation by returning 200 code and the server data in the body * Allow different schedulers for compute and volume * have NetworkManager generate MAC address and pass it to the driver for plugging. Sets the stage for being able to do duplicate checks on those MACs as well * make sure security groups come back on restart of nova-compute * fix all of the tests * rename project\_net to same\_net * use dhcp server instead of gateway for filter exception * get rid of network\_info hack and pass it everywhere * fix issue introduced in merge * merge trunk, fix conflict from dprince's branch to remove hostname from bin/nova-dhcpbridge * merge in trunk, resolving conflicts with ttx's branch to switch from using sudo to run\_as\_root=True * remerge trunk * Added durable option for nova rabbit queues, added queue delete script for admin/debug purposes * Added add securitygroup to instance and remove securitygroup from instance functionality * Fix ugly little violations before someone says anything * Merged trunk * Updated logging * end of day * Check uncompressed VHD size * reworked test\_extensions code to avoid constant merge conflicts with newly added ext * nova-manage: fixed instance type in vsa creation * Stub out instance\_get as well so we can show the results of the name change * removed VSA/drive\_type code from EC2 cloud.
changed nova-manage not to use cloud APIs * Merged with trunk and fixed broken unit testcases * merged rev1418 and fixed code so that images of less than 1G can be migrated * Created the filters directory in nova/scheduler * removed admincontext middleware * updates from review * merge from trunk * fix merges from trunk * Nuke hostname from nova-dhcpbridge. We don't use it * merge the trunk * need to actually assign the v4 network * Fixes to the OSAPI floating API extension DELETE. Updated to use correct args for self.disassociate (don't sweep exceptions which should cause test cases to fail under the rug). Additionally updated to pass network\_api.release\_floating\_ip the address instead of a dict * Merged trunk * Fixed unit tests * only run if the subnet and cidr exist * only run if the subnet and cidr exist * merge from trunk * make sure network\_size gets set * merge from trunk * don't require ipv4 * forgot the closing paren * use subnet iteration from netaddr for subnet calculation * Fix a typo that causes ami images to launch with a kernel as ramdisk when using xen * Fixing a 500 error when -1 is supplied for flavorRef on server create * rewriting parsing * fix typo that causes ami instances to launch with a kernel as ramdisk * Merged trunk * Allows for a tunable number of SQL connections to be maintained between services and the SQL server using new configuration flags. Only applies when using the MySQLdb dialect in SQLAlchemy * Merged trunk * Fixes pep8 issues in test\_keypairs.py * Merged trunk * start of day * Fixes to the OSAPI floating API extension DELETE. Updated to use correct args for self.disassociate (don't sweep exceptions which should cause test cases to fail under the rug). Additionally updated to pass network\_api.release\_floating\_ip the address instead of a dict * API needs virtual\_interfaces.instance joined when pulling instances from the DB. Updated instance\_get\_all() to match instance\_get\_all\_by\_filters() even though the former is only used by nova-manage now. (The latter is used by the API) * remove extra log statements * join virtual\_interfaces.instance for DB queries for instances. updates instance\_get\_all to match instance\_get\_all\_by\_filters * remove accidentally duplicated flag * merged trunk * add keystone middlewares for ec2 api * Merged with trunk * added userdata entry in the api paste ini * Initial version * Accidentally added inject\_files to merge * Support for management of security groups in OS API as a new extension * Updates to libvirt, write metadata, net, and key to the config drive * prefixed with os- for the newly added extensions * Merged with trunk * Author added * allow scheduling topics to multiple drivers * Check compressed image size and PEP8 cleanup * v1.1 API also requires the server be returned in the body * capabilities fix, run\_as\_root fix * lp824780: fixed typo in update\_service\_capabilities * fix pep8 * spacing fixes * fixed pep8 issue * merge from trunk * fixed v1.0 stuff with X-Auth-Project-Id header, and fixed broken integrated tests * merged with 1416 * fixing id parsing * moved vsa\_id to metadata. Added search by meta * Refactored the scheduler classes without changing functionality. Removed all 'zone-aware' naming references, as these were only useful during the zone development process. Also fixed some PEP8 problems in trunk code * Added search instance by metadata.
get\_all\_by\_filters should filter deleted * got rid of tenant\_id everywhere, got rid of X-Auth-Project-Id header support (not in the spec), and updated tests * Silly fixes * v1.0 and v1.1 API differ for PUT, so split them out. Update tests to match API * Removed postgres, bug in current ubuntu package which won't allow it to work easily. Will add a bug in LP * minor cleanup * Added availability zone support to the Create Server API * Make PUT /servers/ follow the API specs and return a 200 status * More logging * removed extra paren * Logging for SQLAlchemy type * merged trunk * Fixed per HACKING * \* Removes rogue direct usage of subprocess module by proper utils.execute calls \* Adds a run\_as\_root parameter to utils.execute, that prefixes your command with FLAG.root\_helper (which defaults to 'sudo') \* Turns all sudo calls into run\_as\_root=True calls \* Update fakes accordingly \* Replaces usage of "sudo -E" and "addl\_env" parameter into passing environment in the command (allows it to be compatible with alternative sudo\_helpers) \* Additionally, forces close\_fds=True on all utils.execute calls, since it's a more secure default * Remove doublequotes from env variable setting since they are literally passed * Changed bad server actions requests to raise an HTTP 400 * removed typos, end of line chars * Fixed broken unit testcases * Support for postgresql * merge from trunk * tenant\_id -> project\_id * Adding keypair support to the openstack contribute api * elif and FLAG feedback * Removed un-needed log line * Make sure to not use MySQLdb if you don't have it * get last extension-based tests to pass * Allows multiple MySQL connections to be maintained using eventlet's db\_pool * Removed verbose debugging output when capabilities are reported. This was clogging up the logs with kbytes of useless data, preventing actual helpful information from being retrieved easily * Removed verbose debugging output when capabilities are reported * Updated extensions to use the TenantMapper * fix pep8 issues * Fixed metadata PUT routing * These fixes are the result of trolling the pylint violations here * Pass py\_modules=[] to setup to avoid installing run\_tests.py as a top-level module * Add bug reference * Pass py\_modules=[] to setup to avoid installing run\_tests.py as a top-level module * fix servers test issues and add a test * added project\_id for flavors requests links * added project\_id for images requests * merge trunk * fix so that the exception shows up in euca2ools instead of UnknownError * Dropped vsa\_id from instances * import formatting - thx * List security groups project-wise for admin users same as other users * Merged with trunk * merge with nova-1411. fixed * pep8 fix * use correct variable name * adding project\_id to flavor, server, and image links for /servers requests * Merged with trunk * tests pass * merge from trunk * merged with nova-1411 * This branch makes sure to detach fixed ips when their associated floating ip is deallocated from a project/tenant * adding other emails to mailmap * add Keypairs to test\_extensions * adding myself to authors * This adds the servers search capabilities defined in the OS API v1.1 spec.. and more for admins * Be more tolerant of agent failures.
It is often the case there is only a problem with the agent, not with the instance, so don't claim it failed to boot so quickly * Updated the EC2 metadata controller so that it returns the correct value for instance-type metadata * added tests - list doesn't pass due to unicode issues * initial port * merged trunk * Be more tolerant of agent failures. The instance still booted (most likely) so don't treat it like it didn't * Updated extensions to expect tenant ids. Updated extensions tests to use tenant ids * Update the OSAPI v1.1 server 'createImage' and 'createBackup' actions to limit the number of image metadata items based on the configured quota.allowed\_metadata\_items that is set * Fix pep8 error * fixing one pep8 failure * I think this restores the functionality * Adds missing nova/api/openstack/schemas to tarball * Instance metadata now functionally works (completely to spec) through OSAPI * updated v1.1 flavors tests to use tenant id * making usage of 'delete' argument more clear * Fix the two pep8 issues that sneaked in while the test was disabled * Fix remaining two pep8 violations * Updated TenantMapper to handle resources with parent resources * updating tests; fixing create output; review fixes * OSAPI v1.1 POST /servers now returns a 202 rather than a 200 * Include missing nova/api/openstack/schemas * Rename sudo\_helper FLAG into root\_helper * Minor fix to reduce diff * Initial validation for ec2 security group names * Remove old commented line * Command args can be a tuple, convert them to list * Fix usage of sudo -E and addl\_env in dnsmasq/radvd calls, remove addl\_env support, fix fake\_execute allowed kwargs * Use close\_fds by default since it's good for you * Fix ajaxterm's use of shell=True, prevent vmops.py from running its own version of utils.execute * With this branch, boot-from-volume can be marked as completed in some sense. What remains is minor, if anything, and will be addressed as bug fixes * Update the curl command in the \_\_public\_instance\_is\_accessible function of test\_netadmin to return an error code which we can then check for and handle properly. This should allow calling functions to properly retry and timeout if an actual test failure happens * updating more test cases * changing server create response to 202 * Added xml schema validation for extensions resources. Added corresponding xml schemas. Added lxml dep, which is needed for doing xml schema validation * Fixing a bug in nova.utils.novadir() * Adds the ability to read/write to a local xenhost config. No changes to the nova codebase; this will be used only by admin tools that have yet to be created * fixed conditional because jk0 is very picky :) * Fixed typo found in review * removing log lines * added --purge optparse for flavor delete * making server metadata work functionally * cleaning up instance metadata api code * Updated servers tests to use tenant id * Set image progress to 100 if the image is active * Cleaned up merge messes * Merged trunk * cleaned up unneeded line * nova.exception.wrap\_exception will re-raise some exceptions, but in the process of possibly notifying that an exception has occurred, it may clobber the current exception information. nova.utils.to\_primitive in particular (used by the notifier code) will catch and handle an exception clobbering the current exception being handled in wrap\_exception. Eventually when using the bare 'raise', it will attempt to raise None resulting in a completely different and unhelpful exception
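On the wrap\_exception entry just above: the eventual fix is to keep a reference to the exception before notifying and re-raise that reference, since a bare 'raise' re-raises whatever the thread-local exception state holds, and the notifier path may have clobbered it. A simplified sketch (the notifier hook is hypothetical)::

    import functools

    def wrap_exception(f):
        @functools.wraps(f)
        def wrapped(*args, **kwargs):
            try:
                return f(*args, **kwargs)
            except Exception as exc:
                saved = exc  # keep our own reference before notifying
                try:
                    notify_about_exception(exc)  # hypothetical hook
                except Exception:
                    pass  # notification must never mask the original error
                raise saved  # not a bare 'raise'
        return wrapped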
remove obsolete script from setup.py * assert that vmops.revert\_migration is called * Import sys as well * Resolve conflicts and fixed broken unit testcases * This branch adds additional capability to the hosts API extension. The new options allow an admin to reboot or shutdown a host. I also added code to hide this extension if the --allow-admin-api is False, as regular users should have no access to host API calls * adding forgotten import for logging * Adds OS API 1.1 support * Updated test\_images to use tenant ids * Don't do anything with tenant\_id for now * Review fixes * fixed wrong syntax * Assign tenant id in nova.context * another trunk merge * Merged trunk * Merged trunk * Cleaned up some old code added by the last merge * Fixed some typos from the last refactoring * Moved the restriction on host startup to the xenapi layer * Remove nova/tests/network, which was accidentally included in commit * upper() is even better * merged with 1383 * Updated with code changes on LP * Merged trunk * Save exception and re-raise that instead of depending on thread local exception that may have been clobbered by intermediate processing * Adding \_\_init\_\_.py files * Adds ability to disable snapshots in the Openstack API * Sync trunk * Set image progress to 100 if the image is active * Sync trunk * Update the curl command in the \_\_public\_instance\_is\_accessible function of test\_netadmin to return an error code which we can then check for and handle properly. This should allow calling functions to properly retry and timeout if an actual test failure happens * ZoneAwareScheduler classes couldn't build local instances due to an additional argument ('image') being added to compute\_api.create\_db\_entry\_for\_new\_instance() at some point * simplified test cases further, thanks to trunk changes * Added possibility to mark fixed ips as reserved and unreserved * Update the OSAPI v1.1 server 'createImage' and 'createBackup' actions to limit the number of image metadata items based on the configured quota.allowed\_metadata\_items that is set * Pep8 fix * zone\_aware\_scheduler classes couldn't build instances due to a change to compute api's create\_db\_entry\_for\_new\_instance call. now passing image argument down to the scheduler and through to the call. updated an existing test to cover this * Adding check to stub method * moving try/except block, and changing syntax of except statement * Fixes broken image\_convert. The context being passed to glance image service was not a real context * Using decorator for snapshots enabled check * Disable flag for V1 Openstack API * adding logging to exception in delete method * Pass a real context object into image service calls * Adding flag around image-create for v1.0 * Refactored code to reduce lines of code and changed method signature * If ip is deallocated from project, but attached to a fixed ip, it is now detached * Glance Image Service now understands how to use glance client to paginate through images * Allow actions queries by UUID and PEP8 fixes * Fixed localization review comment * Allow actions queries by UUID and PEP8 fixes * Fixed review comments * fixing filters get * fixed per peer review * fixed per peer review * re-enabling sort\_key/sort\_dir and fixing filters line
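On the Glance pagination entry above, the service walks images with the usual marker/limit loop; a rough sketch against a glance-style client (the method name and page size are assumptions)::

    def fetch_all_images(client, page_size=25):
        # yield every image, one page at a time, resuming each request
        # from the id of the last image seen
        marker = None
        while True:
            page = client.get_images_detailed(marker=marker,
                                              limit=page_size)
            if not page:
                return
            for image in page:
                yield image
            marker = page[-1]['id']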
Make sure mapping['dns'] is formatted correctly before injecting via template into images. mapping['dns'] is retrieved from the network manager via info['dns'], which is a list constructed of multiple DNS servers * Add a generic image service test and run it against the fake image service * Implemented @test.skip\_unless and @test.skip\_if functionality in nova/test.py * merged with 1382 * Updates v1.1 servers/id/action requests to comply with the 1.1 spec * fix typo * Moving from assertDictEqual to assertDictMatch * merging trunk * merging trunk * Add exception logging for instance IDs in the \_\_public\_instance\_is\_accessible smoke test function. This should help troubleshoot an intermittent failure * adding --fixes * glance image service pagination * Pass tenant ids through on requests * methods renamed * Add exception logging for instance IDs in the \_\_public\_instance\_is\_accessible smoke test function. This should help troubleshoot an intermittent failure * Removed most direct sudo calls, make them use run\_as\_root=True instead * pep8 violations sneaking into trunk? * pep8 violations sneaking into trunk? * trunk merge * Fixes lp821144 * Make disk\_format and container\_format optional for libvirt's snapshot implementation * pep8 * fixed up zones controller to properly work with 1.1 * Add generic image service tests * Add run\_as\_root parameter to utils.execute, uses new sudo\_helper FLAG to prefix command * Remove spurious direct use of subprocess * Added virtual interfaces REST API extension controller * Trunk contained PEP8 errors. Fixed * Trunk merge * fix mismerge * Added migration to add uuid to virtual interfaces. Added uuid column to models * merged trunk * merged with nova trunk * Launchpad automatic translations update * fixed pep8 issue * utilized functools.wraps * added missing tests * tests and merge with trunk * removed redundant logic * merged trunk * For nova-manage network create cmd, added warning when the size of the subnet(s) being created is larger than FLAG.network\_size, in an attempt to alleviate confusion. For example, currently when 'nova-manage network create foo 192.168.0.0/16', the result is that it creates a 192.168.0.0/24 instead without any indication as to why * Remove instances of the "diaper pattern" * Read response to reset the connection state-machine for the next request/response cycle * Added explanations to exceptions and cleaned up reboot types * fix pep8 issues * fixed bug: when logic searched for next avail cidr it would return cidrs that were out of range of original requested cidr block. added test for it * Adding missing module xmlutil * fixed bug: wasn't detecting smaller subnet conflict properly; added test for it * Properly format mapping['dns'] before handing off to template for injection (Fixes LP Bug #821203) * Read response to reset HTTPConnection state machine * removed unnecessary context from test I had left there from prior * move ensure\_vlan\_bridge,ensure\_bridge,ensure\_vlan to the bridge/vlan specific vif-plugging driver * re-integrated my changes after merging trunk. fixed some pep8 issues. sorting the list of cidrs to create, so that it will create x.x.0.0 with a lower 'id' than x.x.1.0 (as an example). <- was causing libvirtd test to fail * Revert migration now finishes * The OSAPI v1.0 image create POST request should store the instance\_id as a Glance property * There was a recent change to how we should flip FLAGS in tests, but not all tests were fixed. This covers the rest of them. I also added a method to test.UnitTest so that FLAGS.verbose can be set. This removes the need for flags to be imported from a lot of tests
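For the mapping['dns'] entries in this run: since info['dns'] is a list of servers, it has to be flattened to whatever the interfaces template expects before injection. A minimal sketch (a space-separated string is an assumption about the template)::

    def format_dns(info):
        # join the list of DNS servers rather than handing the raw
        # list object to the injection template
        return ' '.join(str(server) for server in info.get('dns', []))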
Bad method call * Forgot the instance\_id parameter in the finish call * Merged in the power action changes * Removed test show() method * Fixed rescue/unrescue since the swap changes landed in trunk. Minor refactoring (renaming callback to \_callback since it's not used here) * Updates to the XenServer glance plugin so that it obtains the set of existing headers and sends them along with the request to PUT a snapshotted image into glance * Added admin-only decorator * This updates nova-ajax-console-proxy to correctly use the new syntax introduced last week by Zed Shaw * Merged trunk * Changed all references to 'power state' to 'power action' as requested by review * Added missing tests for server actions. Updated reboot to verify the reboot type is HARD or SOFT. Fixed case of having an empty flavorref on resize * Added more informative docstring * Added XML serialization for server actions * Removed debugging code * Updated create image server action to respect 1.1 * Fixes lp819397 * Fixed rescue unit tests * Nuke hostname. We don't use it * Split serverXMLDeserializers into v1.0 and v1.1 * another merge * Removed temporary debugging raise * Merged trunk * modify \_setup\_network for flatDHCP as well * Merged trunk * Added xenhost config get/setting * fix syntax error * Fixed rescue and unrescue * remove storing original flags verbosity * remove set\_flags\_verbosity.. it's not needed * Merged trunk * OS v1.1 is now the default in novarc * added NOVA\_VERSION to novarc * remove unused reference to exception object * Add a test for empty dns list in network\_info * Fix comments * uses 2.6.0 novaclient (OS API 1.1 support) * Fix to nova-ajax-console-proxy to use the new syntax * Update the OS API servers metadata resource to match the current v1.1 specification - move /servers//meta to /servers//metadata - add PUT /servers//metadata * fix pep8 issues that are in trunk * test\_host\_filter setUp needs to call its super * fix up new test\_server\_actions.py file for flags verbosity change * merged trunk * fixing typo * Sync with latest tests * The logic for confirming and reverting resizes was flipped. As a result, reverting a resize would end up deleting the source (instead of the destination) instance, and confirming would end up deleting the destination (instead of the source) instance * Found a case where an UnboundLocalError would be raised in xenapi\_conn.py's wait\_for\_task() method. This fixes the problem by moving the definition of the unbound name outside of the conditional * Moves code restarting instances after compute node reboot from libvirt driver to compute manager; makes start\_guests\_on\_host\_boot flag global * Moved server actions tests to their own test file. Updated stubbing and how flags are set to be in line with how they're supposed to be set in tests * merging trunk * add test for spawning a xenapi instance with an empty dns list * Nova uses instance\_type\_id and flavor\_id interchangeably when they are almost always different values. This can often lead to an instance changing instance\_type during migration because the values passed around internally are wrong. This branch changes nova to use instance\_type\_id internally and flavor\_id in the API.
This will hopefully avoid confusion in the future * The OSAPI v1.0 image create POST request should store the instance\_id as a Glance property * Linked to bug * Changed the definition of the 'action' dict to always occur * Updates to the XenServer glance plugin so that it obtains the set of existing headers and sends them along with the request to PUT a snapshotted image into glance * Fixed rescue and unrescue * Added in tests that verify tests are skipped appropriately * Merged trunk * Merged dietz' branch * Update HACKING: - Make imports more explicit - Add some dict/list formatting guidelines - Add some long method signature/call guidelines - Add explanation of i18n * Pep8 cleanup * Defaults \`dns\` to '' if not present, just as we do with the other network info data * Removes extraneous bodies from certain actions in the OSAPI servers controller * Revert should be sent to destination node and confirm should be sent to source node * Conditionals were not actually running the tests when they were supposed to. Renamed example testcases * fix pylint W0102 errors * Remove whitespace from name and description before creating security group * Remove instances of the "diaper pattern" * Fixes lp819397 * Initial version * Load instance\_types in downgrade method too * Fix trailing whitespace (PEP8) * fix test\_cloud FLAGS setting * dist scheduler flag setting fixes * fix scheduler tests that set FLAGS * fix more tests that use FLAGS setting * all subclasses of ComputeDriver should fully implement the interface of the destroy method * align multi-line string * fix test\_s3 FLAGS uses * switch FLAGS.\* = in tests to self.flags(...); remove unused cases of FLAGS from tests; modified test.TestCase's flags() to allow multiple overrides; added missing license to test\_rpc\_amqp.py * follow convention when raising exceptions * pep8 fixes * use an existing exception * use correct exception name * fix duplicate function name * fix undefined variable error * fix potential runtime exception * remove unused imports * remove bit-rotted code * more cleanup of API tests regarding FLAGS * fix use of FLAGS in openstack API servers tests to use the new way * Removes extraneous body argument from server controller methods * Merged trunk * Merged trunk * Default dns to '' if not present * replaced raise Exception with self.fail() * Removed dependency on os.getenv. Test cases now raise Exception if they are not properly skipped * PEP8 issue * whoops, got a little comma crazy * Merged trunk and fixed conflicts to make tests pass * fumigate non-pep8 code * Use flavorid only at the API level and use instance\_type\_id internally * Yet another conflict resolved * forgot to remove comment * updated to work w/ changes after trunk merge, fixing var renaming. the logic which forces default to FLAGS.network\_size if requested cidr was larger, was also applying to requested cidrs smaller than FLAGS.network\_size.
Requested cidrs smaller than FLAGS.network\_size should be ignored and not overridden * merged from trunk * merged from trunk * merge trunk * Launchpad automatic translations update * Resolved pep8 errors * renaming test\_skip\_unless\_env\_foo\_exists() * merging trunk * Removed trailing whitespace that somehow made it into trunk * Merged trunk * Removed duplicate methods created by previous merge * Fixes lp819523 * Fix for bug #798298 * fix for lp816713: In instance creation, when nova-api is passed imageRefs generated by itself, strip the url down to an id so that default glance connection params are used * Added check for --allow-admin-api to the host API extension code * Another unittest * Merged trunk * Add support for 300 Multiple Choice responses when no version identifier is used in the URI (or no version header is present) * Merged trunk * Glance has been updated for integration with keystone. That means that nova needs to forward the user's credentials (the auth token) when it uses the glance API. This patch, combined with a forthcoming patch for nova\_auth\_token.py in keystone, establishes that for nova itself and for xenapi; other hypervisors will need to set up the appropriate hooks for their use of glance * Added changes from mini server * raise correct error * Minor test fixes * fix failing tests * fix pep8 complaints * merge from trunk * Fixed a missing space * Bad merge res * merge the trunk * fix missing method call and add failing test * Removed duplicate xattr from pip-requires * Fixed merge issues * Merged trunk * merged trunk * remove unused parameter * Merged trunk * Merged from lab * fix pylint errors * fix pylint errors * merge from trunk * Moves image creation from POST /images to POST /servers/<id>/action * Fixed several typos * Changed migration to be an admin only method and updated the tests * - Remove Twisted dependency from pip-requires - Remove Twisted patch from tools/install\_venv.py - Remove eventlet patch from tools/install\_venv.py - Remove tools/eventlet-patch - Remove nova/twistd.py - Remove nova/tests/test\_twistd.py - Remove bin/nova-instancemonitor - Remove nova/compute/monitor.py - Add xattr to pip-requires until glance setup.py installs it correctly - Remove references to removed files from docs/translations/code * Fix an error in fetch\_image() * Get instance by UUID instead of id * Merged trunk * Added the powerstate changes to the plugin * pull-up from trunk/fix merge conflict * fixing typo * refactored tests * pull-up from trunk * Removing the xenapi\_image\_service flag in favor of image\_service * cleanup * Merged trunk * abstraction of xml deserialization * fixing method naming problem * removing compute monitor * merge from trunk * code was checking for a key in the sqlalchemy instance and would ignore it if the value is None, but this wasn't working if floating\_ip was a non-sqlalchemy dict obj. Therefore, updated the error checking to work in both cases * While we currently trap JSON encoding exceptions and bail out, for error notification it's more important that \*some\* form of the message gets out. So, we take complex notification payloads and convert them to something we know can be expressed in JSON * Better error handling for resizing * Adds the auth token to nova's RequestContext.
This will allow for delegation, i.e., use of a nova user's credentials when accessing other services such as glance, or perhaps for zones * merged trunk rev1348 * Launchpad automatic translations update * added some tests for network create & moved the ipv6 logic back into the function * merged with nova trunk * Added host shutdown/reboot conditioning * avoid explicit type checking, per brian waldon's comment * Added @test.skip\_unless and @test.skip\_if functionality. Also created nova/tests/test\_skip\_examples.py to show the skip cases usage * fix LinuxBridgeInterfaceDriver * merge trunk, resolve conflict in net/manager.py in favor of vif-plug * initial commit of vif-plugging for network-service interfaces * Merged trunk * pep8 fixes * Controller -> self * Added option for rebooting or shutting down a host * removed redundant logic * merged from trunk * adding a function with logic to make network creation validation a bit smarter: - detects if the cidr is already in use - when specifying a supernet to be split into smaller subnets via num\_networks && network\_size, ensures none of the returned subnets are in use by either a subnet of the same size and range, nor a SMALLER size within the same range. - detects if splitting a supernet into # of num\_networks && network\_size will fit - detects if the supernet/cidr specified is conflicting with a network cidr that currently exists that may be a larger supernet already encompassing the specified cidr. * Carry auth\_token in nova's RequestContext * merge with trunk, resolve conflicts * Revert hasattr() check on 'set\_auth\_token' for clients * it makes the pep8, or else it gets the vim again * merge from trunk * Fixes this issue that I may have introduced * Update compute tests to use new exceptions * Resync to trunk * Remove copy/paste error * Launchpad automatic translations update * Launchpad automatic translations update * Fixed review comments: Put parsing logic of network information in create\_instance\_helper module and refactored unit testcases as per the changed code * pep8 * wow, someone went all crazy with exceptions, why not just return an empty list? * Only call set\_auth\_token() on the glance client if there's one available * Make unit tests pass * merging * only attempt to get a fixed\_ip from a v4 subnet if there is a v4 subnet * FlavorNotFound already existed, no need to create another exception * Created exceptions for accepting in OSAPI, and handled them appropriately * only create fixed\_ips if we have an ipv4 range * Revert to using context; to avoid conflict, we import context module as nova\_context; add context to rescue * You see what happens Danny when you forget to close the parenthesis * Merged with trunk * Merged trunk * allow the manager to try to do the right thing * allow getting by the cidr\_v6 * the netmask is implied by the cidr, so use that to display the v6 subnet * either v4 or v6 is required * merging trunk * pull-up from trunk and conflict resolution * merge trunk * start the switch to just fixed\_range * typo * Round 1 of changes for keystone integration. \* Modified request context to allow it to hold all of the relevant data from the auth component.
\* Pulled out access to AuthManager from as many places as possible \* Massive cleanup of unit tests \* Made the openstack api fakes use fake Authentication by default * require either v4 or v6 * pull-up from trunk * Fix various errors discovered by pylint and pyflakes * fixing underline * removing extra verbiage * merged trunk * This change creates a minimalist API abstraction for the nova/rpc.py code so that it's possible to use other queue mechanisms besides Rabbit and/or AMQP, and even use other drivers for AMQP rather than Rabbit. The change is intended to give the least amount of interference with the rest of the code, fixes several bugs in the tests, and works with the current branch. I also have a small demo driver+server for using 0MQ which I'll submit after this patch is merged * removing dict() comment * adding more on return\_type in docstrings * Fixes issue with OSAPI passing compute API a flavorid instead of an instance identifier. Added tests * made the whole instance handling thing optional * Reorganize the code to satisfy review comments * pull-up from trunk; fix problem obscuring context module with context param; fix conflicts and no-longer-skipped tests * remove unused import * --Stolen from https://code.launchpad.net/~cerberus/nova/lp809909/+merge/68602 * removing 'Defining Methods' paragraph * rewording * Use util.import\_object to import a module * rewording * one last change * upgrades * expanding * merged trunk and fix time call * updating HACKING * Fixing lxml version requirement * Oops, I wasn't actually being compatible with the spec here * bumping novaclient version * Fixes lp:818050 * Updated resize to call compute API with instance\_type identifiers instead of flavor identifiers. Updated tests * fix run\_tests.sh * merge trunk * Fixed changes missed in merge * fix more spacing issues, and removed self link from versions template data * merged trunk * added instance support to to\_primitive and tests * merged trunk and fixed post\_live\_migration\_at\_destination to get nw\_info * Removing unnecessary imports * Added xml schema validation for extensions resources. Added corresponding xml schemas. Added lxml dep, which is needed for doing xml schema validation * remove extra log statement * api/ec2: rename CloudController.\_get\_instance\_mapping into \_format\_instance\_mapping * fixed typo * merge with trunk * fixed pep8 issues and removed unnecessary factory function * returned vsa\_manager, nova-manage arg and print changes * Added the config values to the return of the host\_data method * Adds XML serialization for servers responses that match the current v1.1 spec * Added methods to read/write values to a config file on the XenServer host * fix pep8 errors * minor cleanup * Removed unused Duplicate catch * Fix to\_dict() and elevated() to preserve auth\_token; revert an accidental change from context.get\_admin\_context() to simply context * Fixes bug 816604, which is the problem that the time format in server responses for updated and created is incorrect. This fix just converts the datetime into the correct format * merging trunk * pep8 * moving server backup to /servers/<id>/action instead of POST /images * Simplified test cases * Rewrite ImageType enumeration to be more pythonic * refactoring and make self links correct (not hard coded) * Fix tests for checking pylint errors * Use utils.utcnow.
Use True instead of literal 1 * Some tests for resolved pylint errors * simplify if statement * merge trunk * use wsgi XMLNS/ATOM vars * Updated deserialization of POST /servers in the OSAPI to match the latest v1.1 spec * Removed unused Duplicate catch * pull-up from trunk * Catch DBError for duplicate projects * Catch DBError for duplicate projects * Make network\_info truly optional * trunk infected with non-pep8 code * unicode instead of str() * Add a flag to set the default file mode of logs * merge trunk * make payload json serializable * moved test * Removed v1\_1 from individual tests * merge from trunk * merge to trunk * more commented code removed * some minor cosmetic work. addressed some dead code sections * merged with nova-1336 * prior to nova-1336 merge * remove authman from images/s3.py and replace with flags * fix tests broken in the merge * merged trunk * fix undeclared name error * fix undeclared name error * fix undeclared name error * fix undeclared name errors * remove unused assignment which causes undeclared name error * fix undefined variable errors * fix call to nonexistent method to\_global\_ipv6. Add myself to authors file * Make network\_info truly optional * updates handling of arguments in nova-manage network create. updates a few of the arguments to nova-manage and related help. updates nova-manage to raise proper exceptions * forgot a line * fixed create\_networks ipv6 management * Fail silently * typo * --bridge defaults to br100 but with a deprecation warning and to be removed in d4 * Reverting to original code * use ATOM\_XMLNS everywhere * merge trunk * added unit testcase to increase code coverage * stub out VERSIONS for the tests * put run\_tests.sh back to how it was * Fixed conflict * Fail silently * Merged with trunk and fixed broken unit test cases * Fix the skipped tests in vmwareapi and misc spots. The vmware networking stuff is stubbed out, so the tests can be improved there by fixing the fakes * pep8 issue * refactoring MetadataXMLDeserializer in wsgi/common * move viewbuilder and serializer tests into their own test cases * Fix all of the skipped libvirt tests * fix typo * merged trunk * Fixes typo in attach volume * utilize \_create\_link\_nodes base class function * default the parameter to None, not sure why it was required to begin with * pass None in for nw\_info * added test for accept header of atom+xml on 300 responses to make sure it defaults back to json, and reworked some of the logic to make how this happens clearer * Drop FK before dropping instance\_id column * moved rest of build logic into builder * Drop FK before dropping instance\_id column * Removed FK import * Delete FK before dropping instance\_id column * oops! moved ipv6 block back into the for loop in network manager create\_networks * update everything to use global VERSIONS * merged trunk * change local variable name * updated handling of v6 in network manager create\_networks so it can receive None for v6 args * added ipv6 requirements to nova-manage network create. changed --network to --fixed\_range\_v4 * remove unexpected parameter * fixed xmlns issue * updated the bridge arg requirements based on manager * this change will require that local urls be input with a properly constructed local url: http://localhost/v1.1/images/[id]. Such urls are translated to ids at the api layer.
Previously, any url ending with an int was ok * make atom+xml accept header be ignored on 300 responses in the VersionsRequestDeserializer * Removed superfluous parameter * Use auth\_token to set x-auth-token header in glance requests * Fixed the virt driver base * Some work on testing. Two cases related to lp816713 have some coverage already: using an id as an imageRef (test\_create\_instance\_v1\_1\_local\_href), and using a nova href as a url (test\_create\_instance\_v1\_1) * Remove xenapi\_inject\_image flag * Add a flag to set the default file mode of logs * fixed issue with factory for Versions Resource * Fix context argument in a test; add TODOs * improved the code per peer review * Add context argument in a lot more places and make unit tests work * fix hidden breakage in test * Remove xenapi\_inject\_image flag * removed unused import * pep8 * pep8 * updated nova-manage create network. better help, handling of required args, and exceptions. Also updated FLAG flat\_network\_bridge to default to None * Re-enables and fixes test\_cloud tests that broke from multi\_nic * Fix for boto2 * Re-enables and fixes test\_cloud tests that broke from multi\_nic * add invalid device test and make sure NovaExceptions don't get wrapped * merge from trunk * pep8 * pep8 * updating common metadata xml serializer tests * Cleaned up test\_servers * Moved server/actions tests to test\_server\_actions.py * updating servers metadata resource * pull-up from trunk * Address merge review concerns * Makes security group rules work with the newer version of the ec2 api and correctly supports boto 2.0 * merging parent branch servers-xml-serialization * updating tests * updated serializer tests for multi choice * pep8 cleanup * multi choice XML responses with tests * merged recent trunk * merge with trunk * Cherry-pick of tr3buchet's fix for add\_fixed\_ip\_to\_instance * Resolved conflicts with trunk * fix typo in attach\_volume * fix the last of them * fake plug for vif driver * couple more fixes * cleanup network create * code was checking for key in sqlalchemy instance but if floating\_ip is a non-sqlalchemy dict instance instead, value=None will cause NoneType exception * fix more tests * fix the first round of missing data * fix the skipped tests in vmwareapi, xenapi and quota * Add myself to authors * Implements a simplified messaging abstraction with the least amount of impact to the code base * fix for lp816713: In instance creation, when nova-api is passed imageRefs generated by itself, strip the url down to an id so that default glance connection params are used * cloud tests all passing again * added multi\_choice test just to hit another resource * pep8 fixes * initial working 300 multiple choice stuff * cherry-pick tr3buchet's fix for milestone branch * cleanup * pep8 * pep8 * First pass at converting this stuff--pass context down into vmops. Still need to fix unit tests and actually use auth\_token from the context..
* pep8 and simplify rule refresh logic * pep8 * merging parent branch lp:~rackspace-titan/nova/osapi-create-server * adding xml deserialization for createImage action * remove some logging, remove extra if * compute now appends self.host to the call to add an additional fixed ip to an instance * Update security group rules to properly support new format and boto 2.0 * Updated test stubs to contain the correct data. Updated created and updated in responses to use correct time format * pep8 compliance * VSA volume creation/deletion changes * moved v1.1 image creation from /images to /servers/<id>/action * fixed per peer review * passing host from the compute manager for add\_fixed\_ip\_to\_instance() * adding assert to check for progress attribute * removing extra function * Remove debugging code * cleanup * fixed minor issues * reverting tests to use imageRef, flavorRef * updating imageRef and flavorRef parsing * Updates to the compute API and manager so that rebuild, reboot, snapshots, and password resets work with the most recent versions of novaclient * merging trunk; resolving conflicts * Add OpenStack API support for block\_device\_mapping * queries in the models.Instance context need to reference the table by name (fixed\_ips); however, queries in the models.FloatingIp context alias the tables out properly and return the data as fixed\_ip (which is why you need to reference it by fixed\_ip in that context) * added warning when the size of the subnet(s) being created is larger than FLAG.network\_size, in an attempt to alleviate confusion. For example, currently when 'nova-manage network create foo 192.168.0.0/16', the result is that it creates a 192.168.0.0/24 instead without any indication as to why * xml deserialization works now * merged from trunk * merged trunk * merging trunk * pull-up from trunk * got rid of print * got rid of more xml string comparisons * atom test updates * got rid of some prints * got rid of string comparisons in serializer tests * removing objectstore and image\_service flag checking * Updates /servers requests to follow the v1.1 spec, except for the implementation of uuids replacing ids and access ips, both of which are not yet implemented. Also, does not include serialized xml responses * fixed detail xml and json tests that got broken * updated atom tests * Updated ServerXMLSerializer to utilize the IPXMLSerializer * merged trunk * merge from trunk * fix pep8 issues * fix issue with failing test * merged trunk * I'm sorry for my fail with rebasing. Anyway, the previous branch grew too many other features, so I superseded it. 1. Used optparse for parsing arg string 2. Added decorator for describe method params 3. Added option for assigning network to certain project. 4. Added field to "network list" for showing which project owns network * Moved the VIF network connectivity logic ('ensure\_bridge' and 'ensure\_vlan\_bridge') from the network managers to the virt layer. In addition, VIF driver class is added to allow customized VIF configurations for various types of VIFs and underlying network technologies * merge with trunk, resolve conflicts * fix pep8 * Launchpad automatic translations update * removing rogue print * removing xenapi\_image\_service flag * adding to authors * fixing merge conflict * merge from trunk * initial stuff to get away from string comparisons for XML, and use ElementTree * merged with 1320 * volume name change.
some cleanup * - Updates /images/<id>/meta and /images/<id>/meta/<key> to respect the latest specification - Renames ../meta to ../metadata - Adds PUT on ../metadata to set entire container (controller action is called update\_all) * Adds proper xml serialization for /servers/<id>/ips and /servers/<id>/ips/<network> * some cleanup. VSA flag status changes. returned some files * Pass on auth\_token * Warn user instead of ignoring * Added ensuring filter rules for all VMs * atom and xml\_detail working, with tests * Adds the -c|--coverage flag to run\_tests.sh to generate a local code coverage report * Aesthetic fix * Fix boot from volume failure for network block devices * Bug #796813: vmwareapi does not support distributed vswitch * modified to conform to latest AWS EC2 API spec for authorize & revoke ingress params using the IpPermissions data structure, which nests lists of CIDR blocks (IpRanges) as well as lists of Group data * Fixes faults to use xml serializers based on api version. This fixed bug 814228 * Fixes a typo in rescue instance in ec2 api. This is mnaser's fix, I just added a test to verify the change * Fixes bug 797250 where a create server request with the body '{"name":"server1"}' results in an HTTP 500 instead of HTTP 422 * adding xml serialization for /servers/<id>/ips and /servers/<id>/ips/<network> * add a simple broken test to verify the bug * Fixed old libvirt semantics, added resume\_guests\_state\_on\_host\_boot flag * xml version detail working with tests * adding testing to solidify handling of None in wsgi serialization * Added check to make sure there is a server entity in the create server request * Fixed some typos in log lines * removed prints, got versions detail tests passing, still need to do xml/atom * reverting some wsgi-related changes * merged trunk * removed print lines * This fixes the xml serialization of the /extensions and /extensions/foo resources. Add an ExtensionsXMLSerializer class and corresponding unit tests * added 1.0 detail test, added VersionRequestDeserializer to support Versions actions properly, started 300/multiple choice work * fix for reviews * Fixed bad test. Fixed using wrong variable * Moved the exception handling of unplugging VIF from virt driver to VIF driver. Added better comments. Added OpenStack copyrights to libvirt vifs.py * pep8 + spelling fixes * Floating IP DB tests * Updated Faults controller to choose an xml serializer based on api version found in the request url * removing unnecessary assignments * Hotfix * Some aesthetic refactoring * Fixing PEP8 compliance issues * adding --fixes * fixing typos * add decorator for 'dns' params * merge with trunk, resolve conflicts * pep8 * Fixed logging * Fixed id * Fixed init\_host context name * Removed driver-specific autostart code * fix 'version' command * Add bug reference * Use admin context when fetching instances * Use subscript rather than attribute * Make IP allocation test work again * Adjust and re-enable relevant unit tests * some file attrib changes * some cosmetic changes.
Prior to merge proposal * Added test\_serialize\_extensions to test ExtensionsXMLSerializer.index() * tests: unit tests for describe instance attribute * tests: a unit test for nova.compute.api.API.\_ephemeral\_size() * tests: unit tests for nova.virt.libvirt.connection.\_volume\_in\_mapping() * tests/glance: unit tests for glance serializer * tests: unit tests for nova.virt * tests: unit tests for nova.block\_device * db/api: fix network\_get\_by\_cidr() * image/glance: teach glance block device mapping * tests/test\_cloud:test\_modify\_image: make it pass * nova/tests/test\_compute.py: make test\_compute.test\_update\_block\_device\_mapping happy * test\_metadata: make test\_metadata pass * test\_compute: make test\_compute pass * test\_libvirt: fix up for local\_gb * virt/libvirt: teach libvirt driver swap/ephemeral device * virt/libvirt: teach libvirt driver root device name * compute/api: pass down ephemeral device info * compute/manager, virt: pass down root device name/swap/ephemeral to virt driver * ec2/get\_metadata: teach block device mapping to get\_metadata() * api/ec2: implement describe\_instance\_attribute() * db/api: block\_device\_mapping\_update\_or\_create() * block\_device: introduce helper function to check swap or ephemeral device * ec2utils: factor generic helper function into generic place * Launchpad automatic translations update * Config-Drive happiness, minus smoketest * merged with latest nova-1308 * more unittest changes * Last patch broke libvirt mapping of network info. This fixes it * Fixes an issue with out of order operations in setup\_network for vlan mode in new ha-net code * Merged with 1306 + fix for dns change * update netutils in libvirt to match the 2 dns setup * merge * merge with 1305 * make sure dhcp\_server is available in vlan mode * Adds ability to set DNS entries on network create. Also allows 2 dns servers per network to be specified * pep8-compliant. Prior to merge with 1305 * Reverted volume driver part * pep8 cleanup * remove auth manager from instance helper * docstring update * pass in the right argument * pull out auth manager from db * merge trunk * default to None in the method signature * merged trunk * remove some more stubouts and fakes * clean up fake auth manager in other places * same as: https://code.launchpad.net/~tr3buchet/nova/lp812489/+merge/68448 fixes: https://bugs.launchpad.net/nova/+bug/812489 but in a slightly different context * pep8 * updating images metadata resource * ...and this is me snapping back into reality removing all trace of ipsets. Go me * fixed networks not defined error when creating instances when no networks exist * fix test\_access * This is me being all cocky, thinking I'll make it use ipsets..
fix auth tests * Add i18n for logging, changed create\_bridge/vlan to should\_create\_bridge/vlan, changed unfilter\_instance's keyword param to positional, and added Dan's alternate ID to .mailmap * fix extensions tests * merge trunk * fix all tests * pep8 fixes * Updated the comments for VMWare VIF driver * initial test for v1.1 detail request * Moved restarting instances from libvirt driver to ComputeManager * Added network\_info to unfilter\_instance to avoid exceptions when shutting down instances * Removed unused exception object * Fixed the missing quotes for 802.1Qbh in libvirt template * add decorator for multi host option * Merged Dan's branch * Merged trunk * use new 'create\_vlan' field in XenAPIBridgeDriver * merge with trunk, resolve conflicts * remove IPy * for libvirt OVS driver, do not make device if it exists already * refactor xenapi vif plug to combine plug + get\_vif\_rec, tested and fixed XenAPIBridgeDriver * Correctly add xml namespaces to extensions xml * Added xml serialization for GET => /extensions. Added corresponding tests * merge ryu's branch * remove debugging * fix a whole bunch of tests * start removing references to AuthManager * change context to maintain exact time, store roles, use ids instead of objects and use a uuid for request\_id * Resolved conflict with trunk * Adds an XML serializer for limits and adds tests for the Limits view builder * pep8 * add in the right number of fields * pep8 * updated next-available to use utc time * merge trunk * rename in preparation for trunk merge * only include dns entries if they are not None in the database * Updated the compute API so that has\_finished\_migration uses instance\_uuid. Fixes some regressions with 1295-1296 * only use the flag if it evaluates true * Catch the FixedIpNotFoundForInstance exception when no fixed IP is mapped to instance * Updated time-available to be the correct format. Fixed old tests to respect this * This fixes issues with invalid flavorRefs being passed in returning a 500 instead of a 400, and adds tests to verify that two separate cases work * merge from trunk * Moving lp:~rackspace-titan/nova/extensions-xml-serialization to new branch based off of trunk. To remove dep on another branch * Perform fault wrapping in the openstack WSGI controller. This allows us to just raise webob Exceptions in OS API controllers with the appropriate explanations set. This resolves some inconsistencies with exception raising and returning that would cause HTML output to occur when faults weren't being handled correctly * pep8 and stuff * Some code was recently added to glance to allow the is\_public filter to be overridden. This allows us to get all images and filter properly on the nova side until keystone support is in glance. This fixes the issue with private images and snapshots disappearing from the image list * pep8 * Merged with trunk which includes ha-net changes * Updated the compute API so that has\_finished\_migration uses instance\_uuid.
Fixes some regressions with 1295-1296 * Updating the /images and /images/detail OSAPI v1.1 endpoints to match spec w/ regards to query params * Ensure valid json/xml/atom responses for versions requests * Update OSAPI v1.1 /flavors, /flavors/detail, and /flavors/<id> to return correct xml responses * Renamed the virt driver resize methods to migration for marginally more understandable code * allow 2 dns servers to be specified on network create * allow 2 dns servers to be specified on network create * Fixes lp813006 * Fixes lp808949 - "resize doesn't work with recent novaclient" * minor fix * Some broken tests from my other merge * Fixed import issue * added tests, updated pep8 fixes * Changed test\_live\_migration\_raises\_exception to use mock for compute manager method * fixed another issue with invalid flavor\_id parsing, and added tests * minor cleanup * pep8 issue * cleanup * merge with trunk * Fixed the localization unit test error in the vif driver logging * cleanup tests and fix pep8 issues * removed vif API extension * Fixed Xenapi unit test error of test\_rescue * Slight indentation change * Merged Dan Wendlandt's branch and fixed pep8 errors * Added call to second coverage invocation * Fixed an issue where was invoked before it was defined in the case of a venv * - Add 'fixed\_ipv6' property to VirtualInterface model - Expose ipv6 addresses in each network in OSAPI v1.1 * forgot to add xenapi/vif.py * Perform fault wrapping in the openstack WSGI controller. This allows us to just raise webob Exceptions in OS API controllers with the appropriate explanations set. This resolves some inconsistencies with exception raising and returning that could cause HTML output to occur when an exception was raised * Added LimitsXMLSerializer. Added LimitsViewBuilderV11Test test case * Added create\_vlan/bridge in network unit test * Add OpenStack API support for block\_device\_mapping * Changed the default of VIF driver * Fixed PEP8 issues * Combined bridge and vlan VIF driver to allow better transition for current Nova users * Merged trunk * Merged lp:~danwent/nova/network-refactoring * Adds HA networking (multi\_host) option to networks * Changes based on feedback * Older Windows agents are very picky about the data sent to them. They also require the public key for the password exchange to be in a string format and not an integer * adding flavors xml serialization * added versions list atom test and it passes * Set the status\_int on fault wrapped exceptions. Fixes WSGI logging issues when faults are returned * Fix plus passing tests * remove debug prints * merge ryu's branch * update for ryu's naming changes, fix some bugs. tested with OVSDriver only so far * Fixes bug #807764. Please disregard previous proposal with incorrect bug # * Whoops * Added LP bug num to TODO * Split tests into 2 * Fix email address in Author * Make sure reset\_network() call happens after we've determined the agent is running * pep8 * Merged trunk * Added Dan Wendlandt to Authors, and fixed failing network unit tests * merged trunk * Made all but one test pass for libvirt * Moved back allow\_project\_net\_traffic to libvirt conn * Set the status\_int on fault wrapped exceptions.
Fixes WSGI logging issues when faults are returned * lp812489: better handling of periodic network host setup to prevent exception * add smoketests to verify image listing * default image to private on register * correct broken logic for lxc and uml to avoid adding vnc arguments (LP: #812553) * Stupid merge and fixed broken test * Most of the XenServer plugin files need the execute bit set to run properly. However, they are inconsistent as it is, with one file having the execute bit set, but another having it set when it is not needed * Made the compute unit tests pass * Host fix * Created \_get\_instance\_nw\_info method to clean up duplicate code * initial changes for application/atom+xml for versions * Update Authors file * network api release\_floating\_ip method will now check to see if an instance is associated to it, prior to releasing * merge from lp:~midokura/nova/network-refactoring-l2 * Corrects a bad model lookup in nova-manage * correct indentation * Fixes lp809587 * Fix permissions for plugins * Ya! Apparently sleep helps me fix failing tests * Some older windows agents will crash if the public key for the keyinit command is not a string * added 'update' field to versions * First attempt at vmware API VIF driver integration * Removed unnecessary context parameter * Merged get\_configurations and plug of VIF drivers * Moved ensure\_vlan\_bridge of vmware to VIF driver * Added network\_info parameter to all the appropriate places in virt layers and compute manager * remove xenapi\_net.py from network directory, as this functionality is now moved to virt layer * first cut of xenserver vif-plugging, some minor tweaks to libvirt plugging * Refactor device type checking * Modified alias and minor fixes * Merged with trunk * Reverted to original code; after the network-binding-to-project code is in, integration code for testing the new extension will be added * Fixed broken unit testcases after adding extension and minor code refactoring * Added a new extension instead of directly making changes to OS V1.1 API * have to use string 'none' and add a note * tell glance to not filter out private images * updated links to use proper atom:link per spec * Renamed setup\_vif\_network to plug\_vif * Fixes lp813006 - inconsistent DB API naming * move import network to the top * Merged lp:~danwent/nova/network-refactoring-l2 * merged from trunk * network api release\_floating\_ip method checks if an instance is associated to the floating ip prior to releasing. added test * Added destroy\_vif\_network * Functionality fixed and new test passing * Updates to the compute API and manager so that rebuild, reboot, snapshots, and password resets work with the most recent versions of novaclient * better handling of periodic network host setup * Merged trunk * Removed blank lines * Fix unchecked key reference to mappings['gateway6']. Fixes LP #807764 * add downgrade * correct broken logic for lxc and uml to avoid adding vnc arguments (LP: #812553) * Beginnings of the patch * Fixed equality comparison bug in libvirt XML * Fixed bad parameters to setup\_vif\_networks * Zapped an extra newline * Merged with trunk * Add support for generating local code coverage report * respecting use\_ipv6 flag if set to False * merged trunk * merged trunk * fixed reviewer's comment. 1. ctxt -> context, 2.
erase unnecessary exception message from nova.scheduler.driver * cleanup * merge of ovs L2 branch * missed the vpn kwarg in rpc * fix bad merge * change migration number * merged trunk * This change adds the basic boot-from-volume support to the image service * Fixed the broken tests again * Merging from upstream * Some missed instance\_id casts * pep8 cleanup * adding --fixes * adding fixed\_ipv6 property to VirtualInterface model; exposing ipv6 in api * VSA schedulers reorg * Merged with trunk * fix issues that were breaking vlan mode * fixing bad lookup * Updates to the XenServer agent plugin to fix file injection: * Don't jsonify the inject\_file response. It is already json * localization changes. Removed vsa params from volume cloud API. Alex changes * Added auth info to XML * returncode is an integer * - Fixed the conflict in vmops.py * Check returncode in get\_agent\_features * resolved pep8 issues * merged from trunk * Updated servers to choose XML serializer based on api version * pep8 * updated servers to use ServerXMLSerializer * added 'create' to server XML serializer * added 'detail' to server XML serializer * convert group\_name to string, in case it's a long * nova/api/ec2/cloud.py: Rearranged imports to be alphabetical as per HACKING * pep8'd * Extended test to check for a specific error code and added test coverage for bad chars * Some basic validation for creating ec2 security groups. (LP: #715443) * changed to avoid localization test failure * Initial test case proving we have a bug: ec2 security group name can exceed 255 chars * added index to servers xml serializer * Change \_agent\_has\_method to \_get\_agent\_features. Update the inject files function so that it calls \_get\_agent\_features only once per injected file * pep8 * Moved Metadata Serialization Test * Added ServerXMLSerializer with working 'show' method. Factored out MetadataXMLSerializer from images and servers into common * added missing drive\_types.py * added missing instance\_get\_all\_by\_vsa * merged with 1280 * VSA: first cut. merged with 1279 * Added some unit and integration tests for updating the server name via the openstack api * renamed priv method arg\_to\_dict since it's not just used for revoke. modified to conform to latest AWS EC2 API spec for authorize & revoke ingress params using the IpPermissions data structure, which nests lists of CIDR blocks (IpRanges) as well as lists of Group data * got rid of return\_server\_with\_interfaces and added return\_server\_with\_attributes * Added ServerXMLSerializationTest * take out print statements * Ensures a bookmark link is returned in GET /images. Before, it was only returned in GET /images/detail * One last nit * Tests passing again * put maxDiff in setUp * remove get\_uuid\_from\_href and tests * stop using get\_uuid\_from\_href for now * Updated with some changes from manual testing * Updates to the XenServer agent plugin to fix file injection: * merging trunk * use id in links instead of uuid * pep8 fixes * fix ServersViewBuilderV11Tests * Adds greater configuration flexibility to rate limiting via api-paste.ini.
In particular: * return id and uuid for now * merge with trunk * Adds distributed scheduler and multinic docs to the Developer Reference page * Added more view builder tests * merged will's revisions * Added ViewBuilderV11 tests. Fixed bug with build detail * fix issues with uuid and old tests * - Present ip addresses in their actual networks, not just a static public/private - Floating ip addresses are grouped into the networks with their associated fixed ips - Add addresses attribute to server entities * Update the agent plugin so that it gets 'b64\_contents' from the args dict instead of 'b64\_file' (which isn't what nova sends) * Adding unit and integration tests for updating the server name via the 1.1 api * merge with trunk, resolve conflicts * remove argument help from docstrings + minor fix * Fixes Bug #810149 that had an incomplete regex * Existing Windows agents behave differently than the Unix agents and require some workarounds to operate properly. Fixes are going into the Windows agent to make it behave better, but workarounds are needed for compatibility with existing installed base * Add possibility to call commands without subcommands * fix redundancy * Updated Authors * Fixed remove\_version\_from\_href. Added tests * mistakenly committed this code into my branch, reverting it to original from trunk * Merged with trunk and fixed pep errors * added integrated unit testcases and minor fixes * First pass * corrected catching NoNetworksDefined exception in host setup and getting networks for instance * catching the correct exception * Added ServersTestv1\_1 test case. Changed servers links to use uuid instead of id * pep8 * Updated old tests * add support to write to stdout rather than file if '-' is specified. see bug 810157 * merging trunk * removed self links from flavors * added commands * exposing floating ips * updated image entity for servers requests * Update the agent plugin so that it gets 'b64\_contents' from the args dict instead of 'b64\_file' (which isn't what nova sends) * Use assertRaises instead of try/except--stupid brain-o * Added progress attribute to servers responses * fixing bad merge * pull-up from trunk, while we're at it * Comment on parse\_limits(); expand an exception message; add unit tests; fix a minor discovered bug * adding bookmark to images index * add updated and created to servers detail test, and make it work * removing mox object instantiation from each test; renaming \_param to filter\_name * add self to authors * use 'with' so that close is called on file handle * adding new query parameters * support '-' to indicate stdout in nova-manage project 'environment' and 'zip' * Improvements to nova-manage: 1. nova-manage network list now shows what belongs to what project, and what's the vlan id, simplifying management in case of several networks/projects 2. nova-manage server list [zone] - shows servers.
Useful if you have many servers and want to list them in a particular zone, instead of grep'ing nova-manage service list * Minor fixes * Merged with Trunk * updated to support and check for flavor links in server detail response * merge * beginning server detail spec 1.1 fixup * Augment rate limiting to allow greater flexibility through the api-paste.ini configuration * merge from trunk * added unit testcases for validating the requested networks * Extends the exception.wrap\_exception decorator to optionally send an update to the notification system in the event of a failure * trunk merge * merging trunk * updating testing; simplifying instance-level code * pep8 * adding test; casting instance to dict to prevent sqlalchemy errors * merged branch lp:~rackspace-titan/nova/images-response-formatting * Add multinic doc and distributed scheduler doc to developer guide front page * merged trunk * Don't pop 'vpn' on kwargs inside a loop in RPCAllocateFixedIP.\_allocate\_fixed\_ips (fixes KeyError) * Added Mohammed Naser to Authors file * merge with trunk * fix reviewer's comment * Starting part of multi-nic support in the guest. Adds the remove\_fixed\_ip code, but is incomplete as it needs the API extension that Vek is working on * Don't pop 'vpn' on kwargs inside a loop in RPCAllocateFixedIP.\_allocate\_fixed\_ips (fixes KeyError's) * added unit test cases and minor changes (localization fix and added fixed\_ip validation) * Made sure the network manager accepts kwargs for FlatManager * Fix bug 809316. While attempting to launch a cloudpipe instance via the 'nova-manage vpn run' command, it comes up with an IP from the instances' DHCP pool and not the second IP from the subnet, which breaks the forwarding rules that allow users to access the vpn. This is because the 'allocate\_fixed\_ip' method in VlanManager doesn't receive 'vpn' as an argument from the caller method, so cloudpipe instances are always considered 'common' instances * cleanup * server create deserialization functional and tested * added xml deserialization unit test cases and fixed some pep8 errors * Updated some common.py functions to raise ValueErrors instead of HTTPBadRequests * Renamed 'nova-manage server list' -> 'nova-manage host list' to differentiate physical hosts from VMs * Allowed empty networks, handled RemoteError properly, implemented xml format for networks and fixed broken unit test cases * minor cleanup * Updated ImageXMLSerializer to serialize links in the server entity * Updated images viewbuilder to return links in server entity * updated images tests * merged trunk * pep8 * Updated remove\_version\_from\_href to be more intelligent. Added tests * Fix PEP8 for 809316 bugfix * Fix 809316 bug which prevented cloudpipe from getting a valid IP * fix reviewer's comment * stray debug * pep8 * fixed marshalling problem to cast\_compute.. * fixed all failed unit test cases * This doesn't actually fix anything anymore, as the wsgi\_refactor branch from Waldon took care of the issue.
However, a couple rescue unit tests would have caught this originally, so I'm proposing this to include those * fixes an issue where network host fails to start because a NoNetworksFound exception wasn't being handled correctly * Bad test * unknowingly made these changes, reverting to original * catch raise for networks not found in network host and instance setup * Merged with Trunk * add optional parameter networks to the Create server OS API * Changed broken perms * Tests * Made xen plugins rpm noarch * Set the proper return code for server delete requests * Making the xen plugins rpm to be noarch * merging trunk * Expanding OSAPI wsgi module to allow handling of headers and status codes * Updates some of the extra scripts in contrib and tools to current versions * updating code to implement tests * merging parent wsgi-refactor * allowing controllers to return None * adding headers serializer * pep8 * minor refactoring * minor tweaks * Adds an extension which makes add\_fixed\_ip() available through an OpenStack extension * Comment out these two asserts; Sandy will uncomment in his merge-prop * Fix bug 800759 * merging wsgi-refactor * adding 204 response code * pre trunk merge * Missing Author updated * Allows for ports in serverRef in image create through the openstack api * Adds security groups to metadata server. Also adds some basic tests for metadata code * fix comments * fix conflict * Added vif OS API extension to get started on it * Moved 'setup\_compute\_network' logic into the virt layer * Added myself to authors file * Fixed two typos in rescue API command * flaw in ec2 cloud api \_get\_image method: if doing a search for aki-0000009 when that image name doesn't exist, it strips off aki- and looks for any image\_id 0000009, and if there was an image match that happens to be an ami instead of an aki, it will go ahead and deregister the ami instead. That behavior is unintended, so added logic to ensure that the originally requested image\_id matches the type of image being returned from the database by matching against the container\_format attr * Fixed up an incorrect key being used to check Zones * merged trunk * fix tests * make sure that old networks get the same dhcp ip so we don't break existing deployments * cleaned up on\_set\_network\_host to \_setup\_network and made networks allocate ips dynamically * Make the instance migration calls available via the API * Add a flag to disable ec2 or osapi * Add a flag to disable ec2 or osapi * refactor * easing up content-type restrictions * peer review fix - per vish: 'This method automatically converts unknown formats to ami, which is the same logic used to display unknown images in the ec2 api. This will allow you to properly deregister raw images, etc.' * Updated resize docstring * removing Content-Length requirement * Add docstrings for multinic extension * Add support for remove\_fixed\_ip() * Merged trunk * pull-up from trunk * Added unit tests * First take at migrations * Fixes bug #805604 "Multiprocess nova-api does not handle SIGTERM correctly."
* image/fake: added teardown method * Updated mailmap due to wrong address in commit message * tests/test\_cloud: make a unit test, test\_create\_image, happy * nova/compute/api.py: fixed mismerge * ec2 api \_get\_image method logic flaw that strips the hex16 digit off of the image name, and does a search against the db for it and ignores that it may not be the correct image, such as when doing a search for aki-0000009 when that image name doesn't exist: it strips off aki- and looks for any image\_id 0000009, and if there was an image match that happens to be an ami instead of an aki, it will go ahead and deregister that. That behavior is unintended, so added logic to ensure that the originally requested image\_id matches the type of image being returned from the database by matching against the container\_format attr * sqlalchemy/migrate: resolved version conflict * merge with trunk * pull-up from trunk * unit test suite for the multinic extension * pull-up from trunk * Added server entity to images that only has id * Merging issues * Updated \_create\_link\_nodes to be consistent with other create\_\*\_nodes * Changed name of xml\_string to to\_xml\_string * Merging issues * Temporarily moved create server node functionality into images.py. Temporarily changed image XML tests to expect server entities with only ids * Removed serverRef from some tests and viewbuilder * Comments for bugfix800759 and pep8 * Removed bookmark link from non detailed image viewbuilder * implemented clean-up logic when VM fails to spawn for xenapi back-end * Adds the os-hosts API extension for interacting with hosts while performing maintenance. This differs from the previous merge prop as it uses a RESTful design instead of GET-based actions * Added param to keep current things from breaking until we update all of the xml serializers and view builders to reflect the current spec * Fixes Bug #805083: "libvirtError: internal error cannot determine default video type" when using UML * Dried up images XML serialization * Dried up images XML serialization * stricter zone\_id checking * trunk merge * cleanup * Added image index * pep8 fixes * Comments Incorporated for Bug800759 * Added API and supporting code for rebooting or shutting down XenServer hosts * fixed image create response test * Updated test\_detail * Merged trunk * make server and image metadata optional * Updated the links container for flavors to be compliant with the current spec * pep8 * Renamed function * moved remove\_version to common.py * unit tests * progress and server are optional * merged trunk * Add a socket server responding with an allow-all flash socket policy for all requests from flash on port 843 to nova-vncproxy * pep8 compliance * Pull-up from trunk (post-multi\_nic) * changed calling signature to be (instance\_id, address) * correct test\_show * first round * removed extra comment * Further test update and begin correcting serialization * Removed a typo in libvirt connection.py * updated expected xml in images show test to represent current spec * pep8 fixes * Added VIF driver concept * Added the missing 'self' parameter * after trunk merge * Changed the exception type for invalid requests to webob.exc.HTTPBadRequest * Added net\_attrs argument for ensure\_bridge/vlan methods * Added an L2 network driver for bridge/vlan creation * wrap list comparison in test with set()s * slightly more fleshed out call path * merged trunk * merge code i'd split from instance\_get\_fixed\_addresses\_v6 that's no longer needed to be split * fix metadata test
since fixed\_ip searching now goes thru the filters db api call instead of the get\_by\_fixed\_ip call * clean up compute\_api.get\_all filter name remappings. ditch fixed\_ip one-off code. fixed ec2 api call to this to compensate * clean up OS API servers getting * rename \_check\_servers\_options, add some comments and small cleanup in the db get\_by\_filters call * pep8 fix * convert filter value to a string just in case before running re.compile * add comment for servers\_search\_options list in the OS API Controllers * pep8 fixes * fix ipv6 search test and add test for multiple options at once * test fixes.. one more to go * resolved conflict incorrectly from trunk merge * merged trunk * doc string fix * fix OS API tests * test fixes and typos * typos * cleanup checking of options in the API before calling compute\_api's get\_all() * a lot of major re-work.. still things to finish up * merged trunk * remove debug from failing test * remove faults.Fault wrapper on exceptions * rework OS API checking of search options * merged trunk * missing doc strings for fixed\_ip calls I renamed * clarify a couple comments * test fixes after unknown option string changes * minor fixups * merged trunk * pep8 fixes * test fix for renamed get\_by\_fixed\_ip call * ec2 fixes * added API tests for search options; fixed a couple of bugs the tests caught * allow 'marker' and 'limit' in search options. fix log format error * another typo * merged trunk * missed power\_state import in api; fixed reversed compare in power\_state * more typos * typos * flavor needs to be converted to int from query string value * add image and flavor searching to v1.0 api; fixed missing updates from cut n paste in some doc strings * added searching by 'image', 'flavor', and 'status'; reverted ip/ip6 searching to be admin only * compute's get\_all should accept 'name' not 'display\_name' for searching Instance.display\_name. Removed 'server\_name' searching.. Fixed DB calls for searching to filter results based on context * Refactored OS API code to allow checking of invalid query string parameters and admin api/context to the index/detail calls. v1.0 still ignores unknown parameters, but v1.1 will return 400/BadRequest on unknown options. admin\_api only commands are treated as unknown parameters if FLAGS.enable\_admin\_api is False. If enable\_admin\_api is True, non-admin context requests return 403/Forbidden * clean up checking for exclusive search options; fix a cut n paste error with instance\_get\_all\_by\_name\_regexp * merged trunk * python-novaclient 2.5.8 is required * fix bugs with fixed\_ip returning a 404; instance searching needs to joinload more stuff * added searching by instance name; added unit tests * pep8 fixes * Replace 'like' support with 'regexp' matching done in python. Since 'like' would result in a full table scan anyway, this is a bit more flexible.
Make search options and matching a little more generic. Return 404 when --fixed\_ip doesn't match any instance, instead of a 500 only when the IP isn't in the FixedIps table * start of re-work of compute/api's 'get\_all' to handle more search options * Silence warning in case tests.sqlite doesn't exist * fix libvirt test * update tests * don't set network host for multi\_host networks * add ability to set multi\_host in nova-manage and remove debugging issues * filter the dhcp to only respond to requests from this host * pass in dhcp server address, fix a bunch of bugs * PEP8 passed * Formatting fix * Proper Author section insertion (thx Eldar) * Signal handler cleanup, proper ^C handling * copy paste * make sure to filter out ips associated by host and add some sync for allocating ip to host * fixed zone id check * it is multi\_host not multi\_gateway * First round of changes for ha-flatdhcp * Updated the plugin to return the actual enabled status instead of just 'true' or 'false' * UML doesn't do vnc as well * fixed a bug which prevents suspend/resume after block-migration * Graceful shutdown of nova-api * properly displays addresses in each network, not just public/private; adding addresses attribute to server entities * Graceful shutdown of nova-api * Removing import of nova.test added to nova/\_\_init\_\_.py as problem turned out to be somewhere else (not in nova source code tree) * Fixing weird error while running tests. Fix required patching nova/tests/\_\_init\_\_.py to explicitly import nova.test * Added missing extension file and tests. Also modified the get\_host\_list() docstring to be more accurate about the return value * Silence warning in case tests.sqlite doesn't exist * Fix boot from volume failure for network block devices * Improvements to nova-manage: network list now includes vlan and projectID, added servers list filtered by zone if needed * removed unneeded old commented code * removed more stray debug output * removed debugging output * after trunk merge * Updated unit tests * remove logging statement * Found some additional fixed\_ip entries in the Instance model context that needed to be updated * use url parse instead of manually splitting * Changed fixed\_ip.network to be fixed\_ips.network, which is the correct DB field * Added the GroupId param to any pertinent security\_group methods that support it in the official AWS API * Removes 'import IPy' introduced in recent commit * removing IPy import * trunk merge * Fixed the case where an exception was thrown when trying to get a list of flavors via the api yet there were no flavors to list * fix up tests * tweak * review fixes * completed api changes. still need plugin changes * Update the fixed\_ip\_disassociate\_all\_by\_timeout in nova.db.api so that it supports Postgres. Fixes casting errors on postgres with this function * after trunk merge * Fixes MANIFEST.in so that migrate\_repo/versions/\*.sql files are now included in tarball * Include migrate\_repo/versions/\*.sql in tarball * Ensure auto-delete is false on Topic Queues * refactored the security\_group tests a bit and broke up a few of them into smaller tests * Reverses the self.auto\_delete = True that was added to TopicPublisher in the bugfix for lp804063.
That bugfix should have only added auto\_delete = True to FanoutPublisher to match the previous change to FanoutConsumer * Added 'self.auto\_delete = True' to the two Publisher subclasses that lacked that setting * Added the '--fixes' tag to link to bug * Added self.auto\_delete = True to the Publisher subclasses that did not have that set * added multi-nic support * osapi test\_servers fixed\_ip -> fixed\_ips * updated osapi 1.0 addresses view to work with multiple fixed ips * trunk merge with migration renumbering * Allows subdirectory tests to run even if sqlite database doesn't exist * fix bug 800759 * Child Zone Weight adjustment available when adding Child Zones * trunk merge * blah * merge trunk * merged trunk * Windows instances will often take a few minutes setting up the image on first boot and then reboot. We should be more patient for those systems as well as check if the domid changes so we can send agent requests to the current domid * These changes eliminate the dependency between hostname and ec2-id. As I understand, there was already no such dependency, but we still had confusing names in the code. Also I added more sophisticated generation of the default hostname to give the user the possibility to set a custom one * updated images * updated servers * refactored flavors viewbuilder * fixes lp:803615 * added FlavorRef exception handling on create instance * refactored instance type code * Update the ec2 get\_metadata handler so it works with the most recent version of the compute API get\_all call which now returns a list if there is only a single record * - add metadata container to /images/detail and /images/ responses - update xml serialization to encode image entities properly * merging trunk * PEP8 fix * Adapt flash socket policy branch to new nova/wsgi.py refactoring * clean up * Update the ec2 get\_metadata handler so it works with the most recent version of the compute API get\_all call which now returns a list if there is only a single record * trunk merge * pep8 * pep8 * done and done * Update the fixed\_ip\_disassociate\_all\_by\_timeout in nova.db.api so that it supports Postgres. Fixes casting errors on postgres with this function * phew ... working * compute\_api.get\_all should be able to recurse zones (bug 744217). Also, allow building more than one instance at once with zone\_aware\_scheduler types. Other cleanups with regards to zone aware scheduler.. * Updated v1.1 links in flavors to represent the current spec * fix issue of recurse\_zones not being converted to bool properly add bool\_from\_str util call add test for bool\_from\_str slight rework of min/max\_count check * fixed incorrect assumption that nullable defaults to false * removed port\_id from virtual interfaces and set network\_id to nullable * changes a few instance refs * merged trunk * Rename one use of timeout to expiration to make the purpose clearer * pulled in koelkers test changes * merge with trey * major refactor of the network tests for multi-nic * Merged trunk * Fixes Bug #803563 by changing how nova passes options into glance. Before, if limit or marker were not set, we would pass limit=0 and marker=0 into glance. However, marker is supposed to be an image id. With this change, if limit or marker are not set, they are simply not passed into glance.
Glance is free then to choose the default behavior * Fixed indentation issues Fixed min/max\_count checking issues Fixed a wrong log message when zone aware scheduler finds no suitable hosts * Fixes Bug #803563 by changing how nova passes options into glance. Before, if limit or marker were not set, we would pass limit=0 and marker=0 into glance. However, marker is supposed to be an image id. With this change, if limit or marker are not set, they are simply not passed into glance. Glance is free then to choose the default behavior * Sets 'exclusive=True' on Fanout amqp queues. We create the queues with uuids, so the consumer should have exclusive access and they should get removed when done (service stop). exclusive implies auto\_delete. Fixes lp:803165 * don't pass zero into glance image service if no limit or marker are present * more incorrect list type casting in create\_network * removed the list type cast in create\_network on the NETADDR projects * renumbered migrations again * Make sure test setup is run for subdirectories * merged trunk, fixed the floating\_ip fixed\_ip exception stupidity * trunk merge * "nova-manage vm list" was still referencing the old "image\_id" column, which was renamed to "image\_ref" at revision 1144 * Implement backup with rotation and expose this functionality in the OS API * Allow a port name in the server ref for image create * Fanout queues use unique queue names, so the consumer should have exclusive access. This means that they also get auto deleted when we're done with them, so they're not left around on a service restart. Fixes lp:803165 * pep8 fix * removed extra stubout, switched to isinstance and catching explicit exception * get latest branch * check\_domid\_changes is superfluous right now since it's only used when timeout is used. So simplify code a little bit * updated pip-requires for novaclient * Merged trunk * pip requires * adopt merge * clean up logging for iso SR search * moved to wrap\_exception approach * Fix 'undefined name 'e'' pylint error * change the default to recreate the db but allow -n for faster tests * Fix nova-manage vm list * Adding files for building an rpm for xenserver xenapi plugins * moved migration again & trunk merge * Brought back that encode under condition * Add test for hostname generation * Remove unnecessary (and possibly failing) encoding * Fix for bug 803186 that fixes the ability for nova-api to run from a source checkout * moved to wrap\_exception decorator * Review feedback * Merged trunk * Put possible\_topdir back in nova-api * Use milestone cut * Merged trunk * Let glance handle sorting * merging trunk * Review feedback * This adds system usage notifications using the notifications framework.
These are designed to feed an external billing or similar system that subscribes to the nova feed and does the analysis * Refactored usage generation * pep8 * remove zombie file * remove unnecessary cast to list * merge with trey * OOPS * Whoops * Review feedback * skipping another libvirt test * Fix merge issue in compute unittest * adding unicode support to image metadata * Fix thinko in previous fix :P * change variable names to remove future conflict with sandy's zone-offsets branch * Fix yet more merge-skew * merge with trey * This branch allows LdapDriver to reconnect to the LDAP server if the connection is lost * Fix issues due to renaming of the image\_id attrib * Re-worked some of the WSGI and WSGIService code to make launching WSGI services easier, less error prone, and more testable. Added tests for WSGI server, new WSGI loader, and modified integration tests where needed * Merged trunk * update a test docstring to make it clear we're testing multiple instance builds * log formatting typo pep8 fixes * Prevent test case from ruining other tests. Make it work in earlier python versions * pep8 fix * I accidentally the whole unittest2 * Adds support for "extra specs", additional capability requirements associated with instance types * refactoring to compute from scheduler * remove network to project bind * resync with trunk * Add test for spawn from an ISO * Add fake SR with ISO content type * Revise key used to identify the SR used to store ISO images streamed from Glance * remerged trunk * Fix pep8 nits in audit script * Re-merging code for generating system-usages to get around bzr merge braindeadness * getting started * Added floating IP support in OS API * This makes multiple runs of tests start up much faster because it only runs db migrations if the test db doesn't exist. It also adds the -r/--recreate-db option to run\_tests.sh to delete the tests db so it will be recreated * small formatting change * breaking up into individual tests for security\_groups * Proposing this because it is a critical fix before milestone. Suggestions on testing it are welcome * logging fixes * removed unneeded mac parameter to lease and release fixed ip functions * Made \_issue\_novaclient\_command() behave better. Fixed a bunch of tests * Review feedback * merge with trey * trunk merge, getting fierce. * Merged trunk * Added nova.version to utils.py * - Modified NOTE in vm\_util.py - Changed gettext line to nova default in guest\_tool.py * renaming tests * make sure basic filters are set up on instance restart * typo * changed extension alias to os-floating-ips * missed the bin line * Updating license to ASL 2.0 * update nova.sh * make nova-debug work with new style instances * Changed package name to openstack-xen-plugins per dprince's suggestion. All the files in /etc/xapi.d/plugins must be executable. Added dependency on parted.
Renamed build.sh to build-rpm.sh * remove extra stuff from clean vlans * Clarify help verbiage * making key in images metadata xml serialization test null as well * making image metadata key in xml serialization test unicode * extracting images metadata xml serialization tests into specific class; adding unicode image metadata value test * merged blamar's simpler test * Pulled changes, passed the unit tests * Pulled trunk, merged boot from ISO changes * Removed now un-needed fake\_connection * Use webob to test WSGI app * fixed pep style * review issues fixed * sqlalchemy/migration: resolved version conflict * merge with trunk * Adding files for building an rpm for xenserver xenapi plugins * Upstream merge * merging trunk; adding error handling around image xml serialization * adding xml serialization test of zero images * pep8 * add metadata tests * add fake connection object to wsgi app * add support to list security groups * only create the db if it doesn't exist, add an option -r to run\_tests.py to delete it * Fix for bug #788265. Remove created\_at, updated\_at and deleted\_at from instance\_type dict returned by methods in sqlalchemy API * PEP8 fix * pep8 * Updated \_dict\_with\_extra\_specs docstring * Renamed \_inst\_type\_query\_to\_dict -> \_dict\_with\_extra\_specs * Merged from trunk * Add api methods to delete provider firewall rules * This small change restores single quotes and double quotes as they were before in the filter expression for retrieving the PIF (physical interface) xenapi should use for creating VLAN interfaces * Remove the unnecessary insertion of whitespace. This happens to be enough to make this patch apply on recent versions of XenServer / Xen Cloud Platform * Removes the usage of the IPy module in favor of the netaddr module * - update glance image fixtures with expected checksum attribute - ensure checksum attribute is handled properly in image service * mailmap * mailmap * configure number of attempts to create unique mac address * merged * trunk merged. conflicts resolved * added disassociate method to tests * fixes * tests * PEP8 cleanup * parenthesis issue in the migration * merge * some tests and refactoring * Trunk merge fixes * Merging trunk * implement list test * some tests * fix tests for extensions * Fixed snapshot logic * PEP8 cleanup * Refactored backup rotate * conflict resolved * stub tests * add stubs for floating api os api testing * merge with kirill * associate disassociate untested, first attempt to test * Pep8 fix * Adding tests for backup no rotation, invalid image type * Fixed the default arguments to None instead of an empty list * Fixing PEP8 compliance issues * Trailing whitespace * Adding tests for snapshot no-name and backup no-name * Edited the host filter test case for extra specs * Removed an import * Merged from trunk * Remove extra debug line * Merged with trunk * Add reconnect test * Use simple\_bind\_s instead of bind\_s * Add reconnect on server fail to LDAP driver * ec2/cloud: typo * image/s3: typo * same typo i made before! * on 2nd run through filter\_hosts, we've already accounted for the topic memory needs converted to Bytes from MB * LeastCostScheduler wasn't checking for topic cost functions correctly. Added support so that --least\_cost\_scheduler\_cost\_functions only needs to have method names specified, instead of the full blown version with module and class name.
Still works the old way, too * requested\_mem typo * more typos * typo in least cost scheduler * Unwind last commit, force anyjson to use our serialization methods * debug logging of number of instances to build in scheduler * missed passing in min/max\_count into the create/create\_all\_at\_once calls * Dealing with cases where extra\_specs wasn't defined * pep8 fixes * Renamed from flavor\_extra\_specs to extra\_specs * All tests passing * missed passing an argument to consume\_resources * Committing some broken code in advance of trying a different strategy for specifying args to extensions.ResourceExtensions, using parent * Starting to transition instance type extra specs API to an extension API * Now automatically populates the instance\_type dict with extra\_specs upon being retrieved from the database * pep8 * Created Bootstrapper to handle Nova bootstrapping logic * alter test, alter some debug statements * altered some tests * freakin migration numbering * trunk merge * removing erroneous block, must've been a copy and paste fat finger * specify keyword, or direct\_api proxy method blows up * updated the way vifs/fixed\_ips are deallocated and their relationships, altered lease/release fixed\_ip * Fixed syntax errors * This adds a way to create global firewall blocks that apply to all instances in your nova installation * Accept a full serverRef to OSAPI POST /images (snapshot) * Cast rotation to int * PEP8 cleanup * Fixed filter property and added logging * added tests * Implemented view and added tests * Adding missing import * Fixed issue with zero flavors returning HTTP 500 * Adding dict with single 'meta' key to /images//meta/ GET and PUT * fixing 500 error on v1.0 images xml * Small refactoring around getting params * libvirt test for deleting provider firewall rules * Make firewall rules tests idempotent, move IPy=>netaddr, add delete test * merge from trunk * although security\_group authorize & revoke tests already exist in test\_api, adding some direct ec2 api method tests.
added group\_id param support to the pertinent security group methods * Make sure there are actually rules to test against * Add test for listing provider firewall rules * pep8: remove newline at end of file * Add admin api test case (like cloud test case) with a test for fw rules * Move migration to newer version * an int() was missed being removed from UUID changes when zone rerouting kicks in * fixing 500 on None metadata value * proper xml serialization for images * "nova-manage checks if user is member of proj, prior to adding role for that project" * adding metadata container to /images/detail and /images/ calls * Add xml serialization for all /images//meta and /images//meta/ responses * trunk merge and migration bump * handle errors for listing an instance by IP address * Merged markwash's fixes * Merged list-zone-recurse * str\_GET is a property * Fixed typo * Merged trunk * minor fixups * fixes for recurse\_zones and None instances with compute's get\_all * typo * add support for compute\_api.get\_all() recursing zones for more than just reservation\_id * Change so that the flash socket policy server is using eventlet instead of twisted and is running in the same process as the main vnc proxy * ec2/cloud: address review * compute/api: a unit test for \_update\_{image\_}bdm * ec2/cloud: unit tests for parser/formatter of block device mapping * ec2/cloud: a unit test for \_format\_instance\_bdm() * ec2utils: a unit test for mapping\_prepend\_dev() * ec2: bundle block device mapping * ec2utils: introduce helper function to prepend '/dev/' in mappings * volume/api: a unit test for create\_snapshot\_force() * Add some resource checking for memory available when scheduling Various changes to d-sched to plan for scheduling on different topics, which cleans up some of the resource checking. Re-compute weights when building more than 1 instance, accounting for resources that would be consumed * Returned code to original location * Merged from trunk * run launcher first since it initializes global flags and logging * Now passing unit tests * Two tests passing * Now stubbing nova.db instead of nova.db.api * Bug fixing * Added flavor extra specs controller * Initial unit test (failing) * This catches the InstanceNotFound exception on create, and ignores it. This prevents errors in the compute log, and causes the server to not be built (it should only get InstanceNotFound if the server was deleted right after being created).
This is a temporary fix that should be fixed correctly once no-db-messaging stuff is complete * allocate and release implementation * fixed pep8 issues * merge from trunk * image -> instance in comment * added virtual\_interface\_update method * Fixes issues with displaying exceptions regarding flavors in nova-manage * better debug statement around associating floating ips when multiple fixed\_ips exist * pep8 fixes * merging trunk * added fixed ip filtering by null virtual interface\_id to network get associated fixed ips * fixed ip gets now have floating IPs correctly loaded * reverting non-xml changes * Adding backup rotation * moving image show/update into 'meta' container * Check API request for min\_count/max\_count for number of instances to build * updated libvirt tests network\_info to be correct * fixed error * skipping more ec2 tests * skipping more ec2 tests * skipping more ec2 tests * skipping test\_run\_with\_snapshot * updated test\_cloud to set stub\_network to true * fixed incorrect exception * updating glance image fixtures with checksum attribute; fixing glance image service to use checksum attribute * Round 1 of backup with rotation * merge from trunk * fix some issues with flags and logging * Add a socket server responding with an allow-all flash socket policy for all requests from flash on port 843 to nova-vncproxy * api/ec2: a unit test for create image * api/ec2, boot-from-volume: a unit test for describe instances * unittest: a unit test for ec2 describe image attribute * test\_cloud: a unit test for describe image with block device mapping * ec2utils: a unit test for ec2utils.properties\_root\_device\_name * unittest, image/s3: unit tests for s3 image handler * image/s3: factor out \_s3\_create() for testability * ec2utils: unit tests for case insensitive true/false conversion * ec2utils: add a unit test for dict\_from\_dotted\_str() * test\_api: unit tests for ec2utils.id\_to\_ec2\_{snap, vol}\_id() * api/ec2: make CreateImage pass unit tests * volume/api: introduce create\_snapshot\_force() * api/ec2/image: make block device mapping pass unit tests * db/block\_device\_mapping/api: introduce update\_or\_create * db/migration: resolve version conflict * merge with trunk * ec2 api describe\_security\_groups allow group\_id param, added tests for create/delete security group in test\_cloud; although it also exists in test\_api, this tests the ec2 method directly * pip-requires * pep8 * fixed zone update * Stop trying to set a body for HTTP methods that do not allow it. It renders the unit tests useless (since they're testing a situation that can never arise) and webob 1.0.8 fails if you do this * fixed local db create * omg stop making new migrations.. * trunk merge * merge from trunk * added try except around floating ip get by host in host init * This branch adds support to the xenapi driver for updating the guest agent on creation of a new instance. This ensures that the guest agent is running the latest code before nova starts configuring networking, setting root password or injecting files * renamed migrations again * merge from trunk * if we get InstanceNotFound error on create, ignore (means it has been deleted before we got the create message) * some libvirt multi-nic just to get it to work, from tushar * Removed whitespace * Fixed objectstore test * merge with trey * Very small alterations, switched from using start() to pass host/port, to just defining them up front in init.
Doesn't make sense to set them in start because we can't start more than once anyway. Also, unbroke binaries * Bump WebOb requirement to 1.0.8 in pip-requires * Oops, I broke --help on nova-api, fixed now * pep8 fix * Monkey patching 'os' kills multiprocessing's .join() functionality. Also, messed up the name of the eventlet WSGI logger * Filter out datetime fields from instance\_type * erase unnecessary TODO: statement * fixed reviewer's comment. 1. adding dest-instance-dir deleting operation to nova.compute.manager, 2. fix invalid raise statement * fix comment line * Stop trying to set a body for HTTP methods that do not allow it. It renders the unit tests useless (since they're testing a situation that can never arise) and webob 1.0.8 fails if you do this * log -> logging to keep with convention * Removed debugging and switched eventlet to monkey patch everything * Removed unneeded import * Tests for WSGI/Launcher * Remove the unnecessary insertion of whitespace. This happens to be enough to make this patch apply on recent versions of XenServer / Xen Cloud Platform * trunk merge * fix lp 798361 * Removed logging logic from \_\_init\_\_, added concept of Launcher...no tests for it yet * nova-manage checks if user is member of proj, prior to adding role for that project * Other migrations have been merged in before us, so renumber * Merged trunk * pep8 fixes * assert\_ -> assertTrue since assert\_ is deprecated * added adjust child zone test * tests working again * updated the exceptions around virtual interface creation, updated flatDHCP manager comment * more trunks * another trunk merge * This patch adds support for working with instances by UUID in addition to integer IDs * importing sqlalchemy IntegrityError * Moving add\_uuid migration to 025 * Merging trunk, fixing conflicts * Enclosing tokens for xenapi filter in double quotes * working commit * Fix objectstore test * Cleanup and addition of tests for WSGI server * Merged trunk * Check that server exists when interacting with /v1.1/servers//meta resource * No, really. Added tests for WSGI loader * Added tests for WSGI loader * nova.virt.libvirt.connection.\_live\_migration is changed * Cleanup * merged trunk rev 1198 * Introduced Loader concept, for paste decouple * fix pep8 check * fix comments at nova.virt.libvirt.connection * Cleanup of the cleanup * Further nova-api cleanup * Cleaned up nova-api binary and logging a bit * Removed debugging, made objectstore tests pass again * General cleanup and refactor of a lot of the API/WSGI service code * Adding tests for is\_uuid\_like * Using proper UUID format for uuids * Implements a portion of ec2 ebs boot. What's implemented - block\_device\_mapping option for run instance with volume (ephemeral device and no device isn't supported yet) - stop/start instance * updated fixed ip and floating ip exceptions * pep8: white space/blank lines * Merging trunk * renamed VirtualInterface exception and extended NovaException * moving instance existence logic down to api layer * Ensure os\_type and architecture get set correctly * Make EC2 update\_instance() only update updatable\_fields, rather than all fields. Patch courtesy of Vladimir Popovski * Fixes two minor bugs (lp795123 and lp795126) in the extension mechanism. The first bug is that each extension has \_check\_extension() called twice on it; this is a minor cosmetic problem, but the second is that extensions which flunk \_check\_extension() are still added.
The proposed fix is to make \_check\_extensions() return True or False, then make \_add\_extension() call it from the top and return immediately if \_check\_extensions() returns False * Fixes a bug where a misleading error message is output when there's a sqlalchemy-migrate version conflict * Result is already in JSON format from \_wait\_for\_agent * Fix PEP8 * Fix for lp:796834 * Add new architecture attribute along with os\_type * bunch of docstring changes * adding check for serverRef hostname matching app url * Fix for Bug lp:796813 * Fix the volumes extension resource to have a proper prefix - /os-volumes * Fixes lp797017, which is broken as a result of a fragile method in the xenapi drivers that assumed there would only ever be one VBD attached to an instance * adding extra image service properties to compute api snapshot; adding instance\_ref property * Missed a pep8 fix * Remove thirdwheel.py and do the test with a now-public ExtensionManager.add\_extension() * Removes nova/image/local.py (LocalImageService) * Add some documentation for cmp\_version Add test cases for cmp\_version * Increased error message readability for the OpenStack API * fixing test case * Updated "get\_all\_across\_zones" in nova/compute/api.py to have "context = context.elevated()", allowing it to be run by non-admin users * merging trunk * more words * Cleaned up some pep8 issues in nova/api/openstack/create\_instance\_helper.py and nova/api/openstack/\_\_init\_\_.py * Pull-up from trunk * Add a test to ensure invalid extensions don't get added * Update xenapi/vm\_utils.py so that it calls find\_sr instead of get\_sr. Remove the old get\_sr function which by default looked for an SR named 'slices' * add vlan diagram and some text * Added context = context.elevated() to get\_all\_across\_zones * auto load table schema instead of stubbing it out * Fixed migration per review feedback * Made hostname independent of ec2 id.
Add generation of hostnames based on display name * Fix for a problem where run\_tests.sh would output a seemingly unrelated error message when there was a sqlalchemy-migrate version number conflict * stub api methods * Missed an InstanceTypeMetadata -> InstanceTypeExtraSpecs rename in register\_models * Fix unittest so that it actually fails without the fix * Make $my\_ip Glance's default host, not localhost * We don't check result in caller, so don't set variable to return value * Remove debugging statement * Fix lp795123 and lp795126 by making \_check\_extension() return True or False and checking the result only from the top of \_add\_extension() * Glance host defaults to $my\_ip rather than localhost * Upstream merge * add in dhcp drawing * Rename: instance\_type\_metadata -> instance\_type\_extra\_specs * erroneous self in virtual\_interface\_delete\_by\_instance() sqlalchemy api * Fixes a bug where a unit test sometimes fails due to a race condition * remove the network-host from the flat diagram * add multinic diagram * add the actual image * Renaming to \_build\_instance\_get * merged trunk * returned two files to their trunk versions, odd that they were altered in the first place * Added a new test for confirming failure when no primary VDI is present * Unit tests pass again * more doc (and by more I mean like 2 or 3 sentences) * Fix copyright date * PEP8 cleanup * Attempting to retrieve the correct VDI for snapshotting * Fixing another test * Fixing test\_servers\_by\_uuid * floating\_ips extension is loading to api now * initial commit of multinic doc * generated files should not be in source control * Fixed UUID migration * Added UUID migration * Clean up docstrings to match HACKING * merge with trey * Small tweaks * Merged reldan changes * First implementation of FloatingIpController * First implementation of FloatingIpController * compute/api: fix mismerge due to instance creation change * ec2/cloud.py: fix mismerge * fix conflict with rebasing * api/ec2: support CreateImage * api/ec2/image: support block device mapping * db/model: add root\_device\_name column to instances table * ec2utils: consolidate 'vol-%08x' and 'snap-%08x' * api/ec2: check user permission for start/stop instances * ec2utils: consolidate 'vol-%08x' and 'snap-%08x' * api/ec2: check user permission for start/stop instances * api/ec2: check user permission for start/stop instances * Adds 'joinedload' statements where they need to be to prevent access of a 'detached' object * novaclient changed to support projectID in authentication. Caused some minor issues with distributed scheduler. This fixes them up * Add trailing LF (\n) to password for compatibility with old agents * Workaround windows agent bugs where some responses have trailing \\r\\n * removed commented out shim on Instance class * Windows instances will often take a few minutes setting up the image on first boot and then reboot. We should be more patient for those systems as well as check if the domid changes so we can send agent requests to the current domid * Split patch off to new branch instead * Add --fixes * First attempt to rewrite reroute\_compute * syntax * Merged trunk * Windows instances will often take a few minutes setting up the image on first boot and then reboot.
We should be more patient for those systems as well as check if the domid changes so we can send agent requests to the current domid * Fixed bug * Added metadata joinedloads * Prep-work to begin on reroute\_compute * specify mysql\_engine for the virtual\_interfaces table in the migration * Passed in explanation to 400 messages * Fixing case of volumes alias * The volumes resource extension should be prefixed by its alias - os-volumes * Adding uuid test * Pep8 Fixes * Fixing test\_servers.py * pep8 * Fixing private-ips test * adding server existence check to server metadata resource * Fixing test\_create\_instance * made the test\_xenapi work * test xenapi injected set to True * something else with tests * something with tests * i dont even care anymore * network\_info has injected in xenapi tests * Adding UUID test * network\_info passed in test\_xenapi, mac\_address no longer in instance values dict * added network injected to stub * added injected to network dict portion of tuple returned by get\_instance\_nw\_info * don't provision to all child zones * network info to \_create\_vm * fix mismerge * updated xenapi\_conn finish\_resize arguments * stubbed out get\_instance\_nw\_info for compute\_test * pip novaclient bump * merge with nova trunk * fixed up some little project\_id things with new novaclient * typo * updated finish\_resize to accept network\_info, updated compute and tests accordingly * \_setup\_block\_device\_mapping: raise ApiError when db inconsistency found * db/block\_device\_mapping\_get\_all\_by\_instance: don't raise * Print list of agent builds a bit prettier * PEP8 cleanups * Rename to 024 since 023 was added already * pep8 * The Xen driver supports running instances in PV or HVM modes, but the method it uses to determine which to use is complicated and doesn't work in all cases. The result is that images that need to use HVM mode (such as FreeBSD 64-bit) end up setting a property named 'os' set to 'windows' * typo * None project\_id now default * Adds code to run\_tests.py which: * Fixing code to ensure unit tests for objectstore, vhd & snapshots pass * ec2utils: minor optimization of \_try\_convert() * block\_device\_mapping: don't use [] as default argument * api/ec2: make the parameter parser an independent method * Show only if we have slow tests, elapsed only on test success * Showing elapsed time is now default * Ensuring pep8 runs even when nose options are passed * network tests now tear down user * Removing seconds unit * network user only set if doesn't exist * net base project id now from context, removed incorrect floating ip host assignment * fixed instance[fixed\_ip] in ec2 api, removed fixed\_ip shim * various test fixes * Updated so that we use a 'tmp' subdirectory under the Xen SR when staging migrations. Fixes an issue where you would get a 'File exists' error because the directory under 'images' already existed (created via the rsync copy) * db fakes silly error fix * debug statements * updated db fakes * updated db fakes * Changed requests with malformed bodies to return an HTTP 400 Bad Request instead of an HTTP 500 error * updated db fakes and network base to work with virtual\_interface instead of mac\_address * Phew ... ok, this is the last dist-scheduler merge before we get into serious testing and minor tweaks.
The heavy lifting is largely done * db fakes * db fakes * updated libvirt test * updated libvirt test * updated libvirt test * updated libvirt test * updated libvirt test * getting the test\_host\_filter.py file from trunk, mine is jacked somehow * removed extra init calls * fixed HACKING * Changed requests with malformed bodies to return an HTTP 400 Bad Request instead of an HTTP 500 error * duplicate routes moved to base class * fixed scary diff from trunk that shouldn't have been there * version passing cleanup * refactored out controller base class to use aggregation over inheritance * Move ipy commands to netaddr * merged trunk * mp fixes * Really PEP8? A tab is inferior to 2 spaces? * pep8 fix * upstream merge * Stub out the rpc call in a unit test to avoid a race condition * merged trunk rev 1178 * Making timing points stricter, only show slow/sluggish tests in summary * Improved errors * added kernel/ramdisk migrate support * Added faults wrapper * remove file that got resurrected * Cleaned up pep8 errors using the current version of pep8 located in pip-requires. This is to remove the cluttered output when using the virtualenv to run pep8 (as you should). This will make development easier until the virtualenv requires the latest version of pep8 (see bug 721867) * merge with trey * autoload with the appropriate engine during upgrade/downgrade * Created new exception for handling malformed requests Wrote tests Raise httpBadRequest on malformed request bodies * Fixed bug 796619 * Adds --show-elapsed option for run\_tests * pep8 * Alias of volumes extension should be OS-VOLUMES * Illustrations now added to Distributed Scheduler documentation (and fixed up some formatting) * Load table schema automatically instead of stubbing out * Removed clocksource=jiffies from PV\_args * Test now passes even if the rpc call does not complete on time * - fixes bug that prevented custom wsgi serialization * Removed clocksource=jiffies from PV\_args * merging trunk, fixing pep8 * pep8 * Improved tests * removing unnecessary lines * wsgi can now handle dispatching action None more elegantly * This fixes the server\_metadata create and update functions that were returning req.body (as a string) instead of body (deserialized body dictionary object). It also adds checks where appropriate to make sure that body is not empty (and return 400 if it is). Tests updated/added where appropriate * removed yucky None return types * merging trunk * trunk merge * zones image\_id/image\_href support for 1.0/1.1 * Update xenapi/vm\_utils.py so that it calls find\_sr instead of get\_sr. Remove the old get\_sr function which by default looked for an SR named 'slices' * fixed bug 796619 * merge trunk * check for none and empty string, this way empty dicts/lists will be ok * Updated so that we use a 'tmp' subdirectory under the Xen SR when staging migrations.
Fixes an issue where you would get a 'File exists' error because the directory under 'images' already existed (created via the rsync copy) * fix method chaining in database layer to pass right parameters * Add a method to delete provider firewall rules * Add ability to list ip blocks * pep 8 whitespace fix * Move migration * block migration feature added * Reorder firewall rules so the common path is shorter * ec2 api method allocate\_address; raises exception.NoFloatingIpsDefined instead of UnknownError when there aren't any floating ips available * in XML Serialization of output, the toprettyxml() call would sometimes return a str() and sometimes unicode(), I've forced encoding to utf-8 to ensure that we always get str(). This fixes the related bug * A recent commit added a couple of directories that don't belong in version control. Remove them again * adding support for custom serialization methods * forgot a comma * floating ips can now move around the network hosts * A recent commit added a couple of directories that don't belong in version control. Remove them again * 'network list' prints project id * got rid of prints for debugging * small pep8 fixes * return body correctly as object instead of a string, with tests, also check for empty body on requests that need a body * adding xml support to /images//meta resource; moving show/update entities into meta container * removed posargs decorator, all methods decorated * Allows Nova to talk to multiple Glance APIs (without the need for an external load-balancer). Chooses a random Glance API for each request * forgot a comma * misc argument alterations * force utf-8 encoding on toprettyxml call for XMLDictSerializer * added new exception more descriptive of not having floating addresses available for allocation * raise instance instead of class * Fix copyright year * style change * Only update updateable fields * removing LocalImageService from nova-manage * rebase from trunk * decorators for action methods added * source illustrations added & spelling/grammar based on comstud's feedback * fixed reraise in trap\_error * forgot some debugging statements * trunk merge and ec2 tests fixed * Add some docstrings for new agent build DB functions * Add test for agent update * Multiple position dependent formats and internationalization don't work well together * Adding caveat * Fixing code per review comments * removed fixed\_ips virtual\_interface\_id foreignkey constraint from multi\_nic migration, and added it as a standalone migration with special sqlite files * Record architecture of image for matching to agent build later.
Add code to automatically update agent running on instance on instance creation * Add version and agentupdate commands * Add an extension to allow for an addFixedIp action on instances * further changes * tests working after merge-3 update * 022 migration has already been added, so make ours 023 now * parse options with optparse, options prepended '--' * renamed migration again * Pull-up from multi\_nic * merged koelkers tests branch * remove file that keeps popping up * Merging trunk * Fixing the tests * matched the inner exception specifically, instead of catching all RemoteError exceptions * Support multiple glance-api servers * Merged trunk * Fix merge conflict * removing custom exception, instead using NoFloatingIpsDefined * raises exception.NoFloatingIpsDefined instead of UnknownError * Normalize and update database with used vm\_mode * added a test for allocate\_address & added error handling for api instead of returning 'UnknownError', will give information 'AllocateAddressError: NoMoreAddresses' * merged trunk again * updated docstring for nova-manage network create * Now forwards create instance requests to child zones. Refactored nova.compute.api.create() to support deferred db entry creation * MySQL database tables are currently using the MyISAM engine. Created migration script nova/db/sqlalchemy/migrate\_repo/versions/021\_set\_engine\_mysql\_innodb.py to change all current tables to InnoDB * merged trunk again * Support for header "X-Auth-Project-Id" in osapi * Cleaned up some pylint errors * tweaks * PEP8 fix * removed network\_info shims in vmops * Fix for bug#794239 to allow pep8 in run\_tests.sh to use the virtual environment * adding Authorizer key for ImportPublicKey * fix type of exception caught * Look for vm\_mode property on images and use that if it exists to determine if image should be run in PV or HVM mode. If it doesn't exist, fall back to existing logic * removed straggler code * trunk merge * merge trunk * pep8 * removed autogen file * added field NOVA\_PROJECT\_ID to template for future use * added tests for X-Auth-Project-Id header * fix fake driver for using string project * adding Authorizer key for ImportPublicKey * Cleaned up some of the larger pylint errors. Set to ignore some lines that pylint just couldn't understand * DRY up the image\_state logic. Fix an issue where glance style images (which aren't required to have an 'image\_state' property) couldn't be used to run instances on the EC2 controller * remove the debugging lines * remove the old stuff * tests all pass * Added virtual environment to PEP8 tests * Added test\_run\_instances\_image\_status\_active to test\_cloud * Add the option to specify a default IPv6 gateway * pep8 * Removed use of super * Added illustrations for Distributed Scheduler and fixed up formatting * Disabled pylint complaining about no 'self' parameter in a decorator function * DRY up the image\_state logic.
Fix an issue where glance style images (which aren't required to have an 'image\_state' property) couldn't be used to run instances on the EC2 controller * Fixed incorrect error message Added missing import Fixed Typo (pylint "undefined variable NoneV") * removing local image service * Remove unnecessary docstrings * Add the option to specify a default IPv6 gateway * port the floating over to storing in a list * Make libvirt snapshotting work with images that don't have an 'architecture' property * take out the host * Removed empty init * Use IPNetwork rather than IPRange * Fixed type causing pylint "exception is not callable" Added param to fake\_instance\_create, fake objects should appear like the real object. pylint "No value passed for parameter 'values' in function call" * sanity check * run\_instances will check image for 'available' status before attempting to create a new instance * fixed up tests after trunk merge * Use True/False instead of 1/0 when updating 'deleted' column attributes. Fixes casting issues when running nova with Postgres * merged from trunk * Remove more stray import IPy * Dropped requirement for IPy * Convert stray import IPy * Use True/False instead of 1/0 when updating 'deleted' column attributes. Fixes casting issues when running nova with Postgres * Removed commented code * Added test case for snapshotting base image without architecture * Remove ipy from virt code and replace with netaddr * Remove ipy from network code and replace with netaddr * Remove ipy from nova/api/ec2/cloud.py and use netaddr * Remove ipy from nova-manage and use netaddr * This branch allows marker and limit parameters to be used on image listing (index and detail) requests. It parses the parameters from the request, and passes them along to the glance\_client, which can now handle these parameters. Essentially all of the logic for the pagination is handled in glance, we just pass along the correct parameters and do some error checking * merge from trunk, resolved conflicts * Update the OSAPI images controller to use 'serverRef' for image create requests * Changed the error raise to not be AdminRequired when admin is not, in fact, required * merge with trey * Change to a more generic error and update documentation * make some of the tests * Merged trunk * merge trunk * Ignore complaining about dynamic definition * Removed Duplicate method * Use super on an old style class * Removed extraneous code * Small pylint fixes * merge with trunk * Fixed incorrect exception * This branch removes nwfilter rules when instances are terminated to prevent resource leakage and serious eventual performance degradation. Without this patch, launching instances and restarting nova-compute eventually become very slow * merge with trunk * resolve conflicts with trunk * Update migrate script version to 22 * Added 'config list' to nova-manage. This function will output all of the flags and their values * renamed migration * trunk merge after 2b hit * Distributed Scheduler developer docs * Updated to use the '/v1/images' URL when uploading images to glance in the Xen glance plugin. Fixes the issue where snapshots fail to upload correctly * merged trunk again * added 'nova-manage config list' which will list out all of the flags and their values. I also alphabetized the list of available categories * Updated to use the '/v1/images' URL when uploading images to glance in the Xen glance plugin.
Fixes issue where snapshots failed to get uploaded * Removed "double requirement" from tools/pip-requires file * merged koelker migration changes, renumbered migration filename * fix comment * Fixed pip-requires double requirement * Added a test case for XML serialization * Removed unused and erroneous (yes, it was both) function * paramiko is not installed into the venv, but is required by smoketests/base.py. Added paramiko to tools/pip-requires * Changes all uses of utcnow to use the version in utils. This is a simple wrapper for datetime.datetime.utcnow that allows us to use fake values for tests * Set pylint to ignore correct lines that it could not determine were correct, due to the means by which eventlet.green imported subprocess. Minimized the number of these lines to ignore * LDAP optimization and a fix for one small bug that caused a huge performance leak. Dashboard's benchmarks showed an overall 22x boost in page request completion time * Adds LeastCostScheduler which uses a series of cost functions and associated weights to determine which host to provision to * Make libvirt snapshotting work with images that don't have an 'architecture' property * Add serverRef to image metadata serialization list * Fixed pylint: no metadata member in models.py * Implement OSAPI v1.1 style image create * trunk merge * little tweaks * Flush AuthManager's cache before each test * Fixed FakeLdapDriver, made it call LdapDriver.\_\_init\_\_ * Merged with trunk * This change set adds the ability to create new servers with an href that points to a server image on any glance server (not only the default one configured). This means you can create a server with imageRef = http://glance1:9292/images/3 and then also create one with imageRef = http://glance2:9292/images/1. Using the old way of passing in an image\_id still works as well, and will use the default configured glance server (imageRef = 3 for instance) * added nova\_adminclient to tools/pip-requires * merged trunk * Added paramiko to tools/pip-requires * Tests that all exceptions can be raised properly, and fix the couple of instances where they couldn't be constructed due to typos * merge trunk... yay.. * switch zones to use utcnow * make all uses of utcnow use our testable utils.utcnow * Fix error with % as replacement string * Fixing conflicts * Tests to assure all exceptions can be raised as well as fixing NotAuthorized * use %% because % is a replacement string character * some comment docstring modifications * Makes novarc work properly on a mac and also for zsh in addition to bash. Other shells are not guaranteed to work * This adds the ability to publish nova errors to an error queue * don't use python if readlink is available * Sudo chown the vbd device to the nova user before streaming data to it. This resolves an issue where nova-compute required 'root' privs to successfully create nodes with connection\_type=xenapi * Bugfix #780784.
KeyError when creating custom image * Remove some of the extra image service calls from the OS API images controller * pep8 fixes * merge with trey * make it pass for the demo * Merged with Will * Minor comment formatting changes * got rid of more test debugging stuff that shouldn't have made it in * Remove comment about imageRef not being implemented * Remove a rogue comment * more tests (empty responses) * get\_all with reservation id across zone tests * move index and detail functions to v10 controller * got rid of prints * Refactored after review, fixed merge * image href should be passed through the rebuild pipeline, not the image id * merge from trunk * got rid of print debugs * cleanup based on waldon's comments, also caught a few other issues * missed a couple chars * Little cleanups * pep8 and all that * tests all passing again * list --reservation now works across zones * fix novarc to work on mac and zsh * merged with trunk, fixed the test failure, and split the test into 3 as per peer review * Fixes nova-manage bug. When a nova-network host has allocated floating ips \*AND\* some associated, the nova-manage floating list would throw an exception because it was expecting a hash with an 'ec2\_id' key; however, the obj returned is a sqlalchemy obj and the attr we need is 'hostname' * start the flat network * more testing fun * fixed as per peer review to make more consistent * merged from trunk * Implement the v1.1 style resize action with support for flavorRef * Updates to the 018\_rename\_server\_management\_url migration to avoid adding and dropping a column. Just simply rename the column * Support SSL AMQP connections * small fixes * Allow SSL AMQP connections * reservation id's properly forwarded to child zones on create * merge from trunk * fix pep8 issue from merge * choose the network\_manager based on instance variable * fix the syntax * forgot a comma * This just fixes a bunch of pep8 issues that have been lingering around for a while and bothering me :) * touch ups * Updates to the 018\_rename\_server\_management\_url to avoid adding and dropping a column. Just simply rename the column * basic reservation id support to GET /servers * - move osapi-specific wsgi code from nova/wsgi.py to nova/api/openstack/wsgi.py - refactor wsgi modules to use more object-oriented approach to wsgi request handling: - Resource object steps up to original Controller position - Resource coordinates deserialization, dispatch to controller, serialization - serialization and deserialization broken down to be more testable/flexible * merge from trunk * make the stubs * use the host * da stubs * Bumped migration number * Merged from trunk * updates to keep things looking better * merge from trunk * fix pep8 issues * PEP8 fix * Moved memcached driver import to the top of modules * fix pep8 issues * pep8 fixes * Cleanup instances\_path in the test\_libvirt test\_spawn\_with\_network\_info test.
Fixes issue where the nova/tests/instance-00000001/ directory is left in the nova source tree when running run\_tests.sh -N * fix filtering tests * Renamed migration to 020 * osapi: added support for header X-Auth-Project-Id * added /zones/boot reservation id tests * Adds hooks for applying ovs flows when vifs are created and destroyed for XenServer instances * Logs the exception if metadata fails and returns a 500 with an error message to the client * Fixing a bunch of conflicts * add new base * refactor existing fakes, and start stubbing out the network for the new manager tests * pep8 * Incremented version of migration script to reflect changes in trunk * basic zone-boot test in place * Incremented version of migration script to reflect changes in trunk * Incremented version of migration script to reflect changes in trunk * switch to using webob exception * Added new snapshots table to InnoDB migrations * Adds a few more status messages to error states on image register for the ec2 api. This will hopefully provide users of the ec2 api with a little more info if their registration fails * Cleaned up bug introduced after fixing pep8 errors * Fixing Scheduler Tests * Cleaned up bug introduced after fixing pep8 errors * Basic hook-up to HostFilter and fixed up the passing of InstanceType spec to the scheduler * make the old tests still pass * rename da stuffs * rename da stuffs * Resolving conflict and finishing test\_images * merge * added tests for image detail requests * Merged trunk * Merged trunk and fixed conflicts * Whitespace cleanups * added pause/suspend implementation to nova.virt.libvirt\_conn * Change version number of migration * Update the rebuild\_instance function in the compute manager so that it accepts the arguments that our current compute API sends * Moved everything from thread-local storage to class attributes * Added the filtering of image queries with image metadata. This is exposing the filtering functionality recently added to Glance. Attempting to filter using the local image service will be ignored * This enables us to create a new volume from a snapshot with the EC2 api * Use a new instance\_metadata\_delete\_all DB api call to delete existing metadata when updating a server * added tests for GlanceImageService * Add vnc\_keymap flag, enable setting keymap for vnc console and fix bug #782611 * Add refresh\_provider\_fw\_rules to virt/driver.py#ComputeDriver so virtualization drivers other than libvirt will raise NotImplemented * Rebased to trunk rev 1120 * trunk merge * added get\_pagination\_params function in common with tests, allow fake and local image services to accept filters, markers, and limits (but ignore them for now) * Cleaned up text conflict * pep8 fixed * pep8 fixes * Cleaned up text conflict * removing semicolon * Cleaned up text conflict * skip the vlan test, not sure why it doesn't work * Cleaned up pep8 errors * Fixed the APIError typo * MySQL database tables are currently using the MyISAM engine.
Created migration script nova/db/sqlalchemy/migrate\_repo/versions/020\_set\_engine\_mysql\_innodb.py to change all current tables to InnoDB * Handle the case when a v1.0 api tries to list servers that contain image hrefs * Added myself to Authors file * edits based on ed's feedback * More specific error messages for resize requests * pep8 fixes * merge trunk * tests passing again * Actually remove the \_action\_resize code from the base Servers controller. The V11 and V10 controllers implement these now * merge from trunk * This adds volume snapshot support with the EC2 api * Fixed the typo of APIError; replaced with ApiError * nova/auth/novarc.template: Changed NOVA\_KEY\_DIR to allow symlink support * Updated compute api and manager to support image\_refs in rebuild * zone-boot working * regular boot working again * regular boot working again * first pass at reservation id support * Updates so that 'name' can be updated when doing an OS API v1.1 rebuild. Fixed issue where metadata wasn't getting deleted when an empty dict was POST'd on a rebuild * first cut complete * project\_id moved to be last * add support for keyword arguments * fixed nova.virt.libvirt\_conn.resume() method - removing try-catch * reservation\_id's done * basic flow done * lots more * starting * boot-from-volume: some comments and NOTE(user name) * Use metadata variable when calling \_metadata\_refs * Implement the v1.1 style resize action with support for flavorRef * Fixes to the SQLAlchemy API such that metadata is saved on an instance\_update. Added integration test to test that instance metadata is updated on a rebuild * Update the rebuild\_instance function in the compute manager so that it accepts the arguments that our current compute API sends * Cleanup instances\_path in test\_libvirt test\_spawn\_with\_network\_info test * Added missing nova import to image/\_\_init\_\_.py * Another image\_id location in hyperv * Fixing nova.tests.api.openstack.fakes.stub\_out\_image\_service. It now stubs out the get\_image\_service and get\_default\_image\_service functions. Also some pep8 whitespace fixes * Fixing xen and vmware tests by correctly mocking glance client * Fixing integration tests by correctly stubbing image service * More image\_id to image\_ref stuff. Also fixed tests in test\_servers * When encrypting passwords in xenapi's SimpleDH(), we shouldn't send a final newline to openssl, as it'll use that as encryption data. However, we do need to make sure there's a newline on the end when we write the base64 string for decoding. Made these changes and updated the test * Fixes the bug introduced by rpc-multicall that caused some test\_service.py tests to fail by pip-requiring a later version of mox * added \n is not needed with -A * now pip-requires mox version 0.5.3 * added -A back in to pass to openssl * merge with dietz * merge with dietz * XenAPI tests pass * fixed so all the new encryption tests pass..
including data with newlines and so forth * Glance client updates for xenapi and vmware API to work with image refs * Merged lp:~rackspace-titan/nova/lp788979 * get the right args * Fixing pep8 problems * Modified instance\_type\_create to take metadata * Added test for instance type metadata create * merge with trey * Added test for instance type metadata update * Added delete instance metadata unit test * Added a unit test * Adding test code * Changed metadata to meta to avoid sqlalchemy collision * Adding accessor methods for instance type metadata * remove errant print statement * prevent encryption from adding newlines on long messages * trunk merge * nova/auth/novarc.template: Changed NOVA\_KEY\_DIR to allow symlink support * docstrings again and import ordering * fix encryption handling of newlines again and restructure the code a bit * Libvirt updates for image\_ref * Commit the migration script * fixed docstrings and general tidying * remove \_take\_action\_to\_instance * fix calls to openssl properly now. Only append \n to stdin when decoding. Updated the test slightly, also * fixed read\_only check * Fix pep8 errors * Fix pep8 violations * Fix a description of 'snapshot\_name\_template' * unittest: make unit tests happy * unittest: tests for boot from volume and stop/start instances * compute: implement ec2 stop/start instances * compute, virt: support boot-from-volume without ephemeral device and no device * db: add a table for block device mapping * volume/api: allow volume clone from snapshot without size * api/ec2: parse ec2 block device mapping and pass it down to compute api * teach ec2 parser multi dot-separated argument * api/ec2: make ec2 api accept true/false * Adds the ability to make a call that returns multiple times (a call returning a generator). This is also based on the work in rpc-improvements + a bunch of fixes Vish and I worked through to get all the tests to pass so the code is a bit all over the place * fix a minor bug unrelated to this change * updated the way allocate\_for\_instance and deallocate\_for\_instance handle kwargs * Rename instances.image\_id to instances.image\_ref * changes per review * merge with dietz * stub out passing the network * Virt tests passing while assuming the old style single nics * adding TODOs per dabo's review * Fixes from Ed Leafe's review suggestions * merge trunk * move udev file so it follows the xen-backend.rules * Essentially adds support for wiring up a swap disk when building * add a comment when calling glance:download\_vhd so it's clear what is returned * make the fakes be the correct * skip vmware tests, since they need to be updated for multi-nic by someone who knows the backend * put back the hidden assert check i accidentally removed from glance plugin * fix image\_path in glance plugin * Merged trunk * skip the network tests for now * Change the return from glance to be a list of dictionaries describing VDIs Fix the rest of the code to account for this Add a test for swap * cleaning up getattr calls with default param * branch 2a merge (including trunk) * trunk merge * remerged with 2a * tests pass and pep8'ed * review fixups * Expanded tests * In vmwareapi\_net.py removed the code that defines the flag 'vlan\_interface' and added code to set a default value for the flag 'vlan\_interface' to 'vmnic0'.
This will now avoid the flag re-definition issue * missed a driver reference * exceptions are logged via the raise, so just log an error message * log upload errors * instance obj returned is not a hash, instead is sqlalchemy obj and hostname attr is what the logic is looking for * we don't need the mac or the host anymore * Test tweaks * instances don't need a mac\_address to be created anymore * Make a cleaner log message and use [] instead of . to get database fields * use the skip decorator rather than comment out * merging trunk * Adding some pluralization * Double quotes are ugly #3 * merge with dietz * fix typo introduced during merge conflict resolution * Remove spurious newline at end of file * Move migration to fix ordering * remove dead/duplicate code * Double quotes are ugly #2 * Double quotes are ugly * refactoring compute.api.create() * Fix test\_cloud tests * Restricted image filtering by name and status only * Switch the run\_instances call in the EC2 back to 'image\_id'. Incoming requests use 'imageId' so we shouldn't modify this for image HREFs * Switching back to chown. I'm fine w/ setfacl too but nova already has 'chown' via sudoers so this seems reasonable for now * replace double quotation with single quotation at nova.virt.libvirt\_conn * remove unnecessary import inspect at nova.virt.libvirt\_conn * creating \_take\_action\_to\_instance to nova.virt.libvirt\_conn.py * Instead of redefining the flag 'vlan\_interface', just setting a default value (vmnic0) in vmwareapi\_net.py * Renamed image\_ref variables to image\_href. Since the convention is that x\_ref vars may imply that they are db objects * Added test skipper class * change the behavior of calling a multicall * move consumerset killing into stop * don't put connection back in pool * replace removed import * cleanups * cleanup the code for merging * make sure that using multicall on a call with a single result still functions * lots of fixes for rpc and extra imports * don't need to use a separate connection * almost everything working with fake\_rabbit * bring back commits lost in merge * connection pool tests and make the pool LIFO * Add rpc\_conn\_pool\_size flag for the new connection pool * Always create Service consumers no matter if report\_interval is 0 Fix tests to handle how Service loads Consumers now * catch greenlet.GreenletExit when shutting service down * fix consumers to actually be deleted and clean up cloud test * fakerabbit's declare\_consumer should support more than 1 consumer. also: make fakerabbit Backend.consume be an iterator like it should be.
* convert fanout\_cast to ConnectionPool * pep8 and comment fixes * Add a connection pool for rpc cast/call Use the same rabbit connection for all topic listening and wait to be notified vs doing a 0.1 second poll for each * add commented out unworking code for yield-based returns * make the test more explicit * add support to rpc for multicall * merge with dietz * Fixing divergence * Merged trunk * Added params to local and base image service * Fixed the mistyped line referred to in bug 787023 * Merged trunk and resolved conflicts * Fixed a typo * make the test work * Merged with trunk * Several changes designed to bring the openstack api 1.1 closer to spec - add ram limits to the nova compute quotas - enable injected file limits and injected file size limits to be overridden in the quota database table - expose quota limits as absolute limits in the openstack api 1.1 limits resource - add support for controlling 'unlimited' quotas to nova-manage * During the API create call, the API would kick off a build and then loop in a greenthread waiting for the scheduler to pick a host for the instance. After the API would see a host was picked, it would cast to the compute node's set\_admin\_password method * starting breakdown of nova.compute.api.create() * fix test. instance is not updated in DB with admin password in the API anymore * Merged upstream * pep8 fixes * Initial tests * fix forever looping on a password reset API call * updating admin\_pass moved down to compute where the password is actually reset. only update if it succeeds * merged trunk * change install\_ref.admin\_password to instance\_ref.admin\_pass to match the DB * Merged trunk * remove my print * we're getting a list of tuples now * we have a list of tuples, not a list of dicts * pep8 fixes * return the result of the function * Updated tests to use mox pep8 * InstanceTypesMetadata is now registered * make some changes to the manager so dupe keywords don't get passed * Fixing the InstanceTypesMetadata table definition * try out mox for testing image request filters * Adding the migrate code to add the new table * dist-sched-2a merge * Created new libvirt directory, moved libvirt\_conn.py to libvirt/connection.py, moved libvirt templates, broke out firewall and network utilities * make the column name correct * The code for getting an opaque reference to an instance assumed that there was a reference to an instance obj available when raising an exception. I changed this from raising an InstanceNotFound exception to a NotFound, as this is more appropriate for the failure, and doesn't require an instance ID * merge against 2a * trunk merge * simplified the limiting differences for different versions of the API * New tests added * Changed the exception type to not require an instance ID * Added model for InstanceTypeMetadata * Added test * Avoid wildcard import * Add unittests for cloning volumes * merged recent trunk * merged recent trunk * Make snapshot\_id=None a default value in VolumeManager:create\_volume(). It is not a regular case to create a volume from a snapshot * Don't need to import json * Fix wrong call of the volume api create() * pep8 fix in nova/compute/api.py * instead of the API spawning a greenthread to wait for a host to be picked, the instance to boot, etc for setting the admin password...
let's push the admin password down to the scheduler so that compute can just take care of setting the password as a part of the build process * tests working again * eventlet.spawn\_n() expects the function and arguments, but it expects the arguments unpacked since it uses \*args * Don't pass a tuple since spawn\_n will get the arguments with \*args anyway * move devices back * Using the root-password subcommand of the nova client results in the password being changed for the instance specified, but to a different unknown password. The patch changes nova to use the password specified in the API call * Pretty simple. We call openssl to encrypt the admin password, but the recent changes around this code forgot to strip the newline off the read from stdout * SimpleDH's decrypt needs to append \n when writing to stdin * need to strip newline from openssl stdout data * merge with trey * work on * merge trunk * moved auto assign floating ip functionality from compute manager to network manager * create a mac address entry and blindly use the first network * create a mac address entry and blindly use the first network * create a mac address entry and blindly use the first network * need to return the ref * Added filtering on image properties * Fixes a bug related to incorrect reparsing of flags and prevents many extra reparses * no use mac * comment out the direct cloud case * make fake\_flags set defaults instead of runtime values * add a test from vish and fix the issues * Properly reparse flags when adding dynamic flags * no use mac * instances don't have macs anymore and address is now plural * let the fake driver accept the network info * Comment out the 2 tests that require the instance to contain mac/ip * initial use of limited\_by\_marker * more fix up * many tests pass now * it's a dict, not a class * we don't get the network in a tuple anymore * specified image\_id keyword in exception arg * When adding a keypair with ec2 API that already exists, give a friendly error and no traceback in nova-api * added imageid string to exception, per peer review * Fixes some minor doc issues - misspelled flags in zones doc and also adds zones doc to an index for easier findability * removed most of debugging code * Fixing docstring * Synchronise with Diablo development * make \_make\_fixture respect name passed in * zone1 merge * sending calls * accepting calls * Fixing \_get\_kernel\_ramdisk\_from\_image to use the correct image service * Fixing year of copyright * merge * select partially going through * merge from trunk * make image\_ref and image\_id usage more consistent, eliminate redundancy in compute\_api.create() call * take out irrelevant TODO * blah * uhhh yea * local tweaks * getting closer to working select call * swap should use device 1 and rescue use device 2 * merged from trunk * fix tests, have glance plugin return json encoded string of vdi uuids * make sure to get a result, not the query * merged from trunk * Removing code duplication between parse\_image\_ref and get\_image service. Made parse\_image\_ref private * Changed ec2 api dupe key exception log handler info->debug * Added test case for attempting to create a duplicate keypair * Removing debug print line * Renaming service\_image\_id vars to image\_id to reduce confusion.
Also some minor cleanup * cleanup and fixes * got rid of print statement * initial fudging in of swap disk * make the test\_servers pass by removing the address tests for 1.1, bug filed * port the current create\_networks over to the new network scheme * need to have the complete table def since sqlalchemy/sqlite won't reload the model * must have the class defined before referencing it * make the migration run with tests * get rid of all mention of drivers ... it's filter only now * merge trunk * Fixes euca-attach-volume for iscsi using Xenserver * fix typo * merge branch lp:~rackspace-titan/nova/ram-limits * Added test * Fixes missing space * Fixed mistyped line * Rebased to trunk rev 1101 * merge from trunk * moved utils functions into nova/image/ * Trunk merge * Fix bug #744150 by starting nova-api on an unused port * Removing utils.is\_int() * Added myself to Authors * When adding a keypair that already exists, give a friendly error and no traceback in nova-api * --dhcp-lease-max=150 by default. This prevents >150 instances in one network * Minor cleanup * No reason to modify the way file names are generated for kernel and ramdisk, since the kernel\_id and ramdisk\_id are still guaranteed to be ints * found a typo in the xenserver glance plugin that doesn't work with glance trunk. Also modified the image url to fetch from /v1/image/X instead of /image/X as that returned a 300 * fixing glance plugin bug and setting the plugin to use /v1 of the glance api * merge trunk * move init start position to 96 to allow openvswitch time to fully start * Include data files for public key tests in the tarball * minor cleanup * Makes sure vlan creation locks so we don't race and fail to create a vlan * merging trunk * Include data files for public key tests in the tarball * Merged with trunk * renaming resource\_factory to create\_resource * combined the exception catching to eliminate duplication * synchronize vlan creation * print information about nova-manage project problems * merge from trunk * fix comments * make nwfilter mock more 'realistic' by having it remember which filters have been defined * fix pep8 issue * fixed silly issue with variable needing to be named 'id' for the url mapper, also caught new exception type where needed * This is the groundwork for the upcoming distributed scheduler changes. Nothing is actually wired up here, so it shouldn't break any existing code (and all tests pass) * Merging trunk * Get rid of old virt/images.py functions that are no longer needed. Checked for any loose calls to these functions and found none. All tests pass for me * Update OSAPI v1.1 extensions so that it supports RequestExtensions. ResponseExtensions were removed since the new RequestExtension covers both use cases. This branch also removes some of the odd serialization code in the RequestExtensionController that converted dictionary objects into webob objects. RequestExtension handlers should now always return proper webob objects * Addressing bug #785763. Usual default for maximum number of DHCP leases in dnsmasq is 150. This prevents instances from obtaining IP addresses from DHCP in case we have more than 150 in our network. Adding myself to Authors * foo * syntax errors * temp fixes * added support for reserving a certain network for a certain project * Fixed some tests * merge with trunk * Added an EC2 API endpoint that'll allow import of public key.
Previously, the api only allowed generation of new keys * This fix ensures that kpartx -d is called in the event that tune2fs fails during key injection, as it does when trying to inject a key into a windows instance. Bug #760921 is a symptom of this issue, as if kpartx -d is not called then partitions remain mapped that prevent the underlying nbd from being reused * Add new flag 'max\_kernel\_ramdisk\_size' to specify a maximum size of kernel or ramdisk so we don't copy large files to dom0 and fill up /boot/guest * The XenAPI driver uses openssl as part of the nova-agent implementation to set the password for root. It uses a temporary file insecurely and unnecessarily. Change the code to write the password directly to stdin of the openssl process instead * The tools/\* directory is now included in pep8 runs. Added an opt-out system for excluding files/dirs from pep8 (using GLOBIGNORE) * fill out the absolute limit tests for limits v1.0 controller * add absolute limits support to 1.0 api as well * Merged with trunk * fixed pep8 issue * merge from trunk * Fail early if requested imageRef does not exist when creating a server * Separate out tests for when unfilter is called from iptables vs. nwfilter driver. Re: lp783705 * Moved back templates and fixed pep8 issue. Template move was due to breaking packaging with template moves. That will need to happen in a later merge * further refactoring of wsgi module; adding documentation and tests * don't give instance quota errors with negative values * Merged trunk and resolved horrible horrible conflicts * No reason to hash ramdisk\_id and kernel\_id. They are ints * temp * waldon's naming feedback * Fixing role names to match code * Merging trunk * updated the hypervisors and ec2 api to support receiving lists from pluralized mac\_addresses and fixed\_ips * fname should have been root\_fname * minor cleanup, plus had to merge because of diverged-branches issue * Minor cleanup * merge from trunk * Fix comments * Add a unittest to test EC2 snapshot APIs * Avoid wildcard import * Simple change to sort the list of controllers/methods before printing to make it easier to read * missed the new wsgi test file * removing controller/serializer code from wsgi.py; updating other code to use new modules * merge lp:nova * fixup absolute limits to latest 1.1 spec * refactoring wsgi to separate controller/serialization/deserialization logic; creating osapi-specific module * default to port 80 if it isn't in the href/uri * return dummy id per vish's suggestion * hackish patch to fix hrefs asking for their metadata in boot (this really shouldn't be in ec2 api?) * Sort list of controllers/methods before printing * use a manual 500 with error text instead of traceback for failure * log any exceptions that get thrown trying to retrieve metadata * skeleton of forwarding calls to child zones * fix typo in udev rule * merge trunk * libvirt fixes to use new image\_service stuff * On second thought, removing decorator * Adding FlagNotSet exception * Implements a basic mechanism for pushing notifications out to interested parties. The rationale for implementing notifications this way is that the responsibility for them shouldn't fall to Nova. As such, we simply will be pushing messages to a queue where another worker entirely can be written to push messages around to subscribers * Spacing changes * get real absolute limits in openstack api and verify absolute limit responses * Added missing xenhost plugin.
This was causing warnings to pop up in the compute logs during periodic\_task runs. It must not have been bzr add'd when this code was merged * fixed bug with compute\_api not having actual image\_ref to use proper image service * Adding xenhost plugin * Merging trunk * Added missing xenhost plugin * Fix call to spawn\_n() instead. It expects a callable * fix pep8 issues * oops, took out commented out tests in integrated.test\_servers and made tests pass again * fixed api.openstack.test\_servers tests...again * fixed QuotaTestCases * fixed ComputeTestCase tests * made ImageControllerWithGlanceServiceTests pass * fixed test\_servers small tests as well * get integrated server\_tests passing * Removed all utils.import\_object(FLAGS.image\_service) and replaced with utils.get\_default\_image\_service() * MySQL database tables are using the MyISAM engine. Created migration script to change all current tables to InnoDB, updated version to 019 * MySQL database tables are using the MyISAM engine. Created migration script to change all current tables to InnoDB, updated version to 019 * Small cleanups * Moving into scheduler subdir and refactoring out common code * Moving tests into scheduler subdirectory * added is\_int function to utils * Pep8 fixes * made get\_image\_service calls in servers.py * use utils.get\_image\_service in compute\_api * updates to utils methods, initial usage in images.py * added util functions to get image service * Using import\_class to import filter\_host driver * Adding fill first cost function * add more statuses for ec2 image registration * Add --fixes * Add --fixes * Fixes the naming of the server\_management\_url in auth and tests * Merging in Sandy's changes adding Noop Cost Fn with tests * merged trunk * move migration 017 to 018 * merge ram-limits * Removed extra serialization metadata * Docstring cleanup and formatting (nova/network dir). Minor style fixes as well * pep8 * Fixes improper attribute naming around instance types that broke Resizes * merge ram-limits * support unlimited quotas in nova-manage and flags * fix test * Changed builder to match specs and added test * add migration for proper name * Update test case to ensure password gets set correctly * make token use typo that is in database. Also fix now -> utcnow and stop using . syntax for dealing with tokens * Added missing metadata join to instance\_get calls * Avoid using spawn\_n to fix LP784132 * add ram limits to instance quotas * Convert instance\_type\_ids in the instances table from strings to integers to enable joins with instance\_types. This in particular fixes a problem when using postgresql * Set password to one requested in API call * don't throw type errors on NoneType int conversions * Added network\_info into refresh\_security\_group\_rules That fixes https://bugs.launchpad.net/nova/+bug/773308 * Improved error notification in network create * Instead of using a temp file with openssl, just write directly to stdin * First cut at least cost scheduler * merge lp:nova * Implemented builder for absolute limits and updated tests * provision\_resource no longer returns value * provision working correctly now * Re-pull changed notification branch * PEP8 fixes * adding --fixes lp:781429 * Fixed mistyped key, caused huge performance leak * Moved memcached connection in AuthManager to thread-local storage. Added caching of LDAP connection in thread-local storage. Optimized LDAP queries, added similar memcached support to LDAPDriver. Add "per-driver-request" caching of LDAP results.
(should be per-api-request) * ugh, fixed again * tests fixed and pep8'ed * Update comment on RequestExtension class * failure conditions are being sent back properly now * Added opt-out system for excluding files/dirs from pep8 (using GLOBIGNORE) * MySQL database tables are using the MyISAM engine. Created migration script to change all current tables to InnoDB * MySQL database tables are using the MyISAM engine. Created migration script to change all current tables to InnoDB * fix for lp783705 - remove nwfilters when instance is terminated * basic call going through * Added missing metadata join to instance\_get calls * add logging to migration and fix migration version * Migrate quota schema from hardcoded columns to a key-value approach. The hope is that this change would make it easier to change the quota system without future schema changes. It also adds the concept of quotas that are unlimited * Conceded :-D * updated the mac\_address delete function to actually delete the rows, and update fixed\_ips * Added missing flavorRef and imageRef checks in the os api xml deserialization code along with tests * Fixed minor pylint errors * This branch splits out the IPv6 address generation into pluggable backends. A new flag named ipv6\_backend specifies which backend to use * Reduce indentation to avoid PEP8 failures * merge koelker migration changes * using mac\_address from fixed\_ip instead of instance * PEP8 cleanups * Use new 3-argument API * add a todo * style fixing * Removed obsolete method and test * renamed test cases in nova/tests/api/openstack/test\_servers.py to use a consistent naming convention as used in nova/tests/api/openstack/test\_images.py. also fixed a couple of pylint #C0103 errors in test\_servers.py * make the migration work like we expect it to * Fixed all pep8 errors in tools/install\_venv.py. All tests pass * Added the imageRef and flavorRef attributes in the xml deserialization * Add vnc\_keymap flag and enable setting keymap for vnc console * Review changes and merge from trunk * Pep8 cleaning * Added response about error in nova-manage project operations * Removed tools/clean\_vlans and tools/nova-debug from pep8 tests as they are shell scripts * Added lines to include tools/\* (except ajaxterm) in pep8 tests * Add a unit test for snapshot\_volume * Define image state during snapshotting. Name the snapshot with the name provided, rather than generating one * Unit test for snapshotting (creating custom image) * fixed a few C0103 errors in test\_servers.py * renamed test cases to use a consistent naming convention as used in nova/tests/api/openstack/test\_images.py * fix sys.argv requirement * first cut at weighted-sum tests * merge trunk * add udev rules and modified ovs\_configure\_vif\_flows.py to work with udev rules * Adds proper error handling for images that can't be found and a test for deregister image * added |fixed\_ip\_get\_all\_by\_mac\_address| and |mac\_address\_get\_by\_fixed\_ip| to db and sqlalchemy APIs * started on integrating HostFilter * Add support for rbd snapshots * Merging in trunk * I'm assuming that openstack doesn't work with python < 2.6 here (which I read somewhere on the wiki). This patch will check to make sure python >= 2.6 is installed, and also allow it to work with python 2.7 (and greater in the future) * merge lp:nova * XenAPI was not implemented to allow for multiple simultaneous XenAPI requests. A single XenAPIConnection (and thus XenAPISession) is used for all queries.
XenAPISession's wait\_for\_task method would set self.loop for looping calls to \_poll\_task until task completion. Subsequent (parallel) calls to wait\_for\_task for another query would overwrite this. XenAPISession.\_poll\_task was pulled into the XenAPISession.wait\_for\_task method to avoid having to store self.loop * pep8 fixes * Merged trunk * volume/driver: make unit test, test\_volume, pass * Make set\_admin\_password non-blocking to API * Merged trunk * Review feedback * Lost a flag pulling from another branch. Whoops * Update the compute manager so that it breaks out of a loop if set\_admin\_password is not implemented by the driver. This avoids excessively logging NotImplementedError exceptions * Merging in Sandy's changes * Make host timeout configurable * Make set\_admin\_password non-blocking to API * volume/driver: implement basic snapshot * merge trunk * Update the compute manager so that it breaks out of a loop if set\_admin\_password is not implemented by the driver * Add init script and sysconfig file for openvswitch-nova * volume/driver: factor out lvm operation * Authors: add myself to Authors file * trunk merge * Adding zones doc into index of devref plus a bug fix for flag spellings * fixup based on Lorin's feedback * added flag lost in migration * merge trunk * pep8 * Adding basic tests for call\_zone\_method * fixed\_ip disassociate now also unsets mac\_address\_id * Make sure imports are in alphabetical order * updated previous calls referring to the flags to use the column from the networks table instead * merged from trunk * handle instance\_type\_ids that are NULL during upgrade to integers * fix for lp760921. Previously, if tune2fs failed, as it does on windows hosts, kpartx -d also failed to be called which leaves mapped partitions that retain holds on the nbd device. These holds cause the observed errors * if a LoopingCall has canceled the loop, break out early instead of sleeping any more than needed * Add a test for parallel builds. verified this test fails before this fix and succeeds after this fix * incorporated ImageNotFound instead of NotFound * merged from trunk * misc related network manager refactor and cleanup * changed NotFound exception to ImageNotFound * Update comment * Variable renaming * Add test suite for IPv6 address generation * Accept and ignore project\_id * Make it so that ExtensionRequest objects now return proper webob objects. This avoids the odd serialization code in the RequestExtensionController class which converts JSON dicts to webobs for us * merged from trunk * Remove ResponseExtensions.
The new RequestExtension covers both use cases * Initial work on request extensions * Added network\_info into refresh\_security\_group\_rules * fixed pep8 spacing issue * merge from trunk * rename quota column to 'hard\_limit' to make it simpler to avoid collisions with sql keyword 'limit' * Fix remote volume code * 1 Set default paths for nova.conf and api-paste.ini to /etc/nova/ 2 Changed countryName policy because https://bugs.launchpad.net/nova/+bug/724317 still affected * Implement IPv6 address generation that includes account identifier * messing around with the flow of create() and specs * Redundant line * changes per review * docstring cleanup, nova/network dir * make instance.instance\_type\_id an integer to support joins in postgres * merge from trunk and update .mailmap file * Merged trunk * Updated MANIFEST for template move * NoValidHost exception test * Fixes an issue with conversion of images that was introduced by exception refactoring. This makes the exceptions when trying to locate an ec2 id clearer and also adds some tests for the conversion methods * oops fixed a docstring * Pep8 stuff * Blueprint URL: https://blueprints.launchpad.net/nova/+spec/improve-pylint-scores/ * start of zone\_aware\_scheduler test * Moved everything into notifier/api * make sure proper exceptions are raised for ec2 id conversion and add tests * better function name * Updated the value of the nova-manager libvirt\_type * more filter alignment * Removed commented out 'from nova import log as logging' line, per request from Brian Lamar * merge trunk * align filters on query * better pylint scores on imports * Code cleanup * Merged trunk * Abstract out IPv6 address generation to pluggable backends * Merged trunk * First cut with tests passing * changing Authors file * removed unused wild card imports, replaced sqlalchemy wildcard import with explicit imports * removed unused wild card imports, replaced sqlalchemy wildcard import with explicit imports * Fix for #780276 (run\_tests.sh fails test\_authors\_up\_to\_date when using git repo) * extracted xenserver capability reporting from dabo's dist-scheduler branch and added tests * migrate back updated\_at correctly * added in log\_notifier for easier debugging * Add priority based queues to notifications. Remove duplicate json encoding in notifier (rpc.cast does encoding... ) make no\_op\_notifier match rabbit one for signature on notify() * Bugfix #780784. KeyError when creating custom image * removed unused wild card imports, replaced sqlalchemy wildcard import with explicit imports * removed unused wild card imports, replaced sqlalchemy wildcard import with explicit imports * removed unused wild card imports, replaced sqlalchemy wildcard import with explicit imports * Better tests * Add example * give a more informative message if pre-migration assertions fail * Whoops * fix migration bug * Pep8 * Test * remove stubbing of XenAPISession.wait\_for\_task for xenapi tests as it doesn't need to be faked.
Also removed duplicate code that stubbed xenapi\_conn.\_parse\_xmlrpc\_value * migration bug fixes * Change xenapi's wait\_for\_task to handle multiple simultaneous queries to fix lp:766404 * Added GitPython to [install\_dir]/tools/pip-requires * got rid of unnecessary imports * Enable RightAWS style signature checking using server\_string without port number, add test cases for authenticate() and a new helper routine, and fix lp753660 * Better message format description * unified underscore/dash issue * update tests to handle unlimited resources in the db * pep8 * capabilities flattened and tests fixed * Set root password upon XenServer instance creation * trunk merge * clean up unused functions from virt/images.py * Removing a rogue try/catch expecting a non-existent exception.TimeoutException that is never raised * basic test working * db: fix db versioning * fix mismerge by 1059 * volume/driver: implement basic snapshot/clone * volume/driver: factor out lvm operation * Host Filtering for Distributed Scheduler (done before weighing) * Rebased to trunk rev 1057 * Adds coverage-related packages to the tools/pip-requires to allow users to generate coverage reporting when running unit tests with virtualenv * merge from trunk * Set publish\_errors default to False * convert quota table to key-value * Simple fix for this issue. Tries to raise an exception passing in a variable that doesn't exist, which causes an error * Fixed duplicate function * Review feedback * Review feedback * Fixed method in flavors * Review feedback * Review feedback * Merged trunk * Set root password upon XenServer instance creation * Added Python packages needed for coverage reports to virtualenv packages * Added interface functions * merge from trunk * added test for show\_by\_name ImageNotFound exception * tests pass again * Sanitize get\_console\_output results. See bug #758054 * revised file docs * New author in town * Changes to allow a VM to boot from iso image. A blank HD is also attached with a size corresponding to the instance type * Added stub function for a referenced, previously non-existent function * Merged trunk * grabbed from dist-sched branch * Explicitly casted a str to a str to please pylint * Removed incorrect, unreachable code * spacing fix * pep8 fix * Improved error notification in network create * Add two whitespaces to conform to PEP8 * Publish errors via nova.notifier * Added myself to Authors file * terminology: no more plug-ins or queries. They are host filters and drivers * Added interface function to ViewBuilder * Added interfaces to server controller * added self to authors * fixed issue with non-existent variable being passed to ImageNotFound exception * removing rogue TimeoutException * merge prop fixes * Merged trunk * print statements removed * merge with trunk * flipped service\_state in ZoneManager and fixed tests * pep8 * not = * not = * and or test * and or test * merge from trunk * Removed extra newline after get\_console\_output in fake virt driver * Moved all reencoding to compute manager to satisfy both Direct API and internal cloud call * Merged with current trunk * added myself to Authors * Adding a test case to show the xml deserialization failure for imageRef and flavorRef * Fixes for nova-manage vpn list * json parser * Don't fail the test suite in the absence of VCS history * It's ok if there's no commit history.
Otherwise the test suite in the tarball will fail * Merged trunk * flavor test * Fix indentation * tests and better driver loading * Add missed hyphen * Adding OSAPI v1.1 limits resource * Adding support for server rebuild to v1.0 and v1.1 of the Openstack API * reduce policy for countryname * looking for default flagfile * adding debug log message * merging trunk * merging trunk * removing class imports * Merged trunk * Merged trunk * Moved reencoding logic to compute manager and cloud EC2 API * ensure create image conforms to OS API 1.1 spec * merge updates from trunk * Added support in the nova openstack api for requests with local hrefs, e.g., "imageRef":"2" Previously, it only supported "imageRef":"http://foo.com/images/2". The 1.1 api spec defines both approaches * Add a flag to allow the user to specify a dnsmasq configuration file for nova-network to use when starting dnsmasq. Currently the command line option is set to "--config-fil=" with nothing specified. This branch will leave it as it is if the user does not specify a config file, but will utilize the specified file if they do * merged from trunk * implemented review suggestion EAFP style, and fixed test stub fake\_show needs to have image\_state = available or other tests will fail * got rid of extra whitespace * Update tools/pip-requires and tools/install\_venv.py for python2.7 support (works in ubuntu 11.04) * No need to test length of admin password in local href test * merging trunk; resolving conflicts; fixing issue with ApiError test failing since r1043 * Added support in osapi for requests with local hrefs, e.g., "imageRef":"2" * initial pass * Implement get\_host\_ip\_addr in the libvirt compute driver * merging trunk; resolving conflicts * Modified the instance status returned by the OS api to more accurately represent its power state * Fixed 2 lines to allow pep8 check to pass * Since run\_tests.sh utilizes nose to run its tests, the -x, --stop flag works correctly for halting tests on the first failed test.
The usage information for run\_tests.sh now includes the --stop flag * add support for git checking and a default of failing if the history can't be read * ApiError 'code' arg set to None, and will only display a 'code' as part of the str if specified * Fixed: Check for use of IPv6 missing * removed unused method and fixed imports * Change the links in the sidebar on the docs pages * Use my\_ip for libvirt version of get\_host\_ip\_addr * fix typo in import * removed unused method and fixed imports * small changes in libvirt tests * place ipv6\_rules creation under if ip\_v6 section * Added checking ip\_v6 flag and test for it * merging trunk * adding view file * Expose AuthManager.list\_projects user filter to nova-manage * Final cleanup of nova/exceptions.py in my series of refactoring branches * Uses memcached to cache roles so that ldap is actually usable * added nova version to usage output of bin/nova-manage for easy identification of installed codebase * Changing links in sidebar to previous release * Rebased to trunk rev 1035 * converted 1/0 comparison in db to True/False for Postgres cast compatibility * Changed test\_cloud and fake virt driver to show out the fix * converted 1/0 comparison to True/False for Postgres compatibility * pep8 * fixed docstring per jsb * added version list command to nova-manage * Added more unit tests for multi-nic-nova libvirt * Sanitize get\_console\_output in libvirt\_conn * added nova version output to usage printout for nova-manage * Make the import of distutils.extra non-mandatory in setup.py. Just print a warning that i18n commands are not available.. * Correcting exception case * further cleanup of nova/exceptions.py * added eager loading of mac addresses for instance * merge with trunk and resolve conflicts * Added myself to authors file * pep8 fixes * Refactoring usage of nova.exception.NotFound * Let nova-manage limit project list by user * merging trunk * Make the import of distutils.extra non-mandatory in setup.py. Just print a warning that i18n commands are not available.. * Updated run\_tests.sh usage info to reflect the --stop flag * Fixed formatting to align with PEP 8 * Modified instance status for shutoff power state in OS api * Refactoring the usage of nova.exception.Duplicate * Rebased to trunk rev 1030 * removed extra newline * merged from trunk * updated tests to reflect serverRef as href (per Ilya Alekseyev) and refactored \_build\_server from ViewBuilder (per Eldar Nugaev) * Add a test checking spawn() works when network\_info is set, which currently doesn't. The following patch would fix parameter mismatch calling \_create\_image() from spawn() in libvirt\_conn.py * removed unused imports and renamed template variables * pep8 * merging trunk * Renamed test\_virt.py to test\_libvirt.py as per suggestion * fixing bad merge * Merged trunk and fixed simple exception conflict * merging trunk * Refactoring nova.exception.Invalid usage * adding gettext to setup.py * Use runtime XML instead of VM creation time XML for createXML() call in order to ensure volumes are attached after RebootInstances as a workaround, and fix bug #747922 * Created new libvirt directory, moved libvirt\_conn.py to libvirt/connection.py, moved libvirt templates, broke out firewall and network utilities * Rebased to trunk rev 1027, and resolved a conflict in nova/virt/libvirt\_conn.py * Rebased to trunk rev 1027 * clarifies error when trying to add duplicate instance\_type names or flavorids via nova-manage instance\_type * merge trunk * Rework completed.
Added test cases, changed helper method name, etc * pep8 * merge trunk, resolved conflict * merge trunk * Abstracted libvirt's lookupByName method into \_lookup\_by\_name * Provide option of auto assigning floating ip to each instance. Depends on the auto\_assign\_floating\_ip boolean flag value. False by default * Fixes per review * Restore volume state on migration failure to fix lp742256 * Fixes cloudpipe to get the proper ip address * merging trunk * Fix bug with content-type and small OpenStack API actions refactor * merge with trunk * merge trunk * merged trunk * -Fixed indent for \_get\_ip\_version -Added LoopingCall to destroy as suggested by earlier bug report -Standardized all LoopingCall uses to include useful logging and better error handling * Create a dictionary of instance\_types before executing SQL updates in the instance\_type\_id migration (014). This should resolve a "cannot commit transaction - SQL statements in progress" error with some versions of sqlite * create network now takes bridge for flat networks * Adapt DescribeInstances to EC2 API spec * Change response of the EC2 API CreateVolume method to match the API docs for EC2 * Merged trunk and fixed api servers conflict * pep8 * Fixes and reworkings based on review * pep8 * Addressing exception.NotFound across the project * fix logging in reboot OpenStack API * eager loaded mac\_address attributes for mac address get functions * updated image builder and tests for OS API 1.1 compatibility (serverRef) * forgot import * change action= to actions= * typo * forgot to save * moved get\_network\_topic to network.api * style cleaning * Fixed network\_info creation in libvirt driver. Now creating same dict as in xenapi driver * Modified instance status for shutdown power state in OS api * rebase trunk * altered imports * commit to push for testing * Rebased to trunk rev 1015 * Utility method reworked, etc * Docstring cleanup and formatting (nova/image dir). Minor style fixes as well * Docstring cleanup and formatting (nova/db dir). Minor style fixes as well * Docstring cleanup and formatting (nova dir). Minor style fixes as well * use vpn filter in basic filtering so cloudpipe works with iptables driver * use simpler interfaces * Docstring cleanup and formatting (console). Minor style fixes as well * Docstring cleanup and formatting (compute). Minor style fixes as well * merge trunk * Add privateIpAddress and ipAddress to EC2 API DescribeInstances response * style fixing * Fix parameter mismatch calling \_create\_image() from spawn() in libvirt\_conn.py * Add a test checking spawn() works when network\_info is set, which currently doesn't. The following patch would fix it * put up and down in the right dir * Makes metadata correctly display kernel-id and ramdisk-id * pep8 cleaning * style fix * revert changes that don't affect the bug * in doesn't work properly on instance\_ref * Another small round of pylint clean-up * Added an option to run\_tests.sh so you can run just pep8. So now you can: ./run\_tests.sh --just-pep8 or ./run\_tests.sh -p * merge trunk * fix display of vpn instance id and add output rule so it can be tested from network host * Exit early if tests fail, before pep8 is run * more changes per review * fixes per review * docstring cleanup, nova/image dir * Docstring cleanup and formatting.
Minor style fixes as well * cleanups per code review * docstring cleanup, nova dir * fixed indentation * docstring cleanup, console * docstring cleanup, nova/db dir * attempts to make the docstring rules clearer * fix typo * docstring cleanup compute manager * bugfix signature * refactor the way flows are deleted/reset * remove ambiguity in test * Pylinted nova-compute * Pylinted nova-manage * replaced regex with webob.Request.content\_type * fix after review: style, improving tests, replacing underscore * merge with trunk * fix Request.get\_content\_type * Reverted bad merge * Rebased to trunk rev 1005 * Removed no longer relevant comment * Removed TODO we don't need * Removed \_ and replaced with real variable name * instance type get approach changed. tests fixed * Merged trunk * trunk merged * fix: mark floating ip as auto assigned * Add to Authors * Change response format of CreateVolume to match EC2 * revamped spacing per Rick Harris suggestion. Added exact error to nova-manage output * only apply ipv6 if the data exists in xenstore * Create a dictionary of instance\_types before executing SQL updates in the instance\_type\_id migration (014). This should resolve a "cannot commit transaction - SQL statements in progress" error with some versions of sqlite * add support for git checking and a default of failing if the history can't be read * strip output, str() link local * merging lp:~rackspace-titan/nova/exceptions-refactor-invalid * Round 1 of pylint cleanup * Review feedback * Implement quotas for the new v1.1 server metadata controller * fix doc typo * fix logging in reboot OpenStack API * make geninter.sh use the right tmpl file * pep8 fix * refactoring usage of exception.Duplicate errors * rename all versions of image\_ec2\_id * Abstracted lookupByName calls to \_lookup\_by\_name for centralized error handling * actually use the ec2\_id * remove typo * merging lp:~rackspace-titan/nova/exceptions-refactor-invalid * Fixes cloudpipe to get the proper ip address * add include file for doc interfaces * add instructions for setting up interfaces * Merged trunk and fixed small comment * Fixed info messages * Tweak to destroy loop logic * Pretty critical spelling error * Removed extra calls in exception handling and standardized the way LoopingCalls are done * one last i18n string * Merged trunk * multi-line string spacing * removing rogue print * moving dynamic i18n to static * refactoring * Add support for cloning a Sheepdog volume * Add support for cloning a Sheepdog volume * Add support for creating a new volume from an existing snapshot with EC2 API * Add support for creating a new volume from an existing snapshot with EC2 API * Add support for creating a Sheepdog snapshot * Add support for creating a Sheepdog snapshot * Add support for creating a snapshot of a nova volume with euca-create-snapshot * Add support for creating a snapshot of a nova volume with euca-create-snapshot * trunk merged * Implement get\_host\_ip\_addr in the libvirt compute driver * Adding projectname username to the nova-manage project commands to fix a doc bug, plus some edits and elimination of a few doc todos * pep8 fixes * Remove zope.interface from the requires file since it is not used anywhere * use 'is not None' instead of '!= None' * Fix logging in server creation in OpenStack API 1.0 * Support admin password when specified in server create requests * First round of pylint cleanup * merge lp:nova and resolve conflicts * Change '== None' to 'is None' * remove zope.interface requires * use 'is not None'
instead of '!= None' * pep8 fixes * Change '== None' to 'is None' * Fixes nova-manage image convert when the source directory is the same one that local image service uses * trunk merged * pep8 fixed * calc link local * not performing floating ip operation with auto allocated ips * it is rename not move * pep8 fix * Rebased to trunk rev 995 * Rebased to trunk rev 995 * merge trunk * add fault as response * Fix logging in openstack api * Fix logging in openstack api * Fix logging in openstack api * trunk merged. conflict resolved * trunk merged. conflict resolved * The change to utils.execute's call style missed this call somehow, this should get libvirt snapshots working again * Fix parameter mismatch calling to\_xml() from spawn() in libvirt\_conn.py * move name into main metadata instead of properties * change libvirt snapshot to new style execute * Add additional logging for WSGI and OpenStack API authentication * Rename the id * Added period to docstring for metadata test * Merged trunk * Empty commit to hopefully regenerate launchpad diff * Explicitly tell a user that they need to authenticate against a version root * Merged trunk * merging trunk * adding documentation & error handling * correcting tests; pep8 * Removed the unused self.interfaces\_xml variable * Only poll for instance states that compute should care about * Diablo versioning * Diablo versioning * Rebased to trunk rev 989 * Rebased to trunk rev 989 2011.2 ------ * Final versioning for Cactus * initial roundup of all 'exception.Invalid' cases * merge trunk * set the bridge on each OvsFlow * merge with trunk * bugfix * bugfix * Fix parameter mismatch calling to\_xml() from spawn() in libvirt\_conn.py * add kvm-pause and kvm-suspend 2011.2rc1 --------- * Rework GlanceImageService.\_translate\_base() to not call BaseImageService.\_translate\_base() otherwise the wrong class attributes are used in properties construction.. * Updated following to Rick's comments * Rebased to trunk rev 987 * Rework GlanceImageService.\_translate\_base() to not call BaseImageService.\_translate\_base() otherwise the wrong class attributes are used in properties construction.. * Try to be nicer to the DB when destroying a libvirt instance * pep8 * merge trunk * fixed error message i18n-ization. added test * Don't hammer on the DB * Debug code clean up * Rebased to trunk rev 986 * An ultimate workaround worked... :( * Zero out volumes during deletion to prevent data leaking between users * Minor formatting cleanup * jesse@aire.local to mailmap * Changed pep8 command line option from --just-pep8 to --pep8 * re-add broken code * merge trunk * Final versioning * Updates the documentation on creating and using a cloudpipe image * iSCSI/KVM test completed * Minor fixes * Fix RBDDriver in volume manager. discover\_volume was raising an exception. Modified local\_path as well * Fixes VMware Connection to inherit from ComputeDriver * Fixes s3.py to allow looking up images by name. Smoketests run unmodified again with this change! * move from try\_execute to \_execute * Make VMWare Connection inherit from ComputeDriver * add up and down .sh * fix show\_by\_name in s3.py and give a helpful error message if image lookup fails * remove extra newline * dots * Rebased to trunk rev 980 * Rework importing volume\_manager * Blushed up a little bit * Merged trunk * Only warn about rogue instances that compute should know about * Added some tests * Dangerous whitespace mistake!
:) * Cleanup after prereq merge * Add new flag 'max\_kernel\_ramdisk\_size' to specify a maximum size of kernel or ramdisk so we don't copy large files to dom0 and fill up /boot/guest * Rebased to trunk rev 980 * Merged lp:~rackspace-titan/nova/server\_metadata\_quotas as a prereq * Merged trunk * Docstring cleanup and formatting. Minor style fixes as well * Updated to use setfacl instead of chown * Commit for merge of metadata\_quotas prereq * merge trunk * Removed extra call from try/except * Reverted some superfluous changes to make MP more concise * Merged trunk * Reverted some superfluous changes to make MP more concise * Replace instance ref from compute.api.get\_all with one from instance\_get. This should ensure it gets fully populated with all the relevant attributes * Add a unit test for terminate\_instances * pep8 * Fix RBDDriver in volume manager. discover\_volume was raising an exception. Modified local\_path as well * pep8 fixes * migration and pep8 fixes * update documentation on cloudpipe * Makes genvpn path actually refer to genvpn.sh instead of geninter.sh * typo * Merged trunk * Updating the runnova information and fixing bug 753352 * merge trunk * network manager changes, compute changes, various other * Floating ips auto assignment * Sudo chown the vbd device to the nova user before streaming data to it. This resolves an issue where nova-compute required 'root' privs to successfully create nodes with connection\_type=xenapi * Minor blush ups * A minor blush up * A minor blush up * Remove unused self.interfaces\_xml * Rebased to trunk rev 977 * Rebase to trunk rev 937 * debug tree status checkpoint 2 * docstring cleanup, direct api, part of compute * bzr ignore the top level CA dir that is created when running 'run\_tests.sh -N' * fix reference to genvpn to point to the right shell script * Set default stateOrProvince to 'supplied' in openssl.cnf.tmpl * merge trunk * This branch fixes https://bugs.launchpad.net/bugs/751231 * Replace instance ref from compute.api.get\_all with one from instance\_get. This should ensure it gets fully populated with all the relevant attributes * When using libvirt, remove the persistent domain definition when we call destroy, so that behavior on destroy is as it was when we were using transient instances * Rebased to trunk rev 973 * Currently terminating an instance will hang in a loop, this allows for deletion of instances when using a libvirt backend. Also I couldn't help add a debug log where an exception is caught and ignored * merge trunk * resolved lazy\_match conflict between bin/nova-manage instance and instance\_type by moving instance subcommand under vm command. documented vm command in man page. removed unused instance\_id from vm list subcommand * Ooops - redefining the \_ variable seems like a \_really\_ bad idea * Handle the case when the machine is already SHUTOFF * Split logic on shutdown and undefine, so that even if the machine is already shutdown we will be able to proceed * Remove the XML definition when we destroy a machine * Rebased to trunk rev 971 * debug tree status checkpoint * Rebased to trunk rev 971 * Fixed log message gaffe * pylintage * typo - need to get nova-volumes working on this machine :-/ * dd needs a count to succeed, and remove unused/non-working special case for size 0 * There is a race condition when a VDI is mounted and the device node is created.
Sometimes (depending on the configuration of the Linux distribution) nova loses the race and will try to open the block device before it has been created in /dev * zero out volumes on delete using dd * Added RST file on using Zones * Fixes euca-attach-volume for iscsi using Xenserver * pep8 * merge trunk * removes log command from nova-manage as it no longer worked in multi-log setup * Added error message to exception logging * Fixes bug which hangs nova-compute when terminating an instance when using libvirt backend * missing 'to' * Short circuit non-existent device during unit tests. It won't ever be created because of the stubs used during the unit tests * Added a patch for python eventlet, when using install\_venv.py (see FAQ # 1485) * fixed LOG level and log message phrase * merge prop tweaks 2 * Set default stateOrProvince to 'supplied' in openssl.cnf.tmpl * This branch fixes https://bugs.launchpad.net/nova/+bug/751242 * Ignore errors when deleting the default route in the ensure\_bridge function * bzr ignore the CA dir * merge prop tweaks * Import translations from Launchpad * added Zones doc * Update the describe\_image\_attribute and modify\_image\_attribute functions in the EC2 API so they use the top level 'is\_public' attribute of image objects. This brings these functions in line with the base image service * Import from lp:~nova-core/nova/translations * corrects incorrect openstack api responses for metadata (numeric/string conversion issue) and image format status (not uppercase) * Implement a mechanism to enforce a configurable quota limit for image metadata (properties) within the OS API image metadata controller * Update the describe\_image\_attribute and modify\_image\_attribute functions in the ec2 API so they use the top level 'is\_public' attribute of image objects. This brings these functions in line with the base image service * Ignore errors when deleting the default route in the ensure\_bridge function * merge trunk * removed log command from nova-manage. no longer applicable with multiple logfiles * merge trunk * remind admins of --purge option * Fixes issues with describe instances due to improperly set metadata * Keep guest instances when libvirt host restarts * fix tests from moving access check into update and delete * Added support for listing addresses of a server in the openstack api. Now you can GET \* /servers/1/ips \* /servers/1/ips/public \* /servers/1/ips/private Supports v1.0 json and xml. Added corresponding tests * Log libvirt errcode on exception * This fixes how the metadata and addresses collections are serialized in xml responses * Fix to correct libvirt error code when the domain is not found * merged trunk * Removed commented-out old 'delete instance on SHUTOFF' code * Automatically add the metadata address to the network host. This allows guests to ARP for the address properly * merged trunk and resolved conflict * slight typo * clarified nova-manage instance\_type create error output on duplicate flavorid * This branch is a patch fixing the issue below. > Bug #746821: live\_migration failing due to network filter not found Link a bug report * fix pep8 violation * Update instances table to use instance\_type\_id instead of the old instance\_type column which represented the name (ex: m1.small) of an instance type * Drop extra 'None' arg from dict.get call * Some i18n fixes to instance\_types * Renamed computeFault back to cloudServersFault in an effort to maintain consistency with the 1.0 API spec.
We can look into distinguishing the two in the next release. Held off for now to avoid potential regression * adds a timeout on session.login\_with\_password() * Drop unneeded Fkey on InstanceTypes.id * Bypass a potential security vulnerability by not setting shell=True in xenstore.py, using johannes.erdfelt's patch * Renamed computeFault to cloudServersFault * fixed the way ip6 addresses were retrieved/returned in \_get\_network\_info in nova/virt/xenapi/vmops * added -manage vm [list|live-migration] to man page * removed unused instance parameter from vm list ... as it is unused. added parameters to docstring for vm list * moved -manage instance list command to -manage vm list to avoid lazy match conflict with instance\_types * Simplify by always adding to loopback * Remove and from AllocateAddress response, and fix bug #751176 * remove unused code * better error message * Blush up a bit * Rebased to trunk rev 949 * pep8 * adds timeout to login\_with\_password * test provider fw rules at the virt/iptables layer. lowercase protocol names in admin api to match what the firewall driver expects. add provider fw rule chain in iptables6 as well. fix a couple of small typos and copy-paste errors * fixed based on reviewer's comment - 1. erase unnecessary blank line, 2. adding LOG.debug * Rebased to trunk rev 949 * fixed based on reviewer's comment - 'locals() should be off from \_() * Make description of volume\_id more generic * add the tests * pep8 cleanup * ApiError code should default to None, and will only display a code if one exists. Previously it output an 'ApiError: ApiError: error message' string, which is confusing * ec2 api run\_instances checks that image status is 'available'. Overhauled test\_run\_instances for working set of test assertions * if we delete the old route when we move it we don't need to check for exists * merged trunk * removed comment on API compliance * Added an option to run\_tests.sh so you can run just pep8. So now you can: ./run\_tests.sh --just-pep8 or ./run\_tests.sh -p * Add automatic metadata ip to network host on start. Also fix race where gw is readded twice * Controllers now inherit from nova.api.openstack.common.OpenstackController * Merged trunk * Support providing an XML namespace on the XML output from the OpenStack API * Merged with trunk, fixed up test that wasn't checking namespace * Added support for listing addresses of a server in the openstack api. Now you can GET \* /servers/1/ips \* /servers/1/ips/public \* /servers/1/ips/private Supports v1.0 json and xml.
Added corresponding tests * check visibility on delete and update * YADU (Yet Another Docstring Update) * Make sure ca\_folder is created before chdir()ing into it * another syntax error * Use a more descriptive name for the flag to make it easier to understand the purpose * Added logging statements for generic WSGI and specific OpenStack API requests * syntax error * Incorprate johannes.erdfelt's patch * updated check\_vm\_record in test\_xenapi to check the gateway6 correctly * updated get\_network\_info in libvirt\_conn to correctly insert ip6s and gateway6 into the network info, also small style fixes * add docstrings * updated \_prepare\_injectables() to use info[gateway6] instead of looking inside the ip6 address dict for the gateway6 information * Enable RightAWS style signing on server\_string without port number portion * modified behavior of inject\_network\_info and reset\_network related to a vm\_ref not being passed in * Create ca\_folder if it does not already exist * Wait for device node to be created after mounting image VDI * Improved unit tests Fixed docstring formatting * Only create ca\_path directory if it does not already exist * Added bug reference * Only create ca\_path directory if it does not already exist * Make "setup.py install" much more thorough. It now installs tools/ into /usr/share/nova and makes sure api-paste.conf lands in /etc/nova rather than /etc * fixed based on reviwer's comment * return image create response as image dict * Add a patch for python eventlet, when using install\_venv.py (see FAQ # 1485) * Undo use of $ in chain name where not needed * Testing for iptables manager changes * Don't double-apply provider fw rules in NWFilter and Iptables. Don't create provider fw rules for each instance, use a chain and jump to it. Fix docstrings * typo * remove -None for user roles * pep8 * fallback to status if image\_state is not set * update and fix tests * unite the filtering done by glance client and s3 * Removing naughty semicolon * merged trunk * remove extraneous empty lines * move error handling down into get\_password function * refactor to handle invalid adminPass * fixed comment * merged trunk * add support for specifying adminPass for JSON only in openstack api 1.1 * add tests for adminPass on server create * Fix a giant batch of copypasta * Remove file leftover from conflict * adding support for OSAPI v1.1 limits resource * Moved 'name' from to , corrected and fixes bug # 750482 * This branch contains the fix for lp:749973. VNC is assumed that is default for all in libvirt which LXC does not support yet * Remove comments * Separate CA/ dir into code and state * removed blank lines for pep8 fix * pep8 fixed * Fixed the addresses and metadata collections in xml responses. Added corresponding tests * Dont configure vnc if we are using lxc * Help paste\_config\_file find the api config now that we moved it * Add bug reference * Move api-paste.ini into a nova/ subdir of etc/ * Add a find\_data\_files method to setup.py. 
Use it to get tools/ installed under /usr/(local/)/share/nova * Nits * Add missing underscore * fix bug lp751242 * fix bug lp751231 * Automatically create CA state dir, and make sure the CA scripts look for the templates in the right places * fix bug 746821 * Remove and from AllocateAddress response, and fix bug #751176 * Allow CA code and state to be separated, and make sure CA code gets installed by setup.py install * Rebased to trunk 942 * fix bug lp:682888 - DescribeImages has no unit tests * Correct variable name * correct test for numeric/string metadata value conversion * openstack api metadata responses must be strings * openstack api requires uppercase image format status responses * merge trunk * Refactor so that instances.instance\_type is now instances.instance\_type\_id * splitting test\_get\_nic\_for\_xml into two functions * Network injection check fixed in libvirt driver * merging trunk * fixing log message * working with network\_ref like with mapping * add test for NWFilterFirewall * Removed adminclient.py and added reference to the new nova-adminclient project in tools/pip-requires * Don't prefix adminPass with the first 4 chars of the instance name * Declares the flag for vncproxy\_topic in compute.api * Fixes bug 741246. Ed Leafe's inject\_file method for the agent plugin was mistakenly never committed after having to fix commits under wrong email address. vmops makes calls to this (previously) missing method * Attempt to circumvent errors in the API from improper/malformed responses from image service * fixes incorrect case of OpenStack API status response * Fixed network\_info creating * Moved 'name' property from to , corrected and fixes bug # 750482 * corrected capitalization of openstack api status and added tests * libvirt\_con log fix * Ensure no errors for improper responses from image service * merge trunk * Fixes error which occurs when no name is specified for an image * improving tests * network injection check fixed * Only define 'VIMMessagePlugin' class if suds can be loaded * Make euca-get-ajax-console work with Euca2ools 1.3 * Add bug reference * Use keyword arguments * add multi\_nic\_test * added preparing\_xml test * split up to\_xml to creation xml\_info and filling the template * use novalib for vif\_rules.py, fix OvsFlow class * extract execute methods to a library for reuse * Poller needs to check for BUILDING not NOSTATE now, since we're being more explict about what is going on * Add checking if the floating\_ip is allocated or not before appending to result array in DescribeAddresses * Added synchronize\_session parameter to a query in fixed\_ip\_disassociate\_all\_by\_timeout() and fix #735974 * Made the fix simpler * Add checking if the floating\_ip is allocated or not before appending to result array * Added updated\_at field to update statement according to Jay's comment * change bridge * Add euca2ools import * Rebased to trunk 930 * Rebased to trunk 726 * lots of updates to ovs scripts * Make euca-get-ajax-console work with Euca2ools 1.3 * merge trunk * Hopefully absolved us of the suds issue? 
* Removes excessive logging message in the event of a rabbitmq failure
* Add a change password action to /servers in openstack api v1.1, and associated tests
* Removal of instance\_set\_state from driver code, it shouldn't be there, but instead should be in the compute manager
* Merged trunk
* Don't include first 4 chars of instance name in adminPass
* Friendlier error message if no compute nodes are available
* merge lp:nova
* Merged waldon
* Adding explanation keyword to HTTPConflict
* Merged waldon
* makes sure s3 filtering works even without metadata set properly
* Merged waldon
* Didn't run my code. Syntax error :(
* Now using the new power state instead of string
* adding servers view mapping for BUILDING power state
* removes excessive logging on rabbitmq failure
* Review feedback
* Friendlier error message if no compute nodes are available
* Merged with Waldon
* Better error handling for spawn and destroy in libvirt
* pep8
* adding 'building' power state; testing for 409 from OSAPI when rebuild requested on server being rebuilt
* More friendly error message
* need to support python2.4, so can't use uuid module
* If the floating ip address is not allocated or is allocated to another project, then the user trying to associate the floating ip address to an instance should get a proper error message
* Update state between delete and spawn
* adding metadata support for v1.1
* Rebuild improvements
* Limit image metadata to the configured metadata quota for a project
* Add volume.API.remove\_from\_compute instead of compute.API.remove\_volume
* Rebased to trunk rev 925
* Removed adminclient and referred to pypi nova\_adminclient module
* fixed review comment for i18n string: multiple replacement strings need to use dictionary format
* fixed review comment for i18n string: multiple replacement strings need to use dictionary format
* Add obviously-missing method that prevents a Hyper-V compute node from even starting up
* Avoid any hard dependencies in nova.virt.vmwareapi.vim
* review cleanup
* Handles situation where Connection.\_instances doesn't exist (i.e. production)
* localize NotImplementedError()
* Change '"%s" % e' to 'e'
* Fix for LP Bug #745152
* Merged waldon
* adding initial v1.1 rebuild action support
* Add ed leafe's code for the inject\_file agent plugin method that somehow got lost (fixes bug 741246). Update TimeoutError string for i18n
* submitting a unit test for terminate\_instance
* Update docstrings and spacing
* fixed ordering and spacing
* removed trailing whitespace
* updated per code review, replaced NotFound with exception.NotFound
* Merged Waldon's API code
* remove all references to image\_type and change nova-manage upload to set container format more intelligently
* Rough implementation of rebuild\_instance in compute manager
* adding v1.0 support for rebuild; adding compute api rebuild support
* Key type values in ec2\_api off of container format
* Whoops
* Handle in vim.py
* Refixed unit test to check XML ns
* Merged with trunk (after faults change to return correct content-type)
* OpenStack API faults have been changed to now return the appropriate Content-Type header
* More tests that were checking for no-namespace
* Some tests actually tested for the lack of a namespace :-)
* pep8 fixes
* Avoid hard dependencies
* Implement quotas for the new v1.1 server metadata controller. Modified the compute API so that metadata is a dict (not an array) to ensure we are using unique key values for metadata. This isn't explicit in the SPECs but it is implied by the new v1.1 spec since PUT requests modify individual items
* Add XML namespaces to the OpenStack API
* Merged with trunk
* Fixed mis-merge: OS API version still has to be v1.1
* Store socket\_info as a dictionary rather than an array
* Merged with trunk
* Added synchronize\_session parameter to a query in fixed\_ip\_disassociate\_all\_by\_timeout() and fix #735974
* Key was converted through str() even if None, resulting in "None" being added to authorized\_keys when no key was specified
* queues properly reconnect if rabbitmq is restarted
* Moving server update adminPass support to be v1.0-specific. OS API servers update tests actually assert and pass now. Enforcing server name being a string of length > 0
* Adding Content-Type code to openstack.api.versions.Versions wsgi.Application
* Fixes metadata for ec2\_api to specify owner\_id so that it filters properly
* Makes the image decryption code use the per-project private key to decrypt uploaded images if use\_project\_ca is set. This allows the decryption code to work properly when we are using a different ca per project
* exception -> Fault
* Merged trunk
* Do not push 'None' to authorized\_keys when no key is specified
* Add missing method that prevented HyperV compute nodes from starting up
* TopicAdapterConsumer uses a different callback model than TopicConsumer. This patch updates the console proxy to use this pattern
* merge trunk
* Uses the proc filesystem to check the volume size in volume smoketests so that it works with a very limited busybox image
* merged trunk
* The VNC Proxy is an OpenStack component that allows users of Nova to access their instances through a websocket enabled browser (like Google Chrome)
* make sure that flag is there in compute api
* fix localization for multiple replacement strings
* fix doc to refer to nova-vncproxy
* Support for volumes in the OpenStack API
* Deepcopy the images, because the string formatting transforms them in-place
* name, created\_at, updated\_at are required
* Merged with trunk
* "Incubator" is no more. Long live "contrib"
* Rename MockImageService -> FakeImageService
* Removed unused super\_verbose argument left over from previous code
* Renamed incubator => contrib
* Wipe out the bad docstring on get\_console\_pool\_info
* use project key for decrypting images
* Fix a docstring
* Found a better (?) docstring from get\_console\_pool\_info
* Change volume so that it returns attachments in the same format as is used for the attachment object
* Removed commented-out EC2 code from volumes.py
* adding unit tests for describe\_images
* Fix unit test to reflect fact that instance is no longer deleted, just marked SHUTOFF
* Narrowly focused bugfix - don't lose libvirt instances on host reboot or if they crash
* fix for lp742650
* Added missing blank line at end of multiline docstring
* pep8 fixes
* Reverted extension loading tweaks
* conversion of properties should set owner as owner\_id not owner
* add nova-vncproxy to setup.py
* clarify test
* add line
* incorporate feedback from termie
* Make dnsmasq\_interface configurable
* Stop nova-manage from reporting an error every time. Apparently except: catches sys.exit(0)
* add comment
* switch cast to a call
* move functions around
* move flags per termie's feedback
* initial unit test for describe images
* don't print the error message on sys.exit(0)
* added blank lines in between functions & removed the test\_describe\_images (was meant for a diff bug lp682888)
* Make Dnsmasq\_interface configurable
* fix flag names
* Now checking that at least one network is marked injected (libvirt and xenapi)
* This branch adds support for linux containers (LXC) to nova. It uses the libvirt LXC driver to start and stop the instance
* use manager pattern for auth token proxy
* Style fixes
* style fix
* Glance used to return None when a date field wasn't set, now it returns ''. Glance used to return dates in format "%Y-%m-%dT%H:%M:%S", now it returns "%Y-%m-%dT%H:%M:%S.%f"
* Fix up docstring
* Added content\_type to OSAPI faults
* accidentally dropped a sentence
* Added checks that at least one network is marked injected in libvirt and xenapi
* Adds support for versioned requests on /images through the OpenStack API
* Import order
* Switch string concat style
* adding xml test case
* adding code to explicitly set the content-type in versions controller; updating test
* Merged trunk
* Added VLAN networking support for XenAPI
* pep8
* adding server name validation to create method; adding tests
* merge lp:nova
* use informative error messages
* adding more tests; making name checks more robust
* merge trunk
* Fix pep8 error
* Tweaking docstrings just in case
* Catch the error that mount might throw a bit better
* sorted pep8 errors that were introduced during previous fixes
* merge trunk
* make all openstack status uppercase
* Add remove\_volume to compute API
* Pass along the nbd flags although we don't support it just yet
* cleaned up var name
* made changes per code review: 1) removed import of image from objectstore 2) changed to comments instead of triple quotes
* Displays an error message to the user if an exception is raised. This is vital because if logfile is set, the exception shows up in the log and the user has no idea something went wrong
* Yet more docstring fixes
* More style changes
* Merged with trunk
* Multi-line comments should end in a blank line
* add note per review
* More fixes to keep the stylebot happy
* Cleaned up images/fake.py, including move to Duplicate exception
* Code cleanup to keep the termie-bot happy
* displays an error message if a command fails, so that the user knows something went wrong
* Fixes volume smoketests to work with ami-tty
* address some of termie's recommendations
* add period, test github
* pep8
* osapi servers update tests actually assert now; enforcing server name being a string of length > 0; moving server update adminPass support to be v1.0-specific
* Moving shared\_ip\_groups controller to APIRouterV10. Replacing all shared\_ip\_groups controller code with HTTPNotImplemented. Adding shared\_ip\_groups testing
* fix docstrings
* Merged trunk
* Updated docstrings to satisfy
* Updated docstrings to satisfy
* merge trunk
* merge trunk
* minor fix and comment
* style fixes
* merging trunk
* Made param descriptions sphinx compatible
* Toss an \_\_init\_\_ in the test extensions dir. This gets it included in the tarball
* pep8
* Fix up libvirt.xml.template
* This fixes EC2 API so that it returns image displayName and description properly
* merged from trunk
* Moving backup\_schedule route out of base router to OS API v1.0. All controller methods return HTTPNotImplemented to prevent further confusion. Correcting tests that referred to incorrect url
* Fixed superfluous parentheses around locals()
* Added image name and description mapping to ec2 api
* use self.flags in virt test
* Fixed DescribeUser in the ec2 admin client to return None instead of an empty UserInfo object
* Remove now useless try/except block
* Don't make the test fail
* backup\_schedule tests corrected; controller moved to APIRouterV10; making controller fully HTTPNotImplemented
* when the image\_id provided cannot be found, return a more informative error message
* Adds support for snapshotting (to a new image) in the libvirt code
* merge lp:nova
* More pep8 corrections
* adding shared\_ip\_groups testing; replacing all shared\_ip\_groups controller code with HTTPNotImplemented; moving shared\_ip\_groups controller to APIRouterV10
* Merged trunk
* pep8 whitespace
* Add more unit tests for lxc
* Decided not to break the old format, so this should work with the way Glance used to work and the way Glance works now. The best of both worlds?
* update glance params per review
* add snapshot support for libvirt
* HACKING update for docstrings
* merge trunk
* Fix libvirt merge mistake
* lock down requirements for change password
* merge trunk
* Changed TopicConsumer to TopicAdapterConsumer in bin/nova-ajax-console-proxy to allow it to start up once again
* style changes
* Removed iso8601 dep from pip-requires
* Merged trunk
* Removed extra dependency as per suggestion; although it fixes the issue much better IMO, we should be safe sticking with the format from python's isoformat()
* Assume that if we don't find a VM for an instance in the DB, and the DB state is NOSTATE, that the db instance is in the process of being spawned, and don't mark it SHUTOFF
* merge with trunk
* Added MUCH more flexible iso8601 parser dep for added stability
* Fix formatting of TODO and NOTE - should be a space after the #
* merge lp:nova
* Mixins for tests confuse pylint no end, and aren't necessary... you can stop the base-class from being run as a test by prefixing the class name with an underscore
* Merged the two periodic\_tasks functions that snuck in due to parallel merges in compute.manager
* Start up nova-api service on an unused port if 0 is specified. Fixes bug 744150
* Removed 'is not None' to do more general truth-checking. Added rather verbose testing
* Merged with trunk
* merge trunk
* merge trunk, fixed conflicts
* TopicConsumer -> TopicAdapterConsumer
* Fix typo in libvirt xml template
* Spell "warn" correctly
* Updated Authors file
* Removed extraneous white space
* Add friendlier message if an extension fails to include a correctly named class or factory
* addressed reviewers' concerns
* addressed termie's review (third round)
* addressed termie's review (second round)
* Do not load extensions that start with a "\_"
* addressed termie's review (first round)
* Clarified note about scope of the \_poll\_instance\_states function
* Fixed some format strings
* pep8 fixes
* Assume that if we don't find a VM for an instance in the DB, and the DB state is NOSTATE, that the db instance is in the process of being spawned
* pep8 fixes
* Added poll\_rescued\_instances to virt driver base class
* There were two periodic\_tasks functions, due to parallel merges in compute.manager
* pep8 fixes
* Bunch of style fixes
* Fix utils checking
* use\_ipv6 now passed to interfaces.template as a first-level variable in libvirt\_conn
* Replaced import of an object with module import as per suggestion
* Updates to the newest version of nova.sh, which includes: \* Installing new python dependencies \* Allows for use of interfaces other than eth0 \* Adds a run\_detached mode for automated testing
* Now that it's an extension, it has to be v1.1. Also fixed up all the things that changed in v1.1
* merge trunk addressing Trey's comments
* Initial extensification of volumes
* Merged with trunk, resolved conflicts & code-flicts
* Removed print
* added a simple test for describe\_images with mock for detail function
* merged trunk
* merge trunk
* merge lp:nova
* Adding links container to openstack api v1.1 servers entities
* Merged trunk
* Add license and copyright to nova/tests/api/openstack/extensions/\_\_init\_\_.py
* Fixed a typo on line 677 where there was no space between % and FLAGS
* fix typos
* updated nova.sh
* Added a flag to allow a user to specify a dnsmasq\_config\_file if they would like to fine-tune the dnsmasq settings
* disk\_format is now an ImageService property. Adds tests to prevent regression
* Merged trunk
* Merged trunk
* merging trunk
* merge trunk
* Merged trunk and fixed broken/conflicted tests
* - add a "links" container to versions entities for Openstack API v1.1 - add testing for the openstack api versions resource and create a view builder
* merging trunk
* This is basic network injection for XenServer, and includes:
* merging trunk
* Implement image metadata controller for the v1.1 OS API
* merging trunk
* Changed use\_ipv6 passing to interfaces.template
* merging trunk, resolving conflicts
* Add a "links" container to flavors entities for Openstack API v1.1
* Toss an \_\_init\_\_ in the test extensions dir. This gets it included in the tarball
* Use metadata = image.get('properties', {})
* merge trunk
* Revert dom check
* merge trunk
* Fix unit tests w/ latest trunk merge
* merging trunk and resolving conflicts
* Fix up destroy container
* Fix up templating
* Implement metadata resource for Openstack API v1.1. Includes: -GET /servers/id/meta -POST /servers/id/meta -GET /servers/id/meta/key -PUT /servers/id/meta/key -DELETE /servers/id/meta/key
* Don't always assume qemu
* Removed partition from setup\_container
* pep8 fix
* disk\_format is now an ImageService property
* Restore volume state on migration failure
* merge trunk, add unit test
* merge trunk
* merge trunk addressing reviewer's comments
* clarify comment
* add documentation
* Empty commit?
* minor pep8 fix in db/fakes.py
* Support for markers for pagination as defined in the 1.1 spec
* add hook for osapi
* merge trunk
* Ports the Tornado version of an S3 server to eventlet and wsgi, first step in deprecating the twistd-based objectstore
* Merged with trunk. Updated net injection for xenapi reflecting recent changes for libvirt
* Fix lp741415 by splitting arguments of \_execute in the iSCSI driver
* make everything work with trunk again
* Support for markers for pagination as defined in the 1.1 spec
* add descriptive docstring
* don't require integrated tests to recycle connections
* remove twisted objectstore
* port the objectstore tests to the new tests
* update test base class to monkey patch wsgi
* rename objectstore tests
* port s3server to eventlet/wsgi
* add s3server, pre-modifications
* merge trunk
* Added detail keyword and i18n as per suggestions
* incorporate feedback from termie
* Implementation of blueprint hypervisor-vmware-vsphere-support. (Link to blueprint: https://blueprints.launchpad.net/nova/+spec/hypervisor-vmware-vsphere-support)
* fix typo
* Addressing Trey's comments. Removed disk\_get\_injectables, using \_get\_network\_info's return value
* Adds serverId to OpenStack API image detail per related\_image blueprint
* Fix for bug #740947: Executing parted with sudo in \_write\_partition (vm\_utils.py)
* Implement API extensions for the Openstack API. Based on the Openstack 1.1 API the following types of extensions are supported:
* Merging trunk
* Adds unit test coverage for XenAPI Rescue & Unrescue
* libvirt driver multi\_nic support. In this phase libvirt can work with and without multi\_nic support, as in multi\_nic support for xenapi: https://code.launchpad.net/~tr3buchet/nova/xs\_multi\_nic/+merge/53458
* Merging trunk
* Review feedback
* Merged trunk
* Additions to the Direct API:
* Merged trunk
* Added test\_get\_servers\_with\_bad\_limit, test\_get\_servers\_with\_bad\_offset and test\_get\_servers\_with\_bad\_marker
* pep8 cleanups
* Added test\_get\_servers\_with\_limit\_and\_marker to test pagination with marker and limit request params
* style and spacing fixed
* better error handling and serialization
* add some more docs and make it more obvious which parts are examples
* add an example of a versioned api
* add some more docs to direct.py
* add Limited, an API limiting/versioning wrapper
* improve the formatting of the stack tool
* support volume and network in the direct api
* Merged with trunk, fix problem with behaviour of (fake) virt driver when instance doesn't reach scheduling
* In this branch we are forwarding incoming requests to child zones when the requested resource is not found in the current zone
* trunk merge
* Fixes a bug that was causing tests to fail on OS X by ensuring that greenthread sleep is called during retry loops
* Merged trunk
* Fix some errors that pylint found in nova/api/openstack/servers.py
* Fix api logging to show proper path and controller:action
* Merged trunk
* Pylint 'Undefined variable' E0602 error fixes
* Made service\_get\_all()'s disabled parameter default to None. Pass False for enabled services; True for disabled services. Calls to this method have been updated to remain consistent
* Merged with trunk
* Reconcile tests with latest trunk merges
* Merged trunk and resolved conflict in nova/db/sqlalchemy/api.py
* Don't try to parse the empty string as a datetime
* change names for consistency with existing db api
* Merged with trunk
* Forgot one set of flags
* Paginated results should not include the item starting at marker. Improved implementation of common.limited\_by\_marker as suggested by Matt Dietz. Added flag osapi\_max\_limit
* Detect if user is running the default Lucid version of libvirt, and give a nicer error message
* Updated to use new APIRouterV11 class in tests
* Fix lp741514 by declaring libvirt\_type in nova-manage
* Docstring fixes
* get image metadata tests working after the datetime interface change in image services
* adding versioned controllers
* Addressed issues raised by Rick Harris' review
* Stubbing out utils.execute for migrate tests
* Aggregates capabilities from Compute, Network, Volume to the ZoneManager in Scheduler
* merged trunk r864
* removing old Versions application and correcting fakes to use new controller
* Renamed \_\_image and \_\_compute to better describe their purposes. Use os.path.join to create href as per suggestion. Added base get\_builder as per pychecker suggestion
* merging trunk r864
* trunk merged. conflicts resolved
* Merged trunk
* merge trunk
* merge trunk
* Small refactor
* Merged trunk and fixed tests
* Couple of pep8 fixes
* pep8 clearing
* making servers.generate\_href more robust
* merging trunk r863
* couple of bugs fixed
* Merged trunk
* Don't use popen in detaching the lxc loop
* Fix up formatting of libvirt.xml.template
* trunk merge
* fix based on sirp's comments
* Grrr... because we're not recycling the API yet, we have to configure flags the first time it's called
* merge trunk
* Fake out network service as well, otherwise we can't terminate the instance in test\_servers now that we've started a compute service
* merge trunk
* Sorted out a problem that occurred with unit tests for VM migration
* pep8 fixes
* Test for attach / detach (and associated fixes)
* Pass a fake timing source to live\_migration\_pre in every test that expects it to fail, shaving off a whole minute of test run time
* merge trunk
* Poll instance states periodically, so that we can detect when something changes 'behind the scenes'
* Merged with conflict and resolved conflict (with my own patch, no less)
* Added simple nova volume tests
* Created simple test case for server creation, so that we can have something to attach to..
* Merged with trunk
* Added volume\_attachments
* Declare libvirt\_type to avoid AttributeError in live\_migration
* minor tweak from termie feedback
* Added a mechanism for versioned controllers for openstack api versions 1.0/1.1. Create servers in the 1.1 api now supports imageRef/flavorRef instead of imageId/flavorId
* Fixed the docstring for common.get\_id\_from\_href
* better logging of exceptions
* Merged trunk
* Merged trunk
* Fix issues with certificate updating & whitespace removal
* Offers the ability to run a periodic\_task that sweeps through rescued instances older than 24 hours and forcibly unrescues them
* Merged trunk
* Added hyperv stub
* Don't try to parse a datetime if it is the empty string (or None)
* Remove a blank line
* pep8 fix
* Split arguments of \_execute in the iSCSI driver
* merge trunk
* Added revert\_resize to base class
* Addressing Rick Clark's comments
* Merged with lp:nova, fixed conflicts
* boto\_v6 module is imported if the flag "use\_ipv6" is set to True
* pep8 fixes, backported some important fixes that didn't make it over from my testing system :-(
* Move all types of locking into utils.synchronize decorator
* Doh! Missed two places which were importing the old driver location
* Review feedback
* make missing noVNC error condition a bit more fool-proof
* clean some pep8 issues
* general cleanup, use whitelist for webserver security
* Better method name
* small fix
* Added docstring
* Updates the previously merged xs\_migration functionality to allow upsizing of the RAM and disk quotas for a XenServer instance
* Fix lp735636 by standardizing the format of image timestamp properties as datetime objects
* migration gateway\_v6 to network\_info
* merge prop fixes
* Should not call super \_\_init\_\_ twice in APIRouter
* fix utils.execute retries for osx
* Keep the fallback code - we may want to do better version checking in future
* Give the user a nicer error message if they're using the Lucid libvirt
* Only run periodic task when rescue\_timeout is greater than 0
* Fixed some typos
* Forgot extraneous module import again
* Merged trunk
* Forgot extraneous module import
* Automatically unrescue instances after a given timeout
* trunk merge
* indenting cleanup
* fixing some dictionary get calls
* Unit test cleanup
* one more minor fix
* Moving the migration yet again
* xml template fixed
* merge prop changes
* pep8 fixed
* trunk merged
* added myself to authors file
* Using super to call parent \_setup\_routes in APIRouter subclasses
* Merged trunk
* pep8 fix
* Implement v1.1 image metadata
* This branch contains the fix for bug #740929. It makes sure cidr\_v6 is not null before building the 'ip6s' key in the network info dictionary. This way utils.to\_global\_ipv6 does not fail because of cidr==None
* review comments fixed
* add changePassword action to os api v1.1
* Testing of XML and JSON for show(), and conformance to API spec for JSON
* Fixed tests
* Merged trunk
* Removed some un-needed code, and started adding tests for show(), which I forgot\!
* id -> instance\_id
* Checking whether cidr\_v6 is not null before populating ipv6 key in network info map (VMOps.\_get\_network\_info)
* Executing parted with sudo in \_write\_partition
* We updated the update\_ra method to synchronize, in order to prevent a crash when we request multiple instances at once
* merged with trunk. Updated xenapi network injection for IPv6. Updated unit tests
* merge trunk
* merge trunk
* removed excess debug line
* more progress
* use the nova Server object
* separating out components of vnc console
* Earlier versions of the python libvirt binding had getVersion in the libvirt namespace, not on the connection object. Check both
* Report the exception (happens when we can't import libvirt)
* Use subset\_dict
* Removing dead code
* Touching up comment
* Merging trunk
* Pep8 fixes
* Adding tests for owned and non-existent images
* More small cleanups
* Fix for #740742 - format describe\_instance\_output correctly to prevent errors in dashboard
* Cleaning up make\_image\_fixtures
* Merged with lp:nova
* Small cleanup of openstack/images.py
* Fixed up the new location of driver.py
* Fix for lp740742 - format describe\_instance\_output correctly to prevent errors in dashboard
* Merged with lp:nova
* Filtering images by user\_id now
* Clarified my "Yuk" comment
* Cleaned up comment about virsh domain.info() return format
* Added space in between # and TODO in #TODO
* Added note about the advantages of using a type vs using a set of global constants
* Filled out the base-driver contract, so it's not a false promise
* Enable flat manager support for ipv6
* Adding a talk bubble to the nova.openstack.org site that points readers to the 2011.1 site and the docs.openstack.org site - similar to the swift.openstack.org site. I believe it helps people see more sites are available, plus they can get to the Bexar site if they want to. Going forward it'll be nice to use this talk bubble to point people to the trunk site from released sites
* Correctly imports greenthread in libvirt\_conn.py. It is used by live\_migrate()
* Forgot this in the rename of check\_instance -> check\_isinstance
* Test the login behavior of the OpenStack API. Uncovered bug732866
* trunk merge
* Renamed check\_instance -> check\_isinstance to make intent clearer
* Fix some crypto strangeness (\n in file\_name field of certificates, wrong IMPL method for certificate\_update)
* Added note agreeing with Brian Lamar that the namespace doesn't belong in wsgi
* Fix to avoid db migration failure in virtualenv
* Fixed up unit tests and direct api that was also calling \_serialize (naughty!)
* Fix the describe\_vpns admin api call
* pep8 and fixed up zone-list
* Support setting the xmlns intelligently
* get\_all cleanup
* Refactored out \_safe\_translate code
* Set XML namespace when returning XML
* Fix for LP Bug #704300
* Fix a typo in the ec2 admin api
* typo fix
* Pep8 fix
* Merging trunk
* make executable
* Adding BASE\_IMAGE\_ATTRS to ImageService
* intermediate progress on vnc-nova integration. checking in to show vish
* add in eventlet version of vnc proxy
* Updating doc strings in accordance with PEP 257. Fixing order of imports in common.py
* one more copyright fix
* pep8 stupidness
* Tweak
* fixing copyright
* tweak
* tweak
* Whoops
* Changed default for disabled on service\_get\_all to None. Changed calls to service\_get\_all so that the results should still be as they previously were
* Now using urlparse to parse a url to grab id out of it
* Resolved conflicts
* Fix
* Remove unused global semaphore
* Addressed reviewer's comments
* pep8 fix
* Apparently a more common problem than first thought
* Adding more docstrings. image\_id and instance\_type fields of an instance will always exist, so no reason to check if keys exist
* Pass a fake timing source to test\_ensure\_filtering\_rules\_for\_instance\_timeout, shaving off 30 seconds of test run time
* pep8
* Merged trunk
* Add a test for leaked semaphores
* Remove checks in \_cache\_image tests that were too implementation specific
* adding view builder tests
* Add correct bug fixing metadata
* When updating or creating, set 'delete = 0' (thus reactivating a deleted row). Filter by 'deleted' on delete
* merging trunk r843
* making Controller.\_get\_flavors is\_detail a keyword argument
* merging trunk r843
* Fix locking problem in security group refresh code
* merging trunk r843
* Add unit test and code updates to ensure that PUT requests to create/update server metadata only contain a single key
* Add call to unset all stubs
* IptablesManager.semaphore is no more
* Get rid of IptablesManager's explicit semaphore
* Add --fixes lp: metadata
* Convert \_cache\_image to use utils.synchronized decorator. Disable its test case, since I think it is no longer needed with the tests for synchronized
* Make synchronized decorator not leak semaphores, at the expense of not being truly thread safe (but safe enough for Eventlet style green threads)
* merge trunk
* Wrap update\_ra in utils.synchronized
* Make synchronized support both external (file based) locks as well as internal (semaphore based) locks. Attempt to make it native thread safe at the expense of never cleaning up semaphores
* merge with trunk
* vpn changes
* added zone routing flag test
* routing test coverage
* routing test coverage
* xenapi support for multi\_nic. This is a phase of multi\_nic which allows xenapi to work as is and with multi\_nic. The other virt driver(s) need to be updated with the same support
* better comments. First redirect test
* better comments. First redirect test
* Remove \_get\_vm\_opaque\_ref() calls in rescue/unrescue
* Remove dupe'd code
* Wrap update\_dhcp in utils.synchronized
* if fingerprint data not provided, added logic to calculate it using the pub key
* get rid of another datetime alias
* import greenthread in libvirt
* merge lp:nova
* make bcwaldon happy
* fix licenses
* added licenses
* wrap and log errors getting image ids from local image store
* merge lp:nova
* merging trunk
* Fix for LP Bug #739641
* pep8; various fixes
* Provide more useful exception messages when unable to load the virtual driver
* Added Gabe to Authors file. He helped code this up too
* Added XenAPI rescue unit tests
* added an enumerate to track device in vmops.create\_vifs()
* pep8
* Openstack api 1.0 flavors resource now implemented to match the spec
* more robust extraction of arguments
* Updated comment per the extension naming convention we actually use
* Added copyright header
* Fix pep8 issues in nova/api/openstack/extensions.py
* Fix limit unit tests (reconciles w/ trunk changes)
* Changed fixed\_range (CIDR) to be required in the nova-manage command; changed default num\_networks to 1
* merging trunk r837
* zones3 and trunk merge
* Added space
* trunk merge
* remove scheduler.api.API. naming changes
* Changed error to TypeError so that we get the arguments list
* Added my name to Authors. Added i18n for network create string
* merge with trunk
* merge trunk
* merge trunk
* merge trunk
* Add bug metadata
* Wrap update\_dhcp in utils.synchronized
* fixes nova-manage instance\_type compatibility with postgres db
* Tell PyLint not to complain about the "\_" function
* Make smoketests' exit code reveal whether they were successful
* pep8
* Added run\_instances method to the connection.py of the contrib/boto\_v6/ec2 which would return ReservationV6 object instead of Reservation in order to access attribute dns\_name\_v6 of an instance
* cleanup another inconsistent use of 1 for True in nova-manage
* Changed Copyright to NTT for newly added files for flatmanager ipv6
* merge trunk
* \* committing ovs scripts
* fix nova-manage instance\_type list for postgres compatibility
* fixed instance\_types migration to support postgres correctly
* comment more descriptive
* Seriously?
* Fixed netadmin smoketests for ipv6
* Merged trunk
* Better errors when virt driver isn't loaded
* merge lp:nova
* fix date formatting in images controller show
* huh
* fix ups
* merge trunk
* uses True/False instead of 1/0 for Postgres compatibility
* cleaned up test stubs that were accidentally checked in
* works again. woo hoo
* created api endpoint to allow uploading of public key
* api decorator
* Cleanup of FakeAuthManager
* Replaced all pylint "disable-msg=" with "disable=" and "enable-msg=" with "enable="
* Change cloud.id\_to\_ec2\_id to ec2utils.id\_to\_ec2\_id. Fixes EC2 API error handling when invalid instances and volume names are specified
* A few more single-letter variable names bite the dust
* Re-implementation (or just implementation in many cases) of Limits in the OpenStack API. Limits is now available through /limits and the concept of a limit has been extended to include arbitrary regex / http verb combinations along with correct XML/JSON serialization. Tests included
* Avoid single-letter variable names
* auth\_data is a list now (thanks Rick!)
* merge with trunk
* Mark instance metadata as deleted when we delete the instance
* results
* fixed up novaclient usage to include managers
* Added test case
* Minor fixes to replace occurrences of "VI" by "VIM" in 2 comments
* whoopsy2
* whoopsy
* Fixed 'Undefined variable' errors generated by pylint (E0602)
* Merged trunk
* Change cloud.id\_to\_ec2\_id to ec2utils.id\_to\_ec2\_id. Fixes EC2 API error handling when invalid instances and volume names are specified
* enable-msg -> enable
* disable-msg -> disable
* enable\_zone\_routing flag
* PEP-8
* Make flag parsing work again
* Using eventlet's greenthreads for optimized image processing. Fixed minor issues and style-related nits
* Fixed issue arising from recent feature update (utils.execute)
* Make proxy.sh work with both openbsd and traditional variants of netcat
* Query the size of the block device, not the size of the filesystem
* merge trunk
* Ensuring kernel/ramdisk files are always removed in case of failures
* merge trunk
* merge trunk
* Implement metadata resource for Openstack API v1.1. Includes: -GET /servers/id/meta -POST /servers/id/meta -GET /servers/id/meta/key -PUT /servers/id/meta/key -DELETE /servers/id/meta/key
* Make "ApiError" the default error code for ApiError instances, rather than "Unknown."
* When changing the project manager, if the new manager is not yet a project member, be sure to make them a project member
* Make the rpc cast/call debug calls show what topic they are sending to. This aids in debugging
* Final touches and bug/pep8 fixes
* Support for markers for pagination as defined in the 1.1 spec
* Merged trunk
* Become compatible with ironcamel and bcwaldon's implementations for standardness
* pep8
* Merged dependent branch lp:~rackspace-titan/nova/openstack-api-versioned-controllers
* Updated naming, removed some prints, and removed some invalid tests
* adding servers container to openstack api v1.1 servers entities
* decorator more generic now
* Images now v1.1 supported...mostly
* fixed up bzr mess
* Fix for LP Bug #737240
* refactored out middleware, now it's a decorator on service.api
* Fix for LP Bug #737240
* Add topic name to cast/call logs
* Changing project manager should make sure that user is a project member
* Invert some of the original logic and fix a typo
* Make the smoketests pep8 compliant (they weren't when I started working on them..)
* Update the Openstack API to handle case where personality is set but null in the request to create a server
* Fix a couple of things that assume that libvirt == kvm/qemu
* Made fixed\_range a required parameter for nova-manage network create. Changed default num\_networks to 1; 1000 seems large
* Fix a number of places in the volume driver where the argv hadn't been fully split
* fix for lp712982, and likely a variety of other dashboard error handling issues. This fix simply causes the default error code for ApiError to be 'ApiError' rather than 'Unknown', which makes dashboard handle the error gracefully, and makes euca error output slightly prettier
* Fix mis-merge
* pep8 is hard
* syntax error
* create vifs before injecting network info to remove rxtx\_cap from network info (don't need to inject it)
* Make utils.execute not overwrite std{in,out,err} args to Popen on retries. Make utils.execute reject unknown kwargs
* merged trunk, merged qos, slight refactor regarding merges
* - general approach for openstack api versioning - openstack api version now preserved in request context - added view builder classes to handle os api responses - added imageRef and flavorRef to os api v1.1 servers - modified addresses container structure in os api v1.1 servers
* Pep8
* Test changes
* pep8
* Adjust test cases
* pep8
* merge
* Mark instance metadata as deleted when we delete the instance
* Backfix of bugfix of issue blocking creating servers with metadata
* Better comment for fault. Improved readability of two small sections
* Add support for network QoS (ratelimiting) for XenServer. Rate is pulled from the flavor (instance\_type) when constructing a vm
* pep8
* I suck at merging
* Now returns a 400 for a create server request with invalid hrefs for imageRef/flavorRef values. Also added tests
* moving Versions app out of \_\_init\_\_.py into its own module; adding openstack versions tests; adding links to version entities
* fixed code formatting nit
* handle create and update requests, and update the base image service documentation to reflect the (de facto) behavior
* Move the check for None personalities into the create method
* Get the migration out
* get api openstack test\_images working
* merge trunk
* Improved exception handling
* better implementation of try..except..else
* merging parent branch lp:~bcwaldon/nova/osapi-flavors-1\_1
* merging parent branch lp:~rackspace-titan/nova/openstack-api-version-split
* iptables filter firewall changes merged
* merged trunk
* pep8
* adding serialization\_metadata to encode links on flavors
* merge with libvirt\_multinic\_nova
* pep8
* teach glance image server get to handle timestamps
* merge trunk
* merge trunk
* fixes for NWFilterFirewall and net injection
* moving code out of try/except that would never trigger NotFound
* handle timestamps in glance service detail
* fixed IptablesFirewall
* Fixes lp736343 - Incorrect mapping of instance type id to flavor id in Openstack API
* Comparisons to None should not use == or !=
* Pep8 error, oddly specific to pep8 v0.5 < x > v0.6
* Remove unconditional raise, probably left over from debugging
* Mapping the resize status
* Mapping the resize status
* Fixed pep8 violation
* adding comments; removing returns from build\_extra; removing unnecessary backslash
* refactor to simpler implementation
* Foo
* glance image service show testcases
* oh come on
* refactoring
* Add tests and code to handle multiple ResponseExtension objects
* Just use 'if foo' instead of 'if len(foo)'. It will fail as spectacularly if it's not acting on a sequence anyway
* bugfix
* Remove unconditional raise, probably left over from debugging
* No need to modify this test case function as well
* refactored: network\_info creation extracted to method
* Call \_create\_personality\_request\_dict within the personalities\_null test
* Foo
* more pep8 fixes
* Switch back to 'is not None' for personality\_files check. (makes mark happy)
* pep8 fixes
* 1) Update a few comments where whitespace is missing after '#' 2) Update document so that copyright notice doesn't appear in generated document 3) Now using self.flag(...) instead of setting the flags like FLAGS.vmwareapi\_username by direct assignment. 4) Added the missing double quote at the end of a string in vim\_util.py
* more pep8 fixes
* Fix up tests
* Replaced capability flags with List
* Fix more pep8 errors
* Remove me from mailmap
* Fix up setup container
* Merged trunk
* Update the Openstack API to handle case where personality is set but null in the request to create a server
* Make smoketests' exit code reveal whether they were successful
* merge with trunk. moved scheduler\_manager into manager. fixed tests
* Set nbd to false when mounting the image
* Fixed typo when I was trying to add test cases for lxc
* Remove target\_partition for setup\_container but still hardcode because it's needed when you inject the keys into the image
* Remove nbd=FLAGS.use\_cow\_images for destroy container
* Update mailmap
* Fix a number of places in the volume driver where the argv hadn't been fully split
* Fix pep8 errors
* Update authors again
* Improved exception handling: - catching appropriate errors (OSError, IOError, XenAPI.Failure) - reduced size of try blocks - moved exception handling code into a separate method - verifying for appropriate exception type in unit tests
* get\_console\_output is not supported by lxc and libvirt
* Update Authors and testsuite
* Comparisons to None should not use == or !=
* Make error message match the check
* Setting the api version in the request in the auth middleware is no longer needed. Also, common.get\_api\_version is no longer needed. As Eric Day noted, having versioned controllers will make that unnecessary
* moving code out of try/except that would never trigger NotFound
* Added mechanism for versioned controllers for openstack api versions 1.0/1.1. Create servers in the 1.1 api now supports imageRef/flavorRef instead of imageId/flavorId
* fix up copyright
* removed dead method
* pep8
* pep8
* Remerge trunk
* cleanup
* added in network qos support for xenserver. Pull qos settings from flavor, use when creating instance
* moved scheduler API check into db.api decorator
* Add basic tests for lxc containers
* Revert testsuite changes
* Merge trunk
* Fix a few of the more obvious non-errors while we're in here
* hacks in place
* Fix the errors that pylint was reporting on this file
* foo
* foo
* commit before monster
* Fix \_\_init\_\_ method on unit tests (they take a method\_name kwarg)
* Don't warn about C0111 (No docstrings)
* In order to disable the messages, we have to use disable, not disable-msg
* Avoid mixins on image tests, keeping pylint much happier
* Use \_ trick to hide base test class, thereby avoiding mixins and helping PyLint
* hurr
* hurr
* get started testing
* foo
* Don't complain about the \_ function being used
* Again
* pep8
* converted new lines from CRLF to LF
* adding bookmarks links to 1.1 flavor entities
* Reverting
* Log the use of utils.synchronized
* expanding osapi flavors tests; rewriting flavors resource with view builders; adding 1.1 specific links to flavors resources
* Dumb
* Unit test update
* Fix lp727225 by adding support for personality files to the openstack api
* Changes
* fixes bug 735298: start of nova-compute not possible because of wrong xml paths to the //host/cpu section in "virsh capabilities", used in nova/virt/libvirt\_conn.py
* update image service documentation
* merge lp:nova and resolve conflicts
* User ids are strings, and are not necessarily == name. Also fix so that non-existent user gives a 404, not a 500
* Fudge
* Keypairs are not required in the OpenStack API; don't require them!
* Merging trunk
* Add missing fallback chain for ipv6
* Typo fix
* fixed pep8 issue
* chchchchchanges
* libvirt template and libvirt\_conn.spawn modified in the way that was proposed for xenapi multinic support
* Re-commit r805
* Re-commit r804
* Refactored ZoneRedirect into ZoneChildHelper so ZoneManager can use this too
* Don't generate insecure passwords where it's easy to use urandom instead
* merging openstack-api-version-split
* chchchchchanges
* chchchchchanges
* Fixes euca-get-ajax-console returning Unknown Error, by using the correct exception in get\_open\_port() logic. Patch from Tushar Patil
* chchchchchanges
* Revert commit that modified CA/openssl.cnf.tmpl
* Comment update
* Derped again
* Move mapper code into the \_action\_ext\_controllers and \_response\_ext\_controllers methods
* The geebees
* forgot to return network info - teehee
* refactored, bugfixes
* merge trunk
* moving code out of try/except that would never trigger NotFound
* merge trunk
* Logging statements
* added new class Instances for managing instances. added new method list in class Instances:
* tweak
* Stuff
* Removing io\_util.py. We now use the eventlet library instead
* Some typos
* \* Updated document vmware\_readme.rst to mention VLAN networking \* Corrected docstrings as per pep0257 recommendations. \* Stream-lined the comments. \* Updated code with locals() wherever applicable. \* VIM : It stands for VMware Virtual Infrastructure Methodology. We have used the terminology from VMware. we have added a question in FAQ inside vmware\_readme.rst in doc/source \* New fake db: vmwareapi fake module uses a different set of fields and hence the structures required are different. Ex: bridge : 'xenbr0' does not hold good for VMware environment and bridge : 'vmnic0' is used instead. Also return values vary, hence went for implementing separate fake db. \* Now using eventlet library instead and removed io\_utils.py from branch. \* Now using glance.client.Client instead of homegrown code to talk to Glance server to handle images. \* Corrected all mis-spelled function names and corresponding calls. Yeah, an auto-complete side-effect!
* Implement top level extensions
* Added i18n to error message
* Checks locally before routing
* Really fix testcase
* More execvp fallout
* Fix up testsuite for lxc
* Error codes handled properly now
* merge trunk
* Adding unit test
* Fix instance creation failure under use\_ipv6=false and FlatManager
* pep8 clean
* Fix a couple of things that assume that libvirt == kvm/qemu
* Updating gateway\_v6 in \_on\_set\_network\_host() is not required for FlatManager
* added correct path to cpu information (tested on a system with 1 installed cpu package)
* Fix unknown exception error in euca-get-ajax-console
* fixed pep8 errors (with version 0.5.0)
* Use integer ids for (fake) users
* req environ param 'nova.api.openstack.version' should be 'api.version'
* pep8 fixes
* Fixed DescribeUser in ec2 admin client
* openstack api 1.0 flavors resource now implemented; adding flavors request value testing
* response working
* Added tests back for RateLimitingMiddleware which now throw correctly serialized errors with correct error codes
* Add ResponseExtensions
* revised per code review
* first pass openstack redirect working
* Adding newlines for pep8
* Removed VIM specific stuff and changed copyright from 2010 to 2011
* Limits controller and testing with XML and JSON serialization
* adding imageRef and flavorRef attributes to servers serialization metadata
* Merged with trunk (and brian's previous fixes to fake auth)
* Plugin
* As suggested by Eric Day: \* changed request.environ version key to more descriptive 'api.version' \* removed python3 string formatting \* added licenses to headers on new files
* Tweak
* A few fixes
* pep8
* merge lp:nova
* ignore differently-named nodes in personality and metadata parsing
* wrap errors getting image ids from local image store
* Moving the migration again
* Updating paste config
* pep8
* internationalization
* Per Eric Day's suggestion, the version is now stored in the request environ instead of the nova.context
* s/onset\_files/injected\_files/g
* pep8 fixes
* Add logging to lock check
* Now that 732866 is fixed, stop working around the bug
* Major cosmetic changes to limits, but little-to-no functional changes. MUCH better testability now, no more relying on system time to tick by for limit testing
* Merged with trunk to get fix for bug 732866
* Merged trunk
* modifying paste config to support v1.1; adding v1.1 entry in versions resource (GET /)
* Fixed lp732866 by catching relevant \`exception.NotFound\` exception. Tests did not uncover this vulnerability due to "incorrect" FakeAuthManager. I say "incorrect" because potentially different implementations (LDAP or Database driven) of AuthManager might return different errors from \`get\_user\_from\_access\_key\`
* refactor onset\_files quota checking
* Code clean up. Removing \_decorate\_response methods. Replaced them with more explicit methods, \_build\_image, and \_build\_flavor
* Use random.SystemRandom for easy secure randoms, configurable symbol set by default including mixed-case
* merge lp:nova
* Support testing the OpenStack API without key\_pairs
* merge trunk
* Fixed bugs in bug fix (plugin call)
* adding missing view modules; modifying a couple of servers tests to use enumerate
* just fixing a small typo in nova-manage vm live-migration
* exception fixup
* Make Authors check account for tests being run with different os.getcwd() depending on how they're run. Add missing people to Authors
* Removed duplicated tests
* PEP8 0.5.0 cleanup
* Really delete the loop
* Add comments about the destroy container function
* Mount the right device
* Merged trunk
* Always put the ipv6 fallback in place. FLAGS.use\_ipv6 does not exist yet when the firewall driver is instantiated and the iptables manager takes care not to fiddle with ipv6 if not enabled
* merged with trunk and removed conflicts
* Merging trunk
* Reapplied rename to another file
* serverId returned as int per spec
* Reapplied rename of Openstack -> OpenStack. Easier to do it by hand than to ask Bazaar to do it
* Merged with trunk. Had to hold bazaar's hand as it got lost again
* Derive unit test from standard nova.test.TestCase
* pep8 fixes
* adding flavors and images barebones view code; adding flavorRef and imageRef to v1.1 servers
* Fixed problem with metadata creation (backported fix)
* Clarify the logic in using 32 symbols
* moving addresses views to new module; removing 'Data' from 'DataViewBuilder'
* Don't generate insecure passwords where it's easy to use urandom instead
* Added a views package and a views.servers module. For representing the response object before it is serialized
* Make key\_pair optional with OpenStack API
* Moved extended resource code into the extensions.py module
* Moving fixtures to a factory
* Refactor setup container/destroy container
* Fixing API per spec, to get unit-tests to pass
* Implements basic OpenStack API client, ready to support API tests
* Fix capitalization of ApiError (it was mistakenly called APIError)
* added migration to repo
* Clarified message when a VM is not running but still in DB
* Implemented Hyper-V list\_instances\_detail function. Needs a cleanup by someone that knows the Hyper-V code
* So the first of those tests doesn't pass. Removing as it looks like it was meant to be deleted
* Added test and fixed up code so that it works
* Fix for LP Bug #704300
* fixed keyword arg error
* pep8
* added structure to virt.xenapi.vmops to support network info being passed in
* Removed duplicated test, renamed same-named (but non-identical) tests
* merge trunk
* PEP8 cleanup
* Fixes other half of LP#733609
* Initial implementation of refresh instance states
* Add missing fallback chain for ipv6
* The exception is called "ApiError", not "APIError"
* Implement action extensions
* Include cpuinfo.xml.template in tarball
* Adding instance\_id as Glance image\_property
* Add fixes metadata
* Include cpuinfo.xml.template in tarball
* Merged test\_network.py properly. Before, I had deleted this file and added it again, but this file status should be modified when you see the merged difference
* removed conflicts and merged with trunk
* Create v1\_0 and v1\_1 packages for the openstack api. Added a servers module to each. Added tests to validate the structure of ip addresses for a 1.1 request
* committing to share
* small typo in nova-manage vm live-migration
* NTT's live-migration branch, merged with trunk, conflicts resolved, and migrate file renamed
* Reverted unmodified files
* Reverted unmodified files
* Only include kernel and ramdisk ID in meta-data output if they are actually set
* Test fixes and some typos
* Test changes
* Migration moved again
* Compute test
* merge trunk
* merge trunk
* Make nova-dhcpbridge output lease information in dnsmasq's leasesfile format
* Merged my doc changes with trunk
* Fixed pep8 errors
* Fixed failing tests in test\_xenapi
* Fixes link to 2011.1 instead of just to trunk docs
* fixes: 733137
* Add a unit test
* Make utils.execute not overwrite std{in,out,err} args to Popen on retries. Make utils.execute reject unknown kwargs
* Removed excess LOG.debug line
* merge trunk
* The extension name is constructed from the camel cased module\_name + 'Extension'
* Merged with trunk
* Fix instructions for setting up the initial database
* Fix instructions for setting up the initial database
* merged with latest trunk and removed unwanted files
* Removed \_translate\_keys() function since it is no longer used. Moved private top level functions to bottom of module
* Use a consistent naming scheme for XenAPI variables
* oops
* Review feedback
* Review feedback
* Review feedback
* Some unit tests
* Change capitalization of Openstack to OpenStack
* fixed conflicts after merging with trunk r787
* Adding a sidebar element to the nova.openstack.org site to point people to additional versions of the site
* oops
* Review feedback
* Replace raw SQL calls through session.execute() with SQLAlchemy code
* Review feedback
* Remove vish comment
* Remove race condition when refreshing security groups and destroying instances at the same time
* Removed EOL whitespace in accordance with PEP-8
* Beginning of cleanup of FakeAuthManager
* Make the fallback value None instead of False
* Indentation adjustment (cosmetic)
* Fixed lp732866 by catching relevant \`exception.NotFound\` exception. Tests did not uncover this vulnerability due to "incorrect" FakeAuthManager. I say "incorrect" because potentially different implementations (LDAP or Database driven) of AuthManager might return different errors from \`get\_user\_from\_access\_key\`
* Merged trunk
* This change adds the ability to boot Windows and Linux instances in XenServer using different sets of vm-params
* merge trunk
* New migration
* Passes net variable as value of keyword argument process\_input. Prior to the execvp patch, this was passed positionally
* Changes the output of status in describe\_volumes from showing the user as the owner of the volume to showing the project as the owner
* Added support for ips resource: /servers/1/ips. Refactored implementation of how the servers response model is generated
* merge trunk
* Adds in multi-tenant support to openstack api. Allows for multiple accounts (projects) with admin api for creating accounts & users
* merge trunk
* remerge trunk (again). fix issues caused by changes to deserialization calls on controllers
* Add config for osapi\_extensions\_path. Update the ExtensionManager so that it loads extensions in the osapi\_extensions\_path
* process\_input for tee. fixes: 733439
* Minor stylistic updates affecting indentation
* Make linux\_net ensure\_bridge commands that add and remove ip addr's from devices/bridges work with the latest utils.execute method (execvp)
* Added volume api from previous megapatch
* Made changes to xs-ipv6 code impacted because of addition of flatmanager ipv6 support
* Need to set version to '1.0' in the nova.context in test code for tests to be happy
* merge from trunk..
* Discovered literal\_column(), which does exactly what I need
* Merged trunk
* Further vmops cleanup
* cast execute commands to str
* Remove broken test. At least this way, it'll actually fix the problem and be mergable
* \* Updated the readme file with description about VLAN Manager support & guest console support. Also added the configuration instructions for the features. \* Added assumptions section to the readme file
* \* Modified raise statements to raise nova defined Exceptions. \* Fixed Console errors; in network utils, using HostSystem instead of Datacenter to fetch the network list \* Added support for vmwareapi module in nova/virt/connection.py so that vmware hypervisor is supported by nova \* Removing self.loop to achieve synchronization
* merge trunk
* Moved vlan\_interface flag into network.manager. Removed needless carriage return in vm\_ops
* Use self.instances.pop in unfilter\_instance to make the check/removal atomic
* Make Authors check account for tests being run with different os.getcwd() depending on how they're run. Add missing people to Authors
* Make linux\_net ensure\_bridge commands that add and remove ip addr's from devices/bridges work with the latest utils.execute method (execvp)
* \_translate\_keys now needs one more argument, the request object
* Added version attribute to RequestContext class. Set the version in the nova.context object at the middleware level. Prototyped how we can serialize ip addresses based on the version
* execvp: fix params
* merge lp:nova
* switch to a more consistent usage of onset\_files variable names
* re-added a test change I removed thinking it was related to removed code. It wasn't :>
* merge trunk
* Document known bug numbers by the code which is degraded until the bugs are fixed
* fix minor typo
* Fix a few nits jaypipes found in review
* Pep8 / Style
* Re-removed the code that was deleted upstream but somehow didn't get merged in. Bizarre!
* More resize
* Merged with upstream
* pep8 fun
* Test login. Uncovered bug732866
* Merged with upstream
* Better logging, be more careful about when we throw login errors re bug732866
* Don't wrap keys and volumes till they're in the API
* Add a new IptablesManager that takes care of all uses of iptables
* Last un-magiced session.execute() replaced with SQLAlchemy code..
* PEP8
* Add basic test case
* Implements basic OpenStack API client, ready to support API tests
* Initial support for extension resources. Tests
* Partial revert of one conversion due to phantom magic exception from SQLAlchemy in unrelated code; convert all deletes
* merge lp:nova
* add docstring
* fixed formatting and redundant imports
* Cleaned up vmops
* merge trunk
* initializing instance power state on launch to 0 (fixes EC2 API bug)
* Correct a misspelling
* merge lp:nova
* merge trunk
* Use a FLAGS.default\_os\_type if available
* Another little bit of fallout from the execvp branch
* Updated the code to detect the exception by fault type. SOAP faults are embedded in the SOAP response as a property.
Certain faults are sent as a part of the SOAP body as property of missingSet. E.g. NotAuthenticated fault. So we examine the response object for missingSet and try to check the property for fault type * Another little detail. * Fix a few things that were either missed in the execvp conversion or stuff that was merged after it, but wasn't updated accordingly * Introduces the ZoneManager to the Scheduler which polls the child zones and caches their availability and capabilities * One more thing. * merge trunk * Only include ramdisk and kernel id if they are actually set * Add bugfix metadata * More execvp fallout * Make nova.image.s3 catch up with the new execute syntax * Pass argv of dnsmasq and radvd to execute as individual args, not as a list * Split dnsmasq and radvd commands into their respective argv's * s/s.getuid()/os.getuid()/ * merge lp:nova and add stub image service to quota tests as needed * merged to trunk rev781 * fix pep8 check * merge lp:nova * Modifies S3ImageService to wrap LocalImageService or GlanceImageService. It now pulls the parts out of s3, decrypts them locally, and sends them to the underlying service. It includes various fixes for image/glance.py, image/local.py and the tests * add tests to verify the serialization of adminPass in server creation response * Fixes nova.sh to run properly the first time. We have to get the zip file after nova-api is running * minor fixes from review * merged trunk * fixed based on reviewer's comment * merge lp:nova * Moved umount container to disk.py and try to remove loopback when destroying the container * Merged trunk * Replace session.execute() calls performing raw UPDATE statements with SQLAlchemy code, with the exception of fixed\_ip\_disassociate\_all\_by\_timeout() * Fixes a race condition where multiple greenthreads were attempting to resize a file at the same time. Adds tests to verify that the image caching call will run concurrently for different files, but will block other greenthreads trying to cache the same file * maybe a int instead ? * merge lp:nova * merge, resolve conflicts, and update to reflect new standard deserialization function signature * Fixes doc build after execvp patch * execvp: fix docs * initializing instance power state on launch to 0 (fixes EC2 API bug) * - Content-Type and Accept headers handled properly - Content-Type added to responses - Query extensions no long cause computeFaults - adding wsgi.Request object - removing request-specific code from wsgi.Serializer * Fixes bug 726359. Passes unit tests * merge lp:nova, fix conflicts, fix tests * fix the copyright notice in migration * execvp: cleanup * remove the semaphore when there is no one waiting on it * merge lp:nova and resolve conflicts * Hi guys * Update the create server call in the Openstack API so that it generates an 'adminPass' and calls set\_admin\_password in the compute API. This gets us closer to parity with the Cloud Servers v1.0 spec * Added naming scheme comment * Merged trunk * execvp passes pep8 * merge trunk * Add a decorator that lets you synchronise actions across multiple binaries. Like, say, ensuring that only one worker manipulates iptables at a time * renaming wsgi.Request.best\_match to best\_match\_content\_type; correcting calls to that function in code from trunk * merge lp:nova * Fixes bug #729400. Invalid values for offset and limit params in http requests now return a 400 response with a useful message in the body. 
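Several entries here and nearby ("execvp", "Split dnsmasq and radvd commands into their respective argv's", "execute: shell=True removed") track one conversion: utils.execute stopped passing a shell command string and began taking each argument as a separate argv element. A simplified sketch of the resulting calling convention, with Nova's retry and logging behaviour omitted:

```python
import subprocess


def execute(*cmd, process_input=None, check_exit_code=True):
    """Run cmd as an argv sequence -- no shell, no quoting surprises."""
    proc = subprocess.Popen(cmd, stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                            universal_newlines=True)
    out, err = proc.communicate(process_input)
    if check_exit_code and proc.returncode != 0:
        raise RuntimeError('%r exited with %d: %s'
                           % (cmd, proc.returncode, err))
    return out, err


# Before: execute("radvd -C %s" % conffile)    (one shell string)
# After:  execute('radvd', '-C', conffile)     (individual argv elements)
```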
Also added and updated tests * Add password parameter to the set\_admin\_password call in the compute api. Updated servers password to use this parameter * stuff * rearrange functions and add docstrings * Fixes uses of process\_input * update authors file * merged trunk r771 * merge lp:nova * remove unneeded stubs * move my tests into their own testcase * replaced ConnectionFailed with Exception in tools/euca-get-ajax-console was not working for me with euca2tools 1.2 (version 2007-10-10, release 31337) * Fixed pep8 issues * remerge trunk * removed uneeded \*\*kw args leftover from removed account-in-url changes * fixed lp715427 * fixed lp715427 * Fix spacing * merge lp:nova and resolve conflicts * remove superfluous trailing blank line * add override to handle xml deserialization for server instance creation * Added 'adminPass' to the serialization\_metadata * merge trunk * Merged with trunk Updated exception handling according to spawn refactoring * Fixed pep8 violation in glance plugin * Added unit tests for ensuring VDI are cleaned up upon spawn failures * Stop assuming anything about the order in which the two processes are scheduled * make static method for testing without initializing libvirt * tests and semaphore fix for image caching * execvp: unit tests pass * merged to trunk rev 769 * execvp: almost passes tests * Refactoring nova-api to be a service, so that we can reuse it in unit tests * Added documentation about needed flags * a few fixes for the tests * Renamed FLAG.paste\_config -> FLAG.api\_paste\_config * Sorted imports correctly * merge trunk * Fixes lp730960 - mangled instance creation in virt drivers due to improper merge conflict resolution * Use disk\_format and container\_format in place of image type * using get\_uuid in place of get\_record in \_get\_vm\_opaqueref changed SessionBase.\_getter in fake xenapi in order to return HANDLE\_INVALID failure when reference is not in DB (was NotImplementedException) * Merging trunk * Fixing tests * Pep8 fixes * Accidentally left some bad data around * Fix the bug where fakerabbit is doing a sort of prefix matching on the AMQP routing key * merge trunk * Use disk\_format and container\_format instead of image type * merged trunk * update manpage * update code to work with new container and disk formats from glance * modify nova manage doc * Nits * abstracted network code in the base class for flat and vlan * Remerged trunk. fixed conflict * Removes VDIs from XenServer backend if spawn process fails before vm rec is created * Added ability to remove networks on nova-manage command * Remove addition of account to service url * refactored up nova/virt/xenapi/vmops \_get\_vm\_opaque\_ref() no longer inspects the param to check to see if it is an opaque ref works better for unittests * This fix is an updated version of Todd's lp720157. Adds SignatureVersion checking for Amazon EC2 API requests, and resolves bug #720157 * \* pep8 cleanups in migrations \* a few bugfixes * Removed stale references to XenAPI * Moved guest\_tool.py from etc/esx directory to tools/esx directory * Removed excess comment lines * Fix todo comment * execvp * Merged trunk * virt.xenapi.vmops.\_get\_vm\_opaque\_ref changed vm to vm\_ref and ref to obj * virt.xenapi.vmops.\_get\_vm\_opaque\_ref assumes VM.get\_record raises * add a delay before grabbing zipfile * Some more refactoring and a tighter unit test * Moved FLAGS.paste\_config to its re-usable location * Merged with trunk and fixed conflict. 
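The "tests and semaphore fix for image caching" entry above pairs with the race described earlier ("multiple greenthreads were attempting to resize a file at the same time"): the fix keys a semaphore on the destination filename. A rough sketch using plain threading in place of Nova's eventlet primitives:

```python
import collections
import os
import threading

# One semaphore per cached file, created on first use.
_semaphores = collections.defaultdict(threading.Semaphore)


def cache_image(filename, fetch_func):
    """Fetch an image into the cache at most once per filename.

    Distinct filenames use distinct semaphores, so unrelated downloads
    proceed concurrently, while a second request for the same file
    blocks until the first writer has finished.
    """
    with _semaphores[filename]:
        if not os.path.exists(filename):
            fetch_func(filename)
```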
Sigh * Converted tabs to spaces in bin/nova-api * A few more changes * Inhibit inclusion of stack traces in the logs UNLESS --verbose has been specified. This should help keep the logs compact, helping admins find the messages they're interested in (e.g., "Can't connect to MySQL server on '127.0.0.1' (111)") without having to sort through the stack traces, while still allowing developers to see those traces at will * Addresses bugs 704985 and 705453 by: * And unit tests * A few formatting niceties * First part of the bug fix * virt.xenapi.vmops.\_get\_vm\_opaque\_ref checks for basestring instance instead of str * virt.xenapi.vmops.\_get\_vm\_opaque\_ref exception caught properly * cleaned up virt.xenapi.vmops.\_get\_vm\_opaque\_ref. more reliable approach to checking if param is an opaque ref. code is cleaner * deleted network\_is\_associated from nova.db api * move the images\_dir out of the way when converting * pep8 * rework register commands based on review * added network\_get\_by\_cidr method to nova.db api * Use IptablesManager.semapahore from securitygroups driver to ensure we don't apply half a rule set * Log failed command execution if there are more retry attempts left * Make iptables rules class \_\_ne\_\_ just be inverted \_\_eq\_\_ * Invalid values for offset and limit params in http requests now return a 400 response with a useful message in the body. Also added and updated tests * Create --paste\_config flag defaulting to api-paste.ini and mv etc/nova-api.conf to match * Implementation for XenServer migrations. There are several places for optimization but I based the current implementation on the chance scheduler just to be safe. Beyond that, a few features are missing, such as ensuring the IP address is transferred along with the migrated instance. This will be added in a subsequent patch. Finally, everything is implemented through the Openstack API resize hooks, but actual resizing of the instance RAM and hard drive space is not yet implemented * Generate 'adminPass' and call set\_password when creating servers * Merged with current trunk * merge trunk * Resolving excess conflicts due to criss-cross in branch history * Make "dhcpbridge init" output correctly formatted leases information * Rebased to nova revision 761 * Fixed some more pep8 errors * \* Updated readme file with installation of suds-0.4 through easy\_install. 
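The "Inhibit inclusion of stack traces in the logs UNLESS --verbose has been specified" entry above reduces to gating the exc\_info argument on a flag; everything else about the log record stays the same. A minimal sketch, with the flag modelled as a module-level boolean:

```python
import logging

LOG = logging.getLogger('nova')
verbose = False  # stand-in for a --verbose flag


def log_exception(msg):
    """Log an error; attach the active traceback only when verbose."""
    # exc_info=True tells logging to pull in sys.exc_info() itself;
    # exc_info=False keeps the record to the one-line message, so an
    # admin sees "Can't connect to MySQL server on '127.0.0.1' (111)"
    # without pages of trace.
    LOG.error(msg, exc_info=verbose)
```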
\* Removed pass functions \* Fixed pep8 errors \* Few bug fixes and other commits * zipfile needs to be extracted after nova is running * make compute get the new images properly, fix a bunch of tests, and provide conversion commands * avoid possible string/int comparison problems * merge lp:nova * select cleanups * Merged to trunk rev 760, and fixed comment line indent according to Jay's comment * Fix renaming of instance fields using update\_instance api method * apirequest -> apireq * \* os\_type is no longer \`not null\` * respond well if personality attribute is incomplete * Added initial support to delete networks nova-manage * move the id wrapping into cloud layer instead of image\_service * added flatmanager unit testcases and renamed test\_network.py to test\_vlan\_network.py * remove xml testing infrastructure since it is not feasible to use at present * refactor server tests to support xml and json separately * More unit tests and rabbit hooks * Fix renaming of instance fields using update\_instance method * Fix api logging to show proper path and controller:action * merged trunk * \* Tests to verify correct vm-params for Windows and Linux instances * More fixes * delete unnecessary DECLARE * Fixed based on reviewer's comment. Main changes are below. 1. get\_vcpu\_total()/get\_memory\_mb()/get\_memory\_mb\_used() is changed for users who used non-linux environment. 2. test code added to test\_virt * merge lp:nova * merge trunk * fixed wrong local variable name in vmops * Use %s for instance-delete logging in case instance\_id comes through as a string * remove ensure\_b64\_encoding * add the ec2utils file i forgot * spawn a greenthread for image registration because it is slow * fix a couple issues with local, update the glance fake to actually return the same types as the real client, fix the image tests * make local image service work * use LocalImageServiceByDefault * Replace objectstore images with S3 image service backending to glance or local * Merged to trunk rev 759 * Merged trunk rev 758 * remove ra\_server from model and fix migration issue while running unit tests * Removed properties added to fixed\_ips by xs-ipv6 BP * altered ra\_server name to gateway\_v6 * merge lp:nova * rename onset\_files to personality\_files all the way down to compute manager * Changing output of status from showing the user as the owner, to showing the project * enforce personality quotas * localize a few error messages * Refactor wsgi.Serializer away from handling Requests directly; now require Content-Type in all requests; fix tests according to new code * pep8 * Renaming my migration yet again * Merged with Trunk * Use %s in case instance\_id came through as a string * Basic notifications drivers and tests * adding wsgi.Controller and wsgi.Request testing; fixing format keyword argument exception * This fix changes a tag contained in the DescribeKeyPairs response from to so that Amazon EC2 access libraries which does more strict syntax checking can work with Nova * some comments are modified * Merged to trunk rev 757. Main changes are below. 1. Rename db table ComputeService -> ComputeNode 2. 
nova-manage option instance\_type is reserved and we cannot use option instance, so change instance -> vm * adding wsgi.Request class to add custom best\_match; adding new class to wsgify decorators; replacing all references to webob.Request in non-test code to wsgi.Request * Remerged trunk, fixed a few conflicts * Add in multi-tenant support in openstack api * Merged to trunk rev 758 * Fix regression in the way libvirt\_conn gets its instance\_types * Updated DescribeKeyPairs response tag checked in nova/tests/test\_cloud.py * merged to trunk rev757 * Fixed based on reviewer's comments. Main changes are below. 1. Rename nova.compute.manager.ComputeManager.mktmpfile for better naming. 2. Several tests code in tests/test\_virt.py are removed. Because it only works in libvirt environment. Only db-related testcode remains * Fix regression in the way libvirt\_conn gets its instance\_types * more rigorous testing and error handling for os api personality * Updated Authors and .mailmap * Merged to rev 757 * merges dynamic instance types blueprint (http://wiki.openstack.org/ConfigureInstanceTypesDynamically) and bundles blueprint (https://blueprints.launchpad.net/nova/+spec/flavors) * moved migration to 008 (sigh) * merged trunk * catching bare except: * added logging to instance\_types for DB errors per code review * Very simple change checking for < 0 values in "limit" and "offset" GET parameters. If either are negative, raise a HTTPBadRequest exception. Relevant tests included * requested style change * Fixes Bug #715424: nova-manage : create network crashes when subnet range provided is not enough , if the network range cannot fit the parameters passed, a ValueError is raised * adding new source docs * corrected error message * changed \_context * pep8 * added in req.environ for context * pep8 * fixed \_context typo * coding style change per devcamcar review * fixed coding style per devcamcar review notes * removed create and delete method (and corresponding tests) from flavors.py * Provide the ability to rescue and unrescue a XenServer instance * Enable IPv6 injection for XenServer instances. Added addressV6, netmaskV6 and gatewayV6 columns to the fixed\_ips table via migration #007 as per NTT FlatManager IPv6 spec * Updated docstrings * add support for quotas on file injection * Added IPv6 migrations * merge fixes * Inject IPv6 data into XenStore for instance * Change DescribeKeyPairs response tag from keypairsSet to keySet, and fix lp720133 * Port Todd's lp720157 fix to the current trunk, rev 752 * Changed \_get\_vm\_opaqueref removing test-specific code paths * Removed excess TODO comments and debug line * initial commit of vnc support * merged trunk * Changed ra\_server to gateway\_v6 and removed addressv6 column from fixed\_ips db table * \* Added first cut of migration for os\_type on instances table \* Track os\_type when taking snapshots * merging trunk * \* Added ability to launch XenServer instances with per-os vm-params * test osapi server create with multiple personalities * ensure personality contents are b64 encoded * Merged trunk * Fixed pep8 issues, applied jaypipes suggestion * Rebased to nova revision 752 * Use functools.wraps to make sure wrapped method's metadata (docstring and name) doesn't get mangled * merge from trunk * Fake database module for vmware vi api. Includes false injection layer at the level of API calls. This module is base for unit tests for vmwareapi module. 
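The "Use functools.wraps to make sure wrapped method's metadata (docstring and name) doesn't get mangled" entry above is the standard fix for decorators that hide the functions they wrap. An illustrative sketch; the decorator and names are invented, not the patched Nova code:

```python
import functools


def requires_admin(func):
    @functools.wraps(func)  # copy __name__/__doc__ onto the wrapper
    def wrapper(context, *args, **kwargs):
        if not getattr(context, 'is_admin', False):
            raise PermissionError('admin context required')
        return func(context, *args, **kwargs)
    return wrapper


@requires_admin
def describe_hosts(context):
    """List compute hosts."""


print(describe_hosts.__name__)  # 'describe_hosts', not 'wrapper'
```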
The unit tests run regardless of presence of ESX/ESXi server as compute provider in OpenStack * Review feedback * Updated the code to include support for guest consoles, VLAN networking for guest machines on ESX/ESXi servers as compute providers in OpenStack. Removed dependency on ZSI and now using suds-0.4 to generate the required stubs for VMware Virtual Infrastructure API on the fly for calls by vmwareapi module * Added support for guest console access for VMs running on ESX/ESXi servers as compute providers in OpenStack * Support for guest consoles for VMs running on VMware ESX/ESXi servers. Uses vmrc to provide the console access to guests * Minor modification to document. Removed excess flags * Moved the guest tools script that does IP injection inside VM on ESX server to etc/esx directory from etc/ directory * support adding a single personality in the osapi * corrected copyrights for new files * Updated with flags for nova-compute, nova-network and nova-console. Added the flags, --vlan\_interface= --network\_driver=nova.network.vmwareapi\_net [Optional, only for VLAN Networking] --flat\_network\_bridge= [Optional, only for Flat Networking] --console\_manager=nova.console.vmrc\_manager.ConsoleVMRCManager --console\_driver=nova.console.vmrc.VMRCSessionConsole [Optional for OTP (One time Passwords) as against host credentials] --vmwareapi\_wsdl\_loc=/vimService.wsdl> * Fixed trunk merge issues * Merged trunk * At previous commit, I forgot to erase a conflict - fixed it * merged to trunk rev 752 * Rebased at lp:nova 759 * test\_compute is changed b/c lack of import instance\_types * rename db migration script * 1. merged trunk rev749 2. rpc.call returns '/' as '\/', so nova.compute.manager.mktmpfile, nova.compute.manager.confirm.tmpfile, nova.scheduler.driver.Scheduler.mounted\_on\_same\_shared\_storage are modified following these changes. 3. nova.tests.test\_virt.py is modified so that other teams' modifications are easily detected, since the other team is using nova.db.sqlalchemy.models.ComputeService * updated docs * updated docs * Fixed xenapi tests. Gave up on clever things with map stored as string in xenstore. Used ast.literal\_eval instead * This branch implements the openstack-api-hostid blueprint: "Openstack API support for hostId" * refactored adminclient * No reason to initialize metadata twice * Unit tests fixed partially. Still need to address checking data injected into xenstore; need to convert string into dict or similar. Also todo PEP8 fixes * replaced ugly INSTANCE\_TYPE constant with (slightly less ugly) stubs * add test for instance creation without personalities * fixed pep8 * Add a lock\_path flag for lock files * refactored nova-manage list (-all, ) and fixed docs * moved nova-manage flavors docs * Edited \`nova.api.openstack.common:limited\` method to raise an HTTPBadRequest exception if a negative limit or offset is given. I'm not confident that this is the correct approach, because I guess this method could be called out of an API/WSGI context, but the method \*is\* located in the OpenStack API module and is currently only used in WSGI-capable methods, so we should be safe * merge trunk * moving nova-manage integration tests to smoke tests * Wrapped the instance\_types comparison with an int and added a test case for it.
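The 'nova.api.openstack.common:limited' entry above, together with "Invalid values for offset and limit params in http requests now return a 400 response with a useful message in the body", describes one small guard. An approximate sketch against the webob API those commits mention, not the exact Nova implementation:

```python
import webob.exc


def limited(items, request, max_limit=1000):
    """Slice items by the request's 'offset'/'limit' GET parameters,
    rejecting non-integer or negative values with a 400."""
    try:
        offset = int(request.GET.get('offset', 0))
        limit = int(request.GET.get('limit', max_limit))
    except ValueError:
        raise webob.exc.HTTPBadRequest('offset and limit must be integers')
    if offset < 0 or limit < 0:
        raise webob.exc.HTTPBadRequest('offset and limit must be positive')
    limit = min(max_limit, limit or max_limit)
    return items[offset:offset + limit]
```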
Removed the inadvertently added newline * Rename migration to coincide with latest trunk changes * Adds VHD build support for XenServer driver * Suppress stack traces unless --verbose is specified * Removed extraneous newline * Merging trunk to my branch. Fixed a conflict in servers.py * Fixed obvious errors with flags. Note: tests still fail * Merging trunk * Fixed default value for xenapi\_agent\_path flag * 1) merge trunk 2) removed preconfigure\_xenstore 3) added jkey for broadcast address in inject\_network\_info 4) added 2 flags: 4.1) xenapi\_inject\_image (default True) This flag allows for turning off data injection by mounting the image in the VDI (agreed with Trey Morris) 4.2) xenapi\_agent\_path (default /usr/bin/xe-update-networking) This flag specifies the path where the agent should be located. It makes sense only if the above flag is True. If the agent is found, data injection is not performed * Wrap IptablesManager.apply() calls in utils.synchronized to avoid having different workers step on each other's toes * merge trunk * Add utils.synchronized decorator to allow for synchronising method entrance across multiple workers on the same host * execvp * execvp * execvp * execute: shell=True removed * Add lxc to the libvirt tests * Clean up the mount points when it shutsdown * Add ability to mount containers * Add lxc libvirt driver * Rebased to Nova revision 749 * added listing of instances running on a specific host * fixed FIXME * beautification.. * introduced new flag "max\_nbd\_devices" to set the number of possible NBD devices * renamed flag from maximum\_... to max\_.. * replaced ConnectionFailed with Exception in tools/euca-get-ajax-console was not working for me with euca2tools 1.2 (version 2007-10-10, release 31337) * Did a pull from trunk to be sure I had the latest, then deleted the test directory. I guess it appeared when I started using venv. Doh * Deleting test dir from a pull from trunk * introduced new flag "maximum\_nbd\_devices" to set the number of possible NBD devices * reverted my changes from https://code.launchpad.net/~berendt/nova/lp722554/+merge/50579 and reused the existing db api methods to add the disabled services. 
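The "Add utils.synchronized decorator to allow for synchronising method entrance across multiple workers on the same host" entry above (with "Add a lock\_path flag for lock files" on the previous line) implies an on-disk lock rather than an in-process mutex, since the workers are separate processes. A plausible sketch built on flock; the default path and names are assumptions:

```python
import fcntl
import functools
import os


def synchronized(name, lock_path='/var/lock/nova'):
    """Serialize the decorated callable across processes on this host
    by holding an exclusive flock on a file derived from name."""
    def wrap(func):
        @functools.wraps(func)
        def inner(*args, **kwargs):
            os.makedirs(lock_path, exist_ok=True)
            path = os.path.join(lock_path, 'nova-%s.lock' % name)
            with open(path, 'w') as lock_file:
                fcntl.flock(lock_file, fcntl.LOCK_EX)  # freed on close
                return func(*args, **kwargs)
        return inner
    return wrap


@synchronized('iptables')
def apply_rules():
    """Only one worker manipulates iptables at a time."""
```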
Looks much better now :) * add timeout and retry for ssh * Makes nova-api correctly load the default flagfile * force memcache key to be str * only create auth connection if cache misses * No reason to dump a stack trace just because the AMQP server is unreachable; an error notification should be sufficient * Add error message to the error report so we know why the AMQP server is unreachable * No reason to dump a stack trace just because we can't reach the AMQP server; it ends up being just noise * DescribeInstances modified to return ipv6 fixed ip address in case of flatmanager * Bootlock original instance during rescue * merge with zones2 fixes and trunk * check if QUERY\_STRING is empty or not before building the request URL in bin/nova-ajax-console-proxy * trunk merge * API changed to new style class * trunk merge, pip-requires and novatools to novaclient changes * Fixes FlatDHCP by making it inherit from NetworkManager and moving some methods around * fixed: bin/nova-ajax-console-proxy:66:19: W601 .has\_key() is deprecated, use 'in' * merged trunk * add a caching layer to the has\_role call to increase performance * Removed unnecessary compute import * Set rescue instance VIF device * use default flagfile in nova-api * Add tests for 718999, fix a little brittle code introduced by the committed fix * Rename test to describe what it actually does * Copy over to current trunk my tests, the 401/500 fix, and a couple of fixes to the committed fix which was actually brittle around the edges.. * I'm working on consolidating install instructions specifically (they're the most asked-about right now) and pointing to the docs.openstack.org site for admin docs * check if QUERY\_STRING is empty or not before building the request URL * Teardown rescue instance * Merged trunk * Create rescue instance * Merging trunk, conflicts fixed * Verify status of image is active * Rebased at lp:nova 740 * merged with trunk * Cleanup db method names for dealing with auth\_tokens to follow standard naming pattern * The proposed bug fix stubs out the \_is\_vdi\_pv routine for testing purposes * revert a few unnecessary changes to base.py * removed unused references to unittest * add customizable tempdir and remove extra code * Pass id of token to be deleted to the db api, not the actual object * Removing unnecessary headers * Rename auth\_token db methods to follow standard * Removing unnecessary nokernel stuff * Adding \_make\_subprocess function * No longer uses image/ directory in tarball * Merging trunk, small fixes * make smoketests run with nose * IPV6 FlatManager changes * Make tests start with a clean database for every test * merge trunk * merge clean db * merged trunk * sorry, pep8 * adds live network injection/reconfiguration. Some refactoring * forgot to get vm\_opaque\_ref * new tests * service capabilities test * moved network injection and vif creation to above vm start in vmops spawn * Merged trunk * nothing * Removes processName from debug output since we aren't using multiprocessing and it doesn't exist in python 2.6.1 * Add some methods to the ec2 admin api to work with VPNs.
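Three adjacent entries above ("add a caching layer to the has\_role call to increase performance", "force memcache key to be str", "only create auth connection if cache misses") outline one optimization. Roughly, with a python-memcached style client; the structure and attribute names are guesses rather than the original code:

```python
def has_role(self, user_id, role, project_id=None):
    # memcache rejects unicode keys, hence the str() coercion.
    key = str('role-%s-%s-%s' % (user_id, role, project_id))
    cached = self.mc.get(key)
    if cached is not None:
        return cached == 'T'
    # Cache miss: only now pay the cost of an auth backend connection.
    with self.driver() as drv:
        result = drv.has_role(user_id, role, project_id)
    self.mc.set(key, 'T' if result else 'F', time=600)
    return result
```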
Also implements and properly documents the get\_hosts method * Fix copypasta pep8 violation * moved migrate script to 007 (again..sigh) * Don't require metadata (hotfix for bug 724143) * merge trunk * Merged trunk * Updated email in Authors * Easy and effective fix for getting the DNS value from flag file, when working in FlatNetworking mode * Some first steps towards resolving some of the issues brought up on the mailing list related to documenting flags * Support HP/LeftHand SANs. We control the SAN by SSHing and issuing CLIQ commands. Also improved the way iSCSI volumes are mounted: try to store the iSCSI connection info in the volume entity, in preference to doing discovery. Also CHAP authentication support * This fix checks whether the boot/guest directory exists on the hypervisor. If that is not the case, it creates it * Globally exclude \*.pyc files from generated tarballs * stubbing out \_is\_vdi\_pv for test purposes * merge trunk * Globally exclude .pyc files from tarball contents * Get DNS value from Flag, when working in FlatNetworking mode. Passing the flag was ineffective previously. This is an easy fix. I think we would need nova-manage to accept dns also from command line * xenapi plugin function now checks whether /boot/guest already exists. If not, it creates the directory * capability aggregation working * fix check for existing port 22 rule * move relevant code to baseclass and make flatdhcp not inherit from flat * Hotfix to not require metadata * Documentation fixes so that output looks better * more smoketest fixes * Removed Milind from Authors file, as individual Contributer's License Agreement & Ubuntu code of conduct are not yet signed * Fixed problems found in localized string formatting. Verified the fixes by running ./run\_tests.sh -V * Change missed reference to run\_tests.err.log * PEP 257 fixes * Merged with trunk * fix missed err.log * Tests all working again * remove extra flag in admin tests * Revert commit 709. 
This fixes issues with the Openstack API causing 'No user for access key admin' errors * Adds colors to output of tests and cleans up run\_tests.py * Reverted bad-fix to sqlalchemy code * Merged with trunk * added comments about where code came from * merge and fix conflicts * Prevent logging.setup() from generating a syslog handler if we didn't request one (breaks on mac) * fix pep8 * merged upstream * Changed create from a @staticmethod to a @classmethod * revert logfile redirection and make colors work by temporarily switching stdout * merged trunk * add help back to the scripts that don't use service.py * Alphabetize imports * remove processName from debug output since we aren't using multiprocessing and it doesn't exist in python 2.6.1 * updates to nova.flags to get help working better * Helper function that supports XPath style selectors to traverse an object tree e.g * tests working again * Put back the comments I accidentally removed * Make sure there are two blank links after the import * Rename minixpath\_select to get\_from\_path * Fixes the describe\_availability\_zones to use an elevated context when getting services and the db calls to pass parameters correctly so is\_admin check works * Fix pep8 violation (trailing whitespace) * fix describe\_availability\_zones * Cope when we pass a non-list to xpath\_select - wrap it in a list * Fixes existing smoketests and splits out sysadmin tests from netadmin tests * Created mini XPath implementation, to simplify mapping logic * move the deletion of the db into fixtures * merged upstream * Revert commit 709. This fixes issues with the Openstack API causing 'No user for access key admin' errors * put the redirection back in to run\_tests.sh and fix terminal colors by using original stdout * Deleted trailing whitespace * Fixes and optimizes filtering for describe\_security\_groups. Also adds a unit test * merged trunk * fix for failing describe\_instances test * merged trunk * use flags for sqlite db names and fix flags in dhcpbridge * merged trunk * Fixes lp715424, code now checks network range can fit num\_networks \* network\_size * The proposed branch prevents FlatManager from executing network initialisation tasks contained in linux\_net.init\_host(), which are unnecessary when flat networking is used * Adds some features to run\_tests.sh: - if it crashes right away with a short erorr log, print that directly - allow specifying tests without the nova.tests part * The kernel\_id and the ramdisk\_id are optional, yet the OpenStack API was requiring them. In addition, with the ObjectStore these properties are not under 'properties' (as they are with Glance) * merged trunk * merge trunk * Initial support for per-instance metadata, though the OpenStack API. Key/value pairs can be specified at instance creation time and are returned in the details view. Support limits based on quota system * Merged trunk * Removed pass * Changed unit test to refer to compute API, per Todd's suggestion. Avoids needing to extend our implementation of the EC2 API * Fixes lots of errors in the unit tests * dump error output directly on short import errors * allow users to omit 'nova.tests' with run\_tests * Merged trunk * \* Took care of localization of strings \* Addressed all one liner docstrings \* Added Sateesh, Milind to Authors file * Fixed pep8 errors * FlatManager.init\_host now inhibits call to method in superclass. 
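The mini-XPath entries above ("Helper function that supports XPath style selectors to traverse an object tree", "Rename minixpath\_select to get\_from\_path", "Cope when we pass a non-list to xpath\_select - wrap it in a list") describe a small recursive walker. An approximate sketch:

```python
def get_from_path(items, path):
    """Collect every non-None value reached by walking an 'a/b/c'
    selector through nested dicts, objects and lists."""
    if not isinstance(items, list):
        items = [items]              # cope with a bare, non-list input
    if not path:
        return items
    first, _, remainder = path.partition('/')
    results = []
    for item in items:
        if item is None:
            continue
        if isinstance(item, dict):
            value = item.get(first)
        else:
            value = getattr(item, first, None)
        if isinstance(value, list):
            results.extend(value)    # flatten lists, as XPath would
        elif value is not None:
            results.append(value)
    return get_from_path(results, remainder)


# get_from_path({'a': [{'b': 1}, {'b': 2}]}, 'a/b')  ->  [1, 2]
```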
Floating IP methods have been redefined in FlatManager to raise NotImplementedError * speed up network tests * merged trunk * move db creation into fixtures and clean db for each test * fix failures * remove unnecessary stubout * Lots of test fixing * Update the admin client to deal with VPNs and have a function host list * Removed unused import & formatting cleanups * Exit with exit code 1 if conf cannot be read * Return null if no kernel\_id / ramdisk\_id * Reverted change to focus on the core bug - kernel\_id and ramdisk\_id are optional * Make static create method behave more like other services * merged fix-describe-groups * add netadmin smoketests * separate out smoketests and add updated nova.sh * fix and optimize security group filtering * Support service-like wait behaviour for API service * Added create static method to ApiService * fix test * Refactoring nova-api to be a service, so that we can reuse it in tests * test that shows error on filtering groups * don't make a syslog handler if we didn't ask for one * Don't blindly concatenate queue name if second portion is None * Missing import for nova.exceptions (!) * At the moment --pidfile is still used in some scripts in contrib/puppet/. I don't use puppet, please check if there are possible side effects * We're not using prefix matching on AMQP, so fakerabbit shouldn't be doing it! * merge fixes from anso branch * merged trunk * Removed block of code that resurrected itself in the last merge * Added Andy Southgate to the Authors file * Merged with trunk, including manual conflict resolution in nova/virt/disk.py and nova/virt/xenapi/vmops.py * Put the whitespace back \*sigh\* * Remove duplicate import gained across a merge * Rename "SNATTING" chain to "snat" * Fix DescribeRegion answer by introducing '{ec2,osapi}\_listen' flags instead of overloading {ec2,osapi}\_host. Get rid of paste\_config\_to\_flags, bin/nova-combined. Adds debug FLAGS dump at start of nova-api * Also remove nova-combined from setup.py * Fixed some docstring * Get rid of nova-combined, see rationale on ML * Merged trunk * no, really fix lp721297 this time * Updated import statements according to HACKING guidelines. Added docstrings to each document. Verified pep8 over all files. Replaced some constants by enums accordingly. Still a little bit more left in vm\_util.py and vim\_util.py files * Add flags for listen\_port to nova-api. This allows us to listen on one port, but return another port (for a proxy or load balancer) in calls like describe\_regions, etc * Fix tiny mistakes! (remove unnecessary comment, etc) * Fixed based on reviewer's comment. 1. Change docstrings format 2. Fix comment grammar mistake, etc * PEP8 again * Account for the fact that iptables-save outputs rules with a space at the end. Reverse the rule deduplication so that the last one takes precedence * floating-ip-snat was too long. Use floating-snat instead * PEP8 adjustments * Remove leftover from debugging * Add a bunch of tests for everything * Fixes various issues regarding verbose logging and logging errors on import * merged trunk * Add a new chain, floating-ip-snat, at the top of SNATTING, so that SNATting for floating ips gets applied before the default SNAT rule * Address some review comments * Some quick test cleanups, first step towards standardizing the way we start services in tests * use a different flag for listen port for apis * added disabled services to the list of displayed services in bin/nova-manage * merged to trunk rev709.
NEEDS to be fixed based on 3rd reviewer's comment * just add 005\_add\_live\_migration.py * Fixed based on reviewer's comment. 1. DB schema change vcpu/memory/hdd info were stored into Service table. but reviewer pointed out to me creating new table is better since Service table has too much columns * update based on prereq branch * update based on prereq branch * fixed newline and moved import fake\_flags into run\_tests where it makes more sense * merged fix * remove changes to test db * Fixed my confusion in documenting the syntax of iSCSI discovery * pretty colors for logs and a few optimizations * Renamed db\_update to model\_update, and lots more documentation * modify tests to use specific hosts rather than default * Merged with head * remove keyword argument, per review * move test\_cloud to use start\_service, too * add a start\_service method to our test baseclass * add a test for rpc consumer isolation * Merged with trunk * The OpenStack API was using the 'secret' as the 'access key'. There is an 'access key' and there is a 'secret key'. Access key ~= username. Secret key ~= password. This fix is necessary for the OpenStack Python API bindings to log in * Add a bunch of docs for the new iptables hotness * fix pep8 and remove extra reference to reset * switch to explicit call to logging.setup() * merged trunk * Adds translation catalogs and distutils.extra glue code that automates the process of compiling message catalogs into .mo files * Merged with trunk * make sure that ec2 response times are xs:dateTime parsable * Removing pesky DS\_Store files too. Begone * Updated to remove built docs * Removing duplicate installation docs and adding flag file information, plus pointing to docs.openstack.org for Admin-audience docs * introducing a new flag timeout\_nbd for manually setting the time in seconds for waiting for an upcoming NBD device * use tests.sqlite so it doesn't conflict with running db * cleanup from review * Duh, continue skips iteration, not pass. #iamanidiot * reset to notset if level isn't in flags * Enable rescue testing * PEP8 errors and remove check in authors file for nova-core, since nova-core owns the translation export branch * Merged trunk * Stub out VM create * \* Removed VimService\_services.py & VimService\_services\_types.py to reduce the diffs to normal. These 2 files are auto-generated files containing stubs for VI SDK API end points. The stub files are generated using ZSI SOAP stub generator module ZSI.commands.wsdl2py over Vimservice.wsdl distributed as part of VMware Virtual Infrastructure SDK package. To not include them in the repository we have few options to choose from, 1) Generate the stub files in build time and make them available as packages for distribution. 2) Generate the stub files in installation/configuration time if ESX/ESXi server is detected as compute provider. 
Further to this, we can try to reduce the size of stub files by attempting to create stubs only for the API end points required by the module vmwareapi * introducing a new flag timeout\_nbd for manually setting the time in seconds for waiting for an upcoming NBD device * \* Removed nova/virt/guest-tools/guest\_tool.bat & nova/virt/guest-tools/guest\_tool.sh as guest\_tool.py can be invoked directly during guest startup * More PEP-8 * Wrap ipv6 rules, too * PEP-8 fixes * Allow non-existing rules to be removed * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ NOVA-CORE DEVELOPERS SHOULD NOT REVIEW THIS MERGE PROPOSAL ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ * merged with nova trunk revision #706 * Fix typo * Unfilter instance correctly on termination * move exception hook into appropriate location and remove extra stuff from module namespace * Also remove rules that jump to deleted chains * simplify logic for parsing log level flags * reset all loggers on flag change, not just root * add docstring to reset method * removed extra comments and initialized from flags * fix nova-api as well * Fix refresh sec groups * get rid of initialized flag * clean up location of method * remove extra references to logging.basicConfig * move the fake initialized into fake flags * fixes for various logging errors and issues * fanout works * fanout kinda working * service ping working * scheduler manager * tests passing * start of fanout * merge trunk * previous trunk merge * puppet scripts only there as an example, should be moved to some other place if they are still necessary * Various optimizations of lookups relating to users * If there are no keypairs registered on a create call, output a useful error message rather than an out-of-range exception * Fixes vpn images to use kernel and ramdisk specified by the image * added elif branch to handle the conversion of datetime instances to isoformat instead of plain string conversion * Calculate time correctly for ec2 request logs * fix ec2 launchtime response not in iso format bug * pep8 leftover * move from datetime.datetime.utcnow -> utils.utcnow * pass start time as a param instead of making it an attribute * store time when RequestLogging starts instead of using context's time * Fix FakeAuthManager so that unit tests pass; I believe it was matching the wrong field * more optimizations context.user.id to context.user\_id * remove extra * replace context.user.is\_admin() with context.is\_admin because it is much faster * remove the weird is\_vpn logic in compute/api.py * Don't crash if there's no 'fixed\_ip' attribute (was returning None, which was unsubscriptable) * ObjectStore doesn't use properties collection; kernel\_id and ramdisk\_id aren't required anyway * added purge option and tightened up testing * Wrap iptables calls in a semaphore * pep8 * added instance types purge test * Security group fallback is named sg-fallback * Rename a few things for more clarity * Port libvirt\_conn.IptablesDriver over to use linux\_net.IptablesManager * merged trunk * Typo fix * added admin api call for injecting network info, added api test for inject network info * If there are no keypairs, output a useful error message * Fix typo (?) in authentication logic * Changing type -> image\_type * Pep8 cleanup * moved creating vifs to its own function, moved inject network to its own function * sandy y u no read hacking guide and import classes? * Typo fix * XenAPI tests * Introduce IptablesManager in linux\_net. 
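The "Introduce IptablesManager in linux\_net" and "Wrap iptables calls in a semaphore" entries describe moving from scattered iptables invocations to a single owner of the rule set that applies it atomically. A much-simplified sketch of that shape; the real manager also tracks chains, multiple tables and ip6tables:

```python
import threading


class IptablesManager(object):
    """Own nova's iptables rules; apply the whole set atomically."""

    def __init__(self, execute):
        self.execute = execute        # an execute(*argv, ...) callable
        self.rules = []               # desired (chain, rule) pairs
        self.semaphore = threading.Semaphore()

    def add_rule(self, chain, rule):
        if (chain, rule) not in self.rules:
            self.rules.append((chain, rule))

    def remove_rule(self, chain, rule):
        if (chain, rule) in self.rules:   # tolerate missing rules
            self.rules.remove((chain, rule))

    def apply(self):
        # Serialized: two concurrent iptables-restore runs would each
        # install a snapshot missing the other's rules.
        with self.semaphore:
            lines = ['*filter']
            lines += ['-A %s %s' % (chain, rule)
                      for chain, rule in self.rules]
            lines.append('COMMIT')
            self.execute('iptables-restore',
                         process_input='\n'.join(lines) + '\n')
```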
Port every use of iptables in linux\_net to it * Use WatchedFileHandler instead of RotatingFileHandler * Resize compute tests * Support for HP SAN * Merging trunk to my branch. Fixed conflicts in Authors file and .mailmap * Rename migration 004 => 005 * Added Author and tests * Merging trunk * fixups based on merge comments * Fixed testing mode leftover * PEP8 fix * Remove paste\_config\_to\_flags since it's now unused * Port changes to nova-combined, rename flags to API\_listen and API\_listen\_port * Set up logging once FLAGS properly read, no need to redo logging config anymore (was inoperative anyway) * Switch to API\_listen and API\_listen\_port, drop wsgi.paste\_config\_to\_flags * added new class Instances to manage instances and added a new listing method into the class * added functionality to list only fixed ip addresses of one node and added exception handling to list method * Use WatchedFileHandler instead of RotatingFileHandler * Incorporating minor cleanups suggested by Rick Harris: \* Use assertNotEqual instead of assertTrue \* Use enumerate function instead of maintaining a counter * Resize compute tests * fixed based on reviewer's comment. 1. erase wrapper function (remove/exists/mktempfile) from nova.utils. 2. nova-manage service describeresource(->describe\_resource) 3. nova-manage service updateresource(->update\_resource) 4. erase "my mistake print" statement * Tests * pep8 * merged trunk * Makes FlatDHCPManager clean up old fixed\_ips like VlanManager * Correctly pass the associate parameter for project\_get\_network through the IMPL layer in the db api * changed migration to 006 for trunk compatibility * completed doc and added --purge option to instance type delete * moved inject network info to a function which accepts only instance, and call it from reset network * Test changes * Merged with trunk * Always compare incoming flavor\_id as an int * Initial support for per-instance metadata, through the OpenStack API. Key/value pairs can be specified at instance creation time and are returned in the details view. Support limits based on quota system * a few changes and a bunch of unit tests * remove leftover periodic tasks * Added support for feature parity with the current Rackspace Cloud Servers practice of "injecting" files into newly-created instances for configuration, etc. However, this is in no way restricted to only writing files to the guest when it is first created * missing docstring and fixed copyrights * move periodic tasks to base class based on class variable as per review * Correctly pass the associate parameter to project\_get\_network * Add \*\*kwargs to VlanManager's create\_networks so that optional args from other managers don't break * Uncommitted changes using the wrong author, and re-committing under the correct author * merge with zone phase 1 again * Added http://mynova/v1.0/zones/ api options for add/remove/update/delete zones. child\_zones table added to database and migration. Changed novarc vars from CLOUD\_SERVERS\_\* to NOVA\_\* to work with novatools.
See python-novatools on github for help testing this * pip requires novatools * copyright notice * moved 003\_cactus.py migration file to 004\_add\_instance\_types.py to avoid naming collision with new trunk migration * Add \*\*kwargs to VlanManager's create\_networks so that optional args from other managers don't break * merge with zone phase 1 * changed from 003-004 migration * merged lp:~jk0/nova/dynamicinstancetypes * Merged trunk * merge from dev * fixed strings * multi positional string fix * Use a semaphore to ensure we don't run more than one iptables-restore at a time * Fixed unit test * merge with trunk * fixed zone list tests * Make eth0 the default for the public\_interface flag * Finished flavor OS API stubs * Re-alphabetise Authors, move extra addresses into .mailmap * Re-alphabetise Authors, move extra addressses into .mailmap * Move the ramdisk logging stuff * Hi guys * fixup * zone list now comes from scheduler zonemanager * Stop blowing away the ramdisk * Rebased at lp:nova 688 * Update the Openstack API so that it returns 'addresses' * I have a bug fix, additional tests for the \`limiter\` method, and additional commenting for a couple classes in the OpenStack API. Basically I've just tried to jump in somewhere to get my feet wet. Constructive criticism welcome * added labels to networks for use in multi-nic added writing network data to xenstore param-list added call to agent to reset network added reset\_network call to openstack api * Add a command to nova-manage to list fixed ip's * Foo * comments + Englilish, changed copyright in migration, removed network\_get\_all from db.api (vestigial) * Adding myself to Authors and .mailmap files * example: * Switched mailmap entries * Supporting networks with multiple PIFs. pep8 fixes unit tests passed * Merged kpepple * Merged trunk * More testing * Block diagram for vmwareapi module * added entry in the category list * Added vmwareapi module to add support of hypervisor vmware-vsphere to OpenStack * added new functionality to list all defined fixed ips * added more I18N * Merged trunk and fixed conflict with other Brian in Authors * removing superfluous pass statements; replacing list comprehension with for loop; alphabetizing imports * Rebased at lp:nova 687 * added i18n of 'No networks defined' * Make eth0 the default for FLAGS.public\_interface * Typo fixes * Merging trunk * Adding tests * first crack at instance types docs * merge trunk * style cleanup * polling tests * Use glance image type to determine disk type * Minor change. Adding a helper function stub\_instance() inside the test test\_get\_all\_server\_details\_with\_host for readability * Fixes ldapdriver so that it works properly with admin client. It now sanitizes all unicode data to strings before passing it into ldap driver. 
This may need to be rethought to work properly for internationalization * Moved definition of return\_servers\_with\_host stub to inside the test\_get\_all\_server\_details\_with\_host test * fixed * Pep8 fixes * Merging trunk * Adding basic test * Better exceptions * Update to our HACKING doc to add examples of our docstring style * add periodic disassociate from VlanManager to FlatDHCPManager * Flipped mailmap entries * -from migrate.versioning import exceptions as versioning\_exceptions + +try: + from migrate.versioning import exceptions as versioning\_exceptions +except ImportError: + try: + # python-migration changed location of exceptions after 1.6.3 + # See LP Bug #717467 + from migrate import exceptions as versioning\_exceptions + except ImportError: + sys.exit(\_("python-migrate is not installed. Exiting.")) * Accidently removed myself from Authors * Added alternate email to mailmap * zone manager tests * Merged to trunk * added test for reset\_network to openstack api tests, tabstop 5 to 4, renamed migration * Use RotatingFileHandler instead of FileHandler * pep8 fixes * sanitize all args to strings before sending them to ldap * Use a threadpool for handling requests coming in through RPC * Typos * Derp * Spell flags correctly (i.e. not in upper case) * Fixed merge error * novatools call to child zones done * novatools call to child zones done * Putting glance plugin under pep8 control * fixed authors, import sys in migration.py * Merged trunk * First commit of working code * Stubbed out flavor create/delete API calls * This implements the blueprint 'Openstack API support for hostId': https://blueprints.launchpad.net/nova/+spec/openstack-api-hostid Now instances will have a unique hostId which for now is just a hash of the host. If the instance does not have a host yet, the hostId will be '' * Fix for bug #716847 * merge trunk * First commit for xenapi-vlan-networking. Totally untested * added functionality to nova-manage to list created networks * Add back --logdir=DIR option. If set, a logfile named after the binary (e.g. nova-api.log) will be kept in DIR * Fix PEP-8 stuff * assertIsNone is a 2.7-ism * This branch should resolve nova bug #718675 (https://bugs.launchpad.net/nova/+bug/718675) * Added myself to the authors file * I fail at sessions * I fail at sessions * Foo * hurr durr * Merging trunk part 1 * stubbed out reset networkin xenapi VM tests to solve domid problem * foo * foo * Adding vhd hidden sanity check * Fixes 718994 * Make rpc thread pool size configurable * merge with trunk * fail * Fixing test by adding stub for get\_image\_meta * this bug bit me hard today. pv can be None, which does not translate to %d and this error gets clobbered by causing errors in the business in charge of capturing output and reporting errors * More pep8 fixes * Pep8 fixes * Set name-label on VDI * Merge * Don't hide RotatingFileHandler behind FileHandler's name * Refactor code that decides which logfile to use, if any * Fixing typo * polling working * Using Nova style nokernel * changed d to s * merge with trunk * More plugin lol * moved reset network to after boot durrrrr.. 
* Don't hide RotatingFileHandler behind FileHandler's name * removed flag --pidfile from nova/services.py * Added teammate Naveed to authors file for his help * plugin lol * Plugin changes * merging trunk back in; updating Authors conflict * Adding documentation * Regrouping methods so they make sense * zone/info works * Refactoring put\_vdis * Adding safe\_find\_sr * Merged lp:nova * Fixes tarball contents by adding missing scripts and files to setup.py / MANIFEST.in * Moving SR path code outside of glance plugin * When re-throwing an exception, use "raise", not "raise e". This way we don't lose the stack trace * Adding more documentation, code-cleanup * Replace placeholders in nova.pot with some actual values * The proposed fix puts a VM which fails to spawn in a (new) 'FAILED' power state. It does not perform a clean-up. This is because the user needs to know what has happened to the VM he/she was trying to run. Normally, API users do not have access to log files. In this case, the only way for the user to know what happened to the instance is to query its state (e.g.: doing euca-describe-instances). If we perform a complete clean-up, no information about the instance which failed to spawn will be left * Some trivial cleanups in context.py, mostly just a test of using the updated git-bzr-ng * Use eventlet.green.subprocess instead of standard subprocess * derp * Better host acquisition * zones merge * fixed / renamed migration scripts * Merged trunk * Update .pot file with source file and line numbers after running python setup.py build * Adds Distutils.Extra support, removes Babel support, which is half-baked at best * Pull in .po message catalogs from lp:~nova-core/nova/translations * Fix sporadically failing unittests * Missing nova/tests/db/nova.austin.sqlite file * Translations will be shipped in po/, not locale/ * Adding missing scripts and files to setup.py / MANIFEST.in * Fixes issues when running euca-run-instances and euca-describe-image-attribute against the latest nova/trunk EC2 API * initial * Naïve attempt at threading rpc requests * Beautify it a little bit, thanks to dabo * OS-55: Moved conn\_common code into disk.py * Break out of the "for group in rv" loop in security group unit tests so that we are sure we are dealing with the correct group * Tons o loggin * merged trunk * Refactored * Launchpad automatic translations update * trunk merge * better filtering * Adding DISK\_VHD to ImageTypes * Updates so that S3ImageService kernel\_id and ramdisk\_id mappings work with EC2 API * fixed nova-combined debug hack and renamed ChildZone to Zone * plugin * Removing testing statements * Adds missing flag that makes use\_nova\_chains work properly * bad plugin * bad plugin * bad plugin * fixed merge conflict * First cut on XenServer unified-images * removed debugging * fixed template and added migration * better filtering * Use RotatingFileHandler instead of FileHandler * Typo fixes * Resurrect logdir option * hurr * Some refactoring * hurr * Snapshot correctly * Added try clause to handle changed location of exceptions after 1.6.3 in python-migrate LP Bug #717467 * Use eventlet.green.subprocess instead of standard subprocess * Made kernel and ram disk be deleted in xen api upon instance termination * Snapshot correctly * merged recent version.
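The 'When re-throwing an exception, use "raise", not "raise e"' entry above merits a concrete example (the function names are invented):

```python
import logging

LOG = logging.getLogger(__name__)


def fetch(url):
    raise IOError('connection reset while fetching %s' % url)


def fetch_with_logging(url):
    try:
        return fetch(url)
    except IOError:
        LOG.error('fetch of %s failed, re-raising', url)
        # A bare `raise` re-raises the active exception with its
        # original traceback intact; `raise e` (on Python 2) rebuilt
        # the exception and lost the frames pointing into fetch().
        raise
```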
* wharrgarbl * merge jk0 branch (with trunk merge) which added additional columns for instance\_types (which are openstack api specific) * corrected model for table lookup * More fixes * Derp * fix for bug #716847 - if a volume has not been assigned to a host, then delete from db and skip rpc * added call to reset\_network from openstack api down to vmops * merging with trunk * Got rid of BadParameter, just using standard python ValueError * Merged trunk * support for multiple IPs per network * Fix DescribeRegion answer by using specific 'listen' configuration parameter instead of overloading ec2\_host * Fixed tables creation order and added clearing db after errors * Modified S3ImageService to return the format defined in BaseService to allow EC2 API's DescribeImages to work against Glance * re-add input\_chain because it got deleted at some point * Launchpad automatic translations update * Fixes a typo in the auth checking for DescribeAvailabilityZones * Fixes describe\_security\_groups by forcing it to return a list instead of a generator * return a list instead of a generator from describe\_groups * Hi guys * Added missing doc string and made a few style tweaks * fix typo in auth checking for describe\_availability\_zones * now sorting by project, then by group * Launchpad automatic translations update * Made a few tweaks to format of S3 service implementation * Merged trunk * First attempt to make all image services use similar schemas * fix :returns: and add pep-0257 * Preliminary fix for issue, need more thorough testing before pushing to lp * Launchpad automatic translations update * More typos * More typos * More typos * More typos * More typos * fixed exceptions import from python migrate * Cast to host * This fixes a lazy-load issue in describe-instances, which causes a crash.
The solution is to specifically load the network table when retrieving an instance * added instance\_type\_purge() to actually remove records from db * updated tests and added more error checking * Merged trunk * more error checking on inputs and better errors returned * Added more columns to instance\_types tables * Added LOG line to describe groups function to find out what's going on * joinedload network so describe\_instances continues to work * zone api tests passing * Create a new AMQP connection by default * First, not all * Merged to trunk and fixed merge conflict in Authors * rough cut at zone api tests * Following Rick and Jay's suggestions: - Fixed LOG.debug for translation - improved vm\_utils.VM\_Helper.ensure\_free\_mem * Create a new AMQP connection by default * after hours of tracking his prey, ken slowly crept behind the elusive wilderbeast test import hiding in the libvirt\_conn.py bushes and gutted it with his steely blade * fixed destroy calls * Forgot the metadata includes * added get IPs by instance * added resetnetwork to the XenAPIPlugin.dispatch dict * Forgot the metadata includes * Forgot the metadata includes * Typo fixes and some stupidity about the models * passing instance to reset\_network instead of vm\_ref, also not converting to an opaque ref before making plugin call * Define sql\_idle\_timeout flag to be an integer * forgot to add network\_get\_all\_by\_instance to db.api * template adjusted to NOVA\_TOOLS, zone db & os api layers added * Spawn from disk * Some more cleanup * sql\_idle\_timeout should be an integer * merged model change: flavorid needs to be unique in model * testing refactor * flavorid needs to be unique in model * Add forwarding rules for floating IPs to the OUTPUT chain on the network node in addition to the PREROUTING chain * typo * refactored api call to use instance\_types * Use a NullPool for sqlite connections (see the sketch below) * Get a fresh connection in rpc.cast rather than using a recycled one * Make rpc.cast create a fresh amqp connection. Each API request has its own thread, and they don't multiplex well * Only use NullPool when using sqlite * Also add floating ip forwarding to OUTPUT chain * trunk merge * removed ZoneCommands from nova-manage * Try using NullPool instead of SingletonPool * Try setting isolation\_level=immediate * This branch fixes bug #708347: RunInstances: Invalid instance type gives improper error message * Wrap line to under 79 characters * Launchpad automatic translations update * adding myself to Authors file * 1. Merged to rev654(?) 2. Fixed bug continuous request. if user continuouslly send live-migration request to same host, concurrent request to iptables occurs, and iptables complains. This version add retry for this issue * forgot to register new instance\_types table * Plugin tidying and more migration implementation * fixed overlooked mandatory changes in Xen * Renamed migration plugin * A lot of stuff * - population of public and private addresses containers in openstack api - replacement of sqlalchemy model in instance stub with dict * Fixes the ordering of init\_host commands so that iptables chains are created before they are used * Pass timestamps to the db layer in fixed\_ip\_disassociate\_all\_by\_timeout rather than converting to strings ahead of time, otherwise comparison between timestamps would often fail * Added support for 'SAN' style volumes. A SAN's big difference is that the iSCSI target won't normally run on the same host as the volume service
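On the NullPool entries above: SQLAlchemy's NullPool opens a fresh DBAPI connection per checkout instead of pooling, which sidesteps sharing sqlite connections across threads. A hedged sketch — the connection string and the branching are illustrative, not Nova's actual startup code::

    from sqlalchemy import create_engine
    from sqlalchemy.pool import NullPool

    sql_connection = "sqlite:///nova.sqlite"  # placeholder URL

    kwargs = {}
    if sql_connection.startswith("sqlite"):
        # SQLite connections are cheap and behave badly when shared
        # across greenthreads, so hand out a fresh connection each
        # time (NullPool) -- but only for sqlite.
        kwargs["poolclass"] = NullPool

    engine = create_engine(sql_connection, **kwargs)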
* added support to pull list of ALL instance types even those that are marked deleted * Indent args to ssh\_connect correctly * Fix PEP8 violations * Added myself to Authors * 1) Moved tests for limiter to test\_common.py (from \_\_init\_\_.py) and expanded test suite to include bad inputs and tests for custom limits (#2) * Added my mail alias (Part of an experiment in using github, which got messy fast...) * Fixed pep8 error in vm\_utils.py * Add my name to AUTHORS, remove parentheses from the substitution made in the previous commit * Don't convert datetime objects to a string using .isoformat(). Leave it to sqlalchemy (or pysqlite or whatever it is that does the magic) to work it out * Added test case for 'not enough memory'. Successfully ran unit tests. Fixed pep8 errors * Give a better error message if the instance type specified is invalid * Launchpad automatic translations update * added testing for instance\_types.py and refactored nova-manage to use instance\_types.py instead of going directly to db * added create and delete methods to instance\_types in preparation to call them from nova-manage * added testing for nova-manage instance\_type * additional error checking for nova-manage instance\_type * Typos and primary keys * Automates the setup for FlatDHCP regardless of whether the interface has an ip address * add docstring and revert set\_ip changes as they are unnecessary * Commas help * Changes and bug fixes * avoiding HOST\_UNAVAILABLE exception: if there is not enough free memory, the VM is not spawned at all. instance state is set to "SHUTDOWN" * merge lp:nova at revision #654 * merge with lp:nova * Fixed pep8 errors. Unit tests passed * merge source and remove ifconfig * fixes #713766 and probably #710959, please test the patch before committing it * use route -n instead of route to avoid chopped names * Updates to the multinode install doc based on Wayne's findings. Merged with trunk so should easily merge in * Checks whether the instance id is a list or not before assignment. This is to fix a bug relating to nova/boto. The AWS SDK libraries pass in a string, not a list. The euca tools pass in a list * Launchpad automatic translations update * Catching all socket errors in \_get\_my\_ip, since any socket error is likely enough to cause a failure in detection * Catching all socket errors in \_get\_my\_ip, since any socket error is likely enough to cause a failure in detection * blargh * Some stuff * added INSTANCE\_TYPES to test for compatibility with current tests * Checking whether the instance id is a list or not before assignment. This is to fix a bug relating to nova/boto. The AWS SDK libraries pass in a string, not a list.
The euca tools pass in a list * Added data\_transfer xapi plugin * Another quick fix to multinode install doc * Made updates to multinode install doc * fixed instance\_types methods to use database backend * require user context for most flavor/instance\_type read calls * added network\_get\_all\_by\_instance(), call to reset\_network in vmops * added new parameter --dhcp\_domain to set the used domain by dnsmasq in /etc/nova/nova.conf * minor * Fix for bug #714709 * A few changes * fixed format according to PEP8 * replaced all calls to ifconfig with calls to ip * added myself to the Authors file * applied http://launchpadlibrarian.net/63698868/713434.patch * Launchpad automatic translations update * aliased flavor to instance\_types in nova-manage. will probably need to make flavor a full-fledged class as users will want to list flavors by flavor name * simplified instance\_types db calls to return entire row - we may need these extra columns for some features and there seems to be little downside in including them. still need to fix testing calls * refactor to remove ugly code in flavors * updated api.create to use instance\_type table * added preliminary testing for bin/nova-manage. while I am somewhat conflicted about the path these tests have taken, I think it is better than no tests at all * rewrote nova-manage instance\_type to use correct db.api returned objects and have more robust error handling * instance\_types should return in predictable order (by name currently) * flavorid and name need to be unique in the database for the ec2 and openstack apis, respectively * corrected db.instance\_types to return expected dicts instead of lists. updated openstack flavors to expect dicts instead of lists. added deleted column to returned dict * converted openstack flavors over to use instance\_types table. a few pep changes * added FIXME(kpepple) comments for all constant usage of INSTANCE\_TYPES. updated api/ec2/admin.py to use the new instance\_types db table * Launchpad automatic translations update * allow for bridge to be the public interface * Removed (newly) unused exception variables * Didn't mean to actually make changes to the glance plugin * Added a bunch of stubbed out functionality * Moved ssh\_execute to utils; moved comments to docstring * Fixes for Vish & Devin's feedback * Fixes https://bugs.launchpad.net/nova/+bug/681417 * Don't swallow exception stack traces by doing 'raise e'; just use 'raise' * Implementation of 'SAN' volumes A SAN volume is 'special' because the volume service probably won't run on the iSCSI target. Initial support is for Solaris with COMSTAR (Solaris 11) * merging * Fixed PEP8 test problems, complaining about too many blank lines at line 51 * Adds logging.basicConfig() to run\_tests.py so that attempting to log debug messages from tests will work * Launchpad automatic translations update * flagged all INSTANCE\_TYPES usage with FIXME comment. Added basic usage to nova-manage (needs formatting). created api methods * added seed data to migration * Don't need a route for guests. Turns out the issue with routing from the guests was due to duplicate macs * Changes the behavior of run\_test.sh so that pep8 is only run in the default case (when running all tests).
It will no longer run when individual test cases are being given as in: * open cactus * some updates to HACKING to describe the docstrings * Casting to the scheduler * moves driver.init\_host into the base class so it happens before floating forwards and sets up proper iptables chains

2011.1
------

* Set FINAL = True in version.py * Open Cactus development * Set FINAL = True in version.py * pass the set\_ip from ensure\_vlan\_bridge * don't fail on ip add exists and recreate default route on ip move if needed * initial support for dynamic instance\_types: db migration and model, stub tests and stub methods * better setup for flatdhcp * added to inject networking data into the xenstore * forgot context param for network\_get\_all * Fixes bug #709057 * Add and document the provider\_fw method in virt/FakeConnection * Fix for LP Bug #709510 * merge trunk * fix pep8 error :/ * Changed default handler for uncaught exceptions. It uses logging instead of printing to stderr * Launchpad automatic translations update * rpartition sticks the rhs in [2] * Fix for LP Bug #709510 * change ensure\_bridge so it doesn't overwrite existing ips * Fix for LP Bug #709510 * Enabled modification of projects using the EC2 admin API * Reorder instance rules for provider rules immediately after base, before secgroups * Merged trunk * Match the initial db version to the actual Austin release db schema * 1. Discard nova-manage host list Reason: nova-manage service list can be a replacement. Changes: nova-manage * Only run pep8 after tests if running all the tests * add logging.basicConfig() to tests * fix austin->bexar db migration * woops * trivial cleanup for context.py * Made adminclient get\_user return None instead of throwing EC2Exception if requested user not available * pep8 * Added modify project to ec2 admin api * incorporate feedback from devin - use sql consistently in instance\_destroy also, set deleted\_at * Fixed whitespace * Made adminclient get\_user return None instead of throwing EC2Exception if requested user not available * OS-55: Fix typo for libvirt\_conn operation * merge trunk * remove extraneous line * Fixed pep8 errors * Changed default handler for uncaught exceptions. Logging with level critical instead of printing to stderr * Disassociate all floating ips on terminate instance * Fixes simple scheduler so run\_instance can be used by admins + availability zones * Makes having sphinx to build docs a conditional thing - if you have it, you can get docs. If you don't, you can't * Fixed a pep8 spacing issue * fixes for bug #709057 * Working on api / manager / db support for zones * Launchpad automatic translations update * Adds security group output to describe\_instances * Use firewall\_driver flag as expected with NWFilterFirewall. This way, either you use NWFilterFirewall directly, or you use IptablesFirewall, which creates its own instance of NWFilterFirewall for the setup\_basic\_filtering command. This removes the requirement that LibvirtConnection would always need to know about NWFirewallFilter, and cleans up the area where the flag is used for loading the firewall class * simplify get and remove extra reference to import logging * Added a test that checks for localized strings in the source code that contain position-based string formatting placeholders. If found, an exception message is generated that summarizes the problem, as well as the location of the problematic code.
This will prevent future trunk commits from adding localized strings that cannot be properly translated * Made changes based on code review * makes sure that : is in the availability zone before it attempts to use it to send instances to a particular host * Makes sure all instance and volume commands that raise not found are changed to show the ec2\_id instead of the internal id * remove all floating addresses on terminate instance * Merged in trunk changes * Fixed formatting issues in current codebase * Added the test for localized string formatting * Fixes NotFound messages in api to show the ec2\_id * Changed cpu limit to a static value of 100000 (100%) instead of using the vcpu value of 1. There is no weight/limit variable now so I see no other solution than the static max limit * Make nova.virt.images fetch images from a Glance URL when Glance is used as the image service (rather than unconditionally fetch them from an S3/objectstore URL) * Fixed spacing... AGAIN * Make unit tests clean up their mess in /tmp after themselves * Make xml namespace match the API version requested * Missing import in xen plugin * Shortened comment for 80 char limit * Added missing import * Naive, low-regression-risk fix enabling Glance to work with libvirt/hyperv * Add unit test for xmlns version matching request version * Properly pulling the name attribute from security\_group * adding testcode * Fix Bug #703037. ra\_server is None * Fix regression in s3 image service. This should be a feature freeze exception * I have a feeling if we try to migrate from imageId to id we'll be tracking it down a while * more instanceId => id fixes * Fix regression in imageId => id field rename in s3 image service * Apply lp:707675 to this branch to be able to test * merge trunk * A couple of bugfixes * Fixes a stupid mistake I made when I moved this method from a module into a class * Add dan.prince to Authors * Make xml namespace match the API version requested * Fix issue in s3.py where '\_fix\_image\_id' is not defined * added mapping parameter to write\_network\_config\_to\_xenstore * OS-55: Added a test case for XenAPI file-based network injection OS-55: Stubbed out utils.execute for all XenAPI VM tests, including command simulation where necessary * Simple little changes related to openstack api to work better with glance * Merged trunk * Cleaned up \_start() and \_shutdown() * Added missing int to string conversion * Simple little changes related to openstack api to work better with glance * use 'ip addr change' * Fix merge miss * Changed method signature of create\_network * merged r621 * Merged with http://bazaar.launchpad.net/~vishvananda/nova/lp703037 * Merged with vish branch * Prefixed ending multi-line docstring with a newline * Fixing documentation strings. Second attempt at pep8 * Removal of image tempdir in test tearDown. Also, reformatted a couple method comments to match the file's style * Add DescribeInstanceTypes to admin api. This lets the dashboard know what sizes can be launched (using the -t flag in euca-run-instances, for example) and what resources they provide * Rename Mock, since it wasn't a Mock * Add DescribeInstanceTypes to admin api (dashboard uses it) * Fix for LP Bug #699654 * Change how libvirt firewall drivers work to have meaningful flags * Fixed pep8 errors * This branch updates docs to reflect the db sync addition. It additionally adds some useful errors to nova-manage to help people that are using old guides. It wraps sqlalchemy errors in generic DBError.
Finally, it updates nova.sh to use current settings * Added myself to the authors list * fix pep8 issue (and my commit hook that didn't catch it) * Add a host argument to virt driver's init\_host method. It will be set to the name of host it's running on * merged trunk * Wraps the NotFound exception at the api layer to print the proper instance id. Does the same for volume. Note that euca-describe-volumes doesn't pass in volume ids properly, so you will get no error messages on euca-describe-volumes with improper ids. We may also need to wrap a few other calls as well * Fixes issue with SNATTING chain not getting created or added to POSTROUTING when nova-network starts * Fix for bug #702237 * Moving init\_host before metadata\_forward, as metadata\_forward modifies prerouting rules * another trunk merge * Limit all lines to a maximum of 79 characters * Perform same filtering for OUTPUT as FORWARD in iptables * Fixed up a little image\_id return * Trunk merged * This patch: * Trunk merged * In instance chains and rules for ipv4 and ipv6, ACCEPT target was missing * moved imageId change to s3 client * Migration for provider firewall rules * Updates for provider\_fw\_rules in admin api * Adds driver.init\_host() call to flatdhcp driver * Fixed pep8 errors * Fixed pep8 errors * No longer hard coding to "/tmp/nova/images/". Using tempdir so tests run by different people on the same development machine pass * Perform same filtering for OUTPUT as FORWARD in iptables. This removes a way around the filtering * Fix pep-8 problem from prereq branch * Add a host argument to virt driver's init\_host method. It will be set to the name of host it's running on * updated authors since build is failing * Adds conditional around sphinx inclusion * merge with trunk * Fixes project and role checking when a user's naming attribute is not uid * I am new to nova, and wanted to fix a fairly trivial bug in order to understand the process * Fix for LP Bug #707554 * Added iptables rule to IptablesFirewallDriver like in Hisaharu Ishii patch with some workaround * Set the default number of IPs to reserve for VPN to 0 * Merged with r606 * Properly fixed spacing issue for pep8 * Fixed spacing issue for pep8 * Fixed merge conflict * Added myself to ./Authors file * Switches from project\_get\_network to network\_get\_by\_instance, which actually works with all networking modes. Also removes a couple duplicate lines from a bad merge * Set the default number of IPs to reserve for VPN to 0 * Localized strings that employ formatting should not use positional arguments, as they prevent the translator from re-ordering the translated text; instead, they should use mappings (i.e., dicts). This change replaces all localized formatted strings that use more than one formatting placeholder with a mapping version
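The mapping form described above lets a translator reorder placeholders, which positional %s formatting cannot express. A before/after sketch (the message text itself is invented)::

    from gettext import gettext as _

    instance_id = "i-00000001"
    host = "node1"

    # Positional placeholders: every translation must keep this order.
    msg = _("Instance %s failed to start on %s") % (instance_id, host)

    # Mapping placeholders: a translated template may reorder
    # %(instance_id)s and %(host)s freely.
    msg = _("Instance %(instance_id)s failed to start on %(host)s") % {
        "instance_id": instance_id,
        "host": host,
    }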
* add ip and network to nwfilter test * merged ntt branch * use network\_get\_by\_instance * Added myself (John Dewey) to Authors * corrected nesting of the data dictionary * Updated a couple data structures to pass pep8 * Added static cpu limit of 100000 (100%) to hyperv.py instead of using the vcpu value of 1 * PEP8 fixes * Changes \_\_dn\_to\_uid to return the uid attribute from the user's object * OS-55: PEP8 fixes * merged branch to name net\_manager.create\_networks args * the net\_managers expect different args to create\_networks, so nova-manage's call to net\_manager.create\_networks was changed to use named args to prevent argument mismatching * OS-55: Post-merge fixes * Fix describe\_regions by changing renamed flags. Also added a test to catch future errors * changed nova-manage to use named arguments to net\_manager.create\_networks * Merged trunk * Removed tabs from source. Merged trunk changes * allow docs to build in virtualenv; prevent setup.py from failing with sphinx in virtualenv * fixes doc build and setup.py fail in virtualenv * fix reversed assignment * fixes and refactoring of smoketests * remove extra print * add test and fix describe regions * merged trunk * This patch skips VM shutdown if already in the halted state * Use Glance to relate machine image with kernel and ramdisk * Skip shutdown if already halted * Refactoring \_destroy into steps * i18n! * merged trunk fixed whitespace in rst * wrap sqlalchemy exceptions in a generic error * Wrap instance at api layer to print the proper error. Use same logic for volumes * This patch adds two flags: * Using new style logging * Adding ability to remap VBD device * Resolved trunk merge conflicts * Adds gettext to pluginlib\_nova.py. Fixes #706029 * Adding gettext to pluginlib\_nova * Add provider\_fw\_rules awareness to iptables firewall driver * No longer chmod 0777 instance directories, since nova works just fine without them * Updated docs for db sync requirements; merged with Vish's similar doc updates * Change default log formats so that:  \* they include a timestamp (necessary to correlate logs)  \* no longer display version on every line (shorter lines)  \* use [-] instead of [N/A] (shorter lines, less scary-looking)  \* show level before logger name (better human-readability) * OS55: pylint fixes * OS-55: Added unit test for network injection via xenstore * fixed typo * OS-55: Fix current unit tests * Fixed for pep8 * Merged with rev597 * No longer chmod 0777 instance directories * Reverted log type from error to audit * undid moving argument * Fix for LP Bug #699654 * moved argument for label * fixed the migration * really added migration for networks label * added default label to nova-manage and create\_networks * syntax * syntax error * added plugin call for resetnetworking * Fix metadata using versions other than /later.
Patch via ~ttx * should be writing some kind of network info to the xenstore now, hopefully * Use ttx's patch to be explicit about paths, as urlmap doesn't work as I expected * Doc changes for db sync * Fixes issue with instance creation throwing errors when non-default groups are used * Saving a database call by getting the security groups from the instance object * Fixes issue with describe\_instances requiring an admin context * OS-55: pylint fixes * Fixing another instance of getting a list of ids instead of a list of objects * Adds security group output to describe\_instances * Finds and fixes remaining strings for i18n. Fixes bug #705186 * Pass a PluginManager to nose.config.Config(). This lets us use plugins like coverage, xcoverage, etc * i18n's strings that were missed or have been added since initial i18n strings branch * OS-55: Only modify Linux image with no or injection-incapable guest agent OS-55: Support network configuration via xenstore for Windows images * A couple of copypasta errors * Keep exception tracing as it was * Pass a PluginManager to nose.config.Config(). This lets us use plugins like coverage, xcoverage, etc * Also print version at nova-api startup, for consistency * Add timestamp to default log format, invert name and level for better readability, log version once at startup * When radvd is already running, restart it rather than sending HUP * fix ipv6 conditional * more smoketest fixes * Passing in an elevated context instead of making the call non-elevated * Added changes to make errors and recovery for volumes more graceful: * Fetches the security group from ID, allowing the object to be used properly, later * Changing service\_get\_all\_by\_host to not require admin context as it is used for describing instances, which any user in a project can do * Exclude vcsversion.py from pep8 check. It's not compliant, but out of our control * Exclude vcsversion.py from pep8 check.
It's not compliant, but out of our control * Include paste config in tarball * Add etc/ directory to tarball * Fixes for bugs: * Return non-zero if either unit tests or pep8 fails * Eagerly load fixed\_ip.network in instance\_get\_by\_id * Add Rob Kost to Authors * Return non-zero if either unit tests or pep8 fails * Merged trunk * merge trunk * Add paste and paste.deploy installation to nova.sh, needed for api server * Updated trunk changes to work with localization * Implement provider-level firewall rules in nwfilter * Whitespace (pep8) cleanups * Exception string lacking 'G' for gigabytes unit * Fixes \*\*params unpacking to ensure all kwargs are strings for compatibility with python 2.6.1 * make sure params have no unicode keys * Removed unneeded line * Merged trunk * Refactor run\_tests.sh to allow us to run an extra command after the tests * update the docs to reflect db sync as well * add helpful error messages to nova-manage and update nova.sh * Fixed unit tests * Merged trunk * fixed pep8 error * Eagerly load instance's fixed\_ip.network attribute * merged trunk changes * minor code cleanup * minor code cleanup * remove blank from Authors * .mailmap rewrite * .mailmap updated * Refactor run\_tests.sh to allow us to run an extra command after the tests * Add an apply\_instance\_filter method to NWFilter driver * PEP-8 fixes * Revert Firewalldriver * Replace an old use of ec2\_id with id in describe\_addresses * various fixes to smoketests, including allowing admin tests to run as a user, better timing, and allowing volume tests to run on non-udev linux * merged trunk * replace old ec2\_id with proper id in describe\_addresses * merge vish's changes (which merged trunk and fixed a pep8 problem) * merged trunk and fixed conflicts and pep error * get\_my\_linklocal raises exception * Completed first pass at converting all localized strings with multiple format substitutions * Allows moving from the Austin-style db to the Bexar-style * move db sync into nosetests package-level fixtures so that the existing nosetests attempt in hudson will pass * previous commit breaks volume.driver. fix it. * per vish's feedback, allow admin to specify volume id in any of the acceptable manners (vol-, volume-, and int) * Merged trunk * Fixed unit tests * Fix merge conflict * add two more columns, set string lengths * Enable the use\_ipv6 flag in unit tests by default * Fixed unit tests * merge from upstream and fix small issues * merged to trunk rev572 * fixed based on reviewer's comment * Basic stubbing throughout the stack * Enable the use\_ipv6 flag in unit tests by default * Add an apply\_instance\_filter method to NWFilter driver * update status to 'error\_deleting' on volumes where deletion fails * Merged trunk * This disables ipv6 by default. Most use cases will not need it on and it makes dependencies more complex * The live\_migration branch ( https://code.launchpad.net/~nttdata/nova/live-migration/+merge/44940 ) was not ready to be merged * merge from upstream to fix conflict * Trunk merge * s/cleanup/volume. volume commands will need their own ns in the long run * disable ipv6 by default * Merged trunk * Plug VBD to existing instance and minor cleanup * fixes related to #701749.
Also, added nova-manage commands to recover from certain states: * Implement support for streaming images from Glance when using the XenAPI virtualization backend, as per the bexar-xenapi-support-for-glance blueprint * Works around the app-armor problem of requiring disks with backing files to be named appropriately by changing the name of our extra disks * fix test to respect xml changes * merged trunk * Add refresh\_security\_group\_\* methods to nova/virt/fake.py, as FakeConnection is the reference for documentation and method signatures that should be implemented by virt connection drivers * added paste pastedeploy to nova.sh * authors needed for test * revert live\_migration branch * This removes the need for the custom udev rule for iscsi devices. It instead attaches the device based on /dev/disk/by-path/ which should make the setup of nova-volume a little easier * Merged trunk * Risk of Regression: This patch doesn't modify existing functionality, but I have added some. 1. nova.db.service.sqlalchemy.model.Service (adding a column to database) 2. nova.service (nova-compute needs to insert information defined by 1 above) * Docstrings aren't guaranteed to exist, so split() can't automatically be called on a method without first checking for the method docstring's existence. Fixes Bug #704447 (see the sketch below) * Removes circular import issues from bin/stack and replaces utils.loads with json.loads. Fixes Bug#704424 * ComputeAPI -> compute.API in bin/nova-direct-api. Fixes LP#704422 * Fixed apply\_instance\_filter is not implemented in NWFilterFirewall * pep8 * I might have gone overboard with documenting \_members * Add rules to database, cast refresh message and trickle down to firewall driver * Fixed error message in get\_my\_linklocal * openstack api fixes for glance * Stubbed-out code for working with provider-firewalls * Merged trunk * Merged with trunk revno 572 * Better shutdown handling * Change where paste.deploy factories live and how they are called. They are now in the nova.wsgi.Application/Middleware classes, and call the \_\_init\_\_ method of their class with kwargs of the local configuration of the paste file * Further decouple api routing decisions and move into paste.deploy configuration. This makes paste back the nova-api binary * Clean up openstack api test fake * Merged trunk * Add Start/Shutdown support to XenAPI * The Openstack API requires image metadata to be returned immediately after an image-create call * merge trunk * Fixing whitespace * Returning image\_metadata from snapshot() * Merging trunk * Merged trunk * merged trunk rev569 * merged to rev 561 and fixed based on reviewer's comment * Adds a developer interface with direct access to the internal inter-service APIs and a command-line tool based on reflection to interact with them * merge from upstream * pep8 fixes... largely to things from trunk? * merge from upstream * pep8 * remove print statement * This branch fixes two outstanding bugs in compute. It also fixes a bad method signature in network and removes an unused method in cloud * Re-removes TrialTestCase. It was accidentally added in by some merges and causing issues with running tests individually * removed rpc in cloud * merged trial fix again * fix bad function signature in create\_networks * undo accidental removal of fake\_flags * Merged trunk * merged lp:~vishvananda/nova/lp703012 * remove TrialTestCase again and fix merge issues * import re, remove extra call in cloud.py. Move get\_console\_output to compute\_api
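On the docstring entry above (bug #704447): a method with no docstring has \_\_doc\_\_ set to None, so calling .split() without a guard raises AttributeError. A guarded sketch::

    def summarize(method):
        """Return the first line of a method's docstring, if any."""
        docstring = getattr(method, "__doc__", None)
        if not docstring:
            # Docstrings aren't guaranteed to exist; __doc__ is None
            # then, and None.split() would raise AttributeError.
            return ""
        return docstring.split("\n")[0].strip()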
* Create and use a generic handler for RPC calls to compute * Create and use a generic handler for RPC calls to compute * Create and use a generic handler for RPC calls * Merged trunk * OS-55: Inject network settings in linux images * Merged with trunk revno 565 * use .local and .rescue for disk images so they don't make app-armor puke * Implements the blueprint for enabling the setting of the root/admin password on an instance * OpenStack Compute (Nova) IPv4/IPv6 dual stack support http://wiki.openstack.org/BexarIpv6supportReadme * Merged to rev.563 * This change introduces support for Sheepdog (distributed block storage system) which is proposed in https://blueprints.launchpad.net/nova/+spec/sheepdog-support * Sort Authors * Update Authors * merge from upstream: * pep8 fixes * update migration script to add new tables since merge * sort Authors * Merged with r562 * This modifies libvirt to use CoW images instead of raw images. This is much more efficient and allows us to use the snapshotting capabilities available for qcow2 images. It also changes local storage to be a separate drive instead of a separate partition * pep8. Someday I'll remember 2 blank lines between module methods * remove ">>>MERGE" in nova/db/sqlalchemy/api.py * checking based on pep8 * merged trunk * Modified per Soren's review * Fix for Pep-8 * Merged with r561 * Moved commands which need sudo to nova.sh * Added netaddr for pip-requires * Marking snapshots as private for now * Merging Trunk * Fixing Image ID workaround and typo * Fixed based on the comments from code review. Merged to trunk rev 561 * Add a new method to firewall drivers to tell them to stop filtering a particular instance. Call it when an instance has been destroyed * merged to trunk rev 561 * Merged trunk * merge trunk rev560 * Fixes related to how EC2 ids are displayed and dealt with * Get reviewed and fixed based on comments. Merged latest version * Make libvirt and XenAPI play nice together * Spelling is hard. Typing even moreso * Revert changes to version.py * Minor code cleanups * Minor code cleanups * Minor code cleanups * Make driver calls compatible * Merged trunk * Stubbed out XenServer rescue/unrescue * Added unit tests for the Diffie-Hellman class. Merged recent trunk changes * Bring NWFilter driver up to speed on unfilter\_instance * Replaced home-grown Diffie-Hellman implementation with the M2Crypto version supplied by Soren * Instead of a set() to keep track of instances and security groups, use a dict(). \_\_eq\_\_ for stuff coming out of sqlalchemy does not do what I expected (probably due to our use of sessions) * Fixes broken call to \_\_generate\_rc in auth manager * Fixes bug #701055. Moves code for instance termination inline so that the manager doesn't prematurely mark an instance as deleted. Prematurely doing so causes find calls to fail, prevents instance data from being deleted, and also causes some other issues * Revert r510 and r512 because Josh had already done the same work * merged trunk * Fixed Authors * Merged with 557 * Fixed missing \_(). Fixed to follow logging to LOG changes. Fixed merge miss (get\_fixed\_ip was moved away). Update some missing comments * merge from upstream and fix leaks in console tests * make sure get\_all returns * Fixes a typo in the name of a variable * Fixes #701055.
Move instance termination code inline to prevent manager from prematurely marking it as destroyed * fix invalid variable reference in cloud api * fix indentation * add support for database migration * fix changed call to generate\_rc * merged with r555 * fixed method signature of modify\_rules fixed unit\_test for ipv6 * standardize volume ids * standardize volume ids * standardize on hex for ids, allow configurable instance names * correct volume ids for ec2 * correct formatting for volume ids * Fix test failures on Python 2.7 by eagerly loading the fixed\_ip attribute on instances. No clue why it doesn't affect python 2.6, though * Adding TODO to clarify status * Merging trunk * Do joinedload\_all('fixed\_ip.floating\_ips') instead of joinedload('fixed\_ip') (see the sketch below) * Initialize logging in nova-manage so we don't see errors about missing handlers * \_wait\_with\_callback was changed out from under suspend/resume. fixed * Make rescue/unrescue available to API * Stop error messages for logs when running nova-manage * Fixing stub so tests pass * Merging trunk * Merging trunk, small fixes * This branch adds a backend for using RBD (RADOS Block Device) volumes in nova via libvirt/qemu. This is described in the blueprint here: https://blueprints.launchpad.net/nova/+spec/ceph-block-driver * Fix url matching for years 2010-forward * Update config for launching logger with cleaner factory * Update paste config for ec2 request logging * merged changes from trunk * cleaned up prior merge mess * Merged trunk * My previous modifications to novarc had CLOUDSERVER\_AUTH\_URL pointing to the ec2 api port. Now it's correctly pointing to os api port * Check for whole pool name in check\_for\_setup\_error * change novarc template from cc\_port to osapi\_port. Removed osapi\_port from bin scripts * Start to add rescue/unrescue support * fixed pause and resume * Fixed another issue in \_stream\_disk, as it never executed \_write\_partition. Fixed fake method accordingly. Fixed pep8 errors * pep8 fixes * Fixing the stub for \_stream\_disk as well * Fix for \_stream\_disk * Merged with r551 * Support IPv6 firewall with IptablesFirewallDriver * Fixed syntax errors * Check whether 'device\_path' has ':' before splitting it * PEP8 fixes, and switch to using the new LOG in vm\_utils, matching what's just come in from trunk * Merged with trunk * Merged with Orlando's recent changes * Added support of availability zones for compute. models.Service got an additional field availability\_zone, and a ZoneScheduler was created that makes decisions based on this field. Also replaced fake 'nova' zone in EC2 cloud api * Eagerly load fixed\_ip property of instances * Had to abandon the other branch (~annegentle/nova/newscript) because the diffs weren't working right for me. This is a fresh branch that should be merged correctly with trunk. Thanks for your patience. :) * Added unit tests for the xenapi-glance integration. This adds a glance simulator that can stub in place of glance.client.Client, and enhances the xapi simulator to add the additional calls that the Glance-specific path requires * Merged with 549 * Change command to get link local address. Remove superfluous code * This branch adds web based serial console access. Here is an overview of how it works (for libvirt): * Merged with r548 * Fixed bug * Add DescribeInstanceV6 for backward compatibility * Fixed test environments. Fixed bugs in \_fetch\_image\_objectstore and \_lookup\_image\_objectstore (objectstore was broken!) Added tests for glance
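The eager-loading entries above pre-fetch the fixed\_ip relations so later attribute access cannot trigger a lazy load after the session is gone. A sketch using the SQLAlchemy options of that era (joinedload\_all has since been deprecated in favor of chained joinedload; the model and session here are simplified stand-ins)::

    from sqlalchemy.orm import joinedload, joinedload_all

    # Assumes an Instance model with a fixed_ip relationship that in
    # turn has network and floating_ips relationships, as in early Nova.
    def instance_get(session, Instance, instance_id):
        return (session.query(Instance)
                # joinedload pulls fixed_ip in the same SELECT ...
                .options(joinedload("fixed_ip"))
                # ... and joinedload_all walks a relationship chain, so
                # fixed_ip.floating_ips is populated up front as well.
                .options(joinedload_all("fixed_ip.floating_ips"))
                .filter_by(id=instance_id)
                .first())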
* Fixed for pep8. Removed temporary debugging * changed exception class * Changing DN creation to do searches for entries * Fixes bug #701575: run\_tests.sh fails with a meaningless error if virtualenv is not installed. Proposed fix tries to use easy\_install to install virtualenv if not present * merge trunk, fix conflict * more useful prefix and fix typo in string * use by-path instead of custom udev script * Quick bugfix. Also make the error message more specific and unique in the equivalent code in the revoke method * remove extra whitespaces * Raise meaningful exception when there aren't enough params for a sec group rule * bah - pep8 errors * resolve pylint warnings * Removing script file * Read Full Spec for implementation details and notes on how to boot an instance using OS API. http://etherpad.openstack.org/B2RK0q1CYj * Added my name to Authors list * Changes per Eday's comments * Fixed a number of issues with the iptables firewall backend: \* Port specifications for firewalls come back from the data store as integers, but were compared as strings. \* --icmp-type was misspelled as --icmp\_type (underscore vs dash) \* There weren't any unit tests for these issues * merged trunk changes * Removed unneeded SimpleDH code from agent plugin. Improved handling of plugin call failures * Now tries to install virtualenv via easy\_install if not present * Merging trunk * fixed issue in pluginlib\_nova.py * Trunk merge and conflicts resolved * Implementation of xs-console blueprint (adds support for console proxies like xvp) * Fixed a number of issues with the iptables firewall backend: \* Port specifications for firewalls come back from the data store as integers, but were compared as strings. \* --icmp-type was misspelled as --icmp\_type (underscore vs dash) \* There weren't any unit tests for these issues * Add support for EBS volumes to the live migration feature. Currently, only AoE is supported * Changed shared\_ip\_group detail routing * Changed shared\_ip\_group detail routing * A few more changes to the smoketests. Allows smoketests to find the nova package from the checkout. Adds smoketests for security groups. Also fixes a couple of typos * Fixes the metadata forwarding to work by default * Adds support to nova-manage to modify projects * Add glance to pip-requires, as we're now using the Glance client code from Nova * Now removing kernel/ramdisk VDI after copy. Code tested with PV and HVM guests. Fixed pep8 errors * merged trunk changes * consolidate boto\_extensions.py and euca-get-ajax-console, fix bugs from previous trunk merge * Fixed issues raised by reviews * xenapi\_conn was not terminating utils/LoopingCall when an exception was occurring. This was causing the eventlet Event to have send\_exception() called more than once (a no-no) * merge trunk * whups, fix accidental change to nova-combined * remove unneeded superclass * Bugfix * Adds the requisite infrastructure for automating translation templates import/export to Launchpad * Added babel/gettext build support * Can now correctly launch images with external kernels through glance * re-merged in trunk to correct conflict * Fix describe\_availability\_zones verbose * Typo fix * merged changes from trunk * Adding modify option for projects * Fixes describe\_instances to filter by a list of instance\_ids * Late import module for register\_models() so it doesn't create the db before flags are loaded * Checks for existence of volume group using vgs instead of checking to see if /dev/nova-volumes exists.
The dev is created by udev and isn't always there even if the volume group does exist * Add a new firewall backend for libvirt, based on iptables * Create LibvirtConnection directly, rather than going through libvirt\_conn.get\_connection. This should remove the dependency on libvirt for tests * Fixed xenapi\_conn wait\_for\_task to properly terminate LoopingCall on exception * Fixed xenapi\_conn wait\_for\_task to properly terminate LoopingCall on exception * Fixed xenapi\_conn wait\_for\_task to properly terminate LoopingCall on exception * optimize to call get if instance\_id is specified since most of the time people will just be requesting one id * fix describe instances + test * Moved get\_my\_ip into flags because that is the only thing it is being used for and use it to set a new flag called my\_ip * fixes Document make configuration by updating nova version mechanism to conform to rev530 update * alphabetized Authors * added myself to authors and fixed typo to follow standard * typo correction * fixed small glitch in \_fetch\_image\_glance virtual\_size = imeta['size'] * fixed doc make process for new nova version (rev530) mechanism * late import module for register\_models() so it doesn't create the db before flags are loaded * use safer vgs call * Return proper region info in describe\_regions * change API classname to match the way other API's are done * small cleanups * First cut at implementing partition-adding in combination with the Glance streaming. Untested * some small cleanups * merged from upstream and made applicable changes * Adds a mechanism to programmatically determine the version of Nova. The designated version is defined in nova/version.py. When running python setup.py from a bzr checkout, information about the bzr branch is put into nova/vcsversion.py which is conditionally imported in nova/version.py * Return region info in the proper format * Now that we aren't using twisted we can vgs to check for the existence of the volume group * s/canonical\_version/canonical\_version\_string/g * Fix indentation * s/string\_with\_vcs/version\_string\_with\_vcs/g * Some fixes to \_lookup\_image\_glance: fix the return value from lookup\_image, attach the disk read-only before running pygrub, and add some debug logging * Reverted formatting change no longer necessary * removed a merge conflict line I missed before * merged trunk changes * set the hostname factory in the service init * incorporated changes suggested by eday * Add copyright and license info to version.py * Fixes issue in trunk with downloading s3 images for instance creation * Fix pep8 errors * Many fixes to the Glance integration * Wrap logs so we can: \* use a "context" kwarg to track requests all the way through the system \* use a custom formatter so we get the data we want (configurable with flags) \* allow additional formatting for debug statements for easier debugging \* add an AUDIT level, useful for noticing changes to system components \* use named logs instead of the general logger where it makes sense (a sketch of the AUDIT level follows below) * pep8 fixes * Bug #699910: Nova RPC layer silently swallows exceptions * Bug #699912: When failing to connect to a data store, Nova doesn't log which data store it tried to connect to * Bug #699910: Nova RPC layer silently swallows exceptions * pv/hvm detection with pygrub updated for glance * Bug #699912: When failing to connect to a data store, Nova doesn't log which data store it tried to connect to * Resolved merge differences * Additional cleanup prior to pushing * Merged with trunk * Fixing unescaped quote in nova-CC-install.sh script plus formatting fixes to multinode install
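On the AUDIT level mentioned in the log-wrapping entry above: stdlib logging supports registering custom levels. A minimal sketch — the level number, class, and logger names are illustrative, not Nova's actual logging module::

    import logging

    AUDIT = logging.INFO + 1  # sits between INFO and WARNING
    logging.addLevelName(AUDIT, "AUDIT")


    class AuditableLogger(logging.Logger):
        def audit(self, msg, *args, **kwargs):
            """Log at AUDIT level, for changes to system components."""
            if self.isEnabledFor(AUDIT):
                self._log(AUDIT, msg, args, **kwargs)


    logging.setLoggerClass(AuditableLogger)
    LOG = logging.getLogger("nova.compute")
    LOG.audit("instance %s: rebooted", "i-00000001")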
* getting ready to push for merge prop * Fixing headers line by wrapping the headers in single quotes * Less code generation * grabbed the get\_info fix from my other branch * merged changes from trunk * Remove redundant import of nova.context. Use db instance attribute rather than module directly * Merging trunk * Removing some FIXMEs * Reserving image before uploading * merge * Half-finished implementation of the streaming from Glance to a VDI through nova-compute * Fix Nova not to immediately blow up when talking to Glance: we were using the wrong URL to get the image metadata, and ended up getting the whole image instead (and trying to parse it as json) * another merge with trunk to remedy instance\_id issues * merge * Include date in API action query * Review feedback * This branch implements lock functionality. The lock is stored in the compute worker database. Decorators have been added to the openstack API actions which alter instances in any way (a sketch follows below) * Review feedback * Review feedback * Review feedback * typo * refers to instance\_id instead of instance\_ref[instance\_id] * passing the correct parameters to decorated function * accidentally left unlocked in there, it should have been locked * various cleanup and fixes * merged trunk * pep8 * altered argument handling * Got the basic 'set admin password' stuff working * Include date in action query * Let documentation get version from nova/version.py as well * Add default version file for developers * merge pep8 fixes from newlog2 * Track version info, and make available for logging * pep8 * Merged trunk * merge pep8 and tests from wsgirouter branch * Remove test for removed class * Pep8 * pep8 fix * merged trunk changes * commit before merging trunk * Fixes format\_instances error by passing reservation\_id as a kwarg instead of an arg. Also removes extraneous yields in test\_cloud that were causing tests to pass with broken code * Remove module-level factory methods in favor of having a factory class-method on wsgi components themselves. Local options from config are passed to the \_\_init\_\_ method of the component as kwargs * fix the broken tests that allowed the breakage in format to happen * Fix format\_run\_instances to pass in reservation id as a kwarg * Add factories into the wsgi classes * Add blank \_\_init\_\_ file for fixing importability. The stale .pyc masked this error locally * merged trunk changes * Introduces basic support for spawning, rebooting and destroying vms when using Microsoft Hyper-V as the hypervisor. Images need to be in VHD format. Note that although Hyper-V doesn't accept kernel and ramdisk separate from the image, the nova objectstore api still expects an image to have an associated aki and ari. You can use dummy aki and ari images -- the hyper-v driver won't use them or try to download them. Requires Python's WMI module * merged trunk changes * Renamed 'set\_root\_password' to 'set\_admin\_password' globally * merge with trunk * renamed sharedipgroups to shared\_ip\_groups and fixed tests for display\_name * Fix openstack api tests and add a FaultWrapper to turn exceptions to faults * Fixed display\_name on create\_instance * fix some glitches due to someone removing instance.internal\_id (not that I mind) remove accidental change to nova-combined script * Fixed trunk merge conflicts as spotted by dubs * OS API parity: map image ID to numeric ID. Ensure all other OS operations are at least stubbed out and callable
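The lock entry above describes a per-instance lock checked by a decorator around mutating API actions. A loose sketch of that shape, assuming a db helper that returns a mapping with a 'locked' flag — names are hypothetical, not the actual Nova decorator::

    import functools


    def checks_instance_lock(func):
        """Refuse a mutating action when the target instance is locked."""
        @functools.wraps(func)
        def wrapper(self, context, instance_id, *args, **kwargs):
            locked = self.db.instance_get(context, instance_id)["locked"]
            # Admins may bypass the lock; regular users may not.
            if locked and not context.is_admin:
                raise Exception("Instance %s is locked" % instance_id)
            return func(self, context, instance_id, *args, **kwargs)
        return wrapper


    class ComputeAPI(object):
        def __init__(self, db):
            self.db = db

        @checks_instance_lock
        def reboot(self, context, instance_id):
            print("rebooting %s" % instance_id)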
* add in separate public hostname for console hosts. flesh out console api data * allow smoketests to find nova package and add security rules * Fix a bunch of pep8 stuff * This addition to the docs clarifies that it is a requirement for contributors to be listed in the Authors file before their commits can be merged to trunk * merge trunk * another merge from trunk to the latest rev * pulled changes from trunk added console api to openstack api * Removed dependencies on nova server components for the admin client * Remove stale doc files so the autogeneration extension for sphinx will work properly * Add to Authors and mailmap * Make test case work again * This branch contains the internal API cleanup branches I had previously proposed, but combined together and with all the UUID key replacement ripped out. This allows multiple REST interfaces (or other tools) to use the internal API directly, rather than having the logic tied up in the ec2 cloud.py file * socat will need to be added to our nova sudoers * merged trunk changes * intermediate work * Created a XenAPI plugin that will allow nova code to read/write/delete from xenstore records for a given instance. Added the basic methods for working with xenstore data to the vmops script, as well as plugin support to xenapi\_conn.py * Merged trunk * Recover from a lost data store connection * Updated register\_models() docstring * simplify decorator into a wrapper fn * add in xs-console worker and tests * pep8 cleanup * more fixes, docstrings * fix injection and xml * Fixing formatting problems with multinode install document * Split internal API get calls to get and get\_all, where the former takes an ID and returns one resource, and the latter can optionally take a filter and return a list of resources * missing \_() * Fixed for pep8 * Fixed: Create instance fails when use\_ipv6=False * Removed debug message which is not needed * Fixed misspelled variable * Fixed bug in nova\_project\_filter\_v6 * The \_update method in base Instance class overrides dns\_name\_v6, so fixed it * self.XENAPI..
* Changed Paused power state from Error to Paused * fixed json syntax error * stop using partitions and first pass at cow images * Remove stale doc files * pep8 * tests fixed up * Better method for eventlet.wsgi.server logging * Silence eventlet.wsgi.server so it doesn't go to stdout and pollute our logs * Declare a flag for test to run in isolation * Build app manually for test\_api since nova.ec2.API is gone * Recover from a lost data store connection * Added xenstore plugin changed * merged changes from trunk * some more cleanup * need one more newline * Redis dependency no longer needed * Make test\_access use ec2.request instead of .controller and .action * Revert some unneeded formatting since twistd is no longer used * pep8 fixes * Remove flags and unused API class from openstack api, since such things are specified in paste config now * i18n logging and exception strings * remove unused nova/api/\_\_init\_\_.py * Make paste the default api pattern * Rework how routing is done in ec2 endpoint * Change all 2010 Copyright statements to 2010-2011 in doc source directory only * rename easy to direct in the scripts * fix typo in stack tool * rename Easy API to Direct API * Moved \_\_init\_\_ api code to api.py and changed allowed\_instances quota method argument to accept all type data, not just vcpu count * Made the plugin output fully json-ified, so I could remove the exception handlers in vmops.py. Cleaned up some pep8 issues that weren't caught in earlier runs * merged from trunk * Renamed argument to represent possible types in volume\_utils * Removed leftover UUID reference * Removed UUID keys for instance and volume * Merged trunk * Final edits to multi-node doc and install script * Merged trunk changes * Some Bug Fix * Fixed bug in libvirt * Fixed bug * Fixed for pep8 * Fixed conflict with r515 * Merged and fixed conflicts with r515 * some fixes per vish's feedback * Don't know where that LOG went.. * Final few log tweaks, i18n, levels, including contexts, etc * Apply logging changes as a giant patch to work around the cloudpipe delete + add issue in the original patch * dabo fix to update for password reset v2 * krm\_mapping.json sample file added * dabo fix to update for password reset * added cloudserver vars to novarc template * Update Authors * Add support for rbd volumes * Fixes LP688545 * First pass at feature parity. Includes Image ID hash * Fixing merge conflicts with new branch * merged in trunk changes * Fixing merge conflicts * Fixes LP688545 * Make sure we point to the right PPA's everywhere * Editing note about the database schema available on the wiki * Modifying based on reviewer comments * Uses paste.deploy to make application running configurable. This includes the ability to swap out middlewares, define new endpoints, and generally move away from having code to build wsgi routers and middleware chains into a configurable, extensible method for running wsgi servers * Modifications to the nova-CC-installer.sh based on review * Adds the pool\_recycle option to the sql engine startup call. This enables connection auto-timeout so that connection pooling will work properly. The recommended setting (per sqlalchemy FAQ page) has been provided as a default for a new configuration flag. What this means is that if a db connection sits idle for the configured # of seconds, the engine will automatically close the connection and return it to the available thread pool. See Bug #690314 for info
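pool\_recycle, as described in the entry above, closes and replaces pooled connections that have sat idle longer than the given number of seconds, avoiding errors from server-side idle timeouts. A sketch — the credentials and the flag name are placeholders::

    from sqlalchemy import create_engine

    # sql_idle_timeout: recycle pooled connections idle for this many
    # seconds (3600 follows the SQLAlchemy FAQ recommendation).
    sql_idle_timeout = 3600

    engine = create_engine("mysql://nova:secret@localhost/nova",
                           pool_recycle=sql_idle_timeout)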
* Add burnin support. Services are now by default disabled, but can have instances and volumes run on them using availability\_zone = nova:HOSTNAME. This lets the hardware be put through its paces without being put in the generally available pool of hardware. There is a 'service' subcommand for nova-manage where you can enable, disable, and list statuses of services * pep8 fixes * Merged compute-api-cleanup branch * Removed compute dependency in quota.py * add timeout constant, set to 5 minutes * removed extra whitespace chars at the end of the changed lines * Several documentation corrections and formatting fixes * Minor edits prior to merging changes to the script file * add stubs for xen driver * merge in trunk * merged latest trunk * merge trunk * merge trunk * temp * Stop returning generators in the refresh\_security\_group\_{rules,members} methods * Don't lie about which is the default firewall implementation * Move a closing bracket * Stub out init\_host in libvirt driver * Adjust test suite to the split between base firewall rules provided by nwfilter and the security group filtering * Fix a merge artifact * Remove references to nova-core/ppa and openstack/ppa PPA's * Updated the password generation code * Add support for Sheepdog volumes * Add support for various block device types (block, network, file) * Added agent.py plugin. Merged xenstore plugin changes * fixed pep8 issues * Added OpenStack's copyright to the xenstore plugin * fixed pep8 issues * merged in trunk and xenstore-plugin changes * Ignore CA/crl.pem * Before merge with xenstore-plugin code * Corrected the sloppy import in the xenstore plugin that was copied from other plugins * Ignore CA/crl.pem * Merged trunk * Merged trunk * deleting README.livemigration.txt and nova/livemigration\_test/\* * Merged trunk * Merged with the latest version. Changes are as follows: added my affiliation to Authors; generate\_uid in utils.py was broken and instance IDs were overflowing, so that handling has been removed for now, to be re-tested later * Merged trunk * Auth Tokens assumed the user\_id was an int, not a string * Removed dependencies on flags.py from adminclient * Make InstanceActions and live diagnostics available through the Admin API * Cleanup * Improved test * removed some debugging code left in previous push * Converted the pool\_recycle setting to be a flag with a default of 3600 seconds * completed the basic xenstore read/write/delete functionality * Removed problematic test * PEP8 fix * \* Fix bad query in \_\_project\_to\_dn \* use \_\_find\_dns instead of \_\_find\_objects in \_\_uid\_to\_dn and \_\_project\_to\_dn * Moved network operation code in ec2 api into a generic network API class. Removed a circular dependency with compute/quota * Oopsies * merge trunk * merge trunk * Make compute.api methods verbs * Fail * Review feedback * Cleans up the output of run\_tests.sh to look closer to Trial * change exit code * Changing DN creation to do searches for entries * Merged trunk * Implemented review feedback * This patch is the beginning of XenServer snapshots in nova. It adds: * Merged trunk * Calling compute api directly from OpenStack image create * Several documentation corrections * merge recent revision (version of 2010/12/28) Change: 1. Use greenthread instead of defer at nova.virt.libvirt\_conn.live\_migration. 2. Move nova.scheduler.manager.live\_migration to nova.scheduler.driver 3. Move nova.scheduler.manager.has\_enough\_resource to nova.scheduler.driver 4.
* merge recent revision (version of 2010/12/28) Change: 1. Use greenthread instead of defer at nova.virt.libvirt\_conn.live\_migration. 2. Move nova.scheduler.manager.live\_migration to nova.scheduler.driver 3. Move nova.scheduler.manager.has\_enough\_resource to nova.scheduler.driver 4. Any check routine in nova-manage.instance.live\_migration is moved to nova.scheduler.driver.schedule\_live\_migration * Merging trunk * Note that contributors are required to be listed in Authors file before work can be merged into trunk * Mention Authors and .mailmap files in Developer Guide * pep 8 * remove cloudpipe from paste config * Clean up how we determine IP to bind to * Converted a few more ec2 calls to use compute api * Cleaned up the compute API, mostly consistency with other parts of the system and renaming redundant module names * fixed the compute lock test * altered the compute lock test * removed tests.api.openstack.test\_servers test\_lock, to hell with it. i'm not even sure if testing lock needs to be at this level * fixed up the compute lock test, was failing because the context was always admin * syntax error * moved check lock decorator from the compute api to the compute manager... when it rains it pours * removed db.set\_lock, using update\_instance instead * added some logging * typo, trying to hurry.. look where that got me * altered error exception/logging * altered error exception/logging * fixed variables being out of scope in lock decorator * moved check lock decorator to compute api level. altered openstack.test\_servers accordingly and wrote test for lock in tests.test\_compute * Moved ec2 volume operations into a volume API interface for other components to use. Added attach/detach as compute.api methods, since they operate in the context of instances (and to avoid a dependency loop) * pep8 fix, and add in flags that don't reference my laptop * apt-get install socat, which is used to connect to the console * removed lock check from show and changed returning 404 to 405 * fix lp:695182, scheduler tests needed to DECLARE flag to run standalone * removed () from if (can't believe i did that) and renamed checks\_lock decorator * Add the pool\_recycle setting to enable connection pooling features for the sql engine. The setting is hard-coded to 3600 seconds (one hour) per the recommendation provided on sqlalchemy's site * i18n * Pep-8 cleanup * Fix scheduler testcase so it knows all flags and can run in isolation * removed some code i didn't end up using * fixed merge conflict with trunk * pep8 * fixed up test for lock * added tests for EC2 describe\_instances * PEP8 cleanup * This branch fixes an issue where VM creation fails because of a missing flag definition for 'injected\_network\_template'. See Bug #695467 for more info * Added tests * added test for lock to os api * refactor * Re-added flag definition for injected\_network\_template. Tested & verified fix in the same env as the original bug * forgot import * syntax error * Merged trunk * Added implementation availability\_zones to EC2 API * Updating Authors * merge * Changes and error fixes to help ensure basic parity with the Rackspace API.
Some features are still missing, such as shared ip groups, and will be added in a later patch set * initial lock functionality commit * Merged with trunk * Additional edits in nova.concepts.rst while waiting for script changes * Bug #694880: nova-compute now depends upon Cheetah even when not using libvirt * add ajax console proxy to nova.sh * merge trunk * Fix pep8 violations * add in unit tests * removed superfluous line * Address bug #695157 by using a blank request class and setting an empty request path * Default services to enabled * Address bug #695157 by using a blank request class and setting an empty request path * Add flag --enable\_new\_services to toggle default state of service when created * merge from trunk * This commit introduces scripts to apply XenServer host networking protections * Whoops * merge from upstream and fix conflicts * Update .mailmap with both email addresses for Ant and myself * Make action log available through Admin API * Merging trunk * Add some basic snapshot tests * Added get\_diagnostics placeholders to libvirt and fake * Merged trunk * Added InstanceAction DB functions * merge trunk * Bug #694890: run\_tests.sh sometimes doesn't pass arguments to nosetest * Output of run\_tests.sh to be closer to trial * I've added suspend along with a few changes to power state as well. I can't imagine suspend will be controversial but I've added a new power state for "suspended" to nova.compute.power\_states which libvirt doesn't use and updated the xenapi power mapping to use it for suspended state. I also updated the mappings in nova.api.openstack.servers to map PAUSED to "error" and SUSPENDED to "suspended". Thoughts there are that we don't currently (openstack API v1.0) use pause, so if somehow an instance were to be paused, an error occurred somewhere, or someone did something in error. Either way asking the xenserver host for the status would show "paused". Support for more power states needs to be added to the next version of the openstack API * fixed a line length * Bug #694880: nova-compute now depends upon Cheetah even when not using libvirt * Bug #694890: run\_tests.sh sometimes doesn't pass arguments to nosetest * fix bug #lp694311 * Typo fix * Renamed based on feedback from another branch * Added stack command-line tool * missed a couple of gettext \_() * Cleans up nova.api.openstack.images and fix it to work with cloudservers api. Previously "cloudservers image-list" wouldn't work, now it will. There are mappings in place to handle s3 or glance/local image service. In the future when the local image service is working, we can probably drop the s3 mappings * Fixing snapshots, pep8 fixes * translate status was returning the wrong item * Fixing bad merge * Converted Volume model and operation to use UUIDs * inst -> item * syntax error * renaming things to be a bit more descriptive * Merging trunk * Converted instance references to GUID type * Added custom guid type so we can choose the most efficient backend DB type easily * backup schedule changes * Merged trunk * Merging trunk, fixing failed tests * A few fixes * removed \ * Moving README to doc/networking.rst per recommendation from Jay Pipes * Merged trunk * couple of pep8s * merge trunk
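The custom guid type mentioned a few entries above exists so the backend storage type can be swapped easily. A sketch of that pattern using SQLAlchemy's TypeDecorator -- an illustration of the technique, not Nova's actual column type:

    import uuid
    from sqlalchemy.types import TypeDecorator, String

    class GUID(TypeDecorator):
        """UUID column stored as 32-char hex text, portable across backends."""
        impl = String(32)

        def process_bind_param(self, value, dialect):
            # Normalize whatever the caller passed to plain hex on the way in.
            return None if value is None else uuid.UUID(str(value)).hex

        def process_result_value(self, value, dialect):
            # Hand back a real uuid.UUID on the way out.
            return None if value is None else uuid.UUID(value)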
* Fixed after Jay's review. Integrated code from Soren (we now use the same 'magic number' for images without kernel & ramdisk) * Fixed pep8 errors * launch\_at was added in the previous commit, but a launched\_at column already exists and the name was confusing, so renamed it to launched\_on * logs inner exception in nova/utils.py->import\_class * Fix Bug #693963 * remove requirement of sudo on tests * merge trunk * Merge * adding zones to api * Support IPv6 * test commit * Commit with the test item list added back * Commit with the test item list temporarily deleted locally * The test item list somehow disappeared, so added it again * Fixed nova.compute.manager, which had regressed due to earlier changes; added CPUID and other check routines to nova.scheduler.manager.live\_migration * Fixed nova.compute.manager, which had regressed due to earlier changes; added CPUID and other check routines to nova.scheduler.manager.live\_migration * Make nova work even when user has LANG or LC\_ALL configured * merged trunk, resolved trivial conflict * merged trunk, resolved conflict * Faked out handling for shared ip groups so they return something * another typo * applied power state conversion to test * trying again * typo * fixed the os api image test for glance * updated the xenstore methods to reflect that they write to the param record of xenstore, not the actual xenstore itself * fixed typo * Merged with trunk. All tests passed. Could not fix some pep8 errors in nova/virt/libvirt\_conn.py * fixed merge conflict * updated since dietz moved the limited function * fixed error occurring when tests used glance attributes, fixed docstrings * Merged again from trunk * fixed a few docstrings, added \_() for gettext * added \_() for gettext and a couple of pep8s * adds a reflection api * unit test - should be reworked * Moves implementation specific Openstack API code from the middleware to the drivers. Also cleans up a few areas and ensures all the API tests are passing again * PEP8 fix * One more time * Pep8 cleanup * Resolved merge conflict * Merged trunk * Trying to remove twisted dependencies, this gets everything working under nosetests * Merged Monty's branch * Merged trunk and resolved conflicts * Working diagnostics API; removed diagnostics DB model - not needed * merged trunk * merged trunk * Superfluous images include and added basic routes for shared ip groups * Simplifies and improves ldap schema * xenapi iscsi support + unittests * Fixed trunk and PEP8 cleanup * Merged trunk * Added reference in setup.py so that python setup.py test works now * merge lp:nova * better bin name, and pep8 * pep8 fixes * some pep8 fixes * removing xen/uml specific switches.
If they need special treatment, we can add it * add license * delete extra dir * move euca-get-ajax-console up one directory * merge trunk * move port range for ajaxterm to flag * more tweaks * add in license * some cleanup * rewrite proxy to not use twisted * added power state logging to nova.virt.xenapi.vm\_utils * added suspend as a power state * last merge trunk before push * merge trunk, fixed unittests, added i18n strings, cleanups etc etc * And the common module * minor notes, commit before rewriting proxy with eventlet * There were a few unclaimed addresses in mailmap * first merge after i18n * remove some notes * Add Ryan Lane as well * added tests to ensure the easy api works as a backend for Compute API * fix commits from Anthony and Vish that were committed with the wrong email * remove some yields that snuck in * merge from trunk * Basic Easy API functionality * Fixes reboot (and rescue) to work even if libvirt doesn't know about the instance and the network doesn't exist * merged trunk * Fixes reboot (and rescue) to work even if libvirt doesn't know about the instance and the network doesn't exist * Adds a flag to use the X-Forwarded-For header to find the ip of the remote server. This is needed when you have multiple api servers with a load balancing proxy in front. It is a flag that defaults to False because if you don't have a sanitizing proxy in front, users could masquerade as other ips by passing in the header manually * Got basic xenstore operations working * Merged trunk * Modified InstanceDiagnostics and truncate action * removed extra files * merged trunk * Moves the ip allocation requests from the api host into calls to the network host made from the compute host * pep8 fix * merged trunk and fixed conflicts * Accidentally yanked the datetime line in auth * remove extra files that slipped in * merged trunk * add missing flag * Optimize creation of nwfilter rules so they aren't constantly being recreated * use libvirt python bindings instead of system call * fixed more conflicts * merged trunk again * add in support of openstack api * merge trunk and upgrade to cheetah templating * Optimize nwfilter creation and project filter * Merging trunk * fixed conflicts * Adding more comments regarding XS snapshots * working connection security * WSGI middleware for lockout after failed authentications of ec2 access key * Modifies nova-network to recreate important data on start * Puts the creation of nova iptables chains into the source code and cleans up rule creation. This makes nova play more nicely with other iptables rules that may be created on the host * Forgot the copyright info * i18n support for xs-snaps * Finished moving the middleware layers and fixed the API tests again * Zone scheduler added * Moved some things for testing * Merging trunk * Abstracted auth and ratelimiting more * Getting Snapshots to work with cloudservers command-line tool * merge trunk * Minor bug fix * Populate user\_data field from run-instances call parameter, default to empty string to avoid metadata base64 decoding failure, LP: #691598 * Adding myself and Antony Messerli to the Authors file * Fixes per-project vpns (cloudpipe) and adds manage commands and support for certificate revocation * merge trunk * merge antonymesserli's changes, fixed some formatting, and added copyright notice * merged i18n and fixed conflicts * Added networking protections readme * Moved xenapi into xenserver specific directory * after trunk merge * Fixes documentation builds for gettext
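The X-Forwarded-For entry above defaults the flag to False precisely because the header is spoofable without a sanitizing proxy. A minimal sketch of the idea in plain WSGI terms (names illustrative, not Nova's middleware):

    def remote_address(environ, use_forwarded_for=False):
        """Return the client address, optionally trusting X-Forwarded-For."""
        address = environ.get('REMOTE_ADDR', '')
        if use_forwarded_for:
            forwarded = environ.get('HTTP_X_FORWARDED_FOR')
            if forwarded:
                # A well-behaved load balancer records the original
                # client first in the comma-separated list.
                address = forwarded.split(',')[0].strip()
        return address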
* committing so that I can merge trunk changes * Log all XenAPI actions to InstanceActions * Merged trunk * merging trunk * merging trunk * Fix doc building endpoint for gettext * All merged with trunk and let's see if a new merge prop (with no pre-req) works. * Problem was with a misplaced parenthesis. ugh * Adding me in the Authors file * Populate user\_data field from run-instances call parameter, default to empty string to avoid metadata base64 decoding failure, LP: #691598 * connecting ajax proxy to rabbit to allow token based security * remove a debugging line * a few more fixes after merge with trunk * merging in trunk * move prototype code from api into compute worker * Burnin support by specifying a specific host via availability\_zone for running instances and volumes on * Merged trunk * This stops the nova-network dhcp ip from being added to all of the compute hosts * prototype works with kvm. now moving call from api to compute * Style correction * fix reboot command to work even if a host is rebooted * Filter templates and dom0 from list\_instances() * removed unused import and fix docstring * merge fakerabbit fix and turn fake back on for cloud unit tests * Reworked fakerabbit backend so each connection has its own. Moved queues and exchanges to be globals * PEP8 cleanup * Refactored duplicate rpc.cast() calls in nova/compute/api.py. Cleaned up some formatting issues * Log all XenAPI actions * correct xenapi resume call * activate fake rabbit for debugging * change virtualization to not get network through project * update db/api.py as well * don't allocate networks when getting vpn info * Added InstanceDiagnostics and InstanceActions DB models * PEP8 cleanup * Merged trunk * merge trunk * 1) Merged from trunk 2) 'type' parameter in VMHelper.fetch\_image converted to an enum 3) Fixed pep8 errors 4) Passed unit tests * Remove ec2 config chain and move openstack versions to top-level application * Use paste.deploy for running the api server * pep8 and removed extra imports * add missing greenthread import * add a few extra joined objects to get instance * remove extra print statements * Tests pass after cleaning up allocation process * Merging trunk * Typo fix, stubbing out to use admin project for now * Close devnull filehandle * added suspend and resume * Rewrite of vif\_rules.py to meet coding standards and be more pythonic in general. Use absolute paths for iptables/ebtables/arptables in host-rules * Add raw disk image support * Add my @linux2go.dk address to .mailmap * fixed some pep8 business * directly copy ip allocation into compute * Minor spellchecking fixes * Adds support for Pause and Unpause of xenserver instances * Make column names more generic * don't add the ip to bridge on compute hosts * PEP8 fixups * Added InstanceActions DB model * initial commit of xenserver host protections * Merged trunk * Fixed pep8 errors * Integrated changes from Soren (raw-disk-images). Updated authors file.
All tests passed * pep8 (again again) * pep8 (again) * small clean up * Added test code to the repository. Changed nova.compute.manager.pre\_live\_migration() because it could return a success value even after failing: the success return value was changed to True, a RemoteError is now raised when the fixed\_ip cannot be found, and nova.compute.manager.live\_migration was changed to match * Added test code to the repository. Changed nova.compute.manager.pre\_live\_migration() because it could return a success value even after failing: the success return value was changed to True, a RemoteError is now raised when the fixed\_ip cannot be found, and nova.compute.manager.live\_migration was changed to match * Support proxying api by using X-Forwarded-For * eventlet merge updates * Cleaned up TODOs, using flags now * merge trunk and minor fix (for whatever reason validator\_unittest did not get removed from run\_test.py) * fixed unittests and further clean-up post-eventlet merge * All API tests finally pass * Removing unneeded Trial specific code * A few more tweaks to get the OS API tests passing * Adding new install script plus changes to multinode install doc * Removing unneeded Trial specific code * Replaced the use of redis in fakeldap with a customized dict class. Auth unittests should now run fine without a redis server running, or without python-redis installed * Adding Ed Leafe to Authors file * Some tweaks * Adding in Ed Leafe so we can land his remove-redis test branch * Add wait\_for\_vhd\_coalesce * Some typo fixes * pep8 cleanup * Fixed some old code that was merged incorrectly * Replaced redis with a modified dict class * bug fixes * first revision after eventlet merge. Currently xenapi-unittests are broken, but everything else seems to be running okay * Integrated eventlet\_merge patch * Code reviewed * XenAPI Snapshots first cut * Fixed network test (thanks Vish!) and fixed run\_tests.sh * First pass at converting run\_tests.py to nosetests. The network and objectstore tests don't yet work. Also, we need to manually remove the sqlite file between runs * remerged for pep8 * pep8 * merged in project-vpns to get flag changes * clean up use of iptables chains * move some flags around * add conditional bind to linux net * make sure all network data is recreated when nova-network is rebooted * merged trunk * merged trunk, fixed conflicts and tests * Added Instance Diagnostics DB model * Put flags back in nova.virt.xenapi/vm\_utils * Removed unnecessary blank lines * Put flags back in vm\_utils * This branch removes most of the dependencies on twisted and moves towards the plan described by https://blueprints.launchpad.net/nova/+spec/unified-service-architecture * pep8 fixes for bin * PEP8 cleanups * use getent, update docstring * pep8 fixes * reviewed the FIXMEs, and spotted an uncaught exception in volume\_utils...yay! * fixed a couple more syntax errors * Moved implementation specific stuff from the middleware into their respective modules * typo * fixed up openstack api images index and detail * fake session clean-up * Removed FakeInstance and introduced stubout for DB. Code clean-up * removed extra stuff used for debugging * Restore code which was changed for testing reasons to the original state. Kudos to Armando for spotting this * Make nova work even when user has LANG or LC\_ALL configured * Merged changes from trunk into the branch * Fixed column names in the Host table; added support for FlatManager and FlatDHCPManager * merged with trunk.
fixed compute.pause test * fixup after merge with trunk * memcached requires strings not unicode * Fix 688220: Added dependency on Twisted>=10.1.0 to pip-requires * Make sure we properly close the bzr WorkingTree in our Authors up-to-datedness unit test * fixes for xenapi (thanks sandywalsh) * clean up tests and add overridden time method to utils * merged from upstream * add missing import * Adding back in openssh-lpk schema, as keys will likely be stored in LDAP again * basic conversion of xs-pause to eventlet done * brought clean-up from unittests branch and tests * I made pep8 happy * \* code cleanup \* revised unittest approach \* added stubout and a number of tests * clean up code to use timeout instead of two keys * final cleanup * Restore alphabetical order in Authors file * removed temporary comment lines * Lots of PEP-8 work * refresh\_security\_group renamed to refresh\_security\_group\_rules * added volume tests and extended fake to support them * Make sure the new, consolidated template gets included * Make sure we unlock the bzr tree again in the authors unit test * The ppa was moved. This updates nova.sh to reflect that * merged upstream * remove some logging * Merged from trunk and fixed merge issues. Also fixed pep8 issues * Lockout middleware for ec2 api * updates per review * Initial work on i18n. This adds the installation of the nova domain in gettext to all the "endpoints", which are all the bin/\* files and run\_tests.py * For some reason, I forgot to commit the other endpoints.. * Remove default\_{kernel,ramdisk} flags. They are not used anymore * Don't attempt to fiddle with partitions for whole-disk-images * pep8 * Includes architecture on register. Additionally removes a couple lines of cruft * nothing * nothing * nothing * support for pv guests (in progress) * merge trunk * Now that we have a templating engine, let's use it. Consolidate all the libvirt templates into one, extending the unit tests to make sure I didn't mess up * first cut of unittest framework for xenapi * Added my contacts to Authors file * final cleanup, after moving unittest work into another branch * fixup after merge with trunk * added callback param to fake\_conn * added not implemented stubs for libvirt * merge with trey tests * Fixed power state update with Twisted callback * simplified version using original logic * moving xenapi unittests changes into another branch * Adds support to the ec2 api for filtering describe volumes by volume\_ids * Added LiveCD info as well as some changes to reflect consolidation of .conf files * Fix exception throwing with wrong instance type * Add myself * removing imports that should have not been there * second round for unit testing framework * Added Twisted version dependency into pip-requires * only needs work for distinguishing pv from hvm * Move security group refresh logic into ComputeAPI * Refactored smoketests to use novarc environment and to separate user and admin specific tests * Changed OpenStack API auth layer to inject a RequestContext rather than building one everywhere we need it * Elaborate a bit on ipsets comment * Final round of marking translation strings * First round of i18n-ifying strings in Nova * Initial i18n commit for endpoints. All endpoints must install gettext, which injects the \_ function into the builtins * Fixed spelling errors in index.rst * fix pep8 * Includes kernel and ramdisk on register.
Additionally removes a couple lines of cruft * port new patches * merge-a-tat-tat upstream to this branch * Format fixes and modification of Vish's email address * There is always the odd change that one forgets! * \* pylint fixes \* code clean-up \* first cut for xenapi unit tests * added pause and unpause to fake connection * merged changes from sandy's branch * added unittest for pause * add back utils.default\_flagfile * removed a few more references to twisted * formatting and naming cleanup * remove service and rename service\_eventlet to service * get service unittests running again * whitespace fix * make nova binaries use eventlet * Converted the instance table to use a uuid instead of an auto\_increment ID and a random internal\_id. I had to use a String(32) column with hex and not a String(16) with bytes because SQLAlchemy doesn't like non-unicode strings going in for String types. We could try another type, but I didn't want a primary\_key on blob types * remove debug messages * merge with trey * pause and unpause code/tests in place. To the point it stuffs request in the queue * import module and not class directly as per Soren's recommendation * Make XenServer VM diagnostics available through nova.virt.xenapi * Merged trunk * Added exception handling to get\_rrd() * Changed OpenStack API auth layer to inject a RequestContext rather than building one everywhere we need it * changed resume to unpause * Import module instead of function * filter describe volumes by supplied ids. Includes unittest * merging sandy's branch * Make get\_diagnostics async * raw instances can now be launched in xenapi (only as hvm at the moment) * pause from compute.manager <-> xenapi * Merged Armando's XenAPI fix * merge with trunk to pull in admin-api branch * Flag to define which operations are exposed in the OpenStack API, disabling all others * Fixed Authors conflict and re-merged with trunk * fixes exception throwing with wrong instance type * Ignore security group rules that reference foreign security groups * fixed how the XenAPI library is loaded * remove some unused files * port volume manager to eventlet also * intermediate commit to checkpoint progress * some pylint caught changes to compute * added to Authors * adds bzr to the list of dependencies in pip-require so that upon checkout using run\_tests.sh succeeds * merge conflict * merged upstream changes * add bzr to the dev dependencies * Fixed docstrings * Merged trunk * Got get\_diagnostics in working order * merged updates to trunk * merge trunk * typo fix * removing extraneous config lines * Finished cleaning up the openstack servers API, it no longer touches the database directly. Also cleaned up similar things in ec2 API and refactored a couple methods in nova.compute.api to accommodate this work * Pushed terminate instance and network manager/topic methods into network.compute.api * Merged trunk * Moved the reboot/rescue methods into nova.compute.api * PEP8 fixes * Setting the default schema version to the new schema * Adding support for choosing a schema version, so that users can more easily migrate from an old schema to the new schema * merged with trunk. All clear! * Removing novaProject from the schema. This change may look odd at first; here's how it works: * test commit * Removed comments; fixed README.live\_migration.txt based on review feedback * This change adds better support for LDAP integration with pre-existing LDAP infrastructures.
A new configuration option has been added to specify the LDAP driver should only modify/add/delete attributes for user entries * More pep8 fixes to remove deprecated functions * pep8 fix * Clarifying previously committed exception message * Raising an exception if the user doesn't exist before trying to modify its attributes * Removing redundant check * Added livecd instructions plus fixed references to .conf files * pylint fixes * Initial diagnostics import -- needs testing and cleanup * Added a script to use OpenDJ as an LDAP server instead of OpenLDAP. Also modified nova.sh to add a USE\_OPENDJ option, that will be checked when USE\_LDAP is set * Reverting last change * a few more things ironed out * Make sure Authors check also works for pending merges (otherwise stuff can get merged that will make the next merge fail this check) * It looks like Soren fixed the author file, can I hit the commit button? * merge trunk * Make sure Authors check also works for pending merges (otherwise stuff can get merged that will make the next merge fail this check) * Add a helpful error message to nova-manage in case of NoMoreNetworks * Add Ryan Lucio to Authors * Adding myself to the authors list * Add Ryan Lucio to Authors * Addresses bug 677475 by changing the DB column for internal\_id in the instances table to be unsigned * importing XenAPI module loaded late * Added docstring for get\_instances * small fixes on Exception handling * first test commit * and yet another pylint fix * fixed pylint violations that slipped out from a previous check * \* merged with lp:~armando-migliaccio/nova/xenapi-refactoring \* fixed pylint score \* complied with HACKING guidelines * addressed review comments, complied with HACKING guidelines * adding README.livemigration.txt * Merged the live migration functionality based on rev439; this version has no EBS support and no CPU flag checks * modified a few files * Fixed conflicts with gundlach's fixes * Remove dead test code * Add iptables based security groups implementation * Merged gundlach's fixes * Don't wrap HTTPAccepted in a fault. Correctly pass kwargs to update\_instance * fixed import module in \_\_init\_\_.py * minor changes to docstrings * added interim solution for target discovery. Now info can either be passed via flags or discovered via iscsiadm. Long term solution is to add a few more fields to the db in the iscsi\_target table with the necessary info and modify the iscsi driver to set them * merge with lp:~armando-migliaccio/nova/xenapi-refactoring * merge trunk * moved XenAPI namespace definition into xenapi/\_\_init\_\_.py * pylint and pep8 fixes * Decreased the maximum value for instance-id generation from uint32 to int32 to avoid truncation when being entered into the instance table. Reverted fix to make internal\_id column a uint * Finished cleaning up the openstack servers API, it no longer touches the database directly.
Also cleaned up similar things in ec2 API and refactored a couple methods in nova.compute.api to accommodate this work * Merged reboot-rescue into network-manager * Merged trunk * Fixes a missing step (nova-manage network create IP/nn n nn) in the single-node install guide * Tired of seeing various test files in bzr stat * Updated sqlalchemy model to make the internal\_id column of the instances table an unsigned integer * \* Removes unused schema \* Removes MUST uid from novaUser \* Changes isAdmin to isNovaAdmin \* Adds two new configuration options: \*\* ldap\_user\_id\_attribute, with a default of uid \*\* ldap\_user\_name\_attribute, with a default of cn \* ldapdriver.py has been modified to use these changes * Pushed terminate instance and network manager/topic methods into network.compute.api * Fix bugs that prevented OpenStack API from supporting server rename * pep8 * Use newfangled compute\_api * Update tests to use proper id * Fixing single node install doc * Oops, update 'display\_name', not 'name'. And un-extract-method * Correctly translate instance ids to internal\_ids in some spots we neglected * Added test files to be ignored * Consolidated the start instance logic in the two API classes into a single method. This also cleans up a number of small discrepancies between the two * Moved reboot/rescue methods into nova.compute.api * Merged trunk and resolved conflicts. Again * Instances are assigned a display\_name if one is not passed in -- and now, they're assigned a display\_name even if None is explicitly passed in (as the EC2 API does.) * Merged trunk and resolved conflicts * Default Instance.display\_name to a value even when None is explicitly passed in * Refactor nwfilter code somewhat. For iptables based firewalls, I still want to leave it to nwfilter to protect against arp, mac, and ip spoofing, so it needed a bit of a split * Add a helpful error message to nova-manage in case of NoMoreNetworks * minor refactoring after merge * merge lp:~armando-migliaccio/nova/refactoring * merge trunk * typo fix * moved flags into xenapi/novadeps.py * Add a simple abstraction for firewalls * fix nova.sh to reflect new location of ppa * Changed null\_kernel flag from aki-00000000 to nokernel * Guarantee that the OpenStack API's Server-related responses will always contain a "name" value. And get rid of a redundant field in models.py * Going for a record commits per line changes ratio * Oops, internal\_id isn't available until after a save. This code saves twice; if I moved it into the DB layer we could do it in one save. However, we're moving to one sqlite db per compute worker, so I'd rather have two saves in order to keep the logic in the right layer * Todd points out that the API doesn't require a display\_name, so let's make a default. That way the OpenStack API can rest assured that its server responses will always have a name key * Adds in more documentation contributions from Citrix * Remove duplicate field and make OpenStack API return server.name for EC2-API-created instances * Move cc\_host and cc\_port flags into nova/network/linux\_net.py. They weren't used anywhere else * Add include\_package\_data=True to setup.py * With utils.default\_flagfile() in its old location, the flagfile isn't being read -- twistd.serve() loads flags earlier than that point.
Move the utils.default\_flagfile() call earlier so the flagfile is included * Removed a blank line * Broke parts of compute manager out into compute.api to separate what gets run on the API side vs the worker side * Move default\_flagfile() call to where it will be parsed in time to load the flagfile * minor refactoring * Move cc\_host and cc\_port flags into nova/network/linux\_net.py. They weren't used anywhere else * Added a script to use OpenDJ as an LDAP server instead of OpenLDAP. Also modified nova.sh to add a USE\_OPENDJ option, that will be checked when USE\_LDAP is set * Fixed termie's tiny bits from the prior merge request * Delete unused flag in nova.sh * Moving the openldap schema out of nova.sh into its own files, and adding sun (opends/opendj/sun directory server/fedora ds) schema files * OpenStack API returns the wrong x-server-management-url. Fix that * Cleaned up pep8 errors * brought latest changes from trunk * iscsi volumes attach/detach complete. There is only one minor issue on how to discover targets from device\_path * Fix unit tests * Fix DescribeImages EC2 API call * merged Justin Santa Barbara's raw-disk-image back into the latest trunk * If only I weren't so lazy * Rename imageSet variable to images * remove FAKE\_subdomain reference * Return the correct server\_management\_url * Default flagfile moved in trunk recently. This updates nova.sh to run properly with the new flagfile location * Correctly handle imageId list passed to DescribeImages API call * update of nova.sh because default flagfile moved * merged trunk * Add a templating mechanism in the flag parsing * Adjust state\_path default setting so that api unit tests find things where they used to find them * Import string instead of importing Template from string. This is how we do things * brought the xenapi refactoring in plus trunk changes * changes * pep8 fixes and further round of refactoring * Rename cloudServersFault to computeFault -- I missed this Rackspace branding when we renamed nova.api.rackspace to nova.api.openstack * Make sure templated flags work across calls to ParseNewFlags * Add include\_package\_data=True to setup.py * fixed deps * first cut of the refactoring of the XenAPIConnection class. Currently the class merged both the code for managing the XenAPI connection and the business logic for implementing Nova operations. If left like this, it would eventually become difficult to read, maintain and extend. The file was getting kind of big and cluttered, so a quick refactoring now will save a lot of headaches later * other round of refactoring * further refactoring * typos and pep8 fixes * first cut of the refactoring of the XenAPIConnection class. Currently the class merged both the code for managing the XenAPI connection and the business logic for implementing Nova operations. If left like this, it would eventually become difficult to read, maintain and extend. The file was getting kind of big and cluttered, so a quick refactoring now will save a lot of headaches later * PEP fixes * Adding support for modification only of user accounts
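The templating entries above ($-style references resolved during flag parsing, hence the string.Template import) let paths hang off a single state\_path. A rough sketch of the expansion with hypothetical flag values; Nova's real flag parser differs:

    import string

    FLAGS = {'state_path': '/var/lib/nova',
             'images_path': '$state_path/images',
             'networks_path': '$state_path/networks'}

    def flag_value(name):
        """Expand $other_flag references in a flag's raw value."""
        return string.Template(FLAGS[name]).substitute(FLAGS)

    assert flag_value('images_path') == '/var/lib/nova/images'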
* This modification should have occurred in a different branch. Reverting * added attach\_volume implementation * work on attach\_volume, with a few things to iron out * A few more changes: \* Fixed up some flags \* Put in an updated nova.sh \* Broke out metadata forwarding so it will work in flatdhcp mode \* Added descriptive docstrings explaining the networking modes in more detail * small conflict resolution * first cut of changes for the attach\_volume call * The image server should throw not found errors, don't need to check in compute manager * Consolidated the start instance logic in the two API classes into a single method. This also cleans up a number of small discrepancies between the two * Setting "name" back to "cn", since id and name should be separate * Adding support for modification only of user accounts * don't error on edge case where vpn has been launched but fails to get a network * Make sure all workers look for their flagfile in the same spot * Fix typo "nova.util" -> "nova.utils" * Fix typo "nova.util" -> "nova.utils" * Added a .mailmap that maps addresses in bzr to people's real, preferred e-mail addresses. (I made a few guesses along the way, feel free to adjust according to what is actually the preferred e-mail) * Add a placeholder in doc/build. Although bzr handles empty directories just fine, setuptools does not, so to actually ship this directory in the tarball, we need a file in it * Add a placeholder in doc/build. Although bzr handles empty directories just fine, setuptools does not, so to actually ship this directory in the tarball, we need a file in it * Merged trunk * pep8 * merged trunk, added recent nova.sh * fix typos in docstring * docstrings, more flags, breakout of metadata forwarding * doc/build was recently accidentally removed from VCS. This adds it back, which makes the docs build again * Add doc/build dir back to bzr * Make aws\_access\_key\_id and aws\_secret\_access\_key configurable * add vpn ping and optimize vpn list * Add an alias for Armando * the serial returned by x509 is already formatted in hex * Adding developer documentation - setting up dev environment and how to add to the OpenStack API * Add a --logdir flag that will be prepended to the logfile setting. This makes it easier to share a flagfile between multiple workers while still having separate log files * Address pep8 complaints * Address PEP8 complaints * Remove FAKE\_subdomain from docs * Adding more polish * Adding developer howtos * Remove FAKE\_subdomain from docs * Make aws\_access\_key\_id and aws\_secret\_access\_key configurable * updated nova.sh * added flat\_interface for flat\_dhcp binding * changed bridge\_dev to vlan\_interface * Add a --logdir flag that will be prepended to the logfile setting. This makes it easier to share a flagfile between multiple workers while still having separate log files * added svg files (state.svg is missing because its source is a screen snapshot) * Unify the location of the default flagfile. Not all workers called utils.default\_flagfile, and nova-manage explicitly said to use the one in /etc/nova/nova-manage.conf * Set and use AMQP retry interval and max retry FLAGS * Incorporating security groups info * Rename cloudServersFault (rackspace branding) to computeFault.
Fixes bug lp680285 * Use FLAGS instead of constants * Incorporating more networking info * Make time.sleep() non-blocking * Removed unnecessary continue * Update Authors and add a couple of names to .mailmap (from people who failed to set bzr whoami properly) * Refactor AMQP retry loop * Allows user to specify hosts to listen on for nova-api and -objectstore * Make sure all the libvirt templates are included in the tarball (by replacing the explicitly listed set with a glob pattern) * fixed pep8 violations * Set and use AMQP retry interval and max retry constants * pep8 violations fix * added placeholders * added test for invalid handles * Make sure all templates are included (at least rescue templates didn't used to be included) * Check for running AMQP instances * Use logging.exception instead * Reverted some changes * Added some comments * Adds images (only links one in), start for a nova-manage man file, and also documents all nova-manage commands. Can we merge it in even though the man page build isn't working? * Added some comments * Check for running AMQP instances * first cut of fixes for bug #676128 * Removed .DS\_Store files everywhere, begone! * Moves the EC2 API S3 image service into nova.service. There is still work to be done to make the APIs align, but this is the first step * PEP8 fixes, 2 lines were too long * First step to getting the image APIs consolidated. The EC2 API was using a one-off S3 image service wrapper, but this should be moved into the nova.image space and use the same interface as the others. There are still some mismatches between the various image service implementations, but this patch was getting large and I wanted to keep it within a reasonable size * Improved Pylint Score * Fixes improper display of api error messages that happen to be unicode * Make sure that the response body is a string and not unicode * Soren updated setup.py so that the man page builds. Will continue working on man pages for nova-compute and nova-network * Overwrite build\_sphinx, making it run once for each of the html and man builders * fixes flatdhcp, updates nova.sh, allows for empty bridge device * Update version to 2011.1 as that is the version we expect to release next * really adding images * adding images * Documenting all nova-manage commands * Documenting all nova-manage commands * Fixes eventlet race condition in cloud tests * fix greenthread race conditions in trunk and floating ip leakage * Testing man page build through conf.py * Improved Pylint Score * adjusting images size and bulleted list * merged with trunk * small edit * Further editing and added images * Update version to 2011.1 as that is the version we expect to release next * ec2\_api commands for describe\_addresses and associate\_address are broken in trunk. This happened during the switch to ec2\_id and internal\_id. We clearly didn't have any unit tests for this, so I've added a couple in addition to the three line change to actually fix the bugs * delete floating ips after tests * remove extra line and ref. to LOG that doesn't exist * fix leaking floating ip from network unittests and use of fakeldap driver * Adds nova-debug to tools directory, for debugging of instances that lose networking * fixes errors in describe address and associate address.
Adds test cases * Ryan\_Lane's code to handle /etc/network not existing when we try to inject /etc/network/interfaces into an image * pep8 * First dump of content related to Nova RPC and RabbitMQ * Add docstrings to any methods I touch * pep8 * PEP8 fixes * added myself to Authors file. Enjoy spiders * Changed from fine-grained operation control to binary admin on/off setting * Changed from fine-grained operation control to binary admin on/off setting * Lots of documentation and docstring updates * The docs are just going to be wrong for now. I'll file a bug upstream * Change how wsgified doc wrapping happens to fix test * merge to trunk * pep8 * Adding contributors and names * merge with trunk * base commit * saw a duplicate import ... statement in the code while reading through unit tests - this removes the dupe * removed redundant unit test import * add in bzr link * adding a bit more networking documentation * remove tab * fix title * tweak * Fix heading * merge in anne's changes * tweak * Just a few more edits, misspellings and the like * fix spacing to enable block * merge to remote * unify env syntax * Add sample puppet scripts * fix install guide * getting started * create SPHINX\_DEBUG env var. Setting this will disable aggressive autodoc generation. Also provide some sample for P syntax * fix conf file from earlier merge * notes, and add code to enable sorted "..todo:: P[1-5] xyz" syntax * merge in more networking docs - still a work in progress * anne's changes to the networking documentation * Updated Networking doc * anne gentle's changes to community page * merge in heckj's corrections to multi-node install * Added a .mailmap that maps addresses in bzr to people's real, preferred e-mail addresses. (I made a few guesses along the way, feel free to adjust according to what is actually the preferred e-mail) * Updated community.rst to fix a link to the IRC logs * merging in changes from ~anso/nova/trunkdoc * fixed another spacing typo causing poor rendering * fixed spacing typo causing poor rendering * merge in anne's work * add docs for ubuntu 4, 10, others * Updated Cloud101 and admonition color * merge heckj's multi install notes * working on single node install * updating install notes to reference Vish's nova.sh and installing in MySQL * Add Flat mode doc * Add Flat mode doc * Add Flat mode doc * Add VLAN Mode doc * Add VLAN Mode doc * merge in anne's changes * home page tweaks * Updated CSS and community.rst file * modifications and additions based on doc sprint * incorporate some feedback from todd and anne * merge in trunk * working on novadoc structure * add some info on authentication and keys * Since we're autodocumenting from a sphinx ext, we can scrap it in Makefile * Use the autodoc tools in the setup.py build\_sphinx toolchain * Fix include paths so setup.py build\_sphinx works again * Cleanups to doc process * quieter doc building (fewer warnings) * File moves from "merge" of termie's branch * back out stacked merge * Doc updates: \* quieter build (fewer warnings) \* move api reference out of root directory \* auto glob api reference into a TOC \* remove old dev entries for new-fangled auto-generated docs * Normalization of Dev reference docs * Switch to module-per-file for the module index * Allow case-by-case overriding of autodocs * add exec flags, apparently bzr shelve/unshelve does not keep track of them * Build autodocs for all our libraries * add dmz to flags and change a couple defaults * Per-project vpns, certificates, and revocation * remove finished todo
* Fix docstrings for wsgified methods * fix default twitter username * shrink tweet text a bit * Document nova.sh environment * add twitter feed to the home page * Community contact info * small tweaks before context switch * use include to grab todd's quickstart * add in custom todo, and custom css * Format TODO items for sphinx todo extension * additions to home page * Change order of sections so puppeting is last, add more initial setup tasks * update types of services that may run on machines * Change directory structure for great justice! * Refactored smoketests to use novarc environment and to separate user and admin specific tests * start adding info to multi-node admin guide * document purpose of documentation * Getting Started Guide * Nova quickstart: move vish's novascript into contrib, and convert reademe.md to a quickstart.rst * merge trunk * Add a templating mechanism in the flag parsing. Add a state\_path flag that will be used as the top-level dir for all other state (such as images, instances, buckets, networks, etc). This way you only need to change one flag to put all your state in e.g. /var/lib/nova * add missing file * Cleanup nova-manage section * have "contents" look the same as other headings * Enables the exclusive flag for DirectConsumer queues * Ensures that keys for context from the queue are passed to the context constructor as strings. This prevents hangs on older versions of python that can't handle unicode kwargs * Fix for bug #640400, enables the exclusive flag on the temporary queues * pep8 whitespace and line length fixes * make sure context keys are not unicode so they can be passed as kwargs * merged trunk * merged source * prettier theme * Added an extra argument to the objectstore listen to separate out the listening host from the connecting host * Change socket type in nova.utils.get\_my\_ip() to SOCK\_DGRAM. This way, we don't actually have to set up a connection. Also, change the destination host to an IP (chose one of Google's DNS servers at random) rather than a hostname, so we avoid doing a DNS lookup
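The get\_my\_ip() entry above works because connect() on a SOCK\_DGRAM socket sends no packets; it only asks the kernel which local address the route would use. A close but still hedged reconstruction (8.8.8.8 stands in for "one of Google's DNS servers"):

    import socket

    def get_my_ip():
        """Return the IP this host would use to reach an outside address."""
        csock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        # UDP connect() picks a route and local address without sending traffic.
        csock.connect(('8.8.8.8', 80))
        (address, _port) = csock.getsockname()
        csock.close()
        return address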
* Fix for bug#613264, allowing hosts to be specified for nova-api and objectstore listeners * Fixes issue with security groups not being associated with instances * Doc cleanups * Fix flags help display * Change socket type in nova.utils.get\_my\_ip() to SOCK\_DGRAM. This way, we don't actually have to set up a connection. Also, change the destination host to an IP (chose one of Google's DNS servers at random) rather than a hostname, so we avoid doing a DNS lookup * ISCSI Volume support * merged * more descriptive title for cloudpipe * update of the architecture and fix some links * Fixes after trunk merge * removed some old instructions and updated concepts * merge * Documentation on Services, Managers, and Drivers * Document final undocumented python modules * merged trunk * cloudpipe docs * Fixed --help display for non-twisted bin/\* commands * Adds support for multiple API ports, one for each API type (OS, EC2) * Fixed tests to work with new default API argument * Added support for OpenStack and EC2 APIs to run on different ports * More docs * Language change for conformity * Add ec2 api docs * Exceptions docs * API endpoint documentation * basics to get proxied ajaxterm working with virsh * :noindex: on the fakes page for virt.fakes which is included in compute.rst * Virt documentation * Change retrieval of security groups from kwargs so they are associated properly and add test to verify * don't check for vgroup in fake mode * merged trunk, just in case * Update compute/disk.py docs * Change volume TODO list * Volume documentation * Remove fakes duplication * Update database docs * Add support for google analytics to only the hudson-produced docs * Changes to conf.py * Updated location of layout.html and change conf.py to use a build variable * Update database page a bit * Fakes cleanup (stop duplicate autodoc of FakeAOEDriver) * Document Fakes * Remove "nova Packages and Dependencies" * Finished TODO item * Pep-257 * Pep-257 cleanups * Clean up todos and the like for docs * A shell script for showing modules that aren't documented in .rst files * merge trunkdoc * link binaries section to concepts * :func: links to python functions in the documentation * Todo cleanups in docs * cleanup todos * fix title levels * wip architecture, a few auth formatting fixes, binaries, and overview * volume cleanups * Remove objectstore, not referenced anywhere * Clean up volumes / storage info * Moves db writes into compute manager class. Cleans up sqlalchemy model/api to remove redundant calls for updating what is really a dict * Another heading was too distracting, use instead * Fix underlining -> heading in rst file * Whitespace and docstring cleanups * Remove outdated endpoint documentation * Clean up indentation error by preformatting * Add missing rst file * clean up the compute documentation a bit * Remove unused updated\_data variable * Fix wiki link * added nova-manage docs * merged and fixed conflicts * updates to auth, concepts, and network, fix of docstring * cleanup rrd doc generation * Doc skeleton from collaborative etherpad hack session * OK, let's try this one more time * Doc updates * updates from review, fix models.get and note about exception raising * Style cleanups and review from Eric * New structure for documentation * Fixes PEP8 violations from the last few merges * More PEP8 fixes that were introduced in the last couple commits * Adding Google Analytics code to nova.openstack.org * Fixes service unit tests after tornado excision * Added Google Analytics code * renamed target\_id to iscsi\_target * merged gundlach's excision * Oops, didn't mean to check this one in.
Ninja-patch * Delete BaseTestCase and with it the last reference to tornado * fix completely broken ServiceTestCase * Removes some cruft from sqlalchemy/models.py like unused imports and the unused str\_id method * Adds rescue and unrescue commands * actually remove the conditional * fix tests by removing missed reference to prefix and unnecessary conditional in generate\_uid * Making net injection create /etc/network if non-existent * Documentation was missing; added * Moving the openldap schema out of nova.sh into its own files, and adding sun (opends/opendj/sun directory server/fedora ds) schema files * validates device parameter for attach-volume * add nova-debug to setup.py * nova-debug, relaunch an instance with a serial console * Remove the last vestigial bits of tornado code still in use * pep8 cleanup * print the exception on fail, because it doesn't seem to reraise it * use libvirt connection for attaching disks and avoid the symlink * update error message * Exceptions in the OpenStack API will be converted to Faults as they should be, rather than barfing a stack trace to the user * pep8 * pep8 * Duplicate the two trivial escaping functions remaining from tornado's code and remove the dependency * more bugfixes, flag for local volumes * fix bugs, describe volumes, detach on terminate * ISCSI Volume support * Removed unused imports and left over references to str\_id * logging.warn not raise logging.Warn * whitespace * move create\_console to cloud.py from admin.py * merge lp:nova * add NotFound to fake.py and document it * add in the xen rescue template * pep 8 cleanup and typo in resize * add methods to cloud for rescue and unrescue * update tests * merged trunk and fixed conflicts/changes * part way through porting the codebase off of twisted * Another pep8 cleanup branch for nova/tests, should be merged after lp:~eday/nova/pep8-fixes-other. After this, the pep8 violation count is 0! * Changes block size for dd to a reasonable number * Another pep8 cleanup branch for nova/api, should be merged after lp:~eday/nova/pep8-fixes

2010.1
------

* Created Authors file * Actually adding Authors file * Created Authors file and added to manifest for Austin Release * speed up disk generation by increasing block size * PEP8 cleanup in nova/tests, except for tests. There should be no functional changes here, just style changes to get violations down * PEP8 cleanup in nova/\*, except for tests. There should be no functional changes here, just style changes to get violations down * PEP8 cleanup in nova/db. There should be no functional changes here, just style changes to get violations down * PEP8 cleanup in nova/api. There should be no functional changes here, just style changes to get violations down * PEP8 and pylint cleanup. There should be no functional changes here, just style changes to get violations down * Moves db writes into compute manager class. Cleans up sqlalchemy model/api to remove redundant calls for updating what is really a dict * validate device in AttachDisk * Cleanup of doc for dependencies (redis optional, remove tornado, etc).
Please check for accuracy * Delays the creation of the looping calls that check the queue until startService is called * Made updates based on review comments * Authorize image access instead of just blindly giving it away * Checks the pid of dnsmasq to make sure it is actually referring to the right process * change boto version from 1.9b1 to 1.9b in pip-requires * Check the pid to make sure it refers to the correct dnsmasq process * make sure looping calls are created after service starts and add some tests to verify service delegation works * fix typo in boto line of pip-requires * Updated documentation * Update version set in setup.py to 2010.1 in preparation for Austin release * Also update version in docs * Update version to 2010.1 in preparation for Austin release * \* Fills out the Parallax/Glance API calls for update/create/delete and adds unit tests for them. \* Modifies the ImageController and GlanceImageService/LocalImageService calls to use index and detail routes to comply perfectly with the RS/OpenStack API * Makes disk.partition resize root drive to 10G, unless it is m1.tiny which just leaves it as is. Larger images are just used as is * reverted python-boto version in pip-requires to 1.9b1 * Construct exception instead of raising a class * Authorize Image before download * Add unit test for XML requests converting errors to Faults * Fixes https://bugs.launchpad.net/nova/+bug/663551 by catching exceptions at the top level of the API and turning them into Faults * Adds reasonable default local storage gb to instance sizes * reverted python-boto version in pip-requires to 1.9b1 * Fix typo in test case * Remember to call limited() on detail() in image controller * Makes nova-dhcpbridge notify nova-network on old network lease updates * add reasonable gb to instance types * it is flags.DEFINE\_integer, not FLAGS.define\_int * Makes disk.partition resize root drive to 10G, unless it is m1.tiny which just leaves it as is. Larger images are just used as is * update leases on old leases as well * Adds a simple nova-manage command called scrub to deallocate the network and remove security groups for a project * Refresh MANIFEST.in to make the tarball include all the stuff that belongs in the tarball * Added test case to reproduce bug #660668 and provided a fix by using the user\_id from the auth layer instead of the username header * Add the last few things to MANIFEST.in * Also add Xen template to manifest * Fix two problems with get\_console\_log: \* libvirt has this annoying "feature" where it chown()s your console to the uid running libvirt. That gets in the way of reading it. Add a call to "sudo chown ...." right before we read it to make sure it works out well. \* We were looking in the wrong directory for console.log. \*blush\* * This branch converts incoming data to the api into the proper type * Fixes deprecated use of context in nova-manage network create * Add a bunch of stuff to MANIFEST.in that has been added to the tree over the last couple of months * Fix the --help flag for printing help on twistd-based services * Fix two problems with get\_console\_log: libvirt has this annoying "feature" where it chown()s your console to the uid running libvirt. That gets in the way of reading it. We were looking in the wrong directory for console.log.
\*blush\* * Fix for bug 660818 by adding the resource ID argument * Reorg the image services code to push glance stuff into its own directory * Fix some unit tests: \* One is a race due to the polling nature of rpc in eventlet based unit tests. \* The other is a more real problem. It was caused by datastore.py being removed. It wasn't caught earlier because the .pyc file was still around on the tarmac box * Add a greenthread.sleep(0.3) in get\_console\_output unit test. This is needed because, for eventlet based unit tests, rpc polls, and there's a bit of a race. We need to fix this properly later on * Perform a redisectomy on bin/nova-dhcpbridge * Removed 'and True' oddity * use context for create\_networks * Make Redis completely optional: * make --help work for twistd-based services * trivial style change * prevent leakage of FLAGS changes across tests * run\_tests.sh presents a prompt: * Also accept 'y' * A few more fixes for deprecations * make run\_tests.sh's default perform as expected * Added test case to reproduce bug #660668 and provided a fix by using the user\_id from the auth layer instead of the username header * get flags for nova-manage and fix a couple more deprecations * Fix for bug#660818, allows tests to pass since delete expects a resource ID * This branch fixes all of the deprecation warnings about empty context. It does this by adding the following fixes/features: \* promotes api/context.py to context.py because it is used by the whole system \* adds more information to the context object \* passes the context through rpc \* adds a helper method for promoting to admin context (elevate()) \* modifies most checks to use context.project\_id instead of context.project.id to avoid trips to the database * timestamps are passed as unicode * Removed stray spaces that were causing an unnecessary diff line * merged trunk * Minimized diff, fixed formatting * remove nonexistent exception * Merged with trunk, fixed broken stuff * revert to generic exceptions * fix indent * Fixes LP Bug#660095 * Move Redis code into fakeldap, since it's the only thing that still uses it. Adjust auth unittests to skip fakeldap tests if Redis isn't around. Adjust auth unittests to actually run the fakeldap tests if Redis /is/ around * fix nosetests * Fixes a few concurrency issues with creating volumes and instances. Most importantly it adds retries to a number of the volume shell commands and it adds a unique constraint on export\_devices and a safe create so that there aren't multiple copies of export devices in the database * unit tests and fix * call stuff project\_id instead of project * review fixes * fix context in bin files * add scrub command to clean up networks and sec groups * merged trunk * merged concurrency * review comments * Added a unit test but not integrated it * merged trunk * fix remaining tests * cleaned up most of the issues * remove accidental paste * use context.project\_id because it is more efficient * elevate in proper places, fix a couple of typos * merged trunk * Fixes bug 660115 * Address cerberus's comment * Fix several problems keeping AuthMiddleware from functioning in the OpenStack API * Implement the REST calls for create/update/delete in Glance * Adds unit test for WSGI image controller for OpenStack API using Glance Service * Fixes LP Bug#660095 * Xen support * Adds flat networking + dhcpserver mode * This patch removes the ugly network\_index that is used by VlanManager and turns network itself into a pool.
* Newlines again, reorder imports
* Remove extraneous newlines
* Fix typo, fix import
* merged upstream
* cleanup leftover addresses
* super teardown
* fix tests
* merged trunk
* merged trunk
* merged trunk
* merged trunk
* Revert the conversion to 64-bit ints stored in a PickleType column, because PickleType is incompatible with having a unique constraint
* Revert 64 bit storage and use 32 bit again. I didn't notice that we verify that randomly created uids don't already exist in the DB, so the chance of collision isn't really an issue until we get to tens of thousands of machines. Even then we should only expect a few retries before finding a free ID
* Add design doc, docstrings, document hyper-v wmi, python wmi usage. Adhere to pep-8 more closely
* This patch adds support for EC2 security groups using libvirt's nwfilter mechanism, which in turn uses iptables and ebtables on the individual compute nodes. This has a number of benefits: * Inter-VM network traffic can take the fastest route through the network without our having to worry about getting it through a central firewall. * Not relying on a central firewall also removes a potential SPOF. * The filtering load is distributed, offering great scalability
* Change internal_id from a 32 bit int to a 64 bit int
* 32 bit internal_ids become 64 bit. Since there is no 64 bit native type in SqlAlchemy, we use PickleType which uses the Binary SqlAlchemy type under the hood
* Make Instance.name a string again instead of an integer
* Now that the ec2 id is not the same as the name of the instance, don't compare internal_id [nee ec2_id] to instance names provided by the virtualization driver. Compare names directly instead
* Fix bug 659330
* Catch exception.NotFound when getting project VPN data
* Improve the virt unit tests
* Remove spurious project_id addition to KeyPair model
* APIRequestContext.admin is no more.
* Rename ec2_id_list back to instance_id to conform to EC2 argument spec
* Fix bug 657001 (rename all Rackspace references to OpenStack references)
* Extracts the kernel and ramdisk id from manifests and puts it into images' metadata
* Fix EC2 GetConsoleOutput method and add unit tests for it
* Rename rsapi to osapi, and make the default subdomain for OpenStack API calls be 'api' instead of 'rs'
* Fix bug 658444
* Adds --force option to run_tests.sh to clear virtualenv. Useful when dependencies change
* If machine manifest includes a kernel and/or ramdisk id, include it in the image's metadata
* Rename ec2 get_console_output's instance ID argument to 'instance_id'. It's passed as a kwarg, based on key in the http query, so it must be named this way
* if using local copy (use_s3=false) we need to know where to find the image
* curl not available on Windows for s3 download. also os-agnostic local copy
* Register the Hyper-V module into the list of virt modules
* hyper-v driver created
* Twisted pidfile and other flag parameters simply do not function on Windows
* Renames every instance of "rackspace" in the API and test code base. Also includes a minor patch for the API Servers controller to use images correctly in the absence of Glance
* That's what I get for not using a good vimrc
* Mass renaming
* Start stripping out the translators
* Remove redis dependency from RS Images API
* Remove redis dependency from Images controller
* Since FLAGS.images_path was not set for nova-compute, I could not launch instances due to an exception at _fetch_local_image() trying to access it. I think that this is the reason for Bug655217
* Imported images_path from nova.objectstore for nova-compute. Without its setting, it fails to launch instances by exception at _fetch_local_image
* Defined images_path for nova-compute. Without its setting, it fails to launch instances by exception at _fetch_local_image
* Cleans up a broken servers unit test
* Huge sweeping changes
* Adds stubs and tests for GlanceImageService and LocalImageService. Adds basic plumbing for ParallaxClient and TellerClient and hooks that into the GlanceImageService
* Typo
* Missed an ec2_id conversion to internal_id
* Cleanup around the rackspace API for the ec2 to internal_id transition
* merge prop fixes
* A little more clean up
* Replace model.Instance.ec2_id with an integer internal_id so that both APIs can represent the ID to external users
* Fix clause comparing id to internal_id
* Adds unit test for calling show() on a non-existing image. Changes return from real Parallax service per sirp's recommendation for actual returned dict() values
* Remove debugging code, and move import to the top
* Make (some) cloud unit tests run without a full-blown set up
* Stub out ec2.images.list() for unit tests
* Make rpc calls work in unit tests by adding extra declare_consumer and consume methods on the FakeRabbit backend
* Add a connect_to_eventlet method
* Un-twistedify get_console_output
* Create and destroy user appropriately. Remove security group related tests (since they haven't been merged yet)
* Run the virt tests by default
* Keep handles to loggers open after daemonizing
* merged trunk and fixed tests
* Cleans up the unit tests that are meant to be run with nosetests
* Update Parallax default port number to match Glance
* One last bad line
* merge from gundlach ec2 conversion
* Adds ParallaxClient and TellerClient plumbing for GlanceImageService. Adds stubs FakeParallaxClient and unit tests for LocalImageService and GlanceImageService
* Fix broken unit tests
* Matches changes in the database / model layer with corresponding fixes to nova.virt.xenapi
* Replace the embarrassingly crude string based tests for to_xml with some more sensible ElementTree based stuff
* A shiny, new Auth driver backed by SQLAlchemy. Read it and weep. I did
* Move manager_class instantiation and db.service_* calls out of nova.service.Service.__init__ into a new nova.service.Service.startService method which gets called by twisted. This delays opening db connections (and thus sqlite file creation) until after privileges have been shed by twisted
* Add pylint thingamajig for startService (name defined by Twisted)
* Revert r312
* Add a context of None to the call to db.instance_get_all
* Honour the --verbose flag by setting the logging level to DEBUG
* Accidentally renamed volume related stuff
* More clean up and conflict resolution
* Move manager_class instantiation and db.service_* calls out of nova.service.Service.__init__ into a new nova.service.Service.startService method which gets called by twisted. This delays opening db connections (and thus sqlite file creation) until after privileges have been shed by twisted
* Bug #653560: AttributeError in VlanManager.periodic_tasks
* Bug #653534: NameError on session_get in sqlalchemy.api.service_update
* Fixes to address the following issues:
* s/APIRequestContext/get_admin_context/ <-- sudo for request contexts
* Bug #654034: nova-manage doesn't honour --verbose flag
* Bug #654025: nova-manage project zip and nova-manage vpn list broken by change in DB semantics when networks are missing
* Bug #654023: nova-manage vpn commands broken, resulting in erroneous "Wrong number of arguments supplied" message
* fix typo in setup_compute_network
* pack and unpack context
* add missing to_dict
* Bug #653651: XenAPI support completely broken by orm-refactor merge
* Bug #653560: AttributeError in VlanManager.periodic_tasks
* Bug #653534: NameError on session_get in sqlalchemy.api.service_update
* Adjust db api usage according to recent refactoring
* Make _dhcp_file ensure the existence of the directory containing the files it returns
* Keep handles to loggers open after daemonizing
* Adds BaseImageService and flag to control image service loading. Adds unit test for local image service
* Cleans up the unit tests that are meant to be run with nosetests
* Refactor sqlalchemy api to perform contextual authorization
* automatically convert strings passed into the api into their respective original values
* Fix the deprecation warnings for passing no context
* Address a few comments from Todd
* Merged trunk
* Locked down fixed ips and improved network tests
* merged remove-network-index
* Fixed flat network manager with network index gone
* merged trunk
* show project ids for groups instead of user ids
* create a new manager for flat networking including dhcp
* First attempt at a uuid generator -- but we've lost a 'topic' input so i don't know what that did
* Find other places in the code that used ec2_id or get_instance_by_ec2_id and use internal_id as appropriate
* Convert EC2 cloud.py from assuming that EC2 IDs are stored directly in the database, to assuming that EC2 IDs should be converted to internal IDs
* Method cleanup and fixing the servers tests
* merged trunk, removed extra quotas
* Adds support for periodic_tasks on manager that are regularly called by the service and recovers fixed_ips that didn't get disassociated properly
* Replace database instance 'ec2_id' with 'internal_id' throughout the nova.db package. internal_id is now an integer -- we need to figure out how to make this a bigint or something
* merged trunk
* refactoring
* refactoring
* Includes changes for creating instances via the Rackspace API. Utilizes much of the existing EC2 functionality to power the Rackspace side of things, at least for now
* Get rid of mention of mongo, since we are using openstack/swift
* Mongo bad, swift good
* Add a DB backend for auth manager
* Bug #652103: NameError in exception handler in sqlalchemy API layer
* Bug #652103: NameError in exception handler in sqlalchemy API layer
* Bug #651887: xenapi list_instances completely broken
* Grabbed the wrong copyright info
* Cleaned up db/api.py
* Refactored APIRequestContext
* Bug #651887: xenapi list_instances completely broken
* Simplified authorization with decorators
* Removed deprecated bits from NovaBase
* Wired up context auth for keypairs
* Completed quota context auth
* Finished context auth for network
* Finished instance context auth
* Finished instance context auth
* Made network tests pass again
* Whoops, forgot the exception handling bit
* Missed a few attributes while mirroring the ec2 instance spin up
* pylint and pep8 cleanup
* Forgot the context module
* Some minor cleanup
* Servers stuff
* merge rsapi_reboot from gundlach
* Wired up context auth for services
* Server creation up to, but not including, network configuration
* Progress on volumes. Fixed foreign keys to respect deleted flag
* Support reboot in api.rackspace by extracting reboot function from api.ec2 into api.cloud
* Make Fault raiseable, and add a test to verify that
* Make Fault raiseable by inheriting from webob.exc.HTTPException
* Related: https://code.launchpad.net/~anso/nova/authupdate/+merge/36925
* Remove debuggish print statement
* Make update work correctly
* Server update name and password
* Support the pagination interface in RS API -- the &offset and &limit parameters are now recognized
* Update from trunk to handle one-line merge conflict
* Support fault notation in error messages in the RS API
* Limit entity lists by &offset and &limit
* After update from trunk, a few more exceptions that need to be converted to Faults
* fix ordering of rules to actually allow out and drop in
* fix the primary and secondary join
* autocreate the models and use security_groups
* Began wiring up context authorization
* Apply patch from Vish to fix a hardcoded id in the unit tests
* removed a few extra items
* merged with soren's branch
* fix loading to ignore deleted items
* Add user-editable name & notes/description to volumes, instances, and images
* merged trunk
* patch for test
* fix join and misnamed method
* fix eagerload to be joins that filter by deleted == False
* * Create an AuthManager#update_user method to change keys and admin status. * Refactor the auth_unittest to not care about test order * Expose the update_user method via nova-manage
* Updates the fix-iptables branch with a number of bugfixes
* Fixes reversed arguments in nova-manage project environment
* Makes sure that multiple copies of nova-network don't create multiple copies of the same NetworkIndex
* Fix a few errors in api calls related to mistyped database methods for floating_ips: specifically describe addresses and associate address
* Merged Termie's branch that starts tornado removal and fixed rpc test cases for twisted. Nothing is testing the Eventlet version of rpc.call though yet
* Adds bpython support to nova-manage shell, because it is super sexy
* Adds a disabled flag to service model and check for it when scheduling instances and volumes
* Adds bpython support to nova-manage shell, because it is super sexy
* Added random ec2 style id's for volumes and instances
* fix security group revoke
* Fixed tests
* Removed str_id from FixedIp references
* missed a comma
* improved commenting
* Fault support
* fix flag defaults
* typo s/boo/bool
* merged and removed duplicated methods
* fixed merge conflicts
* removed extra code that slipped in from a test branch
* Fixed name property on instance model
* Implementation of the Rackspace servers API controller
* Added checks for uniqueness for ec2 id
* fix test for editable image
* Add authorization info for cloud endpoints
* Remove TODO, since apparently newer boto doesn't die on extra fields
* add disabled column to services and check for it in scheduler
* Hook the AuthManager#modify_user method into nova-manage commands
* Refactored adminclient to support multiple regions
* merged network-lease-fix
* merged floating-ips
* move default group creation to api
* Implemented random instance and volume strings for ec2 api
* Adds --force option to run_tests.sh to clear virtualenv. Useful when dependencies change
* merge from trunk
* Instance & Image renaming fixes
* merge from gundlach
* Testing testing testing
* get rid of network indexes and make networks into a pool
* Add Serializer.deserialize(xml_or_json_string)
* merged trunk
* return a value if possible from export_device_create_safe
* merged floating-ip-by-project
* merged network-lease-fix
* merged trunk
* Stop trying to install nova-api-new (it's gone). Install nova-scheduler
* Call out to 'sudo kill' instead of using os.kill. dnsmasq runs as root or nobody, nova may or may not be running as root, so os.kill won't work
* Make sure we also start dnsmasq on startup if we're managing networks
* Improve unit tests for network filtering. It now tracks recursive filter dependencies, so even if we change the filter layering, it still correctly checks for the presence of the arp, mac, and ip spoofing filters
* Make sure arguments to string format are in the correct order
* Make the incoming blocking rules take precedence over the output accept rules
* db api call to get instances by user and user checking in each of the server actions
* More cleanup, backup_schedules controller, server details and the beginnings of the servers action route
* This is getting ridiculous
* Power state mapping
* Set priority of security group rules to 300 to make sure they override the defaults
* Recreate ensure_security_group_filter. Needed for refresh
* Clean up nwfilter code. Move our filters into the ipv4 chain
* If neither a security group nor a cidr has been passed, assume cidr=0.0.0.0/0
* More re-work around the ORM changes and testing
* Support content type detection in serializer
* If an instance never got scheduled for whatever reason, its host will turn up as None. Filter those out to make sure refresh works
* Only call _on_set_network_host on nova-network hosts
* Allow DHCP requests through, pass the IP of the gateway as the dhcp server
* Add a flag that specifies where to find nova-dhcpbridge
* Ensure dnsmasq can read updates to dnsmasq conffile
* Set up network at manager instantiation time to ensure we're ready to handle the networks we're already supposed to handle
* Add db api methods for retrieving the networks for which a host is the designated network host
* Apply IP configuration to bridge regardless of whether it existed before. This fixes a race condition on hosts running both compute and network where, if compute got there first, it would set up the bridge, but not do IP configuration (because that's meant to happen on the network host), and when network came around, it would see the interface already there and not configure it further
* Removed extra logging from debugging
* reorganize iptables clear and make sure use_nova_chains is a boolean
* allow in and out for network and compute hosts
* Modification of test stubbing to match new domain requirements for the router, and removal of the unnecessary rackspace base controller
* Minor changes to be committed so trunk can be merged in
* disable output drop for the moment because it is too restrictive
* add forwarding ACCEPT for outgoing packets on compute host
* fix a few missed calls to _confirm_rule and 80 char issues
* allow mgmt ip access to api
* flush the nova chains
* Test the AuthManager interface explicitly, in case the user/project wrappers fail or change at some point. Those interfaces should be tested on their own
* Update auth manager to have an update_user method and better tests
* add a reset command
* Merged Termie's branch and fixed rpc test cases for twisted. Nothing is testing the Eventlet version of rpc.call though yet
* improved the shell script for iptables
* Finished making admin client work for multi-region
* Install nova-scheduler
* nova-api-new is no more. Don't attempt to install it
* Add multi region support for adminclient
* Merging in changes from rs_auth, since I needed something modern to develop on while waiting for Hudson to right itself
* whatever
* Put EC2 API -> eventlet back into trunk, fixing the bits that I missed when I put it into trunk on 9/21
* Apply vish's patch
* Applied vish's fixes
* Implementation of Rackspace token based authentication for the Openstack API
* fixed a few missing params from iptables rules
* removed extra line in manage
* made use of nova_ chains a flag and fixed a few typos
* put setup_iptables in the right dir
* Fixed rpc consumer to use unique return connection to prevent overlap. This could be reworked to share a connection, but it should be a wait operation and not a fast poll like it was before. We could also keep a cache of opened connections to be used between requests
* fixed a couple of typos
* Re-added the ramdisk line I accidentally removed
* Added a primary_key to AuthToken, fixed some unbound variables, and now all unit tests pass
* Missed the model include, and fixed a broken test after the merge
* Some more refactoring and another unit test
* Refactored the auth branch based on review feedback
* Replaced the existing Rackspace Auth Mechanism with one that mirrors the implementation in the design document
* Merged gundlach's branch
* renamed ipchains to iptables
* merged trunk
* Fixed cloudpipe lib init
* merged fix-iptables
* When calculating timedeltas make sure both timestamps are in UTC. For people ahead of UTC, it makes the scheduler unit tests pass. For people behind UTC, it makes their services time out after 60 seconds without a heart beat rather than X hours and 60 seconds without a heart beat (where X is the number of hours they're behind UTC)
* Spot-fix endpoint reference
* Wrap WSGI container in server.serve to make it properly handle command line arguments as well as daemonise properly. Moved api and wsgi imports in the main() function to delay their inclusion until after python-daemon has closed all the file descriptors. Without this, eventlet's epoll fd gets opened before daemonize is called and thus its fd gets closed leading to very, very, very confusing errors
* Apply vish's patch
* Added FLAGS.FAKE_subdomain letting you manually set the subdomain for testing on localhost
* Address Vishy's comments
* All timestamps should be in UTC. Without this patch, the scheduler unit tests fail for anyone sufficiently East of Greenwich
* Compare project_id to '' using == (equality) rather than 'is' (identity). This is needed because '' isn't the same as u''
* Various loose ends for endpoint and tornado removal cleanup, including cloudpipe API addition, rpc.call() cleanup by removing tornado ioloop, and fixing bin/* programs. Tornado still exists as part of some test cases and those should be reworked to not require it
* Re-add root and metadata request handlers to EC2 API
* Re-added the ramdisk line I accidentally removed
* Soren's patch to fix part of ec2
* Add user display fields to instances & volumes
* Responding to eday's feedback -- make a clearer inner wsgi app
* Added a primary_key to AuthToken, fixed some unbound variables, and now all unit tests pass
* merge from trunk
* typo in instance_get
* typo in instance_get
* User updatable name & description for images
* merged trunk and fixed errors
* cleaned up exception handling for fixed_ip_get
* Added server index and detail differentiation
* merged trunk
* typo s/an/a
* Reenable access_unittest now that it works with new rbac
* Rewrite rbac tests to use Authorizer middleware
* Missed the model include, and fixed a broken test after the merge
* Delete nova.endpoint module, which used Tornado to serve up the Amazon EC2 API. Replace it with nova.api.ec2 module, which serves up the same API via a WSGI app in Eventlet. Convert relevant unit tests from Twisted to eventlet
* Remove eventlet test, now that eventlet 0.9.10 has indeed been replaced by 0.9.12 per mtaylor
* In desperation, I'm raising eventlet.__version__ so I can see why the trunk tests are failing
* merged trunk
* bpython is amazing
* Fix quota unittest and don't run rbac unit tests for the moment
* merged trunk
* Some more refactoring and another unit test
* Implements quotas with overrides for instances, volumes, and floating ips
* Renamed cc_ip flag to cc_host
* Moves keypairs out of ldap and into the common datastore
* Fixes server error on get metadata when instances are started without keypairs
* allows api servers to have a list of regions, allowing multi-cluster support if you have a shared image store and user database
* Don't use something the shell will escape as a separator. | is now =
* Added modify project command to auth manager to allow changing of project manager and description
* merged trunk
* merged trunk
* Refactored the auth branch based on review feedback
* Whitespace fixes
* Support querying version list, per the RS API spec. Fixes bug 613117
* Undo run_tests.py modification in the hopes of making this merge
* Add a RateLimitingMiddleware to the Rackspace API, implementing the rate limits as defined by the current Cloud Servers spec. The Middleware can do rate counting in memory, or (for deployments that have more than one API Server) can offload to a rate limiting service
* Use assertRaises
* A small fix to the install_venv program to allow us to run it on the tarmac box as part of the tarmac build
* Removes second copy of ProcessExecutionError that crept in during a bad merge
* Adds an omitted yield in compute manager detach_volume
* Move the code that extracts the console output into the virt drivers. Move the code that formats it up into the API layer. Add support for Xen console
* Add Xen template and use it by default if libvirt_type=xen
* added rescue mode support and made reboot work from any state
* Adds timing fields to instances and volumes to track launch times and schedule times
* Fixes two errors in cloud.py in the nova_orm branch: a) self.network is actually called network_manager b) the logic for describe-instances check on is_admin was reversed
* Adds timing fields to instances and volumes to track launch times and schedule times
* updated docstring
* add in a few comments
* s/\t/ /g, and add some comments
* add in support for ajaxterm console access
* add security and session timeout to ajaxterm
* initial commit of ajaxterm
* Replaced the existing Rackspace Auth Mechanism with one that mirrors the implementation in the design document
* Whitespace fixes
* Added missing masquerade rules
* Fix things not quite merged perfectly -- all tests now pass
* Better error message on the failure of a spawned process, and it's a ProcessExecutionException irrespective of how the process is run (twisted or not)
* Added iptables host initial configuration
* Added iptables host initial configuration
* Proposing merge to get feedback on orm refactoring. I am very interested in feedback to all of these changes
* Support querying version list
* Add support for middleware proxying to a ratelimiting.WSGIApp, for deployments that use more than one API Server and thus can't store ratelimiting counters in memory
* Test the WSGIApp
* RateLimitingMiddleware tests
* Address a couple of the TODO's: We now have half-decent input validation for AuthorizeSecurityGroupIngress and RevokeDitto
* Clean up use of ORM to remove the need for scoped_session
* Roll back my slightly over-zealous clean up work
* More ORM object cleanup
* Clean up use of objects coming out of the ORM
* RateLimitingMiddleware
* Add ratelimiting package into Nova. After Austin it'll be pulled out into PyPI
* When destroying a VM using the XenAPI backend, if the VM is still running (the usual case) the destroy fails. It needs to be powered-off first
* Leave out the network setting from the interfaces template. It does not get passed anymore
* Network model has network_str attribute
* Cast process input to a str. It must not be unicode, but stuff that comes out of the database might very well be unicode, so using such a value in a template makes the whole thing unicode
* Make refresh_security_groups play well with inlineCallbacks
* Fix up rule generation. It turns out nwfilter gets very, very wonky indeed if you mix rules and rules. Setting a TCP rule adds an early rule to ebtables that ends up overriding the rules which are last in that table
* Add a bunch of TODO's to the API implementation
* Multiple security group support
* Remove power state constants that have ended up duplicated following a bad merge. They were moved from nova.compute.node.Instance into nova.compute.power_state at the same time that Instance was moved into nova.compute.service. We've ended up with these constants in both places
* now we can run files - thanks vish
* Move vol.destroy() call out of the _check method in test_multiple_volume_race_condition test and into a callback of the DeferredList. This should fix the intermittent failure of that test. I /think/ test_too_many_volumes's failure was caused by test_multiple_volume_race_condition failure, since I have not been able to reproduce its failure after fixing this one
* Adds 'shell run' to nova manage, which spawns a shell with flags properly imported
* Finish pulling S3ImageService out of this mergeprop
* Pull S3ImageService out of this mergeprop
* Correctly pass ip_address to templates
* Fix call to listNWFilters
* (Untested) Make changes to security group rules propagate to the relevant compute nodes
* Filters all get defined when running an instance
* added missing yield in detach_volume
* multiple network controllers will not create duplicate indexes
* renamed _get_quota to get_quota and moved int(size) into quota.py
* add a shell to nova-manage, which respects flags (taken from django)
* Move vol.destroy() call out of the _check method in test_multiple_volume_race_condition test and into a callback of the DeferredList. This should fix the intermittent failure of that test. I /think/ test_too_many_volumes's failure was caused by test_multiple_volume_race_condition failure, since I have not been able to reproduce its failure after fixing this one
* removed second copy of ProcessExecutionError
* move the warnings about leasing ips
* simplified query
* missed a space
* set leased = 0 as well on disassociate update
* speed up the query and make sure allocated is false
* workaround for mysql select in update
* Periodic callback for services and managers. Added code to automatically disassociate stale ip addresses
* fixed typo
* flag for retries on volume commands
* auto all and start all exceptions should be ignored
* generalized retry into try_execute
* more error handling in volume driver code
* handle exceptions thrown by vblade stop and vblade destroy
* merged trunk
* deleting is set by cloud
* re-added missing volume update
* Integrity error is in a different exc file
* allow multiple volumes to run ensure_blades without creating duplicates
* fixed name for unique constraint
* export devices unique
* merged instance time and added better concurrency
* make fixed_ip_get_by_address return the instance as well so we don't run into concurrency issues where it is disassociated in between
* disassociate floating is supposed to take floating_address
* speed up generation of dhcp_hosts and don't run into None errors if instance is deleted
* don't allocate the same floating ip multiple times
* don't allow deletion or attachment of volume unless it is available
* fixed reference to misnamed method
* manage command for project quotas
* merged trunk
* implement floating_ip_get_all_by_project and renamed db methods that get more than one to get_all_by instead of get_by
* fixed reversed args in nova-manage project environment
* merged scheduler
* fix instance time
* move volume to the scheduler
* tests for volumes work
* update query and test
* merged quotas
* use gigabytes and cores
* use a string version of key name when constructing mpi dict because None doesn't work well in lookup
* db not self.db
* Security Group API layer cleanup
* merged trunk
* added terminated_at to volume and moved setting of terminated_at into cloud
* remerged scheduler
* merged trunk
* merged trunk
* merged trunk
* merged trunk
* fixed reversed admin logic on describe instances
* fixed typo network => network_manager in cloud.py
* fixed old key reference and made keypair name consistent -> key_pair
* typo fixes, add flag to nova-dhcpbridge
* fixed tests, added a flag for updating dhcp on disassociate
* simplified network instance association
* fix network association issue
* merged trunk
* improved network error case handling for fixed ips
* it is called regionEndpoint, and use pipe as a separator
* move keypair generation out of auth and fix tests
* Fixed manager_user reference in create_project
* Finished security group / project refactor
* delete keypairs when a user is deleted
* remove keypair from driver
* moved keypairs to db using the same interface
* multi-region flag for describe regions
* make api error messages more readable
* Refactored the security group api to support projects
* set dnsName on describe
* merged orm and put instance in scheduling state
* just warn if an ip was already deallocated
* fix mpi 500 on fixed ip
* hostname should be string id
* dhcpbridge needed host instead of node name
* add a simple iterator to NovaBase to support converting into dictionary
* Adjust a few things to make the unit tests happy again
* First pass of nwfilter based security group implementation. It is not where it is supposed to be and it does not actually do anything yet
* couple more errors in metadata
* typo in metadata call
* fixed messed up call in metadata
* added modify project command to allow project manager and description to be updated
* Change "exn" to "exc" to fit with the common style
* Create and delete security groups works. Adding and revoking rules works. DescribeSecurityGroups returns the groups and rules. So, the API seems to be done. Yay
* merged describe_speed
* merged scheduler
* set host when item is scheduled
* remove print statements
* removed extra quotes around instance_type
* don't pass topic into schedule_run_instance
* added scheduled_at to instances and volumes
* quotas working and tests passing
* address test almost works
* quota tests
* merged orm
* fix unittest
* merged orm
* fix rare condition where describe is called before instance has an ip
* merged orm
* make the db creates return refs instead of ids
* add missing files for quota
* kwargs don't work if you prepend an underscore
* merged orm, added database methods for getting volume and ip data for projects
* database support for quotas
* Correct style issues brought up in termie's review
* mocking out quotas
* don't need to pass instance_id to network on associate
* floating_address is the name for the cast
* merged support code from orm branch
* faster describe_addresses
* added floating ip commands and launched_at, terminated_at, deleted_at for objects
* merged orm
* solution that works with this version
* fix describe addresses
* remove extraneous get_host calls that were requiring an extra db trip
* pass volume['id'] instead of string id to delete volume
* fix volume delete issue and volume hostname display
* fix logging for scheduler to properly display method name
* fixed logic in set_state code to stop endless loops
* Authorize and Revoke access now works
* list command for floating ips
* merged describe speed
* merged orm
* floating ip commands
* removed extraneous rollback
* speed up describe by loading fixed and floating ips
* AuthorizeSecurityGroupIngress now works
* switch to using utcnow
* Alright, first hole poked all the way through. We can now create security groups and read them back
* don't fail in db if context isn't a dict, since we're still using a class based context in the api
* logging for backend is now info instead of error
* merged orm
* merged orm
* set state everywhere
* put soren's fancy path code in scheduler bin as well
* missing deleted ref
* merged orm
* merged orm
* consistent naming for instance_set_state
* Tests turn things into inlineCallbacks
* Missed an instance of attach_to_tornado
* Remove tornado-related code from almost everything
* It's annoying and confusing to have to set PYTHONPATH to point to your development tree before you run any of the scripts
* deleted typo
* merged orm
* merged orm
* fixed missing paren
* merge orm
* make timestamps for instances and volumes, includes additions to get deleted objects from db using deleted flag
* merged orm
* remove end of line slashes from models.py
* Make the scripts in bin/ detect if they're being run from a bzr checkout or an extracted release tarball or whatever and adjust PYTHONPATH accordingly
* merged orm
* merged orm branch
* set state moved to db layer
* updated to the new orm code
* changed a few unused context to _context
* a few formatting fixes and moved exception
* fixed a few bugs in volume handling
* merged trunk
* Last of cleanup, including removing fake_storage flag
* more fixes from code review
* review db code cleanup
* review cleanup for compute manager
* first pass at cleanup rackspace/servers.py
* dhcpbridge fixes from review
* more fixes to session handling
* few typos in updates
* don't log all sql statements
* one more whitespace fix
* whitespace fixes
* fix for getting reference on service update
* clean up of session handling
* New version of eventlet handles Twisted & eventlet running at the same time
* fix docstrings and formatting
* Oops, APIRequestContext's signature has changed
* merged orm
* fix floating_ip to follow standard create pattern
* Add stubbed out handler for AuthorizeSecurityGroupIngress EC2 API call
* merged orm_deux
* Merged trunk
* Add a clean-traffic filterref to the libvirt templates to prevent spoofing and snooping attacks from the guests
* Lots of fixes to make the nova commands work properly and make datamodel work with mysql properly
* Bug #630640: Duplicated power state constants
* Bug #630636: XenAPI VM destroy fails when the VM is still running
* removed extra equals
* Just a couple of UML-only fixes: * Due to an issue with libvirt, we need to chown the disk image to root. * Just point UML's console directly at a file, and don't bother with the pty. It was only used for debugging
* removed extra file and updated sql note
* merged fixed format instances from orm
* fixed up format_instances
* merged server.py change from orm branch
* reverting accidental search/replace change to server.py
* merged orm
* removed model from nova-manage
* merged orm branch
* removed references to compute.model
* send ultimate topic in to scheduler
* more scheduler tests
* test for too many instances work
* merged trunk
* fix service unit tests
* removed dangling files
* merged orm branch
* merged trunk and cleaned up test
* renamed daemon to service and update db on create and destroy
* pass all extra args from service to manager
* fix test to specify host
* inject host into manager
* Servers API remodeling and serialization handling
* Move nova.endpoint.images to api.ec2 and delete nova.endpoint
* Cloud tests pass
* OMG got api_unittests to pass
* send requests to the main API instead of to the EC2 subset -- so that it can parse out the '/services/' prefix. Also, oops, match on path_info instead of path like we're supposed to
* Remove unused APIRequestContext.handler
* Use port that boto expects
* merged orm branch
* scheduler + unittests
* removed underscores from used context
* updated models a bit and removed service classes
* Small typos, plus rework api_unittest to use WSGI instead of Tornado
* Replace an if/else with a dict lookup to a factory method
* Nurrr
* Abstractified generalization mechanism
* Revert the changes to the qemu libvirt template and make the appropriate changes in the UML template where they belong
* Create console.log ahead of time. This ensures that the user running nova-compute maintains read privileges
* This improves the changelog generated as part of "setup.py sdist". If you look at it now, it says that Tarmac has done everything and every little commit is listed. With this patch, it only logs the "top-most" commit and credits the author rather than the committer
* Fix simple errors to the point where we can run the tests [but not pass]
* notes -- conversion 'complete' except now the unit tests won't work and surely i have bugs :)
* Moved API tests into a sub-folder of the tests/ and added stubbed-out test declarations to mirror existing API tickets
* Delete rbac.py, moving @rbac decorator knowledge into api.ec2.Authorizer WSGI middleware
* Break Router() into Router() and Executor(), and put Authorizer() (currently a stub) in between them
* Return error Responses properly, and don't muck with req.params -- make a copy instead
* merged orm branch
* pylint clean of manager and service
* pylint cleanup of db classes
* rename node_name to host
* merged trunk
* Call getInfo() instead of getVersion() on the libvirt connection object. virConnectGetVersion was not exposed properly in the python bindings until quite recently, so this makes us rather more backwards compatible
* Better log formatter for Nova. It's just like gnuchangelog, but logs the author rather than the committer
* Remove all Twisted defer references from cloud.py
* Remove inlineCallbacks and yield from cloud.py, as eventlet doesn't need it
* Move cloudcontroller and admincontroller into new api
* Adjust setup.py to match nova-rsapi -> nova-api-new rename
* small import cleanup
* Get rid of some convoluted exception handling that we don't need in eventlet
* First steps in reworking EC2 APIRequestHandler into separate Authenticate() and Router() WSGI apps
* Call getInfo() instead of getVersion() on the libvirt connection object. virConnectGetVersion was not exposed properly in the python bindings until quite recently, so this makes us rather more backwards compatible
* Fix up setup.py to match nova-rsapi -> nova-api-new rename
* a little more cleanup in compute
* pylint cleanup of tests
* add missing manager classes
* volume cleanup
* more cleanup and pylint fixes
* more pep8
* more pep8
* pep8 cleanup
* add sqlalchemy to pip requires
* merged trunk, fixed a couple errors
* Delete __init__.py in prep for turning apirequesthandler into __init__
* Move APIRequestContext into its own file
* Move APIRequest into its own file
* run and terminate work
* Move class into its own file
* fix daemon get
* Notes for converting Tornado to Eventlet
* undo change to get_my_ip
* all tests pass again
* rollback on exit
* merged session from devin
* Added session.py
* Removed get_backup_schedules from the image test
* merged devin's sqlalchemy changes
* Making tests pass
* Reconnect to libvirt on broken connection
* pylint fixes for /nova/virt/connection.py
* pylint fixes for nova/objectstore/handler.py
* ip addresses work now
* Add Flavors controller supporting
* Resolve conflicts and merge trunk
* Detect if libvirt connection has been broken and reestablish it
* instance runs
* Dead code removal
* remove creation of volume groups on boot
* tests pass
* Making tests pass
* Making tests pass
* Refactored orm to support atomic actions
* moved network code into business layer
* move None context up into cloud
* split volume into service/manager/driver
* moved models.py
* removed the last few references to models.py
* chown disk images to root for uml. Due to libvirt dropping CAP_DAC_OVERRIDE for uml, root needs to have explicit access to the disk images for stuff to work
* Create console.log ahead of time. This ensures that the user running nova-compute maintains read privileges
* fixed service mox test cases
* Renamed test.py and moved a test as per merge proposal feedback
* fixed volume unit tests
* work endpoint/images.py into an S3ImageService. The translation isn't perfect, but it's a start
* get to look like trunk
* Set UML guests to use a file as their console. This halfway fixes get-console-output for them
* network tests pass again
* Fixes issue with the same ip being assigned to multiple instances
* merged trunk and fixed tests
* Support GET //detail
* Moved API tests into a sub-folder of the tests/ and added stubbed-out test declarations to mirror existing API tickets
* Turn imageid translator into general translator for rackspace api ids
* move network_type flag so it is accessible in data layer
* Use compute.instance_types for flavor data instead of a FlavorService
* more data layer breakouts, lots of fixes to cloud.py
* merged jesse
* Initial support for Rackspace API /image requests. They will eventually be backed by Glance
* Fix a pep8 violation
* improve the volume export - sleep & check export
* missing context and move volume_update to before the export
* update volume create code
* A few small changes to install_venv to let venv builds work on the tarmac box
* small tweaks
* move create volume to work like instances
* work towards volumes using db layer
* merge vish
* fix setup compute network
* merge vish
* merge vish
* use vlan for network type since it works
* merge vish
* more work on getting running instances to work
* merge vish
* more cleanup
* Flavors work
* pep8
* Delete unused directory
* Move imageservice to its own directory
* getting run/terminate/describe to work
* OK, break out ternary operator (good to know that it slowed you down to read it)
* Style fixes
* fix some errors with networking rules
* typo in release_ip
* run instances works
* Ensure that --gid and --uid options work for both twisted and non-twisted daemons
* Fixes an error in setup_compute_network that was causing network setup to fail
* add back in the needed calls for dhcpbridge
* removed old imports and moved flags
* merge and fixes to creates to all return id
* bunch more fixes
* moving network code and fixing run_instances
* jesse's run_instances changes
* fix daemons and move network code
* Rework virt.xenapi's concurrency model. There were many places where we were inadvertently blocking the reactor thread. The reworking puts all calls to XenAPI on background threads, so that they won't block the reactor thread
* merged trunk and fixed merge errors
* Refactored network model access into data abstraction layer
* Get the output formatting correct
* Typo
* Don't serialize in Controller subclass now that wsgi.Controller handles it for us
* Move serialize() to wsgi.Controller so __call__ can serialize() action return values if they are dicts
* Serialize properly
* Support opaque id to rs int id as well
* License
* Moves auth.manager to the data layer
* Add db abstraction and unittests for service.py
* Clarified what the 'Mapped device not found' exception really means. Fixed TODO. Some formatting to be closer to 80 chars
* Added missing "self."
* Alphabetize the methods in the db layer
* fix concurrency issue with multiple instances getting the same ip
* small fixes to network
* Fixed typo
* Better error message on subprocess spawn fail, and it's a ProcessExecutionException irrespective of how the process is run
* Check exit codes when spawning processes by default. Also pass --fail to curl so that it sets exit code when download fails
* PEP8/pylint cleanup in bin and nova/auth
* move volume code into datalayer and cleanup
* Complete the Image API against a LocalImageService until Glance's API exists (at which point we'll make a GlanceImageService and make the choice of ImageService plugin configurable.)
* Added unit tests for WSGI helpers and base WSGI API
* merged termie's abstractions
* Move deferredToThread into utils, as suggested by termie
* Remove whitespace to match style guide
* Data abstraction for compute service
* this file isn't being used
* Cleaned up pep8/pylint style issues in nova/auth. There are still a few pylint warnings in manager.py, but the patch is already fairly large
* More pylintrc updates
* fix report state
* Removed old cloud_topic queue setup, it is no longer used
* last few test fixes
* More bin/ pep8/pylint cleanup
* fixing more network issues
* Added '-' as possible character in module rgx
* Merged with trunk
* Updated the tests to use webob, removed the 'called' thing and just use return values instead
* Fix unit test bug this uncovered: don't release_ip that we haven't got from issue_ip
* Fix to better reflect (my believed intent) as to the meaning of error_ok (ignore stderr vs accept failure)
* Merged with trunk
* use with_lockmode for concurrency issues
* First in a series of patches to port the API from Tornado to WSGI. Also includes a few small style fixes in the new API code
* Pull in ~eday/nova/api-port
* Merged trunk
* Merged api-port into api-port-1
* Since pylint=0.19 is our version, force everyone to use the disable-msg syntax
* Missed one
* Removed the 'controllers' directory under 'rackspace' due to full class name redundancy
* pep8 typo
* Changed our minds: keep pylint equal to Ubuntu Lucid version, and use disable-msg throughout
* Fixed typo
* Image API work
* Newest pylint supports 'disable=', not 'disable-msg='
* Fix pep8 violation
* tests pass
* network tests pass
* Added unittests for wsgi and api
* almost there
* progress on tests passing
* remove references to deleted files so tests run
* fix vpn access for auth
* merged trunk
* removed extra files
* network datamodel code
* In an effort to keep new and old API code separate, I've created a nova.api to put all new API code under. This means nova.endpoint only contains the old Tornado implementation. I also cleaned up a few pep8 and other style nits in the new API code
* No longer installs a virtualenv automatically and adds new options to bypass the interactive prompt
* Stylistic improvements
* Add documentation to spawn, reboot, and destroy stating that those functions should return Deferreds. Update the fake implementations to do so (the libvirt ones already do, and making the xenapi ones do so is the subject of a current merge request)
* start with model code
* clean up linux_net
* merged refresh from sleepsonthefloor
* See description of change... what's the difference between that message and this message again?
* Move eventlet-using class out of endpoint/__init__.py into its own submodule, so that twisted-related code using endpoint.[other stuff] wouldn't run eventlet and make unit tests throw crazy errors about eventlet 0.9.10 not playing nicely with twisted
* Remove duplicate definition of flag
* The file that I create automates this step in http://wiki.openstack.org/InstallationNova20100729 :
* Simpler installation, and, can run install_venv from anywhere instead of just from checkout root
* Use the argument handler specified by twistd, if any
* Fixes quite a few style issues across the entire nova codebase bringing it much closer to the guide described in HACKING
* merge from trunk
* merged trunk
* merged trunk and fixed conflicts
* Fixes issues with allocation and deallocation of fixed and elastic addresses
* Added documentation for the nova.virt connection interface, a note about the need to chmod the objectstore script, and a reference for the XenAPI module
* Make individual disables for R0201 instead of file-level
* All controller actions receive a 'req' parameter containing the webob Request
* improve compatibility with ec2 clients
* PEP8 and name corrections
* rather comprehensive style fixes
* fix launching and describing instances to work with sqlalchemy
* Add new libvirt_type option "uml" for user-mode-linux.. This switches the libvirt URI to uml:///system and uses a different template for the libvirt xml
* typos
* don't try to create and destroy lvs in fake mode
* refactoring volume and some cleanup in model and compute
* Add documentation to spawn, reboot, and destroy stating that those functions should return Deferreds. Update the fake implementations to do so (the libvirt ones already do, and making the xenapi ones do so is the subject of a current merge request)
* Rework virt.xenapi's concurrency model. There were many places where we were inadvertently blocking the reactor thread. The reworking puts all calls to XenAPI on background threads, so that they won't block the reactor thread
* add refresh on model
* merge in latest from vish
* Catches and logs exceptions for rpc calls and raises a RemoteError exception on the caller side
* Removes requirement of internet connectivity to run api server
* Fixed path to keys directory
* Update cloud_unittest to match renamed internal function
* Removes the workaround for syslog-ng of removing newlines
* Fixes bug lp:616312 by reversing the order of args in nova-manage when it calls AuthManager.get_credentials
* merged trunk
* Sets a hostname for instances that properly resolves and cleans up network classes
* merged fix-hostname and fixed conflict
* Implemented admin client / admin api for fetching user roles
* Improves pep8 compliance and pylint score in network code
* Bug #617776: DescribeImagesResponse contains type element, when it should be called imageType
* Bug 617913: RunInstances response doesn't meet EC2 specification
* remove more direct session interactions
* refactor to have base helper class with shared session and engine
* ComputeConnectionTestCase is almost working again
* more work on trying to get compute tests passing
* re-add redis clearing
* make the fake-ldap system work again
* got run_tests.py to run (with many failed tests)
* Bug #617776: DescribeImagesResponse contains type element, when it should be called imageType
* initial commit for orm based models
* Add a few unit tests for libvirt_conn
* Move interfaces template into virt/, too
* Refactor LibvirtConnection a little bit for easier testing
* Remove extra "uml" from os.type
* Fixes out of order arguments in get_credentials
* pep8 and pylint cleanup
* Support JSON and XML in Serializer
* Added note regarding dependency upon XenAPI.py
* Added documentation to the nova.virt interface
* make rpc.call propagate exception info. Includes tests
* Undo the changes to cloud.py that somehow diverged from trunk
* Mergeprop cleanup
* Mergeprop cleanup
* Make WSGI routing support routing to WSGI apps or to controller+action
* Make --libvirt_type=uml do the right thing: Sets the correct libvirt URI and use a special template for the XML
* renamed missed reference to Address
* die classmethod
* merged fix-dhcpbridge
* remove class method
* typo allocated should be released
* rename address stuff to avoid name collision and make the .all() iterator work again
* keep track of leasing state so we can delete ips that didn't ever get leased
* remove syslog-ng workaround
* Merged with trunk
* Implement the same fix as lp:~vishvananda/nova/fix-curl-project, but for virt.xenapi
* Fix exception in get_info
* Move libvirt.xml template into nova/virt
* Parameterise libvirt URI
* Merged with trunk
* fix dhcpbridge issues
* Adapts the run_tests.sh script to allow interactive or automated creation of virtualenv, or to run tests outside of a virtualenv
* Prototype implementation of Servers controller
* Working router that can target WSGI middleware or a standard controller+action
* Added a xapi plugin that can pull images from nova-objectstore, and use that to get a disk, kernel, and ramdisk for the VM
* Serializing in middleware after all... by tying to the router. maybe a good idea?
* Merged with trunk
* Actually pass in hostname and create a proper model for data in network code
* Improved roles functionality (listing & improved test coverage)
* support a hostname that can be looked up
* updated virtualenv to add eventlet, which is now a requirement
* Changes the run_tests.sh and /tools/install_venv.py scripts to be more user-friendly and not depend on PIP while not in the virtual environment
* Fixed admin api for user roles
* Merged list_roles
* fix spacing issue in ldapdriver
* Fixes bug lp:615857 by changing the name of the zip export method in nova-manage
* Wired up admin api for user roles
* change get_roles to have a flag for project_roles or not. Don't show 'projectmanager' in list of roles
* Throw exceptions for illegal roles on role add
* Adds get_roles commands to manager and driver classes
* more pylint fixes
* Implement VIF creation in the xenapi module
* lots more pylint fixes
* work on a router that works with wsgi and non-wsgi routing
* Pylint clean of vpn.py
* Further pylint cleanup
* Oops, we need eventlet as well
* pylint cleanup
* pep8 cleanup
* merged trunk
* pylint fixes for nova/objectstore/handler.py
* rename create_zip to zipfile so lazy match works
* Quick fix on location of printouts when trying to install virtualenv
* Changes the run_tests.sh and /tools/install_venv.py scripts to be more user-friendly and not depend on PIP while not in the virtual environment. Running run_tests.sh should now just work out of the box on all systems supporting easy_install..
* 2 changes in doing PEP8 & Pylint cleaning: * adding pep8 and pylint to the PIP requirements files for Tools * light cleaning work (mostly formatting) on nova/endpoints/cloud.py
* More changes to volume to fix concurrency issues. Also testing updates
* Merge
* Merged nova-tests-apitest into pylint
* Merged nova-virt-connection into nova-tests-apitest
* Pylint fixes for /nova/tests/api_unittest.py
* pylint fixes for nova/virt/connection.py
* merged trunk, fixed an error with releasing ip
* fix releasing to work properly
* Add some useful features to our flags
* pylint fixes for /nova/test.py
* Fixes pylint issues in /nova/server.py
* importing merges from hudson branch
* fixing - removing unused imports per Eric & Jay review
* initial cleanup of tests for network
* Run correctly even if called while in tools/ directory, as 'python install_venv.py'
* This branch builds off of Todd and Michael's API branches to rework the Rackspace API endpoint and WSGI layers
* separated scheduler types into own modules
* Fix up variable names instead of disabling pylint naming rule. Makes variables able to be a single letter in pylintrc
* Disables warning about TODO in code comments in pylintrc
* More pylint/pep8 cleanup, this time in bin/* files
* pylint fixes for nova/server.py
* remove duplicated report_state that exists in the base class. More pylint fixes
* Fixed docstring format per Jay's review
* pylint fixes for /nova/test.py
* Move the xenapi top level directory under plugins, as suggested by Jay Pipes
* Pull trunk merge through lp:~ewanmellor/nova/add-contains
* Pull trunk merge through lp:~ewanmellor/nova/xapi-plugin
* Merged with trunk again
* light cleanup - convention stuff mostly
* convention and variable naming cleanup for pylint/pep8
* Used new (clearer) flag names when calling processes
* Merged with trunk
* Greater compliance with pep8/pylint style checks
* removing what appears to be an unused try/except statement - nova.auth.manager.UserError doesn't exist in this codebase. Leftover? Something intended to be there but never added?
* variable name cleanup
* attempting some cleanup work
* adding pep8 and pylint for regular cleanup tasks
* Cleaned up pep8/pylint for bin/* files. I did not fix rsapi since this is already cleaned up in another branch
* Merged trunk
* Reworked WSGI helper module and converted rackspace API endpoint to use it
* Changed the network imports to use new network layout
* merged with trunk
* Change nova/virt/images.py's _fetch_local_image to accept 4 args, since fetch() tries to call it with that many
* Merged Todd and Michael's changes
* pep8 and pylint cleanups
* Some pylint and pep8 cleanups. Added a pylintrc file
* fix copyrights for new files, etc
* a few more commands were putting output on stderr. In general, exceptions on stderr output seems like a bad idea
* Moved Scheduler classes into scheduler.py. Created a way to specify scheduler class that the SchedulerService uses..
* Make network its own worker! This separates the network logic from the api server, allowing us to have multiple network controllers. There is a lot of stuff in networking that is ugly and should be modified with the datamodel changes. I've attempted not to mess with those things too much to keep the changeset small(ha!)
* Fixed instance model associations to host (node) and added association to ip
* Fixed write authorization for public images
* Fixes a bug where if a user was removed from a group after he had a role, he could not be re-added
* fix search/replace error
* merged trunk
* Start breaking out scheduler classes..
* WsgiStack class, eventletserver.serve. Trying to work toward a simple API that anyone can use to start an eventlet-based server composed of several WSGI apps
* Use webob to simplify wsgi middleware
* Made group membership check only search group instead of subtree. Roles in a group are removed when a user is removed from that group. Added test
* Fixes bug#614090 -- nova.virt.images._fetch_local_image being called with 4 args but only has 3
* Fixed image modification authorization, API cleanup
* fixed doc string
* compute topic for a node is compute.node not compute:node!
* almost there on random scheduler. not pushing to correct compute node topic, yet, apparently..
* First pass at making a file pass pep8 and pylint tests as an example * merged trunk * rename networkdata to vpn * remove extra line accidentally added * compute nodes should store total memory and disk space available for VMs * merged from trunk * added bin/nova-listinstances, which is mostly just a duplication of euca-describe-instances but doesn't go through the API * Fixes various concurrency issues in volume worker * Changed volumes to use a pool instead of globbing filesystem for concurrency reasons. Fixed broken tests * clean up nova-manage. If vpn data isn't set for user it skips it * method is called set\_network\_host * fixed circular reference and tests * renamed Vpn to NetworkData, moved the creation of data to inside network * fix rpc command line call, remove useless deferreds * fix error on terminate instance relating to elastic ip * Move the xenapi top level directory under plugins, as suggested by Jay Pipes * fixed tests, moved compute network config call, added notes, made inject option into a boolean * fix extra reference, method passing to network, various errors in elastic\_ips * use iteritems * reference to self.project instead of context.project + self.network\_model instead of network\_model * fixes in get public address and extra references to self.network * method should return network topic instead of network host * use deferreds in network * don't \_\_ module methods * inline commands use returnValue * it helps to save files BEFORE committing * Added note to README * Fixes the curl to pass in the project properly * Adds flag for libvirt type (hvm, qemu, etc) * Fix deprecation warning in AuthManager. \_\_new\_\_ isn't allowed to take args * created assocaition between project and host, modified commands to get host async, simplified calls to network * use get to retrieve node\_name from initial\_state * change network\_service flag to network\_type and don't take full class name * vblade commands randomly toss stuff into stderr, ignore it * delete instance doesn't fail if instances dir doesn't exist * Huge network refactor, Round I * Fixes boto imports to support both beta and older versions of boto * Get IP doesn't fail of you not connected to the intetnet * updated doc string and wrapper * add copyright headers * Fix exception in get\_info * Implement VIF creation * Define \_\_contains\_\_ on BasicModel, so that we can use "x in datamodel" * Fixed instance model associations to host (node) and added association to ip * Added a xapi plugin that can pull images from nova-objectstore, and use that to get a disk, kernel, and ramdisk for the VM. The VM actually boots! * Added project as parameter to admin client x509 zip file download * Turn the private \_image\_url(path) into a public image\_url(image). This will be used by virt.xenapi to instruct xapi as to which images to download * Merged in configurable libvirt\_uri, and fixes to raw disk images from the virtualbox branch * Fixed up some of the raw disk stuff that broke in the abstraction out of libvirt * Merged with raw disk image * Recognize 'magic' kernel value that means "don't use a kernel" - currently aki-00000000 * Fix Tests * Fixes nova volumes. The async commands yield properly. Simplified the call to create volume in cloud. 
Added some notes * another try on fix boto * use user.access instead of user.id * Fixes access key passing in curl statement * Accept a configurable libvirt\_uri * Added Cheetah to pip-requires * Removed duplicate toXml method * Merged with trunk * Merged with trunk, added note about suspicious behaviour * Added exit code checking to process.py (twisted process utils). A bit of class refactoring to make it work & cleaner. Also added some more instructive messages to install\_venv.py, because otherwise people that don't know what they're doing will install the wrong pip... i.e. I did :-) * Make nodaemon twistd processes log to stdout * Make nodaemon twistd processes log to stdout * use the right tag * flag for libvirt type * boto.s3 no longer imports connection, so we need to explicitly import it * Added project param to admin client zip download * boto.utils import doesn't work with new boto, import boto instead * fix imports in endpoint/images.py boto.s3 no longer imports connection, so we need to explicitly import it * Added --fail argument to curl invocations, so that HTTP request fails get surfaced as non-zero exit codes * Merged with trunk * Merged with trunk * strip out some useless imports * Add some useful features to our flags * Fixed pep8 in run\_test.py * Blank commit to get tarmac merge to pick up the tags * Fixed assertion "Someone released me too many times: too many tokens!" * Replace the second singleton unit test, lost during a merge * Merged with trunk to resolve merge conflicts * oops retry and add extra exception check * Fix deprecation warning in AuthManager. \_\_new\_\_ isn't allowed to take args * Added ChangeLog generation * Implemented admin api for rbac * Move the reading of API parameters above the call to \_get\_image, so that they have a chance to take effect * Move the reading of API parameters above the call to \_get\_image, so that they have a chance to take effect * Adds initial support for XenAPI (not yet finished) * More merges from trunk. Not everything came over the first time * Allow driver specification in AuthManager creation * pep8 * Fixed pep8 issues in setup.py - thanks redbo * Use default kernel and ramdisk properly by default * Adds optional user param to the get projects command * Ensures default redis keys are lowercase like they were in prior versions of the code * Pass in environment to dnsmasq properly * Releaed 0.9.0, now on 0.9.1 * Merged trunk * Added ChangeLog generation * Wired up get/add/remove project members * Merged lp:~vishvananda/nova/lp609749 * Removes logging when associating a model to something that isn't a model class * allow driver to be passed in to auth manager instead of depending solely on flag * make redis name default to lower case * Merged get-projects-by-user * Merged trunk * Fixed project api * Specify a filter by user for get projects * Create a model for storing session tokens * Fixed a typo from the the refactor of auth code * Makes ldap flags work again * bzr merge lp:nova/trunk * Tagged 0.9.0 and bumped the version to 0.9.1 * Silence logs when associated models aren't found. Also document methods used ofr associating things. 
And get rid of some duplicated code * Fix dnsmasq commands to pass in environment properly 0.9.0 ----- * Got the tree set for debian packaging * use default kernel and ramdisk and check for legal access * import ldapdriver for flags * Removed extra include * Added the gitignore files back in for the folks who are still on the git * Added a few more missing files to MANIFEST.in and added some placeholder files so that setup.py would carry the empty dir * Updated setup.py file to install stuff on a python setup.py install command * Removed gitignore files * Made run\_tests.sh executable * Put in a single MANIFEST.in file that takes care of things * Changed Makefile to shell script. The Makefile approach completely broke debhelper's ability to figure out that this was a python package * fixed typo from auth refactor * Add sdist make target to build the MANIFEST.in file * Removes debian dir from main tree. We'll add it back in in a different branch * Merged trunk * Wired up user:project auth calls * Bump version to 0.9.0 * Makes the compute and volume daemon workers use a common base class called Service. Adds a NetworkService in preparation for splitting out networking code. General cleanup and standardizarion of naming * fixed path to keys directory * Fixes Bug lp:610611: deleted project vlans are deleted from the datastore before they are reused * Add a 'sdist' make target. It first generates a MANIFEST.in based on what's in bzr, then calls python setup.py sdist * properly delete old vlans assigned to deleted projects * Remove debian/ from main branch * Bump version to 0.9.0. Change author to "OpenStack". Change author\_email to nova@lists.launchpad.net. Change url to http://www.openstack.org/. Change description to "cloud computing fabric controller" * Make "make test" detect whether to use virtualenv or not, thus making virtualenv optional * merged trunk * Makes the objectstore require authorization, checks it properly, and makes nova-compute provide it when fetching images * Automatically choose the correct type of test (virtualenv or system) * Ensure that boto's config has a "Boto" section before attempting to set a value in it * fixes buildpackage failing with dh\_install: missing files * removed old reference from nova-common.install and fixed spacing * Flag for SessionToken ttl setting * resolving conflict w/ merge, cleaning up virtenv setups * resolving conflict w/ merge, cleaning up virtenv setups * Fixes bug#610140. Thanks to Vish and Muharem for the patch * A few minor fixes to the virtualenv installer that were breaking on ubuntu * Give SessionToken an is\_expired method * Refactor of auth code * Fixes bug#610140. Thanks to Vish and Muharem for the patch * Share my updates to the Rackspace API * Fixes to the virtualenv installer * Ensure consistent use of filename for dhcp bridge flag file * renamed xxxservice to service * Began wiring up rbac admin api * fix auth\_driver flag to default to usable driver * Adds support scripts for installing deps into a virtualenv * In fact, it should delete them * Lookup should only not return expired tokens * Adds support scripts for installing deps into a virtualenv * default flag file full path * moved misnamed nova-dchp file * Make \_fetch\_s3\_image pass proper AWS Authorization headers so that image downloads work again * Make image downloads work again in S3 handler. 
Listing worked, but fetching the images failed because I wasn't clever enough to use twisted.web.static.File correctly * Move virtualenv installation out of the makefile * Expiry awareness for SessionToken * class based singleton for SharedPool * Basic standup of SessionToken model for shortlived auth tokens * merged trunk * merged trunk * Updated doc layout to the Sphinx two-dir layout * Replace hardcoded "nova" with FLAGS.control\_exchange * Add a simple set of tests for S3 API (using boto) * Fix references to image\_object. This caused an internal error when using euca-deregister * Set durable=False on TopicPublisher * Added missing import * Replace hardcoded example URL, username, and password with flags called xenapi\_connection\_url, xenapi\_connection\_username, xenapi\_connection\_password * Fix instance cleanup * Fix references to image\_object. This caused an internal error when using euca-deregister * removed unused assignment * More Cleanup of code * Fix references to get\_argument, fixing internal error when calling euca-deregister * Changes nova-volume to use twisted * Fixes up Bucket to throw proper NotFound and NotEmpty exceptions in constructor and delete() method, and fixes up objectstore\_unittest to properly use assertRaises() to check for proper exceptions and remove the assert\_ calls * Adds missing yield statement that was causing partitioning to intermittently fail * Merged lp:~ewanmellor/nova/lp609792 * Merged lp:~ewanmellor/nova/lp609791 * Replace hardcoded "nova" with FLAGS.control\_exchange * Set durable=False on TopicPublisher, so that it matches the flag on TopicConsumer. This ensures that either redeclaration of the control\_exchange will use the same flag, and avoid AMQPChannelException * Add an import so that nova-compute sees the images\_path flag, so that it can be used on the command line * Return a 404 when attempting to access a bucket that does not exist * Removed creation of process pools. We don't use these any more now that we're using process.simple\_execute * Fix assertion "Someone released me too many times: too many tokens!" when more than one process was running at the same time. This was caused by the override of SharedPool.\_\_new\_\_ not stopping ProcessPool.\_\_init\_\_ from being run whenever process.simple\_execute is called * Always make sure to set a Date headers, since it's needed to calculate the S3 Auth header * Updated the README file * Updated sphinx layout to a two-dir layout like swift. 
Updated a doc string to get rid of a Sphinx warning * Updated URLs in the README file to point to current locations * Add missing import following merge from trunk (cset 150) * Merged with trunk, since a lot of useful things have gone in there recently * fixed bug where partition code was sometimes failing due to initial dd not being yielded properly * Fixed bug 608505 - was freeing the wrong address (should have freed 'secondaddress', was freeing 'address') * renamed xxxnode to xxservice * Add (completely untested) code to include an Authorization header for the S3 request to fetch an image * Check signature for S3 requests * Fixes problem with describe-addresses returning all public ips instead of the ones for just the user's project * Fix for extra spaces in export statements in scripts relating to x509 certs * Adds a Makefile to fill dependencies for testing * Fix syslogging of exceptions by stripping newlines from the exception info * Merged fix for bug 608505 so unit tests pass * Check exit codes when spawning processes by default * Nobody wants to take on this twisted cleanup. It works for now, but could be much nicer if twisted has a nice hook-point for exception mapping * syslog changes * typo fixes and extra print statements removed * added todo for ABC * Fixed bug 608505 - was freeing the wrong address (should have freed 'secondaddress', was freeing 'address') * Merged trunk, fixed extra references to fake\_users * refactoring of imports for fakeldapdriver * make nova-network executable * refactor daemons to use common base class in preparation for network refactor * reorder import statement and remove commented-out test case that is the same as api\_unittest in objectstore\_unittest * Fixes up Bucket to throw proper NotFound and NotEmpty exceptions in constructor and delete() method, and fixes up objectstore\_unittest to properly use assertRaises() to check for proper exceptions and remove the assert\_ calls * Fix bug 607501. Raise 403, not exception if Authorization header not passed. Also added missing call to request.finish() & Python exception-handling style tweak * merge with twisted-volume * remove all of the unused saved return values from attach\_to\_twisted * fix for describe addresses showing everyone's public ips * update the logic for calculating network sizes * Locally administered mac addresses have the second least significant bit of the most significant byte set. If this byte is set then udev on ubuntu doesn't set persistent net rules * use a locally administered mac address so it isn't saved by udev * Convert processpool to a singleton, and switch node.py calls to use it. (Replaces passing a processpool object around all the time.) * Fixed the broken reference to * remove spaces from export statements in scripts relating to certs * Cleanups * Able to set up DNS, and remove udev network rules * Move self.ldap to global ldap to make changes easier if we ever implement settings * Cleanup per suggestions * network unittest clean up * Test cleanup, make driver return dictionaries and construct objects in manager * Able to boot without kernel or ramdisk. libvirt.xml.template is now a Cheetah template * Merged https://code.launchpad.net/~justin-fathomdb/nova/copy-error-handling * Merged bug fixes * Map exceptions to 404 / 403 codes, as was done before the move to twisted. However, I don't think this is the right way to do this in Twisted. 
For example, exceptions thrown after the render method returns will not be mapped * Merged lp:~justin-fathomdb/nova/bug607501 * Merged trunk. Fixed new references to UserManager * I put the call to request.finish() in the wrong place. :-( * More docstrings, don't autocreate projects * Raise 401, not exception if Authorization header not passed. Also minor fixes & Python exception-handling style tweak * LdapDriver cleanup: docstrings and parameter ordering * Ask curl to set exit code if resource was not found * Fixes to dhcp lease code to use a flagfile * merged trunk * Massive refactor of users.py * Hmm, serves me right for not understanding the request, eh? :) Now too\_many\_addresses test case is idempotent in regards to running in isolation and uses self.flags.network\_size instead of the magic number 32 * Redirect STDERR to output to an errlog file when running run\_tests.py * Send message ack in rpc.call and make queues durable * Fixed name change caused by remove-vendor merge * Replace tornado objectstore with twisted web * merged in trunk and fixed import merge errors * First commit of XenAPI-specific code (i.e. connections to the open-source community project Xen Cloud Platform, or the open-source commercial product Citrix XenServer) * Remove the tight coupling between nova.compute.monitor and libvirt. The libvirt-specific code was placed in nova.virt.libvirt\_conn by the last changeset. This greatly simplifies the monitor code, and puts the libvirt-specific XML record parsing in a libvirt-specific place * In preparation for XenAPI support, refactor the interface between nova.compute and the hypervisor (i.e. libvirt) * Fixed references to nova.utils that were broken by a change of import statement in the remove-vendor merge * Remove s3\_internal\_port setting. Objectstore should be able to handle the beatings now. As such, nginx is no longer needed, so it's removed from the dependencies and the configuration files are removed * Replace nova-objectstore with a twistd style wrapper. Add a get\_application method to objectstore handler * Minor post-merge fixes * Fixed \_redis\_name and \_redis\_key * Add build\_sphinx support * fix conf file to no longer have daemonize=1 because twistd daemonizes by default * make nova-volume start with twisteds daemonize stuff * Makin the queues non-durable by default * Ack messages during call so rabbit leaks less * simplify call to simple\_execute * merge extra singleton-pool changes * Added a config file to let setup.py drive building the sphinx docs * make simple method wrapper for process pool simple\_execute * change volume code to use twisted * remove calls to runthis from node * merge with singleton pool * Removed unused Pool from process.py, added a singleton pool called SharedPool, changed calls in node to use singleton pool * Fixes things that were not quite right after big merge party * Make S3 API handler more idiomatic Twisted Web-y * \_redis\_name wasn't picking up override\_type correctly, and \_redis\_key wasn't using it * Quick fix to variable names for consistency in documentation.. * Adds a fix to the idempotency of the test\_too\_many\_addresses test case by adding a simple property to the BaseNetwork class and calculating the number of available IPs by asking the network class to tell the test how many static and preallocated IP addresses are in use before entering the loop to "blow up" the address allocation.. * Adds a flag to redirect STDERR when running run\_tests.py. 
Defaults to a truncate-on-write logfile named run\_tests.err.log. Adds ignore rule for generated errlog file * no more print in storage unittest * reorder imports spacing * Fixes to dhcp lease code to use a flagfile * merged trunk * This branch fixes some unfortunate interaction between Nova and boto * Make sure we pass str objects instead of unicode objects to boto as our credentials * remove import of vendor since we have PPA now * Updates the test suite to work * Disabled a tmpdir cleanup * remove vendor * update copyrights * Volume\_ID identifier needed a return in the property. Also looking for race conditions in the destructor * bin to import images from canonical image store * add logging import to datastore * fix merge errors * change default vpn ports and remove complex vpn ip iteration * fix reference to BasicModel and imports * Cleanups related to BasicModel (whitespace, names, etc) * Updating buildbot address * Fixed buildbot * work on importing images * When destroying an Instance, disassociate with Node * Smiteme * Smiteme * Smiteme * Smiteme * Move BasicModel into datastore * Smiteme * Smiteme * Whitespace change * unhardcode the binary name * Fooish * Finish singletonizing UserManager usage * Debian package additions for simple network template * Foo * Whitespace fix * Remove debug statement * Foo * fix a typo * Added build-deps to debian/control that are needed to run test suite. Fixed an error in a test case * optimization to not load all instances when describe instances is called * More buildbot testing * More buildbot testing * More buildbot testing * More buildbot testing * More buildbot testing * More buildbot testing * Addin buildbot * Fix merge changelog and merge errors in utils.py * Fixes from code review * release 0.2.2-10 * fix for extra space in vblade-persist * Avoid using s-expr, pkcs1-conv, and lsh-export-key * release 0.2.2-9 * fixed bug in auth group\_exists * Move nova related configuration files into /etc/nova/ * move check for none before get mpi data * Refactored smoketests flags * Fixes to smoketest flags * Minor smoketest refactoring * fixes from code review * typo in exception in crypto * datetime import typo * added missing isotime method from utils * release 0.2.2-8 * missed a comma * release 0.2.2-7 * use a flag for cert subject * whitespace fixes and header changes * Fixed the os.environ patch (bogus) * Fixes as per Vish review (whitespace, import statements) * Off by one error in the allocation test (can someone check my subnet math?) * Adding more tests, refactoring for dhcp logic * Got dhcpleasor working, with test ENV for testing, and rpc.cast for real world * Capture signals from dnsmasq and use them to update network state * Relax the Twisted dependency to python-twisted-core (rather than the full stack) * releasing version 0.3.0+really0.2.2-0ubuntu0ppa3 * If set, pass KernelId and RamdiskId from RunInstances call to the target compute node * Add a default flag file for nova-manage to help it find the CA * Ship the CA directory in nova-common * Add a dependency on nginx from nova-objectsstore and install a suitable configuration file * releasing version 0.3.0+really0.2.2-0ubuntu0ppa2 * Don't pass --daemonize=1 to nova-compute. 
It's already daemonising by default * Add debian/nova-common.dirs to create var/lib/nova/{buckets,CA,images,instances,keys,networks} * keeper\_path is really caled datastore\_path * Fixed package version * Move templates from python directories to /usr/share/nova * Added --network\_path setting to nova-compute's flagfile * releasing version 0.3.0+really0.2.2-0ubuntu0ppa1 * Use rmdir instead of rm -rf to remove a tempdir * Set better defaults in flagfiles * Fixes and add interface template * Simple network injection * Simple Network avoids vlans * clean a few merge errors from network * Add curl as a dependency of nova-compute * getting started update * getting started update * Remove \_s errors from merge * fix typos in node from merge * remove spaces from default cert * Make sure get\_assigned\_vlans and BaseNetwork.hosts always return a dict, even if the key is currently empty in the KVS * Add \_s instance attribute to Instance class. It's referenced in a bunch of places, but is never set. This is unlikely to be the right fix (why have two attributes pointing to the same object?), but it seems to make ends meet * Replace spaces in x509 cert subject with underscores. It ends up getting split(' ')'ed and passed to subprocess.Popen, so it needs to not have spaces in it, otherwise openssl gets very upset * Expand somewhat on the short and long descriptions in debian/control * Use separate configuration files for the different daemons * Removed trailing whitespace from header * Updated licenses * Added flags to smoketests. General cleanup * removed all references to keeper * reformatting * Vpn ips and ports use redis * review reformat * code review reformat * We need to be able to look up Instance by Node (live migration) * Get rid of RedisModel * formatting fixes and refactoring from code review * reformatting to fit within 80 characters * simplified handling of tempdir for Fakes * fix for multiple shelves for each volume node * add object class violation exception to fakeldap * remove spaces from default cert * remove silly default from generate cert * fix of fakeldap imports and exceptions * More Comments, cleanup, and reformatting * users.py cleanup for exception handling and typo * Make fakeldap use redis * Refactor network.Vlan to be a BasicModel, since it touched Redis * bugfix: rename \_s to datamodel in Node in some places it was overlooked * fix key injection script * Fixes based on code review 27001 * added TODO * Admin API + Worker Tracking * fixed typo * style cleanup * add more info to vpn list * Use flag for vpn key suffix instead of hardcoded string * don't fail to create vpn key if dir exists * Create Volume should only take an integer between 0 and 1000 * Placeholders for missing describe commands * Set forward delay to zero (partial fix to bug #518) * more comment reformatting * fit comment within 80 lines * removed extraneous reference to rpc in objectstore unit test * Fix queue connection bugs * Fix deletion of user when he is the last member of the group * Fix error message for checking for projectmanager role * Installer now creates global developer role * Removed trailing whitespace from header * added nova-instancemonitor debian config * Updated licenses * Added flags to smoketests. 
General cleanup * A few missing files from the twisted patch * Tweaks to get instancemonitor running * Initial commit of nodemonitor * Create DescribeImageAttribute api method * release 0.2.2-6 * disk.py needed input for key injection to work * release 2.2-5 * message checking callbacks only need to run 10 times a second * release 2.2-4 * trackback formatting isn't logging correctly * documentation updates * fix missing tab in nova-manage * Release 2.2-3 * use logger to print trace of unhandled exceptions * add exit status to nova-manage * fix fakeldap so it can use redis keeper * fix is\_running failing because state was stored as a string * more commands in nova-manage for projects and roles * More volume test fixes * typo in reboot instances * Fix mount of drive for test image * don't need sudo anymore * Cleaning up smoketests * boto uses instance\_type not size * Fix to volume smoketests * fix display of project name for admin in describe instances * make sure to deexpress before we remove the host since deexpress uses the host * fix error in disassociate address * fixed reversed filtering logic * filter keypairs for vpn keys * allow multiple vpn connections with the same credentials * Added admin command to restart networks * hide vpn instances unless you are an admin and allow run\_instances to launch vpn image even if it is private * typo in my ping call * try to ping vpn instances * sensible defaults for instance types * add missing import to pipelib * Give vpns the proper ip address * Fix format addresses * Release 0.2.2-2 * fix more casing errors and make attachment set print * removed extraneous .volume\_id * don't allow volumes to be attached to the same mountpoint * fix case for volume attributes * fix sectors off by one * Don't use keeper for instances * fix default state to be 0 instead of pending * Release 0.2.2 * Fix for mpi cpu reporting * fix detach volume * fix status code printing in cloud * add project ids to volumes * add back accidentally removed bridge name. str is reserved, so don't use it as a variable name * whitespace fixes and format instances set of object fixes * Use instdir to iterate through instances * fix bridge name * Adding basic validation of volume size on creation, plus tests for it * finished gutting keeper from volume * First pass at validation unit tests. Haven't figured out class methods yet * Removing keeper sludge * Set volume status properly, first pass at validation decorators * Adding missing default values and fixing bare Redis fetch for volume list * one more handler typo * fix objectstore handler typo * fix modify image attribute typo * NetworkNode doesn't exist anymore * Added back in missing gateway property on networks * Refactored Instance to get rid of \_s bits, and fixed some bugs in state management * Delete instance files on shutdown * Flush redis db in setup and teardown of tests * Cleaning up my accidental merge of the docs branch * change pipelib to work with projects * Volumes support intermediate state. 
Don't have to cast to storage nodes for attach/detach anymore, just let node update redis with state * Adding nojekyll for directories * Fix for #437 (deleting attached volumes), plus some >9 blade\_id fixes * fix instance iteration to use self.instdir.all instead of older iterators * nasa ldap defaults * sensible rbac defaults * Tests for rbac code * Patch to allow rbac * Adding mpi data * Adding cloudpipe and vpn data back in to network.py * how we build our debs * Revert "fix a bug with AOE number generation" * re-added cloudpipe * devin's smoketests * tools to clean vlans and run our old install script * fix a bug with AOE number generation * Initial commit of nodemonitor * Create DescribeImageAttribute api method * Create DescribeImageAttribute api method * More rackspace API * git checkpoint commit post-wsgi * update spacing * implement image serving in objectstore so nginx isn't required in development * update twitter username * make a "Running" topic instead of having it flow under "Configuration" * Make nginx config be in a code block * More doc updates: nginx & pycurl * Add a README, because GitHub loves them. Update the getting started docs * update spacing * Commit what I have almost working before diverging * first go at moving from tornado to twisted * implement image serving in objectstore so nginx isn't required in development * update twitter username * Update documentation * fix for reactor.spawnProcess sending deprecation warning * patch from issue 4001 * Fix for LoopingCall failing Added in exception logging around amqp calls Creating deferred in receive before ack() message was causing IOError (interrupted system calls), probably because the same message was getting processed twice in some situations, causing the system calls to be doubled. Moving the ack() earlier fixed the problem. The code works now with an interval of 0 but that causes heavy processor usage. An interval of 0.01 keeps the cpu usage within reasonable limits * get rid of anyjson in rpc and fix bad reference to rpc.Connection * gateway undefined * fix cloud instances method * Various cloud fixes * make get\_my\_ip return 127.0.0.1 for testing * Adds a Twisted implementation of a process pool * make a "Running" topic instead of having it flow under "Configuration" * Make nginx config be in a code block * More doc updates: nginx & pycurl * Add a README, because GitHub loves them. Update the getting started docs * whitespace fixes for nova/utils.py * Add project methods to nova-manage * Fix novarc to use project when creating access key * removed reference to nonexistent flag * Josh's networking refactor, modified to work with projects * Merged Vish's work on adding projects to nova * missed the gitignore * initial commit nova-17.0.1/requirements.txt0000666000175000017500000000453413250073136016061 0ustar zuulzuul00000000000000# The order of packages is significant, because pip processes them in the order # of appearance. Changing the order has an impact on the overall integration # process, which may cause wedges in the gate later. 
pbr!=2.1.0,>=2.0.0 # Apache-2.0 SQLAlchemy!=1.1.5,!=1.1.6,!=1.1.7,!=1.1.8,>=1.0.10 # MIT decorator>=3.4.0 # BSD eventlet!=0.18.3,!=0.20.1,<0.21.0,>=0.18.2 # MIT Jinja2!=2.9.0,!=2.9.1,!=2.9.2,!=2.9.3,!=2.9.4,>=2.8 # BSD License (3 clause) keystonemiddleware>=4.17.0 # Apache-2.0 lxml!=3.7.0,>=3.4.1 # BSD Routes>=2.3.1 # MIT cryptography!=2.0,>=1.9 # BSD/Apache-2.0 WebOb>=1.7.1 # MIT greenlet>=0.4.10 # MIT PasteDeploy>=1.5.0 # MIT Paste>=2.0.2 # MIT PrettyTable<0.8,>=0.7.1 # BSD sqlalchemy-migrate>=0.11.0 # Apache-2.0 netaddr>=0.7.18 # BSD netifaces>=0.10.4 # MIT paramiko>=2.0.0 # LGPLv2.1+ Babel!=2.4.0,>=2.3.4 # BSD enum34>=1.0.4;python_version=='2.7' or python_version=='2.6' or python_version=='3.3' # BSD iso8601>=0.1.11 # MIT jsonschema<3.0.0,>=2.6.0 # MIT python-cinderclient>=3.3.0 # Apache-2.0 keystoneauth1>=3.3.0 # Apache-2.0 python-neutronclient>=6.3.0 # Apache-2.0 python-glanceclient>=2.8.0 # Apache-2.0 requests>=2.14.2 # Apache-2.0 six>=1.10.0 # MIT stevedore>=1.20.0 # Apache-2.0 setuptools!=24.0.0,!=34.0.0,!=34.0.1,!=34.0.2,!=34.0.3,!=34.1.0,!=34.1.1,!=34.2.0,!=34.3.0,!=34.3.1,!=34.3.2,!=36.2.0,>=16.0 # PSF/ZPL websockify>=0.8.0 # LGPLv3 oslo.cache>=1.26.0 # Apache-2.0 oslo.concurrency>=3.25.0 # Apache-2.0 oslo.config>=5.1.0 # Apache-2.0 oslo.context>=2.19.2 # Apache-2.0 oslo.log>=3.36.0 # Apache-2.0 oslo.reports>=1.18.0 # Apache-2.0 oslo.serialization!=2.19.1,>=2.18.0 # Apache-2.0 oslo.utils>=3.33.0 # Apache-2.0 oslo.db>=4.27.0 # Apache-2.0 oslo.rootwrap>=5.8.0 # Apache-2.0 oslo.messaging>=5.29.0 # Apache-2.0 oslo.policy>=1.30.0 # Apache-2.0 oslo.privsep>=1.23.0 # Apache-2.0 oslo.i18n>=3.15.3 # Apache-2.0 oslo.service!=1.28.1,>=1.24.0 # Apache-2.0 rfc3986>=0.3.1 # Apache-2.0 oslo.middleware>=3.31.0 # Apache-2.0 psutil>=3.2.2 # BSD oslo.versionedobjects>=1.31.2 # Apache-2.0 os-brick>=2.2.0 # Apache-2.0 os-traits>=0.4.0 # Apache-2.0 os-vif!=1.8.0,>=1.7.0 # Apache-2.0 os-win>=3.0.0 # Apache-2.0 castellan>=0.16.0 # Apache-2.0 microversion-parse>=0.1.2 # Apache-2.0 os-xenapi>=0.3.1 # Apache-2.0 tooz>=1.58.0 # Apache-2.0 cursive>=0.2.1 # Apache-2.0 pypowervm>=1.1.10 # Apache-2.0 os-service-types>=1.1.0 # Apache-2.0 taskflow>=2.16.0 # Apache-2.0 nova-17.0.1/LICENSE0000666000175000017500000002363713250073126013606 0ustar zuulzuul00000000000000 Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. 
"Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. 
You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. 
In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. nova-17.0.1/PKG-INFO0000664000175000017500000000731313250073472013671 0ustar zuulzuul00000000000000Metadata-Version: 1.1 Name: nova Version: 17.0.1 Summary: Cloud computing fabric controller Home-page: https://docs.openstack.org/nova/latest/ Author: OpenStack Author-email: openstack-dev@lists.openstack.org License: UNKNOWN Description-Content-Type: UNKNOWN Description: ======================== Team and repository tags ======================== .. image:: https://governance.openstack.org/badges/nova.svg :target: https://governance.openstack.org/reference/tags/index.html .. Change things from this point on OpenStack Nova ============== OpenStack Nova provides a cloud computing fabric controller, supporting a wide variety of compute technologies, including: libvirt (KVM, Xen, LXC and more), Hyper-V, VMware, XenServer, OpenStack Ironic and PowerVM. Use the following resources to learn more. API --- To learn how to use Nova's API, consult the documentation available online at: - `Compute API Guide `__ - `Compute API Reference `__ For more information on OpenStack APIs, SDKs and CLIs in general, refer to: - `OpenStack for App Developers `__ - `Development resources for OpenStack clouds `__ Operators --------- To learn how to deploy and configure OpenStack Nova, consult the documentation available online at: - `OpenStack Nova `__ In the unfortunate event that bugs are discovered, they should be reported to the appropriate bug tracker. If you obtained the software from a 3rd party operating system vendor, it is often wise to use their own bug tracker for reporting problems. In all other cases use the master OpenStack bug tracker, available at: - `Bug Tracker `__ Developers ---------- For information on how to contribute to Nova, please see the contents of the CONTRIBUTING.rst. Any new code must follow the development guidelines detailed in the HACKING.rst file, and pass all unit tests. Further developer focused documentation is available at: - `Official Nova Documentation `__ - `Official Client Documentation `__ Other Information ----------------- During each `Summit`_ and `Project Team Gathering`_, we agree on what the whole community wants to focus on for the upcoming release. 
The plans for nova can be found at: - `Nova Specs `__ .. _Summit: https://www.openstack.org/summit/ .. _Project Team Gathering: https://www.openstack.org/ptg/ Platform: UNKNOWN Classifier: Environment :: OpenStack Classifier: Intended Audience :: Information Technology Classifier: Intended Audience :: System Administrators Classifier: License :: OSI Approved :: Apache Software License Classifier: Operating System :: POSIX :: Linux Classifier: Programming Language :: Python Classifier: Programming Language :: Python :: 2 Classifier: Programming Language :: Python :: 2.7 Classifier: Programming Language :: Python :: 3 Classifier: Programming Language :: Python :: 3.5 nova-17.0.1/.zuul.yaml0000666000175000017500000000577413250073136014545 0ustar zuulzuul00000000000000# See https://docs.openstack.org/infra/manual/drivers.html#naming-with-zuul-v3 # for job naming conventions. - job: name: nova-dsvm-base parent: legacy-dsvm-base description: | The base job definition for nova devstack/tempest jobs. Contains common configuration. timeout: 10800 required-projects: - openstack-infra/devstack-gate - openstack/nova - openstack/tempest irrelevant-files: - ^(placement-)?api-.*$ - ^(test-|)requirements.txt$ - ^.*\.rst$ - ^.git.*$ - ^doc/.*$ - ^nova/hacking/.*$ - ^nova/locale/.*$ - ^nova/tests/.*$ - ^releasenotes/.*$ - ^setup.cfg$ - ^tests-py3.txt$ - ^tools/.*$ - ^tox.ini$ - job: name: nova-tox-functional parent: openstack-tox description: | Run tox-based functional tests for the OpenStack Nova project with Nova specific irrelevant-files list. Uses tox with the ``functional`` environment. irrelevant-files: - ^.*\.rst$ - ^api-.*$ - ^doc/source/.*$ - ^nova/locale/.*$ - ^placement-api-ref/.*$ - ^releasenotes/.*$ vars: tox_envlist: functional timeout: 3600 - job: name: nova-tox-functional-py35 parent: openstack-tox description: | Run tox-based functional tests for the OpenStack Nova project under cPython version 3.5. with Nova specific irrelevant-files list. Uses tox with the ``functional-py35`` environment. irrelevant-files: - ^.*\.rst$ - ^api-.*$ - ^doc/source/.*$ - ^nova/locale/.*$ - ^placement-api-ref/.*$ - ^releasenotes/.*$ vars: tox_envlist: functional-py35 timeout: 3600 - job: name: nova-lvm parent: nova-dsvm-base description: | Run standard integration tests using LVM image backend. This is useful if there are tests touching this code. run: playbooks/legacy/nova-lvm/run.yaml post-run: playbooks/legacy/nova-lvm/post.yaml - job: name: nova-multiattach parent: nova-dsvm-base description: | Run tempest integration tests with volume multiattach support enabled. This job will only work starting with Queens. It uses the default Cinder volume type in devstack (lvm) and the default compute driver in devstack (libvirt). It also disables the Pike Ubuntu Cloud Archive because volume multiattach support with the libvirt driver only works with qemu<2.10 or libvirt>=3.10 which won't work with the Pike UCA. branches: ^(?!stable/(newton|ocata|pike)).*$ run: playbooks/legacy/nova-multiattach/run.yaml post-run: playbooks/legacy/nova-multiattach/post.yaml - project: # Please try to keep the list of job names sorted alphabetically. 
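    # (Pipeline semantics below follow standard Zuul v3 behaviour rather than
    # anything defined in this file: "check" jobs run when a change is
    # proposed, "gate" jobs run before a change merges, and "experimental"
    # jobs run only when explicitly requested on a change.)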
check: jobs: - nova-multiattach - nova-tox-functional - nova-tox-functional-py35 gate: jobs: - nova-multiattach - nova-tox-functional - nova-tox-functional-py35 experimental: jobs: - nova-lvm nova-17.0.1/.stestr.conf0000666000175000017500000000006113250073126015034 0ustar zuulzuul00000000000000[DEFAULT] test_path=./nova/tests/unit top_dir=./ nova-17.0.1/gate/0000775000175000017500000000000013250073471013507 5ustar zuulzuul00000000000000nova-17.0.1/gate/README0000666000175000017500000000043613250073126014371 0ustar zuulzuul00000000000000These are hooks to be used by the OpenStack infra test system. These scripts may be called by certain jobs at important times to do extra testing, setup, etc. They are really only relevant within the scope of the OpenStack infra system and are not expected to be useful to anyone else. nova-17.0.1/gate/post_test_hook.sh0000777000175000017500000000103313250073136017107 0ustar zuulzuul00000000000000#!/bin/bash -x MANAGE="/usr/local/bin/nova-manage" function archive_deleted_rows { # NOTE(danms): Run this a few times to make sure that we end # up with nothing more to archive for i in `seq 30`; do $MANAGE db archive_deleted_rows --verbose --max_rows 1000 RET=$? if [[ $RET -gt 1 ]]; then echo Archiving failed with result $RET return $RET elif [[ $RET -eq 0 ]]; then echo Archiving Complete break; fi done } archive_deleted_rows nova-17.0.1/bindep.txt0000666000175000017500000000254013250073126014571 0ustar zuulzuul00000000000000# This is a cross-platform list tracking distribution packages needed for install and tests; # see https://docs.openstack.org/infra/bindep/ for additional information. build-essential [platform:dpkg test] gcc [platform:rpm test] # gettext and graphviz are needed by doc builds only. For transition, # have them in both doc and test. # TODO(jaegerandi): Remove test once infra scripts are updated. 
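    # Exit-code convention assumed by the loop below: 0 means nothing was
    # left to archive, 1 means some rows were archived (so another pass may
    # find more), and anything greater than 1 is an error.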
gettext [doc test] graphviz [doc test] language-pack-en [platform:ubuntu] libffi-dev [platform:dpkg test] libffi-devel [platform:rpm test] libmysqlclient-dev [platform:dpkg] libpq-dev [platform:dpkg test] libsqlite3-dev [platform:dpkg test] libxml2-dev [platform:dpkg test] libxslt-devel [platform:rpm test] libxslt1-dev [platform:dpkg test] locales [platform:debian] mysql [platform:rpm] mysql-client [platform:dpkg] mysql-devel [platform:rpm test] mysql-server pkg-config [platform:dpkg test] pkgconfig [platform:rpm test] postgresql postgresql-client [platform:dpkg] postgresql-devel [platform:rpm test] postgresql-server [platform:rpm] python-dev [platform:dpkg test] python-devel [platform:rpm test] python3-all [platform:dpkg !platform:ubuntu-precise] python3-all-dev [platform:dpkg !platform:ubuntu-precise] python3-devel [platform:fedora] python34-devel [platform:centos] sqlite-devel [platform:rpm test] libpcre3-dev [platform:dpkg test] pcre-devel [platform:rpm test] nova-17.0.1/setup.cfg0000666000175000017500000000716013250073472014417 0ustar zuulzuul00000000000000[metadata] name = nova summary = Cloud computing fabric controller description-file = README.rst author = OpenStack author-email = openstack-dev@lists.openstack.org home-page = https://docs.openstack.org/nova/latest/ classifier = Environment :: OpenStack Intended Audience :: Information Technology Intended Audience :: System Administrators License :: OSI Approved :: Apache Software License Operating System :: POSIX :: Linux Programming Language :: Python Programming Language :: Python :: 2 Programming Language :: Python :: 2.7 Programming Language :: Python :: 3 Programming Language :: Python :: 3.5 [global] setup-hooks = pbr.hooks.setup_hook [files] data_files = etc/nova = etc/nova/api-paste.ini etc/nova/rootwrap.conf etc/nova/rootwrap.d = etc/nova/rootwrap.d/* packages = nova [entry_points] oslo.config.opts = nova.conf = nova.conf.opts:list_opts oslo.config.opts.defaults = nova.conf = nova.common.config:set_middleware_defaults oslo.policy.enforcer = nova = nova.policy:get_enforcer oslo.policy.policies = # The sample policies will be ordered by entry point and then by list # returned from that entry point. If more control is desired split out each # list_rules method into a separate entry point rather than using the # aggregate method. 
nova = nova.policies:list_rules nova.compute.monitors.cpu = virt_driver = nova.compute.monitors.cpu.virt_driver:Monitor console_scripts = nova-api = nova.cmd.api:main nova-api-metadata = nova.cmd.api_metadata:main nova-api-os-compute = nova.cmd.api_os_compute:main nova-cells = nova.cmd.cells:main nova-compute = nova.cmd.compute:main nova-conductor = nova.cmd.conductor:main nova-console = nova.cmd.console:main nova-consoleauth = nova.cmd.consoleauth:main nova-dhcpbridge = nova.cmd.dhcpbridge:main nova-manage = nova.cmd.manage:main nova-network = nova.cmd.network:main nova-novncproxy = nova.cmd.novncproxy:main nova-policy = nova.cmd.policy:main nova-rootwrap = oslo_rootwrap.cmd:main nova-rootwrap-daemon = oslo_rootwrap.cmd:daemon nova-scheduler = nova.cmd.scheduler:main nova-serialproxy = nova.cmd.serialproxy:main nova-spicehtml5proxy = nova.cmd.spicehtml5proxy:main nova-status = nova.cmd.status:main nova-xvpvncproxy = nova.cmd.xvpvncproxy:main wsgi_scripts = nova-placement-api = nova.api.openstack.placement.wsgi:init_application nova-api-wsgi = nova.api.openstack.compute.wsgi:init_application nova-metadata-wsgi = nova.api.metadata.wsgi:init_application nova.ipv6_backend = rfc2462 = nova.ipv6.rfc2462 account_identifier = nova.ipv6.account_identifier nova.scheduler.host_manager = host_manager = nova.scheduler.host_manager:HostManager # Deprecated starting from the 17.0.0 Queens release. ironic_host_manager = nova.scheduler.ironic_host_manager:IronicHostManager nova.scheduler.driver = filter_scheduler = nova.scheduler.filter_scheduler:FilterScheduler caching_scheduler = nova.scheduler.caching_scheduler:CachingScheduler chance_scheduler = nova.scheduler.chance:ChanceScheduler fake_scheduler = nova.tests.unit.scheduler.fakes:FakeScheduler [build_sphinx] all_files = 1 build-dir = doc/build source-dir = doc/source warning-is-error = 1 [build_apiguide] all_files = 1 build-dir = api-guide/build source-dir = api-guide/source [egg_info] tag_build = tag_date = 0 tag_svn_revision = 0 [compile_catalog] directory = nova/locale domain = nova [update_catalog] domain = nova output_dir = nova/locale input_file = nova/locale/nova.pot [extract_messages] keywords = _ gettext ngettext l_ lazy_gettext mapping_file = babel.cfg output_file = nova/locale/nova.pot [wheel] universal = 1 [extras] osprofiler = osprofiler>=1.4.0 # Apache-2.0 nova-17.0.1/babel.cfg0000666000175000017500000000002113250073126014305 0ustar zuulzuul00000000000000[python: **.py] nova-17.0.1/test-requirements.txt0000666000175000017500000000227513250073126017035 0ustar zuulzuul00000000000000# The order of packages is significant, because pip processes them in the order # of appearance. Changing the order has an impact on the overall integration # process, which may cause wedges in the gate later. 
hacking!=0.13.0,<0.14,>=0.12.0 # Apache-2.0 coverage!=4.4,>=4.0 # Apache-2.0 ddt>=1.0.1 # MIT fixtures>=3.0.0 # Apache-2.0/BSD mock>=2.0.0 # BSD mox3>=0.20.0 # Apache-2.0 psycopg2>=2.6.2 # LGPL/ZPL PyMySQL>=0.7.6 # MIT License python-barbicanclient!=4.5.0,!=4.5.1,>=4.0.0 # Apache-2.0 python-ironicclient>=2.2.0 # Apache-2.0 requests-mock>=1.1.0 # Apache-2.0 sphinx!=1.6.6,>=1.6.2 # BSD sphinxcontrib-actdiag>=0.8.5 # BSD sphinxcontrib-seqdiag>=0.8.4 # BSD os-api-ref>=1.4.0 # Apache-2.0 oslotest>=3.2.0 # Apache-2.0 stestr>=1.0.0 # Apache-2.0 osprofiler>=1.4.0 # Apache-2.0 testresources>=2.0.0 # Apache-2.0/BSD testscenarios>=0.4 # Apache-2.0/BSD testtools>=2.2.0 # MIT bandit>=1.1.0 # Apache-2.0 openstackdocstheme>=1.18.1 # Apache-2.0 gabbi>=1.35.0 # Apache-2.0 # vmwareapi driver specific dependencies oslo.vmware>=2.17.0 # Apache-2.0 # releasenotes reno>=2.5.0 # Apache-2.0 # placement functional tests wsgi-intercept>=1.4.1 # MIT License # redirect tests in docs whereto>=0.3.0 # Apache-2.0 nova-17.0.1/tools/0000775000175000017500000000000013250073472013730 5ustar zuulzuul00000000000000nova-17.0.1/tools/nova-manage.bash_completion0000666000175000017500000000214013250073126021204 0ustar zuulzuul00000000000000# bash completion for openstack nova-manage _nova_manage_opts="" # lazy init _nova_manage_opts_exp="" # lazy init # dict hack for bash 3 _set_nova_manage_subopts () { eval _nova_manage_subopts_"$1"='$2' } _get_nova_manage_subopts () { eval echo '${_nova_manage_subopts_'"$1"'#_nova_manage_subopts_}' } _nova_manage() { local cur prev subopts COMPREPLY=() cur="${COMP_WORDS[COMP_CWORD]}" prev="${COMP_WORDS[COMP_CWORD-1]}" if [ "x$_nova_manage_opts" == "x" ] ; then _nova_manage_opts="`nova-manage bash-completion 2>/dev/null`" _nova_manage_opts_exp="`echo $_nova_manage_opts | sed -e "s/\s/|/g"`" fi if [[ " `echo $_nova_manage_opts` " =~ " $prev " ]] ; then if [ "x$(_get_nova_manage_subopts "$prev")" == "x" ] ; then subopts="`nova-manage bash-completion $prev 2>/dev/null`" _set_nova_manage_subopts "$prev" "$subopts" fi COMPREPLY=($(compgen -W "$(_get_nova_manage_subopts "$prev")" -- ${cur})) elif [[ ! " ${COMP_WORDS[@]} " =~ " "($_nova_manage_opts_exp)" " ]] ; then COMPREPLY=($(compgen -W "${_nova_manage_opts}" -- ${cur})) fi return 0 } complete -F _nova_manage nova-manage nova-17.0.1/tools/releasenotes_tox.sh0000777000175000017500000000113313250073126017646 0ustar zuulzuul00000000000000#!/usr/bin/env bash rm -rf releasenotes/build sphinx-build -a -E -W \ -d releasenotes/build/doctrees \ -b html \ releasenotes/source releasenotes/build/html BUILD_RESULT=$? UNCOMMITTED_NOTES=$(git status --porcelain | \ awk '$1 ~ "M|A|??" 
&& $2 ~ /releasenotes\/notes/ {print $2}')
if [ "${UNCOMMITTED_NOTES}" ]
then
    cat <<EOF
REMINDER: The following changes to release notes have not been committed:
${UNCOMMITTED_NOTES}
While that may be intentional, keep in mind that release notes are built from
committed changes, not the working directory.
EOF
fi
exit ${BUILD_RESULT}
# Migrate A -> B
result = migrate_server(server_name) if not result: return False # Migrate B -> A return migrate_server(server_name)
def rebuild_server(server_name, snapshot_name): run("nova rebuild %s %s --poll" % (server_name, snapshot_name)) cmd = "nova list | grep %s | awk '{print $6}'" % server_name proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True) stdout, stderr = proc.communicate() status = stdout.strip() if status != 'ACTIVE': sys.stderr.write("Server %s failed to rebuild\n" % server_name) return False return True
def test_rebuild(context): count, args = context server_name = "server%d" % count snapshot_name = "snap%d" % count cleanup = args.cleanup with server_built(server_name, args.image, cleanup=cleanup): with snapshot_taken(server_name, snapshot_name, cleanup=cleanup): return rebuild_server(server_name, snapshot_name)
def _parse_args(): parser = argparse.ArgumentParser( description='Test Nova for Race Conditions.') parser.add_argument('tests', metavar='TESTS', type=str, nargs='*', default=['rebuild', 'migrate'], help='tests to run: [rebuild|migrate]') parser.add_argument('-i', '--image', help="image to build from", required=True) parser.add_argument('-n', '--num-runs', type=int, help="number of runs", default=1) parser.add_argument('-c', '--concurrency', type=int, default=5, help="number of concurrent processes") parser.add_argument('--no-cleanup', action='store_false', dest="cleanup", default=True) parser.add_argument('-d', '--dom0-ips', help="IPs of the dom0s on which to run the cleanup script") return parser.parse_args()
def main(): dom0_cleanup_script = DOM0_CLEANUP_SCRIPT args = _parse_args() if args.dom0_ips: dom0_ips = args.dom0_ips.split(',') else: dom0_ips = [] start_time = time.time() batch_size = min(args.num_runs, args.concurrency) pool = multiprocessing.Pool(processes=args.concurrency) results = [] for test in args.tests: test_func = globals().get("test_%s" % test) if not test_func: sys.stderr.write("test '%s' not found\n" % test) sys.exit(1) contexts = [(x, args) for x in range(args.num_runs)] try: results += pool.map(test_func, contexts) finally: if args.cleanup: for dom0_ip in dom0_ips: run("ssh root@%s %s" % (dom0_ip, dom0_cleanup_script)) success = all(results) result = "SUCCESS" if success else "FAILED" duration = time.time() - start_time print("%s, finished in %.2f secs" % (result, duration)) sys.exit(0 if success else 1) if __name__ == "__main__": main()
nova-17.0.1/tools/xenserver/vm_vdi_cleaner.py0000777000175000017500000002456613250073126021306 0ustar zuulzuul00000000000000#!/usr/bin/env python # Copyright 2011 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License.
"""vm_vdi_cleaner.py - List or clean orphaned VDIs/instances on XenServer.""" import doctest import os import sys from oslo_config import cfg from oslo_utils import timeutils import XenAPI possible_topdir = os.getcwd() if os.path.exists(os.path.join(possible_topdir, "nova", "__init__.py")): sys.path.insert(0, possible_topdir) from nova import config from nova import context import nova.conf from nova import db from nova import exception from nova.virt import virtapi from nova.virt.xenapi import driver as xenapi_driver cleaner_opts = [ cfg.IntOpt('zombie_instance_updated_at_window', default=172800, help='Number of seconds zombie instances are cleaned up.'), ] cli_opt = cfg.StrOpt('command', help='Cleaner command') CONF = nova.conf.CONF CONF.register_opts(cleaner_opts) CONF.register_cli_opt(cli_opt) ALLOWED_COMMANDS = ["list-vdis", "clean-vdis", "list-instances", "clean-instances", "test"] def call_xenapi(xenapi, method, *args): """Make a call to xapi.""" return xenapi._session.call_xenapi(method, *args) def find_orphaned_instances(xenapi): """Find and return a list of orphaned instances.""" ctxt = context.get_admin_context(read_deleted="only") orphaned_instances = [] for vm_ref, vm_rec in _get_applicable_vm_recs(xenapi): try: uuid = vm_rec['other_config']['nova_uuid'] instance = db.instance_get_by_uuid(ctxt, uuid) except (KeyError, exception.InstanceNotFound): # NOTE(jk0): Err on the side of caution here. If we don't know # anything about the particular instance, ignore it. print_xen_object("INFO: Ignoring VM", vm_rec, indent_level=0) continue # NOTE(jk0): This would be triggered if a VM was deleted but the # actual deletion process failed somewhere along the line. is_active_and_deleting = (instance.vm_state == "active" and instance.task_state == "deleting") # NOTE(jk0): A zombie VM is an instance that is not active and hasn't # been updated in over the specified period. is_zombie_vm = (instance.vm_state != "active" and timeutils.is_older_than(instance.updated_at, CONF.zombie_instance_updated_at_window)) if is_active_and_deleting or is_zombie_vm: orphaned_instances.append((vm_ref, vm_rec, instance)) return orphaned_instances def cleanup_instance(xenapi, instance, vm_ref, vm_rec): """Delete orphaned instances.""" xenapi._vmops._destroy(instance, vm_ref) def _get_applicable_vm_recs(xenapi): """An 'applicable' VM is one that is not a template and not the control domain. """ for vm_ref in call_xenapi(xenapi, 'VM.get_all'): try: vm_rec = call_xenapi(xenapi, 'VM.get_record', vm_ref) except XenAPI.Failure, e: if e.details[0] != 'HANDLE_INVALID': raise continue if vm_rec["is_a_template"] or vm_rec["is_control_domain"]: continue yield vm_ref, vm_rec def print_xen_object(obj_type, obj, indent_level=0, spaces_per_indent=4): """Pretty-print a Xen object. 
Looks like: VM (abcd-abcd-abcd): 'name label here' """ uuid = obj["uuid"] try: name_label = obj["name_label"] except KeyError: name_label = "" msg = "%s (%s) '%s'" % (obj_type, uuid, name_label) indent = " " * spaces_per_indent * indent_level print("".join([indent, msg]))
def _find_vdis_connected_to_vm(xenapi, connected_vdi_uuids): """Find VDIs which are connected to VBDs which are connected to VMs.""" def _is_null_ref(ref): return ref == "OpaqueRef:NULL"
    def _add_vdi_and_parents_to_connected(vdi_rec, indent_level): indent_level += 1 vdi_and_parent_uuids = [] cur_vdi_rec = vdi_rec while True: cur_vdi_uuid = cur_vdi_rec["uuid"] print_xen_object("VDI", cur_vdi_rec, indent_level=indent_level) connected_vdi_uuids.add(cur_vdi_uuid) vdi_and_parent_uuids.append(cur_vdi_uuid) try: parent_vdi_uuid = cur_vdi_rec["sm_config"]["vhd-parent"] except KeyError: parent_vdi_uuid = None # NOTE(sirp): VDIs can have themselves as a parent?! if parent_vdi_uuid and parent_vdi_uuid != cur_vdi_uuid: indent_level += 1 cur_vdi_ref = call_xenapi(xenapi, 'VDI.get_by_uuid', parent_vdi_uuid) try: cur_vdi_rec = call_xenapi(xenapi, 'VDI.get_record', cur_vdi_ref) except XenAPI.Failure as e: if e.details[0] != 'HANDLE_INVALID': raise break else: break
    for vm_ref, vm_rec in _get_applicable_vm_recs(xenapi): indent_level = 0 print_xen_object("VM", vm_rec, indent_level=indent_level) vbd_refs = vm_rec["VBDs"] for vbd_ref in vbd_refs: try: vbd_rec = call_xenapi(xenapi, 'VBD.get_record', vbd_ref) except XenAPI.Failure as e: if e.details[0] != 'HANDLE_INVALID': raise continue indent_level = 1 print_xen_object("VBD", vbd_rec, indent_level=indent_level) vbd_vdi_ref = vbd_rec["VDI"] if _is_null_ref(vbd_vdi_ref): continue try: vdi_rec = call_xenapi(xenapi, 'VDI.get_record', vbd_vdi_ref) except XenAPI.Failure as e: if e.details[0] != 'HANDLE_INVALID': raise continue _add_vdi_and_parents_to_connected(vdi_rec, indent_level)
def _find_all_vdis_and_system_vdis(xenapi, all_vdi_uuids, connected_vdi_uuids): """Collects all VDIs and adds system VDIs to the connected set.""" def _system_owned(vdi_rec): vdi_name = vdi_rec["name_label"] return (vdi_name.startswith("USB") or vdi_name.endswith(".iso") or vdi_rec["type"] == "system")
    for vdi_ref in call_xenapi(xenapi, 'VDI.get_all'): try: vdi_rec = call_xenapi(xenapi, 'VDI.get_record', vdi_ref) except XenAPI.Failure as e: if e.details[0] != 'HANDLE_INVALID': raise continue vdi_uuid = vdi_rec["uuid"] all_vdi_uuids.add(vdi_uuid) # System owned and non-managed VDIs should be considered 'connected' # for our purposes.
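        # (e.g. the dom0 system disks and XenServer tools ISOs have no nova
        # instance VBDs, so without this they would be reported -- and
        # possibly destroyed -- as orphans.)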
if _system_owned(vdi_rec): print_xen_object("SYSTEM VDI", vdi_rec, indent_level=0) connected_vdi_uuids.add(vdi_uuid) elif not vdi_rec["managed"]: print_xen_object("UNMANAGED VDI", vdi_rec, indent_level=0) connected_vdi_uuids.add(vdi_uuid)
def find_orphaned_vdi_uuids(xenapi): """Walk the VM -> VBD -> VDI chain and accumulate connected VDIs.""" connected_vdi_uuids = set() _find_vdis_connected_to_vm(xenapi, connected_vdi_uuids) all_vdi_uuids = set() _find_all_vdis_and_system_vdis(xenapi, all_vdi_uuids, connected_vdi_uuids) orphaned_vdi_uuids = all_vdi_uuids - connected_vdi_uuids return orphaned_vdi_uuids
def list_orphaned_vdis(vdi_uuids): """List orphaned VDIs.""" for vdi_uuid in vdi_uuids: print("ORPHANED VDI (%s)" % vdi_uuid)
def clean_orphaned_vdis(xenapi, vdi_uuids): """Clean orphaned VDIs.""" for vdi_uuid in vdi_uuids: print("CLEANING VDI (%s)" % vdi_uuid) vdi_ref = call_xenapi(xenapi, 'VDI.get_by_uuid', vdi_uuid) try: call_xenapi(xenapi, 'VDI.destroy', vdi_ref) except XenAPI.Failure as exc: sys.stderr.write("Skipping %s: %s\n" % (vdi_uuid, exc))
def list_orphaned_instances(orphaned_instances): """List orphaned instances.""" for vm_ref, vm_rec, orphaned_instance in orphaned_instances: print("ORPHANED INSTANCE (%s)" % orphaned_instance.name)
def clean_orphaned_instances(xenapi, orphaned_instances): """Clean orphaned instances.""" for vm_ref, vm_rec, instance in orphaned_instances: print("CLEANING INSTANCE (%s)" % instance.name) cleanup_instance(xenapi, instance, vm_ref, vm_rec)
def main(): """Main loop.""" config.parse_args(sys.argv) args = CONF(args=sys.argv[1:], usage='%(prog)s [options] --command={' + '|'.join(ALLOWED_COMMANDS) + '}') command = CONF.command if not command or command not in ALLOWED_COMMANDS: CONF.print_usage() sys.exit(1) if CONF.zombie_instance_updated_at_window < CONF.resize_confirm_window: raise Exception("`zombie_instance_updated_at_window` has to be longer" " than `resize_confirm_window`.") # NOTE(blamar) This tool does not require DB access, so passing in the # 'abstract' VirtAPI class is acceptable xenapi = xenapi_driver.XenAPIDriver(virtapi.VirtAPI()) if command == "list-vdis": print("Connected VDIs:\n") orphaned_vdi_uuids = find_orphaned_vdi_uuids(xenapi) print("\nOrphaned VDIs:\n") list_orphaned_vdis(orphaned_vdi_uuids) elif command == "clean-vdis": orphaned_vdi_uuids = find_orphaned_vdi_uuids(xenapi) clean_orphaned_vdis(xenapi, orphaned_vdi_uuids) elif command == "list-instances": orphaned_instances = find_orphaned_instances(xenapi) list_orphaned_instances(orphaned_instances) elif command == "clean-instances": orphaned_instances = find_orphaned_instances(xenapi) clean_orphaned_instances(xenapi, orphaned_instances) elif command == "test": doctest.testmod() else: print("Unknown command '%s'" % command) sys.exit(1) if __name__ == "__main__": main()
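A minimal invocation sketch for the cleaner above (paths are illustrative; --config-file is the standard oslo.config option rather than something this script adds):

    python tools/xenserver/vm_vdi_cleaner.py --config-file=/etc/nova/nova.conf --command=list-vdis
    python tools/xenserver/vm_vdi_cleaner.py --config-file=/etc/nova/nova.conf --command=clean-vdis

The list-* commands are read-only, so they are the sensible first pass before either clean-* variant.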
""" This script is designed to cleanup any VHDs (and their descendents) which have a bad parent pointer. The script needs to be run in the dom0 of the affected host. The available actions are: - print: display the filenames of the affected VHDs - delete: remove the affected VHDs - move: move the affected VHDs out of the SR into another directory """ import glob import os import subprocess import sys class ExecutionFailed(Exception): def __init__(self, returncode, stdout, stderr, max_stream_length=32): self.returncode = returncode self.stdout = stdout[:max_stream_length] self.stderr = stderr[:max_stream_length] self.max_stream_length = max_stream_length def __repr__(self): return "" % ( self.returncode, self.stdout, self.stderr) __str__ = __repr__ def execute(cmd, ok_exit_codes=None): if ok_exit_codes is None: ok_exit_codes = [0] proc = subprocess.Popen( cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE) (stdout, stderr) = proc.communicate() if proc.returncode not in ok_exit_codes: raise ExecutionFailed(proc.returncode, stdout, stderr) return proc.returncode, stdout, stderr def usage(): print("usage: %s " % sys.argv[0]) sys.exit(1) def main(): if len(sys.argv) < 3: usage() sr_path = sys.argv[1] action = sys.argv[2] if action not in ('print', 'delete', 'move'): usage() if action == 'move': if len(sys.argv) < 4: print("error: must specify where to move bad VHDs") sys.exit(1) bad_vhd_path = sys.argv[3] if not os.path.exists(bad_vhd_path): os.makedirs(bad_vhd_path) bad_leaves = [] descendents = {} for fname in glob.glob(os.path.join(sr_path, "*.vhd")): (returncode, stdout, stderr) = execute( ['vhd-util', 'query', '-n', fname, '-p'], ok_exit_codes=[0, 22]) stdout = stdout.strip() if stdout.endswith('.vhd'): try: descendents[stdout].append(fname) except KeyError: descendents[stdout] = [fname] elif 'query failed' in stdout: bad_leaves.append(fname) def walk_vhds(root): yield root if root in descendents: for child in descendents[root]: for vhd in walk_vhds(child): yield vhd for bad_leaf in bad_leaves: for bad_vhd in walk_vhds(bad_leaf): print(bad_vhd) if action == "print": pass elif action == "delete": os.unlink(bad_vhd) elif action == "move": new_path = os.path.join(bad_vhd_path, os.path.basename(bad_vhd)) os.rename(bad_vhd, new_path) else: raise Exception("invalid action %s" % action) if __name__ == '__main__': main() nova-17.0.1/tools/xenserver/populate_other_config.py0000666000175000017500000000611113250073126022677 0ustar zuulzuul00000000000000#!/usr/bin/env python # Copyright 2013 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ One-time script to populate VDI.other_config. We use metadata stored in VDI.other_config to associate a VDI with a given instance so that we may safely cleanup orphaned VDIs. We had a bug in the code that meant that the vast majority of VDIs created would not have the other_config populated. After deploying the fixed code, this script is intended to be run against all compute-workers in a cluster so that existing VDIs can have their other_configs populated. 
nova-17.0.1/tools/xenserver/populate_other_config.py0000666000175000017500000000611113250073126022677 0ustar zuulzuul00000000000000#!/usr/bin/env python # Copyright 2013 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License.
""" One-time script to populate VDI.other_config. We use metadata stored in VDI.other_config to associate a VDI with a given instance so that we may safely cleanup orphaned VDIs. We had a bug in the code that meant that the vast majority of VDIs created would not have the other_config populated. After deploying the fixed code, this script is intended to be run against all compute-workers in a cluster so that existing VDIs can have their other_configs populated.
Run on compute-worker (not Dom0): python ./tools/xenserver/populate_other_config.py [--dry-run] """ import os import sys possible_topdir = os.getcwd() if os.path.exists(os.path.join(possible_topdir, "nova", "__init__.py")): sys.path.insert(0, possible_topdir) from oslo_config import cfg from oslo_utils import uuidutils from nova import config from nova.virt import virtapi from nova.virt.xenapi import driver as xenapi_driver from nova.virt.xenapi import vm_utils
cli_opts = [ cfg.BoolOpt('dry-run', default=False, help='Whether to actually update other_config.'), ] CONF = cfg.CONF CONF.register_cli_opts(cli_opts)
def main(): config.parse_args(sys.argv) xenapi = xenapi_driver.XenAPIDriver(virtapi.VirtAPI()) session = xenapi._session vdi_refs = session.call_xenapi('VDI.get_all') for vdi_ref in vdi_refs: vdi_rec = session.call_xenapi('VDI.get_record', vdi_ref) other_config = vdi_rec['other_config'] # Already set... if 'nova_instance_uuid' in other_config: continue name_label = vdi_rec['name_label'] # We only want name-labels of the form instance-<uuid>[-<optional-suffix>] if not name_label.startswith('instance-'): continue # Parse out UUID instance_uuid = name_label.replace('instance-', '')[:36] if not uuidutils.is_uuid_like(instance_uuid): print("error: name label '%s' wasn't UUID-like" % name_label) continue vdi_type = vdi_rec['name_description'] # We don't need a full instance record, just the UUID instance = {'uuid': instance_uuid} if not CONF.dry_run: vm_utils._set_vdi_info(session, vdi_ref, vdi_type, name_label, vdi_type, instance) print("Setting other_config for instance_uuid=%s vdi_uuid=%s" % ( instance_uuid, vdi_rec['uuid'])) if CONF.dry_run: print("Dry run completed") if __name__ == "__main__": main()
nova-17.0.1/tools/xenserver/rotate_xen_guest_logs.sh0000777000175000017500000000472213250073126022716 0ustar zuulzuul00000000000000#!/bin/bash set -eux # Script to rotate console logs # # Should be run on Dom0, with cron, every minute: # * * * * * /root/rotate_xen_guest_logs.sh # # Should clear out the guest logs on every boot # because the domain ids may get re-used for a # different tenant after the reboot # # /var/log/xen/guest should be mounted into a # small loopback device to stop any guest being # able to fill dom0 file system log_dir="/var/log/xen/guest" kb=1024 max_size_bytes=$(($kb*$kb)) truncated_size_bytes=$((5*$kb)) syslog_tag='rotate_xen_guest_logs' log_file_base="${log_dir}/console." # Only delete log files older than this number of minutes # to avoid a race where Xen creates the domain and starts # logging before the XAPI VM start returns (and allows us # to preserve the log file using last_dom_id) min_logfile_age=10 # Ensure logging is setup correctly for all domains xenstore-write /local/logconsole/@ "${log_file_base}%d" # Grab the list of logs now to prevent a race where the domain is # started after we get the valid last_dom_ids, but before the logs are # deleted.
Add spaces to ensure we can do containment tests below current_logs=$(find "$log_dir" -type f) # Ensure the last_dom_id is set + updated for all running VMs for vm in $(xe vm-list power-state=running --minimal | tr ',' ' '); do xe vm-param-set uuid=$vm other-config:last_dom_id=$(xe vm-param-get uuid=$vm param-name=dom-id) done # Get the last_dom_id for all VMs valid_last_dom_ids=$(xe vm-list params=other-config --minimal | tr ';,' '\n\n' | grep last_dom_id | sed -e 's/last_dom_id: //g' | xargs) echo "Valid dom IDs: $valid_last_dom_ids" | /usr/bin/logger -t $syslog_tag # Remove old console files that do not correspond to valid last_dom_id's allowed_consoles=".*console.\(${valid_last_dom_ids// /\\|}\)$" delete_logs=`find "$log_dir" -type f -mmin +${min_logfile_age} -not -regex "$allowed_consoles"` for log in $delete_logs; do if echo "$current_logs" | grep -q -w "$log"; then echo "Deleting: $log" | /usr/bin/logger -t $syslog_tag rm $log fi done # Truncate all remaining logs for log in `find "$log_dir" -type f -regex '.*console.*' -size +${max_size_bytes}c`; do echo "Truncating log: $log" | /usr/bin/logger -t $syslog_tag tmp="$log.tmp" tail -c $truncated_size_bytes "$log" > "$tmp" mv -f "$tmp" "$log" # Notify xen that it needs to reload the file domid="${log##*.}" xenstore-write /local/logconsole/$domid "$log" xenstore-rm /local/logconsole/$domid done nova-17.0.1/tools/placement_api_docs.py0000666000175000017500000000441713250073126020117 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Test to see if docs exists for routes and methods in the placement API.""" import os import sys from nova.api.openstack.placement import handler # A humane ordering of HTTP methods for sorted output. ORDERED_METHODS = ['GET', 'POST', 'PUT', 'PATCH', 'DELETE'] DEPRECATED_METHODS = [('POST', '/resource_providers/{uuid}/inventories')] def _header_line(map_entry): method, route = map_entry line = '.. rest_method:: %s %s' % (method, route) return line def inspect_doc(doc_files): """Load up doc_files and see if any routes are missing. The routes are defined in handler.ROUTE_DECLARATIONS. """ routes = [] for route in sorted(handler.ROUTE_DECLARATIONS, key=len): # Skip over the '' route. 
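        # (ROUTE_DECLARATIONS maps each route to its per-method handlers, so a
        # ('GET', '/resource_providers') pair renders as the documentation
        # header '.. rest_method:: GET /resource_providers'.)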
if route: for method in ORDERED_METHODS: if method in handler.ROUTE_DECLARATIONS[route]: routes.append((method, route)) header_lines = [] for map_entry in routes: if map_entry not in DEPRECATED_METHODS: header_lines.append(_header_line(map_entry)) content_lines = [] for doc_file in doc_files: with open(doc_file) as doc_fh: content_lines.extend(doc_fh.read().splitlines()) missing_lines = [] for line in header_lines: if line not in content_lines: missing_lines.append(line) if missing_lines: print('Documentation likely missing for the following routes:') for line in missing_lines: print(line) return 1 return 0 if __name__ == '__main__': path = sys.argv[1] doc_files = [os.path.join(path, file) for file in os.listdir(path) if file.endswith(".inc")] sys.exit(inspect_doc(doc_files)) nova-17.0.1/tools/test-setup.sh0000777000175000017500000000350413250073126016404 0ustar zuulzuul00000000000000#!/bin/bash -xe # This script will be run by OpenStack CI before unit tests are run, # it sets up the test system as needed. # Developers should setup their test systems in a similar way. # This setup needs to be run as a user that can run sudo. # The root password for the MySQL database; pass it in via # MYSQL_ROOT_PW. DB_ROOT_PW=${MYSQL_ROOT_PW:-insecure_slave} # This user and its password are used by the tests, if you change it, # your tests might fail. DB_USER=openstack_citest DB_PW=openstack_citest sudo -H mysqladmin -u root password $DB_ROOT_PW # It's best practice to remove anonymous users from the database. If # an anonymous user exists, then it matches first for connections and # other connections from that host will not work. sudo -H mysql -u root -p$DB_ROOT_PW -h localhost -e " DELETE FROM mysql.user WHERE User=''; FLUSH PRIVILEGES; GRANT ALL PRIVILEGES ON *.* TO '$DB_USER'@'%' identified by '$DB_PW' WITH GRANT OPTION;" # Now create our database. 
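# (Connecting as the unprivileged $DB_USER over TCP here, rather than as
# root through the socket, doubles as a check that the GRANT above took
# effect.)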
mysql -u $DB_USER -p$DB_PW -h 127.0.0.1 -e " SET default_storage_engine=MYISAM; DROP DATABASE IF EXISTS openstack_citest; CREATE DATABASE openstack_citest CHARACTER SET utf8;" # Same for PostgreSQL # Setup user root_roles=$(sudo -H -u postgres psql -t -c " SELECT 'HERE' from pg_roles where rolname='$DB_USER'") if [[ ${root_roles} == *HERE ]]; then sudo -H -u postgres psql -c "ALTER ROLE $DB_USER WITH SUPERUSER LOGIN PASSWORD '$DB_PW'" else sudo -H -u postgres psql -c "CREATE ROLE $DB_USER WITH SUPERUSER LOGIN PASSWORD '$DB_PW'" fi # Store password for tests
cat << EOF > $HOME/.pgpass
*:*:*:$DB_USER:$DB_PW
EOF
chmod 0600 $HOME/.pgpass
# Now create our database psql -h 127.0.0.1 -U $DB_USER -d template1 -c "DROP DATABASE IF EXISTS openstack_citest" createdb -h 127.0.0.1 -U $DB_USER -l C -T template0 -E utf8 openstack_citest
nova-17.0.1/tools/build_latex_pdf.sh0000777000175000017500000000154013250073126017412 0ustar zuulzuul00000000000000#!/bin/bash # Build tox venv and use it tox -edocs --notest source .tox/docs/bin/activate # Build latex source sphinx-build -b latex doc/source doc/build/latex pushd doc/build/latex # Workaround all the sphinx latex bugs # Convert svg to png (requires ImageMagick) convert architecture.svg architecture.png # Update the latex to point to the new image, switch unicode chars to latex # markup, and add packages for symbols sed -i -e 's/architecture.svg/architecture.png/g' -e 's/\\code{✔}/\\checkmark/g' -e 's/\\code{✖}/\\ding{54}/g' -e 's/\\usepackage{multirow}/\\usepackage{multirow}\n\\usepackage{amsmath,amssymb,latexsym}\n\\usepackage{pifont}/g' Nova.tex # To run the actual latex build you need to ensure that you have latex # installed; on Ubuntu the texlive-full package will take care of this make deactivate popd cp doc/build/latex/Nova.pdf .
nova-17.0.1/tools/abandon_old_reviews.sh0000777000175000017500000000534613250073126020275 0ustar zuulzuul00000000000000#!/bin/bash # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # # # # before you run this modify your .ssh/config to create a # review.openstack.org entry: # # Host review.openstack.org # User <username> # Port 29418 # # Note: due to gerrit bug somewhere, this double posts messages. :( # first purge all the reviews that are more than 4w old and blocked by a core -2 set -o errexit function abandon_review { local gitid=$1 shift local msg=$@ echo "Abandoning $gitid" # echo ssh review.openstack.org gerrit review $gitid --abandon --message \"$msg\" ssh review.openstack.org gerrit review $gitid --abandon --message \"$msg\" } PROJECTS="(project:openstack/nova OR project:openstack/python-novaclient)" blocked_reviews=$(ssh review.openstack.org "gerrit query --current-patch-set --format json $PROJECTS status:open age:4w label:Code-Review<=-2" | jq .currentPatchSet.revision | grep -v null | sed 's/"//g')
blocked_msg=$(cat <<EOF
This review is > 4 weeks without comment and currently blocked by a core reviewer with a -2. We are abandoning this for now.
Feel free to reactivate the review by pressing the restore button and contacting the reviewer with the -2 on this review to ensure you address their concerns.
EOF
) # For testing, put in a git rev of something you own and uncomment # blocked_reviews="b6c4218ae4d75b86c33fa3d37c27bc23b46b6f0f" for review in $blocked_reviews; do # echo ssh review.openstack.org gerrit review $review --abandon --message \"$msg\" echo "Blocked review $review" abandon_review $review $blocked_msg done # then purge all the reviews that are > 4w with no changes and Jenkins has -1ed failing_reviews=$(ssh review.openstack.org "gerrit query --current-patch-set --format json $PROJECTS status:open age:4w NOT label:Verified>=1,jenkins" | jq .currentPatchSet.revision | grep -v null | sed 's/"//g')
failing_msg=$(cat <<EOF
This review is > 4 weeks without comment, and failed Jenkins the last time it was checked. We are abandoning this for now. Feel free to reactivate the review by pressing the restore button and leaving a 'recheck' comment to get fresh test results.
EOF
) for review in $failing_reviews; do echo "Failing review $review" abandon_review $review $failing_msg done
nova-17.0.1/tools/db/0000775000175000017500000000000013250073472014315 5ustar zuulzuul00000000000000nova-17.0.1/tools/db/schema_diff.py0000777000175000017500000001647413250073126017124 0ustar zuulzuul00000000000000#!/usr/bin/env python # Copyright 2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License.
""" Utility for diff'ing two versions of the DB schema. Each release cycle the plan is to compact all of the migrations from that release into a single file. This is a manual and, unfortunately, error-prone process. To ensure that the schema doesn't change, this tool can be used to diff the compacted DB schema to the original, uncompacted form. The database is specified by providing a SQLAlchemy connection URL WITHOUT the database-name portion (that will be filled in automatically with a temporary database name).
The schema versions are specified by providing a git ref (a branch name or commit hash) and a SQLAlchemy-Migrate version number: Run like: MYSQL: ./tools/db/schema_diff.py mysql+pymysql://root@localhost \ master:latest my_branch:82 POSTGRESQL: ./tools/db/schema_diff.py postgresql://localhost \ master:latest my_branch:82 """ from __future__ import print_function import datetime import glob import os import subprocess import sys from nova.i18n import _ # Dump def dump_db(db_driver, db_name, db_url, migration_version, dump_filename): if not db_url.endswith('/'): db_url += '/' db_url += db_name db_driver.create(db_name) try: _migrate(db_url, migration_version) db_driver.dump(db_name, dump_filename) finally: db_driver.drop(db_name) # Diff def diff_files(filename1, filename2): pipeline = ['diff -U 3 %(filename1)s %(filename2)s' % {'filename1': filename1, 'filename2': filename2}] # Use colordiff if available if subprocess.call(['which', 'colordiff']) == 0: pipeline.append('colordiff') pipeline.append('less -R') cmd = ' | '.join(pipeline) subprocess.check_call(cmd, shell=True) # Database class Mysql(object): def create(self, name): subprocess.check_call(['mysqladmin', '-u', 'root', 'create', name]) def drop(self, name): subprocess.check_call(['mysqladmin', '-f', '-u', 'root', 'drop', name]) def dump(self, name, dump_filename): subprocess.check_call( 'mysqldump -u root %(name)s > %(dump_filename)s' % {'name': name, 'dump_filename': dump_filename}, shell=True) class Postgresql(object): def create(self, name): subprocess.check_call(['createdb', name]) def drop(self, name): subprocess.check_call(['dropdb', name]) def dump(self, name, dump_filename): subprocess.check_call( 'pg_dump %(name)s > %(dump_filename)s' % {'name': name, 'dump_filename': dump_filename}, shell=True) def _get_db_driver_class(db_url): try: return globals()[db_url.split('://')[0].capitalize()] except KeyError: raise Exception(_("database %s not supported") % db_url) # Migrate MIGRATE_REPO = os.path.join(os.getcwd(), "nova/db/sqlalchemy/migrate_repo") def _migrate(db_url, migration_version): earliest_version = _migrate_get_earliest_version() # NOTE(sirp): sqlalchemy-migrate currently cannot handle the skipping of # migration numbers. 
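    # Illustration (numbers hypothetical): if the earliest file in versions/
    # is 216_havana.py, the fresh database is version-controlled at 215 so
    # the upgrade below replays everything from 216 onward.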
_migrate_cmd( db_url, 'version_control', str(earliest_version - 1)) upgrade_cmd = ['upgrade'] if migration_version != 'latest': upgrade_cmd.append(str(migration_version)) _migrate_cmd(db_url, *upgrade_cmd)
def _migrate_cmd(db_url, *cmd): manage_py = os.path.join(MIGRATE_REPO, 'manage.py') args = ['python', manage_py] args += cmd args += ['--repository=%s' % MIGRATE_REPO, '--url=%s' % db_url] subprocess.check_call(args)
def _migrate_get_earliest_version(): versions_glob = os.path.join(MIGRATE_REPO, 'versions', '???_*.py') versions = [] for path in glob.iglob(versions_glob): filename = os.path.basename(path) prefix = filename.split('_', 1)[0] try: version = int(prefix) except ValueError: continue versions.append(version) versions.sort() return versions[0]
# Git def git_current_branch_name(): ref_name = git_symbolic_ref('HEAD', quiet=True) current_branch_name = ref_name.replace('refs/heads/', '') return current_branch_name
def git_symbolic_ref(ref, quiet=False): args = ['git', 'symbolic-ref', ref] if quiet: args.append('-q') proc = subprocess.Popen(args, stdout=subprocess.PIPE) stdout, stderr = proc.communicate() return stdout.strip()
def git_checkout(branch_name): subprocess.check_call(['git', 'checkout', branch_name])
def git_has_uncommitted_changes(): return subprocess.call(['git', 'diff', '--quiet', '--exit-code']) == 1
# Command def die(msg): print("ERROR: %s" % msg, file=sys.stderr) sys.exit(1)
def usage(msg=None): if msg: print("ERROR: %s" % msg, file=sys.stderr) prog = "schema_diff.py" args = ["<db_url>", "<orig_branch:orig_version>", "<new_branch:new_version>"] print("usage: %s %s" % (prog, ' '.join(args)), file=sys.stderr) sys.exit(1)
def parse_options(): try: db_url = sys.argv[1] except IndexError: usage("must specify DB connection url") try: orig_branch, orig_version = sys.argv[2].split(':') except (IndexError, ValueError): usage('original branch and version required (e.g. master:82)') try: new_branch, new_version = sys.argv[3].split(':') except (IndexError, ValueError): usage('new branch and version required (e.g. master:82)') return db_url, orig_branch, orig_version, new_branch, new_version
def main(): timestamp = datetime.datetime.utcnow().strftime("%Y%m%d_%H%M%S") ORIG_DB = 'orig_db_%s' % timestamp NEW_DB = 'new_db_%s' % timestamp ORIG_DUMP = ORIG_DB + ".dump" NEW_DUMP = NEW_DB + ".dump" options = parse_options() db_url, orig_branch, orig_version, new_branch, new_version = options # Since we're going to be switching branches, ensure user doesn't have any # uncommitted changes if git_has_uncommitted_changes(): die("You have uncommitted changes. Please commit them before running " "this command.") db_driver = _get_db_driver_class(db_url)() users_branch = git_current_branch_name() git_checkout(orig_branch) try: # Dump Original Schema dump_db(db_driver, ORIG_DB, db_url, orig_version, ORIG_DUMP) # Dump New Schema git_checkout(new_branch) dump_db(db_driver, NEW_DB, db_url, new_version, NEW_DUMP) diff_files(ORIG_DUMP, NEW_DUMP) finally: git_checkout(users_branch) if os.path.exists(ORIG_DUMP): os.unlink(ORIG_DUMP) if os.path.exists(NEW_DUMP): os.unlink(NEW_DUMP) if __name__ == "__main__": main()
nova-17.0.1/tools/reserve-migrations.py0000777000175000017500000000464413250073126020140 0ustar zuulzuul00000000000000#!/usr/bin/env python import argparse import glob import os import subprocess BASE = 'nova/db/sqlalchemy/migrate_repo/versions'.split('/') API_BASE = 'nova/db/sqlalchemy/api_migrations/migrate_repo/versions'.split('/') STUB = \ """# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License.
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # This is a placeholder for backports. # Do not use this number for new work. New work starts after # all the placeholders. # # See this for more information: # http://lists.openstack.org/pipermail/openstack-dev/2013-March/006827.html def upgrade(migrate_engine): pass """ def get_last_migration(base): path = os.path.join(*tuple(base + ['[0-9]*.py'])) migrations = sorted([os.path.split(fn)[-1] for fn in glob.glob(path)]) return int(migrations[-1].split('_')[0]) def reserve_migrations(base, number, git_add): last = get_last_migration(base) for i in range(last + 1, last + number + 1): name = '%03i_placeholder.py' % i path = os.path.join(*tuple(base + [name])) with open(path, 'w') as f: f.write(STUB) print('Created %s' % path) if git_add: subprocess.call('git add %s' % path, shell=True) def main(): parser = argparse.ArgumentParser() parser.add_argument('-n', '--number', default=10, type=int, help='Number of migrations to reserve') parser.add_argument('-g', '--git-add', action='store_const', const=True, default=False, help='Automatically git-add new migrations') parser.add_argument('-a', '--api', action='store_const', const=True, default=False, help='Reserve migrations for the API database') args = parser.parse_args() if args.api: base = API_BASE else: base = BASE reserve_migrations(base, args.number, args.git_add) if __name__ == '__main__': main() nova-17.0.1/nova/0000775000175000017500000000000013250073471013532 5ustar zuulzuul00000000000000nova-17.0.1/nova/block_device.py0000666000175000017500000005070213250073126016520 0ustar zuulzuul00000000000000# Copyright 2011 Isaku Yamahata # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import re from oslo_log import log as logging from oslo_utils import strutils import nova.conf from nova import exception from nova.i18n import _ from nova import utils from nova.virt import driver CONF = nova.conf.CONF LOG = logging.getLogger(__name__) DEFAULT_ROOT_DEV_NAME = '/dev/sda1' _DEFAULT_MAPPINGS = {'ami': 'sda1', 'ephemeral0': 'sda2', 'root': DEFAULT_ROOT_DEV_NAME, 'swap': 'sda3'} bdm_legacy_fields = set(['device_name', 'delete_on_termination', 'virtual_name', 'snapshot_id', 'volume_id', 'volume_size', 'no_device', 'connection_info']) bdm_new_fields = set(['source_type', 'destination_type', 'guest_format', 'device_type', 'disk_bus', 'boot_index', 'device_name', 'delete_on_termination', 'snapshot_id', 'volume_id', 'volume_size', 'image_id', 'no_device', 'connection_info', 'tag']) bdm_db_only_fields = set(['id', 'instance_uuid', 'attachment_id', 'uuid']) bdm_db_inherited_fields = set(['created_at', 'updated_at', 'deleted_at', 'deleted']) class BlockDeviceDict(dict): """Represents a Block Device Mapping in Nova.""" _fields = bdm_new_fields _db_only_fields = (bdm_db_only_fields | bdm_db_inherited_fields) _required_fields = set(['source_type']) def __init__(self, bdm_dict=None, do_not_default=None, **kwargs): super(BlockDeviceDict, self).__init__() bdm_dict = bdm_dict or {} bdm_dict.update(kwargs) do_not_default = do_not_default or set() self._validate(bdm_dict) if bdm_dict.get('device_name'): bdm_dict['device_name'] = prepend_dev(bdm_dict['device_name']) bdm_dict['delete_on_termination'] = bool( bdm_dict.get('delete_on_termination')) # NOTE (ndipanov): Never default db fields self.update({field: None for field in self._fields - do_not_default}) self.update(bdm_dict.items()) def _validate(self, bdm_dict): """Basic data format validations.""" dict_fields = set(key for key, _ in bdm_dict.items()) valid_fields = self._fields | self._db_only_fields # Check that there are no bogus fields if not (dict_fields <= valid_fields): raise exception.InvalidBDMFormat( details=("Following fields are invalid: %s" % " ".join(dict_fields - valid_fields))) if bdm_dict.get('no_device'): return # Check that all required fields are there if (self._required_fields and not ((dict_fields & self._required_fields) == self._required_fields)): raise exception.InvalidBDMFormat( details=_("Some required fields are missing")) if 'delete_on_termination' in bdm_dict: bdm_dict['delete_on_termination'] = strutils.bool_from_string( bdm_dict['delete_on_termination']) if bdm_dict.get('device_name') is not None: validate_device_name(bdm_dict['device_name']) validate_and_default_volume_size(bdm_dict) if bdm_dict.get('boot_index'): try: bdm_dict['boot_index'] = int(bdm_dict['boot_index']) except ValueError: raise exception.InvalidBDMFormat( details=_("Boot index is invalid.")) @classmethod def from_legacy(cls, legacy_bdm): copy_over_fields = bdm_legacy_fields & bdm_new_fields copy_over_fields |= (bdm_db_only_fields | bdm_db_inherited_fields) # NOTE (ndipanov): These fields cannot be computed # from legacy bdm, so do not default them # to avoid overwriting meaningful values in the db non_computable_fields = set(['boot_index', 'disk_bus', 'guest_format', 'device_type']) new_bdm = {fld: val for fld, val in legacy_bdm.items() if fld in copy_over_fields} virt_name = legacy_bdm.get('virtual_name') if is_swap_or_ephemeral(virt_name): new_bdm['source_type'] = 'blank' new_bdm['delete_on_termination'] = True new_bdm['destination_type'] = 'local' if virt_name == 'swap': new_bdm['guest_format'] = 'swap' else: new_bdm['guest_format'] = 
CONF.default_ephemeral_format elif legacy_bdm.get('snapshot_id'): new_bdm['source_type'] = 'snapshot' new_bdm['destination_type'] = 'volume' elif legacy_bdm.get('volume_id'): new_bdm['source_type'] = 'volume' new_bdm['destination_type'] = 'volume' elif legacy_bdm.get('no_device'): # NOTE (ndipanov): Just keep the BDM for now, pass else: raise exception.InvalidBDMFormat( details=_("Unrecognized legacy format.")) return cls(new_bdm, non_computable_fields) @classmethod def from_api(cls, api_dict, image_uuid_specified): """Transform the API format of data to the internally used one. Only validate if the source_type field makes sense. """ if not api_dict.get('no_device'): source_type = api_dict.get('source_type') device_uuid = api_dict.get('uuid') destination_type = api_dict.get('destination_type') if source_type == 'blank' and device_uuid: raise exception.InvalidBDMFormat( details=_("Invalid device UUID.")) elif source_type != 'blank': if not device_uuid: raise exception.InvalidBDMFormat( details=_("Missing device UUID.")) api_dict[source_type + '_id'] = device_uuid if source_type == 'image' and destination_type == 'local': # NOTE(mriedem): boot_index can be None so we need to # account for that to avoid a TypeError. boot_index = api_dict.get('boot_index', -1) if boot_index is None: # boot_index=None is equivalent to -1. boot_index = -1 boot_index = int(boot_index) # if this bdm is generated from --image ,then # source_type = image and destination_type = local is allowed if not (image_uuid_specified and boot_index == 0): raise exception.InvalidBDMFormat( details=_("Mapping image to local is not supported.")) api_dict.pop('uuid', None) return cls(api_dict) def legacy(self): copy_over_fields = bdm_legacy_fields - set(['virtual_name']) copy_over_fields |= (bdm_db_only_fields | bdm_db_inherited_fields) legacy_block_device = {field: self.get(field) for field in copy_over_fields if field in self} source_type = self.get('source_type') destination_type = self.get('destination_type') no_device = self.get('no_device') if source_type == 'blank': if self['guest_format'] == 'swap': legacy_block_device['virtual_name'] = 'swap' else: # NOTE (ndipanov): Always label as 0, it is up to # the calling routine to re-enumerate them legacy_block_device['virtual_name'] = 'ephemeral0' elif source_type in ('volume', 'snapshot') or no_device: legacy_block_device['virtual_name'] = None elif source_type == 'image': if destination_type != 'volume': # NOTE(ndipanov): Image bdms with local destination # have no meaning in the legacy format - raise raise exception.InvalidBDMForLegacy() legacy_block_device['virtual_name'] = None return legacy_block_device def get_image_mapping(self): drop_fields = (set(['connection_info']) | self._db_only_fields) mapping_dict = dict(self) for fld in drop_fields: mapping_dict.pop(fld, None) return mapping_dict def is_safe_for_update(block_device_dict): """Determine if passed dict is a safe subset for update. Safe subset in this case means a safe subset of both legacy and new versions of data, that can be passed to an UPDATE query without any transformation. """ fields = set(block_device_dict.keys()) return fields <= (bdm_new_fields | bdm_db_inherited_fields | bdm_db_only_fields) def create_image_bdm(image_ref, boot_index=0): """Create a block device dict based on the image_ref. 
This is useful in the API layer to keep the compatibility with having an image_ref as a field in the instance requests """ return BlockDeviceDict( {'source_type': 'image', 'image_id': image_ref, 'delete_on_termination': True, 'boot_index': boot_index, 'device_type': 'disk', 'destination_type': 'local'}) def create_blank_bdm(size, guest_format=None): return BlockDeviceDict( {'source_type': 'blank', 'delete_on_termination': True, 'device_type': 'disk', 'boot_index': -1, 'destination_type': 'local', 'guest_format': guest_format, 'volume_size': size}) def snapshot_from_bdm(snapshot_id, template): """Create a basic volume snapshot BDM from a given template bdm.""" copy_from_template = ('disk_bus', 'device_type', 'boot_index', 'delete_on_termination', 'volume_size', 'device_name') snapshot_dict = {'source_type': 'snapshot', 'destination_type': 'volume', 'snapshot_id': snapshot_id} for key in copy_from_template: snapshot_dict[key] = template.get(key) return BlockDeviceDict(snapshot_dict) def legacy_mapping(block_device_mapping): """Transform a list of block devices of an instance back to the legacy data format. """ legacy_block_device_mapping = [] for bdm in block_device_mapping: try: legacy_block_device = BlockDeviceDict(bdm).legacy() except exception.InvalidBDMForLegacy: continue legacy_block_device_mapping.append(legacy_block_device) # Re-enumerate the ephemeral devices for i, dev in enumerate(dev for dev in legacy_block_device_mapping if dev['virtual_name'] and is_ephemeral(dev['virtual_name'])): dev['virtual_name'] = dev['virtual_name'][:-1] + str(i) return legacy_block_device_mapping def from_legacy_mapping(legacy_block_device_mapping, image_uuid='', root_device_name=None, no_root=False): """Transform a legacy list of block devices to the new data format.""" new_bdms = [BlockDeviceDict.from_legacy(legacy_bdm) for legacy_bdm in legacy_block_device_mapping] # NOTE (ndipanov): We will not decide which device is root here - we assume # that it will be supplied later. This is useful for having the root device # as part of the image defined mappings that are already in the v2 format. if no_root: for bdm in new_bdms: bdm['boot_index'] = -1 return new_bdms image_bdm = None volume_backed = False # Try to assign boot_device if not root_device_name and not image_uuid: # NOTE (ndipanov): If there is no root_device, pick the first non # blank one. non_blank = [bdm for bdm in new_bdms if bdm['source_type'] != 'blank'] if non_blank: non_blank[0]['boot_index'] = 0 else: for bdm in new_bdms: if (bdm['source_type'] in ('volume', 'snapshot', 'image') and root_device_name is not None and (strip_dev(bdm.get('device_name')) == strip_dev(root_device_name))): bdm['boot_index'] = 0 volume_backed = True elif not bdm['no_device']: bdm['boot_index'] = -1 else: bdm['boot_index'] = None if not volume_backed and image_uuid: image_bdm = create_image_bdm(image_uuid, boot_index=0) return ([image_bdm] if image_bdm else []) + new_bdms def properties_root_device_name(properties): """Get root device name from image meta data. If it isn't specified, return None. 
""" root_device_name = None # NOTE(yamahata): see image_service.s3.s3create() for bdm in properties.get('mappings', []): if bdm['virtual'] == 'root': root_device_name = bdm['device'] # NOTE(yamahata): register_image's command line can override # .manifest.xml if 'root_device_name' in properties: root_device_name = properties['root_device_name'] return root_device_name def validate_device_name(value): try: # NOTE (ndipanov): Do not allow empty device names # until assigning default values # are supported by nova.compute utils.check_string_length(value, 'Device name', min_length=1, max_length=255) except exception.InvalidInput: raise exception.InvalidBDMFormat( details=_("Device name empty or too long.")) if ' ' in value: raise exception.InvalidBDMFormat( details=_("Device name contains spaces.")) def validate_and_default_volume_size(bdm): if bdm.get('volume_size'): try: bdm['volume_size'] = utils.validate_integer( bdm['volume_size'], 'volume_size', min_value=0) except exception.InvalidInput: # NOTE: We can remove this validation code after removing # Nova v2.0 API code, because v2.1 API validates this case # already at its REST API layer. raise exception.InvalidBDMFormat( details=_("Invalid volume_size.")) _ephemeral = re.compile('^ephemeral(\d|[1-9]\d+)$') def is_ephemeral(device_name): return _ephemeral.match(device_name) is not None def ephemeral_num(ephemeral_name): assert is_ephemeral(ephemeral_name) return int(_ephemeral.sub('\\1', ephemeral_name)) def is_swap_or_ephemeral(device_name): return (device_name and (device_name == 'swap' or is_ephemeral(device_name))) def new_format_is_swap(bdm): if (bdm.get('source_type') == 'blank' and bdm.get('destination_type') == 'local' and bdm.get('guest_format') == 'swap'): return True return False def new_format_is_ephemeral(bdm): if (bdm.get('source_type') == 'blank' and bdm.get('destination_type') == 'local' and bdm.get('guest_format') != 'swap'): return True return False def get_root_bdm(bdms): try: return next(bdm for bdm in bdms if bdm.get('boot_index', -1) == 0) except StopIteration: return None def get_bdms_to_connect(bdms, exclude_root_mapping=False): """Will return non-root mappings, when exclude_root_mapping is true. Otherwise all mappings will be returned. 
""" return (bdm for bdm in bdms if bdm.get('boot_index', -1) != 0 or not exclude_root_mapping) def mappings_prepend_dev(mappings): """Prepend '/dev/' to 'device' entry of swap/ephemeral virtual type.""" for m in mappings: virtual = m['virtual'] if (is_swap_or_ephemeral(virtual) and (not m['device'].startswith('/'))): m['device'] = '/dev/' + m['device'] return mappings _dev = re.compile('^/dev/') def strip_dev(device_name): """remove leading '/dev/'.""" return _dev.sub('', device_name) if device_name else device_name def prepend_dev(device_name): """Make sure there is a leading '/dev/'.""" return device_name and '/dev/' + strip_dev(device_name) _pref = re.compile('^((x?v|s|h)d)') def strip_prefix(device_name): """remove both leading /dev/ and xvd or sd or vd or hd.""" device_name = strip_dev(device_name) return _pref.sub('', device_name) if device_name else device_name _nums = re.compile('\d+') def get_device_letter(device_name): letter = strip_prefix(device_name) # NOTE(vish): delete numbers in case we have something like # /dev/sda1 return _nums.sub('', letter) if device_name else device_name def instance_block_mapping(instance, bdms): root_device_name = instance['root_device_name'] # NOTE(clayg): remove this when xenapi is setting default_root_device if root_device_name is None: if driver.is_xenapi(): root_device_name = '/dev/xvda' else: return _DEFAULT_MAPPINGS mappings = {} mappings['ami'] = strip_dev(root_device_name) mappings['root'] = root_device_name default_ephemeral_device = instance.get('default_ephemeral_device') if default_ephemeral_device: mappings['ephemeral0'] = default_ephemeral_device default_swap_device = instance.get('default_swap_device') if default_swap_device: mappings['swap'] = default_swap_device ebs_devices = [] blanks = [] # 'ephemeralN', 'swap' and ebs for bdm in bdms: # ebs volume case if bdm.destination_type == 'volume': ebs_devices.append(bdm.device_name) continue if bdm.source_type == 'blank': blanks.append(bdm) # NOTE(yamahata): I'm not sure how ebs device should be numbered. # Right now sort by device name for deterministic # result. if ebs_devices: # NOTE(claudiub): python2.7 sort places None values first. # this sort will maintain the same behaviour for both py27 and py34. 
ebs_devices = sorted(ebs_devices, key=lambda x: (x is not None, x)) for nebs, ebs in enumerate(ebs_devices): mappings['ebs%d' % nebs] = ebs swap = [bdm for bdm in blanks if bdm.guest_format == 'swap'] if swap: mappings['swap'] = swap.pop().device_name ephemerals = [bdm for bdm in blanks if bdm.guest_format != 'swap'] if ephemerals: for num, eph in enumerate(ephemerals): mappings['ephemeral%d' % num] = eph.device_name return mappings def match_device(device): """Matches device name and returns prefix, suffix.""" match = re.match("(^/dev/x{0,1}[a-z]{0,1}d{0,1})([a-z]+)[0-9]*$", device) if not match: return None return match.groups() def volume_in_mapping(mount_device, block_device_info): block_device_list = [strip_dev(vol['mount_device']) for vol in driver.block_device_info_get_mapping( block_device_info)] swap = driver.block_device_info_get_swap(block_device_info) if driver.swap_is_usable(swap): block_device_list.append(strip_dev(swap['device_name'])) block_device_list += [strip_dev(ephemeral['device_name']) for ephemeral in driver.block_device_info_get_ephemerals( block_device_info)] LOG.debug("block_device_list %s", sorted(filter(None, block_device_list))) return strip_dev(mount_device) in block_device_list def get_bdm_ephemeral_disk_size(block_device_mappings): return sum(bdm.get('volume_size', 0) for bdm in block_device_mappings if new_format_is_ephemeral(bdm)) def get_bdm_swap_list(block_device_mappings): return [bdm for bdm in block_device_mappings if new_format_is_swap(bdm)] def get_bdm_local_disk_num(block_device_mappings): return len([bdm for bdm in block_device_mappings if bdm.get('destination_type') == 'local']) nova-17.0.1/nova/service_auth.py0000666000175000017500000000342013250073126016563 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from keystoneauth1 import loading as ks_loading from keystoneauth1 import service_token from oslo_log import log as logging import nova.conf CONF = nova.conf.CONF LOG = logging.getLogger(__name__) _SERVICE_AUTH = None def reset_globals(): """For async unit test consistency.""" global _SERVICE_AUTH _SERVICE_AUTH = None def get_auth_plugin(context): user_auth = context.get_auth_plugin() if CONF.service_user.send_service_user_token: global _SERVICE_AUTH if not _SERVICE_AUTH: _SERVICE_AUTH = ks_loading.load_auth_from_conf_options( CONF, group= nova.conf.service_token.SERVICE_USER_GROUP) if _SERVICE_AUTH is None: # This indicates a misconfiguration so log a warning and # return the user_auth. LOG.warning('Unable to load auth from [service_user] ' 'configuration. 
Ensure "auth_type" is set.') return user_auth return service_token.ServiceTokenAuthWrapper( user_auth=user_auth, service_auth=_SERVICE_AUTH) return user_auth nova-17.0.1/nova/volume/0000775000175000017500000000000013250073472015042 5ustar zuulzuul00000000000000nova-17.0.1/nova/volume/cinder.py0000666000175000017500000010025413250073126016660 0ustar zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Handles all requests relating to volumes + cinder. """ import collections import copy import functools import sys from cinderclient import api_versions as cinder_api_versions from cinderclient import client as cinder_client from cinderclient import exceptions as cinder_exception from keystoneauth1 import exceptions as keystone_exception from keystoneauth1 import loading as ks_loading from oslo_log import log as logging from oslo_utils import encodeutils from oslo_utils import excutils from oslo_utils import strutils import six from nova import availability_zones as az import nova.conf from nova import exception from nova.i18n import _ from nova.i18n import _LE from nova.i18n import _LW from nova import service_auth CONF = nova.conf.CONF LOG = logging.getLogger(__name__) _ADMIN_AUTH = None _SESSION = None def reset_globals(): """Testing method to reset globals. """ global _ADMIN_AUTH global _SESSION _ADMIN_AUTH = None _SESSION = None def _load_auth_plugin(conf): auth_plugin = ks_loading.load_auth_from_conf_options(conf, nova.conf.cinder.cinder_group.name) if auth_plugin: return auth_plugin err_msg = _('Unknown auth type: %s') % conf.cinder.auth_type raise cinder_exception.Unauthorized(401, message=err_msg) def _check_microversion(url, microversion): """Checks to see if the requested microversion is supported by the current version of python-cinderclient and the volume API endpoint. :param url: Cinder API endpoint URL. :param microversion: Requested microversion. If not available at the given API endpoint URL, a CinderAPIVersionNotAvailable exception is raised. :returns: The microversion if it is available. This can be used to construct the cinder v3 client object. :raises: CinderAPIVersionNotAvailable if the microversion is not available. """ max_api_version = cinder_client.get_highest_client_server_version(url) # get_highest_client_server_version returns a float which we need to cast # to a str and create an APIVersion object to do our version comparison. max_api_version = cinder_api_versions.APIVersion(str(max_api_version)) # Check if the max_api_version matches the requested minimum microversion. if max_api_version.matches(microversion): # The requested microversion is supported by the client and the server. 
        return microversion
    raise exception.CinderAPIVersionNotAvailable(version=microversion)


def _get_cinderclient_parameters(context):
    global _ADMIN_AUTH
    global _SESSION

    if not _SESSION:
        _SESSION = ks_loading.load_session_from_conf_options(
            CONF, nova.conf.cinder.cinder_group.name)

    # NOTE(lixipeng): The auth token is None when the cinder API is called
    # from compute periodic tasks, because their context is generated from
    # 'context.get_admin_context', which only sets is_admin=True and
    # carries no token. So load the auth plugin when this condition occurs.
    if context.is_admin and not context.auth_token:
        if not _ADMIN_AUTH:
            _ADMIN_AUTH = _load_auth_plugin(CONF)
        auth = _ADMIN_AUTH
    else:
        auth = service_auth.get_auth_plugin(context)

    url = None

    service_type, service_name, interface = CONF.cinder.catalog_info.split(':')

    service_parameters = {'service_type': service_type,
                          'service_name': service_name,
                          'interface': interface,
                          'region_name': CONF.cinder.os_region_name}

    if CONF.cinder.endpoint_template:
        url = CONF.cinder.endpoint_template % context.to_dict()
    else:
        url = _SESSION.get_endpoint(auth, **service_parameters)

    return auth, service_parameters, url


def is_microversion_supported(context, microversion):
    _, _, url = _get_cinderclient_parameters(context)

    _check_microversion(url, microversion)


def cinderclient(context, microversion=None, skip_version_check=False):
    """Constructs a cinder client object for making API requests.

    :param context: The nova request context for auth.
    :param microversion: Optional microversion to check against the client.
        This implies that Cinder v3 is required for any calls that require a
        microversion. If the microversion is not available, this method will
        raise a CinderAPIVersionNotAvailable exception.
    :param skip_version_check: If True and a specific microversion is
        requested, the version discovery check is skipped and the
        microversion is used directly. This should only be used if a previous
        check for the same microversion was successful.
    """
    endpoint_override = None
    auth, service_parameters, url = _get_cinderclient_parameters(context)

    if CONF.cinder.endpoint_template:
        endpoint_override = url

    # TODO(jamielennox): This should be using proper version discovery from
    # the cinder service rather than just inspecting the URL for certain
    # string values.
    version = cinder_client.get_volume_api_from_url(url)

    if version != '3':
        raise exception.UnsupportedCinderAPIVersion(version=version)

    version = '3.0'
    # Check to see if a specific microversion is requested and, if so,
    # whether it can be handled by the backing server.
    if microversion is not None:
        if skip_version_check:
            version = microversion
        else:
            version = _check_microversion(url, microversion)

    return cinder_client.Client(version,
                                session=_SESSION,
                                auth=auth,
                                endpoint_override=endpoint_override,
                                connect_retries=CONF.cinder.http_retries,
                                global_request_id=context.global_id,
                                **service_parameters)


def _untranslate_volume_summary_view(context, vol):
    """Maps keys for volumes summary view."""
    d = {}
    d['id'] = vol.id
    d['status'] = vol.status
    d['size'] = vol.size
    d['availability_zone'] = vol.availability_zone
    d['created_at'] = vol.created_at

    # TODO(jdg): The calling code expects attach_time and
    # mountpoint to be set. When the calling
    # code is more defensive this can be
    # removed.
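    # A rough sketch of the resulting summary view (illustrative values,
    # not an exhaustive key list):
    #
    #   {'id': vol.id, 'status': 'available', 'size': 1,
    #    'attach_status': 'detached', 'display_name': 'my-vol',
    #    'bootable': False, 'volume_metadata': {}}
    #
    # The attach_time and mountpoint stubs below exist per the TODO above.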
d['attach_time'] = "" d['mountpoint'] = "" d['multiattach'] = getattr(vol, 'multiattach', False) if vol.attachments: d['attachments'] = collections.OrderedDict() for attachment in vol.attachments: a = {attachment['server_id']: {'attachment_id': attachment.get('attachment_id'), 'mountpoint': attachment.get('device')} } d['attachments'].update(a.items()) d['attach_status'] = 'attached' else: d['attach_status'] = 'detached' d['display_name'] = vol.name d['display_description'] = vol.description # TODO(jdg): Information may be lost in this translation d['volume_type_id'] = vol.volume_type d['snapshot_id'] = vol.snapshot_id d['bootable'] = strutils.bool_from_string(vol.bootable) d['volume_metadata'] = {} for key, value in vol.metadata.items(): d['volume_metadata'][key] = value if hasattr(vol, 'volume_image_metadata'): d['volume_image_metadata'] = copy.deepcopy(vol.volume_image_metadata) # The 3.48 microversion exposes a shared_targets boolean and service_uuid # string parameter which can be used with locks during volume attach # and detach. if hasattr(vol, 'shared_targets'): d['shared_targets'] = vol.shared_targets d['service_uuid'] = vol.service_uuid return d def _untranslate_snapshot_summary_view(context, snapshot): """Maps keys for snapshots summary view.""" d = {} d['id'] = snapshot.id d['status'] = snapshot.status d['progress'] = snapshot.progress d['size'] = snapshot.size d['created_at'] = snapshot.created_at d['display_name'] = snapshot.name d['display_description'] = snapshot.description d['volume_id'] = snapshot.volume_id d['project_id'] = snapshot.project_id d['volume_size'] = snapshot.size return d def _translate_attachment_ref(attachment_ref): """Building old style connection_info by adding the 'data' key back.""" translated_con_info = {} connection_info_data = attachment_ref.pop('connection_info', None) if connection_info_data: connection_info_data.pop('attachment_id', None) translated_con_info['driver_volume_type'] = \ connection_info_data.pop('driver_volume_type', None) translated_con_info['data'] = connection_info_data translated_con_info['status'] = attachment_ref.pop('status', None) translated_con_info['instance'] = attachment_ref.pop('instance', None) translated_con_info['attach_mode'] = attachment_ref.pop('attach_mode', None) translated_con_info['attached_at'] = attachment_ref.pop('attached_at', None) translated_con_info['detached_at'] = attachment_ref.pop('detached_at', None) # Now the catch all... for k, v in attachment_ref.items(): if k != "id": translated_con_info[k] = v attachment_ref['connection_info'] = translated_con_info return attachment_ref def translate_cinder_exception(method): """Transforms a cinder exception but keeps its traceback intact.""" @functools.wraps(method) def wrapper(self, ctx, *args, **kwargs): try: res = method(self, ctx, *args, **kwargs) except (cinder_exception.ConnectionError, keystone_exception.ConnectionError) as exc: err_msg = encodeutils.exception_to_unicode(exc) _reraise(exception.CinderConnectionFailed(reason=err_msg)) except (keystone_exception.BadRequest, cinder_exception.BadRequest) as exc: err_msg = encodeutils.exception_to_unicode(exc) _reraise(exception.InvalidInput(reason=err_msg)) except (keystone_exception.Forbidden, cinder_exception.Forbidden) as exc: err_msg = encodeutils.exception_to_unicode(exc) _reraise(exception.Forbidden(err_msg)) return res return wrapper def translate_volume_exception(method): """Transforms the exception for the volume but keeps its traceback intact. 
""" def wrapper(self, ctx, volume_id, *args, **kwargs): try: res = method(self, ctx, volume_id, *args, **kwargs) except (keystone_exception.NotFound, cinder_exception.NotFound): _reraise(exception.VolumeNotFound(volume_id=volume_id)) except cinder_exception.OverLimit as e: _reraise(exception.OverQuota(message=e.message)) return res return translate_cinder_exception(wrapper) def translate_attachment_exception(method): """Transforms the exception for the attachment but keeps its traceback intact. """ def wrapper(self, ctx, attachment_id, *args, **kwargs): try: res = method(self, ctx, attachment_id, *args, **kwargs) except (keystone_exception.NotFound, cinder_exception.NotFound): _reraise(exception.VolumeAttachmentNotFound( attachment_id=attachment_id)) return res return translate_cinder_exception(wrapper) def translate_snapshot_exception(method): """Transforms the exception for the snapshot but keeps its traceback intact. """ def wrapper(self, ctx, snapshot_id, *args, **kwargs): try: res = method(self, ctx, snapshot_id, *args, **kwargs) except (keystone_exception.NotFound, cinder_exception.NotFound): _reraise(exception.SnapshotNotFound(snapshot_id=snapshot_id)) return res return translate_cinder_exception(wrapper) def translate_mixed_exceptions(method): """Transforms exceptions that can come from both volumes and snapshots.""" def wrapper(self, ctx, res_id, *args, **kwargs): try: res = method(self, ctx, res_id, *args, **kwargs) except (keystone_exception.NotFound, cinder_exception.NotFound): _reraise(exception.VolumeNotFound(volume_id=res_id)) except cinder_exception.OverLimit: _reraise(exception.OverQuota(overs='snapshots')) return res return translate_cinder_exception(wrapper) def _reraise(desired_exc): six.reraise(type(desired_exc), desired_exc, sys.exc_info()[2]) class API(object): """API for interacting with the volume manager.""" @translate_volume_exception def get(self, context, volume_id, microversion=None): """Get the details about a volume given it's ID. :param context: the nova request context :param volume_id: the id of the volume to get :param microversion: optional string microversion value :raises: CinderAPIVersionNotAvailable if the specified microversion is not available. """ item = cinderclient( context, microversion=microversion).volumes.get(volume_id) return _untranslate_volume_summary_view(context, item) @translate_cinder_exception def get_all(self, context, search_opts=None): search_opts = search_opts or {} items = cinderclient(context).volumes.list(detailed=True, search_opts=search_opts) rval = [] for item in items: rval.append(_untranslate_volume_summary_view(context, item)) return rval def check_attached(self, context, volume): if volume['status'] != "in-use": msg = _("volume '%(vol)s' status must be 'in-use'. Currently in " "'%(status)s' status") % {"vol": volume['id'], "status": volume['status']} raise exception.InvalidVolume(reason=msg) def check_availability_zone(self, context, volume, instance=None): """Ensure that the availability zone is the same.""" # TODO(walter-boring): move this check to Cinder as part of # the reserve call. if instance and not CONF.cinder.cross_az_attach: instance_az = az.get_instance_availability_zone(context, instance) if instance_az != volume['availability_zone']: msg = _("Instance %(instance)s and volume %(vol)s are not in " "the same availability_zone. Instance is in " "%(ins_zone)s. 
Volume is in %(vol_zone)s") % { "instance": instance.uuid, "vol": volume['id'], 'ins_zone': instance_az, 'vol_zone': volume['availability_zone']} raise exception.InvalidVolume(reason=msg) @translate_volume_exception def reserve_volume(self, context, volume_id): cinderclient(context).volumes.reserve(volume_id) @translate_volume_exception def unreserve_volume(self, context, volume_id): cinderclient(context).volumes.unreserve(volume_id) @translate_volume_exception def begin_detaching(self, context, volume_id): cinderclient(context).volumes.begin_detaching(volume_id) @translate_volume_exception def roll_detaching(self, context, volume_id): cinderclient(context).volumes.roll_detaching(volume_id) @translate_volume_exception def attach(self, context, volume_id, instance_uuid, mountpoint, mode='rw'): cinderclient(context).volumes.attach(volume_id, instance_uuid, mountpoint, mode=mode) @translate_volume_exception def detach(self, context, volume_id, instance_uuid=None, attachment_id=None): client = cinderclient(context) if attachment_id is None: volume = self.get(context, volume_id) if volume['multiattach']: attachments = volume.get('attachments', {}) if instance_uuid: attachment_id = attachments.get(instance_uuid, {}).\ get('attachment_id') if not attachment_id: LOG.warning(_LW("attachment_id couldn't be retrieved " "for volume %(volume_id)s with " "instance_uuid %(instance_id)s. The " "volume has the 'multiattach' flag " "enabled, without the attachment_id " "Cinder most probably cannot perform " "the detach."), {'volume_id': volume_id, 'instance_id': instance_uuid}) else: LOG.warning(_LW("attachment_id couldn't be retrieved for " "volume %(volume_id)s. The volume has the " "'multiattach' flag enabled, without the " "attachment_id Cinder most probably " "cannot perform the detach."), {'volume_id': volume_id}) client.volumes.detach(volume_id, attachment_id) @translate_volume_exception def initialize_connection(self, context, volume_id, connector): try: connection_info = cinderclient( context).volumes.initialize_connection(volume_id, connector) connection_info['connector'] = connector return connection_info except cinder_exception.ClientException as ex: with excutils.save_and_reraise_exception(): LOG.error(_LE('Initialize connection failed for volume ' '%(vol)s on host %(host)s. Error: %(msg)s ' 'Code: %(code)s. Attempting to terminate ' 'connection.'), {'vol': volume_id, 'host': connector.get('host'), 'msg': six.text_type(ex), 'code': ex.code}) try: self.terminate_connection(context, volume_id, connector) except Exception as exc: LOG.error(_LE('Connection between volume %(vol)s and host ' '%(host)s might have succeeded, but attempt ' 'to terminate connection has failed. ' 'Validate the connection and determine if ' 'manual cleanup is needed. 
Error: %(msg)s ' 'Code: %(code)s.'), {'vol': volume_id, 'host': connector.get('host'), 'msg': six.text_type(exc), 'code': ( exc.code if hasattr(exc, 'code') else None)}) @translate_volume_exception def terminate_connection(self, context, volume_id, connector): return cinderclient(context).volumes.terminate_connection(volume_id, connector) @translate_cinder_exception def migrate_volume_completion(self, context, old_volume_id, new_volume_id, error=False): return cinderclient(context).volumes.migrate_volume_completion( old_volume_id, new_volume_id, error) @translate_volume_exception def create(self, context, size, name, description, snapshot=None, image_id=None, volume_type=None, metadata=None, availability_zone=None): client = cinderclient(context) if snapshot is not None: snapshot_id = snapshot['id'] else: snapshot_id = None kwargs = dict(snapshot_id=snapshot_id, volume_type=volume_type, user_id=context.user_id, project_id=context.project_id, availability_zone=availability_zone, metadata=metadata, imageRef=image_id, name=name, description=description) item = client.volumes.create(size, **kwargs) return _untranslate_volume_summary_view(context, item) @translate_volume_exception def delete(self, context, volume_id): cinderclient(context).volumes.delete(volume_id) @translate_volume_exception def update(self, context, volume_id, fields): raise NotImplementedError() @translate_cinder_exception def get_absolute_limits(self, context): """Returns quota limit and usage information for the given tenant See the /v3/{project_id}/limits API reference for details. :param context: The nova RequestContext for the user request. Note that the limit information returned from Cinder is specific to the project_id within this context. :returns: dict of absolute limits """ # cinderclient returns a generator of AbsoluteLimit objects, so iterate # over the generator and return a dictionary which is easier for the # nova client-side code to handle. limits = cinderclient(context).limits.get().absolute return {limit.name: limit.value for limit in limits} @translate_snapshot_exception def get_snapshot(self, context, snapshot_id): item = cinderclient(context).volume_snapshots.get(snapshot_id) return _untranslate_snapshot_summary_view(context, item) @translate_cinder_exception def get_all_snapshots(self, context): items = cinderclient(context).volume_snapshots.list(detailed=True) rvals = [] for item in items: rvals.append(_untranslate_snapshot_summary_view(context, item)) return rvals @translate_mixed_exceptions def create_snapshot(self, context, volume_id, name, description): item = cinderclient(context).volume_snapshots.create(volume_id, False, name, description) return _untranslate_snapshot_summary_view(context, item) @translate_mixed_exceptions def create_snapshot_force(self, context, volume_id, name, description): item = cinderclient(context).volume_snapshots.create(volume_id, True, name, description) return _untranslate_snapshot_summary_view(context, item) @translate_snapshot_exception def delete_snapshot(self, context, snapshot_id): cinderclient(context).volume_snapshots.delete(snapshot_id) @translate_cinder_exception def get_volume_encryption_metadata(self, context, volume_id): return cinderclient(context).volumes.get_encryption_metadata(volume_id) @translate_snapshot_exception def update_snapshot_status(self, context, snapshot_id, status): vs = cinderclient(context).volume_snapshots # '90%' here is used to tell Cinder that Nova is done # with its portion of the 'creating' state. 
        # This can be removed when we are able to split the Cinder states
        # into 'creating' and a separate state of 'creating_in_nova'.
        # (Same for the 'deleting' state.)
        vs.update_snapshot_status(
            snapshot_id,
            {'status': status, 'progress': '90%'}
        )

    @translate_volume_exception
    def attachment_create(self, context, volume_id, instance_id,
                          connector=None, mountpoint=None):
        """Create a volume attachment. This requires microversion >= 3.44.

        The attachment_create call was introduced in microversion 3.27. We
        need 3.44 as the minimum here because we need attachment_complete
        to finish the attaching process, and that call was also introduced
        in version 3.44.

        :param context: The nova request context.
        :param volume_id: UUID of the volume on which to create the
            attachment.
        :param instance_id: UUID of the instance to which the volume will be
            attached.
        :param connector: host connector dict; if None, the attachment will
            be 'reserved' but not yet attached.
        :param mountpoint: Optional mount device name for the attachment,
            e.g. "/dev/vdb". This is only used if a connector is provided.
        :returns: a dict created from the
            cinderclient.v3.attachments.VolumeAttachment object with a
            backward compatible connection_info dict
        """
        # NOTE(mriedem): Due to a limitation in the POST /attachments/
        # API in Cinder, we have to pass the mountpoint in via the
        # host connector rather than pass it in as a top-level parameter
        # like in the os-attach volume action API. Hopefully this will be
        # fixed some day with a new Cinder microversion but until then we
        # work around it client-side.
        _connector = connector
        if _connector and mountpoint and 'mountpoint' not in _connector:
            # Make a copy of the connector so we don't modify it by
            # reference.
            _connector = copy.deepcopy(connector)
            _connector['mountpoint'] = mountpoint

        try:
            attachment_ref = cinderclient(context, '3.44').attachments.create(
                volume_id, _connector, instance_id)
            return _translate_attachment_ref(attachment_ref)
        except cinder_exception.ClientException as ex:
            with excutils.save_and_reraise_exception():
                LOG.error(('Create attachment failed for volume '
                           '%(volume_id)s. Error: %(msg)s Code: %(code)s'),
                          {'volume_id': volume_id,
                           'msg': six.text_type(ex),
                           'code': getattr(ex, 'code', None)},
                          instance_uuid=instance_id)

    @translate_attachment_exception
    def attachment_get(self, context, attachment_id):
        """Gets a volume attachment.

        :param context: The nova request context.
        :param attachment_id: UUID of the volume attachment to get.
        :returns: a dict created from the
            cinderclient.v3.attachments.VolumeAttachment object with a
            backward compatible connection_info dict
        """
        try:
            attachment_ref = cinderclient(
                context, '3.44', skip_version_check=True).attachments.show(
                    attachment_id)
            translated_attach_ref = _translate_attachment_ref(
                attachment_ref.to_dict())
            return translated_attach_ref
        except cinder_exception.ClientException as ex:
            with excutils.save_and_reraise_exception():
                LOG.error(('Show attachment failed for attachment '
                           '%(id)s. Error: %(msg)s Code: %(code)s'),
                          {'id': attachment_id,
                           'msg': six.text_type(ex),
                           'code': getattr(ex, 'code', None)})

    @translate_attachment_exception
    def attachment_update(self, context, attachment_id, connector,
                          mountpoint=None):
        """Updates the connector on the volume attachment. An attachment
        without a connector is considered reserved but not fully attached.

        :param context: The nova request context.
        :param attachment_id: UUID of the volume attachment to update.
        :param connector: host connector dict. This is required when updating
            a volume attachment.
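            A connector is host and driver specific; a rough sketch of its
            shape (illustrative only, the exact keys come from os-brick and
            vary by backend)::

                {'platform': 'x86_64', 'os_type': 'linux2',
                 'ip': '192.168.1.50', 'host': 'compute1',
                 'multipath': False, 'do_local_attach': False,
                 'initiator': 'iqn.1994-05.com.redhat:example'}
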
To terminate a connection, the volume attachment for that connection must be deleted. :param mountpoint: Optional mount device name for the attachment, e.g. "/dev/vdb". Theoretically this is optional per volume backend, but in practice it's normally required so it's best to always provide a value. :returns: a dict created from the cinderclient.v3.attachments.VolumeAttachment object with a backward compatible connection_info dict """ # NOTE(mriedem): Due to a limitation in the PUT /attachments/{id} # API in Cinder, we have to pass the mountpoint in via the # host connector rather than pass it in as a top-level parameter # like in the os-attach volume action API. Hopefully this will be # fixed some day with a new Cinder microversion but until then we # work around it client-side. _connector = connector if mountpoint and 'mountpoint' not in connector: # Make a copy of the connector so we don't modify it by # reference. _connector = copy.deepcopy(connector) _connector['mountpoint'] = mountpoint try: attachment_ref = cinderclient( context, '3.44', skip_version_check=True).attachments.update( attachment_id, _connector) translated_attach_ref = _translate_attachment_ref( attachment_ref.to_dict()) return translated_attach_ref except cinder_exception.ClientException as ex: with excutils.save_and_reraise_exception(): LOG.error(('Update attachment failed for attachment ' '%(id)s. Error: %(msg)s Code: %(code)s'), {'id': attachment_id, 'msg': six.text_type(ex), 'code': getattr(ex, 'code', None)}) @translate_attachment_exception def attachment_delete(self, context, attachment_id): try: cinderclient( context, '3.44', skip_version_check=True).attachments.delete( attachment_id) except cinder_exception.ClientException as ex: with excutils.save_and_reraise_exception(): LOG.error(('Delete attachment failed for attachment ' '%(id)s. Error: %(msg)s Code: %(code)s'), {'id': attachment_id, 'msg': six.text_type(ex), 'code': getattr(ex, 'code', None)}) @translate_attachment_exception def attachment_complete(self, context, attachment_id): """Marks a volume attachment complete. This call should be used to inform Cinder that a volume attachment is fully connected on the compute host so Cinder can apply the necessary state changes to the volume info in its database. :param context: The nova request context. :param attachment_id: UUID of the volume attachment to update. """ try: cinderclient( context, '3.44', skip_version_check=True).attachments.complete( attachment_id) except cinder_exception.ClientException as ex: with excutils.save_and_reraise_exception(): LOG.error(('Complete attachment failed for attachment ' '%(id)s. Error: %(msg)s Code: %(code)s'), {'id': attachment_id, 'msg': six.text_type(ex), 'code': getattr(ex, 'code', None)}) nova-17.0.1/nova/volume/__init__.py0000666000175000017500000000000013250073126017137 0ustar zuulzuul00000000000000nova-17.0.1/nova/version.py0000666000175000017500000000441713250073126015576 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. import pbr.version from nova.i18n import _LE NOVA_VENDOR = "OpenStack Foundation" NOVA_PRODUCT = "OpenStack Nova" NOVA_PACKAGE = None # OS distro package version suffix loaded = False version_info = pbr.version.VersionInfo('nova') version_string = version_info.version_string def _load_config(): # Don't load in global context, since we can't assume # these modules are accessible when distutils uses # this module from six.moves import configparser from oslo_config import cfg from oslo_log import log as logging global loaded, NOVA_VENDOR, NOVA_PRODUCT, NOVA_PACKAGE if loaded: return loaded = True cfgfile = cfg.CONF.find_file("release") if cfgfile is None: return try: cfg = configparser.RawConfigParser() cfg.read(cfgfile) if cfg.has_option("Nova", "vendor"): NOVA_VENDOR = cfg.get("Nova", "vendor") if cfg.has_option("Nova", "product"): NOVA_PRODUCT = cfg.get("Nova", "product") if cfg.has_option("Nova", "package"): NOVA_PACKAGE = cfg.get("Nova", "package") except Exception as ex: LOG = logging.getLogger(__name__) LOG.error(_LE("Failed to load %(cfgfile)s: %(ex)s"), {'cfgfile': cfgfile, 'ex': ex}) def vendor_string(): _load_config() return NOVA_VENDOR def product_string(): _load_config() return NOVA_PRODUCT def package_string(): _load_config() return NOVA_PACKAGE def version_string_with_package(): if package_string() is None: return version_info.version_string() else: return "%s-%s" % (version_info.version_string(), package_string()) nova-17.0.1/nova/hooks.py0000666000175000017500000001323413250073126015231 0ustar zuulzuul00000000000000# Copyright (c) 2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Decorator and config option definitions for adding custom code (hooks) around callables. NOTE: as of Nova 13.0 hooks are DEPRECATED and will be removed in the near future. You should not build any new code using this facility. Any method may have the 'add_hook' decorator applied, which yields the ability to invoke Hook objects before or after the method. (i.e. pre and post) Hook objects are loaded by HookLoaders. Each named hook may invoke multiple Hooks. Example Hook object:: | class MyHook(object): | def pre(self, *args, **kwargs): | # do stuff before wrapped callable runs | | def post(self, rv, *args, **kwargs): | # do stuff after wrapped callable runs Example Hook object with function parameters:: | class MyHookWithFunction(object): | def pre(self, f, *args, **kwargs): | # do stuff with wrapped function info | def post(self, f, *args, **kwargs): | # do stuff with wrapped function info """ import functools from oslo_log import log as logging import stevedore from nova.i18n import _, _LE, _LW LOG = logging.getLogger(__name__) NS = 'nova.hooks' _HOOKS = {} # hook name => hook manager class FatalHookException(Exception): """Exception which should be raised by hooks to indicate that normal execution of the hooked function should be terminated. 
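
    For example, a pre-hook might veto execution of the wrapped callable
    (an illustrative sketch; the hook class shown is hypothetical)::

        class GuardHook(object):
            def pre(self, *args, **kwargs):
                raise FatalHookException('precondition failed')
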
Raised exception will be logged and reraised. """ pass class HookManager(stevedore.hook.HookManager): def __init__(self, name): """Invoke_on_load creates an instance of the Hook class :param name: The name of the hooks to load. :type name: str """ super(HookManager, self).__init__(NS, name, invoke_on_load=True) def _run(self, name, method_type, args, kwargs, func=None): if method_type not in ('pre', 'post'): msg = _("Wrong type of hook method. " "Only 'pre' and 'post' type allowed") raise ValueError(msg) for e in self.extensions: obj = e.obj hook_method = getattr(obj, method_type, None) if hook_method: LOG.warning(_LW("Hooks are deprecated as of Nova 13.0 and " "will be removed in a future release")) LOG.debug("Running %(name)s %(type)s-hook: %(obj)s", {'name': name, 'type': method_type, 'obj': obj}) try: if func: hook_method(func, *args, **kwargs) else: hook_method(*args, **kwargs) except FatalHookException: msg = _LE("Fatal Exception running %(name)s " "%(type)s-hook: %(obj)s") LOG.exception(msg, {'name': name, 'type': method_type, 'obj': obj}) raise except Exception: msg = _LE("Exception running %(name)s " "%(type)s-hook: %(obj)s") LOG.exception(msg, {'name': name, 'type': method_type, 'obj': obj}) def run_pre(self, name, args, kwargs, f=None): """Execute optional pre methods of loaded hooks. :param name: The name of the loaded hooks. :param args: Positional arguments which would be transmitted into all pre methods of loaded hooks. :param kwargs: Keyword args which would be transmitted into all pre methods of loaded hooks. :param f: Target function. """ self._run(name=name, method_type='pre', args=args, kwargs=kwargs, func=f) def run_post(self, name, rv, args, kwargs, f=None): """Execute optional post methods of loaded hooks. :param name: The name of the loaded hooks. :param rv: Return values of target method call. :param args: Positional arguments which would be transmitted into all post methods of loaded hooks. :param kwargs: Keyword args which would be transmitted into all post methods of loaded hooks. :param f: Target function. """ self._run(name=name, method_type='post', args=(rv,) + args, kwargs=kwargs, func=f) def add_hook(name, pass_function=False): """Execute optional pre and post methods around the decorated function. This is useful for customization around callables. """ def outer(f): f.__hook_name__ = name @functools.wraps(f) def inner(*args, **kwargs): manager = _HOOKS.setdefault(name, HookManager(name)) function = None if pass_function: function = f manager.run_pre(name, args, kwargs, f=function) rv = f(*args, **kwargs) manager.run_post(name, rv, args, kwargs, f=function) return rv return inner return outer def reset(): """Clear loaded hooks.""" _HOOKS.clear() nova-17.0.1/nova/exception.py0000666000175000017500000020477113250073136016115 0ustar zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
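# A typical usage sketch (names illustrative): callers raise the
# subclasses defined below with format kwargs, e.g.
#
#     raise exception.VolumeNotFound(volume_id=volume_id)
#
# and NovaException.__init__ interpolates the kwargs into msg_fmt.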
"""Nova base exception handling. Includes decorator for re-raising Nova-type exceptions. SHOULD include dedicated exception logging. """ from oslo_log import log as logging import webob.exc from webob import util as woutil from nova.i18n import _, _LE LOG = logging.getLogger(__name__) class ConvertedException(webob.exc.WSGIHTTPException): def __init__(self, code, title="", explanation=""): self.code = code # There is a strict rule about constructing status line for HTTP: # '...Status-Line, consisting of the protocol version followed by a # numeric status code and its associated textual phrase, with each # element separated by SP characters' # (http://www.faqs.org/rfcs/rfc2616.html) # 'code' and 'title' can not be empty because they correspond # to numeric status code and its associated text if title: self.title = title else: try: self.title = woutil.status_reasons[self.code] except KeyError: msg = _LE("Improper or unknown HTTP status code used: %d") LOG.error(msg, code) self.title = woutil.status_generic_reasons[self.code // 100] self.explanation = explanation super(ConvertedException, self).__init__() class NovaException(Exception): """Base Nova Exception To correctly use this class, inherit from it and define a 'msg_fmt' property. That msg_fmt will get printf'd with the keyword arguments provided to the constructor. """ msg_fmt = _("An unknown exception occurred.") code = 500 headers = {} safe = False def __init__(self, message=None, **kwargs): self.kwargs = kwargs if 'code' not in self.kwargs: try: self.kwargs['code'] = self.code except AttributeError: pass if not message: try: message = self.msg_fmt % kwargs except Exception: # NOTE(melwitt): This is done in a separate method so it can be # monkey-patched during testing to make it a hard failure. 
self._log_exception() message = self.msg_fmt self.message = message super(NovaException, self).__init__(message) def _log_exception(self): # kwargs doesn't match a variable in the message # log the issue and the kwargs LOG.exception(_LE('Exception in string format operation')) for name, value in self.kwargs.items(): LOG.error("%s: %s" % (name, value)) # noqa def format_message(self): # NOTE(mrodden): use the first argument to the python Exception object # which should be our full NovaException message, (see __init__) return self.args[0] class EncryptionFailure(NovaException): msg_fmt = _("Failed to encrypt text: %(reason)s") class DecryptionFailure(NovaException): msg_fmt = _("Failed to decrypt text: %(reason)s") class RevokeCertFailure(NovaException): msg_fmt = _("Failed to revoke certificate for %(project_id)s") class VirtualInterfaceCreateException(NovaException): msg_fmt = _("Virtual Interface creation failed") class VirtualInterfaceMacAddressException(NovaException): msg_fmt = _("Creation of virtual interface with " "unique mac address failed") class VirtualInterfacePlugException(NovaException): msg_fmt = _("Virtual interface plugin failed") class VirtualInterfaceUnplugException(NovaException): msg_fmt = _("Failed to unplug virtual interface: %(reason)s") class GlanceConnectionFailed(NovaException): msg_fmt = _("Connection to glance host %(server)s failed: " "%(reason)s") class CinderConnectionFailed(NovaException): msg_fmt = _("Connection to cinder host failed: %(reason)s") class UnsupportedCinderAPIVersion(NovaException): msg_fmt = _('Nova does not support Cinder API version %(version)s') class CinderAPIVersionNotAvailable(NovaException): """Used to indicate that a requested Cinder API version, generally a microversion, is not available. """ msg_fmt = _('Cinder API version %(version)s is not available.') class Forbidden(NovaException): msg_fmt = _("Forbidden") code = 403 class AdminRequired(Forbidden): msg_fmt = _("User does not have admin privileges") class PolicyNotAuthorized(Forbidden): msg_fmt = _("Policy doesn't allow %(action)s to be performed.") class ImageNotActive(NovaException): # NOTE(jruzicka): IncorrectState is used for volumes only in EC2, # but it still seems like the most appropriate option. msg_fmt = _("Image %(image_id)s is not active.") class ImageNotAuthorized(NovaException): msg_fmt = _("Not authorized for image %(image_id)s.") class Invalid(NovaException): msg_fmt = _("Bad Request - Invalid Parameters") code = 400 class InvalidBDM(Invalid): msg_fmt = _("Block Device Mapping is Invalid.") class InvalidBDMSnapshot(InvalidBDM): msg_fmt = _("Block Device Mapping is Invalid: " "failed to get snapshot %(id)s.") class InvalidBDMVolume(InvalidBDM): msg_fmt = _("Block Device Mapping is Invalid: " "failed to get volume %(id)s.") class InvalidBDMImage(InvalidBDM): msg_fmt = _("Block Device Mapping is Invalid: " "failed to get image %(id)s.") class InvalidBDMBootSequence(InvalidBDM): msg_fmt = _("Block Device Mapping is Invalid: " "Boot sequence for the instance " "and image/block device mapping " "combination is not valid.") class InvalidBDMLocalsLimit(InvalidBDM): msg_fmt = _("Block Device Mapping is Invalid: " "You specified more local devices than the " "limit allows") class InvalidBDMEphemeralSize(InvalidBDM): msg_fmt = _("Ephemeral disks requested are larger than " "the instance type allows. 
If no size is given " "in one block device mapping, flavor ephemeral " "size will be used.") class InvalidBDMSwapSize(InvalidBDM): msg_fmt = _("Swap drive requested is larger than instance type allows.") class InvalidBDMFormat(InvalidBDM): msg_fmt = _("Block Device Mapping is Invalid: " "%(details)s") class InvalidBDMForLegacy(InvalidBDM): msg_fmt = _("Block Device Mapping cannot " "be converted to legacy format. ") class InvalidBDMVolumeNotBootable(InvalidBDM): msg_fmt = _("Block Device %(id)s is not bootable.") class InvalidAttribute(Invalid): msg_fmt = _("Attribute not supported: %(attr)s") class ValidationError(Invalid): msg_fmt = "%(detail)s" class VolumeAttachFailed(Invalid): msg_fmt = _("Volume %(volume_id)s could not be attached. " "Reason: %(reason)s") class MultiattachNotSupportedByVirtDriver(NovaException): # This exception indicates the compute hosting the instance does not # support multiattach volumes. This should generally be considered a # 409 HTTPConflict error in the API since we expect all virt drivers to # eventually support multiattach volumes. msg_fmt = _("Volume %(volume_id)s has 'multiattach' set, " "which is not supported for this instance.") code = 409 class MultiattachSupportNotYetAvailable(NovaException): # This exception indicates the deployment is not yet new enough to support # multiattach volumes, so a 409 HTTPConflict response is generally used # for handling this in the API. msg_fmt = _("Multiattach volume support is not yet available.") code = 409 class MultiattachNotSupportedOldMicroversion(Invalid): msg_fmt = _('Multiattach volumes are only supported starting with ' 'compute API version 2.60.') class MultiattachToShelvedNotSupported(Invalid): msg_fmt = _("Attaching multiattach volumes is not supported for " "shelved-offloaded instances.") class VolumeNotCreated(NovaException): msg_fmt = _("Volume %(volume_id)s did not finish being created" " even after we waited %(seconds)s seconds or %(attempts)s" " attempts. And its status is %(volume_status)s.") class ExtendVolumeNotSupported(Invalid): msg_fmt = _("Volume size extension is not supported by the hypervisor.") class VolumeEncryptionNotSupported(Invalid): msg_fmt = _("Volume encryption is not supported for %(volume_type)s " "volume %(volume_id)s") class TaggedAttachmentNotSupported(Invalid): msg_fmt = _("Tagged device attachment is not yet available.") class VolumeTaggedAttachNotSupported(TaggedAttachmentNotSupported): msg_fmt = _("Tagged volume attachment is not supported for this server " "instance.") class VolumeTaggedAttachToShelvedNotSupported(TaggedAttachmentNotSupported): msg_fmt = _("Tagged volume attachment is not supported for " "shelved-offloaded instances.") class NetworkInterfaceTaggedAttachNotSupported(TaggedAttachmentNotSupported): msg_fmt = _("Tagged network interface attachment is not supported for " "this server instance.") class InvalidKeypair(Invalid): msg_fmt = _("Keypair data is invalid: %(reason)s") class InvalidRequest(Invalid): msg_fmt = _("The request is invalid.") class InvalidInput(Invalid): msg_fmt = _("Invalid input received: %(reason)s") class InvalidVolume(Invalid): msg_fmt = _("Invalid volume: %(reason)s") class InvalidVolumeAccessMode(Invalid): msg_fmt = _("Invalid volume access mode: %(access_mode)s") class InvalidMetadata(Invalid): msg_fmt = _("Invalid metadata: %(reason)s") class InvalidMetadataSize(Invalid): msg_fmt = _("Invalid metadata size: %(reason)s") class InvalidPortRange(Invalid): msg_fmt = _("Invalid port range %(from_port)s:%(to_port)s. 
%(msg)s") class InvalidIpProtocol(Invalid): msg_fmt = _("Invalid IP protocol %(protocol)s.") class InvalidContentType(Invalid): msg_fmt = _("Invalid content type %(content_type)s.") class InvalidAPIVersionString(Invalid): msg_fmt = _("API Version String %(version)s is of invalid format. Must " "be of format MajorNum.MinorNum.") class VersionNotFoundForAPIMethod(Invalid): msg_fmt = _("API version %(version)s is not supported on this method.") class InvalidGlobalAPIVersion(Invalid): msg_fmt = _("Version %(req_ver)s is not supported by the API. Minimum " "is %(min_ver)s and maximum is %(max_ver)s.") class ApiVersionsIntersect(Invalid): msg_fmt = _("Version of %(name)s %(min_ver)s %(max_ver)s intersects " "with another versions.") # Cannot be templated as the error syntax varies. # msg needs to be constructed when raised. class InvalidParameterValue(Invalid): msg_fmt = "%(err)s" class InvalidAggregateAction(Invalid): msg_fmt = _("Unacceptable parameters.") code = 400 class InvalidAggregateActionAdd(InvalidAggregateAction): msg_fmt = _("Cannot add host to aggregate " "%(aggregate_id)s. Reason: %(reason)s.") class InvalidAggregateActionDelete(InvalidAggregateAction): msg_fmt = _("Cannot remove host from aggregate " "%(aggregate_id)s. Reason: %(reason)s.") class InvalidAggregateActionUpdate(InvalidAggregateAction): msg_fmt = _("Cannot update aggregate " "%(aggregate_id)s. Reason: %(reason)s.") class InvalidAggregateActionUpdateMeta(InvalidAggregateAction): msg_fmt = _("Cannot update metadata of aggregate " "%(aggregate_id)s. Reason: %(reason)s.") class InvalidSortKey(Invalid): msg_fmt = _("Sort key supplied was not valid.") class InvalidStrTime(Invalid): msg_fmt = _("Invalid datetime string: %(reason)s") class InvalidNUMANodesNumber(Invalid): msg_fmt = _("The property 'numa_nodes' cannot be '%(nodes)s'. " "It must be a number greater than 0") class InvalidName(Invalid): msg_fmt = _("An invalid 'name' value was provided. " "The name must be: %(reason)s") class InstanceInvalidState(Invalid): msg_fmt = _("Instance %(instance_uuid)s in %(attr)s %(state)s. 
Cannot " "%(method)s while the instance is in this state.") class InstanceNotRunning(Invalid): msg_fmt = _("Instance %(instance_id)s is not running.") class InstanceNotInRescueMode(Invalid): msg_fmt = _("Instance %(instance_id)s is not in rescue mode") class InstanceNotRescuable(Invalid): msg_fmt = _("Instance %(instance_id)s cannot be rescued: %(reason)s") class InstanceNotReady(Invalid): msg_fmt = _("Instance %(instance_id)s is not ready") class InstanceSuspendFailure(Invalid): msg_fmt = _("Failed to suspend instance: %(reason)s") class InstanceResumeFailure(Invalid): msg_fmt = _("Failed to resume instance: %(reason)s") class InstancePowerOnFailure(Invalid): msg_fmt = _("Failed to power on instance: %(reason)s") class InstancePowerOffFailure(Invalid): msg_fmt = _("Failed to power off instance: %(reason)s") class InstanceRebootFailure(Invalid): msg_fmt = _("Failed to reboot instance: %(reason)s") class InstanceTerminationFailure(Invalid): msg_fmt = _("Failed to terminate instance: %(reason)s") class InstanceDeployFailure(Invalid): msg_fmt = _("Failed to deploy instance: %(reason)s") class MultiplePortsNotApplicable(Invalid): msg_fmt = _("Failed to launch instances: %(reason)s") class InvalidFixedIpAndMaxCountRequest(Invalid): msg_fmt = _("Failed to launch instances: %(reason)s") class ServiceUnavailable(Invalid): msg_fmt = _("Service is unavailable at this time.") class ServiceNotUnique(Invalid): msg_fmt = _("More than one possible service found.") class ComputeResourcesUnavailable(ServiceUnavailable): msg_fmt = _("Insufficient compute resources: %(reason)s.") class HypervisorUnavailable(NovaException): msg_fmt = _("Connection to the hypervisor is broken on host: %(host)s") class ComputeServiceUnavailable(ServiceUnavailable): msg_fmt = _("Compute service of %(host)s is unavailable at this time.") class ComputeServiceInUse(NovaException): msg_fmt = _("Compute service of %(host)s is still in use.") class UnableToMigrateToSelf(Invalid): msg_fmt = _("Unable to migrate instance (%(instance_id)s) " "to current host (%(host)s).") class InvalidHypervisorType(Invalid): msg_fmt = _("The supplied hypervisor type of is invalid.") class HypervisorTooOld(Invalid): msg_fmt = _("This compute node's hypervisor is older than the minimum " "supported version: %(version)s.") class DestinationHypervisorTooOld(Invalid): msg_fmt = _("The instance requires a newer hypervisor version than " "has been provided.") class ServiceTooOld(Invalid): msg_fmt = _("This service is older (v%(thisver)i) than the minimum " "(v%(minver)i) version of the rest of the deployment. " "Unable to continue.") class DestinationDiskExists(Invalid): msg_fmt = _("The supplied disk path (%(path)s) already exists, " "it is expected not to exist.") class InvalidDevicePath(Invalid): msg_fmt = _("The supplied device path (%(path)s) is invalid.") class DevicePathInUse(Invalid): msg_fmt = _("The supplied device path (%(path)s) is in use.") code = 409 class InvalidCPUInfo(Invalid): msg_fmt = _("Unacceptable CPU info: %(reason)s") class InvalidIpAddressError(Invalid): msg_fmt = _("%(address)s is not a valid IP v4/6 address.") class InvalidVLANTag(Invalid): msg_fmt = _("VLAN tag is not appropriate for the port group " "%(bridge)s. Expected VLAN tag is %(tag)s, " "but the one associated with the port group is %(pgroup)s.") class InvalidVLANPortGroup(Invalid): msg_fmt = _("vSwitch which contains the port group %(bridge)s is " "not associated with the desired physical adapter. 
" "Expected vSwitch is %(expected)s, but the one associated " "is %(actual)s.") class InvalidDiskFormat(Invalid): msg_fmt = _("Disk format %(disk_format)s is not acceptable") class InvalidDiskInfo(Invalid): msg_fmt = _("Disk info file is invalid: %(reason)s") class DiskInfoReadWriteFail(Invalid): msg_fmt = _("Failed to read or write disk info file: %(reason)s") class ImageUnacceptable(Invalid): msg_fmt = _("Image %(image_id)s is unacceptable: %(reason)s") class ImageBadRequest(Invalid): msg_fmt = _("Request of image %(image_id)s got BadRequest response: " "%(response)s") class InstanceUnacceptable(Invalid): msg_fmt = _("Instance %(instance_id)s is unacceptable: %(reason)s") class InvalidEc2Id(Invalid): msg_fmt = _("Ec2 id %(ec2_id)s is unacceptable.") class InvalidUUID(Invalid): msg_fmt = _("Expected a uuid but received %(uuid)s.") class InvalidID(Invalid): msg_fmt = _("Invalid ID received %(id)s.") class ConstraintNotMet(NovaException): msg_fmt = _("Constraint not met.") code = 412 class NotFound(NovaException): msg_fmt = _("Resource could not be found.") code = 404 class AgentBuildNotFound(NotFound): msg_fmt = _("No agent-build associated with id %(id)s.") class AgentBuildExists(NovaException): msg_fmt = _("Agent-build with hypervisor %(hypervisor)s os %(os)s " "architecture %(architecture)s exists.") class VolumeAttachmentNotFound(NotFound): msg_fmt = _("Volume attachment %(attachment_id)s could not be found.") class VolumeNotFound(NotFound): msg_fmt = _("Volume %(volume_id)s could not be found.") class UndefinedRootBDM(NovaException): msg_fmt = _("Undefined Block Device Mapping root: BlockDeviceMappingList " "contains Block Device Mappings from multiple instances.") class BDMNotFound(NotFound): msg_fmt = _("No Block Device Mapping with id %(id)s.") class VolumeBDMNotFound(NotFound): msg_fmt = _("No volume Block Device Mapping with id %(volume_id)s.") class VolumeBDMIsMultiAttach(Invalid): msg_fmt = _("Block Device Mapping %(volume_id)s is a multi-attach volume" " and is not valid for this operation.") class VolumeBDMPathNotFound(VolumeBDMNotFound): msg_fmt = _("No volume Block Device Mapping at path: %(path)s") class DeviceDetachFailed(NovaException): msg_fmt = _("Device detach failed for %(device)s: %(reason)s") class DeviceNotFound(NotFound): msg_fmt = _("Device '%(device)s' not found.") class SnapshotNotFound(NotFound): msg_fmt = _("Snapshot %(snapshot_id)s could not be found.") class DiskNotFound(NotFound): msg_fmt = _("No disk at %(location)s") class VolumeDriverNotFound(NotFound): msg_fmt = _("Could not find a handler for %(driver_type)s volume.") class InvalidImageRef(Invalid): msg_fmt = _("Invalid image href %(image_href)s.") class AutoDiskConfigDisabledByImage(Invalid): msg_fmt = _("Requested image %(image)s " "has automatic disk resize disabled.") class ImageNotFound(NotFound): msg_fmt = _("Image %(image_id)s could not be found.") class ImageDeleteConflict(NovaException): msg_fmt = _("Conflict deleting image. 
Reason: %(reason)s.") class PreserveEphemeralNotSupported(Invalid): msg_fmt = _("The current driver does not support " "preserving ephemeral partitions.") class ProjectNotFound(NotFound): msg_fmt = _("Project %(project_id)s could not be found.") class StorageRepositoryNotFound(NotFound): msg_fmt = _("Cannot find SR to read/write VDI.") class InstanceMappingNotFound(NotFound): msg_fmt = _("Instance %(uuid)s has no mapping to a cell.") class NetworkDhcpReleaseFailed(NovaException): msg_fmt = _("Failed to release IP %(address)s with MAC %(mac_address)s") class NetworkInUse(NovaException): msg_fmt = _("Network %(network_id)s is still in use.") class NetworkSetHostFailed(NovaException): msg_fmt = _("Network set host failed for network %(network_id)s.") class NetworkNotCreated(Invalid): msg_fmt = _("%(req)s is required to create a network.") class LabelTooLong(Invalid): msg_fmt = _("Maximum allowed length for 'label' is 255.") class InvalidIntValue(Invalid): msg_fmt = _("%(key)s must be an integer.") class InvalidCidr(Invalid): msg_fmt = _("%(cidr)s is not a valid IP network.") class InvalidAddress(Invalid): msg_fmt = _("%(address)s is not a valid IP address.") class AddressOutOfRange(Invalid): msg_fmt = _("%(address)s is not within %(cidr)s.") class DuplicateVlan(NovaException): msg_fmt = _("Detected existing vlan with id %(vlan)d") code = 409 class CidrConflict(NovaException): msg_fmt = _('Requested cidr (%(cidr)s) conflicts ' 'with existing cidr (%(other)s)') code = 409 class NetworkHasProject(NetworkInUse): msg_fmt = _('Network must be disassociated from project ' '%(project_id)s before it can be deleted.') class NetworkNotFound(NotFound): msg_fmt = _("Network %(network_id)s could not be found.") class PortNotFound(NotFound): msg_fmt = _("Port id %(port_id)s could not be found.") class NetworkNotFoundForBridge(NetworkNotFound): msg_fmt = _("Network could not be found for bridge %(bridge)s") class NetworkNotFoundForUUID(NetworkNotFound): msg_fmt = _("Network could not be found for uuid %(uuid)s") class NetworkNotFoundForCidr(NetworkNotFound): msg_fmt = _("Network could not be found with cidr %(cidr)s.") class NetworkNotFoundForInstance(NetworkNotFound): msg_fmt = _("Network could not be found for instance %(instance_id)s.") class NoNetworksFound(NotFound): msg_fmt = _("No networks defined.") class NoMoreNetworks(NovaException): msg_fmt = _("No more available networks.") class NetworkNotFoundForProject(NetworkNotFound): msg_fmt = _("Either network uuid %(network_uuid)s is not present or " "is not assigned to the project %(project_id)s.") class NetworkAmbiguous(Invalid): msg_fmt = _("More than one possible network found. Specify " "network ID(s) to select which one(s) to connect to.") class UnableToAutoAllocateNetwork(Invalid): msg_fmt = _('Unable to automatically allocate a network for project ' '%(project_id)s') class NetworkRequiresSubnet(Invalid): msg_fmt = _("Network %(network_uuid)s requires a subnet in order to boot" " instances on.") class ExternalNetworkAttachForbidden(Forbidden): msg_fmt = _("It is not allowed to create an interface on " "external network %(network_uuid)s") class NetworkMissingPhysicalNetwork(NovaException): msg_fmt = _("Physical network is missing for network %(network_uuid)s") class VifDetailsMissingVhostuserSockPath(Invalid): msg_fmt = _("vhostuser_sock_path not present in vif_details" " for vif %(vif_id)s") class VifDetailsMissingMacvtapParameters(Invalid): msg_fmt = _("Parameters %(missing_params)s not present in" " vif_details for vif %(vif_id)s. 
Check your Neutron" " configuration to validate that the macvtap parameters are" " correct.") class OvsConfigurationFailure(NovaException): msg_fmt = _("OVS configuration failed with: %(inner_exception)s.") class DatastoreNotFound(NotFound): msg_fmt = _("Could not find the datastore reference(s) which the VM uses.") class PortInUse(Invalid): msg_fmt = _("Port %(port_id)s is still in use.") class PortRequiresFixedIP(Invalid): msg_fmt = _("Port %(port_id)s requires a FixedIP in order to be used.") class PortNotUsable(Invalid): msg_fmt = _("Port %(port_id)s not usable for instance %(instance)s.") class PortNotUsableDNS(Invalid): msg_fmt = _("Port %(port_id)s not usable for instance %(instance)s. " "Value %(value)s assigned to dns_name attribute does not " "match instance's hostname %(hostname)s") class PortNotFree(Invalid): msg_fmt = _("No free port available for instance %(instance)s.") class PortBindingFailed(Invalid): msg_fmt = _("Binding failed for port %(port_id)s, please check neutron " "logs for more information.") class PortUpdateFailed(Invalid): msg_fmt = _("Port update failed for port %(port_id)s: %(reason)s") class FixedIpExists(NovaException): msg_fmt = _("Fixed IP %(address)s already exists.") class FixedIpNotFound(NotFound): msg_fmt = _("No fixed IP associated with id %(id)s.") class FixedIpNotFoundForAddress(FixedIpNotFound): msg_fmt = _("Fixed IP not found for address %(address)s.") class FixedIpNotFoundForInstance(FixedIpNotFound): msg_fmt = _("Instance %(instance_uuid)s has zero fixed IPs.") class FixedIpNotFoundForNetworkHost(FixedIpNotFound): msg_fmt = _("Network host %(host)s has zero fixed IPs " "in network %(network_id)s.") class FixedIpNotFoundForSpecificInstance(FixedIpNotFound): msg_fmt = _("Instance %(instance_uuid)s doesn't have fixed IP '%(ip)s'.") class FixedIpNotFoundForNetwork(FixedIpNotFound): msg_fmt = _("Fixed IP address (%(address)s) does not exist in " "network (%(network_uuid)s).") class FixedIpAssociateFailed(NovaException): msg_fmt = _("Fixed IP associate failed for network: %(net)s.") class FixedIpAlreadyInUse(NovaException): msg_fmt = _("Fixed IP address %(address)s is already in use on instance " "%(instance_uuid)s.") class FixedIpAssociatedWithMultipleInstances(NovaException): msg_fmt = _("More than one instance is associated with fixed IP address " "'%(address)s'.") class FixedIpInvalid(Invalid): msg_fmt = _("Fixed IP address %(address)s is invalid.") class FixedIpInvalidOnHost(Invalid): msg_fmt = _("The fixed IP associated with port %(port_id)s is not " "compatible with the host.") class NoMoreFixedIps(NovaException): msg_fmt = _("No fixed IP addresses available for network: %(net)s") class NoFixedIpsDefined(NotFound): msg_fmt = _("Zero fixed IPs could be found.") class FloatingIpExists(NovaException): msg_fmt = _("Floating IP %(address)s already exists.") class FloatingIpNotFound(NotFound): msg_fmt = _("Floating IP not found for ID %(id)s.") class FloatingIpDNSExists(Invalid): msg_fmt = _("The DNS entry %(name)s already exists in domain %(domain)s.") class FloatingIpNotFoundForAddress(FloatingIpNotFound): msg_fmt = _("Floating IP not found for address %(address)s.") class FloatingIpNotFoundForHost(FloatingIpNotFound): msg_fmt = _("Floating IP not found for host %(host)s.") class FloatingIpMultipleFoundForAddress(NovaException): msg_fmt = _("Multiple floating IPs are found for address %(address)s.") class FloatingIpPoolNotFound(NotFound): msg_fmt = _("Floating IP pool not found.") safe = True class NoMoreFloatingIps(FloatingIpNotFound): msg_fmt = 
_("Zero floating IPs available.") safe = True class FloatingIpAssociated(NovaException): msg_fmt = _("Floating IP %(address)s is associated.") class FloatingIpNotAssociated(NovaException): msg_fmt = _("Floating IP %(address)s is not associated.") class NoFloatingIpsDefined(NotFound): msg_fmt = _("Zero floating IPs exist.") class NoFloatingIpInterface(NotFound): msg_fmt = _("Interface %(interface)s not found.") class FloatingIpAllocateFailed(NovaException): msg_fmt = _("Floating IP allocate failed.") class FloatingIpAssociateFailed(NovaException): msg_fmt = _("Floating IP %(address)s association has failed.") class FloatingIpBadRequest(Invalid): msg_fmt = _("The floating IP request failed with a BadRequest") class CannotDisassociateAutoAssignedFloatingIP(NovaException): msg_fmt = _("Cannot disassociate auto assigned floating IP") class KeypairNotFound(NotFound): msg_fmt = _("Keypair %(name)s not found for user %(user_id)s") class ServiceNotFound(NotFound): msg_fmt = _("Service %(service_id)s could not be found.") class ConfGroupForServiceTypeNotFound(ServiceNotFound): msg_fmt = _("No conf group name could be found for service type " "%(stype)s.") class ServiceBinaryExists(NovaException): msg_fmt = _("Service with host %(host)s binary %(binary)s exists.") class ServiceTopicExists(NovaException): msg_fmt = _("Service with host %(host)s topic %(topic)s exists.") class HostNotFound(NotFound): msg_fmt = _("Host %(host)s could not be found.") class ComputeHostNotFound(HostNotFound): msg_fmt = _("Compute host %(host)s could not be found.") class HostBinaryNotFound(NotFound): msg_fmt = _("Could not find binary %(binary)s on host %(host)s.") class InvalidReservationExpiration(Invalid): msg_fmt = _("Invalid reservation expiration %(expire)s.") class InvalidQuotaValue(Invalid): msg_fmt = _("Change would make usage less than 0 for the following " "resources: %(unders)s") class InvalidQuotaMethodUsage(Invalid): msg_fmt = _("Wrong quota method %(method)s used on resource %(res)s") class QuotaNotFound(NotFound): msg_fmt = _("Quota could not be found") class QuotaExists(NovaException): msg_fmt = _("Quota exists for project %(project_id)s, " "resource %(resource)s") class QuotaResourceUnknown(QuotaNotFound): msg_fmt = _("Unknown quota resources %(unknown)s.") class ProjectUserQuotaNotFound(QuotaNotFound): msg_fmt = _("Quota for user %(user_id)s in project %(project_id)s " "could not be found.") class ProjectQuotaNotFound(QuotaNotFound): msg_fmt = _("Quota for project %(project_id)s could not be found.") class QuotaClassNotFound(QuotaNotFound): msg_fmt = _("Quota class %(class_name)s could not be found.") class QuotaClassExists(NovaException): msg_fmt = _("Quota class %(class_name)s exists for resource %(resource)s") class QuotaUsageNotFound(QuotaNotFound): msg_fmt = _("Quota usage for project %(project_id)s could not be found.") class QuotaUsageRefreshNotAllowed(Invalid): msg_fmt = _("Quota usage refresh of resource %(resource)s for project " "%(project_id)s, user %(user_id)s, is not allowed. 
" "The allowed resources are %(syncable)s.") class ReservationNotFound(QuotaNotFound): msg_fmt = _("Quota reservation %(uuid)s could not be found.") class OverQuota(NovaException): msg_fmt = _("Quota exceeded for resources: %(overs)s") class SecurityGroupNotFound(NotFound): msg_fmt = _("Security group %(security_group_id)s not found.") class SecurityGroupNotFoundForProject(SecurityGroupNotFound): msg_fmt = _("Security group %(security_group_id)s not found " "for project %(project_id)s.") class SecurityGroupNotFoundForRule(SecurityGroupNotFound): msg_fmt = _("Security group with rule %(rule_id)s not found.") class SecurityGroupExists(Invalid): msg_fmt = _("Security group %(security_group_name)s already exists " "for project %(project_id)s.") class SecurityGroupExistsForInstance(Invalid): msg_fmt = _("Security group %(security_group_id)s is already associated" " with the instance %(instance_id)s") class SecurityGroupNotExistsForInstance(Invalid): msg_fmt = _("Security group %(security_group_id)s is not associated with" " the instance %(instance_id)s") class SecurityGroupDefaultRuleNotFound(Invalid): msg_fmt = _("Security group default rule (%rule_id)s not found.") class SecurityGroupCannotBeApplied(Invalid): msg_fmt = _("Network requires port_security_enabled and subnet associated" " in order to apply security groups.") class NoUniqueMatch(NovaException): msg_fmt = _("No Unique Match Found.") code = 409 class NoActiveMigrationForInstance(NotFound): msg_fmt = _("Active live migration for instance %(instance_id)s not found") class MigrationNotFound(NotFound): msg_fmt = _("Migration %(migration_id)s could not be found.") class MigrationNotFoundByStatus(MigrationNotFound): msg_fmt = _("Migration not found for instance %(instance_id)s " "with status %(status)s.") class MigrationNotFoundForInstance(MigrationNotFound): msg_fmt = _("Migration %(migration_id)s not found for instance " "%(instance_id)s") class InvalidMigrationState(Invalid): msg_fmt = _("Migration %(migration_id)s state of instance " "%(instance_uuid)s is %(state)s. Cannot %(method)s while the " "migration is in this state.") class ConsoleLogOutputException(NovaException): msg_fmt = _("Console log output could not be retrieved for instance " "%(instance_id)s. 
Reason: %(reason)s") class ConsolePoolExists(NovaException): msg_fmt = _("Console pool with host %(host)s, console_type " "%(console_type)s and compute_host %(compute_host)s " "already exists.") class ConsolePoolNotFoundForHostType(NotFound): msg_fmt = _("Console pool of type %(console_type)s " "for compute host %(compute_host)s " "on proxy host %(host)s not found.") class ConsoleNotFound(NotFound): msg_fmt = _("Console %(console_id)s could not be found.") class ConsoleNotFoundForInstance(ConsoleNotFound): msg_fmt = _("Console for instance %(instance_uuid)s could not be found.") class ConsoleNotAvailable(NotFound): msg_fmt = _("Guest does not have a console available.") class ConsoleNotFoundInPoolForInstance(ConsoleNotFound): msg_fmt = _("Console for instance %(instance_uuid)s " "in pool %(pool_id)s could not be found.") class ConsoleTypeInvalid(Invalid): msg_fmt = _("Invalid console type %(console_type)s") class ConsoleTypeUnavailable(Invalid): msg_fmt = _("Unavailable console type %(console_type)s.") class ConsolePortRangeExhausted(NovaException): msg_fmt = _("The console port range %(min_port)d-%(max_port)d is " "exhausted.") class FlavorNotFound(NotFound): msg_fmt = _("Flavor %(flavor_id)s could not be found.") class FlavorNotFoundByName(FlavorNotFound): msg_fmt = _("Flavor with name %(flavor_name)s could not be found.") class FlavorAccessNotFound(NotFound): msg_fmt = _("Flavor access not found for %(flavor_id)s / " "%(project_id)s combination.") class FlavorExtraSpecUpdateCreateFailed(NovaException): msg_fmt = _("Flavor %(id)s extra spec cannot be updated or created " "after %(retries)d retries.") class CellNotFound(NotFound): msg_fmt = _("Cell %(cell_name)s doesn't exist.") class CellExists(NovaException): msg_fmt = _("Cell with name %(name)s already exists.") class CellRoutingInconsistency(NovaException): msg_fmt = _("Inconsistency in cell routing: %(reason)s") class CellServiceAPIMethodNotFound(NotFound): msg_fmt = _("Service API method not found: %(detail)s") class CellTimeout(NotFound): msg_fmt = _("Timeout waiting for response from cell") class CellMaxHopCountReached(NovaException): msg_fmt = _("Cell message has reached maximum hop count: %(hop_count)s") class NoCellsAvailable(NovaException): msg_fmt = _("No cells available matching scheduling criteria.") class CellsUpdateUnsupported(NovaException): msg_fmt = _("Cannot update cells configuration file.") class InstanceUnknownCell(NotFound): msg_fmt = _("Cell is not known for instance %(instance_uuid)s") class SchedulerHostFilterNotFound(NotFound): msg_fmt = _("Scheduler Host Filter %(filter_name)s could not be found.") class FlavorExtraSpecsNotFound(NotFound): msg_fmt = _("Flavor %(flavor_id)s has no extra specs with " "key %(extra_specs_key)s.") class ComputeHostMetricNotFound(NotFound): msg_fmt = _("Metric %(name)s could not be found on the compute " "host node %(host)s.%(node)s.") class FileNotFound(NotFound): msg_fmt = _("File %(file_path)s could not be found.") class SwitchNotFoundForNetworkAdapter(NotFound): msg_fmt = _("Virtual switch associated with the " "network adapter %(adapter)s not found.") class NetworkAdapterNotFound(NotFound): msg_fmt = _("Network adapter %(adapter)s could not be found.") class ClassNotFound(NotFound): msg_fmt = _("Class %(class_name)s could not be found: %(exception)s") class InstanceTagNotFound(NotFound): msg_fmt = _("Instance %(instance_id)s has no tag '%(tag)s'") class KeyPairExists(NovaException): msg_fmt = _("Key pair '%(key_name)s' already exists.") class InstanceExists(NovaException): 
msg_fmt = _("Instance %(name)s already exists.") class FlavorExists(NovaException): msg_fmt = _("Flavor with name %(name)s already exists.") class FlavorIdExists(NovaException): msg_fmt = _("Flavor with ID %(flavor_id)s already exists.") class FlavorAccessExists(NovaException): msg_fmt = _("Flavor access already exists for flavor %(flavor_id)s " "and project %(project_id)s combination.") class InvalidSharedStorage(NovaException): msg_fmt = _("%(path)s is not on shared storage: %(reason)s") class InvalidLocalStorage(NovaException): msg_fmt = _("%(path)s is not on local storage: %(reason)s") class StorageError(NovaException): msg_fmt = _("Storage error: %(reason)s") class MigrationError(NovaException): msg_fmt = _("Migration error: %(reason)s") class MigrationPreCheckError(MigrationError): msg_fmt = _("Migration pre-check error: %(reason)s") class MigrationPreCheckClientException(MigrationError): msg_fmt = _("Client exception during Migration Pre check: %(reason)s") class MigrationSchedulerRPCError(MigrationError): msg_fmt = _("Migration select destinations error: %(reason)s") class RPCPinnedToOldVersion(NovaException): msg_fmt = _("RPC is pinned to old version") class MalformedRequestBody(NovaException): msg_fmt = _("Malformed message body: %(reason)s") # NOTE(johannes): NotFound should only be used when a 404 error is # appropriate to be returned class ConfigNotFound(NovaException): msg_fmt = _("Could not find config at %(path)s") class PasteAppNotFound(NovaException): msg_fmt = _("Could not load paste app '%(name)s' from %(path)s") class CannotResizeToSameFlavor(NovaException): msg_fmt = _("When resizing, instances must change flavor!") class ResizeError(NovaException): msg_fmt = _("Resize error: %(reason)s") class CannotResizeDisk(NovaException): msg_fmt = _("Server disk was unable to be resized because: %(reason)s") class FlavorMemoryTooSmall(NovaException): msg_fmt = _("Flavor's memory is too small for requested image.") class FlavorDiskTooSmall(NovaException): msg_fmt = _("The created instance's disk would be too small.") class FlavorDiskSmallerThanImage(FlavorDiskTooSmall): msg_fmt = _("Flavor's disk is too small for requested image. Flavor disk " "is %(flavor_size)i bytes, image is %(image_size)i bytes.") class FlavorDiskSmallerThanMinDisk(FlavorDiskTooSmall): msg_fmt = _("Flavor's disk is smaller than the minimum size specified in " "image metadata. Flavor disk is %(flavor_size)i bytes, " "minimum size is %(image_min_disk)i bytes.") class VolumeSmallerThanMinDisk(FlavorDiskTooSmall): msg_fmt = _("Volume is smaller than the minimum size specified in image " "metadata. Volume size is %(volume_size)i bytes, minimum " "size is %(image_min_disk)i bytes.") class InsufficientFreeMemory(NovaException): msg_fmt = _("Insufficient free memory on compute node to start %(uuid)s.") class NoValidHost(NovaException): msg_fmt = _("No valid host was found. %(reason)s") class MaxRetriesExceeded(NoValidHost): msg_fmt = _("Exceeded maximum number of retries. %(reason)s") class QuotaError(NovaException): msg_fmt = _("Quota exceeded: code=%(code)s") # NOTE(cyeoh): 413 should only be used for the ec2 API # The error status code for out of quota for the nova api should be # 403 Forbidden. 
code = 413 safe = True class TooManyInstances(QuotaError): msg_fmt = _("Quota exceeded for %(overs)s: Requested %(req)s," " but already used %(used)s of %(allowed)s %(overs)s") class FloatingIpLimitExceeded(QuotaError): msg_fmt = _("Maximum number of floating IPs exceeded") class FixedIpLimitExceeded(QuotaError): msg_fmt = _("Maximum number of fixed IPs exceeded") class MetadataLimitExceeded(QuotaError): msg_fmt = _("Maximum number of metadata items exceeds %(allowed)d") class OnsetFileLimitExceeded(QuotaError): msg_fmt = _("Personality file limit exceeded") class OnsetFilePathLimitExceeded(OnsetFileLimitExceeded): msg_fmt = _("Personality file path exceeds maximum %(allowed)s") class OnsetFileContentLimitExceeded(OnsetFileLimitExceeded): msg_fmt = _("Personality file content exceeds maximum %(allowed)s") class KeypairLimitExceeded(QuotaError): msg_fmt = _("Maximum number of key pairs exceeded") class SecurityGroupLimitExceeded(QuotaError): msg_fmt = _("Maximum number of security groups or rules exceeded") class PortLimitExceeded(QuotaError): msg_fmt = _("Maximum number of ports exceeded") class AggregateError(NovaException): msg_fmt = _("Aggregate %(aggregate_id)s: action '%(action)s' " "caused an error: %(reason)s.") class AggregateNotFound(NotFound): msg_fmt = _("Aggregate %(aggregate_id)s could not be found.") class AggregateNameExists(NovaException): msg_fmt = _("Aggregate %(aggregate_name)s already exists.") class AggregateHostNotFound(NotFound): msg_fmt = _("Aggregate %(aggregate_id)s has no host %(host)s.") class AggregateMetadataNotFound(NotFound): msg_fmt = _("Aggregate %(aggregate_id)s has no metadata with " "key %(metadata_key)s.") class AggregateHostExists(NovaException): msg_fmt = _("Aggregate %(aggregate_id)s already has host %(host)s.") class InstancePasswordSetFailed(NovaException): msg_fmt = _("Failed to set admin password on %(instance)s " "because %(reason)s") safe = True class InstanceNotFound(NotFound): msg_fmt = _("Instance %(instance_id)s could not be found.") class InstanceInfoCacheNotFound(NotFound): msg_fmt = _("Info cache for instance %(instance_uuid)s could not be " "found.") class MarkerNotFound(NotFound): msg_fmt = _("Marker %(marker)s could not be found.") class CouldNotFetchImage(NovaException): msg_fmt = _("Could not fetch image %(image_id)s") class CouldNotUploadImage(NovaException): msg_fmt = _("Could not upload image %(image_id)s") class TaskAlreadyRunning(NovaException): msg_fmt = _("Task %(task_name)s is already running on host %(host)s") class TaskNotRunning(NovaException): msg_fmt = _("Task %(task_name)s is not running on host %(host)s") class InstanceIsLocked(InstanceInvalidState): msg_fmt = _("Instance %(instance_uuid)s is locked") class ConfigDriveInvalidValue(Invalid): msg_fmt = _("Invalid value for Config Drive option: %(option)s") class ConfigDriveUnsupportedFormat(Invalid): msg_fmt = _("Config drive format '%(format)s' is not supported.") class ConfigDriveMountFailed(NovaException): msg_fmt = _("Could not mount vfat config drive. %(operation)s failed. " "Error: %(error)s") class ConfigDriveUnknownFormat(NovaException): msg_fmt = _("Unknown config drive format %(format)s. 
Select one of " "iso9660 or vfat.") class ConfigDriveNotFound(NotFound): msg_fmt = _("Instance %(instance_uuid)s requires config drive, but it " "does not exist.") class InterfaceAttachFailed(Invalid): msg_fmt = _("Failed to attach network adapter device to " "%(instance_uuid)s") class InterfaceAttachFailedNoNetwork(InterfaceAttachFailed): msg_fmt = _("No specific network was requested and none are available " "for project '%(project_id)s'.") class InterfaceDetachFailed(Invalid): msg_fmt = _("Failed to detach network adapter device from " "%(instance_uuid)s") class InstanceUserDataMalformed(NovaException): msg_fmt = _("User data needs to be valid base 64.") class InstanceUpdateConflict(NovaException): msg_fmt = _("Conflict updating instance %(instance_uuid)s. " "Expected: %(expected)s. Actual: %(actual)s") class UnknownInstanceUpdateConflict(InstanceUpdateConflict): msg_fmt = _("Conflict updating instance %(instance_uuid)s, but we were " "unable to determine the cause") class UnexpectedTaskStateError(InstanceUpdateConflict): pass class UnexpectedDeletingTaskStateError(UnexpectedTaskStateError): pass class InstanceActionNotFound(NovaException): msg_fmt = _("Action for request_id %(request_id)s on instance" " %(instance_uuid)s not found") class InstanceActionEventNotFound(NovaException): msg_fmt = _("Event %(event)s not found for action id %(action_id)s") class CryptoCAFileNotFound(FileNotFound): msg_fmt = _("The CA file for %(project)s could not be found") class CryptoCRLFileNotFound(FileNotFound): msg_fmt = _("The CRL file for %(project)s could not be found") class InstanceRecreateNotSupported(Invalid): msg_fmt = _('Instance recreate is not supported.') class DBNotAllowed(NovaException): msg_fmt = _('%(binary)s attempted direct database access which is ' 'not allowed by policy') class UnsupportedVirtType(Invalid): msg_fmt = _("Virtualization type '%(virt)s' is not supported by " "this compute driver") class UnsupportedHardware(Invalid): msg_fmt = _("Requested hardware '%(model)s' is not supported by " "the '%(virt)s' virt driver") class Base64Exception(NovaException): msg_fmt = _("Invalid Base 64 data for file %(path)s") class BuildAbortException(NovaException): msg_fmt = _("Build of instance %(instance_uuid)s aborted: %(reason)s") class RescheduledException(NovaException): msg_fmt = _("Build of instance %(instance_uuid)s was re-scheduled: " "%(reason)s") class ShadowTableExists(NovaException): msg_fmt = _("Shadow table with name %(name)s already exists.") class InstanceFaultRollback(NovaException): def __init__(self, inner_exception=None): message = _("Instance rollback performed due to: %s") self.inner_exception = inner_exception super(InstanceFaultRollback, self).__init__(message % inner_exception) class OrphanedObjectError(NovaException): msg_fmt = _('Cannot call %(method)s on orphaned %(objtype)s object') class ObjectActionError(NovaException): msg_fmt = _('Object action %(action)s failed because: %(reason)s') class AgentError(NovaException): msg_fmt = _('Error during following call to agent: %(method)s') class AgentTimeout(AgentError): msg_fmt = _('Unable to contact guest agent. 
' 'The following call timed out: %(method)s') class AgentNotImplemented(AgentError): msg_fmt = _('Agent does not support the call: %(method)s') class InstanceGroupNotFound(NotFound): msg_fmt = _("Instance group %(group_uuid)s could not be found.") class InstanceGroupIdExists(NovaException): msg_fmt = _("Instance group %(group_uuid)s already exists.") class InstanceGroupMemberNotFound(NotFound): msg_fmt = _("Instance group %(group_uuid)s has no member with " "id %(instance_id)s.") class InstanceGroupSaveException(NovaException): msg_fmt = _("%(field)s should not be part of the updates.") class ResourceMonitorError(NovaException): msg_fmt = _("Error when creating resource monitor: %(monitor)s") class PciDeviceWrongAddressFormat(NovaException): msg_fmt = _("The PCI address %(address)s has an incorrect format.") class PciDeviceInvalidDeviceName(NovaException): msg_fmt = _("Invalid PCI Whitelist: " "The PCI whitelist can specify devname or address," " but not both") class PciDeviceNotFoundById(NotFound): msg_fmt = _("PCI device %(id)s not found") class PciDeviceNotFound(NotFound): msg_fmt = _("PCI Device %(node_id)s:%(address)s not found.") class PciDeviceInvalidStatus(Invalid): msg_fmt = _( "PCI device %(compute_node_id)s:%(address)s is %(status)s " "instead of %(hopestatus)s") class PciDeviceVFInvalidStatus(Invalid): msg_fmt = _( "Not all Virtual Functions of PF %(compute_node_id)s:%(address)s " "are free.") class PciDevicePFInvalidStatus(Invalid): msg_fmt = _( "Physical Function %(compute_node_id)s:%(address)s, related to VF" " %(compute_node_id)s:%(vf_address)s is %(status)s " "instead of %(hopestatus)s") class PciDeviceInvalidOwner(Invalid): msg_fmt = _( "PCI device %(compute_node_id)s:%(address)s is owned by %(owner)s " "instead of %(hopeowner)s") class PciDeviceRequestFailed(NovaException): msg_fmt = _( "PCI device request %(requests)s failed") class PciDevicePoolEmpty(NovaException): msg_fmt = _( "Attempt to consume PCI device %(compute_node_id)s:%(address)s " "from empty pool") class PciInvalidAlias(Invalid): msg_fmt = _("Invalid PCI alias definition: %(reason)s") class PciRequestAliasNotDefined(NovaException): msg_fmt = _("PCI alias %(alias)s is not defined") class PciConfigInvalidWhitelist(Invalid): msg_fmt = _("Invalid PCI devices Whitelist config: %(reason)s") # Cannot be templated, msg needs to be constructed when raised. class InternalError(NovaException): """Generic hypervisor errors. Consider subclassing this to provide more specific exceptions. 
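An illustrative raise (the error text here is hypothetical, not a message nova itself emits)::

    raise exception.InternalError(err='QEMU monitor did not respond')
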
""" msg_fmt = "%(err)s" class PciDevicePrepareFailed(NovaException): msg_fmt = _("Failed to prepare PCI device %(id)s for instance " "%(instance_uuid)s: %(reason)s") class PciDeviceDetachFailed(NovaException): msg_fmt = _("Failed to detach PCI device %(dev)s: %(reason)s") class PciDeviceUnsupportedHypervisor(NovaException): msg_fmt = _("%(type)s hypervisor does not support PCI devices") class KeyManagerError(NovaException): msg_fmt = _("Key manager error: %(reason)s") class VolumesNotRemoved(Invalid): msg_fmt = _("Failed to remove volume(s): (%(reason)s)") class VolumeRebaseFailed(NovaException): msg_fmt = _("Volume rebase failed: %(reason)s") class InvalidVideoMode(Invalid): msg_fmt = _("Provided video model (%(model)s) is not supported.") class RngDeviceNotExist(Invalid): msg_fmt = _("The provided RNG device path: (%(path)s) is not " "present on the host.") class RequestedVRamTooHigh(NovaException): msg_fmt = _("The requested amount of video memory %(req_vram)d is higher " "than the maximum allowed by flavor %(max_vram)d.") class SecurityProxyNegotiationFailed(NovaException): msg_fmt = _("Failed to negotiate security type with server: %(reason)s") class RFBAuthHandshakeFailed(NovaException): msg_fmt = _("Failed to complete auth handshake: %(reason)s") class RFBAuthNoAvailableScheme(NovaException): msg_fmt = _("No matching auth scheme: allowed types: '%(allowed_types)s', " "desired types: '%(desired_types)s'") class InvalidWatchdogAction(Invalid): msg_fmt = _("Provided watchdog action (%(action)s) is not supported.") class LiveMigrationWithOldNovaNotSupported(NovaException): msg_fmt = _("Live migration with API v2.25 requires all the Mitaka " "upgrade to be complete before it is available.") class SelectionObjectsWithOldRPCVersionNotSupported(NovaException): msg_fmt = _("Requests for Selection objects with alternates are not " "supported in select_destinations() before RPC version 4.5; " "version %(version)s requested.") class LiveMigrationURINotAvailable(NovaException): msg_fmt = _('No live migration URI configured and no default available ' 'for "%(virt_type)s" hypervisor virtualization type.') class UnshelveException(NovaException): msg_fmt = _("Error during unshelve instance %(instance_id)s: %(reason)s") class ImageVCPULimitsRangeExceeded(Invalid): msg_fmt = _("Image vCPU limits %(sockets)d:%(cores)d:%(threads)d " "exceeds permitted %(maxsockets)d:%(maxcores)d:%(maxthreads)d") class ImageVCPUTopologyRangeExceeded(Invalid): msg_fmt = _("Image vCPU topology %(sockets)d:%(cores)d:%(threads)d " "exceeds permitted %(maxsockets)d:%(maxcores)d:%(maxthreads)d") class ImageVCPULimitsRangeImpossible(Invalid): msg_fmt = _("Requested vCPU limits %(sockets)d:%(cores)d:%(threads)d " "are impossible to satisfy for vcpus count %(vcpus)d") class InvalidArchitectureName(Invalid): msg_fmt = _("Architecture name '%(arch)s' is not recognised") class ImageNUMATopologyIncomplete(Invalid): msg_fmt = _("CPU and memory allocation must be provided for all " "NUMA nodes") class ImageNUMATopologyForbidden(Forbidden): msg_fmt = _("Image property '%(name)s' is not permitted to override " "NUMA configuration set against the flavor") class ImageNUMATopologyAsymmetric(Invalid): msg_fmt = _("Instance CPUs and/or memory cannot be evenly distributed " "across instance NUMA nodes. 
Explicit assignment of CPUs " "and memory to nodes is required") class ImageNUMATopologyCPUOutOfRange(Invalid): msg_fmt = _("CPU number %(cpunum)d is larger than max %(cpumax)d") class ImageNUMATopologyCPUDuplicates(Invalid): msg_fmt = _("CPU number %(cpunum)d is assigned to two nodes") class ImageNUMATopologyCPUsUnassigned(Invalid): msg_fmt = _("CPU number %(cpuset)s is not assigned to any node") class ImageNUMATopologyMemoryOutOfRange(Invalid): msg_fmt = _("%(memsize)d MB of memory assigned, but expected " "%(memtotal)d MB") class InvalidHostname(Invalid): msg_fmt = _("Invalid characters in hostname '%(hostname)s'") class NumaTopologyNotFound(NotFound): msg_fmt = _("Instance %(instance_uuid)s does not specify a NUMA topology") class MigrationContextNotFound(NotFound): msg_fmt = _("Instance %(instance_uuid)s does not specify a migration " "context.") class SocketPortRangeExhaustedException(NovaException): msg_fmt = _("Unable to acquire a free port for %(host)s") class SocketPortInUseException(NovaException): msg_fmt = _("Unable to bind %(host)s:%(port)d, %(error)s") class ImageSerialPortNumberInvalid(Invalid): msg_fmt = _("Number of serial ports '%(num_ports)s' specified in " "'%(property)s' isn't valid.") class ImageSerialPortNumberExceedFlavorValue(Invalid): msg_fmt = _("Forbidden to exceed flavor value of number of serial " "ports passed in image meta.") class SerialPortNumberLimitExceeded(Invalid): msg_fmt = _("Maximum number of serial ports exceeds %(allowed)d " "for %(virt_type)s") class InvalidImageConfigDrive(Invalid): msg_fmt = _("Image's config drive option '%(config_drive)s' is invalid") class InvalidHypervisorVirtType(Invalid): msg_fmt = _("Hypervisor virtualization type '%(hv_type)s' is not " "recognised") class InvalidVirtualMachineMode(Invalid): msg_fmt = _("Virtual machine mode '%(vmmode)s' is not recognised") class InvalidToken(Invalid): msg_fmt = _("The token '%(token)s' is invalid or has expired") class TokenInUse(Invalid): msg_fmt = _("The generated token is invalid") class InvalidConnectionInfo(Invalid): msg_fmt = _("Invalid Connection Info") class InstanceQuiesceNotSupported(Invalid): msg_fmt = _('Quiescing is not supported in instance %(instance_id)s') class InstanceAgentNotEnabled(Invalid): msg_fmt = _('Guest agent is not enabled for the instance') safe = True class QemuGuestAgentNotEnabled(InstanceAgentNotEnabled): msg_fmt = _('QEMU guest agent is not enabled') class SetAdminPasswdNotSupported(Invalid): msg_fmt = _('Set admin password is not supported') safe = True class MemoryPageSizeInvalid(Invalid): msg_fmt = _("Invalid memory page size '%(pagesize)s'") class MemoryPageSizeForbidden(Invalid): msg_fmt = _("Page size %(pagesize)s forbidden against '%(against)s'") class MemoryPageSizeNotSupported(Invalid): msg_fmt = _("Page size %(pagesize)s is not supported by the host.") class CPUPinningNotSupported(Invalid): msg_fmt = _("CPU pinning is not supported by the host: " "%(reason)s") class CPUPinningInvalid(Invalid): msg_fmt = _("CPU set to pin %(requested)s must be a subset of " "free CPU set %(free)s") class CPUUnpinningInvalid(Invalid): msg_fmt = _("CPU set to unpin %(requested)s must be a subset of " "pinned CPU set %(pinned)s") class CPUPinningUnknown(Invalid): msg_fmt = _("CPU set to pin %(requested)s must be a subset of " "known CPU set %(cpuset)s") class CPUUnpinningUnknown(Invalid): msg_fmt = _("CPU set to unpin %(requested)s must be a subset of " "known CPU set %(cpuset)s") class ImageCPUPinningForbidden(Forbidden): msg_fmt = _("Image property 
'hw_cpu_policy' is not permitted to override " "CPU pinning policy set against the flavor") class ImageCPUThreadPolicyForbidden(Forbidden): msg_fmt = _("Image property 'hw_cpu_thread_policy' is not permitted to " "override CPU thread pinning policy set against the flavor") class UnsupportedPolicyException(Invalid): msg_fmt = _("ServerGroup policy is not supported: %(reason)s") class CellMappingNotFound(NotFound): msg_fmt = _("Cell %(uuid)s has no mapping.") class NUMATopologyUnsupported(Invalid): msg_fmt = _("Host does not support guests with NUMA topology set") class MemoryPagesUnsupported(Invalid): msg_fmt = _("Host does not support guests with custom memory page sizes") class InvalidImageFormat(Invalid): msg_fmt = _("Invalid image format '%(format)s'") class UnsupportedImageModel(Invalid): msg_fmt = _("Image model '%(image)s' is not supported") class HostMappingNotFound(Invalid): msg_fmt = _("Host '%(name)s' is not mapped to any cell") class RealtimeConfigurationInvalid(Invalid): msg_fmt = _("Cannot set realtime policy in a non dedicated " "cpu pinning policy") class CPUThreadPolicyConfigurationInvalid(Invalid): msg_fmt = _("Cannot set cpu thread pinning policy in a non dedicated " "cpu pinning policy") class RequestSpecNotFound(NotFound): msg_fmt = _("RequestSpec not found for instance %(instance_uuid)s") class UEFINotSupported(Invalid): msg_fmt = _("UEFI is not supported") class TriggerCrashDumpNotSupported(Invalid): msg_fmt = _("Triggering crash dump is not supported") class UnsupportedHostCPUControlPolicy(Invalid): msg_fmt = _("Requested CPU control policy not supported by host") class LibguestfsCannotReadKernel(Invalid): msg_fmt = _("Libguestfs does not have permission to read host kernel.") class MaxDBRetriesExceeded(NovaException): msg_fmt = _("Max retries of DB transaction exceeded attempting to " "perform %(action)s.") class RealtimePolicyNotSupported(Invalid): msg_fmt = _("Realtime policy not supported by hypervisor") class RealtimeMaskNotFoundOrInvalid(Invalid): msg_fmt = _("Realtime policy needs vCPU(s) mask configured with at least " "1 RT vCPU and 1 ordinary vCPU. See hw:cpu_realtime_mask " "or hw_cpu_realtime_mask") class OsInfoNotFound(NotFound): msg_fmt = _("No configuration information found for operating system " "%(os_name)s") class BuildRequestNotFound(NotFound): msg_fmt = _("BuildRequest not found for instance %(uuid)s") class AttachInterfaceNotSupported(Invalid): msg_fmt = _("Attaching interfaces is not supported for " "instance %(instance_uuid)s.") class InstanceDiagnosticsNotSupported(Invalid): msg_fmt = _("Instance diagnostics are not supported by compute node.") class InvalidReservedMemoryPagesOption(Invalid): msg_fmt = _("The format of the option 'reserved_huge_pages' is invalid. " "(found '%(conf)s') Please refer to the nova " "config-reference.") class ConcurrentUpdateDetected(NovaException): msg_fmt = _("Another thread concurrently updated the data. " "Please retry your update") class ResourceClassNotFound(NotFound): msg_fmt = _("No such resource class %(resource_class)s.") class CannotDeleteParentResourceProvider(NovaException): msg_fmt = _("Cannot delete resource provider that is a parent of " "another. 
Delete child providers first.") class ResourceProviderInUse(NovaException): msg_fmt = _("Resource provider has allocations.") class ResourceProviderRetrievalFailed(NovaException): msg_fmt = _("Failed to get resource provider with UUID %(uuid)s") class ResourceProviderAggregateRetrievalFailed(NovaException): msg_fmt = _("Failed to get aggregates for resource provider with UUID" " %(uuid)s") class ResourceProviderTraitRetrievalFailed(NovaException): msg_fmt = _("Failed to get traits for resource provider with UUID" " %(uuid)s") class ResourceProviderCreationFailed(NovaException): msg_fmt = _("Failed to create resource provider %(name)s") class ResourceProviderDeletionFailed(NovaException): msg_fmt = _("Failed to delete resource provider %(uuid)s") class ResourceProviderUpdateFailed(NovaException): msg_fmt = _("Failed to update resource provider via URL %(url)s: " "%(error)s") class PlacementAPIConflict(NovaException): """Any 409 error from placement APIs should use (a subclass of) this exception. """ msg_fmt = _("A conflict was encountered attempting to invoke the " "placement API at URL %(url)s: %(error)s") class ResourceProviderUpdateConflict(PlacementAPIConflict): """A 409 caused by generation mismatch from attempting to update an existing provider record or its associated data (aggregates, traits, etc.). """ msg_fmt = _("A conflict was encountered attempting to update resource " "provider %(uuid)s (generation %(generation)d): %(error)s") class InventoryWithResourceClassNotFound(NotFound): msg_fmt = _("No inventory of class %(resource_class)s found.") class InvalidResourceClass(Invalid): msg_fmt = _("Resource class '%(resource_class)s' invalid.") class ResourceClassExists(NovaException): msg_fmt = _("Resource class %(resource_class)s already exists.") class ResourceClassInUse(Invalid): msg_fmt = _("Cannot delete resource class %(resource_class)s. " "Class is in use in inventory.") class ResourceClassCannotDeleteStandard(Invalid): msg_fmt = _("Cannot delete standard resource class %(resource_class)s.") class ResourceClassCannotUpdateStandard(Invalid): msg_fmt = _("Cannot update standard resource class %(resource_class)s.") class InvalidResourceAmount(Invalid): msg_fmt = _("Resource amounts must be integers. Received '%(amount)s'.") class InvalidInventory(Invalid): msg_fmt = _("Inventory for '%(resource_class)s' on " "resource provider '%(resource_provider)s' invalid.") class InventoryInUse(InvalidInventory): # NOTE(mriedem): This message cannot change without impacting the # nova.scheduler.client.report._RE_INV_IN_USE regex. msg_fmt = _("Inventory for '%(resource_classes)s' on " "resource provider '%(resource_provider)s' in use.") class InvalidInventoryCapacity(InvalidInventory): msg_fmt = _("Invalid inventory for '%(resource_class)s' on " "resource provider '%(resource_provider)s'. " "The reserved value is greater than or equal to total.") class InvalidAllocationCapacityExceeded(InvalidInventory): msg_fmt = _("Unable to create allocation for '%(resource_class)s' on " "resource provider '%(resource_provider)s'. The requested " "amount would exceed the capacity.") class InvalidAllocationConstraintsViolated(InvalidInventory): msg_fmt = _("Unable to create allocation for '%(resource_class)s' on " "resource provider '%(resource_provider)s'. 
The requested " "amount would violate inventory constraints.") class UnsupportedPointerModelRequested(Invalid): msg_fmt = _("Pointer model '%(model)s' requested is not supported by " "host.") class NotSupportedWithOption(Invalid): msg_fmt = _("%(operation)s is not supported in conjunction with the " "current %(option)s setting. Please refer to the nova " "config-reference.") class Unauthorized(NovaException): msg_fmt = _("Not authorized.") code = 401 class NeutronAdminCredentialConfigurationInvalid(Invalid): msg_fmt = _("Networking client is experiencing an unauthorized exception.") class PlacementNotConfigured(NovaException): msg_fmt = _("This compute is not configured to talk to the placement " "service. Configure the [placement] section of nova.conf " "and restart the service.") class InvalidEmulatorThreadsPolicy(Invalid): msg_fmt = _("CPU emulator threads option requested is invalid, " "given: '%(requested)s', available: '%(available)s'.") class BadRequirementEmulatorThreadsPolicy(Invalid): msg_fmt = _("An isolated CPU emulator threads option requires a dedicated " "CPU policy option.") class PowerVMAPIFailed(NovaException): msg_fmt = _("PowerVM API failed to complete for instance=%(inst_name)s. " "%(reason)s") class TraitNotFound(NotFound): msg_fmt = _("No such trait(s): %(names)s.") class TraitExists(NovaException): msg_fmt = _("The Trait %(name)s already exists") class TraitCannotDeleteStandard(Invalid): msg_fmt = _("Cannot delete standard trait %(name)s.") class TraitInUse(Invalid): msg_fmt = _("The trait %(name)s is in use by a resource provider.") class TraitRetrievalFailed(NovaException): msg_fmt = _("Failed to retrieve traits from the placement API: %(error)s") class TraitCreationFailed(NovaException): msg_fmt = _("Failed to create trait %(name)s: %(error)s") class CannotMigrateWithTargetHost(NovaException): msg_fmt = _("Cannot migrate with target host. Retry without a host " "specified.") class CannotMigrateToSameHost(NovaException): msg_fmt = _("Cannot migrate to the host where the server exists.") nova-17.0.1/nova/keymgr/0000775000175000017500000000000013250073472015031 5ustar zuulzuul00000000000000nova-17.0.1/nova/keymgr/__init__.py0000666000175000017500000000000013250073126017126 0ustar zuulzuul00000000000000nova-17.0.1/nova/keymgr/conf_key_mgr.py0000666000175000017500000001157213250073126020051 0ustar zuulzuul00000000000000# Copyright (c) 2013 The Johns Hopkins University/Applied Physics Laboratory # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ An implementation of a key manager that reads its key from the project's configuration options. This key manager implementation provides limited security, assuming that the key remains secret. Using the volume encryption feature as an example, encryption provides protection against a lost or stolen disk, assuming that the configuration file that contains the key is not stored on the disk. 
Encryption also protects the confidentiality of data as it is transmitted via iSCSI from the compute host to the storage host (again assuming that an attacker who intercepts the data does not know the secret key). Because this implementation uses a single, fixed key, it proffers no protection once that key is compromised. In particular, different volumes encrypted with a key provided by this key manager actually share the same encryption key so *any* volume can be decrypted once the fixed key is known. """ import binascii from castellan.common.objects import symmetric_key as key from castellan.key_manager import key_manager from oslo_log import log as logging import nova.conf from nova import exception from nova.i18n import _ CONF = nova.conf.CONF LOG = logging.getLogger(__name__) class ConfKeyManager(key_manager.KeyManager): """This key manager implementation supports all the methods specified by the key manager interface. This implementation creates a single key in response to all invocations of create_key. Side effects (e.g., raising exceptions) for each method are handled as specified by the key manager interface. """ def __init__(self, configuration): LOG.warning('This key manager is insecure and is not recommended ' 'for production deployments') super(ConfKeyManager, self).__init__(configuration) self.key_id = '00000000-0000-0000-0000-000000000000' self.conf = CONF if configuration is None else configuration if CONF.key_manager.fixed_key is None: raise ValueError(_('key_manager.fixed_key not defined')) self._hex_key = CONF.key_manager.fixed_key def _get_key(self): key_bytes = bytes(binascii.unhexlify(self._hex_key)) return key.SymmetricKey('AES', len(key_bytes) * 8, key_bytes) def create_key(self, context, algorithm, length, **kwargs): """Creates a symmetric key. This implementation returns a UUID for the key read from the configuration file. A Forbidden exception is raised if the specified context is None. """ if context is None: raise exception.Forbidden() return self.key_id def create_key_pair(self, context, **kwargs): raise NotImplementedError( "ConfKeyManager does not support asymmetric keys") def store(self, context, managed_object, **kwargs): """Stores (i.e., registers) a key with the key manager.""" if context is None: raise exception.Forbidden() if managed_object != self._get_key(): raise exception.KeyManagerError( reason="cannot store arbitrary keys") return self.key_id def get(self, context, managed_object_id): """Retrieves the key identified by the specified id. This implementation returns the key that is associated with the specified UUID. A Forbidden exception is raised if the specified context is None; a KeyError is raised if the UUID is invalid. """ if context is None: raise exception.Forbidden() if managed_object_id != self.key_id: raise KeyError(str(managed_object_id) + " != " + str(self.key_id)) return self._get_key() def delete(self, context, managed_object_id): """Represents deleting the key. Because the ConfKeyManager has only one key, which is read from the configuration file, the key is not actually deleted when this is called. 
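A warning is logged instead and the call returns as if it had succeeded.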
""" if context is None: raise exception.Forbidden() if managed_object_id != self.key_id: raise exception.KeyManagerError( reason="cannot delete non-existent key") LOG.warning("Not deleting key %s", managed_object_id) nova-17.0.1/nova/common/0000775000175000017500000000000013250073471015022 5ustar zuulzuul00000000000000nova-17.0.1/nova/common/__init__.py0000666000175000017500000000000013250073126017120 0ustar zuulzuul00000000000000nova-17.0.1/nova/common/config.py0000666000175000017500000000256713250073126016652 0ustar zuulzuul00000000000000# Copyright 2016 Hewlett Packard Enterprise Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_middleware import cors def set_middleware_defaults(): """Update default configuration options for oslo.middleware.""" cors.set_defaults( allow_headers=['X-Auth-Token', 'X-Openstack-Request-Id', 'X-Identity-Status', 'X-Roles', 'X-Service-Catalog', 'X-User-Id', 'X-Tenant-Id'], expose_headers=['X-Auth-Token', 'X-Openstack-Request-Id', 'X-Subject-Token', 'X-Service-Token'], allow_methods=['GET', 'PUT', 'POST', 'DELETE', 'PATCH'] ) nova-17.0.1/nova/filters.py0000666000175000017500000001304213250073126015553 0ustar zuulzuul00000000000000# Copyright (c) 2011-2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Filter support """ from oslo_log import log as logging from nova.i18n import _LI from nova import loadables LOG = logging.getLogger(__name__) class BaseFilter(object): """Base class for all filter classes.""" def _filter_one(self, obj, spec_obj): """Return True if it passes the filter, False otherwise. Override this in a subclass. """ return True def filter_all(self, filter_obj_list, spec_obj): """Yield objects that pass the filter. Can be overridden in a subclass, if you need to base filtering decisions on all objects. Otherwise, one can just override _filter_one() to filter a single object. """ for obj in filter_obj_list: if self._filter_one(obj, spec_obj): yield obj # Set to true in a subclass if a filter only needs to be run once # for each request rather than for each instance run_filter_once_per_request = False def run_filter_for_index(self, index): """Return True if the filter needs to be run for the "index-th" instance in a request. Only need to override this if a filter needs anything other than "first only" or "all" behaviour. """ if self.run_filter_once_per_request and index > 0: return False else: return True class BaseFilterHandler(loadables.BaseLoader): """Base class to handle loading filter classes. 
This class should be subclassed where one needs to use filters. """ def get_filtered_objects(self, filters, objs, spec_obj, index=0): list_objs = list(objs) LOG.debug("Starting with %d host(s)", len(list_objs)) # Track the hosts as they are removed. The 'full_filter_results' list # contains the host/nodename info for every host that passes each # filter, while the 'part_filter_results' list just tracks the number # removed by each filter, unless the filter returns zero hosts, in # which case it records the host/nodename for the last batch that was # removed. Since the full_filter_results can be very large, it is only # recorded if the LOG level is set to debug. part_filter_results = [] full_filter_results = [] log_msg = "%(cls_name)s: (start: %(start)s, end: %(end)s)" for filter_ in filters: if filter_.run_filter_for_index(index): cls_name = filter_.__class__.__name__ start_count = len(list_objs) objs = filter_.filter_all(list_objs, spec_obj) if objs is None: LOG.debug("Filter %s says to stop filtering", cls_name) return list_objs = list(objs) end_count = len(list_objs) part_filter_results.append(log_msg % {"cls_name": cls_name, "start": start_count, "end": end_count}) if list_objs: remaining = [(getattr(obj, "host", obj), getattr(obj, "nodename", "")) for obj in list_objs] full_filter_results.append((cls_name, remaining)) else: LOG.info(_LI("Filter %s returned 0 hosts"), cls_name) full_filter_results.append((cls_name, None)) break LOG.debug("Filter %(cls_name)s returned " "%(obj_len)d host(s)", {'cls_name': cls_name, 'obj_len': len(list_objs)}) if not list_objs: # Log the filtration history # NOTE(sbauza): Since the Cells scheduler still provides a legacy # dictionary for filter_props, and since we agreed on not modifying # the Cells scheduler to support that because of Cells v2, we # prefer to define a compatible way to address both types if isinstance(spec_obj, dict): rspec = spec_obj.get("request_spec", {}) inst_props = rspec.get("instance_properties", {}) inst_uuid = inst_props.get("uuid", "") else: inst_uuid = spec_obj.instance_uuid msg_dict = {"inst_uuid": inst_uuid, "str_results": str(full_filter_results), } full_msg = ("Filtering removed all hosts for the request with " "instance ID " "'%(inst_uuid)s'. Filter results: %(str_results)s" ) % msg_dict msg_dict["str_results"] = str(part_filter_results) part_msg = _LI("Filtering removed all hosts for the request with " "instance ID " "'%(inst_uuid)s'. Filter results: %(str_results)s" ) % msg_dict LOG.debug(full_msg) LOG.info(part_msg) return list_objs nova-17.0.1/nova/api/0000775000175000017500000000000013250073471014303 5ustar zuulzuul00000000000000nova-17.0.1/nova/api/metadata/0000775000175000017500000000000013250073471016063 5ustar zuulzuul00000000000000nova-17.0.1/nova/api/metadata/base.py0000666000175000017500000006637413250073126017366 0ustar zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. """Instance Metadata information.""" import os import posixpath from oslo_log import log as logging from oslo_serialization import base64 from oslo_serialization import jsonutils from oslo_utils import timeutils import six from nova.api.ec2 import ec2utils from nova.api.metadata import password from nova.api.metadata import vendordata_dynamic from nova.api.metadata import vendordata_json from nova import block_device from nova.cells import opts as cells_opts from nova.cells import rpcapi as cells_rpcapi import nova.conf from nova import context from nova import exception from nova import network from nova.network.security_group import openstack_driver from nova import objects from nova.objects import virt_device_metadata as metadata_obj from nova import utils from nova.virt import netutils CONF = nova.conf.CONF VERSIONS = [ '1.0', '2007-01-19', '2007-03-01', '2007-08-29', '2007-10-10', '2007-12-15', '2008-02-01', '2008-09-01', '2009-04-04', ] # NOTE(mikal): think of these strings as version numbers. They traditionally # correlate with OpenStack release dates, with all the changes for a given # release bundled into a single version. Note that versions in the future are # hidden from the listing, but can still be requested explicitly, which is # required for testing purposes. We know this isn't great, but its inherited # from EC2, which this needs to be compatible with. FOLSOM = '2012-08-10' GRIZZLY = '2013-04-04' HAVANA = '2013-10-17' LIBERTY = '2015-10-15' NEWTON_ONE = '2016-06-30' NEWTON_TWO = '2016-10-06' OCATA = '2017-02-22' OPENSTACK_VERSIONS = [ FOLSOM, GRIZZLY, HAVANA, LIBERTY, NEWTON_ONE, NEWTON_TWO, OCATA, ] VERSION = "version" CONTENT = "content" CONTENT_DIR = "content" MD_JSON_NAME = "meta_data.json" VD_JSON_NAME = "vendor_data.json" VD2_JSON_NAME = "vendor_data2.json" NW_JSON_NAME = "network_data.json" UD_NAME = "user_data" PASS_NAME = "password" MIME_TYPE_TEXT_PLAIN = "text/plain" MIME_TYPE_APPLICATION_JSON = "application/json" LOG = logging.getLogger(__name__) class InvalidMetadataVersion(Exception): pass class InvalidMetadataPath(Exception): pass class InstanceMetadata(object): """Instance metadata.""" def __init__(self, instance, address=None, content=None, extra_md=None, network_info=None, network_metadata=None, request_context=None): """Creation of this object should basically cover all time consuming collection. Methods after that should not cause time delays due to network operations or lengthy cpu operations. The user should then get a single instance and make multiple method calls on it. """ if not content: content = [] ctxt = context.get_admin_context() # NOTE(danms): Sanitize the instance to limit the amount of stuff # inside that may not pickle well (i.e. context). We also touch # some of the things we'll lazy load later to make sure we keep their # values in what we cache. 
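# Touching ec2_ids, keypairs and device_metadata below forces their
# lazy-load before the primitive round trip, which keeps those loaded
# fields while dropping unpicklable state such as the context.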
instance.ec2_ids instance.keypairs instance.device_metadata instance = objects.Instance.obj_from_primitive( instance.obj_to_primitive()) # The default value of mimeType is set to MIME_TYPE_TEXT_PLAIN self.set_mimetype(MIME_TYPE_TEXT_PLAIN) self.instance = instance self.extra_md = extra_md self.availability_zone = instance.get('availability_zone') secgroup_api = openstack_driver.get_openstack_security_group_driver() self.security_groups = secgroup_api.get_instance_security_groups( ctxt, instance) self.mappings = _format_instance_mapping(ctxt, instance) if instance.user_data is not None: self.userdata_raw = base64.decode_as_bytes(instance.user_data) else: self.userdata_raw = None self.address = address # expose instance metadata. self.launch_metadata = utils.instance_meta(instance) self.password = password.extract_password(instance) self.uuid = instance.uuid self.content = {} self.files = [] # get network info, and the rendered network template if network_info is None: network_info = instance.info_cache.network_info # expose network metadata if network_metadata is None: self.network_metadata = netutils.get_network_metadata(network_info) else: self.network_metadata = network_metadata self.ip_info = \ ec2utils.get_ip_info_for_instance_from_nw_info(network_info) self.network_config = None cfg = netutils.get_injected_network_template(network_info) if cfg: key = "%04i" % len(self.content) self.content[key] = cfg self.network_config = {"name": "network_config", 'content_path': "/%s/%s" % (CONTENT_DIR, key)} # 'content' is passed in from the configdrive code in # nova/virt/libvirt/driver.py. That's how we get the injected files # (personalities) in. AFAIK they're not stored in the db at all, # so are not available later (web service metadata time). for (path, contents) in content: key = "%04i" % len(self.content) self.files.append({'path': path, 'content_path': "/%s/%s" % (CONTENT_DIR, key)}) self.content[key] = contents self.route_configuration = None # NOTE(mikal): the decision to not pass extra_md here like we # do to the StaticJSON driver is deliberate. extra_md will # contain the admin password for the instance, and we shouldn't # pass that to external services. 
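# Two provider flavours are wired up here: StaticJSON serves values
# from a local JSON file, while DynamicJSON calls out to external REST
# services per request, which is why extra_md must stay out of it.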
self.vendordata_providers = { 'StaticJSON': vendordata_json.JsonFileVendorData( instance=instance, address=address, extra_md=extra_md, network_info=network_info), 'DynamicJSON': vendordata_dynamic.DynamicVendorData( instance=instance, address=address, network_info=network_info, context=request_context) } def _route_configuration(self): if self.route_configuration: return self.route_configuration path_handlers = {UD_NAME: self._user_data, PASS_NAME: self._password, VD_JSON_NAME: self._vendor_data, VD2_JSON_NAME: self._vendor_data2, MD_JSON_NAME: self._metadata_as_json, NW_JSON_NAME: self._network_data, VERSION: self._handle_version, CONTENT: self._handle_content} self.route_configuration = RouteConfiguration(path_handlers) return self.route_configuration def set_mimetype(self, mime_type): self.md_mimetype = mime_type def get_mimetype(self): return self.md_mimetype def get_ec2_metadata(self, version): if version == "latest": version = VERSIONS[-1] if version not in VERSIONS: raise InvalidMetadataVersion(version) hostname = self._get_hostname() floating_ips = self.ip_info['floating_ips'] floating_ip = floating_ips and floating_ips[0] or '' fixed_ips = self.ip_info['fixed_ips'] fixed_ip = fixed_ips and fixed_ips[0] or '' fmt_sgroups = [x['name'] for x in self.security_groups] meta_data = { 'ami-id': self.instance.ec2_ids.ami_id, 'ami-launch-index': self.instance.launch_index, 'ami-manifest-path': 'FIXME', 'instance-id': self.instance.ec2_ids.instance_id, 'hostname': hostname, 'local-ipv4': fixed_ip or self.address, 'reservation-id': self.instance.reservation_id, 'security-groups': fmt_sgroups} # public keys are strangely rendered in ec2 metadata service # meta-data/public-keys/ returns '0=keyname' (with no trailing /) # and only if there is a public key given. 
# '0=keyname' means there is a normally rendered dict at # meta-data/public-keys/0 # # meta-data/public-keys/ : '0=%s' % keyname # meta-data/public-keys/0/ : 'openssh-key' # meta-data/public-keys/0/openssh-key : '%s' % publickey if self.instance.key_name: meta_data['public-keys'] = { '0': {'_name': "0=" + self.instance.key_name, 'openssh-key': self.instance.key_data}} if self._check_version('2007-01-19', version): meta_data['local-hostname'] = hostname meta_data['public-hostname'] = hostname meta_data['public-ipv4'] = floating_ip if False and self._check_version('2007-03-01', version): # TODO(vish): store product codes meta_data['product-codes'] = [] if self._check_version('2007-08-29', version): instance_type = self.instance.get_flavor() meta_data['instance-type'] = instance_type['name'] if False and self._check_version('2007-10-10', version): # TODO(vish): store ancestor ids meta_data['ancestor-ami-ids'] = [] if self._check_version('2007-12-15', version): meta_data['block-device-mapping'] = self.mappings if self.instance.ec2_ids.kernel_id: meta_data['kernel-id'] = self.instance.ec2_ids.kernel_id if self.instance.ec2_ids.ramdisk_id: meta_data['ramdisk-id'] = self.instance.ec2_ids.ramdisk_id if self._check_version('2008-02-01', version): meta_data['placement'] = {'availability-zone': self.availability_zone} if self._check_version('2008-09-01', version): meta_data['instance-action'] = 'none' data = {'meta-data': meta_data} if self.userdata_raw is not None: data['user-data'] = self.userdata_raw return data def get_ec2_item(self, path_tokens): # get_ec2_metadata returns dict without top level version data = self.get_ec2_metadata(path_tokens[0]) return find_path_in_tree(data, path_tokens[1:]) def get_openstack_item(self, path_tokens): if path_tokens[0] == CONTENT_DIR: return self._handle_content(path_tokens) return self._route_configuration().handle_path(path_tokens) def _metadata_as_json(self, version, path): metadata = {'uuid': self.uuid} if self.launch_metadata: metadata['meta'] = self.launch_metadata if self.files: metadata['files'] = self.files if self.extra_md: metadata.update(self.extra_md) if self.network_config: metadata['network_config'] = self.network_config if self.instance.key_name: if cells_opts.get_cell_type() == 'compute': cells_api = cells_rpcapi.CellsAPI() try: keypair = cells_api.get_keypair_at_top( context.get_admin_context(), self.instance.user_id, self.instance.key_name) except exception.KeypairNotFound: # NOTE(lpigueir): If keypair was deleted, treat # it like it never had any keypair = None else: keypairs = self.instance.keypairs # NOTE(mriedem): It's possible for the keypair to be deleted # before it was migrated to the instance_extra table, in which # case lazy-loading instance.keypairs will handle the 404 and # just set an empty KeyPairList object on the instance. 
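# Only the keypair named at boot is tracked on the instance, so the
# first entry (if any) is the one matching instance.key_name.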
keypair = keypairs[0] if keypairs else None if keypair: metadata['public_keys'] = { keypair.name: keypair.public_key, } metadata['keys'] = [ {'name': keypair.name, 'type': keypair.type, 'data': keypair.public_key} ] else: LOG.debug("Unable to find keypair for instance with " "key name '%s'.", self.instance.key_name, instance=self.instance) metadata['hostname'] = self._get_hostname() metadata['name'] = self.instance.display_name metadata['launch_index'] = self.instance.launch_index metadata['availability_zone'] = self.availability_zone if self._check_os_version(GRIZZLY, version): metadata['random_seed'] = base64.encode_as_text(os.urandom(512)) if self._check_os_version(LIBERTY, version): metadata['project_id'] = self.instance.project_id if self._check_os_version(NEWTON_ONE, version): metadata['devices'] = self._get_device_metadata(version) self.set_mimetype(MIME_TYPE_APPLICATION_JSON) return jsonutils.dump_as_bytes(metadata) def _get_device_metadata(self, version): """Build a device metadata dict based on the metadata objects. This is done here in the metadata API as opposed to in the objects themselves because the metadata dict is part of the guest API and thus must be controlled. """ device_metadata_list = [] vif_vlans_supported = self._check_os_version(OCATA, version) if self.instance.device_metadata is not None: for device in self.instance.device_metadata.devices: device_metadata = {} bus = 'none' address = 'none' if 'bus' in device: # TODO(artom/mriedem) It would be nice if we had something # more generic, like a type identifier or something, built # into these types of objects, like a get_meta_type() # abstract method on the base DeviceBus class. if isinstance(device.bus, metadata_obj.PCIDeviceBus): bus = 'pci' elif isinstance(device.bus, metadata_obj.USBDeviceBus): bus = 'usb' elif isinstance(device.bus, metadata_obj.SCSIDeviceBus): bus = 'scsi' elif isinstance(device.bus, metadata_obj.IDEDeviceBus): bus = 'ide' elif isinstance(device.bus, metadata_obj.XenDeviceBus): bus = 'xen' else: LOG.debug('Metadata for device with unknown bus %s ' 'has not been included in the ' 'output', device.bus.__class__.__name__) continue if 'address' in device.bus: address = device.bus.address if isinstance(device, metadata_obj.NetworkInterfaceMetadata): vlan = None if vif_vlans_supported and 'vlan' in device: vlan = device.vlan # Skip devices without tags on versions that # don't support vlans if not (vlan or 'tags' in device): continue device_metadata['type'] = 'nic' device_metadata['mac'] = device.mac if vlan: device_metadata['vlan'] = vlan elif isinstance(device, metadata_obj.DiskMetadata): device_metadata['type'] = 'disk' # serial and path are optional parameters if 'serial' in device: device_metadata['serial'] = device.serial if 'path' in device: device_metadata['path'] = device.path else: LOG.debug('Metadata for device of unknown type %s has not ' 'been included in the ' 'output', device.__class__.__name__) continue device_metadata['bus'] = bus device_metadata['address'] = address if 'tags' in device: device_metadata['tags'] = device.tags device_metadata_list.append(device_metadata) return device_metadata_list def _handle_content(self, path_tokens): if len(path_tokens) == 1: raise KeyError("no listing for %s" % "/".join(path_tokens)) if len(path_tokens) != 2: raise KeyError("Too many tokens for /%s" % CONTENT_DIR) return self.content[path_tokens[1]] def _handle_version(self, version, path): # request for /version, give a list of what is available ret = [MD_JSON_NAME] if self.userdata_raw is not 
None: ret.append(UD_NAME) if self._check_os_version(GRIZZLY, version): ret.append(PASS_NAME) if self._check_os_version(HAVANA, version): ret.append(VD_JSON_NAME) if self._check_os_version(LIBERTY, version): ret.append(NW_JSON_NAME) if self._check_os_version(NEWTON_TWO, version): ret.append(VD2_JSON_NAME) return ret def _user_data(self, version, path): if self.userdata_raw is None: raise KeyError(path) return self.userdata_raw def _network_data(self, version, path): if self.network_metadata is None: return jsonutils.dump_as_bytes({}) return jsonutils.dump_as_bytes(self.network_metadata) def _password(self, version, path): if self._check_os_version(GRIZZLY, version): return password.handle_password raise KeyError(path) def _vendor_data(self, version, path): if self._check_os_version(HAVANA, version): self.set_mimetype(MIME_TYPE_APPLICATION_JSON) if (CONF.api.vendordata_providers and 'StaticJSON' in CONF.api.vendordata_providers): return jsonutils.dump_as_bytes( self.vendordata_providers['StaticJSON'].get()) raise KeyError(path) def _vendor_data2(self, version, path): if self._check_os_version(NEWTON_TWO, version): self.set_mimetype(MIME_TYPE_APPLICATION_JSON) j = {} for provider in CONF.api.vendordata_providers: if provider == 'StaticJSON': j['static'] = self.vendordata_providers['StaticJSON'].get() else: values = self.vendordata_providers[provider].get() for key in list(values): if key in j: LOG.warning('Removing duplicate metadata key: %s', key, instance=self.instance) del values[key] j.update(values) return jsonutils.dump_as_bytes(j) raise KeyError(path) def _check_version(self, required, requested, versions=VERSIONS): return versions.index(requested) >= versions.index(required) def _check_os_version(self, required, requested): return self._check_version(required, requested, OPENSTACK_VERSIONS) def _get_hostname(self): return "%s%s%s" % (self.instance.hostname, '.' if CONF.dhcp_domain else '', CONF.dhcp_domain) def lookup(self, path): if path == "" or path[0] != "/": path = posixpath.normpath("/" + path) else: path = posixpath.normpath(path) # Set default mimeType. 
It will be modified only if there is a change self.set_mimetype(MIME_TYPE_TEXT_PLAIN) # fix up requests, prepending /ec2 to anything that does not match path_tokens = path.split('/')[1:] if path_tokens[0] not in ("ec2", "openstack"): if path_tokens[0] == "": # request for / path_tokens = ["ec2"] else: path_tokens = ["ec2"] + path_tokens path = "/" + "/".join(path_tokens) # all values of 'path' input starts with '/' and have no trailing / # specifically handle the top level request if len(path_tokens) == 1: if path_tokens[0] == "openstack": # NOTE(vish): don't show versions that are in the future today = timeutils.utcnow().strftime("%Y-%m-%d") versions = [v for v in OPENSTACK_VERSIONS if v <= today] if OPENSTACK_VERSIONS != versions: LOG.debug("future versions %s hidden in version list", [v for v in OPENSTACK_VERSIONS if v not in versions], instance=self.instance) versions += ["latest"] else: versions = VERSIONS + ["latest"] return versions try: if path_tokens[0] == "openstack": data = self.get_openstack_item(path_tokens[1:]) else: data = self.get_ec2_item(path_tokens[1:]) except (InvalidMetadataVersion, KeyError): raise InvalidMetadataPath(path) return data def metadata_for_config_drive(self): """Yields (path, value) tuples for metadata elements.""" # EC2 style metadata for version in VERSIONS + ["latest"]: if version in CONF.api.config_drive_skip_versions.split(' '): continue data = self.get_ec2_metadata(version) if 'user-data' in data: filepath = os.path.join('ec2', version, 'user-data') yield (filepath, data['user-data']) del data['user-data'] try: del data['public-keys']['0']['_name'] except KeyError: pass filepath = os.path.join('ec2', version, 'meta-data.json') yield (filepath, jsonutils.dump_as_bytes(data['meta-data'])) ALL_OPENSTACK_VERSIONS = OPENSTACK_VERSIONS + ["latest"] for version in ALL_OPENSTACK_VERSIONS: path = 'openstack/%s/%s' % (version, MD_JSON_NAME) yield (path, self.lookup(path)) path = 'openstack/%s/%s' % (version, UD_NAME) if self.userdata_raw is not None: yield (path, self.lookup(path)) if self._check_version(HAVANA, version, ALL_OPENSTACK_VERSIONS): path = 'openstack/%s/%s' % (version, VD_JSON_NAME) yield (path, self.lookup(path)) if self._check_version(LIBERTY, version, ALL_OPENSTACK_VERSIONS): path = 'openstack/%s/%s' % (version, NW_JSON_NAME) yield (path, self.lookup(path)) if self._check_version(NEWTON_TWO, version, ALL_OPENSTACK_VERSIONS): path = 'openstack/%s/%s' % (version, VD2_JSON_NAME) yield (path, self.lookup(path)) for (cid, content) in self.content.items(): yield ('%s/%s/%s' % ("openstack", CONTENT_DIR, cid), content) class RouteConfiguration(object): """Routes metadata paths to request handlers.""" def __init__(self, path_handler): self.path_handlers = path_handler def _version(self, version): if version == "latest": version = OPENSTACK_VERSIONS[-1] if version not in OPENSTACK_VERSIONS: raise InvalidMetadataVersion(version) return version def handle_path(self, path_tokens): version = self._version(path_tokens[0]) if len(path_tokens) == 1: path = VERSION else: path = '/'.join(path_tokens[1:]) path_handler = self.path_handlers[path] if path_handler is None: raise KeyError(path) return path_handler(version, path) def get_metadata_by_address(address): ctxt = context.get_admin_context() fixed_ip = network.API().get_fixed_ip_by_address(ctxt, address) LOG.info('Fixed IP %(ip)s translates to instance UUID %(uuid)s', {'ip': address, 'uuid': fixed_ip['instance_uuid']}) return get_metadata_by_instance_id(fixed_ip['instance_uuid'], address, ctxt) def 
get_metadata_by_instance_id(instance_id, address, ctxt=None): ctxt = ctxt or context.get_admin_context() attrs = ['ec2_ids', 'flavor', 'info_cache', 'metadata', 'system_metadata', 'security_groups', 'keypairs', 'device_metadata'] try: im = objects.InstanceMapping.get_by_instance_uuid(ctxt, instance_id) except exception.InstanceMappingNotFound: LOG.warning('Instance mapping for %(uuid)s not found; ' 'cell setup is incomplete', {'uuid': instance_id}) instance = objects.Instance.get_by_uuid(ctxt, instance_id, expected_attrs=attrs) return InstanceMetadata(instance, address) with context.target_cell(ctxt, im.cell_mapping) as cctxt: instance = objects.Instance.get_by_uuid(cctxt, instance_id, expected_attrs=attrs) return InstanceMetadata(instance, address) def _format_instance_mapping(ctxt, instance): bdms = objects.BlockDeviceMappingList.get_by_instance_uuid( ctxt, instance.uuid) return block_device.instance_block_mapping(instance, bdms) def ec2_md_print(data): if isinstance(data, dict): output = '' for key in sorted(data.keys()): if key == '_name': continue if isinstance(data[key], dict): if '_name' in data[key]: output += str(data[key]['_name']) else: output += key + '/' else: output += key output += '\n' return output[:-1] elif isinstance(data, list): return '\n'.join(data) elif isinstance(data, (bytes, six.text_type)): return data else: return str(data) def find_path_in_tree(data, path_tokens): # given a dict/list tree, and a path in that tree, return data found there. for i in range(0, len(path_tokens)): if isinstance(data, dict) or isinstance(data, list): if path_tokens[i] in data: data = data[path_tokens[i]] else: raise KeyError("/".join(path_tokens[0:i])) else: if i != len(path_tokens) - 1: raise KeyError("/".join(path_tokens[0:i])) data = data[path_tokens[i]] return data nova-17.0.1/nova/api/metadata/vendordata_dynamic.py0000666000175000017500000001252713250073126022276 0ustar zuulzuul00000000000000# Copyright 2016 Rackspace Australia # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Render vendordata as stored fetched from REST microservices.""" import sys from keystoneauth1 import exceptions as ks_exceptions from keystoneauth1 import loading as ks_loading from oslo_log import log as logging from oslo_serialization import jsonutils import six from nova.api.metadata import vendordata import nova.conf CONF = nova.conf.CONF LOG = logging.getLogger(__name__) _SESSION = None _ADMIN_AUTH = None def _load_ks_session(conf): """Load session. This is either an authenticated session or a requests session, depending on what's configured. 
""" global _ADMIN_AUTH global _SESSION if not _ADMIN_AUTH: _ADMIN_AUTH = ks_loading.load_auth_from_conf_options( conf, nova.conf.vendordata.vendordata_group.name) if not _ADMIN_AUTH: LOG.warning('Passing insecure dynamic vendordata requests ' 'because of missing or incorrect service account ' 'configuration.') if not _SESSION: _SESSION = ks_loading.load_session_from_conf_options( conf, nova.conf.vendordata.vendordata_group.name, auth=_ADMIN_AUTH) return _SESSION class DynamicVendorData(vendordata.VendorDataDriver): def __init__(self, context=None, instance=None, address=None, network_info=None): # NOTE(mikal): address and network_info are unused, but can't be # removed / renamed as this interface is shared with the static # JSON plugin. self.context = context self.instance = instance # We only create the session if we make a request. self.session = None def _do_request(self, service_name, url): if self.session is None: self.session = _load_ks_session(CONF) try: body = {'project-id': self.instance.project_id, 'instance-id': self.instance.uuid, 'image-id': self.instance.image_ref, 'user-data': self.instance.user_data, 'hostname': self.instance.hostname, 'metadata': self.instance.metadata, 'boot-roles': self.instance.system_metadata.get( 'boot_roles', '')} headers = {'Content-Type': 'application/json', 'Accept': 'application/json', 'User-Agent': 'openstack-nova-vendordata'} # SSL verification verify = url.startswith('https://') if verify and CONF.api.vendordata_dynamic_ssl_certfile: verify = CONF.api.vendordata_dynamic_ssl_certfile timeout = (CONF.api.vendordata_dynamic_connect_timeout, CONF.api.vendordata_dynamic_read_timeout) res = self.session.request(url, 'POST', data=jsonutils.dumps(body), verify=verify, headers=headers, timeout=timeout) if res and res.text: # TODO(mikal): Use the Cache-Control response header to do some # sensible form of caching here. return jsonutils.loads(res.text) return {} except (TypeError, ValueError, ks_exceptions.connection.ConnectionError, ks_exceptions.http.HttpError) as e: LOG.warning('Error from dynamic vendordata service ' '%(service_name)s at %(url)s: %(error)s', {'service_name': service_name, 'url': url, 'error': e}, instance=self.instance) if CONF.api.vendordata_dynamic_failure_fatal: six.reraise(type(e), e, sys.exc_info()[2]) return {} def get(self): j = {} for target in CONF.api.vendordata_dynamic_targets: # NOTE(mikal): a target is composed of the following: # name@url # where name is the name to use in the metadata handed to # instances, and url is the URL to fetch it from if target.find('@') == -1: LOG.warning('Vendordata target %(target)s lacks a name. ' 'Skipping', {'target': target}, instance=self.instance) continue tokens = target.split('@') name = tokens[0] url = '@'.join(tokens[1:]) if name in j: LOG.warning('Vendordata already contains an entry named ' '%(target)s. Skipping', {'target': target}, instance=self.instance) continue j[name] = self._do_request(name, url) return j nova-17.0.1/nova/api/metadata/handler.py0000666000175000017500000003051713250073126020057 0ustar zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Metadata request handler.""" import hashlib import hmac import os from oslo_log import log as logging from oslo_utils import encodeutils from oslo_utils import secretutils as secutils import six import webob.dec import webob.exc from nova.api.metadata import base from nova import cache_utils import nova.conf from nova import context as nova_context from nova import exception from nova.i18n import _ from nova.network.neutronv2 import api as neutronapi from nova import wsgi CONF = nova.conf.CONF LOG = logging.getLogger(__name__) class MetadataRequestHandler(wsgi.Application): """Serve metadata.""" def __init__(self): self._cache = cache_utils.get_client( expiration_time=CONF.api.metadata_cache_expiration) if (CONF.neutron.service_metadata_proxy and not CONF.neutron.metadata_proxy_shared_secret): LOG.warning("metadata_proxy_shared_secret is not configured, " "the metadata information returned by the proxy " "cannot be trusted") def get_metadata_by_remote_address(self, address): if not address: raise exception.FixedIpNotFoundForAddress(address=address) cache_key = 'metadata-%s' % address data = self._cache.get(cache_key) if data: LOG.debug("Using cached metadata for %s", address) return data try: data = base.get_metadata_by_address(address) except exception.NotFound: return None if CONF.api.metadata_cache_expiration > 0: self._cache.set(cache_key, data) return data def get_metadata_by_instance_id(self, instance_id, address): cache_key = 'metadata-%s' % instance_id data = self._cache.get(cache_key) if data: LOG.debug("Using cached metadata for instance %s", instance_id) return data try: data = base.get_metadata_by_instance_id(instance_id, address) except exception.NotFound: return None if CONF.api.metadata_cache_expiration > 0: self._cache.set(cache_key, data) return data @webob.dec.wsgify(RequestClass=wsgi.Request) def __call__(self, req): if os.path.normpath(req.path_info) == "/": resp = base.ec2_md_print(base.VERSIONS + ["latest"]) req.response.body = encodeutils.to_utf8(resp) req.response.content_type = base.MIME_TYPE_TEXT_PLAIN return req.response LOG.debug('Metadata request headers: %s', req.headers) if CONF.neutron.service_metadata_proxy: if req.headers.get('X-Metadata-Provider'): meta_data = self._handle_instance_id_request_from_lb(req) else: meta_data = self._handle_instance_id_request(req) else: if req.headers.get('X-Instance-ID'): LOG.warning( "X-Instance-ID present in request headers. 
The " "'service_metadata_proxy' option must be " "enabled to process this header.") meta_data = self._handle_remote_ip_request(req) if meta_data is None: raise webob.exc.HTTPNotFound() try: data = meta_data.lookup(req.path_info) except base.InvalidMetadataPath: raise webob.exc.HTTPNotFound() if callable(data): return data(req, meta_data) resp = base.ec2_md_print(data) req.response.body = encodeutils.to_utf8(resp) req.response.content_type = meta_data.get_mimetype() return req.response def _handle_remote_ip_request(self, req): remote_address = req.remote_addr if CONF.api.use_forwarded_for: remote_address = req.headers.get('X-Forwarded-For', remote_address) try: meta_data = self.get_metadata_by_remote_address(remote_address) except Exception: LOG.exception('Failed to get metadata for IP %s', remote_address) msg = _('An unknown error has occurred. ' 'Please try your request again.') raise webob.exc.HTTPInternalServerError( explanation=six.text_type(msg)) if meta_data is None: LOG.error('Failed to get metadata for IP %s: no metadata', remote_address) return meta_data def _handle_instance_id_request(self, req): instance_id = req.headers.get('X-Instance-ID') tenant_id = req.headers.get('X-Tenant-ID') signature = req.headers.get('X-Instance-ID-Signature') remote_address = req.headers.get('X-Forwarded-For') # Ensure that only one header was passed if instance_id is None: msg = _('X-Instance-ID header is missing from request.') elif signature is None: msg = _('X-Instance-ID-Signature header is missing from request.') elif tenant_id is None: msg = _('X-Tenant-ID header is missing from request.') elif not isinstance(instance_id, six.string_types): msg = _('Multiple X-Instance-ID headers found within request.') elif not isinstance(tenant_id, six.string_types): msg = _('Multiple X-Tenant-ID headers found within request.') else: msg = None if msg: raise webob.exc.HTTPBadRequest(explanation=msg) self._validate_shared_secret(instance_id, signature, remote_address) return self._get_meta_by_instance_id(instance_id, tenant_id, remote_address) def _get_instance_id_from_lb(self, provider_id, instance_address): # We use admin context, admin=True to lookup the # inter-Edge network port context = nova_context.get_admin_context() neutron = neutronapi.get_client(context, admin=True) # Tenant, instance ids are found in the following method: # X-Metadata-Provider contains id of the metadata provider, and since # overlapping networks cannot be connected to the same metadata # provider, the combo of tenant's instance IP and the metadata # provider has to be unique. # # The networks which are connected to the metadata provider are # retrieved in the 1st call to neutron.list_subnets() # In the 2nd call we read the ports which belong to any of the # networks retrieved above, and have the X-Forwarded-For IP address. # This combination has to be unique as explained above, and we can # read the instance_id, tenant_id from that port entry. 
# Retrieve networks which are connected to metadata provider md_subnets = neutron.list_subnets( context, advanced_service_providers=[provider_id], fields=['network_id']) md_networks = [subnet['network_id'] for subnet in md_subnets['subnets']] try: # Retrieve the instance data from the instance's port instance_data = neutron.list_ports( context, fixed_ips='ip_address=' + instance_address, network_id=md_networks, fields=['device_id', 'tenant_id'])['ports'][0] except Exception as e: LOG.error('Failed to get instance id for metadata ' 'request, provider %(provider)s ' 'networks %(networks)s ' 'requester %(requester)s. Error: %(error)s', {'provider': provider_id, 'networks': md_networks, 'requester': instance_address, 'error': e}) msg = _('An unknown error has occurred. ' 'Please try your request again.') raise webob.exc.HTTPBadRequest(explanation=msg) instance_id = instance_data['device_id'] tenant_id = instance_data['tenant_id'] # instance_data is unicode-encoded, while cache_utils doesn't like # that. Therefore we convert to str if isinstance(instance_id, six.text_type): instance_id = instance_id.encode('utf-8') return instance_id, tenant_id def _handle_instance_id_request_from_lb(self, req): remote_address = req.headers.get('X-Forwarded-For') if remote_address is None: msg = _('X-Forwarded-For is missing from request.') raise webob.exc.HTTPBadRequest(explanation=msg) provider_id = req.headers.get('X-Metadata-Provider') if provider_id is None: msg = _('X-Metadata-Provider is missing from request.') raise webob.exc.HTTPBadRequest(explanation=msg) instance_address = remote_address.split(',')[0] # If authentication token is set, authenticate if CONF.neutron.metadata_proxy_shared_secret: signature = req.headers.get('X-Metadata-Provider-Signature') self._validate_shared_secret(provider_id, signature, instance_address) instance_id, tenant_id = self._get_instance_id_from_lb( provider_id, instance_address) LOG.debug('Instance %s with address %s matches provider %s', instance_id, remote_address, provider_id) return self._get_meta_by_instance_id(instance_id, tenant_id, instance_address) def _validate_shared_secret(self, requestor_id, signature, requestor_address): expected_signature = hmac.new( encodeutils.to_utf8(CONF.neutron.metadata_proxy_shared_secret), encodeutils.to_utf8(requestor_id), hashlib.sha256).hexdigest() if not secutils.constant_time_compare(expected_signature, signature): if requestor_id: LOG.warning('X-Instance-ID-Signature: %(signature)s does ' 'not match the expected value: ' '%(expected_signature)s for id: ' '%(requestor_id)s. Request From: ' '%(requestor_address)s', {'signature': signature, 'expected_signature': expected_signature, 'requestor_id': requestor_id, 'requestor_address': requestor_address}) msg = _('Invalid proxy request signature.') raise webob.exc.HTTPForbidden(explanation=msg) def _get_meta_by_instance_id(self, instance_id, tenant_id, remote_address): try: meta_data = self.get_metadata_by_instance_id(instance_id, remote_address) except Exception: LOG.exception('Failed to get metadata for instance id: %s', instance_id) msg = _('An unknown error has occurred. 
' 'Please try your request again.') raise webob.exc.HTTPInternalServerError( explanation=six.text_type(msg)) if meta_data is None: LOG.error('Failed to get metadata for instance id: %s', instance_id) elif meta_data.instance.project_id != tenant_id: LOG.warning("Tenant_id %(tenant_id)s does not match tenant_id " "of instance %(instance_id)s.", {'tenant_id': tenant_id, 'instance_id': instance_id}) # causes a 404 to be raised meta_data = None return meta_data nova-17.0.1/nova/api/metadata/__init__.py0000666000175000017500000000145413250073126020177 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ :mod:`nova.api.metadata` -- Nova Metadata Server ================================================ .. automodule:: nova.api.metadata :platform: Unix :synopsis: Metadata Server for Nova """ nova-17.0.1/nova/api/metadata/password.py0000666000175000017500000000527613250073126020310 0ustar zuulzuul00000000000000# Copyright 2012 Nebula, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import six from six.moves import range from webob import exc from nova import context from nova import exception from nova.i18n import _ from nova import objects from nova import utils CHUNKS = 4 CHUNK_LENGTH = 255 MAX_SIZE = CHUNKS * CHUNK_LENGTH def extract_password(instance): result = '' sys_meta = utils.instance_sys_meta(instance) for key in sorted(sys_meta.keys()): if key.startswith('password_'): result += sys_meta[key] return result or None def convert_password(context, password): """Stores password as system_metadata items. Password is stored with the keys 'password_0' -> 'password_3'. """ password = password or '' if six.PY3 and isinstance(password, bytes): password = password.decode('utf-8') meta = {} for i in range(CHUNKS): meta['password_%d' % i] = password[:CHUNK_LENGTH] password = password[CHUNK_LENGTH:] return meta def handle_password(req, meta_data): ctxt = context.get_admin_context() if req.method == 'GET': return meta_data.password elif req.method == 'POST': # NOTE(vish): The conflict will only happen once the metadata cache # updates, but it isn't a huge issue if it can be set for # a short window. 
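# NOTE: For illustration only -- convert_password() above splits a
# password across four system_metadata keys, 255 characters per chunk.
# A hypothetical 600 character password would be stored as:
#   password_0 -> characters 0-254
#   password_1 -> characters 255-509
#   password_2 -> characters 510-599
#   password_3 -> '' (empty chunk)
# extract_password() then reassembles the chunks in sorted key order.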
if meta_data.password: raise exc.HTTPConflict() if (req.content_length > MAX_SIZE or len(req.body) > MAX_SIZE): msg = _("Request is too large.") raise exc.HTTPBadRequest(explanation=msg) im = objects.InstanceMapping.get_by_instance_uuid(ctxt, meta_data.uuid) with context.target_cell(ctxt, im.cell_mapping) as cctxt: try: instance = objects.Instance.get_by_uuid(cctxt, meta_data.uuid) except exception.InstanceNotFound as e: raise exc.HTTPBadRequest(explanation=e.format_message()) instance.system_metadata.update(convert_password(ctxt, req.body)) instance.save() else: msg = _("GET and POST only are supported.") raise exc.HTTPBadRequest(explanation=msg) nova-17.0.1/nova/api/metadata/wsgi.py0000666000175000017500000000141113250073126017402 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """WSGI application entry-point for Nova Metadata API, installed by pbr.""" from nova.api.openstack import wsgi_app NAME = "metadata" def init_application(): return wsgi_app.init_application(NAME) nova-17.0.1/nova/api/metadata/vendordata_json.py0000666000175000017500000000363113250073126021617 0ustar zuulzuul00000000000000# Copyright 2013 Canonical Ltd # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Render Vendordata as stored in configured file.""" import errno from oslo_log import log as logging from oslo_serialization import jsonutils from nova.api.metadata import vendordata import nova.conf CONF = nova.conf.CONF LOG = logging.getLogger(__name__) class JsonFileVendorData(vendordata.VendorDataDriver): def __init__(self, *args, **kwargs): super(JsonFileVendorData, self).__init__(*args, **kwargs) data = {} fpath = CONF.api.vendordata_jsonfile_path logprefix = "vendordata_jsonfile_path[%s]:" % fpath if fpath: try: with open(fpath, "rb") as fp: data = jsonutils.load(fp) except IOError as e: if e.errno == errno.ENOENT: LOG.warning("%(logprefix)s file does not exist", {'logprefix': logprefix}) else: LOG.warning("%(logprefix)s unexpected IOError when " "reading", {'logprefix': logprefix}) raise except ValueError: LOG.warning("%(logprefix)s failed to load json", {'logprefix': logprefix}) raise self._data = data def get(self): return self._data nova-17.0.1/nova/api/metadata/vendordata.py0000666000175000017500000000215013250073126020561 0ustar zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. class VendorDataDriver(object): """The base VendorData Drivers should inherit from.""" def __init__(self, *args, **kwargs): """Init method should do all expensive operations.""" self._data = {} def get(self): """Return a dictionary of primitives to be rendered in metadata :return: A dictionary of primitives. """ return self._data nova-17.0.1/nova/api/openstack/0000775000175000017500000000000013250073471016272 5ustar zuulzuul00000000000000nova-17.0.1/nova/api/openstack/common.py0000666000175000017500000004763013250073126020145 0ustar zuulzuul00000000000000# Copyright 2010 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import collections import functools import itertools import re from oslo_log import log as logging from oslo_utils import strutils import six import six.moves.urllib.parse as urlparse import webob from webob import exc from nova.api.openstack import api_version_request from nova.compute import task_states from nova.compute import vm_states import nova.conf from nova import exception from nova.i18n import _ from nova import objects from nova import quota from nova import utils CONF = nova.conf.CONF LOG = logging.getLogger(__name__) QUOTAS = quota.QUOTAS _STATE_MAP = { vm_states.ACTIVE: { 'default': 'ACTIVE', task_states.REBOOTING: 'REBOOT', task_states.REBOOT_PENDING: 'REBOOT', task_states.REBOOT_STARTED: 'REBOOT', task_states.REBOOTING_HARD: 'HARD_REBOOT', task_states.REBOOT_PENDING_HARD: 'HARD_REBOOT', task_states.REBOOT_STARTED_HARD: 'HARD_REBOOT', task_states.UPDATING_PASSWORD: 'PASSWORD', task_states.REBUILDING: 'REBUILD', task_states.REBUILD_BLOCK_DEVICE_MAPPING: 'REBUILD', task_states.REBUILD_SPAWNING: 'REBUILD', task_states.MIGRATING: 'MIGRATING', task_states.RESIZE_PREP: 'RESIZE', task_states.RESIZE_MIGRATING: 'RESIZE', task_states.RESIZE_MIGRATED: 'RESIZE', task_states.RESIZE_FINISH: 'RESIZE', }, vm_states.BUILDING: { 'default': 'BUILD', }, vm_states.STOPPED: { 'default': 'SHUTOFF', task_states.RESIZE_PREP: 'RESIZE', task_states.RESIZE_MIGRATING: 'RESIZE', task_states.RESIZE_MIGRATED: 'RESIZE', task_states.RESIZE_FINISH: 'RESIZE', task_states.REBUILDING: 'REBUILD', task_states.REBUILD_BLOCK_DEVICE_MAPPING: 'REBUILD', task_states.REBUILD_SPAWNING: 'REBUILD', }, vm_states.RESIZED: { 'default': 'VERIFY_RESIZE', # Note(maoy): the OS API spec 1.1 doesn't have CONFIRMING_RESIZE # state so we comment that out for future reference only. 
# task_states.RESIZE_CONFIRMING: 'CONFIRMING_RESIZE', task_states.RESIZE_REVERTING: 'REVERT_RESIZE', }, vm_states.PAUSED: { 'default': 'PAUSED', task_states.MIGRATING: 'MIGRATING', }, vm_states.SUSPENDED: { 'default': 'SUSPENDED', }, vm_states.RESCUED: { 'default': 'RESCUE', }, vm_states.ERROR: { 'default': 'ERROR', task_states.REBUILDING: 'REBUILD', task_states.REBUILD_BLOCK_DEVICE_MAPPING: 'REBUILD', task_states.REBUILD_SPAWNING: 'REBUILD', }, vm_states.DELETED: { 'default': 'DELETED', }, vm_states.SOFT_DELETED: { 'default': 'SOFT_DELETED', }, vm_states.SHELVED: { 'default': 'SHELVED', }, vm_states.SHELVED_OFFLOADED: { 'default': 'SHELVED_OFFLOADED', }, } def status_from_state(vm_state, task_state='default'): """Given vm_state and task_state, return a status string.""" task_map = _STATE_MAP.get(vm_state, dict(default='UNKNOWN')) status = task_map.get(task_state, task_map['default']) if status == "UNKNOWN": LOG.error("status is UNKNOWN from vm_state=%(vm_state)s " "task_state=%(task_state)s. Bad upgrade or db " "corrupted?", {'vm_state': vm_state, 'task_state': task_state}) return status def task_and_vm_state_from_status(statuses): """Map the server's multiple status strings to list of vm states and list of task states. """ vm_states = set() task_states = set() lower_statuses = [status.lower() for status in statuses] for state, task_map in _STATE_MAP.items(): for task_state, mapped_state in task_map.items(): status_string = mapped_state if status_string.lower() in lower_statuses: vm_states.add(state) task_states.add(task_state) # Add sort to avoid different order on set in Python 3 return sorted(vm_states), sorted(task_states) def get_sort_params(input_params, default_key='created_at', default_dir='desc'): """Retrieves sort keys/directions parameters. Processes the parameters to create a list of sort keys and sort directions that correspond to the 'sort_key' and 'sort_dir' parameter values. These sorting parameters can be specified multiple times in order to generate the list of sort keys and directions. The input parameters are not modified. :param input_params: webob.multidict of request parameters (from nova.wsgi.Request.params) :param default_key: default sort key value, added to the list if no 'sort_key' parameters are supplied :param default_dir: default sort dir value, added to the list if no 'sort_dir' parameters are supplied :returns: list of sort keys, list of sort dirs """ params = input_params.copy() sort_keys = [] sort_dirs = [] while 'sort_key' in params: sort_keys.append(params.pop('sort_key').strip()) while 'sort_dir' in params: sort_dirs.append(params.pop('sort_dir').strip()) if len(sort_keys) == 0 and default_key: sort_keys.append(default_key) if len(sort_dirs) == 0 and default_dir: sort_dirs.append(default_dir) return sort_keys, sort_dirs def get_pagination_params(request): """Return marker, limit tuple from request. :param request: `wsgi.Request` possibly containing 'marker' and 'limit' GET variables. 'marker' is the id of the last element the client has seen, and 'limit' is the maximum number of items to return. If 'limit' is not specified, 0, or > max_limit, we default to max_limit. Negative values for either marker or limit will cause exc.HTTPBadRequest() exceptions to be raised. 
""" params = {} if 'limit' in request.GET: params['limit'] = _get_int_param(request, 'limit') if 'page_size' in request.GET: params['page_size'] = _get_int_param(request, 'page_size') if 'marker' in request.GET: params['marker'] = _get_marker_param(request) if 'offset' in request.GET: params['offset'] = _get_int_param(request, 'offset') return params def _get_int_param(request, param): """Extract integer param from request or fail.""" try: int_param = utils.validate_integer(request.GET[param], param, min_value=0) except exception.InvalidInput as e: raise webob.exc.HTTPBadRequest(explanation=e.format_message()) return int_param def _get_marker_param(request): """Extract marker id from request or fail.""" return request.GET['marker'] def limited(items, request): """Return a slice of items according to requested offset and limit. :param items: A sliceable entity :param request: ``wsgi.Request`` possibly containing 'offset' and 'limit' GET variables. 'offset' is where to start in the list, and 'limit' is the maximum number of items to return. If 'limit' is not specified, 0, or > max_limit, we default to max_limit. Negative values for either offset or limit will cause exc.HTTPBadRequest() exceptions to be raised. """ params = get_pagination_params(request) offset = params.get('offset', 0) limit = CONF.api.max_limit limit = min(limit, params.get('limit') or limit) return items[offset:(offset + limit)] def get_limit_and_marker(request): """Get limited parameter from request.""" params = get_pagination_params(request) limit = CONF.api.max_limit limit = min(limit, params.get('limit', limit)) marker = params.get('marker', None) return limit, marker def get_id_from_href(href): """Return the id or uuid portion of a url. Given: 'http://www.foo.com/bar/123?q=4' Returns: '123' Given: 'http://www.foo.com/bar/abc123?q=4' Returns: 'abc123' """ return urlparse.urlsplit("%s" % href).path.split('/')[-1] def remove_trailing_version_from_href(href): """Removes the api version from the href. 
Given: 'http://www.nova.com/compute/v1.1' Returns: 'http://www.nova.com/compute' Given: 'http://www.nova.com/v1.1' Returns: 'http://www.nova.com' """ parsed_url = urlparse.urlsplit(href) url_parts = parsed_url.path.rsplit('/', 1) # NOTE: this should match vX.X or vX expression = re.compile(r'^v([0-9]+|[0-9]+\.[0-9]+)(/.*|$)') if not expression.match(url_parts.pop()): LOG.debug('href %s does not contain version', href) raise ValueError(_('href %s does not contain version') % href) new_path = url_join(*url_parts) parsed_url = list(parsed_url) parsed_url[2] = new_path return urlparse.urlunsplit(parsed_url) def check_img_metadata_properties_quota(context, metadata): if not metadata: return try: QUOTAS.limit_check(context, metadata_items=len(metadata)) except exception.OverQuota: expl = _("Image metadata limit exceeded") raise webob.exc.HTTPForbidden(explanation=expl) def get_networks_for_instance_from_nw_info(nw_info): networks = collections.OrderedDict() for vif in nw_info: ips = vif.fixed_ips() floaters = vif.floating_ips() label = vif['network']['label'] if label not in networks: networks[label] = {'ips': [], 'floating_ips': []} for ip in itertools.chain(ips, floaters): ip['mac_address'] = vif['address'] networks[label]['ips'].extend(ips) networks[label]['floating_ips'].extend(floaters) return networks def get_networks_for_instance(context, instance): """Returns a prepared nw_info list for passing into the view builders We end up with a data structure like:: {'public': {'ips': [{'address': '10.0.0.1', 'version': 4, 'mac_address': 'aa:aa:aa:aa:aa:aa'}, {'address': '2001::1', 'version': 6, 'mac_address': 'aa:aa:aa:aa:aa:aa'}], 'floating_ips': [{'address': '172.16.0.1', 'version': 4, 'mac_address': 'aa:aa:aa:aa:aa:aa'}, {'address': '172.16.2.1', 'version': 4, 'mac_address': 'aa:aa:aa:aa:aa:aa'}]}, ...} """ nw_info = instance.get_network_info() return get_networks_for_instance_from_nw_info(nw_info) def raise_http_conflict_for_instance_invalid_state(exc, action, server_id): """Raises a webob.exc.HTTPConflict instance containing a message appropriate to return via the API based on the original InstanceInvalidState exception. """ attr = exc.kwargs.get('attr') state = exc.kwargs.get('state') if attr is not None and state is not None: msg = _("Cannot '%(action)s' instance %(server_id)s while it is in " "%(attr)s %(state)s") % {'action': action, 'attr': attr, 'state': state, 'server_id': server_id} else: # At least give some meaningful message msg = _("Instance %(server_id)s is in an invalid state for " "'%(action)s'") % {'action': action, 'server_id': server_id} raise webob.exc.HTTPConflict(explanation=msg) def check_snapshots_enabled(f): @functools.wraps(f) def inner(*args, **kwargs): if not CONF.api.allow_instance_snapshots: LOG.warning('Rejecting snapshot request, snapshots currently' ' disabled') msg = _("Instance snapshots are not permitted at this time.") raise webob.exc.HTTPBadRequest(explanation=msg) return f(*args, **kwargs) return inner def url_join(*parts): """Convenience method for joining parts of a URL Any leading and trailing '/' characters are removed, and the parts joined together with '/' as a separator. If last element of 'parts' is an empty string, the returned URL will have a trailing slash. 
""" parts = parts or [""] clean_parts = [part.strip("/") for part in parts if part] if not parts[-1]: # Empty last element should add a trailing slash clean_parts.append("") return "/".join(clean_parts) class ViewBuilder(object): """Model API responses as dictionaries.""" def _get_project_id(self, request): """Get project id from request url if present or empty string otherwise """ project_id = request.environ["nova.context"].project_id if project_id and project_id in request.url: return project_id return '' def _get_links(self, request, identifier, collection_name): return [{ "rel": "self", "href": self._get_href_link(request, identifier, collection_name), }, { "rel": "bookmark", "href": self._get_bookmark_link(request, identifier, collection_name), }] def _get_next_link(self, request, identifier, collection_name): """Return href string with proper limit and marker params.""" params = collections.OrderedDict(sorted(request.params.items())) params["marker"] = identifier prefix = self._update_compute_link_prefix(request.application_url) url = url_join(prefix, self._get_project_id(request), collection_name) return "%s?%s" % (url, urlparse.urlencode(params)) def _get_href_link(self, request, identifier, collection_name): """Return an href string pointing to this object.""" prefix = self._update_compute_link_prefix(request.application_url) return url_join(prefix, self._get_project_id(request), collection_name, str(identifier)) def _get_bookmark_link(self, request, identifier, collection_name): """Create a URL that refers to a specific resource.""" base_url = remove_trailing_version_from_href(request.application_url) base_url = self._update_compute_link_prefix(base_url) return url_join(base_url, self._get_project_id(request), collection_name, str(identifier)) def _get_collection_links(self, request, items, collection_name, id_key="uuid"): """Retrieve 'next' link, if applicable. This is included if: 1) 'limit' param is specified and equals the number of items. 2) 'limit' param is specified but it exceeds CONF.api.max_limit, in this case the number of items is CONF.api.max_limit. 3) 'limit' param is NOT specified but the number of items is CONF.api.max_limit. 
""" links = [] max_items = min( int(request.params.get("limit", CONF.api.max_limit)), CONF.api.max_limit) if max_items and max_items == len(items): last_item = items[-1] if id_key in last_item: last_item_id = last_item[id_key] elif 'id' in last_item: last_item_id = last_item["id"] else: last_item_id = last_item["flavorid"] links.append({ "rel": "next", "href": self._get_next_link(request, last_item_id, collection_name), }) return links def _update_link_prefix(self, orig_url, prefix): if not prefix: return orig_url url_parts = list(urlparse.urlsplit(orig_url)) prefix_parts = list(urlparse.urlsplit(prefix)) url_parts[0:2] = prefix_parts[0:2] url_parts[2] = prefix_parts[2] + url_parts[2] return urlparse.urlunsplit(url_parts).rstrip('/') def _update_glance_link_prefix(self, orig_url): return self._update_link_prefix(orig_url, CONF.api.glance_link_prefix) def _update_compute_link_prefix(self, orig_url): return self._update_link_prefix(orig_url, CONF.api.compute_link_prefix) def get_instance(compute_api, context, instance_id, expected_attrs=None): """Fetch an instance from the compute API, handling error checking.""" try: return compute_api.get(context, instance_id, expected_attrs=expected_attrs) except exception.InstanceNotFound as e: raise exc.HTTPNotFound(explanation=e.format_message()) def normalize_name(name): # NOTE(alex_xu): This method is used by v2.1 legacy v2 compat mode. # In the legacy v2 API, some of APIs strip the spaces and some of APIs not. # The v2.1 disallow leading/trailing, for compatible v2 API and consistent, # we enable leading/trailing spaces and strip spaces in legacy v2 compat # mode. Althrough in legacy v2 API there are some APIs didn't strip spaces, # but actually leading/trailing spaces(that means user depend on leading/ # trailing spaces distinguish different instance) is pointless usecase. return name.strip() def raise_feature_not_supported(msg=None): if msg is None: msg = _("The requested functionality is not supported.") raise webob.exc.HTTPNotImplemented(explanation=msg) def get_flavor(context, flavor_id): try: return objects.Flavor.get_by_flavor_id(context, flavor_id) except exception.FlavorNotFound as error: raise exc.HTTPNotFound(explanation=error.format_message()) def check_cells_enabled(function): @functools.wraps(function) def inner(*args, **kwargs): if not CONF.cells.enable: raise_feature_not_supported() return function(*args, **kwargs) return inner def is_all_tenants(search_opts): """Checks to see if the all_tenants flag is in search_opts :param dict search_opts: The search options for a request :returns: boolean indicating if all_tenants are being requested or not """ all_tenants = search_opts.get('all_tenants') if all_tenants: try: all_tenants = strutils.bool_from_string(all_tenants, True) except ValueError as err: raise exception.InvalidInput(six.text_type(err)) else: # The empty string is considered enabling all_tenants all_tenants = 'all_tenants' in search_opts return all_tenants def supports_multiattach_volume(req): """Check to see if the requested API version is high enough for multiattach Microversion 2.60 adds support for booting from a multiattach volume. The actual validation for a multiattach volume is done in the compute API code, this is just checking the version so we can tell the API code if the request version is high enough to even support it. :param req: The incoming API request :returns: True if the requested API microversion is high enough for volume multiattach support, False otherwise. 
""" return api_version_request.is_supported(req, '2.60') nova-17.0.1/nova/api/openstack/requestlog.py0000666000175000017500000000633013250073126021037 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. """Simple middleware for request logging.""" import time from oslo_log import log as logging from oslo_utils import excutils import webob.dec import webob.exc from nova.api.openstack import wsgi from nova import wsgi as base_wsgi # TODO(sdague) maybe we can use a better name here for the logger LOG = logging.getLogger(__name__) class RequestLog(base_wsgi.Middleware): """WSGI Middleware to write a simple request log to. Borrowed from Paste Translogger """ # This matches the placement fil _log_format = ('%(REMOTE_ADDR)s "%(REQUEST_METHOD)s %(REQUEST_URI)s" ' 'status: %(status)s len: %(len)s ' 'microversion: %(microversion)s time: %(time).6f') @staticmethod def _get_uri(environ): req_uri = (environ.get('SCRIPT_NAME', '') + environ.get('PATH_INFO', '')) if environ.get('QUERY_STRING'): req_uri += '?' + environ['QUERY_STRING'] return req_uri @staticmethod def _should_emit(req): """Conditions under which we should skip logging. If we detect we are running under eventlet wsgi processing, we already are logging things, let it go. This is broken out as a separate method so that it can be easily adjusted for testing. """ if req.environ.get('eventlet.input', None) is not None: return False return True def _log_req(self, req, res, start): if not self._should_emit(req): return # in the event that things exploded really badly deeper in the # wsgi stack, res is going to be an empty dict for the # fallback logging. So never count on it having attributes. status = getattr(res, "status", "500 Error").split(None, 1)[0] data = { 'REMOTE_ADDR': req.environ.get('REMOTE_ADDR', '-'), 'REQUEST_METHOD': req.environ['REQUEST_METHOD'], 'REQUEST_URI': self._get_uri(req.environ), 'status': status, 'len': getattr(res, "content_length", 0), 'time': time.time() - start, 'microversion': '-' } # set microversion if it exists if not req.api_version_request.is_null(): data["microversion"] = req.api_version_request.get_string() LOG.info(self._log_format, data) @webob.dec.wsgify(RequestClass=wsgi.Request) def __call__(self, req): res = {} start = time.time() try: res = req.get_response(self.application) self._log_req(req, res, start) return res except Exception: with excutils.save_and_reraise_exception(): self._log_req(req, res, start) nova-17.0.1/nova/api/openstack/placement/0000775000175000017500000000000013250073471020242 5ustar zuulzuul00000000000000nova-17.0.1/nova/api/openstack/placement/lib.py0000666000175000017500000000307613250073126021367 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. """Symbols intended to be imported by both placement code and placement API consumers. When placement is separated out, this module should be part of a common library that both placement and its consumers can require.""" class RequestGroup(object): def __init__(self, use_same_provider=True, resources=None, required_traits=None): """Create a grouping of resource and trait requests. :param use_same_provider: If True, (the default) this RequestGroup represents requests for resources and traits which must be satisfied by a single resource provider. If False, represents a request for resources and traits in any resource provider in the same tree, or a sharing provider. :param resources: A dict of { resource_class: amount, ... } :param required_traits: A set of { trait_name, ... } """ self.use_same_provider = use_same_provider self.resources = resources or {} self.required_traits = required_traits or set() nova-17.0.1/nova/api/openstack/placement/rest_api_version_history.rst0000666000175000017500000002223213250073136026131 0ustar zuulzuul00000000000000REST API Version History ~~~~~~~~~~~~~~~~~~~~~~~~ This documents the changes made to the REST API with every microversion change. The description for each version should be a verbose one which has enough information to be suitable for use in user documentation. 1.0 (Maximum in Newton) ----------------------- This is the initial version of the placement REST API that was released in Nova 14.0.0 (Newton). This contains the following routes: * /resource_providers * /resource_providers/allocations * /resource_providers/inventories * /resource_providers/usages * /allocations 1.1 Resource provider aggregates -------------------------------- The 1.1 version adds support for associating aggregates with resource providers with ``GET`` and ``PUT`` methods on one new route: * /resource_providers/{uuid}/aggregates 1.2 Custom resource classes --------------------------- Placement API version 1.2 adds basic operations allowing an admin to create, list and delete custom resource classes. The following new routes are added: * GET /resource_classes: return all resource classes * POST /resource_classes: create a new custom resource class * PUT /resource_classes/{name}: update name of custom resource class * DELETE /resource_classes/{name}: deletes a custom resource class * GET /resource_classes/{name}: get a single resource class Custom resource classes must begin with the prefix "CUSTOM\_" and contain only the letters A through Z, the numbers 0 through 9 and the underscore "\_" character. 1.3 member_of query parameter ----------------------------- Version 1.3 adds support for listing resource providers that are members of any of the list of aggregates provided using a ``member_of`` query parameter: * /resource_providers?member_of=in:{agg1_uuid},{agg2_uuid},{agg3_uuid} 1.4 Filter resource providers by requested resource capacity (Maximum in Ocata) ------------------------------------------------------------------------------- The 1.4 version adds support for querying resource providers that have the ability to serve a requested set of resources. 
A new "resources" query string parameter is now accepted to the `GET /resource_providers` API call. This parameter indicates the requested amounts of various resources that a provider must have the capacity to serve. The "resources" query string parameter takes the form: ``?resources=$RESOURCE_CLASS_NAME:$AMOUNT,$RESOURCE_CLASS_NAME:$AMOUNT`` For instance, if the user wishes to see resource providers that can service a request for 2 vCPUs, 1024 MB of RAM and 50 GB of disk space, the user can issue a request to: `GET /resource_providers?resources=VCPU:2,MEMORY_MB:1024,DISK_GB:50` If the resource class does not exist, then it will return a HTTP 400. .. note:: The resources filtering is also based on the `min_unit`, `max_unit` and `step_size` of the inventory record. For example, if the `max_unit` is 512 for the DISK_GB inventory for a particular resource provider and a GET request is made for `DISK_GB:1024`, that resource provider will not be returned. The `min_unit` is the minimum amount of resource that can be requested for a given inventory and resource provider. The `step_size` is the increment of resource that can be requested for a given resource on a given provider. 1.5 DELETE all inventory for a resource provider ------------------------------------------------ Placement API version 1.5 adds DELETE method for deleting all inventory for a resource provider. The following new method is supported: * DELETE /resource_providers/{uuid}/inventories 1.6 Traits API -------------- The 1.6 version adds basic operations allowing an admin to create, list, and delete custom traits, also adds basic operations allowing an admin to attach traits to a resource provider. The following new routes are added: * GET /traits: Returns all resource classes. * PUT /traits/{name}: To insert a single custom trait. * GET /traits/{name}: To check if a trait name exists. * DELETE /traits/{name}: To delete the specified trait. * GET /resource_providers/{uuid}/traits: a list of traits associated with a specific resource provider * PUT /resource_providers/{uuid}/traits: Set all the traits for a specific resource provider * DELETE /resource_providers/{uuid}/traits: Remove any existing trait associations for a specific resource provider Custom traits must begin with the prefix "CUSTOM\_" and contain only the letters A through Z, the numbers 0 through 9 and the underscore "\_" character. 1.7 Idempotent PUT /resource_classes/{name} ------------------------------------------- The 1.7 version changes handling of `PUT /resource_classes/{name}` to be a create or verification of the resource class with `{name}`. If the resource class is a custom resource class and does not already exist it will be created and a ``201`` response code returned. If the class already exists the response code will be ``204``. This makes it possible to check or create a resource class in one request. 1.8 Require placement 'project_id', 'user_id' in PUT /allocations ----------------------------------------------------------------- The 1.8 version adds ``project_id`` and ``user_id`` required request parameters to ``PUT /allocations``. 1.9 Add GET /usages -------------------- The 1.9 version adds usages that can be queried by a project or project/user. The following new routes are added: ``GET /usages?project_id=`` Returns all usages for a given project. ``GET /usages?project_id=&user_id=`` Returns all usages for a given project and user. 
1.10 Allocation candidates (Maximum in Pike)
--------------------------------------------

The 1.10 version brings a new REST resource endpoint for getting a list of
allocation candidates. Allocation candidates are collections of possible
allocations against resource providers that can satisfy a particular request
for resources.

1.11 Add 'allocations' link to the ``GET /resource_providers`` response
-----------------------------------------------------------------------

The ``/resource_providers/{rp_uuid}/allocations`` endpoint has been available
since version 1.0, but was not listed in the ``links`` section of the
``GET /resource_providers`` response. The link is included as of version
1.11.

1.12 PUT dict format to /allocations/{consumer_uuid}
----------------------------------------------------

In version 1.12 the request body of a ``PUT /allocations/{consumer_uuid}``
is expected to have an `object` for the ``allocations`` property, not an
`array` as with earlier microversions. This puts the request body more in
alignment with the structure of the ``GET /allocations/{consumer_uuid}``
response body. Because the `PUT` request requires `user_id` and `project_id`
in the request body, these fields are added to the `GET` response. In
addition, the response body for ``GET /allocation_candidates`` is updated so
the allocations in the ``allocation_requests`` object work with the new
`PUT` format.

1.13 POST multiple allocations to /allocations
----------------------------------------------

Version 1.13 gives the ability to set or clear allocations for more than one
consumer uuid with a request to ``POST /allocations``.

1.14 Add nested resource providers
----------------------------------

The 1.14 version introduces the concept of nested resource providers. The
resource provider resource now contains two new attributes:

* ``parent_provider_uuid`` indicates the provider's direct parent, or null if
  there is no parent. This attribute can be set in the call to
  ``POST /resource_providers`` and ``PUT /resource_providers/{uuid}`` if the
  attribute has not already been set to a non-NULL value (i.e. we do not
  support "reparenting" a provider)
* ``root_provider_uuid`` indicates the UUID of the root resource provider in
  the provider's tree. This is a read-only attribute.

A new ``in_tree=<UUID>`` parameter is now available in the
``GET /resource_providers`` API call. Supplying a UUID value for the
``in_tree`` parameter will cause all resource providers within the "provider
tree" of the provider matching ``<UUID>`` to be returned.

1.15 Add 'last-modified' and 'cache-control' headers
----------------------------------------------------

Throughout the API, 'last-modified' headers have been added to GET responses
and those PUT and POST responses that have bodies. The value is either the
actual last modified time of the most recently modified associated database
entity or the current time if there is no direct mapping to the database. In
addition, 'cache-control: no-cache' headers are added where the
'last-modified' header has been added to prevent inadvertent caching of
resources.

1.16 Limit allocation candidates
--------------------------------

Add support for a ``limit`` query parameter when making a
``GET /allocation_candidates`` request. The parameter accepts an integer
value, `N`, which limits the maximum number of candidates returned.
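For example, to return at most five candidates for a simple request (the
resource amounts shown are illustrative only)::

  GET /allocation_candidates?resources=VCPU:1,MEMORY_MB:512&limit=5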
1.17 Add 'required' parameter to the allocation candidates (Maximum in Queens) ------------------------------------------------------------------------------ Add the `required` parameter to the `GET /allocation_candidates` API. It accepts a list of traits separated by `,`. The provider summary in the response will include the attached traits also. nova-17.0.1/nova/api/openstack/placement/requestlog.py0000666000175000017500000000617113250073126023012 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. """Simple middleware for request logging.""" from oslo_log import log as logging from nova.api.openstack.placement import microversion LOG = logging.getLogger(__name__) class RequestLog(object): """WSGI Middleware to write a simple request log to. Borrowed from Paste Translogger """ format = ('%(REMOTE_ADDR)s "%(REQUEST_METHOD)s %(REQUEST_URI)s" ' 'status: %(status)s len: %(bytes)s ' 'microversion: %(microversion)s') def __init__(self, application): self.application = application def __call__(self, environ, start_response): LOG.debug('Starting request: %s "%s %s"', environ['REMOTE_ADDR'], environ['REQUEST_METHOD'], self._get_uri(environ)) # Set the accept header if it is not otherwise set. This # ensures that error responses will be in JSON. if not environ.get('HTTP_ACCEPT'): environ['HTTP_ACCEPT'] = 'application/json' if LOG.isEnabledFor(logging.INFO): return self._log_app(environ, start_response) else: return self.application(environ, start_response) @staticmethod def _get_uri(environ): req_uri = (environ.get('SCRIPT_NAME', '') + environ.get('PATH_INFO', '')) if environ.get('QUERY_STRING'): req_uri += '?' + environ['QUERY_STRING'] return req_uri def _log_app(self, environ, start_response): req_uri = self._get_uri(environ) def replacement_start_response(status, headers, exc_info=None): """We need to gaze at the content-length, if set, to write log info. """ size = None for name, value in headers: if name.lower() == 'content-length': size = value self.write_log(environ, req_uri, status, size) return start_response(status, headers, exc_info) return self.application(environ, replacement_start_response) def write_log(self, environ, req_uri, status, size): """Write the log info out in a formatted form to ``LOG.info``. """ if size is None: size = '-' log_format = { 'REMOTE_ADDR': environ.get('REMOTE_ADDR', '-'), 'REQUEST_METHOD': environ['REQUEST_METHOD'], 'REQUEST_URI': req_uri, 'status': status.split(None, 1)[0], 'bytes': size, 'microversion': environ.get( microversion.MICROVERSION_ENVIRON, '-'), } LOG.info(self.format, log_format) nova-17.0.1/nova/api/openstack/placement/handler.py0000666000175000017500000002221213250073126022227 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Handlers for placement API. Individual handlers are associated with URL paths in the ROUTE_DECLARATIONS dictionary. At the top level each key is a Routes compliant path. The value of that key is a dictionary mapping individual HTTP request methods to a Python function representing a simple WSGI application for satisfying that request. The ``make_map`` method processes ROUTE_DECLARATIONS to create a Routes.Mapper, including automatic handlers to respond with a 405 when a request is made against a valid URL with an invalid method. """ import routes import webob from oslo_log import log as logging from nova.api.openstack.placement.handlers import aggregate from nova.api.openstack.placement.handlers import allocation from nova.api.openstack.placement.handlers import allocation_candidate from nova.api.openstack.placement.handlers import inventory from nova.api.openstack.placement.handlers import resource_class from nova.api.openstack.placement.handlers import resource_provider from nova.api.openstack.placement.handlers import root from nova.api.openstack.placement.handlers import trait from nova.api.openstack.placement.handlers import usage from nova.api.openstack.placement import policy from nova.api.openstack.placement import util from nova import exception from nova.i18n import _ LOG = logging.getLogger(__name__) # URLs and Handlers # NOTE(cdent): When adding URLs here, do not use regex patterns in # the path parameters (e.g. {uuid:[0-9a-zA-Z-]+}) as that will lead # to 404s that are controlled outside of the individual resources # and thus do not include specific information on the why of the 404. ROUTE_DECLARATIONS = { '/': { 'GET': root.home, }, # NOTE(cdent): This allows '/placement/' and '/placement' to # both work as the root of the service, which we probably want # for those situations where the service is mounted under a # prefix (as it is in devstack). While weird, an empty string is # a legit key in a dictionary and matches as desired in Routes. 
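    # For illustration (hypothetical mount point): with the service
    # mounted so that SCRIPT_NAME is '/placement', a request for
    # 'GET /placement' arrives with an empty PATH_INFO and matches this
    # '' key, while 'GET /placement/' matches the '/' key above.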
'': { 'GET': root.home, }, '/resource_classes': { 'GET': resource_class.list_resource_classes, 'POST': resource_class.create_resource_class }, '/resource_classes/{name}': { 'GET': resource_class.get_resource_class, 'PUT': resource_class.update_resource_class, 'DELETE': resource_class.delete_resource_class, }, '/resource_providers': { 'GET': resource_provider.list_resource_providers, 'POST': resource_provider.create_resource_provider }, '/resource_providers/{uuid}': { 'GET': resource_provider.get_resource_provider, 'DELETE': resource_provider.delete_resource_provider, 'PUT': resource_provider.update_resource_provider }, '/resource_providers/{uuid}/inventories': { 'GET': inventory.get_inventories, 'POST': inventory.create_inventory, 'PUT': inventory.set_inventories, 'DELETE': inventory.delete_inventories }, '/resource_providers/{uuid}/inventories/{resource_class}': { 'GET': inventory.get_inventory, 'PUT': inventory.update_inventory, 'DELETE': inventory.delete_inventory }, '/resource_providers/{uuid}/usages': { 'GET': usage.list_usages }, '/resource_providers/{uuid}/aggregates': { 'GET': aggregate.get_aggregates, 'PUT': aggregate.set_aggregates }, '/resource_providers/{uuid}/allocations': { 'GET': allocation.list_for_resource_provider, }, '/allocations': { 'POST': allocation.set_allocations, }, '/allocations/{consumer_uuid}': { 'GET': allocation.list_for_consumer, 'PUT': allocation.set_allocations_for_consumer, 'DELETE': allocation.delete_allocations, }, '/allocation_candidates': { 'GET': allocation_candidate.list_allocation_candidates, }, '/traits': { 'GET': trait.list_traits, }, '/traits/{name}': { 'GET': trait.get_trait, 'PUT': trait.put_trait, 'DELETE': trait.delete_trait, }, '/resource_providers/{uuid}/traits': { 'GET': trait.list_traits_for_resource_provider, 'PUT': trait.update_traits_for_resource_provider, 'DELETE': trait.delete_traits_for_resource_provider }, '/usages': { 'GET': usage.get_total_usages, }, } def dispatch(environ, start_response, mapper): """Find a matching route for the current request. If no match is found, raise a 404 response. If there is a matching route, but no matching handler for the given method, raise a 405. """ result = mapper.match(environ=environ) if result is None: raise webob.exc.HTTPNotFound( json_formatter=util.json_error_formatter) # We can't reach this code without action being present. handler = result.pop('action') environ['wsgiorg.routing_args'] = ((), result) return handler(environ, start_response) def handle_405(environ, start_response): """Return a 405 response when method is not allowed. If _methods are in routing_args, send an allow header listing the methods that are possible on the provided URL. """ _methods = util.wsgi_path_item(environ, '_methods') headers = {} if _methods: # Ensure allow header is a python 2 or 3 native string (thus # not unicode in python 2 but stay a string in python 3) # In the process done by Routes to save the allowed methods # to its routing table they become unicode in py2. headers['allow'] = str(_methods) # Use Exception class as WSGI Application. We don't want to raise here. 
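    # The webob.exc HTTP error classes are themselves WSGI applications,
    # so the instance built below is simply called with (environ,
    # start_response) to render the 405 response (including the allow
    # header), avoiding the alarming log output a raised exception
    # would trigger further up the stack.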
response = webob.exc.HTTPMethodNotAllowed( _('The method specified is not allowed for this resource.'), headers=headers, json_formatter=util.json_error_formatter) return response(environ, start_response) def make_map(declarations): """Process route declarations to create a Route Mapper.""" mapper = routes.Mapper() for route, targets in declarations.items(): allowed_methods = [] for method in targets: mapper.connect(route, action=targets[method], conditions=dict(method=[method])) allowed_methods.append(method) allowed_methods = ', '.join(allowed_methods) mapper.connect(route, action=handle_405, _methods=allowed_methods) return mapper class PlacementHandler(object): """Serve Placement API. Dispatch to handlers defined in ROUTE_DECLARATIONS. """ def __init__(self, **local_config): # NOTE(cdent): Local config currently unused. self._map = make_map(ROUTE_DECLARATIONS) def __call__(self, environ, start_response): # All requests but '/' require admin. if environ['PATH_INFO'] != '/': context = environ['placement.context'] # TODO(cdent): Using is_admin everywhere (except /) is # insufficiently flexible for future use case but is # convenient for initial exploration. if not policy.placement_authorize(context, 'placement'): raise webob.exc.HTTPForbidden( _('admin required'), json_formatter=util.json_error_formatter) # Check that an incoming request with a content-length header # that is an integer > 0 and not empty, also has a content-type # header that is not empty. If not raise a 400. clen = environ.get('CONTENT_LENGTH') try: if clen and (int(clen) > 0) and not environ.get('CONTENT_TYPE'): raise webob.exc.HTTPBadRequest( _('content-type header required when content-length > 0'), json_formatter=util.json_error_formatter) except ValueError as exc: raise webob.exc.HTTPBadRequest( _('content-length header must be an integer'), json_formatter=util.json_error_formatter) try: return dispatch(environ, start_response, self._map) # Trap the NotFound exceptions raised by the objects used # with the API and transform them into webob.exc.HTTPNotFound. except exception.NotFound as exc: raise webob.exc.HTTPNotFound( exc, json_formatter=util.json_error_formatter) # Trap the HTTPNotFound that can be raised by dispatch() # when no route is found. The exception is passed through to # the FaultWrap middleware without causing an alarming log # message. except webob.exc.HTTPNotFound: raise except Exception as exc: LOG.exception("Uncaught exception") raise nova-17.0.1/nova/api/openstack/placement/util.py0000666000175000017500000003531413250073136021577 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""Utility methods for placement API.""" import functools import re import jsonschema from oslo_middleware import request_id from oslo_serialization import jsonutils from oslo_utils import timeutils from oslo_utils import uuidutils import webob from nova.api.openstack.placement import lib as placement_lib # NOTE(cdent): avoid cyclical import conflict between util and # microversion import nova.api.openstack.placement.microversion from nova.i18n import _ # Querystring-related constants _QS_RESOURCES = 'resources' _QS_REQUIRED = 'required' _QS_KEY_PATTERN = re.compile( r"^(%s)([1-9][0-9]*)?$" % '|'.join((_QS_RESOURCES, _QS_REQUIRED))) # NOTE(cdent): This registers a FormatChecker on the jsonschema # module. Do not delete this code! Although it appears that nothing # is using the decorated method it is being used in JSON schema # validations to check uuid formatted strings. @jsonschema.FormatChecker.cls_checks('uuid') def _validate_uuid_format(instance): return uuidutils.is_uuid_like(instance) def check_accept(*types): """If accept is set explicitly, try to follow it. If there is no match for the incoming accept header send a 406 response code. If accept is not set send our usual content-type in response. """ def decorator(f): @functools.wraps(f) def decorated_function(req): if req.accept: best_match = req.accept.best_match(types) if not best_match: type_string = ', '.join(types) raise webob.exc.HTTPNotAcceptable( _('Only %(type)s is provided') % {'type': type_string}, json_formatter=json_error_formatter) return f(req) return decorated_function return decorator def extract_json(body, schema): """Extract JSON from a body and validate with the provided schema.""" try: data = jsonutils.loads(body) except ValueError as exc: raise webob.exc.HTTPBadRequest( _('Malformed JSON: %(error)s') % {'error': exc}, json_formatter=json_error_formatter) try: jsonschema.validate(data, schema, format_checker=jsonschema.FormatChecker()) except jsonschema.ValidationError as exc: raise webob.exc.HTTPBadRequest( _('JSON does not validate: %(error)s') % {'error': exc}, json_formatter=json_error_formatter) return data def inventory_url(environ, resource_provider, resource_class=None): url = '%s/inventories' % resource_provider_url(environ, resource_provider) if resource_class: url = '%s/%s' % (url, resource_class) return url def json_error_formatter(body, status, title, environ): """A json_formatter for webob exceptions. Follows API-WG guidelines at http://specs.openstack.org/openstack/api-wg/guidelines/errors.html """ # Clear out the html that webob sneaks in. body = webob.exc.strip_tags(body) # Get status code out of status message. webob's error formatter # only passes entire status string. status_code = int(status.split(None, 1)[0]) error_dict = { 'status': status_code, 'title': title, 'detail': body } # If the request id middleware has had a chance to add an id, # put it in the error response. if request_id.ENV_REQUEST_ID in environ: error_dict['request_id'] = environ[request_id.ENV_REQUEST_ID] # When there is a no microversion in the environment and a 406, # microversion parsing failed so we need to include microversion # min and max information in the error response. 
microversion = nova.api.openstack.placement.microversion if status_code == 406 and microversion.MICROVERSION_ENVIRON not in environ: error_dict['max_version'] = microversion.max_version_string() error_dict['min_version'] = microversion.min_version_string() return {'errors': [error_dict]} def pick_last_modified(last_modified, obj): """Choose max of last_modified and obj.updated_at or obj.created_at. If updated_at is not implemented in `obj` use the current time in UTC. """ try: current_modified = (obj.updated_at or obj.created_at) except NotImplementedError: # If updated_at is not implemented, we are looking at objects that # have not come from the database, so "now" is the right modified # time. current_modified = timeutils.utcnow(with_timezone=True) if last_modified: last_modified = max(last_modified, current_modified) else: last_modified = current_modified return last_modified def require_content(content_type): """Decorator to require a content type in a handler.""" def decorator(f): @functools.wraps(f) def decorated_function(req): if req.content_type != content_type: # webob's unset content_type is the empty string so # set it the error message content to 'None' to make # a useful message in that case. This also avoids a # KeyError raised when webob.exc eagerly fills in a # Template for output we will never use. if not req.content_type: req.content_type = 'None' raise webob.exc.HTTPUnsupportedMediaType( _('The media type %(bad_type)s is not supported, ' 'use %(good_type)s') % {'bad_type': req.content_type, 'good_type': content_type}, json_formatter=json_error_formatter) else: return f(req) return decorated_function return decorator def resource_class_url(environ, resource_class): """Produce the URL for a resource class. If SCRIPT_NAME is present, it is the mount point of the placement WSGI app. """ prefix = environ.get('SCRIPT_NAME', '') return '%s/resource_classes/%s' % (prefix, resource_class.name) def resource_provider_url(environ, resource_provider): """Produce the URL for a resource provider. If SCRIPT_NAME is present, it is the mount point of the placement WSGI app. """ prefix = environ.get('SCRIPT_NAME', '') return '%s/resource_providers/%s' % (prefix, resource_provider.uuid) def trait_url(environ, trait): """Produce the URL for a trait. If SCRIPT_NAME is present, it is the mount point of the placement WSGI app. """ prefix = environ.get('SCRIPT_NAME', '') return '%s/traits/%s' % (prefix, trait.name) def validate_query_params(req, schema): try: jsonschema.validate(dict(req.GET), schema, format_checker=jsonschema.FormatChecker()) except jsonschema.ValidationError as exc: raise webob.exc.HTTPBadRequest( _('Invalid query string parameters: %(exc)s') % {'exc': exc}) def wsgi_path_item(environ, name): """Extract the value of a named field in a URL. Return None if the name is not present or there are no path items. """ # NOTE(cdent): For the time being we don't need to urldecode # the value as the entire placement API has paths that accept no # encoded values. try: return environ['wsgiorg.routing_args'][1][name] except (KeyError, IndexError): return None def normalize_resources_qs_param(qs): """Given a query string parameter for resources, validate it meets the expected format and return a dict of amounts, keyed by resource class name. 
The expected format of the resources parameter looks like so: $RESOURCE_CLASS_NAME:$AMOUNT,$RESOURCE_CLASS_NAME:$AMOUNT So, if the user was looking for resource providers that had room for an instance that will consume 2 vCPUs, 1024 MB of RAM and 50GB of disk space, they would use the following query string: ?resources=VCPU:2,MEMORY_MB:1024,DISK_GB:50 The returned value would be: { "VCPU": 2, "MEMORY_MB": 1024, "DISK_GB": 50, } :param qs: The value of the 'resources' query string parameter :raises `webob.exc.HTTPBadRequest` if the parameter's value isn't in the expected format. """ if qs.strip() == "": msg = _('Badly formed resources parameter. Expected resources ' 'query string parameter in form: ' '?resources=VCPU:2,MEMORY_MB:1024. Got: empty string.') raise webob.exc.HTTPBadRequest(msg) result = {} resource_tuples = qs.split(',') for rt in resource_tuples: try: rc_name, amount = rt.split(':') except ValueError: msg = _('Badly formed resources parameter. Expected resources ' 'query string parameter in form: ' '?resources=VCPU:2,MEMORY_MB:1024. Got: %s.') msg = msg % rt raise webob.exc.HTTPBadRequest(msg) try: amount = int(amount) except ValueError: msg = _('Requested resource %(resource_name)s expected positive ' 'integer amount. Got: %(amount)s.') msg = msg % { 'resource_name': rc_name, 'amount': amount, } raise webob.exc.HTTPBadRequest(msg) if amount < 1: msg = _('Requested resource %(resource_name)s requires ' 'amount >= 1. Got: %(amount)d.') msg = msg % { 'resource_name': rc_name, 'amount': amount, } raise webob.exc.HTTPBadRequest(msg) result[rc_name] = amount return result def normalize_traits_qs_param(val): """Parse a traits query string parameter value. Note that this method doesn't know or care about the query parameter key, which may currently be of the form `required`, `required123`, etc., but which may someday also include `preferred`, etc. This method currently does no format validation of trait strings, other than to ensure they're not zero-length. :param val: A traits query parameter value: a comma-separated string of trait names. :return: A set of trait names. :raises `webob.exc.HTTPBadRequest` if the val parameter is not in the expected format. """ ret = set(substr.strip() for substr in val.split(',')) if not all(trait for trait in ret): msg = _("Invalid query string parameters: Expected 'required' " "parameter value of the form: HW_CPU_X86_VMX,CUSTOM_MAGIC. " "Got: %s") % val raise webob.exc.HTTPBadRequest(msg) return ret def parse_qs_request_groups(qsdict): """Parse numbered resources and traits groupings out of a querystring dict. The input qsdict represents a query string of the form: ?resources=$RESOURCE_CLASS_NAME:$AMOUNT,$RESOURCE_CLASS_NAME:$AMOUNT &required=$TRAIT_NAME,$TRAIT_NAME &resources1=$RESOURCE_CLASS_NAME:$AMOUNT,RESOURCE_CLASS_NAME:$AMOUNT &required1=$TRAIT_NAME,$TRAIT_NAME &resources2=$RESOURCE_CLASS_NAME:$AMOUNT,RESOURCE_CLASS_NAME:$AMOUNT &required2=$TRAIT_NAME,$TRAIT_NAME These are parsed in groups according to the numeric suffix of the key. For each group, a RequestGroup instance is created containing that group's resources and required traits. For the (single) group with no suffix, the RequestGroup.use_same_provider attribute is False; for the numbered groups it is True. The return is a list of these RequestGroup instances. 
As an example, if qsdict represents the query string: ?resources=VCPU:2,MEMORY_MB:1024,DISK_GB=50 &required=HW_CPU_X86_VMX,CUSTOM_STORAGE_RAID &resources1=SRIOV_NET_VF:2 &required1=CUSTOM_PHYSNET_PUBLIC,CUSTOM_SWITCH_A &resources2=SRIOV_NET_VF:1 &required2=CUSTOM_PHYSNET_PRIVATE ...the return value will be: [ RequestGroup( use_same_provider=False, resources={ "VCPU": 2, "MEMORY_MB": 1024, "DISK_GB" 50, }, required_traits=[ "HW_CPU_X86_VMX", "CUSTOM_STORAGE_RAID", ], ), RequestGroup( use_same_provider=True, resources={ "SRIOV_NET_VF": 2, }, required_traits=[ "CUSTOM_PHYSNET_PUBLIC", "CUSTOM_SWITCH_A", ], ), RequestGroup( use_same_provider=True, resources={ "SRIOV_NET_VF": 1, }, required_traits=[ "CUSTOM_PHYSNET_PRIVATE", ], ), ] :param qsdict: The MultiDict representing the querystring on a GET. :return: A list of RequestGroup instances. :raises `webob.exc.HTTPBadRequest` if any value is malformed, or if a trait list is given without corresponding resources. """ # Temporary dict of the form: { suffix: RequestGroup } by_suffix = {} def get_request_group(suffix): if suffix not in by_suffix: rq_grp = placement_lib.RequestGroup(use_same_provider=bool(suffix)) by_suffix[suffix] = rq_grp return by_suffix[suffix] for key, val in qsdict.items(): match = _QS_KEY_PATTERN.match(key) if not match: continue # `prefix` is 'resources' or 'required' # `suffix` is an integer string, or None prefix, suffix = match.groups() request_group = get_request_group(suffix or '') if prefix == _QS_RESOURCES: request_group.resources = normalize_resources_qs_param(val) elif prefix == _QS_REQUIRED: request_group.required_traits = normalize_traits_qs_param(val) # Ensure any group with 'required' also has 'resources'. orphans = [('required%s' % suff) for suff, group in by_suffix.items() if group.required_traits and not group.resources] if orphans: msg = _('All traits parameters must be associated with resources. ' 'Found the following orphaned traits keys: %s') raise webob.exc.HTTPBadRequest(msg % ', '.join(orphans)) # NOTE(efried): The sorting is not necessary for the API, but it makes # testing easier. return [by_suffix[suff] for suff in sorted(by_suffix)] nova-17.0.1/nova/api/openstack/placement/schemas/0000775000175000017500000000000013250073471021665 5ustar zuulzuul00000000000000nova-17.0.1/nova/api/openstack/placement/schemas/allocation_candidate.py0000666000175000017500000000263113250073126026361 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Placement API schemas for getting allocation candidates.""" import copy # Represents the allowed query string parameters to the GET # /allocation_candidates API call GET_SCHEMA_1_10 = { "type": "object", "properties": { "resources": { "type": "string" }, }, "required": [ "resources", ], "additionalProperties": False, } # Add limit query parameter. GET_SCHEMA_1_16 = copy.deepcopy(GET_SCHEMA_1_10) GET_SCHEMA_1_16['properties']['limit'] = { # A query parameter is always a string in webOb, but # we'll handle integer here as well. 
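    # NOTE: in JSON Schema a keyword only constrains values of its own
    # type, so "pattern" and "minLength" apply to the string form while
    # "minimum" applies to the integer form of the parameter.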
"type": ["integer", "string"], "pattern": "^[1-9][0-9]*$", "minimum": 1, "minLength": 1 } # Add required parameter. GET_SCHEMA_1_17 = copy.deepcopy(GET_SCHEMA_1_16) GET_SCHEMA_1_17['properties']['required'] = { "type": ["string"] } nova-17.0.1/nova/api/openstack/placement/schemas/trait.py0000666000175000017500000000301713250073126023362 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Trait schemas for Placement API.""" import copy TRAIT = { "type": "string", 'minLength': 1, 'maxLength': 255, } CUSTOM_TRAIT = copy.deepcopy(TRAIT) CUSTOM_TRAIT.update({"pattern": "^CUSTOM_[A-Z0-9_]+$"}) PUT_TRAITS_SCHEMA = { "type": "object", "properties": { "traits": { "type": "array", "items": CUSTOM_TRAIT, } }, 'required': ['traits'], 'additionalProperties': False } SET_TRAITS_FOR_RP_SCHEMA = copy.deepcopy(PUT_TRAITS_SCHEMA) SET_TRAITS_FOR_RP_SCHEMA['properties']['traits']['items'] = TRAIT SET_TRAITS_FOR_RP_SCHEMA['properties'][ 'resource_provider_generation'] = {'type': 'integer'} SET_TRAITS_FOR_RP_SCHEMA['required'].append('resource_provider_generation') LIST_TRAIT_SCHEMA = { "type": "object", "properties": { "name": { "type": "string" }, "associated": { "type": "string", } }, "additionalProperties": False } nova-17.0.1/nova/api/openstack/placement/schemas/usage.py0000666000175000017500000000210213250073126023335 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Placement API schemas for usage information.""" # Represents the allowed query string parameters to GET /usages GET_USAGES_SCHEMA_1_9 = { "type": "object", "properties": { "project_id": { "type": "string", "minLength": 1, "maxLength": 255, }, "user_id": { "type": "string", "minLength": 1, "maxLength": 255, }, }, "required": [ "project_id" ], "additionalProperties": False, } nova-17.0.1/nova/api/openstack/placement/schemas/inventory.py0000666000175000017500000000511413250073136024275 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""Inventory schemas for Placement API.""" import copy from nova import db RESOURCE_CLASS_IDENTIFIER = "^[A-Z0-9_]+$" BASE_INVENTORY_SCHEMA = { "type": "object", "properties": { "resource_provider_generation": { "type": "integer" }, "total": { "type": "integer", "maximum": db.MAX_INT, "minimum": 1, }, "reserved": { "type": "integer", "maximum": db.MAX_INT, "minimum": 0, }, "min_unit": { "type": "integer", "maximum": db.MAX_INT, "minimum": 1 }, "max_unit": { "type": "integer", "maximum": db.MAX_INT, "minimum": 1 }, "step_size": { "type": "integer", "maximum": db.MAX_INT, "minimum": 1 }, "allocation_ratio": { "type": "number", "maximum": db.SQL_SP_FLOAT_MAX }, }, "required": [ "total", "resource_provider_generation" ], "additionalProperties": False } POST_INVENTORY_SCHEMA = copy.deepcopy(BASE_INVENTORY_SCHEMA) POST_INVENTORY_SCHEMA['properties']['resource_class'] = { "type": "string", "pattern": RESOURCE_CLASS_IDENTIFIER, } POST_INVENTORY_SCHEMA['required'].append('resource_class') POST_INVENTORY_SCHEMA['required'].remove('resource_provider_generation') PUT_INVENTORY_RECORD_SCHEMA = copy.deepcopy(BASE_INVENTORY_SCHEMA) PUT_INVENTORY_RECORD_SCHEMA['required'].remove('resource_provider_generation') PUT_INVENTORY_SCHEMA = { "type": "object", "properties": { "resource_provider_generation": { "type": "integer" }, "inventories": { "type": "object", "patternProperties": { RESOURCE_CLASS_IDENTIFIER: PUT_INVENTORY_RECORD_SCHEMA, } } }, "required": [ "resource_provider_generation", "inventories" ], "additionalProperties": False } nova-17.0.1/nova/api/openstack/placement/schemas/resource_class.py0000666000175000017500000000172313250073126025255 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Placement API schemas for resource classes.""" import copy POST_RC_SCHEMA_V1_2 = { "type": "object", "properties": { "name": { "type": "string", "pattern": "^CUSTOM\_[A-Z0-9_]+$", "maxLength": 255, }, }, "required": [ "name" ], "additionalProperties": False, } PUT_RC_SCHEMA_V1_2 = copy.deepcopy(POST_RC_SCHEMA_V1_2) nova-17.0.1/nova/api/openstack/placement/schemas/__init__.py0000666000175000017500000000000013250073126023763 0ustar zuulzuul00000000000000nova-17.0.1/nova/api/openstack/placement/schemas/resource_provider.py0000666000175000017500000000615613250073136026010 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""Placement API schemas for resource providers.""" import copy POST_RESOURCE_PROVIDER_SCHEMA = { "type": "object", "properties": { "name": { "type": "string", "maxLength": 200 }, "uuid": { "type": "string", "format": "uuid" } }, "required": [ "name" ], "additionalProperties": False, } # Remove uuid to create the schema for PUTting a resource provider PUT_RESOURCE_PROVIDER_SCHEMA = copy.deepcopy(POST_RESOURCE_PROVIDER_SCHEMA) PUT_RESOURCE_PROVIDER_SCHEMA['properties'].pop('uuid') # Placement API microversion 1.14 adds an optional parent_provider_uuid field # to the POST and PUT request schemas POST_RP_SCHEMA_V1_14 = copy.deepcopy(POST_RESOURCE_PROVIDER_SCHEMA) POST_RP_SCHEMA_V1_14["properties"]["parent_provider_uuid"] = { "anyOf": [ { "type": "string", "format": "uuid", }, { "type": "null", } ] } PUT_RP_SCHEMA_V1_14 = copy.deepcopy(POST_RP_SCHEMA_V1_14) PUT_RP_SCHEMA_V1_14['properties'].pop('uuid') # Represents the allowed query string parameters to the GET /resource_providers # API call GET_RPS_SCHEMA_1_0 = { "type": "object", "properties": { "name": { "type": "string" }, "uuid": { "type": "string", "format": "uuid" } }, "additionalProperties": False, } # Placement API microversion 1.3 adds support for a member_of attribute GET_RPS_SCHEMA_1_3 = copy.deepcopy(GET_RPS_SCHEMA_1_0) GET_RPS_SCHEMA_1_3['properties']['member_of'] = { "type": "string" } # Placement API microversion 1.4 adds support for requesting resource providers # having some set of capacity for some resources. The query string is a # comma-delimited set of "$RESOURCE_CLASS_NAME:$AMOUNT" strings. The validation # of the string is left up to the helper code in the # normalize_resources_qs_param() function. GET_RPS_SCHEMA_1_4 = copy.deepcopy(GET_RPS_SCHEMA_1_3) GET_RPS_SCHEMA_1_4['properties']['resources'] = { "type": "string" } # Placement API microversion 1.14 adds support for requesting resource # providers within a tree of providers. The 'in_tree' query string parameter # should be the UUID of a resource provider. The result of the GET call will # include only those resource providers in the same "provider tree" as the # provider with the UUID represented by 'in_tree' GET_RPS_SCHEMA_1_14 = copy.deepcopy(GET_RPS_SCHEMA_1_4) GET_RPS_SCHEMA_1_14['properties']['in_tree'] = { "type": "string", "format": "uuid", } nova-17.0.1/nova/api/openstack/placement/schemas/aggregate.py0000666000175000017500000000137313250073126024170 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Aggregate schemas for Placement API.""" PUT_AGGREGATES_SCHEMA = { "type": "array", "items": { "type": "string", "format": "uuid" }, "uniqueItems": True } nova-17.0.1/nova/api/openstack/placement/schemas/allocation.py0000666000175000017500000001175413250073126024373 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Placement API schemas for setting and deleting allocations.""" import copy ALLOCATION_SCHEMA = { "type": "object", "properties": { "allocations": { "type": "array", "minItems": 1, "items": { "type": "object", "properties": { "resource_provider": { "type": "object", "properties": { "uuid": { "type": "string", "format": "uuid" } }, "additionalProperties": False, "required": ["uuid"] }, "resources": { "type": "object", "minProperties": 1, "patternProperties": { "^[0-9A-Z_]+$": { "type": "integer", "minimum": 1, } }, "additionalProperties": False } }, "required": [ "resource_provider", "resources" ], "additionalProperties": False } } }, "required": ["allocations"], "additionalProperties": False } ALLOCATION_SCHEMA_V1_8 = copy.deepcopy(ALLOCATION_SCHEMA) ALLOCATION_SCHEMA_V1_8['properties']['project_id'] = {'type': 'string', 'minLength': 1, 'maxLength': 255} ALLOCATION_SCHEMA_V1_8['properties']['user_id'] = {'type': 'string', 'minLength': 1, 'maxLength': 255} ALLOCATION_SCHEMA_V1_8['required'].extend(['project_id', 'user_id']) # Update the allocation schema to achieve symmetry with the representation # used when GET /allocations/{consumer_uuid} is called. # NOTE(cdent): Explicit duplication here for sake of comprehensibility. ALLOCATION_SCHEMA_V1_12 = { "type": "object", "properties": { "allocations": { "type": "object", "minProperties": 1, # resource provider uuid "patternProperties": { "^[0-9a-fA-F-]{36}$": { "type": "object", "properties": { # generation is optional "generation": { "type": "integer", }, "resources": { "type": "object", "minProperties": 1, # resource class "patternProperties": { "^[0-9A-Z_]+$": { "type": "integer", "minimum": 1, } }, "additionalProperties": False } }, "required": ["resources"], "additionalProperties": False } }, "additionalProperties": False }, "project_id": { "type": "string", "minLength": 1, "maxLength": 255 }, "user_id": { "type": "string", "minLength": 1, "maxLength": 255 } }, "required": [ "allocations", "project_id", "user_id" ] } # POST to /allocations, added in microversion 1.13, uses the # POST_ALLOCATIONS_V1_13 schema to allow multiple allocations # from multiple consumers in one request. It is a dict, keyed by # consumer uuid, using the form of PUT allocations from microversion # 1.12. In POST the allocations can be empty, so DELETABLE_ALLOCATIONS # modifies ALLOCATION_SCHEMA_V1_12 accordingly. DELETABLE_ALLOCATIONS = copy.deepcopy(ALLOCATION_SCHEMA_V1_12) DELETABLE_ALLOCATIONS['properties']['allocations']['minProperties'] = 0 POST_ALLOCATIONS_V1_13 = { "type": "object", "minProperties": 1, "additionalProperties": False, "patternProperties": { "^[0-9a-fA-F-]{36}$": DELETABLE_ALLOCATIONS } } nova-17.0.1/nova/api/openstack/placement/__init__.py0000666000175000017500000000000013250073126022340 0ustar zuulzuul00000000000000nova-17.0.1/nova/api/openstack/placement/auth.py0000666000175000017500000000632613250073126021563 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from keystonemiddleware import auth_token from oslo_context import context from oslo_db.sqlalchemy import enginefacade from oslo_log import log as logging from oslo_middleware import request_id import webob.dec import webob.exc LOG = logging.getLogger(__name__) class Middleware(object): def __init__(self, application, **kwargs): self.application = application # NOTE(cdent): Only to be used in tests where auth is being faked. class NoAuthMiddleware(Middleware): """Require a token if one isn't present.""" def __init__(self, application): self.application = application @webob.dec.wsgify def __call__(self, req): if req.environ['PATH_INFO'] == '/': return self.application if 'X-Auth-Token' not in req.headers: return webob.exc.HTTPUnauthorized() token = req.headers['X-Auth-Token'] user_id, _sep, project_id = token.partition(':') project_id = project_id or user_id if user_id == 'admin': roles = ['admin'] else: roles = [] req.headers['X_USER_ID'] = user_id req.headers['X_TENANT_ID'] = project_id req.headers['X_ROLES'] = ','.join(roles) return self.application @enginefacade.transaction_context_provider class RequestContext(context.RequestContext): pass class PlacementKeystoneContext(Middleware): """Make a request context from keystone headers.""" @webob.dec.wsgify def __call__(self, req): req_id = req.environ.get(request_id.ENV_REQUEST_ID) ctx = RequestContext.from_environ( req.environ, request_id=req_id) if ctx.user_id is None and req.environ['PATH_INFO'] != '/': LOG.debug("Neither X_USER_ID nor X_USER found in request") return webob.exc.HTTPUnauthorized() req.environ['placement.context'] = ctx return self.application class PlacementAuthProtocol(auth_token.AuthProtocol): """A wrapper on Keystone auth_token middleware. Does not perform verification of authentication tokens for root in the API. """ def __init__(self, app, conf): self._placement_app = app super(PlacementAuthProtocol, self).__init__(app, conf) def __call__(self, environ, start_response): if environ['PATH_INFO'] == '/': return self._placement_app(environ, start_response) return super(PlacementAuthProtocol, self).__call__( environ, start_response) def filter_factory(global_conf, **local_conf): conf = global_conf.copy() conf.update(local_conf) def auth_filter(app): return PlacementAuthProtocol(app, conf) return auth_filter nova-17.0.1/nova/api/openstack/placement/wsgi.py0000666000175000017500000000440413250073126021566 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """WSGI script for Placement API WSGI handler for running Placement API under Apache2, nginx, gunicorn etc. 
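The configuration file is located by joining the directory named in the
OS_PLACEMENT_CONFIG_DIR environment variable (default: /etc/nova) with the
file name 'nova.conf'; see _get_config_file below.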
""" import logging as py_logging import os import os.path from oslo_log import log as logging from oslo_service import _options as service_opts from nova.api.openstack.placement import deploy from nova import conf from nova import config CONFIG_FILE = 'nova.conf' def setup_logging(config): # Any dependent libraries that have unhelp debug levels should be # pinned to a higher default. extra_log_level_defaults = [ 'routes=INFO', ] logging.set_defaults(default_log_levels=logging.get_default_log_levels() + extra_log_level_defaults) logging.setup(config, 'nova') py_logging.captureWarnings(True) def _get_config_file(env=None): if env is None: env = os.environ dirname = env.get('OS_PLACEMENT_CONFIG_DIR', '/etc/nova').strip() return os.path.join(dirname, CONFIG_FILE) def init_application(): # initialize the config system conffile = _get_config_file() config.parse_args([], default_config_files=[conffile]) # initialize the logging system setup_logging(conf.CONF) # dump conf at debug (log_options option comes from oslo.service) # FIXME(mriedem): This is gross but we don't have a public hook into # oslo.service to register these options, so we are doing it manually for # now; remove this when we have a hook method into oslo.service. conf.CONF.register_opts(service_opts.service_opts) if conf.CONF.log_options: conf.CONF.log_opt_values( logging.getLogger(__name__), logging.DEBUG) # build and return our WSGI app return deploy.loadapp(conf.CONF) nova-17.0.1/nova/api/openstack/placement/handlers/0000775000175000017500000000000013250073471022042 5ustar zuulzuul00000000000000nova-17.0.1/nova/api/openstack/placement/handlers/allocation_candidate.py0000666000175000017500000001762613250073136026551 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Placement API handlers for getting allocation candidates.""" import collections from oslo_log import log as logging from oslo_serialization import jsonutils from oslo_utils import encodeutils from oslo_utils import timeutils import six import webob from nova.api.openstack.placement import microversion from nova.api.openstack.placement.schemas import allocation_candidate as schema from nova.api.openstack.placement import util from nova.api.openstack.placement import wsgi_wrapper from nova import exception from nova.i18n import _ from nova.objects import resource_provider as rp_obj LOG = logging.getLogger(__name__) def _transform_allocation_requests_dict(alloc_reqs): """Turn supplied list of AllocationRequest objects into a list of allocations dicts keyed by resource provider uuid of resources involved in the allocation request. The returned results are intended to be used as the body of a PUT /allocations/{consumer_uuid} HTTP request at micoversion 1.12 (and beyond). The JSON objects look like the following: [ { "allocations": { $rp_uuid1: { "resources": { "MEMORY_MB": 512 ... } }, $rp_uuid2: { "resources": { "DISK_GB": 1024 ... } } }, }, ... 
] """ results = [] for ar in alloc_reqs: # A default dict of {$rp_uuid: "resources": {}) rp_resources = collections.defaultdict(lambda: dict(resources={})) for rr in ar.resource_requests: res_dict = rp_resources[rr.resource_provider.uuid]['resources'] res_dict[rr.resource_class] = rr.amount results.append(dict(allocations=rp_resources)) return results def _transform_allocation_requests_list(alloc_reqs): """Turn supplied list of AllocationRequest objects into a list of dicts of resources involved in the allocation request. The returned results is intended to be able to be used as the body of a PUT /allocations/{consumer_uuid} HTTP request, prior to microversion 1.12, so therefore we return a list of JSON objects that looks like the following: [ { "allocations": [ { "resource_provider": { "uuid": $rp_uuid, } "resources": { $resource_class: $requested_amount, ... }, }, ... ], }, ... ] """ results = [] for ar in alloc_reqs: provider_resources = collections.defaultdict(dict) for rr in ar.resource_requests: res_dict = provider_resources[rr.resource_provider.uuid] res_dict[rr.resource_class] = rr.amount allocs = [ { "resource_provider": { "uuid": rp_uuid, }, "resources": resources, } for rp_uuid, resources in provider_resources.items() ] alloc = { "allocations": allocs } results.append(alloc) return results def _transform_provider_summaries(p_sums, include_traits=False): """Turn supplied list of ProviderSummary objects into a dict, keyed by resource provider UUID, of dicts of provider and inventory information. The traits only show up when `include_traits` is `True`. { RP_UUID_1: { 'resources': { 'DISK_GB': { 'capacity': 100, 'used': 0, }, 'VCPU': { 'capacity': 4, 'used': 0, } }, 'traits': [ 'HW_CPU_X86_AVX512F', 'HW_CPU_X86_AVX512CD' ] }, RP_UUID_2: { 'resources': { 'DISK_GB': { 'capacity': 100, 'used': 0, }, 'VCPU': { 'capacity': 4, 'used': 0, } }, 'traits': [ 'HW_NIC_OFFLOAD_TSO', 'HW_NIC_OFFLOAD_GRO' ] } } """ ret = {} for ps in p_sums: resources = { psr.resource_class: { 'capacity': psr.capacity, 'used': psr.used, } for psr in ps.resources } ret[ps.resource_provider.uuid] = {'resources': resources} if include_traits: ret[ps.resource_provider.uuid]['traits'] = [ t.name for t in ps.traits] return ret def _transform_allocation_candidates(alloc_cands, want_version): """Turn supplied AllocationCandidates object into a dict containing allocation requests and provider summaries. 
{ 'allocation_requests': , 'provider_summaries': , } """ if want_version.matches((1, 12)): a_reqs = _transform_allocation_requests_dict( alloc_cands.allocation_requests) else: a_reqs = _transform_allocation_requests_list( alloc_cands.allocation_requests) include_traits = want_version.matches((1, 17)) p_sums = _transform_provider_summaries(alloc_cands.provider_summaries, include_traits=include_traits) return { 'allocation_requests': a_reqs, 'provider_summaries': p_sums, } @wsgi_wrapper.PlacementWsgify @microversion.version_handler('1.10') @util.check_accept('application/json') def list_allocation_candidates(req): """GET a JSON object with a list of allocation requests and a JSON object of provider summary objects On success return a 200 and an application/json body representing a collection of allocation requests and provider summaries """ context = req.environ['placement.context'] want_version = req.environ[microversion.MICROVERSION_ENVIRON] get_schema = schema.GET_SCHEMA_1_10 if want_version.matches((1, 17)): get_schema = schema.GET_SCHEMA_1_17 elif want_version.matches((1, 16)): get_schema = schema.GET_SCHEMA_1_16 util.validate_query_params(req, get_schema) requests = util.parse_qs_request_groups(req.GET) limit = req.GET.getall('limit') # JSONschema has already confirmed that limit has the form # of an integer. if limit: limit = int(limit[0]) try: cands = rp_obj.AllocationCandidates.get_by_requests(context, requests, limit) except exception.ResourceClassNotFound as exc: raise webob.exc.HTTPBadRequest( _('Invalid resource class in resources parameter: %(error)s') % {'error': exc}) except exception.TraitNotFound as exc: raise webob.exc.HTTPBadRequest(six.text_type(exc)) response = req.response trx_cands = _transform_allocation_candidates(cands, want_version) json_data = jsonutils.dumps(trx_cands) response.body = encodeutils.to_utf8(json_data) response.content_type = 'application/json' if want_version.matches((1, 15)): response.cache_control = 'no-cache' response.last_modified = timeutils.utcnow(with_timezone=True) return response nova-17.0.1/nova/api/openstack/placement/handlers/trait.py0000666000175000017500000002272013250073126023541 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Traits handlers for Placement API.""" import jsonschema from oslo_serialization import jsonutils from oslo_utils import encodeutils from oslo_utils import timeutils import webob from nova.api.openstack.placement import microversion from nova.api.openstack.placement.schemas import trait as schema from nova.api.openstack.placement import util from nova.api.openstack.placement import wsgi_wrapper from nova import exception from nova.i18n import _ from nova.objects import resource_provider as rp_obj def _normalize_traits_qs_param(qs): try: op, value = qs.split(':', 1) except ValueError: msg = _('Badly formatted name parameter. Expected name query string ' 'parameter in form: ' '?name=[in|startswith]:[name1,name2|prefix]. 
Got: "%s"') msg = msg % qs raise webob.exc.HTTPBadRequest(msg) filters = {} if op == 'in': filters['name_in'] = value.split(',') elif op == 'startswith': filters['prefix'] = value return filters def _serialize_traits(traits, want_version): last_modified = None get_last_modified = want_version.matches((1, 15)) trait_names = [] for trait in traits: if get_last_modified: last_modified = util.pick_last_modified(last_modified, trait) trait_names.append(trait.name) # If there were no traits, set last_modified to now last_modified = last_modified or timeutils.utcnow(with_timezone=True) return {'traits': trait_names}, last_modified @wsgi_wrapper.PlacementWsgify @microversion.version_handler('1.6') def put_trait(req): context = req.environ['placement.context'] want_version = req.environ[microversion.MICROVERSION_ENVIRON] name = util.wsgi_path_item(req.environ, 'name') try: jsonschema.validate(name, schema.CUSTOM_TRAIT) except jsonschema.ValidationError: raise webob.exc.HTTPBadRequest( _('The trait is invalid. A valid trait must be no longer than ' '255 characters, start with the prefix "CUSTOM_" and use ' 'following characters: "A"-"Z", "0"-"9" and "_"')) trait = rp_obj.Trait(context) trait.name = name try: trait.create() req.response.status = 201 except exception.TraitExists: # Get the trait that already exists to get last-modified time. if want_version.matches((1, 15)): trait = rp_obj.Trait.get_by_name(context, name) req.response.status = 204 req.response.content_type = None req.response.location = util.trait_url(req.environ, trait) if want_version.matches((1, 15)): req.response.last_modified = trait.created_at req.response.cache_control = 'no-cache' return req.response @wsgi_wrapper.PlacementWsgify @microversion.version_handler('1.6') def get_trait(req): context = req.environ['placement.context'] want_version = req.environ[microversion.MICROVERSION_ENVIRON] name = util.wsgi_path_item(req.environ, 'name') try: trait = rp_obj.Trait.get_by_name(context, name) except exception.TraitNotFound as ex: raise webob.exc.HTTPNotFound( explanation=ex.format_message()) req.response.status = 204 req.response.content_type = None if want_version.matches((1, 15)): req.response.last_modified = trait.created_at req.response.cache_control = 'no-cache' return req.response @wsgi_wrapper.PlacementWsgify @microversion.version_handler('1.6') def delete_trait(req): context = req.environ['placement.context'] name = util.wsgi_path_item(req.environ, 'name') try: trait = rp_obj.Trait.get_by_name(context, name) trait.destroy() except exception.TraitNotFound as ex: raise webob.exc.HTTPNotFound( explanation=ex.format_message()) except exception.TraitCannotDeleteStandard as ex: raise webob.exc.HTTPBadRequest( explanation=ex.format_message()) except exception.TraitInUse as ex: raise webob.exc.HTTPConflict( explanation=ex.format_message()) req.response.status = 204 req.response.content_type = None return req.response @wsgi_wrapper.PlacementWsgify @microversion.version_handler('1.6') @util.check_accept('application/json') def list_traits(req): context = req.environ['placement.context'] want_version = req.environ[microversion.MICROVERSION_ENVIRON] filters = {} util.validate_query_params(req, schema.LIST_TRAIT_SCHEMA) if 'name' in req.GET: filters = _normalize_traits_qs_param(req.GET['name']) if 'associated' in req.GET: if req.GET['associated'].lower() not in ['true', 'false']: raise webob.exc.HTTPBadRequest( explanation=_('The query parameter "associated" only accepts ' '"true" or "false"')) filters['associated'] = ( True if 
req.GET['associated'].lower() == 'true' else False) traits = rp_obj.TraitList.get_all(context, filters) req.response.status = 200 output, last_modified = _serialize_traits(traits, want_version) if want_version.matches((1, 15)): req.response.last_modified = last_modified req.response.cache_control = 'no-cache' req.response.body = encodeutils.to_utf8(jsonutils.dumps(output)) req.response.content_type = 'application/json' return req.response @wsgi_wrapper.PlacementWsgify @microversion.version_handler('1.6') @util.check_accept('application/json') def list_traits_for_resource_provider(req): context = req.environ['placement.context'] want_version = req.environ[microversion.MICROVERSION_ENVIRON] uuid = util.wsgi_path_item(req.environ, 'uuid') # Resource provider object is needed for two things: If it is # NotFound we'll get a 404 here, which needs to happen because # get_all_by_resource_provider can return an empty list. # It is also needed for the generation, used in the outgoing # representation. try: rp = rp_obj.ResourceProvider.get_by_uuid(context, uuid) except exception.NotFound as exc: raise webob.exc.HTTPNotFound( _("No resource provider with uuid %(uuid)s found: %(error)s") % {'uuid': uuid, 'error': exc}) traits = rp_obj.TraitList.get_all_by_resource_provider(context, rp) response_body, last_modified = _serialize_traits(traits, want_version) response_body["resource_provider_generation"] = rp.generation if want_version.matches((1, 15)): req.response.last_modified = last_modified req.response.cache_control = 'no-cache' req.response.status = 200 req.response.body = encodeutils.to_utf8(jsonutils.dumps(response_body)) req.response.content_type = 'application/json' return req.response @wsgi_wrapper.PlacementWsgify @microversion.version_handler('1.6') @util.require_content('application/json') def update_traits_for_resource_provider(req): context = req.environ['placement.context'] want_version = req.environ[microversion.MICROVERSION_ENVIRON] uuid = util.wsgi_path_item(req.environ, 'uuid') data = util.extract_json(req.body, schema.SET_TRAITS_FOR_RP_SCHEMA) rp_gen = data['resource_provider_generation'] traits = data['traits'] resource_provider = rp_obj.ResourceProvider.get_by_uuid( context, uuid) if resource_provider.generation != rp_gen: raise webob.exc.HTTPConflict( _("Resource provider's generation already changed. 
Please update " "the generation and try again."), json_formatter=util.json_error_formatter) trait_objs = rp_obj.TraitList.get_all( context, filters={'name_in': traits}) traits_name = set([obj.name for obj in trait_objs]) non_existed_trait = set(traits) - set(traits_name) if non_existed_trait: raise webob.exc.HTTPBadRequest( _("No such trait %s") % ', '.join(non_existed_trait)) resource_provider.set_traits(trait_objs) response_body, last_modified = _serialize_traits(trait_objs, want_version) response_body[ 'resource_provider_generation'] = resource_provider.generation if want_version.matches((1, 15)): req.response.last_modified = last_modified req.response.cache_control = 'no-cache' req.response.status = 200 req.response.body = encodeutils.to_utf8(jsonutils.dumps(response_body)) req.response.content_type = 'application/json' return req.response @wsgi_wrapper.PlacementWsgify @microversion.version_handler('1.6') def delete_traits_for_resource_provider(req): context = req.environ['placement.context'] uuid = util.wsgi_path_item(req.environ, 'uuid') resource_provider = rp_obj.ResourceProvider.get_by_uuid(context, uuid) try: resource_provider.set_traits(rp_obj.TraitList(objects=[])) except exception.ConcurrentUpdateDetected as e: raise webob.exc.HTTPConflict(explanation=e.format_message()) req.response.status = 204 req.response.content_type = None return req.response nova-17.0.1/nova/api/openstack/placement/handlers/usage.py0000666000175000017500000001132213250073126023516 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Placement API handlers for usage information.""" from oslo_serialization import jsonutils from oslo_utils import encodeutils from oslo_utils import timeutils import webob from nova.api.openstack.placement import microversion from nova.api.openstack.placement.schemas import usage as schema from nova.api.openstack.placement import util from nova.api.openstack.placement import wsgi_wrapper from nova import exception from nova.i18n import _ from nova.objects import resource_provider as rp_obj def _serialize_usages(resource_provider, usage): usage_dict = {resource.resource_class: resource.usage for resource in usage} return {'resource_provider_generation': resource_provider.generation, 'usages': usage_dict} @wsgi_wrapper.PlacementWsgify @util.check_accept('application/json') def list_usages(req): """GET a dictionary of resource provider usage by resource class. If the resource provider does not exist return a 404. On success return a 200 with an application/json representation of the usage dictionary. """ context = req.environ['placement.context'] uuid = util.wsgi_path_item(req.environ, 'uuid') want_version = req.environ[microversion.MICROVERSION_ENVIRON] # Resource provider object needed for two things: If it is # NotFound we'll get a 404 here, which needs to happen because # get_all_by_resource_provider_uuid can return an empty list. # It is also needed for the generation, used in the outgoing # representation. 
try: resource_provider = rp_obj.ResourceProvider.get_by_uuid( context, uuid) except exception.NotFound as exc: raise webob.exc.HTTPNotFound( _("No resource provider with uuid %(uuid)s found: %(error)s") % {'uuid': uuid, 'error': exc}) usage = rp_obj.UsageList.get_all_by_resource_provider_uuid( context, uuid) response = req.response response.body = encodeutils.to_utf8(jsonutils.dumps( _serialize_usages(resource_provider, usage))) req.response.content_type = 'application/json' if want_version.matches((1, 15)): req.response.cache_control = 'no-cache' # While it would be possible to generate a last-modified time # based on the collection of allocations that result in a usage # value (with some spelunking in the SQL) that doesn't align with # the question that is being asked in a request for usages: What # is the usage, now? So the last-modified time is set to utcnow. req.response.last_modified = timeutils.utcnow(with_timezone=True) return req.response @wsgi_wrapper.PlacementWsgify @microversion.version_handler('1.9') @util.check_accept('application/json') def get_total_usages(req): """GET the sum of usages for a project or a project/user. On success return a 200 and an application/json body representing the sum/total of usages. Return 404 Not Found if the wanted microversion does not match. """ context = req.environ['placement.context'] want_version = req.environ[microversion.MICROVERSION_ENVIRON] util.validate_query_params(req, schema.GET_USAGES_SCHEMA_1_9) project_id = req.GET.get('project_id') user_id = req.GET.get('user_id') usages = rp_obj.UsageList.get_all_by_project_user(context, project_id, user_id=user_id) response = req.response usages_dict = {'usages': {resource.resource_class: resource.usage for resource in usages}} response.body = encodeutils.to_utf8(jsonutils.dumps(usages_dict)) req.response.content_type = 'application/json' if want_version.matches((1, 15)): req.response.cache_control = 'no-cache' # While it would be possible to generate a last-modified time # based on the collection of allocations that result in a usage # value (with some spelunking in the SQL) that doesn't align with # the question that is being asked in a request for usages: What # is the usage, now? So the last-modified time is set to utcnow. req.response.last_modified = timeutils.utcnow(with_timezone=True) return req.response nova-17.0.1/nova/api/openstack/placement/handlers/inventory.py0000666000175000017500000003711713250073136024462 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
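# Added illustrative sketch, not part of the original module: a PUT
# inventory body need only carry 'total' plus the provider generation;
# INVENTORY_DEFAULTS below fills in the rest. For example the
# hypothetical body
#
#     {'resource_provider_generation': 1, 'total': 64}
#
# becomes an inventory with reserved=0, min_unit=1,
# max_unit=db.MAX_INT, step_size=1 and allocation_ratio=1.0.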
"""Inventory handlers for Placement API.""" import copy from oslo_db import exception as db_exc from oslo_serialization import jsonutils from oslo_utils import encodeutils import webob from nova.api.openstack.placement import microversion from nova.api.openstack.placement.schemas import inventory as schema from nova.api.openstack.placement import util from nova.api.openstack.placement import wsgi_wrapper from nova import db from nova import exception from nova.i18n import _ from nova.objects import resource_provider as rp_obj # NOTE(cdent): We keep our own representation of inventory defaults # and output fields, separate from the versioned object to avoid # inadvertent API changes when the object defaults are changed. OUTPUT_INVENTORY_FIELDS = [ 'total', 'reserved', 'min_unit', 'max_unit', 'step_size', 'allocation_ratio', ] INVENTORY_DEFAULTS = { 'reserved': 0, 'min_unit': 1, 'max_unit': db.MAX_INT, 'step_size': 1, 'allocation_ratio': 1.0 } def _extract_inventory(body, schema): """Extract and validate inventory from JSON body.""" data = util.extract_json(body, schema) inventory_data = copy.copy(INVENTORY_DEFAULTS) inventory_data.update(data) return inventory_data def _extract_inventories(body, schema): """Extract and validate multiple inventories from JSON body.""" data = util.extract_json(body, schema) inventories = {} for res_class, raw_inventory in data['inventories'].items(): inventory_data = copy.copy(INVENTORY_DEFAULTS) inventory_data.update(raw_inventory) inventories[res_class] = inventory_data data['inventories'] = inventories return data def _make_inventory_object(resource_provider, resource_class, **data): """Single place to catch malformed Inventories.""" # TODO(cdent): Some of the validation checks that are done here # could be done via JSONschema (using, for example, "minimum": # 0) for non-negative integers. It's not clear if that is # duplication or decoupling so leaving it as this for now. 
try: inventory = rp_obj.Inventory( resource_provider=resource_provider, resource_class=resource_class, **data) except (ValueError, TypeError) as exc: raise webob.exc.HTTPBadRequest( _('Bad inventory %(class)s for resource provider ' '%(rp_uuid)s: %(error)s') % {'class': resource_class, 'rp_uuid': resource_provider.uuid, 'error': exc}) return inventory def _send_inventories(req, resource_provider, inventories): """Send a JSON representation of a list of inventories.""" response = req.response response.status = 200 output, last_modified = _serialize_inventories( inventories, resource_provider.generation) response.body = encodeutils.to_utf8(jsonutils.dumps(output)) response.content_type = 'application/json' want_version = req.environ[microversion.MICROVERSION_ENVIRON] if want_version.matches((1, 15)): response.last_modified = last_modified response.cache_control = 'no-cache' return response def _send_inventory(req, resource_provider, inventory, status=200): """Send a JSON representation of one single inventory.""" response = req.response response.status = status response.body = encodeutils.to_utf8(jsonutils.dumps(_serialize_inventory( inventory, generation=resource_provider.generation))) response.content_type = 'application/json' want_version = req.environ[microversion.MICROVERSION_ENVIRON] if want_version.matches((1, 15)): modified = util.pick_last_modified(None, inventory) response.last_modified = modified response.cache_control = 'no-cache' return response def _serialize_inventory(inventory, generation=None): """Turn a single inventory into a dictionary.""" data = { field: getattr(inventory, field) for field in OUTPUT_INVENTORY_FIELDS } if generation: data['resource_provider_generation'] = generation return data def _serialize_inventories(inventories, generation): """Turn a list of inventories in a dict by resource class.""" inventories_by_class = {inventory.resource_class: inventory for inventory in inventories} inventories_dict = {} last_modified = None for resource_class, inventory in inventories_by_class.items(): last_modified = util.pick_last_modified(last_modified, inventory) inventories_dict[resource_class] = _serialize_inventory( inventory, generation=None) return ({'resource_provider_generation': generation, 'inventories': inventories_dict}, last_modified) @wsgi_wrapper.PlacementWsgify @util.require_content('application/json') def create_inventory(req): """POST to create one inventory. On success return a 201 response, a location header pointing to the newly created inventory and an application/json representation of the inventory. 
""" context = req.environ['placement.context'] uuid = util.wsgi_path_item(req.environ, 'uuid') resource_provider = rp_obj.ResourceProvider.get_by_uuid( context, uuid) data = _extract_inventory(req.body, schema.POST_INVENTORY_SCHEMA) resource_class = data.pop('resource_class') inventory = _make_inventory_object(resource_provider, resource_class, **data) try: resource_provider.add_inventory(inventory) except (exception.ConcurrentUpdateDetected, db_exc.DBDuplicateEntry) as exc: raise webob.exc.HTTPConflict( _('Update conflict: %(error)s') % {'error': exc}) except (exception.InvalidInventoryCapacity, exception.NotFound) as exc: raise webob.exc.HTTPBadRequest( _('Unable to create inventory for resource provider ' '%(rp_uuid)s: %(error)s') % {'rp_uuid': resource_provider.uuid, 'error': exc}) response = req.response response.location = util.inventory_url( req.environ, resource_provider, resource_class) return _send_inventory(req, resource_provider, inventory, status=201) @wsgi_wrapper.PlacementWsgify def delete_inventory(req): """DELETE to destroy a single inventory. If the inventory is in use or resource provider generation is out of sync return a 409. On success return a 204 and an empty body. """ context = req.environ['placement.context'] uuid = util.wsgi_path_item(req.environ, 'uuid') resource_class = util.wsgi_path_item(req.environ, 'resource_class') resource_provider = rp_obj.ResourceProvider.get_by_uuid( context, uuid) try: resource_provider.delete_inventory(resource_class) except (exception.ConcurrentUpdateDetected, exception.InventoryInUse) as exc: raise webob.exc.HTTPConflict( _('Unable to delete inventory of class %(class)s: %(error)s') % {'class': resource_class, 'error': exc}) except exception.NotFound as exc: raise webob.exc.HTTPNotFound( _('No inventory of class %(class)s found for delete: %(error)s') % {'class': resource_class, 'error': exc}) response = req.response response.status = 204 response.content_type = None return response @wsgi_wrapper.PlacementWsgify @util.check_accept('application/json') def get_inventories(req): """GET a list of inventories. On success return a 200 with an application/json body representing a collection of inventories. """ context = req.environ['placement.context'] uuid = util.wsgi_path_item(req.environ, 'uuid') try: rp = rp_obj.ResourceProvider.get_by_uuid(context, uuid) except exception.NotFound as exc: raise webob.exc.HTTPNotFound( _("No resource provider with uuid %(uuid)s found : %(error)s") % {'uuid': uuid, 'error': exc}) inv_list = rp_obj.InventoryList.get_all_by_resource_provider(context, rp) return _send_inventories(req, rp, inv_list) @wsgi_wrapper.PlacementWsgify @util.check_accept('application/json') def get_inventory(req): """GET one inventory. On success return a 200 an application/json body representing one inventory. 
""" context = req.environ['placement.context'] uuid = util.wsgi_path_item(req.environ, 'uuid') resource_class = util.wsgi_path_item(req.environ, 'resource_class') try: rp = rp_obj.ResourceProvider.get_by_uuid(context, uuid) except exception.NotFound as exc: raise webob.exc.HTTPNotFound( _("No resource provider with uuid %(uuid)s found : %(error)s") % {'uuid': uuid, 'error': exc}) inv_list = rp_obj.InventoryList.get_all_by_resource_provider(context, rp) inventory = inv_list.find(resource_class) if not inventory: raise webob.exc.HTTPNotFound( _('No inventory of class %(class)s for %(rp_uuid)s') % {'class': resource_class, 'rp_uuid': uuid}) return _send_inventory(req, rp, inventory) @wsgi_wrapper.PlacementWsgify @util.require_content('application/json') def set_inventories(req): """PUT to set all inventory for a resource provider. Create, update and delete inventory as required to reset all the inventory. If the resource generation is out of sync, return a 409. If an inventory to be deleted is in use, return a 409. If any inventory to be created or updated has settings which are invalid (for example reserved exceeds capacity), return a 400. On success return a 200 with an application/json body representing the inventories. """ context = req.environ['placement.context'] uuid = util.wsgi_path_item(req.environ, 'uuid') resource_provider = rp_obj.ResourceProvider.get_by_uuid( context, uuid) data = _extract_inventories(req.body, schema.PUT_INVENTORY_SCHEMA) if data['resource_provider_generation'] != resource_provider.generation: raise webob.exc.HTTPConflict( _('resource provider generation conflict')) inv_list = [] for res_class, inventory_data in data['inventories'].items(): inventory = _make_inventory_object( resource_provider, res_class, **inventory_data) inv_list.append(inventory) inventories = rp_obj.InventoryList(objects=inv_list) try: resource_provider.set_inventory(inventories) except exception.ResourceClassNotFound as exc: raise webob.exc.HTTPBadRequest( _('Unknown resource class in inventory for resource provider ' '%(rp_uuid)s: %(error)s') % {'rp_uuid': resource_provider.uuid, 'error': exc}) except exception.InventoryWithResourceClassNotFound as exc: raise webob.exc.HTTPConflict( _('Race condition detected when setting inventory. No inventory ' 'record with resource class for resource provider ' '%(rp_uuid)s: %(error)s') % {'rp_uuid': resource_provider.uuid, 'error': exc}) except (exception.ConcurrentUpdateDetected, exception.InventoryInUse, db_exc.DBDuplicateEntry) as exc: raise webob.exc.HTTPConflict( _('update conflict: %(error)s') % {'error': exc}) except exception.InvalidInventoryCapacity as exc: raise webob.exc.HTTPBadRequest( _('Unable to update inventory for resource provider ' '%(rp_uuid)s: %(error)s') % {'rp_uuid': resource_provider.uuid, 'error': exc}) return _send_inventories(req, resource_provider, inventories) @wsgi_wrapper.PlacementWsgify @microversion.version_handler('1.5', status_code=405) def delete_inventories(req): """DELETE all inventory for a resource provider. Delete inventory as required to reset all the inventory. If an inventory to be deleted is in use, return a 409 Conflict. On success return a 204 No content. Return 405 Method Not Allowed if the wanted microversion does not match. 
""" context = req.environ['placement.context'] uuid = util.wsgi_path_item(req.environ, 'uuid') resource_provider = rp_obj.ResourceProvider.get_by_uuid( context, uuid) inventories = rp_obj.InventoryList(objects=[]) try: resource_provider.set_inventory(inventories) except exception.ConcurrentUpdateDetected: raise webob.exc.HTTPConflict( _('Unable to delete inventory for resource provider ' '%(rp_uuid)s because the inventory was updated by ' 'another process. Please retry your request.') % {'rp_uuid': resource_provider.uuid}) except exception.InventoryInUse as ex: # NOTE(mriedem): This message cannot change without impacting the # nova.scheduler.client.report._RE_INV_IN_USE regex. raise webob.exc.HTTPConflict(explanation=ex.format_message()) response = req.response response.status = 204 response.content_type = None return response @wsgi_wrapper.PlacementWsgify @util.require_content('application/json') def update_inventory(req): """PUT to update one inventory. If the resource generation is out of sync, return a 409. If the inventory has settings which are invalid (for example reserved exceeds capacity), return a 400. On success return a 200 with an application/json body representing the inventory. """ context = req.environ['placement.context'] uuid = util.wsgi_path_item(req.environ, 'uuid') resource_class = util.wsgi_path_item(req.environ, 'resource_class') resource_provider = rp_obj.ResourceProvider.get_by_uuid( context, uuid) data = _extract_inventory(req.body, schema.BASE_INVENTORY_SCHEMA) if data['resource_provider_generation'] != resource_provider.generation: raise webob.exc.HTTPConflict( _('resource provider generation conflict')) inventory = _make_inventory_object(resource_provider, resource_class, **data) try: resource_provider.update_inventory(inventory) except (exception.ConcurrentUpdateDetected, db_exc.DBDuplicateEntry) as exc: raise webob.exc.HTTPConflict( _('update conflict: %(error)s') % {'error': exc}) except exception.InventoryWithResourceClassNotFound as exc: raise webob.exc.HTTPBadRequest( _('No inventory record with resource class for resource provider ' '%(rp_uuid)s: %(error)s') % {'rp_uuid': resource_provider.uuid, 'error': exc}) except exception.InvalidInventoryCapacity as exc: raise webob.exc.HTTPBadRequest( _('Unable to update inventory for resource provider ' '%(rp_uuid)s: %(error)s') % {'rp_uuid': resource_provider.uuid, 'error': exc}) return _send_inventory(req, resource_provider, inventory) nova-17.0.1/nova/api/openstack/placement/handlers/resource_class.py0000666000175000017500000002035113250073126025430 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""Placement API handlers for resource classes.""" from oslo_serialization import jsonutils from oslo_utils import encodeutils from oslo_utils import timeutils import webob from nova.api.openstack.placement import microversion from nova.api.openstack.placement.schemas import resource_class as schema from nova.api.openstack.placement import util from nova.api.openstack.placement import wsgi_wrapper from nova import exception from nova.i18n import _ from nova.objects import resource_provider as rp_obj def _serialize_links(environ, rc): url = util.resource_class_url(environ, rc) links = [{'rel': 'self', 'href': url}] return links def _serialize_resource_class(environ, rc): data = { 'name': rc.name, 'links': _serialize_links(environ, rc) } return data def _serialize_resource_classes(environ, rcs, want_version): output = [] last_modified = None get_last_modified = want_version.matches((1, 15)) for rc in rcs: if get_last_modified: last_modified = util.pick_last_modified(last_modified, rc) data = _serialize_resource_class(environ, rc) output.append(data) last_modified = last_modified or timeutils.utcnow(with_timezone=True) return ({"resource_classes": output}, last_modified) @wsgi_wrapper.PlacementWsgify @microversion.version_handler('1.2') @util.require_content('application/json') def create_resource_class(req): """POST to create a resource class. On success return a 201 response with an empty body and a location header pointing to the newly created resource class. """ context = req.environ['placement.context'] data = util.extract_json(req.body, schema.POST_RC_SCHEMA_V1_2) try: rc = rp_obj.ResourceClass(context, name=data['name']) rc.create() except exception.ResourceClassExists: raise webob.exc.HTTPConflict( _('Conflicting resource class already exists: %(name)s') % {'name': data['name']}) except exception.MaxDBRetriesExceeded: raise webob.exc.HTTPConflict( _('Max retries of DB transaction exceeded attempting ' 'to create resource class: %(name)s, please' 'try again.') % {'name': data['name']}) req.response.location = util.resource_class_url(req.environ, rc) req.response.status = 201 req.response.content_type = None return req.response @wsgi_wrapper.PlacementWsgify @microversion.version_handler('1.2') def delete_resource_class(req): """DELETE to destroy a single resource class. On success return a 204 and an empty body. """ name = util.wsgi_path_item(req.environ, 'name') context = req.environ['placement.context'] # The containing application will catch a not found here. rc = rp_obj.ResourceClass.get_by_name(context, name) try: rc.destroy() except exception.ResourceClassCannotDeleteStandard as exc: raise webob.exc.HTTPBadRequest( _('Error in delete resource class: %(error)s') % {'error': exc}) except exception.ResourceClassInUse as exc: raise webob.exc.HTTPConflict( _('Error in delete resource class: %(error)s') % {'error': exc}) req.response.status = 204 req.response.content_type = None return req.response @wsgi_wrapper.PlacementWsgify @microversion.version_handler('1.2') @util.check_accept('application/json') def get_resource_class(req): """Get a single resource class. On success return a 200 with an application/json body representing the resource class. """ name = util.wsgi_path_item(req.environ, 'name') context = req.environ['placement.context'] want_version = req.environ[microversion.MICROVERSION_ENVIRON] # The containing application will catch a not found here. 
rc = rp_obj.ResourceClass.get_by_name(context, name) req.response.body = encodeutils.to_utf8(jsonutils.dumps( _serialize_resource_class(req.environ, rc)) ) req.response.content_type = 'application/json' if want_version.matches((1, 15)): req.response.cache_control = 'no-cache' # Non-custom resource classes will return None from pick_last_modified, # so the 'or' causes utcnow to be used. last_modified = util.pick_last_modified(None, rc) or timeutils.utcnow( with_timezone=True) req.response.last_modified = last_modified return req.response @wsgi_wrapper.PlacementWsgify @microversion.version_handler('1.2') @util.check_accept('application/json') def list_resource_classes(req): """GET a list of resource classes. On success return a 200 and an application/json body representing a collection of resource classes. """ context = req.environ['placement.context'] want_version = req.environ[microversion.MICROVERSION_ENVIRON] rcs = rp_obj.ResourceClassList.get_all(context) response = req.response output, last_modified = _serialize_resource_classes( req.environ, rcs, want_version) response.body = encodeutils.to_utf8(jsonutils.dumps(output)) response.content_type = 'application/json' if want_version.matches((1, 15)): response.last_modified = last_modified response.cache_control = 'no-cache' return response @wsgi_wrapper.PlacementWsgify @microversion.version_handler('1.2', '1.6') @util.require_content('application/json') def update_resource_class(req): """PUT to update a single resource class. On success return a 200 response with a representation of the updated resource class. """ name = util.wsgi_path_item(req.environ, 'name') context = req.environ['placement.context'] data = util.extract_json(req.body, schema.PUT_RC_SCHEMA_V1_2) # The containing application will catch a not found here. rc = rp_obj.ResourceClass.get_by_name(context, name) rc.name = data['name'] try: rc.save() except exception.ResourceClassExists: raise webob.exc.HTTPConflict( _('Resource class already exists: %(name)s') % {'name': rc.name}) except exception.ResourceClassCannotUpdateStandard: raise webob.exc.HTTPBadRequest( _('Cannot update standard resource class %(rp_name)s') % {'rp_name': name}) req.response.body = encodeutils.to_utf8(jsonutils.dumps( _serialize_resource_class(req.environ, rc)) ) req.response.status = 200 req.response.content_type = 'application/json' return req.response @wsgi_wrapper.PlacementWsgify # noqa @microversion.version_handler('1.7') def update_resource_class(req): """PUT to create or validate the existence of a single resource class. On a successful create return 201. Return 204 if the class already exists. If the resource class is not a custom resource class, return a 400. 409 might be a better choice, but 400 aligns with previous code. """ name = util.wsgi_path_item(req.environ, 'name') context = req.environ['placement.context'] # Use JSON validation to validate the resource class name. util.extract_json('{"name": "%s"}' % name, schema.PUT_RC_SCHEMA_V1_2) status = 204 try: rc = rp_obj.ResourceClass.get_by_name(context, name) except exception.NotFound: try: rc = rp_obj.ResourceClass(context, name=name) rc.create() status = 201 # We will not see ResourceClassCannotUpdateStandard because # that was already caught when validating the {name}.
except exception.ResourceClassExists: # Someone just now created the class, so stick with 204 pass req.response.status = status req.response.content_type = None req.response.location = util.resource_class_url(req.environ, rc) return req.response nova-17.0.1/nova/api/openstack/placement/handlers/__init__.py0000666000175000017500000000000013250073126024140 0ustar zuulzuul00000000000000nova-17.0.1/nova/api/openstack/placement/handlers/resource_provider.py0000666000175000017500000002476613250073136026174 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Placement API handlers for resource providers.""" from oslo_db import exception as db_exc from oslo_serialization import jsonutils from oslo_utils import encodeutils from oslo_utils import timeutils from oslo_utils import uuidutils import webob from nova.api.openstack.placement import microversion from nova.api.openstack.placement.schemas import resource_provider as rp_schema from nova.api.openstack.placement import util from nova.api.openstack.placement import wsgi_wrapper from nova import exception from nova.i18n import _ from nova.objects import resource_provider as rp_obj def _serialize_links(environ, resource_provider): url = util.resource_provider_url(environ, resource_provider) links = [{'rel': 'self', 'href': url}] rel_types = ['inventories', 'usages'] want_version = environ[microversion.MICROVERSION_ENVIRON] if want_version >= (1, 1): rel_types.append('aggregates') if want_version >= (1, 6): rel_types.append('traits') if want_version >= (1, 11): rel_types.append('allocations') for rel in rel_types: links.append({'rel': rel, 'href': '%s/%s' % (url, rel)}) return links def _serialize_provider(environ, resource_provider, want_version): data = { 'uuid': resource_provider.uuid, 'name': resource_provider.name, 'generation': resource_provider.generation, 'links': _serialize_links(environ, resource_provider) } if want_version.matches((1, 14)): data['parent_provider_uuid'] = resource_provider.parent_provider_uuid data['root_provider_uuid'] = resource_provider.root_provider_uuid return data def _serialize_providers(environ, resource_providers, want_version): output = [] last_modified = None get_last_modified = want_version.matches((1, 15)) for provider in resource_providers: if get_last_modified: last_modified = util.pick_last_modified(last_modified, provider) provider_data = _serialize_provider(environ, provider, want_version) output.append(provider_data) last_modified = last_modified or timeutils.utcnow(with_timezone=True) return ({"resource_providers": output}, last_modified) @wsgi_wrapper.PlacementWsgify @util.require_content('application/json') def create_resource_provider(req): """POST to create a resource provider. On success return a 201 response with an empty body and a location header pointing to the newly created resource provider. 
""" context = req.environ['placement.context'] schema = rp_schema.POST_RESOURCE_PROVIDER_SCHEMA want_version = req.environ[microversion.MICROVERSION_ENVIRON] if want_version.matches((1, 14)): schema = rp_schema.POST_RP_SCHEMA_V1_14 data = util.extract_json(req.body, schema) try: uuid = data.setdefault('uuid', uuidutils.generate_uuid()) resource_provider = rp_obj.ResourceProvider(context, **data) resource_provider.create() except db_exc.DBDuplicateEntry as exc: # Whether exc.columns has one or two entries (in the event # of both fields being duplicates) appears to be database # dependent, so going with the complete solution here. duplicate = ', '.join(['%s: %s' % (column, data[column]) for column in exc.columns]) raise webob.exc.HTTPConflict( _('Conflicting resource provider %(duplicate)s already exists.') % {'duplicate': duplicate}) except exception.ObjectActionError as exc: raise webob.exc.HTTPBadRequest( _('Unable to create resource provider "%(name)s", %(rp_uuid)s: ' '%(error)s') % {'name': data['name'], 'rp_uuid': uuid, 'error': exc}) req.response.location = util.resource_provider_url( req.environ, resource_provider) req.response.status = 201 req.response.content_type = None return req.response @wsgi_wrapper.PlacementWsgify def delete_resource_provider(req): """DELETE to destroy a single resource provider. On success return a 204 and an empty body. """ uuid = util.wsgi_path_item(req.environ, 'uuid') context = req.environ['placement.context'] # The containing application will catch a not found here. try: resource_provider = rp_obj.ResourceProvider.get_by_uuid( context, uuid) resource_provider.destroy() except exception.ResourceProviderInUse as exc: raise webob.exc.HTTPConflict( _('Unable to delete resource provider %(rp_uuid)s: %(error)s') % {'rp_uuid': uuid, 'error': exc}) except exception.NotFound as exc: raise webob.exc.HTTPNotFound( _("No resource provider with uuid %s found for delete") % uuid) req.response.status = 204 req.response.content_type = None return req.response @wsgi_wrapper.PlacementWsgify @util.check_accept('application/json') def get_resource_provider(req): """Get a single resource provider. On success return a 200 with an application/json body representing the resource provider. """ want_version = req.environ[microversion.MICROVERSION_ENVIRON] uuid = util.wsgi_path_item(req.environ, 'uuid') # The containing application will catch a not found here. context = req.environ['placement.context'] resource_provider = rp_obj.ResourceProvider.get_by_uuid( context, uuid) response = req.response response.body = encodeutils.to_utf8(jsonutils.dumps( _serialize_provider(req.environ, resource_provider, want_version))) response.content_type = 'application/json' if want_version.matches((1, 15)): modified = util.pick_last_modified(None, resource_provider) response.last_modified = modified response.cache_control = 'no-cache' return response @wsgi_wrapper.PlacementWsgify @util.check_accept('application/json') def list_resource_providers(req): """GET a list of resource providers. On success return a 200 and an application/json body representing a collection of resource providers. 
""" context = req.environ['placement.context'] want_version = req.environ[microversion.MICROVERSION_ENVIRON] schema = rp_schema.GET_RPS_SCHEMA_1_0 if want_version.matches((1, 14)): schema = rp_schema.GET_RPS_SCHEMA_1_14 elif want_version.matches((1, 4)): schema = rp_schema.GET_RPS_SCHEMA_1_4 elif want_version.matches((1, 3)): schema = rp_schema.GET_RPS_SCHEMA_1_3 util.validate_query_params(req, schema) filters = {} for attr in ['uuid', 'name', 'member_of', 'in_tree']: if attr in req.GET: value = req.GET[attr] # special case member_of to always make its value a # list, either by accepting the single value, or if it # starts with 'in:' splitting on ','. # NOTE(cdent): This will all change when we start using # JSONSchema validation of query params. if attr == 'member_of': if value.startswith('in:'): value = value[3:].split(',') else: value = [value] # Make sure the values are actually UUIDs. for aggr_uuid in value: if not uuidutils.is_uuid_like(aggr_uuid): raise webob.exc.HTTPBadRequest( _('Invalid uuid value: %(uuid)s') % {'uuid': aggr_uuid}) filters[attr] = value if 'resources' in req.GET: resources = util.normalize_resources_qs_param(req.GET['resources']) filters['resources'] = resources try: resource_providers = rp_obj.ResourceProviderList.get_all_by_filters( context, filters) except exception.ResourceClassNotFound as exc: raise webob.exc.HTTPBadRequest( _('Invalid resource class in resources parameter: %(error)s') % {'error': exc}) response = req.response output, last_modified = _serialize_providers( req.environ, resource_providers, want_version) response.body = encodeutils.to_utf8(jsonutils.dumps(output)) response.content_type = 'application/json' if want_version.matches((1, 15)): response.last_modified = last_modified response.cache_control = 'no-cache' return response @wsgi_wrapper.PlacementWsgify @util.require_content('application/json') def update_resource_provider(req): """PUT to update a single resource provider. On success return a 200 response with a representation of the updated resource provider. """ uuid = util.wsgi_path_item(req.environ, 'uuid') context = req.environ['placement.context'] want_version = req.environ[microversion.MICROVERSION_ENVIRON] # The containing application will catch a not found here. 
resource_provider = rp_obj.ResourceProvider.get_by_uuid( context, uuid) schema = rp_schema.PUT_RESOURCE_PROVIDER_SCHEMA if want_version.matches((1, 14)): schema = rp_schema.PUT_RP_SCHEMA_V1_14 data = util.extract_json(req.body, schema) for field in rp_obj.ResourceProvider.SETTABLE_FIELDS: if field in data: setattr(resource_provider, field, data[field]) try: resource_provider.save() except db_exc.DBDuplicateEntry as exc: raise webob.exc.HTTPConflict( _('Conflicting resource provider %(name)s already exists.') % {'name': data['name']}) except exception.ObjectActionError as exc: raise webob.exc.HTTPBadRequest( _('Unable to save resource provider %(rp_uuid)s: %(error)s') % {'rp_uuid': uuid, 'error': exc}) response = req.response response.status = 200 response.body = encodeutils.to_utf8(jsonutils.dumps( _serialize_provider(req.environ, resource_provider, want_version))) response.content_type = 'application/json' if want_version.matches((1, 15)): response.last_modified = resource_provider.updated_at response.cache_control = 'no-cache' return response nova-17.0.1/nova/api/openstack/placement/handlers/aggregate.py0000666000175000017500000000576313250073126024354 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Aggregate handlers for Placement API.""" from oslo_serialization import jsonutils from oslo_utils import encodeutils from oslo_utils import timeutils from nova.api.openstack.placement import microversion from nova.api.openstack.placement.schemas import aggregate as schema from nova.api.openstack.placement import util from nova.api.openstack.placement import wsgi_wrapper from nova.objects import resource_provider as rp_obj def _send_aggregates(req, aggregate_uuids): want_version = req.environ[microversion.MICROVERSION_ENVIRON] response = req.response response.status = 200 response.body = encodeutils.to_utf8( jsonutils.dumps(_serialize_aggregates(aggregate_uuids))) response.content_type = 'application/json' if want_version.matches((1, 15)): req.response.cache_control = 'no-cache' # We never get an aggregate itself, we get the list of aggregates # that are associated with a resource provider. We don't record the # time when that association was made and the time when an aggregate # uuid was created is not relevant, so here we punt and use utcnow. req.response.last_modified = timeutils.utcnow(with_timezone=True) return response def _serialize_aggregates(aggregate_uuids): return {'aggregates': aggregate_uuids} @wsgi_wrapper.PlacementWsgify @util.check_accept('application/json') @microversion.version_handler('1.1') def get_aggregates(req): """GET a list of aggregates associated with a resource provider. If the resource provider does not exist return a 404. On success return a 200 with an application/json body containing a list of aggregate uuids. 
""" context = req.environ['placement.context'] uuid = util.wsgi_path_item(req.environ, 'uuid') resource_provider = rp_obj.ResourceProvider.get_by_uuid( context, uuid) aggregate_uuids = resource_provider.get_aggregates() return _send_aggregates(req, aggregate_uuids) @wsgi_wrapper.PlacementWsgify @util.require_content('application/json') @microversion.version_handler('1.1') def set_aggregates(req): context = req.environ['placement.context'] uuid = util.wsgi_path_item(req.environ, 'uuid') resource_provider = rp_obj.ResourceProvider.get_by_uuid( context, uuid) aggregate_uuids = util.extract_json(req.body, schema.PUT_AGGREGATES_SCHEMA) resource_provider.set_aggregates(aggregate_uuids) return _send_aggregates(req, aggregate_uuids) nova-17.0.1/nova/api/openstack/placement/handlers/root.py0000666000175000017500000000337213250073126023403 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Handler for the root of the Placement API.""" from oslo_serialization import jsonutils from oslo_utils import encodeutils from oslo_utils import timeutils from nova.api.openstack.placement import microversion from nova.api.openstack.placement import wsgi_wrapper @wsgi_wrapper.PlacementWsgify def home(req): want_version = req.environ[microversion.MICROVERSION_ENVIRON] min_version = microversion.min_version_string() max_version = microversion.max_version_string() # NOTE(cdent): As sections of the api are added, links can be # added to this output to align with the guidelines at # http://specs.openstack.org/openstack/api-wg/guidelines/microversion_specification.html#version-discovery version_data = { 'id': 'v%s' % min_version, 'max_version': max_version, 'min_version': min_version, } version_json = jsonutils.dumps({'versions': [version_data]}) req.response.body = encodeutils.to_utf8(version_json) req.response.content_type = 'application/json' if want_version.matches((1, 15)): req.response.cache_control = 'no-cache' req.response.last_modified = timeutils.utcnow(with_timezone=True) return req.response nova-17.0.1/nova/api/openstack/placement/handlers/allocation.py0000666000175000017500000003636313250073126024553 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""Placement API handlers for setting and deleting allocations.""" import collections from oslo_log import log as logging from oslo_serialization import jsonutils from oslo_utils import encodeutils from oslo_utils import timeutils import webob from nova.api.openstack.placement import microversion from nova.api.openstack.placement.schemas import allocation as schema from nova.api.openstack.placement import util from nova.api.openstack.placement import wsgi_wrapper from nova import exception from nova.i18n import _ from nova.objects import resource_provider as rp_obj LOG = logging.getLogger(__name__) def _allocations_dict(allocations, key_fetcher, resource_provider=None, want_version=None): """Turn allocations into a dict of resources keyed by key_fetcher.""" allocation_data = collections.defaultdict(dict) # NOTE(cdent): The last_modified for an allocation will always be # based off the created_at column because allocations are only # ever inserted, never updated. last_modified = None # Only calculate last-modified if we are using a microversion that # supports it. get_last_modified = want_version and want_version.matches((1, 15)) for allocation in allocations: if get_last_modified: last_modified = util.pick_last_modified(last_modified, allocation) key = key_fetcher(allocation) if 'resources' not in allocation_data[key]: allocation_data[key]['resources'] = {} resource_class = allocation.resource_class allocation_data[key]['resources'][resource_class] = allocation.used if not resource_provider: generation = allocation.resource_provider.generation allocation_data[key]['generation'] = generation result = {'allocations': allocation_data} if resource_provider: result['resource_provider_generation'] = resource_provider.generation else: if allocations and want_version and want_version.matches((1, 12)): # We're looking at a list of allocations by consumer id so # project and user are consistent across the list result['project_id'] = allocations[0].project_id result['user_id'] = allocations[0].user_id last_modified = last_modified or timeutils.utcnow(with_timezone=True) return result, last_modified def _serialize_allocations_for_consumer(allocations, want_version=None): """Turn a list of allocations into a dict by resource provider uuid. { 'allocations': { RP_UUID_1: { 'generation': GENERATION, 'resources': { 'DISK_GB': 4, 'VCPU': 2 } }, RP_UUID_2: { 'generation': GENERATION, 'resources': { 'DISK_GB': 6, 'VCPU': 3 } } }, # project_id and user_id are added with microverion 1.12 'project_id': PROJECT_ID, 'user_id': USER_ID } """ return _allocations_dict(allocations, lambda x: x.resource_provider.uuid, want_version=want_version) def _serialize_allocations_for_resource_provider(allocations, resource_provider): """Turn a list of allocations into a dict by consumer id. {'resource_provider_generation': GENERATION, 'allocations': CONSUMER_ID_1: { 'resources': { 'DISK_GB': 4, 'VCPU': 2 } }, CONSUMER_ID_2: { 'resources': { 'DISK_GB': 6, 'VCPU': 3 } } } """ return _allocations_dict(allocations, lambda x: x.consumer_id, resource_provider=resource_provider) @wsgi_wrapper.PlacementWsgify @util.check_accept('application/json') def list_for_consumer(req): """List allocations associated with a consumer.""" context = req.environ['placement.context'] want_version = req.environ[microversion.MICROVERSION_ENVIRON] consumer_id = util.wsgi_path_item(req.environ, 'consumer_uuid') want_version = req.environ[microversion.MICROVERSION_ENVIRON] # NOTE(cdent): There is no way for a 404 to be returned here, # only an empty result. 
We do not have a way to validate a # consumer id. allocations = rp_obj.AllocationList.get_all_by_consumer_id( context, consumer_id) output, last_modified = _serialize_allocations_for_consumer( allocations, want_version) allocations_json = jsonutils.dumps(output) response = req.response response.status = 200 response.body = encodeutils.to_utf8(allocations_json) response.content_type = 'application/json' if want_version.matches((1, 15)): response.last_modified = last_modified response.cache_control = 'no-cache' return response @wsgi_wrapper.PlacementWsgify @util.check_accept('application/json') def list_for_resource_provider(req): """List allocations associated with a resource provider.""" # TODO(cdent): On a shared resource provider (for example a # giant disk farm) this list could get very long. At the moment # we have no facility for limiting the output. Given that we are # using a dict of dicts for the output we are potentially limiting # ourselves in terms of sorting and filtering. context = req.environ['placement.context'] want_version = req.environ[microversion.MICROVERSION_ENVIRON] uuid = util.wsgi_path_item(req.environ, 'uuid') # confirm existence of resource provider so we get a reasonable # 404 instead of empty list try: rp = rp_obj.ResourceProvider.get_by_uuid(context, uuid) except exception.NotFound as exc: raise webob.exc.HTTPNotFound( _("Resource provider '%(rp_uuid)s' not found: %(error)s") % {'rp_uuid': uuid, 'error': exc}) allocs = rp_obj.AllocationList.get_all_by_resource_provider(context, rp) output, last_modified = _serialize_allocations_for_resource_provider( allocs, rp) allocations_json = jsonutils.dumps(output) response = req.response response.status = 200 response.body = encodeutils.to_utf8(allocations_json) response.content_type = 'application/json' if want_version.matches((1, 15)): response.last_modified = last_modified response.cache_control = 'no-cache' return response def _new_allocations(context, resource_provider_uuid, consumer_uuid, resources, project_id, user_id): """Create new allocation objects for a set of resources Returns a list of Allocation objects. :param context: The placement context. :param resource_provider_uuid: The uuid of the resource provider that has the resources. :param consumer_uuid: The uuid of the consumer of the resources. :param resources: A dict of resource classes and values. :param project_id: The project consuming the resources. :param user_id: The user consuming the resources. """ allocations = [] try: resource_provider = rp_obj.ResourceProvider.get_by_uuid( context, resource_provider_uuid) except exception.NotFound: raise webob.exc.HTTPBadRequest( _("Allocation for resource provider '%(rp_uuid)s' " "that does not exist.") % {'rp_uuid': resource_provider_uuid}) for resource_class in resources: allocation = rp_obj.Allocation( resource_provider=resource_provider, consumer_id=consumer_uuid, resource_class=resource_class, project_id=project_id, user_id=user_id, used=resources[resource_class]) allocations.append(allocation) return allocations def _set_allocations_for_consumer(req, schema): context = req.environ['placement.context'] consumer_uuid = util.wsgi_path_item(req.environ, 'consumer_uuid') data = util.extract_json(req.body, schema) allocation_data = data['allocations'] # Normalize allocation data to dict. 
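    # Added illustration (names hypothetical): before microversion 1.12
    # the body carries a list of allocations,
    #
    #     [{'resource_provider': {'uuid': RP_UUID},
    #       'resources': {'VCPU': 2, 'MEMORY_MB': 512}}]
    #
    # which the block below reshapes into the dict form used from 1.12,
    #
    #     {RP_UUID: {'resources': {'VCPU': 2, 'MEMORY_MB': 512}}}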
want_version = req.environ[microversion.MICROVERSION_ENVIRON] if not want_version.matches((1, 12)): allocations_dict = {} # Allocation are list-ish, transform to dict-ish for allocation in allocation_data: resource_provider_uuid = allocation['resource_provider']['uuid'] allocations_dict[resource_provider_uuid] = { 'resources': allocation['resources'] } allocation_data = allocations_dict # If the body includes an allocation for a resource provider # that does not exist, raise a 400. allocation_objects = [] for resource_provider_uuid, allocation in allocation_data.items(): new_allocations = _new_allocations(context, resource_provider_uuid, consumer_uuid, allocation['resources'], data.get('project_id'), data.get('user_id')) allocation_objects.extend(new_allocations) allocations = rp_obj.AllocationList( context, objects=allocation_objects) try: allocations.create_all() LOG.debug("Successfully wrote allocations %s", allocations) # InvalidInventory is a parent for several exceptions that # indicate either that Inventory is not present, or that # capacity limits have been exceeded. except exception.NotFound as exc: raise webob.exc.HTTPBadRequest( _("Unable to allocate inventory for consumer " "%(consumer_uuid)s: %(error)s") % {'consumer_uuid': consumer_uuid, 'error': exc}) except exception.InvalidInventory as exc: raise webob.exc.HTTPConflict( _('Unable to allocate inventory: %(error)s') % {'error': exc}) except exception.ConcurrentUpdateDetected as exc: raise webob.exc.HTTPConflict( _('Inventory changed while attempting to allocate: %(error)s') % {'error': exc}) req.response.status = 204 req.response.content_type = None return req.response @wsgi_wrapper.PlacementWsgify @microversion.version_handler('1.0', '1.7') @util.require_content('application/json') def set_allocations_for_consumer(req): return _set_allocations_for_consumer(req, schema.ALLOCATION_SCHEMA) @wsgi_wrapper.PlacementWsgify # noqa @microversion.version_handler('1.8', '1.11') @util.require_content('application/json') def set_allocations_for_consumer(req): return _set_allocations_for_consumer(req, schema.ALLOCATION_SCHEMA_V1_8) @wsgi_wrapper.PlacementWsgify # noqa @microversion.version_handler('1.12') @util.require_content('application/json') def set_allocations_for_consumer(req): return _set_allocations_for_consumer(req, schema.ALLOCATION_SCHEMA_V1_12) @wsgi_wrapper.PlacementWsgify @microversion.version_handler('1.13') @util.require_content('application/json') def set_allocations(req): context = req.environ['placement.context'] data = util.extract_json(req.body, schema.POST_ALLOCATIONS_V1_13) # Create a sequence of allocation objects to be used in an # AllocationList.create_all() call, which will mean all the changes # happen within a single transaction and with resource provider # generations check all in one go. allocation_objects = [] for consumer_uuid in data: project_id = data[consumer_uuid]['project_id'] user_id = data[consumer_uuid]['user_id'] allocations = data[consumer_uuid]['allocations'] if allocations: for resource_provider_uuid in allocations: resources = allocations[resource_provider_uuid]['resources'] new_allocations = _new_allocations(context, resource_provider_uuid, consumer_uuid, resources, project_id, user_id) allocation_objects.extend(new_allocations) else: # The allocations are empty, which means wipe them out. # Internal to the allocation object this is signalled by a # used value of 0. 
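            # Added illustration (uuids hypothetical) of a 1.13 POST
            # body that assigns resources to one consumer while wiping
            # another:
            #
            #     {CONSUMER_A: {'project_id': PROJ, 'user_id': USER,
            #                   'allocations': {RP_UUID: {'resources':
            #                                             {'VCPU': 2}}}},
            #      CONSUMER_B: {'project_id': PROJ, 'user_id': USER,
            #                   'allocations': {}}}
            #
            # CONSUMER_B's empty 'allocations' takes this zero-out path.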
allocations = rp_obj.AllocationList.get_all_by_consumer_id( context, consumer_uuid) for allocation in allocations: allocation.used = 0 allocation_objects.append(allocation) allocations = rp_obj.AllocationList( context, objects=allocation_objects) try: allocations.create_all() LOG.debug("Successfully wrote allocations %s", allocations) except exception.NotFound as exc: raise webob.exc.HTTPBadRequest( _("Unable to allocate inventory: %(error)s") % {'error': exc}) except exception.InvalidInventory as exc: # InvalidInventory is a parent for several exceptions that # indicate either that Inventory is not present, or that # capacity limits have been exceeded. raise webob.exc.HTTPConflict( _('Unable to allocate inventory: %(error)s') % {'error': exc}) except exception.ConcurrentUpdateDetected as exc: raise webob.exc.HTTPConflict( _('Inventory changed while attempting to allocate: %(error)s') % {'error': exc}) req.response.status = 204 req.response.content_type = None return req.response @wsgi_wrapper.PlacementWsgify def delete_allocations(req): context = req.environ['placement.context'] consumer_uuid = util.wsgi_path_item(req.environ, 'consumer_uuid') allocations = rp_obj.AllocationList.get_all_by_consumer_id( context, consumer_uuid) if allocations: try: allocations.delete_all() # NOTE(pumaranikar): Following NotFound exception added in the case # when allocation is deleted from allocations list by some other # activity. In that case, delete_all() will throw a NotFound exception. except exception.NotFound as exc: raise webob.exc.HTTPNotFound( _("Allocation for consumer with id %(id)s not found. " "Error: %(error)s") % {'id': consumer_uuid, 'error': exc}) else: raise webob.exc.HTTPNotFound( _("No allocations for consumer '%(consumer_uuid)s'") % {'consumer_uuid': consumer_uuid}) LOG.debug("Successfully deleted allocations %s", allocations) req.response.status = 204 req.response.content_type = None return req.response nova-17.0.1/nova/api/openstack/placement/deploy.py0000666000175000017500000000655713250073136022115 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Deployment handling for Placement API.""" import oslo_middleware from oslo_middleware import cors from nova.api import openstack as common_api from nova.api.openstack.placement import auth from nova.api.openstack.placement import handler from nova.api.openstack.placement import microversion from nova.api.openstack.placement import requestlog from nova import objects # TODO(cdent): NAME points to the config project being used, so for # now this is "nova" but we probably want "placement" eventually. NAME = "nova" # Make sure that objects are registered for this running of the # placement API.
objects.register_all() def deploy(conf, project_name): """Assemble the middleware pipeline leading to the placement app.""" if conf.api.auth_strategy == 'noauth2': auth_middleware = auth.NoAuthMiddleware else: # Do not use 'oslo_config_project' param here as the conf # location may have been overridden earlier in the deployment # process with OS_PLACEMENT_CONFIG_DIR in wsgi.py. auth_middleware = auth.filter_factory( {}, oslo_config_config=conf) # Pass in our CORS config, if any, manually as that's a) # explicit, b) makes testing more straightforward, c) lets # us control the use of cors by the presence of its config. conf.register_opts(cors.CORS_OPTS, 'cors') if conf.cors.allowed_origin: cors_middleware = oslo_middleware.CORS.factory( {}, **conf.cors) else: cors_middleware = None context_middleware = auth.PlacementKeystoneContext req_id_middleware = oslo_middleware.RequestId microversion_middleware = microversion.MicroversionMiddleware fault_wrap = common_api.FaultWrapper request_log = requestlog.RequestLog application = handler.PlacementHandler() # NOTE(cdent): The ordering here is important. The list is ordered # from the inside out. For a single request req_id_middleware is called # first and microversion_middleware last. Then the request is finally # passed to the application (the PlacementHandler). At that point # the response ascends the middleware in the reverse of the # order the request went in. This order ensures that log messages # all see the same contextual information including request id and # authentication information. for middleware in (microversion_middleware, fault_wrap, request_log, context_middleware, auth_middleware, cors_middleware, req_id_middleware, ): if middleware: application = middleware(application) return application def loadapp(config, project_name=NAME): application = deploy(config, project_name) return application nova-17.0.1/nova/api/openstack/placement/microversion.py0000666000175000017500000002337613250073136023346 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. """Microversion handling.""" # NOTE(cdent): This code is taken from enamel: # https://github.com/jaypipes/enamel and was the original source of # the code now used in microversion_parse library.
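# Added usage sketch (headers illustrative): clients opt in to a
# microversion by naming the placement service type in the
# OpenStack-API-Version header, e.g.
#
#     OpenStack-API-Version: placement 1.15
#
# A request without the header is treated as the minimum version, and
# the special value 'latest' resolves to the newest entry in VERSIONS
# below.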
import collections import inspect import microversion_parse import webob # NOTE(cdent): avoid cyclical import conflict between util and # microversion import nova.api.openstack.placement.util from nova.i18n import _ SERVICE_TYPE = 'placement' MICROVERSION_ENVIRON = '%s.microversion' % SERVICE_TYPE VERSIONED_METHODS = collections.defaultdict(list) # The Canonical Version List VERSIONS = [ '1.0', '1.1', # initial support for aggregate.get_aggregates and set_aggregates '1.2', # Adds /resource_classes resource endpoint '1.3', # Adds 'member_of' query parameter to get resource providers # that are members of any of the listed aggregates '1.4', # Adds resources query string parameter in GET /resource_providers '1.5', # Adds DELETE /resource_providers/{uuid}/inventories '1.6', # Adds /traits and /resource_providers{uuid}/traits resource # endpoints '1.7', # PUT /resource_classes/{name} is bodiless create or update '1.8', # Adds 'project_id' and 'user_id' required request parameters to # PUT /allocations '1.9', # Adds GET /usages '1.10', # Adds GET /allocation_candidates resource endpoint '1.11', # Adds 'allocations' link to the GET /resource_providers response '1.12', # Add project_id and user_id to GET /allocations/{consumer_uuid} # and PUT to /allocations/{consumer_uuid} in the same dict form # as GET. The 'allocation_requests' format in GET # /allocation_candidates is updated to be the same as well. '1.13', # Adds POST /allocations to set allocations for multiple consumers '1.14', # Adds parent and root provider UUID on resource provider # representation and 'in_tree' filter on GET /resource_providers '1.15', # Include last-modified and cache-control headers '1.16', # Add 'limit' query parameter to GET /allocation_candidates '1.17', # Add 'required' query parameter to GET /allocation_candidates and # return traits in the provider summary. ] def max_version_string(): return VERSIONS[-1] def min_version_string(): return VERSIONS[0] def parse_version_string(version_string): """Turn a version string into a Version :param version_string: A string of two numerals, X.Y, or 'latest' :returns: a Version :raises: TypeError """ if version_string == 'latest': version_string = max_version_string() try: # The combination of int and a limited split with the # named tuple means that this incantation will raise # ValueError or TypeError when the incoming data is # poorly formed but will, however, naturally adapt to # extraneous whitespace. 
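# For example, ' 1.5 ' parses to Version(1, 5) (int() tolerates the # whitespace), while 'abc' or a one-part string like '15' raises here # and is re-raised just below as TypeError.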
return Version(*(int(value) for value in version_string.split('.', 1))) except (ValueError, TypeError) as exc: raise TypeError( _('invalid version string: %(version_string)s; %(exc)s') % {'version_string': version_string, 'exc': exc}) class MicroversionMiddleware(object): """WSGI middleware for getting microversion info.""" def __init__(self, application): self.application = application @webob.dec.wsgify def __call__(self, req): util = nova.api.openstack.placement.util try: microversion = extract_version(req.headers) except ValueError as exc: raise webob.exc.HTTPNotAcceptable( _('Invalid microversion: %(error)s') % {'error': exc}, json_formatter=util.json_error_formatter) except TypeError as exc: raise webob.exc.HTTPBadRequest( _('Invalid microversion: %(error)s') % {'error': exc}, json_formatter=util.json_error_formatter) req.environ[MICROVERSION_ENVIRON] = microversion microversion_header = '%s %s' % (SERVICE_TYPE, microversion) try: response = req.get_response(self.application) except webob.exc.HTTPError as exc: # If there was an error in the application we still need # to send the microversion header, so add the header and # re-raise the exception. exc.headers.add(Version.HEADER, microversion_header) raise exc response.headers.add(Version.HEADER, microversion_header) response.headers.add('vary', Version.HEADER) return response class Version(collections.namedtuple('Version', 'major minor')): """A namedtuple containing major and minor values. Since it is a tuple, it is automatically comparable. """ HEADER = 'OpenStack-API-Version' MIN_VERSION = None MAX_VERSION = None def __str__(self): return '%s.%s' % (self.major, self.minor) @property def max_version(self): if not self.MAX_VERSION: self.MAX_VERSION = parse_version_string(max_version_string()) return self.MAX_VERSION @property def min_version(self): if not self.MIN_VERSION: self.MIN_VERSION = parse_version_string(min_version_string()) return self.MIN_VERSION def matches(self, min_version=None, max_version=None): if min_version is None: min_version = self.min_version if max_version is None: max_version = self.max_version return min_version <= self <= max_version def extract_version(headers): """Extract the microversion from Version.HEADER There may be multiple headers and some which don't match our service. """ found_version = microversion_parse.get_version(headers, service_type=SERVICE_TYPE) version_string = found_version or min_version_string() request_version = parse_version_string(version_string) # We need a version that is in VERSIONS and within MIN and MAX. # This gives us the option to administratively disable a # version if we really need to. if (str(request_version) in VERSIONS and request_version.matches()): return request_version raise ValueError(_('Unacceptable version header: %s') % version_string) # From twisted # https://github.com/twisted/twisted/blob/trunk/twisted/python/deprecate.py def _fully_qualified_name(obj): """Return the fully qualified name of a module, class, method or function. Classes and functions need to be module level ones to be correctly qualified. """ try: name = obj.__qualname__ except AttributeError: name = obj.__name__ if inspect.isclass(obj) or inspect.isfunction(obj): moduleName = obj.__module__ return "%s.%s" % (moduleName, name) elif inspect.ismethod(obj): try: cls = obj.im_class except AttributeError: # Python 3 eliminates im_class, substitutes __module__ and # __qualname__ to provide similar information.
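# (e.g. on Python 3 a bound method Foo.bar defined in module m yields # 'm.Foo.bar' directly, since __qualname__ already includes the class.)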
return "%s.%s" % (obj.__module__, obj.__qualname__) else: className = _fully_qualified_name(cls) return "%s.%s" % (className, name) return name def _find_method(f, version, status_code): """Look in VERSIONED_METHODS for method with right name matching version. If no match is found a HTTPError corresponding to status_code will be returned. """ qualified_name = _fully_qualified_name(f) # A KeyError shouldn't be possible here, but let's be robust # just in case. method_list = VERSIONED_METHODS.get(qualified_name, []) for min_version, max_version, func in method_list: if min_version <= version <= max_version: return func raise webob.exc.status_map[status_code] def version_handler(min_ver, max_ver=None, status_code=404): """Decorator for versioning API methods. Add as a decorator to a placement API handler to constrain the microversions at which it will run. Add after the ``wsgify`` decorator. This does not check for version intersections. That's the domain of tests. :param min_ver: A string of two numerals, X.Y indicating the minimum version allowed for the decorated method. :param max_ver: A string of two numerals, X.Y, indicating the maximum version allowed for the decorated method. :param status_code: A status code to indicate error, 404 by default """ def decorator(f): min_version = parse_version_string(min_ver) if max_ver: max_version = parse_version_string(max_ver) else: max_version = parse_version_string(max_version_string()) qualified_name = _fully_qualified_name(f) VERSIONED_METHODS[qualified_name].append( (min_version, max_version, f)) def decorated_func(req, *args, **kwargs): version = req.environ[MICROVERSION_ENVIRON] return _find_method(f, version, status_code)(req, *args, **kwargs) # Sort highest min version to beginning of list. VERSIONED_METHODS[qualified_name].sort(key=lambda x: x[0], reverse=True) return decorated_func return decorator nova-17.0.1/nova/api/openstack/placement/policy.py0000666000175000017500000000647013250073126022121 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Policy Enforcement for placement API.""" from oslo_config import cfg from oslo_log import log as logging from oslo_policy import policy from oslo_serialization import jsonutils CONF = cfg.CONF LOG = logging.getLogger(__name__) _ENFORCER_PLACEMENT = None def placement_init(): """Init an Enforcer class for placement policy. This method uses a different list of policies than other parts of Nova. This is done to facilitate a split out of the placement service later. """ global _ENFORCER_PLACEMENT if not _ENFORCER_PLACEMENT: # TODO(cdent): Using is_admin everywhere (except /) is # insufficiently flexible for future use case but is # convenient for initial exploration. We will need to # determine how to manage authorization/policy and # implement that, probably per handler. rules = policy.Rules.load(jsonutils.dumps({'placement': 'role:admin'})) # Enforcer is initialized so that the above rule is loaded in and no # policy file is read. 
# TODO(alaski): Register a default rule rather than loading it in like # this. That requires that a policy file is specified to be read. When # this is split out such that a placement policy file makes sense then # change to rule registration. _ENFORCER_PLACEMENT = policy.Enforcer(CONF, rules=rules, use_conf=False) def placement_authorize(context, action, target=None): """Verifies that the action is valid on the target in this context. :param context: RequestContext object :param action: string representing the action to be checked :param target: dictionary representing the object of the action for object creation this should be a dictionary representing the location of the object e.g. ``{'project_id': context.project_id}`` :return: returns a non-False value (not necessarily "True") if authorized, and the exact value False if not authorized. """ placement_init() if target is None: target = {'project_id': context.project_id, 'user_id': context.user_id} credentials = context.to_policy_values() # TODO(alaski): Change this to use authorize() when rules are registered. # noqa the following line because a hacking check disallows using enforce. result = _ENFORCER_PLACEMENT.enforce(action, target, credentials, do_raise=False, exc=None, action=action) if result is False: LOG.debug('Policy check for %(action)s failed with credentials ' '%(credentials)s', {'action': action, 'credentials': credentials}) return result nova-17.0.1/nova/api/openstack/placement/wsgi_wrapper.py0000666000175000017500000000232513250073126023326 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Extend functionality from webob.dec.wsgify for Placement API.""" import webob from oslo_log import log as logging from webob.dec import wsgify from nova.api.openstack.placement import util LOG = logging.getLogger(__name__) class PlacementWsgify(wsgify): def call_func(self, req, *args, **kwargs): """Add json_error_formatter to any webob HTTPExceptions.""" try: super(PlacementWsgify, self).call_func(req, *args, **kwargs) except webob.exc.HTTPException as exc: LOG.debug("Placement API returning an error response: %s", exc) exc.json_formatter = util.json_error_formatter raise nova-17.0.1/nova/api/openstack/versioned_method.py0000666000175000017500000000234513250073126022205 0ustar zuulzuul00000000000000# Copyright 2014 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
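# A minimal sketch of how a VersionedMethod record is consumed (names # hypothetical): a router keeps a list of these per action, sorted by # version, and dispatches to the first whose range matches: # # for vm in versioned_methods['show']: # if vm.start_version <= req_version <= vm.end_version: # return vm.func(req)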
class VersionedMethod(object): def __init__(self, name, start_version, end_version, func): """Versioning information for a single method @name: Name of the method @start_version: Minimum acceptable version @end_version: Maximum acceptable_version @func: Method to call Minimum and maximums are inclusive """ self.name = name self.start_version = start_version self.end_version = end_version self.func = func def __str__(self): return ("Version Method %s: min: %s, max: %s" % (self.name, self.start_version, self.end_version)) nova-17.0.1/nova/api/openstack/api_version_request.py0000666000175000017500000003200313250073126022727 0ustar zuulzuul00000000000000# Copyright 2014 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import re from nova import exception from nova.i18n import _ # Define the minimum and maximum version of the API across all of the # REST API. The format of the version is: # X.Y where: # # - X will only be changed if a significant backwards incompatible API # change is made which affects the API as whole. That is, something # that is only very very rarely incremented. # # - Y when you make any change to the API. Note that this includes # semantic changes which may not affect the input or output formats or # even originate in the API code layer. We are not distinguishing # between backwards compatible and backwards incompatible changes in # the versioning system. It must be made clear in the documentation as # to what is a backwards compatible change and what is a backwards # incompatible one. # # You must update the API version history string below with a one or # two line description as well as update rest_api_version_history.rst REST_API_VERSION_HISTORY = """REST API Version History: * 2.1 - Initial version. Equivalent to v2.0 code * 2.2 - Adds (keypair) type parameter for os-keypairs plugin Fixes success status code for create/delete a keypair method * 2.3 - Exposes additional os-extended-server-attributes Exposes delete_on_termination for os-extended-volumes * 2.4 - Exposes reserved field in os-fixed-ips. * 2.5 - Allow server search option ip6 for non-admin * 2.6 - Consolidate the APIs for getting remote consoles * 2.7 - Check flavor type before add tenant access. * 2.8 - Add new protocol for VM console (mks) * 2.9 - Exposes lock information in server details. * 2.10 - Allow admins to query, create and delete keypairs owned by any user. 
* 2.11 - Exposes forced_down attribute for os-services * 2.12 - Exposes VIF net_id in os-virtual-interfaces * 2.13 - Add project id and user id information for os-server-groups API * 2.14 - Remove onSharedStorage from evacuate request body and remove adminPass from the response body * 2.15 - Add soft-affinity and soft-anti-affinity policies * 2.16 - Exposes host_status for servers/detail and servers/{server_id} * 2.17 - Add trigger_crash_dump to server actions * 2.18 - Makes project_id optional in v2.1 * 2.19 - Allow user to set and get the server description * 2.20 - Add attach and detach volume operations for instances in shelved and shelved_offloaded state * 2.21 - Make os-instance-actions read deleted instances * 2.22 - Add API to force live migration to complete * 2.23 - Add index/show API for server migrations. Also add migration_type for /os-migrations and add ref link for it when the migration is an in progress live migration. * 2.24 - Add API to cancel a running live migration * 2.25 - Make block_migration support 'auto' and remove disk_over_commit for os-migrateLive. * 2.26 - Adds support of server tags * 2.27 - Adds support for new-style microversion headers while keeping support for the original style. * 2.28 - Changes compute_node.cpu_info from string to object * 2.29 - Add a force flag in evacuate request body and change the behaviour for the host flag by calling the scheduler. * 2.30 - Add a force flag in live-migrate request body and change the behaviour for the host flag by calling the scheduler. * 2.31 - Fix os-console-auth-tokens to work for all console types. * 2.32 - Add tag to networks and block_device_mapping_v2 in server boot request body. * 2.33 - Add pagination support for hypervisors. * 2.34 - Checks before live-migration are made in asynchronous way. os-Migratelive Action does not throw badRequest in case of pre-checks failure. Verification result is available over instance-actions. * 2.35 - Adds keypairs pagination support. * 2.36 - Deprecates all the API which proxy to another service and fping API. * 2.37 - Adds support for auto-allocating networking, otherwise known as "Get me a Network". Also enforces server.networks.uuid to be in UUID format. * 2.38 - Add a condition to return HTTPBadRequest if invalid status is provided for listing servers. * 2.39 - Deprecates image-metadata proxy API * 2.40 - Adds simple tenant usage pagination support. * 2.41 - Return uuid attribute for aggregates. * 2.42 - In the context of device tagging at instance boot time, re-introduce the tag attribute that, due to bugs, was lost starting with version 2.33 for block devices and starting with version 2.37 for network interfaces. * 2.43 - Deprecate os-hosts API * 2.44 - The servers action addFixedIp, removeFixedIp, addFloatingIp, removeFloatingIp and os-virtual-interfaces APIs are deprecated. * 2.45 - The createImage and createBackup APIs no longer return a Location header in the response for the snapshot image, they now return a json dict in the response body with an image_id key and uuid value. * 2.46 - Return ``X-OpenStack-Request-ID`` header on requests. * 2.47 - When displaying server details, display the flavor as a dict rather than a link. If the user is prevented from retrieving the flavor extra-specs by policy, simply omit the field from the output. * 2.48 - Standardize VM diagnostics info. * 2.49 - Support tagged attachment of network interfaces and block devices. 
* 2.50 - Exposes ``server_groups`` and ``server_group_members`` keys in GET & PUT ``os-quota-class-sets`` APIs response. Also filter out Network related quotas from ``os-quota-class-sets`` API * 2.51 - Adds new event name to external-events (volume-extended). Also, non-admins can see instance action event details except for the traceback field. * 2.52 - Adds support for applying tags when creating a server. * 2.53 - Service and compute node (hypervisor) database ids are hidden. The os-services and os-hypervisors APIs now return a uuid in the id field, and takes a uuid in requests. PUT and GET requests and responses are also changed. * 2.54 - Enable reset key pair while rebuilding instance. * 2.55 - Added flavor.description to GET/POST/PUT flavors APIs. * 2.56 - Add a host parameter in migrate request body in order to enable users to specify a target host in cold migration. The target host is checked by the scheduler. * 2.57 - Deprecated personality files from POST /servers and the rebuild server action APIs. Added the ability to pass new user_data to the rebuild server action API. Personality / file injection related limits and quota resources are also removed. * 2.58 - Add pagination support and changes-since filter for os-instance-actions API. * 2.59 - Add pagination support and changes-since filter for os-migrations API. And the os-migrations API now returns both the id and the uuid in response. * 2.60 - Add support for attaching a single volume to multiple instances. """ # The minimum and maximum versions of the API supported # The default api version request is defined to be the # minimum version of the API supported. # Note(cyeoh): This only applies for the v2.1 API once microversions # support is fully merged. It does not affect the V2 API. _MIN_API_VERSION = "2.1" _MAX_API_VERSION = "2.60" DEFAULT_API_VERSION = _MIN_API_VERSION # Almost all proxy APIs which are related to network, images and baremetal # were deprecated from 2.36. MAX_PROXY_API_SUPPORT_VERSION = '2.35' MIN_WITHOUT_PROXY_API_SUPPORT_VERSION = '2.36' # Starting from microversion 2.39 also image-metadata proxy API is deprecated. MAX_IMAGE_META_PROXY_API_VERSION = '2.38' MIN_WITHOUT_IMAGE_META_PROXY_API_VERSION = '2.39' # NOTE(cyeoh): min and max versions declared as functions so we can # mock them for unittests. Do not use the constants directly anywhere # else. def min_api_version(): return APIVersionRequest(_MIN_API_VERSION) def max_api_version(): return APIVersionRequest(_MAX_API_VERSION) def is_supported(req, min_version=_MIN_API_VERSION, max_version=_MAX_API_VERSION): """Check if API request version satisfies version restrictions. :param req: request object :param min_version: minimal version of API needed for correct request processing :param max_version: maximum version of API needed for correct request processing :returns: True if request satisfies minimal and maximum API version requirements. False in other case. """ return (APIVersionRequest(max_version) >= req.api_version_request >= APIVersionRequest(min_version)) class APIVersionRequest(object): """This class represents an API Version Request with convenience methods for manipulation and comparison of version numbers that we need to do to implement microversions. """ def __init__(self, version_string=None): """Create an API version request object. :param version_string: String representation of APIVersionRequest. Correct format is 'X.Y', where 'X' and 'Y' are int values. 
None value should be used to create Null APIVersionRequest, which is equal to 0.0 """ self.ver_major = 0 self.ver_minor = 0 if version_string is not None: match = re.match(r"^([1-9]\d*)\.([1-9]\d*|0)$", version_string) if match: self.ver_major = int(match.group(1)) self.ver_minor = int(match.group(2)) else: raise exception.InvalidAPIVersionString(version=version_string) def __str__(self): """Debug/Logging representation of object.""" return ("API Version Request Major: %s, Minor: %s" % (self.ver_major, self.ver_minor)) def is_null(self): return self.ver_major == 0 and self.ver_minor == 0 def _format_type_error(self, other): return TypeError(_("'%(other)s' should be an instance of '%(cls)s'") % {"other": other, "cls": self.__class__}) def __lt__(self, other): if not isinstance(other, APIVersionRequest): raise self._format_type_error(other) return ((self.ver_major, self.ver_minor) < (other.ver_major, other.ver_minor)) def __eq__(self, other): if not isinstance(other, APIVersionRequest): raise self._format_type_error(other) return ((self.ver_major, self.ver_minor) == (other.ver_major, other.ver_minor)) def __gt__(self, other): if not isinstance(other, APIVersionRequest): raise self._format_type_error(other) return ((self.ver_major, self.ver_minor) > (other.ver_major, other.ver_minor)) def __le__(self, other): return self < other or self == other def __ne__(self, other): return not self.__eq__(other) def __ge__(self, other): return self > other or self == other def matches(self, min_version, max_version): """Returns whether the version object represents a version greater than or equal to the minimum version and less than or equal to the maximum version. @param min_version: Minimum acceptable version. @param max_version: Maximum acceptable version. @returns: boolean If min_version is null then there is no minimum limit. If max_version is null then there is no maximum limit. If self is null then raise ValueError """ if self.is_null(): raise ValueError if max_version.is_null() and min_version.is_null(): return True elif max_version.is_null(): return min_version <= self elif min_version.is_null(): return self <= max_version else: return min_version <= self <= max_version def get_string(self): """Converts object to string representation which if used to create an APIVersionRequest object results in the same version request. """ if self.is_null(): raise ValueError return "%s.%s" % (self.ver_major, self.ver_minor) nova-17.0.1/nova/api/openstack/__init__.py0000666000175000017500000002056213250073126020407 0ustar zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ WSGI middleware for OpenStack API controllers. 
""" from oslo_log import log as logging import routes import webob.dec import webob.exc from nova.api.openstack import wsgi import nova.conf from nova.i18n import translate from nova import utils from nova import wsgi as base_wsgi LOG = logging.getLogger(__name__) CONF = nova.conf.CONF class FaultWrapper(base_wsgi.Middleware): """Calls down the middleware stack, making exceptions into faults.""" _status_to_type = {} @staticmethod def status_to_type(status): if not FaultWrapper._status_to_type: for clazz in utils.walk_class_hierarchy(webob.exc.HTTPError): FaultWrapper._status_to_type[clazz.code] = clazz return FaultWrapper._status_to_type.get( status, webob.exc.HTTPInternalServerError)() def _error(self, inner, req): LOG.exception("Caught error: %s", inner) safe = getattr(inner, 'safe', False) headers = getattr(inner, 'headers', None) status = getattr(inner, 'code', 500) if status is None: status = 500 msg_dict = dict(url=req.url, status=status) LOG.info("%(url)s returned with HTTP %(status)d", msg_dict) outer = self.status_to_type(status) if headers: outer.headers = headers # NOTE(johannes): We leave the explanation empty here on # purpose. It could possibly have sensitive information # that should not be returned back to the user. See # bugs 868360 and 874472 # NOTE(eglynn): However, it would be over-conservative and # inconsistent with the EC2 API to hide every exception, # including those that are safe to expose, see bug 1021373 if safe: user_locale = req.best_match_language() inner_msg = translate(inner.message, user_locale) outer.explanation = '%s: %s' % (inner.__class__.__name__, inner_msg) return wsgi.Fault(outer) @webob.dec.wsgify(RequestClass=wsgi.Request) def __call__(self, req): try: return req.get_response(self.application) except Exception as ex: return self._error(ex, req) class LegacyV2CompatibleWrapper(base_wsgi.Middleware): def _filter_request_headers(self, req): """For keeping same behavior with v2 API, ignores microversions HTTP headers X-OpenStack-Nova-API-Version and OpenStack-API-Version in the request. """ if wsgi.API_VERSION_REQUEST_HEADER in req.headers: del req.headers[wsgi.API_VERSION_REQUEST_HEADER] if wsgi.LEGACY_API_VERSION_REQUEST_HEADER in req.headers: del req.headers[wsgi.LEGACY_API_VERSION_REQUEST_HEADER] return req def _filter_response_headers(self, response): """For keeping same behavior with v2 API, filter out microversions HTTP header and microversions field in header 'Vary'. 
""" if wsgi.API_VERSION_REQUEST_HEADER in response.headers: del response.headers[wsgi.API_VERSION_REQUEST_HEADER] if wsgi.LEGACY_API_VERSION_REQUEST_HEADER in response.headers: del response.headers[wsgi.LEGACY_API_VERSION_REQUEST_HEADER] if 'Vary' in response.headers: vary_headers = response.headers['Vary'].split(',') filtered_vary = [] for vary in vary_headers: vary = vary.strip() if (vary == wsgi.API_VERSION_REQUEST_HEADER or vary == wsgi.LEGACY_API_VERSION_REQUEST_HEADER): continue filtered_vary.append(vary) if filtered_vary: response.headers['Vary'] = ','.join(filtered_vary) else: del response.headers['Vary'] return response @webob.dec.wsgify(RequestClass=wsgi.Request) def __call__(self, req): req.set_legacy_v2() req = self._filter_request_headers(req) response = req.get_response(self.application) return self._filter_response_headers(response) class APIMapper(routes.Mapper): def routematch(self, url=None, environ=None): if url == "": result = self._match("", environ) return result[0], result[1] return routes.Mapper.routematch(self, url, environ) def connect(self, *args, **kargs): # NOTE(vish): Default the format part of a route to only accept json # and xml so it doesn't eat all characters after a '.' # in the url. kargs.setdefault('requirements', {}) if not kargs['requirements'].get('format'): kargs['requirements']['format'] = 'json|xml' return routes.Mapper.connect(self, *args, **kargs) class ProjectMapper(APIMapper): def _get_project_id_token(self): # NOTE(sdague): project_id parameter is only valid if its hex # or hex + dashes (note, integers are a subset of this). This # is required to hand our overlaping routes issues. project_id_regex = '[0-9a-f\-]+' if CONF.osapi_v21.project_id_regex: project_id_regex = CONF.osapi_v21.project_id_regex return '{project_id:%s}' % project_id_regex def resource(self, member_name, collection_name, **kwargs): project_id_token = self._get_project_id_token() if 'parent_resource' not in kwargs: kwargs['path_prefix'] = '%s/' % project_id_token else: parent_resource = kwargs['parent_resource'] p_collection = parent_resource['collection_name'] p_member = parent_resource['member_name'] kwargs['path_prefix'] = '%s/%s/:%s_id' % ( project_id_token, p_collection, p_member) routes.Mapper.resource( self, member_name, collection_name, **kwargs) # while we are in transition mode, create additional routes # for the resource that do not include project_id. 
if 'parent_resource' not in kwargs: del kwargs['path_prefix'] else: parent_resource = kwargs['parent_resource'] p_collection = parent_resource['collection_name'] p_member = parent_resource['member_name'] kwargs['path_prefix'] = '%s/:%s_id' % (p_collection, p_member) routes.Mapper.resource(self, member_name, collection_name, **kwargs) def create_route(self, path, method, controller, action): project_id_token = self._get_project_id_token() # while we transition away from project IDs in the API URIs, create # additional routes that include the project_id self.connect('/%s%s' % (project_id_token, path), conditions=dict(method=[method]), controller=controller, action=action) self.connect(path, conditions=dict(method=[method]), controller=controller, action=action) class PlainMapper(APIMapper): def resource(self, member_name, collection_name, **kwargs): if 'parent_resource' in kwargs: parent_resource = kwargs['parent_resource'] p_collection = parent_resource['collection_name'] p_member = parent_resource['member_name'] kwargs['path_prefix'] = '%s/:%s_id' % (p_collection, p_member) routes.Mapper.resource(self, member_name, collection_name, **kwargs) nova-17.0.1/nova/api/openstack/identity.py0000666000175000017500000000536713250073126020507 0ustar zuulzuul00000000000000# Copyright 2017 IBM # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from keystoneauth1 import exceptions as kse from oslo_log import log as logging import webob from nova.i18n import _ from nova import utils LOG = logging.getLogger(__name__) def verify_project_id(context, project_id): """verify that a project_id exists. This attempts to verify that a project id exists. If it does not, an HTTPBadRequest is emitted. """ adap = utils.get_ksa_adapter( 'identity', ksa_auth=context.get_auth_plugin(), min_version=(3, 0), max_version=(3, 'latest')) failure = webob.exc.HTTPBadRequest( explanation=_("Project ID %s is not a valid project.") % project_id) try: resp = adap.get('/projects/%s' % project_id, raise_exc=False) except kse.EndpointNotFound: LOG.error( "Keystone identity service version 3.0 was not found. This might " "be because your endpoint points to the v2.0 versioned endpoint " "which is not supported. Please fix this.") raise failure except kse.ClientException: # something is wrong, like there isn't a keystone v3 endpoint, # or nova isn't configured for the interface to talk to it; # we'll take the pass and default to everything being ok. LOG.info("Unable to contact keystone to verify project_id") return True if resp: # All is good with this 20x status return True elif resp.status_code == 404: # we got access, and we know this project is not there raise failure elif resp.status_code == 403: # we don't have enough permission to verify this, so default # to "it's ok". 
LOG.info( "Insufficient permissions for user %(user)s to verify " "existence of project_id %(pid)s", {"user": context.user_id, "pid": project_id}) return True else: LOG.warning( "Unexpected response from keystone trying to " "verify project_id %(pid)s - resp: %(code)s %(content)s", {"pid": project_id, "code": resp.status_code, "content": resp.content}) # realize we did something wrong, but move on with a warning return True nova-17.0.1/nova/api/openstack/compute/0000775000175000017500000000000013250073471017746 5ustar zuulzuul00000000000000nova-17.0.1/nova/api/openstack/compute/server_diagnostics.py0000666000175000017500000000571513250073126024224 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import webob from nova.api.openstack import api_version_request from nova.api.openstack import common from nova.api.openstack.compute.views import server_diagnostics from nova.api.openstack import wsgi from nova import compute from nova import exception from nova.i18n import _ from nova.policies import server_diagnostics as sd_policies class ServerDiagnosticsController(wsgi.Controller): _view_builder_class = server_diagnostics.ViewBuilder def __init__(self, *args, **kwargs): super(ServerDiagnosticsController, self).__init__(*args, **kwargs) self.compute_api = compute.API() @wsgi.expected_errors((400, 404, 409, 501)) def index(self, req, server_id): context = req.environ["nova.context"] context.can(sd_policies.BASE_POLICY_NAME) instance = common.get_instance(self.compute_api, context, server_id) try: if api_version_request.is_supported(req, min_version='2.48'): diagnostics = self.compute_api.get_instance_diagnostics( context, instance) return self._view_builder.instance_diagnostics(diagnostics) return self.compute_api.get_diagnostics(context, instance) except exception.InstanceInvalidState as state_error: common.raise_http_conflict_for_instance_invalid_state(state_error, 'get_diagnostics', server_id) except exception.InstanceNotReady as e: raise webob.exc.HTTPConflict(explanation=e.format_message()) except exception.InstanceDiagnosticsNotSupported: # NOTE(snikitin): During upgrade we may face situation when env # has new API and old compute. New compute returns a # Diagnostics object. Old compute returns a dictionary. So we # can't perform a request correctly if compute is too old. msg = _('Compute node is too old. You must complete the ' 'upgrade process to be able to get standardized ' 'diagnostics data which is available since v2.48. However ' 'you are still able to get diagnostics data in ' 'non-standardized format which is available until v2.47.') raise webob.exc.HTTPBadRequest(explanation=msg) except NotImplementedError: common.raise_feature_not_supported() nova-17.0.1/nova/api/openstack/compute/pause_server.py0000666000175000017500000000574513250073126023035 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # Copyright 2013 IBM Corp. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from webob import exc from nova.api.openstack import common from nova.api.openstack import wsgi from nova import compute from nova import exception from nova.policies import pause_server as ps_policies class PauseServerController(wsgi.Controller): def __init__(self, *args, **kwargs): super(PauseServerController, self).__init__(*args, **kwargs) self.compute_api = compute.API() @wsgi.response(202) @wsgi.expected_errors((404, 409, 501)) @wsgi.action('pause') def _pause(self, req, id, body): """Permit Admins to pause the server.""" ctxt = req.environ['nova.context'] server = common.get_instance(self.compute_api, ctxt, id) ctxt.can(ps_policies.POLICY_ROOT % 'pause', target={'user_id': server.user_id, 'project_id': server.project_id}) try: self.compute_api.pause(ctxt, server) except exception.InstanceIsLocked as e: raise exc.HTTPConflict(explanation=e.format_message()) except exception.InstanceInvalidState as state_error: common.raise_http_conflict_for_instance_invalid_state(state_error, 'pause', id) except (exception.InstanceUnknownCell, exception.InstanceNotFound) as e: raise exc.HTTPNotFound(explanation=e.format_message()) except NotImplementedError: common.raise_feature_not_supported() @wsgi.response(202) @wsgi.expected_errors((404, 409, 501)) @wsgi.action('unpause') def _unpause(self, req, id, body): """Permit Admins to unpause the server.""" ctxt = req.environ['nova.context'] ctxt.can(ps_policies.POLICY_ROOT % 'unpause') server = common.get_instance(self.compute_api, ctxt, id) try: self.compute_api.unpause(ctxt, server) except exception.InstanceIsLocked as e: raise exc.HTTPConflict(explanation=e.format_message()) except exception.InstanceInvalidState as state_error: common.raise_http_conflict_for_instance_invalid_state(state_error, 'unpause', id) except (exception.InstanceUnknownCell, exception.InstanceNotFound) as e: raise exc.HTTPNotFound(explanation=e.format_message()) except NotImplementedError: common.raise_feature_not_supported() nova-17.0.1/nova/api/openstack/compute/extended_status.py0000666000175000017500000000437613250073126023534 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """The Extended Status Admin API extension.""" from nova.api.openstack import wsgi from nova.policies import extended_status as es_policies class ExtendedStatusController(wsgi.Controller): def _extend_server(self, server, instance): # Note(gmann): Removed 'locked_by' from extended status # to make it same as V2. 
If needed it can be added with # microversion. for state in ['task_state', 'vm_state', 'power_state']: # NOTE(mriedem): The OS-EXT-STS prefix should not be used for new # attributes after v2.1. They are only in v2.1 for backward compat # with v2.0. key = "%s:%s" % ('OS-EXT-STS', state) server[key] = instance[state] @wsgi.extends def show(self, req, resp_obj, id): context = req.environ['nova.context'] if context.can(es_policies.BASE_POLICY_NAME, fatal=False): server = resp_obj.obj['server'] db_instance = req.get_db_instance(server['id']) # server['id'] is guaranteed to be in the cache due to # the core API adding it in its 'show' method. self._extend_server(server, db_instance) @wsgi.extends def detail(self, req, resp_obj): context = req.environ['nova.context'] if context.can(es_policies.BASE_POLICY_NAME, fatal=False): servers = list(resp_obj.obj['servers']) for server in servers: db_instance = req.get_db_instance(server['id']) # server['id'] is guaranteed to be in the cache due to # the core API adding it in its 'detail' method. self._extend_server(server, db_instance) nova-17.0.1/nova/api/openstack/compute/helpers.py0000666000175000017500000001054313250073126021764 0ustar zuulzuul00000000000000# Copyright 2016 HPE, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_utils import strutils from webob import exc from nova.i18n import _ API_DISK_CONFIG = "OS-DCF:diskConfig" API_ACCESS_V4 = "accessIPv4" API_ACCESS_V6 = "accessIPv6" # possible ops CREATE = 'create' UPDATE = 'update' REBUILD = 'rebuild' RESIZE = 'resize' def disk_config_from_api(value): if value == 'AUTO': return True elif value == 'MANUAL': return False else: msg = _("%s must be either 'MANUAL' or 'AUTO'.") % API_DISK_CONFIG raise exc.HTTPBadRequest(explanation=msg) def get_injected_files(personality): """Create a list of injected files from the personality attribute. At this time, injected_files must be formatted as a list of (file_path, file_content) pairs for compatibility with the underlying compute service. """ injected_files = [] for item in personality: injected_files.append((item['path'], item['contents'])) return injected_files def translate_attributes(op, server_dict, operation_kwargs): """Translate REST attributes on create to server object kwargs. Our REST API is relatively fixed, but internal representations change over time, this is a translator for inbound REST request attributes that modifies the server dict that we get and adds appropriate attributes to ``operation_kwargs`` that will be passed down to instance objects later. It's done in a common function as this is used for create / resize / rebuild / update The ``op`` is the operation that we are transforming, because there are times when we translate differently for different operations. (Yes, it's a little nuts, but legacy... ) The ``server_dict`` is a representation of the server in question. During ``create`` and ``update`` operations this will actually be the ``server`` element of the request body. 
During actions, such as ``rebuild`` and ``resize`` this will be the attributes passed to the action object during the operation. This is equivalent to the ``server`` object. Not all operations support all attributes listed here. Which is why it's important to only set operation_kwargs if there is something to set. Input validation will ensure that we are only operating on appropriate attributes for each operation. """ # Disk config auto_disk_config_raw = server_dict.pop(API_DISK_CONFIG, None) if auto_disk_config_raw is not None: auto_disk_config = disk_config_from_api(auto_disk_config_raw) operation_kwargs['auto_disk_config'] = auto_disk_config if API_ACCESS_V4 in server_dict: operation_kwargs['access_ip_v4'] = server_dict.pop(API_ACCESS_V4) if API_ACCESS_V6 in server_dict: operation_kwargs['access_ip_v6'] = server_dict.pop(API_ACCESS_V6) # This is only ever expected during rebuild operations, and only # does anything with Ironic driver. It also demonstrates the lack # of understanding of the word ephemeral. if 'preserve_ephemeral' in server_dict and op == REBUILD: preserve = strutils.bool_from_string( server_dict.pop('preserve_ephemeral'), strict=True) operation_kwargs['preserve_ephemeral'] = preserve # yes, we use different kwargs, this goes all the way back to # commit cebc98176926f57016a508d5c59b11f55dfcf2b3. if 'personality' in server_dict: if op == REBUILD: operation_kwargs['files_to_inject'] = get_injected_files( server_dict.pop('personality')) # NOTE(sdague): the deprecated hooks infrastructure doesn't # function if injected files is not defined as a list. Once hooks # are removed, this should go back inside the personality # conditional above. if op == CREATE: operation_kwargs['injected_files'] = get_injected_files( server_dict.pop('personality', [])) nova-17.0.1/nova/api/openstack/compute/console_auth_tokens.py0000666000175000017500000000457713250073126024402 0ustar zuulzuul00000000000000# Copyright 2013 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
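# A successful token lookup below yields a response shaped like this # sketch (values hypothetical): # # {"console": {"instance_uuid": "...", "host": "192.0.2.10", # "port": 5900, "internal_access_path": null}}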
import webob from nova.api.openstack import wsgi from nova.consoleauth import rpcapi as consoleauth_rpcapi from nova.i18n import _ from nova.policies import console_auth_tokens as cat_policies class ConsoleAuthTokensController(wsgi.Controller): def __init__(self, *args, **kwargs): self._consoleauth_rpcapi = consoleauth_rpcapi.ConsoleAuthAPI() super(ConsoleAuthTokensController, self).__init__(*args, **kwargs) def _show(self, req, id, rdp_only): """Checks a console auth token and returns the related connect info.""" context = req.environ['nova.context'] context.can(cat_policies.BASE_POLICY_NAME) token = id if not token: msg = _("token not provided") raise webob.exc.HTTPBadRequest(explanation=msg) connect_info = self._consoleauth_rpcapi.check_token(context, token) if not connect_info: raise webob.exc.HTTPNotFound(explanation=_("Token not found")) console_type = connect_info.get('console_type') if rdp_only and console_type != "rdp-html5": raise webob.exc.HTTPUnauthorized( explanation=_("The requested console type details are not " "accessible")) return {'console': {i: connect_info[i] for i in ['instance_uuid', 'host', 'port', 'internal_access_path'] if i in connect_info}} @wsgi.Controller.api_version("2.1", "2.30") @wsgi.expected_errors((400, 401, 404)) def show(self, req, id): return self._show(req, id, True) @wsgi.Controller.api_version("2.31") # noqa @wsgi.expected_errors((400, 404)) def show(self, req, id): return self._show(req, id, False) nova-17.0.1/nova/api/openstack/compute/cells.py0000666000175000017500000002572613250073126021435 0ustar zuulzuul00000000000000# Copyright 2011-2012 OpenStack Foundation # All Rights Reserved. # Copyright 2013 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """The cells extension.""" import oslo_messaging as messaging from oslo_utils import strutils import six from webob import exc from nova.api.openstack import common from nova.api.openstack.compute.schemas import cells from nova.api.openstack import wsgi from nova.api import validation from nova.cells import rpcapi as cells_rpcapi import nova.conf from nova import exception from nova.i18n import _ from nova.policies import cells as cells_policies from nova import rpc CONF = nova.conf.CONF def _filter_keys(item, keys): """Filters all model attributes except for keys item is a dict """ return {k: v for k, v in item.items() if k in keys} def _fixup_cell_info(cell_info, keys): """If the transport_url is present in the cell, derive username, rpc_host, and rpc_port from it. 
""" if 'transport_url' not in cell_info: return # Disassemble the transport URL transport_url = cell_info.pop('transport_url') try: transport_url = rpc.get_transport_url(transport_url) except messaging.InvalidTransportURL: # Just go with None's for key in keys: cell_info.setdefault(key, None) return if not transport_url.hosts: return transport_host = transport_url.hosts[0] transport_field_map = {'rpc_host': 'hostname', 'rpc_port': 'port'} for key in keys: if key in cell_info: continue transport_field = transport_field_map.get(key, key) cell_info[key] = getattr(transport_host, transport_field) def _scrub_cell(cell, detail=False): keys = ['name', 'username', 'rpc_host', 'rpc_port'] if detail: keys.append('capabilities') cell_info = _filter_keys(cell, keys + ['transport_url']) _fixup_cell_info(cell_info, keys) cell_info['type'] = 'parent' if cell['is_parent'] else 'child' return cell_info class CellsController(wsgi.Controller): """Controller for Cell resources.""" def __init__(self): self.cells_rpcapi = cells_rpcapi.CellsAPI() def _get_cells(self, ctxt, req, detail=False): """Return all cells.""" # Ask the CellsManager for the most recent data items = self.cells_rpcapi.get_cell_info_for_neighbors(ctxt) items = common.limited(items, req) items = [_scrub_cell(item, detail=detail) for item in items] return dict(cells=items) @wsgi.expected_errors(501) @common.check_cells_enabled def index(self, req): """Return all cells in brief.""" ctxt = req.environ['nova.context'] ctxt.can(cells_policies.BASE_POLICY_NAME) return self._get_cells(ctxt, req) @wsgi.expected_errors(501) @common.check_cells_enabled def detail(self, req): """Return all cells in detail.""" ctxt = req.environ['nova.context'] ctxt.can(cells_policies.BASE_POLICY_NAME) return self._get_cells(ctxt, req, detail=True) @wsgi.expected_errors(501) @common.check_cells_enabled def info(self, req): """Return name and capabilities for this cell.""" context = req.environ['nova.context'] context.can(cells_policies.BASE_POLICY_NAME) cell_capabs = {} my_caps = CONF.cells.capabilities for cap in my_caps: key, value = cap.split('=') cell_capabs[key] = value cell = {'name': CONF.cells.name, 'type': 'self', 'rpc_host': None, 'rpc_port': 0, 'username': None, 'capabilities': cell_capabs} return dict(cell=cell) @wsgi.expected_errors((404, 501)) @common.check_cells_enabled def capacities(self, req, id=None): """Return capacities for a given cell or all cells.""" # TODO(kaushikc): return capacities as a part of cell info and # cells detail calls in v2.1, along with capabilities context = req.environ['nova.context'] context.can(cells_policies.BASE_POLICY_NAME) try: capacities = self.cells_rpcapi.get_capacities(context, cell_name=id) except exception.CellNotFound as e: raise exc.HTTPNotFound(explanation=e.format_message()) return dict(cell={"capacities": capacities}) @wsgi.expected_errors((404, 501)) @common.check_cells_enabled def show(self, req, id): """Return data about the given cell name. 'id' is a cell name.""" context = req.environ['nova.context'] context.can(cells_policies.BASE_POLICY_NAME) try: cell = self.cells_rpcapi.cell_get(context, id) except exception.CellNotFound as e: raise exc.HTTPNotFound(explanation=e.format_message()) return dict(cell=_scrub_cell(cell)) # NOTE(gmann): Returns 200 for backwards compatibility but should be 204 # as this operation complete the deletion of aggregate resource and return # no response body. 
@wsgi.expected_errors((403, 404, 501)) @common.check_cells_enabled def delete(self, req, id): """Delete a child or parent cell entry. 'id' is a cell name.""" context = req.environ['nova.context'] context.can(cells_policies.POLICY_ROOT % "delete") try: num_deleted = self.cells_rpcapi.cell_delete(context, id) except exception.CellsUpdateUnsupported as e: raise exc.HTTPForbidden(explanation=e.format_message()) if num_deleted == 0: raise exc.HTTPNotFound( explanation=_("Cell %s doesn't exist.") % id) def _normalize_cell(self, cell, existing=None): """Normalize input cell data. Normalizations include: * Converting cell['type'] to is_parent boolean. * Merging existing transport URL with transport information. """ if 'name' in cell: cell['name'] = common.normalize_name(cell['name']) # Start with the cell type conversion if 'type' in cell: cell['is_parent'] = cell.pop('type') == 'parent' # Avoid cell type being overwritten to 'child' elif existing: cell['is_parent'] = existing['is_parent'] else: cell['is_parent'] = False # Now we disassemble the existing transport URL... transport_url = existing.get('transport_url') if existing else None transport_url = rpc.get_transport_url(transport_url) if 'rpc_virtual_host' in cell: transport_url.virtual_host = cell.pop('rpc_virtual_host') if not transport_url.hosts: transport_url.hosts.append(messaging.TransportHost()) transport_host = transport_url.hosts[0] if 'rpc_port' in cell: cell['rpc_port'] = int(cell['rpc_port']) # Copy over the input fields transport_field_map = { 'username': 'username', 'password': 'password', 'hostname': 'rpc_host', 'port': 'rpc_port', } for key, input_field in transport_field_map.items(): # Only override the value if we're given an override if input_field in cell: setattr(transport_host, key, cell.pop(input_field)) # Now set the transport URL cell['transport_url'] = str(transport_url) # NOTE(gmann): Returns 200 for backwards compatibility but should be 201, # as this operation completes the creation of the cell resource when # returning a response. @wsgi.expected_errors((400, 403, 501)) @common.check_cells_enabled @validation.schema(cells.create_v20, '2.0', '2.0') @validation.schema(cells.create, '2.1') def create(self, req, body): """Create a child cell entry.""" context = req.environ['nova.context'] context.can(cells_policies.POLICY_ROOT % "create") cell = body['cell'] self._normalize_cell(cell) try: cell = self.cells_rpcapi.cell_create(context, cell) except exception.CellsUpdateUnsupported as e: raise exc.HTTPForbidden(explanation=e.format_message()) return dict(cell=_scrub_cell(cell)) @wsgi.expected_errors((400, 403, 404, 501)) @common.check_cells_enabled @validation.schema(cells.update_v20, '2.0', '2.0') @validation.schema(cells.update, '2.1') def update(self, req, id, body): """Update a child cell entry. 'id' is the cell name to update.""" context = req.environ['nova.context'] context.can(cells_policies.POLICY_ROOT % "update") cell = body['cell'] cell.pop('id', None) try: # NOTE(Vek): There is a race condition here if multiple # callers are trying to update the cell # information simultaneously. Since this # operation is administrative in nature, and # will be going away in the future, I don't see # it as much of a problem...
existing = self.cells_rpcapi.cell_get(context, id) except exception.CellNotFound as e: raise exc.HTTPNotFound(explanation=e.format_message()) self._normalize_cell(cell, existing) try: cell = self.cells_rpcapi.cell_update(context, id, cell) except exception.CellNotFound as e: raise exc.HTTPNotFound(explanation=e.format_message()) except exception.CellsUpdateUnsupported as e: raise exc.HTTPForbidden(explanation=e.format_message()) return dict(cell=_scrub_cell(cell)) # NOTE(gmann): Returns 200 for backwards compatibility but should be 204, # as this operation completes the instance info sync and returns # no response body. @wsgi.expected_errors((400, 501)) @common.check_cells_enabled @validation.schema(cells.sync_instances) def sync_instances(self, req, body): """Tell all cells to sync instance info.""" context = req.environ['nova.context'] context.can(cells_policies.POLICY_ROOT % "sync_instances") project_id = body.pop('project_id', None) deleted = body.pop('deleted', False) updated_since = body.pop('updated_since', None) if isinstance(deleted, six.string_types): deleted = strutils.bool_from_string(deleted, strict=True) self.cells_rpcapi.sync_instances(context, project_id=project_id, updated_since=updated_since, deleted=deleted) nova-17.0.1/nova/api/openstack/compute/rest_api_version_history.rst0000666000175000017500000006143713250073126025636 0ustar zuulzuul00000000000000REST API Version History ======================== This documents the changes made to the REST API with every microversion change. The description for each version should be a verbose one which has enough information to be suitable for use in user documentation. 2.1 --- This is the initial version of the v2.1 API which supports microversions. The V2.1 API is, from the REST API user's point of view, exactly the same as v2.0 except with strong input validation. A user can specify a header in the API request:: X-OpenStack-Nova-API-Version: <version> where ``<version>`` is any valid api version for this API. If no version is specified then the API will behave as if a version request of v2.1 was requested. 2.2 --- Added Keypair type. A user can request the creation of a certain 'type' of keypair (``ssh`` or ``x509``) in the ``os-keypairs`` plugin. If no keypair type is specified, then the default ``ssh`` type of keypair is created. Fixes status code for ``os-keypairs`` create method from 200 to 201. Fixes status code for ``os-keypairs`` delete method from 202 to 204. 2.3 (Maximum in Kilo) --------------------- Exposed additional attributes in ``os-extended-server-attributes``: ``reservation_id``, ``launch_index``, ``ramdisk_id``, ``kernel_id``, ``hostname``, ``root_device_name``, ``userdata``. Exposed ``delete_on_termination`` for ``volumes_attached`` in ``os-extended-volumes``. This change is required for the extraction of the EC2 API into a standalone service. It exposes necessary properties not yet present in the public nova APIs. Add info for Standalone EC2 API to cut access to Nova DB. 2.4 --- Show the ``reserved`` status on a ``FixedIP`` object in the ``os-fixed-ips`` API extension. The extension allows one to ``reserve`` and ``unreserve`` a fixed IP but the show method does not report the current status. 2.5 --- Before version 2.5, the command ``nova list --ip6 xxx`` returns all servers for non-admins, as the filter option is silently discarded. There is no reason to treat ip6 differently from ip, though, so we just add this option to the allowed list.
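For example, the now-permitted non-admin filter looks like this (a sketch; the address value is arbitrary):: GET /servers?ip6=2001:db8::1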
2.6
---

A new API for getting remote console is added::

  POST /servers/<uuid>/remote-consoles
  {
    "remote_console": {
      "protocol": ["vnc"|"rdp"|"serial"|"spice"],
      "type": ["novnc"|"xpvnc"|"rdp-html5"|"spice-html5"|"serial"]
    }
  }

Example response::

  {
    "remote_console": {
      "protocol": "vnc",
      "type": "novnc",
      "url": "http://example.com:6080/vnc_auto.html?token=XYZ"
    }
  }

The old APIs 'os-getVNCConsole', 'os-getSPICEConsole', 'os-getSerialConsole'
and 'os-getRDPConsole' are removed.

2.7
---

Check the ``is_public`` attribute of a flavor before adding tenant access to
it. Reject the request with an HTTPConflict error if the flavor is public.

2.8
---

Add 'mks' protocol and 'webmks' type for remote consoles.

2.9
---

Add a new ``locked`` attribute to the detailed view, update, and rebuild
action. ``locked`` will be ``true`` if anyone is currently holding a lock on
the server, ``false`` otherwise.

2.10
----

Added the user_id parameter to the os-keypairs plugin, as well as a new
property in the request body, for the create operation.

Administrators will be able to list, get details and delete keypairs owned by
users other than themselves and to create new keypairs on behalf of their
users.

2.11
----

Exposed attribute ``forced_down`` for ``os-services``.
Added the ability to change the ``forced_down`` attribute by calling an
update.

2.12 (Maximum in Liberty)
-------------------------

Exposes the VIF ``net_id`` attribute in ``os-virtual-interfaces``.
Users can now see the ``net_id`` of each Virtual Interface in the Virtual
Interfaces list and determine which network a Virtual Interface is plugged
into.

2.13
----

Add information ``project_id`` and ``user_id`` to ``os-server-groups``
API response data.

2.14
----

Remove ``onSharedStorage`` parameter from the server's evacuate action. Nova
will automatically detect if the instance is on shared storage.
adminPass is also removed from the response body. The user can get the
password with the server's os-server-password action.

2.15
----

From this version of the API users can choose 'soft-affinity' and
'soft-anti-affinity' rules too for server-groups.

2.16
----

Exposes a new host_status attribute for servers/detail and
servers/{server_id}. This gives the ability to get the nova-compute status
when querying servers. By default, this is only exposed to cloud
administrators.

2.17
----

Add a new API for triggering a crash dump in an instance. Different operating
systems in the instance may need different configurations to trigger a crash
dump.

2.18
----

Establishes a set of routes that makes project_id an optional construct in
v2.1.

2.19
----

Allow the user to set and get the server description. The user will be able
to set the description when creating, rebuilding, or updating a server, and
get the description as part of the server details.

2.20
----

From this version of the API users can call detach and attach volumes for
instances which are in shelved and shelved_offloaded state.

2.21
----

The ``os-instance-actions`` API now returns information from deleted
instances.

2.22
----

A new resource, servers:migrations, is added, along with a new API to force a
live migration to complete::

  POST /servers/<uuid>/migrations/<id>/action
  {
    "force_complete": null
  }

2.23
----

From this version of the API users can get the migration summary list via the
index API, or the information of a specific migration via the get API. The
old top-level resource `/os-migrations` won't be extended anymore. Adds
migration_type to the old /os-migrations API, and adds a ref link to
/servers/{uuid}/migrations/{id} for it when the migration is an in-progress
live-migration.
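For example (paths as referenced in the section above; ``{uuid}`` and
``{id}`` are placeholders), the new per-server migration resources can be
queried with::

  GET /servers/{uuid}/migrations
  GET /servers/{uuid}/migrations/{id}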
2.24
----

A new API call to cancel a running live migration::

  DELETE /servers/<uuid>/migrations/<id>

2.25 (Maximum in Mitaka)
------------------------

Modify the input parameters for ``os-migrateLive``. block_migration now
supports an 'auto' value, and the disk_over_commit flag is removed.

2.26
----

Added support of server tags.

A user can create, update, delete or check existence of simple string tags
for servers by the os-server-tags plugin.

Tags have the following schema restrictions:

* Tag is a Unicode bytestring no longer than 60 characters.
* Tag is a non-empty string.
* '/' is not allowed to be in a tag name
* Comma is not allowed to be in a tag name in order to simplify requests that
  specify lists of tags
* All other characters are allowed to be in a tag name
* Each server can have up to 50 tags.

The resource point for these operations is /servers/<server_id>/tags.

A user can add a single tag to the server by sending a PUT request to
/servers/<server_id>/tags/<tag>, where <tag> is any valid tag name.

A user can replace **all** current server tags with a new set of tags by
sending a PUT request to /servers/<server_id>/tags. The new set of tags must
be specified in the request body, in a list named 'tags'.

A user can remove a specified tag from the server by sending a DELETE request
to /servers/<server_id>/tags/<tag>, where <tag> is the name of the tag the
user wants to remove.

A user can remove **all** tags from the server by sending a DELETE request to
/servers/<server_id>/tags.

A user can get a set of server tags with information about the server by
sending a GET request to /servers/<server_id>.

The request returns a dictionary with information about the specified server,
including the list 'tags' ::

    {
        'id': {server_id},
        ...
        'tags': ['foo', 'bar', 'baz']
    }

A user can get **only** a set of server tags by sending a GET request to
/servers/<server_id>/tags.

Response ::

    {
       'tags': ['foo', 'bar', 'baz']
    }

A user can check if a tag exists or not on a server by sending
GET /servers/{server_id}/tags/{tag}.

The request returns `204 No Content` if the tag exists on the server, or
`404 Not Found` if it doesn't.

A user can filter servers in a GET /servers request by the new filters:

* tags
* tags-any
* not-tags
* not-tags-any

These filters can be combined. A user can also use more than one string tag
for each filter; in this case the string tags for each filter must be
separated by a comma::

    GET /servers?tags=red&tags-any=green,orange

2.27
----

Added support for the new form of microversion headers described in the
`Microversion Specification
<http://specs.openstack.org/openstack/api-wg/guidelines/microversion_specification.html>`_.
Both the original form of the header and the new form are supported.

2.28
----

The Nova API hypervisor.cpu_info changes from a string to a JSON object.
From this version of the API the hypervisor's 'cpu_info' field will be
returned as a JSON object (not a string) by sending a GET request to
/v2.1/os-hypervisors/{hypervisor_id}.

2.29
----

Updates the POST request body for the ``evacuate`` action to include the
optional ``force`` boolean field defaulted to False.
Also changes the evacuate action behaviour when providing a ``host`` string
field by calling the nova scheduler to verify the provided host unless the
``force`` attribute is set.

2.30
----

Updates the POST request body for the ``live-migrate`` action to include the
optional ``force`` boolean field defaulted to False.
Also changes the live-migrate action behaviour when providing a ``host``
string field by calling the nova scheduler to verify the provided host unless
the ``force`` attribute is set.

2.31
----

Fix os-console-auth-tokens to return connection info for all types of tokens,
not just RDP.
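For example (an illustrative call; ``{console_token}`` is a placeholder for a
token issued when a console was created), connection information can now be
retrieved regardless of the console type that issued the token::

  GET /os-console-auth-tokens/{console_token}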
2.32
----

Adds an optional, arbitrary 'tag' item to the 'networks' item in the server
boot request body. In addition, every item in the block_device_mapping_v2
array can also have an optional, arbitrary 'tag' item. These tags are used to
identify virtual device metadata, as exposed in the metadata API and on the
config drive. For example, a network interface on the virtual PCI bus tagged
with 'nic1' will appear in the metadata along with its bus (PCI), bus address
(ex: 0000:00:02.0), MAC address, and tag ('nic1').

.. note:: A bug has caused the tag attribute to no longer be accepted for
          networks starting with version 2.37 and for block_device_mapping_v2
          starting with version 2.33. In other words, networks could only be
          tagged between versions 2.32 and 2.36 inclusively and block devices
          only in version 2.32. As of version 2.42 the tag attribute has been
          restored and both networks and block devices can be tagged again.

2.33
----

Support pagination for hypervisors by accepting limit and marker from the
GET API request::

  GET /v2.1/{tenant_id}/os-hypervisors?marker={hypervisor_id}&limit={limit}

In the context of device tagging at server create time, 2.33 also removes
the tag attribute from block_device_mapping_v2. This is a bug that is fixed
in 2.42, in which the tag attribute is reintroduced.

2.34
----

Checks in ``os-migrateLive`` before live-migration actually starts are now
made in the background. ``os-migrateLive`` no longer throws a
`400 Bad Request` if the pre-live-migration checks fail.

2.35
----

Added pagination support for keypairs.

Optional parameters 'limit' and 'marker' were added to the GET /os-keypairs
request. The default sort_key was changed to the 'name' field in ascending
order. The generic request format is::

  GET /os-keypairs?limit={limit}&marker={kp_name}

2.36
----

All the APIs which proxy to another service were deprecated in this version,
as was the fping API. Those APIs return 404 with microversion 2.36. The
network-related quotas and limits are also removed from the API. The
deprecated API endpoints are listed below::

  '/images'
  '/os-networks'
  '/os-tenant-networks'
  '/os-fixed-ips'
  '/os-floating-ips'
  '/os-floating-ips-bulk'
  '/os-floating-ip-pools'
  '/os-floating-ip-dns'
  '/os-security-groups'
  '/os-security-group-rules'
  '/os-security-group-default-rules'
  '/os-volumes'
  '/os-snapshots'
  '/os-baremetal-nodes'
  '/os-fping'

.. note:: A `regression`_ was introduced in this microversion which broke the
          ``force`` parameter in the ``PUT /os-quota-sets`` API. The fix will
          have to be applied to restore this functionality.

.. _regression: https://bugs.launchpad.net/nova/+bug/1733886

2.37
----

Added support for automatic allocation of networking, also known as "Get Me a
Network". With this microversion, when requesting the creation of a new
server (or servers) the ``networks`` entry in the ``server`` portion of the
request body is required. The ``networks`` object in the request can either
be a list or an enum with values:

#. *none* which means no networking will be allocated for the created
   server(s).
#. *auto* which means either a network that is already available to the
   project will be used, or if one does not exist, will be automatically
   created for the project. Automatic network allocation happens only once
   per project; subsequent requests using *auto* for the same project will
   reuse the network that was previously allocated.

Also, the ``uuid`` field in the ``networks`` object in the server create
request is now strictly enforced to be in UUID format.
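For example (an illustrative server create request; the name, image and
flavor values are placeholders), auto-allocated networking is requested as::

  POST /servers
  {
      "server": {
          "name": "auto-net-server",
          "imageRef": "{image_uuid}",
          "flavorRef": "{flavor_id}",
          "networks": "auto"
      }
  }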
In the context of device tagging at server create time, 2.37 also removes
the tag attribute from networks. This is a bug that is fixed in 2.42, in
which the tag attribute is reintroduced.

2.38 (Maximum in Newton)
------------------------

Before version 2.38, the command ``nova list --status invalid_status``
returned an empty list for non-admin users and a 500 InternalServerError for
admin users. As there are sufficient statuses defined already, any invalid
status should not be accepted. From this version of the API both admin and
non-admin users will get a 400 HTTPBadRequest if an invalid status is passed
to the nova list command.

2.39
----

Deprecates the image-metadata proxy API, which is just a proxy for the Glance
API to operate on image metadata. Also removes the extra quota enforcement
with the Nova `metadata` quota (quota checks for the 'createImage' and
'createBackup' actions in Nova were removed). After this version the Glance
configuration option `image_property_quota` should be used to control the
image metadata quota. Also removes the `maxImageMeta` field from the
`os-limits` API response.

2.40
----

Optional query parameters ``limit`` and ``marker`` were added to the
``os-simple-tenant-usage`` endpoints for pagination. If a limit isn't
provided, the configurable ``max_limit`` will be used, which currently
defaults to 1000.

::

  GET /os-simple-tenant-usage?limit={limit}&marker={instance_uuid}
  GET /os-simple-tenant-usage/{tenant_id}?limit={limit}&marker={instance_uuid}

A tenant's usage statistics may span multiple pages when the number of
instances exceeds limit, and API consumers will need to stitch together the
aggregate results if they still want totals for all instances in a specific
time window, grouped by tenant.

Older versions of the ``os-simple-tenant-usage`` endpoints will not accept
these new paging query parameters, but they will start to silently limit by
``max_limit`` to encourage the adoption of this new microversion, and
circumvent the existing possibility of DoS-like usage requests when there are
thousands of instances.

2.41
----

The 'uuid' attribute of an aggregate is now returned from calls to the
`/os-aggregates` endpoint. This attribute is auto-generated upon creation of
an aggregate. The `os-aggregates` API resource endpoint remains an
administrator-only API.

2.42 (Maximum in Ocata)
-----------------------

In the context of device tagging at server create time, a bug has caused the
tag attribute to no longer be accepted for networks starting with version
2.37 and for block_device_mapping_v2 starting with version 2.33. Microversion
2.42 restores the tag parameter to both networks and block_device_mapping_v2,
allowing networks and block devices to be tagged again.

2.43
----

The ``os-hosts`` API is deprecated as of the 2.43 microversion. Requests made
with microversion >= 2.43 will result in a 404 error. To list and show host
details, use the ``os-hypervisors`` API. To enable or disable a service, use
the ``os-services`` API. There is no replacement for the `shutdown`,
`startup`, `reboot`, or `maintenance_mode` actions as those are system-level
operations which should be outside of the control of the compute service.
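As a rough illustration (a sketch of the replacement calls only; the old
``os-hosts`` URLs are shown for comparison and the two APIs are not
field-for-field equivalent)::

  GET /os-hypervisors                  # instead of GET /os-hosts
  GET /os-hypervisors/{hypervisor_id}  # instead of GET /os-hosts/{host_name}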
2.44
----

The following APIs, which are considered proxies to the Neutron networking
API, are deprecated and will result in a 404 error response in this
microversion::

  POST /servers/{server_uuid}/action
  {
      "addFixedIp": {...}
  }

  POST /servers/{server_uuid}/action
  {
      "removeFixedIp": {...}
  }

  POST /servers/{server_uuid}/action
  {
      "addFloatingIp": {...}
  }

  POST /servers/{server_uuid}/action
  {
      "removeFloatingIp": {...}
  }

Those server actions can be replaced by calling the Neutron API directly.

The nova-network specific API to query the server's interfaces is
deprecated::

  GET /servers/{server_uuid}/os-virtual-interfaces

To query attached neutron interfaces for a specific server, the API
`GET /servers/{server_uuid}/os-interface` can be used.

2.45
----

The ``createImage`` and ``createBackup`` server action APIs no longer return
a ``Location`` header in the response for the snapshot image; they now return
a JSON dict in the response body with an ``image_id`` key and uuid value.

2.46
----

The request_id created for every inbound request is now returned in
``X-OpenStack-Request-ID`` in addition to ``X-Compute-Request-ID`` to be
consistent with the rest of OpenStack. This is a signaling-only microversion,
as these header settings happen well before microversion processing.

2.47
----

Replace the ``flavor`` name/ref with the actual flavor details from the
embedded flavor object when displaying server details. Requests made with
microversion >= 2.47 will no longer return the flavor ID/link but instead
will return a subset of the flavor details. If the user is prevented by
policy from indexing extra-specs, then the ``extra_specs`` field will not be
included in the flavor information.

2.48
----

Before version 2.48, the VM diagnostics response was just a 'blob' of data
returned by each hypervisor. From this version the VM diagnostics response is
standardized. It has a set of fields which each hypervisor will try to fill.
If a hypervisor driver is unable to provide a specific field then this field
will be reported as 'None'.

2.49
----

Continuing from device role tagging at server create time introduced in
version 2.32 and later fixed in 2.42, microversion 2.49 allows the attachment
of network interfaces and volumes with an optional ``tag`` parameter. This
tag is used to identify the virtual devices in the guest and is exposed in
the metadata API. Because the config drive cannot be updated while the guest
is running, it will only contain metadata of devices that were tagged at boot
time. Any changes made to devices while the instance is running - be it
detaching a tagged device or performing a tagged device attachment - will not
be reflected in the config drive.

Tagged volume attachment is not supported for shelved-offloaded instances.

2.50
----

The ``server_groups`` and ``server_group_members`` keys are exposed in the
response bodies of the GET & PUT ``os-quota-class-sets`` APIs.

Network-related quotas have been filtered out of os-quota-class. The
following quotas are filtered out and no longer available in the
``os-quota-class-sets`` APIs from this microversion onwards:

- "fixed_ips"
- "floating_ips"
- "networks"
- "security_group_rules"
- "security_groups"

2.51
----

There are two changes for the 2.51 microversion:

* Add ``volume-extended`` event name to the ``os-server-external-events``
  API. This will be used by the Block Storage service when extending the size
  of an attached volume. This signals the Compute service to perform any
  necessary actions on the compute host or hypervisor to adjust for the new
  volume block device size.
* Expose the ``events`` field in the response body for the
  ``GET /servers/{server_id}/os-instance-actions/{request_id}`` API. This is
  useful for API users to monitor when a volume extend operation completes
  for the given server instance. By default only users with the administrator
  role will be able to see event ``traceback`` details.

2.52
----

Adds support for applying tags when creating a server. The tag schema is the
same as in the `2.26`_ microversion.

.. _2.53-microversion:

2.53 (Maximum in Pike)
----------------------

**os-services**

Services are now identified by uuid instead of database id to ensure
uniqueness across cells. This microversion brings the following changes:

* ``GET /os-services`` returns a uuid in the ``id`` field of the response.
* ``DELETE /os-services/{service_uuid}`` requires a service uuid in the path.
* The following APIs have been superseded by
  ``PUT /os-services/{service_uuid}``:

  * ``PUT /os-services/disable``
  * ``PUT /os-services/disable-log-reason``
  * ``PUT /os-services/enable``
  * ``PUT /os-services/force-down``

  ``PUT /os-services/{service_uuid}`` takes the following fields in the body:

  * ``status`` - can be either "enabled" or "disabled" to enable or disable
    the given service
  * ``disabled_reason`` - specify with status="disabled" to log a reason for
    why the service is disabled
  * ``forced_down`` - boolean indicating if the service was forced down by
    an external service

* ``PUT /os-services/{service_uuid}`` will now return a full service resource
  representation like in a ``GET`` response.

**os-hypervisors**

Hypervisors are now identified by uuid instead of database id to ensure
uniqueness across cells. This microversion brings the following changes:

* ``GET /os-hypervisors/{hypervisor_hostname_pattern}/search`` is deprecated
  and replaced with the ``hypervisor_hostname_pattern`` query parameter on
  the ``GET /os-hypervisors`` and ``GET /os-hypervisors/detail`` APIs. Paging
  with ``hypervisor_hostname_pattern`` is not supported.
* ``GET /os-hypervisors/{hypervisor_hostname_pattern}/servers`` is deprecated
  and replaced with the ``with_servers`` query parameter on the
  ``GET /os-hypervisors`` and ``GET /os-hypervisors/detail`` APIs.
* ``GET /os-hypervisors/{hypervisor_id}`` supports the ``with_servers`` query
  parameter to include hosted server details in the response.
* ``GET /os-hypervisors/{hypervisor_id}`` and
  ``GET /os-hypervisors/{hypervisor_id}/uptime`` APIs now take a uuid value
  for the ``{hypervisor_id}`` path parameter.
* The ``GET /os-hypervisors`` and ``GET /os-hypervisors/detail`` APIs will
  now use a uuid marker for paging across cells.
* The following APIs will now return a uuid value for the hypervisor id and
  optionally service id fields in the response:

  * ``GET /os-hypervisors``
  * ``GET /os-hypervisors/detail``
  * ``GET /os-hypervisors/{hypervisor_id}``
  * ``GET /os-hypervisors/{hypervisor_id}/uptime``

2.54
----

Allow the user to set the server key pair while rebuilding.

2.55
----

Adds a ``description`` field to the flavor resource in the following APIs:

* ``GET /flavors``
* ``GET /flavors/detail``
* ``GET /flavors/{flavor_id}``
* ``POST /flavors``
* ``PUT /flavors/{flavor_id}``

The embedded flavor description will not be included in server
representations.

2.56
----

Updates the POST request body for the ``migrate`` action to include the
optional ``host`` string field defaulted to ``null``. If ``host`` is set, the
migrate action verifies the provided host with the nova scheduler and uses it
as the destination for the migration.
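For example (an illustrative request; ``target-host`` is a placeholder
compute host name), a cold migration to a specific host is requested as::

  POST /servers/{server_id}/action
  {
      "migrate": {
          "host": "target-host"
      }
  }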
2.57
----

The 2.57 microversion makes the following changes:

* The ``personality`` parameter is removed from the server create and rebuild
  APIs.
* The ``user_data`` parameter is added to the server rebuild API.
* The ``maxPersonality`` and ``maxPersonalitySize`` limits are excluded from
  the ``GET /limits`` API response.
* The ``injected_files``, ``injected_file_content_bytes`` and
  ``injected_file_path_bytes`` quotas are removed from the ``os-quota-sets``
  and ``os-quota-class-sets`` APIs.

2.58
----

Add pagination support and a ``changes-since`` filter for the
os-instance-actions API. Users can now use ``limit`` and ``marker`` to
perform paginated queries when listing instance actions. Users can also use
the ``changes-since`` filter to filter the results based on the last time the
instance action was updated.

2.59
----

Added pagination support for migrations. There are four changes:

* Add pagination support and a ``changes-since`` filter for the os-migrations
  API. Users can now use ``limit`` and ``marker`` to perform paginated
  queries when listing migrations.
* Users can also use the ``changes-since`` filter to filter the results based
  on the last time the migration record was updated.
* ``GET /os-migrations``,
  ``GET /servers/{server_id}/migrations/{migration_id}`` and
  ``GET /servers/{server_id}/migrations`` will now return a uuid value in
  addition to the migration id in the response.
* The query parameter schema of the ``GET /os-migrations`` API no longer
  allows additional properties.

2.60 (Maximum in Queens)
------------------------

From this version of the API users can attach a ``multiattach`` capable
volume to multiple instances. The API request for creating the additional
attachments is the same. The chosen virt driver and the volume back end have
to support the functionality as well.
nova-17.0.1/nova/api/openstack/compute/consoles.py0000666000175000017500000000747513250073126022155 0ustar zuulzuul00000000000000# Copyright 2010 OpenStack Foundation
# All Rights Reserved.
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.
from webob import exc from nova.api.openstack import wsgi from nova.console import api as console_api from nova import exception from nova.policies import consoles as consoles_policies def _translate_keys(cons): """Coerces a console instance into proper dictionary format.""" pool = cons['pool'] info = {'id': cons['id'], 'console_type': pool['console_type']} return dict(console=info) def _translate_detail_keys(cons): """Coerces a console instance into proper dictionary format with detail.""" pool = cons['pool'] info = {'id': cons['id'], 'console_type': pool['console_type'], 'password': cons['password'], 'instance_name': cons['instance_name'], 'port': cons['port'], 'host': pool['public_hostname']} return dict(console=info) class ConsolesController(wsgi.Controller): """The Consoles controller for the OpenStack API.""" def __init__(self): self.console_api = console_api.API() @wsgi.expected_errors(()) def index(self, req, server_id): """Returns a list of consoles for this instance.""" context = req.environ['nova.context'] context.can(consoles_policies.POLICY_ROOT % 'index') consoles = self.console_api.get_consoles( req.environ['nova.context'], server_id) return dict(consoles=[_translate_keys(console) for console in consoles]) # NOTE(gmann): Here should be 201 instead of 200 by v2.1 # +microversions because the console has been created # completely when returning a response. @wsgi.expected_errors(404) def create(self, req, server_id, body): """Creates a new console.""" context = req.environ['nova.context'] context.can(consoles_policies.POLICY_ROOT % 'create') try: self.console_api.create_console( req.environ['nova.context'], server_id) except exception.InstanceNotFound as e: raise exc.HTTPNotFound(explanation=e.format_message()) @wsgi.expected_errors(404) def show(self, req, server_id, id): """Shows in-depth information on a specific console.""" context = req.environ['nova.context'] context.can(consoles_policies.POLICY_ROOT % 'show') try: console = self.console_api.get_console( req.environ['nova.context'], server_id, int(id)) except exception.ConsoleNotFound as e: raise exc.HTTPNotFound(explanation=e.format_message()) return _translate_detail_keys(console) @wsgi.response(202) @wsgi.expected_errors(404) def delete(self, req, server_id, id): """Deletes a console.""" context = req.environ['nova.context'] context.can(consoles_policies.POLICY_ROOT % 'delete') try: self.console_api.delete_console(req.environ['nova.context'], server_id, int(id)) except exception.ConsoleNotFound as e: raise exc.HTTPNotFound(explanation=e.format_message()) nova-17.0.1/nova/api/openstack/compute/ips.py0000666000175000017500000000446613250073126021124 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from webob import exc import nova from nova.api.openstack import common from nova.api.openstack.compute.views import addresses as views_addresses from nova.api.openstack import wsgi from nova.i18n import _ from nova.policies import ips as ips_policies class IPsController(wsgi.Controller): """The servers addresses API controller for the OpenStack API.""" # Note(gmann): here using V2 view builder instead of V3 to have V2.1 # server ips response same as V2 which does not include "OS-EXT-IPS:type" # & "OS-EXT-IPS-MAC:mac_addr". If needed those can be added with # microversion by using V2.1 view builder. _view_builder_class = views_addresses.ViewBuilder def __init__(self, **kwargs): super(IPsController, self).__init__(**kwargs) self._compute_api = nova.compute.API() @wsgi.expected_errors(404) def index(self, req, server_id): context = req.environ["nova.context"] context.can(ips_policies.POLICY_ROOT % 'index') instance = common.get_instance(self._compute_api, context, server_id) networks = common.get_networks_for_instance(context, instance) return self._view_builder.index(networks) @wsgi.expected_errors(404) def show(self, req, server_id, id): context = req.environ["nova.context"] context.can(ips_policies.POLICY_ROOT % 'show') instance = common.get_instance(self._compute_api, context, server_id) networks = common.get_networks_for_instance(context, instance) if id not in networks: msg = _("Instance is not a member of specified network") raise exc.HTTPNotFound(explanation=msg) return self._view_builder.show(networks[id], id) nova-17.0.1/nova/api/openstack/compute/extension_info.py0000666000175000017500000007605213250073136023361 0ustar zuulzuul00000000000000# Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from oslo_log import log as logging import webob.exc from nova.api.openstack import wsgi from nova.policies import extensions as ext_policies LOG = logging.getLogger(__name__) EXTENSION_LIST = [ { "alias": "NMN", "description": "Multiple network support.", "links": [], "name": "Multinic", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-DCF", "description": "Disk Management Extension.", "links": [], "name": "DiskConfig", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-EXT-AZ", "description": "Extended Availability Zone support.", "links": [], "name": "ExtendedAvailabilityZone", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-EXT-IMG-SIZE", "description": "Adds image size to image listings.", "links": [], "name": "ImageSize", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-EXT-IPS", "description": "Adds type parameter to the ip list.", "links": [], "name": "ExtendedIps", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-EXT-IPS-MAC", "description": "Adds mac address parameter to the ip list.", "links": [], "name": "ExtendedIpsMac", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-EXT-SRV-ATTR", "description": "Extended Server Attributes support.", "links": [], "name": "ExtendedServerAttributes", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-EXT-STS", "description": "Extended Status support.", "links": [], "name": "ExtendedStatus", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-FLV-DISABLED", "description": "Support to show the disabled status of a flavor.", "links": [], "name": "FlavorDisabled", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-FLV-EXT-DATA", "description": "Provide additional data for flavors.", "links": [], "name": "FlavorExtraData", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-SCH-HNT", "description": "Pass arbitrary key/value pairs to the scheduler.", "links": [], "name": "SchedulerHints", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "OS-SRV-USG", "description": "Adds launched_at and terminated_at on Servers.", "links": [], "name": "ServerUsage", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-access-ips", "description": "Access IPs support.", "links": [], "name": "AccessIPs", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-admin-actions", "description": "Enable admin-only server actions\n\n " "Actions include: resetNetwork, injectNetworkInfo, " "os-resetState\n ", "links": [], "name": "AdminActions", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-admin-password", "description": "Admin password management support.", "links": [], "name": "AdminPassword", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" 
}, { "alias": "os-agents", "description": "Agents support.", "links": [], "name": "Agents", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-aggregates", "description": "Admin-only aggregate administration.", "links": [], "name": "Aggregates", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-assisted-volume-snapshots", "description": "Assisted volume snapshots.", "links": [], "name": "AssistedVolumeSnapshots", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-attach-interfaces", "description": "Attach interface support.", "links": [], "name": "AttachInterfaces", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-availability-zone", "description": "1. Add availability_zone to the Create Server " "API.\n 2. Add availability zones " "describing.\n ", "links": [], "name": "AvailabilityZone", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-baremetal-ext-status", "description": "Add extended status in Baremetal Nodes v2 API.", "links": [], "name": "BareMetalExtStatus", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-baremetal-nodes", "description": "Admin-only bare-metal node administration.", "links": [], "name": "BareMetalNodes", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-block-device-mapping", "description": "Block device mapping boot support.", "links": [], "name": "BlockDeviceMapping", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-block-device-mapping-v2-boot", "description": "Allow boot with the new BDM data format.", "links": [], "name": "BlockDeviceMappingV2Boot", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-cell-capacities", "description": "Adding functionality to get cell capacities.", "links": [], "name": "CellCapacities", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-cells", "description": "Enables cells-related functionality such as adding " "neighbor cells,\n listing neighbor cells, " "and getting the capabilities of the local cell.\n ", "links": [], "name": "Cells", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-certificates", "description": "Certificates support.", "links": [], "name": "Certificates", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-cloudpipe", "description": "Adds actions to create cloudpipe instances.\n\n " "When running with the Vlan network mode, you need a " "mechanism to route\n from the public Internet to " "your vlans. This mechanism is known as a\n " "cloudpipe.\n\n At the time of creating this class, " "only OpenVPN is supported. 
Support for\n a SSH " "Bastion host is forthcoming.\n ", "links": [], "name": "Cloudpipe", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-cloudpipe-update", "description": "Adds the ability to set the vpn ip/port for cloudpipe " "instances.", "links": [], "name": "CloudpipeUpdate", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-config-drive", "description": "Config Drive Extension.", "links": [], "name": "ConfigDrive", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-console-auth-tokens", "description": "Console token authentication support.", "links": [], "name": "ConsoleAuthTokens", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-console-output", "description": "Console log output support, with tailing ability.", "links": [], "name": "ConsoleOutput", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-consoles", "description": "Interactive Console support.", "links": [], "name": "Consoles", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-create-backup", "description": "Create a backup of a server.", "links": [], "name": "CreateBackup", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-create-server-ext", "description": "Extended support to the Create Server v1.1 API.", "links": [], "name": "Createserverext", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-deferred-delete", "description": "Instance deferred delete.", "links": [], "name": "DeferredDelete", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-evacuate", "description": "Enables server evacuation.", "links": [], "name": "Evacuate", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-evacuate-find-host", "description": "Enables server evacuation without target host. 
" "Scheduler will select one to target.", "links": [], "name": "ExtendedEvacuateFindHost", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-floating-ips", "description": "Adds optional fixed_address to the add floating IP " "command.", "links": [], "name": "ExtendedFloatingIps", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-hypervisors", "description": "Extended hypervisors support.", "links": [], "name": "ExtendedHypervisors", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-networks", "description": "Adds additional fields to networks.", "links": [], "name": "ExtendedNetworks", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-quotas", "description": "Adds ability for admins to delete quota and " "optionally force the update Quota command.", "links": [], "name": "ExtendedQuotas", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-rescue-with-image", "description": "Allow the user to specify the image to use for " "rescue.", "links": [], "name": "ExtendedRescueWithImage", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-services", "description": "Extended services support.", "links": [], "name": "ExtendedServices", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-services-delete", "description": "Extended services deletion support.", "links": [], "name": "ExtendedServicesDelete", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-status", "description": "Extended Status support.", "links": [], "name": "ExtendedStatus", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-extended-volumes", "description": "Extended Volumes support.", "links": [], "name": "ExtendedVolumes", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-fixed-ips", "description": "Fixed IPs support.", "links": [], "name": "FixedIPs", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-flavor-access", "description": "Flavor access support.", "links": [], "name": "FlavorAccess", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-flavor-extra-specs", "description": "Flavors extra specs support.", "links": [], "name": "FlavorExtraSpecs", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-flavor-manage", "description": "Flavor create/delete API support.", "links": [], "name": "FlavorManage", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-flavor-rxtx", "description": "Support to show the rxtx status of a flavor.", "links": [], "name": "FlavorRxtx", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-flavor-swap", "description": "Support to show the swap status of a flavor.", "links": [], "name": 
"FlavorSwap", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-floating-ip-dns", "description": "Floating IP DNS support.", "links": [], "name": "FloatingIpDns", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-floating-ip-pools", "description": "Floating IPs support.", "links": [], "name": "FloatingIpPools", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-floating-ips", "description": "Floating IPs support.", "links": [], "name": "FloatingIps", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-floating-ips-bulk", "description": "Bulk handling of Floating IPs.", "links": [], "name": "FloatingIpsBulk", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-fping", "description": "Fping Management Extension.", "links": [], "name": "Fping", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-hide-server-addresses", "description": "Support hiding server addresses in certain states.", "links": [], "name": "HideServerAddresses", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-hosts", "description": "Admin-only host administration.", "links": [], "name": "Hosts", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-hypervisor-status", "description": "Show hypervisor status.", "links": [], "name": "HypervisorStatus", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-hypervisors", "description": "Admin-only hypervisor administration.", "links": [], "name": "Hypervisors", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-instance-actions", "description": "View a log of actions and events taken on an " "instance.", "links": [], "name": "InstanceActions", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-instance_usage_audit_log", "description": "Admin-only Task Log Monitoring.", "links": [], "name": "OSInstanceUsageAuditLog", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-keypairs", "description": "Keypair Support.", "links": [], "name": "Keypairs", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-lock-server", "description": "Enable lock/unlock server actions.", "links": [], "name": "LockServer", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-migrate-server", "description": "Enable migrate and live-migrate server actions.", "links": [], "name": "MigrateServer", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-migrations", "description": "Provide data on migrations.", "links": [], "name": "Migrations", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-multiple-create", "description": "Allow multiple create in the Create Server v2.1 API.", "links": [], "name": "MultipleCreate", 
"namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-networks", "description": "Admin-only Network Management Extension.", "links": [], "name": "Networks", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-networks-associate", "description": "Network association support.", "links": [], "name": "NetworkAssociationSupport", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-pause-server", "description": "Enable pause/unpause server actions.", "links": [], "name": "PauseServer", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-personality", "description": "Personality support.", "links": [], "name": "Personality", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-preserve-ephemeral-rebuild", "description": "Allow preservation of the ephemeral partition on " "rebuild.", "links": [], "name": "PreserveEphemeralOnRebuild", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-quota-class-sets", "description": "Quota classes management support.", "links": [], "name": "QuotaClasses", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-quota-sets", "description": "Quotas management support.", "links": [], "name": "Quotas", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-rescue", "description": "Instance rescue mode.", "links": [], "name": "Rescue", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-security-group-default-rules", "description": "Default rules for security group support.", "links": [], "name": "SecurityGroupDefaultRules", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-security-groups", "description": "Security group support.", "links": [], "name": "SecurityGroups", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-server-diagnostics", "description": "Allow Admins to view server diagnostics through " "server action.", "links": [], "name": "ServerDiagnostics", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-server-external-events", "description": "Server External Event Triggers.", "links": [], "name": "ServerExternalEvents", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-server-group-quotas", "description": "Adds quota support to server groups.", "links": [], "name": "ServerGroupQuotas", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-server-groups", "description": "Server group support.", "links": [], "name": "ServerGroups", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-server-list-multi-status", "description": "Allow to filter the servers by a set of status " "values.", "links": [], "name": "ServerListMultiStatus", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { 
"alias": "os-server-password", "description": "Server password support.", "links": [], "name": "ServerPassword", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-server-sort-keys", "description": "Add sorting support in get Server v2 API.", "links": [], "name": "ServerSortKeys", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-server-start-stop", "description": "Start/Stop instance compute API support.", "links": [], "name": "ServerStartStop", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-services", "description": "Services support.", "links": [], "name": "Services", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-shelve", "description": "Instance shelve mode.", "links": [], "name": "Shelve", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-simple-tenant-usage", "description": "Simple tenant usage extension.", "links": [], "name": "SimpleTenantUsage", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-suspend-server", "description": "Enable suspend/resume server actions.", "links": [], "name": "SuspendServer", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-tenant-networks", "description": "Tenant-based Network Management Extension.", "links": [], "name": "OSTenantNetworks", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-used-limits", "description": "Provide data on limited resources that are being " "used.", "links": [], "name": "UsedLimits", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-used-limits-for-admin", "description": "Provide data to admin on limited resources used by " "other tenants.", "links": [], "name": "UsedLimitsForAdmin", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-user-data", "description": "Add user_data to the Create Server API.", "links": [], "name": "UserData", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-user-quotas", "description": "Project user quota support.", "links": [], "name": "UserQuotas", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-virtual-interfaces", "description": "Virtual interface support.", "links": [], "name": "VirtualInterfaces", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-volume-attachment-update", "description": "Support for updating a volume attachment.", "links": [], "name": "VolumeAttachmentUpdate", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" }, { "alias": "os-volumes", "description": "Volumes support.", "links": [], "name": "Volumes", "namespace": "http://docs.openstack.org/compute/ext/fake_xml", "updated": "2014-12-03T00:00:00Z" } ] EXTENSION_LIST_LEGACY_V2_COMPATIBLE = EXTENSION_LIST[:] EXTENSION_LIST_LEGACY_V2_COMPATIBLE.append({ 'alias': 'OS-EXT-VIF-NET', 'description': 'Adds network id parameter to the virtual 
interface list.', 'links': [], 'name': 'ExtendedVIFNet', 'namespace': 'http://docs.openstack.org/compute/ext/fake_xml', "updated": "2014-12-03T00:00:00Z" }) EXTENSION_LIST_LEGACY_V2_COMPATIBLE = sorted( EXTENSION_LIST_LEGACY_V2_COMPATIBLE, key=lambda x: x['alias']) class ExtensionInfoController(wsgi.Controller): def _add_vif_extension(self, all_extensions): vif_extension_info = {'name': 'ExtendedVIFNet', 'alias': 'OS-EXT-VIF-NET', 'description': 'Adds network id parameter' ' to the virtual interface list.'} all_extensions.append(vif_extension_info) @wsgi.expected_errors(()) def index(self, req): context = req.environ['nova.context'] context.can(ext_policies.BASE_POLICY_NAME) # NOTE(gmann): This is for v2.1 compatible mode where # extension list should show all extensions as shown by v2. if req.is_legacy_v2(): return dict(extensions=EXTENSION_LIST_LEGACY_V2_COMPATIBLE) return dict(extensions=EXTENSION_LIST) @wsgi.expected_errors(404) def show(self, req, id): context = req.environ['nova.context'] context.can(ext_policies.BASE_POLICY_NAME) all_exts = EXTENSION_LIST # NOTE(gmann): This is for v2.1 compatible mode where # extension list should show all extensions as shown by v2. if req.is_legacy_v2(): all_exts = EXTENSION_LIST_LEGACY_V2_COMPATIBLE # NOTE(dprince): the extensions alias is used as the 'id' for show for ext in all_exts: if ext['alias'] == id: return dict(extension=ext) raise webob.exc.HTTPNotFound() nova-17.0.1/nova/api/openstack/compute/hosts.py0000666000175000017500000003011313250073126021455 0ustar zuulzuul00000000000000# Copyright (c) 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""The hosts admin extension.""" from oslo_log import log as logging import six import webob.exc from nova.api.openstack import common from nova.api.openstack.compute.schemas import hosts from nova.api.openstack import wsgi from nova.api import validation from nova import compute from nova import context as nova_context from nova import exception from nova import objects from nova.policies import hosts as hosts_policies LOG = logging.getLogger(__name__) class HostController(wsgi.Controller): """The Hosts API controller for the OpenStack API.""" def __init__(self): self.api = compute.HostAPI() super(HostController, self).__init__() @wsgi.Controller.api_version("2.1", "2.42") @validation.query_schema(hosts.index_query) @wsgi.expected_errors(()) def index(self, req): """Returns a dict in the format | {'hosts': [{'host_name': 'some.host.name', | 'service': 'cells', | 'zone': 'internal'}, | {'host_name': 'some.other.host.name', | 'service': 'cells', | 'zone': 'internal'}, | {'host_name': 'some.celly.host.name', | 'service': 'cells', | 'zone': 'internal'}, | {'host_name': 'console1.host.com', | 'service': 'consoleauth', | 'zone': 'internal'}, | {'host_name': 'network1.host.com', | 'service': 'network', | 'zone': 'internal'}, | {'host_name': 'network2.host.com', | 'service': 'network', | 'zone': 'internal'}, | {'host_name': 'compute1.host.com', | 'service': 'compute', | 'zone': 'nova'}, | {'host_name': 'compute2.host.com', | 'service': 'compute', | 'zone': 'nova'}, | {'host_name': 'sched1.host.com', | 'service': 'scheduler', | 'zone': 'internal'}, | {'host_name': 'sched2.host.com', | 'service': 'scheduler', | 'zone': 'internal'}, | {'host_name': 'vol1.host.com', | 'service': 'volume', | 'zone': 'internal'}]} """ context = req.environ['nova.context'] context.can(hosts_policies.BASE_POLICY_NAME) filters = {'disabled': False} zone = req.GET.get('zone', None) if zone: filters['availability_zone'] = zone services = self.api.service_get_all(context, filters=filters, set_zones=True, all_cells=True) hosts = [] api_services = ('nova-osapi_compute', 'nova-ec2', 'nova-metadata') for service in services: if service.binary not in api_services: hosts.append({'host_name': service['host'], 'service': service['topic'], 'zone': service['availability_zone']}) return {'hosts': hosts} @wsgi.Controller.api_version("2.1", "2.42") @wsgi.expected_errors((400, 404, 501)) @validation.schema(hosts.update) def update(self, req, id, body): """Return booleanized version of body dict. :param Request req: The request object (containing 'nova-context' env var). :param str id: The host name. :param dict body: example format {'host': {'status': 'enable', 'maintenance_mode': 'enable'}} :return: Same dict as body but 'enable' strings for 'status' and 'maintenance_mode' are converted into True, else False. :rtype: dict """ def read_enabled(orig_val): # Convert enable/disable str to a bool. 
val = orig_val.strip().lower() return val == "enable" context = req.environ['nova.context'] context.can(hosts_policies.BASE_POLICY_NAME) # See what the user wants to 'update' status = body.get('status') maint_mode = body.get('maintenance_mode') if status is not None: status = read_enabled(status) if maint_mode is not None: maint_mode = read_enabled(maint_mode) # Make the calls and merge the results result = {'host': id} if status is not None: result['status'] = self._set_enabled_status(context, id, status) if maint_mode is not None: result['maintenance_mode'] = self._set_host_maintenance(context, id, maint_mode) return result def _set_host_maintenance(self, context, host_name, mode=True): """Start/Stop host maintenance window. On start, it triggers guest VMs evacuation. """ LOG.info("Putting host %(host_name)s in maintenance mode %(mode)s.", {'host_name': host_name, 'mode': mode}) try: result = self.api.set_host_maintenance(context, host_name, mode) except NotImplementedError: common.raise_feature_not_supported() except (exception.HostNotFound, exception.HostMappingNotFound) as e: raise webob.exc.HTTPNotFound(explanation=e.format_message()) except exception.ComputeServiceUnavailable as e: raise webob.exc.HTTPBadRequest(explanation=e.format_message()) if result not in ("on_maintenance", "off_maintenance"): raise webob.exc.HTTPBadRequest(explanation=result) return result def _set_enabled_status(self, context, host_name, enabled): """Sets the specified host's ability to accept new instances. :param enabled: a boolean - if False no new VMs will be able to start on the host. """ if enabled: LOG.info("Enabling host %s.", host_name) else: LOG.info("Disabling host %s.", host_name) try: result = self.api.set_host_enabled(context, host_name, enabled) except NotImplementedError: common.raise_feature_not_supported() except (exception.HostNotFound, exception.HostMappingNotFound) as e: raise webob.exc.HTTPNotFound(explanation=e.format_message()) except exception.ComputeServiceUnavailable as e: raise webob.exc.HTTPBadRequest(explanation=e.format_message()) if result not in ("enabled", "disabled"): raise webob.exc.HTTPBadRequest(explanation=result) return result def _host_power_action(self, req, host_name, action): """Reboots, shuts down or powers up the host.""" context = req.environ['nova.context'] context.can(hosts_policies.BASE_POLICY_NAME) try: result = self.api.host_power_action(context, host_name, action) except NotImplementedError: common.raise_feature_not_supported() except (exception.HostNotFound, exception.HostMappingNotFound) as e: raise webob.exc.HTTPNotFound(explanation=e.format_message()) except exception.ComputeServiceUnavailable as e: raise webob.exc.HTTPBadRequest(explanation=e.format_message()) return {"host": host_name, "power_action": result} @wsgi.Controller.api_version("2.1", "2.42") @wsgi.expected_errors((400, 404, 501)) def startup(self, req, id): return self._host_power_action(req, host_name=id, action="startup") @wsgi.Controller.api_version("2.1", "2.42") @wsgi.expected_errors((400, 404, 501)) def shutdown(self, req, id): return self._host_power_action(req, host_name=id, action="shutdown") @wsgi.Controller.api_version("2.1", "2.42") @wsgi.expected_errors((400, 404, 501)) def reboot(self, req, id): return self._host_power_action(req, host_name=id, action="reboot") @staticmethod def _get_total_resources(host_name, compute_node): return {'resource': {'host': host_name, 'project': '(total)', 'cpu': compute_node.vcpus, 'memory_mb': compute_node.memory_mb, 'disk_gb': 
compute_node.local_gb}} @staticmethod def _get_used_now_resources(host_name, compute_node): return {'resource': {'host': host_name, 'project': '(used_now)', 'cpu': compute_node.vcpus_used, 'memory_mb': compute_node.memory_mb_used, 'disk_gb': compute_node.local_gb_used}} @staticmethod def _get_resource_totals_from_instances(host_name, instances): cpu_sum = 0 mem_sum = 0 hdd_sum = 0 for instance in instances: cpu_sum += instance['vcpus'] mem_sum += instance['memory_mb'] hdd_sum += instance['root_gb'] + instance['ephemeral_gb'] return {'resource': {'host': host_name, 'project': '(used_max)', 'cpu': cpu_sum, 'memory_mb': mem_sum, 'disk_gb': hdd_sum}} @staticmethod def _get_resources_by_project(host_name, instances): # Getting usage resource per project project_map = {} for instance in instances: resource = project_map.setdefault(instance['project_id'], {'host': host_name, 'project': instance['project_id'], 'cpu': 0, 'memory_mb': 0, 'disk_gb': 0}) resource['cpu'] += instance['vcpus'] resource['memory_mb'] += instance['memory_mb'] resource['disk_gb'] += (instance['root_gb'] + instance['ephemeral_gb']) return project_map @wsgi.Controller.api_version("2.1", "2.42") @wsgi.expected_errors(404) def show(self, req, id): """Shows the physical/usage resource given by hosts. :param id: hostname :returns: expected to use HostShowTemplate. ex.:: {'host': {'resource':D},..} D: {'host': 'hostname','project': 'admin', 'cpu': 1, 'memory_mb': 2048, 'disk_gb': 30} """ context = req.environ['nova.context'] context.can(hosts_policies.BASE_POLICY_NAME) host_name = id try: mapping = objects.HostMapping.get_by_host(context, host_name) nova_context.set_target_cell(context, mapping.cell_mapping) compute_node = ( objects.ComputeNode.get_first_node_by_host_for_old_compat( context, host_name)) instances = self.api.instance_get_all_by_host(context, host_name) except (exception.ComputeHostNotFound, exception.HostMappingNotFound) as e: raise webob.exc.HTTPNotFound(explanation=e.format_message()) resources = [self._get_total_resources(host_name, compute_node)] resources.append(self._get_used_now_resources(host_name, compute_node)) resources.append(self._get_resource_totals_from_instances(host_name, instances)) by_proj_resources = self._get_resources_by_project(host_name, instances) for resource in six.itervalues(by_proj_resources): resources.append({'resource': resource}) return {'host': resources} nova-17.0.1/nova/api/openstack/compute/versionsV21.py0000666000175000017500000000221513250073126022460 0ustar zuulzuul00000000000000# Copyright 2013 IBM Corp. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
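
# A minimal, standalone sketch (not part of any Nova module) of the
# enable/disable booleanization performed by HostController.update() above:
# any casing or surrounding whitespace of "enable" maps to True, and every
# other value maps to False. The helper name is illustrative only.
def _read_enabled_sketch(orig_val):
    return orig_val.strip().lower() == "enable"

assert _read_enabled_sketch(" Enable ") is True
assert _read_enabled_sketch("disable") is False
assert _read_enabled_sketch("bogus") is False  # schema validation rejects this earlier
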
import webob.exc

from nova.api.openstack.compute import versions
from nova.api.openstack.compute.views import versions as views_versions
from nova.api.openstack import wsgi


class VersionsController(wsgi.Controller):

    @wsgi.expected_errors(404)
    def show(self, req, id='v2.1'):
        builder = views_versions.get_view_builder(req)
        if req.is_legacy_v2():
            id = 'v2.0'
        if id not in versions.VERSIONS:
            raise webob.exc.HTTPNotFound()
        return builder.build_version(versions.VERSIONS[id])
nova-17.0.1/nova/api/openstack/compute/extended_volumes.py0000666000175000017500000001027713250073126023700 0ustar zuulzuul00000000000000# Copyright 2013 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""The Extended Volumes API extension."""

from oslo_log import log as logging

from nova.api.openstack import api_version_request
from nova.api.openstack import wsgi
from nova import context
from nova import objects
from nova.policies import extended_volumes as ev_policies

LOG = logging.getLogger(__name__)


class ExtendedVolumesController(wsgi.Controller):
    def _extend_server(self, context, server, req, bdms):
        volumes_attached = []
        for bdm in bdms:
            if bdm.get('volume_id'):
                volume_attached = {'id': bdm['volume_id']}
                if api_version_request.is_supported(req, min_version='2.3'):
                    volume_attached['delete_on_termination'] = (
                        bdm['delete_on_termination'])
                volumes_attached.append(volume_attached)
        # NOTE(mriedem): The os-extended-volumes prefix should not be used for
        # new attributes after v2.1. They are only in v2.1 for backward compat
        # with v2.0.
        key = "os-extended-volumes:volumes_attached"
        server[key] = volumes_attached

    @wsgi.extends
    def show(self, req, resp_obj, id):
        context = req.environ['nova.context']
        if context.can(ev_policies.BASE_POLICY_NAME, fatal=False):
            server = resp_obj.obj['server']
            bdms = objects.BlockDeviceMappingList.bdms_by_instance_uuid(
                context, [server['id']])
            instance_bdms = self._get_instance_bdms(bdms, server)
            self._extend_server(context, server, req, instance_bdms)

    @staticmethod
    def _get_instance_bdms_in_multiple_cells(ctxt, servers):
        instance_uuids = [server['id'] for server in servers]
        inst_maps = objects.InstanceMappingList.get_by_instance_uuids(
            ctxt, instance_uuids)

        cell_mappings = {}
        for inst_map in inst_maps:
            if (inst_map.cell_mapping is not None and
                    inst_map.cell_mapping.uuid not in cell_mappings):
                cell_mappings.update(
                    {inst_map.cell_mapping.uuid: inst_map.cell_mapping})

        bdms = {}
        results = context.scatter_gather_cells(
            ctxt, cell_mappings.values(), 60,
            objects.BlockDeviceMappingList.bdms_by_instance_uuid,
            instance_uuids)
        for cell_uuid, result in results.items():
            if result is context.raised_exception_sentinel:
                LOG.warning('Failed to get block device mappings for cell %s',
                            cell_uuid)
            elif result is context.did_not_respond_sentinel:
                LOG.warning('Timeout getting block device mappings for cell '
                            '%s', cell_uuid)
            else:
                bdms.update(result)
        return bdms

    @wsgi.extends
    def detail(self, req, resp_obj):
        context = req.environ['nova.context']
        if context.can(ev_policies.BASE_POLICY_NAME, fatal=False):
            servers = list(resp_obj.obj['servers'])
            bdms = self._get_instance_bdms_in_multiple_cells(context, servers)
            for server in servers:
                instance_bdms = self._get_instance_bdms(bdms, server)
                self._extend_server(context, server, req, instance_bdms)

    def _get_instance_bdms(self, bdms, server):
        # server['id'] is guaranteed to be in the cache due to
        # the core API adding it in the 'detail' or 'show' method.
        # If that instance has since been deleted, it won't be in the
        # 'bdms' dictionary though, so use 'get' to avoid KeyErrors.
        return bdms.get(server['id'], [])
nova-17.0.1/nova/api/openstack/compute/migrations.py0000666000175000017500000001425313250073136022501 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_utils import timeutils from webob import exc from nova.api.openstack import common from nova.api.openstack.compute.schemas import migrations as schema_migrations from nova.api.openstack.compute.views import migrations as migrations_view from nova.api.openstack import wsgi from nova.api import validation from nova import compute from nova import exception from nova.objects import base as obj_base from nova.policies import migrations as migrations_policies class MigrationsController(wsgi.Controller): """Controller for accessing migrations in OpenStack API.""" _view_builder_class = migrations_view.ViewBuilder _collection_name = "servers/%s/migrations" def __init__(self): super(MigrationsController, self).__init__() self.compute_api = compute.API() def _output(self, req, migrations_obj, add_link=False, add_uuid=False): """Returns the desired output of the API from an object. From a MigrationsList's object this method returns a list of primitive objects with the only necessary fields. """ detail_keys = ['memory_total', 'memory_processed', 'memory_remaining', 'disk_total', 'disk_processed', 'disk_remaining'] # TODO(Shaohe Feng) we should share the in-progress list. live_migration_in_progress = ['queued', 'preparing', 'running', 'post-migrating'] # Note(Shaohe Feng): We need to leverage the oslo.versionedobjects. # Then we can pass the target version to it's obj_to_primitive. objects = obj_base.obj_to_primitive(migrations_obj) objects = [x for x in objects if not x['hidden']] for obj in objects: del obj['deleted'] del obj['deleted_at'] del obj['hidden'] if not add_uuid: del obj['uuid'] if 'memory_total' in obj: for key in detail_keys: del obj[key] # NOTE(Shaohe Feng) above version 2.23, add migration_type for all # kinds of migration, but we only add links just for in-progress # live-migration. if add_link and obj['migration_type'] == "live-migration" and ( obj["status"] in live_migration_in_progress): obj["links"] = self._view_builder._get_links( req, obj["id"], self._collection_name % obj['instance_uuid']) elif add_link is False: del obj['migration_type'] return objects def _index(self, req, add_link=False, next_link=False, add_uuid=False, sort_dirs=None, sort_keys=None, limit=None, marker=None, allow_changes_since=False): context = req.environ['nova.context'] context.can(migrations_policies.POLICY_ROOT % 'index') search_opts = {} search_opts.update(req.GET) if 'changes-since' in search_opts: if allow_changes_since: search_opts['changes-since'] = timeutils.parse_isotime( search_opts['changes-since']) else: # Before microversion 2.59, the changes-since filter was not # supported in the DB API. However, the schema allowed # additionalProperties=True, so a user could pass it before # 2.59 and filter by the updated_at field if we don't remove # it from search_opts. 
del search_opts['changes-since'] if sort_keys: try: migrations = self.compute_api.get_migrations_sorted( context, search_opts, sort_dirs=sort_dirs, sort_keys=sort_keys, limit=limit, marker=marker) except exception.MarkerNotFound as e: raise exc.HTTPBadRequest(explanation=e.format_message()) else: migrations = self.compute_api.get_migrations( context, search_opts) migrations = self._output(req, migrations, add_link, add_uuid) migrations_dict = {'migrations': migrations} if next_link: migrations_links = self._view_builder.get_links(req, migrations) if migrations_links: migrations_dict['migrations_links'] = migrations_links return migrations_dict @wsgi.Controller.api_version("2.1", "2.22") # noqa @wsgi.expected_errors(()) @validation.query_schema(schema_migrations.list_query_schema_v20, "2.1", "2.22") def index(self, req): """Return all migrations using the query parameters as filters.""" return self._index(req) @wsgi.Controller.api_version("2.23", "2.58") # noqa @wsgi.expected_errors(()) @validation.query_schema(schema_migrations.list_query_schema_v20, "2.23", "2.58") def index(self, req): """Return all migrations using the query parameters as filters.""" return self._index(req, add_link=True) @wsgi.Controller.api_version("2.59") # noqa @wsgi.expected_errors(400) @validation.query_schema(schema_migrations.list_query_params_v259, "2.59") def index(self, req): """Return all migrations using the query parameters as filters.""" limit, marker = common.get_limit_and_marker(req) return self._index(req, add_link=True, next_link=True, add_uuid=True, sort_keys=['created_at', 'id'], sort_dirs=['desc', 'desc'], limit=limit, marker=marker, allow_changes_since=True) nova-17.0.1/nova/api/openstack/compute/routes.py0000666000175000017500000007635413250073126021657 0ustar zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
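
# A simplified sketch of the record filtering done by
# MigrationsController._output() above: hidden migrations are dropped, the
# bookkeeping fields are removed, and 'uuid' is only kept from microversion
# 2.59 on. The dicts here are fabricated examples, not real migration records.
def _filter_migrations_sketch(records, add_uuid=False):
    records = [r.copy() for r in records if not r['hidden']]
    for rec in records:
        del rec['hidden']
        if not add_uuid:
            del rec['uuid']
    return records

out = _filter_migrations_sketch(
    [{'hidden': False, 'uuid': 'u1', 'status': 'running'},
     {'hidden': True, 'uuid': 'u2', 'status': 'error'}])
assert out == [{'status': 'running'}]
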
import functools import nova.api.openstack from nova.api.openstack.compute import admin_actions from nova.api.openstack.compute import admin_password from nova.api.openstack.compute import agents from nova.api.openstack.compute import aggregates from nova.api.openstack.compute import assisted_volume_snapshots from nova.api.openstack.compute import attach_interfaces from nova.api.openstack.compute import availability_zone from nova.api.openstack.compute import baremetal_nodes from nova.api.openstack.compute import cells from nova.api.openstack.compute import certificates from nova.api.openstack.compute import cloudpipe from nova.api.openstack.compute import config_drive from nova.api.openstack.compute import console_auth_tokens from nova.api.openstack.compute import console_output from nova.api.openstack.compute import consoles from nova.api.openstack.compute import create_backup from nova.api.openstack.compute import deferred_delete from nova.api.openstack.compute import evacuate from nova.api.openstack.compute import extended_availability_zone from nova.api.openstack.compute import extended_server_attributes from nova.api.openstack.compute import extended_status from nova.api.openstack.compute import extended_volumes from nova.api.openstack.compute import extension_info from nova.api.openstack.compute import fixed_ips from nova.api.openstack.compute import flavor_access from nova.api.openstack.compute import flavor_manage from nova.api.openstack.compute import flavors from nova.api.openstack.compute import flavors_extraspecs from nova.api.openstack.compute import floating_ip_dns from nova.api.openstack.compute import floating_ip_pools from nova.api.openstack.compute import floating_ips from nova.api.openstack.compute import floating_ips_bulk from nova.api.openstack.compute import fping from nova.api.openstack.compute import hide_server_addresses from nova.api.openstack.compute import hosts from nova.api.openstack.compute import hypervisors from nova.api.openstack.compute import image_metadata from nova.api.openstack.compute import image_size from nova.api.openstack.compute import images from nova.api.openstack.compute import instance_actions from nova.api.openstack.compute import instance_usage_audit_log from nova.api.openstack.compute import ips from nova.api.openstack.compute import keypairs from nova.api.openstack.compute import limits from nova.api.openstack.compute import lock_server from nova.api.openstack.compute import migrate_server from nova.api.openstack.compute import migrations from nova.api.openstack.compute import multinic from nova.api.openstack.compute import networks from nova.api.openstack.compute import networks_associate from nova.api.openstack.compute import pause_server from nova.api.openstack.compute import quota_classes from nova.api.openstack.compute import quota_sets from nova.api.openstack.compute import remote_consoles from nova.api.openstack.compute import rescue from nova.api.openstack.compute import security_group_default_rules from nova.api.openstack.compute import security_groups from nova.api.openstack.compute import server_diagnostics from nova.api.openstack.compute import server_external_events from nova.api.openstack.compute import server_groups from nova.api.openstack.compute import server_metadata from nova.api.openstack.compute import server_migrations from nova.api.openstack.compute import server_password from nova.api.openstack.compute import server_tags from nova.api.openstack.compute import server_usage from nova.api.openstack.compute import 
servers from nova.api.openstack.compute import services from nova.api.openstack.compute import shelve from nova.api.openstack.compute import simple_tenant_usage from nova.api.openstack.compute import suspend_server from nova.api.openstack.compute import tenant_networks from nova.api.openstack.compute import used_limits from nova.api.openstack.compute import versionsV21 from nova.api.openstack.compute import virtual_interfaces from nova.api.openstack.compute import volumes from nova.api.openstack import wsgi import nova.conf from nova import wsgi as base_wsgi CONF = nova.conf.CONF def _create_controller(main_controller, controller_list, action_controller_list): """This is a helper method to create controller with a list of extended controller. This is for backward compatible with old extension interface. Finally, the controller for the same resource will be merged into single one controller. """ controller = wsgi.Resource(main_controller()) for ctl in controller_list: controller.register_extensions(ctl()) for ctl in action_controller_list: controller.register_actions(ctl()) return controller agents_controller = functools.partial( _create_controller, agents.AgentController, [], []) aggregates_controller = functools.partial( _create_controller, aggregates.AggregateController, [], []) assisted_volume_snapshots_controller = functools.partial( _create_controller, assisted_volume_snapshots.AssistedVolumeSnapshotsController, [], []) availability_zone_controller = functools.partial( _create_controller, availability_zone.AvailabilityZoneController, [], []) baremetal_nodes_controller = functools.partial( _create_controller, baremetal_nodes.BareMetalNodeController, [], []) cells_controller = functools.partial( _create_controller, cells.CellsController, [], []) certificates_controller = functools.partial( _create_controller, certificates.CertificatesController, [], []) cloudpipe_controller = functools.partial( _create_controller, cloudpipe.CloudpipeController, [], []) extensions_controller = functools.partial( _create_controller, extension_info.ExtensionInfoController, [], []) fixed_ips_controller = functools.partial(_create_controller, fixed_ips.FixedIPController, [], []) flavor_controller = functools.partial(_create_controller, flavors.FlavorsController, [], [ flavor_manage.FlavorManageController, flavor_access.FlavorActionController ] ) flavor_access_controller = functools.partial(_create_controller, flavor_access.FlavorAccessController, [], []) flavor_extraspec_controller = functools.partial(_create_controller, flavors_extraspecs.FlavorExtraSpecsController, [], []) floating_ip_dns_controller = functools.partial(_create_controller, floating_ip_dns.FloatingIPDNSDomainController, [], []) floating_ip_dnsentry_controller = functools.partial(_create_controller, floating_ip_dns.FloatingIPDNSEntryController, [], []) floating_ip_pools_controller = functools.partial(_create_controller, floating_ip_pools.FloatingIPPoolsController, [], []) floating_ips_controller = functools.partial(_create_controller, floating_ips.FloatingIPController, [], []) floating_ips_bulk_controller = functools.partial(_create_controller, floating_ips_bulk.FloatingIPBulkController, [], []) fping_controller = functools.partial(_create_controller, fping.FpingController, [], []) hosts_controller = functools.partial( _create_controller, hosts.HostController, [], []) hypervisors_controller = functools.partial( _create_controller, hypervisors.HypervisorsController, [], []) images_controller = functools.partial( _create_controller, 
images.ImagesController, [image_size.ImageSizeController], []) image_metadata_controller = functools.partial( _create_controller, image_metadata.ImageMetadataController, [], []) instance_actions_controller = functools.partial(_create_controller, instance_actions.InstanceActionsController, [], []) instance_usage_audit_log_controller = functools.partial(_create_controller, instance_usage_audit_log.InstanceUsageAuditLogController, [], []) ips_controller = functools.partial(_create_controller, ips.IPsController, [], []) keypairs_controller = functools.partial( _create_controller, keypairs.KeypairController, [], []) limits_controller = functools.partial( _create_controller, limits.LimitsController, [ used_limits.UsedLimitsController, ], []) migrations_controller = functools.partial(_create_controller, migrations.MigrationsController, [], []) networks_controller = functools.partial(_create_controller, networks.NetworkController, [], [networks_associate.NetworkAssociateActionController]) quota_classes_controller = functools.partial(_create_controller, quota_classes.QuotaClassSetsController, [], []) quota_set_controller = functools.partial(_create_controller, quota_sets.QuotaSetsController, [], []) security_group_controller = functools.partial(_create_controller, security_groups.SecurityGroupController, [], []) security_group_default_rules_controller = functools.partial(_create_controller, security_group_default_rules.SecurityGroupDefaultRulesController, [], []) security_group_rules_controller = functools.partial(_create_controller, security_groups.SecurityGroupRulesController, [], []) server_controller = functools.partial(_create_controller, servers.ServersController, [ config_drive.ConfigDriveController, extended_availability_zone.ExtendedAZController, extended_server_attributes.ExtendedServerAttributesController, extended_status.ExtendedStatusController, extended_volumes.ExtendedVolumesController, hide_server_addresses.Controller, keypairs.Controller, security_groups.SecurityGroupsOutputController, server_usage.ServerUsageController, ], [ admin_actions.AdminActionsController, admin_password.AdminPasswordController, console_output.ConsoleOutputController, create_backup.CreateBackupController, deferred_delete.DeferredDeleteController, evacuate.EvacuateController, floating_ips.FloatingIPActionController, lock_server.LockServerController, migrate_server.MigrateServerController, multinic.MultinicController, pause_server.PauseServerController, remote_consoles.RemoteConsolesController, rescue.RescueController, security_groups.SecurityGroupActionController, shelve.ShelveController, suspend_server.SuspendServerController ] ) console_auth_tokens_controller = functools.partial(_create_controller, console_auth_tokens.ConsoleAuthTokensController, [], []) consoles_controller = functools.partial(_create_controller, consoles.ConsolesController, [], []) server_diagnostics_controller = functools.partial(_create_controller, server_diagnostics.ServerDiagnosticsController, [], []) server_external_events_controller = functools.partial(_create_controller, server_external_events.ServerExternalEventsController, [], []) server_groups_controller = functools.partial(_create_controller, server_groups.ServerGroupController, [], []) server_metadata_controller = functools.partial(_create_controller, server_metadata.ServerMetadataController, [], []) server_migrations_controller = functools.partial(_create_controller, server_migrations.ServerMigrationsController, [], []) server_os_interface_controller = 
functools.partial(_create_controller, attach_interfaces.InterfaceAttachmentController, [], []) server_password_controller = functools.partial(_create_controller, server_password.ServerPasswordController, [], []) server_remote_consoles_controller = functools.partial(_create_controller, remote_consoles.RemoteConsolesController, [], []) server_security_groups_controller = functools.partial(_create_controller, security_groups.ServerSecurityGroupController, [], []) server_tags_controller = functools.partial(_create_controller, server_tags.ServerTagsController, [], []) server_volume_attachments_controller = functools.partial(_create_controller, volumes.VolumeAttachmentController, [], []) services_controller = functools.partial(_create_controller, services.ServiceController, [], []) simple_tenant_usage_controller = functools.partial(_create_controller, simple_tenant_usage.SimpleTenantUsageController, [], []) snapshots_controller = functools.partial(_create_controller, volumes.SnapshotController, [], []) tenant_networks_controller = functools.partial(_create_controller, tenant_networks.TenantNetworkController, [], []) version_controller = functools.partial(_create_controller, versionsV21.VersionsController, [], []) virtual_interfaces_controller = functools.partial(_create_controller, virtual_interfaces.ServerVirtualInterfaceController, [], []) volumes_controller = functools.partial(_create_controller, volumes.VolumeController, [], []) # NOTE(alex_xu): This is structure of this route list as below: # ( # ('Route path': { # 'HTTP method: [ # 'Controller', # 'The method of controller is used to handle this route' # ], # ... # }), # ... # ) # # Also note that this is ordered tuple. For example, the '/servers/detail' # should be in the front of '/servers/{id}', otherwise the request to # '/servers/detail' always matches to '/servers/{id}' as the id is 'detail'. ROUTE_LIST = ( # NOTE: This is a redirection from '' to '/'. The request to the '/v2.1' # or '/2.0' without the ending '/' will get a response with status code # '302' returned. 
('', '/'), ('/', { 'GET': [version_controller, 'show'] }), ('/versions/{id}', { 'GET': [version_controller, 'show'] }), ('/extensions', { 'GET': [extensions_controller, 'index'], }), ('/extensions/{id}', { 'GET': [extensions_controller, 'show'], }), ('/flavors', { 'GET': [flavor_controller, 'index'], 'POST': [flavor_controller, 'create'] }), ('/flavors/detail', { 'GET': [flavor_controller, 'detail'] }), ('/flavors/{id}', { 'GET': [flavor_controller, 'show'], 'PUT': [flavor_controller, 'update'], 'DELETE': [flavor_controller, 'delete'] }), ('/flavors/{id}/action', { 'POST': [flavor_controller, 'action'] }), ('/flavors/{flavor_id}/os-extra_specs', { 'GET': [flavor_extraspec_controller, 'index'], 'POST': [flavor_extraspec_controller, 'create'] }), ('/flavors/{flavor_id}/os-extra_specs/{id}', { 'GET': [flavor_extraspec_controller, 'show'], 'PUT': [flavor_extraspec_controller, 'update'], 'DELETE': [flavor_extraspec_controller, 'delete'] }), ('/flavors/{flavor_id}/os-flavor-access', { 'GET': [flavor_access_controller, 'index'] }), ('/images', { 'GET': [images_controller, 'index'] }), ('/images/detail', { 'GET': [images_controller, 'detail'], }), ('/images/{id}', { 'GET': [images_controller, 'show'], 'DELETE': [images_controller, 'delete'] }), ('/images/{image_id}/metadata', { 'GET': [image_metadata_controller, 'index'], 'POST': [image_metadata_controller, 'create'], 'PUT': [image_metadata_controller, 'update_all'] }), ('/images/{image_id}/metadata/{id}', { 'GET': [image_metadata_controller, 'show'], 'PUT': [image_metadata_controller, 'update'], 'DELETE': [image_metadata_controller, 'delete'] }), ('/limits', { 'GET': [limits_controller, 'index'] }), ('/os-agents', { 'GET': [agents_controller, 'index'], 'POST': [agents_controller, 'create'] }), ('/os-agents/{id}', { 'PUT': [agents_controller, 'update'], 'DELETE': [agents_controller, 'delete'] }), ('/os-aggregates', { 'GET': [aggregates_controller, 'index'], 'POST': [aggregates_controller, 'create'] }), ('/os-aggregates/{id}', { 'GET': [aggregates_controller, 'show'], 'PUT': [aggregates_controller, 'update'], 'DELETE': [aggregates_controller, 'delete'] }), ('/os-aggregates/{id}/action', { 'POST': [aggregates_controller, 'action'], }), ('/os-assisted-volume-snapshots', { 'POST': [assisted_volume_snapshots_controller, 'create'] }), ('/os-assisted-volume-snapshots/{id}', { 'DELETE': [assisted_volume_snapshots_controller, 'delete'] }), ('/os-availability-zone', { 'GET': [availability_zone_controller, 'index'] }), ('/os-availability-zone/detail', { 'GET': [availability_zone_controller, 'detail'], }), ('/os-baremetal-nodes', { 'GET': [baremetal_nodes_controller, 'index'], 'POST': [baremetal_nodes_controller, 'create'] }), ('/os-baremetal-nodes/{id}', { 'GET': [baremetal_nodes_controller, 'show'], 'DELETE': [baremetal_nodes_controller, 'delete'] }), ('/os-baremetal-nodes/{id}/action', { 'POST': [baremetal_nodes_controller, 'action'] }), ('/os-cells', { 'POST': [cells_controller, 'create'], 'GET': [cells_controller, 'index'], }), ('/os-cells/capacities', { 'GET': [cells_controller, 'capacities'] }), ('/os-cells/detail', { 'GET': [cells_controller, 'detail'] }), ('/os-cells/info', { 'GET': [cells_controller, 'info'] }), ('/os-cells/sync_instances', { 'POST': [cells_controller, 'sync_instances'] }), ('/os-cells/{id}', { 'GET': [cells_controller, 'show'], 'PUT': [cells_controller, 'update'], 'DELETE': [cells_controller, 'delete'] }), ('/os-cells/{id}/capacities', { 'GET': [cells_controller, 'capacities'] }), ('/os-certificates', { 'POST': 
[certificates_controller, 'create'] }), ('/os-certificates/{id}', { 'GET': [certificates_controller, 'show'] }), ('/os-cloudpipe', { 'GET': [cloudpipe_controller, 'index'], 'POST': [cloudpipe_controller, 'create'] }), ('/os-cloudpipe/{id}', { 'PUT': [cloudpipe_controller, 'update'] }), ('/os-console-auth-tokens/{id}', { 'GET': [console_auth_tokens_controller, 'show'] }), ('/os-fixed-ips/{id}', { 'GET': [fixed_ips_controller, 'show'] }), ('/os-fixed-ips/{id}/action', { 'POST': [fixed_ips_controller, 'action'], }), ('/os-floating-ip-dns', { 'GET': [floating_ip_dns_controller, 'index'] }), ('/os-floating-ip-dns/{id}', { 'PUT': [floating_ip_dns_controller, 'update'], 'DELETE': [floating_ip_dns_controller, 'delete'] }), ('/os-floating-ip-dns/{domain_id}/entries/{id}', { 'GET': [floating_ip_dnsentry_controller, 'show'], 'PUT': [floating_ip_dnsentry_controller, 'update'], 'DELETE': [floating_ip_dnsentry_controller, 'delete'] }), ('/os-floating-ip-pools', { 'GET': [floating_ip_pools_controller, 'index'], }), ('/os-floating-ips', { 'GET': [floating_ips_controller, 'index'], 'POST': [floating_ips_controller, 'create'] }), ('/os-floating-ips/{id}', { 'GET': [floating_ips_controller, 'show'], 'DELETE': [floating_ips_controller, 'delete'] }), ('/os-floating-ips-bulk', { 'GET': [floating_ips_bulk_controller, 'index'], 'POST': [floating_ips_bulk_controller, 'create'] }), ('/os-floating-ips-bulk/{id}', { 'GET': [floating_ips_bulk_controller, 'show'], 'PUT': [floating_ips_bulk_controller, 'update'] }), ('/os-fping', { 'GET': [fping_controller, 'index'] }), ('/os-fping/{id}', { 'GET': [fping_controller, 'show'] }), ('/os-hosts', { 'GET': [hosts_controller, 'index'] }), ('/os-hosts/{id}', { 'GET': [hosts_controller, 'show'], 'PUT': [hosts_controller, 'update'] }), ('/os-hosts/{id}/reboot', { 'GET': [hosts_controller, 'reboot'] }), ('/os-hosts/{id}/shutdown', { 'GET': [hosts_controller, 'shutdown'] }), ('/os-hosts/{id}/startup', { 'GET': [hosts_controller, 'startup'] }), ('/os-hypervisors', { 'GET': [hypervisors_controller, 'index'] }), ('/os-hypervisors/detail', { 'GET': [hypervisors_controller, 'detail'] }), ('/os-hypervisors/statistics', { 'GET': [hypervisors_controller, 'statistics'] }), ('/os-hypervisors/{id}', { 'GET': [hypervisors_controller, 'show'] }), ('/os-hypervisors/{id}/search', { 'GET': [hypervisors_controller, 'search'] }), ('/os-hypervisors/{id}/servers', { 'GET': [hypervisors_controller, 'servers'] }), ('/os-hypervisors/{id}/uptime', { 'GET': [hypervisors_controller, 'uptime'] }), ('/os-instance_usage_audit_log', { 'GET': [instance_usage_audit_log_controller, 'index'] }), ('/os-instance_usage_audit_log/{id}', { 'GET': [instance_usage_audit_log_controller, 'show'] }), ('/os-keypairs', { 'GET': [keypairs_controller, 'index'], 'POST': [keypairs_controller, 'create'] }), ('/os-keypairs/{id}', { 'GET': [keypairs_controller, 'show'], 'DELETE': [keypairs_controller, 'delete'] }), ('/os-migrations', { 'GET': [migrations_controller, 'index'] }), ('/os-networks', { 'GET': [networks_controller, 'index'], 'POST': [networks_controller, 'create'] }), ('/os-networks/add', { 'POST': [networks_controller, 'add'] }), ('/os-networks/{id}', { 'GET': [networks_controller, 'show'], 'DELETE': [networks_controller, 'delete'] }), ('/os-networks/{id}/action', { 'POST': [networks_controller, 'action'], }), ('/os-quota-class-sets/{id}', { 'GET': [quota_classes_controller, 'show'], 'PUT': [quota_classes_controller, 'update'] }), ('/os-quota-sets/{id}', { 'GET': [quota_set_controller, 'show'], 'PUT': 
[quota_set_controller, 'update'], 'DELETE': [quota_set_controller, 'delete'] }), ('/os-quota-sets/{id}/detail', { 'GET': [quota_set_controller, 'detail'] }), ('/os-quota-sets/{id}/defaults', { 'GET': [quota_set_controller, 'defaults'] }), ('/os-security-group-default-rules', { 'GET': [security_group_default_rules_controller, 'index'], 'POST': [security_group_default_rules_controller, 'create'] }), ('/os-security-group-default-rules/{id}', { 'GET': [security_group_default_rules_controller, 'show'], 'DELETE': [security_group_default_rules_controller, 'delete'] }), ('/os-security-group-rules', { 'POST': [security_group_rules_controller, 'create'] }), ('/os-security-group-rules/{id}', { 'DELETE': [security_group_rules_controller, 'delete'] }), ('/os-security-groups', { 'GET': [security_group_controller, 'index'], 'POST': [security_group_controller, 'create'] }), ('/os-security-groups/{id}', { 'GET': [security_group_controller, 'show'], 'PUT': [security_group_controller, 'update'], 'DELETE': [security_group_controller, 'delete'] }), ('/os-server-external-events', { 'POST': [server_external_events_controller, 'create'] }), ('/os-server-groups', { 'GET': [server_groups_controller, 'index'], 'POST': [server_groups_controller, 'create'] }), ('/os-server-groups/{id}', { 'GET': [server_groups_controller, 'show'], 'DELETE': [server_groups_controller, 'delete'] }), ('/os-services', { 'GET': [services_controller, 'index'] }), ('/os-services/{id}', { 'PUT': [services_controller, 'update'], 'DELETE': [services_controller, 'delete'] }), ('/os-simple-tenant-usage', { 'GET': [simple_tenant_usage_controller, 'index'] }), ('/os-simple-tenant-usage/{id}', { 'GET': [simple_tenant_usage_controller, 'show'] }), ('/os-snapshots', { 'GET': [snapshots_controller, 'index'], 'POST': [snapshots_controller, 'create'] }), ('/os-snapshots/detail', { 'GET': [snapshots_controller, 'detail'] }), ('/os-snapshots/{id}', { 'GET': [snapshots_controller, 'show'], 'DELETE': [snapshots_controller, 'delete'] }), ('/os-tenant-networks', { 'GET': [tenant_networks_controller, 'index'], 'POST': [tenant_networks_controller, 'create'] }), ('/os-tenant-networks/{id}', { 'GET': [tenant_networks_controller, 'show'], 'DELETE': [tenant_networks_controller, 'delete'] }), ('/os-volumes', { 'GET': [volumes_controller, 'index'], 'POST': [volumes_controller, 'create'], }), ('/os-volumes/detail', { 'GET': [volumes_controller, 'detail'], }), ('/os-volumes/{id}', { 'GET': [volumes_controller, 'show'], 'DELETE': [volumes_controller, 'delete'] }), # NOTE: '/os-volumes_boot' is a clone of '/servers'. We may want to # deprecate it in the future. 
('/os-volumes_boot', { 'GET': [server_controller, 'index'], 'POST': [server_controller, 'create'] }), ('/os-volumes_boot/detail', { 'GET': [server_controller, 'detail'] }), ('/os-volumes_boot/{id}', { 'GET': [server_controller, 'show'], 'PUT': [server_controller, 'update'], 'DELETE': [server_controller, 'delete'] }), ('/os-volumes_boot/{id}/action', { 'POST': [server_controller, 'action'] }), ('/servers', { 'GET': [server_controller, 'index'], 'POST': [server_controller, 'create'] }), ('/servers/detail', { 'GET': [server_controller, 'detail'] }), ('/servers/{id}', { 'GET': [server_controller, 'show'], 'PUT': [server_controller, 'update'], 'DELETE': [server_controller, 'delete'] }), ('/servers/{id}/action', { 'POST': [server_controller, 'action'] }), ('/servers/{server_id}/consoles', { 'GET': [consoles_controller, 'index'], 'POST': [consoles_controller, 'create'] }), ('/servers/{server_id}/consoles/{id}', { 'GET': [consoles_controller, 'show'], 'DELETE': [consoles_controller, 'delete'] }), ('/servers/{server_id}/diagnostics', { 'GET': [server_diagnostics_controller, 'index'] }), ('/servers/{server_id}/ips', { 'GET': [ips_controller, 'index'] }), ('/servers/{server_id}/ips/{id}', { 'GET': [ips_controller, 'show'] }), ('/servers/{server_id}/metadata', { 'GET': [server_metadata_controller, 'index'], 'POST': [server_metadata_controller, 'create'], 'PUT': [server_metadata_controller, 'update_all'], }), ('/servers/{server_id}/metadata/{id}', { 'GET': [server_metadata_controller, 'show'], 'PUT': [server_metadata_controller, 'update'], 'DELETE': [server_metadata_controller, 'delete'], }), ('/servers/{server_id}/migrations', { 'GET': [server_migrations_controller, 'index'] }), ('/servers/{server_id}/migrations/{id}', { 'GET': [server_migrations_controller, 'show'], 'DELETE': [server_migrations_controller, 'delete'] }), ('/servers/{server_id}/migrations/{id}/action', { 'POST': [server_migrations_controller, 'action'] }), ('/servers/{server_id}/os-instance-actions', { 'GET': [instance_actions_controller, 'index'] }), ('/servers/{server_id}/os-instance-actions/{id}', { 'GET': [instance_actions_controller, 'show'] }), ('/servers/{server_id}/os-interface', { 'GET': [server_os_interface_controller, 'index'], 'POST': [server_os_interface_controller, 'create'] }), ('/servers/{server_id}/os-interface/{id}', { 'GET': [server_os_interface_controller, 'show'], 'DELETE': [server_os_interface_controller, 'delete'] }), ('/servers/{server_id}/os-server-password', { 'GET': [server_password_controller, 'index'], 'DELETE': [server_password_controller, 'clear'] }), ('/servers/{server_id}/os-virtual-interfaces', { 'GET': [virtual_interfaces_controller, 'index'] }), ('/servers/{server_id}/os-volume_attachments', { 'GET': [server_volume_attachments_controller, 'index'], 'POST': [server_volume_attachments_controller, 'create'], }), ('/servers/{server_id}/os-volume_attachments/{id}', { 'GET': [server_volume_attachments_controller, 'show'], 'PUT': [server_volume_attachments_controller, 'update'], 'DELETE': [server_volume_attachments_controller, 'delete'] }), ('/servers/{server_id}/remote-consoles', { 'POST': [server_remote_consoles_controller, 'create'] }), ('/servers/{server_id}/os-security-groups', { 'GET': [server_security_groups_controller, 'index'] }), ('/servers/{server_id}/tags', { 'GET': [server_tags_controller, 'index'], 'PUT': [server_tags_controller, 'update_all'], 'DELETE': [server_tags_controller, 'delete_all'], }), ('/servers/{server_id}/tags/{id}', { 'GET': [server_tags_controller, 'show'], 'PUT': 
[server_tags_controller, 'update'], 'DELETE': [server_tags_controller, 'delete'] }), ) class APIRouterV21(base_wsgi.Router): """Routes requests on the OpenStack API to the appropriate controller and method. The URL mapping based on the plain list `ROUTE_LIST` is built at here. """ def __init__(self, custom_routes=None): """:param custom_routes: the additional routes can be added by this parameter. This parameter is used to test on some fake routes primarily. """ super(APIRouterV21, self).__init__(nova.api.openstack.ProjectMapper()) if custom_routes is None: custom_routes = tuple() for path, methods in ROUTE_LIST + custom_routes: # NOTE(alex_xu): The variable 'methods' is a dict in normal, since # the dict includes all the methods supported in the path. But # if the variable 'method' is a string, it means a redirection. # For example, the request to the '' will be redirect to the '/' in # the Nova API. To indicate that, using the target path instead of # a dict. The route entry just writes as "('', '/)". if isinstance(methods, str): self.map.redirect(path, methods) continue for method, controller_info in methods.items(): # TODO(alex_xu): In the end, I want to create single controller # instance instead of create controller instance for each # route. controller = controller_info[0]() action = controller_info[1] self.map.create_route(path, method, controller, action) @classmethod def factory(cls, global_config, **local_config): """Simple paste factory, :class:`nova.wsgi.Router` doesn't have one.""" return cls() nova-17.0.1/nova/api/openstack/compute/used_limits.py0000666000175000017500000000474113250073126022646 0ustar zuulzuul00000000000000# Copyright 2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
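
# A tiny sketch of why the ordering of ROUTE_LIST in routes.py above matters:
# matching is first-wins, so the literal '/servers/detail' entry must appear
# before the templated '/servers/{id}' entry, otherwise 'detail' is treated
# as an id. This toy matcher only illustrates the ordering rule; the real
# mapping is built with the Routes library.
import re

def _first_match_sketch(route_list, path):
    for pattern, name in route_list:
        regex = '^' + re.sub(r'{\w+}', r'[^/]+', pattern) + '$'
        if re.match(regex, path):
            return name

routes = [('/servers/detail', 'detail'), ('/servers/{id}', 'show')]
assert _first_match_sketch(routes, '/servers/detail') == 'detail'
assert _first_match_sketch(reversed(routes), '/servers/detail') == 'show'
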
from nova.api.openstack import api_version_request
from nova.api.openstack.api_version_request \
    import MIN_WITHOUT_PROXY_API_SUPPORT_VERSION
from nova.api.openstack import wsgi
from nova.policies import used_limits as ul_policies
from nova import quota


QUOTAS = quota.QUOTAS


class UsedLimitsController(wsgi.Controller):

    @wsgi.extends
    @wsgi.expected_errors(())
    def index(self, req, resp_obj):
        context = req.environ['nova.context']
        project_id = self._project_id(context, req)
        quotas = QUOTAS.get_project_quotas(context, project_id, usages=True)
        if api_version_request.is_supported(
                req, min_version=MIN_WITHOUT_PROXY_API_SUPPORT_VERSION):
            quota_map = {
                'totalRAMUsed': 'ram',
                'totalCoresUsed': 'cores',
                'totalInstancesUsed': 'instances',
                'totalServerGroupsUsed': 'server_groups',
            }
        else:
            quota_map = {
                'totalRAMUsed': 'ram',
                'totalCoresUsed': 'cores',
                'totalInstancesUsed': 'instances',
                'totalFloatingIpsUsed': 'floating_ips',
                'totalSecurityGroupsUsed': 'security_groups',
                'totalServerGroupsUsed': 'server_groups',
            }

        used_limits = {}
        for display_name, key in quota_map.items():
            if key in quotas:
                used_limits[display_name] = quotas[key]['in_use']

        resp_obj.obj['limits']['absolute'].update(used_limits)

    def _project_id(self, context, req):
        if 'tenant_id' in req.GET:
            tenant_id = req.GET.get('tenant_id')
            target = {
                'project_id': tenant_id,
                'user_id': context.user_id
            }
            context.can(ul_policies.BASE_POLICY_NAME, target)
            return tenant_id
        return context.project_id
nova-17.0.1/nova/api/openstack/compute/user_data.py0000666000175000017500000000221713250073126022270 0ustar zuulzuul00000000000000# Copyright 2012 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from nova.api.openstack.compute.schemas import user_data as schema_user_data

ATTRIBUTE_NAME = 'user_data'


# NOTE(gmann): This function is not supposed to use 'body_deprecated_param'
# parameter as this is placed to handle scheduler_hint extension for V2.1.
def server_create(server_dict, create_kwargs, body_deprecated_param):
    create_kwargs['user_data'] = server_dict.get(ATTRIBUTE_NAME)


def get_server_create_schema(version):
    if version == '2.0':
        return schema_user_data.server_create_v20
    return schema_user_data.server_create
nova-17.0.1/nova/api/openstack/compute/migrate_server.py0000666000175000017500000001460713250073136023346 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation
# Copyright 2013 IBM Corp.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
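
# A standalone sketch of the quota_map translation in UsedLimitsController
# above: internal quota keys are renamed to the camelCase names exposed in
# the 'limits' response, and keys absent from the quotas dict are skipped.
# The quota values are fabricated examples.
def _used_limits_sketch(quota_map, quotas):
    return {display_name: quotas[key]['in_use']
            for display_name, key in quota_map.items() if key in quotas}

sample = _used_limits_sketch(
    {'totalRAMUsed': 'ram', 'totalCoresUsed': 'cores'},
    {'ram': {'in_use': 2048, 'limit': 51200}})
assert sample == {'totalRAMUsed': 2048}  # 'cores' is absent, so it is skipped
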
from oslo_log import log as logging
from oslo_utils import excutils
from oslo_utils import strutils
from webob import exc

from nova.api.openstack import api_version_request
from nova.api.openstack import common
from nova.api.openstack.compute.schemas import migrate_server
from nova.api.openstack import wsgi
from nova.api import validation
from nova import compute
from nova import exception
from nova.i18n import _
from nova.policies import migrate_server as ms_policies

LOG = logging.getLogger(__name__)


class MigrateServerController(wsgi.Controller):
    def __init__(self, *args, **kwargs):
        super(MigrateServerController, self).__init__(*args, **kwargs)
        self.compute_api = compute.API()

    @wsgi.response(202)
    @wsgi.expected_errors((400, 403, 404, 409))
    @wsgi.action('migrate')
    @validation.schema(migrate_server.migrate_v2_56, "2.56")
    def _migrate(self, req, id, body):
        """Permit admins to migrate a server to a new host."""
        context = req.environ['nova.context']
        context.can(ms_policies.POLICY_ROOT % 'migrate')

        host_name = None
        if (api_version_request.is_supported(req, min_version='2.56') and
                body['migrate'] is not None):
            host_name = body['migrate'].get('host')

        instance = common.get_instance(self.compute_api, context, id)
        try:
            self.compute_api.resize(req.environ['nova.context'], instance,
                                    host_name=host_name)
        except (exception.TooManyInstances, exception.QuotaError) as e:
            raise exc.HTTPForbidden(explanation=e.format_message())
        except (exception.InstanceIsLocked,
                exception.CannotMigrateWithTargetHost) as e:
            raise exc.HTTPConflict(explanation=e.format_message())
        except exception.InstanceInvalidState as state_error:
            common.raise_http_conflict_for_instance_invalid_state(state_error,
                    'migrate', id)
        except exception.InstanceNotFound as e:
            raise exc.HTTPNotFound(explanation=e.format_message())
        except (exception.NoValidHost, exception.ComputeHostNotFound,
                exception.CannotMigrateToSameHost) as e:
            raise exc.HTTPBadRequest(explanation=e.format_message())

    @wsgi.response(202)
    @wsgi.expected_errors((400, 404, 409))
    @wsgi.action('os-migrateLive')
    @validation.schema(migrate_server.migrate_live, "2.0", "2.24")
    @validation.schema(migrate_server.migrate_live_v2_25, "2.25", "2.29")
    @validation.schema(migrate_server.migrate_live_v2_30, "2.30")
    def _migrate_live(self, req, id, body):
        """Permit admins to (live) migrate a server to a new host."""
        context = req.environ["nova.context"]
        context.can(ms_policies.POLICY_ROOT % 'migrate_live')

        host = body["os-migrateLive"]["host"]
        block_migration = body["os-migrateLive"]["block_migration"]
        force = None
        # NOTE: 'async' became a reserved keyword in Python 3.7, so the
        # local variable is spelled 'async_' here.
        async_ = api_version_request.is_supported(req, min_version='2.34')

        if api_version_request.is_supported(req, min_version='2.30'):
            force = self._get_force_param_for_live_migration(body, host)
        if api_version_request.is_supported(req, min_version='2.25'):
            if block_migration == 'auto':
                block_migration = None
            else:
                block_migration = strutils.bool_from_string(block_migration,
                                                            strict=True)
            disk_over_commit = None
        else:
            disk_over_commit = body["os-migrateLive"]["disk_over_commit"]

            block_migration = strutils.bool_from_string(block_migration,
                                                        strict=True)
            disk_over_commit = strutils.bool_from_string(disk_over_commit,
                                                         strict=True)

        instance = common.get_instance(self.compute_api, context, id)
        try:
            self.compute_api.live_migrate(context, instance, block_migration,
                                          disk_over_commit, host, force,
                                          async_)
        except exception.InstanceUnknownCell as e:
            raise exc.HTTPNotFound(explanation=e.format_message())
        except (exception.NoValidHost,
                exception.ComputeServiceUnavailable,
                exception.ComputeHostNotFound,
                exception.InvalidHypervisorType,
                exception.InvalidCPUInfo,
                exception.UnableToMigrateToSelf,
                exception.DestinationHypervisorTooOld,
                exception.InvalidLocalStorage,
                exception.InvalidSharedStorage,
                exception.HypervisorUnavailable,
                exception.MigrationPreCheckError,
                exception.LiveMigrationWithOldNovaNotSupported) as ex:
            if async_:
                with excutils.save_and_reraise_exception():
                    LOG.error("Unexpected exception received from "
                              "conductor during pre-live-migration checks "
                              "'%(ex)s'", {'ex': ex})
            else:
                raise exc.HTTPBadRequest(explanation=ex.format_message())
        except exception.InstanceIsLocked as e:
            raise exc.HTTPConflict(explanation=e.format_message())
        except exception.InstanceInvalidState as state_error:
            common.raise_http_conflict_for_instance_invalid_state(state_error,
                    'os-migrateLive', id)

    def _get_force_param_for_live_migration(self, body, host):
        force = body["os-migrateLive"].get("force", False)
        force = strutils.bool_from_string(force, strict=True)
        if force is True and not host:
            message = _("Can't force to a non-provided destination")
            raise exc.HTTPBadRequest(explanation=message)
        return force
nova-17.0.1/nova/api/openstack/compute/deferred_delete.py0000666000175000017500000000535313250073126023427 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
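
# A simplified sketch of the microversion-dependent normalization of
# 'block_migration' in _migrate_live() above: from 2.25 on, 'auto' becomes
# None (let the driver decide) and anything else must parse strictly as a
# boolean. strutils is the real oslo.utils helper used by the module; the
# wrapper function itself is illustrative only.
from oslo_utils import strutils

def _normalize_block_migration_sketch(value, at_least_2_25=True):
    if at_least_2_25 and value == 'auto':
        return None
    return strutils.bool_from_string(value, strict=True)

assert _normalize_block_migration_sketch('auto') is None
assert _normalize_block_migration_sketch('true') is True
assert _normalize_block_migration_sketch('0') is False
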
"""The deferred instance delete extension.""" import webob from nova.api.openstack import common from nova.api.openstack import wsgi from nova import compute from nova import exception from nova.policies import deferred_delete as dd_policies class DeferredDeleteController(wsgi.Controller): def __init__(self, *args, **kwargs): super(DeferredDeleteController, self).__init__(*args, **kwargs) self.compute_api = compute.API() @wsgi.response(202) @wsgi.expected_errors((403, 404, 409)) @wsgi.action('restore') def _restore(self, req, id, body): """Restore a previously deleted instance.""" context = req.environ["nova.context"] context.can(dd_policies.BASE_POLICY_NAME) instance = common.get_instance(self.compute_api, context, id) try: self.compute_api.restore(context, instance) except exception.InstanceUnknownCell as error: raise webob.exc.HTTPNotFound(explanation=error.format_message()) except exception.QuotaError as error: raise webob.exc.HTTPForbidden(explanation=error.format_message()) except exception.InstanceInvalidState as state_error: common.raise_http_conflict_for_instance_invalid_state(state_error, 'restore', id) @wsgi.response(202) @wsgi.expected_errors((404, 409)) @wsgi.action('forceDelete') def _force_delete(self, req, id, body): """Force delete of instance before deferred cleanup.""" context = req.environ["nova.context"] instance = common.get_instance(self.compute_api, context, id) context.can(dd_policies.BASE_POLICY_NAME, target={'user_id': instance.user_id, 'project_id': instance.project_id}) try: self.compute_api.force_delete(context, instance) except (exception.InstanceNotFound, exception.InstanceUnknownCell) as e: raise webob.exc.HTTPNotFound(explanation=e.format_message()) except exception.InstanceIsLocked as e: raise webob.exc.HTTPConflict(explanation=e.format_message()) nova-17.0.1/nova/api/openstack/compute/block_device_mapping_v1.py0000666000175000017500000000401013250073126025044 0ustar zuulzuul00000000000000# Copyright 2013 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """The legacy block device mappings extension.""" from oslo_utils import strutils from webob import exc from nova.api.openstack.compute.schemas import block_device_mapping_v1 as \ schema_block_device_mapping from nova.i18n import _ ATTRIBUTE_NAME = "block_device_mapping" ATTRIBUTE_NAME_V2 = "block_device_mapping_v2" # NOTE(gmann): This function is not supposed to use 'body_deprecated_param' # parameter as this is placed to handle scheduler_hint extension for V2.1. 
def server_create(server_dict, create_kwargs, body_deprecated_param):
    block_device_mapping = server_dict.get(ATTRIBUTE_NAME, [])
    block_device_mapping_v2 = server_dict.get(ATTRIBUTE_NAME_V2, [])

    if block_device_mapping and block_device_mapping_v2:
        expl = _('Using different block_device_mapping syntaxes '
                 'is not allowed in the same request.')
        raise exc.HTTPBadRequest(explanation=expl)

    for bdm in block_device_mapping:
        if 'delete_on_termination' in bdm:
            bdm['delete_on_termination'] = strutils.bool_from_string(
                bdm['delete_on_termination'])

    if block_device_mapping:
        create_kwargs['block_device_mapping'] = block_device_mapping
        # Sets the legacy_bdm flag if we got a legacy block device mapping.
        create_kwargs['legacy_bdm'] = True


def get_server_create_schema(version):
    return schema_block_device_mapping.server_create
nova-17.0.1/nova/api/openstack/compute/console_output.py0000666000175000017500000000560013250073126023402 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation
# Copyright 2011 Grid Dynamics
# Copyright 2011 Eldar Nugaev, Kirill Shileev, Ilya Alekseyev
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import re

import webob

from nova.api.openstack import common
from nova.api.openstack.compute.schemas import console_output
from nova.api.openstack import wsgi
from nova.api import validation
from nova import compute
from nova import exception
from nova.policies import console_output as co_policies


class ConsoleOutputController(wsgi.Controller):
    def __init__(self, *args, **kwargs):
        super(ConsoleOutputController, self).__init__(*args, **kwargs)
        self.compute_api = compute.API()

    @wsgi.expected_errors((404, 409, 501))
    @wsgi.action('os-getConsoleOutput')
    @validation.schema(console_output.get_console_output)
    def get_console_output(self, req, id, body):
        """Get text console output."""
        context = req.environ['nova.context']
        context.can(co_policies.BASE_POLICY_NAME)

        instance = common.get_instance(self.compute_api, context, id)
        length = body['os-getConsoleOutput'].get('length')
        # TODO(cyeoh): In a future API update accept a length of -1
        # as meaning unlimited length (convert to None)
        try:
            output = self.compute_api.get_console_output(context,
                                                         instance,
                                                         length)
        # NOTE(cyeoh): This covers race conditions where the instance is
        # deleted between common.get_instance and get_console_output
        # being called
        except (exception.InstanceNotFound,
                exception.ConsoleNotAvailable) as e:
            raise webob.exc.HTTPNotFound(explanation=e.format_message())
        except exception.InstanceNotReady as e:
            raise webob.exc.HTTPConflict(explanation=e.format_message())
        except NotImplementedError:
            common.raise_feature_not_supported()

        # XML output is not correctly escaped, so remove invalid characters
        # NOTE(cyeoh): We don't support XML output with V2.1, but for
        # backwards compatibility reasons we continue to filter the output
        # We should remove this in the future
        remove_re = re.compile('[\x00-\x08\x0B-\x1F]')
        output = remove_re.sub('', output)

        return {'output': output}
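
# A runnable demonstration of the control-character filtering performed at
# the end of get_console_output() above. The regex strips \x00-\x08 and
# \x0B-\x1F, so tab (\x09) and newline (\x0A) survive while NUL, BEL and
# ESC are removed. The sample string is fabricated.
import re

remove_re = re.compile('[\x00-\x08\x0B-\x1F]')
raw = 'boot ok\x00\x1b[31mred\x07'
assert remove_re.sub('', raw) == 'boot ok[31mred'
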
nova-17.0.1/nova/api/openstack/compute/flavors_extraspecs.py0000666000175000017500000001272713250073126024245 0ustar zuulzuul00000000000000# Copyright 2010 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import six import webob from nova.api.openstack import common from nova.api.openstack.compute.schemas import flavors_extraspecs from nova.api.openstack import wsgi from nova.api import validation from nova import exception from nova.i18n import _ from nova.policies import flavor_extra_specs as fes_policies from nova import utils class FlavorExtraSpecsController(wsgi.Controller): """The flavor extra specs API controller for the OpenStack API.""" def _get_extra_specs(self, context, flavor_id): flavor = common.get_flavor(context, flavor_id) return dict(extra_specs=flavor.extra_specs) # NOTE(gmann): Max length for numeric value is being checked # explicitly as json schema cannot have max length check for numeric value def _check_extra_specs_value(self, specs): for value in specs.values(): try: if isinstance(value, (six.integer_types, float)): value = six.text_type(value) utils.check_string_length(value, 'extra_specs value', max_length=255) except exception.InvalidInput as error: raise webob.exc.HTTPBadRequest( explanation=error.format_message()) @wsgi.expected_errors(404) def index(self, req, flavor_id): """Returns the list of extra specs for a given flavor.""" context = req.environ['nova.context'] context.can(fes_policies.POLICY_ROOT % 'index') return self._get_extra_specs(context, flavor_id) # NOTE(gmann): Here should be 201 instead of 200 by v2.1 # +microversions because the flavor extra specs has been created # completely when returning a response. 
@wsgi.expected_errors((400, 404, 409)) @validation.schema(flavors_extraspecs.create) def create(self, req, flavor_id, body): context = req.environ['nova.context'] context.can(fes_policies.POLICY_ROOT % 'create') specs = body['extra_specs'] self._check_extra_specs_value(specs) flavor = common.get_flavor(context, flavor_id) try: flavor.extra_specs = dict(flavor.extra_specs, **specs) flavor.save() except exception.FlavorExtraSpecUpdateCreateFailed as e: raise webob.exc.HTTPConflict(explanation=e.format_message()) except exception.FlavorNotFound as e: raise webob.exc.HTTPNotFound(explanation=e.format_message()) return body @wsgi.expected_errors((400, 404, 409)) @validation.schema(flavors_extraspecs.update) def update(self, req, flavor_id, id, body): context = req.environ['nova.context'] context.can(fes_policies.POLICY_ROOT % 'update') self._check_extra_specs_value(body) if id not in body: expl = _('Request body and URI mismatch') raise webob.exc.HTTPBadRequest(explanation=expl) flavor = common.get_flavor(context, flavor_id) try: flavor.extra_specs = dict(flavor.extra_specs, **body) flavor.save() except exception.FlavorExtraSpecUpdateCreateFailed as e: raise webob.exc.HTTPConflict(explanation=e.format_message()) except exception.FlavorNotFound as e: raise webob.exc.HTTPNotFound(explanation=e.format_message()) return body @wsgi.expected_errors(404) def show(self, req, flavor_id, id): """Return a single extra spec item.""" context = req.environ['nova.context'] context.can(fes_policies.POLICY_ROOT % 'show') flavor = common.get_flavor(context, flavor_id) try: return {id: flavor.extra_specs[id]} except KeyError: msg = _("Flavor %(flavor_id)s has no extra specs with " "key %(key)s.") % dict(flavor_id=flavor_id, key=id) raise webob.exc.HTTPNotFound(explanation=msg) # NOTE(gmann): Here should be 204(No Content) instead of 200 by v2.1 # +microversions because the flavor extra specs has been deleted # completely when returning a response. @wsgi.expected_errors(404) def delete(self, req, flavor_id, id): """Deletes an existing extra spec.""" context = req.environ['nova.context'] context.can(fes_policies.POLICY_ROOT % 'delete') flavor = common.get_flavor(context, flavor_id) try: del flavor.extra_specs[id] flavor.save() except (exception.FlavorExtraSpecsNotFound, exception.FlavorNotFound) as e: raise webob.exc.HTTPNotFound(explanation=e.format_message()) except KeyError: msg = _("Flavor %(flavor_id)s has no extra specs with " "key %(key)s.") % dict(flavor_id=flavor_id, key=id) raise webob.exc.HTTPNotFound(explanation=msg) nova-17.0.1/nova/api/openstack/compute/server_migrations.py0000666000175000017500000001543413250073126024070 0ustar zuulzuul00000000000000# Copyright 2016 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
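Because JSON Schema cannot enforce a maximum length on numeric values, the flavors extra-specs controller above stringifies numerics before length-checking them. A minimal standalone sketch of that normalization, assuming the 255-character limit noted in the controller; check_spec_value is a hypothetical helper for illustration, not nova code.

import six

def check_spec_value(value, max_length=255):
    # Mirror _check_extra_specs_value(): numerics are converted to text
    # first, then length-checked like any other string.
    if isinstance(value, (six.integer_types, float)):
        value = six.text_type(value)
    if len(value) > max_length:
        raise ValueError('extra_specs value too long: %d chars' % len(value))
    return value

print(check_spec_value('dedicated'))  # passes unchanged
print(check_spec_value(12345.678))    # numeric is stringified, then checked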
from webob import exc from nova.api.openstack import api_version_request from nova.api.openstack import common from nova.api.openstack.compute.schemas import server_migrations from nova.api.openstack import wsgi from nova.api import validation from nova import compute from nova import exception from nova.i18n import _ from nova.policies import servers_migrations as sm_policies def output(migration, include_uuid=False): """Returns the desired output of the API from an object. From a Migrations's object this method returns the primitive object with the only necessary and expected fields. """ result = { "created_at": migration.created_at, "dest_compute": migration.dest_compute, "dest_host": migration.dest_host, "dest_node": migration.dest_node, "disk_processed_bytes": migration.disk_processed, "disk_remaining_bytes": migration.disk_remaining, "disk_total_bytes": migration.disk_total, "id": migration.id, "memory_processed_bytes": migration.memory_processed, "memory_remaining_bytes": migration.memory_remaining, "memory_total_bytes": migration.memory_total, "server_uuid": migration.instance_uuid, "source_compute": migration.source_compute, "source_node": migration.source_node, "status": migration.status, "updated_at": migration.updated_at } if include_uuid: result['uuid'] = migration.uuid return result class ServerMigrationsController(wsgi.Controller): """The server migrations API controller for the OpenStack API.""" def __init__(self): self.compute_api = compute.API() super(ServerMigrationsController, self).__init__() @wsgi.Controller.api_version("2.22") @wsgi.response(202) @wsgi.expected_errors((400, 403, 404, 409)) @wsgi.action('force_complete') @validation.schema(server_migrations.force_complete) def _force_complete(self, req, id, server_id, body): context = req.environ['nova.context'] context.can(sm_policies.POLICY_ROOT % 'force_complete') instance = common.get_instance(self.compute_api, context, server_id) try: self.compute_api.live_migrate_force_complete(context, instance, id) except exception.InstanceNotFound as e: raise exc.HTTPNotFound(explanation=e.format_message()) except (exception.MigrationNotFoundByStatus, exception.InvalidMigrationState, exception.MigrationNotFoundForInstance) as e: raise exc.HTTPBadRequest(explanation=e.format_message()) except exception.InstanceIsLocked as e: raise exc.HTTPConflict(explanation=e.format_message()) except exception.InstanceInvalidState as state_error: common.raise_http_conflict_for_instance_invalid_state( state_error, 'force_complete', server_id) @wsgi.Controller.api_version("2.23") @wsgi.expected_errors(404) def index(self, req, server_id): """Return all migrations of an instance in progress.""" context = req.environ['nova.context'] context.can(sm_policies.POLICY_ROOT % 'index') # NOTE(Shaohe Feng) just check the instance is available. To keep # consistency with other API, check it before get migrations. common.get_instance(self.compute_api, context, server_id) migrations = self.compute_api.get_migrations_in_progress_by_instance( context, server_id, 'live-migration') include_uuid = api_version_request.is_supported(req, '2.59') return {'migrations': [output( migration, include_uuid) for migration in migrations]} @wsgi.Controller.api_version("2.23") @wsgi.expected_errors(404) def show(self, req, server_id, id): """Return the migration of an instance in progress by id.""" context = req.environ['nova.context'] context.can(sm_policies.POLICY_ROOT % 'show') # NOTE(Shaohe Feng) just check the instance is available. 
To keep
        # consistency with other API, check it before get migrations.
        common.get_instance(self.compute_api, context, server_id)
        try:
            migration = self.compute_api.get_migration_by_id_and_instance(
                context, id, server_id)
        except exception.MigrationNotFoundForInstance:
            msg = _("In-progress live migration %(id)s is not found for"
                    " server %(uuid)s.") % {"id": id, "uuid": server_id}
            raise exc.HTTPNotFound(explanation=msg)

        if migration.get("migration_type") != "live-migration":
            msg = _("Migration %(id)s for server %(uuid)s is not"
                    " live-migration.") % {"id": id, "uuid": server_id}
            raise exc.HTTPNotFound(explanation=msg)

        # TODO(Shaohe Feng) we should share the in-progress list.
        in_progress = ['queued', 'preparing', 'running', 'post-migrating']
        if migration.get("status") not in in_progress:
            msg = _("Live migration %(id)s for server %(uuid)s is not in"
                    " progress.") % {"id": id, "uuid": server_id}
            raise exc.HTTPNotFound(explanation=msg)

        include_uuid = api_version_request.is_supported(req, '2.59')
        return {'migration': output(migration, include_uuid)}

    @wsgi.Controller.api_version("2.24")
    @wsgi.response(202)
    @wsgi.expected_errors((400, 404, 409))
    def delete(self, req, server_id, id):
        """Abort an in progress migration of an instance."""
        context = req.environ['nova.context']
        context.can(sm_policies.POLICY_ROOT % 'delete')
        instance = common.get_instance(self.compute_api, context, server_id)
        try:
            self.compute_api.live_migrate_abort(context, instance, id)
        except exception.InstanceInvalidState as state_error:
            common.raise_http_conflict_for_instance_invalid_state(
                state_error, "abort live migration", server_id)
        except exception.MigrationNotFoundForInstance as e:
            raise exc.HTTPNotFound(explanation=e.format_message())
        except exception.InvalidMigrationState as e:
            raise exc.HTTPBadRequest(explanation=e.format_message())
nova-17.0.1/nova/api/openstack/compute/networks.py0000666000175000017500000001633313250073126022201 0ustar zuulzuul00000000000000# Copyright 2011 Grid Dynamics
# Copyright 2011 OpenStack Foundation
# All Rights Reserved.
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#        http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.
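The show() method of the server migrations controller above only returns a migration when it is a live migration that is still in flight. A minimal sketch of that gating logic, standalone and under the assumption of plain-dict migrations; is_returnable is a hypothetical name, not part of nova.

# Same in-progress states as ServerMigrationsController.show().
IN_PROGRESS = ('queued', 'preparing', 'running', 'post-migrating')

def is_returnable(migration):
    """Mirror the two 404 checks in show(): type and status must match."""
    return (migration.get('migration_type') == 'live-migration'
            and migration.get('status') in IN_PROGRESS)

print(is_returnable({'migration_type': 'live-migration',
                     'status': 'running'}))    # True
print(is_returnable({'migration_type': 'resize',
                     'status': 'running'}))    # False: not a live migration
print(is_returnable({'migration_type': 'live-migration',
                     'status': 'completed'}))  # False: no longer in progress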
import netaddr
from webob import exc

from nova.api.openstack.api_version_request \
    import MAX_PROXY_API_SUPPORT_VERSION
from nova.api.openstack import common
from nova.api.openstack.compute.schemas import networks as schema
from nova.api.openstack import wsgi
from nova.api import validation
from nova import exception
from nova.i18n import _
from nova import network
from nova.objects import base as base_obj
from nova.objects import fields as obj_fields
from nova.policies import networks as net_policies


def network_dict(context, network):
    fields = ('id', 'cidr', 'netmask', 'gateway', 'broadcast', 'dns1', 'dns2',
              'cidr_v6', 'gateway_v6', 'label', 'netmask_v6')
    admin_fields = ('created_at', 'updated_at', 'deleted_at', 'deleted',
                    'injected', 'bridge', 'vlan', 'vpn_public_address',
                    'vpn_public_port', 'vpn_private_address', 'dhcp_start',
                    'project_id', 'host', 'bridge_interface', 'multi_host',
                    'priority', 'rxtx_base', 'mtu', 'dhcp_server',
                    'enable_dhcp', 'share_address')
    if network:
        # NOTE(mnaser): We display a limited set of fields so users can know
        # what networks are available, extra system-only fields are only
        # visible if they are an admin.
        if context.is_admin:
            fields += admin_fields
        # TODO(mriedem): Remove the NovaObject type check once the
        # network.create API is returning objects.
        is_obj = isinstance(network, base_obj.NovaObject)
        result = {}
        for field in fields:
            # NOTE(mriedem): If network is an object, IPAddress fields need to
            # be cast to a string so they look the same in the response as
            # before the objects conversion.
            if is_obj and isinstance(network.fields[field].AUTO_TYPE,
                                     obj_fields.IPAddress):
                # NOTE(danms): Here, network should be an object, which could
                # have come from neutron and thus be missing most of the
                # attributes. Providing a default to get() avoids trying to
                # lazy-load missing attributes.
                val = network.get(field, None)
                if val is not None:
                    result[field] = str(val)
                else:
                    result[field] = val
            else:
                # It's either not an object or it's not an IPAddress field.
                result[field] = network.get(field, None)
        uuid = network.get('uuid')
        if uuid:
            result['id'] = uuid
        return result
    else:
        return {}


class NetworkController(wsgi.Controller):

    def __init__(self, network_api=None):
        self.network_api = network_api or network.API()

    @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION)
    @wsgi.expected_errors(())
    def index(self, req):
        context = req.environ['nova.context']
        context.can(net_policies.POLICY_ROOT % 'view')
        networks = self.network_api.get_all(context)
        result = [network_dict(context, net_ref) for net_ref in networks]
        return {'networks': result}

    @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION)
    @wsgi.response(202)
    @wsgi.expected_errors((404, 501))
    @wsgi.action("disassociate")
    def _disassociate_host_and_project(self, req, id, body):
        context = req.environ['nova.context']
        context.can(net_policies.BASE_POLICY_NAME)
        try:
            self.network_api.associate(context, id, host=None, project=None)
        except exception.NetworkNotFound:
            msg = _("Network not found")
            raise exc.HTTPNotFound(explanation=msg)
        except NotImplementedError:
            common.raise_feature_not_supported()

    @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION)
    @wsgi.expected_errors(404)
    def show(self, req, id):
        context = req.environ['nova.context']
        context.can(net_policies.POLICY_ROOT % 'view')
        try:
            network = self.network_api.get(context, id)
        except exception.NetworkNotFound:
            msg = _("Network not found")
            raise exc.HTTPNotFound(explanation=msg)
        return {'network': network_dict(context, network)}

    @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION)
    @wsgi.response(202)
    @wsgi.expected_errors((404, 409))
    def delete(self, req, id):
        context = req.environ['nova.context']
        context.can(net_policies.BASE_POLICY_NAME)
        try:
            self.network_api.delete(context, id)
        except exception.NetworkInUse as e:
            raise exc.HTTPConflict(explanation=e.format_message())
        except exception.NetworkNotFound:
            msg = _("Network not found")
            raise exc.HTTPNotFound(explanation=msg)

    @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION)
    @wsgi.expected_errors((400, 409, 501))
    @validation.schema(schema.create)
    def create(self, req, body):
        context = req.environ['nova.context']
        context.can(net_policies.BASE_POLICY_NAME)

        params = body["network"]

        cidr = params.get("cidr") or params.get("cidr_v6")

        params["num_networks"] = 1
        params["network_size"] = netaddr.IPNetwork(cidr).size

        try:
            network = self.network_api.create(context, **params)[0]
        except (exception.InvalidCidr,
                exception.InvalidIntValue,
                exception.InvalidAddress,
                exception.NetworkNotCreated) as ex:
            raise exc.HTTPBadRequest(explanation=ex.format_message())
        except (exception.CidrConflict,
                exception.DuplicateVlan) as ex:
            raise exc.HTTPConflict(explanation=ex.format_message())
        return {"network": network_dict(context, network)}

    @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION)
    @wsgi.response(202)
    @wsgi.expected_errors((400, 501))
    @validation.schema(schema.add_network_to_project)
    def add(self, req, body):
        context = req.environ['nova.context']
        context.can(net_policies.BASE_POLICY_NAME)
        network_id = body['id']
        project_id = context.project_id
        try:
            self.network_api.add_network_to_project(
                context, project_id, network_id)
        except NotImplementedError:
            common.raise_feature_not_supported()
        except (exception.NoMoreNetworks,
                exception.NetworkNotFoundForUUID) as e:
            raise exc.HTTPBadRequest(explanation=e.format_message())
nova-17.0.1/nova/api/openstack/compute/quota_sets.py0000666000175000017500000002663313250073126022516 0ustar zuulzuul00000000000000#
Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_utils import strutils import six.moves.urllib.parse as urlparse import webob from nova.api.openstack.api_version_request \ import MAX_PROXY_API_SUPPORT_VERSION from nova.api.openstack.api_version_request \ import MIN_WITHOUT_PROXY_API_SUPPORT_VERSION from nova.api.openstack.compute.schemas import quota_sets from nova.api.openstack import identity from nova.api.openstack import wsgi from nova.api import validation import nova.conf from nova import exception from nova.i18n import _ from nova import objects from nova.policies import quota_sets as qs_policies from nova import quota CONF = nova.conf.CONF QUOTAS = quota.QUOTAS FILTERED_QUOTAS_2_36 = ["fixed_ips", "floating_ips", "networks", "security_group_rules", "security_groups"] FILTERED_QUOTAS_2_57 = list(FILTERED_QUOTAS_2_36) FILTERED_QUOTAS_2_57.extend(['injected_files', 'injected_file_content_bytes', 'injected_file_path_bytes']) class QuotaSetsController(wsgi.Controller): def _format_quota_set(self, project_id, quota_set, filtered_quotas): """Convert the quota object to a result dict.""" if project_id: result = dict(id=str(project_id)) else: result = {} for resource in QUOTAS.resources: if (resource not in filtered_quotas and resource in quota_set): result[resource] = quota_set[resource] return dict(quota_set=result) def _validate_quota_limit(self, resource, limit, minimum, maximum): def conv_inf(value): return float("inf") if value == -1 else value if conv_inf(limit) < conv_inf(minimum): msg = (_("Quota limit %(limit)s for %(resource)s must " "be greater than or equal to already used and " "reserved %(minimum)s.") % {'limit': limit, 'resource': resource, 'minimum': minimum}) raise webob.exc.HTTPBadRequest(explanation=msg) if conv_inf(limit) > conv_inf(maximum): msg = (_("Quota limit %(limit)s for %(resource)s must be " "less than or equal to %(maximum)s.") % {'limit': limit, 'resource': resource, 'maximum': maximum}) raise webob.exc.HTTPBadRequest(explanation=msg) def _get_quotas(self, context, id, user_id=None, usages=False): if user_id: values = QUOTAS.get_user_quotas(context, id, user_id, usages=usages) else: values = QUOTAS.get_project_quotas(context, id, usages=usages) if usages: # NOTE(melwitt): For the detailed quota view with usages, the API # returns a response in the format: # { # "quota_set": { # "cores": { # "in_use": 0, # "limit": 20, # "reserved": 0 # }, # ... # We've re-architected quotas to eliminate reservations, so we no # longer have a 'reserved' key returned from get_*_quotas, so set # it here to satisfy the REST API response contract. 
            reserved = QUOTAS.get_reserved()
            for v in values.values():
                v['reserved'] = reserved
            return values
        else:
            return {k: v['limit'] for k, v in values.items()}

    @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION)
    @wsgi.expected_errors(400)
    def show(self, req, id):
        return self._show(req, id, [])

    @wsgi.Controller.api_version(  # noqa
        MIN_WITHOUT_PROXY_API_SUPPORT_VERSION, '2.56')
    @wsgi.expected_errors(400)
    def show(self, req, id):
        return self._show(req, id, FILTERED_QUOTAS_2_36)

    @wsgi.Controller.api_version('2.57')  # noqa
    @wsgi.expected_errors(400)
    def show(self, req, id):
        return self._show(req, id, FILTERED_QUOTAS_2_57)

    @validation.query_schema(quota_sets.query_schema)
    def _show(self, req, id, filtered_quotas):
        context = req.environ['nova.context']
        context.can(qs_policies.POLICY_ROOT % 'show', {'project_id': id})
        identity.verify_project_id(context, id)
        params = urlparse.parse_qs(req.environ.get('QUERY_STRING', ''))
        user_id = params.get('user_id', [None])[0]
        return self._format_quota_set(
            id,
            self._get_quotas(context, id, user_id=user_id),
            filtered_quotas=filtered_quotas)

    @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION)
    @wsgi.expected_errors(400)
    def detail(self, req, id):
        return self._detail(req, id, [])

    @wsgi.Controller.api_version(  # noqa
        MIN_WITHOUT_PROXY_API_SUPPORT_VERSION, '2.56')
    @wsgi.expected_errors(400)
    def detail(self, req, id):
        return self._detail(req, id, FILTERED_QUOTAS_2_36)

    @wsgi.Controller.api_version('2.57')  # noqa
    @wsgi.expected_errors(400)
    def detail(self, req, id):
        return self._detail(req, id, FILTERED_QUOTAS_2_57)

    @validation.query_schema(quota_sets.query_schema)
    def _detail(self, req, id, filtered_quotas):
        context = req.environ['nova.context']
        context.can(qs_policies.POLICY_ROOT % 'detail', {'project_id': id})
        identity.verify_project_id(context, id)
        user_id = req.GET.get('user_id', None)
        return self._format_quota_set(
            id,
            self._get_quotas(context, id, user_id=user_id, usages=True),
            filtered_quotas=filtered_quotas)

    @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION)
    @wsgi.expected_errors(400)
    @validation.schema(quota_sets.update)
    def update(self, req, id, body):
        return self._update(req, id, body, [])

    @wsgi.Controller.api_version(  # noqa
        MIN_WITHOUT_PROXY_API_SUPPORT_VERSION, '2.56')
    @wsgi.expected_errors(400)
    @validation.schema(quota_sets.update_v236)
    def update(self, req, id, body):
        return self._update(req, id, body, FILTERED_QUOTAS_2_36)

    @wsgi.Controller.api_version('2.57')  # noqa
    @wsgi.expected_errors(400)
    @validation.schema(quota_sets.update_v257)
    def update(self, req, id, body):
        return self._update(req, id, body, FILTERED_QUOTAS_2_57)

    @validation.query_schema(quota_sets.query_schema)
    def _update(self, req, id, body, filtered_quotas):
        context = req.environ['nova.context']
        context.can(qs_policies.POLICY_ROOT % 'update', {'project_id': id})
        identity.verify_project_id(context, id)
        project_id = id
        params = urlparse.parse_qs(req.environ.get('QUERY_STRING', ''))
        user_id = params.get('user_id', [None])[0]

        quota_set = body['quota_set']

        # NOTE(alex_xu): The CONF.enable_network_quota option was deprecated
        # because it is only used by nova-network, which is itself
        # deprecated. So when CONF.enable_network_quota is removed,
        # the networks quota will disappear as well.
if not CONF.enable_network_quota and 'networks' in quota_set: raise webob.exc.HTTPBadRequest( explanation=_('The networks quota is disabled')) force_update = strutils.bool_from_string(quota_set.get('force', 'False')) settable_quotas = QUOTAS.get_settable_quotas(context, project_id, user_id=user_id) # NOTE(dims): Pass #1 - In this loop for quota_set.items(), we validate # min/max values and bail out if any of the items in the set is bad. valid_quotas = {} for key, value in body['quota_set'].items(): if key == 'force' or (not value and value != 0): continue # validate whether already used and reserved exceeds the new # quota, this check will be ignored if admin want to force # update value = int(value) if not force_update: minimum = settable_quotas[key]['minimum'] maximum = settable_quotas[key]['maximum'] self._validate_quota_limit(key, value, minimum, maximum) valid_quotas[key] = value # NOTE(dims): Pass #2 - At this point we know that all the # values are correct and we can iterate and update them all in one # shot without having to worry about rolling back etc as we have done # the validation up front in the loop above. for key, value in valid_quotas.items(): try: objects.Quotas.create_limit(context, project_id, key, value, user_id=user_id) except exception.QuotaExists: objects.Quotas.update_limit(context, project_id, key, value, user_id=user_id) # Note(gmann): Removed 'id' from update's response to make it same # as V2. If needed it can be added with microversion. return self._format_quota_set( None, self._get_quotas(context, id, user_id=user_id), filtered_quotas=filtered_quotas) @wsgi.Controller.api_version("2.0", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.expected_errors(400) def defaults(self, req, id): return self._defaults(req, id, []) @wsgi.Controller.api_version( # noqa MIN_WITHOUT_PROXY_API_SUPPORT_VERSION, '2.56') @wsgi.expected_errors(400) def defaults(self, req, id): return self._defaults(req, id, FILTERED_QUOTAS_2_36) @wsgi.Controller.api_version('2.57') # noqa @wsgi.expected_errors(400) def defaults(self, req, id): return self._defaults(req, id, FILTERED_QUOTAS_2_57) def _defaults(self, req, id, filtered_quotas): context = req.environ['nova.context'] context.can(qs_policies.POLICY_ROOT % 'defaults', {'project_id': id}) identity.verify_project_id(context, id) values = QUOTAS.get_defaults(context) return self._format_quota_set(id, values, filtered_quotas=filtered_quotas) # TODO(oomichi): Here should be 204(No Content) instead of 202 by v2.1 # +microversions because the resource quota-set has been deleted completely # when returning a response. @wsgi.expected_errors(()) @validation.query_schema(quota_sets.query_schema) @wsgi.response(202) def delete(self, req, id): context = req.environ['nova.context'] context.can(qs_policies.POLICY_ROOT % 'delete', {'project_id': id}) params = urlparse.parse_qs(req.environ.get('QUERY_STRING', '')) user_id = params.get('user_id', [None])[0] if user_id: QUOTAS.destroy_all_by_project_and_user(context, id, user_id) else: QUOTAS.destroy_all_by_project(context, id) nova-17.0.1/nova/api/openstack/compute/networks_associate.py0000666000175000017500000000613313250073126024231 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from webob import exc from nova.api.openstack.api_version_request \ import MAX_PROXY_API_SUPPORT_VERSION from nova.api.openstack import common from nova.api.openstack.compute.schemas import networks_associate from nova.api.openstack import wsgi from nova.api import validation from nova import exception from nova.i18n import _ from nova import network from nova.policies import networks_associate as na_policies class NetworkAssociateActionController(wsgi.Controller): """Network Association API Controller.""" def __init__(self, network_api=None): self.network_api = network_api or network.API() @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.action("disassociate_host") @wsgi.response(202) @wsgi.expected_errors((404, 501)) def _disassociate_host_only(self, req, id, body): context = req.environ['nova.context'] context.can(na_policies.BASE_POLICY_NAME) try: self.network_api.associate(context, id, host=None) except exception.NetworkNotFound: msg = _("Network not found") raise exc.HTTPNotFound(explanation=msg) except NotImplementedError: common.raise_feature_not_supported() @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.action("disassociate_project") @wsgi.response(202) @wsgi.expected_errors((404, 501)) def _disassociate_project_only(self, req, id, body): context = req.environ['nova.context'] context.can(na_policies.BASE_POLICY_NAME) try: self.network_api.associate(context, id, project=None) except exception.NetworkNotFound: msg = _("Network not found") raise exc.HTTPNotFound(explanation=msg) except NotImplementedError: common.raise_feature_not_supported() @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.action("associate_host") @wsgi.response(202) @wsgi.expected_errors((404, 501)) @validation.schema(networks_associate.associate_host) def _associate_host(self, req, id, body): context = req.environ['nova.context'] context.can(na_policies.BASE_POLICY_NAME) try: self.network_api.associate(context, id, host=body['associate_host']) except exception.NetworkNotFound: msg = _("Network not found") raise exc.HTTPNotFound(explanation=msg) except NotImplementedError: common.raise_feature_not_supported() nova-17.0.1/nova/api/openstack/compute/simple_tenant_usage.py0000666000175000017500000003364413250073126024357 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
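One detail of the quota sets controller above that is easy to miss is how _validate_quota_limit() treats -1 as "unlimited" by mapping it to infinity before comparing against the minimum (already used and reserved) and maximum bounds. A minimal standalone sketch of the same comparison; limit_is_valid is a hypothetical wrapper for illustration, not nova code.

def conv_inf(value):
    # Mirror QuotaSetsController._validate_quota_limit(): -1 means
    # unlimited, so map it to +infinity before any comparison.
    return float("inf") if value == -1 else value

def limit_is_valid(limit, minimum, maximum):
    return conv_inf(minimum) <= conv_inf(limit) <= conv_inf(maximum)

print(limit_is_valid(10, 0, 20))  # True
print(limit_is_valid(-1, 0, -1))  # True: unlimited allowed by unlimited max
print(limit_is_valid(30, 0, 20))  # False: above the maximum
print(limit_is_valid(5, 10, -1))  # False: below already-used minimum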
import datetime import iso8601 from oslo_utils import timeutils import six import six.moves.urllib.parse as urlparse from webob import exc from nova.api.openstack import common from nova.api.openstack.compute.schemas import simple_tenant_usage as schema from nova.api.openstack.compute.views import usages as usages_view from nova.api.openstack import wsgi from nova.api import validation import nova.conf from nova import context as nova_context from nova import exception from nova.i18n import _ from nova import objects from nova.policies import simple_tenant_usage as stu_policies CONF = nova.conf.CONF def parse_strtime(dstr, fmt): try: return timeutils.parse_strtime(dstr, fmt) except (TypeError, ValueError) as e: raise exception.InvalidStrTime(reason=six.text_type(e)) class SimpleTenantUsageController(wsgi.Controller): _view_builder_class = usages_view.ViewBuilder def _hours_for(self, instance, period_start, period_stop): launched_at = instance.launched_at terminated_at = instance.terminated_at if terminated_at is not None: if not isinstance(terminated_at, datetime.datetime): # NOTE(mriedem): Instance object DateTime fields are # timezone-aware so convert using isotime. terminated_at = timeutils.parse_isotime(terminated_at) if launched_at is not None: if not isinstance(launched_at, datetime.datetime): launched_at = timeutils.parse_isotime(launched_at) if terminated_at and terminated_at < period_start: return 0 # nothing if it started after the usage report ended if launched_at and launched_at > period_stop: return 0 if launched_at: # if instance launched after period_started, don't charge for first start = max(launched_at, period_start) if terminated_at: # if instance stopped before period_stop, don't charge after stop = min(period_stop, terminated_at) else: # instance is still running, so charge them up to current time stop = period_stop dt = stop - start return dt.total_seconds() / 3600.0 else: # instance hasn't launched, so no charge return 0 def _get_flavor(self, context, instance, flavors_cache): """Get flavor information from the instance object, allowing a fallback to lookup by-id for deleted instances only. 
""" try: return instance.get_flavor() except exception.NotFound: if not instance.deleted: # Only support the fallback mechanism for deleted instances # that would have been skipped by migration #153 raise flavor_type = instance.instance_type_id if flavor_type in flavors_cache: return flavors_cache[flavor_type] try: flavor_ref = objects.Flavor.get_by_id(context, flavor_type) flavors_cache[flavor_type] = flavor_ref except exception.FlavorNotFound: # can't bill if there is no flavor flavor_ref = None return flavor_ref def _get_instances_all_cells(self, context, period_start, period_stop, tenant_id, limit, marker): all_instances = [] cells = objects.CellMappingList.get_all(context) for cell in cells: with nova_context.target_cell(context, cell) as cctxt: try: instances = ( objects.InstanceList.get_active_by_window_joined( cctxt, period_start, period_stop, tenant_id, expected_attrs=['flavor'], limit=limit, marker=marker)) except exception.MarkerNotFound: # NOTE(danms): We need to keep looking through the later # cells to find the marker continue all_instances.extend(instances) # NOTE(danms): We must have found a marker if we had one, # so make sure we don't require a marker in the next cell marker = None if limit: limit -= len(instances) if limit <= 0: break if marker is not None and len(all_instances) == 0: # NOTE(danms): If we did not find the marker in any cell, # mimic the db_api behavior here raise exception.MarkerNotFound(marker=marker) return all_instances def _tenant_usages_for_period(self, context, period_start, period_stop, tenant_id=None, detailed=True, limit=None, marker=None): instances = self._get_instances_all_cells(context, period_start, period_stop, tenant_id, limit, marker) rval = {} flavors = {} all_server_usages = [] for instance in instances: info = {} info['hours'] = self._hours_for(instance, period_start, period_stop) flavor = self._get_flavor(context, instance, flavors) if not flavor: info['flavor'] = '' else: info['flavor'] = flavor.name info['instance_id'] = instance.uuid info['name'] = instance.display_name info['tenant_id'] = instance.project_id try: info['memory_mb'] = instance.flavor.memory_mb info['local_gb'] = (instance.flavor.root_gb + instance.flavor.ephemeral_gb) info['vcpus'] = instance.flavor.vcpus except exception.InstanceNotFound: # This is rare case, instance disappear during analysis # As it's just info collection, we can try next one continue # NOTE(mriedem): We need to normalize the start/end times back # to timezone-naive so the response doesn't change after the # conversion to objects. 
info['started_at'] = timeutils.normalize_time(instance.launched_at) info['ended_at'] = ( timeutils.normalize_time(instance.terminated_at) if instance.terminated_at else None) if info['ended_at']: info['state'] = 'terminated' else: info['state'] = instance.vm_state now = timeutils.utcnow() if info['state'] == 'terminated': delta = info['ended_at'] - info['started_at'] else: delta = now - info['started_at'] info['uptime'] = int(delta.total_seconds()) if info['tenant_id'] not in rval: summary = {} summary['tenant_id'] = info['tenant_id'] if detailed: summary['server_usages'] = [] summary['total_local_gb_usage'] = 0 summary['total_vcpus_usage'] = 0 summary['total_memory_mb_usage'] = 0 summary['total_hours'] = 0 summary['start'] = timeutils.normalize_time(period_start) summary['stop'] = timeutils.normalize_time(period_stop) rval[info['tenant_id']] = summary summary = rval[info['tenant_id']] summary['total_local_gb_usage'] += info['local_gb'] * info['hours'] summary['total_vcpus_usage'] += info['vcpus'] * info['hours'] summary['total_memory_mb_usage'] += (info['memory_mb'] * info['hours']) summary['total_hours'] += info['hours'] all_server_usages.append(info) if detailed: summary['server_usages'].append(info) return list(rval.values()), all_server_usages def _parse_datetime(self, dtstr): if not dtstr: value = timeutils.utcnow() elif isinstance(dtstr, datetime.datetime): value = dtstr else: for fmt in ["%Y-%m-%dT%H:%M:%S", "%Y-%m-%dT%H:%M:%S.%f", "%Y-%m-%d %H:%M:%S.%f"]: try: value = parse_strtime(dtstr, fmt) break except exception.InvalidStrTime: pass else: msg = _("Datetime is in invalid format") raise exception.InvalidStrTime(reason=msg) # NOTE(mriedem): Instance object DateTime fields are timezone-aware # so we have to force UTC timezone for comparing this datetime against # instance object fields and still maintain backwards compatibility # in the API. if value.utcoffset() is None: value = value.replace(tzinfo=iso8601.UTC) return value def _get_datetime_range(self, req): qs = req.environ.get('QUERY_STRING', '') env = urlparse.parse_qs(qs) # NOTE(lzyeval): env.get() always returns a list period_start = self._parse_datetime(env.get('start', [None])[0]) period_stop = self._parse_datetime(env.get('end', [None])[0]) if not period_start < period_stop: msg = _("Invalid start time. 
The start time cannot occur after " "the end time.") raise exc.HTTPBadRequest(explanation=msg) detailed = env.get('detailed', ['0'])[0] == '1' return (period_start, period_stop, detailed) @wsgi.Controller.api_version("2.40") @validation.query_schema(schema.index_query_v240) @wsgi.expected_errors(400) def index(self, req): """Retrieve tenant_usage for all tenants.""" return self._index(req, links=True) @wsgi.Controller.api_version("2.1", "2.39") # noqa @validation.query_schema(schema.index_query) @wsgi.expected_errors(400) def index(self, req): """Retrieve tenant_usage for all tenants.""" return self._index(req) @wsgi.Controller.api_version("2.40") @validation.query_schema(schema.show_query_v240) @wsgi.expected_errors(400) def show(self, req, id): """Retrieve tenant_usage for a specified tenant.""" return self._show(req, id, links=True) @wsgi.Controller.api_version("2.1", "2.39") # noqa @validation.query_schema(schema.show_query) @wsgi.expected_errors(400) def show(self, req, id): """Retrieve tenant_usage for a specified tenant.""" return self._show(req, id) def _index(self, req, links=False): context = req.environ['nova.context'] context.can(stu_policies.POLICY_ROOT % 'list') try: (period_start, period_stop, detailed) = self._get_datetime_range( req) except exception.InvalidStrTime as e: raise exc.HTTPBadRequest(explanation=e.format_message()) now = timeutils.parse_isotime(timeutils.utcnow().isoformat()) if period_stop > now: period_stop = now marker = None limit = CONF.api.max_limit if links: limit, marker = common.get_limit_and_marker(req) try: usages, server_usages = self._tenant_usages_for_period( context, period_start, period_stop, detailed=detailed, limit=limit, marker=marker) except exception.MarkerNotFound as e: raise exc.HTTPBadRequest(explanation=e.format_message()) tenant_usages = {'tenant_usages': usages} if links: usages_links = self._view_builder.get_links(req, server_usages) if usages_links: tenant_usages['tenant_usages_links'] = usages_links return tenant_usages def _show(self, req, id, links=False): tenant_id = id context = req.environ['nova.context'] context.can(stu_policies.POLICY_ROOT % 'show', {'project_id': tenant_id}) try: (period_start, period_stop, ignore) = self._get_datetime_range( req) except exception.InvalidStrTime as e: raise exc.HTTPBadRequest(explanation=e.format_message()) now = timeutils.parse_isotime(timeutils.utcnow().isoformat()) if period_stop > now: period_stop = now marker = None limit = CONF.api.max_limit if links: limit, marker = common.get_limit_and_marker(req) try: usage, server_usages = self._tenant_usages_for_period( context, period_start, period_stop, tenant_id=tenant_id, detailed=True, limit=limit, marker=marker) except exception.MarkerNotFound as e: raise exc.HTTPBadRequest(explanation=e.format_message()) if len(usage): usage = list(usage)[0] else: usage = {} tenant_usage = {'tenant_usage': usage} if links: usages_links = self._view_builder.get_links( req, server_usages, tenant_id=tenant_id) if usages_links: tenant_usage['tenant_usage_links'] = usages_links return tenant_usage nova-17.0.1/nova/api/openstack/compute/suspend_server.py0000666000175000017500000000535013250073126023371 0ustar zuulzuul00000000000000# Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from webob import exc from nova.api.openstack import common from nova.api.openstack import wsgi from nova import compute from nova import exception from nova.policies import suspend_server as ss_policies class SuspendServerController(wsgi.Controller): def __init__(self, *args, **kwargs): super(SuspendServerController, self).__init__(*args, **kwargs) self.compute_api = compute.API() @wsgi.response(202) @wsgi.expected_errors((404, 409)) @wsgi.action('suspend') def _suspend(self, req, id, body): """Permit admins to suspend the server.""" context = req.environ['nova.context'] server = common.get_instance(self.compute_api, context, id) try: context.can(ss_policies.POLICY_ROOT % 'suspend', target={'user_id': server.user_id, 'project_id': server.project_id}) self.compute_api.suspend(context, server) except exception.InstanceUnknownCell as e: raise exc.HTTPNotFound(explanation=e.format_message()) except exception.InstanceIsLocked as e: raise exc.HTTPConflict(explanation=e.format_message()) except exception.InstanceInvalidState as state_error: common.raise_http_conflict_for_instance_invalid_state(state_error, 'suspend', id) @wsgi.response(202) @wsgi.expected_errors((404, 409)) @wsgi.action('resume') def _resume(self, req, id, body): """Permit admins to resume the server from suspend.""" context = req.environ['nova.context'] context.can(ss_policies.POLICY_ROOT % 'resume') server = common.get_instance(self.compute_api, context, id) try: self.compute_api.resume(context, server) except exception.InstanceUnknownCell as e: raise exc.HTTPNotFound(explanation=e.format_message()) except exception.InstanceIsLocked as e: raise exc.HTTPConflict(explanation=e.format_message()) except exception.InstanceInvalidState as state_error: common.raise_http_conflict_for_instance_invalid_state(state_error, 'resume', id) nova-17.0.1/nova/api/openstack/compute/server_tags.py0000666000175000017500000002254613250073126022654 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
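The usage accounting in simple_tenant_usage above clamps each instance's lifetime to the requested reporting window before converting to hours. A condensed standalone sketch of that proration logic, assuming timezone-naive datetimes; hours_in_window is a hypothetical name, not nova code.

import datetime

def hours_in_window(launched_at, terminated_at, period_start, period_stop):
    """Clamp [launched_at, terminated_at) to the window, in hours.

    Mirrors SimpleTenantUsageController._hours_for(): no charge if the
    instance was terminated before the window opened or launched after it
    closed; a still-running instance (terminated_at=None) is charged up
    to period_stop.
    """
    if not launched_at:
        return 0
    if terminated_at and terminated_at < period_start:
        return 0
    if launched_at > period_stop:
        return 0
    start = max(launched_at, period_start)
    stop = min(period_stop, terminated_at) if terminated_at else period_stop
    return (stop - start).total_seconds() / 3600.0

window = (datetime.datetime(2018, 3, 1), datetime.datetime(2018, 3, 2))
launched = datetime.datetime(2018, 3, 1, 18)
print(hours_in_window(launched, None, *window))  # 6.0: last quarter of day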
import jsonschema import webob from nova.api.openstack import common from nova.api.openstack.compute.schemas import server_tags as schema from nova.api.openstack.compute.views import server_tags from nova.api.openstack import wsgi from nova.api import validation from nova.api.validation import parameter_types from nova import compute from nova.compute import vm_states from nova import context as nova_context from nova import exception from nova.i18n import _ from nova.notifications import base as notifications_base from nova import objects from nova.policies import server_tags as st_policies def _get_tags_names(tags): return [t.tag for t in tags] def _get_instance_mapping(context, server_id): try: return objects.InstanceMapping.get_by_instance_uuid(context, server_id) except exception.InstanceMappingNotFound as e: raise webob.exc.HTTPNotFound(explanation=e.format_message()) class ServerTagsController(wsgi.Controller): _view_builder_class = server_tags.ViewBuilder def __init__(self): self.compute_api = compute.API() super(ServerTagsController, self).__init__() def _check_instance_in_valid_state(self, context, server_id, action): instance = common.get_instance(self.compute_api, context, server_id) if instance.vm_state not in (vm_states.ACTIVE, vm_states.PAUSED, vm_states.SUSPENDED, vm_states.STOPPED): exc = exception.InstanceInvalidState(attr='vm_state', instance_uuid=instance.uuid, state=instance.vm_state, method=action) common.raise_http_conflict_for_instance_invalid_state(exc, action, server_id) return instance @wsgi.Controller.api_version("2.26") @wsgi.response(204) @wsgi.expected_errors(404) def show(self, req, server_id, id): context = req.environ["nova.context"] context.can(st_policies.POLICY_ROOT % 'show') try: im = objects.InstanceMapping.get_by_instance_uuid(context, server_id) with nova_context.target_cell(context, im.cell_mapping) as cctxt: exists = objects.Tag.exists(cctxt, server_id, id) except (exception.InstanceNotFound, exception.InstanceMappingNotFound) as e: raise webob.exc.HTTPNotFound(explanation=e.format_message()) if not exists: msg = (_("Server %(server_id)s has no tag '%(tag)s'") % {'server_id': server_id, 'tag': id}) raise webob.exc.HTTPNotFound(explanation=msg) @wsgi.Controller.api_version("2.26") @wsgi.expected_errors(404) def index(self, req, server_id): context = req.environ["nova.context"] context.can(st_policies.POLICY_ROOT % 'index') try: im = objects.InstanceMapping.get_by_instance_uuid(context, server_id) with nova_context.target_cell(context, im.cell_mapping) as cctxt: tags = objects.TagList.get_by_resource_id(cctxt, server_id) except (exception.InstanceNotFound, exception.InstanceMappingNotFound) as e: raise webob.exc.HTTPNotFound(explanation=e.format_message()) return {'tags': _get_tags_names(tags)} @wsgi.Controller.api_version("2.26") @wsgi.expected_errors((400, 404, 409)) @validation.schema(schema.update) def update(self, req, server_id, id, body): context = req.environ["nova.context"] context.can(st_policies.POLICY_ROOT % 'update') im = _get_instance_mapping(context, server_id) with nova_context.target_cell(context, im.cell_mapping) as cctxt: instance = self._check_instance_in_valid_state( cctxt, server_id, 'update tag') try: jsonschema.validate(id, parameter_types.tag) except jsonschema.ValidationError as e: msg = (_("Tag '%(tag)s' is invalid. It must be a non empty string " "without characters '/' and ','. 
Validation error " "message: %(err)s") % {'tag': id, 'err': e.message}) raise webob.exc.HTTPBadRequest(explanation=msg) try: with nova_context.target_cell(context, im.cell_mapping) as cctxt: tags = objects.TagList.get_by_resource_id(cctxt, server_id) except exception.InstanceNotFound as e: raise webob.exc.HTTPNotFound(explanation=e.format_message()) if len(tags) >= objects.instance.MAX_TAG_COUNT: msg = (_("The number of tags exceeded the per-server limit %d") % objects.instance.MAX_TAG_COUNT) raise webob.exc.HTTPBadRequest(explanation=msg) if id in _get_tags_names(tags): # NOTE(snikitin): server already has specified tag return webob.Response(status_int=204) try: with nova_context.target_cell(context, im.cell_mapping) as cctxt: tag = objects.Tag(context=cctxt, resource_id=server_id, tag=id) tag.create() instance.tags = objects.TagList.get_by_resource_id(cctxt, server_id) except exception.InstanceNotFound as e: raise webob.exc.HTTPNotFound(explanation=e.format_message()) notifications_base.send_instance_update_notification( context, instance, service="nova-api") response = webob.Response(status_int=201) response.headers['Location'] = self._view_builder.get_location( req, server_id, id) return response @wsgi.Controller.api_version("2.26") @wsgi.expected_errors((404, 409)) @validation.schema(schema.update_all) def update_all(self, req, server_id, body): context = req.environ["nova.context"] context.can(st_policies.POLICY_ROOT % 'update_all') im = _get_instance_mapping(context, server_id) with nova_context.target_cell(context, im.cell_mapping) as cctxt: instance = self._check_instance_in_valid_state( cctxt, server_id, 'update tags') try: with nova_context.target_cell(context, im.cell_mapping) as cctxt: tags = objects.TagList.create(cctxt, server_id, body['tags']) instance.tags = tags except exception.InstanceNotFound as e: raise webob.exc.HTTPNotFound(explanation=e.format_message()) notifications_base.send_instance_update_notification( context, instance, service="nova-api") return {'tags': _get_tags_names(tags)} @wsgi.Controller.api_version("2.26") @wsgi.response(204) @wsgi.expected_errors((404, 409)) def delete(self, req, server_id, id): context = req.environ["nova.context"] context.can(st_policies.POLICY_ROOT % 'delete') im = _get_instance_mapping(context, server_id) with nova_context.target_cell(context, im.cell_mapping) as cctxt: instance = self._check_instance_in_valid_state( cctxt, server_id, 'delete tag') try: with nova_context.target_cell(context, im.cell_mapping) as cctxt: objects.Tag.destroy(cctxt, server_id, id) instance.tags = objects.TagList.get_by_resource_id(cctxt, server_id) except (exception.InstanceTagNotFound, exception.InstanceNotFound) as e: raise webob.exc.HTTPNotFound(explanation=e.format_message()) notifications_base.send_instance_update_notification( context, instance, service="nova-api") @wsgi.Controller.api_version("2.26") @wsgi.response(204) @wsgi.expected_errors((404, 409)) def delete_all(self, req, server_id): context = req.environ["nova.context"] context.can(st_policies.POLICY_ROOT % 'delete_all') im = _get_instance_mapping(context, server_id) with nova_context.target_cell(context, im.cell_mapping) as cctxt: instance = self._check_instance_in_valid_state( cctxt, server_id, 'delete tags') try: with nova_context.target_cell(context, im.cell_mapping) as cctxt: objects.TagList.destroy(cctxt, server_id) instance.tags = objects.TagList() except exception.InstanceNotFound as e: raise webob.exc.HTTPNotFound(explanation=e.format_message()) 
notifications_base.send_instance_update_notification( context, instance, service="nova-api") nova-17.0.1/nova/api/openstack/compute/images.py0000666000175000017500000001270413250073126021570 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import webob.exc from nova.api.openstack.api_version_request \ import MAX_PROXY_API_SUPPORT_VERSION from nova.api.openstack import common from nova.api.openstack.compute.views import images as views_images from nova.api.openstack import wsgi from nova import exception from nova.i18n import _ import nova.image import nova.utils SUPPORTED_FILTERS = { 'name': 'name', 'status': 'status', 'changes-since': 'changes-since', 'server': 'property-instance_uuid', 'type': 'property-image_type', 'minRam': 'min_ram', 'minDisk': 'min_disk', } class ImagesController(wsgi.Controller): """Base controller for retrieving/displaying images.""" _view_builder_class = views_images.ViewBuilder def __init__(self, **kwargs): super(ImagesController, self).__init__(**kwargs) self._image_api = nova.image.API() def _get_filters(self, req): """Return a dictionary of query param filters from the request. :param req: the Request object coming from the wsgi layer :retval a dict of key/value filters """ filters = {} for param in req.params: if param in SUPPORTED_FILTERS or param.startswith('property-'): # map filter name or carry through if property-* filter_name = SUPPORTED_FILTERS.get(param, param) filters[filter_name] = req.params.get(param) # ensure server filter is the instance uuid filter_name = 'property-instance_uuid' try: filters[filter_name] = filters[filter_name].rsplit('/', 1)[1] except (AttributeError, IndexError, KeyError): pass filter_name = 'status' if filter_name in filters: # The Image API expects us to use lowercase strings for status filters[filter_name] = filters[filter_name].lower() return filters @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.expected_errors(404) def show(self, req, id): """Return detailed information about a specific image. :param req: `wsgi.Request` object :param id: Image identifier """ context = req.environ['nova.context'] try: image = self._image_api.get(context, id) except (exception.ImageNotFound, exception.InvalidImageRef): explanation = _("Image not found.") raise webob.exc.HTTPNotFound(explanation=explanation) req.cache_db_items('images', [image], 'id') return self._view_builder.show(req, image) @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.expected_errors((403, 404)) @wsgi.response(204) def delete(self, req, id): """Delete an image, if allowed. 
:param req: `wsgi.Request` object :param id: Image identifier (integer) """ context = req.environ['nova.context'] try: self._image_api.delete(context, id) except exception.ImageNotFound: explanation = _("Image not found.") raise webob.exc.HTTPNotFound(explanation=explanation) except exception.ImageNotAuthorized: # The image service raises this exception on delete if glanceclient # raises HTTPForbidden. explanation = _("You are not allowed to delete the image.") raise webob.exc.HTTPForbidden(explanation=explanation) @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.expected_errors(400) def index(self, req): """Return an index listing of images available to the request. :param req: `wsgi.Request` object """ context = req.environ['nova.context'] filters = self._get_filters(req) page_params = common.get_pagination_params(req) try: images = self._image_api.get_all(context, filters=filters, **page_params) except exception.Invalid as e: raise webob.exc.HTTPBadRequest(explanation=e.format_message()) return self._view_builder.index(req, images) @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.expected_errors(400) def detail(self, req): """Return a detailed index listing of images available to the request. :param req: `wsgi.Request` object. """ context = req.environ['nova.context'] filters = self._get_filters(req) page_params = common.get_pagination_params(req) try: images = self._image_api.get_all(context, filters=filters, **page_params) except exception.Invalid as e: raise webob.exc.HTTPBadRequest(explanation=e.format_message()) req.cache_db_items('images', images, 'id') return self._view_builder.detail(req, images) nova-17.0.1/nova/api/openstack/compute/multiple_create.py0000666000175000017500000000357413250073126023506 0ustar zuulzuul00000000000000# Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from webob import exc from nova.api.openstack.compute.schemas import multiple_create as \ schema_multiple_create from nova.i18n import _ MIN_ATTRIBUTE_NAME = "min_count" MAX_ATTRIBUTE_NAME = "max_count" RRID_ATTRIBUTE_NAME = "return_reservation_id" # NOTE(gmann): This function is not supposed to use 'body_deprecated_param' # parameter as this is placed to handle scheduler_hint extension for V2.1. def server_create(server_dict, create_kwargs, body_deprecated_param): # min_count and max_count are optional. If they exist, they may come # in as strings. Verify that they are valid integers and > 0. # Also, we want to default 'min_count' to 1, and default # 'max_count' to be 'min_count'. 
min_count = int(server_dict.get(MIN_ATTRIBUTE_NAME, 1)) max_count = int(server_dict.get(MAX_ATTRIBUTE_NAME, min_count)) return_id = server_dict.get(RRID_ATTRIBUTE_NAME, False) if min_count > max_count: msg = _('min_count must be <= max_count') raise exc.HTTPBadRequest(explanation=msg) create_kwargs['min_count'] = min_count create_kwargs['max_count'] = max_count create_kwargs['return_reservation_id'] = return_id def get_server_create_schema(version): return schema_multiple_create.server_create nova-17.0.1/nova/api/openstack/compute/hypervisors.py0000666000175000017500000004340113250073126022716 0ustar zuulzuul00000000000000# Copyright (c) 2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """The hypervisors admin extension.""" from oslo_log import log as logging from oslo_serialization import jsonutils from oslo_utils import strutils from oslo_utils import uuidutils import webob.exc from nova.api.openstack import api_version_request from nova.api.openstack import common from nova.api.openstack.compute.schemas import hypervisors as hyper_schema from nova.api.openstack.compute.views import hypervisors as hyper_view from nova.api.openstack import wsgi from nova.api import validation from nova.cells import utils as cells_utils from nova import compute from nova import exception from nova.i18n import _ from nova.policies import hypervisors as hv_policies from nova import servicegroup from nova import utils LOG = logging.getLogger(__name__) UUID_FOR_ID_MIN_VERSION = '2.53' class HypervisorsController(wsgi.Controller): """The Hypervisors API controller for the OpenStack API.""" _view_builder_class = hyper_view.ViewBuilder def __init__(self): self.host_api = compute.HostAPI() self.servicegroup_api = servicegroup.API() super(HypervisorsController, self).__init__() def _view_hypervisor(self, hypervisor, service, detail, req, servers=None, **kwargs): alive = self.servicegroup_api.service_is_up(service) # The 2.53 microversion returns the compute node uuid rather than id. 
uuid_for_id = api_version_request.is_supported( req, min_version=UUID_FOR_ID_MIN_VERSION) hyp_dict = { 'id': hypervisor.uuid if uuid_for_id else hypervisor.id, 'hypervisor_hostname': hypervisor.hypervisor_hostname, 'state': 'up' if alive else 'down', 'status': ('disabled' if service.disabled else 'enabled'), } if detail: for field in ('vcpus', 'memory_mb', 'local_gb', 'vcpus_used', 'memory_mb_used', 'local_gb_used', 'hypervisor_type', 'hypervisor_version', 'free_ram_mb', 'free_disk_gb', 'current_workload', 'running_vms', 'disk_available_least', 'host_ip'): hyp_dict[field] = getattr(hypervisor, field) service_id = service.uuid if uuid_for_id else service.id hyp_dict['service'] = { 'id': service_id, 'host': hypervisor.host, 'disabled_reason': service.disabled_reason, } if api_version_request.is_supported(req, min_version='2.28'): if hypervisor.cpu_info: hyp_dict['cpu_info'] = jsonutils.loads(hypervisor.cpu_info) else: hyp_dict['cpu_info'] = {} else: hyp_dict['cpu_info'] = hypervisor.cpu_info if servers: hyp_dict['servers'] = [dict(name=serv['name'], uuid=serv['uuid']) for serv in servers] # Add any additional info if kwargs: hyp_dict.update(kwargs) return hyp_dict def _get_compute_nodes_by_name_pattern(self, context, hostname_match): compute_nodes = self.host_api.compute_node_search_by_hypervisor( context, hostname_match) if not compute_nodes: msg = (_("No hypervisor matching '%s' could be found.") % hostname_match) raise webob.exc.HTTPNotFound(explanation=msg) return compute_nodes def _get_hypervisors(self, req, detail=False, limit=None, marker=None, links=False): """Get hypervisors for the given request. :param req: nova.api.openstack.wsgi.Request for the GET request :param detail: If True, return a detailed response. :param limit: An optional user-supplied page limit. :param marker: An optional user-supplied marker for paging. :param links: If True, return links in the response for paging. """ context = req.environ['nova.context'] context.can(hv_policies.BASE_POLICY_NAME) # The 2.53 microversion moves the search and servers routes into # GET /os-hypervisors and GET /os-hypervisors/detail with query # parameters. if api_version_request.is_supported( req, min_version=UUID_FOR_ID_MIN_VERSION): hypervisor_match = req.GET.get('hypervisor_hostname_pattern') with_servers = strutils.bool_from_string( req.GET.get('with_servers', False), strict=True) else: hypervisor_match = None with_servers = False if hypervisor_match is not None: # We have to check for 'limit' in the request itself because # the limit passed in is CONF.api.max_limit by default. if 'limit' in req.GET or marker: # Paging with hostname pattern isn't supported. raise webob.exc.HTTPBadRequest( _('Paging over hypervisors with the ' 'hypervisor_hostname_pattern query parameter is not ' 'supported.')) # Explicitly do not try to generate links when querying with the # hostname pattern since the request in the link would fail the # check above. links = False # Get all compute nodes with a hypervisor_hostname that matches # the given pattern. If none are found then it's a 404 error. compute_nodes = self._get_compute_nodes_by_name_pattern( context, hypervisor_match) else: # Get all compute nodes. 
try: compute_nodes = self.host_api.compute_node_get_all( context, limit=limit, marker=marker) except exception.MarkerNotFound: msg = _('marker [%s] not found') % marker raise webob.exc.HTTPBadRequest(explanation=msg) hypervisors_list = [] for hyp in compute_nodes: try: instances = None if with_servers: instances = self.host_api.instance_get_all_by_host( context, hyp.host) service = self.host_api.service_get_by_compute_host( context, hyp.host) hypervisors_list.append( self._view_hypervisor( hyp, service, detail, req, servers=instances)) except (exception.ComputeHostNotFound, exception.HostMappingNotFound): # The compute service could be deleted which doesn't delete # the compute node record, that has to be manually removed # from the database so we just ignore it when listing nodes. LOG.debug('Unable to find service for compute node %s. The ' 'service may be deleted and compute nodes need to ' 'be manually cleaned up.', hyp.host) hypervisors_dict = dict(hypervisors=hypervisors_list) if links: hypervisors_links = self._view_builder.get_links( req, hypervisors_list, detail) if hypervisors_links: hypervisors_dict['hypervisors_links'] = hypervisors_links return hypervisors_dict @wsgi.Controller.api_version(UUID_FOR_ID_MIN_VERSION) @validation.query_schema(hyper_schema.list_query_schema_v253, UUID_FOR_ID_MIN_VERSION) @wsgi.expected_errors((400, 404)) def index(self, req): """Starting with the 2.53 microversion, the id field in the response is the compute_nodes.uuid value. Also, the search and servers routes are superseded and replaced with query parameters for listing hypervisors by a hostname pattern and whether or not to include hosted servers in the response. """ limit, marker = common.get_limit_and_marker(req) return self._index(req, limit=limit, marker=marker, links=True) @wsgi.Controller.api_version("2.33", "2.52") # noqa @validation.query_schema(hyper_schema.list_query_schema_v233) @wsgi.expected_errors((400)) def index(self, req): limit, marker = common.get_limit_and_marker(req) return self._index(req, limit=limit, marker=marker, links=True) @wsgi.Controller.api_version("2.1", "2.32") # noqa @wsgi.expected_errors(()) def index(self, req): return self._index(req) def _index(self, req, limit=None, marker=None, links=False): return self._get_hypervisors(req, detail=False, limit=limit, marker=marker, links=links) @wsgi.Controller.api_version(UUID_FOR_ID_MIN_VERSION) @validation.query_schema(hyper_schema.list_query_schema_v253, UUID_FOR_ID_MIN_VERSION) @wsgi.expected_errors((400, 404)) def detail(self, req): """Starting with the 2.53 microversion, the id field in the response is the compute_nodes.uuid value. Also, the search and servers routes are superseded and replaced with query parameters for listing hypervisors by a hostname pattern and whether or not to include hosted servers in the response. 
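        For example (illustrative request):

            GET /os-hypervisors/detail?hypervisor_hostname_pattern=compute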
""" limit, marker = common.get_limit_and_marker(req) return self._detail(req, limit=limit, marker=marker, links=True) @wsgi.Controller.api_version("2.33", "2.52") # noqa @validation.query_schema(hyper_schema.list_query_schema_v233) @wsgi.expected_errors((400)) def detail(self, req): limit, marker = common.get_limit_and_marker(req) return self._detail(req, limit=limit, marker=marker, links=True) @wsgi.Controller.api_version("2.1", "2.32") # noqa @wsgi.expected_errors(()) def detail(self, req): return self._detail(req) def _detail(self, req, limit=None, marker=None, links=False): return self._get_hypervisors(req, detail=True, limit=limit, marker=marker, links=links) @staticmethod def _validate_id(req, hypervisor_id): """Validates that the id is a uuid for microversions that require it. :param req: The HTTP request object which contains the requested microversion information. :param hypervisor_id: The provided hypervisor id. :raises: webob.exc.HTTPBadRequest if the requested microversion is greater than or equal to 2.53 and the id is not a uuid. :raises: webob.exc.HTTPNotFound if the requested microversion is less than 2.53 and the id is not an integer. """ expect_uuid = api_version_request.is_supported( req, min_version=UUID_FOR_ID_MIN_VERSION) if expect_uuid: if not uuidutils.is_uuid_like(hypervisor_id): msg = _('Invalid uuid %s') % hypervisor_id raise webob.exc.HTTPBadRequest(explanation=msg) else: # This API is supported for cells v1 and as such the id can be # a cell v1 delimited string, so we have to parse it first. if cells_utils.CELL_ITEM_SEP in str(hypervisor_id): hypervisor_id = cells_utils.split_cell_and_item( hypervisor_id)[1] try: utils.validate_integer(hypervisor_id, 'id') except exception.InvalidInput: msg = (_("Hypervisor with ID '%s' could not be found.") % hypervisor_id) raise webob.exc.HTTPNotFound(explanation=msg) @wsgi.Controller.api_version(UUID_FOR_ID_MIN_VERSION) @validation.query_schema(hyper_schema.show_query_schema_v253, UUID_FOR_ID_MIN_VERSION) @wsgi.expected_errors((400, 404)) def show(self, req, id): """The 2.53 microversion requires that the id is a uuid and as a result it can also return a 400 response if an invalid uuid is passed. The 2.53 microversion also supports the with_servers query parameter to include a list of servers on the given hypervisor if requested. 
""" with_servers = strutils.bool_from_string( req.GET.get('with_servers', False), strict=True) return self._show(req, id, with_servers) @wsgi.Controller.api_version("2.1", "2.52") # noqa F811 @wsgi.expected_errors(404) def show(self, req, id): return self._show(req, id) def _show(self, req, id, with_servers=False): context = req.environ['nova.context'] context.can(hv_policies.BASE_POLICY_NAME) self._validate_id(req, id) try: hyp = self.host_api.compute_node_get(context, id) instances = None if with_servers: instances = self.host_api.instance_get_all_by_host( context, hyp.host) service = self.host_api.service_get_by_compute_host( context, hyp.host) except (ValueError, exception.ComputeHostNotFound, exception.HostMappingNotFound): msg = _("Hypervisor with ID '%s' could not be found.") % id raise webob.exc.HTTPNotFound(explanation=msg) return dict(hypervisor=self._view_hypervisor( hyp, service, True, req, instances)) @wsgi.expected_errors((400, 404, 501)) def uptime(self, req, id): context = req.environ['nova.context'] context.can(hv_policies.BASE_POLICY_NAME) self._validate_id(req, id) try: hyp = self.host_api.compute_node_get(context, id) except (ValueError, exception.ComputeHostNotFound): msg = _("Hypervisor with ID '%s' could not be found.") % id raise webob.exc.HTTPNotFound(explanation=msg) # Get the uptime try: host = hyp.host uptime = self.host_api.get_host_uptime(context, host) service = self.host_api.service_get_by_compute_host(context, host) except NotImplementedError: common.raise_feature_not_supported() except exception.ComputeServiceUnavailable as e: raise webob.exc.HTTPBadRequest(explanation=e.format_message()) except exception.HostMappingNotFound: # NOTE(danms): This mirrors the compute_node_get() behavior # where the node is missing, resulting in NotFound instead of # BadRequest if we fail on the map lookup. msg = _("Hypervisor with ID '%s' could not be found.") % id raise webob.exc.HTTPNotFound(explanation=msg) return dict(hypervisor=self._view_hypervisor(hyp, service, False, req, uptime=uptime)) @wsgi.Controller.api_version('2.1', '2.52') @wsgi.expected_errors(404) def search(self, req, id): """Prior to microversion 2.53 you could search for hypervisors by a hostname pattern on a dedicated route. Starting with 2.53, searching by a hostname pattern is a query parameter in the GET /os-hypervisors index and detail methods. """ context = req.environ['nova.context'] context.can(hv_policies.BASE_POLICY_NAME) hypervisors = self._get_compute_nodes_by_name_pattern(context, id) try: return dict(hypervisors=[ self._view_hypervisor( hyp, self.host_api.service_get_by_compute_host(context, hyp.host), False, req) for hyp in hypervisors]) except exception.HostMappingNotFound: msg = _("No hypervisor matching '%s' could be found.") % id raise webob.exc.HTTPNotFound(explanation=msg) @wsgi.Controller.api_version('2.1', '2.52') @wsgi.expected_errors(404) def servers(self, req, id): """Prior to microversion 2.53 you could search for hypervisors by a hostname pattern and include servers on those hosts in the response on a dedicated route. Starting with 2.53, searching by a hostname pattern and including hosted servers is a query parameter in the GET /os-hypervisors index and detail methods. 
""" context = req.environ['nova.context'] context.can(hv_policies.BASE_POLICY_NAME) compute_nodes = self._get_compute_nodes_by_name_pattern(context, id) hypervisors = [] for compute_node in compute_nodes: try: instances = self.host_api.instance_get_all_by_host(context, compute_node.host) service = self.host_api.service_get_by_compute_host( context, compute_node.host) except exception.HostMappingNotFound as e: raise webob.exc.HTTPNotFound(explanation=e.format_message()) hyp = self._view_hypervisor(compute_node, service, False, req, instances) hypervisors.append(hyp) return dict(hypervisors=hypervisors) @wsgi.expected_errors(()) def statistics(self, req): context = req.environ['nova.context'] context.can(hv_policies.BASE_POLICY_NAME) stats = self.host_api.compute_node_statistics(context) return dict(hypervisor_statistics=stats) nova-17.0.1/nova/api/openstack/compute/rescue.py0000666000175000017500000001004513250073126021605 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """The rescue mode extension.""" from webob import exc from nova.api.openstack import common from nova.api.openstack.compute.schemas import rescue from nova.api.openstack import wsgi from nova.api import validation from nova import compute import nova.conf from nova import exception from nova.policies import rescue as rescue_policies from nova import utils CONF = nova.conf.CONF class RescueController(wsgi.Controller): def __init__(self, *args, **kwargs): super(RescueController, self).__init__(*args, **kwargs) self.compute_api = compute.API() # TODO(cyeoh): Should be responding here with 202 Accept # because rescue is an async call, but keep to 200 # for backwards compatibility reasons. 
@wsgi.expected_errors((400, 404, 409, 501)) @wsgi.action('rescue') @validation.schema(rescue.rescue) def _rescue(self, req, id, body): """Rescue an instance.""" context = req.environ["nova.context"] if body['rescue'] and 'adminPass' in body['rescue']: password = body['rescue']['adminPass'] else: password = utils.generate_password() instance = common.get_instance(self.compute_api, context, id) context.can(rescue_policies.BASE_POLICY_NAME, target={'user_id': instance.user_id, 'project_id': instance.project_id}) rescue_image_ref = None if body['rescue']: rescue_image_ref = body['rescue'].get('rescue_image_ref') try: self.compute_api.rescue(context, instance, rescue_password=password, rescue_image_ref=rescue_image_ref) except exception.InstanceUnknownCell as e: raise exc.HTTPNotFound(explanation=e.format_message()) except exception.InstanceIsLocked as e: raise exc.HTTPConflict(explanation=e.format_message()) except exception.InstanceInvalidState as state_error: common.raise_http_conflict_for_instance_invalid_state(state_error, 'rescue', id) except exception.InvalidVolume as volume_error: raise exc.HTTPConflict(explanation=volume_error.format_message()) except exception.InstanceNotRescuable as non_rescuable: raise exc.HTTPBadRequest( explanation=non_rescuable.format_message()) if CONF.api.enable_instance_password: return {'adminPass': password} else: return {} @wsgi.response(202) @wsgi.expected_errors((404, 409, 501)) @wsgi.action('unrescue') def _unrescue(self, req, id, body): """Unrescue an instance.""" context = req.environ["nova.context"] context.can(rescue_policies.BASE_POLICY_NAME) instance = common.get_instance(self.compute_api, context, id) try: self.compute_api.unrescue(context, instance) except exception.InstanceUnknownCell as e: raise exc.HTTPNotFound(explanation=e.format_message()) except exception.InstanceIsLocked as e: raise exc.HTTPConflict(explanation=e.format_message()) except exception.InstanceInvalidState as state_error: common.raise_http_conflict_for_instance_invalid_state(state_error, 'unrescue', id) nova-17.0.1/nova/api/openstack/compute/evacuate.py0000666000175000017500000001225313250073126022117 0ustar zuulzuul00000000000000# Copyright 2013 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
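# An illustrative evacuate action body handled by this controller
# (example values; "force" is only accepted at microversion >= 2.29):
#
#     POST /servers/{server_id}/action
#     {"evacuate": {"host": "compute-02",
#                   "adminPass": "MySecretPass",
#                   "force": false}}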
from oslo_utils import strutils
from webob import exc

from nova.api.openstack import api_version_request
from nova.api.openstack import common
from nova.api.openstack.compute.schemas import evacuate
from nova.api.openstack import wsgi
from nova.api import validation
from nova import compute
import nova.conf
from nova import exception
from nova.i18n import _
from nova.policies import evacuate as evac_policies
from nova import utils

CONF = nova.conf.CONF


class EvacuateController(wsgi.Controller):
    def __init__(self, *args, **kwargs):
        super(EvacuateController, self).__init__(*args, **kwargs)
        self.compute_api = compute.API()
        self.host_api = compute.HostAPI()

    def _get_on_shared_storage(self, req, evacuate_body):
        if api_version_request.is_supported(req, min_version='2.14'):
            return None
        else:
            return strutils.bool_from_string(evacuate_body["onSharedStorage"])

    def _get_password(self, req, evacuate_body, on_shared_storage):
        password = None
        if 'adminPass' in evacuate_body:
            # An admin password must not be specified when the server is
            # requested to be evacuated on shared storage.
            if on_shared_storage:
                msg = _("admin password can't be changed on existing disk")
                raise exc.HTTPBadRequest(explanation=msg)
            password = evacuate_body['adminPass']
        elif not on_shared_storage:
            password = utils.generate_password()

        return password

    def _get_password_v214(self, req, evacuate_body):
        if 'adminPass' in evacuate_body:
            password = evacuate_body['adminPass']
        else:
            password = utils.generate_password()

        return password

    # TODO(eliqiao): Should be responding here with 202 Accepted
    # because evacuate is an async call, but keep to 200 for
    # backwards compatibility reasons.
    @wsgi.expected_errors((400, 404, 409))
    @wsgi.action('evacuate')
    @validation.schema(evacuate.evacuate, "2.0", "2.13")
    @validation.schema(evacuate.evacuate_v214, "2.14", "2.28")
    @validation.schema(evacuate.evacuate_v2_29, "2.29")
    def _evacuate(self, req, id, body):
        """Permit admins to evacuate a server from a failed host to a new
        one.
""" context = req.environ["nova.context"] instance = common.get_instance(self.compute_api, context, id) context.can(evac_policies.BASE_POLICY_NAME, target={'user_id': instance.user_id, 'project_id': instance.project_id}) evacuate_body = body["evacuate"] host = evacuate_body.get("host") force = None on_shared_storage = self._get_on_shared_storage(req, evacuate_body) if api_version_request.is_supported(req, min_version='2.29'): force = body["evacuate"].get("force", False) force = strutils.bool_from_string(force, strict=True) if force is True and not host: message = _("Can't force to a non-provided destination") raise exc.HTTPBadRequest(explanation=message) if api_version_request.is_supported(req, min_version='2.14'): password = self._get_password_v214(req, evacuate_body) else: password = self._get_password(req, evacuate_body, on_shared_storage) if host is not None: try: self.host_api.service_get_by_compute_host(context, host) except (exception.ComputeHostNotFound, exception.HostMappingNotFound): msg = _("Compute host %s not found.") % host raise exc.HTTPNotFound(explanation=msg) if instance.host == host: msg = _("The target host can't be the same one.") raise exc.HTTPBadRequest(explanation=msg) try: self.compute_api.evacuate(context, instance, host, on_shared_storage, password, force) except exception.InstanceUnknownCell as e: raise exc.HTTPNotFound(explanation=e.format_message()) except exception.InstanceInvalidState as state_error: common.raise_http_conflict_for_instance_invalid_state(state_error, 'evacuate', id) except exception.ComputeServiceInUse as e: raise exc.HTTPBadRequest(explanation=e.format_message()) if (not api_version_request.is_supported(req, min_version='2.14') and CONF.api.enable_instance_password): return {'adminPass': password} else: return None nova-17.0.1/nova/api/openstack/compute/certificates.py0000666000175000017500000000206313250073126022765 0ustar zuulzuul00000000000000# Copyright (c) 2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import webob.exc from nova.api.openstack import wsgi class CertificatesController(wsgi.Controller): """The x509 Certificates API controller for the OpenStack API.""" @wsgi.expected_errors(410) def show(self, req, id): """Return certificate information.""" raise webob.exc.HTTPGone() @wsgi.expected_errors((410)) def create(self, req, body=None): """Create a certificate.""" raise webob.exc.HTTPGone() nova-17.0.1/nova/api/openstack/compute/admin_actions.py0000666000175000017500000000656213250073126023140 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. from webob import exc from nova.api.openstack import common from nova.api.openstack.compute.schemas import reset_server_state from nova.api.openstack import wsgi from nova.api import validation from nova import compute from nova.compute import vm_states from nova import exception from nova.policies import admin_actions as aa_policies # States usable in resetState action # NOTE: It is necessary to update the schema of nova/api/openstack/compute/ # schemas/reset_server_state.py, when updating this state_map. state_map = dict(active=vm_states.ACTIVE, error=vm_states.ERROR) class AdminActionsController(wsgi.Controller): def __init__(self, *args, **kwargs): super(AdminActionsController, self).__init__(*args, **kwargs) self.compute_api = compute.API() @wsgi.response(202) @wsgi.expected_errors((404, 409)) @wsgi.action('resetNetwork') def _reset_network(self, req, id, body): """Permit admins to reset networking on a server.""" context = req.environ['nova.context'] context.can(aa_policies.POLICY_ROOT % 'reset_network') instance = common.get_instance(self.compute_api, context, id) try: self.compute_api.reset_network(context, instance) except exception.InstanceUnknownCell as e: raise exc.HTTPNotFound(explanation=e.format_message()) except exception.InstanceIsLocked as e: raise exc.HTTPConflict(explanation=e.format_message()) @wsgi.response(202) @wsgi.expected_errors((404, 409)) @wsgi.action('injectNetworkInfo') def _inject_network_info(self, req, id, body): """Permit admins to inject network info into a server.""" context = req.environ['nova.context'] context.can(aa_policies.POLICY_ROOT % 'inject_network_info') instance = common.get_instance(self.compute_api, context, id) try: self.compute_api.inject_network_info(context, instance) except exception.InstanceUnknownCell as e: raise exc.HTTPNotFound(explanation=e.format_message()) except exception.InstanceIsLocked as e: raise exc.HTTPConflict(explanation=e.format_message()) @wsgi.response(202) @wsgi.expected_errors(404) @wsgi.action('os-resetState') @validation.schema(reset_server_state.reset_state) def _reset_state(self, req, id, body): """Permit admins to reset the state of a server.""" context = req.environ["nova.context"] context.can(aa_policies.POLICY_ROOT % 'reset_state') # Identify the desired state from the body state = state_map[body["os-resetState"]["state"]] instance = common.get_instance(self.compute_api, context, id) instance.vm_state = state instance.task_state = None instance.save(admin_state_reset=True) nova-17.0.1/nova/api/openstack/compute/extended_availability_zone.py0000666000175000017500000000372513250073126025713 0ustar zuulzuul00000000000000# Copyright 2013 Netease, LLC. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
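# Illustrative effect of this extension on a server show response
# (example value; the key is added by _extend_server() below):
#
#     {"server": {..., "OS-EXT-AZ:availability_zone": "nova"}}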
"""The Extended Availability Zone Status API extension.""" from nova.api.openstack import wsgi from nova import availability_zones as avail_zone from nova.policies import extended_availability_zone as eaz_policies PREFIX = "OS-EXT-AZ" class ExtendedAZController(wsgi.Controller): def _extend_server(self, context, server, instance): # NOTE(mriedem): The OS-EXT-AZ prefix should not be used for new # attributes after v2.1. They are only in v2.1 for backward compat # with v2.0. key = "%s:availability_zone" % PREFIX az = avail_zone.get_instance_availability_zone(context, instance) server[key] = az or '' @wsgi.extends def show(self, req, resp_obj, id): context = req.environ['nova.context'] if context.can(eaz_policies.BASE_POLICY_NAME, fatal=False): server = resp_obj.obj['server'] db_instance = req.get_db_instance(server['id']) self._extend_server(context, server, db_instance) @wsgi.extends def detail(self, req, resp_obj): context = req.environ['nova.context'] if context.can(eaz_policies.BASE_POLICY_NAME, fatal=False): servers = list(resp_obj.obj['servers']) for server in servers: db_instance = req.get_db_instance(server['id']) self._extend_server(context, server, db_instance) nova-17.0.1/nova/api/openstack/compute/flavors.py0000666000175000017500000001104013250073126021767 0ustar zuulzuul00000000000000# Copyright 2010 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from oslo_utils import strutils import webob from nova.api.openstack import api_version_request from nova.api.openstack import common from nova.api.openstack.compute.schemas import flavors as schema from nova.api.openstack.compute.views import flavors as flavors_view from nova.api.openstack import wsgi from nova.api import validation from nova.compute import flavors from nova import exception from nova.i18n import _ from nova import objects from nova import utils ALIAS = 'flavors' class FlavorsController(wsgi.Controller): """Flavor controller for the OpenStack API.""" _view_builder_class = flavors_view.ViewBuilder @validation.query_schema(schema.index_query) @wsgi.expected_errors(400) def index(self, req): """Return all flavors in brief.""" limited_flavors = self._get_flavors(req) return self._view_builder.index(req, limited_flavors) @validation.query_schema(schema.index_query) @wsgi.expected_errors(400) def detail(self, req): """Return all flavors in detail.""" limited_flavors = self._get_flavors(req) req.cache_db_flavors(limited_flavors) return self._view_builder.detail(req, limited_flavors) @wsgi.expected_errors(404) def show(self, req, id): """Return data about the given flavor id.""" context = req.environ['nova.context'] try: flavor = flavors.get_flavor_by_flavor_id(id, ctxt=context) req.cache_db_flavor(flavor) except exception.FlavorNotFound as e: raise webob.exc.HTTPNotFound(explanation=e.format_message()) include_description = api_version_request.is_supported( req, flavors_view.FLAVOR_DESCRIPTION_MICROVERSION) return self._view_builder.show(req, flavor, include_description) def _parse_is_public(self, is_public): """Parse is_public into something usable.""" if is_public is None: # preserve default value of showing only public flavors return True elif utils.is_none_string(is_public): return None else: try: return strutils.bool_from_string(is_public, strict=True) except ValueError: msg = _('Invalid is_public filter [%s]') % is_public raise webob.exc.HTTPBadRequest(explanation=msg) def _get_flavors(self, req): """Helper function that returns a list of flavor dicts.""" filters = {} sort_key = req.params.get('sort_key') or 'flavorid' sort_dir = req.params.get('sort_dir') or 'asc' limit, marker = common.get_limit_and_marker(req) context = req.environ['nova.context'] if context.is_admin: # Only admin has query access to all flavor types filters['is_public'] = self._parse_is_public( req.params.get('is_public', None)) else: filters['is_public'] = True filters['disabled'] = False if 'minRam' in req.params: try: filters['min_memory_mb'] = int(req.params['minRam']) except ValueError: msg = _('Invalid minRam filter [%s]') % req.params['minRam'] raise webob.exc.HTTPBadRequest(explanation=msg) if 'minDisk' in req.params: try: filters['min_root_gb'] = int(req.params['minDisk']) except ValueError: msg = (_('Invalid minDisk filter [%s]') % req.params['minDisk']) raise webob.exc.HTTPBadRequest(explanation=msg) try: limited_flavors = objects.FlavorList.get_all(context, filters=filters, sort_key=sort_key, sort_dir=sort_dir, limit=limit, marker=marker) except exception.MarkerNotFound: msg = _('marker [%s] not found') % marker raise webob.exc.HTTPBadRequest(explanation=msg) return limited_flavors nova-17.0.1/nova/api/openstack/compute/floating_ip_pools.py0000666000175000017500000000333313250073126024030 0ustar zuulzuul00000000000000# Copyright (c) 2011 X.commerce, a business unit of eBay Inc. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.api.openstack.api_version_request \ import MAX_PROXY_API_SUPPORT_VERSION from nova.api.openstack import wsgi from nova import network from nova.policies import floating_ip_pools as fip_policies def _translate_floating_ip_view(pool_name): return { 'name': pool_name, } def _translate_floating_ip_pools_view(pools): return { 'floating_ip_pools': [_translate_floating_ip_view(pool_name) for pool_name in pools] } class FloatingIPPoolsController(wsgi.Controller): """The Floating IP Pool API controller for the OpenStack API.""" def __init__(self): self.network_api = network.API() super(FloatingIPPoolsController, self).__init__() @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.expected_errors(()) def index(self, req): """Return a list of pools.""" context = req.environ['nova.context'] context.can(fip_policies.BASE_POLICY_NAME) pools = self.network_api.get_floating_ip_pools(context) return _translate_floating_ip_pools_view(pools) nova-17.0.1/nova/api/openstack/compute/services.py0000666000175000017500000003416113250073126022147 0ustar zuulzuul00000000000000# Copyright 2012 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_utils import strutils from oslo_utils import uuidutils import webob.exc from nova.api.openstack import api_version_request from nova.api.openstack.compute.schemas import services from nova.api.openstack import wsgi from nova.api import validation from nova import availability_zones from nova import compute from nova import exception from nova.i18n import _ from nova.policies import services as services_policies from nova import servicegroup from nova import utils UUID_FOR_ID_MIN_VERSION = '2.53' class ServiceController(wsgi.Controller): def __init__(self): self.host_api = compute.HostAPI() self.aggregate_api = compute.api.AggregateAPI() self.servicegroup_api = servicegroup.API() self.actions = {"enable": self._enable, "disable": self._disable, "disable-log-reason": self._disable_log_reason} def _get_services(self, req): # The API services are filtered out since they are not RPC services # and therefore their state is not reported through the service group # API, so they would always be reported as 'down' (see bug 1543625). 
api_services = ('nova-osapi_compute', 'nova-ec2', 'nova-metadata') context = req.environ['nova.context'] context.can(services_policies.BASE_POLICY_NAME) _services = [ s for s in self.host_api.service_get_all(context, set_zones=True, all_cells=True) if s['binary'] not in api_services ] host = '' if 'host' in req.GET: host = req.GET['host'] binary = '' if 'binary' in req.GET: binary = req.GET['binary'] if host: _services = [s for s in _services if s['host'] == host] if binary: _services = [s for s in _services if s['binary'] == binary] return _services def _get_service_detail(self, svc, additional_fields, req): alive = self.servicegroup_api.service_is_up(svc) state = (alive and "up") or "down" active = 'enabled' if svc['disabled']: active = 'disabled' updated_time = self.servicegroup_api.get_updated_time(svc) uuid_for_id = api_version_request.is_supported( req, min_version=UUID_FOR_ID_MIN_VERSION) if 'availability_zone' not in svc: # The service wasn't loaded with the AZ so we need to do it here. # Yes this looks weird, but set_availability_zones makes a copy of # the list passed in and mutates the objects within it, so we have # to pull it back out from the resulting copied list. svc.availability_zone = ( availability_zones.set_availability_zones( req.environ['nova.context'], [svc])[0]['availability_zone']) service_detail = {'binary': svc['binary'], 'host': svc['host'], 'id': svc['uuid' if uuid_for_id else 'id'], 'zone': svc['availability_zone'], 'status': active, 'state': state, 'updated_at': updated_time, 'disabled_reason': svc['disabled_reason']} for field in additional_fields: service_detail[field] = svc[field] return service_detail def _get_services_list(self, req, additional_fields=()): _services = self._get_services(req) return [self._get_service_detail(svc, additional_fields, req) for svc in _services] def _enable(self, body, context): """Enable scheduling for a service.""" return self._enable_disable(body, context, "enabled", {'disabled': False, 'disabled_reason': None}) def _disable(self, body, context, reason=None): """Disable scheduling for a service with optional log.""" return self._enable_disable(body, context, "disabled", {'disabled': True, 'disabled_reason': reason}) def _disable_log_reason(self, body, context): """Disable scheduling for a service with a log.""" try: reason = body['disabled_reason'] except KeyError: msg = _('Missing disabled reason field') raise webob.exc.HTTPBadRequest(explanation=msg) return self._disable(body, context, reason) def _enable_disable(self, body, context, status, params_to_update): """Enable/Disable scheduling for a service.""" reason = params_to_update.get('disabled_reason') ret_value = { 'service': { 'host': body['host'], 'binary': body['binary'], 'status': status }, } if reason: ret_value['service']['disabled_reason'] = reason self._update(context, body['host'], body['binary'], params_to_update) return ret_value def _forced_down(self, body, context): """Set or unset forced_down flag for the service""" try: forced_down = strutils.bool_from_string(body["forced_down"]) except KeyError: msg = _('Missing forced_down field') raise webob.exc.HTTPBadRequest(explanation=msg) host = body['host'] binary = body['binary'] ret_value = {'service': {'host': host, 'binary': binary, 'forced_down': forced_down}} self._update(context, host, binary, {"forced_down": forced_down}) return ret_value def _update(self, context, host, binary, payload): """Do the actual PUT/update""" try: self.host_api.service_update(context, host, binary, payload) except 
(exception.HostBinaryNotFound, exception.HostMappingNotFound) as exc: raise webob.exc.HTTPNotFound(explanation=exc.format_message()) def _perform_action(self, req, id, body, actions): """Calculate action dictionary dependent on provided fields""" context = req.environ['nova.context'] context.can(services_policies.BASE_POLICY_NAME) try: action = actions[id] except KeyError: msg = _("Unknown action") raise webob.exc.HTTPNotFound(explanation=msg) return action(body, context) @wsgi.response(204) @wsgi.expected_errors((400, 404)) def delete(self, req, id): """Deletes the specified service.""" context = req.environ['nova.context'] context.can(services_policies.BASE_POLICY_NAME) if api_version_request.is_supported( req, min_version=UUID_FOR_ID_MIN_VERSION): if not uuidutils.is_uuid_like(id): msg = _('Invalid uuid %s') % id raise webob.exc.HTTPBadRequest(explanation=msg) else: try: utils.validate_integer(id, 'id') except exception.InvalidInput as exc: raise webob.exc.HTTPBadRequest( explanation=exc.format_message()) try: service = self.host_api.service_get_by_id(context, id) # remove the service from all the aggregates in which it's included if service.binary == 'nova-compute': aggrs = self.aggregate_api.get_aggregates_by_host(context, service.host) for ag in aggrs: self.aggregate_api.remove_host_from_aggregate(context, ag.id, service.host) self.host_api.service_delete(context, id) except exception.ServiceNotFound: explanation = _("Service %s not found.") % id raise webob.exc.HTTPNotFound(explanation=explanation) except exception.ServiceNotUnique: explanation = _("Service id %s refers to multiple services.") % id raise webob.exc.HTTPBadRequest(explanation=explanation) @validation.query_schema(services.index_query_schema) @wsgi.expected_errors(()) def index(self, req): """Return a list of all running services. Filter by host & service name """ if api_version_request.is_supported(req, min_version='2.11'): _services = self._get_services_list(req, ['forced_down']) else: _services = self._get_services_list(req) return {'services': _services} @wsgi.Controller.api_version('2.1', '2.52') @wsgi.expected_errors((400, 404)) @validation.schema(services.service_update, '2.0', '2.10') @validation.schema(services.service_update_v211, '2.11', '2.52') def update(self, req, id, body): """Perform service update Before microversion 2.53, the body contains a host and binary value to identify the service on which to perform the action. There is no service ID passed on the path, just the action, for example PUT /os-services/disable. """ if api_version_request.is_supported(req, min_version='2.11'): actions = self.actions.copy() actions["force-down"] = self._forced_down else: actions = self.actions return self._perform_action(req, id, body, actions) @wsgi.Controller.api_version(UUID_FOR_ID_MIN_VERSION) # noqa F811 @wsgi.expected_errors((400, 404)) @validation.schema(services.service_update_v2_53, UUID_FOR_ID_MIN_VERSION) def update(self, req, id, body): """Perform service update Starting with microversion 2.53, the service uuid is passed in on the path of the request to uniquely identify the service record on which to perform a given update, which is defined in the body of the request. """ service_id = id # Validate that the service ID is a UUID. if not uuidutils.is_uuid_like(service_id): msg = _('Invalid uuid %s') % service_id raise webob.exc.HTTPBadRequest(explanation=msg) # Validate the request context against the policy. 
context = req.environ['nova.context'] context.can(services_policies.BASE_POLICY_NAME) # Get the service by uuid. try: service = self.host_api.service_get_by_id(context, service_id) # At this point the context is targeted to the cell that the # service was found in so we don't need to do any explicit cell # targeting below. except exception.ServiceNotFound as e: raise webob.exc.HTTPNotFound(explanation=e.format_message()) # Return 400 if service.binary is not nova-compute. # Before the earlier PUT handlers were made cells-aware, you could # technically disable a nova-scheduler service, although that doesn't # really do anything within Nova and is just confusing. Now trying to # do that will fail as a nova-scheduler service won't have a host # mapping so you'll get a 404. In this new microversion, we close that # old gap and make sure you can only enable/disable and set forced_down # on nova-compute services since those are the only ones that make # sense to update for those operations. if service.binary != 'nova-compute': msg = (_('Updating a %(binary)s service is not supported. Only ' 'nova-compute services can be updated.') % {'binary': service.binary}) raise webob.exc.HTTPBadRequest(explanation=msg) # Now determine the update to perform based on the body. We are # intentionally not using _perform_action or the other old-style # action functions. if 'status' in body: # This is a status update for either enabled or disabled. if body['status'] == 'enabled': # Fail if 'disabled_reason' was requested when enabling the # service since those two combined don't make sense. if body.get('disabled_reason'): msg = _("Specifying 'disabled_reason' with status " "'enabled' is invalid.") raise webob.exc.HTTPBadRequest(explanation=msg) service.disabled = False service.disabled_reason = None elif body['status'] == 'disabled': service.disabled = True # The disabled reason is optional. service.disabled_reason = body.get('disabled_reason') # This is intentionally not an elif, i.e. it's in addition to the # status update. if 'forced_down' in body: service.forced_down = strutils.bool_from_string( body['forced_down'], strict=True) # Check to see if anything was actually updated since the schema does # not define any required fields. if not service.obj_what_changed(): msg = _("No updates were requested. Fields 'status' or " "'forced_down' should be specified.") raise webob.exc.HTTPBadRequest(explanation=msg) # Now save our updates to the service record in the database. service.save() # Return the full service record details. additional_fields = ['forced_down'] return {'service': self._get_service_detail( service, additional_fields, req)} nova-17.0.1/nova/api/openstack/compute/create_backup.py0000666000175000017500000000731513250073126023115 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
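# An illustrative createBackup action body handled by this controller
# (example values):
#
#     {"createBackup": {"name": "weekly-backup",
#                       "backup_type": "weekly",
#                       "rotation": 4,
#                       "metadata": {"purpose": "example"}}}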
import webob from nova.api.openstack import api_version_request from nova.api.openstack import common from nova.api.openstack.compute.schemas import create_backup from nova.api.openstack import wsgi from nova.api import validation from nova import compute from nova import exception from nova.policies import create_backup as cb_policies class CreateBackupController(wsgi.Controller): def __init__(self, *args, **kwargs): super(CreateBackupController, self).__init__(*args, **kwargs) self.compute_api = compute.API() @wsgi.response(202) @wsgi.expected_errors((400, 403, 404, 409)) @wsgi.action('createBackup') @validation.schema(create_backup.create_backup_v20, '2.0', '2.0') @validation.schema(create_backup.create_backup, '2.1') def _create_backup(self, req, id, body): """Backup a server instance. Images now have an `image_type` associated with them, which can be 'snapshot' or the backup type, like 'daily' or 'weekly'. If the image_type is backup-like, then the rotation factor can be included and that will cause the oldest backups that exceed the rotation factor to be deleted. """ context = req.environ["nova.context"] context.can(cb_policies.BASE_POLICY_NAME) entity = body["createBackup"] image_name = common.normalize_name(entity["name"]) backup_type = entity["backup_type"] rotation = int(entity["rotation"]) props = {} metadata = entity.get('metadata', {}) # Starting from microversion 2.39 we don't check quotas on createBackup if api_version_request.is_supported( req, max_version= api_version_request.MAX_IMAGE_META_PROXY_API_VERSION): common.check_img_metadata_properties_quota(context, metadata) props.update(metadata) instance = common.get_instance(self.compute_api, context, id) try: image = self.compute_api.backup(context, instance, image_name, backup_type, rotation, extra_properties=props) except exception.InstanceUnknownCell as e: raise webob.exc.HTTPNotFound(explanation=e.format_message()) except exception.InstanceInvalidState as state_error: common.raise_http_conflict_for_instance_invalid_state(state_error, 'createBackup', id) except exception.InvalidRequest as e: raise webob.exc.HTTPBadRequest(explanation=e.format_message()) # Starting with microversion 2.45 we return a response body containing # the snapshot image id without the Location header. if api_version_request.is_supported(req, '2.45'): return {'image_id': image['id']} resp = webob.Response(status_int=202) # build location of newly-created image entity if rotation is not zero if rotation > 0: image_id = str(image['id']) image_ref = common.url_join(req.application_url, 'images', image_id) resp.headers['Location'] = image_ref return resp nova-17.0.1/nova/api/openstack/compute/schemas/0000775000175000017500000000000013250073471021371 5ustar zuulzuul00000000000000nova-17.0.1/nova/api/openstack/compute/schemas/cells.py0000666000175000017500000000707613250073126023056 0ustar zuulzuul00000000000000# Copyright 2014 NEC Corporation. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
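# An illustrative body accepted by the 'create' schema below (example
# values):
#
#     {"cell": {"name": "cell1", "type": "child",
#               "rpc_host": "10.0.0.5", "rpc_port": 5673}}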
import copy

from nova.api.validation import parameter_types


create = {
    'type': 'object',
    'properties': {
        'cell': {
            'type': 'object',
            'properties': {
                'name': parameter_types.cell_name,
                'type': {
                    'type': 'string',
                    'enum': ['parent', 'child'],
                },
                # NOTE: In unparse_transport_url(), a url consists of the
                # following parameters:
                #   "qpid://<username>:<password>@<rpc_host>:<rpc_port>/"
                # or
                #   "rabbit://<username>:<password>@<rpc_host>:<rpc_port>/"
                # Then the url is stored into transport_url of cells table
                # which is defined with String(255).
                'username': {
                    'type': 'string', 'maxLength': 255,
                    'pattern': '^[a-zA-Z0-9-_]*$'
                },
                'password': {
                    # Allow to specify any string for strong password.
                    'type': 'string', 'maxLength': 255,
                },
                'rpc_host': parameter_types.hostname_or_ip_address,
                'rpc_port': parameter_types.tcp_udp_port,
                'rpc_virtual_host': parameter_types.hostname_or_ip_address,
            },
            'required': ['name'],
            'additionalProperties': False,
        },
    },
    'required': ['cell'],
    'additionalProperties': False,
}

create_v20 = copy.deepcopy(create)
create_v20['properties']['cell']['properties']['name'] = (parameter_types.
    cell_name_leading_trailing_spaces)


update = {
    'type': 'object',
    'properties': {
        'cell': {
            'type': 'object',
            'properties': {
                'name': parameter_types.cell_name,
                'type': {
                    'type': 'string',
                    'enum': ['parent', 'child'],
                },
                'username': {
                    'type': 'string', 'maxLength': 255,
                    'pattern': '^[a-zA-Z0-9-_]*$'
                },
                'password': {
                    'type': 'string', 'maxLength': 255,
                },
                'rpc_host': parameter_types.hostname_or_ip_address,
                'rpc_port': parameter_types.tcp_udp_port,
                'rpc_virtual_host': parameter_types.hostname_or_ip_address,
            },
            'additionalProperties': False,
        },
    },
    'required': ['cell'],
    'additionalProperties': False,
}

update_v20 = copy.deepcopy(update)
update_v20['properties']['cell']['properties']['name'] = (parameter_types.
    cell_name_leading_trailing_spaces)


sync_instances = {
    'type': 'object',
    'properties': {
        'project_id': parameter_types.project_id,
        'deleted': parameter_types.boolean,
        'updated_since': {
            'type': 'string',
            'format': 'date-time',
        },
    },
    'additionalProperties': False,
}
nova-17.0.1/nova/api/openstack/compute/schemas/hosts.py0000666000175000017500000000325413250073126023106 0ustar zuulzuul00000000000000# Copyright 2014 NEC Corporation. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from nova.api.validation import parameter_types

update = {
    'type': 'object',
    'properties': {
        'status': {
            'type': 'string',
            'enum': ['enable', 'disable',
                     'Enable', 'Disable',
                     'ENABLE', 'DISABLE'],
        },
        'maintenance_mode': {
            'type': 'string',
            'enum': ['enable', 'disable',
                     'Enable', 'Disable',
                     'ENABLE', 'DISABLE'],
        },
    },
    # At least one of 'status' or 'maintenance_mode' must be specified.
    'anyOf': [
        {'required': ['status']},
        {'required': ['maintenance_mode']}
    ],
    'additionalProperties': False
}

index_query = {
    'type': 'object',
    'properties': {
        'zone': parameter_types.multi_params({'type': 'string'})
    },
    # NOTE(gmann): This is kept True to keep backward compatibility.
    # As of now Schema validation stripped out the additional parameters and
    # does not raise 400. In the future, we may block the additional parameters
    # by bump in Microversion.
'additionalProperties': True } nova-17.0.1/nova/api/openstack/compute/schemas/migrations.py0000666000175000017500000000322413250073126024117 0ustar zuulzuul00000000000000# Copyright 2017 Huawei Technologies Co.,LTD. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy from nova.api.validation import parameter_types list_query_schema_v20 = { 'type': 'object', 'properties': { 'hidden': parameter_types.common_query_param, 'host': parameter_types.common_query_param, 'instance_uuid': parameter_types.common_query_param, 'source_compute': parameter_types.common_query_param, 'status': parameter_types.common_query_param, 'migration_type': parameter_types.common_query_param, }, # For backward compatible changes 'additionalProperties': True } list_query_params_v259 = copy.deepcopy(list_query_schema_v20) list_query_params_v259['properties'].update({ # The 2.59 microversion added support for paging by limit and marker # and filtering by changes-since. 'limit': parameter_types.single_param( parameter_types.non_negative_integer), 'marker': parameter_types.single_param({'type': 'string'}), 'changes-since': parameter_types.single_param( {'type': 'string', 'format': 'date-time'}), }) list_query_params_v259['additionalProperties'] = False nova-17.0.1/nova/api/openstack/compute/schemas/user_data.py0000666000175000017500000000163113250073126023712 0ustar zuulzuul00000000000000# Copyright 2014 NEC Corporation. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. common_user_data = { 'type': 'string', 'format': 'base64', 'maxLength': 65535 } server_create = { 'user_data': common_user_data } server_create_v20 = { 'user_data': { 'oneOf': [ common_user_data, {'type': 'null'}, ], }, } nova-17.0.1/nova/api/openstack/compute/schemas/migrate_server.py0000666000175000017500000000426113250073126024763 0ustar zuulzuul00000000000000# Copyright 2014 NEC Corporation. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
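# Illustrative bodies matching the schemas below (example values):
#
#     {"migrate": null}                                  # 2.56+
#     {"os-migrateLive": {"host": null,
#                         "block_migration": "auto"}}    # 2.25+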
import copy from nova.api.validation import parameter_types host = copy.deepcopy(parameter_types.hostname) host['type'] = ['string', 'null'] migrate_v2_56 = { 'type': 'object', 'properties': { 'migrate': { 'type': ['object', 'null'], 'properties': { 'host': host, }, 'additionalProperties': False, }, }, 'required': ['migrate'], 'additionalProperties': False, } migrate_live = { 'type': 'object', 'properties': { 'os-migrateLive': { 'type': 'object', 'properties': { 'block_migration': parameter_types.boolean, 'disk_over_commit': parameter_types.boolean, 'host': host }, 'required': ['block_migration', 'disk_over_commit', 'host'], 'additionalProperties': False, }, }, 'required': ['os-migrateLive'], 'additionalProperties': False, } block_migration = copy.deepcopy(parameter_types.boolean) block_migration['enum'].append('auto') migrate_live_v2_25 = copy.deepcopy(migrate_live) del migrate_live_v2_25['properties']['os-migrateLive']['properties'][ 'disk_over_commit'] migrate_live_v2_25['properties']['os-migrateLive']['properties'][ 'block_migration'] = block_migration migrate_live_v2_25['properties']['os-migrateLive']['required'] = ( ['block_migration', 'host']) migrate_live_v2_30 = copy.deepcopy(migrate_live_v2_25) migrate_live_v2_30['properties']['os-migrateLive']['properties'][ 'force'] = parameter_types.boolean nova-17.0.1/nova/api/openstack/compute/schemas/block_device_mapping_v1.py0000666000175000017500000000336513250073126026503 0ustar zuulzuul00000000000000# Copyright 2014 NEC Corporation. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.api.validation import parameter_types legacy_block_device_mapping = { 'type': 'object', 'properties': { 'virtual_name': { 'type': 'string', 'maxLength': 255, }, 'volume_id': parameter_types.volume_id, 'snapshot_id': parameter_types.image_id, 'volume_size': parameter_types.volume_size, # Do not allow empty device names and number values and # containing spaces(defined in nova/block_device.py:from_api()) 'device_name': { 'type': 'string', 'minLength': 1, 'maxLength': 255, 'pattern': '^[a-zA-Z0-9._-r/]*$', }, # Defined as boolean in nova/block_device.py:from_api() 'delete_on_termination': parameter_types.boolean, 'no_device': {}, # Defined as mediumtext in column "connection_info" in table # "block_device_mapping" 'connection_info': { 'type': 'string', 'maxLength': 16777215 }, }, 'additionalProperties': False } server_create = { 'block_device_mapping': { 'type': 'array', 'items': legacy_block_device_mapping } } nova-17.0.1/nova/api/openstack/compute/schemas/console_output.py0000666000175000017500000000256113250073126025030 0ustar zuulzuul00000000000000# Copyright 2014 NEC Corporation. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. get_console_output = { 'type': 'object', 'properties': { 'os-getConsoleOutput': { 'type': 'object', 'properties': { 'length': { 'type': ['integer', 'string', 'null'], 'pattern': '^-?[0-9]+$', # NOTE: -1 means an unlimited length. # TODO(cyeoh): None also means unlimited length # and is supported for v2 backwards compatibility # Should remove in the future with a microversion 'minimum': -1, }, }, 'additionalProperties': False, }, }, 'required': ['os-getConsoleOutput'], 'additionalProperties': False, } nova-17.0.1/nova/api/openstack/compute/schemas/flavors_extraspecs.py0000666000175000017500000000225313250073126025661 0ustar zuulzuul00000000000000# Copyright 2014 NEC Corporation. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy from nova.api.validation import parameter_types # NOTE(oomichi): The metadata of flavor_extraspecs should accept numbers # as its values. metadata = copy.deepcopy(parameter_types.metadata) metadata['patternProperties']['^[a-zA-Z0-9-_:. ]{1,255}$']['type'] = \ ['string', 'number'] create = { 'type': 'object', 'properties': { 'extra_specs': metadata }, 'required': ['extra_specs'], 'additionalProperties': False, } update = copy.deepcopy(metadata) update.update({ 'minProperties': 1, 'maxProperties': 1 }) nova-17.0.1/nova/api/openstack/compute/schemas/server_migrations.py0000666000175000017500000000151313250073126025504 0ustar zuulzuul00000000000000# Copyright 2016 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. force_complete = { 'type': 'object', 'properties': { 'force_complete': { 'type': 'null' } }, 'required': ['force_complete'], 'additionalProperties': False, } nova-17.0.1/nova/api/openstack/compute/schemas/networks.py0000666000175000017500000000535313250073126023624 0ustar zuulzuul00000000000000# Copyright 2015 NEC Corporation. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.api.validation import parameter_types create = { 'type': 'object', 'properties': { 'network': { 'type': 'object', 'properties': { 'label': { 'type': 'string', 'maxLength': 255 }, 'ipam': parameter_types.boolean, 'cidr': parameter_types.cidr, 'cidr_v6': parameter_types.cidr, 'project_id': parameter_types.project_id, 'multi_host': parameter_types.boolean, 'gateway': parameter_types.ipv4, 'gateway_v6': parameter_types.ipv6, 'bridge': { 'type': 'string', }, 'bridge_interface': { 'type': 'string', }, # NOTE: In _extract_subnets(), dns1, dns2 dhcp_server are # used only for IPv4, not IPv6. 'dns1': parameter_types.ipv4, 'dns2': parameter_types.ipv4, 'dhcp_server': parameter_types.ipv4, 'fixed_cidr': parameter_types.cidr, 'allowed_start': parameter_types.ip_address, 'allowed_end': parameter_types.ip_address, 'enable_dhcp': parameter_types.boolean, 'share_address': parameter_types.boolean, 'mtu': parameter_types.positive_integer_with_empty_str, 'vlan': parameter_types.positive_integer_with_empty_str, 'vlan_start': parameter_types.positive_integer_with_empty_str, 'vpn_start': { 'type': 'string', }, }, 'required': ['label'], 'oneOf': [ {'required': ['cidr']}, {'required': ['cidr_v6']} ], 'additionalProperties': False, }, }, 'required': ['network'], 'additionalProperties': False, } add_network_to_project = { 'type': 'object', 'properties': { 'id': {'type': ['string', 'null']} }, 'required': ['id'], 'additionalProperties': False } nova-17.0.1/nova/api/openstack/compute/schemas/quota_sets.py0000666000175000017500000000572113250073126024136 0ustar zuulzuul00000000000000# Copyright 2014 NEC Corporation. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
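# NOTE(editor): illustration only, not part of the upstream module. A
# sketch of a PUT /os-quota-sets/{tenant_id} body that satisfies the
# `update` schema defined below; each resource accepts an integer or a
# digit string, with -1 meaning unlimited:
#
#     import jsonschema
#     body = {'quota_set': {'instances': 20, 'cores': '-1',
#                           'force': 'true'}}
#     jsonschema.Draft4Validator(update).validate(body)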
import copy from nova.api.validation import parameter_types from nova import db common_quota = { 'type': ['integer', 'string'], 'pattern': '^-?[0-9]+$', # -1 is a flag value for unlimited 'minimum': -1, 'maximum': db.MAX_INT } quota_resources = { 'instances': common_quota, 'cores': common_quota, 'ram': common_quota, 'floating_ips': common_quota, 'fixed_ips': common_quota, 'metadata_items': common_quota, 'key_pairs': common_quota, 'security_groups': common_quota, 'security_group_rules': common_quota, 'injected_files': common_quota, 'injected_file_content_bytes': common_quota, 'injected_file_path_bytes': common_quota, 'server_groups': common_quota, 'server_group_members': common_quota, 'networks': common_quota } update_quota_set = copy.deepcopy(quota_resources) update_quota_set.update({'force': parameter_types.boolean}) update_quota_set_v236 = copy.deepcopy(update_quota_set) del update_quota_set_v236['fixed_ips'] del update_quota_set_v236['floating_ips'] del update_quota_set_v236['security_groups'] del update_quota_set_v236['security_group_rules'] del update_quota_set_v236['networks'] update = { 'type': 'object', 'properties': { 'type': 'object', 'quota_set': { 'properties': update_quota_set, 'additionalProperties': False, }, }, 'required': ['quota_set'], 'additionalProperties': False, } update_v236 = copy.deepcopy(update) update_v236['properties']['quota_set']['properties'] = update_quota_set_v236 # 2.57 builds on 2.36 and removes injected_file* quotas. update_quota_set_v257 = copy.deepcopy(update_quota_set_v236) del update_quota_set_v257['injected_files'] del update_quota_set_v257['injected_file_content_bytes'] del update_quota_set_v257['injected_file_path_bytes'] update_v257 = copy.deepcopy(update_v236) update_v257['properties']['quota_set']['properties'] = update_quota_set_v257 query_schema = { 'type': 'object', 'properties': { 'user_id': parameter_types.multi_params({'type': 'string'}) }, # NOTE(gmann): This is kept True to keep backward compatibility. # As of now Schema validation stripped out the additional parameters and # does not raise 400. In the future, we may block the additional parameters # by bump in Microversion. 'additionalProperties': True } nova-17.0.1/nova/api/openstack/compute/schemas/networks_associate.py0000666000175000017500000000154613250073126025657 0ustar zuulzuul00000000000000# Copyright 2015 NEC Corporation. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.api.validation import parameter_types associate_host = { 'type': 'object', 'properties': { 'associate_host': parameter_types.hostname }, 'required': ['associate_host'], 'additionalProperties': False } nova-17.0.1/nova/api/openstack/compute/schemas/simple_tenant_usage.py0000666000175000017500000000363113250073126025773 0ustar zuulzuul00000000000000# Copyright 2017 NEC Corporation. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy from nova.api.validation import parameter_types index_query = { 'type': 'object', 'properties': { 'start': parameter_types.multi_params({'type': 'string'}), 'end': parameter_types.multi_params({'type': 'string'}), 'detailed': parameter_types.multi_params({'type': 'string'}) }, # NOTE(gmann): This is kept True to keep backward compatibility. # As of now Schema validation stripped out the additional parameters and # does not raise 400. In the future, we may block the additional parameters # by bump in Microversion. 'additionalProperties': True } show_query = { 'type': 'object', 'properties': { 'start': parameter_types.multi_params({'type': 'string'}), 'end': parameter_types.multi_params({'type': 'string'}) }, # NOTE(gmann): This is kept True to keep backward compatibility. # As of now Schema validation stripped out the additional parameters and # does not raise 400. In the future, we may block the additional parameters # by bump in Microversion. 'additionalProperties': True } index_query_v240 = copy.deepcopy(index_query) index_query_v240['properties'].update( parameter_types.pagination_parameters) show_query_v240 = copy.deepcopy(show_query) show_query_v240['properties'].update( parameter_types.pagination_parameters) nova-17.0.1/nova/api/openstack/compute/schemas/server_tags.py0000666000175000017500000000205713250073126024272 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.api.validation import parameter_types from nova.objects import instance update_all = { "title": "Server tags", "type": "object", "properties": { "tags": { "type": "array", "items": parameter_types.tag, "maxItems": instance.MAX_TAG_COUNT } }, 'required': ['tags'], 'additionalProperties': False } update = { "title": "Server tag", "type": "null", 'required': [], 'additionalProperties': False } nova-17.0.1/nova/api/openstack/compute/schemas/multiple_create.py0000666000175000017500000000152713250073126025125 0ustar zuulzuul00000000000000# Copyright 2014 NEC Corporation. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
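# NOTE(editor): illustration only, not part of the upstream module. The
# fragments below are folded into the server-create request schema by the
# servers API (see servers.py), so a boot request may carry, for example
# (the imageRef value is a placeholder):
#
#     {"server": {"name": "vm", "flavorRef": "1",
#                 "imageRef": "11111111-2222-3333-4444-555555555555",
#                 "min_count": 1, "max_count": "3",
#                 "return_reservation_id": "True"}}
#
# positive_integer accepts both integers and digit strings, and boolean
# accepts the usual true/false spellings.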
from nova.api.validation import parameter_types server_create = { 'min_count': parameter_types.positive_integer, 'max_count': parameter_types.positive_integer, 'return_reservation_id': parameter_types.boolean, } nova-17.0.1/nova/api/openstack/compute/schemas/hypervisors.py0000666000175000017500000000370013250073126024337 0ustar zuulzuul00000000000000# Copyright 2017 Huawei Technologies Co.,LTD. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.api.validation import parameter_types list_query_schema_v233 = { 'type': 'object', 'properties': parameter_types.pagination_parameters, # NOTE(gmann): This is kept True to keep backward compatibility. # As of now Schema validation stripped out the additional parameters and # does not raise 400. In microversion 2.53, the additional parameters # are not allowed. 'additionalProperties': True } list_query_schema_v253 = { 'type': 'object', 'properties': { # The 2.33 microversion added support for paging by limit and marker. 'limit': parameter_types.single_param( parameter_types.non_negative_integer), 'marker': parameter_types.single_param({'type': 'string'}), # The 2.53 microversion adds support for filtering by hostname pattern # and requesting hosted servers in the GET /os-hypervisors and # GET /os-hypervisors/detail response. 'hypervisor_hostname_pattern': parameter_types.single_param( parameter_types.hostname), 'with_servers': parameter_types.single_param( parameter_types.boolean) }, 'additionalProperties': False } show_query_schema_v253 = { 'type': 'object', 'properties': { 'with_servers': parameter_types.single_param( parameter_types.boolean) }, 'additionalProperties': False } nova-17.0.1/nova/api/openstack/compute/schemas/rescue.py0000666000175000017500000000207613250073126023235 0ustar zuulzuul00000000000000# Copyright 2014 NEC Corporation. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.api.validation import parameter_types rescue = { 'type': 'object', 'properties': { 'rescue': { 'type': ['object', 'null'], 'properties': { 'adminPass': parameter_types.admin_password, 'rescue_image_ref': parameter_types.image_id, }, 'additionalProperties': False, }, }, 'required': ['rescue'], 'additionalProperties': False, } nova-17.0.1/nova/api/openstack/compute/schemas/evacuate.py0000666000175000017500000000274013250073126023542 0ustar zuulzuul00000000000000# Copyright 2014 NEC Corporation. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy from nova.api.validation import parameter_types evacuate = { 'type': 'object', 'properties': { 'evacuate': { 'type': 'object', 'properties': { 'host': parameter_types.hostname, 'onSharedStorage': parameter_types.boolean, 'adminPass': parameter_types.admin_password, }, 'required': ['onSharedStorage'], 'additionalProperties': False, }, }, 'required': ['evacuate'], 'additionalProperties': False, } evacuate_v214 = copy.deepcopy(evacuate) del evacuate_v214['properties']['evacuate']['properties']['onSharedStorage'] del evacuate_v214['properties']['evacuate']['required'] evacuate_v2_29 = copy.deepcopy(evacuate_v214) evacuate_v2_29['properties']['evacuate']['properties'][ 'force'] = parameter_types.boolean nova-17.0.1/nova/api/openstack/compute/schemas/flavors.py0000666000175000017500000000301613250073126023416 0ustar zuulzuul00000000000000# Copyright 2017 NEC Corporation. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.api.validation import parameter_types index_query = { 'type': 'object', 'properties': { 'limit': parameter_types.multi_params( parameter_types.non_negative_integer), 'marker': parameter_types.multi_params({'type': 'string'}), 'is_public': parameter_types.multi_params({'type': 'string'}), 'minRam': parameter_types.multi_params({'type': 'string'}), 'minDisk': parameter_types.multi_params({'type': 'string'}), 'sort_key': parameter_types.multi_params({'type': 'string'}), 'sort_dir': parameter_types.multi_params({'type': 'string'}) }, # NOTE(gmann): This is kept True to keep backward compatibility. # As of now Schema validation stripped out the additional parameters and # does not raise 400. In the future, we may block the additional parameters # by bump in Microversion. 'additionalProperties': True } nova-17.0.1/nova/api/openstack/compute/schemas/services.py0000666000175000017500000000502313250073126023565 0ustar zuulzuul00000000000000# Copyright 2014 NEC Corporation. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
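# NOTE(editor): illustration only, not part of the upstream module. A
# minimal sketch of a 2.53-style PUT /os-services/{service_id} body
# validated against the schema defined below; an empty body {} also
# passes, since no property is required at that microversion:
#
#     import jsonschema
#     body = {'status': 'disabled', 'disabled_reason': 'maintenance'}
#     jsonschema.Draft4Validator(service_update_v2_53).validate(body)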
from nova.api.validation import parameter_types service_update = { 'type': 'object', 'properties': { 'host': parameter_types.hostname, 'binary': { 'type': 'string', 'minLength': 1, 'maxLength': 255, }, 'disabled_reason': { 'type': 'string', 'minLength': 1, 'maxLength': 255, } }, 'required': ['host', 'binary'], 'additionalProperties': False } service_update_v211 = { 'type': 'object', 'properties': { 'host': parameter_types.hostname, 'binary': { 'type': 'string', 'minLength': 1, 'maxLength': 255, }, 'disabled_reason': { 'type': 'string', 'minLength': 1, 'maxLength': 255, }, 'forced_down': parameter_types.boolean }, 'required': ['host', 'binary'], 'additionalProperties': False } # The 2.53 body is for updating a service's status and/or forced_down fields. # There are no required attributes since the service is identified using a # unique service_id on the request path, and status and/or forced_down can # be specified in the body. If status=='disabled', then 'disabled_reason' is # also checked in the body but is not required. Requesting status='enabled' and # including a 'disabled_reason' results in a 400, but this is checked in code. service_update_v2_53 = { 'type': 'object', 'properties': { 'status': { 'type': 'string', 'enum': ['enabled', 'disabled'], }, 'disabled_reason': { 'type': 'string', 'minLength': 1, 'maxLength': 255, }, 'forced_down': parameter_types.boolean }, 'additionalProperties': False } index_query_schema = { 'type': 'object', 'properties': { 'host': parameter_types.common_query_param, 'binary': parameter_types.common_query_param, }, # For backward compatible changes 'additionalProperties': True } nova-17.0.1/nova/api/openstack/compute/schemas/create_backup.py0000666000175000017500000000272313250073126024536 0ustar zuulzuul00000000000000# Copyright 2014 NEC Corporation. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy from nova.api.validation import parameter_types create_backup = { 'type': 'object', 'properties': { 'createBackup': { 'type': 'object', 'properties': { 'name': parameter_types.name, 'backup_type': { 'type': 'string', }, 'rotation': parameter_types.non_negative_integer, 'metadata': parameter_types.metadata, }, 'required': ['name', 'backup_type', 'rotation'], 'additionalProperties': False, }, }, 'required': ['createBackup'], 'additionalProperties': False, } create_backup_v20 = copy.deepcopy(create_backup) create_backup_v20['properties'][ 'createBackup']['properties']['name'] = (parameter_types. name_with_leading_trailing_spaces) nova-17.0.1/nova/api/openstack/compute/schemas/__init__.py0000666000175000017500000000000013250073126023467 0ustar zuulzuul00000000000000nova-17.0.1/nova/api/openstack/compute/schemas/agents.py0000666000175000017500000000635413250073126023233 0ustar zuulzuul00000000000000# Copyright 2013 NEC Corporation. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.api.validation import parameter_types create = { 'type': 'object', 'properties': { 'agent': { 'type': 'object', 'properties': { 'hypervisor': { 'type': 'string', 'minLength': 0, 'maxLength': 255, 'pattern': '^[a-zA-Z0-9-._ ]*$' }, 'os': { 'type': 'string', 'minLength': 0, 'maxLength': 255, 'pattern': '^[a-zA-Z0-9-._ ]*$' }, 'architecture': { 'type': 'string', 'minLength': 0, 'maxLength': 255, 'pattern': '^[a-zA-Z0-9-._ ]*$' }, 'version': { 'type': 'string', 'minLength': 0, 'maxLength': 255, 'pattern': '^[a-zA-Z0-9-._ ]*$' }, 'url': { 'type': 'string', 'minLength': 0, 'maxLength': 255, 'format': 'uri' }, 'md5hash': { 'type': 'string', 'minLength': 0, 'maxLength': 255, 'pattern': '^[a-fA-F0-9]*$' }, }, 'required': ['hypervisor', 'os', 'architecture', 'version', 'url', 'md5hash'], 'additionalProperties': False, }, }, 'required': ['agent'], 'additionalProperties': False, } update = { 'type': 'object', 'properties': { 'para': { 'type': 'object', 'properties': { 'version': { 'type': 'string', 'minLength': 0, 'maxLength': 255, 'pattern': '^[a-zA-Z0-9-._ ]*$' }, 'url': { 'type': 'string', 'minLength': 0, 'maxLength': 255, 'format': 'uri' }, 'md5hash': { 'type': 'string', 'minLength': 0, 'maxLength': 255, 'pattern': '^[a-fA-F0-9]*$' }, }, 'required': ['version', 'url', 'md5hash'], 'additionalProperties': False, }, }, 'required': ['para'], 'additionalProperties': False, } index_query = { 'type': 'object', 'properties': { 'hypervisor': parameter_types.common_query_param }, # NOTE(gmann): This is kept True to keep backward compatibility. # As of now Schema validation stripped out the additional parameters and # does not raise 400. In the future, we may block the additional parameters # by bump in Microversion. 'additionalProperties': True } nova-17.0.1/nova/api/openstack/compute/schemas/volumes.py0000666000175000017500000000714013250073126023436 0ustar zuulzuul00000000000000# Copyright 2014 IBM Corporation. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
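# NOTE(editor): illustration only, not part of the upstream module. A
# sketch of a volume-attachment body accepted by create_volume_attachment
# below (the volume id is a placeholder; the 'uuid' format is only
# enforced when a format checker is supplied):
#
#     import jsonschema
#     body = {'volumeAttachment': {
#         'volumeId': '11111111-2222-3333-4444-555555555555',
#         'device': '/dev/vdb'}}
#     jsonschema.Draft4Validator(create_volume_attachment).validate(body)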
import copy from nova.api.validation import parameter_types create = { 'type': 'object', 'properties': { 'volume': { 'type': 'object', 'properties': { 'volume_type': {'type': 'string'}, 'metadata': {'type': 'object'}, 'snapshot_id': {'type': 'string'}, 'size': { 'type': ['integer', 'string'], 'pattern': '^[0-9]+$', 'minimum': 1 }, 'availability_zone': {'type': 'string'}, 'display_name': {'type': 'string'}, 'display_description': {'type': 'string'}, }, 'required': ['size'], 'additionalProperties': False, }, }, 'required': ['volume'], 'additionalProperties': False, } snapshot_create = { 'type': 'object', 'properties': { 'snapshot': { 'type': 'object', 'properties': { 'volume_id': {'type': 'string'}, 'force': parameter_types.boolean, 'display_name': {'type': 'string'}, 'display_description': {'type': 'string'}, }, 'required': ['volume_id'], 'additionalProperties': False, }, }, 'required': ['snapshot'], 'additionalProperties': False, } create_volume_attachment = { 'type': 'object', 'properties': { 'volumeAttachment': { 'type': 'object', 'properties': { 'volumeId': parameter_types.volume_id, 'device': { 'type': ['string', 'null'], # NOTE: The validation pattern from match_device() in # nova/block_device.py. 'pattern': '(^/dev/x{0,1}[a-z]{0,1}d{0,1})([a-z]+)[0-9]*$' } }, 'required': ['volumeId'], 'additionalProperties': False, }, }, 'required': ['volumeAttachment'], 'additionalProperties': False, } create_volume_attachment_v249 = copy.deepcopy(create_volume_attachment) create_volume_attachment_v249['properties']['volumeAttachment'][ 'properties']['tag'] = parameter_types.tag update_volume_attachment = copy.deepcopy(create_volume_attachment) del update_volume_attachment['properties']['volumeAttachment'][ 'properties']['device'] index_query = { 'type': 'object', 'properties': { 'limit': parameter_types.multi_params( parameter_types.non_negative_integer), 'offset': parameter_types.multi_params( parameter_types.non_negative_integer) }, # NOTE(gmann): This is kept True to keep backward compatibility. # As of now Schema validation stripped out the additional parameters and # does not raise 400. In the future, we may block the additional parameters # by bump in Microversion. 'additionalProperties': True } detail_query = index_query nova-17.0.1/nova/api/openstack/compute/schemas/admin_password.py0000666000175000017500000000206613250073126024760 0ustar zuulzuul00000000000000# Copyright 2013 NEC Corporation. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.api.validation import parameter_types change_password = { 'type': 'object', 'properties': { 'changePassword': { 'type': 'object', 'properties': { 'adminPass': parameter_types.admin_password, }, 'required': ['adminPass'], 'additionalProperties': False, }, }, 'required': ['changePassword'], 'additionalProperties': False, } nova-17.0.1/nova/api/openstack/compute/schemas/servers.py0000666000175000017500000003704313250073126023442 0ustar zuulzuul00000000000000# Copyright 2014 NEC Corporation. All rights reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy from nova.api.openstack.compute.schemas import user_data from nova.api.validation import parameter_types from nova.api.validation.parameter_types import multi_params from nova.objects import instance base_create = { 'type': 'object', 'properties': { 'server': { 'type': 'object', 'properties': { 'name': parameter_types.name, # NOTE(gmann): In case of boot from volume, imageRef was # allowed as the empty string also So keeping the same # behavior and allow empty string in case of boot from # volume only. Python code make sure empty string is # not allowed for other cases. 'imageRef': parameter_types.image_id_or_empty_string, 'flavorRef': parameter_types.flavor_ref, 'adminPass': parameter_types.admin_password, 'metadata': parameter_types.metadata, 'networks': { 'type': 'array', 'items': { 'type': 'object', 'properties': { 'fixed_ip': parameter_types.ip_address, 'port': { 'oneOf': [{'type': 'string', 'format': 'uuid'}, {'type': 'null'}] }, 'uuid': {'type': 'string'}, }, 'additionalProperties': False, } }, 'OS-DCF:diskConfig': parameter_types.disk_config, 'accessIPv4': parameter_types.accessIPv4, 'accessIPv6': parameter_types.accessIPv6, 'personality': parameter_types.personality, 'availability_zone': parameter_types.name, }, 'required': ['name', 'flavorRef'], 'additionalProperties': False, }, }, 'required': ['server'], 'additionalProperties': False, } base_create_v20 = copy.deepcopy(base_create) base_create_v20['properties']['server'][ 'properties']['name'] = parameter_types.name_with_leading_trailing_spaces base_create_v20['properties']['server']['properties'][ 'availability_zone'] = parameter_types.name_with_leading_trailing_spaces base_create_v219 = copy.deepcopy(base_create) base_create_v219['properties']['server'][ 'properties']['description'] = parameter_types.description base_create_v232 = copy.deepcopy(base_create_v219) base_create_v232['properties']['server'][ 'properties']['networks']['items'][ 'properties']['tag'] = parameter_types.tag # 2.37 builds on 2.32 and makes the following changes: # 1. server.networks is required # 2. server.networks is now either an enum or a list # 3. server.networks.uuid is now required to be a uuid base_create_v237 = copy.deepcopy(base_create_v232) base_create_v237['properties']['server']['required'].append('networks') base_create_v237['properties']['server']['properties']['networks'] = { 'oneOf': [ {'type': 'array', 'items': { 'type': 'object', 'properties': { 'fixed_ip': parameter_types.ip_address, 'port': { 'oneOf': [{'type': 'string', 'format': 'uuid'}, {'type': 'null'}] }, 'uuid': {'type': 'string', 'format': 'uuid'}, }, 'additionalProperties': False, }, }, {'type': 'string', 'enum': ['none', 'auto']}, ]} # 2.42 builds on 2.37 and re-introduces the tag field to the list of network # objects. 
base_create_v242 = copy.deepcopy(base_create_v237) base_create_v242['properties']['server']['properties']['networks'] = { 'oneOf': [ {'type': 'array', 'items': { 'type': 'object', 'properties': { 'fixed_ip': parameter_types.ip_address, 'port': { 'oneOf': [{'type': 'string', 'format': 'uuid'}, {'type': 'null'}] }, 'uuid': {'type': 'string', 'format': 'uuid'}, 'tag': parameter_types.tag, }, 'additionalProperties': False, }, }, {'type': 'string', 'enum': ['none', 'auto']}, ]} # 2.52 builds on 2.42 and makes the following changes: # Allowing adding tags to instances when booting base_create_v252 = copy.deepcopy(base_create_v242) base_create_v252['properties']['server']['properties']['tags'] = { "type": "array", "items": parameter_types.tag, "maxItems": instance.MAX_TAG_COUNT } # 2.57 builds on 2.52 and removes the personality parameter. base_create_v257 = copy.deepcopy(base_create_v252) base_create_v257['properties']['server']['properties'].pop('personality') base_update = { 'type': 'object', 'properties': { 'server': { 'type': 'object', 'properties': { 'name': parameter_types.name, 'OS-DCF:diskConfig': parameter_types.disk_config, 'accessIPv4': parameter_types.accessIPv4, 'accessIPv6': parameter_types.accessIPv6, }, 'additionalProperties': False, }, }, 'required': ['server'], 'additionalProperties': False, } base_update_v20 = copy.deepcopy(base_update) base_update_v20['properties']['server'][ 'properties']['name'] = parameter_types.name_with_leading_trailing_spaces base_update_v219 = copy.deepcopy(base_update) base_update_v219['properties']['server'][ 'properties']['description'] = parameter_types.description base_rebuild = { 'type': 'object', 'properties': { 'rebuild': { 'type': 'object', 'properties': { 'name': parameter_types.name, 'imageRef': parameter_types.image_id, 'adminPass': parameter_types.admin_password, 'metadata': parameter_types.metadata, 'preserve_ephemeral': parameter_types.boolean, 'OS-DCF:diskConfig': parameter_types.disk_config, 'accessIPv4': parameter_types.accessIPv4, 'accessIPv6': parameter_types.accessIPv6, 'personality': parameter_types.personality, }, 'required': ['imageRef'], 'additionalProperties': False, }, }, 'required': ['rebuild'], 'additionalProperties': False, } base_rebuild_v20 = copy.deepcopy(base_rebuild) base_rebuild_v20['properties']['rebuild'][ 'properties']['name'] = parameter_types.name_with_leading_trailing_spaces base_rebuild_v219 = copy.deepcopy(base_rebuild) base_rebuild_v219['properties']['rebuild'][ 'properties']['description'] = parameter_types.description base_rebuild_v254 = copy.deepcopy(base_rebuild_v219) base_rebuild_v254['properties']['rebuild'][ 'properties']['key_name'] = parameter_types.name_or_none # 2.57 builds on 2.54 and makes the following changes: # 1. Remove the personality parameter. # 2. Add the user_data parameter which is nullable so user_data can be reset. 
base_rebuild_v257 = copy.deepcopy(base_rebuild_v254) base_rebuild_v257['properties']['rebuild']['properties'].pop('personality') base_rebuild_v257['properties']['rebuild']['properties']['user_data'] = ({ 'oneOf': [ user_data.common_user_data, {'type': 'null'} ] }) resize = { 'type': 'object', 'properties': { 'resize': { 'type': 'object', 'properties': { 'flavorRef': parameter_types.flavor_ref, 'OS-DCF:diskConfig': parameter_types.disk_config, }, 'required': ['flavorRef'], 'additionalProperties': False, }, }, 'required': ['resize'], 'additionalProperties': False, } create_image = { 'type': 'object', 'properties': { 'createImage': { 'type': 'object', 'properties': { 'name': parameter_types.name, 'metadata': parameter_types.metadata }, 'required': ['name'], 'additionalProperties': False } }, 'required': ['createImage'], 'additionalProperties': False } create_image_v20 = copy.deepcopy(create_image) create_image_v20['properties']['createImage'][ 'properties']['name'] = parameter_types.name_with_leading_trailing_spaces reboot = { 'type': 'object', 'properties': { 'reboot': { 'type': 'object', 'properties': { 'type': { 'type': 'string', 'enum': ['HARD', 'Hard', 'hard', 'SOFT', 'Soft', 'soft'] } }, 'required': ['type'], 'additionalProperties': False } }, 'required': ['reboot'], 'additionalProperties': False } trigger_crash_dump = { 'type': 'object', 'properties': { 'trigger_crash_dump': { 'type': 'null' } }, 'required': ['trigger_crash_dump'], 'additionalProperties': False } JOINED_TABLE_QUERY_PARAMS_SERVERS = { 'block_device_mapping': parameter_types.common_query_param, 'services': parameter_types.common_query_param, 'metadata': parameter_types.common_query_param, 'system_metadata': parameter_types.common_query_param, 'info_cache': parameter_types.common_query_param, 'security_groups': parameter_types.common_query_param, 'pci_devices': parameter_types.common_query_param } # These fields are valid values for sort_keys before we start # using schema validation, but are considered to be bad values # and disabled to use. In order to avoid backward incompatibility, # they are ignored instead of return HTTP 400. SERVER_LIST_IGNORE_SORT_KEY = [ 'architecture', 'cell_name', 'cleaned', 'default_ephemeral_device', 'default_swap_device', 'deleted', 'deleted_at', 'disable_terminate', 'ephemeral_gb', 'ephemeral_key_uuid', 'id', 'key_data', 'launched_on', 'locked', 'memory_mb', 'os_type', 'reservation_id', 'root_gb', 'shutdown_terminate', 'user_data', 'vcpus', 'vm_mode' ] VALID_SORT_KEYS = { "type": "string", "enum": ['access_ip_v4', 'access_ip_v6', 'auto_disk_config', 'availability_zone', 'config_drive', 'created_at', 'display_description', 'display_name', 'host', 'hostname', 'image_ref', 'instance_type_id', 'kernel_id', 'key_name', 'launch_index', 'launched_at', 'locked_by', 'node', 'power_state', 'progress', 'project_id', 'ramdisk_id', 'root_device_name', 'task_state', 'terminated_at', 'updated_at', 'user_id', 'uuid', 'vm_state'] + SERVER_LIST_IGNORE_SORT_KEY } query_params_v21 = { 'type': 'object', 'properties': { 'user_id': parameter_types.common_query_param, 'project_id': parameter_types.common_query_param, # The alias of project_id. It should be removed in the # future with microversion bump. 'tenant_id': parameter_types.common_query_param, 'launch_index': parameter_types.common_query_param, # The alias of image. It should be removed in the # future with microversion bump. 
'image_ref': parameter_types.common_query_param, 'image': parameter_types.common_query_param, 'kernel_id': parameter_types.common_query_regex_param, 'ramdisk_id': parameter_types.common_query_regex_param, 'hostname': parameter_types.common_query_regex_param, 'key_name': parameter_types.common_query_regex_param, 'power_state': parameter_types.common_query_regex_param, 'vm_state': parameter_types.common_query_param, 'task_state': parameter_types.common_query_param, 'host': parameter_types.common_query_param, 'node': parameter_types.common_query_regex_param, 'flavor': parameter_types.common_query_regex_param, 'reservation_id': parameter_types.common_query_regex_param, 'launched_at': parameter_types.common_query_regex_param, 'terminated_at': parameter_types.common_query_regex_param, 'availability_zone': parameter_types.common_query_regex_param, # NOTE(alex_xu): This is pattern matching, it didn't get any benefit # from DB index. 'name': parameter_types.common_query_regex_param, # The alias of name. It should be removed in the future # with microversion bump. 'display_name': parameter_types.common_query_regex_param, 'description': parameter_types.common_query_regex_param, # The alias of description. It should be removed in the # future with microversion bump. 'display_description': parameter_types.common_query_regex_param, 'locked_by': parameter_types.common_query_regex_param, 'uuid': parameter_types.common_query_param, 'root_device_name': parameter_types.common_query_regex_param, 'config_drive': parameter_types.common_query_regex_param, 'access_ip_v4': parameter_types.common_query_regex_param, 'access_ip_v6': parameter_types.common_query_regex_param, 'auto_disk_config': parameter_types.common_query_regex_param, 'progress': parameter_types.common_query_regex_param, 'sort_key': multi_params(VALID_SORT_KEYS), 'sort_dir': parameter_types.common_query_param, 'all_tenants': parameter_types.common_query_param, 'deleted': parameter_types.common_query_param, 'status': parameter_types.common_query_param, 'changes-since': multi_params({'type': 'string', 'format': 'date-time'}), # NOTE(alex_xu): The ip and ip6 are implemented in the python. 'ip': parameter_types.common_query_regex_param, 'ip6': parameter_types.common_query_regex_param, 'created_at': parameter_types.common_query_regex_param, }, # For backward-compatible additionalProperties is set to be True here. # And we will either strip the extra params out or raise HTTP 400 # according to the params' value in the later process. 'additionalProperties': True, # Prevent internal-attributes that are started with underscore from # being striped out in schema validation, and raise HTTP 400 in API. 'patternProperties': {"^_": parameter_types.common_query_param} } # Update the joined-table fields to the list so it will not be # stripped in later process, thus can be handled later in api # to raise HTTP 400. query_params_v21['properties'].update( JOINED_TABLE_QUERY_PARAMS_SERVERS) query_params_v21['properties'].update( parameter_types.pagination_parameters) query_params_v226 = copy.deepcopy(query_params_v21) query_params_v226['properties'].update({ 'tags': parameter_types.common_query_regex_param, 'tags-any': parameter_types.common_query_regex_param, 'not-tags': parameter_types.common_query_regex_param, 'not-tags-any': parameter_types.common_query_regex_param, }) nova-17.0.1/nova/api/openstack/compute/schemas/server_metadata.py0000666000175000017500000000246513250073126025117 0ustar zuulzuul00000000000000# Copyright 2014 NEC Corporation. All rights reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy from nova.api.validation import parameter_types create = { 'type': 'object', 'properties': { 'metadata': parameter_types.metadata }, 'required': ['metadata'], 'additionalProperties': False, } metadata_update = copy.deepcopy(parameter_types.metadata) metadata_update.update({ 'minProperties': 1, 'maxProperties': 1 }) update = { 'type': 'object', 'properties': { 'meta': metadata_update }, 'required': ['meta'], 'additionalProperties': False, } update_all = { 'type': 'object', 'properties': { 'metadata': parameter_types.metadata }, 'required': ['metadata'], 'additionalProperties': False, } nova-17.0.1/nova/api/openstack/compute/schemas/fixed_ips.py0000666000175000017500000000200313250073126023707 0ustar zuulzuul00000000000000# Copyright 2015 Intel Corporation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.api.validation import parameter_types reserve = { 'type': 'object', 'properties': { 'reserve': parameter_types.none, }, 'required': ['reserve'], 'additionalProperties': False, } unreserve = { 'type': 'object', 'properties': { 'unreserve': parameter_types.none, }, 'required': ['unreserve'], 'additionalProperties': False, } nova-17.0.1/nova/api/openstack/compute/schemas/flavor_manage.py0000666000175000017500000000744513250073126024555 0ustar zuulzuul00000000000000# Copyright 2014 NEC Corporation. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy from nova.api.validation import parameter_types from nova import db create = { 'type': 'object', 'properties': { 'flavor': { 'type': 'object', 'properties': { # in nova/flavors.py, name with all white spaces is forbidden. 'name': parameter_types.name, # forbid leading/trailing whitespaces 'id': { 'type': ['string', 'number', 'null'], 'minLength': 1, 'maxLength': 255, 'pattern': '^(?! )[a-zA-Z0-9. _-]+(? 
<! )$', }, 'ram': parameter_types.flavor_param_positive, 'vcpus': parameter_types.flavor_param_positive, 'disk': parameter_types.flavor_param_non_negative, 'OS-FLV-EXT-DATA:ephemeral': parameter_types.flavor_param_non_negative, 'swap': parameter_types.flavor_param_non_negative, # positive ( >
0) float 'rxtx_factor': { 'type': ['number', 'string'], 'pattern': '^[0-9]+(\.[0-9]+)?$', 'minimum': 0, 'exclusiveMinimum': True, 'maximum': db.SQL_SP_FLOAT_MAX }, 'os-flavor-access:is_public': parameter_types.boolean, }, # TODO(oomichi): 'id' should be required with v2.1+microversions. # On v2.0 API, nova-api generates a flavor-id automatically if # specifying null as 'id' or not specifying 'id'. Ideally a client # should specify null as 'id' for requesting auto-generated id # exactly. However, this strict limitation causes a backwards # incompatible issue on v2.1. So now here relaxes the requirement # of 'id'. 'required': ['name', 'ram', 'vcpus', 'disk'], 'additionalProperties': False, }, }, 'required': ['flavor'], 'additionalProperties': False, } create_v20 = copy.deepcopy(create) create_v20['properties']['flavor']['properties']['name'] = (parameter_types. name_with_leading_trailing_spaces) # 2.55 adds an optional description field with a max length of 65535 since the # backing database column is a TEXT column which is 64KiB. flavor_description = { 'type': ['string', 'null'], 'minLength': 0, 'maxLength': 65535, 'pattern': parameter_types.valid_description_regex, } create_v2_55 = copy.deepcopy(create) create_v2_55['properties']['flavor']['properties']['description'] = ( flavor_description) update_v2_55 = { 'type': 'object', 'properties': { 'flavor': { 'type': 'object', 'properties': { 'description': flavor_description }, # Since the only property that can be specified on update is the # description field, it is required. If we allow updating other # flavor attributes in a later microversion, we should reconsider # what is required. 'required': ['description'], 'additionalProperties': False, }, }, 'required': ['flavor'], 'additionalProperties': False, } nova-17.0.1/nova/api/openstack/compute/schemas/assisted_volume_snapshots.py0000666000175000017500000000443013250073126027253 0ustar zuulzuul00000000000000# Copyright 2014 IBM Corporation. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.api.validation import parameter_types snapshots_create = { 'type': 'object', 'properties': { 'snapshot': { 'type': 'object', 'properties': { 'volume_id': { 'type': 'string', 'minLength': 1, }, 'create_info': { 'type': 'object', 'properties': { 'snapshot_id': { 'type': 'string', 'minLength': 1, }, 'type': { 'type': 'string', 'enum': ['qcow2'], }, 'new_file': { 'type': 'string', 'minLength': 1, }, 'id': { 'type': 'string', 'minLength': 1, }, }, 'required': ['snapshot_id', 'type', 'new_file'], 'additionalProperties': False, }, }, 'required': ['volume_id', 'create_info'], 'additionalProperties': False, } }, 'required': ['snapshot'], 'additionalProperties': False, } delete_query = { 'type': 'object', 'properties': { 'delete_info': parameter_types.multi_params({'type': 'string'}) }, # NOTE(gmann): This is kept True to keep backward compatibility. # As of now Schema validation stripped out the additional parameters and # does not raise 400. 
In the future, we may block the additional parameters # by bump in Microversion. 'additionalProperties': True } nova-17.0.1/nova/api/openstack/compute/schemas/instance_actions.py0000666000175000017500000000217713250073126025275 0ustar zuulzuul00000000000000# Copyright 2017 Huawei Technologies Co.,LTD. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.api.validation import parameter_types list_query_params_v258 = { 'type': 'object', 'properties': { # The 2.58 microversion added support for paging by limit and marker # and filtering by changes-since. 'limit': parameter_types.single_param( parameter_types.non_negative_integer), 'marker': parameter_types.single_param({'type': 'string'}), 'changes-since': parameter_types.single_param( {'type': 'string', 'format': 'date-time'}), }, 'additionalProperties': False } nova-17.0.1/nova/api/openstack/compute/schemas/server_groups.py0000666000175000017500000000505613250073126024655 0ustar zuulzuul00000000000000# Copyright 2014 NEC Corporation. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy from nova.api.validation import parameter_types # NOTE(russellb) There is one other policy, 'legacy', but we don't allow that # being set via the API. It's only used when a group gets automatically # created to support the legacy behavior of the 'group' scheduler hint. create = { 'type': 'object', 'properties': { 'server_group': { 'type': 'object', 'properties': { 'name': parameter_types.name, 'policies': { # This allows only a single item and it must be one of the # enumerated values. So this is really just a single string # value, but for legacy reasons is an array. We could # probably change the type from array to string with a # microversion at some point but it's very low priority. 
'type': 'array', 'items': [{ 'type': 'string', 'enum': ['anti-affinity', 'affinity']}], 'uniqueItems': True, 'additionalItems': False, } }, 'required': ['name', 'policies'], 'additionalProperties': False, } }, 'required': ['server_group'], 'additionalProperties': False, } create_v215 = copy.deepcopy(create) policies = create_v215['properties']['server_group']['properties']['policies'] policies['items'][0]['enum'].extend(['soft-anti-affinity', 'soft-affinity']) server_groups_query_param = { 'type': 'object', 'properties': { 'all_projects': parameter_types.multi_params({'type': 'string'}), 'limit': parameter_types.multi_params( parameter_types.non_negative_integer), 'offset': parameter_types.multi_params( parameter_types.non_negative_integer), }, # For backward compatible changes 'additionalProperties': True } nova-17.0.1/nova/api/openstack/compute/schemas/keypairs.py0000666000175000017500000000613413250073126023575 0ustar zuulzuul00000000000000# Copyright 2013 NEC Corporation. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy from nova.api.validation import parameter_types create = { 'type': 'object', 'properties': { 'keypair': { 'type': 'object', 'properties': { 'name': parameter_types.name, 'public_key': {'type': 'string'}, }, 'required': ['name'], 'additionalProperties': False, }, }, 'required': ['keypair'], 'additionalProperties': False, } create_v20 = copy.deepcopy(create) create_v20['properties']['keypair']['properties']['name'] = (parameter_types. 
name_with_leading_trailing_spaces) create_v22 = { 'type': 'object', 'properties': { 'keypair': { 'type': 'object', 'properties': { 'name': parameter_types.name, 'type': { 'type': 'string', 'enum': ['ssh', 'x509'] }, 'public_key': {'type': 'string'}, }, 'required': ['name'], 'additionalProperties': False, }, }, 'required': ['keypair'], 'additionalProperties': False, } create_v210 = { 'type': 'object', 'properties': { 'keypair': { 'type': 'object', 'properties': { 'name': parameter_types.name, 'type': { 'type': 'string', 'enum': ['ssh', 'x509'] }, 'public_key': {'type': 'string'}, 'user_id': {'type': 'string'}, }, 'required': ['name'], 'additionalProperties': False, }, }, 'required': ['keypair'], 'additionalProperties': False, } server_create = { 'key_name': parameter_types.name, } server_create_v20 = { 'key_name': parameter_types.name_with_leading_trailing_spaces, } index_query_schema_v20 = { 'type': 'object', 'properties': {}, 'additionalProperties': True } index_query_schema_v210 = { 'type': 'object', 'properties': { 'user_id': parameter_types.multi_params({'type': 'string'}) }, 'additionalProperties': True } index_query_schema_v235 = copy.deepcopy(index_query_schema_v210) index_query_schema_v235['properties'].update( parameter_types.pagination_parameters) show_query_schema_v20 = index_query_schema_v20 show_query_schema_v210 = index_query_schema_v210 delete_query_schema_v20 = index_query_schema_v20 delete_query_schema_v210 = index_query_schema_v210 nova-17.0.1/nova/api/openstack/compute/schemas/attach_interfaces.py0000666000175000017500000000357113250073126025417 0ustar zuulzuul00000000000000# Copyright 2014 NEC Corporation. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy from nova.api.validation import parameter_types create = { 'type': 'object', 'properties': { 'interfaceAttachment': { 'type': 'object', 'properties': { # NOTE: This parameter is passed to the search_opts of # Neutron list_network API: search_opts = {'id': net_id} 'net_id': parameter_types.network_id, # NOTE: This parameter is passed to Neutron show_port API # as a port id. 'port_id': parameter_types.network_port_id, 'fixed_ips': { 'type': 'array', 'minItems': 1, 'maxItems': 1, 'items': { 'type': 'object', 'properties': { 'ip_address': parameter_types.ip_address }, 'required': ['ip_address'], 'additionalProperties': False, }, }, }, 'additionalProperties': False, }, }, 'additionalProperties': False, } create_v249 = copy.deepcopy(create) create_v249['properties']['interfaceAttachment'][ 'properties']['tag'] = parameter_types.tag nova-17.0.1/nova/api/openstack/compute/schemas/limits.py0000666000175000017500000000147413250073126023251 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
nova-17.0.1/nova/api/openstack/compute/schemas/limits.py

#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

from nova.api.validation import parameter_types

limits_query_schema = {
    'type': 'object',
    'properties': {
        'tenant_id': parameter_types.common_query_param,
    },
    # For backward compatible changes
    'additionalProperties': True
}

nova-17.0.1/nova/api/openstack/compute/schemas/floating_ip_dns.py

# Copyright 2014 IBM Corporation.  All rights reserved.
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

from nova.api.validation import parameter_types

domain_entry_update = {
    'type': 'object',
    'properties': {
        'domain_entry': {
            'type': 'object',
            'properties': {
                'scope': {
                    'type': 'string',
                    'enum': ['public', 'private'],
                },
                'project': parameter_types.project_id,
                'availability_zone': parameter_types.name,
            },
            'required': ['scope'],
            'maxProperties': 2,
            'additionalProperties': False,
        },
    },
    'required': ['domain_entry'],
    'additionalProperties': False,
}

dns_entry_update = {
    'type': 'object',
    'properties': {
        'dns_entry': {
            'type': 'object',
            'properties': {
                'ip': parameter_types.ip_address,
                'dns_type': {
                    'type': 'string',
                    'enum': ['a', 'A'],
                },
            },
            'required': ['ip', 'dns_type'],
            'additionalProperties': False,
        },
    },
    'required': ['dns_entry'],
    'additionalProperties': False,
}
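# --- Example (editor's sketch, not part of the nova tree) ---
# domain_entry_update above combines 'required', 'maxProperties' and
# 'additionalProperties' so 'scope' must be present and at most one of
# 'project'/'availability_zone' may accompany it. A minimal sketch with
# plain jsonschema and simplified property types:
import jsonschema

_domain_entry = {
    'type': 'object',
    'properties': {
        'scope': {'type': 'string', 'enum': ['public', 'private']},
        'project': {'type': 'string'},
        'availability_zone': {'type': 'string'},
    },
    'required': ['scope'],
    'maxProperties': 2,
    'additionalProperties': False,
}

jsonschema.validate({'scope': 'public', 'project': 'demo'}, _domain_entry)
try:
    # Three properties exceed maxProperties=2 and are rejected.
    jsonschema.validate({'scope': 'private', 'project': 'demo',
                         'availability_zone': 'nova'}, _domain_entry)
except jsonschema.ValidationError:
    print('rejected: more than two properties supplied')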
nova-17.0.1/nova/api/openstack/compute/schemas/aggregates.py

# Copyright 2014 NEC Corporation.  All rights reserved.
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import copy

from nova.api.validation import parameter_types

availability_zone = {'oneOf': [parameter_types.az_name, {'type': 'null'}]}
availability_zone_with_leading_trailing_spaces = {
    'oneOf': [parameter_types.az_name_with_leading_trailing_spaces,
              {'type': 'null'}]
}

# NOTE: the schemas below originally carried a stray "'type': 'object'"
# entry *inside* 'properties', which is not a valid JSON Schema property
# definition; it has been dropped here.
create = {
    'type': 'object',
    'properties': {
        'aggregate': {
            'type': 'object',
            'properties': {
                'name': parameter_types.name,
                'availability_zone': availability_zone,
            },
            'required': ['name'],
            'additionalProperties': False,
        },
    },
    'required': ['aggregate'],
    'additionalProperties': False,
}

create_v20 = copy.deepcopy(create)
create_v20['properties']['aggregate']['properties']['name'] = (
    parameter_types.name_with_leading_trailing_spaces)
create_v20['properties']['aggregate']['properties']['availability_zone'] = (
    availability_zone_with_leading_trailing_spaces)

update = {
    'type': 'object',
    'properties': {
        'aggregate': {
            'type': 'object',
            'properties': {
                'name': parameter_types.name_with_leading_trailing_spaces,
                'availability_zone': availability_zone
            },
            'additionalProperties': False,
            'anyOf': [
                {'required': ['name']},
                {'required': ['availability_zone']}
            ]
        },
    },
    'required': ['aggregate'],
    'additionalProperties': False,
}

update_v20 = copy.deepcopy(update)
update_v20['properties']['aggregate']['properties']['name'] = (
    parameter_types.name_with_leading_trailing_spaces)
update_v20['properties']['aggregate']['properties']['availability_zone'] = (
    availability_zone_with_leading_trailing_spaces)

add_host = {
    'type': 'object',
    'properties': {
        'add_host': {
            'type': 'object',
            'properties': {
                'host': parameter_types.hostname,
            },
            'required': ['host'],
            'additionalProperties': False,
        },
    },
    'required': ['add_host'],
    'additionalProperties': False,
}

remove_host = {
    'type': 'object',
    'properties': {
        'remove_host': {
            'type': 'object',
            'properties': {
                'host': parameter_types.hostname,
            },
            'required': ['host'],
            'additionalProperties': False,
        },
    },
    'required': ['remove_host'],
    'additionalProperties': False,
}

set_metadata = {
    'type': 'object',
    'properties': {
        'set_metadata': {
            'type': 'object',
            'properties': {
                'metadata': parameter_types.metadata_with_null
            },
            'required': ['metadata'],
            'additionalProperties': False,
        },
    },
    'required': ['set_metadata'],
    'additionalProperties': False,
}
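# --- Example (editor's sketch, not part of the nova tree) ---
# The 'anyOf' in the update schema above means an aggregate update must
# carry at least one of 'name' or 'availability_zone', while
# 'additionalProperties': False still rejects everything else. A minimal
# sketch with simplified value types:
import jsonschema

_agg_update = {
    'type': 'object',
    'properties': {
        'name': {'type': 'string'},
        'availability_zone': {'type': ['string', 'null']},
    },
    'additionalProperties': False,
    'anyOf': [
        {'required': ['name']},
        {'required': ['availability_zone']},
    ],
}

jsonschema.validate({'availability_zone': None}, _agg_update)  # accepted
try:
    jsonschema.validate({}, _agg_update)  # neither key present: rejected
except jsonschema.ValidationError:
    print('rejected: need name and/or availability_zone')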
nova-17.0.1/nova/api/openstack/compute/schemas/multinic.py

#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

from nova.api.validation import parameter_types

add_fixed_ip = {
    'type': 'object',
    'properties': {
        'addFixedIp': {
            'type': 'object',
            'properties': {
                # The maxLength is from the column 'uuid' of the
                # table 'networks'
                'networkId': {
                    'type': ['string', 'number'],
                    'minLength': 1,
                    'maxLength': 36,
                },
            },
            'required': ['networkId'],
            'additionalProperties': False,
        },
    },
    'required': ['addFixedIp'],
    'additionalProperties': False,
}

remove_fixed_ip = {
    'type': 'object',
    'properties': {
        'removeFixedIp': {
            'type': 'object',
            'properties': {
                'address': parameter_types.ip_address
            },
            'required': ['address'],
            'additionalProperties': False,
        },
    },
    'required': ['removeFixedIp'],
    'additionalProperties': False,
}

nova-17.0.1/nova/api/openstack/compute/schemas/flavor_access.py

# Copyright 2013 NEC Corporation.  All rights reserved.
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

add_tenant_access = {
    'type': 'object',
    'properties': {
        'addTenantAccess': {
            'type': 'object',
            'properties': {
                'tenant': {
                    # defined from project_id in instance_type_projects table
                    'type': 'string',
                    'minLength': 1,
                    'maxLength': 255,
                },
            },
            'required': ['tenant'],
            'additionalProperties': False,
        },
    },
    'required': ['addTenantAccess'],
    'additionalProperties': False,
}

remove_tenant_access = {
    'type': 'object',
    'properties': {
        'removeTenantAccess': {
            'type': 'object',
            'properties': {
                'tenant': {
                    # defined from project_id in instance_type_projects table
                    'type': 'string',
                    'minLength': 1,
                    'maxLength': 255,
                },
            },
            'required': ['tenant'],
            'additionalProperties': False,
        },
    },
    'required': ['removeTenantAccess'],
    'additionalProperties': False,
}
nova-17.0.1/nova/api/openstack/compute/schemas/image_metadata.py

# Copyright 2014 IBM Corporation.  All rights reserved.
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import copy

from nova.api.validation import parameter_types

create = {
    'type': 'object',
    'properties': {
        'metadata': parameter_types.metadata
    },
    'required': ['metadata'],
    'additionalProperties': False,
}

single_metadata = copy.deepcopy(parameter_types.metadata)
single_metadata.update({
    'minProperties': 1,
    'maxProperties': 1
})

update = {
    'type': 'object',
    'properties': {
        'meta': single_metadata
    },
    'required': ['meta'],
    'additionalProperties': False,
}

update_all = create

nova-17.0.1/nova/api/openstack/compute/schemas/fping.py

# Copyright 2017 NEC Corporation.  All rights reserved.
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

from nova.api.validation import parameter_types

index_query = {
    'type': 'object',
    'properties': {
        'all_tenants': parameter_types.multi_params({'type': 'string'}),
        'include': parameter_types.multi_params({'type': 'string'}),
        'exclude': parameter_types.multi_params({'type': 'string'})
    },
    # NOTE(gmann): This is kept True to preserve backward compatibility.
    # As of now, schema validation strips out additional parameters and
    # does not raise a 400. In the future, we may block additional
    # parameters with a microversion bump.
    'additionalProperties': True
}
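# --- Example (editor's sketch, not part of the nova tree) ---
# single_metadata above layers minProperties/maxProperties of 1 on top of
# the generic metadata object, so a PUT to a single image-metadata key must
# carry exactly one entry. A minimal sketch with the value type simplified:
import jsonschema

_single_meta = {
    'type': 'object',
    'additionalProperties': {'type': 'string'},
    'minProperties': 1,
    'maxProperties': 1,
}

jsonschema.validate({'kernel_id': 'abc'}, _single_meta)  # exactly one key
try:
    jsonschema.validate({'a': '1', 'b': '2'}, _single_meta)
except jsonschema.ValidationError:
    print('rejected: exactly one key expected')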
nova-17.0.1/nova/api/openstack/compute/schemas/security_groups.py

# Copyright 2014 NEC Corporation.  All rights reserved.
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import copy

from nova.api.validation import parameter_types

server_create = {
    'security_groups': {
        'type': 'array',
        'items': {
            'type': 'object',
            'properties': {
                # NOTE(oomichi): allocate_for_instance() of neutronv2/api.py
                # gets security_group names or UUIDs from this parameter.
                # parameter_types.name allows both formats.
                'name': parameter_types.name,
            },
            'additionalProperties': False,
        }
    },
}

server_create_v20 = copy.deepcopy(server_create)
server_create_v20['security_groups']['items']['properties']['name'] = (
    parameter_types.name_with_leading_trailing_spaces)

index_query = {
    'type': 'object',
    'properties': {
        'limit': parameter_types.multi_params(
            parameter_types.non_negative_integer),
        'offset': parameter_types.multi_params(
            parameter_types.non_negative_integer),
        'all_tenants': parameter_types.multi_params({'type': 'string'})
    },
    # NOTE(gmann): This is kept True to preserve backward compatibility.
    # As of now, schema validation strips out additional parameters and
    # does not raise a 400. In the future, we may block additional
    # parameters with a microversion bump.
    'additionalProperties': True
}

nova-17.0.1/nova/api/openstack/compute/schemas/reset_server_state.py

# Copyright 2014 NEC Corporation.  All rights reserved.
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

reset_state = {
    'type': 'object',
    'properties': {
        'os-resetState': {
            'type': 'object',
            'properties': {
                'state': {
                    'type': 'string',
                    'enum': ['active', 'error'],
                },
            },
            'required': ['state'],
            'additionalProperties': False,
        },
    },
    'required': ['os-resetState'],
    'additionalProperties': False,
}

nova-17.0.1/nova/api/openstack/compute/schemas/tenant_networks.py

# Copyright 2015 NEC Corporation.  All rights reserved.
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

from nova.api.validation import parameter_types

create = {
    'type': 'object',
    'properties': {
        'network': {
            'type': 'object',
            'properties': {
                'label': {
                    'type': 'string', 'maxLength': 255
                },
                'ipam': parameter_types.boolean,
                'cidr': parameter_types.cidr,
                'cidr_v6': parameter_types.cidr,
                'vlan_start': parameter_types.positive_integer_with_empty_str,
                'network_size':
                    parameter_types.positive_integer_with_empty_str,
                'num_networks':
                    parameter_types.positive_integer_with_empty_str
            },
            'required': ['label'],
            'oneOf': [
                {'required': ['cidr']},
                {'required': ['cidr_v6']}
            ],
            'additionalProperties': False,
        },
    },
    'required': ['network'],
    'additionalProperties': False,
}
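# --- Example (editor's sketch, not part of the nova tree) ---
# The 'oneOf' in tenant_networks.create above requires exactly one of
# 'cidr' or 'cidr_v6' alongside the mandatory 'label'; supplying both
# matches both branches and is therefore rejected. Simplified sketch:
import jsonschema

_net = {
    'type': 'object',
    'properties': {
        'label': {'type': 'string'},
        'cidr': {'type': 'string'},
        'cidr_v6': {'type': 'string'},
    },
    'required': ['label'],
    'oneOf': [{'required': ['cidr']}, {'required': ['cidr_v6']}],
    'additionalProperties': False,
}

jsonschema.validate({'label': 'net1', 'cidr': '10.0.0.0/24'}, _net)
try:
    jsonschema.validate({'label': 'net1', 'cidr': '10.0.0.0/24',
                         'cidr_v6': 'fd00::/64'}, _net)
except jsonschema.ValidationError:
    print('rejected: exactly one of cidr/cidr_v6 allowed')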
nova-17.0.1/nova/api/openstack/compute/schemas/floating_ips.py

# Copyright 2015 NEC Corporation.  All rights reserved.
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

from nova.api.validation import parameter_types

add_floating_ip = {
    'type': 'object',
    'properties': {
        'addFloatingIp': {
            'type': 'object',
            'properties': {
                'address': parameter_types.ip_address,
                'fixed_address': parameter_types.ip_address
            },
            'required': ['address'],
            'additionalProperties': False
        }
    },
    'required': ['addFloatingIp'],
    'additionalProperties': False
}

remove_floating_ip = {
    'type': 'object',
    'properties': {
        'removeFloatingIp': {
            'type': 'object',
            'properties': {
                'address': parameter_types.ip_address
            },
            'required': ['address'],
            'additionalProperties': False
        }
    },
    'required': ['removeFloatingIp'],
    'additionalProperties': False
}

nova-17.0.1/nova/api/openstack/compute/schemas/scheduler_hints.py

# Copyright 2014 NEC Corporation.  All rights reserved.
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

from nova.api.validation import parameter_types

_hints = {
    'type': 'object',
    'properties': {
        'group': {
            'type': 'string',
            'format': 'uuid'
        },
        'different_host': {
            # NOTE: The value of 'different_host' is the set of server
            # uuids where a new server is scheduled on a different host.
            # A user can specify one server as a string parameter and
            # should specify multiple servers as an array parameter instead.
            'oneOf': [
                {
                    'type': 'string',
                    'format': 'uuid'
                },
                {
                    'type': 'array',
                    'items': parameter_types.server_id
                }
            ]
        },
        'same_host': {
            # NOTE: The value of 'same_host' is the set of server
            # uuids where a new server is scheduled on the same host.
            'type': ['string', 'array'],
            'items': parameter_types.server_id
        },
        'query': {
            # NOTE: The value of 'query' is converted to dict data with
            # jsonutils.loads() and used for filtering hosts.
            'type': ['string', 'object'],
        },
        # NOTE: The value of 'target_cell' is the name of the cell a new
        # server is scheduled on.
        'target_cell': parameter_types.name,
        'different_cell': {
            'type': ['string', 'array'],
            'items': {
                'type': 'string'
            }
        },
        'build_near_host_ip': parameter_types.ip_address,
        'cidr': {
            'type': 'string',
            'pattern': r'^\/[0-9a-f.:]+$'
        },
    },
    # NOTE: As the mail below pointed out, limiting the scheduler hints in
    # the API is problematic, so this schema is deliberately relaxed:
    # http://lists.openstack.org/pipermail/openstack-dev/2015-June/067996.html
    'additionalProperties': True
}

server_create = {
    'os:scheduler_hints': _hints,
    'OS-SCH-HNT:scheduler_hints': _hints,
}
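# --- Example (editor's sketch, not part of the nova tree) ---
# A server-create request can carry the hints above under either the
# 'os:scheduler_hints' or 'OS-SCH-HNT:scheduler_hints' key, as registered
# in server_create. A hypothetical request-body fragment (all uuids and
# names are made up for illustration):
_hinted_request = {
    'server': {
        'name': 'test-server',
        'imageRef': '70a599e0-31e7-49b7-b260-868f441e862b',
        'flavorRef': '1',
    },
    'os:scheduler_hints': {
        # schedule into an existing server group
        'group': 'f6b4b8a8-6eb9-4b0b-a4b4-0e70f4f6b0c8',
        # keep away from the host running this server
        'different_host': ['a0cf03a5-d921-4877-bb5c-86d26cf818e1'],
    },
}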
nova-17.0.1/nova/api/openstack/compute/schemas/quota_classes.py

# Copyright 2015 NEC Corporation.  All rights reserved.
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import copy

from nova.api.openstack.compute.schemas import quota_sets

# NOTE: the schema below originally carried a stray "'type': 'object'"
# entry *inside* 'properties', which is not a valid JSON Schema property
# definition; it has been dropped here.
update = {
    'type': 'object',
    'properties': {
        'quota_class_set': {
            'type': 'object',
            'properties': quota_sets.quota_resources,
            'additionalProperties': False,
        },
    },
    'required': ['quota_class_set'],
    'additionalProperties': False,
}

update_v250 = copy.deepcopy(update)
del update_v250['properties']['quota_class_set']['properties']['fixed_ips']
del update_v250['properties']['quota_class_set']['properties']['floating_ips']
del update_v250['properties']['quota_class_set']['properties'][
    'security_groups']
del update_v250['properties']['quota_class_set']['properties'][
    'security_group_rules']
del update_v250['properties']['quota_class_set']['properties']['networks']

# 2.57 builds on 2.50 and removes injected_file* quotas.
update_v257 = copy.deepcopy(update_v250)
del update_v257['properties']['quota_class_set']['properties'][
    'injected_files']
del update_v257['properties']['quota_class_set']['properties'][
    'injected_file_content_bytes']
del update_v257['properties']['quota_class_set']['properties'][
    'injected_file_path_bytes']
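# --- Example (editor's sketch, not part of the nova tree) ---
# update_v250/update_v257 above derive newer-microversion schemas by
# deep-copying the base schema and deleting retired keys, so a 2.57
# request that names a removed quota fails validation. The same pattern
# in miniature, with stand-in quota names:
import copy

_base = {'properties': {'cores': {}, 'fixed_ips': {}, 'injected_files': {}}}

_v250 = copy.deepcopy(_base)
del _v250['properties']['fixed_ips']        # retired at 2.50

_v257 = copy.deepcopy(_v250)
del _v257['properties']['injected_files']   # retired at 2.57

print(sorted(_v257['properties']))          # ['cores']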
nova-17.0.1/nova/api/openstack/compute/schemas/remote_consoles.py

# Copyright 2014 NEC Corporation.  All rights reserved.
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

get_vnc_console = {
    'type': 'object',
    'properties': {
        'os-getVNCConsole': {
            'type': 'object',
            'properties': {
                'type': {
                    'type': 'string',
                    'enum': ['novnc', 'xvpvnc'],
                },
            },
            'required': ['type'],
            'additionalProperties': False,
        },
    },
    'required': ['os-getVNCConsole'],
    'additionalProperties': False,
}

get_spice_console = {
    'type': 'object',
    'properties': {
        'os-getSPICEConsole': {
            'type': 'object',
            'properties': {
                'type': {
                    'type': 'string',
                    'enum': ['spice-html5'],
                },
            },
            'required': ['type'],
            'additionalProperties': False,
        },
    },
    'required': ['os-getSPICEConsole'],
    'additionalProperties': False,
}

get_rdp_console = {
    'type': 'object',
    'properties': {
        'os-getRDPConsole': {
            'type': 'object',
            'properties': {
                'type': {
                    'type': 'string',
                    'enum': ['rdp-html5'],
                },
            },
            'required': ['type'],
            'additionalProperties': False,
        },
    },
    'required': ['os-getRDPConsole'],
    'additionalProperties': False,
}

get_serial_console = {
    'type': 'object',
    'properties': {
        'os-getSerialConsole': {
            'type': 'object',
            'properties': {
                'type': {
                    'type': 'string',
                    'enum': ['serial'],
                },
            },
            'required': ['type'],
            'additionalProperties': False,
        },
    },
    'required': ['os-getSerialConsole'],
    'additionalProperties': False,
}

create_v26 = {
    'type': 'object',
    'properties': {
        'remote_console': {
            'type': 'object',
            'properties': {
                'protocol': {
                    'type': 'string',
                    'enum': ['vnc', 'spice', 'rdp', 'serial'],
                },
                'type': {
                    'type': 'string',
                    'enum': ['novnc', 'xvpvnc', 'rdp-html5',
                             'spice-html5', 'serial'],
                },
            },
            'required': ['protocol', 'type'],
            'additionalProperties': False,
        },
    },
    'required': ['remote_console'],
    'additionalProperties': False,
}

create_v28 = {
    'type': 'object',
    'properties': {
        'remote_console': {
            'type': 'object',
            'properties': {
                'protocol': {
                    'type': 'string',
                    'enum': ['vnc', 'spice', 'rdp', 'serial', 'mks'],
                },
                'type': {
                    'type': 'string',
                    'enum': ['novnc', 'xvpvnc', 'rdp-html5',
                             'spice-html5', 'serial', 'webmks'],
                },
            },
            'required': ['protocol', 'type'],
            'additionalProperties': False,
        },
    },
    'required': ['remote_console'],
    'additionalProperties': False,
}
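# --- Example (editor's sketch, not part of the nova tree) ---
# With microversion 2.8 the unified remote-console action grows MKS
# support, per create_v28 above. A request body that validates against it:
_mks_request = {
    'remote_console': {
        'protocol': 'mks',
        'type': 'webmks',
    },
}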
    'source_type': {
        'type': 'string',
        'enum': ['volume', 'image', 'snapshot', 'blank'],
    },
    'uuid': {
        'type': 'string',
        'minLength': 1,
        'maxLength': 255,
        'pattern': '^[a-zA-Z0-9._-]*$',
    },
    'image_id': parameter_types.image_id,
    'destination_type': {
        'type': 'string',
        'enum': ['local', 'volume'],
    },
    # Defined as varchar(255) in column "guest_format" in table
    # "block_device_mapping"
    'guest_format': {
        'type': 'string', 'maxLength': 255,
    },
    # Defined as varchar(255) in column "device_type" in table
    # "block_device_mapping"
    'device_type': {
        'type': 'string', 'maxLength': 255,
    },
    # Defined as varchar(255) in column "disk_bus" in table
    # "block_device_mapping"
    'disk_bus': {
        'type': 'string', 'maxLength': 255,
    },
    # Defined as integer in nova/block_device.py:from_api()
    # NOTE(mriedem): boot_index=None is also accepted for backward
    # compatibility with the legacy v2 API.
    'boot_index': {
        'type': ['integer', 'string', 'null'],
        'pattern': '^-?[0-9]+$',
    },
}

block_device_mapping = copy.deepcopy(
    block_device_mapping_v1.legacy_block_device_mapping)
block_device_mapping['properties'].update(block_device_mapping_new_item)

server_create = {
    'block_device_mapping_v2': {
        'type': 'array',
        'items': block_device_mapping
    }
}

block_device_mapping_with_tags_new_item = {
    'tag': parameter_types.tag
}

block_device_mapping_with_tags = copy.deepcopy(block_device_mapping)
block_device_mapping_with_tags['properties'].update(
    block_device_mapping_with_tags_new_item)

server_create_with_tags = {
    'block_device_mapping_v2': {
        'type': 'array',
        'items': block_device_mapping_with_tags
    }
}
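# --- Example (editor's sketch, not part of the nova tree) ---
# A block_device_mapping_v2 entry that boots a server from an existing
# volume, using only keys defined in the schema above (the uuid value is
# hypothetical):
_bdm_v2 = [{
    'source_type': 'volume',        # the uuid identifies a Cinder volume
    'destination_type': 'volume',
    'uuid': '1a2b3c4d-5e6f-4a8b-9c0d-1e2f3a4b5c6d',
    'boot_index': 0,                # first (and here only) boot device
}]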
nova-17.0.1/nova/api/openstack/compute/schemas/config_drive.py

# Copyright 2014 NEC Corporation.  All rights reserved.
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

from nova.api.validation import parameter_types

server_create = {
    'config_drive': parameter_types.boolean,
}

nova-17.0.1/nova/api/openstack/compute/schemas/floating_ips_bulk.py

# Copyright 2014 IBM Corporation.  All rights reserved.
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

ip_range = {
    # TODO(eliqiao) need to find a better pattern
    'type': 'string',
    'pattern': '^[0-9./a-fA-F]*$',
}

create = {
    'type': 'object',
    'properties': {
        'floating_ips_bulk_create': {
            'type': 'object',
            'properties': {
                'ip_range': ip_range,
                'pool': {
                    'type': 'string', 'minLength': 1, 'maxLength': 255,
                },
                'interface': {
                    'type': 'string', 'minLength': 1, 'maxLength': 255,
                },
            },
            'required': ['ip_range'],
            'additionalProperties': False,
        },
    },
    'required': ['floating_ips_bulk_create'],
    'additionalProperties': False,
}

delete = {
    'type': 'object',
    'properties': {
        'ip_range': ip_range,
    },
    'required': ['ip_range'],
    'additionalProperties': False,
}

nova-17.0.1/nova/api/openstack/compute/schemas/server_external_events.py

# Copyright 2014 IBM Corporation.  All rights reserved.
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import copy

from nova.objects import external_event as external_event_obj

create = {
    'type': 'object',
    'properties': {
        'events': {
            'type': 'array',
            'minItems': 1,
            'items': {
                'type': 'object',
                'properties': {
                    'server_uuid': {
                        'type': 'string', 'format': 'uuid'
                    },
                    'name': {
                        'type': 'string',
                        'enum': [
                            'network-changed',
                            'network-vif-plugged',
                            'network-vif-unplugged',
                            'network-vif-deleted'
                        ],
                    },
                    'status': {
                        'type': 'string',
                        'enum': external_event_obj.EVENT_STATUSES,
                    },
                    'tag': {
                        'type': 'string', 'maxLength': 255,
                    },
                },
                'required': ['server_uuid', 'name'],
                'additionalProperties': False,
            },
        },
    },
    'required': ['events'],
    'additionalProperties': False,
}

create_v251 = copy.deepcopy(create)
name = create_v251['properties']['events']['items']['properties']['name']
name['enum'].append('volume-extended')
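# --- Example (editor's sketch, not part of the nova tree) ---
# A request body for os-server-external-events that validates against
# 'create' above; from microversion 2.51 (create_v251) the event name
# 'volume-extended' also becomes legal. The uuid is hypothetical:
_events_request = {
    'events': [{
        'server_uuid': '0b4b4b88-1c2d-4e3f-a4b5-c6d7e8f90a1b',
        'name': 'network-vif-plugged',
        'status': 'completed',
        'tag': 'port-1',
    }],
}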
# APIRouterV21 moved down into 'nova.api.openstack.compute.routes' to avoid
# a circular-reference problem. Importing APIRouterV21 here keeps
# api-paste.ini working without modification. We are still looking for a
# chance to move APIRouterV21 back here after cleanups.
from nova.api.openstack.compute.routes import APIRouterV21  # noqa

nova-17.0.1/nova/api/openstack/compute/image_size.py

# Copyright 2013 Rackspace Hosting
# All Rights Reserved.
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

from nova.api.openstack import wsgi
from nova.policies import image_size as is_policies


class ImageSizeController(wsgi.Controller):

    def _extend_image(self, image, image_cache):
        # NOTE(mriedem): The OS-EXT-* prefix should not be used for new
        # attributes after v2.1. They are only in v2.1 for backward compat
        # with v2.0.
        key = "OS-EXT-IMG-SIZE:size"
        image[key] = image_cache['size']

    @wsgi.extends
    def show(self, req, resp_obj, id):
        context = req.environ["nova.context"]
        if context.can(is_policies.BASE_POLICY_NAME, fatal=False):
            image_resp = resp_obj.obj['image']
            # image guaranteed to be in the cache due to the core API adding
            # it in its 'show' method
            image_cached = req.get_db_item('images', image_resp['id'])
            self._extend_image(image_resp, image_cached)

    @wsgi.extends
    def detail(self, req, resp_obj):
        context = req.environ['nova.context']
        if context.can(is_policies.BASE_POLICY_NAME, fatal=False):
            images_resp = list(resp_obj.obj['images'])
            # images guaranteed to be in the cache due to the core API adding
            # it in its 'detail' method
            for image in images_resp:
                image_cached = req.get_db_item('images', image['id'])
                self._extend_image(image, image_cached)
nova-17.0.1/nova/api/openstack/compute/instance_usage_audit_log.py

# Copyright 2012 OpenStack Foundation
# All Rights Reserved.
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import datetime

import webob.exc

from nova.api.openstack import wsgi
from nova import compute
from nova.compute import rpcapi as compute_rpcapi
import nova.conf
from nova.i18n import _
from nova.policies import instance_usage_audit_log as iual_policies
from nova import utils

CONF = nova.conf.CONF


class InstanceUsageAuditLogController(wsgi.Controller):
    def __init__(self):
        self.host_api = compute.HostAPI()

    @wsgi.expected_errors(())
    def index(self, req):
        context = req.environ['nova.context']
        context.can(iual_policies.BASE_POLICY_NAME)
        task_log = self._get_audit_task_logs(context)
        return {'instance_usage_audit_logs': task_log}

    @wsgi.expected_errors(400)
    def show(self, req, id):
        context = req.environ['nova.context']
        context.can(iual_policies.BASE_POLICY_NAME)
        try:
            if '.' in id:
                before_date = datetime.datetime.strptime(
                    str(id), "%Y-%m-%d %H:%M:%S.%f")
            else:
                before_date = datetime.datetime.strptime(
                    str(id), "%Y-%m-%d %H:%M:%S")
        except ValueError:
            msg = _("Invalid timestamp for date %s") % id
            raise webob.exc.HTTPBadRequest(explanation=msg)
        task_log = self._get_audit_task_logs(context, before=before_date)
        return {'instance_usage_audit_log': task_log}

    def _get_audit_task_logs(self, context, before=None):
        """Returns a full log for all instance usage audit tasks on all
        computes.

        :param context: Nova request context.
        :param before: By default we look for the audit period most recently
            completed before this datetime. Has no effect if both begin and
            end are specified.
        """
        begin, end = utils.last_completed_audit_period(before=before)
        task_logs = self.host_api.task_log_get_all(context,
                                                   "instance_usage_audit",
                                                   begin, end)
        # We do this in this way to include disabled compute services,
        # which can have instances on them. (mdragon)
        filters = {'topic': compute_rpcapi.RPC_TOPIC}
        services = self.host_api.service_get_all(context, filters=filters)
        hosts = set(serv['host'] for serv in services)
        seen_hosts = set()
        done_hosts = set()
        running_hosts = set()
        total_errors = 0
        total_items = 0
        for tlog in task_logs:
            seen_hosts.add(tlog['host'])
            if tlog['state'] == "DONE":
                done_hosts.add(tlog['host'])
            if tlog['state'] == "RUNNING":
                running_hosts.add(tlog['host'])
            total_errors += tlog['errors']
            total_items += tlog['task_items']
        log = {tl['host']: dict(state=tl['state'],
                                instances=tl['task_items'],
                                errors=tl['errors'],
                                message=tl['message'])
               for tl in task_logs}
        missing_hosts = hosts - seen_hosts
        overall_status = "%s hosts done. %s errors." % (
            'ALL' if len(done_hosts) == len(hosts)
            else "%s of %s" % (len(done_hosts), len(hosts)),
            total_errors)
        return dict(period_beginning=str(begin),
                    period_ending=str(end),
                    num_hosts=len(hosts),
                    num_hosts_done=len(done_hosts),
                    num_hosts_running=len(running_hosts),
                    num_hosts_not_run=len(missing_hosts),
                    hosts_not_run=list(missing_hosts),
                    total_instances=total_items,
                    total_errors=total_errors,
                    overall_status=overall_status,
                    log=log)
nova-17.0.1/nova/api/openstack/compute/agents.py

# Copyright 2012 IBM Corp.
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import webob.exc

from nova.api.openstack.compute.schemas import agents as schema
from nova.api.openstack import wsgi
from nova.api import validation
from nova import exception
from nova import objects
from nova.policies import agents as agents_policies
from nova import utils


class AgentController(wsgi.Controller):
    """This controller manages guest agents. The host can use an agent for
    things like accessing files on the disk, configuring networking, or
    running other applications/scripts in the guest while it is running.
    Typically this uses some hypervisor-specific transport to avoid being
    dependent on a working network configuration.

    Xen, VMware, and VirtualBox have guest agents, although the Xen driver
    is the only one with an implementation for managing them in OpenStack.
    KVM doesn't really have a concept of a guest agent (although one could
    be written).

    You can find the design of agent update in this link:
    http://wiki.openstack.org/AgentUpdate
    and find the code in nova.virt.xenapi.vmops.VMOps._boot_new_instance.

    In this design the agent in the guest needs to be updated from the
    host, so we need some interfaces to update the agent info on the host.

    You can find more information about the design of the GuestAgent in
    the following links:
    http://wiki.openstack.org/GuestAgent
    http://wiki.openstack.org/GuestAgentXenStoreCommunication
    """

    @validation.query_schema(schema.index_query)
    @wsgi.expected_errors(())
    def index(self, req):
        """Return a list of all agent builds. Filter by hypervisor."""
        context = req.environ['nova.context']
        context.can(agents_policies.BASE_POLICY_NAME)
        hypervisor = None
        agents = []
        if 'hypervisor' in req.GET:
            hypervisor = req.GET['hypervisor']

        builds = objects.AgentList.get_all(context, hypervisor=hypervisor)
        for agent_build in builds:
            agents.append({'hypervisor': agent_build.hypervisor,
                           'os': agent_build.os,
                           'architecture': agent_build.architecture,
                           'version': agent_build.version,
                           'md5hash': agent_build.md5hash,
                           'agent_id': agent_build.id,
                           'url': agent_build.url})

        return {'agents': agents}

    @wsgi.expected_errors((400, 404))
    @validation.schema(schema.update)
    def update(self, req, id, body):
        """Update an existing agent build."""
        context = req.environ['nova.context']
        context.can(agents_policies.BASE_POLICY_NAME)

        # TODO(oomichi): This parameter name "para" is different from the ones
        # of the other APIs. Most other names are resource names like "server"
        # etc. This name should be changed to "agent" for consistent naming
        # with v2.1+microversions.
        para = body['para']
        url = para['url']
        md5hash = para['md5hash']
        version = para['version']

        try:
            utils.validate_integer(id, 'id')
        except exception.InvalidInput as exc:
            raise webob.exc.HTTPBadRequest(explanation=exc.format_message())

        agent = objects.Agent(context=context, id=id)
        agent.obj_reset_changes()
        agent.version = version
        agent.url = url
        agent.md5hash = md5hash
        try:
            agent.save()
        except exception.AgentBuildNotFound as ex:
            raise webob.exc.HTTPNotFound(explanation=ex.format_message())

        # TODO(alex_xu): The agent_id should be an integer, consistent with
        # the create/index actions. But the parameter 'id' is a string parsed
        # from the url. This is a bug, but for backward compatibility it
        # can't be fixed for the v2 API. This will be fixed in the v2.1 API
        # by a microversion in the future. lp bug #1333494
        return {"agent": {'agent_id': id, 'version': version,
                          'url': url, 'md5hash': md5hash}}
    # TODO(oomichi): This should be 204 (No Content) instead of 200 in a
    # future v2.1 microversion, because the agent resource has already been
    # deleted completely when the response is returned.
    @wsgi.expected_errors((400, 404))
    @wsgi.response(200)
    def delete(self, req, id):
        """Deletes an existing agent build."""
        context = req.environ['nova.context']
        context.can(agents_policies.BASE_POLICY_NAME)

        try:
            utils.validate_integer(id, 'id')
        except exception.InvalidInput as exc:
            raise webob.exc.HTTPBadRequest(explanation=exc.format_message())

        try:
            agent = objects.Agent(context=context, id=id)
            agent.destroy()
        except exception.AgentBuildNotFound as ex:
            raise webob.exc.HTTPNotFound(explanation=ex.format_message())

    # TODO(oomichi): This should be 201 (Created) instead of 200 in a
    # future v2.1 microversion, because the creation of the agent resource
    # has finished when the response is returned.
    @wsgi.expected_errors(409)
    @wsgi.response(200)
    @validation.schema(schema.create)
    def create(self, req, body):
        """Creates a new agent build."""
        context = req.environ['nova.context']
        context.can(agents_policies.BASE_POLICY_NAME)

        agent = body['agent']
        hypervisor = agent['hypervisor']
        os = agent['os']
        architecture = agent['architecture']
        version = agent['version']
        url = agent['url']
        md5hash = agent['md5hash']

        agent_obj = objects.Agent(context=context)
        agent_obj.hypervisor = hypervisor
        agent_obj.os = os
        agent_obj.architecture = architecture
        agent_obj.version = version
        agent_obj.url = url
        agent_obj.md5hash = md5hash

        try:
            agent_obj.create()
            agent['agent_id'] = agent_obj.id
        except exception.AgentBuildExists as ex:
            raise webob.exc.HTTPConflict(explanation=ex.format_message())
        return {'agent': agent}
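# --- Example (editor's sketch, not part of the nova tree) ---
# A request body for the create() method above; the schema requires all
# six keys. Every value here is hypothetical, including the md5 digest:
_agent_request = {
    'agent': {
        'hypervisor': 'xen',
        'os': 'linux',
        'architecture': 'x86_64',
        'version': '8.0',
        'url': 'http://example.com/agents/agent-8.0.tar.gz',
        'md5hash': 'add6bb58e139be103324d04d82d8f545',
    },
}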
nova-17.0.1/nova/api/openstack/compute/views/

nova-17.0.1/nova/api/openstack/compute/views/server_diagnostics.py

#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

from nova.api.openstack import common


INSTANCE_DIAGNOSTICS_PRIMITIVE_FIELDS = (
    'state', 'driver', 'hypervisor', 'hypervisor_os', 'uptime',
    'config_drive', 'num_cpus', 'num_nics', 'num_disks'
)

INSTANCE_DIAGNOSTICS_LIST_FIELDS = {
    'disk_details': ('read_bytes', 'read_requests', 'write_bytes',
                     'write_requests', 'errors_count'),
    'cpu_details': ('id', 'time', 'utilisation'),
    'nic_details': ('mac_address', 'rx_octets', 'rx_errors', 'rx_drop',
                    'rx_packets', 'rx_rate', 'tx_octets', 'tx_errors',
                    'tx_drop', 'tx_packets', 'tx_rate')
}

INSTANCE_DIAGNOSTICS_OBJECT_FIELDS = {'memory_details': ('maximum', 'used')}


class ViewBuilder(common.ViewBuilder):

    @staticmethod
    def _get_obj_field(obj, field):
        if obj and obj.obj_attr_is_set(field):
            return getattr(obj, field)
        return None

    def instance_diagnostics(self, diagnostics):
        """Return a dictionary with instance diagnostics."""
        diagnostics_dict = {}
        for field in INSTANCE_DIAGNOSTICS_PRIMITIVE_FIELDS:
            diagnostics_dict[field] = self._get_obj_field(diagnostics, field)

        for list_field in INSTANCE_DIAGNOSTICS_LIST_FIELDS:
            diagnostics_dict[list_field] = []
            list_obj = getattr(diagnostics, list_field)
            for obj in list_obj:
                obj_dict = {}
                for field in INSTANCE_DIAGNOSTICS_LIST_FIELDS[list_field]:
                    obj_dict[field] = self._get_obj_field(obj, field)
                diagnostics_dict[list_field].append(obj_dict)

        for obj_field in INSTANCE_DIAGNOSTICS_OBJECT_FIELDS:
            diagnostics_dict[obj_field] = {}
            obj = self._get_obj_field(diagnostics, obj_field)
            for field in INSTANCE_DIAGNOSTICS_OBJECT_FIELDS[obj_field]:
                diagnostics_dict[obj_field][field] = self._get_obj_field(
                    obj, field)

        return diagnostics_dict

nova-17.0.1/nova/api/openstack/compute/views/migrations.py

# Copyright 2017 Huawei Technologies Co.,LTD.
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

from nova.api.openstack import common


class ViewBuilder(common.ViewBuilder):
    _collection_name = "os-migrations"

    def get_links(self, request, migrations):
        return self._get_collection_links(request, migrations,
                                          self._collection_name, 'uuid')
nova-17.0.1/nova/api/openstack/compute/views/addresses.py

# Copyright 2010-2011 OpenStack Foundation
# All Rights Reserved.
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import collections
import itertools

from nova.api.openstack import common


class ViewBuilder(common.ViewBuilder):
    """Models server addresses as a dictionary."""

    _collection_name = "addresses"

    def basic(self, ip, extend_address=False):
        """Return a dictionary describing an IP address."""
        address = {
            "version": ip["version"],
            "addr": ip["address"],
        }
        if extend_address:
            address.update({
                "OS-EXT-IPS:type": ip["type"],
                "OS-EXT-IPS-MAC:mac_addr": ip['mac_address'],
            })
        return address

    def show(self, network, label, extend_address=False):
        """Returns a dictionary describing a network."""
        all_ips = itertools.chain(network["ips"], network["floating_ips"])
        return {label: [self.basic(ip, extend_address) for ip in all_ips]}

    def index(self, networks, extend_address=False):
        """Return a dictionary describing a list of networks."""
        addresses = collections.OrderedDict()
        for label, network in networks.items():
            network_dict = self.show(network, label, extend_address)
            addresses[label] = network_dict[label]
        return dict(addresses=addresses)

nova-17.0.1/nova/api/openstack/compute/views/server_tags.py

# Copyright 2016 Mirantis Inc
# All Rights Reserved.
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

from nova.api.openstack import common
from nova.api.openstack.compute.views import servers


class ViewBuilder(common.ViewBuilder):
    _collection_name = "tags"

    def __init__(self):
        super(ViewBuilder, self).__init__()
        self._server_builder = servers.ViewBuilder()

    def get_location(self, request, server_id, tag_name):
        server_location = self._server_builder._get_href_link(
            request, server_id, "servers")
        return "%s/%s/%s" % (server_location, self._collection_name, tag_name)
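# --- Example (editor's sketch, not part of the nova tree) ---
# A self-contained mimic of the dictionary shape the addresses
# ViewBuilder.basic() above produces for one IP entry (field values are
# hypothetical; the real method lives on the ViewBuilder class):
def _basic(ip, extend_address=False):
    address = {'version': ip['version'], 'addr': ip['address']}
    if extend_address:
        address.update({
            'OS-EXT-IPS:type': ip['type'],
            'OS-EXT-IPS-MAC:mac_addr': ip['mac_address'],
        })
    return address

print(_basic({'version': 4, 'address': '10.0.0.3', 'type': 'fixed',
              'mac_address': 'aa:bb:cc:dd:ee:ff'}, extend_address=True))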
nova-17.0.1/nova/api/openstack/compute/views/images.py

# Copyright 2010-2011 OpenStack Foundation
# Copyright 2013 IBM Corp.
# All Rights Reserved.
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

from oslo_utils import strutils

from nova.api.openstack import common
from nova.image import glance
from nova import utils


class ViewBuilder(common.ViewBuilder):

    _collection_name = "images"

    def basic(self, request, image):
        """Return a dictionary with basic image attributes."""
        return {
            "image": {
                "id": image.get("id"),
                "name": image.get("name"),
                "links": self._get_links(request,
                                         image["id"],
                                         self._collection_name),
            },
        }

    def show(self, request, image):
        """Return a dictionary with image details."""
        image_dict = {
            "id": image.get("id"),
            "name": image.get("name"),
            "minRam": int(image.get("min_ram") or 0),
            "minDisk": int(image.get("min_disk") or 0),
            "metadata": image.get("properties", {}),
            "created": self._format_date(image.get("created_at")),
            "updated": self._format_date(image.get("updated_at")),
            "status": self._get_status(image),
            "progress": self._get_progress(image),
            "links": self._get_links(request,
                                     image["id"],
                                     self._collection_name),
        }

        instance_uuid = image.get("properties", {}).get("instance_uuid")

        if instance_uuid is not None:
            server_ref = self._get_href_link(request, instance_uuid,
                                             'servers')
            image_dict["server"] = {
                "id": instance_uuid,
                "links": [{
                    "rel": "self",
                    "href": server_ref,
                },
                {
                    "rel": "bookmark",
                    "href": self._get_bookmark_link(request,
                                                    instance_uuid,
                                                    'servers'),
                }],
            }

        auto_disk_config = image_dict['metadata'].get("auto_disk_config",
                                                      None)
        if auto_disk_config is not None:
            value = strutils.bool_from_string(auto_disk_config)
            image_dict["OS-DCF:diskConfig"] = (
                'AUTO' if value else 'MANUAL')

        return dict(image=image_dict)

    def detail(self, request, images):
        """Show a list of images with details."""
        list_func = self.show
        coll_name = self._collection_name + '/detail'
        return self._list_view(list_func, request, images, coll_name)

    def index(self, request, images):
        """Show a list of images with basic attributes."""
        list_func = self.basic
        coll_name = self._collection_name
        return self._list_view(list_func, request, images, coll_name)

    def _list_view(self, list_func, request, images, coll_name):
        """Provide a view for a list of images.

        :param list_func: Function used to format the image data
        :param request: API request
        :param images: List of images in dictionary format
        :param coll_name: Name of collection, used to generate the next link
                          for a pagination query
        :returns: Image reply data in dictionary format
        """
        image_list = [list_func(request, image)["image"] for image in images]
        images_links = self._get_collection_links(request, images, coll_name)
        images_dict = dict(images=image_list)
        if images_links:
            images_dict["images_links"] = images_links

        return images_dict

    def _get_links(self, request, identifier, collection_name):
        """Return a list of links for this image."""
        return [{
            "rel": "self",
            "href": self._get_href_link(request, identifier,
                                        collection_name),
        },
        {
            "rel": "bookmark",
            "href": self._get_bookmark_link(request, identifier,
                                            collection_name),
        },
        {
            "rel": "alternate",
            "type": "application/vnd.openstack.image",
            "href": self._get_alternate_link(request, identifier),
        }]

    def _get_alternate_link(self, request, identifier):
        """Create an alternate link for a specific image id."""
        glance_url = glance.generate_glance_url(
            request.environ['nova.context'])
        glance_url = self._update_glance_link_prefix(glance_url)
        return '/'.join([glance_url,
                         self._collection_name,
                         str(identifier)])

    @staticmethod
    def _format_date(dt):
        """Return standard format for a given datetime object."""
        if dt is not None:
            return utils.isotime(dt)

    @staticmethod
    def _get_status(image):
        """Update the status field to standardize format."""
        return {
            'active': 'ACTIVE',
            'queued': 'SAVING',
            'saving': 'SAVING',
            'deleted': 'DELETED',
            'pending_delete': 'DELETED',
            'killed': 'ERROR',
        }.get(image.get("status"), 'UNKNOWN')

    @staticmethod
    def _get_progress(image):
        return {
            "queued": 25,
            "saving": 50,
            "active": 100,
        }.get(image.get("status"), 0)

nova-17.0.1/nova/api/openstack/compute/views/usages.py

# Copyright 2016 OpenStack Foundation
# All Rights Reserved.
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

from nova.api.openstack import common


class ViewBuilder(common.ViewBuilder):
    _collection_name = "os-simple-tenant-usage"

    def get_links(self, request, server_usages, tenant_id=None):
        coll_name = self._collection_name
        if tenant_id:
            coll_name = self._collection_name + '/{}'.format(tenant_id)
        return self._get_collection_links(
            request, server_usages, coll_name, 'instance_id')
nova-17.0.1/nova/api/openstack/compute/views/hypervisors.py

# Copyright 2016 Kylin Cloud
# All Rights Reserved.
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

from nova.api.openstack import common


class ViewBuilder(common.ViewBuilder):
    _collection_name = "hypervisors"

    def get_links(self, request, hypervisors, detail=False):
        coll_name = (self._collection_name + '/detail' if detail
                     else self._collection_name)
        return self._get_collection_links(request, hypervisors, coll_name,
                                          'id')
context = request.environ['nova.context'] if update_is_public is None: update_is_public = context.can(fa_policies.BASE_POLICY_NAME, fatal=False) if update_rxtx_factor is None: update_rxtx_factor = context.can(fr_policies.BASE_POLICY_NAME, fatal=False) if update_is_public: flavor_dict['flavor'].update({ "os-flavor-access:is_public": flavor['is_public']}) if update_rxtx_factor: flavor_dict['flavor'].update( {"rxtx_factor": flavor['rxtx_factor'] or ""}) return flavor_dict def index(self, request, flavors): """Return the 'index' view of flavors.""" coll_name = self._collection_name include_description = api_version_request.is_supported( request, FLAVOR_DESCRIPTION_MICROVERSION) return self._list_view(self.basic, request, flavors, coll_name, include_description=include_description) def detail(self, request, flavors): """Return the 'detail' view of flavors.""" coll_name = self._collection_name + '/detail' include_description = api_version_request.is_supported( request, FLAVOR_DESCRIPTION_MICROVERSION) context = request.environ['nova.context'] update_is_public = context.can(fa_policies.BASE_POLICY_NAME, fatal=False) update_rxtx_factor = context.can(fr_policies.BASE_POLICY_NAME, fatal=False) return self._list_view(self.show, request, flavors, coll_name, include_description=include_description, update_is_public=update_is_public, update_rxtx_factor=update_rxtx_factor) def _list_view(self, func, request, flavors, coll_name, include_description=False, update_is_public=None, update_rxtx_factor=None): """Provide a view for a list of flavors. :param func: Function used to format the flavor data :param request: API request :param flavors: List of flavors in dictionary format :param coll_name: Name of collection, used to generate the next link for a pagination query :param include_description: If the flavor.description should be included in the response dict. :param update_is_public: If the flavor.is_public field should be included in the response dict. :param update_rxtx_factor: If the flavor.rxtx_factor field should be included in the response dict. :returns: Flavor reply data in dictionary format """ flavor_list = [func(request, flavor, include_description, update_is_public, update_rxtx_factor)["flavor"] for flavor in flavors] flavors_links = self._get_collection_links(request, flavors, coll_name, "flavorid") flavors_dict = dict(flavors=flavor_list) if flavors_links: flavors_dict["flavors_links"] = flavors_links return flavors_dict nova-17.0.1/nova/api/openstack/compute/views/__init__.py0000666000175000017500000000000013250073126023201 0ustar zuulzuul00000000000000nova-17.0.1/nova/api/openstack/compute/views/servers.py0000666000175000017500000003300213250073126023143 0ustar zuulzuul00000000000000# Copyright 2010-2011 OpenStack Foundation # Copyright 2011 Piston Cloud Computing, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import hashlib from oslo_log import log as logging from nova.api.openstack import api_version_request from nova.api.openstack import common from nova.api.openstack.compute.views import addresses as views_addresses from nova.api.openstack.compute.views import flavors as views_flavors from nova.api.openstack.compute.views import images as views_images from nova import context as nova_context from nova import exception from nova import objects from nova.policies import flavor_extra_specs as fes_policies from nova import utils LOG = logging.getLogger(__name__) class ViewBuilder(common.ViewBuilder): """Model a server API response as a python dictionary.""" _collection_name = "servers" _progress_statuses = ( "ACTIVE", "BUILD", "REBUILD", "RESIZE", "VERIFY_RESIZE", "MIGRATING", ) _fault_statuses = ( "ERROR", "DELETED" ) # These are the lazy-loadable instance attributes required for showing # details about an instance. Add to this list as new things need to be # shown. _show_expected_attrs = ['flavor', 'info_cache', 'metadata'] def __init__(self): """Initialize view builder.""" super(ViewBuilder, self).__init__() self._address_builder = views_addresses.ViewBuilder() self._image_builder = views_images.ViewBuilder() self._flavor_builder = views_flavors.ViewBuilder() def create(self, request, instance): """View that should be returned when an instance is created.""" return { "server": { "id": instance["uuid"], "links": self._get_links(request, instance["uuid"], self._collection_name), # NOTE(sdague): historically this was the # os-disk-config extension, but now that extensions # are gone, we merge these attributes here. "OS-DCF:diskConfig": ( 'AUTO' if instance.get('auto_disk_config') else 'MANUAL'), }, } def basic(self, request, instance, show_extra_specs=False): """Generic, non-detailed view of an instance.""" return { "server": { "id": instance["uuid"], "name": instance["display_name"], "links": self._get_links(request, instance["uuid"], self._collection_name), }, } def get_show_expected_attrs(self, expected_attrs=None): """Returns a list of lazy-loadable expected attributes used by show This should be used when getting the instances from the database so that the necessary attributes are pre-loaded before needing to build the show response where lazy-loading can fail if an instance was deleted. :param list expected_attrs: The list of expected attributes that will be requested in addition to what this view builder requires. This method will merge the two lists and return what should be ultimately used when getting an instance from the database. :returns: merged and sorted list of expected attributes """ if expected_attrs is None: expected_attrs = [] # NOTE(mriedem): We sort the list so we can have predictable test # results. return sorted(list(set(self._show_expected_attrs + expected_attrs))) def show(self, request, instance, extend_address=True, show_extra_specs=None): """Detailed view of a single instance.""" ip_v4 = instance.get('access_ip_v4') ip_v6 = instance.get('access_ip_v6') if show_extra_specs is None: # detail will pre-calculate this for us. If we're doing show, # then figure it out here. 
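            # Hedged sketch, not part of the original module: this flag only
            # matters from microversion 2.47 onward, where _get_flavor()
            # inlines the flavor details, e.g.
            #   "flavor": {"vcpus": 1, "ram": 512, "disk": 1, "ephemeral": 0,
            #              "swap": 0, "original_name": "m1.tiny"}
            # (plus "extra_specs" when the policy check passes), while older
            # microversions return only an id and a bookmark link.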
            show_extra_specs = False
            if api_version_request.is_supported(request, min_version='2.47'):
                context = request.environ['nova.context']
                show_extra_specs = context.can(
                    fes_policies.POLICY_ROOT % 'index', fatal=False)

        server = {
            "server": {
                "id": instance["uuid"],
                "name": instance["display_name"],
                "status": self._get_vm_status(instance),
                "tenant_id": instance.get("project_id") or "",
                "user_id": instance.get("user_id") or "",
                "metadata": self._get_metadata(instance),
                "hostId": self._get_host_id(instance) or "",
                "image": self._get_image(request, instance),
                "flavor": self._get_flavor(request, instance,
                                           show_extra_specs),
                "created": utils.isotime(instance["created_at"]),
                "updated": utils.isotime(instance["updated_at"]),
                "addresses": self._get_addresses(request, instance,
                                                 extend_address),
                "accessIPv4": str(ip_v4) if ip_v4 is not None else '',
                "accessIPv6": str(ip_v6) if ip_v6 is not None else '',
                "links": self._get_links(request,
                                         instance["uuid"],
                                         self._collection_name),
                # NOTE(sdague): historically this was the
                # os-disk-config extension, but now that extensions
                # are gone, we merge these attributes here.
                "OS-DCF:diskConfig": (
                    'AUTO' if instance.get('auto_disk_config') else 'MANUAL'),
            },
        }
        if server["server"]["status"] in self._fault_statuses:
            _inst_fault = self._get_fault(request, instance)
            if _inst_fault:
                server['server']['fault'] = _inst_fault

        if server["server"]["status"] in self._progress_statuses:
            server["server"]["progress"] = instance.get("progress", 0)

        if api_version_request.is_supported(request, min_version="2.9"):
            server["server"]["locked"] = (True if instance["locked_by"]
                                          else False)

        if api_version_request.is_supported(request, min_version="2.19"):
            server["server"]["description"] = instance.get(
                "display_description")

        if api_version_request.is_supported(request, min_version="2.26"):
            server["server"]["tags"] = [t.tag for t in instance.tags]

        return server

    def index(self, request, instances):
        """Show a list of servers without many details."""
        coll_name = self._collection_name
        return self._list_view(self.basic, request, instances, coll_name,
                               False)

    def detail(self, request, instances):
        """Detailed view of a list of instances."""
        coll_name = self._collection_name + '/detail'
        if api_version_request.is_supported(request, min_version='2.47'):
            # Determine if we should show extra_specs in the inlined flavor
            # once before we iterate the list of instances
            context = request.environ['nova.context']
            show_extra_specs = context.can(fes_policies.POLICY_ROOT % 'index',
                                           fatal=False)
        else:
            show_extra_specs = False
        return self._list_view(self.show, request, instances, coll_name,
                               show_extra_specs)

    def _list_view(self, func, request, servers, coll_name,
                   show_extra_specs):
        """Provide a view for a list of servers.
:param func: Function used to format the server data :param request: API request :param servers: List of servers in dictionary format :param coll_name: Name of collection, used to generate the next link for a pagination query :returns: Server data in dictionary format """ server_list = [func(request, server, show_extra_specs=show_extra_specs)["server"] for server in servers] servers_links = self._get_collection_links(request, servers, coll_name) servers_dict = dict(servers=server_list) if servers_links: servers_dict["servers_links"] = servers_links return servers_dict @staticmethod def _get_metadata(instance): return instance.metadata or {} @staticmethod def _get_vm_status(instance): # If the instance is deleted the vm and task states don't really matter if instance.get("deleted"): return "DELETED" return common.status_from_state(instance.get("vm_state"), instance.get("task_state")) @staticmethod def _get_host_id(instance): host = instance.get("host") project = str(instance.get("project_id")) if host: data = (project + host).encode('utf-8') sha_hash = hashlib.sha224(data) return sha_hash.hexdigest() def _get_addresses(self, request, instance, extend_address=False): context = request.environ["nova.context"] networks = common.get_networks_for_instance(context, instance) return self._address_builder.index(networks, extend_address)["addresses"] def _get_image(self, request, instance): image_ref = instance["image_ref"] if image_ref: image_id = str(common.get_id_from_href(image_ref)) bookmark = self._image_builder._get_bookmark_link(request, image_id, "images") return { "id": image_id, "links": [{ "rel": "bookmark", "href": bookmark, }], } else: return "" def _get_flavor_dict(self, request, instance_type, show_extra_specs): flavordict = { "vcpus": instance_type.vcpus, "ram": instance_type.memory_mb, "disk": instance_type.root_gb, "ephemeral": instance_type.ephemeral_gb, "swap": instance_type.swap, "original_name": instance_type.name } if show_extra_specs: flavordict['extra_specs'] = instance_type.extra_specs return flavordict def _get_flavor(self, request, instance, show_extra_specs): instance_type = instance.get_flavor() if not instance_type: LOG.warning("Instance has had its instance_type removed " "from the DB", instance=instance) return {} if api_version_request.is_supported(request, min_version="2.47"): return self._get_flavor_dict(request, instance_type, show_extra_specs) flavor_id = instance_type["flavorid"] flavor_bookmark = self._flavor_builder._get_bookmark_link(request, flavor_id, "flavors") return { "id": str(flavor_id), "links": [{ "rel": "bookmark", "href": flavor_bookmark, }], } def _load_fault(self, request, instance): try: mapping = objects.InstanceMapping.get_by_instance_uuid( request.environ['nova.context'], instance.uuid) if mapping.cell_mapping is not None: with nova_context.target_cell(instance._context, mapping.cell_mapping): return instance.fault except exception.InstanceMappingNotFound: pass # NOTE(danms): No instance mapping at all, or a mapping with no cell, # which means a legacy environment or instance. 
return instance.fault def _get_fault(self, request, instance): if 'fault' in instance: fault = instance.fault else: fault = self._load_fault(request, instance) if not fault: return None fault_dict = { "code": fault["code"], "created": utils.isotime(fault["created_at"]), "message": fault["message"], } if fault.get('details', None): is_admin = False context = request.environ["nova.context"] if context: is_admin = getattr(context, 'is_admin', False) if is_admin or fault['code'] != 500: fault_dict['details'] = fault["details"] return fault_dict nova-17.0.1/nova/api/openstack/compute/views/versions.py0000666000175000017500000000573613250073126023337 0ustar zuulzuul00000000000000# Copyright 2010-2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy from nova.api.openstack import common def get_view_builder(req): base_url = req.application_url return ViewBuilder(base_url) class ViewBuilder(common.ViewBuilder): def __init__(self, base_url): """:param base_url: url of the root wsgi application.""" self.prefix = self._update_compute_link_prefix(base_url) self.base_url = base_url def build_choices(self, VERSIONS, req): version_objs = [] for version in sorted(VERSIONS): version = VERSIONS[version] version_objs.append({ "id": version['id'], "status": version['status'], "links": [ { "rel": "self", "href": self.generate_href(version['id'], req.path), }, ], "media-types": version['media-types'], }) return dict(choices=version_objs) def build_versions(self, versions): version_objs = [] for version in sorted(versions.keys()): version = versions[version] version_objs.append({ "id": version['id'], "status": version['status'], "version": version['version'], "min_version": version['min_version'], "updated": version['updated'], "links": self._build_links(version), }) return dict(versions=version_objs) def build_version(self, version): reval = copy.deepcopy(version) reval['links'].insert(0, { "rel": "self", "href": self.prefix.rstrip('/') + '/', }) return dict(version=reval) def _build_links(self, version_data): """Generate a container of links that refer to the provided version.""" href = self.generate_href(version_data['id']) links = [ { "rel": "self", "href": href, }, ] return links def generate_href(self, version, path=None): """Create an url that refers to a specific version_number.""" if version.find('v2.1') == 0: version_number = 'v2.1' else: version_number = 'v2' path = path or '' return common.url_join(self.prefix, version_number, path) nova-17.0.1/nova/api/openstack/compute/views/instance_actions.py0000666000175000017500000000166613250073126025011 0ustar zuulzuul00000000000000# Copyright 2017 Huawei Technologies Co.,LTD. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.api.openstack import common class ViewBuilder(common.ViewBuilder): def get_links(self, request, server_id, instance_actions): collection_name = 'servers/%s/os-instance-actions' % server_id return self._get_collection_links(request, instance_actions, collection_name, 'request_id') nova-17.0.1/nova/api/openstack/compute/views/keypairs.py0000666000175000017500000000163113250073126023304 0ustar zuulzuul00000000000000# Copyright 2016 Mirantis Inc # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.api.openstack import common class ViewBuilder(common.ViewBuilder): _collection_name = "keypairs" def get_links(self, request, keypairs): return self._get_collection_links(request, keypairs, self._collection_name, 'name') nova-17.0.1/nova/api/openstack/compute/views/limits.py0000666000175000017500000000522613250073126022762 0ustar zuulzuul00000000000000# Copyright 2010-2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. class ViewBuilder(object): """OpenStack API base limits view builder.""" limit_names = {} def __init__(self): self.limit_names = { "ram": ["maxTotalRAMSize"], "instances": ["maxTotalInstances"], "cores": ["maxTotalCores"], "key_pairs": ["maxTotalKeypairs"], "floating_ips": ["maxTotalFloatingIps"], "metadata_items": ["maxServerMeta", "maxImageMeta"], "injected_files": ["maxPersonality"], "injected_file_content_bytes": ["maxPersonalitySize"], "security_groups": ["maxSecurityGroups"], "security_group_rules": ["maxSecurityGroupRules"], "server_groups": ["maxServerGroups"], "server_group_members": ["maxServerGroupMembers"] } def build(self, absolute_limits, filtered_limits=None, max_image_meta=True): absolute_limits = self._build_absolute_limits( absolute_limits, filtered_limits, max_image_meta=max_image_meta) output = { "limits": { "rate": [], "absolute": absolute_limits, }, } return output def _build_absolute_limits(self, absolute_limits, filtered_limits=None, max_image_meta=True): """Builder for absolute limits absolute_limits should be given as a dict of limits. For example: {"ram": 512, "gigabytes": 1024}. 
        filtered_limits is an optional list of limits to exclude from the
        result set.
        """
        filtered_limits = filtered_limits or []
        limits = {}
        for name, value in absolute_limits.items():
            if (name in self.limit_names and
                    value is not None and name not in filtered_limits):
                for limit_name in self.limit_names[name]:
                    if not max_image_meta and limit_name == "maxImageMeta":
                        continue
                    limits[limit_name] = value
        return limits
nova-17.0.1/nova/api/openstack/compute/volumes.py0000666000175000017500000005245613250073126022015 0ustar zuulzuul00000000000000# Copyright 2011 Justin Santa Barbara
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""The volumes extension."""

from oslo_utils import strutils
from webob import exc

from nova.api.openstack import api_version_request
from nova.api.openstack.api_version_request \
    import MAX_PROXY_API_SUPPORT_VERSION
from nova.api.openstack import common
from nova.api.openstack.compute.schemas import volumes as volumes_schema
from nova.api.openstack import wsgi
from nova.api import validation
from nova import compute
from nova.compute import vm_states
from nova import exception
from nova.i18n import _
from nova import objects
from nova.policies import volumes as vol_policies
from nova.policies import volumes_attachments as va_policies
from nova.volume import cinder

ALIAS = "os-volumes"


def _translate_volume_detail_view(context, vol):
    """Maps keys for volumes details view."""
    d = _translate_volume_summary_view(context, vol)

    # No additional data / lookups at the moment
    return d


def _translate_volume_summary_view(context, vol):
    """Maps keys for volumes summary view."""
    d = {}

    d['id'] = vol['id']
    d['status'] = vol['status']
    d['size'] = vol['size']
    d['availabilityZone'] = vol['availability_zone']
    d['createdAt'] = vol['created_at']

    if vol['attach_status'] == 'attached':
        # NOTE(ildikov): The attachments field in the volume info that
        # Cinder sends is converted to an OrderedDict with the
        # instance_uuid as key to make it easier for the multiattach
        # feature to check the required information. Multiattach will
        # be enabled in the Nova API in Newton.
# The format looks like the following: # attachments = {'instance_uuid': { # 'attachment_id': 'attachment_uuid', # 'mountpoint': '/dev/sda/ # } # } attachment = list(vol['attachments'].items())[0] d['attachments'] = [_translate_attachment_detail_view(vol['id'], attachment[0], attachment[1].get('mountpoint'))] else: d['attachments'] = [{}] d['displayName'] = vol['display_name'] d['displayDescription'] = vol['display_description'] if vol['volume_type_id'] and vol.get('volume_type'): d['volumeType'] = vol['volume_type']['name'] else: d['volumeType'] = vol['volume_type_id'] d['snapshotId'] = vol['snapshot_id'] if vol.get('volume_metadata'): d['metadata'] = vol.get('volume_metadata') else: d['metadata'] = {} return d class VolumeController(wsgi.Controller): """The Volumes API controller for the OpenStack API.""" def __init__(self): self.volume_api = cinder.API() super(VolumeController, self).__init__() @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.expected_errors(404) def show(self, req, id): """Return data about the given volume.""" context = req.environ['nova.context'] context.can(vol_policies.BASE_POLICY_NAME) try: vol = self.volume_api.get(context, id) except exception.VolumeNotFound as e: raise exc.HTTPNotFound(explanation=e.format_message()) return {'volume': _translate_volume_detail_view(context, vol)} @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.response(202) @wsgi.expected_errors((400, 404)) def delete(self, req, id): """Delete a volume.""" context = req.environ['nova.context'] context.can(vol_policies.BASE_POLICY_NAME) try: self.volume_api.delete(context, id) except exception.InvalidInput as e: raise exc.HTTPBadRequest(explanation=e.format_message()) except exception.VolumeNotFound as e: raise exc.HTTPNotFound(explanation=e.format_message()) @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @validation.query_schema(volumes_schema.index_query) @wsgi.expected_errors(()) def index(self, req): """Returns a summary list of volumes.""" return self._items(req, entity_maker=_translate_volume_summary_view) @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @validation.query_schema(volumes_schema.detail_query) @wsgi.expected_errors(()) def detail(self, req): """Returns a detailed list of volumes.""" return self._items(req, entity_maker=_translate_volume_detail_view) def _items(self, req, entity_maker): """Returns a list of volumes, transformed through entity_maker.""" context = req.environ['nova.context'] context.can(vol_policies.BASE_POLICY_NAME) volumes = self.volume_api.get_all(context) limited_list = common.limited(volumes, req) res = [entity_maker(context, vol) for vol in limited_list] return {'volumes': res} @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.expected_errors((400, 403, 404)) @validation.schema(volumes_schema.create) def create(self, req, body): """Creates a new volume.""" context = req.environ['nova.context'] context.can(vol_policies.BASE_POLICY_NAME) vol = body['volume'] vol_type = vol.get('volume_type') metadata = vol.get('metadata') snapshot_id = vol.get('snapshot_id', None) if snapshot_id is not None: try: snapshot = self.volume_api.get_snapshot(context, snapshot_id) except exception.SnapshotNotFound as e: raise exc.HTTPNotFound(explanation=e.format_message()) else: snapshot = None size = vol.get('size', None) if size is None and snapshot is not None: size = snapshot['volume_size'] availability_zone = vol.get('availability_zone') try: new_volume = 
self.volume_api.create( context, size, vol.get('display_name'), vol.get('display_description'), snapshot=snapshot, volume_type=vol_type, metadata=metadata, availability_zone=availability_zone ) except exception.InvalidInput as err: raise exc.HTTPBadRequest(explanation=err.format_message()) except exception.OverQuota as err: raise exc.HTTPForbidden(explanation=err.format_message()) # TODO(vish): Instance should be None at db layer instead of # trying to lazy load, but for now we turn it into # a dict to avoid an error. retval = _translate_volume_detail_view(context, dict(new_volume)) result = {'volume': retval} location = '%s/%s' % (req.url, new_volume['id']) return wsgi.ResponseObject(result, headers=dict(location=location)) def _translate_attachment_detail_view(volume_id, instance_uuid, mountpoint): """Maps keys for attachment details view.""" d = _translate_attachment_summary_view(volume_id, instance_uuid, mountpoint) # No additional data / lookups at the moment return d def _translate_attachment_summary_view(volume_id, instance_uuid, mountpoint): """Maps keys for attachment summary view.""" d = {} # NOTE(justinsb): We use the volume id as the id of the attachment object d['id'] = volume_id d['volumeId'] = volume_id d['serverId'] = instance_uuid if mountpoint: d['device'] = mountpoint return d def _check_request_version(req, min_version, method, server_id, server_state): if not api_version_request.is_supported(req, min_version=min_version): exc_inv = exception.InstanceInvalidState( attr='vm_state', instance_uuid=server_id, state=server_state, method=method) common.raise_http_conflict_for_instance_invalid_state( exc_inv, method, server_id) class VolumeAttachmentController(wsgi.Controller): """The volume attachment API controller for the OpenStack API. A child resource of the server. Note that we use the volume id as the ID of the attachment (though this is not guaranteed externally) """ def __init__(self): self.compute_api = compute.API() self.volume_api = cinder.API() super(VolumeAttachmentController, self).__init__() @wsgi.expected_errors(404) @validation.query_schema(volumes_schema.index_query) def index(self, req, server_id): """Returns the list of volume attachments for a given instance.""" context = req.environ['nova.context'] context.can(va_policies.POLICY_ROOT % 'index') instance = common.get_instance(self.compute_api, context, server_id) bdms = objects.BlockDeviceMappingList.get_by_instance_uuid( context, instance.uuid) limited_list = common.limited(bdms, req) results = [] for bdm in limited_list: if bdm.volume_id: va = _translate_attachment_summary_view(bdm.volume_id, bdm.instance_uuid, bdm.device_name) results.append(va) return {'volumeAttachments': results} @wsgi.expected_errors(404) def show(self, req, server_id, id): """Return data about the given volume attachment.""" context = req.environ['nova.context'] context.can(va_policies.POLICY_ROOT % 'show') volume_id = id instance = common.get_instance(self.compute_api, context, server_id) try: bdm = objects.BlockDeviceMapping.get_by_volume_and_instance( context, volume_id, instance.uuid) except exception.VolumeBDMNotFound: msg = (_("Instance %(instance)s is not attached " "to volume %(volume)s") % {'instance': server_id, 'volume': volume_id}) raise exc.HTTPNotFound(explanation=msg) assigned_mountpoint = bdm.device_name return {'volumeAttachment': _translate_attachment_detail_view( volume_id, instance.uuid, assigned_mountpoint)} # TODO(mriedem): This API should return a 202 instead of a 200 response. 
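    # Illustrative request/response sketch, not part of the original module
    # (the route name is assumed from the standard Nova API layout):
    #   POST /servers/{server_id}/os-volume_attachments
    #   {"volumeAttachment": {"volumeId": "<vol-id>", "device": "/dev/vdb"}}
    # returns, since the attach itself completes asynchronously:
    #   {"volumeAttachment": {"id": "<vol-id>", "serverId": "<server-id>",
    #                         "volumeId": "<vol-id>", "device": "/dev/vdb"}}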
    @wsgi.expected_errors((400, 404, 409))
    @validation.schema(volumes_schema.create_volume_attachment, '2.0', '2.48')
    @validation.schema(volumes_schema.create_volume_attachment_v249, '2.49')
    def create(self, req, server_id, body):
        """Attach a volume to an instance."""
        context = req.environ['nova.context']
        context.can(va_policies.POLICY_ROOT % 'create')

        volume_id = body['volumeAttachment']['volumeId']
        device = body['volumeAttachment'].get('device')
        tag = body['volumeAttachment'].get('tag')

        instance = common.get_instance(self.compute_api, context, server_id)

        if instance.vm_state in (vm_states.SHELVED,
                                 vm_states.SHELVED_OFFLOADED):
            _check_request_version(req, '2.20', 'attach_volume', server_id,
                                   instance.vm_state)
        try:
            supports_multiattach = common.supports_multiattach_volume(req)
            device = self.compute_api.attach_volume(
                context, instance, volume_id, device, tag=tag,
                supports_multiattach=supports_multiattach)
        except (exception.InstanceUnknownCell,
                exception.VolumeNotFound) as e:
            raise exc.HTTPNotFound(explanation=e.format_message())
        except (exception.InstanceIsLocked,
                exception.DevicePathInUse,
                exception.MultiattachNotSupportedByVirtDriver,
                exception.MultiattachSupportNotYetAvailable) as e:
            raise exc.HTTPConflict(explanation=e.format_message())
        except exception.InstanceInvalidState as state_error:
            common.raise_http_conflict_for_instance_invalid_state(state_error,
                    'attach_volume', server_id)
        except (exception.InvalidVolume,
                exception.InvalidDevicePath,
                exception.InvalidInput,
                exception.TaggedAttachmentNotSupported,
                exception.MultiattachNotSupportedOldMicroversion,
                exception.MultiattachToShelvedNotSupported) as e:
            raise exc.HTTPBadRequest(explanation=e.format_message())

        # The attach is async
        attachment = {}
        attachment['id'] = volume_id
        attachment['serverId'] = server_id
        attachment['volumeId'] = volume_id
        attachment['device'] = device

        return {'volumeAttachment': attachment}

    @wsgi.response(202)
    @wsgi.expected_errors((400, 404, 409))
    @validation.schema(volumes_schema.update_volume_attachment)
    def update(self, req, server_id, id, body):
        context = req.environ['nova.context']
        context.can(va_policies.POLICY_ROOT % 'update')

        old_volume_id = id
        try:
            old_volume = self.volume_api.get(context, old_volume_id)
        except exception.VolumeNotFound as e:
            raise exc.HTTPNotFound(explanation=e.format_message())

        new_volume_id = body['volumeAttachment']['volumeId']
        try:
            new_volume = self.volume_api.get(context, new_volume_id)
        except exception.VolumeNotFound as e:
            # NOTE: This BadRequest differs from the NotFound above even
            # though both stem from the same VolumeNotFound exception. This
            # is intentional: new_volume_id comes from the request body, and
            # per the API-WG guideline a nonexistent resource referenced in
            # the body (rather than the URI) should yield a 400 Bad Request.
            # old_volume_id, on the other hand, comes from the URI, so a
            # NotFound response is the valid one when it does not exist.
raise exc.HTTPBadRequest(explanation=e.format_message()) instance = common.get_instance(self.compute_api, context, server_id) try: self.compute_api.swap_volume(context, instance, old_volume, new_volume) except exception.VolumeBDMNotFound as e: raise exc.HTTPNotFound(explanation=e.format_message()) except exception.InvalidVolume as e: raise exc.HTTPBadRequest(explanation=e.format_message()) except exception.InstanceIsLocked as e: raise exc.HTTPConflict(explanation=e.format_message()) except exception.InstanceInvalidState as state_error: common.raise_http_conflict_for_instance_invalid_state(state_error, 'swap_volume', server_id) @wsgi.response(202) @wsgi.expected_errors((400, 403, 404, 409)) def delete(self, req, server_id, id): """Detach a volume from an instance.""" context = req.environ['nova.context'] context.can(va_policies.POLICY_ROOT % 'delete') volume_id = id instance = common.get_instance(self.compute_api, context, server_id, expected_attrs=['device_metadata']) if instance.vm_state in (vm_states.SHELVED, vm_states.SHELVED_OFFLOADED): _check_request_version(req, '2.20', 'detach_volume', server_id, instance.vm_state) try: volume = self.volume_api.get(context, volume_id) except exception.VolumeNotFound as e: raise exc.HTTPNotFound(explanation=e.format_message()) try: bdm = objects.BlockDeviceMapping.get_by_volume_and_instance( context, volume_id, instance.uuid) except exception.VolumeBDMNotFound: msg = (_("Instance %(instance)s is not attached " "to volume %(volume)s") % {'instance': server_id, 'volume': volume_id}) raise exc.HTTPNotFound(explanation=msg) if bdm.is_root: msg = _("Cannot detach a root device volume") raise exc.HTTPForbidden(explanation=msg) try: self.compute_api.detach_volume(context, instance, volume) except exception.InvalidVolume as e: raise exc.HTTPBadRequest(explanation=e.format_message()) except exception.InstanceUnknownCell as e: raise exc.HTTPNotFound(explanation=e.format_message()) except exception.InvalidInput as e: raise exc.HTTPBadRequest(explanation=e.format_message()) except exception.InstanceIsLocked as e: raise exc.HTTPConflict(explanation=e.format_message()) except exception.InstanceInvalidState as state_error: common.raise_http_conflict_for_instance_invalid_state(state_error, 'detach_volume', server_id) def _translate_snapshot_detail_view(context, vol): """Maps keys for snapshots details view.""" d = _translate_snapshot_summary_view(context, vol) # NOTE(gagupta): No additional data / lookups at the moment return d def _translate_snapshot_summary_view(context, vol): """Maps keys for snapshots summary view.""" d = {} d['id'] = vol['id'] d['volumeId'] = vol['volume_id'] d['status'] = vol['status'] # NOTE(gagupta): We map volume_size as the snapshot size d['size'] = vol['volume_size'] d['createdAt'] = vol['created_at'] d['displayName'] = vol['display_name'] d['displayDescription'] = vol['display_description'] return d class SnapshotController(wsgi.Controller): """The Snapshots API controller for the OpenStack API.""" def __init__(self): self.volume_api = cinder.API() super(SnapshotController, self).__init__() @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.expected_errors(404) def show(self, req, id): """Return data about the given snapshot.""" context = req.environ['nova.context'] context.can(vol_policies.BASE_POLICY_NAME) try: vol = self.volume_api.get_snapshot(context, id) except exception.SnapshotNotFound as e: raise exc.HTTPNotFound(explanation=e.format_message()) return {'snapshot': _translate_snapshot_detail_view(context, 
vol)} @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.response(202) @wsgi.expected_errors(404) def delete(self, req, id): """Delete a snapshot.""" context = req.environ['nova.context'] context.can(vol_policies.BASE_POLICY_NAME) try: self.volume_api.delete_snapshot(context, id) except exception.SnapshotNotFound as e: raise exc.HTTPNotFound(explanation=e.format_message()) @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @validation.query_schema(volumes_schema.index_query) @wsgi.expected_errors(()) def index(self, req): """Returns a summary list of snapshots.""" return self._items(req, entity_maker=_translate_snapshot_summary_view) @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @validation.query_schema(volumes_schema.detail_query) @wsgi.expected_errors(()) def detail(self, req): """Returns a detailed list of snapshots.""" return self._items(req, entity_maker=_translate_snapshot_detail_view) def _items(self, req, entity_maker): """Returns a list of snapshots, transformed through entity_maker.""" context = req.environ['nova.context'] context.can(vol_policies.BASE_POLICY_NAME) snapshots = self.volume_api.get_all_snapshots(context) limited_list = common.limited(snapshots, req) res = [entity_maker(context, snapshot) for snapshot in limited_list] return {'snapshots': res} @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.expected_errors((400, 403)) @validation.schema(volumes_schema.snapshot_create) def create(self, req, body): """Creates a new snapshot.""" context = req.environ['nova.context'] context.can(vol_policies.BASE_POLICY_NAME) snapshot = body['snapshot'] volume_id = snapshot['volume_id'] force = snapshot.get('force', False) force = strutils.bool_from_string(force, strict=True) if force: create_func = self.volume_api.create_snapshot_force else: create_func = self.volume_api.create_snapshot try: new_snapshot = create_func(context, volume_id, snapshot.get('display_name'), snapshot.get('display_description')) except exception.OverQuota as e: raise exc.HTTPForbidden(explanation=e.format_message()) retval = _translate_snapshot_detail_view(context, new_snapshot) return {'snapshot': retval} nova-17.0.1/nova/api/openstack/compute/admin_password.py0000666000175000017500000000504613250073126023336 0ustar zuulzuul00000000000000# Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
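# Illustrative sketch only, not part of the original module: the
# changePassword server action implemented below expects a request body like
#   {"changePassword": {"adminPass": "<new-password>"}}
# and replies with 202, although the password has already been changed by
# the time the response is returned (see the TODO on the handler).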
from webob import exc from nova.api.openstack import common from nova.api.openstack.compute.schemas import admin_password from nova.api.openstack import wsgi from nova.api import validation from nova import compute from nova import exception from nova.i18n import _ from nova.policies import admin_password as ap_policies class AdminPasswordController(wsgi.Controller): def __init__(self, *args, **kwargs): super(AdminPasswordController, self).__init__(*args, **kwargs) self.compute_api = compute.API() # TODO(eliqiao): Here should be 204(No content) instead of 202 by v2.1+ # microversions because the password has been changed when returning # a response. @wsgi.action('changePassword') @wsgi.response(202) @wsgi.expected_errors((404, 409, 501)) @validation.schema(admin_password.change_password) def change_password(self, req, id, body): context = req.environ['nova.context'] instance = common.get_instance(self.compute_api, context, id) context.can(ap_policies.BASE_POLICY_NAME, target={'user_id': instance.user_id, 'project_id': instance.project_id}) password = body['changePassword']['adminPass'] try: self.compute_api.set_admin_password(context, instance, password) except exception.InstanceUnknownCell as e: raise exc.HTTPNotFound(explanation=e.format_message()) except (exception.InstancePasswordSetFailed, exception.SetAdminPasswdNotSupported, exception.InstanceAgentNotEnabled) as e: raise exc.HTTPConflict(explanation=e.format_message()) except exception.InstanceInvalidState as e: raise common.raise_http_conflict_for_instance_invalid_state( e, 'changePassword', id) except NotImplementedError: msg = _("Unable to set password on instance") common.raise_feature_not_supported(msg=msg) nova-17.0.1/nova/api/openstack/compute/servers.py0000666000175000017500000015111113250073126022010 0ustar zuulzuul00000000000000# Copyright 2010 OpenStack Foundation # Copyright 2011 Piston Cloud Computing, Inc # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import copy from oslo_log import log as logging import oslo_messaging as messaging from oslo_utils import strutils from oslo_utils import timeutils from oslo_utils import uuidutils import six import webob from webob import exc from nova.api.openstack import api_version_request from nova.api.openstack import common from nova.api.openstack.compute import block_device_mapping from nova.api.openstack.compute import block_device_mapping_v1 from nova.api.openstack.compute import config_drive from nova.api.openstack.compute import helpers from nova.api.openstack.compute import keypairs from nova.api.openstack.compute import multiple_create from nova.api.openstack.compute import scheduler_hints from nova.api.openstack.compute.schemas import servers as schema_servers from nova.api.openstack.compute import security_groups from nova.api.openstack.compute import user_data from nova.api.openstack.compute.views import servers as views_servers from nova.api.openstack import wsgi from nova.api import validation from nova import compute from nova.compute import flavors from nova.compute import utils as compute_utils import nova.conf from nova import context as nova_context from nova import exception from nova.i18n import _ from nova.image import api as image_api from nova import objects from nova.objects import service as service_obj from nova.policies import servers as server_policies from nova import utils TAG_SEARCH_FILTERS = ('tags', 'tags-any', 'not-tags', 'not-tags-any') DEVICE_TAGGING_MIN_COMPUTE_VERSION = 14 CONF = nova.conf.CONF LOG = logging.getLogger(__name__) class ServersController(wsgi.Controller): """The Server API base controller class for the OpenStack API.""" _view_builder_class = views_servers.ViewBuilder schema_server_create = schema_servers.base_create schema_server_update = schema_servers.base_update schema_server_rebuild = schema_servers.base_rebuild schema_server_create_v20 = schema_servers.base_create_v20 schema_server_update_v20 = schema_servers.base_update_v20 schema_server_rebuild_v20 = schema_servers.base_rebuild_v20 schema_server_create_v219 = schema_servers.base_create_v219 schema_server_update_v219 = schema_servers.base_update_v219 schema_server_rebuild_v219 = schema_servers.base_rebuild_v219 schema_server_rebuild_v254 = schema_servers.base_rebuild_v254 schema_server_rebuild_v257 = schema_servers.base_rebuild_v257 schema_server_create_v232 = schema_servers.base_create_v232 schema_server_create_v237 = schema_servers.base_create_v237 schema_server_create_v242 = schema_servers.base_create_v242 schema_server_create_v252 = schema_servers.base_create_v252 schema_server_create_v257 = schema_servers.base_create_v257 # NOTE(alex_xu): Please do not add more items into this list. This list # should be removed in the future. schema_func_list = [ block_device_mapping.get_server_create_schema, block_device_mapping_v1.get_server_create_schema, config_drive.get_server_create_schema, keypairs.get_server_create_schema, multiple_create.get_server_create_schema, scheduler_hints.get_server_create_schema, security_groups.get_server_create_schema, user_data.get_server_create_schema, ] # NOTE(alex_xu): Please do not add more items into this list. This list # should be removed in the future. 
server_create_func_list = [ block_device_mapping.server_create, block_device_mapping_v1.server_create, config_drive.server_create, keypairs.server_create, multiple_create.server_create, scheduler_hints.server_create, security_groups.server_create, user_data.server_create, ] @staticmethod def _add_location(robj): # Just in case... if 'server' not in robj.obj: return robj link = [l for l in robj.obj['server']['links'] if l['rel'] == 'self'] if link: robj['Location'] = utils.utf8(link[0]['href']) # Convenience return return robj def __init__(self, **kwargs): super(ServersController, self).__init__(**kwargs) self.compute_api = compute.API() # TODO(alex_xu): The final goal is that merging all of # extended json-schema into server main json-schema. self._create_schema(self.schema_server_create_v257, '2.57') self._create_schema(self.schema_server_create_v252, '2.52') self._create_schema(self.schema_server_create_v242, '2.42') self._create_schema(self.schema_server_create_v237, '2.37') self._create_schema(self.schema_server_create_v232, '2.32') self._create_schema(self.schema_server_create_v219, '2.19') self._create_schema(self.schema_server_create, '2.1') self._create_schema(self.schema_server_create_v20, '2.0') @wsgi.expected_errors((400, 403)) @validation.query_schema(schema_servers.query_params_v226, '2.26') @validation.query_schema(schema_servers.query_params_v21, '2.1', '2.25') def index(self, req): """Returns a list of server names and ids for a given user.""" context = req.environ['nova.context'] context.can(server_policies.SERVERS % 'index') try: servers = self._get_servers(req, is_detail=False) except exception.Invalid as err: raise exc.HTTPBadRequest(explanation=err.format_message()) return servers @wsgi.expected_errors((400, 403)) @validation.query_schema(schema_servers.query_params_v226, '2.26') @validation.query_schema(schema_servers.query_params_v21, '2.1', '2.25') def detail(self, req): """Returns a list of server details for a given user.""" context = req.environ['nova.context'] context.can(server_policies.SERVERS % 'detail') try: servers = self._get_servers(req, is_detail=True) except exception.Invalid as err: raise exc.HTTPBadRequest(explanation=err.format_message()) return servers def _get_servers(self, req, is_detail): """Returns a list of servers, based on any search options specified.""" search_opts = {} search_opts.update(req.GET) context = req.environ['nova.context'] remove_invalid_options(context, search_opts, self._get_server_search_options(req)) for search_opt in search_opts: if (search_opt in schema_servers.JOINED_TABLE_QUERY_PARAMS_SERVERS.keys() or search_opt.startswith('_')): msg = _("Invalid filter field: %s.") % search_opt raise exc.HTTPBadRequest(explanation=msg) # Verify search by 'status' contains a valid status. # Convert it to filter by vm_state or task_state for compute_api. # For non-admin user, vm_state and task_state are filtered through # remove_invalid_options function, based on value of status field. # Set value to vm_state and task_state to make search simple. search_opts.pop('status', None) if 'status' in req.GET.keys(): statuses = req.GET.getall('status') states = common.task_and_vm_state_from_status(statuses) vm_state, task_state = states if not vm_state and not task_state: if api_version_request.is_supported(req, min_version='2.38'): msg = _('Invalid status value') raise exc.HTTPBadRequest(explanation=msg) return {'servers': []} search_opts['vm_state'] = vm_state # When we search by vm state, task state will return 'default'. 
            # So we don't need task_state search_opt.
            if 'default' not in task_state:
                search_opts['task_state'] = task_state

        if 'changes-since' in search_opts:
            search_opts['changes-since'] = timeutils.parse_isotime(
                search_opts['changes-since'])

        # By default, compute's get_all() will return deleted instances.
        # If an admin hasn't specified a 'deleted' search option, we need
        # to filter out deleted instances by setting the filter ourselves.
        # ... Unless 'changes-since' is specified, because 'changes-since'
        # should return recently deleted instances according to the API spec.

        if 'deleted' not in search_opts:
            if 'changes-since' not in search_opts:
                # No 'changes-since', so we only want non-deleted servers
                search_opts['deleted'] = False
        else:
            # Convert the deleted filter value to a valid boolean.
            # Return non-deleted servers if an invalid value
            # is passed with the deleted filter.
            search_opts['deleted'] = strutils.bool_from_string(
                search_opts['deleted'], default=False)

        if search_opts.get("vm_state") == ['deleted']:
            if context.is_admin:
                search_opts['deleted'] = True
            else:
                msg = _("Only administrators may list deleted instances")
                raise exc.HTTPForbidden(explanation=msg)

        if api_version_request.is_supported(req, min_version='2.26'):
            for tag_filter in TAG_SEARCH_FILTERS:
                if tag_filter in search_opts:
                    search_opts[tag_filter] = search_opts[
                        tag_filter].split(',')

        # If tenant_id is passed as a search parameter this should
        # imply that all_tenants is also enabled unless explicitly
        # disabled. Note that the tenant_id parameter is filtered out
        # by remove_invalid_options above unless the requestor is an
        # admin.
        # TODO(gmann): The 'all_tenants' flag should not be required while
        # searching with 'tenant_id'. Ref bug# 1185290; a microversion bump
        # is needed to achieve the above-mentioned behavior by uncommenting
        # the code below.
        # if 'tenant_id' in search_opts and 'all_tenants' not in search_opts:
        # We do not need to add the all_tenants flag if the tenant
        # id associated with the token is the tenant id
        # specified. This is done so a request that does not need
        # the all_tenants flag does not fail because of lack of
        # policy permission for compute:get_all_tenants when it
        # doesn't actually need it.
        # if context.project_id != search_opts.get('tenant_id'):
        # search_opts['all_tenants'] = 1

        all_tenants = common.is_all_tenants(search_opts)
        # use the boolean from here on out, so remove the entry from
        # search_opts if it's present
        search_opts.pop('all_tenants', None)

        elevated = None
        if all_tenants:
            if is_detail:
                context.can(server_policies.SERVERS %
                            'detail:get_all_tenants')
            else:
                context.can(server_policies.SERVERS %
                            'index:get_all_tenants')
            elevated = context.elevated()
        else:
            # As explained in lp:#1185290, if `all_tenants` is not passed
            # we must ignore the `tenant_id` search option. As explained
            # in the code comment above, any change to this behavior would
            # require a microversion bump.
search_opts.pop('tenant_id', None) if context.project_id: search_opts['project_id'] = context.project_id else: search_opts['user_id'] = context.user_id limit, marker = common.get_limit_and_marker(req) sort_keys, sort_dirs = common.get_sort_params(req.params) sort_keys, sort_dirs = remove_invalid_sort_keys( context, sort_keys, sort_dirs, schema_servers.SERVER_LIST_IGNORE_SORT_KEY, ('host', 'node')) expected_attrs = [] if is_detail: expected_attrs.append('services') if api_version_request.is_supported(req, '2.26'): expected_attrs.append("tags") # merge our expected attrs with what the view builder needs for # showing details expected_attrs = self._view_builder.get_show_expected_attrs( expected_attrs) try: instance_list = self.compute_api.get_all(elevated or context, search_opts=search_opts, limit=limit, marker=marker, expected_attrs=expected_attrs, sort_keys=sort_keys, sort_dirs=sort_dirs) except exception.MarkerNotFound: msg = _('marker [%s] not found') % marker raise exc.HTTPBadRequest(explanation=msg) except exception.FlavorNotFound: LOG.debug("Flavor '%s' could not be found ", search_opts['flavor']) instance_list = objects.InstanceList() if is_detail: instance_list._context = context instance_list.fill_faults() response = self._view_builder.detail(req, instance_list) else: response = self._view_builder.index(req, instance_list) req.cache_db_instances(instance_list) return response def _get_server(self, context, req, instance_uuid, is_detail=False): """Utility function for looking up an instance by uuid. :param context: request context for auth :param req: HTTP request. The instance is cached in this request. :param instance_uuid: UUID of the server instance to get :param is_detail: True if you plan on showing the details of the instance in the response, False otherwise. """ expected_attrs = ['flavor', 'numa_topology'] if is_detail: if api_version_request.is_supported(req, '2.26'): expected_attrs.append("tags") expected_attrs = self._view_builder.get_show_expected_attrs( expected_attrs) instance = common.get_instance(self.compute_api, context, instance_uuid, expected_attrs=expected_attrs) req.cache_db_instance(instance) return instance @staticmethod def _validate_network_id(net_id, network_uuids): """Validates that a requested network id. This method performs two checks: 1. That the network id is in the proper uuid format. 2. That the network is not a duplicate when using nova-network. :param net_id: The network id to validate. :param network_uuids: A running list of requested network IDs that have passed validation already. :raises: webob.exc.HTTPBadRequest if validation fails """ if not uuidutils.is_uuid_like(net_id): # NOTE(mriedem): Neutron would allow a network id with a br- prefix # back in Folsom so continue to honor that. # TODO(mriedem): Need to figure out if this is still a valid case. br_uuid = net_id.split('-', 1)[-1] if not uuidutils.is_uuid_like(br_uuid): msg = _("Bad networks format: network uuid is " "not in proper format (%s)") % net_id raise exc.HTTPBadRequest(explanation=msg) # duplicate networks are allowed only for neutron v2.0 if net_id in network_uuids and not utils.is_neutron(): expl = _("Duplicate networks (%s) are not allowed") % net_id raise exc.HTTPBadRequest(explanation=expl) def _get_requested_networks(self, requested_networks, supports_device_tagging=False): """Create a list of requested networks from the networks attribute.""" # Starting in the 2.37 microversion, requested_networks is either a # list or a string enum with value 'auto' or 'none'. 
The auto/none # values are verified via jsonschema so we don't check them again here. if isinstance(requested_networks, six.string_types): return objects.NetworkRequestList( objects=[objects.NetworkRequest( network_id=requested_networks)]) networks = [] network_uuids = [] for network in requested_networks: request = objects.NetworkRequest() try: # fixed IP address is optional # if the fixed IP address is not provided then # it will use one of the available IP address from the network request.address = network.get('fixed_ip', None) request.port_id = network.get('port', None) request.tag = network.get('tag', None) if request.tag and not supports_device_tagging: msg = _('Network interface tags are not yet supported.') raise exc.HTTPBadRequest(explanation=msg) if request.port_id: request.network_id = None if not utils.is_neutron(): # port parameter is only for neutron v2.0 msg = _("Unknown argument: port") raise exc.HTTPBadRequest(explanation=msg) if request.address is not None: msg = _("Specified Fixed IP '%(addr)s' cannot be used " "with port '%(port)s': the two cannot be " "specified together.") % { "addr": request.address, "port": request.port_id} raise exc.HTTPBadRequest(explanation=msg) else: request.network_id = network['uuid'] self._validate_network_id( request.network_id, network_uuids) network_uuids.append(request.network_id) networks.append(request) except KeyError as key: expl = _('Bad network format: missing %s') % key raise exc.HTTPBadRequest(explanation=expl) except TypeError: expl = _('Bad networks format') raise exc.HTTPBadRequest(explanation=expl) return objects.NetworkRequestList(objects=networks) @wsgi.expected_errors(404) def show(self, req, id): """Returns server details by server id.""" context = req.environ['nova.context'] context.can(server_policies.SERVERS % 'show') instance = self._get_server(context, req, id, is_detail=True) return self._view_builder.show(req, instance) @wsgi.response(202) @wsgi.expected_errors((400, 403, 409)) @validation.schema(schema_server_create_v20, '2.0', '2.0') @validation.schema(schema_server_create, '2.1', '2.18') @validation.schema(schema_server_create_v219, '2.19', '2.31') @validation.schema(schema_server_create_v232, '2.32', '2.36') @validation.schema(schema_server_create_v237, '2.37', '2.41') @validation.schema(schema_server_create_v242, '2.42', '2.51') @validation.schema(schema_server_create_v252, '2.52', '2.56') @validation.schema(schema_server_create_v257, '2.57') def create(self, req, body): """Creates a new server for a given user.""" context = req.environ['nova.context'] server_dict = body['server'] password = self._get_server_admin_password(server_dict) name = common.normalize_name(server_dict['name']) description = name if api_version_request.is_supported(req, min_version='2.19'): description = server_dict.get('description') # Arguments to be passed to instance create function create_kwargs = {} # TODO(alex_xu): This is for back-compatible with stevedore # extension interface. But the final goal is that merging # all of extended code into ServersController. 
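        # Hedged sketch, not from the original source, of the 'networks'
        # request forms that _get_requested_networks() above accepts:
        #   "networks": "auto"    # >= 2.37: let Nova allocate
        #   "networks": "none"    # >= 2.37: no allocation at all
        #   "networks": [{"uuid": "<net-id>", "fixed_ip": "10.0.0.5"}]
        #   "networks": [{"port": "<port-id>"}]   # pre-created Neutron port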
self._create_by_func_list(server_dict, create_kwargs, body) availability_zone = server_dict.pop("availability_zone", None) if api_version_request.is_supported(req, min_version='2.52'): create_kwargs['tags'] = server_dict.get('tags') helpers.translate_attributes(helpers.CREATE, server_dict, create_kwargs) target = { 'project_id': context.project_id, 'user_id': context.user_id, 'availability_zone': availability_zone} context.can(server_policies.SERVERS % 'create', target) # TODO(Shao He, Feng) move this policy check to os-availability-zone # extension after refactor it. parse_az = self.compute_api.parse_availability_zone try: availability_zone, host, node = parse_az(context, availability_zone) except exception.InvalidInput as err: raise exc.HTTPBadRequest(explanation=six.text_type(err)) if host or node: context.can(server_policies.SERVERS % 'create:forced_host', {}) min_compute_version = service_obj.get_minimum_version_all_cells( nova_context.get_admin_context(), ['nova-compute']) supports_device_tagging = (min_compute_version >= DEVICE_TAGGING_MIN_COMPUTE_VERSION) block_device_mapping = create_kwargs.get("block_device_mapping") # TODO(Shao He, Feng) move this policy check to os-block-device-mapping # extension after refactor it. if block_device_mapping: context.can(server_policies.SERVERS % 'create:attach_volume', target) for bdm in block_device_mapping: if bdm.get('tag', None) and not supports_device_tagging: msg = _('Block device tags are not yet supported.') raise exc.HTTPBadRequest(explanation=msg) image_uuid = self._image_from_req_data(server_dict, create_kwargs) # NOTE(cyeoh): Although upper layer can set the value of # return_reservation_id in order to request that a reservation # id be returned to the client instead of the newly created # instance information we do not want to pass this parameter # to the compute create call which always returns both. We use # this flag after the instance create call to determine what # to return to the client return_reservation_id = create_kwargs.pop('return_reservation_id', False) requested_networks = server_dict.get('networks', None) if requested_networks is not None: requested_networks = self._get_requested_networks( requested_networks, supports_device_tagging) # Skip policy check for 'create:attach_network' if there is no # network allocation request. 
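# With microversion >= 2.37, requested_networks can also be the
# 'none' sentinel, in which case NetworkRequestList.no_allocate is
# True and the 'create:attach_network' policy check below is
# skipped; e.g. a body of {"networks": "none"} (hypothetical) boots
# a server with no ports attached.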
if requested_networks and len(requested_networks) and \ not requested_networks.no_allocate: context.can(server_policies.SERVERS % 'create:attach_network', target) flavor_id = self._flavor_id_from_req_data(body) try: inst_type = flavors.get_flavor_by_flavor_id( flavor_id, ctxt=context, read_deleted="no") supports_multiattach = common.supports_multiattach_volume(req) (instances, resv_id) = self.compute_api.create(context, inst_type, image_uuid, display_name=name, display_description=description, availability_zone=availability_zone, forced_host=host, forced_node=node, metadata=server_dict.get('metadata', {}), admin_password=password, requested_networks=requested_networks, check_server_group_quota=True, supports_multiattach=supports_multiattach, **create_kwargs) except (exception.QuotaError, exception.PortLimitExceeded) as error: raise exc.HTTPForbidden( explanation=error.format_message()) except exception.ImageNotFound: msg = _("Can not find requested image") raise exc.HTTPBadRequest(explanation=msg) except exception.KeypairNotFound: msg = _("Invalid key_name provided.") raise exc.HTTPBadRequest(explanation=msg) except exception.ConfigDriveInvalidValue: msg = _("Invalid config_drive provided.") raise exc.HTTPBadRequest(explanation=msg) except exception.ExternalNetworkAttachForbidden as error: raise exc.HTTPForbidden(explanation=error.format_message()) except messaging.RemoteError as err: msg = "%(err_type)s: %(err_msg)s" % {'err_type': err.exc_type, 'err_msg': err.value} raise exc.HTTPBadRequest(explanation=msg) except UnicodeDecodeError as error: msg = "UnicodeError: %s" % error raise exc.HTTPBadRequest(explanation=msg) except (exception.CPUThreadPolicyConfigurationInvalid, exception.ImageNotActive, exception.ImageBadRequest, exception.ImageNotAuthorized, exception.FixedIpNotFoundForAddress, exception.FlavorNotFound, exception.FlavorDiskTooSmall, exception.FlavorMemoryTooSmall, exception.InvalidMetadata, exception.InvalidRequest, exception.InvalidVolume, exception.MultiplePortsNotApplicable, exception.InvalidFixedIpAndMaxCountRequest, exception.InstanceUserDataMalformed, exception.PortNotFound, exception.FixedIpAlreadyInUse, exception.SecurityGroupNotFound, exception.PortRequiresFixedIP, exception.NetworkRequiresSubnet, exception.NetworkNotFound, exception.InvalidBDM, exception.InvalidBDMSnapshot, exception.InvalidBDMVolume, exception.InvalidBDMImage, exception.InvalidBDMBootSequence, exception.InvalidBDMLocalsLimit, exception.InvalidBDMVolumeNotBootable, exception.InvalidBDMEphemeralSize, exception.InvalidBDMFormat, exception.InvalidBDMSwapSize, exception.AutoDiskConfigDisabledByImage, exception.ImageCPUPinningForbidden, exception.ImageCPUThreadPolicyForbidden, exception.ImageNUMATopologyIncomplete, exception.ImageNUMATopologyForbidden, exception.ImageNUMATopologyAsymmetric, exception.ImageNUMATopologyCPUOutOfRange, exception.ImageNUMATopologyCPUDuplicates, exception.ImageNUMATopologyCPUsUnassigned, exception.ImageNUMATopologyMemoryOutOfRange, exception.InvalidNUMANodesNumber, exception.InstanceGroupNotFound, exception.MemoryPageSizeInvalid, exception.MemoryPageSizeForbidden, exception.PciRequestAliasNotDefined, exception.RealtimeConfigurationInvalid, exception.RealtimeMaskNotFoundOrInvalid, exception.SnapshotNotFound, exception.UnableToAutoAllocateNetwork, exception.MultiattachNotSupportedOldMicroversion) as error: raise exc.HTTPBadRequest(explanation=error.format_message()) except (exception.PortInUse, exception.InstanceExists, exception.NetworkAmbiguous, exception.NoUniqueMatch, 
exception.MultiattachSupportNotYetAvailable) as error: raise exc.HTTPConflict(explanation=error.format_message()) # If the caller wanted a reservation_id, return it if return_reservation_id: return wsgi.ResponseObject({'reservation_id': resv_id}) req.cache_db_instances(instances) server = self._view_builder.create(req, instances[0]) if CONF.api.enable_instance_password: server['server']['adminPass'] = password robj = wsgi.ResponseObject(server) return self._add_location(robj) # NOTE(gmann): Parameter 'req_body' is placed to handle scheduler_hint # extension for V2.1. No other extension supposed to use this as # it will be removed soon. def _create_by_func_list(self, server_dict, create_kwargs, req_body): for func in self.server_create_func_list: func(server_dict, create_kwargs, req_body) def _create_schema(self, create_schema, version): for schema_func in self.schema_func_list: self._create_schema_by_func(create_schema, version, schema_func) def _create_schema_by_func(self, create_schema, version, schema_func): schema = schema_func(version) if (schema_func.__module__ == 'nova.api.openstack.compute.scheduler_hints'): # NOTE(oomichi): The request parameter position of scheduler-hint # extension is different from the other extensions, so here handles # the difference. create_schema['properties'].update(schema) else: create_schema['properties']['server']['properties'].update(schema) def _delete(self, context, req, instance_uuid): instance = self._get_server(context, req, instance_uuid) context.can(server_policies.SERVERS % 'delete', target={'user_id': instance.user_id, 'project_id': instance.project_id}) if CONF.reclaim_instance_interval: try: self.compute_api.soft_delete(context, instance) except exception.InstanceInvalidState: # Note(yufang521247): instance which has never been active # is not allowed to be soft_deleted. Thus we have to call # delete() to clean up the instance. self.compute_api.delete(context, instance) else: self.compute_api.delete(context, instance) @wsgi.expected_errors(404) @validation.schema(schema_server_update_v20, '2.0', '2.0') @validation.schema(schema_server_update, '2.1', '2.18') @validation.schema(schema_server_update_v219, '2.19') def update(self, req, id, body): """Update server then pass on to version-specific controller.""" ctxt = req.environ['nova.context'] update_dict = {} instance = self._get_server(ctxt, req, id, is_detail=True) ctxt.can(server_policies.SERVERS % 'update', target={'user_id': instance.user_id, 'project_id': instance.project_id}) server = body['server'] if 'name' in server: update_dict['display_name'] = common.normalize_name( server['name']) if 'description' in server: # This is allowed to be None (remove description) update_dict['display_description'] = server['description'] helpers.translate_attributes(helpers.UPDATE, server, update_dict) try: instance = self.compute_api.update_instance(ctxt, instance, update_dict) return self._view_builder.show(req, instance, extend_address=False) except exception.InstanceNotFound: msg = _("Instance could not be found") raise exc.HTTPNotFound(explanation=msg) # NOTE(gmann): Returns 204 for backwards compatibility but should be 202 # for representing async API as this API just accepts the request and # request hypervisor driver to complete the same in async mode. 
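# For illustration, the action below is invoked as (id hypothetical):
#   POST /v2.1/servers/{server_id}/action
#   {"confirmResize": null}
# and answers 204 No Content on success.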
@wsgi.response(204) @wsgi.expected_errors((400, 404, 409)) @wsgi.action('confirmResize') def _action_confirm_resize(self, req, id, body): context = req.environ['nova.context'] context.can(server_policies.SERVERS % 'confirm_resize') instance = self._get_server(context, req, id) try: self.compute_api.confirm_resize(context, instance) except exception.InstanceUnknownCell as e: raise exc.HTTPNotFound(explanation=e.format_message()) except exception.MigrationNotFound: msg = _("Instance has not been resized.") raise exc.HTTPBadRequest(explanation=msg) except exception.InstanceIsLocked as e: raise exc.HTTPConflict(explanation=e.format_message()) except exception.InstanceInvalidState as state_error: common.raise_http_conflict_for_instance_invalid_state(state_error, 'confirmResize', id) @wsgi.response(202) @wsgi.expected_errors((400, 404, 409)) @wsgi.action('revertResize') def _action_revert_resize(self, req, id, body): context = req.environ['nova.context'] context.can(server_policies.SERVERS % 'revert_resize') instance = self._get_server(context, req, id) try: self.compute_api.revert_resize(context, instance) except exception.InstanceUnknownCell as e: raise exc.HTTPNotFound(explanation=e.format_message()) except exception.MigrationNotFound: msg = _("Instance has not been resized.") raise exc.HTTPBadRequest(explanation=msg) except exception.FlavorNotFound: msg = _("Flavor used by the instance could not be found.") raise exc.HTTPBadRequest(explanation=msg) except exception.InstanceIsLocked as e: raise exc.HTTPConflict(explanation=e.format_message()) except exception.InstanceInvalidState as state_error: common.raise_http_conflict_for_instance_invalid_state(state_error, 'revertResize', id) @wsgi.response(202) @wsgi.expected_errors((404, 409)) @wsgi.action('reboot') @validation.schema(schema_servers.reboot) def _action_reboot(self, req, id, body): reboot_type = body['reboot']['type'].upper() context = req.environ['nova.context'] context.can(server_policies.SERVERS % 'reboot') instance = self._get_server(context, req, id) try: self.compute_api.reboot(context, instance, reboot_type) except exception.InstanceIsLocked as e: raise exc.HTTPConflict(explanation=e.format_message()) except exception.InstanceInvalidState as state_error: common.raise_http_conflict_for_instance_invalid_state(state_error, 'reboot', id) def _resize(self, req, instance_id, flavor_id, **kwargs): """Begin the resize process with given instance/flavor.""" context = req.environ["nova.context"] instance = self._get_server(context, req, instance_id) context.can(server_policies.SERVERS % 'resize', target={'user_id': instance.user_id, 'project_id': instance.project_id}) try: self.compute_api.resize(context, instance, flavor_id, **kwargs) except exception.InstanceUnknownCell as e: raise exc.HTTPNotFound(explanation=e.format_message()) except exception.QuotaError as error: raise exc.HTTPForbidden( explanation=error.format_message()) except exception.InstanceIsLocked as e: raise exc.HTTPConflict(explanation=e.format_message()) except exception.InstanceInvalidState as state_error: common.raise_http_conflict_for_instance_invalid_state(state_error, 'resize', instance_id) except exception.ImageNotAuthorized: msg = _("You are not authorized to access the image " "the instance was started with.") raise exc.HTTPUnauthorized(explanation=msg) except exception.ImageNotFound: msg = _("Image that the instance was started " "with could not be found.") raise exc.HTTPBadRequest(explanation=msg) except (exception.AutoDiskConfigDisabledByImage, 
exception.CannotResizeDisk, exception.CannotResizeToSameFlavor, exception.FlavorNotFound, exception.NoValidHost, exception.PciRequestAliasNotDefined) as e: raise exc.HTTPBadRequest(explanation=e.format_message()) except exception.Invalid: msg = _("Invalid instance image.") raise exc.HTTPBadRequest(explanation=msg) @wsgi.response(204) @wsgi.expected_errors((404, 409)) def delete(self, req, id): """Destroys a server.""" try: self._delete(req.environ['nova.context'], req, id) except exception.InstanceNotFound: msg = _("Instance could not be found") raise exc.HTTPNotFound(explanation=msg) except exception.InstanceUnknownCell as e: raise exc.HTTPNotFound(explanation=e.format_message()) except exception.InstanceIsLocked as e: raise exc.HTTPConflict(explanation=e.format_message()) except exception.InstanceInvalidState as state_error: common.raise_http_conflict_for_instance_invalid_state(state_error, 'delete', id) def _image_from_req_data(self, server_dict, create_kwargs): """Get image data from the request or raise appropriate exceptions. The field imageRef is mandatory when no block devices have been defined and must be a proper uuid when present. """ image_href = server_dict.get('imageRef') if not image_href and create_kwargs.get('block_device_mapping'): return '' elif image_href: return image_href else: msg = _("Missing imageRef attribute") raise exc.HTTPBadRequest(explanation=msg) def _flavor_id_from_req_data(self, data): flavor_ref = data['server']['flavorRef'] return common.get_id_from_href(flavor_ref) @wsgi.response(202) @wsgi.expected_errors((400, 401, 403, 404, 409)) @wsgi.action('resize') @validation.schema(schema_servers.resize) def _action_resize(self, req, id, body): """Resizes a given instance to the flavor size requested.""" resize_dict = body['resize'] flavor_ref = str(resize_dict["flavorRef"]) kwargs = {} helpers.translate_attributes(helpers.RESIZE, resize_dict, kwargs) self._resize(req, id, flavor_ref, **kwargs) @wsgi.response(202) @wsgi.expected_errors((400, 403, 404, 409)) @wsgi.action('rebuild') @validation.schema(schema_server_rebuild_v20, '2.0', '2.0') @validation.schema(schema_server_rebuild, '2.1', '2.18') @validation.schema(schema_server_rebuild_v219, '2.19', '2.53') @validation.schema(schema_server_rebuild_v254, '2.54', '2.56') @validation.schema(schema_server_rebuild_v257, '2.57') def _action_rebuild(self, req, id, body): """Rebuild an instance with the given attributes.""" rebuild_dict = body['rebuild'] image_href = rebuild_dict["imageRef"] password = self._get_server_admin_password(rebuild_dict) context = req.environ['nova.context'] instance = self._get_server(context, req, id) context.can(server_policies.SERVERS % 'rebuild', target={'user_id': instance.user_id, 'project_id': instance.project_id}) attr_map = { 'name': 'display_name', 'description': 'display_description', 'metadata': 'metadata', } kwargs = {} helpers.translate_attributes(helpers.REBUILD, rebuild_dict, kwargs) if (api_version_request.is_supported(req, min_version='2.54') and 'key_name' in rebuild_dict): kwargs['key_name'] = rebuild_dict.get('key_name') # If user_data is not specified, we don't include it in kwargs because # we don't want to overwrite the existing user_data. 
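# For example (values hypothetical), a 2.57+ rebuild may carry new
# user data:
#   POST /v2.1/servers/{server_id}/action
#   {"rebuild": {"imageRef": "<image-uuid>",
#                "user_data": "<base64-blob>"}}
# Omitting 'user_data' keeps the instance's existing value.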
include_user_data = api_version_request.is_supported( req, min_version='2.57') if include_user_data and 'user_data' in rebuild_dict: kwargs['user_data'] = rebuild_dict['user_data'] for request_attribute, instance_attribute in attr_map.items(): try: if request_attribute == 'name': kwargs[instance_attribute] = common.normalize_name( rebuild_dict[request_attribute]) else: kwargs[instance_attribute] = rebuild_dict[ request_attribute] except (KeyError, TypeError): pass try: self.compute_api.rebuild(context, instance, image_href, password, **kwargs) except exception.InstanceIsLocked as e: raise exc.HTTPConflict(explanation=e.format_message()) except exception.InstanceInvalidState as state_error: common.raise_http_conflict_for_instance_invalid_state(state_error, 'rebuild', id) except exception.InstanceNotFound: msg = _("Instance could not be found") raise exc.HTTPNotFound(explanation=msg) except exception.InstanceUnknownCell as e: raise exc.HTTPNotFound(explanation=e.format_message()) except exception.ImageNotFound: msg = _("Cannot find image for rebuild") raise exc.HTTPBadRequest(explanation=msg) except exception.KeypairNotFound: msg = _("Invalid key_name provided.") raise exc.HTTPBadRequest(explanation=msg) except exception.QuotaError as error: raise exc.HTTPForbidden(explanation=error.format_message()) except (exception.ImageNotActive, exception.ImageUnacceptable, exception.FlavorDiskTooSmall, exception.FlavorMemoryTooSmall, exception.InvalidMetadata, exception.AutoDiskConfigDisabledByImage) as error: raise exc.HTTPBadRequest(explanation=error.format_message()) instance = self._get_server(context, req, id, is_detail=True) view = self._view_builder.show(req, instance, extend_address=False) # Add on the admin_password attribute since the view doesn't do it # unless instance passwords are disabled if CONF.api.enable_instance_password: view['server']['adminPass'] = password if api_version_request.is_supported(req, min_version='2.54'): # NOTE(liuyulong): set the new key_name for the API response. 
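# For example (values hypothetical), a 2.54+ rebuild can swap the
# keypair in the same call:
#   {"rebuild": {"imageRef": "<image-uuid>", "key_name": "new-key"}}
# so the response built below reflects the possibly-changed key.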
view['server']['key_name'] = instance.key_name if include_user_data: view['server']['user_data'] = instance.user_data robj = wsgi.ResponseObject(view) return self._add_location(robj) @wsgi.response(202) @wsgi.expected_errors((400, 403, 404, 409)) @wsgi.action('createImage') @common.check_snapshots_enabled @validation.schema(schema_servers.create_image, '2.0', '2.0') @validation.schema(schema_servers.create_image, '2.1') def _action_create_image(self, req, id, body): """Snapshot a server instance.""" context = req.environ['nova.context'] context.can(server_policies.SERVERS % 'create_image') entity = body["createImage"] image_name = common.normalize_name(entity["name"]) metadata = entity.get('metadata', {}) # Starting from microversion 2.39 we don't check quotas on createImage if api_version_request.is_supported( req, max_version= api_version_request.MAX_IMAGE_META_PROXY_API_VERSION): common.check_img_metadata_properties_quota(context, metadata) instance = self._get_server(context, req, id) bdms = objects.BlockDeviceMappingList.get_by_instance_uuid( context, instance.uuid) try: if compute_utils.is_volume_backed_instance(context, instance, bdms): context.can(server_policies.SERVERS % 'create_image:allow_volume_backed') image = self.compute_api.snapshot_volume_backed( context, instance, image_name, extra_properties= metadata) else: image = self.compute_api.snapshot(context, instance, image_name, extra_properties=metadata) except exception.InstanceUnknownCell as e: raise exc.HTTPNotFound(explanation=e.format_message()) except exception.InstanceInvalidState as state_error: common.raise_http_conflict_for_instance_invalid_state(state_error, 'createImage', id) except exception.Invalid as err: raise exc.HTTPBadRequest(explanation=err.format_message()) except exception.OverQuota as e: raise exc.HTTPForbidden(explanation=e.format_message()) # Starting with microversion 2.45 we return a response body containing # the snapshot image id without the Location header. if api_version_request.is_supported(req, '2.45'): return {'image_id': image['id']} # build location of newly-created image entity image_id = str(image['id']) image_ref = image_api.API().generate_image_url(image_id, context) resp = webob.Response(status_int=202) resp.headers['Location'] = image_ref return resp def _get_server_admin_password(self, server): """Determine the admin password for a server on creation.""" if 'adminPass' in server: password = server['adminPass'] else: password = utils.generate_password() return password def _get_server_search_options(self, req): """Return server search options allowed by non-admin.""" opt_list = ('reservation_id', 'name', 'status', 'image', 'flavor', 'ip', 'changes-since', 'all_tenants') if api_version_request.is_supported(req, min_version='2.5'): opt_list += ('ip6',) if api_version_request.is_supported(req, min_version='2.26'): opt_list += TAG_SEARCH_FILTERS return opt_list def _get_instance(self, context, instance_uuid): try: attrs = ['system_metadata', 'metadata'] if not CONF.cells.enable: # NOTE(danms): We can't target a cell database if we're # in cellsv1 otherwise we'll short-circuit the replication. 
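# Sketch of the cells v2 lookup performed below: resolve the cell
# that owns the instance via its InstanceMapping, pin the request
# context to that cell, then do the DB read.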
mapping = objects.InstanceMapping.get_by_instance_uuid( context, instance_uuid) nova_context.set_target_cell(context, mapping.cell_mapping) return objects.Instance.get_by_uuid( context, instance_uuid, expected_attrs=attrs) except (exception.InstanceNotFound, exception.InstanceMappingNotFound) as e: raise webob.exc.HTTPNotFound(explanation=e.format_message()) @wsgi.response(202) @wsgi.expected_errors((404, 409)) @wsgi.action('os-start') def _start_server(self, req, id, body): """Start an instance.""" context = req.environ['nova.context'] instance = self._get_instance(context, id) context.can(server_policies.SERVERS % 'start', instance) try: self.compute_api.start(context, instance) except (exception.InstanceNotReady, exception.InstanceIsLocked) as e: raise webob.exc.HTTPConflict(explanation=e.format_message()) except exception.InstanceUnknownCell as e: raise exc.HTTPNotFound(explanation=e.format_message()) except exception.InstanceInvalidState as state_error: common.raise_http_conflict_for_instance_invalid_state(state_error, 'start', id) @wsgi.response(202) @wsgi.expected_errors((404, 409)) @wsgi.action('os-stop') def _stop_server(self, req, id, body): """Stop an instance.""" context = req.environ['nova.context'] instance = self._get_instance(context, id) context.can(server_policies.SERVERS % 'stop', target={'user_id': instance.user_id, 'project_id': instance.project_id}) try: self.compute_api.stop(context, instance) except (exception.InstanceNotReady, exception.InstanceIsLocked) as e: raise webob.exc.HTTPConflict(explanation=e.format_message()) except exception.InstanceUnknownCell as e: raise exc.HTTPNotFound(explanation=e.format_message()) except exception.InstanceInvalidState as state_error: common.raise_http_conflict_for_instance_invalid_state(state_error, 'stop', id) @wsgi.Controller.api_version("2.17") @wsgi.response(202) @wsgi.expected_errors((400, 404, 409)) @wsgi.action('trigger_crash_dump') @validation.schema(schema_servers.trigger_crash_dump) def _action_trigger_crash_dump(self, req, id, body): """Trigger crash dump in an instance""" context = req.environ['nova.context'] instance = self._get_instance(context, id) context.can(server_policies.SERVERS % 'trigger_crash_dump', target={'user_id': instance.user_id, 'project_id': instance.project_id}) try: self.compute_api.trigger_crash_dump(context, instance) except (exception.InstanceNotReady, exception.InstanceIsLocked) as e: raise webob.exc.HTTPConflict(explanation=e.format_message()) except exception.InstanceInvalidState as state_error: common.raise_http_conflict_for_instance_invalid_state(state_error, 'trigger_crash_dump', id) except exception.TriggerCrashDumpNotSupported as e: raise webob.exc.HTTPBadRequest(explanation=e.format_message()) def remove_invalid_options(context, search_options, allowed_search_options): """Remove search options that are not valid for non-admin API/context.""" if context.is_admin: # Only remove parameters for sorting and pagination for key in ('sort_key', 'sort_dir', 'limit', 'marker'): search_options.pop(key, None) return # Otherwise, strip out all unknown options unknown_options = [opt for opt in search_options if opt not in allowed_search_options] if unknown_options: LOG.debug("Removing options '%s' from query", ", ".join(unknown_options)) for opt in unknown_options: search_options.pop(opt, None) def remove_invalid_sort_keys(context, sort_keys, sort_dirs, blacklist, admin_only_fields): key_list = copy.deepcopy(sort_keys) for key in key_list: # NOTE(Kevin Zheng): We are intend to remove the sort_key # 
in the blacklist and its' corresponding sort_dir, since # the sort_key and sort_dir are not strict to be provide # in pairs in the current implement, sort_dirs could be # less than sort_keys, in order to avoid IndexError, we # only pop sort_dir when number of sort_dirs is no less # than the sort_key index. if key in blacklist: if len(sort_dirs) > sort_keys.index(key): sort_dirs.pop(sort_keys.index(key)) sort_keys.pop(sort_keys.index(key)) elif key in admin_only_fields and not context.is_admin: msg = _("Only administrators can sort servers " "by %s") % key raise exc.HTTPForbidden(explanation=msg) return sort_keys, sort_dirs nova-17.0.1/nova/api/openstack/compute/server_metadata.py0000666000175000017500000001501013250073126023462 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from webob import exc from nova.api.openstack import common from nova.api.openstack.compute.schemas import server_metadata from nova.api.openstack import wsgi from nova.api import validation from nova import compute from nova import exception from nova.i18n import _ from nova.policies import server_metadata as sm_policies class ServerMetadataController(wsgi.Controller): """The server metadata API controller for the OpenStack API.""" def __init__(self): self.compute_api = compute.API() super(ServerMetadataController, self).__init__() def _get_metadata(self, context, server_id): server = common.get_instance(self.compute_api, context, server_id) try: # NOTE(mikal): get_instance_metadata sometimes returns # InstanceNotFound in unit tests, even though the instance is # fetched on the line above. I blame mocking. meta = self.compute_api.get_instance_metadata(context, server) except exception.InstanceNotFound: msg = _('Server does not exist') raise exc.HTTPNotFound(explanation=msg) meta_dict = {} for key, value in meta.items(): meta_dict[key] = value return meta_dict @wsgi.expected_errors(404) def index(self, req, server_id): """Returns the list of metadata for a given instance.""" context = req.environ['nova.context'] context.can(sm_policies.POLICY_ROOT % 'index') return {'metadata': self._get_metadata(context, server_id)} @wsgi.expected_errors((403, 404, 409)) # NOTE(gmann): Returns 200 for backwards compatibility but should be 201 # as this operation complete the creation of metadata. 
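# For illustration (values hypothetical), the handler below serves:
#   POST /v2.1/servers/{server_id}/metadata
#   {"metadata": {"foo": "bar"}}
# Existing keys survive because delete=False is passed to
# _update_instance_metadata().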
@validation.schema(server_metadata.create) def create(self, req, server_id, body): metadata = body['metadata'] context = req.environ['nova.context'] context.can(sm_policies.POLICY_ROOT % 'create') new_metadata = self._update_instance_metadata(context, server_id, metadata, delete=False) return {'metadata': new_metadata} @wsgi.expected_errors((400, 403, 404, 409)) @validation.schema(server_metadata.update) def update(self, req, server_id, id, body): context = req.environ['nova.context'] context.can(sm_policies.POLICY_ROOT % 'update') meta_item = body['meta'] if id not in meta_item: expl = _('Request body and URI mismatch') raise exc.HTTPBadRequest(explanation=expl) self._update_instance_metadata(context, server_id, meta_item, delete=False) return {'meta': meta_item} @wsgi.expected_errors((403, 404, 409)) @validation.schema(server_metadata.update_all) def update_all(self, req, server_id, body): context = req.environ['nova.context'] context.can(sm_policies.POLICY_ROOT % 'update_all') metadata = body['metadata'] new_metadata = self._update_instance_metadata(context, server_id, metadata, delete=True) return {'metadata': new_metadata} def _update_instance_metadata(self, context, server_id, metadata, delete=False): server = common.get_instance(self.compute_api, context, server_id) try: return self.compute_api.update_instance_metadata(context, server, metadata, delete) except exception.InstanceUnknownCell as e: raise exc.HTTPNotFound(explanation=e.format_message()) except exception.QuotaError as error: raise exc.HTTPForbidden(explanation=error.format_message()) except exception.InstanceIsLocked as e: raise exc.HTTPConflict(explanation=e.format_message()) except exception.InstanceInvalidState as state_error: common.raise_http_conflict_for_instance_invalid_state(state_error, 'update metadata', server_id) @wsgi.expected_errors(404) def show(self, req, server_id, id): """Return a single metadata item.""" context = req.environ['nova.context'] context.can(sm_policies.POLICY_ROOT % 'show') data = self._get_metadata(context, server_id) try: return {'meta': {id: data[id]}} except KeyError: msg = _("Metadata item was not found") raise exc.HTTPNotFound(explanation=msg) @wsgi.expected_errors((404, 409)) @wsgi.response(204) def delete(self, req, server_id, id): """Deletes an existing metadata.""" context = req.environ['nova.context'] context.can(sm_policies.POLICY_ROOT % 'delete') metadata = self._get_metadata(context, server_id) if id not in metadata: msg = _("Metadata item was not found") raise exc.HTTPNotFound(explanation=msg) server = common.get_instance(self.compute_api, context, server_id) try: self.compute_api.delete_instance_metadata(context, server, id) except exception.InstanceUnknownCell as e: raise exc.HTTPNotFound(explanation=e.format_message()) except exception.InstanceIsLocked as e: raise exc.HTTPConflict(explanation=e.format_message()) except exception.InstanceInvalidState as state_error: common.raise_http_conflict_for_instance_invalid_state(state_error, 'delete metadata', server_id) nova-17.0.1/nova/api/openstack/compute/fixed_ips.py0000666000175000017500000001022213250073126022266 0ustar zuulzuul00000000000000# Copyright 2012 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import webob import webob.exc from nova.api.openstack.api_version_request \ import MAX_PROXY_API_SUPPORT_VERSION from nova.api.openstack.compute.schemas import fixed_ips from nova.api.openstack import wsgi from nova.api import validation from nova import exception from nova.i18n import _ from nova import objects from nova.policies import fixed_ips as fi_policies class FixedIPController(wsgi.Controller): @wsgi.Controller.api_version('2.1', '2.3') def _fill_reserved_status(self, req, fixed_ip, fixed_ip_info): # NOTE(mriedem): To be backwards compatible, < 2.4 version does not # show anything about reserved status. pass @wsgi.Controller.api_version('2.4') # noqa def _fill_reserved_status(self, req, fixed_ip, fixed_ip_info): fixed_ip_info['fixed_ip']['reserved'] = fixed_ip.reserved @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.expected_errors((400, 404)) def show(self, req, id): """Return data about the given fixed IP.""" context = req.environ['nova.context'] context.can(fi_policies.BASE_POLICY_NAME) attrs = ['network', 'instance'] try: fixed_ip = objects.FixedIP.get_by_address(context, id, expected_attrs=attrs) except exception.FixedIpNotFoundForAddress as ex: raise webob.exc.HTTPNotFound(explanation=ex.format_message()) except exception.FixedIpInvalid as ex: raise webob.exc.HTTPBadRequest(explanation=ex.format_message()) fixed_ip_info = {"fixed_ip": {}} if fixed_ip is None: msg = _("Fixed IP %s has been deleted") % id raise webob.exc.HTTPNotFound(explanation=msg) fixed_ip_info['fixed_ip']['cidr'] = str(fixed_ip.network.cidr) fixed_ip_info['fixed_ip']['address'] = str(fixed_ip.address) if fixed_ip.instance: fixed_ip_info['fixed_ip']['hostname'] = fixed_ip.instance.hostname fixed_ip_info['fixed_ip']['host'] = fixed_ip.instance.host else: fixed_ip_info['fixed_ip']['hostname'] = None fixed_ip_info['fixed_ip']['host'] = None self._fill_reserved_status(req, fixed_ip, fixed_ip_info) return fixed_ip_info @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.response(202) @wsgi.expected_errors((400, 404)) @validation.schema(fixed_ips.reserve) @wsgi.action('reserve') def reserve(self, req, id, body): context = req.environ['nova.context'] context.can(fi_policies.BASE_POLICY_NAME) return self._set_reserved(context, id, True) @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.response(202) @wsgi.expected_errors((400, 404)) @validation.schema(fixed_ips.unreserve) @wsgi.action('unreserve') def unreserve(self, req, id, body): context = req.environ['nova.context'] context.can(fi_policies.BASE_POLICY_NAME) return self._set_reserved(context, id, False) def _set_reserved(self, context, address, reserved): try: fixed_ip = objects.FixedIP.get_by_address(context, address) fixed_ip.reserved = reserved fixed_ip.save() except exception.FixedIpNotFoundForAddress: msg = _("Fixed IP %s not found") % address raise webob.exc.HTTPNotFound(explanation=msg) except exception.FixedIpInvalid: msg = _("Fixed IP %s not valid") % address raise webob.exc.HTTPBadRequest(explanation=msg) 
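# For illustration (address hypothetical), reserving a fixed IP via
# the proxy API above looks like:
#   POST /v2.1/os-fixed-ips/192.168.1.1/action
#   {"reserve": null}
# The controller is capped at MAX_PROXY_API_SUPPORT_VERSION because
# the nova-network proxy APIs are deprecated from 2.36 onward.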
nova-17.0.1/nova/api/openstack/compute/flavor_manage.py0000666000175000017500000001314413250073136023124 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import webob from oslo_log import log as logging from nova.api.openstack import api_version_request from nova.api.openstack.compute.schemas import flavor_manage from nova.api.openstack.compute.views import flavors as flavors_view from nova.api.openstack import wsgi from nova.api import validation from nova.compute import flavors from nova import exception from nova import objects from nova.policies import base from nova.policies import flavor_manage as fm_policies from nova import policy LOG = logging.getLogger(__name__) ALIAS = "os-flavor-manage" class FlavorManageController(wsgi.Controller): """The Flavor Lifecycle API controller for the OpenStack API.""" _view_builder_class = flavors_view.ViewBuilder def __init__(self): super(FlavorManageController, self).__init__() # NOTE(oomichi): Return 202 for backwards compatibility but should be # 204 as this operation complete the deletion of aggregate resource and # return no response body. @wsgi.response(202) @wsgi.expected_errors((404)) @wsgi.action("delete") def _delete(self, req, id): context = req.environ['nova.context'] # TODO(rb560u): remove this check in future release using_old_action = \ policy.verify_deprecated_policy(fm_policies.BASE_POLICY_NAME, fm_policies.POLICY_ROOT % 'delete', base.RULE_ADMIN_API, context) if not using_old_action: context.can(fm_policies.POLICY_ROOT % 'delete') flavor = objects.Flavor(context=context, flavorid=id) try: flavor.destroy() except exception.FlavorNotFound as e: raise webob.exc.HTTPNotFound(explanation=e.format_message()) # NOTE(oomichi): Return 200 for backwards compatibility but should be 201 # as this operation complete the creation of flavor resource. @wsgi.action("create") @wsgi.expected_errors((400, 409)) @validation.schema(flavor_manage.create_v20, '2.0', '2.0') @validation.schema(flavor_manage.create, '2.1', '2.54') @validation.schema(flavor_manage.create_v2_55, flavors_view.FLAVOR_DESCRIPTION_MICROVERSION) def _create(self, req, body): context = req.environ['nova.context'] # TODO(rb560u): remove this check in future release using_old_action = \ policy.verify_deprecated_policy(fm_policies.BASE_POLICY_NAME, fm_policies.POLICY_ROOT % 'create', base.RULE_ADMIN_API, context) if not using_old_action: context.can(fm_policies.POLICY_ROOT % 'create') vals = body['flavor'] name = vals['name'] flavorid = vals.get('id') memory = vals['ram'] vcpus = vals['vcpus'] root_gb = vals['disk'] ephemeral_gb = vals.get('OS-FLV-EXT-DATA:ephemeral', 0) swap = vals.get('swap', 0) rxtx_factor = vals.get('rxtx_factor', 1.0) is_public = vals.get('os-flavor-access:is_public', True) # The user can specify a description starting with microversion 2.55. 
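# For example (values hypothetical), a 2.55+ flavor-create request
# exercising the description support handled below:
#   POST /v2.1/flavors
#   {"flavor": {"name": "m1.tiny", "ram": 512, "vcpus": 1,
#               "disk": 1, "description": "tiny test flavor"}}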
include_description = api_version_request.is_supported( req, flavors_view.FLAVOR_DESCRIPTION_MICROVERSION) description = vals.get('description') if include_description else None try: flavor = flavors.create(name, memory, vcpus, root_gb, ephemeral_gb=ephemeral_gb, flavorid=flavorid, swap=swap, rxtx_factor=rxtx_factor, is_public=is_public, description=description) # NOTE(gmann): For backward compatibility, non public flavor # access is not being added for created tenant. Ref -bug/1209101 req.cache_db_flavor(flavor) except (exception.FlavorExists, exception.FlavorIdExists) as err: raise webob.exc.HTTPConflict(explanation=err.format_message()) return self._view_builder.show(req, flavor, include_description) @wsgi.Controller.api_version(flavors_view.FLAVOR_DESCRIPTION_MICROVERSION) @wsgi.action('update') @wsgi.expected_errors((400, 404)) @validation.schema(flavor_manage.update_v2_55, flavors_view.FLAVOR_DESCRIPTION_MICROVERSION) def _update(self, req, id, body): # Validate the policy. context = req.environ['nova.context'] context.can(fm_policies.POLICY_ROOT % 'update') # Get the flavor and update the description. try: flavor = objects.Flavor.get_by_flavor_id(context, id) flavor.description = body['flavor']['description'] flavor.save() except exception.FlavorNotFound as e: raise webob.exc.HTTPNotFound(explanation=e.format_message()) # Cache the flavor so the flavor_access and flavor_rxtx extensions # can add stuff to the response. req.cache_db_flavor(flavor) return self._view_builder.show(req, flavor, include_description=True) nova-17.0.1/nova/api/openstack/compute/versions.py0000666000175000017500000000563013250073126022173 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.api.openstack import api_version_request from nova.api.openstack.compute.views import versions as views_versions from nova.api.openstack import wsgi LINKS = { 'v2.0': { 'html': 'http://docs.openstack.org/' }, 'v2.1': { 'html': 'http://docs.openstack.org/' }, } VERSIONS = { "v2.0": { "id": "v2.0", "status": "SUPPORTED", "version": "", "min_version": "", "updated": "2011-01-21T11:33:21Z", "links": [ { "rel": "describedby", "type": "text/html", "href": LINKS['v2.0']['html'], }, ], "media-types": [ { "base": "application/json", "type": "application/vnd.openstack.compute+json;version=2", } ], }, "v2.1": { "id": "v2.1", "status": "CURRENT", "version": api_version_request._MAX_API_VERSION, "min_version": api_version_request._MIN_API_VERSION, "updated": "2013-07-23T11:33:21Z", "links": [ { "rel": "describedby", "type": "text/html", "href": LINKS['v2.1']['html'], }, ], "media-types": [ { "base": "application/json", "type": "application/vnd.openstack.compute+json;version=2.1", } ], } } class Versions(wsgi.Resource): # The root version API isn't under the microversion control. 
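# i.e. "GET /" must work without an X-OpenStack-Nova-API-Version
# header so clients can list the available versions before
# negotiating a microversion.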
support_api_request_version = False def __init__(self): super(Versions, self).__init__(None) def index(self, req, body=None): """Return all versions.""" builder = views_versions.get_view_builder(req) return builder.build_versions(VERSIONS) @wsgi.response(300) def multi(self, req, body=None): """Return multiple choices.""" builder = views_versions.get_view_builder(req) return builder.build_choices(VERSIONS, req) def get_action_args(self, request_environment): """Parse dictionary created by routes library.""" args = {} if request_environment['PATH_INFO'] == '/': args['action'] = 'index' else: args['action'] = 'multi' return args nova-17.0.1/nova/api/openstack/compute/hide_server_addresses.py0000666000175000017500000000473513250073126024664 0ustar zuulzuul00000000000000# Copyright 2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Extension for hiding server addresses in certain states.""" from nova.api.openstack import wsgi from nova.compute import vm_states import nova.conf from nova.policies import hide_server_addresses as hsa_policies CONF = nova.conf.CONF class Controller(wsgi.Controller): def __init__(self, *args, **kwargs): super(Controller, self).__init__(*args, **kwargs) hidden_states = CONF.api.hide_server_address_states # NOTE(jkoelker) _ is not considered uppercase ;) valid_vm_states = [getattr(vm_states, state) for state in dir(vm_states) if state.isupper()] self.hide_address_states = [state.lower() for state in hidden_states if state in valid_vm_states] def _perhaps_hide_addresses(self, instance, resp_server): if instance.get('vm_state') in self.hide_address_states: resp_server['addresses'] = {} @wsgi.extends def show(self, req, resp_obj, id): resp = resp_obj context = req.environ['nova.context'] if not context.can(hsa_policies.BASE_POLICY_NAME, fatal=False): return if 'server' in resp.obj and 'addresses' in resp.obj['server']: resp_server = resp.obj['server'] instance = req.get_db_instance(resp_server['id']) self._perhaps_hide_addresses(instance, resp_server) @wsgi.extends def detail(self, req, resp_obj): resp = resp_obj context = req.environ['nova.context'] if not context.can(hsa_policies.BASE_POLICY_NAME, fatal=False): return for server in list(resp.obj['servers']): if 'addresses' in server: instance = req.get_db_instance(server['id']) self._perhaps_hide_addresses(instance, server) nova-17.0.1/nova/api/openstack/compute/availability_zone.py0000666000175000017500000001142513250073126024027 0ustar zuulzuul00000000000000# Copyright 2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. from nova.api.openstack import wsgi from nova import availability_zones from nova import compute import nova.conf from nova.policies import availability_zone as az_policies from nova import servicegroup CONF = nova.conf.CONF ATTRIBUTE_NAME = "availability_zone" class AvailabilityZoneController(wsgi.Controller): """The Availability Zone API controller for the OpenStack API.""" def __init__(self): super(AvailabilityZoneController, self).__init__() self.servicegroup_api = servicegroup.API() self.host_api = compute.HostAPI() def _get_filtered_availability_zones(self, zones, is_available): result = [] for zone in zones: # Hide internal_service_availability_zone if zone == CONF.internal_service_availability_zone: continue result.append({'zoneName': zone, 'zoneState': {'available': is_available}, "hosts": None}) return result def _describe_availability_zones(self, context, **kwargs): ctxt = context.elevated() available_zones, not_available_zones = \ availability_zones.get_availability_zones(ctxt) filtered_available_zones = \ self._get_filtered_availability_zones(available_zones, True) filtered_not_available_zones = \ self._get_filtered_availability_zones(not_available_zones, False) return {'availabilityZoneInfo': filtered_available_zones + filtered_not_available_zones} def _describe_availability_zones_verbose(self, context, **kwargs): ctxt = context.elevated() available_zones, not_available_zones = \ availability_zones.get_availability_zones(ctxt) # Available services enabled_services = self.host_api.service_get_all( context, {'disabled': False}, set_zones=True, all_cells=True) zone_hosts = {} host_services = {} api_services = ('nova-osapi_compute', 'nova-ec2', 'nova-metadata') for service in enabled_services: if service.binary in api_services: # Skip API services in the listing since they are not # maintained in the same way as other services continue zone_hosts.setdefault(service['availability_zone'], []) if service['host'] not in zone_hosts[service['availability_zone']]: zone_hosts[service['availability_zone']].append( service['host']) host_services.setdefault(service['availability_zone'] + service['host'], []) host_services[service['availability_zone'] + service['host']].\ append(service) result = [] for zone in available_zones: hosts = {} for host in zone_hosts.get(zone, []): hosts[host] = {} for service in host_services[zone + host]: alive = self.servicegroup_api.service_is_up(service) hosts[host][service['binary']] = {'available': alive, 'active': True != service['disabled'], 'updated_at': service['updated_at']} result.append({'zoneName': zone, 'zoneState': {'available': True}, "hosts": hosts}) for zone in not_available_zones: result.append({'zoneName': zone, 'zoneState': {'available': False}, "hosts": None}) return {'availabilityZoneInfo': result} @wsgi.expected_errors(()) def index(self, req): """Returns a summary list of availability zone.""" context = req.environ['nova.context'] context.can(az_policies.POLICY_ROOT % 'list') return self._describe_availability_zones(context) @wsgi.expected_errors(()) def detail(self, req): """Returns a detailed list of availability zone.""" context = req.environ['nova.context'] context.can(az_policies.POLICY_ROOT % 'detail') return self._describe_availability_zones_verbose(context) nova-17.0.1/nova/api/openstack/compute/assisted_volume_snapshots.py0000666000175000017500000000767113250073126025642 0ustar zuulzuul00000000000000# Copyright 2013 Red Hat, 
Inc. # Copyright 2014 IBM Corp. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """The Assisted volume snapshots extension.""" from oslo_serialization import jsonutils import six from webob import exc from nova.api.openstack.compute.schemas import assisted_volume_snapshots from nova.api.openstack import wsgi from nova.api import validation from nova import compute from nova import exception from nova.policies import assisted_volume_snapshots as avs_policies class AssistedVolumeSnapshotsController(wsgi.Controller): """The Assisted volume snapshots API controller for the OpenStack API.""" def __init__(self): self.compute_api = compute.API() super(AssistedVolumeSnapshotsController, self).__init__() @wsgi.expected_errors(400) @validation.schema(assisted_volume_snapshots.snapshots_create) def create(self, req, body): """Creates a new snapshot.""" context = req.environ['nova.context'] context.can(avs_policies.POLICY_ROOT % 'create') snapshot = body['snapshot'] create_info = snapshot['create_info'] volume_id = snapshot['volume_id'] try: return self.compute_api.volume_snapshot_create(context, volume_id, create_info) except (exception.VolumeBDMNotFound, exception.VolumeBDMIsMultiAttach, exception.InvalidVolume) as error: raise exc.HTTPBadRequest(explanation=error.format_message()) except (exception.InstanceInvalidState, exception.InstanceNotReady) as e: # TODO(mriedem) InstanceInvalidState and InstanceNotReady would # normally result in a 409 but that would require bumping the # microversion, which we should just do in a single microversion # across all APIs when we fix status code wrinkles. raise exc.HTTPBadRequest(explanation=e.format_message()) @wsgi.response(204) @validation.query_schema(assisted_volume_snapshots.delete_query) @wsgi.expected_errors((400, 404)) def delete(self, req, id): """Delete a snapshot.""" context = req.environ['nova.context'] context.can(avs_policies.POLICY_ROOT % 'delete') delete_metadata = {} delete_metadata.update(req.GET) try: delete_info = jsonutils.loads(delete_metadata['delete_info']) volume_id = delete_info['volume_id'] except (KeyError, ValueError) as e: raise exc.HTTPBadRequest(explanation=six.text_type(e)) try: self.compute_api.volume_snapshot_delete(context, volume_id, id, delete_info) except (exception.VolumeBDMNotFound, exception.InvalidVolume) as error: raise exc.HTTPBadRequest(explanation=error.format_message()) except exception.NotFound as e: return exc.HTTPNotFound(explanation=e.format_message()) except (exception.InstanceInvalidState, exception.InstanceNotReady) as e: # TODO(mriedem) InstanceInvalidState and InstanceNotReady would # normally result in a 409 but that would require bumping the # microversion, which we should just do in a single microversion # across all APIs when we fix status code wrinkles. 
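# For illustration (ids hypothetical), this delete handler is driven
# by a request such as:
#   DELETE /v2.1/os-assisted-volume-snapshots/{snapshot_id}
#       ?delete_info={"volume_id": "<volume-uuid>"}
# where delete_info arrives as a JSON-encoded query parameter.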
raise exc.HTTPBadRequest(explanation=e.format_message()) nova-17.0.1/nova/api/openstack/compute/instance_actions.py0000666000175000017500000001527713250073126023657 0ustar zuulzuul00000000000000# Copyright 2013 Rackspace Hosting # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from webob import exc from oslo_utils import timeutils from nova.api.openstack import api_version_request from nova.api.openstack import common from nova.api.openstack.compute.schemas \ import instance_actions as schema_instance_actions from nova.api.openstack.compute.views \ import instance_actions as instance_actions_view from nova.api.openstack import wsgi from nova.api import validation from nova import compute from nova import exception from nova.i18n import _ from nova.policies import instance_actions as ia_policies from nova import utils ACTION_KEYS = ['action', 'instance_uuid', 'request_id', 'user_id', 'project_id', 'start_time', 'message'] ACTION_KEYS_V258 = ['action', 'instance_uuid', 'request_id', 'user_id', 'project_id', 'start_time', 'message', 'updated_at'] EVENT_KEYS = ['event', 'start_time', 'finish_time', 'result', 'traceback'] class InstanceActionsController(wsgi.Controller): _view_builder_class = instance_actions_view.ViewBuilder def __init__(self): super(InstanceActionsController, self).__init__() self.compute_api = compute.API() self.action_api = compute.InstanceActionAPI() def _format_action(self, action_raw, action_keys): action = {} for key in action_keys: action[key] = action_raw.get(key) return action def _format_event(self, event_raw, show_traceback=False): event = {} for key in EVENT_KEYS: # By default, non-admins are not allowed to see traceback details. 
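# e.g. a non-admin sees an event like (values hypothetical):
#   {"event": "compute_reboot_instance", "start_time": ...,
#    "finish_time": ..., "result": "Success"}
# while the elevated policy also exposes "traceback" on failures.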
if key == 'traceback' and not show_traceback: continue event[key] = event_raw.get(key) return event @wsgi.Controller.api_version("2.1", "2.20") def _get_instance(self, req, context, server_id): return common.get_instance(self.compute_api, context, server_id) @wsgi.Controller.api_version("2.21") # noqa def _get_instance(self, req, context, server_id): with utils.temporary_mutation(context, read_deleted='yes'): return common.get_instance(self.compute_api, context, server_id) @wsgi.Controller.api_version("2.1", "2.57") @wsgi.expected_errors(404) def index(self, req, server_id): """Returns the list of actions recorded for a given instance.""" context = req.environ["nova.context"] instance = self._get_instance(req, context, server_id) context.can(ia_policies.BASE_POLICY_NAME, instance) actions_raw = self.action_api.actions_get(context, instance) actions = [self._format_action(action, ACTION_KEYS) for action in actions_raw] return {'instanceActions': actions} @wsgi.Controller.api_version("2.58") # noqa @wsgi.expected_errors((400, 404)) @validation.query_schema(schema_instance_actions.list_query_params_v258, "2.58") def index(self, req, server_id): """Returns the list of actions recorded for a given instance.""" context = req.environ["nova.context"] instance = self._get_instance(req, context, server_id) context.can(ia_policies.BASE_POLICY_NAME, instance) search_opts = {} search_opts.update(req.GET) if 'changes-since' in search_opts: search_opts['changes-since'] = timeutils.parse_isotime( search_opts['changes-since']) limit, marker = common.get_limit_and_marker(req) try: actions_raw = self.action_api.actions_get(context, instance, limit=limit, marker=marker, filters=search_opts) except exception.MarkerNotFound as e: raise exc.HTTPBadRequest(explanation=e.format_message()) actions = [self._format_action(action, ACTION_KEYS_V258) for action in actions_raw] actions_dict = {'instanceActions': actions} actions_links = self._view_builder.get_links(req, server_id, actions) if actions_links: actions_dict['links'] = actions_links return actions_dict @wsgi.expected_errors(404) def show(self, req, server_id, id): """Return data about the given instance action.""" context = req.environ['nova.context'] instance = self._get_instance(req, context, server_id) context.can(ia_policies.BASE_POLICY_NAME, instance) action = self.action_api.action_get_by_request_id(context, instance, id) if action is None: msg = _("Action %s not found") % id raise exc.HTTPNotFound(explanation=msg) action_id = action['id'] if api_version_request.is_supported(req, min_version="2.58"): action = self._format_action(action, ACTION_KEYS_V258) else: action = self._format_action(action, ACTION_KEYS) # Prior to microversion 2.51, events would only be returned in the # response for admins by default policy rules. Starting in # microversion 2.51, events are returned for admin_or_owner (of the # instance) but the "traceback" field is only shown for admin users # by default. show_events = False show_traceback = False if context.can(ia_policies.POLICY_ROOT % 'events', fatal=False): # For all microversions, the user can see all event details # including the traceback. show_events = show_traceback = True elif api_version_request.is_supported(req, '2.51'): # The user is not able to see all event details, but they can at # least see the non-traceback event details. 
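# (i.e. from 2.51 onward the owner gets the event list itself, with
# "traceback" still reserved for the elevated policy checked above.)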
            show_events = True

        if show_events:
            events_raw = self.action_api.action_events_get(context,
                                                           instance,
                                                           action_id)
            action['events'] = [self._format_event(evt, show_traceback)
                for evt in events_raw]
        return {'instanceAction': action}
nova-17.0.1/nova/api/openstack/compute/server_groups.py0000666000175000017500000001670113250073126023231 0ustar zuulzuul00000000000000# Copyright (c) 2014 Cisco Systems, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""The Server Group API Extension."""

import collections

from oslo_log import log as logging
import six
import webob
from webob import exc

from nova.api.openstack import api_version_request
from nova.api.openstack import common
from nova.api.openstack.compute.schemas import server_groups as schema
from nova.api.openstack import wsgi
from nova.api import validation
import nova.conf
from nova import context as nova_context
import nova.exception
from nova.i18n import _
from nova import objects
from nova.policies import server_groups as sg_policies

LOG = logging.getLogger(__name__)

CONF = nova.conf.CONF


def _authorize_context(req, action):
    context = req.environ['nova.context']
    context.can(sg_policies.POLICY_ROOT % action)
    return context


def _get_not_deleted(context, uuids):
    mappings = objects.InstanceMappingList.get_by_instance_uuids(
        context, uuids)
    inst_by_cell = collections.defaultdict(list)
    cell_mappings = {}
    found_inst_uuids = []

    # Get a master list of cell mappings, and a list of instance
    # uuids organized by cell
    for im in mappings:
        if not im.cell_mapping:
            # Not scheduled yet, so just throw it in the final list
            # and move on
            found_inst_uuids.append(im.instance_uuid)
            continue
        if im.cell_mapping.uuid not in cell_mappings:
            cell_mappings[im.cell_mapping.uuid] = im.cell_mapping
        inst_by_cell[im.cell_mapping.uuid].append(im.instance_uuid)

    # Query each cell for the instances that are inside, building
    # a list of non-deleted instance uuids.
    for cell_uuid, cell_mapping in cell_mappings.items():
        inst_uuids = inst_by_cell[cell_uuid]
        LOG.debug('Querying cell %(cell)s for %(num)i instances',
                  {'cell': cell_mapping.identity, 'num': len(inst_uuids)})
        filters = {'uuid': inst_uuids, 'deleted': False}
        with nova_context.target_cell(context, cell_mapping) as ctx:
            found_inst_uuids.extend([
                inst.uuid for inst in objects.InstanceList.get_by_filters(
                    ctx, filters=filters)])

    return found_inst_uuids


class ServerGroupController(wsgi.Controller):
    """The Server group API controller for the OpenStack API."""

    def _format_server_group(self, context, group, req):
        # the id field has its value as the uuid of the server group
        # There is no 'uuid' key in server_group seen by clients.
        # In addition, clients see policies as a ["policy-name"] list;
        # and they see members as a ["server-id"] list.
        server_group = {}
        server_group['id'] = group.uuid
        server_group['name'] = group.name
        server_group['policies'] = group.policies or []
        # NOTE(danms): This has been exposed to the user, but never used.
        # Since we can't remove it, just make sure it's always empty.
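        # Once fully populated below, a formatted group looks like this
        # (hypothetical values):
        #     {'id': '6f6e4a80-0000-4000-8000-000000000001',
        #      'name': 'my-anti-affinity-group',
        #      'policies': ['anti-affinity'],
        #      'metadata': {},
        #      'members': ['9f0b3535-0000-4000-8000-000000000002'],
        #      'project_id': 'demo', 'user_id': 'demo'}
        # with 'project_id'/'user_id' only present at microversion >= 2.13.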
        server_group['metadata'] = {}
        members = []
        if group.members:
            # Display the instances that are not deleted.
            members = _get_not_deleted(context, group.members)
        server_group['members'] = members
        # Add project id information to the response data for
        # API version v2.13
        if api_version_request.is_supported(req, min_version="2.13"):
            server_group['project_id'] = group.project_id
            server_group['user_id'] = group.user_id
        return server_group

    @wsgi.expected_errors(404)
    def show(self, req, id):
        """Return data about the given server group."""
        context = _authorize_context(req, 'show')
        try:
            sg = objects.InstanceGroup.get_by_uuid(context, id)
        except nova.exception.InstanceGroupNotFound as e:
            raise webob.exc.HTTPNotFound(explanation=e.format_message())
        return {'server_group': self._format_server_group(context, sg, req)}

    @wsgi.response(204)
    @wsgi.expected_errors(404)
    def delete(self, req, id):
        """Delete a server group."""
        context = _authorize_context(req, 'delete')
        try:
            sg = objects.InstanceGroup.get_by_uuid(context, id)
        except nova.exception.InstanceGroupNotFound as e:
            raise webob.exc.HTTPNotFound(explanation=e.format_message())
        try:
            sg.destroy()
        except nova.exception.InstanceGroupNotFound as e:
            raise webob.exc.HTTPNotFound(explanation=e.format_message())

    @wsgi.expected_errors(())
    @validation.query_schema(schema.server_groups_query_param)
    def index(self, req):
        """Returns a list of server groups."""
        context = _authorize_context(req, 'index')
        project_id = context.project_id
        if 'all_projects' in req.GET and context.is_admin:
            sgs = objects.InstanceGroupList.get_all(context)
        else:
            sgs = objects.InstanceGroupList.get_by_project_id(
                context, project_id)
        limited_list = common.limited(sgs.objects, req)
        result = [self._format_server_group(context, group, req)
                  for group in limited_list]
        return {'server_groups': result}

    @wsgi.Controller.api_version("2.1")
    @wsgi.expected_errors((400, 403))
    @validation.schema(schema.create, "2.0", "2.14")
    @validation.schema(schema.create_v215, "2.15")
    def create(self, req, body):
        """Creates a new server group."""
        context = _authorize_context(req, 'create')
        try:
            objects.Quotas.check_deltas(context, {'server_groups': 1},
                                        context.project_id, context.user_id)
        except nova.exception.OverQuota:
            msg = _("Quota exceeded, too many server groups.")
            raise exc.HTTPForbidden(explanation=msg)

        vals = body['server_group']
        sg = objects.InstanceGroup(context)
        sg.project_id = context.project_id
        sg.user_id = context.user_id
        try:
            sg.name = vals.get('name')
            sg.policies = vals.get('policies')
            sg.create()
        except ValueError as e:
            # Pass a string, not the exception object, as the explanation.
            raise exc.HTTPBadRequest(explanation=six.text_type(e))

        # NOTE(melwitt): We recheck the quota after creating the object to
        # prevent users from allocating more resources than their allowed
        # quota in the event of a race. This is configurable because it can
        # be expensive if strict quota limits are not required in a
        # deployment.
        if CONF.quota.recheck_quota:
            try:
                objects.Quotas.check_deltas(context, {'server_groups': 0},
                                            context.project_id,
                                            context.user_id)
            except nova.exception.OverQuota:
                sg.destroy()
                msg = _("Quota exceeded, too many server groups.")
                raise exc.HTTPForbidden(explanation=msg)

        return {'server_group': self._format_server_group(context, sg, req)}
nova-17.0.1/nova/api/openstack/compute/keypairs.py0000666000175000017500000003136213250073126022153 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Keypair management extension.""" import webob import webob.exc from nova.api.openstack import api_version_request from nova.api.openstack import common from nova.api.openstack.compute.schemas import keypairs from nova.api.openstack.compute.views import keypairs as keypairs_view from nova.api.openstack import wsgi from nova.api import validation from nova.compute import api as compute_api from nova import exception from nova.i18n import _ from nova.objects import keypair as keypair_obj from nova.policies import keypairs as kp_policies class KeypairController(wsgi.Controller): """Keypair API controller for the OpenStack API.""" _view_builder_class = keypairs_view.ViewBuilder def __init__(self): self.api = compute_api.KeypairAPI() super(KeypairController, self).__init__() def _filter_keypair(self, keypair, **attrs): # TODO(claudiub): After v2 and v2.1 is no longer supported, # keypair.type can be added to the clean dict below clean = { 'name': keypair.name, 'public_key': keypair.public_key, 'fingerprint': keypair.fingerprint, } for attr in attrs: clean[attr] = keypair[attr] return clean @wsgi.Controller.api_version("2.10") @wsgi.response(201) @wsgi.expected_errors((400, 403, 409)) @validation.schema(keypairs.create_v210) def create(self, req, body): """Create or import keypair. A policy check restricts users from creating keys for other users params: keypair object with: name (required) - string public_key (optional) - string type (optional) - string user_id (optional) - string """ # handle optional user-id for admin only user_id = body['keypair'].get('user_id') return self._create(req, body, type=True, user_id=user_id) @wsgi.Controller.api_version("2.2", "2.9") # noqa @wsgi.response(201) @wsgi.expected_errors((400, 403, 409)) @validation.schema(keypairs.create_v22) def create(self, req, body): """Create or import keypair. Sending name will generate a key and return private_key and fingerprint. Keypair will have the type ssh or x509, specified by type. You can send a public_key to add an existing ssh/x509 key. params: keypair object with: name (required) - string public_key (optional) - string type (optional) - string """ return self._create(req, body, type=True) @wsgi.Controller.api_version("2.1", "2.1") # noqa @wsgi.expected_errors((400, 403, 409)) @validation.schema(keypairs.create_v20, "2.0", "2.0") @validation.schema(keypairs.create, "2.1", "2.1") def create(self, req, body): """Create or import keypair. Sending name will generate a key and return private_key and fingerprint. You can send a public_key to add an existing ssh key. 
params: keypair object with: name (required) - string public_key (optional) - string """ return self._create(req, body) def _create(self, req, body, user_id=None, **keypair_filters): context = req.environ['nova.context'] params = body['keypair'] name = common.normalize_name(params['name']) key_type = params.get('type', keypair_obj.KEYPAIR_TYPE_SSH) user_id = user_id or context.user_id context.can(kp_policies.POLICY_ROOT % 'create', target={'user_id': user_id, 'project_id': context.project_id}) try: if 'public_key' in params: keypair = self.api.import_key_pair(context, user_id, name, params['public_key'], key_type) keypair = self._filter_keypair(keypair, user_id=True, **keypair_filters) else: keypair, private_key = self.api.create_key_pair( context, user_id, name, key_type) keypair = self._filter_keypair(keypair, user_id=True, **keypair_filters) keypair['private_key'] = private_key return {'keypair': keypair} except exception.KeypairLimitExceeded: msg = _("Quota exceeded, too many key pairs.") raise webob.exc.HTTPForbidden(explanation=msg) except exception.InvalidKeypair as exc: raise webob.exc.HTTPBadRequest(explanation=exc.format_message()) except exception.KeyPairExists as exc: raise webob.exc.HTTPConflict(explanation=exc.format_message()) @wsgi.Controller.api_version("2.1", "2.1") @validation.query_schema(keypairs.delete_query_schema_v20) @wsgi.response(202) @wsgi.expected_errors(404) def delete(self, req, id): self._delete(req, id) @wsgi.Controller.api_version("2.2", "2.9") # noqa @validation.query_schema(keypairs.delete_query_schema_v20) @wsgi.response(204) @wsgi.expected_errors(404) def delete(self, req, id): self._delete(req, id) @wsgi.Controller.api_version("2.10") # noqa @validation.query_schema(keypairs.delete_query_schema_v210) @wsgi.response(204) @wsgi.expected_errors(404) def delete(self, req, id): # handle optional user-id for admin only user_id = self._get_user_id(req) self._delete(req, id, user_id=user_id) def _delete(self, req, id, user_id=None): """Delete a keypair with a given name.""" context = req.environ['nova.context'] # handle optional user-id for admin only user_id = user_id or context.user_id context.can(kp_policies.POLICY_ROOT % 'delete', target={'user_id': user_id, 'project_id': context.project_id}) try: self.api.delete_key_pair(context, user_id, id) except exception.KeypairNotFound as exc: raise webob.exc.HTTPNotFound(explanation=exc.format_message()) def _get_user_id(self, req): if 'user_id' in req.GET.keys(): user_id = req.GET.getall('user_id')[0] return user_id @wsgi.Controller.api_version("2.10") @validation.query_schema(keypairs.show_query_schema_v210) @wsgi.expected_errors(404) def show(self, req, id): # handle optional user-id for admin only user_id = self._get_user_id(req) return self._show(req, id, type=True, user_id=user_id) @wsgi.Controller.api_version("2.2", "2.9") # noqa @validation.query_schema(keypairs.show_query_schema_v20) @wsgi.expected_errors(404) def show(self, req, id): return self._show(req, id, type=True) @wsgi.Controller.api_version("2.1", "2.1") # noqa @validation.query_schema(keypairs.show_query_schema_v20) @wsgi.expected_errors(404) def show(self, req, id): return self._show(req, id) def _show(self, req, id, user_id=None, **keypair_filters): """Return data for the given key name.""" context = req.environ['nova.context'] user_id = user_id or context.user_id context.can(kp_policies.POLICY_ROOT % 'show', target={'user_id': user_id, 'project_id': context.project_id}) try: # The return object needs to be a dict in order to pop the 'type' # 
field, if the api_version < 2.2. keypair = self.api.get_key_pair(context, user_id, id) keypair = self._filter_keypair(keypair, created_at=True, deleted=True, deleted_at=True, id=True, user_id=True, updated_at=True, **keypair_filters) except exception.KeypairNotFound as exc: raise webob.exc.HTTPNotFound(explanation=exc.format_message()) # TODO(oomichi): It is necessary to filter a response of keypair with # _filter_keypair() when v2.1+microversions for implementing consistent # behaviors in this keypair resource. return {'keypair': keypair} @wsgi.Controller.api_version("2.35") @validation.query_schema(keypairs.index_query_schema_v235) @wsgi.expected_errors(400) def index(self, req): user_id = self._get_user_id(req) return self._index(req, links=True, type=True, user_id=user_id) @wsgi.Controller.api_version("2.10", "2.34") # noqa @validation.query_schema(keypairs.index_query_schema_v210) @wsgi.expected_errors(()) def index(self, req): # handle optional user-id for admin only user_id = self._get_user_id(req) return self._index(req, type=True, user_id=user_id) @wsgi.Controller.api_version("2.2", "2.9") # noqa @validation.query_schema(keypairs.index_query_schema_v20) @wsgi.expected_errors(()) def index(self, req): return self._index(req, type=True) @wsgi.Controller.api_version("2.1", "2.1") # noqa @validation.query_schema(keypairs.index_query_schema_v20) @wsgi.expected_errors(()) def index(self, req): return self._index(req) def _index(self, req, user_id=None, links=False, **keypair_filters): """List of keypairs for a user.""" context = req.environ['nova.context'] user_id = user_id or context.user_id context.can(kp_policies.POLICY_ROOT % 'index', target={'user_id': user_id, 'project_id': context.project_id}) if api_version_request.is_supported(req, min_version='2.35'): limit, marker = common.get_limit_and_marker(req) else: limit = marker = None try: key_pairs = self.api.get_key_pairs( context, user_id, limit=limit, marker=marker) except exception.MarkerNotFound as e: raise webob.exc.HTTPBadRequest(explanation=e.format_message()) key_pairs = [self._filter_keypair(key_pair, **keypair_filters) for key_pair in key_pairs] keypairs_list = [{'keypair': key_pair} for key_pair in key_pairs] keypairs_dict = {'keypairs': keypairs_list} if links: keypairs_links = self._view_builder.get_links(req, key_pairs) if keypairs_links: keypairs_dict['keypairs_links'] = keypairs_links return keypairs_dict class Controller(wsgi.Controller): def _add_key_name(self, req, servers): for server in servers: db_server = req.get_db_instance(server['id']) # server['id'] is guaranteed to be in the cache due to # the core API adding it in its 'show'/'detail' methods. server['key_name'] = db_server['key_name'] def _show(self, req, resp_obj): if 'server' in resp_obj.obj: server = resp_obj.obj['server'] self._add_key_name(req, [server]) @wsgi.extends def show(self, req, resp_obj, id): context = req.environ['nova.context'] if context.can(kp_policies.BASE_POLICY_NAME, fatal=False): self._show(req, resp_obj) @wsgi.extends def detail(self, req, resp_obj): context = req.environ['nova.context'] if 'servers' in resp_obj.obj and context.can( kp_policies.BASE_POLICY_NAME, fatal=False): servers = resp_obj.obj['servers'] self._add_key_name(req, servers) # NOTE(gmann): This function is not supposed to use 'body_deprecated_param' # parameter as this is placed to handle scheduler_hint extension for V2.1. 
def server_create(server_dict, create_kwargs, body_deprecated_param): # NOTE(alex_xu): The v2.1 API compat mode, we strip the spaces for # keypair create. But we didn't strip spaces at here for # backward-compatible some users already created keypair and name with # leading/trailing spaces by legacy v2 API. create_kwargs['key_name'] = server_dict.get('key_name') def get_server_create_schema(version): if version == '2.0': return keypairs.server_create_v20 else: return keypairs.server_create nova-17.0.1/nova/api/openstack/compute/attach_interfaces.py0000666000175000017500000001641313250073126023773 0ustar zuulzuul00000000000000# Copyright 2012 SINA Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """The instance interfaces extension.""" import webob from webob import exc from nova.api.openstack import common from nova.api.openstack.compute.schemas import attach_interfaces from nova.api.openstack import wsgi from nova.api import validation from nova import compute from nova import exception from nova.i18n import _ from nova import network from nova.policies import attach_interfaces as ai_policies def _translate_interface_attachment_view(port_info): """Maps keys for interface attachment details view.""" return { 'net_id': port_info['network_id'], 'port_id': port_info['id'], 'mac_addr': port_info['mac_address'], 'port_state': port_info['status'], 'fixed_ips': port_info.get('fixed_ips', None), } class InterfaceAttachmentController(wsgi.Controller): """The interface attachment API controller for the OpenStack API.""" def __init__(self): self.compute_api = compute.API() self.network_api = network.API() super(InterfaceAttachmentController, self).__init__() @wsgi.expected_errors((404, 501)) def index(self, req, server_id): """Returns the list of interface attachments for a given instance.""" context = req.environ['nova.context'] context.can(ai_policies.BASE_POLICY_NAME) instance = common.get_instance(self.compute_api, context, server_id) search_opts = {'device_id': instance.uuid} try: data = self.network_api.list_ports(context, **search_opts) except exception.NotFound as e: raise exc.HTTPNotFound(explanation=e.format_message()) except NotImplementedError: common.raise_feature_not_supported() ports = data.get('ports', []) entity_maker = _translate_interface_attachment_view results = [entity_maker(port) for port in ports] return {'interfaceAttachments': results} @wsgi.expected_errors((403, 404)) def show(self, req, server_id, id): """Return data about the given interface attachment.""" context = req.environ['nova.context'] context.can(ai_policies.BASE_POLICY_NAME) port_id = id # NOTE(mriedem): We need to verify the instance actually exists from # the server_id even though we're not using the instance for anything, # just the port id. 
common.get_instance(self.compute_api, context, server_id) try: port_info = self.network_api.show_port(context, port_id) except exception.PortNotFound as e: raise exc.HTTPNotFound(explanation=e.format_message()) except exception.Forbidden as e: raise exc.HTTPForbidden(explanation=e.format_message()) if port_info['port']['device_id'] != server_id: msg = _("Instance %(instance)s does not have a port with id " "%(port)s") % {'instance': server_id, 'port': port_id} raise exc.HTTPNotFound(explanation=msg) return {'interfaceAttachment': _translate_interface_attachment_view( port_info['port'])} @wsgi.expected_errors((400, 404, 409, 500, 501)) @validation.schema(attach_interfaces.create, '2.0', '2.48') @validation.schema(attach_interfaces.create_v249, '2.49') def create(self, req, server_id, body): """Attach an interface to an instance.""" context = req.environ['nova.context'] context.can(ai_policies.BASE_POLICY_NAME) context.can(ai_policies.POLICY_ROOT % 'create') network_id = None port_id = None req_ip = None tag = None if body: attachment = body['interfaceAttachment'] network_id = attachment.get('net_id', None) port_id = attachment.get('port_id', None) tag = attachment.get('tag', None) try: req_ip = attachment['fixed_ips'][0]['ip_address'] except Exception: pass if network_id and port_id: msg = _("Must not input both network_id and port_id") raise exc.HTTPBadRequest(explanation=msg) if req_ip and not network_id: msg = _("Must input network_id when request IP address") raise exc.HTTPBadRequest(explanation=msg) instance = common.get_instance(self.compute_api, context, server_id) try: vif = self.compute_api.attach_interface(context, instance, network_id, port_id, req_ip, tag=tag) except (exception.InterfaceAttachFailedNoNetwork, exception.NetworkAmbiguous, exception.NoMoreFixedIps, exception.PortNotUsable, exception.AttachInterfaceNotSupported, exception.SecurityGroupCannotBeApplied, exception.TaggedAttachmentNotSupported) as e: raise exc.HTTPBadRequest(explanation=e.format_message()) except (exception.InstanceIsLocked, exception.FixedIpAlreadyInUse, exception.PortInUse) as e: raise exc.HTTPConflict(explanation=e.format_message()) except (exception.PortNotFound, exception.NetworkNotFound) as e: raise exc.HTTPNotFound(explanation=e.format_message()) except exception.InterfaceAttachFailed as e: raise webob.exc.HTTPInternalServerError( explanation=e.format_message()) except exception.InstanceInvalidState as state_error: common.raise_http_conflict_for_instance_invalid_state(state_error, 'attach_interface', server_id) return self.show(req, server_id, vif['id']) @wsgi.response(202) @wsgi.expected_errors((404, 409, 501)) def delete(self, req, server_id, id): """Detach an interface from an instance.""" context = req.environ['nova.context'] context.can(ai_policies.BASE_POLICY_NAME) context.can(ai_policies.POLICY_ROOT % 'delete') port_id = id instance = common.get_instance(self.compute_api, context, server_id, expected_attrs=['device_metadata']) try: self.compute_api.detach_interface(context, instance, port_id=port_id) except exception.PortNotFound as e: raise exc.HTTPNotFound(explanation=e.format_message()) except exception.InstanceIsLocked as e: raise exc.HTTPConflict(explanation=e.format_message()) except NotImplementedError: common.raise_feature_not_supported() except exception.InstanceInvalidState as state_error: common.raise_http_conflict_for_instance_invalid_state(state_error, 'detach_interface', server_id) 
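# ----------------------------------------------------------------------
# A minimal client-side sketch of the attach-interface API implemented
# above, assuming a python-requests environment. The endpoint, token and
# IDs are hypothetical; the body mirrors the 'interfaceAttachment'
# structure that InterfaceAttachmentController.create() validates.
import requests

NOVA_ENDPOINT = 'http://controller:8774/v2.1'  # hypothetical endpoint
AUTH_TOKEN = 'gAAAAAB-example-token'           # hypothetical token
SERVER_ID = 'c1f9a0a8-0000-4000-8000-000000000003'  # hypothetical server


def attach_interface(net_id):
    """Attach a new port on net_id to the server and return its details."""
    body = {'interfaceAttachment': {'net_id': net_id}}
    resp = requests.post(
        '%s/servers/%s/os-interface' % (NOVA_ENDPOINT, SERVER_ID),
        json=body, headers={'X-Auth-Token': AUTH_TOKEN})
    resp.raise_for_status()
    # The controller responds with the attachment as mapped by
    # _translate_interface_attachment_view(): net_id, port_id, mac_addr,
    # port_state and fixed_ips.
    return resp.json()['interfaceAttachment']
# ----------------------------------------------------------------------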
nova-17.0.1/nova/api/openstack/compute/limits.py0000666000175000017500000000676213250073126021633 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.api.openstack.api_version_request \ import MAX_IMAGE_META_PROXY_API_VERSION from nova.api.openstack.api_version_request \ import MAX_PROXY_API_SUPPORT_VERSION from nova.api.openstack.api_version_request \ import MIN_WITHOUT_IMAGE_META_PROXY_API_VERSION from nova.api.openstack.api_version_request \ import MIN_WITHOUT_PROXY_API_SUPPORT_VERSION from nova.api.openstack.compute.schemas import limits from nova.api.openstack.compute.views import limits as limits_views from nova.api.openstack import wsgi from nova.api import validation from nova.policies import limits as limits_policies from nova import quota QUOTAS = quota.QUOTAS # This is a list of limits which needs to filter out from the API response. # This is due to the deprecation of network related proxy APIs, the related # limit should be removed from the API also. FILTERED_LIMITS_2_36 = ['floating_ips', 'security_groups', 'security_group_rules'] FILTERED_LIMITS_2_57 = list(FILTERED_LIMITS_2_36) FILTERED_LIMITS_2_57.extend(['injected_files', 'injected_file_content_bytes']) class LimitsController(wsgi.Controller): """Controller for accessing limits in the OpenStack API.""" @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.expected_errors(()) @validation.query_schema(limits.limits_query_schema) def index(self, req): return self._index(req) @wsgi.Controller.api_version(MIN_WITHOUT_PROXY_API_SUPPORT_VERSION, # noqa MAX_IMAGE_META_PROXY_API_VERSION) # noqa @wsgi.expected_errors(()) @validation.query_schema(limits.limits_query_schema) def index(self, req): return self._index(req, FILTERED_LIMITS_2_36) @wsgi.Controller.api_version( # noqa MIN_WITHOUT_IMAGE_META_PROXY_API_VERSION, '2.56') # noqa @wsgi.expected_errors(()) @validation.query_schema(limits.limits_query_schema) def index(self, req): return self._index(req, FILTERED_LIMITS_2_36, max_image_meta=False) @wsgi.Controller.api_version('2.57') # noqa @wsgi.expected_errors(()) @validation.query_schema(limits.limits_query_schema) def index(self, req): return self._index(req, FILTERED_LIMITS_2_57, max_image_meta=False) def _index(self, req, filtered_limits=None, max_image_meta=True): """Return all global limit information.""" context = req.environ['nova.context'] context.can(limits_policies.BASE_POLICY_NAME) project_id = req.params.get('tenant_id', context.project_id) quotas = QUOTAS.get_project_quotas(context, project_id, usages=False) abs_limits = {k: v['limit'] for k, v in quotas.items()} builder = limits_views.ViewBuilder() return builder.build(abs_limits, filtered_limits=filtered_limits, max_image_meta=max_image_meta) nova-17.0.1/nova/api/openstack/compute/lock_server.py0000666000175000017500000000377313250073126022647 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # Copyright 2013 IBM Corp. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.api.openstack import common from nova.api.openstack import wsgi from nova import compute from nova.policies import lock_server as ls_policies class LockServerController(wsgi.Controller): def __init__(self, *args, **kwargs): super(LockServerController, self).__init__(*args, **kwargs) self.compute_api = compute.API() @wsgi.response(202) @wsgi.expected_errors(404) @wsgi.action('lock') def _lock(self, req, id, body): """Lock a server instance.""" context = req.environ['nova.context'] instance = common.get_instance(self.compute_api, context, id) context.can(ls_policies.POLICY_ROOT % 'lock', target={'user_id': instance.user_id, 'project_id': instance.project_id}) self.compute_api.lock(context, instance) @wsgi.response(202) @wsgi.expected_errors(404) @wsgi.action('unlock') def _unlock(self, req, id, body): """Unlock a server instance.""" context = req.environ['nova.context'] context.can(ls_policies.POLICY_ROOT % 'unlock') instance = common.get_instance(self.compute_api, context, id) if not self.compute_api.is_expected_locked_by(context, instance): context.can(ls_policies.POLICY_ROOT % 'unlock:unlock_override', instance) self.compute_api.unlock(context, instance) nova-17.0.1/nova/api/openstack/compute/floating_ip_dns.py0000666000175000017500000002323613250073126023464 0ustar zuulzuul00000000000000# Copyright 2011 Andrew Bogott for the Wikimedia Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
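# A hypothetical pair of request bodies for the domain update handler
# below (FloatingIPDNSDomainController.update), illustrating the scope
# validation it performs:
#     {'domain_entry': {'scope': 'public', 'project': 'demo'}}
#     {'domain_entry': {'scope': 'private', 'availability_zone': 'nova'}}
# Passing 'project' with scope 'private', or 'availability_zone' with
# scope 'public', is rejected with HTTP 400.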
from oslo_utils import netutils from six.moves import urllib import webob from nova.api.openstack.api_version_request \ import MAX_PROXY_API_SUPPORT_VERSION from nova.api.openstack import common from nova.api.openstack.compute.schemas import floating_ip_dns from nova.api.openstack import wsgi from nova.api import validation from nova import exception from nova.i18n import _ from nova import network from nova.policies import floating_ip_dns as fid_policies def _translate_dns_entry_view(dns_entry): result = {} result['ip'] = dns_entry.get('ip') result['id'] = dns_entry.get('id') result['type'] = dns_entry.get('type') result['domain'] = dns_entry.get('domain') result['name'] = dns_entry.get('name') return {'dns_entry': result} def _translate_dns_entries_view(dns_entries): return {'dns_entries': [_translate_dns_entry_view(entry)['dns_entry'] for entry in dns_entries]} def _translate_domain_entry_view(domain_entry): result = {} result['domain'] = domain_entry.get('domain') result['scope'] = domain_entry.get('scope') result['project'] = domain_entry.get('project') result['availability_zone'] = domain_entry.get('availability_zone') return {'domain_entry': result} def _translate_domain_entries_view(domain_entries): return {'domain_entries': [_translate_domain_entry_view(entry)['domain_entry'] for entry in domain_entries]} def _unquote_domain(domain): """Unquoting function for receiving a domain name in a URL. Domain names tend to have .'s in them. Urllib doesn't quote dots, but Routes tends to choke on them, so we need an extra level of by-hand quoting here. """ return urllib.parse.unquote(domain).replace('%2E', '.') def _create_dns_entry(ip, name, domain): return {'ip': ip, 'name': name, 'domain': domain} def _create_domain_entry(domain, scope=None, project=None, av_zone=None): return {'domain': domain, 'scope': scope, 'project': project, 'availability_zone': av_zone} class FloatingIPDNSDomainController(wsgi.Controller): """DNS domain controller for OpenStack API.""" def __init__(self): super(FloatingIPDNSDomainController, self).__init__() self.network_api = network.API() @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.expected_errors(501) def index(self, req): """Return a list of available DNS domains.""" context = req.environ['nova.context'] context.can(fid_policies.BASE_POLICY_NAME) try: domains = self.network_api.get_dns_domains(context) except NotImplementedError: common.raise_feature_not_supported() domainlist = [_create_domain_entry(domain['domain'], domain.get('scope'), domain.get('project'), domain.get('availability_zone')) for domain in domains] return _translate_domain_entries_view(domainlist) @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.expected_errors((400, 501)) @validation.schema(floating_ip_dns.domain_entry_update) def update(self, req, id, body): """Add or modify domain entry.""" context = req.environ['nova.context'] context.can(fid_policies.POLICY_ROOT % "domain:update") fqdomain = _unquote_domain(id) entry = body['domain_entry'] scope = entry['scope'] project = entry.get('project', None) av_zone = entry.get('availability_zone', None) if scope == 'private' and project: msg = _("you can not pass project if the scope is private") raise webob.exc.HTTPBadRequest(explanation=msg) if scope == 'public' and av_zone: msg = _("you can not pass av_zone if the scope is public") raise webob.exc.HTTPBadRequest(explanation=msg) if scope == 'private': create_dns_domain = self.network_api.create_private_dns_domain area_name, area = 
'availability_zone', av_zone else: create_dns_domain = self.network_api.create_public_dns_domain area_name, area = 'project', project try: create_dns_domain(context, fqdomain, area) except NotImplementedError: common.raise_feature_not_supported() return _translate_domain_entry_view({'domain': fqdomain, 'scope': scope, area_name: area}) @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.expected_errors((404, 501)) @wsgi.response(202) def delete(self, req, id): """Delete the domain identified by id.""" context = req.environ['nova.context'] context.can(fid_policies.POLICY_ROOT % "domain:delete") domain = _unquote_domain(id) # Delete the whole domain try: self.network_api.delete_dns_domain(context, domain) except NotImplementedError: common.raise_feature_not_supported() except exception.NotFound as e: raise webob.exc.HTTPNotFound(explanation=e.format_message()) class FloatingIPDNSEntryController(wsgi.Controller): """DNS Entry controller for OpenStack API.""" def __init__(self): super(FloatingIPDNSEntryController, self).__init__() self.network_api = network.API() @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.expected_errors((404, 501)) def show(self, req, domain_id, id): """Return the DNS entry that corresponds to domain_id and id.""" context = req.environ['nova.context'] context.can(fid_policies.BASE_POLICY_NAME) domain = _unquote_domain(domain_id) floating_ip = None # Check whether id is a valid ipv4/ipv6 address. if netutils.is_valid_ip(id): floating_ip = id try: if floating_ip: entries = self.network_api.get_dns_entries_by_address(context, floating_ip, domain) else: entries = self.network_api.get_dns_entries_by_name(context, id, domain) except NotImplementedError: common.raise_feature_not_supported() if not entries: explanation = _("DNS entries not found.") raise webob.exc.HTTPNotFound(explanation=explanation) if floating_ip: entrylist = [_create_dns_entry(floating_ip, entry, domain) for entry in entries] dns_entries = _translate_dns_entries_view(entrylist) return wsgi.ResponseObject(dns_entries) entry = _create_dns_entry(entries[0], id, domain) return _translate_dns_entry_view(entry) @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.expected_errors(501) @validation.schema(floating_ip_dns.dns_entry_update) def update(self, req, domain_id, id, body): """Add or modify dns entry.""" context = req.environ['nova.context'] context.can(fid_policies.BASE_POLICY_NAME) domain = _unquote_domain(domain_id) name = id entry = body['dns_entry'] address = entry['ip'] dns_type = entry['dns_type'] try: entries = self.network_api.get_dns_entries_by_name(context, name, domain) if not entries: # create! self.network_api.add_dns_entry(context, address, name, dns_type, domain) else: # modify! 
self.network_api.modify_dns_entry(context, name, address, domain) except NotImplementedError: common.raise_feature_not_supported() return _translate_dns_entry_view({'ip': address, 'name': name, 'type': dns_type, 'domain': domain}) @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.expected_errors((404, 501)) @wsgi.response(202) def delete(self, req, domain_id, id): """Delete the entry identified by req and id.""" context = req.environ['nova.context'] context.can(fid_policies.BASE_POLICY_NAME) domain = _unquote_domain(domain_id) name = id try: self.network_api.delete_dns_entry(context, name, domain) except NotImplementedError: common.raise_feature_not_supported() except exception.NotFound as e: raise webob.exc.HTTPNotFound(explanation=e.format_message()) nova-17.0.1/nova/api/openstack/compute/aggregates.py0000666000175000017500000002313113250073126022430 0ustar zuulzuul00000000000000# Copyright (c) 2012 Citrix Systems, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """The Aggregate admin API extension.""" import datetime from webob import exc from nova.api.openstack import api_version_request from nova.api.openstack import common from nova.api.openstack.compute.schemas import aggregates from nova.api.openstack import wsgi from nova.api import validation from nova.compute import api as compute_api from nova import exception from nova.i18n import _ from nova.policies import aggregates as aggr_policies def _get_context(req): return req.environ['nova.context'] class AggregateController(wsgi.Controller): """The Host Aggregates API controller for the OpenStack API.""" def __init__(self): self.api = compute_api.AggregateAPI() @wsgi.expected_errors(()) def index(self, req): """Returns a list a host aggregate's id, name, availability_zone.""" context = _get_context(req) context.can(aggr_policies.POLICY_ROOT % 'index') aggregates = self.api.get_aggregate_list(context) return {'aggregates': [self._marshall_aggregate(req, a)['aggregate'] for a in aggregates]} # NOTE(gmann): Returns 200 for backwards compatibility but should be 201 # as this operation complete the creation of aggregates resource. @wsgi.expected_errors((400, 409)) @validation.schema(aggregates.create_v20, '2.0', '2.0') @validation.schema(aggregates.create, '2.1') def create(self, req, body): """Creates an aggregate, given its name and optional availability zone. 
""" context = _get_context(req) context.can(aggr_policies.POLICY_ROOT % 'create') host_aggregate = body["aggregate"] name = common.normalize_name(host_aggregate["name"]) avail_zone = host_aggregate.get("availability_zone") if avail_zone: avail_zone = common.normalize_name(avail_zone) try: aggregate = self.api.create_aggregate(context, name, avail_zone) except exception.AggregateNameExists as e: raise exc.HTTPConflict(explanation=e.format_message()) except exception.ObjectActionError: raise exc.HTTPConflict(explanation=_( 'Not all aggregates have been migrated to the API database')) except exception.InvalidAggregateAction as e: raise exc.HTTPBadRequest(explanation=e.format_message()) agg = self._marshall_aggregate(req, aggregate) # To maintain the same API result as before the changes for returning # nova objects were made. del agg['aggregate']['hosts'] del agg['aggregate']['metadata'] return agg @wsgi.expected_errors(404) def show(self, req, id): """Shows the details of an aggregate, hosts and metadata included.""" context = _get_context(req) context.can(aggr_policies.POLICY_ROOT % 'show') try: aggregate = self.api.get_aggregate(context, id) except exception.AggregateNotFound as e: raise exc.HTTPNotFound(explanation=e.format_message()) return self._marshall_aggregate(req, aggregate) @wsgi.expected_errors((400, 404, 409)) @validation.schema(aggregates.update_v20, '2.0', '2.0') @validation.schema(aggregates.update, '2.1') def update(self, req, id, body): """Updates the name and/or availability_zone of given aggregate.""" context = _get_context(req) context.can(aggr_policies.POLICY_ROOT % 'update') updates = body["aggregate"] if 'name' in updates: updates['name'] = common.normalize_name(updates['name']) try: aggregate = self.api.update_aggregate(context, id, updates) except exception.AggregateNameExists as e: raise exc.HTTPConflict(explanation=e.format_message()) except exception.AggregateNotFound as e: raise exc.HTTPNotFound(explanation=e.format_message()) except exception.InvalidAggregateAction as e: raise exc.HTTPBadRequest(explanation=e.format_message()) return self._marshall_aggregate(req, aggregate) # NOTE(gmann): Returns 200 for backwards compatibility but should be 204 # as this operation complete the deletion of aggregate resource and return # no response body. @wsgi.expected_errors((400, 404)) def delete(self, req, id): """Removes an aggregate by id.""" context = _get_context(req) context.can(aggr_policies.POLICY_ROOT % 'delete') try: self.api.delete_aggregate(context, id) except exception.AggregateNotFound as e: raise exc.HTTPNotFound(explanation=e.format_message()) except exception.InvalidAggregateAction as e: raise exc.HTTPBadRequest(explanation=e.format_message()) # NOTE(gmann): Returns 200 for backwards compatibility but should be 202 # for representing async API as this API just accepts the request and # request hypervisor driver to complete the same in async mode. 
@wsgi.expected_errors((404, 409)) @wsgi.action('add_host') @validation.schema(aggregates.add_host) def _add_host(self, req, id, body): """Adds a host to the specified aggregate.""" host = body['add_host']['host'] context = _get_context(req) context.can(aggr_policies.POLICY_ROOT % 'add_host') try: aggregate = self.api.add_host_to_aggregate(context, id, host) except (exception.AggregateNotFound, exception.HostMappingNotFound, exception.ComputeHostNotFound) as e: raise exc.HTTPNotFound(explanation=e.format_message()) except (exception.AggregateHostExists, exception.InvalidAggregateAction) as e: raise exc.HTTPConflict(explanation=e.format_message()) return self._marshall_aggregate(req, aggregate) # NOTE(gmann): Returns 200 for backwards compatibility but should be 202 # for representing async API as this API just accepts the request and # request hypervisor driver to complete the same in async mode. @wsgi.expected_errors((404, 409)) @wsgi.action('remove_host') @validation.schema(aggregates.remove_host) def _remove_host(self, req, id, body): """Removes a host from the specified aggregate.""" host = body['remove_host']['host'] context = _get_context(req) context.can(aggr_policies.POLICY_ROOT % 'remove_host') try: aggregate = self.api.remove_host_from_aggregate(context, id, host) except (exception.AggregateNotFound, exception.AggregateHostNotFound, exception.HostMappingNotFound, exception.ComputeHostNotFound): msg = _('Cannot remove host %(host)s in aggregate %(id)s') % { 'host': host, 'id': id} raise exc.HTTPNotFound(explanation=msg) except exception.InvalidAggregateAction: msg = _('Cannot remove host %(host)s in aggregate %(id)s') % { 'host': host, 'id': id} raise exc.HTTPConflict(explanation=msg) return self._marshall_aggregate(req, aggregate) @wsgi.expected_errors((400, 404)) @wsgi.action('set_metadata') @validation.schema(aggregates.set_metadata) def _set_metadata(self, req, id, body): """Replaces the aggregate's existing metadata with new metadata.""" context = _get_context(req) context.can(aggr_policies.POLICY_ROOT % 'set_metadata') metadata = body["set_metadata"]["metadata"] try: aggregate = self.api.update_aggregate_metadata(context, id, metadata) except exception.AggregateNotFound as e: raise exc.HTTPNotFound(explanation=e.format_message()) except exception.InvalidAggregateAction as e: raise exc.HTTPBadRequest(explanation=e.format_message()) return self._marshall_aggregate(req, aggregate) def _marshall_aggregate(self, req, aggregate): _aggregate = {} for key, value in self._build_aggregate_items(req, aggregate): # NOTE(danms): The original API specified non-TZ-aware timestamps if isinstance(value, datetime.datetime): value = value.replace(tzinfo=None) _aggregate[key] = value return {"aggregate": _aggregate} def _build_aggregate_items(self, req, aggregate): show_uuid = api_version_request.is_supported(req, min_version="2.41") keys = aggregate.obj_fields # NOTE(rlrossit): Within the compute API, metadata will always be # set on the aggregate object (at a minimum to {}). 
Because of this, # we can freely use getattr() on keys in obj_extra_fields (in this # case it is only ['availability_zone']) without worrying about # lazy-loading an unset variable for key in keys: if ((aggregate.obj_attr_is_set(key) or key in aggregate.obj_extra_fields) and (show_uuid or key != 'uuid')): yield key, getattr(aggregate, key) nova-17.0.1/nova/api/openstack/compute/multinic.py0000666000175000017500000000552513250073126022152 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """The multinic extension.""" from webob import exc from nova.api.openstack import common from nova.api.openstack.compute.schemas import multinic from nova.api.openstack import wsgi from nova.api import validation from nova import compute from nova import exception from nova.policies import multinic as multinic_policies class MultinicController(wsgi.Controller): """This API is deprecated from Microversion '2.44'.""" def __init__(self, *args, **kwargs): super(MultinicController, self).__init__(*args, **kwargs) self.compute_api = compute.API() @wsgi.Controller.api_version("2.1", "2.43") @wsgi.response(202) @wsgi.action('addFixedIp') @wsgi.expected_errors((400, 404)) @validation.schema(multinic.add_fixed_ip) def _add_fixed_ip(self, req, id, body): """Adds an IP on a given network to an instance.""" context = req.environ['nova.context'] context.can(multinic_policies.BASE_POLICY_NAME) instance = common.get_instance(self.compute_api, context, id) network_id = body['addFixedIp']['networkId'] try: self.compute_api.add_fixed_ip(context, instance, network_id) except exception.InstanceUnknownCell as e: raise exc.HTTPNotFound(explanation=e.format_message()) except exception.NoMoreFixedIps as e: raise exc.HTTPBadRequest(explanation=e.format_message()) @wsgi.Controller.api_version("2.1", "2.43") @wsgi.response(202) @wsgi.action('removeFixedIp') @wsgi.expected_errors((400, 404)) @validation.schema(multinic.remove_fixed_ip) def _remove_fixed_ip(self, req, id, body): """Removes an IP from an instance.""" context = req.environ['nova.context'] context.can(multinic_policies.BASE_POLICY_NAME) instance = common.get_instance(self.compute_api, context, id) address = body['removeFixedIp']['address'] try: self.compute_api.remove_fixed_ip(context, instance, address) except exception.InstanceUnknownCell as e: raise exc.HTTPNotFound(explanation=e.format_message()) except exception.FixedIpNotFoundForSpecificInstance as e: raise exc.HTTPBadRequest(explanation=e.format_message()) nova-17.0.1/nova/api/openstack/compute/wsgi.py0000666000175000017500000000141513250073126021271 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """WSGI application entry-point for Nova Compute API, installed by pbr.""" from nova.api.openstack import wsgi_app NAME = "osapi_compute" def init_application(): return wsgi_app.init_application(NAME) nova-17.0.1/nova/api/openstack/compute/flavor_access.py0000666000175000017500000000773113250073126023141 0ustar zuulzuul00000000000000# Copyright (c) 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """The flavor access extension.""" import webob from nova.api.openstack import api_version_request from nova.api.openstack import common from nova.api.openstack.compute.schemas import flavor_access from nova.api.openstack import identity from nova.api.openstack import wsgi from nova.api import validation from nova import exception from nova.i18n import _ from nova.policies import flavor_access as fa_policies def _marshall_flavor_access(flavor): rval = [] for project_id in flavor.projects: rval.append({'flavor_id': flavor.flavorid, 'tenant_id': project_id}) return {'flavor_access': rval} class FlavorAccessController(wsgi.Controller): """The flavor access API controller for the OpenStack API.""" @wsgi.expected_errors(404) def index(self, req, flavor_id): context = req.environ['nova.context'] context.can(fa_policies.BASE_POLICY_NAME) flavor = common.get_flavor(context, flavor_id) # public flavor to all projects if flavor.is_public: explanation = _("Access list not available for public flavors.") raise webob.exc.HTTPNotFound(explanation=explanation) # private flavor to listed projects only return _marshall_flavor_access(flavor) class FlavorActionController(wsgi.Controller): """The flavor access API controller for the OpenStack API.""" @wsgi.expected_errors((400, 403, 404, 409)) @wsgi.action("addTenantAccess") @validation.schema(flavor_access.add_tenant_access) def _add_tenant_access(self, req, id, body): context = req.environ['nova.context'] context.can(fa_policies.POLICY_ROOT % "add_tenant_access") vals = body['addTenantAccess'] tenant = vals['tenant'] identity.verify_project_id(context, tenant) flavor = common.get_flavor(context, id) try: if api_version_request.is_supported(req, min_version='2.7'): if flavor.is_public: exp = _("Can not add access to a public flavor.") raise webob.exc.HTTPConflict(explanation=exp) flavor.add_access(tenant) except exception.FlavorNotFound as e: raise webob.exc.HTTPNotFound(explanation=e.format_message()) except exception.FlavorAccessExists as err: raise webob.exc.HTTPConflict(explanation=err.format_message()) return _marshall_flavor_access(flavor) @wsgi.expected_errors((400, 403, 404)) @wsgi.action("removeTenantAccess") 
@validation.schema(flavor_access.remove_tenant_access) def _remove_tenant_access(self, req, id, body): context = req.environ['nova.context'] context.can( fa_policies.POLICY_ROOT % "remove_tenant_access") vals = body['removeTenantAccess'] tenant = vals['tenant'] identity.verify_project_id(context, tenant) # NOTE(gibi): We have to load a flavor from the db here as # flavor.remove_access() will try to emit a notification and that needs # a fully loaded flavor. flavor = common.get_flavor(context, id) try: flavor.remove_access(tenant) except (exception.FlavorAccessNotFound, exception.FlavorNotFound) as e: raise webob.exc.HTTPNotFound(explanation=e.format_message()) return _marshall_flavor_access(flavor) nova-17.0.1/nova/api/openstack/compute/image_metadata.py0000666000175000017500000001264113250073126023245 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from webob import exc from nova.api.openstack.api_version_request import \ MAX_IMAGE_META_PROXY_API_VERSION from nova.api.openstack import common from nova.api.openstack.compute.schemas import image_metadata from nova.api.openstack import wsgi from nova.api import validation from nova import exception from nova.i18n import _ import nova.image class ImageMetadataController(wsgi.Controller): """The image metadata API controller for the OpenStack API.""" def __init__(self): self.image_api = nova.image.API() def _get_image(self, context, image_id): try: return self.image_api.get(context, image_id) except exception.ImageNotAuthorized as e: raise exc.HTTPForbidden(explanation=e.format_message()) except exception.ImageNotFound: msg = _("Image not found.") raise exc.HTTPNotFound(explanation=msg) @wsgi.Controller.api_version("2.1", MAX_IMAGE_META_PROXY_API_VERSION) @wsgi.expected_errors((403, 404)) def index(self, req, image_id): """Returns the list of metadata for a given instance.""" context = req.environ['nova.context'] metadata = self._get_image(context, image_id)['properties'] return dict(metadata=metadata) @wsgi.Controller.api_version("2.1", MAX_IMAGE_META_PROXY_API_VERSION) @wsgi.expected_errors((403, 404)) def show(self, req, image_id, id): context = req.environ['nova.context'] metadata = self._get_image(context, image_id)['properties'] if id in metadata: return {'meta': {id: metadata[id]}} else: raise exc.HTTPNotFound() @wsgi.Controller.api_version("2.1", MAX_IMAGE_META_PROXY_API_VERSION) @wsgi.expected_errors((400, 403, 404)) @validation.schema(image_metadata.create) def create(self, req, image_id, body): context = req.environ['nova.context'] image = self._get_image(context, image_id) for key, value in body['metadata'].items(): image['properties'][key] = value common.check_img_metadata_properties_quota(context, image['properties']) try: image = self.image_api.update(context, image_id, image, data=None, purge_props=True) except exception.ImageNotAuthorized as e: raise exc.HTTPForbidden(explanation=e.format_message()) return dict(metadata=image['properties']) 
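    # A hypothetical exchange for the create() handler above:
    #     POST /v2.1/images/{image_id}/metadata
    #     {"metadata": {"os_distro": "ubuntu"}}
    # The new keys are merged into the image's existing properties, the
    # image-metadata quota is checked, and the full merged set is
    # returned; update_all() below replaces the whole set instead.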
@wsgi.Controller.api_version("2.1", MAX_IMAGE_META_PROXY_API_VERSION) @wsgi.expected_errors((400, 403, 404)) @validation.schema(image_metadata.update) def update(self, req, image_id, id, body): context = req.environ['nova.context'] meta = body['meta'] if id not in meta: expl = _('Request body and URI mismatch') raise exc.HTTPBadRequest(explanation=expl) image = self._get_image(context, image_id) image['properties'][id] = meta[id] common.check_img_metadata_properties_quota(context, image['properties']) try: self.image_api.update(context, image_id, image, data=None, purge_props=True) except exception.ImageNotAuthorized as e: raise exc.HTTPForbidden(explanation=e.format_message()) return dict(meta=meta) @wsgi.Controller.api_version("2.1", MAX_IMAGE_META_PROXY_API_VERSION) @wsgi.expected_errors((400, 403, 404)) @validation.schema(image_metadata.update_all) def update_all(self, req, image_id, body): context = req.environ['nova.context'] image = self._get_image(context, image_id) metadata = body['metadata'] common.check_img_metadata_properties_quota(context, metadata) image['properties'] = metadata try: self.image_api.update(context, image_id, image, data=None, purge_props=True) except exception.ImageNotAuthorized as e: raise exc.HTTPForbidden(explanation=e.format_message()) return dict(metadata=metadata) @wsgi.Controller.api_version("2.1", MAX_IMAGE_META_PROXY_API_VERSION) @wsgi.expected_errors((403, 404)) @wsgi.response(204) def delete(self, req, image_id, id): context = req.environ['nova.context'] image = self._get_image(context, image_id) if id not in image['properties']: msg = _("Invalid metadata key") raise exc.HTTPNotFound(explanation=msg) image['properties'].pop(id) try: self.image_api.update(context, image_id, image, data=None, purge_props=True) except exception.ImageNotAuthorized as e: raise exc.HTTPForbidden(explanation=e.format_message()) nova-17.0.1/nova/api/openstack/compute/server_password.py0000666000175000017500000000363713250073126023560 0ustar zuulzuul00000000000000# Copyright (c) 2012 Nebula, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """The server password extension.""" from nova.api.metadata import password from nova.api.openstack import common from nova.api.openstack import wsgi from nova import compute from nova.policies import server_password as sp_policies class ServerPasswordController(wsgi.Controller): """The Server Password API controller for the OpenStack API.""" def __init__(self): self.compute_api = compute.API() @wsgi.expected_errors(404) def index(self, req, server_id): context = req.environ['nova.context'] context.can(sp_policies.BASE_POLICY_NAME) instance = common.get_instance(self.compute_api, context, server_id) passw = password.extract_password(instance) return {'password': passw or ''} @wsgi.expected_errors(404) @wsgi.response(204) def clear(self, req, server_id): """Removes the encrypted server password from the metadata server Note that this does not actually change the instance server password. 
""" context = req.environ['nova.context'] context.can(sp_policies.BASE_POLICY_NAME) instance = common.get_instance(self.compute_api, context, server_id) meta = password.convert_password(context, None) instance.system_metadata.update(meta) instance.save() nova-17.0.1/nova/api/openstack/compute/extended_server_attributes.py0000666000175000017500000001152213250073126025754 0ustar zuulzuul00000000000000# Copyright 2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """The Extended Server Attributes API extension.""" from nova.api.openstack import api_version_request from nova.api.openstack import wsgi from nova import compute from nova.policies import extended_server_attributes as esa_policies from nova.policies import servers as servers_policies class ExtendedServerAttributesController(wsgi.Controller): def __init__(self, *args, **kwargs): super(ExtendedServerAttributesController, self).__init__(*args, **kwargs) self.compute_api = compute.API() def _extend_server(self, context, server, instance, req): key = "OS-EXT-SRV-ATTR:hypervisor_hostname" server[key] = instance.node properties = ['host', 'name'] if api_version_request.is_supported(req, min_version='2.3'): # NOTE(mriedem): These will use the OS-EXT-SRV-ATTR prefix below # and that's OK for microversion 2.3 which is being compatible # with v2.0 for the ec2 API split out from Nova. After this, # however, new microversions should not be using the # OS-EXT-SRV-ATTR prefix. properties += ['reservation_id', 'launch_index', 'hostname', 'kernel_id', 'ramdisk_id', 'root_device_name', 'user_data'] for attr in properties: if attr == 'name': key = "OS-EXT-SRV-ATTR:instance_%s" % attr else: # NOTE(mriedem): Nothing after microversion 2.3 should use the # OS-EXT-SRV-ATTR prefix for the attribute key name. key = "OS-EXT-SRV-ATTR:%s" % attr server[key] = instance[attr] def _server_host_status(self, context, server, instance, req): host_status = self.compute_api.get_instance_host_status(instance) server['host_status'] = host_status @wsgi.extends def show(self, req, resp_obj, id): context = req.environ['nova.context'] authorize_extend = False authorize_host_status = False if context.can(esa_policies.BASE_POLICY_NAME, fatal=False): authorize_extend = True if (api_version_request.is_supported(req, min_version='2.16') and context.can(servers_policies.SERVERS % 'show:host_status', fatal=False)): authorize_host_status = True if authorize_extend or authorize_host_status: server = resp_obj.obj['server'] db_instance = req.get_db_instance(server['id']) # server['id'] is guaranteed to be in the cache due to # the core API adding it in its 'show' method. 
            if authorize_extend:
                self._extend_server(context, server, db_instance, req)
            if authorize_host_status:
                self._server_host_status(context, server, db_instance, req)

    @wsgi.extends
    def detail(self, req, resp_obj):
        context = req.environ['nova.context']
        authorize_extend = False
        authorize_host_status = False
        if context.can(esa_policies.BASE_POLICY_NAME, fatal=False):
            authorize_extend = True
        if (api_version_request.is_supported(req, min_version='2.16') and
            context.can(servers_policies.SERVERS % 'show:host_status',
                        fatal=False)):
            authorize_host_status = True
        if authorize_extend or authorize_host_status:
            servers = list(resp_obj.obj['servers'])
            # NOTE(dinesh-bhor): Skipping fetching of instances from cache as
            # servers list can be empty if invalid status is provided to the
            # core API 'detail' method.
            if servers:
                instances = req.get_db_instances()
                if authorize_host_status:
                    host_statuses = (
                        self.compute_api.get_instances_host_statuses(
                            instances.values()))
                for server in servers:
                    if authorize_extend:
                        instance = instances[server['id']]
                        self._extend_server(context, server, instance, req)
                    if authorize_host_status:
                        server['host_status'] = host_statuses[server['id']]
nova-17.0.1/nova/api/openstack/compute/cloudpipe.py0000666000175000017500000000245613250073126022312 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

"""Connect your vlan to the world."""

from webob import exc

from nova.api.openstack import wsgi


class CloudpipeController(wsgi.Controller):
    """Handle creating and listing cloudpipe instances."""

    @wsgi.expected_errors((410))
    def create(self, req, body):
        """Create a new cloudpipe instance, if none exists.

        Parameters: {cloudpipe: {'project_id': ''}}
        """
        raise exc.HTTPGone()

    @wsgi.expected_errors((410))
    def index(self, req):
        """List running cloudpipe instances."""
        raise exc.HTTPGone()

    @wsgi.expected_errors(410)
    def update(self, req, id, body):
        """Configure cloudpipe parameters for the project."""
        raise exc.HTTPGone()
nova-17.0.1/nova/api/openstack/compute/server_usage.py0000666000175000017500000000433713250073126023020 0ustar zuulzuul00000000000000# Copyright 2013 OpenStack Foundation
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

from nova.api.openstack import wsgi
from nova.policies import server_usage as su_policies


resp_topic = "OS-SRV-USG"


class ServerUsageController(wsgi.Controller):
    def _extend_server(self, server, instance):
        for k in ['launched_at', 'terminated_at']:
            key = "%s:%s" % (resp_topic, k)
            # NOTE(danms): Historically, this timestamp has been generated
            # merely by grabbing str(datetime) of a TZ-naive object. The
            # only way we can keep that with instance objects is to strip
            # the tzinfo from the stamp and str() it.
            server[key] = (instance[k].replace(tzinfo=None)
                           if instance[k] else None)

    @wsgi.extends
    def show(self, req, resp_obj, id):
        context = req.environ['nova.context']
        if context.can(su_policies.BASE_POLICY_NAME, fatal=False):
            server = resp_obj.obj['server']
            db_instance = req.get_db_instance(server['id'])
            # server['id'] is guaranteed to be in the cache due to
            # the core API adding it in its 'show' method.
            self._extend_server(server, db_instance)

    @wsgi.extends
    def detail(self, req, resp_obj):
        context = req.environ['nova.context']
        if context.can(su_policies.BASE_POLICY_NAME, fatal=False):
            servers = list(resp_obj.obj['servers'])
            for server in servers:
                db_instance = req.get_db_instance(server['id'])
                # server['id'] is guaranteed to be in the cache due to
                # the core API adding it in its 'detail' method.
                self._extend_server(server, db_instance)
nova-17.0.1/nova/api/openstack/compute/fping.py0000666000175000017500000001131213250073126021420 0ustar zuulzuul00000000000000# Copyright 2011 Grid Dynamics
# Copyright 2011 OpenStack Foundation
# All Rights Reserved.
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.
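# NOTE: this controller is a proxy API (capped at
# MAX_PROXY_API_SUPPORT_VERSION) that shells out to the external fping
# utility, located via CONF.api.fping_path, to report which instances
# respond to ping.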
import itertools import os from webob import exc from nova.api.openstack.api_version_request \ import MAX_PROXY_API_SUPPORT_VERSION from nova.api.openstack import common from nova.api.openstack.compute.schemas import fping as schema from nova.api.openstack import wsgi from nova.api import validation from nova import compute import nova.conf from nova.i18n import _ from nova.policies import fping as fping_policies from nova import utils CONF = nova.conf.CONF class FpingController(wsgi.Controller): def __init__(self, network_api=None): self.compute_api = compute.API() self.last_call = {} def check_fping(self): if not os.access(CONF.api.fping_path, os.X_OK): raise exc.HTTPServiceUnavailable( explanation=_("fping utility is not found.")) @staticmethod def fping(ips): fping_ret = utils.execute(CONF.api.fping_path, *ips, check_exit_code=False) if not fping_ret: return set() alive_ips = set() for line in fping_ret[0].split("\n"): ip = line.split(" ", 1)[0] if "alive" in line: alive_ips.add(ip) return alive_ips @staticmethod def _get_instance_ips(context, instance): ret = [] for network in common.get_networks_for_instance( context, instance).values(): all_ips = itertools.chain(network["ips"], network["floating_ips"]) ret += [ip["address"] for ip in all_ips] return ret @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @validation.query_schema(schema.index_query) @wsgi.expected_errors(503) def index(self, req): context = req.environ["nova.context"] search_opts = dict(deleted=False) if "all_tenants" in req.GET: context.can(fping_policies.POLICY_ROOT % 'all_tenants') else: context.can(fping_policies.BASE_POLICY_NAME) if context.project_id: search_opts["project_id"] = context.project_id else: search_opts["user_id"] = context.user_id self.check_fping() include = req.GET.get("include", None) if include: include = set(include.split(",")) exclude = set() else: include = None exclude = req.GET.get("exclude", None) if exclude: exclude = set(exclude.split(",")) else: exclude = set() instance_list = self.compute_api.get_all( context, search_opts=search_opts) ip_list = [] instance_ips = {} instance_projects = {} for instance in instance_list: uuid = instance.uuid if uuid in exclude or (include is not None and uuid not in include): continue ips = [str(ip) for ip in self._get_instance_ips(context, instance)] instance_ips[uuid] = ips instance_projects[uuid] = instance.project_id ip_list += ips alive_ips = self.fping(ip_list) res = [] for instance_uuid, ips in instance_ips.items(): res.append({ "id": instance_uuid, "project_id": instance_projects[instance_uuid], "alive": bool(set(ips) & alive_ips), }) return {"servers": res} @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.expected_errors((404, 503)) def show(self, req, id): context = req.environ["nova.context"] context.can(fping_policies.BASE_POLICY_NAME) self.check_fping() instance = common.get_instance(self.compute_api, context, id) ips = [str(ip) for ip in self._get_instance_ips(context, instance)] alive_ips = self.fping(ips) return { "server": { "id": instance.uuid, "project_id": instance.project_id, "alive": bool(set(ips) & alive_ips), } } nova-17.0.1/nova/api/openstack/compute/security_groups.py0000666000175000017500000006012313250073126023567 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # Copyright 2012 Justin Santa Barbara # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """The security groups extension.""" from oslo_log import log as logging from oslo_serialization import jsonutils from webob import exc from nova.api.openstack.api_version_request \ import MAX_PROXY_API_SUPPORT_VERSION from nova.api.openstack import common from nova.api.openstack.compute.schemas import security_groups as \ schema_security_groups from nova.api.openstack import wsgi from nova.api import validation from nova import compute from nova import exception from nova.i18n import _ from nova.network.security_group import openstack_driver from nova.policies import security_groups as sg_policies from nova.virt import netutils LOG = logging.getLogger(__name__) ATTRIBUTE_NAME = 'security_groups' SG_NOT_FOUND = object() def _authorize_context(req): context = req.environ['nova.context'] context.can(sg_policies.BASE_POLICY_NAME) return context class SecurityGroupControllerBase(object): """Base class for Security Group controllers.""" def __init__(self): self.security_group_api = ( openstack_driver.get_openstack_security_group_driver()) self.compute_api = compute.API( security_group_api=self.security_group_api) def _format_security_group_rule(self, context, rule, group_rule_data=None): """Return a security group rule in desired API response format. If group_rule_data is passed in that is used rather than querying for it. """ sg_rule = {} sg_rule['id'] = rule['id'] sg_rule['parent_group_id'] = rule['parent_group_id'] sg_rule['ip_protocol'] = rule['protocol'] sg_rule['from_port'] = rule['from_port'] sg_rule['to_port'] = rule['to_port'] sg_rule['group'] = {} sg_rule['ip_range'] = {} if group_rule_data: sg_rule['group'] = group_rule_data elif rule['group_id']: try: source_group = self.security_group_api.get( context, id=rule['group_id']) except exception.SecurityGroupNotFound: # NOTE(arosen): There is a possible race condition that can # occur here if two api calls occur concurrently: one that # lists the security groups and another one that deletes a # security group rule that has a group_id before the # group_id is fetched. To handle this if # SecurityGroupNotFound is raised we return None instead # of the rule and the caller should ignore the rule. LOG.debug("Security Group ID %s does not exist", rule['group_id']) return sg_rule['group'] = {'name': source_group.get('name'), 'tenant_id': source_group.get('project_id')} else: sg_rule['ip_range'] = {'cidr': rule['cidr']} return sg_rule def _format_security_group(self, context, group, group_rule_data_by_rule_group_id=None): security_group = {} security_group['id'] = group['id'] security_group['description'] = group['description'] security_group['name'] = group['name'] security_group['tenant_id'] = group['project_id'] security_group['rules'] = [] for rule in group['rules']: group_rule_data = None if rule['group_id'] and group_rule_data_by_rule_group_id: group_rule_data = ( group_rule_data_by_rule_group_id.get(rule['group_id'])) if group_rule_data == SG_NOT_FOUND: # The security group for the rule was not found so skip it. 
continue formatted_rule = self._format_security_group_rule( context, rule, group_rule_data) if formatted_rule: security_group['rules'] += [formatted_rule] return security_group def _get_group_rule_data_by_rule_group_id(self, context, groups): group_rule_data_by_rule_group_id = {} # Pre-populate with the group information itself in case any of the # rule group IDs are the in-scope groups. for group in groups: group_rule_data_by_rule_group_id[group['id']] = { 'name': group.get('name'), 'tenant_id': group.get('project_id')} for group in groups: for rule in group['rules']: rule_group_id = rule['group_id'] if (rule_group_id and rule_group_id not in group_rule_data_by_rule_group_id): try: source_group = self.security_group_api.get( context, id=rule['group_id']) group_rule_data_by_rule_group_id[rule_group_id] = { 'name': source_group.get('name'), 'tenant_id': source_group.get('project_id')} except exception.SecurityGroupNotFound: LOG.debug("Security Group %s does not exist", rule_group_id) # Use a sentinel so we don't process this group again. group_rule_data_by_rule_group_id[rule_group_id] = ( SG_NOT_FOUND) return group_rule_data_by_rule_group_id def _from_body(self, body, key): if not body: raise exc.HTTPBadRequest( explanation=_("The request body can't be empty")) value = body.get(key, None) if value is None: raise exc.HTTPBadRequest( explanation=_("Missing parameter %s") % key) return value class SecurityGroupController(SecurityGroupControllerBase, wsgi.Controller): """The Security group API controller for the OpenStack API.""" @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.expected_errors((400, 404)) def show(self, req, id): """Return data about the given security group.""" context = _authorize_context(req) try: id = self.security_group_api.validate_id(id) security_group = self.security_group_api.get(context, None, id, map_exception=True) except exception.SecurityGroupNotFound as exp: raise exc.HTTPNotFound(explanation=exp.format_message()) except exception.Invalid as exp: raise exc.HTTPBadRequest(explanation=exp.format_message()) return {'security_group': self._format_security_group(context, security_group)} @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.expected_errors((400, 404)) @wsgi.response(202) def delete(self, req, id): """Delete a security group.""" context = _authorize_context(req) try: id = self.security_group_api.validate_id(id) security_group = self.security_group_api.get(context, None, id, map_exception=True) self.security_group_api.destroy(context, security_group) except exception.SecurityGroupNotFound as exp: raise exc.HTTPNotFound(explanation=exp.format_message()) except exception.Invalid as exp: raise exc.HTTPBadRequest(explanation=exp.format_message()) @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @validation.query_schema(schema_security_groups.index_query) @wsgi.expected_errors(404) def index(self, req): """Returns a list of security groups.""" context = _authorize_context(req) search_opts = {} search_opts.update(req.GET) project_id = context.project_id raw_groups = self.security_group_api.list(context, project=project_id, search_opts=search_opts) limited_list = common.limited(raw_groups, req) result = [self._format_security_group(context, group) for group in limited_list] return {'security_groups': list(sorted(result, key=lambda k: (k['tenant_id'], k['name'])))} @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.expected_errors((400, 403)) def create(self, req, body): 
"""Creates a new security group.""" context = _authorize_context(req) security_group = self._from_body(body, 'security_group') group_name = security_group.get('name', None) group_description = security_group.get('description', None) try: self.security_group_api.validate_property(group_name, 'name', None) self.security_group_api.validate_property(group_description, 'description', None) group_ref = self.security_group_api.create_security_group( context, group_name, group_description) except exception.Invalid as exp: raise exc.HTTPBadRequest(explanation=exp.format_message()) except exception.SecurityGroupLimitExceeded as exp: raise exc.HTTPForbidden(explanation=exp.format_message()) return {'security_group': self._format_security_group(context, group_ref)} @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.expected_errors((400, 404)) def update(self, req, id, body): """Update a security group.""" context = _authorize_context(req) try: id = self.security_group_api.validate_id(id) security_group = self.security_group_api.get(context, None, id, map_exception=True) except exception.SecurityGroupNotFound as exp: raise exc.HTTPNotFound(explanation=exp.format_message()) except exception.Invalid as exp: raise exc.HTTPBadRequest(explanation=exp.format_message()) security_group_data = self._from_body(body, 'security_group') group_name = security_group_data.get('name', None) group_description = security_group_data.get('description', None) try: self.security_group_api.validate_property(group_name, 'name', None) self.security_group_api.validate_property(group_description, 'description', None) group_ref = self.security_group_api.update_security_group( context, security_group, group_name, group_description) except exception.SecurityGroupNotFound as exp: raise exc.HTTPNotFound(explanation=exp.format_message()) except exception.Invalid as exp: raise exc.HTTPBadRequest(explanation=exp.format_message()) return {'security_group': self._format_security_group(context, group_ref)} class SecurityGroupRulesController(SecurityGroupControllerBase, wsgi.Controller): @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.expected_errors((400, 403, 404)) def create(self, req, body): context = _authorize_context(req) sg_rule = self._from_body(body, 'security_group_rule') group_id = sg_rule.get('group_id') source_group = {} try: parent_group_id = self.security_group_api.validate_id( sg_rule.get('parent_group_id')) security_group = self.security_group_api.get(context, None, parent_group_id, map_exception=True) if group_id is not None: group_id = self.security_group_api.validate_id(group_id) source_group = self.security_group_api.get( context, id=group_id) new_rule = self._rule_args_to_dict(context, to_port=sg_rule.get('to_port'), from_port=sg_rule.get('from_port'), ip_protocol=sg_rule.get('ip_protocol'), cidr=sg_rule.get('cidr'), group_id=group_id) except (exception.Invalid, exception.InvalidCidr) as exp: raise exc.HTTPBadRequest(explanation=exp.format_message()) except exception.SecurityGroupNotFound as exp: raise exc.HTTPNotFound(explanation=exp.format_message()) if new_rule is None: msg = _("Not enough parameters to build a valid rule.") raise exc.HTTPBadRequest(explanation=msg) new_rule['parent_group_id'] = security_group['id'] if 'cidr' in new_rule: net, prefixlen = netutils.get_net_and_prefixlen(new_rule['cidr']) if net not in ('0.0.0.0', '::') and prefixlen == '0': msg = _("Bad prefix for network in cidr %s") % new_rule['cidr'] raise exc.HTTPBadRequest(explanation=msg) 
group_rule_data = None try: if group_id: group_rule_data = {'name': source_group.get('name'), 'tenant_id': source_group.get('project_id')} security_group_rule = ( self.security_group_api.create_security_group_rule( context, security_group, new_rule)) except exception.Invalid as exp: raise exc.HTTPBadRequest(explanation=exp.format_message()) except exception.SecurityGroupNotFound as exp: raise exc.HTTPNotFound(explanation=exp.format_message()) except exception.SecurityGroupLimitExceeded as exp: raise exc.HTTPForbidden(explanation=exp.format_message()) formatted_rule = self._format_security_group_rule(context, security_group_rule, group_rule_data) return {"security_group_rule": formatted_rule} def _rule_args_to_dict(self, context, to_port=None, from_port=None, ip_protocol=None, cidr=None, group_id=None): if group_id is not None: return self.security_group_api.new_group_ingress_rule( group_id, ip_protocol, from_port, to_port) else: cidr = self.security_group_api.parse_cidr(cidr) return self.security_group_api.new_cidr_ingress_rule( cidr, ip_protocol, from_port, to_port) @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.expected_errors((400, 404, 409)) @wsgi.response(202) def delete(self, req, id): context = _authorize_context(req) try: id = self.security_group_api.validate_id(id) rule = self.security_group_api.get_rule(context, id) group_id = rule['parent_group_id'] security_group = self.security_group_api.get(context, None, group_id, map_exception=True) self.security_group_api.remove_rules(context, security_group, [rule['id']]) except exception.SecurityGroupNotFound as exp: raise exc.HTTPNotFound(explanation=exp.format_message()) except exception.NoUniqueMatch as exp: raise exc.HTTPConflict(explanation=exp.format_message()) except exception.Invalid as exp: raise exc.HTTPBadRequest(explanation=exp.format_message()) class ServerSecurityGroupController(SecurityGroupControllerBase): @wsgi.expected_errors(404) def index(self, req, server_id): """Returns a list of security groups for the given instance.""" context = _authorize_context(req) self.security_group_api.ensure_default(context) instance = common.get_instance(self.compute_api, context, server_id) try: groups = self.security_group_api.get_instance_security_groups( context, instance, True) except (exception.SecurityGroupNotFound, exception.InstanceNotFound) as exp: msg = exp.format_message() raise exc.HTTPNotFound(explanation=msg) # Optimize performance here by loading up the group_rule_data per # rule['group_id'] ahead of time so we're not doing redundant # security group lookups for each rule. 
group_rule_data_by_rule_group_id = ( self._get_group_rule_data_by_rule_group_id(context, groups)) result = [self._format_security_group(context, group, group_rule_data_by_rule_group_id) for group in groups] return {'security_groups': list(sorted(result, key=lambda k: (k['tenant_id'], k['name'])))} class SecurityGroupActionController(wsgi.Controller): def __init__(self, *args, **kwargs): super(SecurityGroupActionController, self).__init__(*args, **kwargs) self.security_group_api = ( openstack_driver.get_openstack_security_group_driver()) self.compute_api = compute.API( security_group_api=self.security_group_api) def _parse(self, body, action): try: body = body[action] group_name = body['name'] except TypeError: msg = _("Missing parameter dict") raise exc.HTTPBadRequest(explanation=msg) except KeyError: msg = _("Security group not specified") raise exc.HTTPBadRequest(explanation=msg) if not group_name or group_name.strip() == '': msg = _("Security group name cannot be empty") raise exc.HTTPBadRequest(explanation=msg) return group_name def _invoke(self, method, context, id, group_name): instance = common.get_instance(self.compute_api, context, id) method(context, instance, group_name) @wsgi.expected_errors((400, 404, 409)) @wsgi.response(202) @wsgi.action('addSecurityGroup') def _addSecurityGroup(self, req, id, body): context = req.environ['nova.context'] context.can(sg_policies.BASE_POLICY_NAME) group_name = self._parse(body, 'addSecurityGroup') try: return self._invoke(self.security_group_api.add_to_instance, context, id, group_name) except (exception.SecurityGroupNotFound, exception.InstanceNotFound) as exp: raise exc.HTTPNotFound(explanation=exp.format_message()) except exception.NoUniqueMatch as exp: raise exc.HTTPConflict(explanation=exp.format_message()) except (exception.SecurityGroupCannotBeApplied, exception.SecurityGroupExistsForInstance) as exp: raise exc.HTTPBadRequest(explanation=exp.format_message()) @wsgi.expected_errors((400, 404, 409)) @wsgi.response(202) @wsgi.action('removeSecurityGroup') def _removeSecurityGroup(self, req, id, body): context = req.environ['nova.context'] context.can(sg_policies.BASE_POLICY_NAME) group_name = self._parse(body, 'removeSecurityGroup') try: return self._invoke(self.security_group_api.remove_from_instance, context, id, group_name) except (exception.SecurityGroupNotFound, exception.InstanceNotFound) as exp: raise exc.HTTPNotFound(explanation=exp.format_message()) except exception.NoUniqueMatch as exp: raise exc.HTTPConflict(explanation=exp.format_message()) except exception.SecurityGroupNotExistsForInstance as exp: raise exc.HTTPBadRequest(explanation=exp.format_message()) class SecurityGroupsOutputController(wsgi.Controller): def __init__(self, *args, **kwargs): super(SecurityGroupsOutputController, self).__init__(*args, **kwargs) self.compute_api = compute.API() self.security_group_api = ( openstack_driver.get_openstack_security_group_driver()) def _extend_servers(self, req, servers): # TODO(arosen) this function should be refactored to reduce duplicate # code and use get_instance_security_groups instead of get_db_instance. 
if not len(servers): return key = "security_groups" context = req.environ['nova.context'] if not context.can(sg_policies.BASE_POLICY_NAME, fatal=False): return if not openstack_driver.is_neutron_security_groups(): for server in servers: instance = req.get_db_instance(server['id']) groups = instance.get(key) if groups: server[ATTRIBUTE_NAME] = [{"name": group.name} for group in groups] else: # If method is a POST we get the security groups intended for an # instance from the request. The reason for this is if using # neutron security groups the requested security groups for the # instance are not in the db and have not been sent to neutron yet. if req.method != 'POST': sg_instance_bindings = ( self.security_group_api .get_instances_security_groups_bindings(context, servers)) for server in servers: groups = sg_instance_bindings.get(server['id']) if groups: server[ATTRIBUTE_NAME] = groups # In this section of code len(servers) == 1 as you can only POST # one server in an API request. else: # try converting to json req_obj = jsonutils.loads(req.body) # Add security group to server, if no security group was in # request add default since that is the group it is part of servers[0][ATTRIBUTE_NAME] = req_obj['server'].get( ATTRIBUTE_NAME, [{'name': 'default'}]) def _show(self, req, resp_obj): if 'server' in resp_obj.obj: self._extend_servers(req, [resp_obj.obj['server']]) @wsgi.extends def show(self, req, resp_obj, id): return self._show(req, resp_obj) @wsgi.extends def create(self, req, resp_obj, body): return self._show(req, resp_obj) @wsgi.extends def detail(self, req, resp_obj): self._extend_servers(req, list(resp_obj.obj['servers'])) # NOTE(gmann): This function is not supposed to use 'body_deprecated_param' # parameter as this is placed to handle scheduler_hint extension for V2.1. def server_create(server_dict, create_kwargs, body_deprecated_param): security_groups = server_dict.get(ATTRIBUTE_NAME) if security_groups is not None: create_kwargs['security_groups'] = [ sg['name'] for sg in security_groups if sg.get('name')] create_kwargs['security_groups'] = list( set(create_kwargs['security_groups'])) def get_server_create_schema(version): if version == '2.0': return schema_security_groups.server_create_v20 return schema_security_groups.server_create nova-17.0.1/nova/api/openstack/compute/security_group_default_rules.py0000666000175000017500000001265213250073126026326 0ustar zuulzuul00000000000000# Copyright 2013 Metacloud Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from webob import exc

from nova.api.openstack.api_version_request \
    import MAX_PROXY_API_SUPPORT_VERSION
from nova.api.openstack.compute import security_groups as sg
from nova.api.openstack import wsgi
from nova import exception
from nova.i18n import _
from nova.network.security_group import openstack_driver
from nova.policies import security_group_default_rules as sgdr_policies


class SecurityGroupDefaultRulesController(sg.SecurityGroupControllerBase,
                                          wsgi.Controller):

    def __init__(self):
        self.security_group_api = (
            openstack_driver.get_openstack_security_group_driver())

    @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION)
    @wsgi.expected_errors((400, 409, 501))
    def create(self, req, body):
        context = req.environ['nova.context']
        context.can(sgdr_policies.BASE_POLICY_NAME)

        sg_rule = self._from_body(body, 'security_group_default_rule')

        try:
            values = self._rule_args_to_dict(
                to_port=sg_rule.get('to_port'),
                from_port=sg_rule.get('from_port'),
                ip_protocol=sg_rule.get('ip_protocol'),
                cidr=sg_rule.get('cidr'))
        except (exception.InvalidCidr,
                exception.InvalidInput,
                exception.InvalidIpProtocol,
                exception.InvalidPortRange) as ex:
            raise exc.HTTPBadRequest(explanation=ex.format_message())

        if values is None:
            msg = _('Not enough parameters to build a valid rule.')
            raise exc.HTTPBadRequest(explanation=msg)

        if self.security_group_api.default_rule_exists(context, values):
            msg = _('This default rule already exists.')
            raise exc.HTTPConflict(explanation=msg)
        security_group_rule = self.security_group_api.add_default_rules(
            context, [values])[0]
        fmt_rule = self._format_security_group_default_rule(
            security_group_rule)
        return {'security_group_default_rule': fmt_rule}

    def _rule_args_to_dict(self, to_port=None, from_port=None,
                           ip_protocol=None, cidr=None):
        cidr = self.security_group_api.parse_cidr(cidr)
        return self.security_group_api.new_cidr_ingress_rule(
            cidr, ip_protocol, from_port, to_port)

    @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION)
    @wsgi.expected_errors((400, 404, 501))
    def show(self, req, id):
        context = req.environ['nova.context']
        context.can(sgdr_policies.BASE_POLICY_NAME)

        try:
            id = self.security_group_api.validate_id(id)
        except exception.Invalid as ex:
            raise exc.HTTPBadRequest(explanation=ex.format_message())

        try:
            rule = self.security_group_api.get_default_rule(context, id)
        except exception.SecurityGroupDefaultRuleNotFound as ex:
            raise exc.HTTPNotFound(explanation=ex.format_message())

        fmt_rule = self._format_security_group_default_rule(rule)
        return {"security_group_default_rule": fmt_rule}

    @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION)
    @wsgi.expected_errors((400, 404, 501))
    @wsgi.response(204)
    def delete(self, req, id):
        context = req.environ['nova.context']
        context.can(sgdr_policies.BASE_POLICY_NAME)

        try:
            id = self.security_group_api.validate_id(id)
        except exception.Invalid as ex:
            raise exc.HTTPBadRequest(explanation=ex.format_message())

        try:
            rule = self.security_group_api.get_default_rule(context, id)
            self.security_group_api.remove_default_rules(context,
                                                         [rule['id']])
        except exception.SecurityGroupDefaultRuleNotFound as ex:
            raise exc.HTTPNotFound(explanation=ex.format_message())

    @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION)
    @wsgi.expected_errors((404, 501))
    def index(self, req):
        context = req.environ['nova.context']
        context.can(sgdr_policies.BASE_POLICY_NAME)

        ret = {'security_group_default_rules': []}
        try:
            for rule in self.security_group_api.get_all_default_rules(
                    context):
                rule_fmt = self._format_security_group_default_rule(rule)
                ret['security_group_default_rules'].append(rule_fmt)
        except exception.SecurityGroupDefaultRuleNotFound as ex:
            raise exc.HTTPNotFound(explanation=ex.format_message())

        return ret

    def _format_security_group_default_rule(self, rule):
        sg_rule = {}
        sg_rule['id'] = rule['id']
        sg_rule['ip_protocol'] = rule['protocol']
        sg_rule['from_port'] = rule['from_port']
        sg_rule['to_port'] = rule['to_port']
        sg_rule['ip_range'] = {}
        sg_rule['ip_range'] = {'cidr': rule['cidr']}
        return sg_rule
nova-17.0.1/nova/api/openstack/compute/tenant_networks.py0000666000175000017500000001705313250073126023552 0ustar zuulzuul00000000000000# Copyright 2013 OpenStack Foundation
# All Rights Reserved.
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import netaddr
import netaddr.core as netexc
from oslo_log import log as logging
import six
from webob import exc

from nova.api.openstack.api_version_request \
    import MAX_PROXY_API_SUPPORT_VERSION
from nova.api.openstack.compute.schemas import tenant_networks as schema
from nova.api.openstack import wsgi
from nova.api import validation
import nova.conf
from nova import context as nova_context
from nova import exception
from nova.i18n import _
import nova.network
from nova import objects
from nova.policies import tenant_networks as tn_policies
from nova import quota

CONF = nova.conf.CONF

QUOTAS = quota.QUOTAS

LOG = logging.getLogger(__name__)


def network_dict(network):
    # NOTE(danms): Here, network should be an object, which could have come
    # from neutron and thus be missing most of the attributes. Providing a
    # default to get() avoids trying to lazy-load missing attributes.
    return {"id": network.get("uuid", None) or network.get("id", None),
            "cidr": str(network.get("cidr", None)),
            "label": network.get("label", None)}


class TenantNetworkController(wsgi.Controller):
    def __init__(self, network_api=None):
        self.network_api = nova.network.API()
        self._default_networks = []

    def _refresh_default_networks(self):
        self._default_networks = []
        if CONF.api.use_neutron_default_nets:
            try:
                self._default_networks = self._get_default_networks()
            except Exception:
                LOG.exception("Failed to get default networks")

    def _get_default_networks(self):
        project_id = CONF.api.neutron_default_tenant_id
        ctx = nova_context.RequestContext(user_id=None,
                                          project_id=project_id)
        networks = {}
        for n in self.network_api.get_all(ctx):
            networks[n['id']] = n['label']
        return [{'id': k, 'label': v} for k, v in networks.items()]

    @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION)
    @wsgi.expected_errors(())
    def index(self, req):
        context = req.environ['nova.context']
        context.can(tn_policies.BASE_POLICY_NAME)
        networks = list(self.network_api.get_all(context))
        if not self._default_networks:
            self._refresh_default_networks()
        networks.extend(self._default_networks)
        return {'networks': [network_dict(n) for n in networks]}

    @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION)
    @wsgi.expected_errors(404)
    def show(self, req, id):
        context = req.environ['nova.context']
        context.can(tn_policies.BASE_POLICY_NAME)
        try:
            network = self.network_api.get(context, id)
        except exception.NetworkNotFound:
            msg = _("Network not found")
            raise exc.HTTPNotFound(explanation=msg)
        return {'network': network_dict(network)}

    @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION)
    @wsgi.expected_errors((403, 404, 409))
    @wsgi.response(202)
    def delete(self, req, id):
        context = req.environ['nova.context']
        context.can(tn_policies.BASE_POLICY_NAME)

        try:
            self.network_api.disassociate(context, id)
            self.network_api.delete(context, id)
        except exception.PolicyNotAuthorized as e:
            raise exc.HTTPForbidden(explanation=six.text_type(e))
        except exception.NetworkInUse as e:
            raise exc.HTTPConflict(explanation=e.format_message())
        except exception.NetworkNotFound:
            msg = _("Network not found")
            raise exc.HTTPNotFound(explanation=msg)

    @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION)
    @wsgi.expected_errors((400, 403, 409, 503))
    @validation.schema(schema.create)
    def create(self, req, body):
        context = req.environ["nova.context"]
        context.can(tn_policies.BASE_POLICY_NAME)

        network = body["network"]
        keys = ["cidr", "cidr_v6", "ipam", "vlan_start", "network_size",
                "num_networks"]
        kwargs = {k: network.get(k) for k in keys}

        label = network["label"]

        if kwargs["cidr"]:
            try:
                net = netaddr.IPNetwork(kwargs["cidr"])
                if net.size < 4:
                    msg = _("Requested network does not contain "
                            "enough (2+) usable hosts")
                    raise exc.HTTPBadRequest(explanation=msg)
            except netexc.AddrConversionError:
                msg = _("Address could not be converted.")
                raise exc.HTTPBadRequest(explanation=msg)

        try:
            if CONF.enable_network_quota:
                objects.Quotas.check_deltas(context, {'networks': 1},
                                            context.project_id)
        except exception.OverQuota:
            msg = _("Quota exceeded, too many networks.")
            raise exc.HTTPForbidden(explanation=msg)

        kwargs['project_id'] = context.project_id

        try:
            networks = self.network_api.create(context,
                                               label=label, **kwargs)
        except exception.PolicyNotAuthorized as e:
            raise exc.HTTPForbidden(explanation=six.text_type(e))
        except exception.CidrConflict as e:
            raise exc.HTTPConflict(explanation=e.format_message())
        except Exception:
            msg = _("Create networks failed")
            LOG.exception(msg, extra=network)
            raise exc.HTTPServiceUnavailable(explanation=msg)

        # NOTE(melwitt): We recheck the quota after creating the object to
        # prevent users from allocating more resources than their allowed quota
        # in the event of a race. This is configurable because it can be
        # expensive if strict quota limits are not required in a deployment.
        if CONF.quota.recheck_quota and CONF.enable_network_quota:
            try:
                objects.Quotas.check_deltas(context, {'networks': 0},
                                            context.project_id)
            except exception.OverQuota:
                self.network_api.delete(context,
                                        network_dict(networks[0])['id'])
                msg = _("Quota exceeded, too many networks.")
                raise exc.HTTPForbidden(explanation=msg)

        return {"network": network_dict(networks[0])}


def _network_count(context, project_id):
    # NOTE(melwitt): This assumes a single cell.
    ctx = nova_context.RequestContext(user_id=None,
                                      project_id=project_id)
    ctx = ctx.elevated()
    networks = nova.network.api.API().get_all(ctx)
    return {'project': {'networks': len(networks)}}


def _register_network_quota():
    if CONF.enable_network_quota:
        QUOTAS.register_resource(quota.CountableResource('networks',
                                                         _network_count,
                                                         'quota_networks'))


_register_network_quota()
nova-17.0.1/nova/api/openstack/compute/floating_ips.py0000666000175000017500000003316013250073126023000 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation
# Copyright (c) 2011 X.commerce, a business unit of eBay Inc.
# Copyright 2011 Grid Dynamics
# Copyright 2011 Eldar Nugaev, Kirill Shileev, Ilya Alekseyev
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

from oslo_log import log as logging
from oslo_utils import netutils
from oslo_utils import uuidutils
import webob

from nova.api.openstack.api_version_request \
    import MAX_PROXY_API_SUPPORT_VERSION
from nova.api.openstack import common
from nova.api.openstack.compute.schemas import floating_ips
from nova.api.openstack import wsgi
from nova.api import validation
from nova import compute
from nova import exception
from nova.i18n import _
from nova import network
from nova.policies import floating_ips as fi_policies

LOG = logging.getLogger(__name__)


def _translate_floating_ip_view(floating_ip):
    result = {
        'id': floating_ip['id'],
        'ip': floating_ip['address'],
        'pool': floating_ip['pool'],
    }

    # If fixed_ip is unset on floating_ip, then we can't get any of the next
    # stuff, so we'll just short-circuit
    if 'fixed_ip' not in floating_ip:
        result['fixed_ip'] = None
        result['instance_id'] = None
        return {'floating_ip': result}

    # TODO(rlrossit): These look like dicts, but they're actually versioned
    # objects, so we need to do these contain checks because they will not be
    # caught by the exceptions below (it raises NotImplementedError and
    # OrphanedObjectError. This comment can probably be removed when
    # the dict syntax goes away.
    try:
        if 'address' in floating_ip['fixed_ip']:
            result['fixed_ip'] = floating_ip['fixed_ip']['address']
        else:
            result['fixed_ip'] = None
    except (TypeError, KeyError, AttributeError):
        result['fixed_ip'] = None

    try:
        if 'instance_uuid' in floating_ip['fixed_ip']:
            result['instance_id'] = floating_ip['fixed_ip']['instance_uuid']
        else:
            result['instance_id'] = None
    except (TypeError, KeyError, AttributeError):
        result['instance_id'] = None

    return {'floating_ip': result}


def _translate_floating_ips_view(floating_ips):
    return {'floating_ips': [_translate_floating_ip_view(ip)['floating_ip']
                             for ip in floating_ips]}


def get_instance_by_floating_ip_addr(self, context, address):
    try:
        instance_id =\
            self.network_api.get_instance_id_by_floating_address(
                context, address)
    except exception.FloatingIpNotFoundForAddress as ex:
        raise webob.exc.HTTPNotFound(explanation=ex.format_message())
    except exception.FloatingIpMultipleFoundForAddress as ex:
        raise webob.exc.HTTPConflict(explanation=ex.format_message())

    if instance_id:
        return common.get_instance(self.compute_api, context,
                                   instance_id, expected_attrs=['flavor'])


def disassociate_floating_ip(self, context, instance, address):
    try:
        self.network_api.disassociate_floating_ip(context, instance, address)
    except exception.Forbidden:
        raise webob.exc.HTTPForbidden()
    except exception.CannotDisassociateAutoAssignedFloatingIP:
        msg = _('Cannot disassociate auto assigned floating IP')
        raise webob.exc.HTTPForbidden(explanation=msg)


class FloatingIPController(wsgi.Controller):
    """The Floating IPs API controller for the OpenStack API."""

    def __init__(self):
        self.compute_api = compute.API()
        self.network_api = network.API()
        super(FloatingIPController, self).__init__()

    @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION)
    @wsgi.expected_errors((400, 404))
    def show(self, req, id):
        """Return data about the given floating IP."""
        context = req.environ['nova.context']
        context.can(fi_policies.BASE_POLICY_NAME)

        try:
            floating_ip = self.network_api.get_floating_ip(context, id)
        except (exception.NotFound, exception.FloatingIpNotFound):
            msg = _("Floating IP not found for ID %s") % id
            raise webob.exc.HTTPNotFound(explanation=msg)
        except exception.InvalidID as e:
            raise webob.exc.HTTPBadRequest(explanation=e.format_message())

        return _translate_floating_ip_view(floating_ip)

    @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION)
    @wsgi.expected_errors(())
    def index(self, req):
        """Return a list of floating IPs allocated to a project."""
        context = req.environ['nova.context']
        context.can(fi_policies.BASE_POLICY_NAME)

        floating_ips = self.network_api.get_floating_ips_by_project(context)

        return _translate_floating_ips_view(floating_ips)

    @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION)
    @wsgi.expected_errors((400, 403, 404))
    def create(self, req, body=None):
        context = req.environ['nova.context']
        context.can(fi_policies.BASE_POLICY_NAME)

        pool = None
        if body and 'pool' in body:
            pool = body['pool']
        try:
            address = self.network_api.allocate_floating_ip(context, pool)
            ip = self.network_api.get_floating_ip_by_address(context,
                                                             address)
        except exception.NoMoreFloatingIps:
            if pool:
                msg = _("No more floating IPs in pool %s.") % pool
            else:
                msg = _("No more floating IPs available.")
            raise webob.exc.HTTPNotFound(explanation=msg)
        except exception.FloatingIpLimitExceeded:
            if pool:
                msg = _("IP allocation over quota in pool %s.") % pool
            else:
                msg = _("IP allocation over quota.")
            raise webob.exc.HTTPForbidden(explanation=msg)
        except exception.FloatingIpPoolNotFound as e:
            raise webob.exc.HTTPNotFound(explanation=e.format_message())
        except exception.FloatingIpBadRequest as e:
            raise webob.exc.HTTPBadRequest(explanation=e.format_message())

        return _translate_floating_ip_view(ip)

    @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION)
    @wsgi.response(202)
    @wsgi.expected_errors((400, 403, 404, 409))
    def delete(self, req, id):
        context = req.environ['nova.context']
        context.can(fi_policies.BASE_POLICY_NAME)

        # get the floating ip object
        try:
            floating_ip = self.network_api.get_floating_ip(context, id)
        except (exception.NotFound, exception.FloatingIpNotFound):
            msg = _("Floating IP not found for ID %s") % id
            raise webob.exc.HTTPNotFound(explanation=msg)
        except exception.InvalidID as e:
            raise webob.exc.HTTPBadRequest(explanation=e.format_message())

        address = floating_ip['address']

        # get the associated instance object (if any)
        instance = get_instance_by_floating_ip_addr(self, context, address)
        try:
            self.network_api.disassociate_and_release_floating_ip(
                context, instance, floating_ip)
        except exception.Forbidden:
            raise webob.exc.HTTPForbidden()
        except exception.CannotDisassociateAutoAssignedFloatingIP:
            msg = _('Cannot disassociate auto assigned floating IP')
            raise webob.exc.HTTPForbidden(explanation=msg)
        except exception.FloatingIpNotFoundForAddress as exc:
            raise webob.exc.HTTPNotFound(explanation=exc.format_message())


class FloatingIPActionController(wsgi.Controller):
    """This API is deprecated from the Microversion '2.44'."""

    def __init__(self, *args, **kwargs):
        super(FloatingIPActionController, self).__init__(*args, **kwargs)
        self.compute_api = compute.API()
        self.network_api = network.API()

    @wsgi.Controller.api_version("2.1", "2.43")
    @wsgi.expected_errors((400, 403, 404))
    @wsgi.action('addFloatingIp')
    @validation.schema(floating_ips.add_floating_ip)
    def _add_floating_ip(self, req, id, body):
        """Associate floating_ip to an instance."""
        context = req.environ['nova.context']
        context.can(fi_policies.BASE_POLICY_NAME)

        address = body['addFloatingIp']['address']

        instance = common.get_instance(self.compute_api, context, id,
                                       expected_attrs=['flavor'])
        cached_nwinfo = instance.get_network_info()
        if not cached_nwinfo:
            LOG.warning(
                'Info cache is %r during associate with no nw_info cache',
                instance.info_cache, instance=instance)
            msg = _('Instance network is not ready yet')
            raise webob.exc.HTTPBadRequest(explanation=msg)

        fixed_ips = cached_nwinfo.fixed_ips()
        if not fixed_ips:
            msg = _('No fixed IPs associated to instance')
            raise webob.exc.HTTPBadRequest(explanation=msg)

        fixed_address = None
        if 'fixed_address' in body['addFloatingIp']:
            fixed_address = body['addFloatingIp']['fixed_address']
            for fixed in fixed_ips:
                if fixed['address'] == fixed_address:
                    break
            else:
                msg = _('Specified fixed address not assigned to instance')
                raise webob.exc.HTTPBadRequest(explanation=msg)

        if not fixed_address:
            try:
                fixed_address = next(ip['address'] for ip in fixed_ips
                                     if netutils.is_valid_ipv4(ip['address']))
            except StopIteration:
                msg = _('Unable to associate floating IP %(address)s '
                        'to any fixed IPs for instance %(id)s. '
                        'Instance has no fixed IPv4 addresses to '
                        'associate.') % (
                    {'address': address, 'id': id})
                raise webob.exc.HTTPBadRequest(explanation=msg)
            if len(fixed_ips) > 1:
                LOG.warning('multiple fixed_ips exist, using the first '
                            'IPv4 fixed_ip: %s', fixed_address)

        try:
            self.network_api.associate_floating_ip(
                context, instance,
                floating_address=address,
                fixed_address=fixed_address)
        except exception.FloatingIpAssociated:
            msg = _('floating IP is already associated')
            raise webob.exc.HTTPBadRequest(explanation=msg)
        except exception.FloatingIpAssociateFailed as e:
            raise webob.exc.HTTPBadRequest(explanation=e.format_message())
        except exception.NoFloatingIpInterface:
            msg = _('l3driver call to add floating IP failed')
            raise webob.exc.HTTPBadRequest(explanation=msg)
        except exception.InstanceUnknownCell as e:
            raise webob.exc.HTTPNotFound(explanation=e.format_message())
        except exception.FloatingIpNotFoundForAddress:
            msg = _('floating IP not found')
            raise webob.exc.HTTPNotFound(explanation=msg)
        except exception.Forbidden as e:
            raise webob.exc.HTTPForbidden(explanation=e.format_message())
        except Exception as e:
            msg = _('Unable to associate floating IP %(address)s to '
                    'fixed IP %(fixed_address)s for instance %(id)s. '
                    'Error: %(error)s') % (
                {'address': address, 'fixed_address': fixed_address,
                 'id': id, 'error': e})
            LOG.exception(msg)
            raise webob.exc.HTTPBadRequest(explanation=msg)

        return webob.Response(status_int=202)

    @wsgi.Controller.api_version("2.1", "2.43")
    @wsgi.expected_errors((400, 403, 404, 409))
    @wsgi.action('removeFloatingIp')
    @validation.schema(floating_ips.remove_floating_ip)
    def _remove_floating_ip(self, req, id, body):
        """Dissociate floating_ip from an instance."""
        context = req.environ['nova.context']
        context.can(fi_policies.BASE_POLICY_NAME)

        address = body['removeFloatingIp']['address']

        # get the floating ip object
        try:
            floating_ip = self.network_api.get_floating_ip_by_address(
                context, address)
        except exception.FloatingIpNotFoundForAddress:
            msg = _("floating IP not found")
            raise webob.exc.HTTPNotFound(explanation=msg)

        # get the associated instance object (if any)
        instance = get_instance_by_floating_ip_addr(self, context, address)

        # disassociate if associated
        if (instance and
            floating_ip.get('fixed_ip_id') and
            (uuidutils.is_uuid_like(id) and
             [instance.uuid == id] or
             [instance.id == id])[0]):
            try:
                disassociate_floating_ip(self, context, instance, address)
            except exception.FloatingIpNotAssociated:
                msg = _('Floating IP is not associated')
                raise webob.exc.HTTPBadRequest(explanation=msg)
            return webob.Response(status_int=202)
        else:
            msg = _("Floating IP %(address)s is not associated with instance "
                    "%(id)s.") % {'address': address, 'id': id}
            raise webob.exc.HTTPConflict(explanation=msg)
nova-17.0.1/nova/api/openstack/compute/baremetal_nodes.py0000666000175000017500000001335013250073126023445 0ustar zuulzuul00000000000000# Copyright (c) 2013 NTT DOCOMO, INC.
# Copyright 2014 IBM Corporation.
# All Rights Reserved.
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.
"""The bare-metal admin extension.""" from oslo_utils import importutils import webob from nova.api.openstack.api_version_request \ import MAX_PROXY_API_SUPPORT_VERSION from nova.api.openstack import common from nova.api.openstack import wsgi import nova.conf from nova.i18n import _ from nova.policies import baremetal_nodes as bn_policies ironic_client = importutils.try_import('ironicclient.client') ironic_exc = importutils.try_import('ironicclient.exc') node_fields = ['id', 'cpus', 'local_gb', 'memory_mb', 'pm_address', 'pm_user', 'service_host', 'terminal_port', 'instance_uuid'] node_ext_fields = ['uuid', 'task_state', 'updated_at', 'pxe_config_path'] interface_fields = ['id', 'address', 'datapath_id', 'port_no'] CONF = nova.conf.CONF def _check_ironic_client_enabled(): """Check whether Ironic is installed or not.""" if ironic_client is None: common.raise_feature_not_supported() def _get_ironic_client(): """return an Ironic client.""" # TODO(NobodyCam): Fix insecure setting kwargs = {'os_username': CONF.ironic.admin_username, 'os_password': CONF.ironic.admin_password, 'os_auth_url': CONF.ironic.admin_url, 'os_tenant_name': CONF.ironic.admin_tenant_name, 'os_service_type': 'baremetal', 'os_endpoint_type': 'public', 'insecure': 'true', 'ironic_url': CONF.ironic.api_endpoint} # NOTE(mriedem): The 1 api_version arg here is the only valid value for # the client, but it's not even used so it doesn't really matter. The # ironic client wrapper in the virt driver actually uses a hard-coded # microversion via the os_ironic_api_version kwarg. icli = ironic_client.get_client(1, **kwargs) return icli def _no_ironic_proxy(cmd): raise webob.exc.HTTPBadRequest( explanation=_("Command Not supported. Please use Ironic " "command %(cmd)s to perform this " "action.") % {'cmd': cmd}) class BareMetalNodeController(wsgi.Controller): """The Bare-Metal Node API controller for the OpenStack API.""" def _node_dict(self, node_ref): d = {} for f in node_fields: d[f] = node_ref.get(f) for f in node_ext_fields: d[f] = node_ref.get(f) return d @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.expected_errors((404, 501)) def index(self, req): context = req.environ['nova.context'] context.can(bn_policies.BASE_POLICY_NAME) nodes = [] # proxy command to Ironic _check_ironic_client_enabled() icli = _get_ironic_client() ironic_nodes = icli.node.list(detail=True) for inode in ironic_nodes: node = {'id': inode.uuid, 'interfaces': [], 'host': 'IRONIC MANAGED', 'task_state': inode.provision_state, 'cpus': inode.properties.get('cpus', 0), 'memory_mb': inode.properties.get('memory_mb', 0), 'disk_gb': inode.properties.get('local_gb', 0)} nodes.append(node) return {'nodes': nodes} @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.expected_errors((404, 501)) def show(self, req, id): context = req.environ['nova.context'] context.can(bn_policies.BASE_POLICY_NAME) # proxy command to Ironic _check_ironic_client_enabled() icli = _get_ironic_client() try: inode = icli.node.get(id) except ironic_exc.NotFound: msg = _("Node %s could not be found.") % id raise webob.exc.HTTPNotFound(explanation=msg) iports = icli.node.list_ports(id) node = {'id': inode.uuid, 'interfaces': [], 'host': 'IRONIC MANAGED', 'task_state': inode.provision_state, 'cpus': inode.properties.get('cpus', 0), 'memory_mb': inode.properties.get('memory_mb', 0), 'disk_gb': inode.properties.get('local_gb', 0), 'instance_uuid': inode.instance_uuid} for port in iports: node['interfaces'].append({'address': port.address}) return 
nova-17.0.1/nova/api/openstack/compute/scheduler_hints.py

# Copyright 2011 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from nova.api.openstack.compute.schemas import scheduler_hints as schema


# NOTE(gmann): This function accepts the request body in order to fetch the
# scheduler hints. This is a workaround to allow 'OS-SCH-HNT:scheduler_hints'
# at the top level of the request body; in the future scheduler hints will
# be changed to be a subset of the servers dict.
def server_create(server_dict, create_kwargs, req_body):
    scheduler_hints = {}
    if 'os:scheduler_hints' in req_body:
        scheduler_hints = req_body['os:scheduler_hints']
    elif 'OS-SCH-HNT:scheduler_hints' in req_body:
        scheduler_hints = req_body['OS-SCH-HNT:scheduler_hints']
    create_kwargs['scheduler_hints'] = scheduler_hints


def get_server_create_schema(version):
    return schema.server_create

nova-17.0.1/nova/api/openstack/compute/virtual_interfaces.py

# Copyright (C) 2011 Midokura KK
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""The virtual interfaces extension."""

import webob

from nova.api.openstack import api_version_request
from nova.api.openstack import common
from nova.api.openstack import wsgi
from nova import compute
from nova.i18n import _
from nova import network
from nova.policies import virtual_interfaces as vif_policies


def _translate_vif_summary_view(req, vif):
    """Maps keys for VIF summary view."""
    d = {}
    d['id'] = vif.uuid
    d['mac_address'] = vif.address
    if api_version_request.is_supported(req, min_version='2.12'):
        d['net_id'] = vif.net_uuid
    # NOTE(gmann): This is for v2.1 compatible mode where the response
    # should be the same as in v2.
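    # Illustrative values (not from this tree): for a microversion >= 2.12
    # request the summary dict built above looks like
    #     {'id': '<vif uuid>', 'mac_address': 'fa:16:3e:...',
    #      'net_id': '<network uuid>'},
    # and a legacy v2 request additionally carries the old
    # 'OS-EXT-VIF-NET:net_id' alias added below.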
if req.is_legacy_v2(): d['OS-EXT-VIF-NET:net_id'] = vif.net_uuid return d class ServerVirtualInterfaceController(wsgi.Controller): """The instance VIF API controller for the OpenStack API. This API is deprecated from the Microversion '2.44'. """ def __init__(self): self.compute_api = compute.API() self.network_api = network.API() super(ServerVirtualInterfaceController, self).__init__() def _items(self, req, server_id, entity_maker): """Returns a list of VIFs, transformed through entity_maker.""" context = req.environ['nova.context'] context.can(vif_policies.BASE_POLICY_NAME) instance = common.get_instance(self.compute_api, context, server_id) try: vifs = self.network_api.get_vifs_by_instance(context, instance) except NotImplementedError: msg = _('Listing virtual interfaces is not supported by this ' 'cloud.') raise webob.exc.HTTPBadRequest(explanation=msg) limited_list = common.limited(vifs, req) res = [entity_maker(req, vif) for vif in limited_list] return {'virtual_interfaces': res} @wsgi.Controller.api_version("2.1", "2.43") @wsgi.expected_errors((400, 404)) def index(self, req, server_id): """Returns the list of VIFs for a given instance.""" return self._items(req, server_id, entity_maker=_translate_vif_summary_view) nova-17.0.1/nova/api/openstack/compute/quota_classes.py0000666000175000017500000001251513250073126023171 0ustar zuulzuul00000000000000# Copyright 2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy import webob from nova.api.openstack.compute.schemas import quota_classes from nova.api.openstack import wsgi from nova.api import validation from nova import exception from nova import objects from nova.policies import quota_class_sets as qcs_policies from nova import quota from nova import utils QUOTAS = quota.QUOTAS # NOTE(gmann): Quotas which were returned in v2 but in v2.1 those # were not returned. Fixed in microversion 2.50. Bug#1693168. EXTENDED_QUOTAS = ['server_groups', 'server_group_members'] # NOTE(gmann): Network related quotas are filter out in # microversion 2.50. Bug#1701211. FILTERED_QUOTAS_2_50 = ["fixed_ips", "floating_ips", "networks", "security_group_rules", "security_groups"] # Microversion 2.57 removes personality (injected) files from the API. 
FILTERED_QUOTAS_2_57 = list(FILTERED_QUOTAS_2_50) FILTERED_QUOTAS_2_57.extend(['injected_files', 'injected_file_content_bytes', 'injected_file_path_bytes']) class QuotaClassSetsController(wsgi.Controller): supported_quotas = [] def __init__(self, **kwargs): self.supported_quotas = QUOTAS.resources def _format_quota_set(self, quota_class, quota_set, filtered_quotas=None, exclude_server_groups=False): """Convert the quota object to a result dict.""" if quota_class: result = dict(id=str(quota_class)) else: result = {} original_quotas = copy.deepcopy(self.supported_quotas) if filtered_quotas: original_quotas = [resource for resource in original_quotas if resource not in filtered_quotas] # NOTE(gmann): Before microversion v2.50, v2.1 API does not return the # 'server_groups' & 'server_group_members' key in quota class API # response. if exclude_server_groups: for resource in EXTENDED_QUOTAS: original_quotas.remove(resource) for resource in original_quotas: if resource in quota_set: result[resource] = quota_set[resource] return dict(quota_class_set=result) @wsgi.Controller.api_version('2.1', '2.49') @wsgi.expected_errors(()) def show(self, req, id): return self._show(req, id, exclude_server_groups=True) @wsgi.Controller.api_version('2.50', '2.56') # noqa @wsgi.expected_errors(()) def show(self, req, id): return self._show(req, id, FILTERED_QUOTAS_2_50) @wsgi.Controller.api_version('2.57') # noqa @wsgi.expected_errors(()) def show(self, req, id): return self._show(req, id, FILTERED_QUOTAS_2_57) def _show(self, req, id, filtered_quotas=None, exclude_server_groups=False): context = req.environ['nova.context'] context.can(qcs_policies.POLICY_ROOT % 'show', {'quota_class': id}) values = QUOTAS.get_class_quotas(context, id) return self._format_quota_set(id, values, filtered_quotas, exclude_server_groups) @wsgi.Controller.api_version("2.1", "2.49") # noqa @wsgi.expected_errors(400) @validation.schema(quota_classes.update) def update(self, req, id, body): return self._update(req, id, body, exclude_server_groups=True) @wsgi.Controller.api_version("2.50", "2.56") # noqa @wsgi.expected_errors(400) @validation.schema(quota_classes.update_v250) def update(self, req, id, body): return self._update(req, id, body, FILTERED_QUOTAS_2_50) @wsgi.Controller.api_version("2.57") # noqa @wsgi.expected_errors(400) @validation.schema(quota_classes.update_v257) def update(self, req, id, body): return self._update(req, id, body, FILTERED_QUOTAS_2_57) def _update(self, req, id, body, filtered_quotas=None, exclude_server_groups=False): context = req.environ['nova.context'] context.can(qcs_policies.POLICY_ROOT % 'update', {'quota_class': id}) try: utils.check_string_length(id, 'quota_class_name', min_length=1, max_length=255) except exception.InvalidInput as e: raise webob.exc.HTTPBadRequest( explanation=e.format_message()) quota_class = id for key, value in body['quota_class_set'].items(): try: objects.Quotas.update_class(context, quota_class, key, value) except exception.QuotaClassNotFound: objects.Quotas.create_class(context, quota_class, key, value) values = QUOTAS.get_class_quotas(context, quota_class) return self._format_quota_set(None, values, filtered_quotas, exclude_server_groups) nova-17.0.1/nova/api/openstack/compute/remote_consoles.py0000666000175000017500000002117413250073126023524 0ustar zuulzuul00000000000000# Copyright 2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
# You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import webob

from nova.api.openstack import common
from nova.api.openstack.compute.schemas import remote_consoles
from nova.api.openstack import wsgi
from nova.api import validation
from nova import compute
from nova import exception
from nova.policies import remote_consoles as rc_policies


class RemoteConsolesController(wsgi.Controller):
    def __init__(self, *args, **kwargs):
        self.compute_api = compute.API()
        self.handlers = {'vnc': self.compute_api.get_vnc_console,
                         'spice': self.compute_api.get_spice_console,
                         'rdp': self.compute_api.get_rdp_console,
                         'serial': self.compute_api.get_serial_console,
                         'mks': self.compute_api.get_mks_console}
        super(RemoteConsolesController, self).__init__(*args, **kwargs)

    @wsgi.Controller.api_version("2.1", "2.5")
    @wsgi.expected_errors((400, 404, 409, 501))
    @wsgi.action('os-getVNCConsole')
    @validation.schema(remote_consoles.get_vnc_console)
    def get_vnc_console(self, req, id, body):
        """Get a VNC console for an instance."""
        context = req.environ['nova.context']
        context.can(rc_policies.BASE_POLICY_NAME)

        # If type is not supplied or unknown, get_vnc_console below will cope
        console_type = body['os-getVNCConsole'].get('type')
        instance = common.get_instance(self.compute_api, context, id)
        try:
            output = self.compute_api.get_vnc_console(context,
                                                      instance,
                                                      console_type)
        except exception.ConsoleTypeUnavailable as e:
            raise webob.exc.HTTPBadRequest(explanation=e.format_message())
        except (exception.InstanceUnknownCell,
                exception.InstanceNotFound) as e:
            raise webob.exc.HTTPNotFound(explanation=e.format_message())
        except exception.InstanceNotReady as e:
            raise webob.exc.HTTPConflict(explanation=e.format_message())
        except NotImplementedError:
            common.raise_feature_not_supported()

        return {'console': {'type': console_type, 'url': output['url']}}

    @wsgi.Controller.api_version("2.1", "2.5")
    @wsgi.expected_errors((400, 404, 409, 501))
    @wsgi.action('os-getSPICEConsole')
    @validation.schema(remote_consoles.get_spice_console)
    def get_spice_console(self, req, id, body):
        """Get a SPICE console for an instance."""
        context = req.environ['nova.context']
        context.can(rc_policies.BASE_POLICY_NAME)

        # If type is not supplied or unknown, get_spice_console below will
        # cope
        console_type = body['os-getSPICEConsole'].get('type')
        instance = common.get_instance(self.compute_api, context, id)
        try:
            output = self.compute_api.get_spice_console(context,
                                                        instance,
                                                        console_type)
        except exception.ConsoleTypeUnavailable as e:
            raise webob.exc.HTTPBadRequest(explanation=e.format_message())
        except (exception.InstanceUnknownCell,
                exception.InstanceNotFound) as e:
            raise webob.exc.HTTPNotFound(explanation=e.format_message())
        except exception.InstanceNotReady as e:
            raise webob.exc.HTTPConflict(explanation=e.format_message())
        except NotImplementedError:
            common.raise_feature_not_supported()

        return {'console': {'type': console_type, 'url': output['url']}}

    @wsgi.Controller.api_version("2.1", "2.5")
    @wsgi.expected_errors((400, 404, 409, 501))
    @wsgi.action('os-getRDPConsole')
    @validation.schema(remote_consoles.get_rdp_console)
    def get_rdp_console(self, req, id, body):
        """Get an RDP console for an instance."""
        context = req.environ['nova.context']
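        # Illustrative request body (not from this file) for this pre-2.6
        # action handler:
        #     {"os-getRDPConsole": {"type": "rdp-html5"}}
        # The response wraps the proxy URL as
        #     {'console': {'type': ..., 'url': ...}}.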
context.can(rc_policies.BASE_POLICY_NAME) # If type is not supplied or unknown, get_rdp_console below will cope console_type = body['os-getRDPConsole'].get('type') instance = common.get_instance(self.compute_api, context, id) try: # NOTE(mikal): get_rdp_console() can raise InstanceNotFound, so # we still need to catch it here. output = self.compute_api.get_rdp_console(context, instance, console_type) except exception.ConsoleTypeUnavailable as e: raise webob.exc.HTTPBadRequest(explanation=e.format_message()) except (exception.InstanceUnknownCell, exception.InstanceNotFound) as e: raise webob.exc.HTTPNotFound(explanation=e.format_message()) except exception.InstanceNotReady as e: raise webob.exc.HTTPConflict(explanation=e.format_message()) except NotImplementedError: common.raise_feature_not_supported() return {'console': {'type': console_type, 'url': output['url']}} @wsgi.Controller.api_version("2.1", "2.5") @wsgi.expected_errors((400, 404, 409, 501)) @wsgi.action('os-getSerialConsole') @validation.schema(remote_consoles.get_serial_console) def get_serial_console(self, req, id, body): """Get connection to a serial console.""" context = req.environ['nova.context'] context.can(rc_policies.BASE_POLICY_NAME) # If type is not supplied or unknown get_serial_console below will cope console_type = body['os-getSerialConsole'].get('type') instance = common.get_instance(self.compute_api, context, id) try: output = self.compute_api.get_serial_console(context, instance, console_type) except (exception.InstanceUnknownCell, exception.InstanceNotFound) as e: raise webob.exc.HTTPNotFound(explanation=e.format_message()) except exception.InstanceNotReady as e: raise webob.exc.HTTPConflict(explanation=e.format_message()) except (exception.ConsoleTypeUnavailable, exception.ImageSerialPortNumberInvalid, exception.ImageSerialPortNumberExceedFlavorValue, exception.SocketPortRangeExhaustedException) as e: raise webob.exc.HTTPBadRequest(explanation=e.format_message()) except NotImplementedError: common.raise_feature_not_supported() return {'console': {'type': console_type, 'url': output['url']}} @wsgi.Controller.api_version("2.6") @wsgi.expected_errors((400, 404, 409, 501)) @validation.schema(remote_consoles.create_v26, "2.6", "2.7") @validation.schema(remote_consoles.create_v28, "2.8") def create(self, req, server_id, body): context = req.environ['nova.context'] context.can(rc_policies.BASE_POLICY_NAME) instance = common.get_instance(self.compute_api, context, server_id) protocol = body['remote_console']['protocol'] console_type = body['remote_console']['type'] try: handler = self.handlers.get(protocol) output = handler(context, instance, console_type) return {'remote_console': {'protocol': protocol, 'type': console_type, 'url': output['url']}} except exception.InstanceNotFound as e: raise webob.exc.HTTPNotFound(explanation=e.format_message()) except exception.InstanceNotReady as e: raise webob.exc.HTTPConflict(explanation=e.format_message()) except (exception.ConsoleTypeInvalid, exception.ConsoleTypeUnavailable, exception.ImageSerialPortNumberInvalid, exception.ImageSerialPortNumberExceedFlavorValue, exception.SocketPortRangeExhaustedException) as e: raise webob.exc.HTTPBadRequest(explanation=e.format_message()) except NotImplementedError: common.raise_feature_not_supported() nova-17.0.1/nova/api/openstack/compute/block_device_mapping.py0000666000175000017500000000575313250073126024455 0ustar zuulzuul00000000000000# Copyright 2013 OpenStack Foundation # All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """The block device mappings extension.""" from webob import exc from nova.api.openstack import api_version_request from nova.api.openstack.compute.schemas import block_device_mapping as \ schema_block_device_mapping from nova import block_device from nova import exception from nova.i18n import _ ATTRIBUTE_NAME = "block_device_mapping_v2" LEGACY_ATTRIBUTE_NAME = "block_device_mapping" # NOTE(gmann): This function is not supposed to use 'body_deprecated_param' # parameter as this is placed to handle scheduler_hint extension for V2.1. def server_create(server_dict, create_kwargs, body_deprecated_param): # Have to check whether --image is given, see bug 1433609 image_href = server_dict.get('imageRef') image_uuid_specified = image_href is not None bdm = server_dict.get(ATTRIBUTE_NAME, []) legacy_bdm = server_dict.get(LEGACY_ATTRIBUTE_NAME, []) if bdm and legacy_bdm: expl = _('Using different block_device_mapping syntaxes ' 'is not allowed in the same request.') raise exc.HTTPBadRequest(explanation=expl) try: block_device_mapping = [ block_device.BlockDeviceDict.from_api(bdm_dict, image_uuid_specified) for bdm_dict in bdm] except exception.InvalidBDMFormat as e: raise exc.HTTPBadRequest(explanation=e.format_message()) if block_device_mapping: create_kwargs['block_device_mapping'] = block_device_mapping # Unset the legacy_bdm flag if we got a block device mapping. create_kwargs['legacy_bdm'] = False def get_server_create_schema(version): request_version = api_version_request.APIVersionRequest(version) version_242 = api_version_request.APIVersionRequest('2.42') # NOTE(artom) the following conditional was merged as # "if version == '2.32'" The intent all along was to check whether # version was greater than or equal to 2.32. In other words, we wanted # to support tags in versions 2.32 and up, but ended up supporting them # in version 2.32 only. Since we need a new microversion to add request # body attributes, tags have been re-added in version 2.42. if version == '2.32' or request_version >= version_242: return schema_block_device_mapping.server_create_with_tags else: return schema_block_device_mapping.server_create nova-17.0.1/nova/api/openstack/compute/config_drive.py0000666000175000017500000000450613250073126022762 0ustar zuulzuul00000000000000# Copyright 2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""Config Drive extension.""" from nova.api.openstack.compute.schemas import config_drive as \ schema_config_drive from nova.api.openstack import wsgi from nova.policies import config_drive as cd_policies ATTRIBUTE_NAME = "config_drive" class ConfigDriveController(wsgi.Controller): def _add_config_drive(self, req, servers): for server in servers: db_server = req.get_db_instance(server['id']) # server['id'] is guaranteed to be in the cache due to # the core API adding it in its 'show'/'detail' methods. server[ATTRIBUTE_NAME] = db_server['config_drive'] def _show(self, req, resp_obj): if 'server' in resp_obj.obj: server = resp_obj.obj['server'] self._add_config_drive(req, [server]) @wsgi.extends def show(self, req, resp_obj, id): context = req.environ['nova.context'] if context.can(cd_policies.BASE_POLICY_NAME, fatal=False): self._show(req, resp_obj) @wsgi.extends def detail(self, req, resp_obj): context = req.environ['nova.context'] if 'servers' in resp_obj.obj and context.can( cd_policies.BASE_POLICY_NAME, fatal=False): servers = resp_obj.obj['servers'] self._add_config_drive(req, servers) # NOTE(gmann): This function is not supposed to use 'body_deprecated_param' # parameter as this is placed to handle scheduler_hint extension for V2.1. def server_create(server_dict, create_kwargs, body_deprecated_param): create_kwargs['config_drive'] = server_dict.get(ATTRIBUTE_NAME) def get_server_create_schema(version): return schema_config_drive.server_create nova-17.0.1/nova/api/openstack/compute/shelve.py0000666000175000017500000000761013250073126021611 0ustar zuulzuul00000000000000# Copyright 2013 Rackspace Hosting # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""The shelved mode extension.""" from webob import exc from nova.api.openstack import common from nova.api.openstack import wsgi from nova import compute from nova import exception from nova.policies import shelve as shelve_policies class ShelveController(wsgi.Controller): def __init__(self, *args, **kwargs): super(ShelveController, self).__init__(*args, **kwargs) self.compute_api = compute.API() @wsgi.response(202) @wsgi.expected_errors((404, 409)) @wsgi.action('shelve') def _shelve(self, req, id, body): """Move an instance into shelved mode.""" context = req.environ["nova.context"] instance = common.get_instance(self.compute_api, context, id) context.can(shelve_policies.POLICY_ROOT % 'shelve', target={'user_id': instance.user_id, 'project_id': instance.project_id}) try: self.compute_api.shelve(context, instance) except exception.InstanceUnknownCell as e: raise exc.HTTPNotFound(explanation=e.format_message()) except exception.InstanceIsLocked as e: raise exc.HTTPConflict(explanation=e.format_message()) except exception.InstanceInvalidState as state_error: common.raise_http_conflict_for_instance_invalid_state(state_error, 'shelve', id) @wsgi.response(202) @wsgi.expected_errors((404, 409)) @wsgi.action('shelveOffload') def _shelve_offload(self, req, id, body): """Force removal of a shelved instance from the compute node.""" context = req.environ["nova.context"] context.can(shelve_policies.POLICY_ROOT % 'shelve_offload') instance = common.get_instance(self.compute_api, context, id) try: self.compute_api.shelve_offload(context, instance) except exception.InstanceUnknownCell as e: raise exc.HTTPNotFound(explanation=e.format_message()) except exception.InstanceIsLocked as e: raise exc.HTTPConflict(explanation=e.format_message()) except exception.InstanceInvalidState as state_error: common.raise_http_conflict_for_instance_invalid_state(state_error, 'shelveOffload', id) @wsgi.response(202) @wsgi.expected_errors((404, 409)) @wsgi.action('unshelve') def _unshelve(self, req, id, body): """Restore an instance from shelved mode.""" context = req.environ["nova.context"] context.can(shelve_policies.POLICY_ROOT % 'unshelve') instance = common.get_instance(self.compute_api, context, id) try: self.compute_api.unshelve(context, instance) except exception.InstanceUnknownCell as e: raise exc.HTTPNotFound(explanation=e.format_message()) except exception.InstanceIsLocked as e: raise exc.HTTPConflict(explanation=e.format_message()) except exception.InstanceInvalidState as state_error: common.raise_http_conflict_for_instance_invalid_state(state_error, 'unshelve', id) nova-17.0.1/nova/api/openstack/compute/floating_ips_bulk.py0000666000175000017500000001352313250073126024016 0ustar zuulzuul00000000000000# Copyright 2012 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import netaddr import six import webob.exc from nova.api.openstack.api_version_request \ import MAX_PROXY_API_SUPPORT_VERSION from nova.api.openstack.compute.schemas import floating_ips_bulk from nova.api.openstack import wsgi from nova.api import validation import nova.conf from nova import exception from nova.i18n import _ from nova import objects from nova.policies import floating_ips_bulk as fib_policies CONF = nova.conf.CONF class FloatingIPBulkController(wsgi.Controller): @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.expected_errors(404) def index(self, req): """Return a list of all floating IPs.""" context = req.environ['nova.context'] context.can(fib_policies.BASE_POLICY_NAME) return self._get_floating_ip_info(context) @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.expected_errors(404) def show(self, req, id): """Return a list of all floating IPs for a given host.""" context = req.environ['nova.context'] context.can(fib_policies.BASE_POLICY_NAME) return self._get_floating_ip_info(context, id) def _get_floating_ip_info(self, context, host=None): floating_ip_info = {"floating_ip_info": []} if host is None: try: floating_ips = objects.FloatingIPList.get_all(context) except exception.NoFloatingIpsDefined: return floating_ip_info else: try: floating_ips = objects.FloatingIPList.get_by_host(context, host) except exception.FloatingIpNotFoundForHost as e: raise webob.exc.HTTPNotFound(explanation=e.format_message()) for floating_ip in floating_ips: instance_uuid = None fixed_ip = None if floating_ip.fixed_ip: instance_uuid = floating_ip.fixed_ip.instance_uuid fixed_ip = str(floating_ip.fixed_ip.address) result = {'address': str(floating_ip.address), 'pool': floating_ip.pool, 'interface': floating_ip.interface, 'project_id': floating_ip.project_id, 'instance_uuid': instance_uuid, 'fixed_ip': fixed_ip} floating_ip_info['floating_ip_info'].append(result) return floating_ip_info @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.expected_errors((400, 409)) @validation.schema(floating_ips_bulk.create) def create(self, req, body): """Bulk create floating IPs.""" context = req.environ['nova.context'] context.can(fib_policies.BASE_POLICY_NAME) params = body['floating_ips_bulk_create'] ip_range = params['ip_range'] pool = params.get('pool', CONF.default_floating_pool) interface = params.get('interface', CONF.public_interface) try: ips = [objects.FloatingIPList.make_ip_info(addr, pool, interface) for addr in self._address_to_hosts(ip_range)] except exception.InvalidInput as exc: raise webob.exc.HTTPBadRequest(explanation=exc.format_message()) try: objects.FloatingIPList.create(context, ips) except exception.FloatingIpExists as exc: raise webob.exc.HTTPConflict(explanation=exc.format_message()) return {"floating_ips_bulk_create": {"ip_range": ip_range, "pool": pool, "interface": interface}} @wsgi.Controller.api_version("2.1", MAX_PROXY_API_SUPPORT_VERSION) @wsgi.expected_errors((400, 404)) @validation.schema(floating_ips_bulk.delete) def update(self, req, id, body): """Bulk delete floating IPs.""" context = req.environ['nova.context'] context.can(fib_policies.BASE_POLICY_NAME) if id != "delete": msg = _("Unknown action") raise webob.exc.HTTPNotFound(explanation=msg) ip_range = body['ip_range'] try: ips = (objects.FloatingIPList.make_ip_info(address, None, None) for address in self._address_to_hosts(ip_range)) except exception.InvalidInput as exc: raise webob.exc.HTTPBadRequest(explanation=exc.format_message()) 
objects.FloatingIPList.destroy(context, ips) return {"floating_ips_bulk_delete": ip_range} def _address_to_hosts(self, addresses): """Iterate over hosts within an address range. If an explicit range specifier is missing, the parameter is interpreted as a specific individual address. """ try: return [netaddr.IPAddress(addresses)] except ValueError: net = netaddr.IPNetwork(addresses) if net.size < 4: reason = _("/%s should be specified as single address(es) " "not in cidr format") % net.prefixlen raise exception.InvalidInput(reason=reason) else: return net.iter_hosts() except netaddr.AddrFormatError as exc: raise exception.InvalidInput(reason=six.text_type(exc)) nova-17.0.1/nova/api/openstack/compute/server_external_events.py0000666000175000017500000001430113250073136025113 0ustar zuulzuul00000000000000# Copyright 2014 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log as logging import webob from nova.api.openstack.compute.schemas import server_external_events from nova.api.openstack import wsgi from nova.api import validation from nova import compute from nova import context as nova_context from nova.i18n import _ from nova import objects from nova.policies import server_external_events as see_policies LOG = logging.getLogger(__name__) class ServerExternalEventsController(wsgi.Controller): def __init__(self): self.compute_api = compute.API() super(ServerExternalEventsController, self).__init__() @staticmethod def _is_event_tag_present_when_required(event): if event.name == 'volume-extended' and event.tag is None: return False return True def _get_instances_all_cells(self, context, instance_uuids, instance_mappings): cells = {} instance_uuids_by_cell = {} for im in instance_mappings: if im.cell_mapping.uuid not in cells: cells[im.cell_mapping.uuid] = im.cell_mapping instance_uuids_by_cell.setdefault(im.cell_mapping.uuid, list()) instance_uuids_by_cell[im.cell_mapping.uuid].append( im.instance_uuid) instances = {} for cell_uuid, cell in cells.items(): with nova_context.target_cell(context, cell) as cctxt: instances.update( {inst.uuid: inst for inst in objects.InstanceList.get_by_filters( cctxt, {'uuid': instance_uuids_by_cell[cell_uuid]}, expected_attrs=['migration_context', 'info_cache'])}) return instances @wsgi.expected_errors((403, 404)) @wsgi.response(200) @validation.schema(server_external_events.create, '2.1', '2.50') @validation.schema(server_external_events.create_v251, '2.51') def create(self, req, body): """Creates a new instance event.""" context = req.environ['nova.context'] context.can(see_policies.POLICY_ROOT % 'create') response_events = [] accepted_events = [] accepted_instances = set() result = 200 body_events = body['events'] # Fetch instance objects for all relevant instances instance_uuids = set([event['server_uuid'] for event in body_events]) instance_mappings = objects.InstanceMappingList.get_by_instance_uuids( context, list(instance_uuids)) instances = self._get_instances_all_cells(context, instance_uuids, instance_mappings) for _event in 
body_events: client_event = dict(_event) event = objects.InstanceExternalEvent(context) event.instance_uuid = client_event.pop('server_uuid') event.name = client_event.pop('name') event.status = client_event.pop('status', 'completed') event.tag = client_event.pop('tag', None) response_events.append(_event) instance = instances.get(event.instance_uuid) if not instance: LOG.debug('Dropping event %(name)s:%(tag)s for unknown ' 'instance %(instance_uuid)s', {'name': event.name, 'tag': event.tag, 'instance_uuid': event.instance_uuid}) _event['status'] = 'failed' _event['code'] = 404 result = 207 continue # NOTE: before accepting the event, make sure the instance # for which the event is sent is assigned to a host; otherwise # it will not be possible to dispatch the event if not self._is_event_tag_present_when_required(event): LOG.debug("Event tag is missing for instance " "%(instance)s. Dropping event %(event)s", {'instance': event.instance_uuid, 'event': event.name}) _event['status'] = 'failed' _event['code'] = 400 result = 207 elif instance.host: accepted_events.append(event) accepted_instances.add(instance) LOG.info('Creating event %(name)s:%(tag)s for ' 'instance %(instance_uuid)s on %(host)s', {'name': event.name, 'tag': event.tag, 'instance_uuid': event.instance_uuid, 'host': instance.host}) # NOTE: as the event is processed asynchronously verify # whether 202 is a more suitable response code than 200 _event['status'] = 'completed' _event['code'] = 200 else: LOG.debug("Unable to find a host for instance " "%(instance)s. Dropping event %(event)s", {'instance': event.instance_uuid, 'event': event.name}) _event['status'] = 'failed' _event['code'] = 422 result = 207 if accepted_events: self.compute_api.external_instance_event( context, accepted_instances, accepted_events) else: msg = _('No instances found for any event') raise webob.exc.HTTPNotFound(explanation=msg) # FIXME(cyeoh): This needs some infrastructure support so that # we have a general way to do this robj = wsgi.ResponseObject({'events': response_events}) robj._code = result return robj nova-17.0.1/nova/api/openstack/wsgi_app.py0000666000175000017500000000626513250073126020465 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""WSGI application initialization for Nova APIs.""" import os from oslo_config import cfg from oslo_log import log as logging from oslo_service import _options as service_opts from paste import deploy from nova import config from nova import context from nova import exception from nova import objects from nova import service from nova import utils CONF = cfg.CONF CONFIG_FILES = ['api-paste.ini', 'nova.conf'] utils.monkey_patch() objects.register_all() def _get_config_files(env=None): if env is None: env = os.environ dirname = env.get('OS_NOVA_CONFIG_DIR', '/etc/nova').strip() return [os.path.join(dirname, config_file) for config_file in CONFIG_FILES] def _setup_service(host, name): binary = name if name.startswith('nova-') else "nova-%s" % name ctxt = context.get_admin_context() service_ref = objects.Service.get_by_host_and_binary( ctxt, host, binary) if service_ref: service._update_service_ref(service_ref) else: try: service_obj = objects.Service(ctxt) service_obj.host = host service_obj.binary = binary service_obj.topic = None service_obj.report_count = 0 service_obj.create() except (exception.ServiceTopicExists, exception.ServiceBinaryExists): # If we race to create a record with a sibling, don't # fail here. pass def error_application(exc, name): # TODO(cdent): make this something other than a stub def application(environ, start_response): start_response('500 Internal Server Error', [ ('Content-Type', 'text/plain; charset=UTF-8')]) return ['Out of date %s service %s\n' % (name, exc)] return application def init_application(name): conf_files = _get_config_files() config.parse_args([], default_config_files=conf_files) logging.setup(CONF, "nova") try: _setup_service(CONF.host, name) except exception.ServiceTooOld as exc: return error_application(exc, name) service.setup_profiler(name, CONF.host) # dump conf at debug (log_options option comes from oslo.service) # FIXME(mriedem): This is gross but we don't have a public hook into # oslo.service to register these options, so we are doing it manually for # now; remove this when we have a hook method into oslo.service. CONF.register_opts(service_opts.service_opts) if CONF.log_options: CONF.log_opt_values( logging.getLogger(__name__), logging.DEBUG) conf = conf_files[0] return deploy.loadapp('config:%s' % conf, name=name) nova-17.0.1/nova/api/openstack/auth.py0000666000175000017500000000617013250073126017610 0ustar zuulzuul00000000000000# Copyright 2013 IBM Corp. # Copyright 2010 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import webob.dec import webob.exc from nova.api.openstack import wsgi import nova.conf from nova import context from nova import wsgi as base_wsgi CONF = nova.conf.CONF class NoAuthMiddlewareBase(base_wsgi.Middleware): """Return a fake token if one isn't specified.""" def base_call(self, req, project_id_in_path, always_admin=True): if 'X-Auth-Token' not in req.headers: user_id = req.headers.get('X-Auth-User', 'admin') project_id = req.headers.get('X-Auth-Project-Id', 'admin') if project_id_in_path: os_url = '/'.join([req.url.rstrip('/'), project_id]) else: os_url = req.url.rstrip('/') res = webob.Response() # NOTE(vish): This is expecting and returning Auth(1.1), whereas # keystone uses 2.0 auth. We should probably allow # 2.0 auth here as well. res.headers['X-Auth-Token'] = '%s:%s' % (user_id, project_id) res.headers['X-Server-Management-Url'] = os_url res.content_type = 'text/plain' res.status = '204' return res token = req.headers['X-Auth-Token'] user_id, _sep, project_id = token.partition(':') project_id = project_id or user_id remote_address = getattr(req, 'remote_address', '127.0.0.1') if CONF.api.use_forwarded_for: remote_address = req.headers.get('X-Forwarded-For', remote_address) is_admin = always_admin or (user_id == 'admin') ctx = context.RequestContext(user_id, project_id, is_admin=is_admin, remote_address=remote_address) req.environ['nova.context'] = ctx return self.application class NoAuthMiddleware(NoAuthMiddlewareBase): """Return a fake token if one isn't specified. noauth2 provides admin privs if 'admin' is provided as the user id. """ @webob.dec.wsgify(RequestClass=wsgi.Request) def __call__(self, req): return self.base_call(req, True, always_admin=False) class NoAuthMiddlewareV2_18(NoAuthMiddlewareBase): """Return a fake token if one isn't specified. This provides a version of the middleware which does not add project_id into server management urls. """ @webob.dec.wsgify(RequestClass=wsgi.Request) def __call__(self, req): return self.base_call(req, False, always_admin=False) nova-17.0.1/nova/api/openstack/urlmap.py0000666000175000017500000002434013250073126020146 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import re from oslo_log import log as logging import paste.urlmap import six if six.PY2: import urllib2 else: from urllib import request as urllib2 from nova.api.openstack import wsgi LOG = logging.getLogger(__name__) _quoted_string_re = r'"[^"\\]*(?:\\.[^"\\]*)*"' _option_header_piece_re = re.compile(r';\s*([^\s;=]+|%s)\s*' r'(?:=\s*([^;]+|%s))?\s*' % (_quoted_string_re, _quoted_string_re)) def unquote_header_value(value): """Unquotes a header value. This does not use the real unquoting but what browsers are actually using for quoting. :param value: the header value to unquote. """ if value and value[0] == value[-1] == '"': # this is not the real unquoting, but fixing this so that the # RFC is met will result in bugs with internet explorer and # probably some other browsers as well. 
IE for example is # uploading files with "C:\foo\bar.txt" as filename value = value[1:-1] return value def parse_list_header(value): """Parse lists as described by RFC 2068 Section 2. In particular, parse comma-separated lists where the elements of the list may include quoted-strings. A quoted-string could contain a comma. A non-quoted string could have quotes in the middle. Quotes are removed automatically after parsing. The return value is a standard :class:`list`: >>> parse_list_header('token, "quoted value"') ['token', 'quoted value'] :param value: a string with a list header. :return: :class:`list` """ result = [] for item in urllib2.parse_http_list(value): if item[:1] == item[-1:] == '"': item = unquote_header_value(item[1:-1]) result.append(item) return result def parse_options_header(value): """Parse a ``Content-Type`` like header into a tuple with the content type and the options: >>> parse_options_header('Content-Type: text/html; mimetype=text/html') ('Content-Type:', {'mimetype': 'text/html'}) :param value: the header to parse. :return: (str, options) """ def _tokenize(string): for match in _option_header_piece_re.finditer(string): key, value = match.groups() key = unquote_header_value(key) if value is not None: value = unquote_header_value(value) yield key, value if not value: return '', {} parts = _tokenize(';' + value) name = next(parts)[0] extra = dict(parts) return name, extra class Accept(object): def __init__(self, value): self._content_types = [parse_options_header(v) for v in parse_list_header(value)] def best_match(self, supported_content_types): # FIXME: Should we have a more sophisticated matching algorithm that # takes into account the version as well? best_quality = -1 best_content_type = None best_params = {} best_match = '*/*' for content_type in supported_content_types: for content_mask, params in self._content_types: try: quality = float(params.get('q', 1)) except ValueError: continue if quality < best_quality: continue elif best_quality == quality: if best_match.count('*') <= content_mask.count('*'): continue if self._match_mask(content_mask, content_type): best_quality = quality best_content_type = content_type best_params = params best_match = content_mask return best_content_type, best_params def _match_mask(self, mask, content_type): if '*' not in mask: return content_type == mask if mask == '*/*': return True mask_major = mask[:-2] content_type_major = content_type.split('/', 1)[0] return content_type_major == mask_major def urlmap_factory(loader, global_conf, **local_conf): if 'not_found_app' in local_conf: not_found_app = local_conf.pop('not_found_app') else: not_found_app = global_conf.get('not_found_app') if not_found_app: not_found_app = loader.get_app(not_found_app, global_conf=global_conf) urlmap = URLMap(not_found_app=not_found_app) for path, app_name in local_conf.items(): path = paste.urlmap.parse_path_expression(path) app = loader.get_app(app_name, global_conf=global_conf) urlmap[path] = app return urlmap class URLMap(paste.urlmap.URLMap): def _match(self, host, port, path_info): """Find longest match for a given URL path.""" for (domain, app_url), app in self.applications: if domain and domain != host and domain != host + ':' + port: continue if (path_info == app_url or path_info.startswith(app_url + '/')): return app, app_url return None, None def _set_script_name(self, app, app_url): def wrap(environ, start_response): environ['SCRIPT_NAME'] += app_url return app(environ, start_response) return wrap def _munge_path(self, app, path_info, 
app_url): def wrap(environ, start_response): environ['SCRIPT_NAME'] += app_url environ['PATH_INFO'] = path_info[len(app_url):] return app(environ, start_response) return wrap def _path_strategy(self, host, port, path_info): """Check path suffix for MIME type and path prefix for API version.""" mime_type = app = app_url = None parts = path_info.rsplit('.', 1) if len(parts) > 1: possible_type = 'application/' + parts[1] if possible_type in wsgi.get_supported_content_types(): mime_type = possible_type parts = path_info.split('/') if len(parts) > 1: possible_app, possible_app_url = self._match(host, port, path_info) # Don't use prefix if it ends up matching default if possible_app and possible_app_url: app_url = possible_app_url app = self._munge_path(possible_app, path_info, app_url) return mime_type, app, app_url def _content_type_strategy(self, host, port, environ): """Check Content-Type header for API version.""" app = None params = parse_options_header(environ.get('CONTENT_TYPE', ''))[1] if 'version' in params: app, app_url = self._match(host, port, '/v' + params['version']) if app: app = self._set_script_name(app, app_url) return app def _accept_strategy(self, host, port, environ, supported_content_types): """Check Accept header for best matching MIME type and API version.""" accept = Accept(environ.get('HTTP_ACCEPT', '')) app = None # Find the best match in the Accept header mime_type, params = accept.best_match(supported_content_types) if 'version' in params: app, app_url = self._match(host, port, '/v' + params['version']) if app: app = self._set_script_name(app, app_url) return mime_type, app def __call__(self, environ, start_response): host = environ.get('HTTP_HOST', environ.get('SERVER_NAME')).lower() if ':' in host: host, port = host.split(':', 1) else: if environ['wsgi.url_scheme'] == 'http': port = '80' else: port = '443' path_info = environ['PATH_INFO'] path_info = self.normalize_url(path_info, False)[1] # The MIME type for the response is determined in one of two ways: # 1) URL path suffix (eg /servers/detail.json) # 2) Accept header (eg application/json;q=0.8, application/xml;q=0.2) # The API version is determined in one of three ways: # 1) URL path prefix (eg /v1.1/tenant/servers/detail) # 2) Content-Type header (eg application/json;version=1.1) # 3) Accept header (eg application/json;q=0.8;version=1.1) supported_content_types = list(wsgi.get_supported_content_types()) mime_type, app, app_url = self._path_strategy(host, port, path_info) # Accept application/atom+xml for the index query of each API # version mount point as well as the root index if (app_url and app_url + '/' == path_info) or path_info == '/': supported_content_types.append('application/atom+xml') if not app: app = self._content_type_strategy(host, port, environ) if not mime_type or not app: possible_mime_type, possible_app = self._accept_strategy( host, port, environ, supported_content_types) if possible_mime_type and not mime_type: mime_type = possible_mime_type if possible_app and not app: app = possible_app if not mime_type: mime_type = 'application/json' if not app: # Didn't match a particular version, probably matches default app, app_url = self._match(host, port, path_info) if app: app = self._munge_path(app, path_info, app_url) if app: environ['nova.best_content_type'] = mime_type return app(environ, start_response) LOG.debug('Could not find application for %s', environ['PATH_INFO']) environ['paste.urlmap_object'] = self return self.not_found_application(environ, start_response) 
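
# A hedged sketch of how the content-negotiation helpers in urlmap.py above
# behave (all values illustrative):
from nova.api.openstack.urlmap import Accept, parse_options_header

# Option headers split into a name plus a dict of parameters:
name, params = parse_options_header('application/json;version=1.1;q=0.8')
assert name == 'application/json'
assert params == {'version': '1.1', 'q': '0.8'}

# Accept.best_match() weighs q-values against the supported types:
accept = Accept('application/json;q=0.8, application/xml;q=0.2')
content_type, params = accept.best_match(['application/json',
                                          'application/xml'])
assert content_type == 'application/json'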
nova-17.0.1/nova/api/openstack/wsgi.py0000666000175000017500000011437513250073126017627 0ustar zuulzuul00000000000000# Copyright 2013 IBM Corp. # Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import functools import microversion_parse from oslo_log import log as logging from oslo_serialization import jsonutils from oslo_utils import encodeutils from oslo_utils import strutils import six import webob from nova.api.openstack import api_version_request as api_version from nova.api.openstack import versioned_method from nova import exception from nova import i18n from nova.i18n import _ from nova import utils from nova import wsgi LOG = logging.getLogger(__name__) _SUPPORTED_CONTENT_TYPES = ( 'application/json', 'application/vnd.openstack.compute+json', ) # These are typically automatically created by routes as either defaults # collection or member methods. _ROUTES_METHODS = [ 'create', 'delete', 'show', 'update', ] _METHODS_WITH_BODY = [ 'POST', 'PUT', ] # The default api version request if none is requested in the headers # Note(cyeoh): This only applies for the v2.1 API once microversions # support is fully merged. It does not affect the V2 API. DEFAULT_API_VERSION = "2.1" # name of attribute to keep version method information VER_METHOD_ATTR = 'versioned_methods' # Names of headers used by clients to request a specific version # of the REST API API_VERSION_REQUEST_HEADER = 'OpenStack-API-Version' LEGACY_API_VERSION_REQUEST_HEADER = 'X-OpenStack-Nova-API-Version' ENV_LEGACY_V2 = 'openstack.legacy_v2' def get_supported_content_types(): return _SUPPORTED_CONTENT_TYPES # NOTE(rlrossit): This function allows a get on both a dict-like and an # object-like object. cache_db_items() is used on both versioned objects and # dicts, so the function can't be totally changed over to [] syntax, nor # can it be changed over to use getattr(). def item_get(item, item_key): if hasattr(item, '__getitem__'): return item[item_key] else: return getattr(item, item_key) class Request(wsgi.Request): """Add some OpenStack API-specific logic to the base webob.Request.""" def __init__(self, *args, **kwargs): super(Request, self).__init__(*args, **kwargs) self._extension_data = {'db_items': {}} if not hasattr(self, 'api_version_request'): self.api_version_request = api_version.APIVersionRequest() def cache_db_items(self, key, items, item_key='id'): """Allow API methods to store objects from a DB query to be used by API extensions within the same API request. An instance of this class only lives for the lifetime of a single API request, so there's no need to implement full cache management. """ db_items = self._extension_data['db_items'].setdefault(key, {}) for item in items: db_items[item_get(item, item_key)] = item def get_db_items(self, key): """Allow an API extension to get previously stored objects within the same API request. Note that the object data will be slightly stale. 
""" return self._extension_data['db_items'][key] def get_db_item(self, key, item_key): """Allow an API extension to get a previously stored object within the same API request. Note that the object data will be slightly stale. """ return self.get_db_items(key).get(item_key) def cache_db_instances(self, instances): self.cache_db_items('instances', instances, 'uuid') def cache_db_instance(self, instance): self.cache_db_items('instances', [instance], 'uuid') def get_db_instances(self): return self.get_db_items('instances') def get_db_instance(self, instance_uuid): return self.get_db_item('instances', instance_uuid) def cache_db_flavors(self, flavors): self.cache_db_items('flavors', flavors, 'flavorid') def cache_db_flavor(self, flavor): self.cache_db_items('flavors', [flavor], 'flavorid') def get_db_flavors(self): return self.get_db_items('flavors') def get_db_flavor(self, flavorid): return self.get_db_item('flavors', flavorid) def best_match_content_type(self): """Determine the requested response content-type.""" if 'nova.best_content_type' not in self.environ: # Calculate the best MIME type content_type = None # Check URL path suffix parts = self.path.rsplit('.', 1) if len(parts) > 1: possible_type = 'application/' + parts[1] if possible_type in get_supported_content_types(): content_type = possible_type if not content_type: content_type = self.accept.best_match( get_supported_content_types()) self.environ['nova.best_content_type'] = (content_type or 'application/json') return self.environ['nova.best_content_type'] def get_content_type(self): """Determine content type of the request body. Does not do any body introspection, only checks header """ if "Content-Type" not in self.headers: return None content_type = self.content_type # NOTE(markmc): text/plain is the default for eventlet and # other webservers which use mimetools.Message.gettype() # whereas twisted defaults to ''. if not content_type or content_type == 'text/plain': return None if content_type not in get_supported_content_types(): raise exception.InvalidContentType(content_type=content_type) return content_type def best_match_language(self): """Determine the best available language for the request. :returns: the best language match or None if the 'Accept-Language' header was not available in the request. 
""" if not self.accept_language: return None return self.accept_language.best_match( i18n.get_available_languages()) def set_api_version_request(self): """Set API version request based on the request header information.""" hdr_string = microversion_parse.get_version( self.headers, service_type='compute', legacy_headers=[LEGACY_API_VERSION_REQUEST_HEADER]) if hdr_string is None: self.api_version_request = api_version.APIVersionRequest( api_version.DEFAULT_API_VERSION) elif hdr_string == 'latest': # 'latest' is a special keyword which is equivalent to # requesting the maximum version of the API supported self.api_version_request = api_version.max_api_version() else: self.api_version_request = api_version.APIVersionRequest( hdr_string) # Check that the version requested is within the global # minimum/maximum of supported API versions if not self.api_version_request.matches( api_version.min_api_version(), api_version.max_api_version()): raise exception.InvalidGlobalAPIVersion( req_ver=self.api_version_request.get_string(), min_ver=api_version.min_api_version().get_string(), max_ver=api_version.max_api_version().get_string()) def set_legacy_v2(self): self.environ[ENV_LEGACY_V2] = True def is_legacy_v2(self): return self.environ.get(ENV_LEGACY_V2, False) class ActionDispatcher(object): """Maps method name to local methods through action name.""" def dispatch(self, *args, **kwargs): """Find and call local method.""" action = kwargs.pop('action', 'default') action_method = getattr(self, str(action), self.default) return action_method(*args, **kwargs) def default(self, data): raise NotImplementedError() class JSONDeserializer(ActionDispatcher): def _from_json(self, datastring): try: return jsonutils.loads(datastring) except ValueError: msg = _("cannot understand JSON") raise exception.MalformedRequestBody(reason=msg) def deserialize(self, datastring, action='default'): return self.dispatch(datastring, action=action) def default(self, datastring): return {'body': self._from_json(datastring)} class JSONDictSerializer(ActionDispatcher): """Default JSON request body serialization.""" def serialize(self, data, action='default'): return self.dispatch(data, action=action) def default(self, data): return six.text_type(jsonutils.dumps(data)) def response(code): """Attaches response code to a method. This decorator associates a response code with a method. Note that the function attributes are directly manipulated; the method is not wrapped. """ def decorator(func): func.wsgi_code = code return func return decorator class ResponseObject(object): """Bundles a response object Object that app methods may return in order to allow its response to be modified by extensions in the code. Its use is optional (and should only be used if you really know what you are doing). """ def __init__(self, obj, code=None, headers=None): """Builds a response object.""" self.obj = obj self._default_code = 200 self._code = code self._headers = headers or {} self.serializer = JSONDictSerializer() def __getitem__(self, key): """Retrieves a header with the given name.""" return self._headers[key.lower()] def __setitem__(self, key, value): """Sets a header with the given name to the given value.""" self._headers[key.lower()] = value def __delitem__(self, key): """Deletes the header with the given name.""" del self._headers[key.lower()] def serialize(self, request, content_type): """Serializes the wrapped object. Utility method for serializing the wrapped object. Returns a webob.Response object. 
""" serializer = self.serializer body = None if self.obj is not None: body = serializer.serialize(self.obj) response = webob.Response(body=body) if response.headers.get('Content-Length'): # NOTE(andreykurilin): we need to encode 'Content-Length' header, # since webob.Response auto sets it if "body" attr is presented. # https://github.com/Pylons/webob/blob/1.5.0b0/webob/response.py#L147 response.headers['Content-Length'] = utils.utf8( response.headers['Content-Length']) response.status_int = self.code for hdr, value in self._headers.items(): response.headers[hdr] = utils.utf8(value) response.headers['Content-Type'] = utils.utf8(content_type) return response @property def code(self): """Retrieve the response status.""" return self._code or self._default_code @property def headers(self): """Retrieve the headers.""" return self._headers.copy() def action_peek(body): """Determine action to invoke. This looks inside the json body and fetches out the action method name. """ try: decoded = jsonutils.loads(body) except ValueError: msg = _("cannot understand JSON") raise exception.MalformedRequestBody(reason=msg) # Make sure there's exactly one key... if len(decoded) != 1: msg = _("too many body keys") raise exception.MalformedRequestBody(reason=msg) # Return the action name return list(decoded.keys())[0] class ResourceExceptionHandler(object): """Context manager to handle Resource exceptions. Used when processing exceptions generated by API implementation methods (or their extensions). Converts most exceptions to Fault exceptions, with the appropriate logging. """ def __enter__(self): return None def __exit__(self, ex_type, ex_value, ex_traceback): if not ex_value: return True if isinstance(ex_value, exception.Forbidden): raise Fault(webob.exc.HTTPForbidden( explanation=ex_value.format_message())) elif isinstance(ex_value, exception.VersionNotFoundForAPIMethod): raise elif isinstance(ex_value, exception.Invalid): raise Fault(exception.ConvertedException( code=ex_value.code, explanation=ex_value.format_message())) elif isinstance(ex_value, TypeError): exc_info = (ex_type, ex_value, ex_traceback) LOG.error('Exception handling resource: %s', ex_value, exc_info=exc_info) raise Fault(webob.exc.HTTPBadRequest()) elif isinstance(ex_value, Fault): LOG.info("Fault thrown: %s", ex_value) raise ex_value elif isinstance(ex_value, webob.exc.HTTPException): LOG.info("HTTP exception thrown: %s", ex_value) raise Fault(ex_value) # We didn't handle the exception return False class Resource(wsgi.Application): """WSGI app that handles (de)serialization and controller dispatch. WSGI app that reads routing information supplied by RoutesMiddleware and calls the requested action method upon its controller. All controller action methods must accept a 'req' argument, which is the incoming wsgi.Request. If the operation is a PUT or POST, the controller method must also accept a 'body' argument (the deserialized request body). They may raise a webob.exc exception or return a dict, which will be serialized by requested content type. Exceptions derived from webob.exc.HTTPException will be automatically wrapped in Fault() to provide API friendly error responses. 
""" support_api_request_version = True def __init__(self, controller): """:param controller: object that implement methods created by routes lib """ self.controller = controller self.default_serializers = dict(json=JSONDictSerializer) # Copy over the actions dictionary self.wsgi_actions = {} if controller: self.register_actions(controller) # Save a mapping of extensions self.wsgi_extensions = {} self.wsgi_action_extensions = {} def register_actions(self, controller): """Registers controller actions with this resource.""" actions = getattr(controller, 'wsgi_actions', {}) for key, method_name in actions.items(): self.wsgi_actions[key] = getattr(controller, method_name) def register_extensions(self, controller): """Registers controller extensions with this resource.""" extensions = getattr(controller, 'wsgi_extensions', []) for method_name, action_name in extensions: # Look up the extending method extension = getattr(controller, method_name) if action_name: # Extending an action... if action_name not in self.wsgi_action_extensions: self.wsgi_action_extensions[action_name] = [] self.wsgi_action_extensions[action_name].append(extension) else: # Extending a regular method if method_name not in self.wsgi_extensions: self.wsgi_extensions[method_name] = [] self.wsgi_extensions[method_name].append(extension) def get_action_args(self, request_environment): """Parse dictionary created by routes library.""" # NOTE(Vek): Check for get_action_args() override in the # controller if hasattr(self.controller, 'get_action_args'): return self.controller.get_action_args(request_environment) try: args = request_environment['wsgiorg.routing_args'][1].copy() except (KeyError, IndexError, AttributeError): return {} try: del args['controller'] except KeyError: pass try: del args['format'] except KeyError: pass return args def get_body(self, request): content_type = request.get_content_type() return content_type, request.body def deserialize(self, body): return JSONDeserializer().deserialize(body) def process_extensions(self, extensions, resp_obj, request, action_args): for ext in extensions: response = None # Regular functions get post-processing... try: with ResourceExceptionHandler(): response = ext(req=request, resp_obj=resp_obj, **action_args) except exception.VersionNotFoundForAPIMethod: # If an attached extension (@wsgi.extends) for the # method has no version match its not an error. We # just don't run the extends code continue except Fault as ex: response = ex # We had a response return it, to exit early. This is # actually a failure mode. None is success. if response: return response return None def _should_have_body(self, request): return request.method in _METHODS_WITH_BODY @webob.dec.wsgify(RequestClass=Request) def __call__(self, request): """WSGI method that controls (de)serialization and method dispatch.""" if self.support_api_request_version: # Set the version of the API requested based on the header try: request.set_api_version_request() except exception.InvalidAPIVersionString as e: return Fault(webob.exc.HTTPBadRequest( explanation=e.format_message())) except exception.InvalidGlobalAPIVersion as e: return Fault(webob.exc.HTTPNotAcceptable( explanation=e.format_message())) # Identify the action, its arguments, and the requested # content type action_args = self.get_action_args(request.environ) action = action_args.pop('action', None) # NOTE(sdague): we filter out InvalidContentTypes early so we # know everything is good from here on out. 
try: content_type, body = self.get_body(request) accept = request.best_match_content_type() except exception.InvalidContentType: msg = _("Unsupported Content-Type") return Fault(webob.exc.HTTPUnsupportedMediaType(explanation=msg)) # NOTE(Vek): Splitting the function up this way allows for # auditing by external tools that wrap the existing # function. If we try to audit __call__(), we can # run into troubles due to the @webob.dec.wsgify() # decorator. return self._process_stack(request, action, action_args, content_type, body, accept) def _process_stack(self, request, action, action_args, content_type, body, accept): """Implement the processing stack.""" # Get the implementing method try: meth, extensions = self.get_method(request, action, content_type, body) except (AttributeError, TypeError): return Fault(webob.exc.HTTPNotFound()) except KeyError as ex: msg = _("There is no such action: %s") % ex.args[0] return Fault(webob.exc.HTTPBadRequest(explanation=msg)) except exception.MalformedRequestBody: msg = _("Malformed request body") return Fault(webob.exc.HTTPBadRequest(explanation=msg)) if body: msg = _("Action: '%(action)s', calling method: %(meth)s, body: " "%(body)s") % {'action': action, 'body': six.text_type(body, 'utf-8'), 'meth': str(meth)} LOG.debug(strutils.mask_password(msg)) else: LOG.debug("Calling method '%(meth)s'", {'meth': str(meth)}) # Now, deserialize the request body... try: contents = self._get_request_content(body, request) except exception.MalformedRequestBody: msg = _("Malformed request body") return Fault(webob.exc.HTTPBadRequest(explanation=msg)) # Update the action args action_args.update(contents) project_id = action_args.pop("project_id", None) context = request.environ.get('nova.context') if (context and project_id and (project_id != context.project_id)): msg = _("Malformed request URL: URL's project_id '%(project_id)s'" " doesn't match Context's project_id" " '%(context_project_id)s'") % \ {'project_id': project_id, 'context_project_id': context.project_id} return Fault(webob.exc.HTTPBadRequest(explanation=msg)) response = None try: with ResourceExceptionHandler(): action_result = self.dispatch(meth, request, action_args) except Fault as ex: response = ex if not response: # No exceptions; convert action_result into a # ResponseObject resp_obj = None if type(action_result) is dict or action_result is None: resp_obj = ResponseObject(action_result) elif isinstance(action_result, ResponseObject): resp_obj = action_result else: response = action_result # Run post-processing extensions if resp_obj: # Do a preserialize to set up the response object if hasattr(meth, 'wsgi_code'): resp_obj._default_code = meth.wsgi_code # Process extensions response = self.process_extensions(extensions, resp_obj, request, action_args) if resp_obj and not response: response = resp_obj.serialize(request, accept) if hasattr(response, 'headers'): for hdr, val in list(response.headers.items()): if six.PY2: # In Py2.X Headers must be byte strings response.headers[hdr] = utils.utf8(val) else: # In Py3.X Headers must be utf-8 strings response.headers[hdr] = encodeutils.safe_decode( utils.utf8(val)) if not request.api_version_request.is_null(): response.headers[API_VERSION_REQUEST_HEADER] = \ 'compute ' + request.api_version_request.get_string() response.headers[LEGACY_API_VERSION_REQUEST_HEADER] = \ request.api_version_request.get_string() response.headers.add('Vary', API_VERSION_REQUEST_HEADER) response.headers.add('Vary', LEGACY_API_VERSION_REQUEST_HEADER) return response def 
_get_request_content(self, body, request): contents = {} if self._should_have_body(request): # allow empty body with PUT and POST if request.content_length == 0 or request.content_length is None: contents = {'body': None} else: contents = self.deserialize(body) return contents def get_method(self, request, action, content_type, body): meth, extensions = self._get_method(request, action, content_type, body) return meth, extensions def _get_method(self, request, action, content_type, body): """Look up the action-specific method and its extensions.""" # Look up the method try: if not self.controller: meth = getattr(self, action) else: meth = getattr(self.controller, action) except AttributeError: if (not self.wsgi_actions or action not in _ROUTES_METHODS + ['action']): # Propagate the error raise else: return meth, self.wsgi_extensions.get(action, []) if action == 'action': action_name = action_peek(body) else: action_name = action # Look up the action method return (self.wsgi_actions[action_name], self.wsgi_action_extensions.get(action_name, [])) def dispatch(self, method, request, action_args): """Dispatch a call to the action-specific method.""" try: return method(req=request, **action_args) except exception.VersionNotFoundForAPIMethod: # We deliberately don't return any message information # about the exception to the user so it looks as if # the method is simply not implemented. return Fault(webob.exc.HTTPNotFound()) def action(name): """Mark a function as an action. The given name will be taken as the action key in the body. This is also overloaded to allow extensions to provide non-extending definitions of create and delete operations. """ def decorator(func): func.wsgi_action = name return func return decorator def extends(*args, **kwargs): """Indicate a function extends an operation. Can be used as either:: @extends def index(...): pass or as:: @extends(action='resize') def _action_resize(...): pass """ def decorator(func): # Store enough information to find what we're extending func.wsgi_extends = (func.__name__, kwargs.get('action')) return func # If we have positional arguments, call the decorator if args: return decorator(*args) # OK, return the decorator instead return decorator def expected_errors(errors): """Decorator for v2.1 API methods which specifies expected exceptions. Specify which exceptions may occur when an API method is called. If an unexpected exception occurs then return a 500 instead and ask the user of the API to file a bug report. """ def decorator(f): @functools.wraps(f) def wrapped(*args, **kwargs): try: return f(*args, **kwargs) except Exception as exc: if isinstance(exc, webob.exc.WSGIHTTPException): if isinstance(errors, int): t_errors = (errors,) else: t_errors = errors if exc.code in t_errors: raise elif isinstance(exc, exception.Forbidden): # Note(cyeoh): Special case to handle # Forbidden exceptions so every # extension method does not need to wrap authorize # calls. ResourceExceptionHandler silently # converts NotAuthorized to HTTPForbidden raise elif isinstance(exc, exception.ValidationError): # Note(oomichi): Handle a validation error, which # happens due to invalid API parameters, as an # expected error. raise elif isinstance(exc, exception.Unauthorized): # Handle an authorized exception, will be # automatically converted to a HTTP 401, clients # like python-novaclient handle this error to # generate new token and do another attempt. raise LOG.exception("Unexpected exception in API method") msg = _('Unexpected API Error. 
Please report this at ' 'http://bugs.launchpad.net/nova/ and attach the Nova ' 'API log if possible.\n%s') % type(exc) raise webob.exc.HTTPInternalServerError(explanation=msg) return wrapped return decorator class ControllerMetaclass(type): """Controller metaclass. This metaclass automates the task of assembling a dictionary mapping action keys to method names. """ def __new__(mcs, name, bases, cls_dict): """Adds the wsgi_actions dictionary to the class.""" # Find all actions actions = {} extensions = [] versioned_methods = None # start with wsgi actions from base classes for base in bases: actions.update(getattr(base, 'wsgi_actions', {})) if base.__name__ == "Controller": # NOTE(cyeoh): This resets the VER_METHOD_ATTR attribute # between API controller class creations. This allows us # to use a class decorator on the API methods that doesn't # require naming explicitly what method is being versioned as # it can be implicit based on the method decorated. It is a bit # ugly. if VER_METHOD_ATTR in base.__dict__: versioned_methods = getattr(base, VER_METHOD_ATTR) delattr(base, VER_METHOD_ATTR) for key, value in cls_dict.items(): if not callable(value): continue if getattr(value, 'wsgi_action', None): actions[value.wsgi_action] = key elif getattr(value, 'wsgi_extends', None): extensions.append(value.wsgi_extends) # Add the actions and extensions to the class dict cls_dict['wsgi_actions'] = actions cls_dict['wsgi_extensions'] = extensions if versioned_methods: cls_dict[VER_METHOD_ATTR] = versioned_methods return super(ControllerMetaclass, mcs).__new__(mcs, name, bases, cls_dict) @six.add_metaclass(ControllerMetaclass) class Controller(object): """Default controller.""" _view_builder_class = None def __init__(self, view_builder=None): """Initialize controller with a view builder instance.""" if view_builder: self._view_builder = view_builder elif self._view_builder_class: self._view_builder = self._view_builder_class() else: self._view_builder = None def __getattribute__(self, key): def version_select(*args, **kwargs): """Look for the method which matches the name supplied and version constraints and calls it with the supplied arguments. @return: Returns the result of the method called @raises: VersionNotFoundForAPIMethod if there is no method which matches the name and version constraints """ # The first arg to all versioned methods is always the request # object. The version for the request is attached to the # request object if len(args) == 0: ver = kwargs['req'].api_version_request else: ver = args[0].api_version_request func_list = self.versioned_methods[key] for func in func_list: if ver.matches(func.start_version, func.end_version): # Update the version_select wrapper function so # other decorator attributes like wsgi.response # are still respected. functools.update_wrapper(version_select, func.func) return func.func(self, *args, **kwargs) # No version match raise exception.VersionNotFoundForAPIMethod(version=ver) try: version_meth_dict = object.__getattribute__(self, VER_METHOD_ATTR) except AttributeError: # No versioning on this class return object.__getattribute__(self, key) if version_meth_dict and \ key in object.__getattribute__(self, VER_METHOD_ATTR): return version_select return object.__getattribute__(self, key) # NOTE(cyeoh): This decorator MUST appear first (the outermost # decorator) on an API method for it to work correctly @classmethod def api_version(cls, min_ver, max_ver=None): """Decorator for versioning api methods. 
Add the decorator to any method which takes a request object as the first parameter and belongs to a class which inherits from wsgi.Controller. @min_ver: string representing minimum version @max_ver: optional string representing maximum version """ def decorator(f): obj_min_ver = api_version.APIVersionRequest(min_ver) if max_ver: obj_max_ver = api_version.APIVersionRequest(max_ver) else: obj_max_ver = api_version.APIVersionRequest() # Add to list of versioned methods registered func_name = f.__name__ new_func = versioned_method.VersionedMethod( func_name, obj_min_ver, obj_max_ver, f) func_dict = getattr(cls, VER_METHOD_ATTR, {}) if not func_dict: setattr(cls, VER_METHOD_ATTR, func_dict) func_list = func_dict.get(func_name, []) if not func_list: func_dict[func_name] = func_list func_list.append(new_func) # Ensure the list is sorted by minimum version (reversed) # so later when we work through the list in order we find # the method which has the latest version which supports # the version requested. is_intersect = Controller.check_for_versions_intersection( func_list) if is_intersect: raise exception.ApiVersionsIntersect( name=new_func.name, min_ver=new_func.start_version, max_ver=new_func.end_version, ) func_list.sort(key=lambda f: f.start_version, reverse=True) return f return decorator @staticmethod def is_valid_body(body, entity_name): if not (body and entity_name in body): return False def is_dict(d): try: d.get(None) return True except AttributeError: return False return is_dict(body[entity_name]) @staticmethod def check_for_versions_intersection(func_list): """Determines whether function list contains version intervals intersections or not. General algorithm: https://en.wikipedia.org/wiki/Intersection_algorithm :param func_list: list of VersionedMethod objects :return: boolean """ pairs = [] counter = 0 for f in func_list: pairs.append((f.start_version, 1, f)) pairs.append((f.end_version, -1, f)) def compare(x): return x[0] pairs.sort(key=compare) for p in pairs: counter += p[1] if counter > 1: return True return False class Fault(webob.exc.HTTPException): """Wrap webob.exc.HTTPException to provide API friendly response.""" _fault_names = { 400: "badRequest", 401: "unauthorized", 403: "forbidden", 404: "itemNotFound", 405: "badMethod", 409: "conflictingRequest", 413: "overLimit", 415: "badMediaType", 429: "overLimit", 501: "notImplemented", 503: "serviceUnavailable"} def __init__(self, exception): """Create a Fault for the given webob.exc.exception.""" self.wrapped_exc = exception for key, value in list(self.wrapped_exc.headers.items()): self.wrapped_exc.headers[key] = str(value) self.status_int = exception.status_int @webob.dec.wsgify(RequestClass=Request) def __call__(self, req): """Generate a WSGI response based on the exception passed to ctor.""" user_locale = req.best_match_language() # Replace the body with fault details. 
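# For example, a 404 is serialized below as a body of the form # {"itemNotFound": {"code": 404, "message": "..."}}, while any status # code missing from _fault_names falls back to the generic # "computeFault" key; 413/429 responses additionally carry a # 'retryAfter' value taken from the Retry-After header.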
code = self.wrapped_exc.status_int fault_name = self._fault_names.get(code, "computeFault") explanation = self.wrapped_exc.explanation LOG.debug("Returning %(code)s to user: %(explanation)s", {'code': code, 'explanation': explanation}) explanation = i18n.translate(explanation, user_locale) fault_data = { fault_name: { 'code': code, 'message': explanation}} if code == 413 or code == 429: retry = self.wrapped_exc.headers.get('Retry-After', None) if retry: fault_data[fault_name]['retryAfter'] = retry if not req.api_version_request.is_null(): self.wrapped_exc.headers[API_VERSION_REQUEST_HEADER] = \ 'compute ' + req.api_version_request.get_string() self.wrapped_exc.headers[LEGACY_API_VERSION_REQUEST_HEADER] = \ req.api_version_request.get_string() self.wrapped_exc.headers.add('Vary', API_VERSION_REQUEST_HEADER) self.wrapped_exc.headers.add('Vary', LEGACY_API_VERSION_REQUEST_HEADER) self.wrapped_exc.content_type = 'application/json' self.wrapped_exc.charset = 'UTF-8' self.wrapped_exc.text = JSONDictSerializer().serialize(fault_data) return self.wrapped_exc def __str__(self): return self.wrapped_exc.__str__() nova-17.0.1/nova/api/validation/0000775000175000017500000000000013250073471016435 5ustar zuulzuul00000000000000nova-17.0.1/nova/api/validation/parameter_types.py0000666000175000017500000003030213250073126022210 0ustar zuulzuul00000000000000# Copyright 2014 NEC Corporation. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Common parameter types for validating request Body. """ import copy import re import unicodedata import six from nova import db from nova.i18n import _ from nova.objects import tag def single_param(schema): """Macro function for use in JSONSchema to support query parameters that should have only one value. """ ret = multi_params(schema) ret['maxItems'] = 1 return ret def multi_params(schema): """Macro function for use in JSONSchema to support query parameters that may have multiple values. """ return {'type': 'array', 'items': schema} # NOTE: We don't check actual values of queries on params # which are defined as the following common_param. # Please note those are for backward compatible existing # query parameters because previously multiple parameters # might be input and accepted. common_query_param = multi_params({'type': 'string'}) common_query_regex_param = multi_params({'type': 'string', 'format': 'regex'}) class ValidationRegex(object): def __init__(self, regex, reason): self.regex = regex self.reason = reason def _is_printable(char): """determine if a unicode code point is printable. This checks if the character is either "other" (mostly control codes), or a non-horizontal space. All characters that don't match those criteria are considered printable; that is: letters; combining marks; numbers; punctuation; symbols; (horizontal) space separators. 
""" category = unicodedata.category(char) return (not category.startswith("C") and (not category.startswith("Z") or category == "Zs")) def _get_all_chars(): for i in range(0xFFFF): yield six.unichr(i) # build a regex that matches all printable characters. This allows # spaces in the middle of the name. Also note that the regexp below # deliberately allows the empty string. This is so only the constraint # which enforces a minimum length for the name is triggered when an # empty string is tested. Otherwise it is not deterministic which # constraint fails and this causes issues for some unittests when # PYTHONHASHSEED is set randomly. def _build_regex_range(ws=True, invert=False, exclude=None): """Build a range regex for a set of characters in utf8. This builds a valid range regex for characters in utf8 by iterating the entire space and building up a set of x-y ranges for all the characters we find which are valid. :param ws: should we include whitespace in this range. :param exclude: any characters we want to exclude :param invert: invert the logic The inversion is useful when we want to generate a set of ranges which is everything that's not a certain class. For instance, produce all all the non printable characters as a set of ranges. """ if exclude is None: exclude = [] regex = "" # are we currently in a range in_range = False # last character we found, for closing ranges last = None # last character we added to the regex, this lets us know that we # already have B in the range, which means we don't need to close # it out with B-B. While the later seems to work, it's kind of bad form. last_added = None def valid_char(char): if char in exclude: result = False elif ws: result = _is_printable(char) else: # Zs is the unicode class for space characters, of which # there are about 10 in this range. result = (_is_printable(char) and unicodedata.category(char) != "Zs") if invert is True: return not result return result # iterate through the entire character range. in_ for c in _get_all_chars(): if valid_char(c): if not in_range: regex += re.escape(c) last_added = c in_range = True else: if in_range and last != last_added: regex += "-" + re.escape(last) in_range = False last = c else: if in_range: regex += "-" + re.escape(c) return regex valid_name_regex_base = '^(?![%s])[%s]*(? 0: if self.is_body: # NOTE: For whole OpenStack message consistency, this error # message has been written as the similar format of # WSME. detail = _("Invalid input for field/attribute %(path)s. " "Value: %(value)s. %(message)s") % { 'path': ex.path.pop(), 'value': ex.instance, 'message': ex.message} else: # NOTE: Use 'ex.path.popleft()' instead of 'ex.path.pop()', # due to the structure of query parameters is a dict # with key as name and value is list. So the first # item in the 'ex.path' is the key, and second item # is the index of list in the value. We need the # key as the parameter name in the error message. # So pop the first value out of 'ex.path'. detail = _("Invalid input for query parameters %(path)s. " "Value: %(value)s. %(message)s") % { 'path': ex.path.popleft(), 'value': ex.instance, 'message': ex.message} else: detail = ex.message raise exception.ValidationError(detail=detail) except TypeError as ex: # NOTE: If passing non string value to patternProperties parameter, # TypeError happens. Here is for catching the TypeError. 
detail = six.text_type(ex) raise exception.ValidationError(detail=detail) def _number_from_str(self, instance): try: value = int(instance) except (ValueError, TypeError): try: value = float(instance) except (ValueError, TypeError): return None return value def _validate_minimum(self, validator, minimum, instance, schema): instance = self._number_from_str(instance) if instance is None: return return self.validator_org.VALIDATORS['minimum'](validator, minimum, instance, schema) def _validate_maximum(self, validator, maximum, instance, schema): instance = self._number_from_str(instance) if instance is None: return return self.validator_org.VALIDATORS['maximum'](validator, maximum, instance, schema) nova-17.0.1/nova/api/compute_req_id.py0000666000175000017500000000237613250073126017663 0ustar zuulzuul00000000000000# Copyright (c) 2014 IBM Corp. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Middleware that ensures x-compute-request-id Nova's notion of request-id tracking predates any common idea, so the original version of this header in OpenStack was x-compute-request-id. Eventually we got oslo, and all other projects implemented this with x-openstack-request-id. However, x-compute-request-id was always part of our contract. The following migrates us to use x-openstack-request-id as well, by using the common middleware. """ from oslo_middleware import request_id HTTP_RESP_HEADER_REQUEST_ID = 'x-compute-request-id' class ComputeReqIdMiddleware(request_id.RequestId): compat_headers = [HTTP_RESP_HEADER_REQUEST_ID] nova-17.0.1/nova/api/manager.py0000666000175000017500000000270513250073126016272 0ustar zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova import manager from nova.network import driver from nova import utils class MetadataManager(manager.Manager): """Metadata Manager. This class manages the Metadata API service initialization. Currently, it just adds an iptables filter rule for the metadata service. """ def __init__(self, *args, **kwargs): super(MetadataManager, self).__init__(*args, **kwargs) if not utils.is_neutron(): # NOTE(mikal): we only add iptables rules if we're running # under nova-network. This code should go away when the # deprecation of nova-network is complete. 
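# (load_network_driver() returns the configured linux-net driver; # metadata_accept() then installs the iptables rule allowing traffic # to the metadata service, per the class docstring above.)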
self.network_driver = driver.load_network_driver() self.network_driver.metadata_accept() nova-17.0.1/nova/api/__init__.py0000666000175000017500000000000013250073126016401 0ustar zuulzuul00000000000000nova-17.0.1/nova/api/auth.py0000666000175000017500000000675413250073126015631 0ustar zuulzuul00000000000000# Copyright (c) 2011 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Common Auth Middleware. """ from oslo_log import log as logging from oslo_log import versionutils from oslo_serialization import jsonutils import webob.dec import webob.exc import nova.conf from nova import context from nova.i18n import _ from nova import wsgi CONF = nova.conf.CONF LOG = logging.getLogger(__name__) def _load_pipeline(loader, pipeline): filters = [loader.get_filter(n) for n in pipeline[:-1]] app = loader.get_app(pipeline[-1]) filters.reverse() for filter in filters: app = filter(app) return app def pipeline_factory(loader, global_conf, **local_conf): """A paste pipeline replica that keys off of auth_strategy.""" versionutils.report_deprecated_feature( LOG, "The legacy V2 API code tree has been removed in Newton. " "Please remove legacy v2 API entry from api-paste.ini, and use " "V2.1 API or V2.1 API compat mode instead" ) def pipeline_factory_v21(loader, global_conf, **local_conf): """A paste pipeline replica that keys off of auth_strategy.""" return _load_pipeline(loader, local_conf[CONF.api.auth_strategy].split()) class InjectContext(wsgi.Middleware): """Add a 'nova.context' to WSGI environ.""" def __init__(self, context, *args, **kwargs): self.context = context super(InjectContext, self).__init__(*args, **kwargs) @webob.dec.wsgify(RequestClass=wsgi.Request) def __call__(self, req): req.environ['nova.context'] = self.context return self.application class NovaKeystoneContext(wsgi.Middleware): """Make a request context from keystone headers.""" @webob.dec.wsgify(RequestClass=wsgi.Request) def __call__(self, req): # Build a context, including the auth_token... remote_address = req.remote_addr if CONF.api.use_forwarded_for: remote_address = req.headers.get('X-Forwarded-For', remote_address) service_catalog = None if req.headers.get('X_SERVICE_CATALOG') is not None: try: catalog_header = req.headers.get('X_SERVICE_CATALOG') service_catalog = jsonutils.loads(catalog_header) except ValueError: raise webob.exc.HTTPInternalServerError( _('Invalid service catalog json.')) # NOTE(jamielennox): This is a full auth plugin set by auth_token # middleware in newer versions. 
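# (When present, this plugin lets later service calls be # authenticated with the caller's own token rather than a new one.)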
user_auth_plugin = req.environ.get('keystone.token_auth') ctx = context.RequestContext.from_environ( req.environ, user_auth_plugin=user_auth_plugin, remote_address=remote_address, service_catalog=service_catalog) if ctx.user_id is None: LOG.debug("Neither X_USER_ID nor X_USER found in request") return webob.exc.HTTPUnauthorized() req.environ['nova.context'] = ctx return self.application nova-17.0.1/nova/api/ec2/0000775000175000017500000000000013250073471014754 5ustar zuulzuul00000000000000nova-17.0.1/nova/api/ec2/ec2utils.py0000666000175000017500000003377313250073126017074 0ustar zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import functools import re from oslo_log import log as logging from oslo_utils import timeutils from oslo_utils import uuidutils import six from nova import cache_utils from nova import context from nova import exception from nova.i18n import _ from nova.network import model as network_model from nova import objects from nova.objects import base as obj_base LOG = logging.getLogger(__name__) # NOTE(vish): cache mapping for one week _CACHE_TIME = 7 * 24 * 60 * 60 _CACHE = None def memoize(func): @functools.wraps(func) def memoizer(context, reqid): global _CACHE if not _CACHE: _CACHE = cache_utils.get_client(expiration_time=_CACHE_TIME) key = "%s:%s" % (func.__name__, reqid) key = str(key) value = _CACHE.get(key) if value is None: value = func(context, reqid) _CACHE.set(key, value) return value return memoizer def reset_cache(): global _CACHE _CACHE = None def image_type(image_type): """Converts to a three letter image type. aki, kernel => aki ari, ramdisk => ari anything else => ami """ if image_type == 'kernel': return 'aki' if image_type == 'ramdisk': return 'ari' if image_type not in ['aki', 'ari']: return 'ami' return image_type def resource_type_from_id(context, resource_id): """Get resource type by ID Returns a string representation of the Amazon resource type, if known. Returns None on failure. 
:param context: context under which the method is called :param resource_id: resource_id to evaluate """ known_types = { 'i': 'instance', 'r': 'reservation', 'vol': 'volume', 'snap': 'snapshot', 'ami': 'image', 'aki': 'image', 'ari': 'image' } type_marker = resource_id.split('-')[0] return known_types.get(type_marker) @memoize def id_to_glance_id(context, image_id): """Convert an internal (db) id to a glance id.""" return objects.S3ImageMapping.get_by_id(context, image_id).uuid @memoize def glance_id_to_id(context, glance_id): """Convert a glance id to an internal (db) id.""" if not glance_id: return try: return objects.S3ImageMapping.get_by_uuid(context, glance_id).id except exception.NotFound: s3imap = objects.S3ImageMapping(context, uuid=glance_id) s3imap.create() return s3imap.id def ec2_id_to_glance_id(context, ec2_id): image_id = ec2_id_to_id(ec2_id) return id_to_glance_id(context, image_id) def glance_id_to_ec2_id(context, glance_id, image_type='ami'): image_id = glance_id_to_id(context, glance_id) if image_id is None: return return image_ec2_id(image_id, image_type=image_type) def ec2_id_to_id(ec2_id): """Convert an ec2 ID (i-[base 16 number]) to an instance id (int).""" try: return int(ec2_id.split('-')[-1], 16) except ValueError: raise exception.InvalidEc2Id(ec2_id=ec2_id) def image_ec2_id(image_id, image_type='ami'): """Returns image ec2_id using id and three letter type.""" template = image_type + '-%08x' return id_to_ec2_id(image_id, template=template) def get_ip_info_for_instance_from_nw_info(nw_info): if not isinstance(nw_info, network_model.NetworkInfo): nw_info = network_model.NetworkInfo.hydrate(nw_info) ip_info = {} fixed_ips = nw_info.fixed_ips() ip_info['fixed_ips'] = [ip['address'] for ip in fixed_ips if ip['version'] == 4] ip_info['fixed_ip6s'] = [ip['address'] for ip in fixed_ips if ip['version'] == 6] ip_info['floating_ips'] = [ip['address'] for ip in nw_info.floating_ips()] return ip_info def get_ip_info_for_instance(context, instance): """Return a dictionary of IP information for an instance.""" if isinstance(instance, obj_base.NovaObject): nw_info = instance.info_cache.network_info else: # FIXME(comstud): Temporary as we transition to objects. 
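# In the legacy-dict case the info_cache may itself be a plain dict # or None, hence the defensive lookups below.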
info_cache = instance.info_cache or {} nw_info = info_cache.get('network_info') # Make sure empty response is turned into the model if not nw_info: nw_info = [] return get_ip_info_for_instance_from_nw_info(nw_info) def id_to_ec2_id(instance_id, template='i-%08x'): """Convert an instance ID (int) to an ec2 ID (i-[base 16 number]).""" return template % int(instance_id) def id_to_ec2_inst_id(instance_id): """Get or create an ec2 instance ID (i-[base 16 number]) from uuid.""" if instance_id is None: return None elif uuidutils.is_uuid_like(instance_id): ctxt = context.get_admin_context() int_id = get_int_id_from_instance_uuid(ctxt, instance_id) return id_to_ec2_id(int_id) else: return id_to_ec2_id(instance_id) def ec2_inst_id_to_uuid(context, ec2_id): """"Convert an instance id to uuid.""" int_id = ec2_id_to_id(ec2_id) return get_instance_uuid_from_int_id(context, int_id) @memoize def get_instance_uuid_from_int_id(context, int_id): imap = objects.EC2InstanceMapping.get_by_id(context, int_id) return imap.uuid def id_to_ec2_snap_id(snapshot_id): """Get or create an ec2 volume ID (vol-[base 16 number]) from uuid.""" if uuidutils.is_uuid_like(snapshot_id): ctxt = context.get_admin_context() int_id = get_int_id_from_snapshot_uuid(ctxt, snapshot_id) return id_to_ec2_id(int_id, 'snap-%08x') else: return id_to_ec2_id(snapshot_id, 'snap-%08x') def id_to_ec2_vol_id(volume_id): """Get or create an ec2 volume ID (vol-[base 16 number]) from uuid.""" if uuidutils.is_uuid_like(volume_id): ctxt = context.get_admin_context() int_id = get_int_id_from_volume_uuid(ctxt, volume_id) return id_to_ec2_id(int_id, 'vol-%08x') else: return id_to_ec2_id(volume_id, 'vol-%08x') def ec2_vol_id_to_uuid(ec2_id): """Get the corresponding UUID for the given ec2-id.""" ctxt = context.get_admin_context() # NOTE(jgriffith) first strip prefix to get just the numeric int_id = ec2_id_to_id(ec2_id) return get_volume_uuid_from_int_id(ctxt, int_id) _ms_time_regex = re.compile('^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d{3,6}Z$') def status_to_ec2_attach_status(volume): """Get the corresponding EC2 attachment state. 
According to EC2 API, the valid attachment status in response is: attaching | attached | detaching | detached """ volume_status = volume.get('status') attach_status = volume.get('attach_status') if volume_status in ('attaching', 'detaching'): ec2_attach_status = volume_status elif attach_status in ('attached', 'detached'): ec2_attach_status = attach_status else: msg = _("Unacceptable attach status:%s for ec2 API.") % attach_status raise exception.Invalid(msg) return ec2_attach_status def is_ec2_timestamp_expired(request, expires=None): """Checks the timestamp or expiry time included in an EC2 request and returns true if the request is expired """ timestamp = request.get('Timestamp') expiry_time = request.get('Expires') def parse_strtime(strtime): if _ms_time_regex.match(strtime): # NOTE(MotoKen): time format for aws-sdk-java contains millisecond time_format = "%Y-%m-%dT%H:%M:%S.%fZ" else: time_format = "%Y-%m-%dT%H:%M:%SZ" return timeutils.parse_strtime(strtime, time_format) try: if timestamp and expiry_time: msg = _("Request must include either Timestamp or Expires," " but cannot contain both") LOG.error(msg) raise exception.InvalidRequest(msg) elif expiry_time: query_time = parse_strtime(expiry_time) return timeutils.is_older_than(query_time, -1) elif timestamp: query_time = parse_strtime(timestamp) # Check if the difference between the timestamp in the request # and the time on our servers is larger than 5 minutes, the # request is too old (or too new). if query_time and expires: return timeutils.is_older_than(query_time, expires) or \ timeutils.is_newer_than(query_time, expires) return False except ValueError: LOG.info("Timestamp is invalid.") return True @memoize def get_int_id_from_instance_uuid(context, instance_uuid): if instance_uuid is None: return try: imap = objects.EC2InstanceMapping.get_by_uuid(context, instance_uuid) return imap.id except exception.NotFound: imap = objects.EC2InstanceMapping(context) imap.uuid = instance_uuid imap.create() return imap.id @memoize def get_int_id_from_volume_uuid(context, volume_uuid): if volume_uuid is None: return try: vmap = objects.EC2VolumeMapping.get_by_uuid(context, volume_uuid) return vmap.id except exception.NotFound: vmap = objects.EC2VolumeMapping(context) vmap.uuid = volume_uuid vmap.create() return vmap.id @memoize def get_volume_uuid_from_int_id(context, int_id): vmap = objects.EC2VolumeMapping.get_by_id(context, int_id) return vmap.uuid def ec2_snap_id_to_uuid(ec2_id): """Get the corresponding UUID for the given ec2-id.""" ctxt = context.get_admin_context() # NOTE(jgriffith) first strip prefix to get just the numeric int_id = ec2_id_to_id(ec2_id) return get_snapshot_uuid_from_int_id(ctxt, int_id) @memoize def get_int_id_from_snapshot_uuid(context, snapshot_uuid): if snapshot_uuid is None: return try: smap = objects.EC2SnapshotMapping.get_by_uuid(context, snapshot_uuid) return smap.id except exception.NotFound: smap = objects.EC2SnapshotMapping(context, uuid=snapshot_uuid) smap.create() return smap.id @memoize def get_snapshot_uuid_from_int_id(context, int_id): smap = objects.EC2SnapshotMapping.get_by_id(context, int_id) return smap.uuid _c2u = re.compile('(((?<=[a-z])[A-Z])|([A-Z](?![A-Z]|$)))') def camelcase_to_underscore(str): return _c2u.sub(r'_\1', str).lower().strip('_') def _try_convert(value): """Return a non-string from a string or unicode, if possible. 
============= ===================================================== When value is returns ============= ===================================================== zero-length '' 'None' None 'True' True case insensitive 'False' False case insensitive '0', '-0' 0 0xN, -0xN int from hex (positive) (N is any number) 0bN, -0bN int from binary (positive) (N is any number) * try conversion to int, float, complex, fallback value """ def _negative_zero(value): epsilon = 1e-7 return 0 if abs(value) < epsilon else value if len(value) == 0: return '' if value == 'None': return None lowered_value = value.lower() if lowered_value == 'true': return True if lowered_value == 'false': return False for prefix, base in [('0x', 16), ('0b', 2), ('0', 8), ('', 10)]: try: if lowered_value.startswith((prefix, "-" + prefix)): return int(lowered_value, base) except ValueError: pass try: return _negative_zero(float(value)) except ValueError: return value def dict_from_dotted_str(items): """parse multi dot-separated argument into dict. EBS boot uses multi dot-separated arguments like BlockDeviceMapping.1.DeviceName=snap-id Convert the above into {'block_device_mapping': {'1': {'device_name': snap-id}}} """ args = {} for key, value in items: parts = key.split(".") key = str(camelcase_to_underscore(parts[0])) if isinstance(value, six.string_types): # NOTE(vish): Automatically convert strings back # into their respective values value = _try_convert(value) if len(parts) > 1: d = args.get(key, {}) args[key] = d for k in parts[1:-1]: k = camelcase_to_underscore(k) v = d.get(k, {}) d[k] = v d = v d[camelcase_to_underscore(parts[-1])] = value else: args[key] = value return args def search_opts_from_filters(filters): return {f['name'].replace('-', '_'): f['value']['1'] for f in filters if f['value']['1']} if filters else {} def regex_from_ec2_regex(ec2_re): """Converts an EC2-style regex to a python regex. Approach is based on python fnmatch. """ iter_ec2_re = iter(ec2_re) py_re = '' for char in iter_ec2_re: if char == '*': py_re += '.*' elif char == '?': py_re += '.' elif char == '\\': try: next_char = next(iter_ec2_re) except StopIteration: next_char = '' if next_char == '*' or next_char == '?': py_re += '[%s]' % next_char else: py_re += '\\\\' + next_char else: py_re += re.escape(char) return '\A%s\Z(?s)' % py_re nova-17.0.1/nova/api/ec2/cloud.py0000666000175000017500000000226413250073126016437 0ustar zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_log import log as logging from oslo_log import versionutils LOG = logging.getLogger(__name__) class CloudController(object): def __init__(self): versionutils.report_deprecated_feature( LOG, 'The in tree EC2 API has been removed in Mitaka. 
' 'Please remove entries from api-paste.ini and use ' 'the OpenStack ec2-api project ' 'http://git.openstack.org/cgit/openstack/ec2-api/' ) nova-17.0.1/nova/api/ec2/__init__.py0000666000175000017500000000000013250073126017052 0ustar zuulzuul00000000000000nova-17.0.1/nova/i18n.py0000666000175000017500000000250113250073126014660 0ustar zuulzuul00000000000000# Copyright 2014 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """oslo.i18n integration module. See https://docs.openstack.org/oslo.i18n/latest/user/index.html . """ import oslo_i18n DOMAIN = 'nova' _translators = oslo_i18n.TranslatorFactory(domain=DOMAIN) # The primary translation function using the well-known name "_" _ = _translators.primary # Translators for log levels. # # The abbreviated names are meant to reflect the usual use of a short # name like '_'. The "L" is for "log" and the other letter comes from # the level. _LI = _translators.log_info _LW = _translators.log_warning _LE = _translators.log_error _LC = _translators.log_critical def translate(value, user_locale): return oslo_i18n.translate(value, user_locale) def get_available_languages(): return oslo_i18n.get_available_languages(DOMAIN) nova-17.0.1/nova/baserpc.py0000666000175000017500000000450713250073126015530 0ustar zuulzuul00000000000000# # Copyright 2013 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # """ Base RPC client and server common to all services. """ import oslo_messaging as messaging from oslo_serialization import jsonutils import nova.conf from nova import rpc CONF = nova.conf.CONF _NAMESPACE = 'baseapi' class BaseAPI(object): """Client side of the base rpc API. API version history: 1.0 - Initial version. 
1.1 - Add get_backdoor_port """ VERSION_ALIASES = { # baseapi was added in havana } def __init__(self, topic): super(BaseAPI, self).__init__() target = messaging.Target(topic=topic, namespace=_NAMESPACE, version='1.0') version_cap = self.VERSION_ALIASES.get(CONF.upgrade_levels.baseapi, CONF.upgrade_levels.baseapi) self.client = rpc.get_client(target, version_cap=version_cap) def ping(self, context, arg, timeout=None): arg_p = jsonutils.to_primitive(arg) cctxt = self.client.prepare(timeout=timeout) return cctxt.call(context, 'ping', arg=arg_p) def get_backdoor_port(self, context, host): cctxt = self.client.prepare(server=host, version='1.1') return cctxt.call(context, 'get_backdoor_port') class BaseRPCAPI(object): """Server side of the base RPC API.""" target = messaging.Target(namespace=_NAMESPACE, version='1.1') def __init__(self, service_name, backdoor_port): self.service_name = service_name self.backdoor_port = backdoor_port def ping(self, context, arg): resp = {'service': self.service_name, 'arg': arg} return jsonutils.to_primitive(resp) def get_backdoor_port(self, context): return self.backdoor_port nova-17.0.1/nova/quota.py0000666000175000017500000017554313250073136015254 0ustar zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Quotas for resources per project.""" import copy from oslo_log import log as logging from oslo_utils import importutils import six import nova.conf from nova import context as nova_context from nova import db from nova import exception from nova import objects from nova import utils LOG = logging.getLogger(__name__) CONF = nova.conf.CONF class DbQuotaDriver(object): """Driver to perform necessary checks to enforce quotas and obtain quota information. The default driver utilizes the local database. """ UNLIMITED_VALUE = -1 def get_by_project_and_user(self, context, project_id, user_id, resource): """Get a specific quota by project and user.""" return objects.Quotas.get(context, project_id, resource, user_id=user_id) def get_by_project(self, context, project_id, resource): """Get a specific quota by project.""" return objects.Quotas.get(context, project_id, resource) def get_by_class(self, context, quota_class, resource): """Get a specific quota by quota class.""" return objects.Quotas.get_class(context, quota_class, resource) def get_defaults(self, context, resources): """Given a list of resources, retrieve the default quotas. Use the class quotas named `_DEFAULT_QUOTA_NAME` as default quotas, if it exists. :param context: The request context, for access checks. :param resources: A dictionary of the registered resources. """ quotas = {} default_quotas = objects.Quotas.get_default_class(context) for resource in resources.values(): # resource.default returns the config options. So if there's not # an entry for the resource in the default class, it uses the # config option. 
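# For example (hypothetical values): with a default quota class of # {'instances': 20} and a config-option default of 10 for 'cores', # the result would be {'instances': 20, 'cores': 10, ...}.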
quotas[resource.name] = default_quotas.get(resource.name, resource.default) return quotas def get_class_quotas(self, context, resources, quota_class, defaults=True): """Given a list of resources, retrieve the quotas for the given quota class. :param context: The request context, for access checks. :param resources: A dictionary of the registered resources. :param quota_class: The name of the quota class to return quotas for. :param defaults: If True, the default value will be reported if there is no specific value for the resource. """ quotas = {} class_quotas = objects.Quotas.get_all_class_by_name(context, quota_class) for resource in resources.values(): if defaults or resource.name in class_quotas: quotas[resource.name] = class_quotas.get(resource.name, resource.default) return quotas def _process_quotas(self, context, resources, project_id, quotas, quota_class=None, defaults=True, usages=None, remains=False): modified_quotas = {} # Get the quotas for the appropriate class. If the project ID # matches the one in the context, we use the quota_class from # the context, otherwise, we use the provided quota_class (if # any) if project_id == context.project_id: quota_class = context.quota_class if quota_class: class_quotas = objects.Quotas.get_all_class_by_name(context, quota_class) else: class_quotas = {} default_quotas = self.get_defaults(context, resources) for resource in resources.values(): # Omit default/quota class values if not defaults and resource.name not in quotas: continue limit = quotas.get(resource.name, class_quotas.get( resource.name, default_quotas[resource.name])) modified_quotas[resource.name] = dict(limit=limit) # Include usages if desired. This is optional because one # internal consumer of this interface wants to access the # usages directly from inside a transaction. if usages: usage = usages.get(resource.name, {}) modified_quotas[resource.name].update( in_use=usage.get('in_use', 0), ) # Initialize remains quotas with the default limits. if remains: modified_quotas[resource.name].update(remains=limit) if remains: # Get all user quotas for a project and subtract their limits # from the class limits to get the remains. For example, if the # class/default is 20 and there are two users each with quota of 5, # then there is quota of 10 left to give out. all_quotas = objects.Quotas.get_all(context, project_id) for quota in all_quotas: if quota.resource in modified_quotas: modified_quotas[quota.resource]['remains'] -= \ quota.hard_limit return modified_quotas def _get_usages(self, context, resources, project_id, user_id=None): """Get usages of specified resources. This function is called to get resource usages for validating quota limit creates or updates in the os-quota-sets API and for displaying resource usages in the os-used-limits API. This function is not used for checking resource usage against quota limits. :param context: The request context for access checks :param resources: The dict of Resources for which to get usages :param project_id: The project_id for scoping the usage count :param user_id: Optional user_id for scoping the usage count :returns: A dict containing resources and their usage information, for example: {'project_id': 'project-uuid', 'user_id': 'user-uuid', 'instances': {'in_use': 5}, 'fixed_ips': {'in_use': 5}} """ usages = {} for resource in resources.values(): # NOTE(melwitt): We should skip resources that are not countable, # such as AbsoluteResources. 
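# AbsoluteResources represent pure limits with nothing to count, # for example per-request caps such as 'metadata_items', so they are # skipped here rather than reported with a zero usage.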
if not isinstance(resource, CountableResource): continue if resource.name in usages: # This is needed because for any of the resources: # ('instances', 'cores', 'ram'), they are counted at the same # time for efficiency (query the instances table once instead # of multiple times). So, a count of any one of them contains # counts for the others and we can avoid re-counting things. continue if resource.name in ('key_pairs', 'server_group_members', 'security_group_rules'): # These per user resources are special cases whose usages # are not considered when validating limit create/update or # displaying used limits. They are always zero. usages[resource.name] = {'in_use': 0} else: if resource.name in db.quota_get_per_project_resources(): count = resource.count_as_dict(context, project_id) key = 'project' else: # NOTE(melwitt): This assumes a specific signature for # count_as_dict(). Usages used to be records in the # database but now we are counting resources. The # count_as_dict() function signature needs to match this # call, else it should get a conditional in this function. count = resource.count_as_dict(context, project_id, user_id=user_id) key = 'user' if user_id else 'project' # Example count_as_dict() return value: # {'project': {'instances': 5}, # 'user': {'instances': 2}} counted_resources = count[key].keys() for res in counted_resources: count_value = count[key][res] usages[res] = {'in_use': count_value} return usages def get_user_quotas(self, context, resources, project_id, user_id, quota_class=None, defaults=True, usages=True, project_quotas=None, user_quotas=None): """Given a list of resources, retrieve the quotas for the given user and project. :param context: The request context, for access checks. :param resources: A dictionary of the registered resources. :param project_id: The ID of the project to return quotas for. :param user_id: The ID of the user to return quotas for. :param quota_class: If project_id != context.project_id, the quota class cannot be determined. This parameter allows it to be specified. It will be ignored if project_id == context.project_id. :param defaults: If True, the quota class value (or the default value, if there is no value from the quota class) will be reported if there is no specific value for the resource. :param usages: If True, the current counts will also be returned. :param project_quotas: Quotas dictionary for the specified project. :param user_quotas: Quotas dictionary for the specified project and user. """ if user_quotas: user_quotas = user_quotas.copy() else: user_quotas = objects.Quotas.get_all_by_project_and_user( context, project_id, user_id) # Use the project quota for default user quota. proj_quotas = project_quotas or objects.Quotas.get_all_by_project( context, project_id) for key, value in proj_quotas.items(): if key not in user_quotas.keys(): user_quotas[key] = value user_usages = {} if usages: user_usages = self._get_usages(context, resources, project_id, user_id=user_id) return self._process_quotas(context, resources, project_id, user_quotas, quota_class, defaults=defaults, usages=user_usages) def get_project_quotas(self, context, resources, project_id, quota_class=None, defaults=True, usages=True, remains=False, project_quotas=None): """Given a list of resources, retrieve the quotas for the given project. :param context: The request context, for access checks. :param resources: A dictionary of the registered resources. :param project_id: The ID of the project to return quotas for. 
:param quota_class: If project_id != context.project_id, the quota class cannot be determined. This parameter allows it to be specified. It will be ignored if project_id == context.project_id. :param defaults: If True, the quota class value (or the default value, if there is no value from the quota class) will be reported if there is no specific value for the resource. :param usages: If True, the current counts will also be returned. :param remains: If True, the current remains of the project will be returned. :param project_quotas: Quotas dictionary for the specified project. """ project_quotas = project_quotas or objects.Quotas.get_all_by_project( context, project_id) project_usages = {} if usages: project_usages = self._get_usages(context, resources, project_id) return self._process_quotas(context, resources, project_id, project_quotas, quota_class, defaults=defaults, usages=project_usages, remains=remains) def _is_unlimited_value(self, v): """A helper method to check for unlimited value. """ return v <= self.UNLIMITED_VALUE def _sum_quota_values(self, v1, v2): """A helper method that handles unlimited values when performing sum operation. """ if self._is_unlimited_value(v1) or self._is_unlimited_value(v2): return self.UNLIMITED_VALUE return v1 + v2 def _sub_quota_values(self, v1, v2): """A helper method that handles unlimited values when performing subtraction operation. """ if self._is_unlimited_value(v1) or self._is_unlimited_value(v2): return self.UNLIMITED_VALUE return v1 - v2 def get_settable_quotas(self, context, resources, project_id, user_id=None): """Given a list of resources, retrieve the range of settable quotas for the given user or project. :param context: The request context, for access checks. :param resources: A dictionary of the registered resources. :param project_id: The ID of the project to return quotas for. :param user_id: The ID of the user to return quotas for. """ settable_quotas = {} db_proj_quotas = objects.Quotas.get_all_by_project(context, project_id) project_quotas = self.get_project_quotas(context, resources, project_id, remains=True, project_quotas=db_proj_quotas) if user_id: setted_quotas = objects.Quotas.get_all_by_project_and_user( context, project_id, user_id) user_quotas = self.get_user_quotas(context, resources, project_id, user_id, project_quotas=db_proj_quotas, user_quotas=setted_quotas) for key, value in user_quotas.items(): # Maximum is the remaining quota for a project (class/default # minus the sum of all user quotas in the project), plus the # given user's quota. So if the class/default is 20 and there # are two users each with quota of 5, then there is quota of # 10 remaining. The given user currently has quota of 5, so # the maximum you could update their quota to would be 15. # Class/default 20 - currently used in project 10 + current # user 5 = 15. maximum = \ self._sum_quota_values(project_quotas[key]['remains'], setted_quotas.get(key, 0)) # This function is called for the quota_sets api and the # corresponding nova-manage command. The idea is when someone # attempts to update a quota, the value chosen must be at least # as much as the current usage and less than or equal to the # project limit less the sum of existing per user limits.
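# Continuing the example above: if the given user also has 3 of the
# resource in use, the smallest limit they could be given is 3, so
# the settable range works out to minimum=3, maximum=15.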
minimum = value['in_use'] settable_quotas[key] = {'minimum': minimum, 'maximum': maximum} else: for key, value in project_quotas.items(): minimum = \ max(int(self._sub_quota_values(value['limit'], value['remains'])), int(value['in_use'])) settable_quotas[key] = {'minimum': minimum, 'maximum': -1} return settable_quotas def _get_quotas(self, context, resources, keys, project_id=None, user_id=None, project_quotas=None): """A helper method which retrieves the quotas for the specific resources identified by keys, and which apply to the current context. :param context: The request context, for access checks. :param resources: A dictionary of the registered resources. :param keys: A list of the desired quotas to retrieve. :param project_id: Specify the project_id if current context is admin and admin wants to impact on common user's tenant. :param user_id: Specify the user_id if current context is admin and admin wants to impact on common user. :param project_quotas: Quotas dictionary for the specified project. """ # Filter resources desired = set(keys) sub_resources = {k: v for k, v in resources.items() if k in desired} # Make sure we accounted for all of them... if len(keys) != len(sub_resources): unknown = desired - set(sub_resources.keys()) raise exception.QuotaResourceUnknown(unknown=sorted(unknown)) if user_id: LOG.debug('Getting quotas for user %(user_id)s and project ' '%(project_id)s. Resources: %(keys)s', {'user_id': user_id, 'project_id': project_id, 'keys': keys}) # Grab and return the quotas (without usages) quotas = self.get_user_quotas(context, sub_resources, project_id, user_id, context.quota_class, usages=False, project_quotas=project_quotas) else: LOG.debug('Getting quotas for project %(project_id)s. Resources: ' '%(keys)s', {'project_id': project_id, 'keys': keys}) # Grab and return the quotas (without usages) quotas = self.get_project_quotas(context, sub_resources, project_id, context.quota_class, usages=False, project_quotas=project_quotas) return {k: v['limit'] for k, v in quotas.items()} def limit_check(self, context, resources, values, project_id=None, user_id=None): """Check simple quota limits. For limits--those quotas for which there is no usage synchronization function--this method checks that a set of proposed values are permitted by the limit restriction. This method will raise a QuotaResourceUnknown exception if a given resource is unknown or if it is not a simple limit resource. If any of the proposed values is over the defined quota, an OverQuota exception will be raised with the sorted list of the resources which are too high. Otherwise, the method returns nothing. :param context: The request context, for access checks. :param resources: A dictionary of the registered resources. :param values: A dictionary of the values to check against the quota. :param project_id: Specify the project_id if current context is admin and admin wants to impact on common user's tenant. :param user_id: Specify the user_id if current context is admin and admin wants to impact on common user. 
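Example (a sketch; the resource names and values are illustrative
only)::

    driver.limit_check(context, resources,
                       {'metadata_items': 128, 'injected_files': 5})

This returns nothing if both proposed values fit within the
applicable project and user limits, and raises OverQuota otherwise.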
""" _valid_method_call_check_resources(values, 'check', resources) # Ensure no value is less than zero unders = [key for key, val in values.items() if val < 0] if unders: raise exception.InvalidQuotaValue(unders=sorted(unders)) # If project_id is None, then we use the project_id in context if project_id is None: project_id = context.project_id # If user id is None, then we use the user_id in context if user_id is None: user_id = context.user_id # Get the applicable quotas project_quotas = objects.Quotas.get_all_by_project(context, project_id) quotas = self._get_quotas(context, resources, values.keys(), project_id=project_id, project_quotas=project_quotas) user_quotas = self._get_quotas(context, resources, values.keys(), project_id=project_id, user_id=user_id, project_quotas=project_quotas) # Check the quotas and construct a list of the resources that # would be put over limit by the desired values overs = [key for key, val in values.items() if quotas[key] >= 0 and quotas[key] < val or (user_quotas[key] >= 0 and user_quotas[key] < val)] if overs: headroom = {} for key in overs: headroom[key] = min( val for val in (quotas.get(key), project_quotas.get(key)) if val is not None ) raise exception.OverQuota(overs=sorted(overs), quotas=quotas, usages={}, headroom=headroom) def limit_check_project_and_user(self, context, resources, project_values=None, user_values=None, project_id=None, user_id=None): """Check values (usage + desired delta) against quota limits. For limits--this method checks that a set of proposed values are permitted by the limit restriction. This method will raise a QuotaResourceUnknown exception if a given resource is unknown or if it is not a simple limit resource. If any of the proposed values is over the defined quota, an OverQuota exception will be raised with the sorted list of the resources which are too high. Otherwise, the method returns nothing. :param context: The request context, for access checks :param resources: A dictionary of the registered resources :param project_values: Optional dict containing the resource values to check against project quota, e.g. {'instances': 1, 'cores': 2, 'memory_mb': 512} :param user_values: Optional dict containing the resource values to check against user quota, e.g. {'instances': 1, 'cores': 2, 'memory_mb': 512} :param project_id: Optional project_id for scoping the limit check to a different project than in the context :param user_id: Optional user_id for scoping the limit check to a different user than in the context """ if project_values is None: project_values = {} if user_values is None: user_values = {} _valid_method_call_check_resources(project_values, 'check', resources) _valid_method_call_check_resources(user_values, 'check', resources) if not any([project_values, user_values]): raise exception.Invalid( 'Must specify at least one of project_values or user_values ' 'for the limit check.') # Ensure no value is less than zero for vals in (project_values, user_values): unders = [key for key, val in vals.items() if val < 0] if unders: raise exception.InvalidQuotaValue(unders=sorted(unders)) # Get a set of all keys for calling _get_quotas() so we get all of the # resource limits we need. all_keys = set(project_values).union(user_values) # Keys that are in both project_values and user_values need to be # checked against project quota and user quota, respectively. # Keys that are not in both only need to be checked against project # quota or user quota, if it is defined. 
Separate the keys that don't # need to be checked against both quotas, merge them into one dict, # and remove them from project_values and user_values. keys_to_merge = set(project_values).symmetric_difference(user_values) merged_values = {} for key in keys_to_merge: # The key will be either in project_values or user_values based on # the earlier symmetric_difference. Default to 0 in case the found # value is 0 and won't take precedence over a None default. merged_values[key] = (project_values.get(key, 0) or user_values.get(key, 0)) project_values.pop(key, None) user_values.pop(key, None) # If project_id is None, then we use the project_id in context if project_id is None: project_id = context.project_id # If user id is None, then we use the user_id in context if user_id is None: user_id = context.user_id # Get the applicable quotas. They will be merged together (taking the # min limit) if project_values and user_values were not specified # together. # per project quota limits (quotas that have no concept of # user-scoping: fixed_ips, networks, floating_ips) project_quotas = objects.Quotas.get_all_by_project(context, project_id) # per user quotas, project quota limits (for quotas that have # user-scoping, limits for the project) quotas = self._get_quotas(context, resources, all_keys, project_id=project_id, project_quotas=project_quotas) # per user quotas, user quota limits (for quotas that have # user-scoping, the limits for the user) user_quotas = self._get_quotas(context, resources, all_keys, project_id=project_id, user_id=user_id, project_quotas=project_quotas) if merged_values: # This is for resources that are not counted across a project and # must pass both the quota for the project and the quota for the # user. # Combine per user project quotas and user_quotas for use in the # checks, taking the minimum limit between the two. merged_quotas = copy.deepcopy(quotas) for k, v in user_quotas.items(): if k in merged_quotas: merged_quotas[k] = min(merged_quotas[k], v) else: merged_quotas[k] = v # Check the quotas and construct a list of the resources that # would be put over limit by the desired values overs = [key for key, val in merged_values.items() if merged_quotas[key] >= 0 and merged_quotas[key] < val] if overs: headroom = {} for key in overs: headroom[key] = merged_quotas[key] raise exception.OverQuota(overs=sorted(overs), quotas=merged_quotas, usages={}, headroom=headroom) # This is for resources that are counted across a project and # across a user (instances, cores, ram, security_groups, # server_groups). The project_values must pass the quota for the # project and the user_values must pass the quota for the user. over_user_quota = False overs = [] for key in user_values.keys(): # project_values and user_values should contain the same keys or # be empty after the keys in the symmetric_difference were removed # from both dicts. if quotas[key] >= 0 and quotas[key] < project_values[key]: overs.append(key) elif (user_quotas[key] >= 0 and user_quotas[key] < user_values[key]): overs.append(key) over_user_quota = True if overs: quotas_exceeded = user_quotas if over_user_quota else quotas headroom = {} for key in overs: headroom[key] = quotas_exceeded[key] raise exception.OverQuota(overs=sorted(overs), quotas=quotas_exceeded, usages={}, headroom=headroom) def destroy_all_by_project_and_user(self, context, project_id, user_id): """Destroy all quotas associated with a project and user. :param context: The request context, for access checks. 
:param project_id: The ID of the project being deleted. :param user_id: The ID of the user being deleted. """ objects.Quotas.destroy_all_by_project_and_user(context, project_id, user_id) def destroy_all_by_project(self, context, project_id): """Destroy all quotas associated with a project. :param context: The request context, for access checks. :param project_id: The ID of the project being deleted. """ objects.Quotas.destroy_all_by_project(context, project_id) class NoopQuotaDriver(object): """Driver that turns quota calls into no-ops and pretends that quotas for all resources are unlimited. This can be used if you do not wish to have any quota checking. For instance, with nova compute cells, the parent cell should do quota checking, but the child cell should not. """ def get_by_project_and_user(self, context, project_id, user_id, resource): """Get a specific quota by project and user.""" # Unlimited return -1 def get_by_project(self, context, project_id, resource): """Get a specific quota by project.""" # Unlimited return -1 def get_by_class(self, context, quota_class, resource): """Get a specific quota by quota class.""" # Unlimited return -1 def get_defaults(self, context, resources): """Given a list of resources, retrieve the default quotas. :param context: The request context, for access checks. :param resources: A dictionary of the registered resources. """ quotas = {} for resource in resources.values(): quotas[resource.name] = -1 return quotas def get_class_quotas(self, context, resources, quota_class, defaults=True): """Given a list of resources, retrieve the quotas for the given quota class. :param context: The request context, for access checks. :param resources: A dictionary of the registered resources. :param quota_class: The name of the quota class to return quotas for. :param defaults: If True, the default value will be reported if there is no specific value for the resource. """ quotas = {} for resource in resources.values(): quotas[resource.name] = -1 return quotas def _get_noop_quotas(self, resources, usages=None, remains=False): quotas = {} for resource in resources.values(): quotas[resource.name] = {} quotas[resource.name]['limit'] = -1 if usages: quotas[resource.name]['in_use'] = -1 if remains: quotas[resource.name]['remains'] = -1 return quotas def get_user_quotas(self, context, resources, project_id, user_id, quota_class=None, defaults=True, usages=True): """Given a list of resources, retrieve the quotas for the given user and project. :param context: The request context, for access checks. :param resources: A dictionary of the registered resources. :param project_id: The ID of the project to return quotas for. :param user_id: The ID of the user to return quotas for. :param quota_class: If project_id != context.project_id, the quota class cannot be determined. This parameter allows it to be specified. It will be ignored if project_id == context.project_id. :param defaults: If True, the quota class value (or the default value, if there is no value from the quota class) will be reported if there is no specific value for the resource. :param usages: If True, the current counts will also be returned. """ return self._get_noop_quotas(resources, usages=usages) def get_project_quotas(self, context, resources, project_id, quota_class=None, defaults=True, usages=True, remains=False): """Given a list of resources, retrieve the quotas for the given project. :param context: The request context, for access checks. :param resources: A dictionary of the registered resources.
:param project_id: The ID of the project to return quotas for. :param quota_class: If project_id != context.project_id, the quota class cannot be determined. This parameter allows it to be specified. It will be ignored if project_id == context.project_id. :param defaults: If True, the quota class value (or the default value, if there is no value from the quota class) will be reported if there is no specific value for the resource. :param usages: If True, the current counts will also be returned. :param remains: If True, the current remains of the project will be returned. """ return self._get_noop_quotas(resources, usages=usages, remains=remains) def get_settable_quotas(self, context, resources, project_id, user_id=None): """Given a list of resources, retrieve the range of settable quotas for the given user or project. :param context: The request context, for access checks. :param resources: A dictionary of the registered resources. :param project_id: The ID of the project to return quotas for. :param user_id: The ID of the user to return quotas for. """ quotas = {} for resource in resources.values(): quotas[resource.name] = {'minimum': 0, 'maximum': -1} return quotas def limit_check(self, context, resources, values, project_id=None, user_id=None): """Check simple quota limits. For limits--those quotas for which there is no usage synchronization function--this method checks that a set of proposed values are permitted by the limit restriction. This method will raise a QuotaResourceUnknown exception if a given resource is unknown or if it is not a simple limit resource. If any of the proposed values is over the defined quota, an OverQuota exception will be raised with the sorted list of the resources which are too high. Otherwise, the method returns nothing. :param context: The request context, for access checks. :param resources: A dictionary of the registered resources. :param values: A dictionary of the values to check against the quota. :param project_id: Specify the project_id if current context is admin and admin wants to impact on common user's tenant. :param user_id: Specify the user_id if current context is admin and admin wants to impact on common user. """ pass def limit_check_project_and_user(self, context, resources, project_values=None, user_values=None, project_id=None, user_id=None): """Check values against quota limits. For limits--this method checks that a set of proposed values are permitted by the limit restriction. This method will raise a QuotaResourceUnknown exception if a given resource is unknown or if it is not a simple limit resource. If any of the proposed values is over the defined quota, an OverQuota exception will be raised with the sorted list of the resources which are too high. Otherwise, the method returns nothing. :param context: The request context, for access checks :param resources: A dictionary of the registered resources :param project_values: Optional dict containing the resource values to check against project quota, e.g. {'instances': 1, 'cores': 2, 'memory_mb': 512} :param user_values: Optional dict containing the resource values to check against user quota, e.g.
{'instances': 1, 'cores': 2, 'memory_mb': 512} :param project_id: Optional project_id for scoping the limit check to a different project than in the context :param user_id: Optional user_id for scoping the limit check to a different user than in the context """ pass def destroy_all_by_project_and_user(self, context, project_id, user_id): """Destroy all quotas associated with a project and user. :param context: The request context, for access checks. :param project_id: The ID of the project being deleted. :param user_id: The ID of the user being deleted. """ pass def destroy_all_by_project(self, context, project_id): """Destroy all quotas associated with a project. :param context: The request context, for access checks. :param project_id: The ID of the project being deleted. """ pass class BaseResource(object): """Describe a single resource for quota checking.""" def __init__(self, name, flag=None): """Initializes a Resource. :param name: The name of the resource, i.e., "instances". :param flag: The name of the flag or configuration option which specifies the default value of the quota for this resource. """ self.name = name self.flag = flag def quota(self, driver, context, **kwargs): """Given a driver and context, obtain the quota for this resource. :param driver: A quota driver. :param context: The request context. :param project_id: The project to obtain the quota value for. If not provided, it is taken from the context. If it is given as None, no project-specific quota will be searched for. :param quota_class: The quota class corresponding to the project, or for which the quota is to be looked up. If not provided, it is taken from the context. If it is given as None, no quota class-specific quota will be searched for. Note that the quota class defaults to the value in the context, which may not correspond to the project if project_id is not the same as the one in the context. """ # Get the project ID project_id = kwargs.get('project_id', context.project_id) # Ditto for the quota class quota_class = kwargs.get('quota_class', context.quota_class) # Look up the quota for the project if project_id: try: return driver.get_by_project(context, project_id, self.name) except exception.ProjectQuotaNotFound: pass # Try for the quota class if quota_class: try: return driver.get_by_class(context, quota_class, self.name) except exception.QuotaClassNotFound: pass # OK, return the default return self.default @property def default(self): """Return the default value of the quota.""" # NOTE(mikal): special case for quota_networks, which is an API # flag and not a quota flag if self.flag == 'quota_networks': return CONF[self.flag] return CONF.quota[self.flag] if self.flag else -1 class AbsoluteResource(BaseResource): """Describe a resource that does not correspond to database objects.""" valid_method = 'check' class CountableResource(AbsoluteResource): """Describe a resource where the counts aren't based solely on the project ID. """ def __init__(self, name, count_as_dict, flag=None): """Initializes a CountableResource. Countable resources are those resources which directly correspond to objects in the database, but for which a count by project ID is inappropriate e.g. security_group_rules, keypairs, etc. A CountableResource must be constructed with a counting function, which will be called to determine the current counts of the resource. The counting function will be passed the context, along with the extra positional and keyword arguments that are passed to Quota.count_as_dict(). 
It should return a dict specifying the count scoped to a project and/or a user. Example count of instances, cores, or ram returned as a rollup of all the resources since we only want to query the instances table once, not multiple times, for each resource. Instances, cores, and ram are counted across a project and across a user: {'project': {'instances': 5, 'cores': 8, 'ram': 4096}, 'user': {'instances': 1, 'cores': 2, 'ram': 512}} Example count of server groups keeping a consistent format. Server groups are counted across a project and across a user: {'project': {'server_groups': 7}, 'user': {'server_groups': 2}} Example count of key pairs keeping a consistent format. Key pairs are counted across a user only: {'user': {'key_pairs': 5}} Note that this counting is not performed in a transaction-safe manner. This resource class is a temporary measure to provide required functionality, until a better approach to solving this problem can be evolved. :param name: The name of the resource, i.e., "instances". :param count_as_dict: A callable which returns the count of the resource as a dict. The arguments passed are as described above. :param flag: The name of the flag or configuration option which specifies the default value of the quota for this resource. """ super(CountableResource, self).__init__(name, flag=flag) self.count_as_dict = count_as_dict class QuotaEngine(object): """Represent the set of recognized quotas.""" def __init__(self, quota_driver_class=None): """Initialize a Quota object.""" self._resources = {} self._driver_cls = quota_driver_class self.__driver = None @property def _driver(self): if self.__driver: return self.__driver if not self._driver_cls: self._driver_cls = CONF.quota.driver if isinstance(self._driver_cls, six.string_types): self._driver_cls = importutils.import_object(self._driver_cls) self.__driver = self._driver_cls return self.__driver def register_resource(self, resource): """Register a resource.""" self._resources[resource.name] = resource def register_resources(self, resources): """Register a list of resources.""" for resource in resources: self.register_resource(resource) def get_by_project_and_user(self, context, project_id, user_id, resource): """Get a specific quota by project and user.""" return self._driver.get_by_project_and_user(context, project_id, user_id, resource) def get_by_project(self, context, project_id, resource): """Get a specific quota by project.""" return self._driver.get_by_project(context, project_id, resource) def get_by_class(self, context, quota_class, resource): """Get a specific quota by quota class.""" return self._driver.get_by_class(context, quota_class, resource) def get_defaults(self, context): """Retrieve the default quotas. :param context: The request context, for access checks. """ return self._driver.get_defaults(context, self._resources) def get_class_quotas(self, context, quota_class, defaults=True): """Retrieve the quotas for the given quota class. :param context: The request context, for access checks. :param quota_class: The name of the quota class to return quotas for. :param defaults: If True, the default value will be reported if there is no specific value for the resource. """ return self._driver.get_class_quotas(context, self._resources, quota_class, defaults=defaults) def get_user_quotas(self, context, project_id, user_id, quota_class=None, defaults=True, usages=True): """Retrieve the quotas for the given user and project. :param context: The request context, for access checks. 
:param project_id: The ID of the project to return quotas for. :param user_id: The ID of the user to return quotas for. :param quota_class: If project_id != context.project_id, the quota class cannot be determined. This parameter allows it to be specified. :param defaults: If True, the quota class value (or the default value, if there is no value from the quota class) will be reported if there is no specific value for the resource. :param usages: If True, the current counts will also be returned. """ return self._driver.get_user_quotas(context, self._resources, project_id, user_id, quota_class=quota_class, defaults=defaults, usages=usages) def get_project_quotas(self, context, project_id, quota_class=None, defaults=True, usages=True, remains=False): """Retrieve the quotas for the given project. :param context: The request context, for access checks. :param project_id: The ID of the project to return quotas for. :param quota_class: If project_id != context.project_id, the quota class cannot be determined. This parameter allows it to be specified. :param defaults: If True, the quota class value (or the default value, if there is no value from the quota class) will be reported if there is no specific value for the resource. :param usages: If True, the current counts will also be returned. :param remains: If True, the current remains of the project will be returned. """ return self._driver.get_project_quotas(context, self._resources, project_id, quota_class=quota_class, defaults=defaults, usages=usages, remains=remains) def get_settable_quotas(self, context, project_id, user_id=None): """Given a list of resources, retrieve the range of settable quotas for the given user or project. :param context: The request context, for access checks. :param project_id: The ID of the project to return quotas for. :param user_id: The ID of the user to return quotas for. """ return self._driver.get_settable_quotas(context, self._resources, project_id, user_id=user_id) def count_as_dict(self, context, resource, *args, **kwargs): """Count a resource and return a dict. For countable resources, invokes the count_as_dict() function and returns its result. Arguments following the context and resource are passed directly to the count function declared by the resource. :param context: The request context, for access checks. :param resource: The name of the resource, as a string. :returns: A dict containing the count(s) for the resource, for example: {'project': {'instances': 2, 'cores': 4, 'ram': 1024}, 'user': {'instances': 1, 'cores': 2, 'ram': 512}} another example: {'user': {'key_pairs': 5}} """ # Get the resource res = self._resources.get(resource) if not res or not hasattr(res, 'count_as_dict'): raise exception.QuotaResourceUnknown(unknown=[resource]) return res.count_as_dict(context, *args, **kwargs) # TODO(melwitt): This can be removed once no old code can call # limit_check(). It will be replaced with limit_check_project_and_user(). def limit_check(self, context, project_id=None, user_id=None, **values): """Check simple quota limits. For limits--those quotas for which there is no usage synchronization function--this method checks that a set of proposed values are permitted by the limit restriction. The values to check are given as keyword arguments, where the key identifies the specific quota limit to check, and the value is the proposed value. This method will raise a QuotaResourceUnknown exception if a given resource is unknown or if it is not a simple limit resource.
If any of the proposed values is over the defined quota, an OverQuota exception will be raised with the sorted list of the resources which are too high. Otherwise, the method returns nothing. :param context: The request context, for access checks. :param project_id: Specify the project_id if current context is admin and admin wants to impact on common user's tenant. :param user_id: Specify the user_id if current context is admin and admin wants to impact on common user. """ return self._driver.limit_check(context, self._resources, values, project_id=project_id, user_id=user_id) def limit_check_project_and_user(self, context, project_values=None, user_values=None, project_id=None, user_id=None): """Check values against quota limits. For limits--this method checks that a set of proposed values are permitted by the limit restriction. This method will raise a QuotaResourceUnknown exception if a given resource is unknown or if it is not a simple limit resource. If any of the proposed values is over the defined quota, an OverQuota exception will be raised with the sorted list of the resources which are too high. Otherwise, the method returns nothing. :param context: The request context, for access checks :param project_values: Optional dict containing the resource values to check against project quota, e.g. {'instances': 1, 'cores': 2, 'memory_mb': 512} :param user_values: Optional dict containing the resource values to check against user quota, e.g. {'instances': 1, 'cores': 2, 'memory_mb': 512} :param project_id: Optional project_id for scoping the limit check to a different project than in the context :param user_id: Optional user_id for scoping the limit check to a different user than in the context """ return self._driver.limit_check_project_and_user( context, self._resources, project_values=project_values, user_values=user_values, project_id=project_id, user_id=user_id) def destroy_all_by_project_and_user(self, context, project_id, user_id): """Destroy all quotas, usages, and reservations associated with a project and user. :param context: The request context, for access checks. :param project_id: The ID of the project being deleted. :param user_id: The ID of the user being deleted. """ self._driver.destroy_all_by_project_and_user(context, project_id, user_id) def destroy_all_by_project(self, context, project_id): """Destroy all quotas, usages, and reservations associated with a project. :param context: The request context, for access checks. :param project_id: The ID of the project being deleted. """ self._driver.destroy_all_by_project(context, project_id) @property def resources(self): return sorted(self._resources.keys()) def get_reserved(self): if isinstance(self._driver, NoopQuotaDriver): return -1 return 0 def _keypair_get_count_by_user(context, user_id): count = objects.KeyPairList.get_count_by_user(context, user_id) return {'user': {'key_pairs': count}} def _security_group_count(context, project_id, user_id=None): """Get the counts of security groups in the database. :param context: The request context for database access :param project_id: The project_id to count across :param user_id: The user_id to count across :returns: A dict containing the project-scoped counts and user-scoped counts if user_id is specified. For example: {'project': {'security_groups': <count across project>}, 'user': {'security_groups': <count across user>}} """ # NOTE(melwitt): This assumes a single cell.
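# Unlike _instances_cores_ram_count() below, this helper does not
# scatter-gather across cells; it only counts in the database the
# current context points at.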
return objects.SecurityGroupList.get_counts(context, project_id, user_id=user_id) def _server_group_count_members_by_user(context, group, user_id): # NOTE(melwitt): This is mostly duplicated from # InstanceGroup.count_members_by_user() to query across multiple cells. # We need to be able to pass the correct cell context to # InstanceList.get_by_filters(). # TODO(melwitt): Counting across cells for instances means we will miss # counting resources if a cell is down. In the future, we should query # placement for cores/ram and InstanceMappings for instances (once we are # deleting InstanceMappings when we delete instances). cell_mappings = objects.CellMappingList.get_all(context) greenthreads = [] filters = {'deleted': False, 'user_id': user_id, 'uuid': group.members} for cell_mapping in cell_mappings: with nova_context.target_cell(context, cell_mapping) as cctxt: greenthreads.append(utils.spawn( objects.InstanceList.get_by_filters, cctxt, filters)) instances = objects.InstanceList(objects=[]) for greenthread in greenthreads: found = greenthread.wait() instances = instances + found return {'user': {'server_group_members': len(instances)}} def _fixed_ip_count(context, project_id): # NOTE(melwitt): This assumes a single cell. count = objects.FixedIPList.get_count_by_project(context, project_id) return {'project': {'fixed_ips': count}} def _floating_ip_count(context, project_id): # NOTE(melwitt): This assumes a single cell. count = objects.FloatingIPList.get_count_by_project(context, project_id) return {'project': {'floating_ips': count}} def _instances_cores_ram_count(context, project_id, user_id=None): """Get the counts of instances, cores, and ram in the database. :param context: The request context for database access :param project_id: The project_id to count across :param user_id: The user_id to count across :returns: A dict containing the project-scoped counts and user-scoped counts if user_id is specified. For example: {'project': {'instances': <count across project>, 'cores': <count across project>, 'ram': <count across project>}, 'user': {'instances': <count across user>, 'cores': <count across user>, 'ram': <count across user>}} """ # TODO(melwitt): Counting across cells for instances means we will miss # counting resources if a cell is down. In the future, we should query # placement for cores/ram and InstanceMappings for instances (once we are # deleting InstanceMappings when we delete instances). results = nova_context.scatter_gather_all_cells( context, objects.InstanceList.get_counts, project_id, user_id=user_id) total_counts = {'project': {'instances': 0, 'cores': 0, 'ram': 0}} if user_id: total_counts['user'] = {'instances': 0, 'cores': 0, 'ram': 0} for cell_uuid, result in results.items(): if result not in (nova_context.did_not_respond_sentinel, nova_context.raised_exception_sentinel): for resource, count in result['project'].items(): total_counts['project'][resource] += count if user_id: for resource, count in result['user'].items(): total_counts['user'][resource] += count return total_counts def _server_group_count(context, project_id, user_id=None): """Get the counts of server groups in the database. :param context: The request context for database access :param project_id: The project_id to count across :param user_id: The user_id to count across :returns: A dict containing the project-scoped counts and user-scoped counts if user_id is specified.
For example: {'project': {'server_groups': <count across project>}, 'user': {'server_groups': <count across user>}} """ return objects.InstanceGroupList.get_counts(context, project_id, user_id=user_id) def _security_group_rule_count_by_group(context, security_group_id): count = db.security_group_rule_count_by_group(context, security_group_id) # NOTE(melwitt): Neither 'project' nor 'user' fit perfectly here as # security group rules are counted per security group, not by user or # project. But, the quota limits for security_group_rules can be scoped to # a user, so we'll use 'user' here. return {'user': {'security_group_rules': count}} QUOTAS = QuotaEngine() resources = [ CountableResource('instances', _instances_cores_ram_count, 'instances'), CountableResource('cores', _instances_cores_ram_count, 'cores'), CountableResource('ram', _instances_cores_ram_count, 'ram'), CountableResource('security_groups', _security_group_count, 'security_groups'), CountableResource('fixed_ips', _fixed_ip_count, 'fixed_ips'), CountableResource('floating_ips', _floating_ip_count, 'floating_ips'), AbsoluteResource('metadata_items', 'metadata_items'), AbsoluteResource('injected_files', 'injected_files'), AbsoluteResource('injected_file_content_bytes', 'injected_file_content_bytes'), AbsoluteResource('injected_file_path_bytes', 'injected_file_path_length'), CountableResource('security_group_rules', _security_group_rule_count_by_group, 'security_group_rules'), CountableResource('key_pairs', _keypair_get_count_by_user, 'key_pairs'), CountableResource('server_groups', _server_group_count, 'server_groups'), CountableResource('server_group_members', _server_group_count_members_by_user, 'server_group_members'), ] QUOTAS.register_resources(resources) def _valid_method_call_check_resource(name, method, resources): if name not in resources: raise exception.InvalidQuotaMethodUsage(method=method, res=name) res = resources[name] if res.valid_method != method: raise exception.InvalidQuotaMethodUsage(method=method, res=name) def _valid_method_call_check_resources(resource_values, method, resources): """A method to check whether the resource can use the quota method. :param resource_values: Dict containing the resource names and values :param method: The quota method to check :param resources: Dict containing Resource objects to validate against """ for name in resource_values.keys(): _valid_method_call_check_resource(name, method, resources) nova-17.0.1/nova/tests/0000775000175000017500000000000013250073472014675 5ustar zuulzuul00000000000000nova-17.0.1/nova/tests/json_ref.py0000666000175000017500000000433713250073126017061 0ustar zuulzuul00000000000000# All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License.
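# A minimal usage sketch (hypothetical file contents, for illustration
# only): given base.json containing {"server": {"$ref": "common.json"}}
# and common.json containing {"name": "fake", "ram": 512}, calling
# resolve_refs() on the parsed base.json with base_path set to its
# directory returns {"server": {"name": "fake", "ram": 512}}. Any keys
# that appear alongside "$ref" are applied recursively as overrides on
# the referenced document.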
import os from oslo_serialization import jsonutils def _resolve_ref(ref, base_path): file_path, _, json_path = ref.partition('#') if json_path: raise NotImplementedError('JSON refs with JSON path after the "#" is ' 'not yet supported') path = os.path.join(base_path, file_path) # binary mode is needed due to bug/1515231 with open(path, 'r+b') as f: ref_value = jsonutils.load(f) base_path = os.path.dirname(path) res = resolve_refs(ref_value, base_path) return res def resolve_refs(obj_with_refs, base_path): if isinstance(obj_with_refs, list): for i, item in enumerate(obj_with_refs): obj_with_refs[i] = resolve_refs(item, base_path) elif isinstance(obj_with_refs, dict): if '$ref' in obj_with_refs.keys(): ref = obj_with_refs.pop('$ref') resolved_ref = _resolve_ref(ref, base_path) # the rest of the ref dict contains overrides for the ref. Apply # those overrides recursively here. _update_dict_recursively(resolved_ref, obj_with_refs) return resolved_ref else: for key, value in obj_with_refs.items(): obj_with_refs[key] = resolve_refs(value, base_path) else: # scalar, nothing to do pass return obj_with_refs def _update_dict_recursively(d, update): """Update dict d recursively with data from dict update""" for k, v in update.items(): if k in d and isinstance(d[k], dict) and isinstance(v, dict): _update_dict_recursively(d[k], v) else: d[k] = v nova-17.0.1/nova/tests/fixtures.py0000666000175000017500000021564713250073126017135 0ustar zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""Fixtures for Nova tests.""" from __future__ import absolute_import import collections from contextlib import contextmanager import copy import logging as std_logging import os import warnings import fixtures from keystoneauth1 import session as ks import mock from oslo_concurrency import lockutils from oslo_config import cfg import oslo_messaging as messaging from oslo_messaging import conffixture as messaging_conffixture from oslo_privsep import daemon as privsep_daemon from oslo_utils import uuidutils from requests import adapters from wsgi_intercept import interceptor from nova.api.openstack.compute import tenant_networks from nova.api.openstack.placement import deploy as placement_deploy from nova.api.openstack import wsgi_app from nova.compute import rpcapi as compute_rpcapi from nova import context from nova.db import migration from nova.db.sqlalchemy import api as session from nova import exception from nova.network import model as network_model from nova import objects from nova.objects import base as obj_base from nova.objects import service as service_obj from nova import quota as nova_quota from nova import rpc from nova import service from nova.tests.functional.api import client from nova.tests import uuidsentinel from nova import wsgi _TRUE_VALUES = ('True', 'true', '1', 'yes') CONF = cfg.CONF DB_SCHEMA = {'main': "", 'api': ""} SESSION_CONFIGURED = False class ServiceFixture(fixtures.Fixture): """Run a service as a test fixture.""" def __init__(self, name, host=None, **kwargs): name = name # If not otherwise specified, the host will default to the # name of the service. Some things like aggregates care that # this is stable. host = host or name kwargs.setdefault('host', host) kwargs.setdefault('binary', 'nova-%s' % name) self.kwargs = kwargs def setUp(self): super(ServiceFixture, self).setUp() self.service = service.Service.create(**self.kwargs) self.service.start() self.addCleanup(self.service.kill) class NullHandler(std_logging.Handler): """custom default NullHandler to attempt to format the record. Used in conjunction with log_fixture.get_logging_handle_error_fixture to detect formatting errors in debug level logs without saving the logs. """ def handle(self, record): self.format(record) def emit(self, record): pass def createLock(self): self.lock = None class StandardLogging(fixtures.Fixture): """Setup Logging redirection for tests. There are a number of things we want to handle with logging in tests: * Redirect the logging to somewhere that we can test or dump it later. * Ensure that as many DEBUG messages as possible are actually executed, to ensure they are actually syntactically valid (they often have not been). * Ensure that we create useful output for tests that doesn't overwhelm the testing system (which means we can't capture the 100 MB of debug logging on every run). To do this we create a logger fixture at the root level, which defaults to INFO and create a Null Logger at DEBUG which lets us execute log messages at DEBUG but not keep the output. To support local debugging OS_DEBUG=True can be set in the environment, which will print out the full debug logging. There are also a set of overrides for particularly verbose modules to be even less than INFO. 
""" def setUp(self): super(StandardLogging, self).setUp() # set root logger to debug root = std_logging.getLogger() root.setLevel(std_logging.DEBUG) # supports collecting debug level for local runs if os.environ.get('OS_DEBUG') in _TRUE_VALUES: level = std_logging.DEBUG else: level = std_logging.INFO # Collect logs fs = '%(asctime)s %(levelname)s [%(name)s] %(message)s' self.logger = self.useFixture( fixtures.FakeLogger(format=fs, level=None)) # TODO(sdague): why can't we send level through the fake # logger? Tests prove that it breaks, but it's worth getting # to the bottom of. root.handlers[0].setLevel(level) if level > std_logging.DEBUG: # Just attempt to format debug level logs, but don't save them handler = NullHandler() self.useFixture(fixtures.LogHandler(handler, nuke_handlers=False)) handler.setLevel(std_logging.DEBUG) # Don't log every single DB migration step std_logging.getLogger( 'migrate.versioning.api').setLevel(std_logging.WARNING) # At times we end up calling back into main() functions in # testing. This has the possibility of calling logging.setup # again, which completely unwinds the logging capture we've # created here. Once we've setup the logging the way we want, # disable the ability for the test to change this. def fake_logging_setup(*args): pass self.useFixture( fixtures.MonkeyPatch('oslo_log.log.setup', fake_logging_setup)) class OutputStreamCapture(fixtures.Fixture): """Capture output streams during tests. This fixture captures errant printing to stderr / stdout during the tests and lets us see those streams at the end of the test runs instead. Useful to see what was happening during failed tests. """ def setUp(self): super(OutputStreamCapture, self).setUp() if os.environ.get('OS_STDOUT_CAPTURE') in _TRUE_VALUES: self.out = self.useFixture(fixtures.StringStream('stdout')) self.useFixture( fixtures.MonkeyPatch('sys.stdout', self.out.stream)) if os.environ.get('OS_STDERR_CAPTURE') in _TRUE_VALUES: self.err = self.useFixture(fixtures.StringStream('stderr')) self.useFixture( fixtures.MonkeyPatch('sys.stderr', self.err.stream)) @property def stderr(self): return self.err._details["stderr"].as_text() @property def stdout(self): return self.out._details["stdout"].as_text() class Timeout(fixtures.Fixture): """Setup per test timeouts. In order to avoid test deadlocks we support setting up a test timeout parameter read from the environment. In almost all cases where the timeout is reached this means a deadlock. A class level TIMEOUT_SCALING_FACTOR also exists, which allows extremely long tests to specify they need more time. """ def __init__(self, timeout, scaling=1): super(Timeout, self).__init__() try: self.test_timeout = int(timeout) except ValueError: # If timeout value is invalid do not set a timeout. self.test_timeout = 0 if scaling >= 1: self.test_timeout *= scaling else: raise ValueError('scaling value must be >= 1') def setUp(self): super(Timeout, self).setUp() if self.test_timeout > 0: self.useFixture(fixtures.Timeout(self.test_timeout, gentle=True)) class DatabasePoisonFixture(fixtures.Fixture): def setUp(self): super(DatabasePoisonFixture, self).setUp() self.useFixture(fixtures.MonkeyPatch( 'oslo_db.sqlalchemy.enginefacade._TransactionFactory.' '_create_session', self._poison_configure)) def _poison_configure(self, *a, **k): # If you encounter this error, you might be tempted to just not # inherit from NoDBTestCase. Bug #1568414 fixed a few hundred of these # errors, and not once was that the correct solution. 
Instead, # consider some of the following tips (when applicable): # # - mock at the object layer rather than the db layer, for example: # nova.objects.instance.Instance.get # vs. # nova.db.instance_get # # - mock at the api layer rather than the object layer, for example: # nova.api.openstack.common.get_instance # vs. # nova.objects.instance.Instance.get # # - mock code that requires the database but is otherwise tangential # to the code you're testing (for example: EventReporterStub) # # - peruse some of the other database poison warning fixes here: # https://review.openstack.org/#/q/topic:bug/1568414 raise Exception('This test uses methods that set internal oslo_db ' 'state, but it does not claim to use the database. ' 'This will conflict with the setup of tests that ' 'do use the database and cause failures later.') class SingleCellSimple(fixtures.Fixture): """Setup the simplest cells environment possible This should be used when you do not care about multiple cells, or having a "real" environment for tests that should not care. This will give you a single cell, and map any and all accesses to that cell (even things that would go to cell0). If you need to distinguish between cell0 and cellN, then you should use the CellDatabases fixture. If instances should appear to still be in scheduling state, pass instances_created=False to init. """ def __init__(self, instances_created=True): self.instances_created = instances_created def setUp(self): super(SingleCellSimple, self).setUp() self.useFixture(fixtures.MonkeyPatch( 'nova.objects.CellMappingList._get_all_from_db', self._fake_cell_list)) self.useFixture(fixtures.MonkeyPatch( 'nova.objects.CellMapping._get_by_uuid_from_db', self._fake_cell_get)) self.useFixture(fixtures.MonkeyPatch( 'nova.objects.HostMapping._get_by_host_from_db', self._fake_hostmapping_get)) self.useFixture(fixtures.MonkeyPatch( 'nova.objects.InstanceMapping._get_by_instance_uuid_from_db', self._fake_instancemapping_get)) self.useFixture(fixtures.MonkeyPatch( 'nova.objects.InstanceMappingList._get_by_instance_uuids_from_db', self._fake_instancemapping_get_uuids)) self.useFixture(fixtures.MonkeyPatch( 'nova.objects.InstanceMapping._save_in_db', self._fake_instancemapping_get_save)) self.useFixture(fixtures.MonkeyPatch( 'nova.context.target_cell', self._fake_target_cell)) self.useFixture(fixtures.MonkeyPatch( 'nova.context.set_target_cell', lambda c, m: None)) def _fake_hostmapping_get(self, *args): return {'id': 1, 'updated_at': None, 'created_at': None, 'host': 'host1', 'cell_mapping': self._fake_cell_list()[0]} def _fake_instancemapping_get_common(self, instance_uuid): return { 'id': 1, 'updated_at': None, 'created_at': None, 'instance_uuid': instance_uuid, 'cell_id': (self.instances_created and 1 or None), 'project_id': 'project', 'cell_mapping': ( self.instances_created and self._fake_cell_get() or None), } def _fake_instancemapping_get_save(self, *args): return self._fake_instancemapping_get_common(args[-2]) def _fake_instancemapping_get(self, *args): return self._fake_instancemapping_get_common(args[-1]) def _fake_instancemapping_get_uuids(self, *args): return [self._fake_instancemapping_get(uuid) for uuid in args[-1]] def _fake_cell_get(self, *args): return self._fake_cell_list()[0] def _fake_cell_list(self, *args): return [{'id': 1, 'updated_at': None, 'created_at': None, 'uuid': uuidsentinel.cell1, 'name': 'onlycell', 'transport_url': 'fake://nowhere/', 'database_connection': 'sqlite:///'}] @contextmanager def _fake_target_cell(self, context, target_cell): # 
NOTE(danms): Just pass through the context without actually # targeting anything. yield context class CheatingSerializer(rpc.RequestContextSerializer): """A messaging.RequestContextSerializer that helps with cells. Our normal serializer does not pass in the context like db_connection and mq_connection, for good reason. We don't really want/need to force a remote RPC server to use our values for this. However, during unit and functional tests, since we're all in the same process, we want cell-targeted RPC calls to preserve these values. Unless we had per-service config and database layer state for the fake services we start, this is a reasonable cheat. """ def serialize_context(self, context): """Serialize context with the db_connection inside.""" values = super(CheatingSerializer, self).serialize_context(context) values['db_connection'] = context.db_connection values['mq_connection'] = context.mq_connection return values def deserialize_context(self, values): """Deserialize context and honor db_connection if present.""" ctxt = super(CheatingSerializer, self).deserialize_context(values) ctxt.db_connection = values.pop('db_connection', None) ctxt.mq_connection = values.pop('mq_connection', None) return ctxt class CellDatabases(fixtures.Fixture): """Create per-cell databases for testing. How to use:: fix = CellDatabases() fix.add_cell_database('connection1') fix.add_cell_database('connection2', default=True) self.useFixture(fix) Passing default=True tells the fixture which database should be given to code that doesn't target a specific cell. """ def __init__(self): self._ctxt_mgrs = {} self._last_ctxt_mgr = None self._default_ctxt_mgr = None # NOTE(danms): Use a ReaderWriterLock to synchronize our # global database muckery here. If we change global db state # to point to a cell, we need to take an exclusive lock to # prevent any other calls to get_context_manager() until we # reset to the default. self._cell_lock = lockutils.ReaderWriterLock() def _cache_schema(self, connection_str): # NOTE(melwitt): See the regular Database fixture for why # we do this. global DB_SCHEMA if not DB_SCHEMA['main']: ctxt_mgr = self._ctxt_mgrs[connection_str] engine = ctxt_mgr.get_legacy_facade().get_engine() conn = engine.connect() migration.db_sync(database='main') DB_SCHEMA['main'] = "".join(line for line in conn.connection.iterdump()) engine.dispose() @contextmanager def _wrap_target_cell(self, context, cell_mapping): with self._cell_lock.write_lock(): if cell_mapping is None: # NOTE(danms): The real target_cell untargets with a # cell_mapping of None. Since we're controlling our # own targeting in this fixture, we need to call this out # specifically and avoid switching global database state with self._real_target_cell(context, cell_mapping) as c: yield c return ctxt_mgr = self._ctxt_mgrs[cell_mapping.database_connection] # This assumes the next local DB access is the same cell that # was targeted last time. 
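# _wrap_get_context_manager() hands out self._last_ctxt_mgr for any
# non-targeted DB access, so recording the manager here is what makes
# "local" queries inside the with block hit this cell's database.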
self._last_ctxt_mgr = ctxt_mgr try: with self._real_target_cell(context, cell_mapping) as ccontext: yield ccontext finally: # Once we have returned from the context, we need # to restore the default context manager for any # subsequent calls self._last_ctxt_mgr = self._default_ctxt_mgr def _wrap_create_context_manager(self, connection=None): ctxt_mgr = self._ctxt_mgrs[connection] return ctxt_mgr def _wrap_get_context_manager(self, context): try: # If already targeted, we can proceed without a lock if context.db_connection: return context.db_connection except AttributeError: # Unit tests with None, FakeContext, etc pass # NOTE(melwitt): This is a hack to try to deal with # local accesses i.e. non target_cell accesses. with self._cell_lock.read_lock(): # FIXME(mriedem): This is actually misleading and means we don't # catch things like bug 1717000 where a context should be targeted # to a cell but it's not, and the fixture here just returns the # last targeted context that was used. return self._last_ctxt_mgr def _wrap_get_server(self, target, endpoints, serializer=None): """Mirror rpc.get_server() but with our special sauce.""" serializer = CheatingSerializer(serializer) return messaging.get_rpc_server(rpc.TRANSPORT, target, endpoints, executor='eventlet', serializer=serializer) def _wrap_get_client(self, target, version_cap=None, serializer=None): """Mirror rpc.get_client() but with our special sauce.""" serializer = CheatingSerializer(serializer) return messaging.RPCClient(rpc.TRANSPORT, target, version_cap=version_cap, serializer=serializer) def add_cell_database(self, connection_str, default=False): """Add a cell database to the fixture. :param connection_str: An identifier used to represent the connection string for this database. It should match the database_connection field in the corresponding CellMapping. """ # NOTE(danms): Create a new context manager for the cell, which # will house the sqlite:// connection for this cell's in-memory # database. Store/index it by the connection string, which is # how we identify cells in CellMapping. ctxt_mgr = session.create_context_manager() self._ctxt_mgrs[connection_str] = ctxt_mgr # NOTE(melwitt): The first DB access through service start is # local so this initializes _last_ctxt_mgr for that and needs # to be a compute cell. self._last_ctxt_mgr = ctxt_mgr # NOTE(danms): Record which context manager should be the default # so we can restore it when we return from target-cell contexts. # If none has been provided yet, store the current one in case # no default is ever specified. if self._default_ctxt_mgr is None or default: self._default_ctxt_mgr = ctxt_mgr def get_context_manager(context): return ctxt_mgr # NOTE(danms): This is a temporary MonkeyPatch just to get # a new database created with the schema we need and the # context manager for it stashed. with fixtures.MonkeyPatch( 'nova.db.sqlalchemy.api.get_context_manager', get_context_manager): self._cache_schema(connection_str) engine = ctxt_mgr.get_legacy_facade().get_engine() engine.dispose() conn = engine.connect() conn.connection.executescript(DB_SCHEMA['main']) def setUp(self): super(CellDatabases, self).setUp() self.addCleanup(self.cleanup) self._real_target_cell = context.target_cell # NOTE(danms): These context managers are in place for the # duration of the test (unlike the temporary ones above) and # provide the actual "runtime" switching of connections for us. 
self.useFixture(fixtures.MonkeyPatch( 'nova.db.sqlalchemy.api.create_context_manager', self._wrap_create_context_manager)) self.useFixture(fixtures.MonkeyPatch( 'nova.db.sqlalchemy.api.get_context_manager', self._wrap_get_context_manager)) self.useFixture(fixtures.MonkeyPatch( 'nova.context.target_cell', self._wrap_target_cell)) self.useFixture(fixtures.MonkeyPatch( 'nova.rpc.get_server', self._wrap_get_server)) self.useFixture(fixtures.MonkeyPatch( 'nova.rpc.get_client', self._wrap_get_client)) def cleanup(self): for ctxt_mgr in self._ctxt_mgrs.values(): engine = ctxt_mgr.get_legacy_facade().get_engine() engine.dispose() class Database(fixtures.Fixture): def __init__(self, database='main', connection=None): """Create a database fixture. :param database: The type of database, 'main' or 'api' :param connection: The connection string to use """ super(Database, self).__init__() # NOTE(pkholkin): oslo_db.enginefacade is configured in tests the same # way as it is done for any other service that uses db global SESSION_CONFIGURED if not SESSION_CONFIGURED: session.configure(CONF) SESSION_CONFIGURED = True self.database = database if database == 'main': if connection is not None: ctxt_mgr = session.create_context_manager( connection=connection) facade = ctxt_mgr.get_legacy_facade() self.get_engine = facade.get_engine else: self.get_engine = session.get_engine elif database == 'api': self.get_engine = session.get_api_engine def _cache_schema(self): global DB_SCHEMA if not DB_SCHEMA[self.database]: engine = self.get_engine() conn = engine.connect() migration.db_sync(database=self.database) DB_SCHEMA[self.database] = "".join(line for line in conn.connection.iterdump()) engine.dispose() def cleanup(self): engine = self.get_engine() engine.dispose() def reset(self): self._cache_schema() engine = self.get_engine() engine.dispose() conn = engine.connect() conn.connection.executescript(DB_SCHEMA[self.database]) def setUp(self): super(Database, self).setUp() self.reset() self.addCleanup(self.cleanup) class DatabaseAtVersion(fixtures.Fixture): def __init__(self, version, database='main'): """Create a database fixture. 
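
        Unlike the plain Database fixture above, this syncs the schema only
        up to a fixed migration version, which is useful for tests that
        exercise individual migrations. A sketch (the version number here
        is illustrative)::

            self.useFixture(DatabaseAtVersion(291))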
        :param version: Max version to sync to (or None for current)
        :param database: The type of database, 'main' or 'api'
        """
        super(DatabaseAtVersion, self).__init__()
        self.database = database
        self.version = version
        if database == 'main':
            self.get_engine = session.get_engine
        elif database == 'api':
            self.get_engine = session.get_api_engine

    def cleanup(self):
        engine = self.get_engine()
        engine.dispose()

    def reset(self):
        engine = self.get_engine()
        engine.dispose()
        engine.connect()
        migration.db_sync(version=self.version, database=self.database)

    def setUp(self):
        super(DatabaseAtVersion, self).setUp()
        self.reset()
        self.addCleanup(self.cleanup)


class DefaultFlavorsFixture(fixtures.Fixture):
    def setUp(self):
        super(DefaultFlavorsFixture, self).setUp()
        ctxt = context.get_admin_context()
        defaults = {'rxtx_factor': 1.0, 'disabled': False, 'is_public': True,
                    'ephemeral_gb': 0, 'swap': 0}
        extra_specs = {
            "hw:cpu_model": "SandyBridge",
            "hw:mem_page_size": "2048",
            "hw:cpu_policy": "dedicated"
        }
        default_flavors = [
            objects.Flavor(context=ctxt, memory_mb=512, vcpus=1,
                           root_gb=1, flavorid='1', name='m1.tiny',
                           **defaults),
            objects.Flavor(context=ctxt, memory_mb=2048, vcpus=1,
                           root_gb=20, flavorid='2', name='m1.small',
                           **defaults),
            objects.Flavor(context=ctxt, memory_mb=4096, vcpus=2,
                           root_gb=40, flavorid='3', name='m1.medium',
                           **defaults),
            objects.Flavor(context=ctxt, memory_mb=8192, vcpus=4,
                           root_gb=80, flavorid='4', name='m1.large',
                           **defaults),
            objects.Flavor(context=ctxt, memory_mb=16384, vcpus=8,
                           root_gb=160, flavorid='5', name='m1.xlarge',
                           **defaults),
            objects.Flavor(context=ctxt, memory_mb=512, vcpus=1,
                           root_gb=1, flavorid='6', name='m1.tiny.specs',
                           extra_specs=extra_specs, **defaults),
        ]
        for flavor in default_flavors:
            flavor.create()


class RPCFixture(fixtures.Fixture):
    def __init__(self, *exmods):
        super(RPCFixture, self).__init__()
        self.exmods = []
        self.exmods.extend(exmods)
        self._buses = {}

    def _fake_create_transport(self, url):
        # FIXME(danms): Right now, collapse all connections
        # to a single bus. This is how our tests expect things
        # to work. When the tests are fixed, this fixture can
        # support simulating multiple independent buses, and this
        # hack should be removed.
        url = None

        # NOTE(danms): This will be called with a non-None url by
        # cells-aware code that is requesting to contact something on
        # one of the many transports we're multiplexing here.
        if url not in self._buses:
            exmods = rpc.get_allowed_exmods()
            self._buses[url] = messaging.get_rpc_transport(
                CONF,
                url=url,
                allowed_remote_exmods=exmods)
        return self._buses[url]

    def setUp(self):
        super(RPCFixture, self).setUp()
        self.addCleanup(rpc.cleanup)
        rpc.add_extra_exmods(*self.exmods)
        self.addCleanup(rpc.clear_extra_exmods)
        self.messaging_conf = messaging_conffixture.ConfFixture(CONF)
        self.messaging_conf.transport_driver = 'fake'
        self.useFixture(self.messaging_conf)
        self.useFixture(fixtures.MonkeyPatch(
            'nova.rpc.create_transport', self._fake_create_transport))
        # NOTE(danms): Execute the init with get_transport_url() as None,
        # instead of the parsed TransportURL(None) so that we can cache
        # it as it will be called later if the default is requested by
        # one of our mq-switching methods.
        with mock.patch('nova.rpc.get_transport_url') as mock_gtu:
            mock_gtu.return_value = None
            rpc.init(CONF)


class WarningsFixture(fixtures.Fixture):
    """Filters out warnings during test runs."""

    def setUp(self):
        super(WarningsFixture, self).setUp()
        # NOTE(sdague): Make deprecation warnings only happen once. Otherwise
        # this gets kind of crazy given the way that upstream python libs
        # use this.
        warnings.simplefilter("once", DeprecationWarning)
        warnings.filterwarnings('ignore',
                                message='With-statements now directly support'
                                        ' multiple context managers')

        # NOTE(sdague): nova does not use pkg_resources directly, this
        # is all very long standing deprecations about other tools
        # using it. None of this is useful to Nova development.
        warnings.filterwarnings('ignore',
                                module='pkg_resources')

        # NOTE(sdague): this remains an unresolved item around the way
        # forward on is_admin, the deprecation is definitely really premature.
        warnings.filterwarnings(
            'ignore',
            message='Policy enforcement is depending on the value of is_admin.'
                    ' This key is deprecated. Please update your policy '
                    'file to use the standard policy values.')

        # NOTE(sdague): mox3 is on life support, don't really care
        # about any deprecations coming from it
        warnings.filterwarnings('ignore',
                                module='mox3.mox')

        self.addCleanup(warnings.resetwarnings)


class ConfPatcher(fixtures.Fixture):
    """Fixture to patch and restore global CONF.

    This also resets overrides for everything that is patched during
    its teardown.
    """

    def __init__(self, **kwargs):
        """Constructor

        :param group: if specified all config options apply to that group.
        :param **kwargs: the rest of the kwargs are processed as a
            set of key/value pairs to be set as configuration overrides.
        """
        super(ConfPatcher, self).__init__()
        self.group = kwargs.pop('group', None)
        self.args = kwargs

    def setUp(self):
        super(ConfPatcher, self).setUp()
        for k, v in self.args.items():
            self.addCleanup(CONF.clear_override, k, self.group)
            CONF.set_override(k, v, self.group)


class OSAPIFixture(fixtures.Fixture):
    """Create an OS API server as a fixture.

    This spawns an OS API server as a fixture in a new greenthread in
    the current test. The fixture has a .api parameter which is a simple
    REST client that can communicate with it.

    This fixture is extremely useful for testing REST responses
    through the WSGI stack easily in functional tests.

    Usage:

        api = self.useFixture(fixtures.OSAPIFixture()).api
        resp = api.api_request('/someurl')
        self.assertEqual(200, resp.status_code)
        resp = api.api_request('/otherurl', method='POST', body='{foo}')

    The resp is a requests library response. Common attributes that
    you'll want to use are:

    - resp.status_code - integer HTTP status code returned by the request
    - resp.content - the body of the response
    - resp.headers - dictionary of HTTP headers returned

    """

    def __init__(self, api_version='v2',
                 project_id='6f70656e737461636b20342065766572'):
        """Constructor

        :param api_version: the API version that we're interested in
            using. Currently this expects 'v2' or 'v2.1' as possible
            options.
        :param project_id: the project id to use on the API.
        """
        super(OSAPIFixture, self).__init__()
        self.api_version = api_version
        self.project_id = project_id

    def setUp(self):
        super(OSAPIFixture, self).setUp()
        # A unique hostname for the wsgi-intercept.
        hostname = uuidsentinel.osapi_host
        port = 80
        service_name = 'osapi_compute'
        endpoint = 'http://%s:%s/' % (hostname, port)
        conf_overrides = {
            'osapi_compute_listen': hostname,
            'osapi_compute_listen_port': port,
            'debug': True,
        }
        self.useFixture(ConfPatcher(**conf_overrides))

        # Turn off manipulation of socket_options in TCPKeepAliveAdapter
        # to keep wsgi-intercept happy. Replace it with the method
        # from its superclass.
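        # (init_poolmanager on TCPKeepAliveAdapter normally injects TCP
        # keepalive socket options; wsgi-intercept's in-process transport
        # has no real socket to apply them to, so the plain
        # HTTPAdapter.init_poolmanager is substituted below instead.)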
self.useFixture(fixtures.MonkeyPatch( 'keystoneauth1.session.TCPKeepAliveAdapter.init_poolmanager', adapters.HTTPAdapter.init_poolmanager)) loader = wsgi.Loader().load_app(service_name) app = lambda: loader # re-use service setup code from wsgi_app to register # service, which is looked for in some tests wsgi_app._setup_service(CONF.host, service_name) intercept = interceptor.RequestsInterceptor(app, url=endpoint) intercept.install_intercept() self.addCleanup(intercept.uninstall_intercept) self.auth_url = 'http://%(host)s:%(port)s/%(api_version)s' % ({ 'host': hostname, 'port': port, 'api_version': self.api_version}) self.api = client.TestOpenStackClient('fake', 'fake', self.auth_url, self.project_id) self.admin_api = client.TestOpenStackClient( 'admin', 'admin', self.auth_url, self.project_id) # Provide a way to access the wsgi application to tests using # the fixture. self.app = app class OSMetadataServer(fixtures.Fixture): """Create an OS Metadata API server as a fixture. This spawns an OS Metadata API server as a fixture in a new greenthread in the current test. TODO(sdague): ideally for testing we'd have something like the test client which acts like requests, but connects any of the interactions needed. """ def setUp(self): super(OSMetadataServer, self).setUp() # in order to run these in tests we need to bind only to local # host, and dynamically allocate ports conf_overrides = { 'metadata_listen': '127.0.0.1', 'metadata_listen_port': 0, 'debug': True } self.useFixture(ConfPatcher(**conf_overrides)) # NOTE(mikal): we don't have root to manipulate iptables, so just # zero that bit out. self.useFixture(fixtures.MonkeyPatch( 'nova.network.linux_net.IptablesManager._apply', lambda _: None)) self.metadata = service.WSGIService("metadata") self.metadata.start() self.addCleanup(self.metadata.stop) self.md_url = "http://%s:%s/" % ( conf_overrides['metadata_listen'], self.metadata.port) class PoisonFunctions(fixtures.Fixture): """Poison functions so they explode if we touch them. When running under a non full stack test harness there are parts of the code that you don't want to go anywhere near. These include things like code that spins up extra threads, which just introduces races. """ def setUp(self): super(PoisonFunctions, self).setUp() # The nova libvirt driver starts an event thread which only # causes trouble in tests. Make sure that if tests don't # properly patch it the test explodes. # explicit import because MonkeyPatch doesn't magic import # correctly if we are patching a method on a class in a # module. import nova.virt.libvirt.host # noqa def evloop(*args, **kwargs): import sys warnings.warn("Forgot to disable libvirt event thread") sys.exit(1) self.useFixture(fixtures.MonkeyPatch( 'nova.virt.libvirt.host.Host._init_events', evloop)) class IndirectionAPIFixture(fixtures.Fixture): """Patch and restore the global NovaObject indirection api.""" def __init__(self, indirection_api): """Constructor :param indirection_api: the indirection API to be used for tests. 
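
        For example, a test that wants NovaObject remotable calls routed
        through a conductor-like API instead of hitting the database
        directly might do the following (a sketch; conductor_api stands in
        for whatever indirection implementation the test provides)::

            self.useFixture(IndirectionAPIFixture(conductor_api))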
""" super(IndirectionAPIFixture, self).__init__() self.indirection_api = indirection_api def cleanup(self): obj_base.NovaObject.indirection_api = self.orig_indirection_api def setUp(self): super(IndirectionAPIFixture, self).setUp() self.orig_indirection_api = obj_base.NovaObject.indirection_api obj_base.NovaObject.indirection_api = self.indirection_api self.addCleanup(self.cleanup) class _FakeGreenThread(object): def __init__(self, func, *args, **kwargs): self._result = func(*args, **kwargs) def cancel(self, *args, **kwargs): # This method doesn't make sense for a synchronous call, it's just # defined to satisfy the interface. pass def kill(self, *args, **kwargs): # This method doesn't make sense for a synchronous call, it's just # defined to satisfy the interface. pass def link(self, func, *args, **kwargs): func(self, *args, **kwargs) def unlink(self, func, *args, **kwargs): # This method doesn't make sense for a synchronous call, it's just # defined to satisfy the interface. pass def wait(self): return self._result class SpawnIsSynchronousFixture(fixtures.Fixture): """Patch and restore the spawn_n utility method to be synchronous""" def setUp(self): super(SpawnIsSynchronousFixture, self).setUp() self.useFixture(fixtures.MonkeyPatch( 'nova.utils.spawn_n', _FakeGreenThread)) self.useFixture(fixtures.MonkeyPatch( 'nova.utils.spawn', _FakeGreenThread)) class BannedDBSchemaOperations(fixtures.Fixture): """Ban some operations for migrations""" def __init__(self, banned_resources=None): super(BannedDBSchemaOperations, self).__init__() self._banned_resources = banned_resources or [] @staticmethod def _explode(resource, op): raise exception.DBNotAllowed( 'Operation %s.%s() is not allowed in a database migration' % ( resource, op)) def setUp(self): super(BannedDBSchemaOperations, self).setUp() for thing in self._banned_resources: self.useFixture(fixtures.MonkeyPatch( 'sqlalchemy.%s.drop' % thing, lambda *a, **k: self._explode(thing, 'drop'))) self.useFixture(fixtures.MonkeyPatch( 'sqlalchemy.%s.alter' % thing, lambda *a, **k: self._explode(thing, 'alter'))) class ForbidNewLegacyNotificationFixture(fixtures.Fixture): """Make sure the test fails if new legacy notification is added""" def __init__(self): super(ForbidNewLegacyNotificationFixture, self).__init__() self.notifier = rpc.LegacyValidatingNotifier def setUp(self): super(ForbidNewLegacyNotificationFixture, self).setUp() self.notifier.fatal = True # allow the special test value used in # nova.tests.unit.test_notifications.NotificationsTestCase self.notifier.allowed_legacy_notification_event_types.append( '_decorated_function') self.addCleanup(self.cleanup) def cleanup(self): self.notifier.fatal = False self.notifier.allowed_legacy_notification_event_types.remove( '_decorated_function') class AllServicesCurrent(fixtures.Fixture): def setUp(self): super(AllServicesCurrent, self).setUp() self.useFixture(fixtures.MonkeyPatch( 'nova.objects.Service.get_minimum_version_multi', self._fake_minimum)) compute_rpcapi.LAST_VERSION = None def _fake_minimum(self, *args, **kwargs): return service_obj.SERVICE_VERSION class RegisterNetworkQuota(fixtures.Fixture): def setUp(self): super(RegisterNetworkQuota, self).setUp() # Quota resource registration modifies the global QUOTAS engine, so # this fixture registers and unregisters network quota for a test. 
tenant_networks._register_network_quota() self.addCleanup(self.cleanup) def cleanup(self): nova_quota.QUOTAS._resources.pop('networks', None) class NeutronFixture(fixtures.Fixture): """A fixture to boot instances with neutron ports""" # the default project_id in OsaAPIFixtures tenant_id = '6f70656e737461636b20342065766572' network_1 = { 'status': 'ACTIVE', 'subnets': [], 'name': 'private-network', 'admin_state_up': True, 'tenant_id': tenant_id, 'id': '3cb9bc59-5699-4588-a4b1-b87f96708bc6', } subnet_1 = { 'name': 'private-subnet', 'enable_dhcp': True, 'network_id': network_1['id'], 'tenant_id': tenant_id, 'dns_nameservers': [], 'allocation_pools': [ { 'start': '192.168.1.1', 'end': '192.168.1.254' } ], 'host_routes': [], 'ip_version': 4, 'gateway_ip': '192.168.1.1', 'cidr': '192.168.1.1/24', 'id': 'f8a6e8f8-c2ec-497c-9f23-da9616de54ef' } network_1['subnets'] = [subnet_1['id']] port_1 = { 'id': 'ce531f90-199f-48c0-816c-13e38010b442', 'network_id': network_1['id'], 'admin_state_up': True, 'status': 'ACTIVE', 'mac_address': 'fa:16:3e:4c:2c:30', 'fixed_ips': [ { # The IP on this port must be a prefix of the IP on port_2 to # test listing servers with an ip filter regex. 'ip_address': '192.168.1.3', 'subnet_id': subnet_1['id'] } ], 'tenant_id': tenant_id } port_2 = { 'id': '88dae9fa-0dc6-49e3-8c29-3abc41e99ac9', 'network_id': network_1['id'], 'admin_state_up': True, 'status': 'ACTIVE', 'mac_address': '00:0c:29:0d:11:74', 'fixed_ips': [ { 'ip_address': '192.168.1.30', 'subnet_id': subnet_1['id'] } ], 'tenant_id': tenant_id } nw_info = [{ "profile": {}, "ovs_interfaceid": "b71f1699-42be-4515-930a-f3ef01f94aa7", "preserve_on_delete": False, "network": { "bridge": "br-int", "subnets": [{ "ips": [{ "meta": {}, "version": 4, "type": "fixed", "floating_ips": [], "address": "10.0.0.4" }], "version": 4, "meta": {}, "dns": [], "routes": [], "cidr": "10.0.0.0/26", "gateway": { "meta": {}, "version": 4, "type": "gateway", "address": "10.0.0.1" } }], "meta": { "injected": False, "tenant_id": tenant_id, "mtu": 1500 }, "id": "e1882e38-38c2-4239-ade7-35d644cb963a", "label": "public" }, "devname": "tapb71f1699-42", "vnic_type": "normal", "qbh_params": None, "meta": {}, "details": { "port_filter": True, "ovs_hybrid_plug": True }, "address": "fa:16:3e:47:94:4a", "active": True, "type": "ovs", "id": "b71f1699-42be-4515-930a-f3ef01f94aa7", "qbg_params": None }] def __init__(self, test): super(NeutronFixture, self).__init__() self.test = test self._ports = [copy.deepcopy(NeutronFixture.port_1)] self._extensions = [] self._networks = [NeutronFixture.network_1] self._subnets = [NeutronFixture.subnet_1] self._floatingips = [] def setUp(self): super(NeutronFixture, self).setUp() self.test.stub_out( 'nova.network.neutronv2.api.API.' 'validate_networks', lambda *args, **kwargs: 1) self.test.stub_out( 'nova.network.neutronv2.api.API.' 
'create_pci_requests_for_sriov_ports', lambda *args, **kwargs: None) self.test.stub_out( 'nova.network.neutronv2.api.API.setup_networks_on_host', lambda *args, **kwargs: None) self.test.stub_out( 'nova.network.neutronv2.api.API.migrate_instance_start', lambda *args, **kwargs: None) self.test.stub_out( 'nova.network.neutronv2.api.API.add_fixed_ip_to_instance', lambda *args, **kwargs: network_model.NetworkInfo.hydrate( NeutronFixture.nw_info)) self.test.stub_out( 'nova.network.neutronv2.api.API.remove_fixed_ip_from_instance', lambda *args, **kwargs: network_model.NetworkInfo.hydrate( NeutronFixture.nw_info)) self.test.stub_out( 'nova.network.neutronv2.api.API.migrate_instance_finish', lambda *args, **kwargs: None) self.test.stub_out( 'nova.network.security_group.neutron_driver.SecurityGroupAPI.' 'get_instances_security_groups_bindings', lambda *args, **kwargs: {}) self.test.stub_out('nova.network.neutronv2.api.get_client', lambda *args, **kwargs: self) def _get_first_id_match(self, id, list): filtered_list = [p for p in list if p['id'] == id] if len(filtered_list) > 0: return filtered_list[0] else: return None def _filter_ports(self, **_params): ports = copy.deepcopy(self._ports) for opt in _params: filtered_ports = [p for p in ports if p.get(opt) == _params[opt]] ports = filtered_ports return {'ports': ports} def list_extensions(self, *args, **kwargs): return copy.deepcopy({'extensions': self._extensions}) def show_port(self, port_id, **_params): port = self._get_first_id_match(port_id, self._ports) if port is None: raise exception.PortNotFound(port_id=port_id) return {'port': port} def delete_port(self, port, **_params): for current_port in self._ports: if current_port['id'] == port: self._ports.remove(current_port) def list_networks(self, retrieve_all=True, **_params): return copy.deepcopy({'networks': self._networks}) def list_ports(self, retrieve_all=True, **_params): return self._filter_ports(**_params) def list_subnets(self, retrieve_all=True, **_params): return copy.deepcopy({'subnets': self._subnets}) def list_floatingips(self, retrieve_all=True, **_params): return copy.deepcopy({'floatingips': self._floatingips}) def create_port(self, *args, **kwargs): self._ports.append(copy.deepcopy(NeutronFixture.port_2)) return copy.deepcopy({'port': NeutronFixture.port_2}) def update_port(self, port_id, body=None): new_port = self._get_first_id_match(port_id, self._ports) if body is not None: for k, v in body['port'].items(): new_port[k] = v return {'port': new_port} class _NoopConductor(object): def __getattr__(self, key): def _noop_rpc(*args, **kwargs): return None return _noop_rpc class NoopConductorFixture(fixtures.Fixture): """Stub out the conductor API to do nothing""" def setUp(self): super(NoopConductorFixture, self).setUp() self.useFixture(fixtures.MonkeyPatch( 'nova.conductor.ComputeTaskAPI', _NoopConductor)) self.useFixture(fixtures.MonkeyPatch( 'nova.conductor.API', _NoopConductor)) class EventReporterStub(fixtures.Fixture): def setUp(self): super(EventReporterStub, self).setUp() self.useFixture(fixtures.MonkeyPatch( 'nova.compute.utils.EventReporter', lambda *args, **kwargs: mock.MagicMock())) class CinderFixture(fixtures.Fixture): """A fixture to volume operations""" # the default project_id in OSAPIFixtures tenant_id = '6f70656e737461636b20342065766572' SWAP_OLD_VOL = 'a07f71dc-8151-4e7d-a0cc-cd24a3f11113' SWAP_NEW_VOL = '227cc671-f30b-4488-96fd-7d0bf13648d8' SWAP_ERR_OLD_VOL = '828419fa-3efb-4533-b458-4267ca5fe9b1' SWAP_ERR_NEW_VOL = '9c6d9c2d-7a8f-4c80-938d-3bf062b8d489' # 
This represents a bootable image-backed volume to test # boot-from-volume scenarios. IMAGE_BACKED_VOL = '6ca404f3-d844-4169-bb96-bc792f37de98' def __init__(self, test): super(CinderFixture, self).__init__() self.test = test self.swap_error = False self.swap_volume_instance_uuid = None self.swap_volume_instance_error_uuid = None self.reserved_volumes = list() # This is a map of instance UUIDs mapped to a list of volume IDs. # This map gets updated on attach/detach operations. self.attachments = collections.defaultdict(list) def setUp(self): super(CinderFixture, self).setUp() def fake_get(self_api, context, volume_id, microversion=None): # Check for the special swap volumes. if volume_id in (CinderFixture.SWAP_OLD_VOL, CinderFixture.SWAP_ERR_OLD_VOL): volume = { 'status': 'available', 'display_name': 'TEST1', 'attach_status': 'detached', 'id': volume_id, 'multiattach': False, 'size': 1 } if ((self.swap_volume_instance_uuid and volume_id == CinderFixture.SWAP_OLD_VOL) or (self.swap_volume_instance_error_uuid and volume_id == CinderFixture.SWAP_ERR_OLD_VOL)): instance_uuid = (self.swap_volume_instance_uuid if volume_id == CinderFixture.SWAP_OLD_VOL else self.swap_volume_instance_error_uuid) volume.update({ 'status': 'in-use', 'attachments': { instance_uuid: { 'mountpoint': '/dev/vdb', 'attachment_id': volume_id } }, 'attach_status': 'attached' }) return volume # Check to see if the volume is attached. for instance_uuid, volumes in self.attachments.items(): if volume_id in volumes: # The volume is attached. volume = { 'status': 'in-use', 'display_name': volume_id, 'attach_status': 'attached', 'id': volume_id, 'multiattach': False, 'size': 1, 'attachments': { instance_uuid: { 'attachment_id': volume_id, 'mountpoint': '/dev/vdb' } } } break else: # This is a test that does not care about the actual details. reserved_volume = (volume_id in self.reserved_volumes) volume = { 'status': 'attaching' if reserved_volume else 'available', 'display_name': 'TEST2', 'attach_status': 'detached', 'id': volume_id, 'multiattach': False, 'size': 1 } # Check for our special image-backed volume. if volume_id == self.IMAGE_BACKED_VOL: # Make it a bootable volume. volume['bootable'] = True # Add the image_id metadata. volume['volume_image_metadata'] = { # There would normally be more image metadata in here... 'image_id': '155d900f-4e14-4e4c-a73d-069cbf4541e6' } return volume def fake_initialize_connection(self, context, volume_id, connector): if volume_id == CinderFixture.SWAP_ERR_NEW_VOL: # Return a tuple in order to raise an exception. return () return {} def fake_migrate_volume_completion(self, context, old_volume_id, new_volume_id, error): return {'save_volume_id': new_volume_id} def fake_reserve_volume(self_api, context, volume_id): self.reserved_volumes.append(volume_id) def fake_unreserve_volume(self_api, context, volume_id): # NOTE(mnaser): It's possible that we unreserve a volume that was # never reserved (ex: instance.volume_attach.error # notification tests) if volume_id in self.reserved_volumes: self.reserved_volumes.remove(volume_id) # Signaling that swap_volume has encountered the error # from initialize_connection and is working on rolling back # the reservation on SWAP_ERR_NEW_VOL. self.swap_error = True def fake_attach(_self, context, volume_id, instance_uuid, mountpoint, mode='rw'): # Check to see if the volume is already attached to any server. 
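            # (self.attachments maps instance UUID -> [volume_id, ...]; like
            # the real Cinder API, attaching an already-attached volume is
            # rejected with InvalidInput.)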
for instance, volumes in self.attachments.items(): if volume_id in volumes: raise exception.InvalidInput( reason='Volume %s is already attached to ' 'instance %s' % (volume_id, instance)) # It's not attached so let's "attach" it. self.attachments[instance_uuid].append(volume_id) self.test.stub_out('nova.volume.cinder.API.attach', fake_attach) def fake_detach(_self, context, volume_id, instance_uuid=None, attachment_id=None): # NOTE(mnaser): It's possible that we unreserve a volume that was # never reserved (ex: instance.volume_attach.error # notification tests) if volume_id in self.reserved_volumes: self.reserved_volumes.remove(volume_id) if instance_uuid is not None: # If the volume isn't attached to this instance it will # result in a ValueError which indicates a broken test or # code, so we just let that raise up. self.attachments[instance_uuid].remove(volume_id) else: for instance, volumes in self.attachments.items(): if volume_id in volumes: volumes.remove(volume_id) break self.test.stub_out('nova.volume.cinder.API.detach', fake_detach) self.test.stub_out('nova.volume.cinder.API.begin_detaching', lambda *args, **kwargs: None) self.test.stub_out('nova.volume.cinder.API.get', fake_get) self.test.stub_out('nova.volume.cinder.API.initialize_connection', fake_initialize_connection) self.test.stub_out( 'nova.volume.cinder.API.migrate_volume_completion', fake_migrate_volume_completion) self.test.stub_out('nova.volume.cinder.API.reserve_volume', fake_reserve_volume) self.test.stub_out('nova.volume.cinder.API.roll_detaching', lambda *args, **kwargs: None) self.test.stub_out('nova.volume.cinder.API.terminate_connection', lambda *args, **kwargs: None) self.test.stub_out('nova.volume.cinder.API.unreserve_volume', fake_unreserve_volume) self.test.stub_out('nova.volume.cinder.API.check_attached', lambda *args, **kwargs: None) # TODO(mriedem): We can probably pull some of the common parts from the # CinderFixture into a common mixin class for things like the variables # and fake_get. class CinderFixtureNewAttachFlow(fixtures.Fixture): """A fixture to volume operations with the new Cinder attach/detach API""" # the default project_id in OSAPIFixtures tenant_id = '6f70656e737461636b20342065766572' SWAP_OLD_VOL = 'a07f71dc-8151-4e7d-a0cc-cd24a3f11113' SWAP_NEW_VOL = '227cc671-f30b-4488-96fd-7d0bf13648d8' SWAP_ERR_OLD_VOL = '828419fa-3efb-4533-b458-4267ca5fe9b1' SWAP_ERR_NEW_VOL = '9c6d9c2d-7a8f-4c80-938d-3bf062b8d489' SWAP_ERR_ATTACH_ID = '4a3cd440-b9c2-11e1-afa6-0800200c9a66' MULTIATTACH_VOL = '4757d51f-54eb-4442-8684-3399a6431f67' # This represents a bootable image-backed volume to test # boot-from-volume scenarios. IMAGE_BACKED_VOL = '6ca404f3-d844-4169-bb96-bc792f37de98' def __init__(self, test): super(CinderFixtureNewAttachFlow, self).__init__() self.test = test self.swap_error = False self.swap_volume_instance_uuid = None self.swap_volume_instance_error_uuid = None self.attachment_error_id = None # This is a map of instance UUIDs mapped to a list of volume IDs. # This map gets updated on attach/detach operations. self.attachments = collections.defaultdict(list) def setUp(self): super(CinderFixtureNewAttachFlow, self).setUp() def fake_get(self_api, context, volume_id, microversion=None): # Check for the special swap volumes. 
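            # (This mirrors fake_get in CinderFixture above; the notable
            # difference in this fixture is that MULTIATTACH_VOL reports
            # multiattach=True in the volume body.)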
if volume_id in (CinderFixture.SWAP_OLD_VOL, CinderFixture.SWAP_ERR_OLD_VOL): volume = { 'status': 'available', 'display_name': 'TEST1', 'attach_status': 'detached', 'id': volume_id, 'multiattach': False, 'size': 1 } if ((self.swap_volume_instance_uuid and volume_id == CinderFixture.SWAP_OLD_VOL) or (self.swap_volume_instance_error_uuid and volume_id == CinderFixture.SWAP_ERR_OLD_VOL)): instance_uuid = (self.swap_volume_instance_uuid if volume_id == CinderFixture.SWAP_OLD_VOL else self.swap_volume_instance_error_uuid) volume.update({ 'status': 'in-use', 'attachments': { instance_uuid: { 'mountpoint': '/dev/vdb', 'attachment_id': volume_id } }, 'attach_status': 'attached' }) return volume # Check to see if the volume is attached. for instance_uuid, volumes in self.attachments.items(): if volume_id in volumes: # The volume is attached. volume = { 'status': 'in-use', 'display_name': volume_id, 'attach_status': 'attached', 'id': volume_id, 'multiattach': volume_id == self.MULTIATTACH_VOL, 'size': 1, 'attachments': { instance_uuid: { 'attachment_id': volume_id, 'mountpoint': '/dev/vdb' } } } break else: # This is a test that does not care about the actual details. volume = { 'status': 'available', 'display_name': 'TEST2', 'attach_status': 'detached', 'id': volume_id, 'multiattach': volume_id == self.MULTIATTACH_VOL, 'size': 1 } # Check for our special image-backed volume. if volume_id == self.IMAGE_BACKED_VOL: # Make it a bootable volume. volume['bootable'] = True # Add the image_id metadata. volume['volume_image_metadata'] = { # There would normally be more image metadata in here... 'image_id': '155d900f-4e14-4e4c-a73d-069cbf4541e6' } return volume def fake_migrate_volume_completion(self, context, old_volume_id, new_volume_id, error): return {'save_volume_id': new_volume_id} def fake_attachment_create(_self, context, volume_id, instance_uuid, connector=None, mountpoint=None): attachment_id = uuidutils.generate_uuid() if self.attachment_error_id is not None: attachment_id = self.attachment_error_id attachment = {'id': attachment_id, 'connection_info': {'data': {}}} self.attachments['instance_uuid'].append(instance_uuid) self.attachments[instance_uuid].append(volume_id) return attachment def fake_attachment_delete(_self, context, attachment_id): instance_uuid = self.attachments['instance_uuid'][0] del self.attachments[instance_uuid][0] del self.attachments['instance_uuid'][0] if attachment_id == CinderFixtureNewAttachFlow.SWAP_ERR_ATTACH_ID: self.swap_error = True def fake_attachment_update(_self, context, attachment_id, connector, mountpoint=None): attachment_ref = {'driver_volume_type': 'fake_type', 'id': attachment_id, 'connection_info': {'data': {'foo': 'bar', 'target_lun': '1'}}} if attachment_id == CinderFixtureNewAttachFlow.SWAP_ERR_ATTACH_ID: attachment_ref = {'connection_info': ()} return attachment_ref def fake_attachment_get(_self, context, attachment_id): attachment_ref = {'driver_volume_type': 'fake_type', 'id': attachment_id, 'connection_info': {'data': {'foo': 'bar', 'target_lun': '1'}}} return attachment_ref self.test.stub_out('nova.volume.cinder.API.attachment_create', fake_attachment_create) self.test.stub_out('nova.volume.cinder.API.attachment_delete', fake_attachment_delete) self.test.stub_out('nova.volume.cinder.API.attachment_update', fake_attachment_update) self.test.stub_out('nova.volume.cinder.API.attachment_complete', lambda *args, **kwargs: None) self.test.stub_out('nova.volume.cinder.API.attachment_get', fake_attachment_get) 
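        # Taken together, the attachment_* stubs above model Cinder's
        # attachment-based flow (API microversion 3.44): attachment_create
        # reserves the volume, attachment_update hands over the connector
        # and returns connection_info, attachment_complete marks it in-use,
        # and attachment_delete detaches.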
self.test.stub_out('nova.volume.cinder.API.begin_detaching', lambda *args, **kwargs: None) self.test.stub_out('nova.volume.cinder.API.get', fake_get) self.test.stub_out( 'nova.volume.cinder.API.migrate_volume_completion', fake_migrate_volume_completion) self.test.stub_out('nova.volume.cinder.API.roll_detaching', lambda *args, **kwargs: None) self.test.stub_out('nova.volume.cinder.is_microversion_supported', lambda *args, **kwargs: None) self.test.stub_out('nova.volume.cinder.API.check_attached', lambda *args, **kwargs: None) class PlacementApiClient(object): def __init__(self, placement_fixture): self.fixture = placement_fixture def get(self, url, **kwargs): return client.APIResponse(self.fixture._fake_get(None, url, **kwargs)) def put(self, url, body, **kwargs): return client.APIResponse( self.fixture._fake_put(None, url, body, **kwargs)) class PlacementFixture(fixtures.Fixture): """A fixture to placement operations. Runs a local WSGI server bound on a free port and having the Placement application with NoAuth middleware. This fixture also prevents calling the ServiceCatalog for getting the endpoint. It's possible to ask for a specific token when running the fixtures so all calls would be passing this token. """ def __init__(self, token='admin'): self.token = token def setUp(self): super(PlacementFixture, self).setUp() self.useFixture(ConfPatcher(group='api', auth_strategy='noauth2')) loader = placement_deploy.loadapp(CONF) app = lambda: loader host = uuidsentinel.placement_host self.endpoint = 'http://%s/placement' % host intercept = interceptor.RequestsInterceptor(app, url=self.endpoint) intercept.install_intercept() self.addCleanup(intercept.uninstall_intercept) # Turn off manipulation of socket_options in TCPKeepAliveAdapter # to keep wsgi-intercept happy. Replace it with the method # from its superclass. self.useFixture(fixtures.MonkeyPatch( 'keystoneauth1.session.TCPKeepAliveAdapter.init_poolmanager', adapters.HTTPAdapter.init_poolmanager)) self._client = ks.Session(auth=None) # NOTE(sbauza): We need to mock the scheduler report client because # we need to fake Keystone by directly calling the endpoint instead # of looking up the service catalog, like we did for the OSAPIFixture. self.useFixture(fixtures.MonkeyPatch( 'nova.scheduler.client.report.SchedulerReportClient.get', self._fake_get)) self.useFixture(fixtures.MonkeyPatch( 'nova.scheduler.client.report.SchedulerReportClient.post', self._fake_post)) self.useFixture(fixtures.MonkeyPatch( 'nova.scheduler.client.report.SchedulerReportClient.put', self._fake_put)) self.useFixture(fixtures.MonkeyPatch( 'nova.scheduler.client.report.SchedulerReportClient.delete', self._fake_delete)) self.api = PlacementApiClient(self) @staticmethod def _update_headers_with_version(headers, **kwargs): version = kwargs.get("version") if version is not None: # TODO(mriedem): Perform some version discovery at some point. headers.update({ 'OpenStack-API-Version': 'placement %s' % version }) def _fake_get(self, *args, **kwargs): (url,) = args[1:] # TODO(sbauza): The current placement NoAuthMiddleware returns a 401 # in case a token is not provided. We should change that by creating # a fake token so we could remove adding the header below. 
headers = {'x-auth-token': self.token} self._update_headers_with_version(headers, **kwargs) return self._client.get( url, endpoint_override=self.endpoint, headers=headers, raise_exc=False) def _fake_post(self, *args, **kwargs): (url, data) = args[1:] # NOTE(sdague): using json= instead of data= sets the # media type to application/json for us. Placement API is # more sensitive to this than other APIs in the OpenStack # ecosystem. # TODO(sbauza): The current placement NoAuthMiddleware returns a 401 # in case a token is not provided. We should change that by creating # a fake token so we could remove adding the header below. headers = {'x-auth-token': self.token} self._update_headers_with_version(headers, **kwargs) return self._client.post( url, json=data, endpoint_override=self.endpoint, headers=headers, raise_exc=False) def _fake_put(self, *args, **kwargs): (url, data) = args[1:] # NOTE(sdague): using json= instead of data= sets the # media type to application/json for us. Placement API is # more sensitive to this than other APIs in the OpenStack # ecosystem. # TODO(sbauza): The current placement NoAuthMiddleware returns a 401 # in case a token is not provided. We should change that by creating # a fake token so we could remove adding the header below. headers = {'x-auth-token': self.token} self._update_headers_with_version(headers, **kwargs) return self._client.put( url, json=data, endpoint_override=self.endpoint, headers=headers, raise_exc=False) def _fake_delete(self, *args, **kwargs): (url,) = args[1:] # TODO(sbauza): The current placement NoAuthMiddleware returns a 401 # in case a token is not provided. We should change that by creating # a fake token so we could remove adding the header below. return self._client.delete( url, endpoint_override=self.endpoint, headers={'x-auth-token': self.token}, raise_exc=False) class UnHelperfulClientChannel(privsep_daemon._ClientChannel): def __init__(self, context): raise Exception('You have attempted to start a privsep helper. ' 'This is not allowed in the gate, and ' 'indicates a failure to have mocked your tests.') class PrivsepNoHelperFixture(fixtures.Fixture): """A fixture to catch failures to mock privsep's rootwrap helper. If you fail to mock away a privsep'd method in a unit test, then you may well end up accidentally running the privsep rootwrap helper. This will fail in the gate, but it fails in a way which doesn't identify which test is missing a mock. Instead, we raise an exception so that you at least know where you've missed something. """ def setUp(self): super(PrivsepNoHelperFixture, self).setUp() self.useFixture(fixtures.MonkeyPatch( 'oslo_privsep.daemon.RootwrapClientChannel', UnHelperfulClientChannel)) class NoopQuotaDriverFixture(fixtures.Fixture): """A fixture to run tests using the NoopQuotaDriver. We can't simply set self.flags to the NoopQuotaDriver in tests to use the NoopQuotaDriver because the QuotaEngine object is global. Concurrently running tests will fail intermittently because they might get the NoopQuotaDriver globally when they expected the default DbQuotaDriver behavior. So instead, we can patch the _driver property of the QuotaEngine class on a per-test basis. """ def setUp(self): super(NoopQuotaDriverFixture, self).setUp() self.useFixture(fixtures.MonkeyPatch('nova.quota.QuotaEngine._driver', nova_quota.NoopQuotaDriver())) # Set the config option just so that code checking for the presence of # the NoopQuotaDriver setting will see it as expected. 
# For some reason, this does *not* work when TestCase.flags is used. # When using self.flags, the concurrent test failures returned. CONF.set_override('driver', 'nova.quota.NoopQuotaDriver', 'quota') self.addCleanup(CONF.clear_override, 'driver', 'quota') nova-17.0.1/nova/tests/__init__.py0000666000175000017500000000000013250073126016772 0ustar zuulzuul00000000000000nova-17.0.1/nova/tests/unit/0000775000175000017500000000000013250073472015654 5ustar zuulzuul00000000000000nova-17.0.1/nova/tests/unit/image_fixtures.py0000666000175000017500000000572713250073126021252 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import datetime # nova.image.glance._translate_from_glance() returns datetime # objects, not strings. NOW_DATE = datetime.datetime(2010, 10, 11, 10, 30, 22) def get_image_fixtures(): """Returns a set of image fixture dicts for use in unit tests. Returns a set of dicts representing images/snapshots of varying statuses that would be returned from a call to `glanceclient.client.Client.images.list`. The IDs of the images returned start at 123 and go to 131, with the following brief summary of image attributes: | ID Type Status Notes | ---------------------------------------------------------- | 123 Public image active | 124 Snapshot queued | 125 Snapshot saving | 126 Snapshot active | 127 Snapshot killed | 128 Snapshot deleted | 129 Snapshot pending_delete | 130 Public image active Has no name """ image_id = 123 fixtures = [] def add_fixture(**kwargs): kwargs.update(created_at=NOW_DATE, updated_at=NOW_DATE) fixtures.append(kwargs) # Public image add_fixture(id=str(image_id), name='public image', is_public=True, status='active', properties={'key1': 'value1'}, min_ram="128", min_disk="10", size='25165824') image_id += 1 # Snapshot for User 1 uuid = 'aa640691-d1a7-4a67-9d3c-d35ee6b3cc74' snapshot_properties = {'instance_uuid': uuid, 'user_id': 'fake'} for status in ('queued', 'saving', 'active', 'killed', 'deleted', 'pending_delete'): deleted = False if status != 'deleted' else True deleted_at = NOW_DATE if deleted else None add_fixture(id=str(image_id), name='%s snapshot' % status, is_public=False, status=status, properties=snapshot_properties, size='25165824', deleted=deleted, deleted_at=deleted_at) image_id += 1 # Image without a name add_fixture(id=str(image_id), is_public=True, status='active', properties={}) # Image for permission tests image_id += 1 add_fixture(id=str(image_id), is_public=True, status='active', properties={}, owner='authorized_fake') return fixtures nova-17.0.1/nova/tests/unit/test_availability_zones.py0000666000175000017500000003120513250073136023155 0ustar zuulzuul00000000000000# Copyright 2013 Netease Corporation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
Tests for availability zones
"""

import mock
import six

from nova import availability_zones as az
import nova.conf
from nova import context
from nova import db
from nova import objects
from nova import test
from nova.tests import uuidsentinel

CONF = nova.conf.CONF


class AvailabilityZoneTestCases(test.TestCase):
    """Test case for aggregate-based availability zones."""

    def setUp(self):
        super(AvailabilityZoneTestCases, self).setUp()
        self.host = 'me'
        self.availability_zone = 'nova-test'
        self.default_az = CONF.default_availability_zone
        self.default_in_az = CONF.internal_service_availability_zone
        self.context = context.get_admin_context()
        self.agg = self._create_az('az_agg', self.availability_zone)

    def tearDown(self):
        db.aggregate_delete(self.context, self.agg['id'])
        super(AvailabilityZoneTestCases, self).tearDown()

    def _create_az(self, agg_name, az_name):
        agg_meta = {'name': agg_name, 'uuid': uuidsentinel.agg_uuid}
        agg = db.aggregate_create(self.context, agg_meta)
        metadata = {'availability_zone': az_name}
        db.aggregate_metadata_add(self.context, agg['id'], metadata)
        return agg

    def _update_az(self, aggregate, az_name):
        metadata = {'availability_zone': az_name}
        db.aggregate_update(self.context, aggregate['id'], metadata)

    def _create_service_with_topic(self, topic, host, disabled=False):
        values = {
            'binary': 'nova-bin',
            'host': host,
            'topic': topic,
            'disabled': disabled,
        }
        return db.service_create(self.context, values)

    def _destroy_service(self, service):
        return db.service_destroy(self.context, service['id'])

    def _add_to_aggregate(self, service, aggregate):
        return db.aggregate_host_add(self.context,
                                     aggregate['id'], service['host'])

    def _delete_from_aggregate(self, service, aggregate):
        return db.aggregate_host_delete(self.context,
                                        aggregate['id'], service['host'])

    def test_rest_availability_zone_reset_cache(self):
        az._get_cache().add('cache', 'fake_value')
        az.reset_cache()
        self.assertIsNone(az._get_cache().get('cache'))

    def test_update_host_availability_zone_cache(self):
        """Test that the availability zone cache can be updated."""
        service = self._create_service_with_topic('compute', self.host)

        # Create a new aggregate with an AZ and add the host to the AZ
        az_name = 'az1'
        cache_key = az._make_cache_key(self.host)
        agg_az1 = self._create_az('agg-az1', az_name)
        self._add_to_aggregate(service, agg_az1)
        az.update_host_availability_zone_cache(self.context, self.host)
        self.assertEqual('az1', az._get_cache().get(cache_key))
        az.update_host_availability_zone_cache(self.context, self.host, 'az2')
        self.assertEqual('az2', az._get_cache().get(cache_key))

    def test_set_availability_zone_compute_service(self):
        """Test that a compute service gets the right availability zone."""
        service = self._create_service_with_topic('compute', self.host)
        services = db.service_get_all(self.context)

        # The service has not been added to an aggregate, so confirm it
        # gets the default availability zone.
        new_service = az.set_availability_zones(self.context, services)[0]
        self.assertEqual(self.default_az, new_service['availability_zone'])

        # The service is added to the aggregate, so confirm it returns the
        # aggregate availability zone.
        self._add_to_aggregate(service, self.agg)
        new_service = az.set_availability_zones(self.context, services)[0]
        self.assertEqual(self.availability_zone,
                         new_service['availability_zone'])

        self._destroy_service(service)

    def test_set_availability_zone_unicode_key(self):
        """Test that the availability zone cache key is unicode."""
        service = self._create_service_with_topic('network', self.host)
        services = db.service_get_all(self.context)
        az.set_availability_zones(self.context, services)
        self.assertIsInstance(services[0]['host'], six.text_type)
        cached_key = az._make_cache_key(services[0]['host'])
        self.assertIsInstance(cached_key, str)
        self._destroy_service(service)

    def test_set_availability_zone_not_compute_service(self):
        """Test that a non-compute service gets the right availability
        zone.
        """
        service = self._create_service_with_topic('network', self.host)
        services = db.service_get_all(self.context)
        new_service = az.set_availability_zones(self.context, services)[0]
        self.assertEqual(self.default_in_az, new_service['availability_zone'])
        self._destroy_service(service)

    def test_get_host_availability_zone(self):
        """Test getting the right availability zone for a given host."""
        self.assertEqual(self.default_az,
                         az.get_host_availability_zone(self.context,
                                                       self.host))
        service = self._create_service_with_topic('compute', self.host)
        self._add_to_aggregate(service, self.agg)
        self.assertEqual(self.availability_zone,
                         az.get_host_availability_zone(self.context,
                                                       self.host))

    def test_update_host_availability_zone(self):
        """Test that the availability zone can be updated for a host."""
        service = self._create_service_with_topic('compute', self.host)

        # Create a new aggregate with an AZ and add the host to the AZ
        az_name = 'az1'
        agg_az1 = self._create_az('agg-az1', az_name)
        self._add_to_aggregate(service, agg_az1)
        self.assertEqual(az_name,
                         az.get_host_availability_zone(self.context,
                                                       self.host))
        # Update AZ
        new_az_name = 'az2'
        self._update_az(agg_az1, new_az_name)
        self.assertEqual(new_az_name,
                         az.get_host_availability_zone(self.context,
                                                       self.host))

    def test_delete_host_availability_zone(self):
        """Test that the availability zone can be deleted successfully."""
        service = self._create_service_with_topic('compute', self.host)

        # Create a new aggregate with an AZ and add the host to the AZ
        az_name = 'az1'
        agg_az1 = self._create_az('agg-az1', az_name)
        self._add_to_aggregate(service, agg_az1)
        self.assertEqual(az_name,
                         az.get_host_availability_zone(self.context,
                                                       self.host))
        # Delete the AZ via deleting the aggregate
        self._delete_from_aggregate(service, agg_az1)
        self.assertEqual(self.default_az,
                         az.get_host_availability_zone(self.context,
                                                       self.host))

    def test_get_availability_zones(self):
        """Test get_availability_zones."""

        # When the param get_only_available of get_availability_zones is
        # set to the default False, it returns two lists: zones with at
        # least one enabled service, and zones with no enabled services.
        # When get_only_available is set to True, it only returns a list
        # of zones with at least one enabled service.
# Use the following test data: # # zone host enabled # nova-test host1 Yes # nova-test host2 No # nova-test2 host3 Yes # nova-test3 host4 No # host5 No agg2 = self._create_az('agg-az2', 'nova-test2') agg3 = self._create_az('agg-az3', 'nova-test3') service1 = self._create_service_with_topic('compute', 'host1', disabled=False) service2 = self._create_service_with_topic('compute', 'host2', disabled=True) service3 = self._create_service_with_topic('compute', 'host3', disabled=False) service4 = self._create_service_with_topic('compute', 'host4', disabled=True) self._create_service_with_topic('compute', 'host5', disabled=True) self._add_to_aggregate(service1, self.agg) self._add_to_aggregate(service2, self.agg) self._add_to_aggregate(service3, agg2) self._add_to_aggregate(service4, agg3) zones, not_zones = az.get_availability_zones(self.context) self.assertEqual(['nova-test', 'nova-test2'], zones) self.assertEqual(['nova-test3', 'nova'], not_zones) zones = az.get_availability_zones(self.context, True) self.assertEqual(['nova-test', 'nova-test2'], zones) zones, not_zones = az.get_availability_zones(self.context, with_hosts=True) self.assertJsonEqual(zones, [(u'nova-test2', set([u'host3'])), (u'nova-test', set([u'host1']))]) self.assertJsonEqual(not_zones, [(u'nova-test3', set([u'host4'])), (u'nova', set([u'host5']))]) def test_get_instance_availability_zone_default_value(self): """Test get right availability zone by given an instance.""" fake_inst = objects.Instance(host=self.host, availability_zone=None) self.assertEqual(self.default_az, az.get_instance_availability_zone(self.context, fake_inst)) def test_get_instance_availability_zone_from_aggregate(self): """Test get availability zone from aggregate by given an instance.""" host = 'host170' service = self._create_service_with_topic('compute', host) self._add_to_aggregate(service, self.agg) fake_inst = objects.Instance(host=host, availability_zone=self.availability_zone) self.assertEqual(self.availability_zone, az.get_instance_availability_zone(self.context, fake_inst)) @mock.patch.object(az._get_cache(), 'get') def test_get_instance_availability_zone_cache_differs(self, cache_get): host = 'host170' service = self._create_service_with_topic('compute', host) self._add_to_aggregate(service, self.agg) cache_get.return_value = self.default_az fake_inst = objects.Instance(host=host, availability_zone=self.availability_zone) self.assertEqual( self.availability_zone, az.get_instance_availability_zone(self.context, fake_inst)) def test_get_instance_availability_zone_no_host(self): """Test get availability zone from instance if host is None.""" fake_inst = objects.Instance(host=None, availability_zone='inst-az') result = az.get_instance_availability_zone(self.context, fake_inst) self.assertEqual('inst-az', result) def test_get_instance_availability_zone_no_host_set(self): """Test get availability zone from instance if host not set. This is testing the case in the compute API where the Instance object does not have the host attribute set because it's just the object that goes into the BuildRequest, it wasn't actually pulled from the DB. The instance in this case doesn't actually get inserted into the DB until it reaches conductor. So the host attribute may not be set but we expect availability_zone always will in the API because of _validate_and_build_base_options setting a default value which goes into the object. 
""" fake_inst = objects.Instance(availability_zone='inst-az') result = az.get_instance_availability_zone(self.context, fake_inst) self.assertEqual('inst-az', result) def test_get_instance_availability_zone_no_host_no_az(self): """Test get availability zone if neither host nor az is set.""" fake_inst = objects.Instance(host=None, availability_zone=None) result = az.get_instance_availability_zone(self.context, fake_inst) self.assertIsNone(result) nova-17.0.1/nova/tests/unit/test_conf.py0000666000175000017500000000727013250073126020216 0ustar zuulzuul00000000000000# Copyright 2016 HPE, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os import tempfile import fixtures import mock from oslo_config import cfg import nova.conf.compute from nova import config from nova import test class ConfTest(test.NoDBTestCase): """This is a test and pattern for parsing tricky options.""" class TestConfigOpts(cfg.ConfigOpts): def __call__(self, args=None, default_config_files=None): if default_config_files is None: default_config_files = [] return cfg.ConfigOpts.__call__( self, args=args, prog='test', version='1.0', usage='%(prog)s FOO BAR', default_config_files=default_config_files, validate_default_values=True) def setUp(self): super(ConfTest, self).setUp() self.useFixture(fixtures.NestedTempfile()) self.conf = self.TestConfigOpts() self.tempdirs = [] def create_tempfiles(self, files, ext='.conf'): tempfiles = [] for (basename, contents) in files: if not os.path.isabs(basename): (fd, path) = tempfile.mkstemp(prefix=basename, suffix=ext) else: path = basename + ext fd = os.open(path, os.O_CREAT | os.O_WRONLY) tempfiles.append(path) try: os.write(fd, contents.encode('utf-8')) finally: os.close(fd) return tempfiles def test_reserved_huge_page(self): nova.conf.compute.register_opts(self.conf) paths = self.create_tempfiles( [('1', '[DEFAULT]\n' 'reserved_huge_pages = node:0,size:2048,count:64\n')]) self.conf(['--config-file', paths[0]]) # NOTE(sdague): In oslo.config if you specify a parameter # incorrectly, it silently drops it from the conf. Which means # the attr doesn't exist at all. The first attr test here is # for an unrelated boolean option that is using defaults (so # will always work. It's a basic control that *anything* is working. self.assertTrue(hasattr(self.conf, 'force_raw_images')) self.assertTrue(hasattr(self.conf, 'reserved_huge_pages'), "Parse error with reserved_huge_pages") # NOTE(sdague): Yes, this actually parses as an array holding # a dict. 
actual = [{'node': '0', 'size': '2048', 'count': '64'}] self.assertEqual(actual, self.conf.reserved_huge_pages) class TestParseArgs(test.NoDBTestCase): @mock.patch.object(config.log, 'register_options') def test_parse_args_glance_debug_false(self, register_options): self.flags(debug=False, group='glance') config.parse_args([], configure_db=False, init_rpc=False) self.assertIn('glanceclient=WARN', config.CONF.default_log_levels) @mock.patch.object(config.log, 'register_options') def test_parse_args_glance_debug_true(self, register_options): self.flags(debug=True, group='glance') config.parse_args([], configure_db=False, init_rpc=False) self.assertIn('glanceclient=DEBUG', config.CONF.default_log_levels) nova-17.0.1/nova/tests/unit/test_notifier.py0000666000175000017500000000442113250073126021103 0ustar zuulzuul00000000000000# Copyright 2015 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from nova import rpc from nova import test class TestNotifier(test.NoDBTestCase): @mock.patch('oslo_messaging.get_rpc_transport') @mock.patch('oslo_messaging.get_notification_transport') @mock.patch('oslo_messaging.Notifier') def test_notification_format_affects_notification_driver(self, mock_notifier, mock_noti_trans, mock_transport): conf = mock.Mock() conf.notifications.versioned_notifications_topics = [ 'versioned_notifications'] cases = { 'unversioned': [ mock.call(mock.ANY, serializer=mock.ANY), mock.call(mock.ANY, serializer=mock.ANY, driver='noop')], 'both': [ mock.call(mock.ANY, serializer=mock.ANY), mock.call(mock.ANY, serializer=mock.ANY, topics=['versioned_notifications'])], 'versioned': [ mock.call(mock.ANY, serializer=mock.ANY, driver='noop'), mock.call(mock.ANY, serializer=mock.ANY, topics=['versioned_notifications'])]} for config in cases: mock_notifier.reset_mock() mock_notifier.side_effect = ['first', 'second'] conf.notifications.notification_format = config rpc.init(conf) self.assertEqual(cases[config], mock_notifier.call_args_list) self.assertEqual('first', rpc.LEGACY_NOTIFIER) self.assertEqual('second', rpc.NOTIFIER) nova-17.0.1/nova/tests/unit/fake_hosts.py0000666000175000017500000000255713250073126020363 0ustar zuulzuul00000000000000# Copyright (c) 2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
""" Provides some fake hosts to test host and service related functions """ from nova.tests.unit.objects import test_service HOST_LIST = [ {"host_name": "host_c1", "service": "compute", "zone": "nova"}, {"host_name": "host_c2", "service": "compute", "zone": "nova"}] OS_API_HOST_LIST = {"hosts": HOST_LIST} HOST_LIST_NOVA_ZONE = [ {"host_name": "host_c1", "service": "compute", "zone": "nova"}, {"host_name": "host_c2", "service": "compute", "zone": "nova"}] service_base = test_service.fake_service SERVICES_LIST = [ dict(service_base, host='host_c1', topic='compute', binary='nova-compute'), dict(service_base, host='host_c2', topic='compute', binary='nova-compute')] nova-17.0.1/nova/tests/unit/volume/0000775000175000017500000000000013250073472017163 5ustar zuulzuul00000000000000nova-17.0.1/nova/tests/unit/volume/__init__.py0000666000175000017500000000000013250073126021260 0ustar zuulzuul00000000000000nova-17.0.1/nova/tests/unit/volume/test_cinder.py0000666000175000017500000013703613250073126022050 0ustar zuulzuul00000000000000# Copyright 2013 Mirantis, Inc. # Copyright 2013 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from cinderclient import api_versions as cinder_api_versions from cinderclient import exceptions as cinder_exception from cinderclient.v2 import limits as cinder_limits from keystoneauth1 import loading as ks_loading from keystoneclient import exceptions as keystone_exception import mock from oslo_utils import timeutils import six import nova.conf from nova import context from nova import exception from nova import test from nova.tests.unit.fake_instance import fake_instance_obj from nova.tests import uuidsentinel as uuids from nova.volume import cinder CONF = nova.conf.CONF class FakeVolume(object): def __init__(self, volume_id, size=1, attachments=None, multiattach=False): self.id = volume_id self.name = 'volume_name' self.description = 'volume_description' self.status = 'available' self.created_at = timeutils.utcnow() self.size = size self.availability_zone = 'nova' self.attachments = attachments or [] self.volume_type = 99 self.bootable = False self.snapshot_id = 'snap_id_1' self.metadata = {} self.multiattach = multiattach def get(self, volume_id): return self.volume_id class FakeSnapshot(object): def __init__(self, snapshot_id, volume_id, size=1): self.id = snapshot_id self.name = 'snapshot_name' self.description = 'snapshot_description' self.status = 'available' self.size = size self.created_at = timeutils.utcnow() self.progress = '99%' self.volume_id = volume_id self.project_id = 'fake_project' class FakeAttachment(object): def __init__(self): self.id = uuids.attachment_id self.status = 'attaching' self.instance = uuids.instance_uuid self.volume_id = uuids.volume_id self.attached_at = timeutils.utcnow() self.detached_at = None self.attach_mode = 'rw' self.connection_info = {'driver_volume_type': 'fake_type', 'target_lun': '1', 'foo': 'bar', 'attachment_id': uuids.attachment_id} self.att = {'id': self.id, 'status': self.status, 'instance': self.instance, 
'volume_id': self.volume_id, 'attached_at': self.attached_at, 'detached_at': self.detached_at, 'attach_mode': self.attach_mode, 'connection_info': self.connection_info} def get(self, key, default=None): return self.att.get(key, default) def __setitem__(self, key, value): self.att[key] = value def __getitem__(self, key): return self.att[key] def to_dict(self): return self.att class CinderApiTestCase(test.NoDBTestCase): def setUp(self): super(CinderApiTestCase, self).setUp() self.api = cinder.API() self.ctx = context.get_admin_context() @mock.patch('nova.volume.cinder.cinderclient') def test_get(self, mock_cinderclient): volume_id = 'volume_id1' mock_volumes = mock.MagicMock() mock_cinderclient.return_value = mock.MagicMock(volumes=mock_volumes) self.api.get(self.ctx, volume_id) mock_cinderclient.assert_called_once_with(self.ctx, microversion=None) mock_volumes.get.assert_called_once_with(volume_id) @mock.patch('nova.volume.cinder.cinderclient') def test_get_failed_notfound(self, mock_cinderclient): mock_cinderclient.return_value.volumes.get.side_effect = ( cinder_exception.NotFound(404, '404')) self.assertRaises(exception.VolumeNotFound, self.api.get, self.ctx, 'id1') @mock.patch('nova.volume.cinder.cinderclient') def test_get_failed_badrequest(self, mock_cinderclient): mock_cinderclient.return_value.volumes.get.side_effect = ( cinder_exception.BadRequest(400, '400')) self.assertRaises(exception.InvalidInput, self.api.get, self.ctx, 'id1') @mock.patch('nova.volume.cinder.cinderclient') def test_get_failed_connection_failed(self, mock_cinderclient): mock_cinderclient.return_value.volumes.get.side_effect = ( cinder_exception.ConnectionError('')) self.assertRaises(exception.CinderConnectionFailed, self.api.get, self.ctx, 'id1') @mock.patch('nova.volume.cinder.cinderclient') def test_get_with_shared_targets(self, mock_cinderclient): """Tests getting a volume at microversion 3.48 which includes the shared_targets and service_uuid parameters in the volume response body. """ mock_volume = mock.MagicMock( shared_targets=False, service_uuid=uuids.service_uuid) mock_volumes = mock.MagicMock() mock_volumes.get.return_value = mock_volume mock_cinderclient.return_value = mock.MagicMock(volumes=mock_volumes) vol = self.api.get(self.ctx, uuids.volume_id, microversion='3.48') mock_cinderclient.assert_called_once_with( self.ctx, microversion='3.48') mock_volumes.get.assert_called_once_with(uuids.volume_id) self.assertIn('shared_targets', vol) self.assertFalse(vol['shared_targets']) self.assertEqual(uuids.service_uuid, vol['service_uuid']) @mock.patch('nova.volume.cinder.cinderclient', side_effect=exception.CinderAPIVersionNotAvailable( version='3.48')) def test_get_microversion_not_supported(self, mock_cinderclient): """Tests getting a volume at microversion 3.48 but that version is not available. 
""" self.assertRaises(exception.CinderAPIVersionNotAvailable, self.api.get, self.ctx, uuids.volume_id, microversion='3.48') @mock.patch('nova.volume.cinder.cinderclient') def test_create(self, mock_cinderclient): volume = FakeVolume('id1') mock_volumes = mock.MagicMock() mock_cinderclient.return_value = mock.MagicMock(volumes=mock_volumes) mock_volumes.create.return_value = volume created_volume = self.api.create(self.ctx, 1, '', '') self.assertEqual('id1', created_volume['id']) self.assertEqual(1, created_volume['size']) mock_cinderclient.assert_called_once_with(self.ctx) mock_volumes.create.assert_called_once_with(1, availability_zone=None, description='', imageRef=None, metadata=None, name='', project_id=None, snapshot_id=None, user_id=None, volume_type=None) @mock.patch('nova.volume.cinder.cinderclient') def test_create_failed(self, mock_cinderclient): mock_cinderclient.return_value.volumes.create.side_effect = ( cinder_exception.BadRequest(400, '400')) self.assertRaises(exception.InvalidInput, self.api.create, self.ctx, 1, '', '') @mock.patch('nova.volume.cinder.cinderclient') def test_create_over_quota_failed(self, mock_cinderclient): mock_cinderclient.return_value.volumes.create.side_effect = ( cinder_exception.OverLimit(413)) self.assertRaises(exception.OverQuota, self.api.create, self.ctx, 1, '', '') mock_cinderclient.return_value.volumes.create.assert_called_once_with( 1, user_id=None, imageRef=None, availability_zone=None, volume_type=None, description='', snapshot_id=None, name='', project_id=None, metadata=None) @mock.patch('nova.volume.cinder.cinderclient') def test_get_all(self, mock_cinderclient): volume1 = FakeVolume('id1') volume2 = FakeVolume('id2') volume_list = [volume1, volume2] mock_volumes = mock.MagicMock() mock_cinderclient.return_value = mock.MagicMock(volumes=mock_volumes) mock_volumes.list.return_value = volume_list volumes = self.api.get_all(self.ctx) self.assertEqual(2, len(volumes)) self.assertEqual(['id1', 'id2'], [vol['id'] for vol in volumes]) mock_cinderclient.assert_called_once_with(self.ctx) mock_volumes.list.assert_called_once_with(detailed=True, search_opts={}) @mock.patch('nova.volume.cinder.cinderclient') def test_get_all_with_search(self, mock_cinderclient): volume1 = FakeVolume('id1') mock_volumes = mock.MagicMock() mock_cinderclient.return_value = mock.MagicMock(volumes=mock_volumes) mock_volumes.list.return_value = [volume1] volumes = self.api.get_all(self.ctx, search_opts={'id': 'id1'}) self.assertEqual(1, len(volumes)) self.assertEqual('id1', volumes[0]['id']) mock_cinderclient.assert_called_once_with(self.ctx) mock_volumes.list.assert_called_once_with(detailed=True, search_opts={'id': 'id1'}) @mock.patch.object(cinder.az, 'get_instance_availability_zone', return_value='zone1') def test_check_availability_zone_differs(self, mock_get_instance_az): self.flags(cross_az_attach=False, group='cinder') volume = {'id': uuids.volume_id, 'status': 'available', 'attach_status': 'detached', 'availability_zone': 'zone2'} instance = fake_instance_obj(self.ctx) # Simulate _provision_instances in the compute API; the instance is not # created in the API so the instance will not have an id attribute set. 
delattr(instance, 'id') self.assertRaises(exception.InvalidVolume, self.api.check_availability_zone, self.ctx, volume, instance) mock_get_instance_az.assert_called_once_with(self.ctx, instance) @mock.patch('nova.volume.cinder.cinderclient') def test_reserve_volume(self, mock_cinderclient): mock_volumes = mock.MagicMock() mock_cinderclient.return_value = mock.MagicMock(volumes=mock_volumes) self.api.reserve_volume(self.ctx, 'id1') mock_cinderclient.assert_called_once_with(self.ctx) mock_volumes.reserve.assert_called_once_with('id1') @mock.patch('nova.volume.cinder.cinderclient') def test_unreserve_volume(self, mock_cinderclient): mock_volumes = mock.MagicMock() mock_cinderclient.return_value = mock.MagicMock(volumes=mock_volumes) self.api.unreserve_volume(self.ctx, 'id1') mock_cinderclient.assert_called_once_with(self.ctx) mock_volumes.unreserve.assert_called_once_with('id1') @mock.patch('nova.volume.cinder.cinderclient') def test_begin_detaching(self, mock_cinderclient): mock_volumes = mock.MagicMock() mock_cinderclient.return_value = mock.MagicMock(volumes=mock_volumes) self.api.begin_detaching(self.ctx, 'id1') mock_cinderclient.assert_called_once_with(self.ctx) mock_volumes.begin_detaching.assert_called_once_with('id1') @mock.patch('nova.volume.cinder.cinderclient') def test_roll_detaching(self, mock_cinderclient): mock_volumes = mock.MagicMock() mock_cinderclient.return_value = mock.MagicMock(volumes=mock_volumes) self.api.roll_detaching(self.ctx, 'id1') mock_cinderclient.assert_called_once_with(self.ctx) mock_volumes.roll_detaching.assert_called_once_with('id1') @mock.patch('nova.volume.cinder.cinderclient') def test_attach(self, mock_cinderclient): mock_volumes = mock.MagicMock() mock_cinderclient.return_value = mock.MagicMock(volumes=mock_volumes) self.api.attach(self.ctx, 'id1', 'uuid', 'point') mock_cinderclient.assert_called_once_with(self.ctx) mock_volumes.attach.assert_called_once_with('id1', 'uuid', 'point', mode='rw') @mock.patch('nova.volume.cinder.cinderclient') def test_attach_with_mode(self, mock_cinderclient): mock_volumes = mock.MagicMock() mock_cinderclient.return_value = mock.MagicMock(volumes=mock_volumes) self.api.attach(self.ctx, 'id1', 'uuid', 'point', mode='ro') mock_cinderclient.assert_called_once_with(self.ctx) mock_volumes.attach.assert_called_once_with('id1', 'uuid', 'point', mode='ro') @mock.patch('nova.volume.cinder.cinderclient') def test_attachment_create(self, mock_cinderclient): """Tests the happy path for creating a volume attachment without a mountpoint. """ attachment_ref = {'id': uuids.attachment_id, 'connection_info': {}} expected_attachment_ref = {'id': uuids.attachment_id, 'connection_info': {}} mock_cinderclient.return_value.attachments.create.return_value = ( attachment_ref) result = self.api.attachment_create( self.ctx, uuids.volume_id, uuids.instance_id) self.assertEqual(expected_attachment_ref, result) mock_cinderclient.return_value.attachments.create.\ assert_called_once_with(uuids.volume_id, None, uuids.instance_id) @mock.patch('nova.volume.cinder.cinderclient') def test_attachment_create_with_mountpoint(self, mock_cinderclient): """Tests the happy path for creating a volume attachment with a mountpoint. 
""" attachment_ref = {'id': uuids.attachment_id, 'connection_info': {}} expected_attachment_ref = {'id': uuids.attachment_id, 'connection_info': {}} mock_cinderclient.return_value.attachments.create.return_value = ( attachment_ref) original_connector = {'host': 'fake-host'} updated_connector = dict(original_connector, mountpoint='/dev/vdb') result = self.api.attachment_create( self.ctx, uuids.volume_id, uuids.instance_id, connector=original_connector, mountpoint='/dev/vdb') self.assertEqual(expected_attachment_ref, result) # Make sure the original connector wasn't modified. self.assertNotIn('mountpoint', original_connector) # Make sure the mountpoint was passed through via the connector. mock_cinderclient.return_value.attachments.create.\ assert_called_once_with(uuids.volume_id, updated_connector, uuids.instance_id) @mock.patch('nova.volume.cinder.cinderclient') def test_attachment_create_volume_not_found(self, mock_cinderclient): """Tests that the translate_volume_exception decorator is used.""" # fake out the volume not found error mock_cinderclient.return_value.attachments.create.side_effect = ( cinder_exception.NotFound(404)) self.assertRaises(exception.VolumeNotFound, self.api.attachment_create, self.ctx, uuids.volume_id, uuids.instance_id) @mock.patch('nova.volume.cinder.cinderclient', side_effect=exception.CinderAPIVersionNotAvailable( version='3.44')) def test_attachment_create_unsupported_api_version(self, mock_cinderclient): """Tests that CinderAPIVersionNotAvailable is passed back through if 3.44 isn't available. """ self.assertRaises(exception.CinderAPIVersionNotAvailable, self.api.attachment_create, self.ctx, uuids.volume_id, uuids.instance_id) mock_cinderclient.assert_called_once_with(self.ctx, '3.44') @mock.patch('nova.volume.cinder.cinderclient') def test_attachment_update(self, mock_cinderclient): """Tests the happy path for updating a volume attachment without a mountpoint. """ fake_attachment = FakeAttachment() connector = {'host': 'fake-host'} expected_attachment_ref = { 'id': uuids.attachment_id, 'volume_id': fake_attachment.volume_id, 'connection_info': { 'attach_mode': 'rw', 'attached_at': fake_attachment.attached_at, 'data': {'foo': 'bar', 'target_lun': '1'}, 'detached_at': None, 'driver_volume_type': 'fake_type', 'instance': fake_attachment.instance, 'status': 'attaching', 'volume_id': fake_attachment.volume_id}} mock_cinderclient.return_value.attachments.update.return_value = ( fake_attachment) result = self.api.attachment_update( self.ctx, uuids.attachment_id, connector=connector) self.assertEqual(expected_attachment_ref, result) # Make sure the connector wasn't modified. self.assertNotIn('mountpoint', connector) mock_cinderclient.return_value.attachments.update.\ assert_called_once_with(uuids.attachment_id, connector) @mock.patch('nova.volume.cinder.cinderclient') def test_attachment_update_with_mountpoint(self, mock_cinderclient): """Tests the happy path for updating a volume attachment with a mountpoint. 
""" fake_attachment = FakeAttachment() original_connector = {'host': 'fake-host'} updated_connector = dict(original_connector, mountpoint='/dev/vdb') expected_attachment_ref = { 'id': uuids.attachment_id, 'volume_id': fake_attachment.volume_id, 'connection_info': { 'attach_mode': 'rw', 'attached_at': fake_attachment.attached_at, 'data': {'foo': 'bar', 'target_lun': '1'}, 'detached_at': None, 'driver_volume_type': 'fake_type', 'instance': fake_attachment.instance, 'status': 'attaching', 'volume_id': fake_attachment.volume_id}} mock_cinderclient.return_value.attachments.update.return_value = ( fake_attachment) result = self.api.attachment_update( self.ctx, uuids.attachment_id, connector=original_connector, mountpoint='/dev/vdb') self.assertEqual(expected_attachment_ref, result) # Make sure the original connector wasn't modified. self.assertNotIn('mountpoint', original_connector) # Make sure the mountpoint was passed through via the connector. mock_cinderclient.return_value.attachments.update.\ assert_called_once_with(uuids.attachment_id, updated_connector) @mock.patch('nova.volume.cinder.cinderclient') def test_attachment_update_attachment_not_found(self, mock_cinderclient): """Tests that the translate_attachment_exception decorator is used.""" # fake out the volume not found error mock_cinderclient.return_value.attachments.update.side_effect = ( cinder_exception.NotFound(404)) self.assertRaises(exception.VolumeAttachmentNotFound, self.api.attachment_update, self.ctx, uuids.attachment_id, connector={'host': 'fake-host'}) @mock.patch('nova.volume.cinder.cinderclient') def test_attachment_update_attachment_no_connector(self, mock_cinderclient): """Tests that the translate_cinder_exception decorator is used.""" # fake out the volume bad request error mock_cinderclient.return_value.attachments.update.side_effect = ( cinder_exception.BadRequest(400)) self.assertRaises(exception.InvalidInput, self.api.attachment_update, self.ctx, uuids.attachment_id, connector=None) @mock.patch('nova.volume.cinder.cinderclient', side_effect=exception.CinderAPIVersionNotAvailable( version='3.44')) def test_attachment_update_unsupported_api_version(self, mock_cinderclient): """Tests that CinderAPIVersionNotAvailable is passed back through if 3.44 isn't available. 
""" self.assertRaises(exception.CinderAPIVersionNotAvailable, self.api.attachment_update, self.ctx, uuids.attachment_id, connector={}) mock_cinderclient.assert_called_once_with(self.ctx, '3.44', skip_version_check=True) @mock.patch('nova.volume.cinder.cinderclient') def test_attachment_delete(self, mock_cinderclient): mock_attachments = mock.MagicMock() mock_cinderclient.return_value = \ mock.MagicMock(attachments=mock_attachments) attachment_id = uuids.attachment self.api.attachment_delete(self.ctx, attachment_id) mock_cinderclient.assert_called_once_with(self.ctx, '3.44', skip_version_check=True) mock_attachments.delete.assert_called_once_with(attachment_id) @mock.patch('nova.volume.cinder.LOG') @mock.patch('nova.volume.cinder.cinderclient') def test_attachment_delete_failed(self, mock_cinderclient, mock_log): mock_cinderclient.return_value.attachments.delete.side_effect = ( cinder_exception.NotFound(404, '404')) attachment_id = uuids.attachment ex = self.assertRaises(exception.VolumeAttachmentNotFound, self.api.attachment_delete, self.ctx, attachment_id) self.assertEqual(404, ex.code) self.assertIn(attachment_id, six.text_type(ex)) @mock.patch('nova.volume.cinder.cinderclient', side_effect=exception.CinderAPIVersionNotAvailable( version='3.44')) def test_attachment_delete_unsupported_api_version(self, mock_cinderclient): """Tests that CinderAPIVersionNotAvailable is passed back through if 3.44 isn't available. """ self.assertRaises(exception.CinderAPIVersionNotAvailable, self.api.attachment_delete, self.ctx, uuids.attachment_id) mock_cinderclient.assert_called_once_with(self.ctx, '3.44', skip_version_check=True) @mock.patch('nova.volume.cinder.cinderclient') def test_attachment_complete(self, mock_cinderclient): mock_attachments = mock.MagicMock() mock_cinderclient.return_value = \ mock.MagicMock(attachments=mock_attachments) attachment_id = uuids.attachment self.api.attachment_complete(self.ctx, attachment_id) mock_cinderclient.assert_called_once_with(self.ctx, '3.44', skip_version_check=True) mock_attachments.complete.assert_called_once_with(attachment_id) @mock.patch('nova.volume.cinder.cinderclient') def test_attachment_complete_failed(self, mock_cinderclient): mock_cinderclient.return_value.attachments.complete.side_effect = ( cinder_exception.NotFound(404, '404')) attachment_id = uuids.attachment ex = self.assertRaises(exception.VolumeAttachmentNotFound, self.api.attachment_complete, self.ctx, attachment_id) self.assertEqual(404, ex.code) self.assertIn(attachment_id, six.text_type(ex)) @mock.patch('nova.volume.cinder.cinderclient', side_effect=exception.CinderAPIVersionNotAvailable( version='3.44')) def test_attachment_complete_unsupported_api_version(self, mock_cinderclient): """Tests that CinderAPIVersionNotAvailable is passed back. If microversion 3.44 isn't available that should result in a CinderAPIVersionNotAvailable exception. 
""" self.assertRaises(exception.CinderAPIVersionNotAvailable, self.api.attachment_complete, self.ctx, uuids.attachment_id) mock_cinderclient.assert_called_once_with(self.ctx, '3.44', skip_version_check=True) @mock.patch('nova.volume.cinder.cinderclient') def test_detach(self, mock_cinderclient): mock_volumes = mock.MagicMock() mock_cinderclient.return_value = mock.MagicMock(version='2', volumes=mock_volumes) self.api.detach(self.ctx, 'id1', instance_uuid='fake_uuid', attachment_id='fakeid') mock_cinderclient.assert_called_with(self.ctx) mock_volumes.detach.assert_called_once_with('id1', 'fakeid') @mock.patch('nova.volume.cinder.cinderclient') def test_detach_no_attachment_id(self, mock_cinderclient): attachment = {'server_id': 'fake_uuid', 'attachment_id': 'fakeid' } mock_volumes = mock.MagicMock() mock_cinderclient.return_value = mock.MagicMock(version='2', volumes=mock_volumes) mock_cinderclient.return_value.volumes.get.return_value = \ FakeVolume('id1', attachments=[attachment]) self.api.detach(self.ctx, 'id1', instance_uuid='fake_uuid') mock_cinderclient.assert_called_with(self.ctx, microversion=None) mock_volumes.detach.assert_called_once_with('id1', None) @mock.patch('nova.volume.cinder.cinderclient') def test_detach_no_attachment_id_multiattach(self, mock_cinderclient): attachment = {'server_id': 'fake_uuid', 'attachment_id': 'fakeid' } mock_volumes = mock.MagicMock() mock_cinderclient.return_value = mock.MagicMock(version='2', volumes=mock_volumes) mock_cinderclient.return_value.volumes.get.return_value = \ FakeVolume('id1', attachments=[attachment], multiattach=True) self.api.detach(self.ctx, 'id1', instance_uuid='fake_uuid') mock_cinderclient.assert_called_with(self.ctx, microversion=None) mock_volumes.detach.assert_called_once_with('id1', 'fakeid') @mock.patch('nova.volume.cinder.cinderclient') def test_attachment_get(self, mock_cinderclient): mock_attachment = mock.MagicMock() mock_cinderclient.return_value = \ mock.MagicMock(attachments=mock_attachment) attachment_id = uuids.attachment self.api.attachment_get(self.ctx, attachment_id) mock_cinderclient.assert_called_once_with(self.ctx, '3.44', skip_version_check=True) mock_attachment.show.assert_called_once_with(attachment_id) @mock.patch('nova.volume.cinder.cinderclient') def test_attachment_get_failed(self, mock_cinderclient): mock_cinderclient.return_value.attachments.show.side_effect = ( cinder_exception.NotFound(404, '404')) attachment_id = uuids.attachment ex = self.assertRaises(exception.VolumeAttachmentNotFound, self.api.attachment_get, self.ctx, attachment_id) self.assertEqual(404, ex.code) self.assertIn(attachment_id, six.text_type(ex)) @mock.patch('nova.volume.cinder.cinderclient', side_effect=exception.CinderAPIVersionNotAvailable( version='3.44')) def test_attachment_get_unsupported_api_version(self, mock_cinderclient): """Tests that CinderAPIVersionNotAvailable is passed back. If microversion 3.44 isn't available that should result in a CinderAPIVersionNotAvailable exception. """ self.assertRaises(exception.CinderAPIVersionNotAvailable, self.api.attachment_get, self.ctx, uuids.attachment_id) mock_cinderclient.assert_called_once_with(self.ctx, '3.44', skip_version_check=True) @mock.patch('nova.volume.cinder.cinderclient') def test_initialize_connection(self, mock_cinderclient): connection_info = {'foo': 'bar'} mock_cinderclient.return_value.volumes. 
\ initialize_connection.return_value = connection_info volume_id = 'fake_vid' connector = {'host': 'fakehost1'} actual = self.api.initialize_connection(self.ctx, volume_id, connector) expected = connection_info expected['connector'] = connector self.assertEqual(expected, actual) mock_cinderclient.return_value.volumes. \ initialize_connection.assert_called_once_with(volume_id, connector) @mock.patch('nova.volume.cinder.LOG') @mock.patch('nova.volume.cinder.cinderclient') def test_initialize_connection_exception_no_code( self, mock_cinderclient, mock_log): mock_cinderclient.return_value.volumes. \ initialize_connection.side_effect = ( cinder_exception.ClientException(500, "500")) mock_cinderclient.return_value.volumes. \ terminate_connection.side_effect = ( test.TestingException) connector = {'host': 'fakehost1'} self.assertRaises(cinder_exception.ClientException, self.api.initialize_connection, self.ctx, 'id1', connector) self.assertIsNone(mock_log.error.call_args_list[1][0][1]['code']) @mock.patch('nova.volume.cinder.cinderclient') def test_initialize_connection_rollback(self, mock_cinderclient): mock_cinderclient.return_value.volumes.\ initialize_connection.side_effect = ( cinder_exception.ClientException(500, "500")) connector = {'host': 'host1'} ex = self.assertRaises(cinder_exception.ClientException, self.api.initialize_connection, self.ctx, 'id1', connector) self.assertEqual(500, ex.code) mock_cinderclient.return_value.volumes.\ terminate_connection.assert_called_once_with('id1', connector) @mock.patch('nova.volume.cinder.cinderclient') def test_initialize_connection_no_rollback(self, mock_cinderclient): mock_cinderclient.return_value.volumes.\ initialize_connection.side_effect = test.TestingException connector = {'host': 'host1'} self.assertRaises(test.TestingException, self.api.initialize_connection, self.ctx, 'id1', connector) self.assertFalse(mock_cinderclient.return_value.volumes. terminate_connection.called) @mock.patch('nova.volume.cinder.cinderclient') def test_terminate_connection(self, mock_cinderclient): mock_volumes = mock.MagicMock() mock_cinderclient.return_value = mock.MagicMock(volumes=mock_volumes) self.api.terminate_connection(self.ctx, 'id1', 'connector') mock_cinderclient.assert_called_once_with(self.ctx) mock_volumes.terminate_connection.assert_called_once_with('id1', 'connector') @mock.patch('nova.volume.cinder.cinderclient') def test_delete(self, mock_cinderclient): mock_volumes = mock.MagicMock() mock_cinderclient.return_value = mock.MagicMock(volumes=mock_volumes) self.api.delete(self.ctx, 'id1') mock_cinderclient.assert_called_once_with(self.ctx) mock_volumes.delete.assert_called_once_with('id1') def test_update(self): self.assertRaises(NotImplementedError, self.api.update, self.ctx, '', '') @mock.patch('nova.volume.cinder.cinderclient') def test_get_absolute_limits_forbidden(self, cinderclient): """Tests to make sure we gracefully handle a Forbidden error raised from python-cinderclient when getting limits. 
""" cinderclient.return_value.limits.get.side_effect = ( cinder_exception.Forbidden(403)) self.assertRaises( exception.Forbidden, self.api.get_absolute_limits, self.ctx) @mock.patch('nova.volume.cinder.cinderclient') def test_get_absolute_limits(self, cinderclient): """Tests the happy path of getting the absolute limits.""" expected_limits = { "totalSnapshotsUsed": 0, "maxTotalBackups": 10, "maxTotalVolumeGigabytes": 1000, "maxTotalSnapshots": 10, "maxTotalBackupGigabytes": 1000, "totalBackupGigabytesUsed": 0, "maxTotalVolumes": 10, "totalVolumesUsed": 0, "totalBackupsUsed": 0, "totalGigabytesUsed": 0 } limits_obj = cinder_limits.Limits(None, {'absolute': expected_limits}) cinderclient.return_value.limits.get.return_value = limits_obj actual_limits = self.api.get_absolute_limits(self.ctx) self.assertDictEqual(expected_limits, actual_limits) @mock.patch('nova.volume.cinder.cinderclient') def test_get_snapshot(self, mock_cinderclient): snapshot_id = 'snapshot_id' mock_volume_snapshots = mock.MagicMock() mock_cinderclient.return_value = mock.MagicMock( volume_snapshots=mock_volume_snapshots) self.api.get_snapshot(self.ctx, snapshot_id) mock_cinderclient.assert_called_once_with(self.ctx) mock_volume_snapshots.get.assert_called_once_with(snapshot_id) @mock.patch('nova.volume.cinder.cinderclient') def test_get_snapshot_failed_notfound(self, mock_cinderclient): mock_cinderclient.return_value.volume_snapshots.get.side_effect = ( cinder_exception.NotFound(404, '404')) self.assertRaises(exception.SnapshotNotFound, self.api.get_snapshot, self.ctx, 'snapshot_id') @mock.patch('nova.volume.cinder.cinderclient') def test_get_snapshot_connection_failed(self, mock_cinderclient): mock_cinderclient.return_value.volume_snapshots.get.side_effect = ( cinder_exception.ConnectionError('')) self.assertRaises(exception.CinderConnectionFailed, self.api.get_snapshot, self.ctx, 'snapshot_id') @mock.patch('nova.volume.cinder.cinderclient') def test_get_all_snapshots(self, mock_cinderclient): snapshot1 = FakeSnapshot('snapshot_id1', 'id1') snapshot2 = FakeSnapshot('snapshot_id2', 'id2') snapshot_list = [snapshot1, snapshot2] mock_volume_snapshots = mock.MagicMock() mock_cinderclient.return_value = mock.MagicMock( volume_snapshots=mock_volume_snapshots) mock_volume_snapshots.list.return_value = snapshot_list snapshots = self.api.get_all_snapshots(self.ctx) self.assertEqual(2, len(snapshots)) self.assertEqual(['snapshot_id1', 'snapshot_id2'], [snap['id'] for snap in snapshots]) self.assertEqual(['id1', 'id2'], [snap['volume_id'] for snap in snapshots]) mock_cinderclient.assert_called_once_with(self.ctx) mock_volume_snapshots.list.assert_called_once_with(detailed=True) @mock.patch('nova.volume.cinder.cinderclient') def test_create_snapshot(self, mock_cinderclient): snapshot = FakeSnapshot('snapshot_id1', 'id1') mock_volume_snapshots = mock.MagicMock() mock_cinderclient.return_value = mock.MagicMock( volume_snapshots=mock_volume_snapshots) mock_volume_snapshots.create.return_value = snapshot created_snapshot = self.api.create_snapshot(self.ctx, 'id1', 'name', 'description') self.assertEqual('snapshot_id1', created_snapshot['id']) self.assertEqual('id1', created_snapshot['volume_id']) mock_cinderclient.assert_called_once_with(self.ctx) mock_volume_snapshots.create.assert_called_once_with('id1', False, 'name', 'description') @mock.patch('nova.volume.cinder.cinderclient') def test_create_force(self, mock_cinderclient): snapshot = FakeSnapshot('snapshot_id1', 'id1') mock_volume_snapshots = mock.MagicMock() 
mock_cinderclient.return_value = mock.MagicMock( volume_snapshots=mock_volume_snapshots) mock_volume_snapshots.create.return_value = snapshot created_snapshot = self.api.create_snapshot_force(self.ctx, 'id1', 'name', 'description') self.assertEqual('snapshot_id1', created_snapshot['id']) self.assertEqual('id1', created_snapshot['volume_id']) mock_cinderclient.assert_called_once_with(self.ctx) mock_volume_snapshots.create.assert_called_once_with('id1', True, 'name', 'description') @mock.patch('nova.volume.cinder.cinderclient') def test_delete_snapshot(self, mock_cinderclient): mock_volume_snapshots = mock.MagicMock() mock_cinderclient.return_value = mock.MagicMock( volume_snapshots=mock_volume_snapshots) self.api.delete_snapshot(self.ctx, 'snapshot_id') mock_cinderclient.assert_called_once_with(self.ctx) mock_volume_snapshots.delete.assert_called_once_with('snapshot_id') @mock.patch('nova.volume.cinder.cinderclient') def test_update_snapshot_status(self, mock_cinderclient): mock_volume_snapshots = mock.MagicMock() mock_cinderclient.return_value = mock.MagicMock( volume_snapshots=mock_volume_snapshots) self.api.update_snapshot_status(self.ctx, 'snapshot_id', 'error') mock_cinderclient.assert_called_once_with(self.ctx) mock_volume_snapshots.update_snapshot_status.assert_called_once_with( 'snapshot_id', {'status': 'error', 'progress': '90%'}) @mock.patch('nova.volume.cinder.cinderclient') def test_get_volume_encryption_metadata(self, mock_cinderclient): mock_volumes = mock.MagicMock() mock_cinderclient.return_value = mock.MagicMock(volumes=mock_volumes) self.api.get_volume_encryption_metadata(self.ctx, {'encryption_key_id': 'fake_key'}) mock_cinderclient.assert_called_once_with(self.ctx) mock_volumes.get_encryption_metadata.assert_called_once_with( {'encryption_key_id': 'fake_key'}) def test_translate_cinder_exception_no_error(self): my_func = mock.Mock() my_func.__name__ = 'my_func' my_func.return_value = 'foo' res = cinder.translate_cinder_exception(my_func)('fizzbuzz', 'bar', 'baz') self.assertEqual('foo', res) my_func.assert_called_once_with('fizzbuzz', 'bar', 'baz') def test_translate_cinder_exception_cinder_connection_error(self): self._do_translate_cinder_exception_test( cinder_exception.ConnectionError, exception.CinderConnectionFailed) def test_translate_cinder_exception_keystone_connection_error(self): self._do_translate_cinder_exception_test( keystone_exception.ConnectionError, exception.CinderConnectionFailed) def test_translate_cinder_exception_cinder_bad_request(self): self._do_translate_cinder_exception_test( cinder_exception.BadRequest(400, '400'), exception.InvalidInput) def test_translate_cinder_exception_keystone_bad_request(self): self._do_translate_cinder_exception_test( keystone_exception.BadRequest, exception.InvalidInput) def test_translate_cinder_exception_cinder_forbidden(self): self._do_translate_cinder_exception_test( cinder_exception.Forbidden(403, '403'), exception.Forbidden) def test_translate_cinder_exception_keystone_forbidden(self): self._do_translate_cinder_exception_test( keystone_exception.Forbidden, exception.Forbidden) def test_translate_mixed_exception_over_limit(self): self._do_translate_mixed_exception_test( cinder_exception.OverLimit(''), exception.OverQuota) def test_translate_mixed_exception_volume_not_found(self): self._do_translate_mixed_exception_test( cinder_exception.NotFound(''), exception.VolumeNotFound) def test_translate_mixed_exception_keystone_not_found(self): self._do_translate_mixed_exception_test( keystone_exception.NotFound, 
exception.VolumeNotFound) def _do_translate_cinder_exception_test(self, raised_exc, expected_exc): self._do_translate_exception_test(raised_exc, expected_exc, cinder.translate_cinder_exception) def _do_translate_mixed_exception_test(self, raised_exc, expected_exc): self._do_translate_exception_test(raised_exc, expected_exc, cinder.translate_mixed_exceptions) def _do_translate_exception_test(self, raised_exc, expected_exc, wrapper): my_func = mock.Mock() my_func.__name__ = 'my_func' my_func.side_effect = raised_exc self.assertRaises(expected_exc, wrapper(my_func), 'foo', 'bar', 'baz') class CinderClientTestCase(test.NoDBTestCase): """Used to test constructing a cinder client object at various versions.""" def setUp(self): super(CinderClientTestCase, self).setUp() cinder.reset_globals() self.ctxt = context.RequestContext('fake-user', 'fake-project') # Mock out the keystoneauth stuff. self.mock_session = mock.Mock( autospec='keystoneauth1.loading.session.Session') load_session = mock.patch('keystoneauth1.loading.' 'load_session_from_conf_options', return_value=self.mock_session).start() self.addCleanup(load_session.stop) @mock.patch('cinderclient.client.get_volume_api_from_url', return_value='3') def test_create_v3_client_no_microversion(self, get_volume_api): """Tests that creating a v3 client, which is the default, and without specifying a microversion will default to 3.0 as the version to use. """ client = cinder.cinderclient(self.ctxt) self.assertEqual(cinder_api_versions.APIVersion('3.0'), client.api_version) get_volume_api.assert_called_once_with( self.mock_session.get_endpoint.return_value) @mock.patch('cinderclient.client.get_volume_api_from_url', return_value='3') @mock.patch('cinderclient.client.get_highest_client_server_version', return_value=2.0) # Fake the case that cinder is really old. def test_create_v3_client_with_microversion_too_new(self, get_highest_version, get_volume_api): """Tests that creating a v3 client and requesting a microversion that is either too new for the server (or client) to support raises an exception. """ self.assertRaises(exception.CinderAPIVersionNotAvailable, cinder.cinderclient, self.ctxt, microversion='3.44') get_volume_api.assert_called_once_with( self.mock_session.get_endpoint.return_value) get_highest_version.assert_called_once_with( self.mock_session.get_endpoint.return_value) @mock.patch('cinderclient.client.get_highest_client_server_version', return_value=cinder_api_versions.MAX_VERSION) @mock.patch('cinderclient.client.get_volume_api_from_url', return_value='3') def test_create_v3_client_with_microversion_available(self, get_volume_api, get_highest_version): """Tests that creating a v3 client and requesting a microversion that is available in the server and supported by the client will result in creating a Client object with the requested microversion. 
""" client = cinder.cinderclient(self.ctxt, microversion='3.44') self.assertEqual(cinder_api_versions.APIVersion('3.44'), client.api_version) get_volume_api.assert_called_once_with( self.mock_session.get_endpoint.return_value) get_highest_version.assert_called_once_with( self.mock_session.get_endpoint.return_value) @mock.patch('cinderclient.client.get_highest_client_server_version', new_callable=mock.NonCallableMock) # asserts not called @mock.patch('cinderclient.client.get_volume_api_from_url', return_value='3') def test_create_v3_client_with_microversion_skip_version_check( self, get_volume_api, get_highest_version): """Tests that creating a v3 client and requesting a microversion but asking to skip the version discovery check is honored. """ client = cinder.cinderclient(self.ctxt, microversion='3.44', skip_version_check=True) self.assertEqual(cinder_api_versions.APIVersion('3.44'), client.api_version) get_volume_api.assert_called_once_with( self.mock_session.get_endpoint.return_value) @mock.patch.object(ks_loading, 'load_auth_from_conf_options') def test_load_auth_plugin_failed(self, mock_load_from_conf): mock_load_from_conf.return_value = None self.assertRaises(cinder_exception.Unauthorized, cinder._load_auth_plugin, CONF) @mock.patch('nova.volume.cinder._ADMIN_AUTH') def test_admin_context_without_token(self, mock_admin_auth): mock_admin_auth.return_value = '_FAKE_ADMIN_AUTH' admin_ctx = context.get_admin_context() params = cinder._get_cinderclient_parameters(admin_ctx) self.assertEqual(params[0], mock_admin_auth) nova-17.0.1/nova/tests/unit/test_hooks.py0000666000175000017500000001354613250073126020417 0ustar zuulzuul00000000000000# Copyright (c) 2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""Tests for hook customization.""" import stevedore from nova import hooks from nova import test class SampleHookA(object): name = "a" def _add_called(self, op, kwargs): called = kwargs.get('called', None) if called is not None: called.append(op + self.name) def pre(self, *args, **kwargs): self._add_called("pre", kwargs) class SampleHookB(SampleHookA): name = "b" def post(self, rv, *args, **kwargs): self._add_called("post", kwargs) class SampleHookC(SampleHookA): name = "c" def pre(self, f, *args, **kwargs): self._add_called("pre" + f.__name__, kwargs) def post(self, f, rv, *args, **kwargs): self._add_called("post" + f.__name__, kwargs) class SampleHookExceptionPre(SampleHookA): name = "epre" exception = Exception() def pre(self, f, *args, **kwargs): raise self.exception class SampleHookExceptionPost(SampleHookA): name = "epost" exception = Exception() def post(self, f, rv, *args, **kwargs): raise self.exception class MockEntryPoint(object): def __init__(self, cls): self.cls = cls def load(self): return self.cls class MockedHookTestCase(test.BaseHookTestCase): PLUGINS = [] def setUp(self): super(MockedHookTestCase, self).setUp() hooks.reset() hook_manager = hooks.HookManager.make_test_instance(self.PLUGINS) self.stub_out('nova.hooks.HookManager', lambda x: hook_manager) class HookTestCase(MockedHookTestCase): PLUGINS = [ stevedore.extension.Extension('test_hook', MockEntryPoint(SampleHookA), SampleHookA, SampleHookA()), stevedore.extension.Extension('test_hook', MockEntryPoint(SampleHookB), SampleHookB, SampleHookB()), ] def setUp(self): super(HookTestCase, self).setUp() hooks.reset() @hooks.add_hook('test_hook') def _hooked(self, a, b=1, c=2, called=None): return 42 def test_basic(self): self.assertEqual(42, self._hooked(1)) mgr = hooks._HOOKS['test_hook'] self.assert_has_hook('test_hook', self._hooked) self.assertEqual(2, len(mgr.extensions)) self.assertEqual(SampleHookA, mgr.extensions[0].plugin) self.assertEqual(SampleHookB, mgr.extensions[1].plugin) def test_order_of_execution(self): called_order = [] self._hooked(42, called=called_order) self.assertEqual(['prea', 'preb', 'postb'], called_order) class HookTestCaseWithFunction(MockedHookTestCase): PLUGINS = [ stevedore.extension.Extension('function_hook', MockEntryPoint(SampleHookC), SampleHookC, SampleHookC()), ] @hooks.add_hook('function_hook', pass_function=True) def _hooked(self, a, b=1, c=2, called=None): return 42 def test_basic(self): self.assertEqual(42, self._hooked(1)) mgr = hooks._HOOKS['function_hook'] self.assert_has_hook('function_hook', self._hooked) self.assertEqual(1, len(mgr.extensions)) self.assertEqual(SampleHookC, mgr.extensions[0].plugin) def test_order_of_execution(self): called_order = [] self._hooked(42, called=called_order) self.assertEqual(['pre_hookedc', 'post_hookedc'], called_order) class HookFailPreTestCase(MockedHookTestCase): PLUGINS = [ stevedore.extension.Extension('fail_pre', MockEntryPoint(SampleHookExceptionPre), SampleHookExceptionPre, SampleHookExceptionPre()), ] @hooks.add_hook('fail_pre', pass_function=True) def _hooked(self, a, b=1, c=2, called=None): return 42 def test_hook_fail_should_still_return(self): self.assertEqual(42, self._hooked(1)) mgr = hooks._HOOKS['fail_pre'] self.assert_has_hook('fail_pre', self._hooked) self.assertEqual(1, len(mgr.extensions)) self.assertEqual(SampleHookExceptionPre, mgr.extensions[0].plugin) def test_hook_fail_should_raise_fatal(self): self.stub_out('nova.tests.unit.test_hooks.' 
                      'SampleHookExceptionPre.exception',
                      hooks.FatalHookException())
        self.assertRaises(hooks.FatalHookException, self._hooked, 1)


class HookFailPostTestCase(MockedHookTestCase):
    PLUGINS = [
        stevedore.extension.Extension('fail_post',
                                      MockEntryPoint(SampleHookExceptionPost),
                                      SampleHookExceptionPost,
                                      SampleHookExceptionPost()),
    ]

    @hooks.add_hook('fail_post', pass_function=True)
    def _hooked(self, a, b=1, c=2, called=None):
        return 42

    def test_hook_fail_should_still_return(self):
        self.assertEqual(42, self._hooked(1))

        mgr = hooks._HOOKS['fail_post']
        self.assert_has_hook('fail_post', self._hooked)
        self.assertEqual(1, len(mgr.extensions))
        self.assertEqual(SampleHookExceptionPost, mgr.extensions[0].plugin)

    def test_hook_fail_should_raise_fatal(self):
        self.stub_out('nova.tests.unit.test_hooks.'
                      'SampleHookExceptionPost.exception',
                      hooks.FatalHookException())
        self.assertRaises(hooks.FatalHookException, self._hooked, 1)
nova-17.0.1/nova/tests/unit/api_samples_test_base/0000775000175000017500000000000013250073472022202 5ustar zuulzuul00000000000000nova-17.0.1/nova/tests/unit/api_samples_test_base/__init__.py0000666000175000017500000000000013250073126024277 0ustar zuulzuul00000000000000nova-17.0.1/nova/tests/unit/api_samples_test_base/test_compare_result.py0000666000175000017500000003761213250073126026646 0ustar zuulzuul00000000000000# Copyright 2015 HPE, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import copy

import mock
import testtools

from nova import test
from nova.tests.functional import api_samples_test_base


class TestCompareResult(test.NoDBTestCase):
    """Provide test coverage for result comparison logic in functional tests.

    _compare_result performs two types of comparisons: template data and
    sample data.

    Template data means the response is checked against a regex that is
    referenced by the template name. The template name is specified in the
    format %(name)s

    Sample data is a normal value comparison.
    """

    def getApiSampleTestBaseHelper(self):
        """Build an instance without running any unwanted test methods"""
        # NOTE(auggy): TestCase takes a "test" method name to run in
        # __init__; calling it this way prevents additional test methods
        # from running.
        ast_instance = api_samples_test_base.ApiSampleTestBase('setUp')

        # required by ApiSampleTestBase
        ast_instance.api_major_version = 'v2'
        ast_instance._project_id = 'True'

        # automagically create magic methods usually handled by test classes
        ast_instance.compute = mock.MagicMock()

        ast_instance.subs = ast_instance._get_regexes()

        return ast_instance

    def setUp(self):
        super(TestCompareResult, self).setUp()
        self.ast = self.getApiSampleTestBaseHelper()

    def test_bare_strings_match(self):
        """compare 2 bare strings that match"""
        sample_data = u'foo'
        response_data = u'foo'
        result = self.ast._compare_result(
                expected=sample_data,
                result=response_data,
                result_str="Test")

        # NOTE(auggy): _compare_result will not return a matched value in
        # the case of bare strings. If they don't match it will throw an
        # exception, otherwise it returns "None".
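        # NOTE(editor): illustrative contrast (see the template tests below):
        # for template data such as u'%(id)s', _compare_result returns the
        # matched value (e.g. the UUID string) instead of None.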
self.assertEqual( expected=None, observed=result, message='Check _compare_result of 2 bare strings') def test_bare_strings_no_match(self): """check 2 bare strings that don't match""" sample_data = u'foo' response_data = u'bar' with testtools.ExpectedException(api_samples_test_base.NoMatch): self.ast._compare_result( expected=sample_data, result=response_data, result_str="Test") def test_template_strings_match(self): """compare 2 template strings (contain %) that match""" template_data = u'%(id)s' response_data = u'858f295a-8543-45fa-804a-08f8356d616d' result = self.ast._compare_result( expected=template_data, result=response_data, result_str="Test") self.assertEqual( expected=response_data, observed=result, message='Check _compare_result of 2 template strings') def test_template_strings_no_match(self): """check 2 template strings (contain %) that don't match""" template_data = u'%(id)s' response_data = u'$58f295a-8543-45fa-804a-08f8356d616d' with testtools.ExpectedException(api_samples_test_base.NoMatch): self.ast._compare_result( expected=template_data, result=response_data, result_str="Test") # TODO(auggy): _compare_result needs a consistent return value # In some cases it returns the value if it matched, in others it returns # None. In all cases, it throws an exception if there's no match. def test_bare_int_match(self): """check 2 bare ints that match""" sample_data = 42 response_data = 42 result = self.ast._compare_result( expected=sample_data, result=response_data, result_str="Test") self.assertEqual( expected=None, observed=result, message='Check _compare_result of 2 bare ints') def test_bare_int_no_match(self): """check 2 bare ints that don't match""" sample_data = 42 response_data = 43 with testtools.ExpectedException(api_samples_test_base.NoMatch): self.ast._compare_result( expected=sample_data, result=response_data, result_str="Test") # TODO(auggy): _compare_result needs a consistent return value def test_template_int_match(self): """check template int against string containing digits""" template_data = u'%(int)s' response_data = u'42' result = self.ast._compare_result( expected=template_data, result=response_data, result_str="Test") self.assertEqual( expected=None, observed=result, message='Check _compare_result of template ints') def test_template_int_no_match(self): """check template int against a string containing no digits""" template_data = u'%(int)s' response_data = u'foo' with testtools.ExpectedException(api_samples_test_base.NoMatch): self.ast._compare_result( expected=template_data, result=response_data, result_str="Test") def test_template_int_value(self): """check an int value of a template int throws exception""" # template_data = u'%(int_test)' # response_data = 42 # use an int instead of a string as the subs value local_subs = copy.deepcopy(self.ast.subs) local_subs.update({'int_test': 42}) with testtools.ExpectedException(TypeError): self.ast.subs = local_subs # TODO(auggy): _compare_result needs a consistent return value def test_dict_match(self): """check 2 matching dictionaries""" template_data = { u'server': { u'id': u'%(id)s', u'adminPass': u'%(password)s' } } response_data = { u'server': { u'id': u'858f295a-8543-45fa-804a-08f8356d616d', u'adminPass': u'4ZQ3bb6WYbC2'} } result = self.ast._compare_result( expected=template_data, result=response_data, result_str="Test") self.assertEqual( expected=u'858f295a-8543-45fa-804a-08f8356d616d', observed=result, message='Check _compare_result of 2 dictionaries') def test_dict_no_match_value(self): """check 2 
dictionaries where one has a different value""" sample_data = { u'server': { u'id': u'858f295a-8543-45fa-804a-08f8356d616d', u'adminPass': u'foo' } } response_data = { u'server': { u'id': u'858f295a-8543-45fa-804a-08f8356d616d', u'adminPass': u'4ZQ3bb6WYbC2'} } with testtools.ExpectedException(api_samples_test_base.NoMatch): self.ast._compare_result( expected=sample_data, result=response_data, result_str="Test") def test_dict_no_match_extra_key(self): """check 2 dictionaries where one has an extra key""" template_data = { u'server': { u'id': u'%(id)s', u'adminPass': u'%(password)s', u'foo': u'foo' } } response_data = { u'server': { u'id': u'858f295a-8543-45fa-804a-08f8356d616d', u'adminPass': u'4ZQ3bb6WYbC2'} } with testtools.ExpectedException(api_samples_test_base.NoMatch): self.ast._compare_result( expected=template_data, result=response_data, result_str="Test") def test_dict_result_type_mismatch(self): """check expected is a dictionary and result is not a dictionary""" template_data = { u'server': { u'id': u'%(id)s', u'adminPass': u'%(password)s', } } response_data = u'foo' with testtools.ExpectedException(api_samples_test_base.NoMatch): self.ast._compare_result( expected=template_data, result=response_data, result_str="Test") # TODO(auggy): _compare_result needs a consistent return value def test_list_match(self): """check 2 matching lists""" template_data = { u'links': [ { u'href': u'%(versioned_compute_endpoint)s/server/%(uuid)s', u'rel': u'self' }, { u'href': u'%(compute_endpoint)s/servers/%(uuid)s', u'rel': u'bookmark' } ] } response_data = { u'links': [ { u'href': (u'http://openstack.example.com/v2/%s/server/' '858f295a-8543-45fa-804a-08f8356d616d' % api_samples_test_base.PROJECT_ID ), u'rel': u'self' }, { u'href': (u'http://openstack.example.com/%s/servers/' '858f295a-8543-45fa-804a-08f8356d616d' % api_samples_test_base.PROJECT_ID ), u'rel': u'bookmark' } ] } result = self.ast._compare_result( expected=template_data, result=response_data, result_str="Test") self.assertEqual( expected=None, observed=result, message='Check _compare_result of 2 lists') def test_list_match_extra_item_result(self): """check extra list items in result """ template_data = { u'links': [ { u'href': u'%(versioned_compute_endpoint)s/server/%(uuid)s', u'rel': u'self' }, { u'href': u'%(compute_endpoint)s/servers/%(uuid)s', u'rel': u'bookmark' } ] } response_data = { u'links': [ { u'href': (u'http://openstack.example.com/v2/openstack/server/' '858f295a-8543-45fa-804a-08f8356d616d'), u'rel': u'self' }, { u'href': (u'http://openstack.example.com/openstack/servers/' '858f295a-8543-45fa-804a-08f8356d616d'), u'rel': u'bookmark' }, u'foo' ] } with testtools.ExpectedException(api_samples_test_base.NoMatch): self.ast._compare_result( expected=template_data, result=response_data, result_str="Test") def test_list_match_extra_item_template(self): """check extra list items in template """ template_data = { u'links': [ { u'href': u'%(versioned_compute_endpoint)s/server/%(uuid)s', u'rel': u'self' }, { u'href': u'%(compute_endpoint)s/servers/%(uuid)s', u'rel': u'bookmark' }, u'foo' # extra field ] } response_data = { u'links': [ { u'href': (u'http://openstack.example.com/v2/openstack/server/' '858f295a-8543-45fa-804a-08f8356d616d'), u'rel': u'self' }, { u'href': (u'http://openstack.example.com/openstack/servers/' '858f295a-8543-45fa-804a-08f8356d616d'), u'rel': u'bookmark' } ] } with testtools.ExpectedException(api_samples_test_base.NoMatch): self.ast._compare_result( expected=template_data, result=response_data, 
result_str="Test") def test_list_no_match(self): """check 2 matching lists""" template_data = { u'things': [ { u'foo': u'bar', u'baz': 0 }, { u'foo': u'zod', u'baz': 1 } ] } response_data = { u'things': [ { u'foo': u'bar', u'baz': u'0' }, { u'foo': u'zod', u'baz': 1 } ] } # TODO(auggy): This error returns "extra list items" # it should show the item/s in the list that didn't match with testtools.ExpectedException(api_samples_test_base.NoMatch): self.ast._compare_result( expected=template_data, result=response_data, result_str="Test") def test_none_match(self): """check that None matches""" sample_data = None response_data = None result = self.ast._compare_result( expected=sample_data, result=response_data, result_str="Test") # NOTE(auggy): _compare_result will not return a matched value in the # case of bare strings. If they don't match it will throw an exception, # otherwise it returns "None". self.assertEqual( expected=None, observed=result, message='Check _compare_result of None') def test_none_no_match(self): """check expected none and non-None response don't match""" sample_data = None response_data = u'bar' with testtools.ExpectedException(api_samples_test_base.NoMatch): self.ast._compare_result( expected=sample_data, result=response_data, result_str="Test") def test_none_result_no_match(self): """check result none and expected non-None response don't match""" sample_data = u'foo' response_data = None with testtools.ExpectedException(api_samples_test_base.NoMatch): self.ast._compare_result( expected=sample_data, result=response_data, result_str="Test") def test_template_no_subs_key(self): """check an int value of a template int throws exception""" template_data = u'%(foo)' response_data = 'bar' with testtools.ExpectedException(KeyError): self.ast._compare_result( expected=template_data, result=response_data, result_str="Test") nova-17.0.1/nova/tests/unit/test_baserpc.py0000666000175000017500000000303713250073126020705 0ustar zuulzuul00000000000000# # Copyright 2013 - Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # """ Test the base rpc API. 
""" from nova import baserpc from nova.compute import rpcapi as compute_rpcapi import nova.conf from nova import context from nova import test CONF = nova.conf.CONF class BaseAPITestCase(test.TestCase): def setUp(self): super(BaseAPITestCase, self).setUp() self.user_id = 'fake' self.project_id = 'fake' self.context = context.RequestContext(self.user_id, self.project_id) self.compute = self.start_service('compute') self.base_rpcapi = baserpc.BaseAPI(compute_rpcapi.RPC_TOPIC) def test_ping(self): res = self.base_rpcapi.ping(self.context, 'foo') self.assertEqual({'service': 'compute', 'arg': 'foo'}, res) def test_get_backdoor_port(self): res = self.base_rpcapi.get_backdoor_port(self.context, self.compute.host) self.assertEqual(self.compute.backdoor_port, res) nova-17.0.1/nova/tests/unit/test_versions.py0000666000175000017500000000411013250073126021127 0ustar zuulzuul00000000000000# Copyright 2011 Ken Pepple # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_config import cfg import six from six.moves import builtins from nova import test from nova import version class VersionTestCase(test.NoDBTestCase): """Test cases for Versions code.""" def test_version_string_with_package_is_good(self): """Ensure uninstalled code get version string.""" self.stub_out('nova.version.version_info.version_string', lambda: '5.5.5.5') self.stub_out('nova.version.NOVA_PACKAGE', 'g9ec3421') self.assertEqual("5.5.5.5-g9ec3421", version.version_string_with_package()) def test_release_file(self): version.loaded = False real_open = builtins.open real_find_file = cfg.CONF.find_file def fake_find_file(self, name): if name == "release": return "/etc/nova/release" return real_find_file(self, name) def fake_open(path, *args, **kwargs): if path == "/etc/nova/release": data = """[Nova] vendor = ACME Corporation product = ACME Nova package = 1337""" return six.StringIO(data) return real_open(path, *args, **kwargs) self.stub_out('six.moves.builtins.open', fake_open) self.stub_out('oslo_config.cfg.ConfigOpts.find_file', fake_find_file) self.assertEqual(version.vendor_string(), "ACME Corporation") self.assertEqual(version.product_string(), "ACME Nova") self.assertEqual(version.package_string(), "1337") nova-17.0.1/nova/tests/unit/fake_diagnostics.py0000666000175000017500000000246613250073126021531 0ustar zuulzuul00000000000000# Copyright (c) 2017 Mirantis Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from nova import objects def fake_diagnostics_obj(**updates): diag = objects.Diagnostics() cpu_details = updates.pop('cpu_details', []) nic_details = updates.pop('nic_details', []) disk_details = updates.pop('disk_details', []) memory_details = updates.pop('memory_details', {}) for field in objects.Diagnostics.fields: if field in updates: setattr(diag, field, updates[field]) for cpu in cpu_details: diag.add_cpu(**cpu) for nic in nic_details: diag.add_nic(**nic) for disk in disk_details: diag.add_disk(**disk) for k, v in memory_details.items(): setattr(diag.memory_details, k, v) return diag nova-17.0.1/nova/tests/unit/test_context.py0000666000175000017500000005131613250073136020756 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from oslo_context import context as o_context from oslo_context import fixture as o_fixture from nova import context from nova import exception from nova import objects from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests import uuidsentinel as uuids class ContextTestCase(test.NoDBTestCase): # NOTE(danms): Avoid any cells setup by claiming we will # do things ourselves. USES_DB_SELF = True def setUp(self): super(ContextTestCase, self).setUp() self.useFixture(o_fixture.ClearRequestContext()) def test_request_context_elevated(self): user_ctxt = context.RequestContext('111', '222', is_admin=False) self.assertFalse(user_ctxt.is_admin) admin_ctxt = user_ctxt.elevated() self.assertTrue(admin_ctxt.is_admin) self.assertIn('admin', admin_ctxt.roles) self.assertFalse(user_ctxt.is_admin) self.assertNotIn('admin', user_ctxt.roles) def test_request_context_sets_is_admin(self): ctxt = context.RequestContext('111', '222', roles=['admin', 'weasel']) self.assertTrue(ctxt.is_admin) def test_request_context_sets_is_admin_by_role(self): ctxt = context.RequestContext('111', '222', roles=['administrator']) self.assertTrue(ctxt.is_admin) def test_request_context_sets_is_admin_upcase(self): ctxt = context.RequestContext('111', '222', roles=['Admin', 'weasel']) self.assertTrue(ctxt.is_admin) def test_request_context_read_deleted(self): ctxt = context.RequestContext('111', '222', read_deleted='yes') self.assertEqual('yes', ctxt.read_deleted) ctxt.read_deleted = 'no' self.assertEqual('no', ctxt.read_deleted) def test_request_context_read_deleted_invalid(self): self.assertRaises(ValueError, context.RequestContext, '111', '222', read_deleted=True) ctxt = context.RequestContext('111', '222') self.assertRaises(ValueError, setattr, ctxt, 'read_deleted', True) def test_service_catalog_default(self): ctxt = context.RequestContext('111', '222') self.assertEqual([], ctxt.service_catalog) ctxt = context.RequestContext('111', '222', service_catalog=[]) self.assertEqual([], ctxt.service_catalog) ctxt = context.RequestContext('111', '222', service_catalog=None) self.assertEqual([], ctxt.service_catalog) def test_service_catalog_filter(self): service_catalog = [ {u'type': u'compute', u'name': u'nova'}, 
{u'type': u's3', u'name': u's3'}, {u'type': u'image', u'name': u'glance'}, {u'type': u'volumev3', u'name': u'cinderv3'}, {u'type': u'network', u'name': u'neutron'}, {u'type': u'ec2', u'name': u'ec2'}, {u'type': u'object-store', u'name': u'swift'}, {u'type': u'identity', u'name': u'keystone'}, {u'type': u'block-storage', u'name': u'cinder'}, {u'type': None, u'name': u'S_withouttype'}, {u'type': u'vo', u'name': u'S_partofvolume'}] volume_catalog = [{u'type': u'image', u'name': u'glance'}, {u'type': u'volumev3', u'name': u'cinderv3'}, {u'type': u'network', u'name': u'neutron'}, {u'type': u'block-storage', u'name': u'cinder'}] ctxt = context.RequestContext('111', '222', service_catalog=service_catalog) self.assertEqual(volume_catalog, ctxt.service_catalog) def test_to_dict_from_dict_no_log(self): warns = [] def stub_warn(msg, *a, **kw): if (a and len(a) == 1 and isinstance(a[0], dict) and a[0]): a = a[0] warns.append(str(msg) % a) self.stub_out('nova.context.LOG.warning', stub_warn) ctxt = context.RequestContext('111', '222', roles=['admin', 'weasel']) context.RequestContext.from_dict(ctxt.to_dict()) self.assertEqual(0, len(warns), warns) def test_store_when_no_overwrite(self): # If no context exists we store one even if overwrite is false # (since we are not overwriting anything). ctx = context.RequestContext('111', '222', overwrite=False) self.assertIs(o_context.get_current(), ctx) def test_no_overwrite(self): # If there is already a context in the cache a new one will # not overwrite it if overwrite=False. ctx1 = context.RequestContext('111', '222', overwrite=True) context.RequestContext('333', '444', overwrite=False) self.assertIs(o_context.get_current(), ctx1) def test_get_context_no_overwrite(self): # If there is already a context in the cache creating another context # should not overwrite it. ctx1 = context.RequestContext('111', '222', overwrite=True) context.get_context() self.assertIs(ctx1, o_context.get_current()) def test_admin_no_overwrite(self): # If there is already a context in the cache creating an admin # context will not overwrite it. 
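        # NOTE: this presumably relies on get_admin_context() creating its
        # RequestContext with overwrite=False, mirroring the overwrite
        # semantics exercised by test_no_overwrite() above.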
ctx1 = context.RequestContext('111', '222', overwrite=True) context.get_admin_context() self.assertIs(o_context.get_current(), ctx1) def test_convert_from_rc_to_dict(self): ctx = context.RequestContext( 111, 222, request_id='req-679033b7-1755-4929-bf85-eb3bfaef7e0b', timestamp='2015-03-02T22:31:56.641629') values2 = ctx.to_dict() expected_values = {'auth_token': None, 'domain': None, 'instance_lock_checked': False, 'is_admin': False, 'is_admin_project': True, 'project_id': 222, 'project_domain': None, 'project_name': None, 'quota_class': None, 'read_deleted': 'no', 'read_only': False, 'remote_address': None, 'request_id': 'req-679033b7-1755-4929-bf85-eb3bfaef7e0b', 'resource_uuid': None, 'roles': [], 'service_catalog': [], 'show_deleted': False, 'tenant': 222, 'timestamp': '2015-03-02T22:31:56.641629', 'user': 111, 'user_domain': None, 'user_id': 111, 'user_identity': '111 222 - - -', 'user_name': None} for k, v in expected_values.items(): self.assertIn(k, values2) self.assertEqual(values2[k], v) @mock.patch.object(context.policy, 'authorize') def test_can(self, mock_authorize): mock_authorize.return_value = True ctxt = context.RequestContext('111', '222') result = ctxt.can(mock.sentinel.rule) self.assertTrue(result) mock_authorize.assert_called_once_with( ctxt, mock.sentinel.rule, {'project_id': ctxt.project_id, 'user_id': ctxt.user_id}) @mock.patch.object(context.policy, 'authorize') def test_can_fatal(self, mock_authorize): mock_authorize.side_effect = exception.Forbidden ctxt = context.RequestContext('111', '222') self.assertRaises(exception.Forbidden, ctxt.can, mock.sentinel.rule) @mock.patch.object(context.policy, 'authorize') def test_can_non_fatal(self, mock_authorize): mock_authorize.side_effect = exception.Forbidden ctxt = context.RequestContext('111', '222') result = ctxt.can(mock.sentinel.rule, mock.sentinel.target, fatal=False) self.assertFalse(result) mock_authorize.assert_called_once_with(ctxt, mock.sentinel.rule, mock.sentinel.target) @mock.patch('nova.rpc.create_transport') @mock.patch('nova.db.create_context_manager') def test_target_cell(self, mock_create_ctxt_mgr, mock_rpc): mock_create_ctxt_mgr.return_value = mock.sentinel.cdb mock_rpc.return_value = mock.sentinel.cmq ctxt = context.RequestContext('111', '222', roles=['admin', 'weasel']) # Verify the existing db_connection, if any, is restored ctxt.db_connection = mock.sentinel.db_conn ctxt.mq_connection = mock.sentinel.mq_conn mapping = objects.CellMapping(database_connection='fake://', transport_url='fake://', uuid=uuids.cell) with context.target_cell(ctxt, mapping) as cctxt: self.assertEqual(cctxt.db_connection, mock.sentinel.cdb) self.assertEqual(cctxt.mq_connection, mock.sentinel.cmq) self.assertEqual(mock.sentinel.db_conn, ctxt.db_connection) self.assertEqual(mock.sentinel.mq_conn, ctxt.mq_connection) @mock.patch('nova.rpc.create_transport') @mock.patch('nova.db.create_context_manager') def test_target_cell_unset(self, mock_create_ctxt_mgr, mock_rpc): """Tests that passing None as the mapping will temporarily untarget any previously set cell context. 
""" mock_create_ctxt_mgr.return_value = mock.sentinel.cdb mock_rpc.return_value = mock.sentinel.cmq ctxt = context.RequestContext('111', '222', roles=['admin', 'weasel']) ctxt.db_connection = mock.sentinel.db_conn ctxt.mq_connection = mock.sentinel.mq_conn with context.target_cell(ctxt, None) as cctxt: self.assertIsNone(cctxt.db_connection) self.assertIsNone(cctxt.mq_connection) self.assertEqual(mock.sentinel.db_conn, ctxt.db_connection) self.assertEqual(mock.sentinel.mq_conn, ctxt.mq_connection) @mock.patch('nova.context.set_target_cell') def test_target_cell_regenerates(self, mock_set): ctxt = context.RequestContext('fake', 'fake') # Set a non-tracked property on the context to make sure it # does not make it to the targeted one (like a copy would do) ctxt.sentinel = mock.sentinel.parent with context.target_cell(ctxt, mock.sentinel.cm) as cctxt: # Should be a different object self.assertIsNot(cctxt, ctxt) # Should not have inherited the non-tracked property self.assertFalse(hasattr(cctxt, 'sentinel'), 'Targeted context was copied from original') # Set another non-tracked property cctxt.sentinel = mock.sentinel.child # Make sure we didn't pollute the original context self.assertNotEqual(ctxt.sentinel, mock.sentinel.child) def test_get_context(self): ctxt = context.get_context() self.assertIsNone(ctxt.user_id) self.assertIsNone(ctxt.project_id) self.assertFalse(ctxt.is_admin) @mock.patch('nova.rpc.create_transport') @mock.patch('nova.db.create_context_manager') def test_target_cell_caching(self, mock_create_cm, mock_create_tport): mock_create_cm.return_value = mock.sentinel.db_conn_obj mock_create_tport.return_value = mock.sentinel.mq_conn_obj ctxt = context.get_context() mapping = objects.CellMapping(database_connection='fake://db', transport_url='fake://mq', uuid=uuids.cell) # First call should create new connection objects. with context.target_cell(ctxt, mapping) as cctxt: self.assertEqual(mock.sentinel.db_conn_obj, cctxt.db_connection) self.assertEqual(mock.sentinel.mq_conn_obj, cctxt.mq_connection) mock_create_cm.assert_called_once_with('fake://db') mock_create_tport.assert_called_once_with('fake://mq') # Second call should use cached objects. mock_create_cm.reset_mock() mock_create_tport.reset_mock() with context.target_cell(ctxt, mapping) as cctxt: self.assertEqual(mock.sentinel.db_conn_obj, cctxt.db_connection) self.assertEqual(mock.sentinel.mq_conn_obj, cctxt.mq_connection) mock_create_cm.assert_not_called() mock_create_tport.assert_not_called() @mock.patch('nova.context.target_cell') @mock.patch('nova.objects.InstanceList.get_by_filters') def test_scatter_gather_cells(self, mock_get_inst, mock_target_cell): ctxt = context.get_context() mapping = objects.CellMapping(database_connection='fake://db', transport_url='fake://mq', uuid=uuids.cell) mappings = objects.CellMappingList(objects=[mapping]) # Use a mock manager to assert call order across mocks. manager = mock.Mock() manager.attach_mock(mock_get_inst, 'get_inst') manager.attach_mock(mock_target_cell, 'target_cell') filters = {'deleted': False} context.scatter_gather_cells( ctxt, mappings, 60, objects.InstanceList.get_by_filters, filters, sort_dir='foo') # NOTE(melwitt): This only works without the SpawnIsSynchronous fixture # because when the spawn is treated as synchronous and the thread # function is called immediately, it will occur inside the target_cell # context manager scope when it wouldn't with a real spawn. # Assert that InstanceList.get_by_filters was called before the # target_cell context manager exited. 
get_inst_call = mock.call.get_inst( mock_target_cell.return_value.__enter__.return_value, filters, sort_dir='foo') expected_calls = [get_inst_call, mock.call.target_cell().__exit__(None, None, None)] manager.assert_has_calls(expected_calls) @mock.patch('nova.context.LOG.warning') @mock.patch('eventlet.timeout.Timeout') @mock.patch('eventlet.queue.LightQueue.get') @mock.patch('nova.objects.InstanceList.get_by_filters') def test_scatter_gather_cells_timeout(self, mock_get_inst, mock_get_result, mock_timeout, mock_log_warning): # This is needed because we're mocking get_by_filters. self.useFixture(nova_fixtures.SpawnIsSynchronousFixture()) ctxt = context.get_context() mapping0 = objects.CellMapping(database_connection='fake://db0', transport_url='none:///', uuid=objects.CellMapping.CELL0_UUID) mapping1 = objects.CellMapping(database_connection='fake://db1', transport_url='fake://mq1', uuid=uuids.cell1) mappings = objects.CellMappingList(objects=[mapping0, mapping1]) # Simulate cell1 not responding. mock_get_result.side_effect = [(mapping0.uuid, mock.sentinel.instances), exception.CellTimeout()] results = context.scatter_gather_cells( ctxt, mappings, 30, objects.InstanceList.get_by_filters) self.assertEqual(2, len(results)) self.assertIn(mock.sentinel.instances, results.values()) self.assertIn(context.did_not_respond_sentinel, results.values()) mock_timeout.assert_called_once_with(30, exception.CellTimeout) self.assertTrue(mock_log_warning.called) @mock.patch('nova.context.LOG.exception') @mock.patch('nova.objects.InstanceList.get_by_filters') def test_scatter_gather_cells_exception(self, mock_get_inst, mock_log_exception): # This is needed because we're mocking get_by_filters. self.useFixture(nova_fixtures.SpawnIsSynchronousFixture()) ctxt = context.get_context() mapping0 = objects.CellMapping(database_connection='fake://db0', transport_url='none:///', uuid=objects.CellMapping.CELL0_UUID) mapping1 = objects.CellMapping(database_connection='fake://db1', transport_url='fake://mq1', uuid=uuids.cell1) mappings = objects.CellMappingList(objects=[mapping0, mapping1]) # Simulate cell1 raising an exception. 
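        # scatter_gather_cells() is expected to catch the per-cell
        # exception and surface it as raised_exception_sentinel in the
        # results dict (asserted below) rather than letting it propagate
        # to the caller.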
mock_get_inst.side_effect = [mock.sentinel.instances, test.TestingException()] results = context.scatter_gather_cells( ctxt, mappings, 30, objects.InstanceList.get_by_filters) self.assertEqual(2, len(results)) self.assertIn(mock.sentinel.instances, results.values()) self.assertIn(context.raised_exception_sentinel, results.values()) self.assertTrue(mock_log_exception.called) @mock.patch('nova.context.scatter_gather_cells') @mock.patch('nova.objects.CellMappingList.get_all') def test_scatter_gather_all_cells(self, mock_get_all, mock_scatter): ctxt = context.get_context() mapping0 = objects.CellMapping(database_connection='fake://db0', transport_url='none:///', uuid=objects.CellMapping.CELL0_UUID) mapping1 = objects.CellMapping(database_connection='fake://db1', transport_url='fake://mq1', uuid=uuids.cell1) mock_get_all.return_value = objects.CellMappingList( objects=[mapping0, mapping1]) filters = {'deleted': False} context.scatter_gather_all_cells( ctxt, objects.InstanceList.get_by_filters, filters, sort_dir='foo') mock_scatter.assert_called_once_with( ctxt, mock_get_all.return_value, 60, objects.InstanceList.get_by_filters, filters, sort_dir='foo') @mock.patch('nova.context.scatter_gather_cells') @mock.patch('nova.objects.CellMappingList.get_all') def test_scatter_gather_skip_cell0(self, mock_get_all, mock_scatter): ctxt = context.get_context() mapping0 = objects.CellMapping(database_connection='fake://db0', transport_url='none:///', uuid=objects.CellMapping.CELL0_UUID) mapping1 = objects.CellMapping(database_connection='fake://db1', transport_url='fake://mq1', uuid=uuids.cell1) mock_get_all.return_value = objects.CellMappingList( objects=[mapping0, mapping1]) filters = {'deleted': False} context.scatter_gather_skip_cell0( ctxt, objects.InstanceList.get_by_filters, filters, sort_dir='foo') mock_scatter.assert_called_once_with( ctxt, [mapping1], 60, objects.InstanceList.get_by_filters, filters, sort_dir='foo') nova-17.0.1/nova/tests/unit/test_rpc.py0000666000175000017500000004665313250073126020065 0ustar zuulzuul00000000000000# Copyright 2016 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
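# NOTE: the RPCResetFixture defined below snapshots nova.rpc's module
# globals in _setUp() and restores them via addCleanup(), so each test
# can freely reassign rpc.TRANSPORT and friends. A minimal sketch of the
# same save/restore fixture pattern, with hypothetical names for
# illustration only:
#
#     class GlobalResetFixture(fixtures.Fixture):
#         def _setUp(self):
#             self._saved = copy.copy(some_module.GLOBAL)
#             self.addCleanup(self._restore)
#
#         def _restore(self):
#             some_module.GLOBAL = self._saved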
import copy import fixtures import mock import oslo_messaging as messaging from oslo_messaging.rpc import dispatcher from oslo_serialization import jsonutils from nova import context from nova import rpc from nova import test # Make a class that resets all of the global variables in nova.rpc class RPCResetFixture(fixtures.Fixture): def _setUp(self): self.trans = copy.copy(rpc.TRANSPORT) self.noti_trans = copy.copy(rpc.NOTIFICATION_TRANSPORT) self.noti = copy.copy(rpc.NOTIFIER) self.all_mods = copy.copy(rpc.ALLOWED_EXMODS) self.ext_mods = copy.copy(rpc.EXTRA_EXMODS) self.conf = copy.copy(rpc.CONF) self.addCleanup(self._reset_everything) def _reset_everything(self): rpc.TRANSPORT = self.trans rpc.NOTIFICATION_TRANSPORT = self.noti_trans rpc.NOTIFIER = self.noti rpc.ALLOWED_EXMODS = self.all_mods rpc.EXTRA_EXMODS = self.ext_mods rpc.CONF = self.conf class TestRPC(test.NoDBTestCase): # We're testing the rpc code so we can't use the RPCFixture. STUB_RPC = False def setUp(self): super(TestRPC, self).setUp() self.useFixture(RPCResetFixture()) @mock.patch.object(rpc, 'get_allowed_exmods') @mock.patch.object(rpc, 'RequestContextSerializer') @mock.patch.object(messaging, 'get_notification_transport') @mock.patch.object(messaging, 'Notifier') def test_init_unversioned(self, mock_notif, mock_noti_trans, mock_ser, mock_exmods): # The expected call to get the legacy notifier will require no new # kwargs, and we expect the new notifier will need the noop driver expected = [{}, {'driver': 'noop'}] self._test_init(mock_notif, mock_noti_trans, mock_ser, mock_exmods, 'unversioned', expected) @mock.patch.object(rpc, 'get_allowed_exmods') @mock.patch.object(rpc, 'RequestContextSerializer') @mock.patch.object(messaging, 'get_notification_transport') @mock.patch.object(messaging, 'Notifier') def test_init_both(self, mock_notif, mock_noti_trans, mock_ser, mock_exmods): expected = [{}, {'topics': ['versioned_notifications']}] self._test_init(mock_notif, mock_noti_trans, mock_ser, mock_exmods, 'both', expected) @mock.patch.object(rpc, 'get_allowed_exmods') @mock.patch.object(rpc, 'RequestContextSerializer') @mock.patch.object(messaging, 'get_notification_transport') @mock.patch.object(messaging, 'Notifier') def test_init_versioned(self, mock_notif, mock_noti_trans, mock_ser, mock_exmods): expected = [{'driver': 'noop'}, {'topics': ['versioned_notifications']}] self._test_init(mock_notif, mock_noti_trans, mock_ser, mock_exmods, 'versioned', expected) @mock.patch.object(rpc, 'get_allowed_exmods') @mock.patch.object(rpc, 'RequestContextSerializer') @mock.patch.object(messaging, 'get_notification_transport') @mock.patch.object(messaging, 'Notifier') def test_init_versioned_with_custom_topics(self, mock_notif, mock_noti_trans, mock_ser, mock_exmods): expected = [{'driver': 'noop'}, {'topics': ['custom_topic1', 'custom_topic2']}] self._test_init( mock_notif, mock_noti_trans, mock_ser, mock_exmods, 'versioned', expected, versioned_notification_topics=['custom_topic1', 'custom_topic2']) def test_cleanup_transport_null(self): rpc.NOTIFICATION_TRANSPORT = mock.Mock() rpc.LEGACY_NOTIFIER = mock.Mock() rpc.NOTIFIER = mock.Mock() self.assertRaises(AssertionError, rpc.cleanup) def test_cleanup_notification_transport_null(self): rpc.TRANSPORT = mock.Mock() rpc.NOTIFIER = mock.Mock() self.assertRaises(AssertionError, rpc.cleanup) def test_cleanup_legacy_notifier_null(self): rpc.TRANSPORT = mock.Mock() rpc.NOTIFICATION_TRANSPORT = mock.Mock() rpc.NOTIFIER = mock.Mock() # LEGACY_NOTIFIER is left unset here, so cleanup's sanity checks # should fail just as in the sibling *_null tests. self.assertRaises(AssertionError, rpc.cleanup) def test_cleanup_notifier_null(self): rpc.TRANSPORT =
mock.Mock() rpc.LEGACY_NOTIFIER = mock.Mock() rpc.NOTIFICATION_TRANSPORT = mock.Mock() self.assertRaises(AssertionError, rpc.cleanup) def test_cleanup(self): rpc.LEGACY_NOTIFIER = mock.Mock() rpc.NOTIFIER = mock.Mock() rpc.NOTIFICATION_TRANSPORT = mock.Mock() rpc.TRANSPORT = mock.Mock() trans_cleanup = mock.Mock() not_trans_cleanup = mock.Mock() rpc.TRANSPORT.cleanup = trans_cleanup rpc.NOTIFICATION_TRANSPORT.cleanup = not_trans_cleanup rpc.cleanup() trans_cleanup.assert_called_once_with() not_trans_cleanup.assert_called_once_with() self.assertIsNone(rpc.TRANSPORT) self.assertIsNone(rpc.NOTIFICATION_TRANSPORT) self.assertIsNone(rpc.LEGACY_NOTIFIER) self.assertIsNone(rpc.NOTIFIER) @mock.patch.object(messaging, 'set_transport_defaults') def test_set_defaults(self, mock_set): control_exchange = mock.Mock() rpc.set_defaults(control_exchange) mock_set.assert_called_once_with(control_exchange) def test_add_extra_exmods(self): rpc.EXTRA_EXMODS = [] rpc.add_extra_exmods('foo', 'bar') self.assertEqual(['foo', 'bar'], rpc.EXTRA_EXMODS) def test_clear_extra_exmods(self): rpc.EXTRA_EXMODS = ['foo', 'bar'] rpc.clear_extra_exmods() self.assertEqual(0, len(rpc.EXTRA_EXMODS)) def test_get_allowed_exmods(self): rpc.ALLOWED_EXMODS = ['foo'] rpc.EXTRA_EXMODS = ['bar'] exmods = rpc.get_allowed_exmods() self.assertEqual(['foo', 'bar'], exmods) @mock.patch.object(messaging, 'TransportURL') def test_get_transport_url(self, mock_url): conf = mock.Mock() rpc.CONF = conf mock_url.parse.return_value = 'foo' url = rpc.get_transport_url(url_str='bar') self.assertEqual('foo', url) mock_url.parse.assert_called_once_with(conf, 'bar') @mock.patch.object(messaging, 'TransportURL') def test_get_transport_url_null(self, mock_url): conf = mock.Mock() rpc.CONF = conf mock_url.parse.return_value = 'foo' url = rpc.get_transport_url() self.assertEqual('foo', url) mock_url.parse.assert_called_once_with(conf, None) @mock.patch.object(rpc, 'profiler', None) @mock.patch.object(rpc, 'RequestContextSerializer') @mock.patch.object(messaging, 'RPCClient') def test_get_client(self, mock_client, mock_ser): rpc.TRANSPORT = mock.Mock() tgt = mock.Mock() ser = mock.Mock() mock_client.return_value = 'client' mock_ser.return_value = ser client = rpc.get_client(tgt, version_cap='1.0', serializer='foo') mock_ser.assert_called_once_with('foo') mock_client.assert_called_once_with(rpc.TRANSPORT, tgt, version_cap='1.0', serializer=ser) self.assertEqual('client', client) @mock.patch.object(rpc, 'profiler', None) @mock.patch.object(rpc, 'RequestContextSerializer') @mock.patch.object(messaging, 'get_rpc_server') def test_get_server(self, mock_get, mock_ser): rpc.TRANSPORT = mock.Mock() ser = mock.Mock() tgt = mock.Mock() ends = mock.Mock() mock_ser.return_value = ser mock_get.return_value = 'server' server = rpc.get_server(tgt, ends, serializer='foo') mock_ser.assert_called_once_with('foo') access_policy = dispatcher.DefaultRPCAccessPolicy mock_get.assert_called_once_with(rpc.TRANSPORT, tgt, ends, executor='eventlet', serializer=ser, access_policy=access_policy) self.assertEqual('server', server) @mock.patch.object(rpc, 'profiler', mock.Mock()) @mock.patch.object(rpc, 'ProfilerRequestContextSerializer') @mock.patch.object(messaging, 'RPCClient') def test_get_client_profiler_enabled(self, mock_client, mock_ser): rpc.TRANSPORT = mock.Mock() tgt = mock.Mock() ser = mock.Mock() mock_client.return_value = 'client' mock_ser.return_value = ser client = rpc.get_client(tgt, version_cap='1.0', serializer='foo') mock_ser.assert_called_once_with('foo') 
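        # With the profiler active, get_client() is expected to build its
        # serializer via ProfilerRequestContextSerializer (mocked above)
        # rather than the plain RequestContextSerializer used in
        # test_get_client().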
mock_client.assert_called_once_with(rpc.TRANSPORT, tgt, version_cap='1.0', serializer=ser) self.assertEqual('client', client) @mock.patch.object(rpc, 'profiler', mock.Mock()) @mock.patch.object(rpc, 'ProfilerRequestContextSerializer') @mock.patch.object(messaging, 'get_rpc_server') def test_get_server_profiler_enabled(self, mock_get, mock_ser): rpc.TRANSPORT = mock.Mock() ser = mock.Mock() tgt = mock.Mock() ends = mock.Mock() mock_ser.return_value = ser mock_get.return_value = 'server' server = rpc.get_server(tgt, ends, serializer='foo') mock_ser.assert_called_once_with('foo') access_policy = dispatcher.DefaultRPCAccessPolicy mock_get.assert_called_once_with(rpc.TRANSPORT, tgt, ends, executor='eventlet', serializer=ser, access_policy=access_policy) self.assertEqual('server', server) def test_get_notifier(self): rpc.LEGACY_NOTIFIER = mock.Mock() mock_prep = mock.Mock() mock_prep.return_value = 'notifier' rpc.LEGACY_NOTIFIER.prepare = mock_prep notifier = rpc.get_notifier('service', publisher_id='foo') mock_prep.assert_called_once_with(publisher_id='foo') self.assertIsInstance(notifier, rpc.LegacyValidatingNotifier) self.assertEqual('notifier', notifier.notifier) def test_get_notifier_null_publisher(self): rpc.LEGACY_NOTIFIER = mock.Mock() mock_prep = mock.Mock() mock_prep.return_value = 'notifier' rpc.LEGACY_NOTIFIER.prepare = mock_prep notifier = rpc.get_notifier('service', host='bar') mock_prep.assert_called_once_with(publisher_id='service.bar') self.assertIsInstance(notifier, rpc.LegacyValidatingNotifier) self.assertEqual('notifier', notifier.notifier) def test_get_versioned_notifier(self): rpc.NOTIFIER = mock.Mock() mock_prep = mock.Mock() mock_prep.return_value = 'notifier' rpc.NOTIFIER.prepare = mock_prep notifier = rpc.get_versioned_notifier('service.foo') mock_prep.assert_called_once_with(publisher_id='service.foo') self.assertEqual('notifier', notifier) @mock.patch.object(rpc, 'get_allowed_exmods') @mock.patch.object(messaging, 'get_rpc_transport') def test_create_transport(self, mock_transport, mock_exmods): exmods = mock_exmods.return_value transport = rpc.create_transport(mock.sentinel.url) self.assertEqual(mock_transport.return_value, transport) mock_exmods.assert_called_once_with() mock_transport.assert_called_once_with(rpc.CONF, url=mock.sentinel.url, allowed_remote_exmods=exmods) def _test_init(self, mock_notif, mock_noti_trans, mock_ser, mock_exmods, notif_format, expected_driver_topic_kwargs, versioned_notification_topics=['versioned_notifications']): legacy_notifier = mock.Mock() notifier = mock.Mock() notif_transport = mock.Mock() transport = mock.Mock() serializer = mock.Mock() conf = mock.Mock() conf.transport_url = None conf.notifications.notification_format = notif_format conf.notifications.versioned_notifications_topics = ( versioned_notification_topics) mock_exmods.return_value = ['foo'] mock_noti_trans.return_value = notif_transport mock_ser.return_value = serializer mock_notif.side_effect = [legacy_notifier, notifier] @mock.patch.object(rpc, 'CONF', new=conf) @mock.patch.object(rpc, 'create_transport') @mock.patch.object(rpc, 'get_transport_url') def _test(get_url, create_transport): create_transport.return_value = transport rpc.init(conf) create_transport.assert_called_once_with(get_url.return_value) _test() self.assertTrue(mock_exmods.called) self.assertIsNotNone(rpc.TRANSPORT) self.assertIsNotNone(rpc.LEGACY_NOTIFIER) self.assertIsNotNone(rpc.NOTIFIER) self.assertEqual(legacy_notifier, rpc.LEGACY_NOTIFIER) self.assertEqual(notifier, rpc.NOTIFIER) 
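        # Each entry in expected_driver_topic_kwargs describes one
        # messaging.Notifier() construction: the legacy notifier first,
        # then the versioned one. A 'driver': 'noop' entry disables that
        # notifier, while 'topics' routes versioned notifications (see the
        # test_init_* cases above for the per-format combinations).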
expected_calls = [] for kwargs in expected_driver_topic_kwargs: expected_kwargs = {'serializer': serializer} expected_kwargs.update(kwargs) expected_calls.append(((notif_transport,), expected_kwargs)) self.assertEqual(expected_calls, mock_notif.call_args_list, "The calls to messaging.Notifier() did not create " "the legacy and versioned notifiers properly.") class TestJsonPayloadSerializer(test.NoDBTestCase): def test_serialize_entity(self): with mock.patch.object(jsonutils, 'to_primitive') as mock_prim: rpc.JsonPayloadSerializer.serialize_entity('context', 'entity') mock_prim.assert_called_once_with('entity', convert_instances=True) class TestRequestContextSerializer(test.NoDBTestCase): def setUp(self): super(TestRequestContextSerializer, self).setUp() self.mock_base = mock.Mock() self.ser = rpc.RequestContextSerializer(self.mock_base) self.ser_null = rpc.RequestContextSerializer(None) def test_serialize_entity(self): self.mock_base.serialize_entity.return_value = 'foo' ser_ent = self.ser.serialize_entity('context', 'entity') self.mock_base.serialize_entity.assert_called_once_with('context', 'entity') self.assertEqual('foo', ser_ent) def test_serialize_entity_null_base(self): ser_ent = self.ser_null.serialize_entity('context', 'entity') self.assertEqual('entity', ser_ent) def test_deserialize_entity(self): self.mock_base.deserialize_entity.return_value = 'foo' deser_ent = self.ser.deserialize_entity('context', 'entity') self.mock_base.deserialize_entity.assert_called_once_with('context', 'entity') self.assertEqual('foo', deser_ent) def test_deserialize_entity_null_base(self): deser_ent = self.ser_null.deserialize_entity('context', 'entity') self.assertEqual('entity', deser_ent) def test_serialize_context(self): context = mock.Mock() self.ser.serialize_context(context) context.to_dict.assert_called_once_with() @mock.patch.object(context, 'RequestContext') def test_deserialize_context(self, mock_req): self.ser.deserialize_context('context') mock_req.from_dict.assert_called_once_with('context') class TestProfilerRequestContextSerializer(test.NoDBTestCase): def setUp(self): super(TestProfilerRequestContextSerializer, self).setUp() self.ser = rpc.ProfilerRequestContextSerializer(mock.Mock()) @mock.patch('nova.rpc.profiler') def test_serialize_context(self, mock_profiler): prof = mock_profiler.get.return_value prof.hmac_key = 'swordfish' prof.get_base_id.return_value = 'baseid' prof.get_id.return_value = 'parentid' context = mock.Mock() context.to_dict.return_value = {'project_id': 'test'} self.assertEqual({'project_id': 'test', 'trace_info': { 'hmac_key': 'swordfish', 'base_id': 'baseid', 'parent_id': 'parentid'}}, self.ser.serialize_context(context)) @mock.patch('nova.rpc.profiler') def test_deserialize_context(self, mock_profiler): serialized = {'project_id': 'test', 'trace_info': { 'hmac_key': 'swordfish', 'base_id': 'baseid', 'parent_id': 'parentid'}} context = self.ser.deserialize_context(serialized) self.assertEqual('test', context.project_id) mock_profiler.init.assert_called_once_with( hmac_key='swordfish', base_id='baseid', parent_id='parentid') class TestClientRouter(test.NoDBTestCase): @mock.patch('oslo_messaging.RPCClient') def test_by_instance(self, mock_rpcclient): default_client = mock.Mock() cell_client = mock.Mock() mock_rpcclient.return_value = cell_client ctxt = mock.Mock() ctxt.mq_connection = mock.sentinel.transport router = rpc.ClientRouter(default_client) client = router.client(ctxt) # verify a client was created by ClientRouter mock_rpcclient.assert_called_once_with( 
mock.sentinel.transport, default_client.target, version_cap=default_client.version_cap, serializer=default_client.serializer) # verify cell client was returned self.assertEqual(cell_client, client) @mock.patch('oslo_messaging.RPCClient') def test_by_instance_untargeted(self, mock_rpcclient): default_client = mock.Mock() cell_client = mock.Mock() mock_rpcclient.return_value = cell_client ctxt = mock.Mock() ctxt.mq_connection = None router = rpc.ClientRouter(default_client) client = router.client(ctxt) self.assertEqual(router.default_client, client) self.assertFalse(mock_rpcclient.called) class TestIsNotificationsEnabledDecorator(test.NoDBTestCase): def setUp(self): super(TestIsNotificationsEnabledDecorator, self).setUp() self.f = mock.Mock() self.f.__name__ = 'f' self.decorated = rpc.if_notifications_enabled(self.f) def test_call_func_if_needed(self): self.decorated() self.f.assert_called_once_with() @mock.patch('nova.rpc.NOTIFIER.is_enabled', return_value=False) def test_not_call_func_if_notifier_disabled(self, mock_is_enabled): self.decorated() self.assertEqual(0, len(self.f.mock_calls)) def test_not_call_func_if_only_unversioned_notifications_requested(self): self.flags(notification_format='unversioned', group='notifications') self.decorated() self.assertEqual(0, len(self.f.mock_calls)) nova-17.0.1/nova/tests/unit/test_metadata.py0000666000175000017500000020645713250073126021061 0ustar zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
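# NOTE: the tests below exercise lookups against the versioned metadata
# trees served by the metadata service, e.g. (paths taken from the
# tests themselves):
#
#     /openstack/<version>/meta_data.json
#     /openstack/latest/user_data
#     /2009-04-04/meta-data/       (the EC2-style tree)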
"""Tests for metadata service.""" import copy import hashlib import hmac import os import re try: import cPickle as pickle except ImportError: import pickle from keystoneauth1 import exceptions as ks_exceptions from keystoneauth1 import session import mock from oslo_config import cfg from oslo_serialization import base64 from oslo_serialization import jsonutils from oslo_utils import encodeutils import requests import six import webob from nova.api.metadata import base from nova.api.metadata import handler from nova.api.metadata import password from nova.api.metadata import vendordata_dynamic from nova import block_device from nova.compute import flavors from nova import context from nova import exception from nova.network import model as network_model from nova.network.neutronv2 import api as neutronapi from nova import objects from nova.objects import virt_device_metadata as metadata_obj from nova import test from nova.tests.unit.api.openstack import fakes from nova.tests.unit import fake_block_device from nova.tests.unit import fake_network from nova.tests.unit import test_identity from nova.tests import uuidsentinel as uuids from nova import utils from nova.virt import netutils CONF = cfg.CONF USER_DATA_STRING = (b"This is an encoded string") ENCODE_USER_DATA_STRING = base64.encode_as_text(USER_DATA_STRING) FAKE_SEED = '7qtD24mpMR2' def fake_inst_obj(context): inst = objects.Instance( context=context, id=1, user_id='fake_user', uuid='b65cee2f-8c69-4aeb-be2f-f79742548fc2', project_id='test', key_name="key", key_data="ssh-rsa AAAAB3Nzai....N3NtHw== someuser@somehost", host='test', launch_index=1, reservation_id='r-xxxxxxxx', user_data=ENCODE_USER_DATA_STRING, image_ref=uuids.image_ref, kernel_id=None, ramdisk_id=None, vcpus=1, fixed_ips=[], root_device_name='/dev/sda1', hostname='test.novadomain', display_name='my_displayname', metadata={}, device_metadata=fake_metadata_objects(), default_ephemeral_device=None, default_swap_device=None, system_metadata={}, security_groups=objects.SecurityGroupList(), availability_zone='fake-az') inst.keypairs = objects.KeyPairList(objects=[ fake_keypair_obj(inst.key_name, inst.key_data)]) nwinfo = network_model.NetworkInfo([]) inst.info_cache = objects.InstanceInfoCache(context=context, instance_uuid=inst.uuid, network_info=nwinfo) inst.flavor = flavors.get_default_flavor() return inst def fake_keypair_obj(name, data): return objects.KeyPair(name=name, type='fake_type', public_key=data) def fake_InstanceMetadata(testcase, inst_data, address=None, sgroups=None, content=None, extra_md=None, network_info=None, network_metadata=None): content = content or [] extra_md = extra_md or {} if sgroups is None: sgroups = [{'name': 'default'}] fakes.stub_out_secgroup_api(testcase, security_groups=sgroups) return base.InstanceMetadata(inst_data, address=address, content=content, extra_md=extra_md, network_info=network_info, network_metadata=network_metadata) def fake_request(testcase, mdinst, relpath, address="127.0.0.1", fake_get_metadata=None, headers=None, fake_get_metadata_by_instance_id=None, app=None): def get_metadata_by_remote_address(self, address): return mdinst if app is None: app = handler.MetadataRequestHandler() if fake_get_metadata is None: fake_get_metadata = get_metadata_by_remote_address if testcase: testcase.stub_out( '%(module)s.%(class)s.get_metadata_by_remote_address' % {'module': app.__module__, 'class': app.__class__.__name__}, fake_get_metadata) if fake_get_metadata_by_instance_id: testcase.stub_out( 
'%(module)s.%(class)s.get_metadata_by_instance_id' % {'module': app.__module__, 'class': app.__class__.__name__}, fake_get_metadata_by_instance_id) request = webob.Request.blank(relpath) request.remote_addr = address if headers is not None: request.headers.update(headers) response = request.get_response(app) return response def fake_metadata_objects(): nic_obj = metadata_obj.NetworkInterfaceMetadata( bus=metadata_obj.PCIDeviceBus(address='0000:00:01.0'), mac='00:00:00:00:00:00', tags=['foo'] ) nic_vlans_obj = metadata_obj.NetworkInterfaceMetadata( bus=metadata_obj.PCIDeviceBus(address='0000:80:01.0'), mac='e3:a0:d0:12:c5:10', vlan=1000, ) ide_disk_obj = metadata_obj.DiskMetadata( bus=metadata_obj.IDEDeviceBus(address='0:0'), serial='disk-vol-2352423', path='/dev/sda', tags=['baz'], ) scsi_disk_obj = metadata_obj.DiskMetadata( bus=metadata_obj.SCSIDeviceBus(address='05c8:021e:04a7:011b'), serial='disk-vol-2352423', path='/dev/sda', tags=['baz'], ) usb_disk_obj = metadata_obj.DiskMetadata( bus=metadata_obj.USBDeviceBus(address='05c8:021e'), serial='disk-vol-2352423', path='/dev/sda', tags=['baz'], ) fake_device_obj = metadata_obj.DeviceMetadata() device_with_fake_bus_obj = metadata_obj.NetworkInterfaceMetadata( bus=metadata_obj.DeviceBus(), mac='00:00:00:00:00:00', tags=['foo'] ) mdlist = metadata_obj.InstanceDeviceMetadata( instance_uuid='b65cee2f-8c69-4aeb-be2f-f79742548fc2', devices=[nic_obj, ide_disk_obj, scsi_disk_obj, usb_disk_obj, fake_device_obj, device_with_fake_bus_obj, nic_vlans_obj]) return mdlist def fake_metadata_dicts(include_vlan=False): nic_meta = { 'type': 'nic', 'bus': 'pci', 'address': '0000:00:01.0', 'mac': '00:00:00:00:00:00', 'tags': ['foo'], } vlan_nic_meta = { 'type': 'nic', 'bus': 'pci', 'address': '0000:80:01.0', 'mac': 'e3:a0:d0:12:c5:10', 'vlan': 1000, } ide_disk_meta = { 'type': 'disk', 'bus': 'ide', 'address': '0:0', 'serial': 'disk-vol-2352423', 'path': '/dev/sda', 'tags': ['baz'], } scsi_disk_meta = copy.copy(ide_disk_meta) scsi_disk_meta['bus'] = 'scsi' scsi_disk_meta['address'] = '05c8:021e:04a7:011b' usb_disk_meta = copy.copy(ide_disk_meta) usb_disk_meta['bus'] = 'usb' usb_disk_meta['address'] = '05c8:021e' dicts = [nic_meta, ide_disk_meta, scsi_disk_meta, usb_disk_meta] if include_vlan: dicts += [vlan_nic_meta] return dicts class MetadataTestCase(test.TestCase): def setUp(self): super(MetadataTestCase, self).setUp() self.context = context.RequestContext('fake', 'fake') self.instance = fake_inst_obj(self.context) self.keypair = fake_keypair_obj(self.instance.key_name, self.instance.key_data) fake_network.stub_out_nw_api_get_instance_nw_info(self) fakes.stub_out_secgroup_api(self) def test_can_pickle_metadata(self): # Make sure that InstanceMetadata is possible to pickle. This is # required for memcache backend to work correctly. 
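        # Protocol 0 (used below) is the oldest, most restrictive pickle
        # protocol, so a successful dump is a conservative smoke test that
        # nothing unpicklable is hanging off the InstanceMetadata object.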
md = fake_InstanceMetadata(self, self.instance.obj_clone()) pickle.dumps(md, protocol=0) def test_user_data(self): inst = self.instance.obj_clone() inst['user_data'] = base64.encode_as_text("happy") md = fake_InstanceMetadata(self, inst) self.assertEqual( md.get_ec2_metadata(version='2009-04-04')['user-data'], b"happy") def test_no_user_data(self): inst = self.instance.obj_clone() inst.user_data = None md = fake_InstanceMetadata(self, inst) obj = object() self.assertEqual( md.get_ec2_metadata(version='2009-04-04').get('user-data', obj), obj) def _test_security_groups(self): inst = self.instance.obj_clone() sgroups = [{'name': name} for name in ('default', 'other')] expected = ['default', 'other'] md = fake_InstanceMetadata(self, inst, sgroups=sgroups) data = md.get_ec2_metadata(version='2009-04-04') self.assertEqual(data['meta-data']['security-groups'], expected) def test_security_groups(self): self._test_security_groups() def test_neutron_security_groups(self): self.flags(use_neutron=True) self._test_security_groups() def test_local_hostname_fqdn(self): md = fake_InstanceMetadata(self, self.instance.obj_clone()) data = md.get_ec2_metadata(version='2009-04-04') self.assertEqual(data['meta-data']['local-hostname'], "%s.%s" % (self.instance['hostname'], CONF.dhcp_domain)) def test_format_instance_mapping(self): # Make sure that _format_instance_mappings works. instance_ref0 = objects.Instance(**{'id': 0, 'uuid': 'e5fe5518-0288-4fa3-b0c4-c79764101b85', 'root_device_name': None, 'default_ephemeral_device': None, 'default_swap_device': None}) instance_ref1 = objects.Instance(**{'id': 0, 'uuid': 'b65cee2f-8c69-4aeb-be2f-f79742548fc2', 'root_device_name': '/dev/sda1', 'default_ephemeral_device': None, 'default_swap_device': None}) def fake_bdm_get(ctxt, uuid): return [fake_block_device.FakeDbBlockDeviceDict( {'volume_id': 87654321, 'snapshot_id': None, 'no_device': None, 'source_type': 'volume', 'destination_type': 'volume', 'delete_on_termination': True, 'device_name': '/dev/sdh'}), fake_block_device.FakeDbBlockDeviceDict( {'volume_id': None, 'snapshot_id': None, 'no_device': None, 'source_type': 'blank', 'destination_type': 'local', 'guest_format': 'swap', 'delete_on_termination': None, 'device_name': '/dev/sdc'}), fake_block_device.FakeDbBlockDeviceDict( {'volume_id': None, 'snapshot_id': None, 'no_device': None, 'source_type': 'blank', 'destination_type': 'local', 'guest_format': None, 'delete_on_termination': None, 'device_name': '/dev/sdb'})] self.stub_out('nova.db.block_device_mapping_get_all_by_instance', fake_bdm_get) expected = {'ami': 'sda1', 'root': '/dev/sda1', 'ephemeral0': '/dev/sdb', 'swap': '/dev/sdc', 'ebs0': '/dev/sdh'} self.assertEqual(base._format_instance_mapping(self.context, instance_ref0), block_device._DEFAULT_MAPPINGS) self.assertEqual(base._format_instance_mapping(self.context, instance_ref1), expected) def test_pubkey(self): md = fake_InstanceMetadata(self, self.instance.obj_clone()) pubkey_ent = md.lookup("/2009-04-04/meta-data/public-keys") self.assertEqual(base.ec2_md_print(pubkey_ent), "0=%s" % self.instance['key_name']) self.assertEqual(base.ec2_md_print(pubkey_ent['0']['openssh-key']), self.instance['key_data']) def test_image_type_ramdisk(self): inst = self.instance.obj_clone() inst['ramdisk_id'] = uuids.ramdisk_id md = fake_InstanceMetadata(self, inst) data = md.lookup("/latest/meta-data/ramdisk-id") self.assertIsNotNone(data) self.assertTrue(re.match('ari-[0-9a-f]{8}', data)) def test_image_type_kernel(self): inst = self.instance.obj_clone() inst['kernel_id'] 
= uuids.kernel_id md = fake_InstanceMetadata(self, inst) data = md.lookup("/2009-04-04/meta-data/kernel-id") self.assertTrue(re.match('aki-[0-9a-f]{8}', data)) self.assertEqual( md.lookup("/ec2/2009-04-04/meta-data/kernel-id"), data) def test_image_type_no_kernel_raises(self): inst = self.instance.obj_clone() md = fake_InstanceMetadata(self, inst) self.assertRaises(base.InvalidMetadataPath, md.lookup, "/2009-04-04/meta-data/kernel-id") def test_instance_is_sanitized(self): inst = self.instance.obj_clone() # The instance already has some fake device_metadata stored on it, # and we want to test to see it gets lazy-loaded, so save off the # original attribute value and delete the attribute from the instance, # then we can assert it gets loaded up later. original_device_meta = inst.device_metadata delattr(inst, 'device_metadata') def fake_obj_load_attr(attrname): if attrname == 'device_metadata': inst.device_metadata = original_device_meta elif attrname == 'ec2_ids': inst.ec2_ids = objects.EC2Ids() else: self.fail('Unexpected instance lazy-load: %s' % attrname) inst._will_not_pass = True with mock.patch.object( inst, 'obj_load_attr', side_effect=fake_obj_load_attr) as mock_obj_load_attr: md = fake_InstanceMetadata(self, inst) self.assertFalse(hasattr(md.instance, '_will_not_pass')) self.assertEqual(2, mock_obj_load_attr.call_count) mock_obj_load_attr.assert_has_calls( [mock.call('device_metadata'), mock.call('ec2_ids')], any_order=True) self.assertIs(original_device_meta, inst.device_metadata) def test_check_version(self): inst = self.instance.obj_clone() md = fake_InstanceMetadata(self, inst) self.assertTrue(md._check_version('1.0', '2009-04-04')) self.assertFalse(md._check_version('2009-04-04', '1.0')) self.assertFalse(md._check_version('2009-04-04', '2008-09-01')) self.assertTrue(md._check_version('2008-09-01', '2009-04-04')) self.assertTrue(md._check_version('2009-04-04', '2009-04-04')) @mock.patch('nova.virt.netutils.get_injected_network_template') def test_InstanceMetadata_uses_passed_network_info(self, mock_get): network_info = [] mock_get.return_value = False base.InstanceMetadata(fake_inst_obj(self.context), network_info=network_info) mock_get.assert_called_once_with(network_info) @mock.patch.object(netutils, "get_network_metadata", autospec=True) def test_InstanceMetadata_gets_network_metadata(self, mock_netutils): network_data = {'links': [], 'networks': [], 'services': []} mock_netutils.return_value = network_data md = base.InstanceMetadata(fake_inst_obj(self.context)) self.assertEqual(network_data, md.network_metadata) def test_InstanceMetadata_invoke_metadata_for_config_drive(self): fakes.stub_out_key_pair_funcs(self) inst = self.instance.obj_clone() inst_md = base.InstanceMetadata(inst) expected_paths = [ 'ec2/2009-04-04/user-data', 'ec2/2009-04-04/meta-data.json', 'ec2/latest/user-data', 'ec2/latest/meta-data.json', 'openstack/2012-08-10/meta_data.json', 'openstack/2012-08-10/user_data', 'openstack/2013-04-04/meta_data.json', 'openstack/2013-04-04/user_data', 'openstack/2013-10-17/meta_data.json', 'openstack/2013-10-17/user_data', 'openstack/2013-10-17/vendor_data.json', 'openstack/2015-10-15/meta_data.json', 'openstack/2015-10-15/user_data', 'openstack/2015-10-15/vendor_data.json', 'openstack/2015-10-15/network_data.json', 'openstack/2016-06-30/meta_data.json', 'openstack/2016-06-30/user_data', 'openstack/2016-06-30/vendor_data.json', 'openstack/2016-06-30/network_data.json', 'openstack/2016-10-06/meta_data.json', 'openstack/2016-10-06/user_data', 
'openstack/2016-10-06/vendor_data.json', 'openstack/2016-10-06/network_data.json', 'openstack/2016-10-06/vendor_data2.json', 'openstack/2017-02-22/meta_data.json', 'openstack/2017-02-22/user_data', 'openstack/2017-02-22/vendor_data.json', 'openstack/2017-02-22/network_data.json', 'openstack/2017-02-22/vendor_data2.json', 'openstack/latest/meta_data.json', 'openstack/latest/user_data', 'openstack/latest/vendor_data.json', 'openstack/latest/network_data.json', 'openstack/latest/vendor_data2.json', ] actual_paths = [] for (path, value) in inst_md.metadata_for_config_drive(): actual_paths.append(path) self.assertIsNotNone(path) self.assertEqual(expected_paths, actual_paths) @mock.patch('nova.virt.netutils.get_injected_network_template') def test_InstanceMetadata_queries_network_API_when_needed(self, mock_get): network_info_from_api = [] mock_get.return_value = False base.InstanceMetadata(fake_inst_obj(self.context)) mock_get.assert_called_once_with(network_info_from_api) def test_local_ipv4(self): nw_info = fake_network.fake_get_instance_nw_info(self, num_networks=2) expected_local = "192.168.1.100" md = fake_InstanceMetadata(self, self.instance, network_info=nw_info, address="fake") data = md.get_ec2_metadata(version='2009-04-04') self.assertEqual(expected_local, data['meta-data']['local-ipv4']) def test_local_ipv4_from_nw_info(self): nw_info = fake_network.fake_get_instance_nw_info(self, num_networks=2) expected_local = "192.168.1.100" md = fake_InstanceMetadata(self, self.instance, network_info=nw_info) data = md.get_ec2_metadata(version='2009-04-04') self.assertEqual(data['meta-data']['local-ipv4'], expected_local) def test_local_ipv4_from_address(self): expected_local = "fake" md = fake_InstanceMetadata(self, self.instance, network_info=[], address="fake") data = md.get_ec2_metadata(version='2009-04-04') self.assertEqual(data['meta-data']['local-ipv4'], expected_local) @mock.patch('oslo_serialization.base64.encode_as_text', return_value=FAKE_SEED) @mock.patch('nova.cells.rpcapi.CellsAPI.get_keypair_at_top') @mock.patch.object(jsonutils, 'dump_as_bytes') def _test_as_json_with_options(self, mock_json_dump_as_bytes, mock_cells_keypair, mock_base64, is_cells=False, os_version=base.GRIZZLY): if is_cells: self.flags(enable=True, group='cells') self.flags(cell_type='compute', group='cells') instance = self.instance keypair = self.keypair md = fake_InstanceMetadata(self, instance) expected_metadata = { 'uuid': md.uuid, 'hostname': md._get_hostname(), 'name': instance.display_name, 'launch_index': instance.launch_index, 'availability_zone': md.availability_zone, } if md.launch_metadata: expected_metadata['meta'] = md.launch_metadata if md.files: expected_metadata['files'] = md.files if md.extra_md: expected_metadata['extra_md'] = md.extra_md if md.network_config: expected_metadata['network_config'] = md.network_config if instance.key_name: expected_metadata['public_keys'] = { keypair.name: keypair.public_key } expected_metadata['keys'] = [{'type': keypair.type, 'data': keypair.public_key, 'name': keypair.name}] if md._check_os_version(base.GRIZZLY, os_version): expected_metadata['random_seed'] = FAKE_SEED if md._check_os_version(base.LIBERTY, os_version): expected_metadata['project_id'] = instance.project_id if md._check_os_version(base.NEWTON_ONE, os_version): expose_vlan = md._check_os_version(base.OCATA, os_version) expected_metadata['devices'] = fake_metadata_dicts(expose_vlan) mock_cells_keypair.return_value = keypair md._metadata_as_json(os_version, 'non useless path parameter') if 
instance.key_name: if is_cells: mock_cells_keypair.assert_called_once_with(mock.ANY, instance.user_id, instance.key_name) self.assertIsInstance(mock_cells_keypair.call_args[0][0], context.RequestContext) self.assertEqual(md.md_mimetype, base.MIME_TYPE_APPLICATION_JSON) mock_json_dump_as_bytes.assert_called_once_with(expected_metadata) def test_as_json(self): for os_version in base.OPENSTACK_VERSIONS: self._test_as_json_with_options(os_version=os_version) def test_as_json_with_cells_mode(self): for os_version in base.OPENSTACK_VERSIONS: self._test_as_json_with_options(is_cells=True, os_version=os_version) @mock.patch('nova.cells.rpcapi.CellsAPI.get_keypair_at_top', side_effect=exception.KeypairNotFound( name='key', user_id='fake_user')) @mock.patch.object(objects.Instance, 'get_by_uuid') def test_as_json_deleted_keypair_in_cells_mode(self, mock_get_keypair_at_top, mock_inst_get_by_uuid): self.flags(enable=True, group='cells') self.flags(cell_type='compute', group='cells') instance = self.instance.obj_clone() delattr(instance, 'keypairs') md = fake_InstanceMetadata(self, instance) meta = md._metadata_as_json(base.OPENSTACK_VERSIONS[-1], path=None) meta = jsonutils.loads(meta) self.assertNotIn('keys', meta) self.assertNotIn('public_keys', meta) @mock.patch.object(objects.Instance, 'get_by_uuid') def test_metadata_as_json_deleted_keypair(self, mock_inst_get_by_uuid): """Tests that we handle missing instance keypairs. """ instance = self.instance.obj_clone() # we want to make sure that key_name is set but not keypairs so it has # to be lazy-loaded from the database delattr(instance, 'keypairs') mock_inst_get_by_uuid.return_value = instance md = fake_InstanceMetadata(self, instance) meta = md._metadata_as_json(base.OPENSTACK_VERSIONS[-1], path=None) meta = jsonutils.loads(meta) self.assertNotIn('keys', meta) self.assertNotIn('public_keys', meta) class OpenStackMetadataTestCase(test.TestCase): def setUp(self): super(OpenStackMetadataTestCase, self).setUp() self.context = context.RequestContext('fake', 'fake') self.instance = fake_inst_obj(self.context) fake_network.stub_out_nw_api_get_instance_nw_info(self) def test_empty_device_metadata(self): fakes.stub_out_key_pair_funcs(self) inst = self.instance.obj_clone() inst.device_metadata = None mdinst = fake_InstanceMetadata(self, inst) mdjson = mdinst.lookup("/openstack/latest/meta_data.json") mddict = jsonutils.loads(mdjson) self.assertEqual([], mddict['devices']) def test_device_metadata(self): # Because we handle a list of devices, we have only one test and in it # include the various device types that we have to test, as well as a # couple of fake device types and bus types that should be silently # ignored fakes.stub_out_key_pair_funcs(self) inst = self.instance.obj_clone() mdinst = fake_InstanceMetadata(self, inst) mdjson = mdinst.lookup("/openstack/latest/meta_data.json") mddict = jsonutils.loads(mdjson) self.assertEqual(fake_metadata_dicts(True), mddict['devices']) def test_top_level_listing(self): # request for /openstack/<version>/ should show metadata.json inst = self.instance.obj_clone() mdinst = fake_InstanceMetadata(self, inst) result = mdinst.lookup("/openstack") # trailing / should not affect anything self.assertEqual(result, mdinst.lookup("/openstack/")) # the 'content' should not show up in directory listing self.assertNotIn(base.CONTENT_DIR, result) self.assertIn('2012-08-10', result) self.assertIn('latest', result) def test_version_content_listing(self): # request for /openstack/<version>/ should show metadata.json inst = self.instance.obj_clone()
mdinst = fake_InstanceMetadata(self, inst) listing = mdinst.lookup("/openstack/2012-08-10") self.assertIn("meta_data.json", listing) def test_returns_apis_supported_in_liberty_version(self): mdinst = fake_InstanceMetadata(self, self.instance) liberty_supported_apis = mdinst.lookup("/openstack/2015-10-15") self.assertEqual([base.MD_JSON_NAME, base.UD_NAME, base.PASS_NAME, base.VD_JSON_NAME, base.NW_JSON_NAME], liberty_supported_apis) def test_returns_apis_supported_in_havana_version(self): mdinst = fake_InstanceMetadata(self, self.instance) havana_supported_apis = mdinst.lookup("/openstack/2013-10-17") self.assertEqual([base.MD_JSON_NAME, base.UD_NAME, base.PASS_NAME, base.VD_JSON_NAME], havana_supported_apis) def test_returns_apis_supported_in_folsom_version(self): mdinst = fake_InstanceMetadata(self, self.instance) folsom_supported_apis = mdinst.lookup("/openstack/2012-08-10") self.assertEqual([base.MD_JSON_NAME, base.UD_NAME], folsom_supported_apis) def test_returns_apis_supported_in_grizzly_version(self): mdinst = fake_InstanceMetadata(self, self.instance) grizzly_supported_apis = mdinst.lookup("/openstack/2013-04-04") self.assertEqual([base.MD_JSON_NAME, base.UD_NAME, base.PASS_NAME], grizzly_supported_apis) def test_metadata_json(self): fakes.stub_out_key_pair_funcs(self) inst = self.instance.obj_clone() content = [ ('/etc/my.conf', "content of my.conf"), ('/root/hello', "content of /root/hello"), ] mdinst = fake_InstanceMetadata(self, inst, content=content) mdjson = mdinst.lookup("/openstack/2012-08-10/meta_data.json") mdjson = mdinst.lookup("/openstack/latest/meta_data.json") mddict = jsonutils.loads(mdjson) self.assertEqual(mddict['uuid'], self.instance['uuid']) self.assertIn('files', mddict) self.assertIn('public_keys', mddict) self.assertEqual(mddict['public_keys'][self.instance['key_name']], self.instance['key_data']) self.assertIn('launch_index', mddict) self.assertEqual(mddict['launch_index'], self.instance['launch_index']) # verify that each of the things we put in content # resulted in an entry in 'files', that their content # there is as expected, and that /content lists them. 
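        # Each entry in mddict['files'] carries a 'content_path' pointer
        # rather than inline data; the loop below resolves each pointer via
        # mdinst.lookup() and checks that the content round-trips.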
for (path, content) in content: fent = [f for f in mddict['files'] if f['path'] == path] self.assertEqual(1, len(fent)) fent = fent[0] found = mdinst.lookup("/openstack%s" % fent['content_path']) self.assertEqual(found, content) def test_x509_keypair(self): inst = self.instance.obj_clone() expected = {'name': self.instance['key_name'], 'type': 'x509', 'data': 'public_key'} inst.keypairs[0].name = expected['name'] inst.keypairs[0].type = expected['type'] inst.keypairs[0].public_key = expected['data'] mdinst = fake_InstanceMetadata(self, inst) mdjson = mdinst.lookup("/openstack/2012-08-10/meta_data.json") mddict = jsonutils.loads(mdjson) self.assertEqual([expected], mddict['keys']) def test_extra_md(self): # make sure extra_md makes it through to metadata fakes.stub_out_key_pair_funcs(self) inst = self.instance.obj_clone() extra = {'foo': 'bar', 'mylist': [1, 2, 3], 'mydict': {"one": 1, "two": 2}} mdinst = fake_InstanceMetadata(self, inst, extra_md=extra) mdjson = mdinst.lookup("/openstack/2012-08-10/meta_data.json") mddict = jsonutils.loads(mdjson) for key, val in extra.items(): self.assertEqual(mddict[key], val) def test_password(self): # make sure the password route is exposed and wired up to the # password handler inst = self.instance.obj_clone() mdinst = fake_InstanceMetadata(self, inst) result = mdinst.lookup("/openstack/latest/password") self.assertEqual(result, password.handle_password) def test_userdata(self): inst = self.instance.obj_clone() mdinst = fake_InstanceMetadata(self, inst) userdata_found = mdinst.lookup("/openstack/2012-08-10/user_data") self.assertEqual(USER_DATA_STRING, userdata_found) # since we had user-data in this instance, it should be in listing self.assertIn('user_data', mdinst.lookup("/openstack/2012-08-10")) inst.user_data = None mdinst = fake_InstanceMetadata(self, inst) # since this instance had no user-data it should not be there.
        self.assertNotIn('user_data', mdinst.lookup("/openstack/2012-08-10"))

        self.assertRaises(base.InvalidMetadataPath,
                          mdinst.lookup, "/openstack/2012-08-10/user_data")

    def test_random_seed(self):
        fakes.stub_out_key_pair_funcs(self)
        inst = self.instance.obj_clone()
        mdinst = fake_InstanceMetadata(self, inst)

        # verify that 2013-04-04 has the 'random' field
        mdjson = mdinst.lookup("/openstack/2013-04-04/meta_data.json")
        mddict = jsonutils.loads(mdjson)

        self.assertIn("random_seed", mddict)
        self.assertEqual(len(base64.decode_as_bytes(mddict["random_seed"])),
                         512)

        # verify that older versions do not have it
        mdjson = mdinst.lookup("/openstack/2012-08-10/meta_data.json")
        self.assertNotIn("random_seed", jsonutils.loads(mdjson))

    def test_project_id(self):
        fakes.stub_out_key_pair_funcs(self)
        mdinst = fake_InstanceMetadata(self, self.instance)

        # verify that 2015-10-15 has the 'project_id' field
        mdjson = mdinst.lookup("/openstack/2015-10-15/meta_data.json")
        mddict = jsonutils.loads(mdjson)

        self.assertIn("project_id", mddict)
        self.assertEqual(mddict["project_id"], self.instance.project_id)

        # verify that older versions do not have it
        mdjson = mdinst.lookup("/openstack/2013-10-17/meta_data.json")
        self.assertNotIn("project_id", jsonutils.loads(mdjson))

    def test_no_dashes_in_metadata(self):
        # top level entries in meta_data should not contain '-' in their name
        fakes.stub_out_key_pair_funcs(self)
        inst = self.instance.obj_clone()
        mdinst = fake_InstanceMetadata(self, inst)
        mdjson = jsonutils.loads(
            mdinst.lookup("/openstack/latest/meta_data.json"))

        self.assertEqual([], [k for k in mdjson.keys() if k.find("-") != -1])

    def test_vendor_data_presence(self):
        inst = self.instance.obj_clone()
        mdinst = fake_InstanceMetadata(self, inst)

        # verify that 2013-10-17 has the vendor_data.json file
        result = mdinst.lookup("/openstack/2013-10-17")
        self.assertIn('vendor_data.json', result)

        # verify that older versions do not have it
        result = mdinst.lookup("/openstack/2013-04-04")
        self.assertNotIn('vendor_data.json', result)

        # verify that 2016-10-06 has the vendor_data2.json file
        result = mdinst.lookup("/openstack/2016-10-06")
        self.assertIn('vendor_data2.json', result)

        # assert that we never created a ksa session for dynamic vendordata
        # if we didn't make a request
        self.assertIsNone(mdinst.vendordata_providers['DynamicJSON'].session)

    def _test_vendordata2_response_inner(self, request_mock, response_code,
                                         include_rest_result=True):
        fake_response = test_identity.FakeResponse(response_code)
        if include_rest_result:
            fake_response.content = '{"color": "blue"}'
        request_mock.return_value = fake_response

        with utils.tempdir() as tmpdir:
            jsonfile = os.path.join(tmpdir, 'test.json')
            with open(jsonfile, 'w') as f:
                f.write(jsonutils.dumps({'ldap': '10.0.0.1',
                                         'ad': '10.0.0.2'}))

            self.flags(vendordata_providers=['StaticJSON', 'DynamicJSON'],
                       vendordata_jsonfile_path=jsonfile,
                       vendordata_dynamic_targets=[
                           'web@http://fake.com/foobar'],
                       group='api')

            inst = self.instance.obj_clone()
            mdinst = fake_InstanceMetadata(self, inst)

            # verify that 2013-10-17 has the vendor_data.json file
            vdpath = "/openstack/2013-10-17/vendor_data.json"
            vd = jsonutils.loads(mdinst.lookup(vdpath))
            self.assertEqual('10.0.0.1', vd.get('ldap'))
            self.assertEqual('10.0.0.2', vd.get('ad'))

            # verify that 2016-10-06 works as well
            vdpath = "/openstack/2016-10-06/vendor_data.json"
            vd = jsonutils.loads(mdinst.lookup(vdpath))
            self.assertEqual('10.0.0.1', vd.get('ldap'))
            self.assertEqual('10.0.0.2', vd.get('ad'))

            # verify the new format as well
            vdpath = "/openstack/2016-10-06/vendor_data2.json"
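            # vendor_data2.json namespaces each provider: the StaticJSON
            # content lands under "static" and each DynamicJSON target under
            # the name it was given in vendordata_dynamic_targets (here,
            # "web").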
            with mock.patch(
                    'nova.api.metadata.vendordata_dynamic.LOG.warning') \
                    as wrn:
                vd = jsonutils.loads(mdinst.lookup(vdpath))

            # We don't have vendordata_dynamic_auth credentials configured
            # so we expect to see a warning logged about making an insecure
            # connection.
            warning_calls = wrn.call_args_list
            self.assertEqual(1, len(warning_calls))

            # Verify the warning message is the one we expect, which is the
            # first and only arg to the first and only call to the warning.
            self.assertIn('Passing insecure dynamic vendordata requests',
                          six.text_type(warning_calls[0][0]))

            self.assertEqual('10.0.0.1', vd['static'].get('ldap'))
            self.assertEqual('10.0.0.2', vd['static'].get('ad'))

            if include_rest_result:
                self.assertEqual('blue', vd['web'].get('color'))
            else:
                self.assertEqual({}, vd['web'])

    @mock.patch.object(session.Session, 'request')
    def test_vendor_data_response_vendordata2_ok(self, request_mock):
        self._test_vendordata2_response_inner(request_mock,
                                              requests.codes.OK)

    @mock.patch.object(session.Session, 'request')
    def test_vendor_data_response_vendordata2_created(self, request_mock):
        self._test_vendordata2_response_inner(request_mock,
                                              requests.codes.CREATED)

    @mock.patch.object(session.Session, 'request')
    def test_vendor_data_response_vendordata2_accepted(self, request_mock):
        self._test_vendordata2_response_inner(request_mock,
                                              requests.codes.ACCEPTED)

    @mock.patch.object(session.Session, 'request')
    def test_vendor_data_response_vendordata2_no_content(self, request_mock):
        # Make dynamic vendordata failures fatal, so an unhandled empty
        # response would fail this test.
        self.flags(vendordata_dynamic_failure_fatal=True, group='api')

        self._test_vendordata2_response_inner(request_mock,
                                              requests.codes.NO_CONTENT,
                                              include_rest_result=False)

    def _test_vendordata2_response_inner_exceptional(
            self, request_mock, log_mock, exc):
        request_mock.side_effect = exc('Ta da!')

        with utils.tempdir() as tmpdir:
            jsonfile = os.path.join(tmpdir, 'test.json')
            with open(jsonfile, 'w') as f:
                f.write(jsonutils.dumps({'ldap': '10.0.0.1',
                                         'ad': '10.0.0.2'}))

            self.flags(vendordata_providers=['StaticJSON', 'DynamicJSON'],
                       vendordata_jsonfile_path=jsonfile,
                       vendordata_dynamic_targets=[
                           'web@http://fake.com/foobar'],
                       group='api')

            inst = self.instance.obj_clone()
            mdinst = fake_InstanceMetadata(self, inst)

            # verify the new format as well
            vdpath = "/openstack/2016-10-06/vendor_data2.json"
            vd = jsonutils.loads(mdinst.lookup(vdpath))
            self.assertEqual('10.0.0.1', vd['static'].get('ldap'))
            self.assertEqual('10.0.0.2', vd['static'].get('ad'))

            # an exception should result in nothing being added, but no error
            self.assertEqual({}, vd['web'])
            self.assertTrue(log_mock.called)

    @mock.patch.object(vendordata_dynamic.LOG, 'warning')
    @mock.patch.object(session.Session, 'request')
    def test_vendor_data_response_vendordata2_type_error(self, request_mock,
                                                         log_mock):
        self._test_vendordata2_response_inner_exceptional(
            request_mock, log_mock, TypeError)

    @mock.patch.object(vendordata_dynamic.LOG, 'warning')
    @mock.patch.object(session.Session, 'request')
    def test_vendor_data_response_vendordata2_value_error(self, request_mock,
                                                          log_mock):
        self._test_vendordata2_response_inner_exceptional(
            request_mock, log_mock, ValueError)

    @mock.patch.object(vendordata_dynamic.LOG, 'warning')
    @mock.patch.object(session.Session, 'request')
    def test_vendor_data_response_vendordata2_request_error(self,
                                                            request_mock,
                                                            log_mock):
        self._test_vendordata2_response_inner_exceptional(
            request_mock, log_mock, ks_exceptions.BadRequest)

    @mock.patch.object(vendordata_dynamic.LOG, 'warning')
    @mock.patch.object(session.Session, 'request')
    def test_vendor_data_response_vendordata2_ssl_error(self, request_mock,
                                                        log_mock):
        self._test_vendordata2_response_inner_exceptional(
            request_mock, log_mock, ks_exceptions.SSLError)

    @mock.patch.object(vendordata_dynamic.LOG, 'warning')
    @mock.patch.object(session.Session, 'request')
    def test_vendor_data_response_vendordata2_ssl_error_fatal(self,
                                                              request_mock,
                                                              log_mock):
        self.flags(vendordata_dynamic_failure_fatal=True, group='api')
        self.assertRaises(ks_exceptions.SSLError,
                          self._test_vendordata2_response_inner_exceptional,
                          request_mock, log_mock, ks_exceptions.SSLError)

    def test_network_data_presence(self):
        inst = self.instance.obj_clone()
        mdinst = fake_InstanceMetadata(self, inst)

        # verify that 2015-10-15 has the network_data.json file
        result = mdinst.lookup("/openstack/2015-10-15")
        self.assertIn('network_data.json', result)

        # verify that older versions do not have it
        result = mdinst.lookup("/openstack/2013-10-17")
        self.assertNotIn('network_data.json', result)

    def test_network_data_response(self):
        inst = self.instance.obj_clone()

        nw_data = {
            "links": [{"ethernet_mac_address": "aa:aa:aa:aa:aa:aa",
                       "id": "nic0", "type": "ethernet", "vif_id": 1,
                       "mtu": 1500}],
            "networks": [{"id": "network0", "ip_address": "10.10.0.2",
                          "link": "nic0", "netmask": "255.255.255.0",
                          "network_id":
                              "00000000-0000-0000-0000-000000000000",
                          "routes": [], "type": "ipv4"}],
            "services": [{'address': '1.2.3.4', 'type': 'dns'}]}

        mdinst = fake_InstanceMetadata(self, inst,
                                       network_metadata=nw_data)

        # verify that 2015-10-15 has the network_data.json file
        nwpath = "/openstack/2015-10-15/network_data.json"
        nw = jsonutils.loads(mdinst.lookup(nwpath))

        # check the other expected values
        for k, v in nw_data.items():
            self.assertEqual(nw[k], v)


class MetadataHandlerTestCase(test.TestCase):
    """Test that metadata is returning proper values."""

    def setUp(self):
        super(MetadataHandlerTestCase, self).setUp()

        fake_network.stub_out_nw_api_get_instance_nw_info(self)
        self.context = context.RequestContext('fake', 'fake')
        self.instance = fake_inst_obj(self.context)
        self.mdinst = fake_InstanceMetadata(self, self.instance,
                                            address=None, sgroups=None)

    def test_callable(self):
        def verify(req, meta_data):
            self.assertIsInstance(meta_data, CallableMD)
            return "foo"

        class CallableMD(object):
            def lookup(self, path_info):
                return verify

        response = fake_request(self, CallableMD(), "/bar")
        self.assertEqual(response.status_int, 200)
        self.assertEqual(response.text, "foo")

    def test_root(self):
        expected = "\n".join(base.VERSIONS) + "\nlatest"
        response = fake_request(self, self.mdinst, "/")
        self.assertEqual(response.text, expected)

        response = fake_request(self, self.mdinst, "/foo/../")
        self.assertEqual(response.text, expected)

    def test_root_metadata_proxy_enabled(self):
        self.flags(service_metadata_proxy=True, group='neutron')

        expected = "\n".join(base.VERSIONS) + "\nlatest"
        response = fake_request(self, self.mdinst, "/")
        self.assertEqual(response.text, expected)

        response = fake_request(self, self.mdinst, "/foo/../")
        self.assertEqual(response.text, expected)

    def test_version_root(self):
        response = fake_request(self, self.mdinst, "/2009-04-04")
        response_ctype = response.headers['Content-Type']
        self.assertTrue(response_ctype.startswith("text/plain"))
        self.assertEqual(response.text, 'meta-data/\nuser-data')

        response = fake_request(self, self.mdinst, "/9999-99-99")
        self.assertEqual(response.status_int, 404)

    def test_json_data(self):
        fakes.stub_out_key_pair_funcs(self)
        response = fake_request(self, self.mdinst,
                                "/openstack/latest/meta_data.json")
        response_ctype = response.headers['Content-Type']
        self.assertTrue(response_ctype.startswith("application/json"))

        response = fake_request(self, self.mdinst,
                                "/openstack/latest/vendor_data.json")
        response_ctype = response.headers['Content-Type']
        self.assertTrue(response_ctype.startswith("application/json"))

    @mock.patch('nova.network.API')
    def test_user_data_non_existing_fixed_address(self, mock_network_api):
        mock_network_api.return_value.get_fixed_ip_by_address.side_effect = (
            exception.NotFound())
        response = fake_request(None, self.mdinst, "/2009-04-04/user-data",
                                "127.1.1.1")
        self.assertEqual(response.status_int, 404)

    def test_fixed_address_none(self):
        response = fake_request(None, self.mdinst,
                                relpath="/2009-04-04/user-data", address=None)
        self.assertEqual(response.status_int, 500)

    def test_invalid_path_is_404(self):
        response = fake_request(self, self.mdinst,
                                relpath="/2009-04-04/user-data-invalid")
        self.assertEqual(response.status_int, 404)

    def test_user_data_with_use_forwarded_header(self):
        expected_addr = "192.192.192.2"

        def fake_get_metadata(self_gm, address):
            if address == expected_addr:
                return self.mdinst
            else:
                raise Exception("Expected addr of %s, got %s" %
                                (expected_addr, address))

        self.flags(use_forwarded_for=True, group='api')
        response = fake_request(self, self.mdinst,
                                relpath="/2009-04-04/user-data",
                                address="168.168.168.1",
                                fake_get_metadata=fake_get_metadata,
                                headers={'X-Forwarded-For': expected_addr})

        self.assertEqual(response.status_int, 200)
        response_ctype = response.headers['Content-Type']
        self.assertTrue(response_ctype.startswith("text/plain"))
        self.assertEqual(response.body,
                         base64.decode_as_bytes(self.instance['user_data']))

        response = fake_request(self, self.mdinst,
                                relpath="/2009-04-04/user-data",
                                address="168.168.168.1",
                                fake_get_metadata=fake_get_metadata,
                                headers=None)
        self.assertEqual(response.status_int, 500)

    @mock.patch('oslo_utils.secretutils.constant_time_compare')
    def test_by_instance_id_uses_constant_time_compare(self, mock_compare):
        mock_compare.side_effect = test.TestingException

        req = webob.Request.blank('/')
        hnd = handler.MetadataRequestHandler()

        req.headers['X-Instance-ID'] = 'fake-inst'
        req.headers['X-Instance-ID-Signature'] = 'fake-sig'
        req.headers['X-Tenant-ID'] = 'fake-proj'

        self.assertRaises(test.TestingException,
                          hnd._handle_instance_id_request, req)
        self.assertEqual(1, mock_compare.call_count)

    def _fake_x_get_metadata(self, self_app, instance_id, remote_address):
        if remote_address is None:
            raise Exception('Expected X-Forwarded-For header')
        if encodeutils.to_utf8(instance_id) == self.expected_instance_id:
            return self.mdinst
        # raise the exception to aid with 500 response code test
        raise Exception("Expected instance_id of %r, got %r" %
                        (self.expected_instance_id, instance_id))

    def test_user_data_with_neutron_instance_id(self):
        self.expected_instance_id = b'a-b-c-d'

        signed = hmac.new(
            encodeutils.to_utf8(CONF.neutron.metadata_proxy_shared_secret),
            self.expected_instance_id,
            hashlib.sha256).hexdigest()

        # try a request with service disabled
        response = fake_request(
            self, self.mdinst,
            relpath="/2009-04-04/user-data",
            address="192.192.192.2",
            headers={'X-Instance-ID': 'a-b-c-d',
                     'X-Tenant-ID': 'test',
                     'X-Instance-ID-Signature': signed})
        self.assertEqual(response.status_int, 200)

        # now enable the service
        self.flags(service_metadata_proxy=True, group='neutron')
        response = fake_request(
            self, self.mdinst,
            relpath="/2009-04-04/user-data",
            address="192.192.192.2",
            fake_get_metadata_by_instance_id=self._fake_x_get_metadata,
            headers={'X-Forwarded-For': '192.192.192.2',
'X-Instance-ID': 'a-b-c-d', 'X-Tenant-ID': 'test', 'X-Instance-ID-Signature': signed}) self.assertEqual(response.status_int, 200) response_ctype = response.headers['Content-Type'] self.assertTrue(response_ctype.startswith("text/plain")) self.assertEqual(response.body, base64.decode_as_bytes(self.instance['user_data'])) # mismatched signature response = fake_request( self, self.mdinst, relpath="/2009-04-04/user-data", address="192.192.192.2", fake_get_metadata_by_instance_id=self._fake_x_get_metadata, headers={'X-Forwarded-For': '192.192.192.2', 'X-Instance-ID': 'a-b-c-d', 'X-Tenant-ID': 'test', 'X-Instance-ID-Signature': ''}) self.assertEqual(response.status_int, 403) # missing X-Tenant-ID from request response = fake_request( self, self.mdinst, relpath="/2009-04-04/user-data", address="192.192.192.2", fake_get_metadata_by_instance_id=self._fake_x_get_metadata, headers={'X-Forwarded-For': '192.192.192.2', 'X-Instance-ID': 'a-b-c-d', 'X-Instance-ID-Signature': signed}) self.assertEqual(response.status_int, 400) # mismatched X-Tenant-ID response = fake_request( self, self.mdinst, relpath="/2009-04-04/user-data", address="192.192.192.2", fake_get_metadata_by_instance_id=self._fake_x_get_metadata, headers={'X-Forwarded-For': '192.192.192.2', 'X-Instance-ID': 'a-b-c-d', 'X-Tenant-ID': 'FAKE', 'X-Instance-ID-Signature': signed}) self.assertEqual(response.status_int, 404) # without X-Forwarded-For response = fake_request( self, self.mdinst, relpath="/2009-04-04/user-data", address="192.192.192.2", fake_get_metadata_by_instance_id=self._fake_x_get_metadata, headers={'X-Instance-ID': 'a-b-c-d', 'X-Tenant-ID': 'test', 'X-Instance-ID-Signature': signed}) self.assertEqual(response.status_int, 500) # unexpected Instance-ID signed = hmac.new( encodeutils.to_utf8(CONF.neutron.metadata_proxy_shared_secret), b'z-z-z-z', hashlib.sha256).hexdigest() response = fake_request( self, self.mdinst, relpath="/2009-04-04/user-data", address="192.192.192.2", fake_get_metadata_by_instance_id=self._fake_x_get_metadata, headers={'X-Forwarded-For': '192.192.192.2', 'X-Instance-ID': 'z-z-z-z', 'X-Tenant-ID': 'test', 'X-Instance-ID-Signature': signed}) self.assertEqual(response.status_int, 500) def test_get_metadata(self): def _test_metadata_path(relpath): # recursively confirm a http 200 from all meta-data elements # available at relpath. 
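            # (entries ending in '/' are sub-directories and are recursed
            # into; 'public-keys' entries look like '0=keyname', so only the
            # index part before the '=' is used for the lookup.)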
response = fake_request(self, self.mdinst, relpath=relpath) for item in response.text.split('\n'): if 'public-keys' in relpath: # meta-data/public-keys/0=keyname refers to # meta-data/public-keys/0 item = item.split('=')[0] if item.endswith('/'): path = relpath + '/' + item _test_metadata_path(path) continue path = relpath + '/' + item response = fake_request(self, self.mdinst, relpath=path) self.assertEqual(response.status_int, 200, message=path) _test_metadata_path('/2009-04-04/meta-data') def _metadata_handler_with_instance_id(self, hnd): expected_instance_id = b'a-b-c-d' signed = hmac.new( encodeutils.to_utf8(CONF.neutron.metadata_proxy_shared_secret), expected_instance_id, hashlib.sha256).hexdigest() self.flags(service_metadata_proxy=True, group='neutron') response = fake_request( None, self.mdinst, relpath="/2009-04-04/user-data", address="192.192.192.2", fake_get_metadata=False, app=hnd, headers={'X-Forwarded-For': '192.192.192.2', 'X-Instance-ID': 'a-b-c-d', 'X-Tenant-ID': 'test', 'X-Instance-ID-Signature': signed}) self.assertEqual(200, response.status_int) self.assertEqual(base64.decode_as_bytes(self.instance['user_data']), response.body) @mock.patch.object(base, 'get_metadata_by_instance_id') def test_metadata_handler_with_instance_id(self, get_by_uuid): # test twice to ensure that the cache works get_by_uuid.return_value = self.mdinst self.flags(metadata_cache_expiration=15, group='api') hnd = handler.MetadataRequestHandler() self._metadata_handler_with_instance_id(hnd) self._metadata_handler_with_instance_id(hnd) self.assertEqual(1, get_by_uuid.call_count) @mock.patch.object(base, 'get_metadata_by_instance_id') def test_metadata_handler_with_instance_id_no_cache(self, get_by_uuid): # test twice to ensure that disabling the cache works get_by_uuid.return_value = self.mdinst self.flags(metadata_cache_expiration=0, group='api') hnd = handler.MetadataRequestHandler() self._metadata_handler_with_instance_id(hnd) self._metadata_handler_with_instance_id(hnd) self.assertEqual(2, get_by_uuid.call_count) def _metadata_handler_with_remote_address(self, hnd): response = fake_request( None, self.mdinst, fake_get_metadata=False, app=hnd, relpath="/2009-04-04/user-data", address="192.192.192.2") self.assertEqual(200, response.status_int) self.assertEqual(base64.decode_as_bytes(self.instance.user_data), response.body) @mock.patch.object(base, 'get_metadata_by_address') def test_metadata_handler_with_remote_address(self, get_by_uuid): # test twice to ensure that the cache works get_by_uuid.return_value = self.mdinst self.flags(metadata_cache_expiration=15, group='api') hnd = handler.MetadataRequestHandler() self._metadata_handler_with_remote_address(hnd) self._metadata_handler_with_remote_address(hnd) self.assertEqual(1, get_by_uuid.call_count) @mock.patch.object(base, 'get_metadata_by_address') def test_metadata_handler_with_remote_address_no_cache(self, get_by_uuid): # test twice to ensure that disabling the cache works get_by_uuid.return_value = self.mdinst self.flags(metadata_cache_expiration=0, group='api') hnd = handler.MetadataRequestHandler() self._metadata_handler_with_remote_address(hnd) self._metadata_handler_with_remote_address(hnd) self.assertEqual(2, get_by_uuid.call_count) @mock.patch.object(neutronapi, 'get_client', return_value=mock.Mock()) def test_metadata_lb_proxy(self, mock_get_client): self.flags(service_metadata_proxy=True, group='neutron') self.expected_instance_id = b'a-b-c-d' # with X-Metadata-Provider proxy_lb_id = 'edge-x' mock_client = mock_get_client() 
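        # The handler resolves the caller to an instance via neutron: the
        # port owning the fixed IP supplies device_id (the instance uuid)
        # and tenant_id, so stub out the port and subnet listings it makes.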
mock_client.list_ports.return_value = { 'ports': [{'device_id': 'a-b-c-d', 'tenant_id': 'test'}]} mock_client.list_subnets.return_value = { 'subnets': [{'network_id': 'f-f-f-f'}]} response = fake_request( self, self.mdinst, relpath="/2009-04-04/user-data", address="192.192.192.2", fake_get_metadata_by_instance_id=self._fake_x_get_metadata, headers={'X-Forwarded-For': '192.192.192.2', 'X-Metadata-Provider': proxy_lb_id}) self.assertEqual(200, response.status_int) @mock.patch.object(neutronapi, 'get_client', return_value=mock.Mock()) def test_metadata_lb_proxy_chain(self, mock_get_client): self.flags(service_metadata_proxy=True, group='neutron') self.expected_instance_id = b'a-b-c-d' # with X-Metadata-Provider proxy_lb_id = 'edge-x' def fake_list_ports(ctx, **kwargs): if kwargs.get('fixed_ips') == 'ip_address=192.192.192.2': return { 'ports': [{ 'device_id': 'a-b-c-d', 'tenant_id': 'test'}]} else: return {'ports': []} mock_client = mock_get_client() mock_client.list_ports.side_effect = fake_list_ports mock_client.list_subnets.return_value = { 'subnets': [{'network_id': 'f-f-f-f'}]} response = fake_request( self, self.mdinst, relpath="/2009-04-04/user-data", address="10.10.10.10", fake_get_metadata_by_instance_id=self._fake_x_get_metadata, headers={'X-Forwarded-For': '192.192.192.2, 10.10.10.10', 'X-Metadata-Provider': proxy_lb_id}) self.assertEqual(200, response.status_int) @mock.patch.object(neutronapi, 'get_client', return_value=mock.Mock()) def test_metadata_lb_proxy_signed(self, mock_get_client): shared_secret = "testing1234" self.flags( metadata_proxy_shared_secret=shared_secret, service_metadata_proxy=True, group='neutron') self.expected_instance_id = b'a-b-c-d' # with X-Metadata-Provider proxy_lb_id = 'edge-x' signature = hmac.new( encodeutils.to_utf8(shared_secret), encodeutils.to_utf8(proxy_lb_id), hashlib.sha256).hexdigest() mock_client = mock_get_client() mock_client.list_ports.return_value = { 'ports': [{'device_id': 'a-b-c-d', 'tenant_id': 'test'}]} mock_client.list_subnets.return_value = { 'subnets': [{'network_id': 'f-f-f-f'}]} response = fake_request( self, self.mdinst, relpath="/2009-04-04/user-data", address="192.192.192.2", fake_get_metadata_by_instance_id=self._fake_x_get_metadata, headers={'X-Forwarded-For': '192.192.192.2', 'X-Metadata-Provider': proxy_lb_id, 'X-Metadata-Provider-Signature': signature}) self.assertEqual(200, response.status_int) @mock.patch.object(neutronapi, 'get_client', return_value=mock.Mock()) def test_metadata_lb_proxy_signed_fail(self, mock_get_client): shared_secret = "testing1234" bad_secret = "testing3468" self.flags( metadata_proxy_shared_secret=shared_secret, service_metadata_proxy=True, group='neutron') self.expected_instance_id = b'a-b-c-d' # with X-Metadata-Provider proxy_lb_id = 'edge-x' signature = hmac.new( encodeutils.to_utf8(bad_secret), encodeutils.to_utf8(proxy_lb_id), hashlib.sha256).hexdigest() mock_client = mock_get_client() mock_client.list_ports.return_value = { 'ports': [{'device_id': 'a-b-c-d', 'tenant_id': 'test'}]} mock_client.list_subnets.return_value = { 'subnets': [{'network_id': 'f-f-f-f'}]} response = fake_request( self, self.mdinst, relpath="/2009-04-04/user-data", address="192.192.192.2", fake_get_metadata_by_instance_id=self._fake_x_get_metadata, headers={'X-Forwarded-For': '192.192.192.2', 'X-Metadata-Provider': proxy_lb_id, 'X-Metadata-Provider-Signature': signature}) self.assertEqual(403, response.status_int) @mock.patch.object(context, 'get_admin_context') @mock.patch('nova.network.API') def 
test_get_metadata_by_address(self, mock_net_api, mock_get_context): mock_get_context.return_value = 'CONTEXT' api = mock.Mock() fixed_ip = objects.FixedIP( instance_uuid='2bfd8d71-6b69-410c-a2f5-dbca18d02966') api.get_fixed_ip_by_address.return_value = fixed_ip mock_net_api.return_value = api with mock.patch.object(base, 'get_metadata_by_instance_id') as gmd: base.get_metadata_by_address('foo') api.get_fixed_ip_by_address.assert_called_once_with( 'CONTEXT', 'foo') gmd.assert_called_once_with(fixed_ip.instance_uuid, 'foo', 'CONTEXT') @mock.patch.object(context, 'get_admin_context') @mock.patch.object(objects.Instance, 'get_by_uuid') def test_get_metadata_by_instance_id(self, mock_uuid, mock_context): inst = objects.Instance() mock_uuid.return_value = inst ctxt = context.RequestContext() with mock.patch.object(base, 'InstanceMetadata') as imd: base.get_metadata_by_instance_id('foo', 'bar', ctxt=ctxt) self.assertFalse(mock_context.called, "get_admin_context() should not" "have been called, the context was given") mock_uuid.assert_called_once_with(ctxt, 'foo', expected_attrs=['ec2_ids', 'flavor', 'info_cache', 'metadata', 'system_metadata', 'security_groups', 'keypairs', 'device_metadata']) imd.assert_called_once_with(inst, 'bar') @mock.patch.object(context, 'get_admin_context') @mock.patch.object(objects.Instance, 'get_by_uuid') def test_get_metadata_by_instance_id_null_context(self, mock_uuid, mock_context): inst = objects.Instance() mock_uuid.return_value = inst mock_context.return_value = context.RequestContext() with mock.patch.object(base, 'InstanceMetadata') as imd: base.get_metadata_by_instance_id('foo', 'bar') mock_context.assert_called_once_with() mock_uuid.assert_called_once_with(mock_context.return_value, 'foo', expected_attrs=['ec2_ids', 'flavor', 'info_cache', 'metadata', 'system_metadata', 'security_groups', 'keypairs', 'device_metadata']) imd.assert_called_once_with(inst, 'bar') @mock.patch.object(objects.Instance, 'get_by_uuid') @mock.patch.object(objects.InstanceMapping, 'get_by_instance_uuid') def test_get_metadata_by_instance_id_with_cell_mapping(self, mock_get_im, mock_get_inst): ctxt = context.RequestContext() inst = objects.Instance() im = objects.InstanceMapping(cell_mapping=objects.CellMapping()) mock_get_inst.return_value = inst mock_get_im.return_value = im with mock.patch.object(base, 'InstanceMetadata') as imd: with mock.patch('nova.context.target_cell') as mock_tc: base.get_metadata_by_instance_id('foo', 'bar', ctxt=ctxt) mock_tc.assert_called_once_with(ctxt, im.cell_mapping) mock_get_im.assert_called_once_with(ctxt, 'foo') imd.assert_called_once_with(inst, 'bar') class MetadataPasswordTestCase(test.TestCase): def setUp(self): super(MetadataPasswordTestCase, self).setUp() fake_network.stub_out_nw_api_get_instance_nw_info(self) self.context = context.RequestContext('fake', 'fake') self.instance = fake_inst_obj(self.context) self.mdinst = fake_InstanceMetadata(self, self.instance, address=None, sgroups=None) def test_get_password(self): request = webob.Request.blank('') self.mdinst.password = 'foo' result = password.handle_password(request, self.mdinst) self.assertEqual(result, 'foo') @mock.patch.object(objects.InstanceMapping, 'get_by_instance_uuid', return_value=objects.InstanceMapping(cell_mapping=None)) @mock.patch.object(objects.Instance, 'get_by_uuid') def test_set_password_instance_not_found(self, get_by_uuid, get_mapping): """Tests that a 400 is returned if the instance can not be found.""" get_by_uuid.side_effect = exception.InstanceNotFound( 
            instance_id=self.instance.uuid)
        request = webob.Request.blank('')
        request.method = 'POST'
        request.body = b'foo'
        request.content_length = len(request.body)
        self.assertRaises(webob.exc.HTTPBadRequest,
                          password.handle_password, request, self.mdinst)

    def test_bad_method(self):
        request = webob.Request.blank('')
        request.method = 'PUT'
        self.assertRaises(webob.exc.HTTPBadRequest,
                          password.handle_password, request, self.mdinst)

    @mock.patch('nova.objects.InstanceMapping.get_by_instance_uuid')
    @mock.patch('nova.objects.Instance.get_by_uuid')
    def _try_set_password(self, get_by_uuid, get_mapping, val=b'bar'):
        request = webob.Request.blank('')
        request.method = 'POST'
        request.body = val
        get_mapping.return_value = objects.InstanceMapping(cell_mapping=None)
        get_by_uuid.return_value = self.instance
        with mock.patch.object(self.instance, 'save') as save:
            password.handle_password(request, self.mdinst)
            save.assert_called_once_with()

        self.assertIn('password_0', self.instance.system_metadata)
        get_mapping.assert_called_once_with(mock.ANY, self.instance.uuid)

    def test_set_password(self):
        self.mdinst.password = ''
        self._try_set_password()

    def test_conflict(self):
        self.mdinst.password = 'foo'
        self.assertRaises(webob.exc.HTTPConflict,
                          self._try_set_password)

    def test_too_large(self):
        self.mdinst.password = ''
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self._try_set_password,
                          val=(b'a' * (password.MAX_SIZE + 1)))
nova-17.0.1/nova/tests/unit/test_hacking.py0000666000175000017500000007723113250073136020702 0ustar zuulzuul00000000000000# Copyright 2014 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import textwrap

import mock
import pep8

from nova.hacking import checks
from nova import test


class HackingTestCase(test.NoDBTestCase):
    """This class tests the hacking checks in nova.hacking.checks by passing
    strings to the check methods like the pep8/flake8 parser would. The
    parser loops over each line in the file and then passes the parameters
    to the check method. The parameter names in the check method dictate
    what type of object is passed to the check method. The parameter types
    are::

        logical_line: A processed line with the following modifications:
            - Multi-line statements converted to a single line.
            - Stripped left and right.
            - Contents of strings replaced with "xxx" of same length.
            - Comments removed.
        physical_line: Raw line of text from the input file.
        lines: a list of the raw lines from the input file
        tokens: the tokens that contribute to this logical line
        line_number: line number in the input file
        total_lines: number of lines in the input file
        blank_lines: blank lines before this one
        indent_char: indentation character in this file (" " or "\t")
        indent_level: indentation (with tabs expanded to multiples of 8)
        previous_indent_level: indentation on previous line
        previous_logical: previous logical line
        filename: Path of the file being run through pep8

    When running a test on a check method the return will be False/None if
    there is no violation in the sample input.
If there is an error a tuple is returned with a position in the line, and a message. So to check the result just assertTrue if the check is expected to fail and assertFalse if it should pass. """ def test_virt_driver_imports(self): expect = (0, "N311: importing code from other virt drivers forbidden") self.assertEqual(expect, checks.import_no_virt_driver_import_deps( "from nova.virt.libvirt import utils as libvirt_utils", "./nova/virt/xenapi/driver.py")) self.assertEqual(expect, checks.import_no_virt_driver_import_deps( "import nova.virt.libvirt.utils as libvirt_utils", "./nova/virt/xenapi/driver.py")) self.assertIsNone(checks.import_no_virt_driver_import_deps( "from nova.virt.libvirt import utils as libvirt_utils", "./nova/virt/libvirt/driver.py")) self.assertIsNone(checks.import_no_virt_driver_import_deps( "import nova.virt.firewall", "./nova/virt/libvirt/firewall.py")) def test_virt_driver_config_vars(self): self.assertIsInstance(checks.import_no_virt_driver_config_deps( "CONF.import_opt('volume_drivers', " "'nova.virt.libvirt.driver', group='libvirt')", "./nova/virt/xenapi/driver.py"), tuple) self.assertIsNone(checks.import_no_virt_driver_config_deps( "CONF.import_opt('volume_drivers', " "'nova.virt.libvirt.driver', group='libvirt')", "./nova/virt/libvirt/volume.py")) def test_assert_true_instance(self): self.assertEqual(len(list(checks.assert_true_instance( "self.assertTrue(isinstance(e, " "exception.BuildAbortException))"))), 1) self.assertEqual( len(list(checks.assert_true_instance("self.assertTrue()"))), 0) def test_assert_equal_type(self): self.assertEqual(len(list(checks.assert_equal_type( "self.assertEqual(type(als['QuicAssist']), list)"))), 1) self.assertEqual( len(list(checks.assert_equal_type("self.assertTrue()"))), 0) def test_assert_equal_in(self): self.assertEqual(len(list(checks.assert_equal_in( "self.assertEqual(a in b, True)"))), 1) self.assertEqual(len(list(checks.assert_equal_in( "self.assertEqual('str' in 'string', True)"))), 1) self.assertEqual(len(list(checks.assert_equal_in( "self.assertEqual(any(a==1 for a in b), True)"))), 0) self.assertEqual(len(list(checks.assert_equal_in( "self.assertEqual(True, a in b)"))), 1) self.assertEqual(len(list(checks.assert_equal_in( "self.assertEqual(True, 'str' in 'string')"))), 1) self.assertEqual(len(list(checks.assert_equal_in( "self.assertEqual(True, any(a==1 for a in b))"))), 0) self.assertEqual(len(list(checks.assert_equal_in( "self.assertEqual(a in b, False)"))), 1) self.assertEqual(len(list(checks.assert_equal_in( "self.assertEqual('str' in 'string', False)"))), 1) self.assertEqual(len(list(checks.assert_equal_in( "self.assertEqual(any(a==1 for a in b), False)"))), 0) self.assertEqual(len(list(checks.assert_equal_in( "self.assertEqual(False, a in b)"))), 1) self.assertEqual(len(list(checks.assert_equal_in( "self.assertEqual(False, 'str' in 'string')"))), 1) self.assertEqual(len(list(checks.assert_equal_in( "self.assertEqual(False, any(a==1 for a in b))"))), 0) def test_assert_true_or_false_with_in_or_not_in(self): self.assertEqual(len(list(checks.assert_true_or_false_with_in( "self.assertTrue(A in B)"))), 1) self.assertEqual(len(list(checks.assert_true_or_false_with_in( "self.assertFalse(A in B)"))), 1) self.assertEqual(len(list(checks.assert_true_or_false_with_in( "self.assertTrue(A not in B)"))), 1) self.assertEqual(len(list(checks.assert_true_or_false_with_in( "self.assertFalse(A not in B)"))), 1) self.assertEqual(len(list(checks.assert_true_or_false_with_in( "self.assertTrue(A in B, 'some message')"))), 1) 
self.assertEqual(len(list(checks.assert_true_or_false_with_in( "self.assertFalse(A in B, 'some message')"))), 1) self.assertEqual(len(list(checks.assert_true_or_false_with_in( "self.assertTrue(A not in B, 'some message')"))), 1) self.assertEqual(len(list(checks.assert_true_or_false_with_in( "self.assertFalse(A not in B, 'some message')"))), 1) self.assertEqual(len(list(checks.assert_true_or_false_with_in( "self.assertTrue(A in 'some string with spaces')"))), 1) self.assertEqual(len(list(checks.assert_true_or_false_with_in( "self.assertTrue(A in ['1', '2', '3'])"))), 1) self.assertEqual(len(list(checks.assert_true_or_false_with_in( "self.assertTrue(A in [1, 2, 3])"))), 1) self.assertEqual(len(list(checks.assert_true_or_false_with_in( "self.assertTrue(any(A > 5 for A in B))"))), 0) self.assertEqual(len(list(checks.assert_true_or_false_with_in( "self.assertTrue(any(A > 5 for A in B), 'some message')"))), 0) self.assertEqual(len(list(checks.assert_true_or_false_with_in( "self.assertFalse(some in list1 and some2 in list2)"))), 0) def test_no_translate_debug_logs(self): self.assertEqual(len(list(checks.no_translate_debug_logs( "LOG.debug(_('foo'))", "nova/scheduler/foo.py"))), 1) self.assertEqual(len(list(checks.no_translate_debug_logs( "LOG.debug('foo')", "nova/scheduler/foo.py"))), 0) self.assertEqual(len(list(checks.no_translate_debug_logs( "LOG.info(_('foo'))", "nova/scheduler/foo.py"))), 0) def test_no_setting_conf_directly_in_tests(self): self.assertEqual(len(list(checks.no_setting_conf_directly_in_tests( "CONF.option = 1", "nova/tests/test_foo.py"))), 1) self.assertEqual(len(list(checks.no_setting_conf_directly_in_tests( "CONF.group.option = 1", "nova/tests/test_foo.py"))), 1) self.assertEqual(len(list(checks.no_setting_conf_directly_in_tests( "CONF.option = foo = 1", "nova/tests/test_foo.py"))), 1) # Shouldn't fail with comparisons self.assertEqual(len(list(checks.no_setting_conf_directly_in_tests( "CONF.option == 'foo'", "nova/tests/test_foo.py"))), 0) self.assertEqual(len(list(checks.no_setting_conf_directly_in_tests( "CONF.option != 1", "nova/tests/test_foo.py"))), 0) # Shouldn't fail since not in nova/tests/ self.assertEqual(len(list(checks.no_setting_conf_directly_in_tests( "CONF.option = 1", "nova/compute/foo.py"))), 0) def test_no_mutable_default_args(self): self.assertEqual(1, len(list(checks.no_mutable_default_args( "def get_info_from_bdm(virt_type, bdm, mapping=[])")))) self.assertEqual(0, len(list(checks.no_mutable_default_args( "defined = []")))) self.assertEqual(0, len(list(checks.no_mutable_default_args( "defined, undefined = [], {}")))) def test_check_explicit_underscore_import(self): self.assertEqual(len(list(checks.check_explicit_underscore_import( "LOG.info(_('My info message'))", "cinder/tests/other_files.py"))), 1) self.assertEqual(len(list(checks.check_explicit_underscore_import( "msg = _('My message')", "cinder/tests/other_files.py"))), 1) self.assertEqual(len(list(checks.check_explicit_underscore_import( "from cinder.i18n import _", "cinder/tests/other_files.py"))), 0) self.assertEqual(len(list(checks.check_explicit_underscore_import( "LOG.info(_('My info message'))", "cinder/tests/other_files.py"))), 0) self.assertEqual(len(list(checks.check_explicit_underscore_import( "msg = _('My message')", "cinder/tests/other_files.py"))), 0) self.assertEqual(len(list(checks.check_explicit_underscore_import( "from cinder.i18n import _, _LW", "cinder/tests/other_files2.py"))), 0) self.assertEqual(len(list(checks.check_explicit_underscore_import( "msg = _('My message')", 
"cinder/tests/other_files2.py"))), 0) self.assertEqual(len(list(checks.check_explicit_underscore_import( "_ = translations.ugettext", "cinder/tests/other_files3.py"))), 0) self.assertEqual(len(list(checks.check_explicit_underscore_import( "msg = _('My message')", "cinder/tests/other_files3.py"))), 0) def test_use_jsonutils(self): def __get_msg(fun): msg = ("N324: jsonutils.%(fun)s must be used instead of " "json.%(fun)s" % {'fun': fun}) return [(0, msg)] for method in ('dump', 'dumps', 'load', 'loads'): self.assertEqual( __get_msg(method), list(checks.use_jsonutils("json.%s(" % method, "./nova/virt/xenapi/driver.py"))) self.assertEqual(0, len(list(checks.use_jsonutils("json.%s(" % method, "./plugins/xenserver/script.py")))) self.assertEqual(0, len(list(checks.use_jsonutils("jsonx.%s(" % method, "./nova/virt/xenapi/driver.py")))) self.assertEqual(0, len(list(checks.use_jsonutils("json.dumb", "./nova/virt/xenapi/driver.py")))) # We are patching pep8 so that only the check under test is actually # installed. @mock.patch('pep8._checks', {'physical_line': {}, 'logical_line': {}, 'tree': {}}) def _run_check(self, code, checker, filename=None): pep8.register_check(checker) lines = textwrap.dedent(code).strip().splitlines(True) checker = pep8.Checker(filename=filename, lines=lines) # NOTE(sdague): the standard reporter has printing to stdout # as a normal part of check_all, which bleeds through to the # test output stream in an unhelpful way. This blocks that printing. with mock.patch('pep8.StandardReport.get_file_results'): checker.check_all() checker.report._deferred_print.sort() return checker.report._deferred_print def _assert_has_errors(self, code, checker, expected_errors=None, filename=None): actual_errors = [e[:3] for e in self._run_check(code, checker, filename)] self.assertEqual(expected_errors or [], actual_errors) def _assert_has_no_errors(self, code, checker, filename=None): self._assert_has_errors(code, checker, filename=filename) def test_str_unicode_exception(self): checker = checks.CheckForStrUnicodeExc code = """ def f(a, b): try: p = str(a) + str(b) except ValueError as e: p = str(e) return p """ errors = [(5, 16, 'N325')] self._assert_has_errors(code, checker, expected_errors=errors) code = """ def f(a, b): try: p = unicode(a) + str(b) except ValueError as e: p = e return p """ self._assert_has_no_errors(code, checker) code = """ def f(a, b): try: p = str(a) + str(b) except ValueError as e: p = unicode(e) return p """ errors = [(5, 20, 'N325')] self._assert_has_errors(code, checker, expected_errors=errors) code = """ def f(a, b): try: p = str(a) + str(b) except ValueError as e: try: p = unicode(a) + unicode(b) except ValueError as ve: p = str(e) + str(ve) p = e return p """ errors = [(8, 20, 'N325'), (8, 29, 'N325')] self._assert_has_errors(code, checker, expected_errors=errors) code = """ def f(a, b): try: p = str(a) + str(b) except ValueError as e: try: p = unicode(a) + unicode(b) except ValueError as ve: p = str(e) + unicode(ve) p = str(e) return p """ errors = [(8, 20, 'N325'), (8, 33, 'N325'), (9, 16, 'N325')] self._assert_has_errors(code, checker, expected_errors=errors) def test_api_version_decorator_check(self): code = """ @some_other_decorator @wsgi.api_version("2.5") def my_method(): pass """ self._assert_has_errors(code, checks.check_api_version_decorator, expected_errors=[(2, 0, "N332")]) def test_oslo_assert_raises_regexp(self): code = """ self.assertRaisesRegexp(ValueError, "invalid literal for.*XYZ'$", int, 'XYZ') """ self._assert_has_errors(code, 
checks.assert_raises_regexp, expected_errors=[(1, 0, "N335")]) def test_api_version_decorator_check_no_errors(self): code = """ class ControllerClass(): @wsgi.api_version("2.5") def my_method(): pass """ self._assert_has_no_errors(code, checks.check_api_version_decorator) def test_trans_add(self): checker = checks.CheckForTransAdd code = """ def fake_tran(msg): return msg _ = fake_tran _LI = _ _LW = _ _LE = _ _LC = _ def f(a, b): msg = _('test') + 'add me' msg = _LI('test') + 'add me' msg = _LW('test') + 'add me' msg = _LE('test') + 'add me' msg = _LC('test') + 'add me' msg = 'add to me' + _('test') return msg """ errors = [(13, 10, 'N326'), (14, 10, 'N326'), (15, 10, 'N326'), (16, 10, 'N326'), (17, 10, 'N326'), (18, 24, 'N326')] self._assert_has_errors(code, checker, expected_errors=errors) code = """ def f(a, b): msg = 'test' + 'add me' return msg """ self._assert_has_no_errors(code, checker) def test_dict_constructor_with_list_copy(self): self.assertEqual(1, len(list(checks.dict_constructor_with_list_copy( " dict([(i, connect_info[i])")))) self.assertEqual(1, len(list(checks.dict_constructor_with_list_copy( " attrs = dict([(k, _from_json(v))")))) self.assertEqual(1, len(list(checks.dict_constructor_with_list_copy( " type_names = dict((value, key) for key, value in")))) self.assertEqual(1, len(list(checks.dict_constructor_with_list_copy( " dict((value, key) for key, value in")))) self.assertEqual(1, len(list(checks.dict_constructor_with_list_copy( "foo(param=dict((k, v) for k, v in bar.items()))")))) self.assertEqual(1, len(list(checks.dict_constructor_with_list_copy( " dict([[i,i] for i in range(3)])")))) self.assertEqual(1, len(list(checks.dict_constructor_with_list_copy( " dd = dict([i,i] for i in range(3))")))) self.assertEqual(0, len(list(checks.dict_constructor_with_list_copy( " create_kwargs = dict(snapshot=snapshot,")))) self.assertEqual(0, len(list(checks.dict_constructor_with_list_copy( " self._render_dict(xml, data_el, data.__dict__)")))) def test_check_http_not_implemented(self): code = """ except NotImplementedError: common.raise_http_not_implemented_error() """ filename = "nova/api/openstack/compute/v21/test.py" self._assert_has_no_errors(code, checks.check_http_not_implemented, filename=filename) code = """ except NotImplementedError: msg = _("Unable to set password on instance") raise exc.HTTPNotImplemented(explanation=msg) """ errors = [(3, 4, 'N339')] self._assert_has_errors(code, checks.check_http_not_implemented, expected_errors=errors, filename=filename) def test_check_contextlib_use(self): code = """ with test.nested( mock.patch.object(network_model.NetworkInfo, 'hydrate'), mock.patch.object(objects.InstanceInfoCache, 'save'), ) as ( hydrate_mock, save_mock ) """ filename = "nova/api/openstack/compute/v21/test.py" self._assert_has_no_errors(code, checks.check_no_contextlib_nested, filename=filename) code = """ with contextlib.nested( mock.patch.object(network_model.NetworkInfo, 'hydrate'), mock.patch.object(objects.InstanceInfoCache, 'save'), ) as ( hydrate_mock, save_mock ) """ filename = "nova/api/openstack/compute/legacy_v2/test.py" errors = [(1, 0, 'N341')] self._assert_has_errors(code, checks.check_no_contextlib_nested, expected_errors=errors, filename=filename) def test_check_greenthread_spawns(self): errors = [(1, 0, "N340")] code = "greenthread.spawn(func, arg1, kwarg1=kwarg1)" self._assert_has_errors(code, checks.check_greenthread_spawns, expected_errors=errors) code = "greenthread.spawn_n(func, arg1, kwarg1=kwarg1)" self._assert_has_errors(code, 
checks.check_greenthread_spawns, expected_errors=errors) code = "eventlet.greenthread.spawn(func, arg1, kwarg1=kwarg1)" self._assert_has_errors(code, checks.check_greenthread_spawns, expected_errors=errors) code = "eventlet.spawn(func, arg1, kwarg1=kwarg1)" self._assert_has_errors(code, checks.check_greenthread_spawns, expected_errors=errors) code = "eventlet.spawn_n(func, arg1, kwarg1=kwarg1)" self._assert_has_errors(code, checks.check_greenthread_spawns, expected_errors=errors) code = "nova.utils.spawn(func, arg1, kwarg1=kwarg1)" self._assert_has_no_errors(code, checks.check_greenthread_spawns) code = "nova.utils.spawn_n(func, arg1, kwarg1=kwarg1)" self._assert_has_no_errors(code, checks.check_greenthread_spawns) def test_config_option_regex_match(self): def should_match(code): self.assertTrue(checks.cfg_opt_re.match(code)) def should_not_match(code): self.assertFalse(checks.cfg_opt_re.match(code)) should_match("opt = cfg.StrOpt('opt_name')") should_match("opt = cfg.IntOpt('opt_name')") should_match("opt = cfg.DictOpt('opt_name')") should_match("opt = cfg.Opt('opt_name')") should_match("opts=[cfg.Opt('opt_name')]") should_match(" cfg.Opt('opt_name')") should_not_match("opt_group = cfg.OptGroup('opt_group_name')") def test_check_config_option_in_central_place(self): errors = [(1, 0, "N342")] code = """ opts = [ cfg.StrOpt('random_opt', default='foo', help='I am here to do stuff'), ] """ # option at the right place in the tree self._assert_has_no_errors(code, checks.check_config_option_in_central_place, filename="nova/conf/serial_console.py") # option at the wrong place in the tree self._assert_has_errors(code, checks.check_config_option_in_central_place, filename="nova/cmd/serialproxy.py", expected_errors=errors) # option at a location which is marked as an exception # TODO(macsz) remove testing exceptions as they are removed from # check_config_option_in_central_place self._assert_has_no_errors(code, checks.check_config_option_in_central_place, filename="nova/cmd/manage.py") self._assert_has_no_errors(code, checks.check_config_option_in_central_place, filename="nova/tests/dummy_test.py") def test_check_doubled_words(self): errors = [(1, 0, "N343")] # Artificial break to stop pep8 detecting the test ! 
        code = "This is the" + " the best comment"
        self._assert_has_errors(code, checks.check_doubled_words,
                                expected_errors=errors)

        code = "This is the then best comment"
        self._assert_has_no_errors(code, checks.check_doubled_words)

    def test_dict_iteritems(self):
        self.assertEqual(1, len(list(checks.check_python3_no_iteritems(
            "obj.iteritems()"))))

        self.assertEqual(0, len(list(checks.check_python3_no_iteritems(
            "six.iteritems(ob))"))))

    def test_dict_iterkeys(self):
        self.assertEqual(1, len(list(checks.check_python3_no_iterkeys(
            "obj.iterkeys()"))))

        self.assertEqual(0, len(list(checks.check_python3_no_iterkeys(
            "six.iterkeys(ob))"))))

    def test_dict_itervalues(self):
        self.assertEqual(1, len(list(checks.check_python3_no_itervalues(
            "obj.itervalues()"))))

        self.assertEqual(0, len(list(checks.check_python3_no_itervalues(
            "six.itervalues(ob))"))))

    def test_no_os_popen(self):
        code = """
           import os

           foobar_cmd = "foobar -get -beer"
           answer = os.popen(foobar_cmd).read()

           if answer == "nok":
               try:
                   os.popen(os.popen('foobar -beer -please')).read()

               except ValueError:
                   go_home()
           """
        errors = [(4, 0, 'N348'), (8, 8, 'N348')]
        self._assert_has_errors(code, checks.no_os_popen,
                                expected_errors=errors)

    def test_no_log_warn(self):
        code = """
                  LOG.warn("LOG.warn is deprecated")
               """
        errors = [(1, 0, 'N352')]
        self._assert_has_errors(code, checks.no_log_warn,
                                expected_errors=errors)
        code = """
                  LOG.warning("LOG.warn is deprecated")
               """
        self._assert_has_no_errors(code, checks.no_log_warn)

    def test_uncalled_closures(self):
        checker = checks.CheckForUncalledTestClosure
        code = """
               def test_fake_thing():
                   def _test():
                       pass
               """
        self._assert_has_errors(code, checker,
                                expected_errors=[(1, 0, 'N349')])

        code = """
               def test_fake_thing():
                   def _test():
                       pass
                   _test()
               """
        self._assert_has_no_errors(code, checker)

        code = """
               def test_fake_thing():
                   def _test():
                       pass
                   self.assertRaises(FakeExcepion, _test)
               """
        self._assert_has_no_errors(code, checker)

    def test_check_policy_registration_in_central_place(self):
        errors = [(3, 0, "N350")]
        code = """
               from nova import policy

               policy.RuleDefault('context_is_admin', 'role:admin')
               """
        # registration in the proper place
        self._assert_has_no_errors(
            code, checks.check_policy_registration_in_central_place,
            filename="nova/policies/base.py")
        # option at a location which is not in scope right now
        self._assert_has_errors(
            code, checks.check_policy_registration_in_central_place,
            filename="nova/api/openstack/compute/non_existent.py",
            expected_errors=errors)

    def test_check_policy_enforce(self):
        errors = [(3, 0, "N351")]
        code = """
               from nova import policy

               policy._ENFORCER.enforce('context_is_admin', target, credentials)
               """
        self._assert_has_errors(code, checks.check_policy_enforce,
                                expected_errors=errors)

    def test_check_policy_enforce_does_not_catch_other_enforce(self):
        # Simulate a different enforce method defined in Nova
        code = """
               from nova import foo

               foo.enforce()
               """
        self._assert_has_no_errors(code, checks.check_policy_enforce)

    def test_check_python3_xrange(self):
        func = checks.check_python3_xrange
        self.assertEqual(1, len(list(func('for i in xrange(10)'))))
        self.assertEqual(1, len(list(func('for i in xrange (10)'))))
        self.assertEqual(0, len(list(func('for i in range(10)'))))
        self.assertEqual(0, len(list(func('for i in six.moves.range(10)'))))

    def test_log_context(self):
        code = """
                  LOG.info(_LI("Rebooting instance"),
                           context=context, instance=instance)
               """
        errors = [(1, 0, 'N353')]
        self._assert_has_errors(code, checks.check_context_log,
                                expected_errors=errors)
        code = """
                  LOG.info(_LI("Rebooting instance"),
context=admin_context, instance=instance) """ errors = [(1, 0, 'N353')] self._assert_has_errors(code, checks.check_context_log, expected_errors=errors) code = """ LOG.info(_LI("Rebooting instance"), instance=instance) """ self._assert_has_no_errors(code, checks.check_context_log) def test_no_assert_equal_true_false(self): code = """ self.assertEqual(context_is_admin, True) self.assertEqual(context_is_admin, False) self.assertEqual(True, context_is_admin) self.assertEqual(False, context_is_admin) self.assertNotEqual(context_is_admin, True) self.assertNotEqual(context_is_admin, False) self.assertNotEqual(True, context_is_admin) self.assertNotEqual(False, context_is_admin) """ errors = [(1, 0, 'N355'), (2, 0, 'N355'), (3, 0, 'N355'), (4, 0, 'N355'), (5, 0, 'N355'), (6, 0, 'N355'), (7, 0, 'N355'), (8, 0, 'N355')] self._assert_has_errors(code, checks.no_assert_equal_true_false, expected_errors=errors) code = """ self.assertEqual(context_is_admin, stuff) self.assertNotEqual(context_is_admin, stuff) """ self._assert_has_no_errors(code, checks.no_assert_equal_true_false) def test_no_assert_true_false_is_not(self): code = """ self.assertTrue(test is None) self.assertTrue(False is my_variable) self.assertFalse(None is test) self.assertFalse(my_variable is False) """ errors = [(1, 0, 'N356'), (2, 0, 'N356'), (3, 0, 'N356'), (4, 0, 'N356')] self._assert_has_errors(code, checks.no_assert_true_false_is_not, expected_errors=errors) def test_check_uuid4(self): code = """ fake_uuid = uuid.uuid4() hex_uuid = uuid.uuid4().hex """ errors = [(1, 0, 'N357'), (2, 0, 'N357')] self._assert_has_errors(code, checks.check_uuid4, expected_errors=errors) code = """ int_uuid = uuid.uuid4().int urn_uuid = uuid.uuid4().urn variant_uuid = uuid.uuid4().variant version_uuid = uuid.uuid4().version """ self._assert_has_no_errors(code, checks.check_uuid4) def test_return_followed_by_space(self): self.assertEqual(1, len(list(checks.return_followed_by_space( "return(42)")))) self.assertEqual(1, len(list(checks.return_followed_by_space( "return(' some string ')")))) self.assertEqual(0, len(list(checks.return_followed_by_space( "return 42")))) self.assertEqual(0, len(list(checks.return_followed_by_space( "return ' some string '")))) self.assertEqual(0, len(list(checks.return_followed_by_space( "return (int('40') + 2)")))) nova-17.0.1/nova/tests/unit/test_exception.py0000666000175000017500000002605313250073126021267 0ustar zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
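# The tests below exercise nova.exception_wrapper.wrap_exception, a decorator
# factory: the returned decorator re-raises whatever the wrapped function
# raises, emitting an error notification first.  A minimal sketch of the
# pattern under test (for orientation only, not the real implementation):
#
#     def wrap_exception(notifier, binary=None):
#         def outer(f):
#             def wrapped(self, context, *args, **kwargs):
#                 try:
#                     return f(self, context, *args, **kwargs)
#                 except Exception:
#                     notifier.error(context, f.__name__,
#                                    {'args': ..., 'exception': ...})
#                     raise
#             return wrapped
#         return outer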
import inspect

import fixtures
import mock
import six
from webob.util import status_reasons

from nova import context
from nova import exception
from nova import exception_wrapper
from nova import rpc
from nova import test
from nova.tests.unit import fake_notifier


def good_function(self, context):
    return 99


def bad_function_exception(self, context, extra, blah="a", boo="b", zoo=None):
    raise test.TestingException('bad things happened')


def bad_function_unknown_module(self, context):
    """Example traceback that points to a module that getmodule() can't find.

    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "src/lxml/lxml.etree.pyx", line 2402, in
          lxml.etree._Attrib.__setitem__ (src/lxml/lxml.etree.c:67548)
      File "src/lxml/apihelpers.pxi", line 570, in
          lxml.etree._setAttributeValue (src/lxml/lxml.etree.c:21551)
      File "src/lxml/apihelpers.pxi", line 1437, in
          lxml.etree._utf8 (src/lxml/lxml.etree.c:30194)
    TypeError: Argument must be bytes or unicode, got 'NoneType'

    """
    from lxml import etree
    x = etree.fromstring('<hello/>')
    x.attrib['foo'] = None


class WrapExceptionTestCase(test.NoDBTestCase):
    def setUp(self):
        super(WrapExceptionTestCase, self).setUp()
        fake_notifier.stub_notifier(self)
        self.addCleanup(fake_notifier.reset)

    def test_wrap_exception_good_return(self):
        wrapped = exception_wrapper.wrap_exception(rpc.get_notifier('fake'))
        self.assertEqual(99, wrapped(good_function)(1, 2))
        self.assertEqual(0, len(fake_notifier.NOTIFICATIONS))
        self.assertEqual(0, len(fake_notifier.VERSIONED_NOTIFICATIONS))

    def test_wrap_exception_unknown_module(self):
        ctxt = context.get_admin_context()
        wrapped = exception_wrapper.wrap_exception(
            rpc.get_notifier('fake'), binary='nova-compute')
        self.assertRaises(
            TypeError, wrapped(bad_function_unknown_module), None, ctxt)
        self.assertEqual(1, len(fake_notifier.VERSIONED_NOTIFICATIONS))
        notification = fake_notifier.VERSIONED_NOTIFICATIONS[0]
        payload = notification['payload']['nova_object.data']
        self.assertEqual('unknown', payload['module_name'])

    def test_wrap_exception_with_notifier(self):
        wrapped = exception_wrapper.wrap_exception(rpc.get_notifier('fake'),
                                                   binary='nova-compute')
        ctxt = context.get_admin_context()
        self.assertRaises(test.TestingException,
                          wrapped(bad_function_exception), 1, ctxt, 3, zoo=3)

        self.assertEqual(1, len(fake_notifier.NOTIFICATIONS))
        notification = fake_notifier.NOTIFICATIONS[0]
        self.assertEqual('bad_function_exception', notification.event_type)
        self.assertEqual(ctxt, notification.context)
        self.assertEqual(3, notification.payload['args']['extra'])
        for key in ['exception', 'args']:
            self.assertIn(key, notification.payload.keys())
        self.assertNotIn('context', notification.payload['args'].keys())

        self.assertEqual(1, len(fake_notifier.VERSIONED_NOTIFICATIONS))
        notification = fake_notifier.VERSIONED_NOTIFICATIONS[0]
        self.assertEqual('compute.exception', notification['event_type'])
        self.assertEqual('nova-compute:fake-mini',
                         notification['publisher_id'])
        self.assertEqual('ERROR', notification['priority'])

        payload = notification['payload']
        self.assertEqual('ExceptionPayload', payload['nova_object.name'])
        self.assertEqual('1.0', payload['nova_object.version'])

        payload = payload['nova_object.data']
        self.assertEqual('TestingException', payload['exception'])
        self.assertEqual('bad things happened', payload['exception_message'])
        self.assertEqual('bad_function_exception', payload['function_name'])
        self.assertEqual('nova.tests.unit.test_exception',
                         payload['module_name'])

    @mock.patch('nova.rpc.NOTIFIER')
    @mock.patch('nova.notifications.objects.exception.'
'ExceptionNotification.__init__') def test_wrap_exception_notification_not_emitted_if_disabled( self, mock_notification, mock_notifier): mock_notifier.is_enabled.return_value = False wrapped = exception_wrapper.wrap_exception(rpc.get_notifier('fake'), binary='nova-compute') ctxt = context.get_admin_context() self.assertRaises(test.TestingException, wrapped(bad_function_exception), 1, ctxt, 3, zoo=3) self.assertFalse(mock_notification.called) @mock.patch('nova.notifications.objects.exception.' 'ExceptionNotification.__init__') def test_wrap_exception_notification_not_emitted_if_unversioned( self, mock_notifier): self.flags(notification_format='unversioned', group='notifications') wrapped = exception_wrapper.wrap_exception(rpc.get_notifier('fake'), binary='nova-compute') ctxt = context.get_admin_context() self.assertRaises(test.TestingException, wrapped(bad_function_exception), 1, ctxt, 3, zoo=3) self.assertFalse(mock_notifier.called) class NovaExceptionTestCase(test.NoDBTestCase): def test_default_error_msg(self): class FakeNovaException(exception.NovaException): msg_fmt = "default message" exc = FakeNovaException() self.assertEqual('default message', six.text_type(exc)) def test_error_msg(self): self.assertEqual('test', six.text_type(exception.NovaException('test'))) def test_default_error_msg_with_kwargs(self): class FakeNovaException(exception.NovaException): msg_fmt = "default message: %(code)s" exc = FakeNovaException(code=500) self.assertEqual('default message: 500', six.text_type(exc)) self.assertEqual('default message: 500', exc.message) def test_error_msg_exception_with_kwargs(self): class FakeNovaException(exception.NovaException): msg_fmt = "default message: %(misspelled_code)s" exc = FakeNovaException(code=500, misspelled_code='blah') self.assertEqual('default message: blah', six.text_type(exc)) self.assertEqual('default message: blah', exc.message) def test_default_error_code(self): class FakeNovaException(exception.NovaException): code = 404 exc = FakeNovaException() self.assertEqual(404, exc.kwargs['code']) def test_error_code_from_kwarg(self): class FakeNovaException(exception.NovaException): code = 500 exc = FakeNovaException(code=404) self.assertEqual(exc.kwargs['code'], 404) def test_cleanse_dict(self): kwargs = {'foo': 1, 'blah_pass': 2, 'zoo_password': 3, '_pass': 4} self.assertEqual({'foo': 1}, exception_wrapper._cleanse_dict(kwargs)) kwargs = {} self.assertEqual({}, exception_wrapper._cleanse_dict(kwargs)) def test_format_message_local(self): class FakeNovaException(exception.NovaException): msg_fmt = "some message" exc = FakeNovaException() self.assertEqual(six.text_type(exc), exc.format_message()) def test_format_message_remote(self): class FakeNovaException_Remote(exception.NovaException): msg_fmt = "some message" if six.PY2: def __unicode__(self): return u"print the whole trace" else: def __str__(self): return "print the whole trace" exc = FakeNovaException_Remote() self.assertEqual(u"print the whole trace", six.text_type(exc)) self.assertEqual("some message", exc.format_message()) def test_format_message_remote_error(self): # NOTE(melwitt): This test checks that errors are formatted as expected # in a real environment where format errors are caught and not # reraised, so we patch in the real implementation. 
self.useFixture(fixtures.MonkeyPatch( 'nova.exception.NovaException._log_exception', test.NovaExceptionReraiseFormatError.real_log_exception)) class FakeNovaException_Remote(exception.NovaException): msg_fmt = "some message %(somearg)s" def __unicode__(self): return u"print the whole trace" exc = FakeNovaException_Remote(lame_arg='lame') self.assertEqual("some message %(somearg)s", exc.format_message()) class ConvertedExceptionTestCase(test.NoDBTestCase): def test_instantiate(self): exc = exception.ConvertedException(400, 'Bad Request', 'reason') self.assertEqual(exc.code, 400) self.assertEqual(exc.title, 'Bad Request') self.assertEqual(exc.explanation, 'reason') def test_instantiate_without_title_known_code(self): exc = exception.ConvertedException(500) self.assertEqual(exc.title, status_reasons[500]) def test_instantiate_without_title_unknown_code(self): exc = exception.ConvertedException(499) self.assertEqual(exc.title, 'Unknown Client Error') def test_instantiate_bad_code(self): self.assertRaises(KeyError, exception.ConvertedException, 10) class ExceptionTestCase(test.NoDBTestCase): @staticmethod def _raise_exc(exc): raise exc(500) def test_exceptions_raise(self): # NOTE(dprince): disable format errors since we are not passing kwargs for name in dir(exception): exc = getattr(exception, name) if isinstance(exc, type): self.assertRaises(exc, self._raise_exc, exc) class ExceptionValidMessageTestCase(test.NoDBTestCase): def test_messages(self): failures = [] for name, obj in inspect.getmembers(exception): if name in ['NovaException', 'InstanceFaultRollback']: continue if not inspect.isclass(obj): continue if not issubclass(obj, exception.NovaException): continue e = obj if e.msg_fmt == "An unknown exception occurred.": failures.append('%s needs a more specific msg_fmt' % name) if failures: self.fail('\n'.join(failures)) nova-17.0.1/nova/tests/unit/fake_xvp_console_proxy.py0000666000175000017500000000311713250073126023014 0ustar zuulzuul00000000000000# Copyright (c) 2010 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
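# NOTE: the NovaException pattern exercised by the preceding test cases, as
# a minimal sketch (FakeNovaException mirrors the fixtures used above and is
# not a real nova exception). Subclasses override msg_fmt; the base class
# interpolates keyword arguments into it, and format_message() falls back to
# the raw template if formatting fails:
#
#     from nova import exception
#
#     class FakeNovaException(exception.NovaException):
#         msg_fmt = "default message: %(code)s"
#
#     exc = FakeNovaException(code=500)
#     assert exc.format_message() == 'default message: 500'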
"""Fake ConsoleProxy driver for tests.""" class FakeConsoleProxy(object): """Fake ConsoleProxy driver.""" @property def console_type(self): return 'fake' def setup_console(self, context, console): """Sets up actual proxies.""" pass def teardown_console(self, context, console): """Tears down actual proxies.""" pass def init_host(self): """Start up any config'ed consoles on start.""" pass def generate_password(self, length=8): """Returns random console password.""" return 'fakepass' def get_port(self, context): """Get available port for consoles that need one.""" return 5999 def fix_pool_password(self, password): """Trim password to length, and any other messaging.""" return password def fix_console_password(self, password): """Trim password to length, and any other messaging.""" return password nova-17.0.1/nova/tests/unit/keymgr/0000775000175000017500000000000013250073472017152 5ustar zuulzuul00000000000000nova-17.0.1/nova/tests/unit/keymgr/fake.py0000666000175000017500000000155413250073126020435 0ustar zuulzuul00000000000000# Copyright 2011 Justin Santa Barbara # Copyright 2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Implementation of a fake key manager.""" from castellan.tests.unit.key_manager import mock_key_manager def fake_api(configuration=None): return mock_key_manager.MockKeyManager(configuration) nova-17.0.1/nova/tests/unit/keymgr/__init__.py0000666000175000017500000000000013250073126021247 0ustar zuulzuul00000000000000nova-17.0.1/nova/tests/unit/keymgr/test_conf_key_mgr.py0000666000175000017500000000751313250073126023231 0ustar zuulzuul00000000000000# Copyright (c) 2013 The Johns Hopkins University/Applied Physics Laboratory # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Test cases for the conf key manager. 
""" import binascii import codecs from castellan.common.objects import symmetric_key as key import nova.conf from nova import context from nova import exception from nova.keymgr import conf_key_mgr from nova import test CONF = nova.conf.CONF decode_hex = codecs.getdecoder("hex_codec") class ConfKeyManagerTestCase(test.NoDBTestCase): def __init__(self, *args, **kwargs): super(ConfKeyManagerTestCase, self).__init__(*args, **kwargs) self._hex_key = '0' * 64 def _create_key_manager(self): CONF.set_default('fixed_key', default=self._hex_key, group='key_manager') return conf_key_mgr.ConfKeyManager(CONF) def setUp(self): super(ConfKeyManagerTestCase, self).setUp() self.ctxt = context.RequestContext('fake', 'fake') self.key_mgr = self._create_key_manager() encoded_key = bytes(binascii.unhexlify(self._hex_key)) self.key = key.SymmetricKey('AES', len(encoded_key) * 8, encoded_key) self.key_id = self.key_mgr.key_id def test_init(self): key_manager = self._create_key_manager() self.assertEqual(self._hex_key, key_manager._hex_key) def test_init_value_error(self): CONF.set_default('fixed_key', default=None, group='key_manager') self.assertRaises(ValueError, conf_key_mgr.ConfKeyManager, CONF) def test_create_key(self): key_id_1 = self.key_mgr.create_key(self.ctxt, 'AES', 256) key_id_2 = self.key_mgr.create_key(self.ctxt, 'AES', 256) # ensure that the UUIDs are the same self.assertEqual(key_id_1, key_id_2) def test_create_null_context(self): self.assertRaises(exception.Forbidden, self.key_mgr.create_key, None, 'AES', 256) def test_store_key(self): key_bytes = bytes(binascii.unhexlify('0' * 64)) _key = key.SymmetricKey('AES', len(key_bytes) * 8, key_bytes) key_id = self.key_mgr.store(self.ctxt, _key) actual_key = self.key_mgr.get(self.ctxt, key_id) self.assertEqual(_key, actual_key) def test_store_null_context(self): self.assertRaises(exception.Forbidden, self.key_mgr.store, None, self.key) def test_get_null_context(self): self.assertRaises(exception.Forbidden, self.key_mgr.get, None, None) def test_get_unknown_key(self): self.assertRaises(KeyError, self.key_mgr.get, self.ctxt, None) def test_get(self): self.assertEqual(self.key, self.key_mgr.get(self.ctxt, self.key_id)) def test_delete_key(self): key_id = self.key_mgr.create_key(self.ctxt, 'AES', 256) self.key_mgr.delete(self.ctxt, key_id) # key won't actually be deleted self.assertEqual(self.key, self.key_mgr.get(self.ctxt, key_id)) def test_delete_null_context(self): self.assertRaises(exception.Forbidden, self.key_mgr.delete, None, None) def test_delete_unknown_key(self): self.assertRaises(exception.KeyManagerError, self.key_mgr.delete, self.ctxt, None) nova-17.0.1/nova/tests/unit/test_profiler.py0000666000175000017500000001022513250073126021105 0ustar zuulzuul00000000000000# Copyright 2016 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import inspect import os from oslo_utils import importutils import osprofiler.opts as profiler import six.moves as six from nova import conf from nova import test class TestProfiler(test.NoDBTestCase): def test_all_public_methods_are_traced(self): # NOTE(rpodolyaka): osprofiler only wraps class methods when option # CONF.profiler.enabled is set to True and the default value is False, # which means in our usual test run we use original, not patched # classes. In order to test, that we actually properly wrap methods # we are interested in, this test case sets CONF.profiler.enabled to # True and reloads all the affected Python modules (application of # decorators and metaclasses is performed at module import time). # Unfortunately, this leads to subtle failures of other test cases # (e.g. super() is performed on a "new" version of a class instance # created after a module reload while the class name is a reference to # an "old" version of the class). Thus, this test is run in isolation. if not os.getenv('TEST_OSPROFILER', False): self.skipTest('TEST_OSPROFILER env variable is not set. ' 'Skipping osprofiler tests...') # reinitialize the metaclass after enabling osprofiler profiler.set_defaults(conf.CONF) self.flags(enabled=True, group='profiler') six.reload_module(importutils.import_module('nova.manager')) classes = [ 'nova.api.manager.MetadataManager', 'nova.cells.manager.CellsManager', 'nova.cells.rpcapi.CellsAPI', 'nova.compute.api.API', 'nova.compute.manager.ComputeManager', 'nova.compute.rpcapi.ComputeAPI', 'nova.conductor.manager.ComputeTaskManager', 'nova.conductor.manager.ConductorManager', 'nova.conductor.rpcapi.ComputeTaskAPI', 'nova.conductor.rpcapi.ConductorAPI', 'nova.console.manager.ConsoleProxyManager', 'nova.console.rpcapi.ConsoleAPI', 'nova.consoleauth.manager.ConsoleAuthManager', 'nova.consoleauth.rpcapi.ConsoleAuthAPI', 'nova.image.api.API', 'nova.network.api.API', 'nova.network.manager.FlatDHCPManager', 'nova.network.manager.FlatManager', 'nova.network.manager.VlanManager', 'nova.network.neutronv2.api.ClientWrapper', 'nova.network.rpcapi.NetworkAPI', 'nova.scheduler.manager.SchedulerManager', 'nova.scheduler.rpcapi.SchedulerAPI', 'nova.virt.libvirt.vif.LibvirtGenericVIFDriver', 'nova.virt.libvirt.volume.volume.LibvirtBaseVolumeDriver', ] for clsname in classes: # give the metaclass and trace_cls() decorator a chance to patch # methods of the classes above six.reload_module( importutils.import_module(clsname.rsplit('.', 1)[0])) cls = importutils.import_class(clsname) for attr, obj in cls.__dict__.items(): # only public methods are traced if attr.startswith('_'): continue # only checks callables if not (inspect.ismethod(obj) or inspect.isfunction(obj)): continue # osprofiler skips static methods if isinstance(obj, staticmethod): continue self.assertTrue(getattr(obj, '__traced__', False), obj) nova-17.0.1/nova/tests/unit/test_nova_manage.py0000666000175000017500000027000413250073136021542 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # Copyright 2011 Ilya Alekseyev # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. import sys import ddt import fixtures import mock from oslo_db import exception as db_exc from oslo_utils import uuidutils from six.moves import StringIO from nova.cmd import manage from nova import conf from nova import context from nova import db from nova.db import migration from nova.db.sqlalchemy import migration as sqla_migration from nova import exception from nova import objects from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.unit.db import fakes as db_fakes from nova.tests.unit.objects import test_network from nova.tests import uuidsentinel CONF = conf.CONF class UtilitiesTestCase(test.NoDBTestCase): def test_mask_passwd(self): # try to trip up the regex match with extra : and @. url1 = ("http://user:pass@domain.com:1234/something?" "email=me@somewhere.com") self.assertEqual( ("http://user:****@domain.com:1234/something?" "email=me@somewhere.com"), manage.mask_passwd_in_url(url1)) # pretty standard kinds of urls that we expect, have different # schemes. This ensures none of the parts get lost. url2 = "mysql+pymysql://root:pass@127.0.0.1/nova_api?charset=utf8" self.assertEqual( "mysql+pymysql://root:****@127.0.0.1/nova_api?charset=utf8", manage.mask_passwd_in_url(url2)) url3 = "rabbit://stackrabbit:pass@10.42.0.53:5672/" self.assertEqual( "rabbit://stackrabbit:****@10.42.0.53:5672/", manage.mask_passwd_in_url(url3)) url4 = ("mysql+pymysql://nova:my_password@my_IP/nova_api?" "charset=utf8&ssl_ca=/etc/nova/tls/mysql/ca-cert.pem" "&ssl_cert=/etc/nova/tls/mysql/server-cert.pem" "&ssl_key=/etc/nova/tls/mysql/server-key.pem") url4_safe = ("mysql+pymysql://nova:****@my_IP/nova_api?" "charset=utf8&ssl_ca=/etc/nova/tls/mysql/ca-cert.pem" "&ssl_cert=/etc/nova/tls/mysql/server-cert.pem" "&ssl_key=/etc/nova/tls/mysql/server-key.pem") self.assertEqual( url4_safe, manage.mask_passwd_in_url(url4)) class FloatingIpCommandsTestCase(test.NoDBTestCase): def setUp(self): super(FloatingIpCommandsTestCase, self).setUp() self.output = StringIO() self.useFixture(fixtures.MonkeyPatch('sys.stdout', self.output)) db_fakes.stub_out_db_network_api(self) self.commands = manage.FloatingIpCommands() def test_address_to_hosts(self): def assert_loop(result, expected): for ip in result: self.assertIn(str(ip), expected) address_to_hosts = self.commands.address_to_hosts # /32 and /31 self.assertRaises(exception.InvalidInput, address_to_hosts, '192.168.100.1/32') self.assertRaises(exception.InvalidInput, address_to_hosts, '192.168.100.1/31') # /30 expected = ["192.168.100.%s" % i for i in range(1, 3)] result = address_to_hosts('192.168.100.0/30') self.assertEqual(2, len(list(result))) assert_loop(result, expected) # /29 expected = ["192.168.100.%s" % i for i in range(1, 7)] result = address_to_hosts('192.168.100.0/29') self.assertEqual(6, len(list(result))) assert_loop(result, expected) # /28 expected = ["192.168.100.%s" % i for i in range(1, 15)] result = address_to_hosts('192.168.100.0/28') self.assertEqual(14, len(list(result))) assert_loop(result, expected) # /16 result = address_to_hosts('192.168.100.0/16') self.assertEqual(65534, len(list(result))) # NOTE(dripton): I don't test /13 because it makes the test take 3s. # /12 gives over a million IPs, which is ridiculous. 
self.assertRaises(exception.InvalidInput, address_to_hosts, '192.168.100.1/12') class NetworkCommandsTestCase(test.NoDBTestCase): def setUp(self): super(NetworkCommandsTestCase, self).setUp() # These are all tests that assume nova-network and using the nova DB. self.flags(use_neutron=False) self.output = StringIO() self.useFixture(fixtures.MonkeyPatch('sys.stdout', self.output)) self.commands = manage.NetworkCommands() self.net = {'id': 0, 'label': 'fake', 'injected': False, 'cidr': '192.168.0.0/24', 'cidr_v6': 'dead:beef::/64', 'multi_host': False, 'gateway_v6': 'dead:beef::1', 'netmask_v6': '64', 'netmask': '255.255.255.0', 'bridge': 'fa0', 'bridge_interface': 'fake_fa0', 'gateway': '192.168.0.1', 'broadcast': '192.168.0.255', 'dns1': '8.8.8.8', 'dns2': '8.8.4.4', 'vlan': 200, 'vlan_start': 201, 'vpn_public_address': '10.0.0.2', 'vpn_public_port': '2222', 'vpn_private_address': '192.168.0.2', 'dhcp_start': '192.168.0.3', 'project_id': 'fake_project', 'host': 'fake_host', 'uuid': 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa'} def fake_network_get_by_cidr(context, cidr): self.assertTrue(context.to_dict()['is_admin']) self.assertEqual(cidr, self.fake_net['cidr']) return db_fakes.FakeModel(dict(test_network.fake_network, **self.fake_net)) def fake_network_get_by_uuid(context, uuid): self.assertTrue(context.to_dict()['is_admin']) self.assertEqual(uuid, self.fake_net['uuid']) return db_fakes.FakeModel(dict(test_network.fake_network, **self.fake_net)) def fake_network_update(context, network_id, values): self.assertTrue(context.to_dict()['is_admin']) self.assertEqual(network_id, self.fake_net['id']) self.assertEqual(values, self.fake_update_value) self.fake_network_get_by_cidr = fake_network_get_by_cidr self.fake_network_get_by_uuid = fake_network_get_by_uuid self.fake_network_update = fake_network_update def test_create(self): def fake_create_networks(obj, context, **kwargs): self.assertTrue(context.to_dict()['is_admin']) self.assertEqual(kwargs['label'], 'Test') self.assertEqual(kwargs['cidr'], '10.2.0.0/24') self.assertFalse(kwargs['multi_host']) self.assertEqual(kwargs['num_networks'], 1) self.assertEqual(kwargs['network_size'], 256) self.assertEqual(kwargs['vlan'], 200) self.assertEqual(kwargs['vlan_start'], 201) self.assertEqual(kwargs['vpn_start'], 2000) self.assertEqual(kwargs['cidr_v6'], 'fd00:2::/120') self.assertEqual(kwargs['gateway'], '10.2.0.1') self.assertEqual(kwargs['gateway_v6'], 'fd00:2::22') self.assertEqual(kwargs['bridge'], 'br200') self.assertEqual(kwargs['bridge_interface'], 'eth0') self.assertEqual(kwargs['dns1'], '8.8.8.8') self.assertEqual(kwargs['dns2'], '8.8.4.4') self.flags(network_manager='nova.network.manager.VlanManager') self.stub_out('nova.network.manager.VlanManager.create_networks', fake_create_networks) self.commands.create( label='Test', cidr='10.2.0.0/24', num_networks=1, network_size=256, multi_host='F', vlan=200, vlan_start=201, vpn_start=2000, cidr_v6='fd00:2::/120', gateway='10.2.0.1', gateway_v6='fd00:2::22', bridge='br200', bridge_interface='eth0', dns1='8.8.8.8', dns2='8.8.4.4', uuid='aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa') def test_create_without_lable(self): self.assertRaises(exception.NetworkNotCreated, self.commands.create, cidr='10.2.0.0/24', num_networks=1, network_size=256, multi_host='F', vlan=200, vlan_start=201, vpn_start=2000, cidr_v6='fd00:2::/120', gateway='10.2.0.1', gateway_v6='fd00:2::22', bridge='br200', bridge_interface='eth0', dns1='8.8.8.8', dns2='8.8.4.4', uuid='aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa') def 
test_create_with_lable_too_long(self): self.assertRaises(exception.LabelTooLong, self.commands.create, label='x' * 256, cidr='10.2.0.0/24', num_networks=1, network_size=256, multi_host='F', vlan=200, vlan_start=201, vpn_start=2000, cidr_v6='fd00:2::/120', gateway='10.2.0.1', gateway_v6='fd00:2::22', bridge='br200', bridge_interface='eth0', dns1='8.8.8.8', dns2='8.8.4.4', uuid='aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa') def test_create_without_cidr(self): self.assertRaises(exception.NetworkNotCreated, self.commands.create, label='Test', num_networks=1, network_size=256, multi_host='F', vlan=200, vlan_start=201, vpn_start=2000, gateway='10.2.0.1', gateway_v6='fd00:2::22', bridge='br200', bridge_interface='eth0', dns1='8.8.8.8', dns2='8.8.4.4', uuid='aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa') def test_list(self): def fake_network_get_all(context): return [db_fakes.FakeModel(self.net)] self.stub_out('nova.db.network_get_all', fake_network_get_all) self.commands.list() result = self.output.getvalue() _fmt = "\t".join(["%(id)-5s", "%(cidr)-18s", "%(cidr_v6)-15s", "%(dhcp_start)-15s", "%(dns1)-15s", "%(dns2)-15s", "%(vlan)-15s", "%(project_id)-15s", "%(uuid)-15s"]) head = _fmt % {'id': 'id', 'cidr': 'IPv4', 'cidr_v6': 'IPv6', 'dhcp_start': 'start address', 'dns1': 'DNS1', 'dns2': 'DNS2', 'vlan': 'VlanID', 'project_id': 'project', 'uuid': "uuid"} body = _fmt % {'id': self.net['id'], 'cidr': self.net['cidr'], 'cidr_v6': self.net['cidr_v6'], 'dhcp_start': self.net['dhcp_start'], 'dns1': self.net['dns1'], 'dns2': self.net['dns2'], 'vlan': self.net['vlan'], 'project_id': self.net['project_id'], 'uuid': self.net['uuid']} answer = '%s\n%s\n' % (head, body) self.assertEqual(result, answer) def test_delete(self): self.fake_net = self.net self.fake_net['project_id'] = None self.fake_net['host'] = None self.stub_out('nova.db.network_get_by_uuid', self.fake_network_get_by_uuid) def fake_network_delete_safe(context, network_id): self.assertTrue(context.to_dict()['is_admin']) self.assertEqual(network_id, self.fake_net['id']) self.stub_out('nova.db.network_delete_safe', fake_network_delete_safe) self.commands.delete(uuid=self.fake_net['uuid']) def test_delete_by_cidr(self): self.fake_net = self.net self.fake_net['project_id'] = None self.fake_net['host'] = None self.stub_out('nova.db.network_get_by_cidr', self.fake_network_get_by_cidr) def fake_network_delete_safe(context, network_id): self.assertTrue(context.to_dict()['is_admin']) self.assertEqual(network_id, self.fake_net['id']) self.stub_out('nova.db.network_delete_safe', fake_network_delete_safe) self.commands.delete(fixed_range=self.fake_net['cidr']) def _test_modify_base(self, update_value, project, host, dis_project=None, dis_host=None): self.fake_net = self.net self.fake_update_value = update_value self.stub_out('nova.db.network_get_by_cidr', self.fake_network_get_by_cidr) self.stub_out('nova.db.network_update', self.fake_network_update) self.commands.modify(self.fake_net['cidr'], project=project, host=host, dis_project=dis_project, dis_host=dis_host) def test_modify_associate(self): self._test_modify_base(update_value={'project_id': 'test_project', 'host': 'test_host'}, project='test_project', host='test_host') def test_modify_unchanged(self): self._test_modify_base(update_value={}, project=None, host=None) def test_modify_disassociate(self): self._test_modify_base(update_value={'project_id': None, 'host': None}, project=None, host=None, dis_project=True, dis_host=True) class NeutronV2NetworkCommandsTestCase(test.NoDBTestCase): def setUp(self): 
super(NeutronV2NetworkCommandsTestCase, self).setUp() self.output = StringIO() self.useFixture(fixtures.MonkeyPatch('sys.stdout', self.output)) self.flags(use_neutron=True) self.commands = manage.NetworkCommands() def test_create(self): self.assertEqual(2, self.commands.create()) def test_list(self): self.assertEqual(2, self.commands.list()) def test_delete(self): self.assertEqual(2, self.commands.delete()) def test_modify(self): self.assertEqual(2, self.commands.modify('192.168.0.1')) class DBCommandsTestCase(test.NoDBTestCase): USES_DB_SELF = True def setUp(self): super(DBCommandsTestCase, self).setUp() self.output = StringIO() self.useFixture(fixtures.MonkeyPatch('sys.stdout', self.output)) self.commands = manage.DbCommands() def test_archive_deleted_rows_negative(self): self.assertEqual(2, self.commands.archive_deleted_rows(-1)) def test_archive_deleted_rows_large_number(self): large_number = '1' * 100 self.assertEqual(2, self.commands.archive_deleted_rows(large_number)) @mock.patch.object(db, 'archive_deleted_rows', return_value=(dict(instances=10, consoles=5), list())) def _test_archive_deleted_rows(self, mock_db_archive, verbose=False): result = self.commands.archive_deleted_rows(20, verbose=verbose) mock_db_archive.assert_called_once_with(20) output = self.output.getvalue() if verbose: expected = '''\ +-----------+-------------------------+ | Table | Number of Rows Archived | +-----------+-------------------------+ | consoles | 5 | | instances | 10 | +-----------+-------------------------+ ''' self.assertEqual(expected, output) else: self.assertEqual(0, len(output)) self.assertEqual(1, result) def test_archive_deleted_rows(self): # Tests that we don't show any table output (not verbose). self._test_archive_deleted_rows() def test_archive_deleted_rows_verbose(self): # Tests that we get table output. 
self._test_archive_deleted_rows(verbose=True) @mock.patch.object(db, 'archive_deleted_rows') def test_archive_deleted_rows_until_complete(self, mock_db_archive, verbose=False): mock_db_archive.side_effect = [ ({'instances': 10, 'instance_extra': 5}, list()), ({'instances': 5, 'instance_faults': 1}, list()), ({}, list())] result = self.commands.archive_deleted_rows(20, verbose=verbose, until_complete=True) self.assertEqual(1, result) if verbose: expected = """\ Archiving.....complete +-----------------+-------------------------+ | Table | Number of Rows Archived | +-----------------+-------------------------+ | instance_extra | 5 | | instance_faults | 1 | | instances | 15 | +-----------------+-------------------------+ """ else: expected = '' self.assertEqual(expected, self.output.getvalue()) mock_db_archive.assert_has_calls([mock.call(20), mock.call(20), mock.call(20)]) def test_archive_deleted_rows_until_complete_quiet(self): self.test_archive_deleted_rows_until_complete(verbose=False) @mock.patch.object(db, 'archive_deleted_rows') def test_archive_deleted_rows_until_stopped(self, mock_db_archive, verbose=True): mock_db_archive.side_effect = [ ({'instances': 10, 'instance_extra': 5}, list()), ({'instances': 5, 'instance_faults': 1}, list()), KeyboardInterrupt] result = self.commands.archive_deleted_rows(20, verbose=verbose, until_complete=True) self.assertEqual(1, result) if verbose: expected = """\ Archiving.....stopped +-----------------+-------------------------+ | Table | Number of Rows Archived | +-----------------+-------------------------+ | instance_extra | 5 | | instance_faults | 1 | | instances | 15 | +-----------------+-------------------------+ """ else: expected = '' self.assertEqual(expected, self.output.getvalue()) mock_db_archive.assert_has_calls([mock.call(20), mock.call(20), mock.call(20)]) def test_archive_deleted_rows_until_stopped_quiet(self): self.test_archive_deleted_rows_until_stopped(verbose=False) @mock.patch.object(db, 'archive_deleted_rows', return_value=({}, [])) def test_archive_deleted_rows_verbose_no_results(self, mock_db_archive): result = self.commands.archive_deleted_rows(20, verbose=True) mock_db_archive.assert_called_once_with(20) output = self.output.getvalue() self.assertIn('Nothing was archived.', output) self.assertEqual(0, result) @mock.patch.object(db, 'archive_deleted_rows') @mock.patch.object(objects.RequestSpec, 'destroy_bulk') def test_archive_deleted_rows_and_instance_mappings_and_request_specs(self, mock_destroy, mock_db_archive, verbose=True): self.useFixture(nova_fixtures.Database()) self.useFixture(nova_fixtures.Database(database='api')) ctxt = context.RequestContext('fake-user', 'fake_project') cell_uuid = uuidutils.generate_uuid() cell_mapping = objects.CellMapping(context=ctxt, uuid=cell_uuid, database_connection='fake:///db', transport_url='fake:///mq') cell_mapping.create() uuids = [] for i in range(2): uuid = uuidutils.generate_uuid() uuids.append(uuid) objects.Instance(ctxt, project_id=ctxt.project_id, uuid=uuid)\ .create() objects.InstanceMapping(ctxt, project_id=ctxt.project_id, cell_mapping=cell_mapping, instance_uuid=uuid)\ .create() mock_db_archive.return_value = (dict(instances=2, consoles=5), uuids) mock_destroy.return_value = 2 result = self.commands.archive_deleted_rows(20, verbose=verbose) self.assertEqual(1, result) mock_db_archive.assert_called_once_with(20) self.assertEqual(1, mock_destroy.call_count) output = self.output.getvalue() if verbose: expected = '''\ +-------------------+-------------------------+ | Table | 
Number of Rows Archived | +-------------------+-------------------------+ | consoles | 5 | | instance_mappings | 2 | | instances | 2 | | request_specs | 2 | +-------------------+-------------------------+ ''' self.assertEqual(expected, output) else: self.assertEqual(0, len(output)) @mock.patch.object(migration, 'db_null_instance_uuid_scan', return_value={'foo': 0}) def test_null_instance_uuid_scan_no_records_found(self, mock_scan): self.commands.null_instance_uuid_scan() self.assertIn("There were no records found", self.output.getvalue()) @mock.patch.object(migration, 'db_null_instance_uuid_scan', return_value={'foo': 1, 'bar': 0}) def _test_null_instance_uuid_scan(self, mock_scan, delete): self.commands.null_instance_uuid_scan(delete) output = self.output.getvalue() if delete: self.assertIn("Deleted 1 records from table 'foo'.", output) self.assertNotIn("Deleted 0 records from table 'bar'.", output) else: self.assertIn("1 records in the 'foo' table", output) self.assertNotIn("0 records in the 'bar' table", output) self.assertNotIn("There were no records found", output) def test_null_instance_uuid_scan_readonly(self): self._test_null_instance_uuid_scan(delete=False) def test_null_instance_uuid_scan_delete(self): self._test_null_instance_uuid_scan(delete=True) @mock.patch.object(sqla_migration, 'db_version', return_value=2) def test_version(self, sqla_migrate): self.commands.version() sqla_migrate.assert_called_once_with(context=None, database='main') @mock.patch.object(sqla_migration, 'db_sync') def test_sync(self, sqla_sync): self.commands.sync(version=4, local_cell=True) sqla_sync.assert_called_once_with(context=None, version=4, database='main') @mock.patch('nova.db.migration.db_sync') @mock.patch.object(objects.CellMapping, 'get_by_uuid', return_value='map') def test_sync_cell0(self, mock_get_by_uuid, mock_db_sync): ctxt = context.get_admin_context() cell_ctxt = context.get_admin_context() with test.nested( mock.patch('nova.context.RequestContext', return_value=ctxt), mock.patch('nova.context.target_cell')) \ as (mock_get_context, mock_target_cell): fake_target_cell_mock = mock.MagicMock() fake_target_cell_mock.__enter__.return_value = cell_ctxt mock_target_cell.return_value = fake_target_cell_mock self.commands.sync(version=4) mock_get_by_uuid.assert_called_once_with(ctxt, objects.CellMapping.CELL0_UUID) mock_target_cell.assert_called_once_with(ctxt, 'map') db_sync_calls = [ mock.call(4, context=cell_ctxt), mock.call(4) ] mock_db_sync.assert_has_calls(db_sync_calls) @mock.patch.object(objects.CellMapping, 'get_by_uuid', side_effect=test.TestingException('invalid connection')) def test_sync_cell0_unknown_error(self, mock_get_by_uuid): """Asserts that a detailed error message is given when an unknown error occurs trying to get the cell0 cell mapping. """ self.commands.sync() mock_get_by_uuid.assert_called_once_with( test.MatchType(context.RequestContext), objects.CellMapping.CELL0_UUID) expected = """ERROR: Could not access cell0. Has the nova_api database been created? Has the nova_cell0 database been created? Has "nova-manage api_db sync" been run? Has "nova-manage cell_v2 map_cell0" been run? Is [api_database]/connection set in nova.conf? Is the cell0 database connection URL correct? 
Error: invalid connection """ self.assertEqual(expected, self.output.getvalue()) def _fake_db_command(self, migrations=None): if migrations is None: mock_mig_1 = mock.MagicMock(__name__="mock_mig_1") mock_mig_2 = mock.MagicMock(__name__="mock_mig_2") mock_mig_1.return_value = (5, 4) mock_mig_2.return_value = (6, 6) migrations = (mock_mig_1, mock_mig_2) class _CommandSub(manage.DbCommands): online_migrations = migrations return _CommandSub @mock.patch('nova.context.get_admin_context') def test_online_migrations(self, mock_get_context): self.useFixture(fixtures.MonkeyPatch('sys.stdout', StringIO())) ctxt = mock_get_context.return_value command_cls = self._fake_db_command() command = command_cls() command.online_data_migrations(10) command_cls.online_migrations[0].assert_called_once_with(ctxt, 10) command_cls.online_migrations[1].assert_called_once_with(ctxt, 6) expected = """\ 5 rows matched query mock_mig_1, 4 migrated 6 rows matched query mock_mig_2, 6 migrated +------------+--------------+-----------+ | Migration | Total Needed | Completed | +------------+--------------+-----------+ | mock_mig_1 | 5 | 4 | | mock_mig_2 | 6 | 6 | +------------+--------------+-----------+ """ self.assertEqual(expected, sys.stdout.getvalue()) @mock.patch('nova.context.get_admin_context') def test_online_migrations_no_max_count(self, mock_get_context): total = [120] batches = [50, 40, 30, 0] runs = [] def fake_migration(context, count): self.assertEqual(mock_get_context.return_value, context) runs.append(count) count = batches.pop(0) total[0] -= count return total[0], count command_cls = self._fake_db_command((fake_migration,)) command = command_cls() command.online_data_migrations(None) self.assertEqual([], batches) self.assertEqual(0, total[0]) self.assertEqual([50, 50, 50, 50], runs) def test_online_migrations_error(self): fake_migration = mock.MagicMock() fake_migration.side_effect = Exception fake_migration.__name__ = 'fake' command_cls = self._fake_db_command((fake_migration,)) command = command_cls() command.online_data_migrations(None) def test_online_migrations_bad_max(self): self.assertEqual(127, self.commands.online_data_migrations(max_count=-2)) self.assertEqual(127, self.commands.online_data_migrations(max_count='a')) self.assertEqual(127, self.commands.online_data_migrations(max_count=0)) def test_online_migrations_no_max(self): with mock.patch.object(self.commands, '_run_migration') as rm: rm.return_value = {} self.assertEqual(0, self.commands.online_data_migrations()) def test_online_migrations_finished(self): with mock.patch.object(self.commands, '_run_migration') as rm: rm.return_value = {} self.assertEqual(0, self.commands.online_data_migrations(max_count=5)) def test_online_migrations_not_finished(self): with mock.patch.object(self.commands, '_run_migration') as rm: rm.return_value = {'mig': (10, 5)} self.assertEqual(1, self.commands.online_data_migrations(max_count=5)) class ApiDbCommandsTestCase(test.NoDBTestCase): def setUp(self): super(ApiDbCommandsTestCase, self).setUp() self.output = StringIO() self.useFixture(fixtures.MonkeyPatch('sys.stdout', self.output)) self.commands = manage.ApiDbCommands() @mock.patch.object(sqla_migration, 'db_version', return_value=2) def test_version(self, sqla_migrate): self.commands.version() sqla_migrate.assert_called_once_with(context=None, database='api') @mock.patch.object(sqla_migration, 'db_sync') def test_sync(self, sqla_sync): self.commands.sync(version=4) sqla_sync.assert_called_once_with(context=None, version=4, database='api') class 
CellCommandsTestCase(test.NoDBTestCase): def setUp(self): super(CellCommandsTestCase, self).setUp() self.output = StringIO() self.useFixture(fixtures.MonkeyPatch('sys.stdout', self.output)) self.commands = manage.CellCommands() def test_create_transport_hosts_multiple(self): """Test the _create_transport_hosts method when broker_hosts is set. """ brokers = "127.0.0.1:5672,127.0.0.2:5671" thosts = self.commands._create_transport_hosts( 'guest', 'devstack', broker_hosts=brokers) self.assertEqual(2, len(thosts)) self.assertEqual('127.0.0.1', thosts[0].hostname) self.assertEqual(5672, thosts[0].port) self.assertEqual('127.0.0.2', thosts[1].hostname) self.assertEqual(5671, thosts[1].port) def test_create_transport_hosts_single(self): """Test the _create_transport_hosts method when hostname is passed.""" thosts = self.commands._create_transport_hosts('guest', 'devstack', hostname='127.0.0.1', port=80) self.assertEqual(1, len(thosts)) self.assertEqual('127.0.0.1', thosts[0].hostname) self.assertEqual(80, thosts[0].port) def test_create_transport_hosts_single_broker(self): """Test the _create_transport_hosts method for single broker_hosts.""" thosts = self.commands._create_transport_hosts( 'guest', 'devstack', broker_hosts='127.0.0.1:5672') self.assertEqual(1, len(thosts)) self.assertEqual('127.0.0.1', thosts[0].hostname) self.assertEqual(5672, thosts[0].port) def test_create_transport_hosts_both(self): """Test the _create_transport_hosts method when both broker_hosts and hostname/port are passed. """ thosts = self.commands._create_transport_hosts( 'guest', 'devstack', broker_hosts='127.0.0.1:5672', hostname='127.0.0.2', port=80) self.assertEqual(1, len(thosts)) self.assertEqual('127.0.0.1', thosts[0].hostname) self.assertEqual(5672, thosts[0].port) def test_create_transport_hosts_wrong_val(self): """Test the _create_transport_hosts method when broker_hosts is wrongly specified """ self.assertRaises(ValueError, self.commands._create_transport_hosts, 'guest', 'devstack', broker_hosts='127.0.0.1:5672,127.0.0.1') def test_create_transport_hosts_wrong_port_val(self): """Test the _create_transport_hosts method when port in broker_hosts is wrongly specified """ self.assertRaises(ValueError, self.commands._create_transport_hosts, 'guest', 'devstack', broker_hosts='127.0.0.1:') def test_create_transport_hosts_wrong_port_arg(self): """Test the _create_transport_hosts method when port argument is wrongly specified """ self.assertRaises(ValueError, self.commands._create_transport_hosts, 'guest', 'devstack', hostname='127.0.0.1', port='ab') @mock.patch.object(context, 'get_admin_context') @mock.patch.object(db, 'cell_create') def test_create_broker_hosts(self, mock_db_cell_create, mock_ctxt): """Test the create function when broker_hosts is passed """ cell_tp_url = "fake://guest:devstack@127.0.0.1:5432" cell_tp_url += ",guest:devstack@127.0.0.2:9999/" ctxt = mock.sentinel mock_ctxt.return_value = mock.sentinel self.commands.create("test", broker_hosts='127.0.0.1:5432,127.0.0.2:9999', woffset=0, wscale=0, username="guest", password="devstack") exp_values = {'name': "test", 'is_parent': False, 'transport_url': cell_tp_url, 'weight_offset': 0.0, 'weight_scale': 0.0} mock_db_cell_create.assert_called_once_with(ctxt, exp_values) @mock.patch.object(context, 'get_admin_context') @mock.patch.object(db, 'cell_create') def test_create_broker_hosts_with_url_decoding_fix(self, mock_db_cell_create, mock_ctxt): """Test the create function when broker_hosts is passed """ cell_tp_url = 
"fake://the=user:the=password@127.0.0.1:5432/" ctxt = mock.sentinel mock_ctxt.return_value = mock.sentinel self.commands.create("test", broker_hosts='127.0.0.1:5432', woffset=0, wscale=0, username="the=user", password="the=password") exp_values = {'name': "test", 'is_parent': False, 'transport_url': cell_tp_url, 'weight_offset': 0.0, 'weight_scale': 0.0} mock_db_cell_create.assert_called_once_with(ctxt, exp_values) @mock.patch.object(context, 'get_admin_context') @mock.patch.object(db, 'cell_create') def test_create_hostname(self, mock_db_cell_create, mock_ctxt): """Test the create function when hostname and port is passed """ cell_tp_url = "fake://guest:devstack@127.0.0.1:9999/" ctxt = mock.sentinel mock_ctxt.return_value = mock.sentinel self.commands.create("test", hostname='127.0.0.1', port="9999", woffset=0, wscale=0, username="guest", password="devstack") exp_values = {'name': "test", 'is_parent': False, 'transport_url': cell_tp_url, 'weight_offset': 0.0, 'weight_scale': 0.0} mock_db_cell_create.assert_called_once_with(ctxt, exp_values) @ddt.ddt class CellV2CommandsTestCase(test.NoDBTestCase): USES_DB_SELF = True def setUp(self): super(CellV2CommandsTestCase, self).setUp() self.useFixture(nova_fixtures.Database()) self.useFixture(nova_fixtures.Database(database='api')) self.output = StringIO() self.useFixture(fixtures.MonkeyPatch('sys.stdout', self.output)) self.commands = manage.CellV2Commands() def test_map_cell_and_hosts(self): # Create some fake compute nodes and check if they get host mappings ctxt = context.RequestContext() values = { 'vcpus': 4, 'memory_mb': 4096, 'local_gb': 1024, 'vcpus_used': 2, 'memory_mb_used': 2048, 'local_gb_used': 512, 'hypervisor_type': 'Hyper-Dan-VM-ware', 'hypervisor_version': 1001, 'cpu_info': 'Schmintel i786', } for i in range(3): host = 'host%s' % i compute_node = objects.ComputeNode(ctxt, host=host, **values) compute_node.create() cell_transport_url = "fake://guest:devstack@127.0.0.1:9999/" self.commands.map_cell_and_hosts(cell_transport_url, name='ssd', verbose=True) cell_mapping_uuid = self.output.getvalue().strip() # Verify the cell mapping cell_mapping = objects.CellMapping.get_by_uuid(ctxt, cell_mapping_uuid) self.assertEqual('ssd', cell_mapping.name) self.assertEqual(cell_transport_url, cell_mapping.transport_url) # Verify the host mappings for i in range(3): host = 'host%s' % i host_mapping = objects.HostMapping.get_by_host(ctxt, host) self.assertEqual(cell_mapping.uuid, host_mapping.cell_mapping.uuid) def test_map_cell_and_hosts_duplicate(self): # Create a cell mapping and hosts and check that nothing new is created ctxt = context.RequestContext() cell_mapping_uuid = uuidutils.generate_uuid() cell_mapping = objects.CellMapping( ctxt, uuid=cell_mapping_uuid, name='fake', transport_url='fake://', database_connection='fake://') cell_mapping.create() # Create compute nodes that will map to the cell values = { 'vcpus': 4, 'memory_mb': 4096, 'local_gb': 1024, 'vcpus_used': 2, 'memory_mb_used': 2048, 'local_gb_used': 512, 'hypervisor_type': 'Hyper-Dan-VM-ware', 'hypervisor_version': 1001, 'cpu_info': 'Schmintel i786', } for i in range(3): host = 'host%s' % i compute_node = objects.ComputeNode(ctxt, host=host, **values) compute_node.create() host_mapping = objects.HostMapping( ctxt, host=host, cell_mapping=cell_mapping) host_mapping.create() cell_transport_url = "fake://guest:devstack@127.0.0.1:9999/" retval = self.commands.map_cell_and_hosts(cell_transport_url, name='ssd', verbose=True) self.assertEqual(0, retval) output = 
self.output.getvalue().strip() expected = '' for i in range(3): expected += ('Host host%s is already mapped to cell %s\n' % (i, cell_mapping_uuid)) expected += 'All hosts are already mapped to cell(s), exiting.' self.assertEqual(expected, output) def test_map_cell_and_hosts_partial_update(self): # Create a cell mapping and partial hosts and check that # missing HostMappings are created ctxt = context.RequestContext() cell_mapping_uuid = uuidutils.generate_uuid() cell_mapping = objects.CellMapping( ctxt, uuid=cell_mapping_uuid, name='fake', transport_url='fake://', database_connection='fake://') cell_mapping.create() # Create compute nodes that will map to the cell values = { 'vcpus': 4, 'memory_mb': 4096, 'local_gb': 1024, 'vcpus_used': 2, 'memory_mb_used': 2048, 'local_gb_used': 512, 'hypervisor_type': 'Hyper-Dan-VM-ware', 'hypervisor_version': 1001, 'cpu_info': 'Schmintel i786', } for i in range(3): host = 'host%s' % i compute_node = objects.ComputeNode(ctxt, host=host, **values) compute_node.create() # NOTE(danms): Create a second node on one compute to make sure # we handle that case compute_node = objects.ComputeNode(ctxt, host='host0', **values) compute_node.create() # Only create 2 existing HostMappings out of 3 for i in range(2): host = 'host%s' % i host_mapping = objects.HostMapping( ctxt, host=host, cell_mapping=cell_mapping) host_mapping.create() cell_transport_url = "fake://guest:devstack@127.0.0.1:9999/" self.commands.map_cell_and_hosts(cell_transport_url, name='ssd', verbose=True) # Verify the HostMapping for the last host was created host_mapping = objects.HostMapping.get_by_host(ctxt, 'host2') self.assertEqual(cell_mapping.uuid, host_mapping.cell_mapping.uuid) # Verify the output output = self.output.getvalue().strip() expected = '' for i in [0, 1, 0]: expected += ('Host host%s is already mapped to cell %s\n' % (i, cell_mapping_uuid)) # The expected CellMapping UUID for the last host should be the same expected += cell_mapping.uuid self.assertEqual(expected, output) def test_map_cell_and_hosts_no_hosts_found(self): cell_transport_url = "fake://guest:devstack@127.0.0.1:9999/" retval = self.commands.map_cell_and_hosts(cell_transport_url, name='ssd', verbose=True) self.assertEqual(0, retval) output = self.output.getvalue().strip() expected = 'No hosts found to map to cell, exiting.' 
self.assertEqual(expected, output) def test_map_cell_and_hosts_no_transport_url(self): retval = self.commands.map_cell_and_hosts() self.assertEqual(1, retval) output = self.output.getvalue().strip() expected = ('Must specify --transport-url if [DEFAULT]/transport_url ' 'is not set in the configuration file.') self.assertEqual(expected, output) def test_map_cell_and_hosts_transport_url_config(self): self.flags(transport_url = "fake://guest:devstack@127.0.0.1:9999/") retval = self.commands.map_cell_and_hosts() self.assertEqual(0, retval) @mock.patch.object(context, 'target_cell') def test_map_instances(self, mock_target_cell): ctxt = context.RequestContext('fake-user', 'fake_project') cell_uuid = uuidutils.generate_uuid() cell_mapping = objects.CellMapping( ctxt, uuid=cell_uuid, name='fake', transport_url='fake://', database_connection='fake://') cell_mapping.create() mock_target_cell.return_value.__enter__.return_value = ctxt instance_uuids = [] for i in range(3): uuid = uuidutils.generate_uuid() instance_uuids.append(uuid) objects.Instance(ctxt, project_id=ctxt.project_id, uuid=uuid).create() self.commands.map_instances(cell_uuid) for uuid in instance_uuids: inst_mapping = objects.InstanceMapping.get_by_instance_uuid(ctxt, uuid) self.assertEqual(ctxt.project_id, inst_mapping.project_id) self.assertEqual(cell_mapping.uuid, inst_mapping.cell_mapping.uuid) mock_target_cell.assert_called_once_with( test.MatchType(context.RequestContext), test.MatchObjPrims(cell_mapping)) @mock.patch.object(context, 'target_cell') def test_map_instances_duplicates(self, mock_target_cell): ctxt = context.RequestContext('fake-user', 'fake_project') cell_uuid = uuidutils.generate_uuid() cell_mapping = objects.CellMapping( ctxt, uuid=cell_uuid, name='fake', transport_url='fake://', database_connection='fake://') cell_mapping.create() mock_target_cell.return_value.__enter__.return_value = ctxt instance_uuids = [] for i in range(3): uuid = uuidutils.generate_uuid() instance_uuids.append(uuid) objects.Instance(ctxt, project_id=ctxt.project_id, uuid=uuid).create() objects.InstanceMapping(ctxt, project_id=ctxt.project_id, instance_uuid=instance_uuids[0], cell_mapping=cell_mapping).create() self.commands.map_instances(cell_uuid) for uuid in instance_uuids: inst_mapping = objects.InstanceMapping.get_by_instance_uuid(ctxt, uuid) self.assertEqual(ctxt.project_id, inst_mapping.project_id) mappings = objects.InstanceMappingList.get_by_project_id(ctxt, ctxt.project_id) self.assertEqual(3, len(mappings)) mock_target_cell.assert_called_once_with( test.MatchType(context.RequestContext), test.MatchObjPrims(cell_mapping)) @mock.patch.object(context, 'target_cell') def test_map_instances_two_batches(self, mock_target_cell): ctxt = context.RequestContext('fake-user', 'fake_project') cell_uuid = uuidutils.generate_uuid() cell_mapping = objects.CellMapping( ctxt, uuid=cell_uuid, name='fake', transport_url='fake://', database_connection='fake://') cell_mapping.create() mock_target_cell.return_value.__enter__.return_value = ctxt instance_uuids = [] # Batch size is 50 in map_instances for i in range(60): uuid = uuidutils.generate_uuid() instance_uuids.append(uuid) objects.Instance(ctxt, project_id=ctxt.project_id, uuid=uuid).create() ret = self.commands.map_instances(cell_uuid) self.assertEqual(0, ret) for uuid in instance_uuids: inst_mapping = objects.InstanceMapping.get_by_instance_uuid(ctxt, uuid) self.assertEqual(ctxt.project_id, inst_mapping.project_id) self.assertEqual(2, mock_target_cell.call_count) 
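        # mock's assert_called_with() only checks the most recent call, i.e.
        # the target_cell entry made for the second batch of instances.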
mock_target_cell.assert_called_with( test.MatchType(context.RequestContext), test.MatchObjPrims(cell_mapping)) @mock.patch.object(context, 'target_cell') def test_map_instances_max_count(self, mock_target_cell): ctxt = context.RequestContext('fake-user', 'fake_project') cell_uuid = uuidutils.generate_uuid() cell_mapping = objects.CellMapping( ctxt, uuid=cell_uuid, name='fake', transport_url='fake://', database_connection='fake://') cell_mapping.create() mock_target_cell.return_value.__enter__.return_value = ctxt instance_uuids = [] for i in range(6): uuid = uuidutils.generate_uuid() instance_uuids.append(uuid) objects.Instance(ctxt, project_id=ctxt.project_id, uuid=uuid).create() ret = self.commands.map_instances(cell_uuid, max_count=3) self.assertEqual(1, ret) for uuid in instance_uuids[:3]: # First three are mapped inst_mapping = objects.InstanceMapping.get_by_instance_uuid(ctxt, uuid) self.assertEqual(ctxt.project_id, inst_mapping.project_id) for uuid in instance_uuids[3:]: # Last three are not self.assertRaises(exception.InstanceMappingNotFound, objects.InstanceMapping.get_by_instance_uuid, ctxt, uuid) mock_target_cell.assert_called_once_with( test.MatchType(context.RequestContext), test.MatchObjPrims(cell_mapping)) @mock.patch.object(context, 'target_cell') def test_map_instances_marker_deleted(self, mock_target_cell): ctxt = context.RequestContext('fake-user', 'fake_project') cell_uuid = uuidutils.generate_uuid() cell_mapping = objects.CellMapping( ctxt, uuid=cell_uuid, name='fake', transport_url='fake://', database_connection='fake://') cell_mapping.create() mock_target_cell.return_value.__enter__.return_value = ctxt instance_uuids = [] for i in range(6): uuid = uuidutils.generate_uuid() instance_uuids.append(uuid) objects.Instance(ctxt, project_id=ctxt.project_id, uuid=uuid).create() ret = self.commands.map_instances(cell_uuid, max_count=3) self.assertEqual(1, ret) # Instances are mapped in the order created so we know the marker is # based off the third instance. 
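        # map_instances stores its progress marker as an InstanceMapping
        # whose instance_uuid is the last processed uuid with '-' replaced
        # by ' ' (presumably so it can never collide with a real instance
        # uuid); rebuild that form here so the marker row can be fetched
        # and destroyed, forcing the next run to start over.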
marker = instance_uuids[2].replace('-', ' ') marker_mapping = objects.InstanceMapping.get_by_instance_uuid(ctxt, marker) marker_mapping.destroy() ret = self.commands.map_instances(cell_uuid) self.assertEqual(0, ret) for uuid in instance_uuids: inst_mapping = objects.InstanceMapping.get_by_instance_uuid(ctxt, uuid) self.assertEqual(ctxt.project_id, inst_mapping.project_id) self.assertEqual(2, mock_target_cell.call_count) mock_target_cell.assert_called_with( test.MatchType(context.RequestContext), test.MatchObjPrims(cell_mapping)) def test_map_instances_validate_cell_uuid(self): # create a random cell_uuid which is invalid cell_uuid = uuidutils.generate_uuid() # check that it raises an exception self.assertRaises(exception.CellMappingNotFound, self.commands.map_instances, cell_uuid) def test_map_cell0(self): ctxt = context.RequestContext() database_connection = 'fake:/foobar//' self.commands.map_cell0(database_connection) cell_mapping = objects.CellMapping.get_by_uuid(ctxt, objects.CellMapping.CELL0_UUID) self.assertEqual('cell0', cell_mapping.name) self.assertEqual('none:///', cell_mapping.transport_url) self.assertEqual(database_connection, cell_mapping.database_connection) @mock.patch.object(manage.CellV2Commands, '_map_cell0', new=mock.Mock()) def test_map_cell0_returns_0_on_successful_create(self): self.assertEqual(0, self.commands.map_cell0()) @mock.patch.object(manage.CellV2Commands, '_map_cell0') def test_map_cell0_returns_0_if_cell0_already_exists(self, _map_cell0): _map_cell0.side_effect = db_exc.DBDuplicateEntry exit_code = self.commands.map_cell0() self.assertEqual(0, exit_code) output = self.output.getvalue().strip() self.assertEqual('Cell0 is already setup', output) def test_map_cell0_default_database(self): CONF.set_default('connection', 'fake://netloc/nova', group='database') ctxt = context.RequestContext() self.commands.map_cell0() cell_mapping = objects.CellMapping.get_by_uuid(ctxt, objects.CellMapping.CELL0_UUID) self.assertEqual('cell0', cell_mapping.name) self.assertEqual('none:///', cell_mapping.transport_url) self.assertEqual('fake://netloc/nova_cell0', cell_mapping.database_connection) @ddt.data('mysql+pymysql://nova:abcd0123:AB@controller/%s', 'mysql+pymysql://nova:abcd0123?AB@controller/%s', 'mysql+pymysql://nova:abcd0123@AB@controller/%s', 'mysql+pymysql://nova:abcd0123/AB@controller/%s', 'mysql+pymysql://test:abcd0123/AB@controller/%s?charset=utf8') def test_map_cell0_default_database_special_characters(self, connection): """Tests that a URL with special characters, like in the credentials, is handled properly. """ decoded_connection = connection % 'nova' self.flags(connection=decoded_connection, group='database') ctxt = context.RequestContext() self.commands.map_cell0() cell_mapping = objects.CellMapping.get_by_uuid( ctxt, objects.CellMapping.CELL0_UUID) self.assertEqual('cell0', cell_mapping.name) self.assertEqual('none:///', cell_mapping.transport_url) self.assertEqual( connection % 'nova_cell0', cell_mapping.database_connection) # Delete the cell mapping for the next iteration. 
cell_mapping.destroy() def _test_migrate_simple_command(self, cell0_sync_fail=False): ctxt = context.RequestContext() CONF.set_default('connection', 'fake://netloc/nova', group='database') values = { 'vcpus': 4, 'memory_mb': 4096, 'local_gb': 1024, 'vcpus_used': 2, 'memory_mb_used': 2048, 'local_gb_used': 512, 'hypervisor_type': 'Hyper-Dan-VM-ware', 'hypervisor_version': 1001, 'cpu_info': 'Schmintel i786', } for i in range(3): host = 'host%s' % i compute_node = objects.ComputeNode(ctxt, host=host, **values) compute_node.create() transport_url = "fake://guest:devstack@127.0.0.1:9999/" cell_uuid = uuidsentinel.cell @mock.patch('nova.db.migration.db_sync') @mock.patch.object(context, 'target_cell') @mock.patch.object(uuidutils, 'generate_uuid', return_value=cell_uuid) def _test(mock_gen_uuid, mock_target_cell, mock_db_sync): if cell0_sync_fail: mock_db_sync.side_effect = db_exc.DBError result = self.commands.simple_cell_setup(transport_url) mock_db_sync.assert_called() return result r = _test() self.assertEqual(0, r) # Check cell0 from default cell_mapping = objects.CellMapping.get_by_uuid(ctxt, objects.CellMapping.CELL0_UUID) self.assertEqual('cell0', cell_mapping.name) self.assertEqual('none:///', cell_mapping.transport_url) self.assertEqual('fake://netloc/nova_cell0', cell_mapping.database_connection) # Verify the cell mapping cell_mapping = objects.CellMapping.get_by_uuid(ctxt, cell_uuid) self.assertEqual(transport_url, cell_mapping.transport_url) # Verify the host mappings for i in range(3): host = 'host%s' % i host_mapping = objects.HostMapping.get_by_host(ctxt, host) self.assertEqual(cell_mapping.uuid, host_mapping.cell_mapping.uuid) def test_simple_command_single(self): self._test_migrate_simple_command() def test_simple_command_cell0_fail(self): # Make sure that if db_sync fails, we still do all the other # bits self._test_migrate_simple_command(cell0_sync_fail=True) def test_simple_command_multiple(self): # Make sure that the command is idempotent self._test_migrate_simple_command() self._test_migrate_simple_command() def test_simple_command_cellsv1(self): self.flags(enable=True, group='cells') self.assertEqual(2, self.commands.simple_cell_setup('foo')) def test_instance_verify_no_mapping(self): r = self.commands.verify_instance(uuidsentinel.instance) self.assertEqual(1, r) @mock.patch('nova.objects.InstanceMapping.get_by_instance_uuid') def test_instance_verify_has_only_instance_mapping(self, mock_get): im = objects.InstanceMapping(cell_mapping=None) mock_get.return_value = im r = self.commands.verify_instance(uuidsentinel.instance) self.assertEqual(2, r) @mock.patch('nova.objects.InstanceMapping.get_by_instance_uuid') @mock.patch('nova.objects.Instance.get_by_uuid') @mock.patch.object(context, 'target_cell') def test_instance_verify_has_all_mappings(self, mock_target_cell, mock_get2, mock_get1): cm = objects.CellMapping(name='foo', uuid=uuidsentinel.cel) im = objects.InstanceMapping(cell_mapping=cm) mock_get1.return_value = im mock_get2.return_value = None r = self.commands.verify_instance(uuidsentinel.instance) self.assertEqual(0, r) def test_instance_verify_quiet(self): # NOTE(danms): This will hit the first use of the say() wrapper # and reasonably verify that path self.assertEqual(1, self.commands.verify_instance(uuidsentinel.foo, quiet=True)) @mock.patch.object(context, 'target_cell') def test_instance_verify_has_instance_mapping_but_no_instance(self, mock_target_cell): ctxt = context.RequestContext('fake-user', 'fake_project') cell_uuid = uuidutils.generate_uuid() 
cell_mapping = objects.CellMapping(context=ctxt, uuid=cell_uuid, database_connection='fake:///db', transport_url='fake:///mq') cell_mapping.create() mock_target_cell.return_value.__enter__.return_value = ctxt uuid = uuidutils.generate_uuid() objects.Instance(ctxt, project_id=ctxt.project_id, uuid=uuid).create() objects.InstanceMapping(ctxt, project_id=ctxt.project_id, cell_mapping=cell_mapping, instance_uuid=uuid)\ .create() # a scenario where an instance is deleted, but not archived. inst = objects.Instance.get_by_uuid(ctxt, uuid) inst.destroy() r = self.commands.verify_instance(uuid) self.assertEqual(3, r) self.assertIn('has been deleted', self.output.getvalue()) # a scenario where there is only the instance mapping but no instance # like when an instance has been archived but the instance mapping # was not deleted. uuid = uuidutils.generate_uuid() objects.InstanceMapping(ctxt, project_id=ctxt.project_id, cell_mapping=cell_mapping, instance_uuid=uuid)\ .create() r = self.commands.verify_instance(uuid) self.assertEqual(4, r) self.assertIn('has been archived', self.output.getvalue()) def _return_compute_nodes(self, ctxt, num=1): nodes = [] for i in range(num): nodes.append(objects.ComputeNode(ctxt, uuid=uuidutils.generate_uuid(), host='host%s' % i, vcpus=1, memory_mb=1, local_gb=1, vcpus_used=0, memory_mb_used=0, local_gb_used=0, hypervisor_type='', hypervisor_version=1, cpu_info='')) return nodes @mock.patch.object(context, 'target_cell') @mock.patch.object(objects.CellMappingList, 'get_all') def test_discover_hosts_single_cell(self, mock_cell_mapping_get_all, mock_target_cell): ctxt = context.RequestContext() compute_nodes = self._return_compute_nodes(ctxt) for compute_node in compute_nodes: compute_node.create() cell_mapping = objects.CellMapping(context=ctxt, uuid=uuidutils.generate_uuid(), database_connection='fake:///db', transport_url='fake:///mq') cell_mapping.create() mock_target_cell.return_value.__enter__.return_value = ctxt self.commands.discover_hosts(cell_uuid=cell_mapping.uuid) # Check that the host mappings were created for i, compute_node in enumerate(compute_nodes): host_mapping = objects.HostMapping.get_by_host(ctxt, compute_node.host) self.assertEqual('host%s' % i, host_mapping.host) mock_target_cell.assert_called_once_with( test.MatchType(context.RequestContext), test.MatchObjPrims(cell_mapping)) mock_cell_mapping_get_all.assert_not_called() @mock.patch.object(context, 'target_cell') @mock.patch.object(objects.CellMappingList, 'get_all') def test_discover_hosts_single_cell_no_new_hosts( self, mock_cell_mapping_get_all, mock_target_cell): ctxt = context.RequestContext() # Create some compute nodes and matching host mappings cell_mapping = objects.CellMapping(context=ctxt, uuid=uuidutils.generate_uuid(), database_connection='fake:///db', transport_url='fake:///mq') cell_mapping.create() compute_nodes = self._return_compute_nodes(ctxt) for compute_node in compute_nodes: compute_node.create() host_mapping = objects.HostMapping(context=ctxt, host=compute_node.host, cell_mapping=cell_mapping) host_mapping.create() with mock.patch('nova.objects.HostMapping.create') as mock_create: self.commands.discover_hosts(cell_uuid=cell_mapping.uuid) mock_create.assert_not_called() mock_target_cell.assert_called_once_with( test.MatchType(context.RequestContext), test.MatchObjPrims(cell_mapping)) mock_cell_mapping_get_all.assert_not_called() @mock.patch.object(objects.CellMapping, 'get_by_uuid') def test_discover_hosts_multiple_cells(self, mock_cell_mapping_get_by_uuid): # Create in-memory 
databases for cell1 and cell2 to let target_cell # run for real. We want one compute node in cell1's db and the other # compute node in cell2's db. cell_dbs = nova_fixtures.CellDatabases() cell_dbs.add_cell_database('fake:///db1') cell_dbs.add_cell_database('fake:///db2') self.useFixture(cell_dbs) ctxt = context.RequestContext() cell_mapping0 = objects.CellMapping( context=ctxt, uuid=objects.CellMapping.CELL0_UUID, database_connection='fake:///db0', transport_url='none:///') cell_mapping0.create() cell_mapping1 = objects.CellMapping(context=ctxt, uuid=uuidutils.generate_uuid(), database_connection='fake:///db1', transport_url='fake:///mq1') cell_mapping1.create() cell_mapping2 = objects.CellMapping(context=ctxt, uuid=uuidutils.generate_uuid(), database_connection='fake:///db2', transport_url='fake:///mq2') cell_mapping2.create() compute_nodes = self._return_compute_nodes(ctxt, num=2) # Create the first compute node in cell1's db with context.target_cell(ctxt, cell_mapping1) as cctxt: compute_nodes[0]._context = cctxt compute_nodes[0].create() # Create the second compute node in cell2's db with context.target_cell(ctxt, cell_mapping2) as cctxt: compute_nodes[1]._context = cctxt compute_nodes[1].create() self.commands.discover_hosts(verbose=True) output = self.output.getvalue().strip() self.assertNotEqual('', output) # Check that the host mappings were created for i, compute_node in enumerate(compute_nodes): host_mapping = objects.HostMapping.get_by_host(ctxt, compute_node.host) self.assertEqual('host%s' % i, host_mapping.host) mock_cell_mapping_get_by_uuid.assert_not_called() @mock.patch('nova.objects.host_mapping.discover_hosts') def test_discover_hosts_strict(self, mock_discover_hosts): # Check for exit code 0 if unmapped hosts found mock_discover_hosts.return_value = ['fake'] self.assertEqual(self.commands.discover_hosts(strict=True), 0) # Check for exit code 1 if no unmapped hosts are found mock_discover_hosts.return_value = [] self.assertEqual(self.commands.discover_hosts(strict=True), 1) # Check the return when strict=False self.assertIsNone(self.commands.discover_hosts()) def test_validate_transport_url_in_conf(self): from_conf = 'fake://user:pass@host:port/' self.flags(transport_url=from_conf) self.assertEqual(from_conf, self.commands._validate_transport_url(None)) def test_validate_transport_url_on_command_line(self): from_cli = 'fake://user:pass@host:port/' self.assertEqual(from_cli, self.commands._validate_transport_url(from_cli)) def test_validate_transport_url_missing(self): self.assertIsNone(self.commands._validate_transport_url(None)) def test_validate_transport_url_favors_command_line(self): self.flags(transport_url='fake://user:pass@host:port/') from_cli = 'fake://otheruser:otherpass@otherhost:otherport' self.assertEqual(from_cli, self.commands._validate_transport_url(from_cli)) def test_non_unique_transport_url_database_connection_checker(self): ctxt = context.RequestContext() cell1 = objects.CellMapping(context=ctxt, uuid=uuidsentinel.cell1, name='cell1', transport_url='fake://mq1', database_connection='fake:///db1') cell1.create() objects.CellMapping(context=ctxt, uuid=uuidsentinel.cell2, name='cell2', transport_url='fake://mq2', database_connection='fake:///db2').create() resultf = self.commands.\ _non_unique_transport_url_database_connection_checker( ctxt, None, 'fake://mq3', 'fake:///db3') resultt = self.commands.\ _non_unique_transport_url_database_connection_checker( ctxt, None, 'fake://mq1', 'fake:///db1') resultd = self.commands.\
_non_unique_transport_url_database_connection_checker( ctxt, cell1, 'fake://mq1', 'fake:///db1') self.assertFalse(resultf) self.assertTrue(resultt) self.assertFalse(resultd) self.assertIn('exists', self.output.getvalue()) def test_create_cell_use_params(self): ctxt = context.get_context() kwargs = dict( name='fake-name', transport_url='fake-transport-url', database_connection='fake-db-connection') status = self.commands.create_cell(verbose=True, **kwargs) self.assertEqual(0, status) cell2_uuid = self.output.getvalue().strip() self.commands.create_cell(**kwargs) cell2 = objects.CellMapping.get_by_uuid(ctxt, cell2_uuid) self.assertEqual(kwargs['name'], cell2.name) self.assertEqual(kwargs['database_connection'], cell2.database_connection) self.assertEqual(kwargs['transport_url'], cell2.transport_url) def test_create_cell_use_config_values(self): settings = dict( transport_url='fake-conf-transport-url', database_connection='fake-conf-db-connection') self.flags(connection=settings['database_connection'], group='database') self.flags(transport_url=settings['transport_url']) ctxt = context.get_context() status = self.commands.create_cell(verbose=True) self.assertEqual(0, status) cell1_uuid = self.output.getvalue().strip() cell1 = objects.CellMapping.get_by_uuid(ctxt, cell1_uuid) self.assertIsNone(cell1.name) self.assertEqual(settings['database_connection'], cell1.database_connection) self.assertEqual(settings['transport_url'], cell1.transport_url) def test_create_cell_failed_if_non_unique(self): kwargs = dict( name='fake-name', transport_url='fake-transport-url', database_connection='fake-db-connection') status1 = self.commands.create_cell(verbose=True, **kwargs) status2 = self.commands.create_cell(verbose=True, **kwargs) self.assertEqual(0, status1) self.assertEqual(2, status2) self.assertIn('exists', self.output.getvalue()) def test_create_cell_failed_if_no_transport_url(self): status = self.commands.create_cell() self.assertEqual(1, status) self.assertIn('--transport-url', self.output.getvalue()) def test_create_cell_failed_if_no_database_connection(self): self.flags(connection=None, group='database') status = self.commands.create_cell(transport_url='fake-transport-url') self.assertEqual(1, status) self.assertIn('--database_connection', self.output.getvalue()) def test_list_cells_no_cells_verbose_false(self): ctxt = context.RequestContext() cell_mapping0 = objects.CellMapping( context=ctxt, uuid=uuidsentinel.map0, database_connection='fake://user1:pass1@host1/db0', transport_url='none://user1:pass1@host1/', name='cell0') cell_mapping0.create() cell_mapping1 = objects.CellMapping( context=ctxt, uuid=uuidsentinel.map1, database_connection='fake://user1@host1/db0', transport_url='none://user1@host1/vhost1', name='cell1') cell_mapping1.create() self.assertEqual(0, self.commands.list_cells()) output = self.output.getvalue().strip() self.assertEqual('''\ +-------+--------------------------------------+---------------------------+-----------------------------+ | Name | UUID | Transport URL | Database Connection | +-------+--------------------------------------+---------------------------+-----------------------------+ | cell0 | %(uuid_map0)s | none://user1:****@host1/ | fake://user1:****@host1/db0 | | cell1 | %(uuid_map1)s | none://user1@host1/vhost1 | fake://user1@host1/db0 | +-------+--------------------------------------+---------------------------+-----------------------------+''' % # noqa {"uuid_map0": uuidsentinel.map0, "uuid_map1": uuidsentinel.map1}, output) def 
test_list_cells_multiple_sorted_verbose_true(self): ctxt = context.RequestContext() cell_mapping0 = objects.CellMapping( context=ctxt, uuid=uuidsentinel.map0, database_connection='fake:///db0', transport_url='none:///', name='cell0') cell_mapping0.create() cell_mapping1 = objects.CellMapping( context=ctxt, uuid=uuidsentinel.map1, database_connection='fake:///dblon', transport_url='fake:///mqlon', name='london') cell_mapping1.create() cell_mapping2 = objects.CellMapping( context=ctxt, uuid=uuidsentinel.map2, database_connection='fake:///dbdal', transport_url='fake:///mqdal', name='dallas') cell_mapping2.create() self.assertEqual(0, self.commands.list_cells(verbose=True)) output = self.output.getvalue().strip() self.assertEqual('''\ +--------+--------------------------------------+---------------+---------------------+ | Name | UUID | Transport URL | Database Connection | +--------+--------------------------------------+---------------+---------------------+ | cell0 | %(uuid_map0)s | none:/// | fake:///db0 | | dallas | %(uuid_map2)s | fake:///mqdal | fake:///dbdal | | london | %(uuid_map1)s | fake:///mqlon | fake:///dblon | +--------+--------------------------------------+---------------+---------------------+''' % # noqa {"uuid_map0": uuidsentinel.map0, "uuid_map1": uuidsentinel.map1, "uuid_map2": uuidsentinel.map2}, output) def test_delete_cell_not_found(self): """Tests trying to delete a cell that is not found by uuid.""" cell_uuid = uuidutils.generate_uuid() self.assertEqual(1, self.commands.delete_cell(cell_uuid)) output = self.output.getvalue().strip() self.assertEqual('Cell with uuid %s was not found.' % cell_uuid, output) def test_delete_cell_host_mappings_exist(self): """Tests trying to delete a cell which has host mappings.""" cell_uuid = uuidutils.generate_uuid() ctxt = context.get_admin_context() # create the cell mapping cm = objects.CellMapping( context=ctxt, uuid=cell_uuid, database_connection='fake:///db', transport_url='fake:///mq') cm.create() # create a host mapping in this cell hm = objects.HostMapping( context=ctxt, host='fake-host', cell_mapping=cm) hm.create() self.assertEqual(2, self.commands.delete_cell(cell_uuid)) output = self.output.getvalue().strip() self.assertIn('There are existing hosts mapped to cell', output) @mock.patch.object(objects.InstanceList, 'get_all') def test_delete_cell_instance_mappings_exist_with_instances( self, mock_get_all): """Tests trying to delete a cell which has instance mappings.""" cell_uuid = uuidutils.generate_uuid() ctxt = context.get_admin_context() mock_get_all.return_value = [objects.Instance( ctxt, uuid=uuidsentinel.instance)] # create the cell mapping cm = objects.CellMapping( context=ctxt, uuid=cell_uuid, database_connection='fake:///db', transport_url='fake:///mq') cm.create() # create an instance mapping in this cell im = objects.InstanceMapping( context=ctxt, instance_uuid=uuidutils.generate_uuid(), cell_mapping=cm, project_id=uuidutils.generate_uuid()) im.create() self.assertEqual(3, self.commands.delete_cell(cell_uuid)) output = self.output.getvalue().strip() self.assertIn('There are existing instances mapped to cell', output) @mock.patch.object(objects.InstanceList, 'get_all', return_value=[]) def test_delete_cell_instance_mappings_exist_without_instances( self, mock_get_all): """Tests trying to delete a cell which has instance mappings.""" cell_uuid = uuidutils.generate_uuid() ctxt = context.get_admin_context() # create the cell mapping cm = objects.CellMapping( context=ctxt, uuid=cell_uuid, 
database_connection='fake:///db', transport_url='fake:///mq') cm.create() # create an instance mapping in this cell im = objects.InstanceMapping( context=ctxt, instance_uuid=uuidutils.generate_uuid(), cell_mapping=cm, project_id=uuidutils.generate_uuid()) im.create() self.assertEqual(4, self.commands.delete_cell(cell_uuid)) output = self.output.getvalue().strip() self.assertIn('There are instance mappings to cell with uuid', output) self.assertIn('but all instances have been deleted in the cell.', output) self.assertIn("So execute 'nova-manage db archive_deleted_rows' to " "delete the instance mappings.", output) def test_delete_cell_success_without_host_mappings(self): """Tests trying to delete an empty cell.""" cell_uuid = uuidutils.generate_uuid() ctxt = context.get_admin_context() # create the cell mapping cm = objects.CellMapping( context=ctxt, uuid=cell_uuid, database_connection='fake:///db', transport_url='fake:///mq') cm.create() self.assertEqual(0, self.commands.delete_cell(cell_uuid)) output = self.output.getvalue().strip() self.assertEqual('', output) @mock.patch.object(objects.HostMapping, 'destroy') @mock.patch.object(objects.CellMapping, 'destroy') def test_delete_cell_success_with_host_mappings(self, mock_cell_destroy, mock_hm_destroy): """Tests force-deleting a cell which has host mappings.""" ctxt = context.get_admin_context() # create the cell mapping cm = objects.CellMapping( context=ctxt, uuid=uuidsentinel.cell1, database_connection='fake:///db', transport_url='fake:///mq') cm.create() # create a host mapping in this cell hm = objects.HostMapping( context=ctxt, host='fake-host', cell_mapping=cm) hm.create() self.assertEqual(0, self.commands.delete_cell(uuidsentinel.cell1, force=True)) output = self.output.getvalue().strip() self.assertEqual('', output) mock_hm_destroy.assert_called_once_with() mock_cell_destroy.assert_called_once_with() def test_update_cell_not_found(self): self.assertEqual(1, self.commands.update_cell( uuidsentinel.cell1, 'foo', 'fake://new', 'fake:///new')) self.assertIn('not found', self.output.getvalue()) def test_update_cell_failed_if_non_unique_transport_db_urls(self): ctxt = context.get_admin_context() objects.CellMapping(context=ctxt, uuid=uuidsentinel.cell1, name='cell1', transport_url='fake://mq1', database_connection='fake:///db1').create() objects.CellMapping(context=ctxt, uuid=uuidsentinel.cell2, name='cell2', transport_url='fake://mq2', database_connection='fake:///db2').create() cell2_update1 = self.commands.update_cell( uuidsentinel.cell2, 'foo', 'fake://mq1', 'fake:///db1') self.assertEqual(3, cell2_update1) self.assertIn('exists', self.output.getvalue()) cell2_update2 = self.commands.update_cell( uuidsentinel.cell2, 'foo', 'fake://mq1', 'fake:///db3') self.assertEqual(3, cell2_update2) self.assertIn('exists', self.output.getvalue()) cell2_update3 = self.commands.update_cell( uuidsentinel.cell2, 'foo', 'fake://mq3', 'fake:///db1') self.assertEqual(3, cell2_update3) self.assertIn('exists', self.output.getvalue()) cell2_update4 = self.commands.update_cell( uuidsentinel.cell2, 'foo', 'fake://mq3', 'fake:///db3') self.assertEqual(0, cell2_update4) def test_update_cell_failed(self): ctxt = context.get_admin_context() objects.CellMapping(context=ctxt, uuid=uuidsentinel.cell1, name='cell1', transport_url='fake://mq', database_connection='fake:///db').create() with mock.patch('nova.objects.CellMapping.save') as mock_save: mock_save.side_effect = Exception self.assertEqual(2, self.commands.update_cell( uuidsentinel.cell1, 'foo', 'fake://new',
'fake:///new')) self.assertIn('Unable to update', self.output.getvalue()) def test_update_cell_success(self): ctxt = context.get_admin_context() objects.CellMapping(context=ctxt, uuid=uuidsentinel.cell1, name='cell1', transport_url='fake://mq', database_connection='fake:///db').create() self.assertEqual(0, self.commands.update_cell( uuidsentinel.cell1, 'foo', 'fake://new', 'fake:///new')) cm = objects.CellMapping.get_by_uuid(ctxt, uuidsentinel.cell1) self.assertEqual('foo', cm.name) self.assertEqual('fake://new', cm.transport_url) self.assertEqual('fake:///new', cm.database_connection) output = self.output.getvalue().strip() self.assertEqual('', output) def test_update_cell_success_defaults(self): ctxt = context.get_admin_context() objects.CellMapping(context=ctxt, uuid=uuidsentinel.cell1, name='cell1', transport_url='fake://mq', database_connection='fake:///db').create() self.assertEqual(0, self.commands.update_cell(uuidsentinel.cell1)) cm = objects.CellMapping.get_by_uuid(ctxt, uuidsentinel.cell1) self.assertEqual('cell1', cm.name) expected_transport_url = CONF.transport_url or 'fake://mq' self.assertEqual(expected_transport_url, cm.transport_url) expected_db_connection = CONF.database.connection or 'fake:///db' self.assertEqual(expected_db_connection, cm.database_connection) output = self.output.getvalue().strip() self.assertEqual('', output) def test_list_hosts(self): ctxt = context.get_admin_context() # create the cell mapping cm1 = objects.CellMapping( context=ctxt, uuid=uuidsentinel.map0, name='london', database_connection='fake:///db', transport_url='fake:///mq') cm1.create() cm2 = objects.CellMapping( context=ctxt, uuid=uuidsentinel.map1, name='dallas', database_connection='fake:///db', transport_url='fake:///mq') cm2.create() # create a host mapping in another cell hm1 = objects.HostMapping( context=ctxt, host='fake-host-1', cell_mapping=cm1) hm1.create() hm2 = objects.HostMapping( context=ctxt, host='fake-host-2', cell_mapping=cm2) hm2.create() self.assertEqual(0, self.commands.list_hosts()) output = self.output.getvalue().strip() self.assertEqual('''\ +-----------+--------------------------------------+-------------+ | Cell Name | Cell UUID | Hostname | +-----------+--------------------------------------+-------------+ | london | %(uuid_map0)s | fake-host-1 | | dallas | %(uuid_map1)s | fake-host-2 | +-----------+--------------------------------------+-------------+''' % {"uuid_map0": uuidsentinel.map0, "uuid_map1": uuidsentinel.map1}, output) def test_list_hosts_in_cell(self): ctxt = context.get_admin_context() # create the cell mapping cm1 = objects.CellMapping( context=ctxt, uuid=uuidsentinel.map0, name='london', database_connection='fake:///db', transport_url='fake:///mq') cm1.create() cm2 = objects.CellMapping( context=ctxt, uuid=uuidsentinel.map1, name='dallas', database_connection='fake:///db', transport_url='fake:///mq') cm2.create() # create a host mapping in another cell hm1 = objects.HostMapping( context=ctxt, host='fake-host-1', cell_mapping=cm1) hm1.create() hm2 = objects.HostMapping( context=ctxt, host='fake-host-2', cell_mapping=cm2) hm2.create() self.assertEqual(0, self.commands.list_hosts( cell_uuid=uuidsentinel.map0)) output = self.output.getvalue().strip() self.assertEqual('''\ +-----------+--------------------------------------+-------------+ | Cell Name | Cell UUID | Hostname | +-----------+--------------------------------------+-------------+ | london | %(uuid_map0)s | fake-host-1 | +-----------+--------------------------------------+-------------+''' % 
{"uuid_map0": uuidsentinel.map0}, output) def test_list_hosts_cell_not_found(self): """Tests trying to delete a host but a specified cell is not found.""" self.assertEqual(1, self.commands.list_hosts( cell_uuid=uuidsentinel.cell1)) output = self.output.getvalue().strip() self.assertEqual( 'Cell with uuid %s was not found.' % uuidsentinel.cell1, output) def test_delete_host_cell_not_found(self): """Tests trying to delete a host but a specified cell is not found.""" self.assertEqual(1, self.commands.delete_host(uuidsentinel.cell1, 'fake-host')) output = self.output.getvalue().strip() self.assertEqual( 'Cell with uuid %s was not found.' % uuidsentinel.cell1, output) def test_delete_host_host_not_found(self): """Tests trying to delete a host but the host is not found.""" ctxt = context.get_admin_context() # create the cell mapping cm = objects.CellMapping( context=ctxt, uuid=uuidsentinel.cell1, database_connection='fake:///db', transport_url='fake:///mq') cm.create() self.assertEqual(2, self.commands.delete_host(uuidsentinel.cell1, 'fake-host')) output = self.output.getvalue().strip() self.assertEqual('The host fake-host was not found.', output) def test_delete_host_host_not_in_cell(self): """Tests trying to delete a host but the host does not belongs to a specified cell. """ ctxt = context.get_admin_context() # create the cell mapping cm1 = objects.CellMapping( context=ctxt, uuid=uuidsentinel.cell1, database_connection='fake:///db', transport_url='fake:///mq') cm1.create() cm2 = objects.CellMapping( context=ctxt, uuid=uuidsentinel.cell2, database_connection='fake:///db', transport_url='fake:///mq') cm2.create() # create a host mapping in another cell hm = objects.HostMapping( context=ctxt, host='fake-host', cell_mapping=cm2) hm.create() self.assertEqual(3, self.commands.delete_host(uuidsentinel.cell1, 'fake-host')) output = self.output.getvalue().strip() self.assertEqual(('The host fake-host was not found in the cell %s.' 
% uuidsentinel.cell1), output) @mock.patch.object(objects.InstanceList, 'get_by_host') @mock.patch.object(objects.ComputeNodeList, 'get_all_by_host') def test_delete_host_instances_exist(self, mock_get_cn, mock_get_by_host): """Tests trying to delete a host but the host has instances.""" ctxt = context.get_admin_context() # create the cell mapping cm1 = objects.CellMapping( context=ctxt, uuid=uuidsentinel.cell1, database_connection='fake:///db', transport_url='fake:///mq') cm1.create() # create a host mapping in the cell hm = objects.HostMapping( context=ctxt, host='fake-host', cell_mapping=cm1) hm.create() mock_get_by_host.return_value = [objects.Instance( ctxt, uuid=uuidsentinel.instance)] mock_get_cn.return_value = [] self.assertEqual(4, self.commands.delete_host(uuidsentinel.cell1, 'fake-host')) output = self.output.getvalue().strip() self.assertEqual('There are instances on the host fake-host.', output) mock_get_by_host.assert_called_once_with( test.MatchType(context.RequestContext), 'fake-host') @mock.patch.object(objects.InstanceList, 'get_by_host', return_value=[]) @mock.patch.object(objects.HostMapping, 'destroy') @mock.patch.object(objects.ComputeNodeList, 'get_all_by_host') def test_delete_host_success(self, mock_get_cn, mock_destroy, mock_get_by_host): """Tests trying to delete a host that has no instances.""" ctxt = context.get_admin_context() # create the cell mapping cm1 = objects.CellMapping( context=ctxt, uuid=uuidsentinel.cell1, database_connection='fake:///db', transport_url='fake:///mq') cm1.create() # create a host mapping in the cell hm = objects.HostMapping( context=ctxt, host='fake-host', cell_mapping=cm1) hm.create() mock_get_cn.return_value = [mock.MagicMock(), mock.MagicMock()] self.assertEqual(0, self.commands.delete_host(uuidsentinel.cell1, 'fake-host')) output = self.output.getvalue().strip() self.assertEqual('', output) mock_get_by_host.assert_called_once_with( test.MatchType(context.RequestContext), 'fake-host') mock_destroy.assert_called_once_with() for node in mock_get_cn.return_value: self.assertEqual(0, node.mapped) node.save.assert_called_once_with() class TestNovaManageMain(test.NoDBTestCase): """Tests the nova-manage:main() setup code.""" def setUp(self): super(TestNovaManageMain, self).setUp() self.output = StringIO() self.useFixture(fixtures.MonkeyPatch('sys.stdout', self.output)) @mock.patch.object(manage.config, 'parse_args') @mock.patch.object(manage, 'CONF') def test_error_traceback(self, mock_conf, mock_parse_args): with mock.patch.object(manage.cmd_common, 'get_action_fn', side_effect=test.TestingException('oops')): mock_conf.post_mortem = False self.assertEqual(1, manage.main()) # assert the traceback is dumped to stdout output = self.output.getvalue() self.assertIn('An error has occurred', output) self.assertIn('Traceback', output) self.assertIn('oops', output) @mock.patch('pdb.post_mortem') @mock.patch.object(manage.config, 'parse_args') @mock.patch.object(manage, 'CONF') def test_error_post_mortem(self, mock_conf, mock_parse_args, mock_pm): with mock.patch.object(manage.cmd_common, 'get_action_fn', side_effect=test.TestingException('oops')): mock_conf.post_mortem = True self.assertEqual(1, manage.main()) self.assertTrue(mock_pm.called) nova-17.0.1/nova/tests/unit/test_safeutils.py0000666000175000017500000000602513250073126021265 0ustar zuulzuul00000000000000# Copyright 2011 Justin Santa Barbara # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License.
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import functools from nova import safe_utils from nova import test def get_closure(): x = 1 def wrapper(self, instance, red=None, blue=None): return x return wrapper class WrappedCodeTestCase(test.NoDBTestCase): """Test the get_wrapped_function utility method.""" def _wrapper(self, function): @functools.wraps(function) def decorated_function(self, *args, **kwargs): function(self, *args, **kwargs) return decorated_function def test_single_wrapped(self): @self._wrapper def wrapped(self, instance, red=None, blue=None): pass func = safe_utils.get_wrapped_function(wrapped) func_code = func.__code__ self.assertEqual(4, len(func_code.co_varnames)) self.assertIn('self', func_code.co_varnames) self.assertIn('instance', func_code.co_varnames) self.assertIn('red', func_code.co_varnames) self.assertIn('blue', func_code.co_varnames) def test_double_wrapped(self): @self._wrapper @self._wrapper def wrapped(self, instance, red=None, blue=None): pass func = safe_utils.get_wrapped_function(wrapped) func_code = func.__code__ self.assertEqual(4, len(func_code.co_varnames)) self.assertIn('self', func_code.co_varnames) self.assertIn('instance', func_code.co_varnames) self.assertIn('red', func_code.co_varnames) self.assertIn('blue', func_code.co_varnames) def test_triple_wrapped(self): @self._wrapper @self._wrapper @self._wrapper def wrapped(self, instance, red=None, blue=None): pass func = safe_utils.get_wrapped_function(wrapped) func_code = func.__code__ self.assertEqual(4, len(func_code.co_varnames)) self.assertIn('self', func_code.co_varnames) self.assertIn('instance', func_code.co_varnames) self.assertIn('red', func_code.co_varnames) self.assertIn('blue', func_code.co_varnames) def test_closure(self): closure = get_closure() func = safe_utils.get_wrapped_function(closure) func_code = func.__code__ self.assertEqual(4, len(func_code.co_varnames)) self.assertIn('self', func_code.co_varnames) self.assertIn('instance', func_code.co_varnames) self.assertIn('red', func_code.co_varnames) self.assertIn('blue', func_code.co_varnames) nova-17.0.1/nova/tests/unit/test_notifications.py0000666000175000017500000006205313250073126022142 0ustar zuulzuul00000000000000# Copyright (c) 2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
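# Hedged sketch of the closure-walking helper exercised by the
# test_safeutils.py tests just above (illustrative only; see
# nova/safe_utils.py for the real get_wrapped_function):
def _get_wrapped_sketch(function):
    # Dig through the decorator's closure cells for an inner callable
    # and recurse; this assumes functools.wraps-style decorators keep
    # the wrapped target in their closure, as the tests' _wrapper does.
    for cell in function.__closure__ or ():
        if callable(cell.cell_contents):
            return _get_wrapped_sketch(cell.cell_contents)
    return function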
"""Tests for common notifications.""" import copy import datetime import mock from oslo_config import cfg from oslo_context import context as o_context from oslo_context import fixture as o_fixture from oslo_utils import timeutils from nova.compute import flavors from nova.compute import task_states from nova.compute import vm_states from nova import context from nova import exception from nova.notifications import base as notifications from nova import objects from nova.objects import base as obj_base from nova import test from nova.tests.unit import fake_network from nova.tests.unit import fake_notifier from nova.tests import uuidsentinel as uuids CONF = cfg.CONF class NotificationsTestCase(test.TestCase): def setUp(self): super(NotificationsTestCase, self).setUp() self.fixture = self.useFixture(o_fixture.ClearRequestContext()) self.net_info = fake_network.fake_get_instance_nw_info(self, 1, 1) def fake_get_nw_info(cls, ctxt, instance): self.assertTrue(ctxt.is_admin) return self.net_info self.stub_out('nova.network.api.API.get_instance_nw_info', fake_get_nw_info) fake_network.set_stub_network_methods(self) fake_notifier.stub_notifier(self) self.addCleanup(fake_notifier.reset) self.flags(network_manager='nova.network.manager.FlatManager', host='testhost') self.flags(notify_on_state_change="vm_and_task_state", group='notifications') self.flags(api_servers=['http://localhost:9292'], group='glance') self.user_id = 'fake' self.project_id = 'fake' self.context = context.RequestContext(self.user_id, self.project_id) self.fake_time = datetime.datetime(2017, 2, 2, 16, 45, 0) timeutils.set_time_override(self.fake_time) self.instance = self._wrapped_create() self.decorated_function_called = False def _wrapped_create(self, params=None): instance_type = flavors.get_flavor_by_name('m1.tiny') inst = objects.Instance(image_ref=uuids.image_ref, user_id=self.user_id, project_id=self.project_id, instance_type_id=instance_type['id'], root_gb=0, ephemeral_gb=0, access_ip_v4='1.2.3.4', access_ip_v6='feed::5eed', display_name='test_instance', hostname='test_instance_hostname', node='test_instance_node', system_metadata={}) inst._context = self.context if params: inst.update(params) inst.flavor = instance_type inst.create() return inst def test_notif_disabled(self): # test config disable of the notifications self.flags(notify_on_state_change=None, group='notifications') old = copy.copy(self.instance) self.instance.vm_state = vm_states.ACTIVE old_vm_state = old['vm_state'] new_vm_state = self.instance.vm_state old_task_state = old['task_state'] new_task_state = self.instance.task_state notifications.send_update_with_states(self.context, self.instance, old_vm_state, new_vm_state, old_task_state, new_task_state, verify_states=True) notifications.send_update(self.context, old, self.instance) self.assertEqual(0, len(fake_notifier.NOTIFICATIONS)) self.assertEqual(0, len(fake_notifier.VERSIONED_NOTIFICATIONS)) def test_task_notif(self): # test config disable of just the task state notifications self.flags(notify_on_state_change="vm_state", group='notifications') # we should not get a notification on task stgate chagne now old = copy.copy(self.instance) self.instance.task_state = task_states.SPAWNING old_vm_state = old['vm_state'] new_vm_state = self.instance.vm_state old_task_state = old['task_state'] new_task_state = self.instance.task_state notifications.send_update_with_states(self.context, self.instance, old_vm_state, new_vm_state, old_task_state, new_task_state, verify_states=True) self.assertEqual(0, 
len(fake_notifier.NOTIFICATIONS)) self.assertEqual(0, len(fake_notifier.VERSIONED_NOTIFICATIONS)) # ok now enable task state notifications and retry self.flags(notify_on_state_change="vm_and_task_state", group='notifications') notifications.send_update(self.context, old, self.instance) self.assertEqual(1, len(fake_notifier.NOTIFICATIONS)) self.assertEqual(1, len(fake_notifier.VERSIONED_NOTIFICATIONS)) self.assertEqual( 'instance.update', fake_notifier.VERSIONED_NOTIFICATIONS[0]['event_type']) def test_send_no_notif(self): # test that no notification is sent when states do not change: old_vm_state = self.instance.vm_state new_vm_state = self.instance.vm_state old_task_state = self.instance.task_state new_task_state = self.instance.task_state notifications.send_update_with_states(self.context, self.instance, old_vm_state, new_vm_state, old_task_state, new_task_state, service="compute", host=None, verify_states=True) self.assertEqual(0, len(fake_notifier.NOTIFICATIONS)) self.assertEqual(0, len(fake_notifier.VERSIONED_NOTIFICATIONS)) def test_send_on_vm_change(self): old = obj_base.obj_to_primitive(self.instance) old['vm_state'] = None # pretend we just transitioned to ACTIVE: self.instance.vm_state = vm_states.ACTIVE notifications.send_update(self.context, old, self.instance) self.assertEqual(1, len(fake_notifier.NOTIFICATIONS)) # service name should default to 'compute' notif = fake_notifier.NOTIFICATIONS[0] self.assertEqual('compute.testhost', notif.publisher_id) self.assertEqual(1, len(fake_notifier.VERSIONED_NOTIFICATIONS)) self.assertEqual( 'nova-compute:testhost', fake_notifier.VERSIONED_NOTIFICATIONS[0]['publisher_id']) self.assertEqual( 'instance.update', fake_notifier.VERSIONED_NOTIFICATIONS[0]['event_type']) def test_send_on_task_change(self): old = obj_base.obj_to_primitive(self.instance) old['task_state'] = None # pretend we just transitioned to task SPAWNING: self.instance.task_state = task_states.SPAWNING notifications.send_update(self.context, old, self.instance) self.assertEqual(1, len(fake_notifier.NOTIFICATIONS)) self.assertEqual(1, len(fake_notifier.VERSIONED_NOTIFICATIONS)) self.assertEqual( 'instance.update', fake_notifier.VERSIONED_NOTIFICATIONS[0]['event_type']) def test_no_update_with_states(self): notifications.send_update_with_states(self.context, self.instance, vm_states.BUILDING, vm_states.BUILDING, task_states.SPAWNING, task_states.SPAWNING, verify_states=True) self.assertEqual(0, len(fake_notifier.NOTIFICATIONS)) self.assertEqual(0, len(fake_notifier.VERSIONED_NOTIFICATIONS)) def get_fake_bandwidth(self): usage = objects.BandwidthUsage(context=self.context) usage.create( self.instance.uuid, mac='DE:AD:BE:EF:00:01', bw_in=1, bw_out=2, last_ctr_in=0, last_ctr_out=0, start_period='2012-10-29T13:42:11Z') return usage @mock.patch.object(objects.BandwidthUsageList, 'get_by_uuids') def test_vm_update_with_states(self, mock_bandwidth_list): mock_bandwidth_list.return_value = [self.get_fake_bandwidth()] fake_net_info = fake_network.fake_get_instance_nw_info(self, 1, 1) self.instance.info_cache.network_info = fake_net_info notifications.send_update_with_states(self.context, self.instance, vm_states.BUILDING, vm_states.ACTIVE, task_states.SPAWNING, task_states.SPAWNING, verify_states=True) self._verify_notification() def _verify_notification(self, expected_state=vm_states.ACTIVE, expected_new_task_state=task_states.SPAWNING): self.assertEqual(1, len(fake_notifier.NOTIFICATIONS)) self.assertEqual(1, len(fake_notifier.VERSIONED_NOTIFICATIONS)) self.assertEqual( 'instance.update',
fake_notifier.VERSIONED_NOTIFICATIONS[0]['event_type']) access_ip_v4 = str(self.instance.access_ip_v4) access_ip_v6 = str(self.instance.access_ip_v6) display_name = self.instance.display_name hostname = self.instance.hostname node = self.instance.node payload = fake_notifier.NOTIFICATIONS[0].payload self.assertEqual(vm_states.BUILDING, payload["old_state"]) self.assertEqual(expected_state, payload["state"]) self.assertEqual(task_states.SPAWNING, payload["old_task_state"]) self.assertEqual(expected_new_task_state, payload["new_task_state"]) self.assertEqual(payload["access_ip_v4"], access_ip_v4) self.assertEqual(payload["access_ip_v6"], access_ip_v6) self.assertEqual(payload["display_name"], display_name) self.assertEqual(payload["hostname"], hostname) self.assertEqual(payload["node"], node) self.assertEqual("2017-02-01T00:00:00.000000", payload["audit_period_beginning"]) self.assertEqual("2017-02-02T16:45:00.000000", payload["audit_period_ending"]) payload = fake_notifier.VERSIONED_NOTIFICATIONS[0][ 'payload']['nova_object.data'] state_update = payload['state_update']['nova_object.data'] self.assertEqual(vm_states.BUILDING, state_update['old_state']) self.assertEqual(expected_state, state_update["state"]) self.assertEqual(task_states.SPAWNING, state_update["old_task_state"]) self.assertEqual(expected_new_task_state, state_update["new_task_state"]) self.assertEqual(payload["display_name"], display_name) self.assertEqual(payload["host_name"], hostname) self.assertEqual(payload["node"], node) flavor = payload['flavor']['nova_object.data'] self.assertEqual(flavor['flavorid'], '1') self.assertEqual(payload['image_uuid'], uuids.image_ref) net_info = self.instance.info_cache.network_info vif = net_info[0] ip_addresses = payload['ip_addresses'] self.assertEqual(len(ip_addresses), 2) for actual_ip, expected_ip in zip(ip_addresses, vif.fixed_ips()): actual_ip = actual_ip['nova_object.data'] self.assertEqual(actual_ip['label'], vif['network']['label']) self.assertEqual(actual_ip['mac'], vif['address'].lower()) self.assertEqual(actual_ip['port_uuid'], vif['id']) self.assertEqual(actual_ip['device_name'], vif['devname']) self.assertEqual(actual_ip['version'], expected_ip['version']) self.assertEqual(actual_ip['address'], expected_ip['address']) bandwidth = payload['bandwidth'] self.assertEqual(len(bandwidth), 1) bandwidth = bandwidth[0]['nova_object.data'] self.assertEqual(bandwidth['in_bytes'], 1) self.assertEqual(bandwidth['out_bytes'], 2) self.assertEqual(bandwidth['network_name'], 'test1') @mock.patch.object(objects.BandwidthUsageList, 'get_by_uuids') def test_task_update_with_states(self, mock_bandwidth_list): self.flags(notify_on_state_change="vm_and_task_state", group='notifications') mock_bandwidth_list.return_value = [self.get_fake_bandwidth()] fake_net_info = fake_network.fake_get_instance_nw_info(self, 1, 1) self.instance.info_cache.network_info = fake_net_info notifications.send_update_with_states(self.context, self.instance, vm_states.BUILDING, vm_states.BUILDING, task_states.SPAWNING, None, verify_states=True) self._verify_notification(expected_state=vm_states.BUILDING, expected_new_task_state=None) def test_update_no_service_name(self): notifications.send_update_with_states(self.context, self.instance, vm_states.BUILDING, vm_states.BUILDING, task_states.SPAWNING, None) self.assertEqual(1, len(fake_notifier.NOTIFICATIONS)) self.assertEqual(1, len(fake_notifier.VERSIONED_NOTIFICATIONS)) # service name should default to 'compute' notif = fake_notifier.NOTIFICATIONS[0] 
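# Hedged aside on the publisher id formats asserted just below: legacy
# notifications use '<service>.<host>' while versioned ones use
# '<binary>:<host>'. The helper is hypothetical, purely to illustrate
# the two spellings these tests check.
def _publisher_ids_sketch(service, host, binary='nova-compute'):
    # e.g. ('compute', 'testhost') ->
    # ('compute.testhost', 'nova-compute:testhost')
    return '%s.%s' % (service, host), '%s:%s' % (binary, host)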
self.assertEqual('compute.testhost', notif.publisher_id) # in the versioned notification it defaults to nova-compute notif = fake_notifier.VERSIONED_NOTIFICATIONS[0] self.assertEqual('nova-compute:testhost', notif['publisher_id']) def test_update_with_service_name(self): notifications.send_update_with_states(self.context, self.instance, vm_states.BUILDING, vm_states.BUILDING, task_states.SPAWNING, None, service="nova-compute") self.assertEqual(1, len(fake_notifier.NOTIFICATIONS)) self.assertEqual(1, len(fake_notifier.VERSIONED_NOTIFICATIONS)) # the explicitly passed service name should be used notif = fake_notifier.NOTIFICATIONS[0] self.assertEqual('nova-compute.testhost', notif.publisher_id) notif = fake_notifier.VERSIONED_NOTIFICATIONS[0] self.assertEqual('nova-compute:testhost', notif['publisher_id']) def test_update_with_host_name(self): notifications.send_update_with_states(self.context, self.instance, vm_states.BUILDING, vm_states.BUILDING, task_states.SPAWNING, None, host="someotherhost") self.assertEqual(1, len(fake_notifier.NOTIFICATIONS)) self.assertEqual(1, len(fake_notifier.VERSIONED_NOTIFICATIONS)) # service name defaults to 'compute', but the host is overridden notif = fake_notifier.NOTIFICATIONS[0] self.assertEqual('compute.someotherhost', notif.publisher_id) notif = fake_notifier.VERSIONED_NOTIFICATIONS[0] self.assertEqual('nova-compute:someotherhost', notif['publisher_id']) def test_payload_has_fixed_ip_labels(self): info = notifications.info_from_instance(self.context, self.instance, self.net_info, None) self.assertIn("fixed_ips", info) self.assertEqual(info["fixed_ips"][0]["label"], "test1") def test_payload_has_vif_mac_address(self): info = notifications.info_from_instance(self.context, self.instance, self.net_info, None) self.assertIn("fixed_ips", info) self.assertEqual(self.net_info[0]['address'], info["fixed_ips"][0]["vif_mac"]) def test_payload_has_cell_name_empty(self): info = notifications.info_from_instance(self.context, self.instance, self.net_info, None) self.assertIn("cell_name", info) self.assertIsNone(self.instance.cell_name) self.assertEqual("", info["cell_name"]) def test_payload_has_cell_name(self): self.instance.cell_name = "cell1" info = notifications.info_from_instance(self.context, self.instance, self.net_info, None) self.assertIn("cell_name", info) self.assertEqual("cell1", info["cell_name"]) def test_payload_has_progress_empty(self): info = notifications.info_from_instance(self.context, self.instance, self.net_info, None) self.assertIn("progress", info) self.assertIsNone(self.instance.progress) self.assertEqual("", info["progress"]) def test_payload_has_progress(self): self.instance.progress = 50 info = notifications.info_from_instance(self.context, self.instance, self.net_info, None) self.assertIn("progress", info) self.assertEqual(50, info["progress"]) def test_payload_has_flavor_attributes(self): # Zero these to make sure they are not used self.instance.vcpus = self.instance.memory_mb = 0 self.instance.root_gb = self.instance.ephemeral_gb = 0 # Set flavor values and make sure _these_ are present in the output self.instance.flavor.vcpus = 10 self.instance.flavor.root_gb = 20 self.instance.flavor.memory_mb = 30 self.instance.flavor.ephemeral_gb = 40 info = notifications.info_from_instance(self.context, self.instance, self.net_info, None) self.assertEqual(10, info['vcpus']) self.assertEqual(20, info['root_gb']) self.assertEqual(30, info['memory_mb']) self.assertEqual(40, info['ephemeral_gb']) self.assertEqual(60, info['disk_gb']) def test_payload_has_timestamp_fields(self): time
= datetime.datetime(2017, 2, 2, 16, 45, 0) # do not define deleted_at to test that missing value is handled # properly self.instance.terminated_at = time self.instance.launched_at = time info = notifications.info_from_instance(self.context, self.instance, self.net_info, None) self.assertEqual('2017-02-02T16:45:00.000000', info['terminated_at']) self.assertEqual('2017-02-02T16:45:00.000000', info['launched_at']) self.assertEqual('', info['deleted_at']) def test_send_access_ip_update(self): notifications.send_update(self.context, self.instance, self.instance) self.assertEqual(1, len(fake_notifier.NOTIFICATIONS)) notif = fake_notifier.NOTIFICATIONS[0] payload = notif.payload access_ip_v4 = str(self.instance.access_ip_v4) access_ip_v6 = str(self.instance.access_ip_v6) self.assertEqual(payload["access_ip_v4"], access_ip_v4) self.assertEqual(payload["access_ip_v6"], access_ip_v6) def test_send_name_update(self): param = {"display_name": "new_display_name"} new_name_inst = self._wrapped_create(params=param) notifications.send_update(self.context, self.instance, new_name_inst) self.assertEqual(1, len(fake_notifier.NOTIFICATIONS)) self.assertEqual(1, len(fake_notifier.VERSIONED_NOTIFICATIONS)) old_display_name = self.instance.display_name new_display_name = new_name_inst.display_name for payload in [ fake_notifier.NOTIFICATIONS[0].payload, fake_notifier.VERSIONED_NOTIFICATIONS[0][ 'payload']['nova_object.data']]: self.assertEqual(payload["old_display_name"], old_display_name) self.assertEqual(payload["display_name"], new_display_name) def test_send_versioned_tags_update(self): objects.TagList.create(self.context, self.instance.uuid, [u'tag1', u'tag2']) notifications.send_update(self.context, self.instance, self.instance) self.assertEqual(1, len(fake_notifier.VERSIONED_NOTIFICATIONS)) self.assertEqual([u'tag1', u'tag2'], fake_notifier.VERSIONED_NOTIFICATIONS[0] ['payload']['nova_object.data']['tags']) def test_send_no_state_change(self): called = [False] def sending_no_state_change(context, instance, **kwargs): called[0] = True self.stub_out('nova.notifications.base.' 'send_instance_update_notification', sending_no_state_change) notifications.send_update(self.context, self.instance, self.instance) self.assertTrue(called[0]) def test_fail_sending_update(self): def fail_sending(context, instance, **kwargs): raise Exception('failed to notify') self.stub_out('nova.notifications.base.' 'send_instance_update_notification', fail_sending) notifications.send_update(self.context, self.instance, self.instance) self.assertEqual(0, len(fake_notifier.NOTIFICATIONS)) @mock.patch.object(notifications.LOG, 'exception') def test_fail_sending_update_instance_not_found(self, mock_log_exception): # Tests that InstanceNotFound is handled as an expected exception and # not logged as an error. notfound = exception.InstanceNotFound(instance_id=self.instance.uuid) with mock.patch.object(notifications, 'send_instance_update_notification', side_effect=notfound): notifications.send_update( self.context, self.instance, self.instance) self.assertEqual(0, len(fake_notifier.NOTIFICATIONS)) self.assertEqual(0, mock_log_exception.call_count) @mock.patch.object(notifications.LOG, 'exception') def test_fail_send_update_with_states_inst_not_found(self, mock_log_exception): # Tests that InstanceNotFound is handled as an expected exception and # not logged as an error. 
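# The guard behaviour the two InstanceNotFound tests here describe can
# be sketched roughly as below (illustrative paraphrase, not the actual
# body of nova.notifications.base; the helper name is made up):
import logging
LOG = logging.getLogger(__name__)

def _send_update_guarded_sketch(send, context, instance):
    try:
        send(context, instance)
    except exception.InstanceNotFound:
        # Expected race: the instance was deleted mid-update, so stay
        # quiet rather than calling LOG.exception().
        LOG.debug('Instance deleted before update notification sent')
    except Exception:
        # Anything else is unexpected but must not break the caller.
        LOG.exception('Failed to send state update notification')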
notfound = exception.InstanceNotFound(instance_id=self.instance.uuid) with mock.patch.object(notifications, 'send_instance_update_notification', side_effect=notfound): notifications.send_update_with_states( self.context, self.instance, vm_states.BUILDING, vm_states.ERROR, task_states.NETWORKING, new_task_state=None) self.assertEqual(0, len(fake_notifier.NOTIFICATIONS)) self.assertEqual(0, mock_log_exception.call_count) def _decorated_function(self, arg1, arg2): self.decorated_function_called = True def test_notify_decorator(self): func_name = self._decorated_function.__name__ # Decorated with notify_decorator like monkey_patch self._decorated_function = notifications.notify_decorator( func_name, self._decorated_function) ctxt = o_context.RequestContext() self._decorated_function(1, ctxt) self.assertEqual(1, len(fake_notifier.NOTIFICATIONS)) n = fake_notifier.NOTIFICATIONS[0] self.assertEqual(n.priority, 'INFO') self.assertEqual(n.event_type, func_name) self.assertEqual(n.context, ctxt) self.assertTrue(self.decorated_function_called) self.assertEqual(CONF.host, n.publisher_id) class NotificationsFormatTestCase(test.NoDBTestCase): def test_state_computation(self): instance = {'vm_state': mock.sentinel.vm_state, 'task_state': mock.sentinel.task_state} states = notifications._compute_states_payload(instance) self.assertEqual(mock.sentinel.vm_state, states['state']) self.assertEqual(mock.sentinel.vm_state, states['old_state']) self.assertEqual(mock.sentinel.task_state, states['old_task_state']) self.assertEqual(mock.sentinel.task_state, states['new_task_state']) states = notifications._compute_states_payload( instance, old_vm_state=mock.sentinel.old_vm_state, ) self.assertEqual(mock.sentinel.vm_state, states['state']) self.assertEqual(mock.sentinel.old_vm_state, states['old_state']) self.assertEqual(mock.sentinel.task_state, states['old_task_state']) self.assertEqual(mock.sentinel.task_state, states['new_task_state']) states = notifications._compute_states_payload( instance, old_vm_state=mock.sentinel.old_vm_state, old_task_state=mock.sentinel.old_task_state, new_vm_state=mock.sentinel.new_vm_state, new_task_state=mock.sentinel.new_task_state, ) self.assertEqual(mock.sentinel.new_vm_state, states['state']) self.assertEqual(mock.sentinel.old_vm_state, states['old_state']) self.assertEqual(mock.sentinel.old_task_state, states['old_task_state']) self.assertEqual(mock.sentinel.new_task_state, states['new_task_state']) nova-17.0.1/nova/tests/unit/api/0000775000175000017500000000000013250073472016425 5ustar zuulzuul00000000000000nova-17.0.1/nova/tests/unit/api/openstack/0000775000175000017500000000000013250073472020414 5ustar zuulzuul00000000000000nova-17.0.1/nova/tests/unit/api/openstack/test_common.py0000666000175000017500000006245613250073126023330 0ustar zuulzuul00000000000000# Copyright 2010 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Test suites for 'common' code used throughout the OpenStack HTTP API. 
""" import mock import six from testtools import matchers import webob import webob.exc import webob.multidict from nova.api.openstack import common from nova.compute import task_states from nova.compute import vm_states from nova import exception from nova import test from nova.tests.unit.api.openstack import fakes NS = "{http://docs.openstack.org/compute/api/v1.1}" ATOMNS = "{http://www.w3.org/2005/Atom}" class LimiterTest(test.NoDBTestCase): """Unit tests for the `nova.api.openstack.common.limited` method which takes in a list of items and, depending on the 'offset' and 'limit' GET params, returns a subset or complete set of the given items. """ def setUp(self): """Run before each test.""" super(LimiterTest, self).setUp() self.tiny = range(1) self.small = range(10) self.medium = range(1000) self.large = range(10000) def test_limiter_offset_zero(self): # Test offset key works with 0. req = webob.Request.blank('/?offset=0') self.assertEqual(common.limited(self.tiny, req), self.tiny) self.assertEqual(common.limited(self.small, req), self.small) self.assertEqual(common.limited(self.medium, req), self.medium) self.assertEqual(common.limited(self.large, req), self.large[:1000]) def test_limiter_offset_medium(self): # Test offset key works with a medium sized number. req = webob.Request.blank('/?offset=10') self.assertEqual(0, len(common.limited(self.tiny, req))) self.assertEqual(common.limited(self.small, req), self.small[10:]) self.assertEqual(common.limited(self.medium, req), self.medium[10:]) self.assertEqual(common.limited(self.large, req), self.large[10:1010]) def test_limiter_offset_over_max(self): # Test offset key works with a number over 1000 (max_limit). req = webob.Request.blank('/?offset=1001') self.assertEqual(0, len(common.limited(self.tiny, req))) self.assertEqual(0, len(common.limited(self.small, req))) self.assertEqual(0, len(common.limited(self.medium, req))) self.assertEqual( common.limited(self.large, req), self.large[1001:2001]) def test_limiter_offset_blank(self): # Test offset key works with a blank offset. req = webob.Request.blank('/?offset=') self.assertRaises( webob.exc.HTTPBadRequest, common.limited, self.tiny, req) def test_limiter_offset_bad(self): # Test offset key works with a BAD offset. req = webob.Request.blank(u'/?offset=\u0020aa') self.assertRaises( webob.exc.HTTPBadRequest, common.limited, self.tiny, req) def test_limiter_nothing(self): # Test request with no offset or limit. req = webob.Request.blank('/') self.assertEqual(common.limited(self.tiny, req), self.tiny) self.assertEqual(common.limited(self.small, req), self.small) self.assertEqual(common.limited(self.medium, req), self.medium) self.assertEqual(common.limited(self.large, req), self.large[:1000]) def test_limiter_limit_zero(self): # Test limit of zero. req = webob.Request.blank('/?limit=0') self.assertEqual(common.limited(self.tiny, req), self.tiny) self.assertEqual(common.limited(self.small, req), self.small) self.assertEqual(common.limited(self.medium, req), self.medium) self.assertEqual(common.limited(self.large, req), self.large[:1000]) def test_limiter_limit_medium(self): # Test limit of 10. req = webob.Request.blank('/?limit=10') self.assertEqual(common.limited(self.tiny, req), self.tiny) self.assertEqual(common.limited(self.small, req), self.small) self.assertEqual(common.limited(self.medium, req), self.medium[:10]) self.assertEqual(common.limited(self.large, req), self.large[:10]) def test_limiter_limit_over_max(self): # Test limit of 3000. 
req = webob.Request.blank('/?limit=3000') self.assertEqual(common.limited(self.tiny, req), self.tiny) self.assertEqual(common.limited(self.small, req), self.small) self.assertEqual(common.limited(self.medium, req), self.medium) self.assertEqual(common.limited(self.large, req), self.large[:1000]) def test_limiter_limit_and_offset(self): # Test request with both limit and offset. items = range(2000) req = webob.Request.blank('/?offset=1&limit=3') self.assertEqual(common.limited(items, req), items[1:4]) req = webob.Request.blank('/?offset=3&limit=0') self.assertEqual(common.limited(items, req), items[3:1003]) req = webob.Request.blank('/?offset=3&limit=1500') self.assertEqual(common.limited(items, req), items[3:1003]) req = webob.Request.blank('/?offset=3000&limit=10') self.assertEqual(0, len(common.limited(items, req))) def test_limiter_custom_max_limit(self): # Test a max_limit other than 1000. max_limit = 2000 self.flags(max_limit=max_limit, group='api') items = range(max_limit) req = webob.Request.blank('/?offset=1&limit=3') self.assertEqual( common.limited(items, req), items[1:4]) req = webob.Request.blank('/?offset=3&limit=0') self.assertEqual( common.limited(items, req), items[3:]) req = webob.Request.blank('/?offset=3&limit=2500') self.assertEqual( common.limited(items, req), items[3:]) req = webob.Request.blank('/?offset=3000&limit=10') self.assertEqual(0, len(common.limited(items, req))) def test_limiter_negative_limit(self): # Test a negative limit. req = webob.Request.blank('/?limit=-3000') self.assertRaises( webob.exc.HTTPBadRequest, common.limited, self.tiny, req) def test_limiter_negative_offset(self): # Test a negative offset. req = webob.Request.blank('/?offset=-30') self.assertRaises( webob.exc.HTTPBadRequest, common.limited, self.tiny, req) class SortParamUtilsTest(test.NoDBTestCase): def test_get_sort_params_defaults(self): '''Verifies the default sort key and direction.''' sort_keys, sort_dirs = common.get_sort_params({}) self.assertEqual(['created_at'], sort_keys) self.assertEqual(['desc'], sort_dirs) def test_get_sort_params_override_defaults(self): '''Verifies that the defaults can be overridden.''' sort_keys, sort_dirs = common.get_sort_params({}, default_key='key1', default_dir='dir1') self.assertEqual(['key1'], sort_keys) self.assertEqual(['dir1'], sort_dirs) sort_keys, sort_dirs = common.get_sort_params({}, default_key=None, default_dir=None) self.assertEqual([], sort_keys) self.assertEqual([], sort_dirs) def test_get_sort_params_single_value(self): '''Verifies a single sort key and direction.''' params = webob.multidict.MultiDict() params.add('sort_key', 'key1') params.add('sort_dir', 'dir1') sort_keys, sort_dirs = common.get_sort_params(params) self.assertEqual(['key1'], sort_keys) self.assertEqual(['dir1'], sort_dirs) def test_get_sort_params_single_with_default(self): '''Verifies a single sort value with a default.''' params = webob.multidict.MultiDict() params.add('sort_key', 'key1') sort_keys, sort_dirs = common.get_sort_params(params) self.assertEqual(['key1'], sort_keys) # sort_key was supplied, sort_dir should be defaulted self.assertEqual(['desc'], sort_dirs) params = webob.multidict.MultiDict() params.add('sort_dir', 'dir1') sort_keys, sort_dirs = common.get_sort_params(params) self.assertEqual(['created_at'], sort_keys) # sort_dir was supplied, sort_key should be defaulted self.assertEqual(['dir1'], sort_dirs) def test_get_sort_params_multiple_values(self): '''Verifies multiple sort parameter values.''' params = webob.multidict.MultiDict() 
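# Hedged sketch of the multi-key behaviour verified here: sort keys and
# directions come back as parallel lists, defaults apply only when a
# key is entirely absent, and the caller's MultiDict must not be
# mutated (hence the non-destructive getall() reads). Illustrative
# only, and it assumes a webob MultiDict input, not a plain dict:
def _sort_params_sketch(params, default_key='created_at',
                        default_dir='desc'):
    sort_keys = list(params.getall('sort_key')) or (
        [default_key] if default_key else [])
    sort_dirs = list(params.getall('sort_dir')) or (
        [default_dir] if default_dir else [])
    return sort_keys, sort_dirs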
params.add('sort_key', 'key1') params.add('sort_key', 'key2') params.add('sort_key', 'key3') params.add('sort_dir', 'dir1') params.add('sort_dir', 'dir2') params.add('sort_dir', 'dir3') sort_keys, sort_dirs = common.get_sort_params(params) self.assertEqual(['key1', 'key2', 'key3'], sort_keys) self.assertEqual(['dir1', 'dir2', 'dir3'], sort_dirs) # Also ensure that the input parameters are not modified sort_key_vals = [] sort_dir_vals = [] while 'sort_key' in params: sort_key_vals.append(params.pop('sort_key')) while 'sort_dir' in params: sort_dir_vals.append(params.pop('sort_dir')) self.assertEqual(['key1', 'key2', 'key3'], sort_key_vals) self.assertEqual(['dir1', 'dir2', 'dir3'], sort_dir_vals) self.assertEqual(0, len(params)) class PaginationParamsTest(test.NoDBTestCase): """Unit tests for the `nova.api.openstack.common.get_pagination_params` method which takes in a request object and returns 'marker' and 'limit' GET params. """ def test_no_params(self): # Test no params. req = webob.Request.blank('/') self.assertEqual(common.get_pagination_params(req), {}) def test_valid_marker(self): # Test valid marker param. req = webob.Request.blank( '/?marker=263abb28-1de6-412f-b00b-f0ee0c4333c2') self.assertEqual(common.get_pagination_params(req), {'marker': '263abb28-1de6-412f-b00b-f0ee0c4333c2'}) def test_valid_limit(self): # Test valid limit param. req = webob.Request.blank('/?limit=10') self.assertEqual(common.get_pagination_params(req), {'limit': 10}) def test_invalid_limit(self): # Test invalid limit param. req = webob.Request.blank('/?limit=-2') self.assertRaises( webob.exc.HTTPBadRequest, common.get_pagination_params, req) def test_valid_limit_and_marker(self): # Test valid limit and marker parameters. marker = '263abb28-1de6-412f-b00b-f0ee0c4333c2' req = webob.Request.blank('/?limit=20&marker=%s' % marker) self.assertEqual(common.get_pagination_params(req), {'marker': marker, 'limit': 20}) def test_valid_page_size(self): # Test valid page_size param. req = webob.Request.blank('/?page_size=10') self.assertEqual(common.get_pagination_params(req), {'page_size': 10}) def test_invalid_page_size(self): # Test invalid page_size param. req = webob.Request.blank('/?page_size=-2') self.assertRaises( webob.exc.HTTPBadRequest, common.get_pagination_params, req) def test_valid_limit_and_page_size(self): # Test valid limit and page_size parameters. 
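# As the remaining cases illustrate, get_pagination_params validates each of marker/limit/page_size independently and returns only the keys that were supplied, so limit and page_size can coexist in one request.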
req = webob.Request.blank('/?limit=20&page_size=5') self.assertEqual(common.get_pagination_params(req), {'page_size': 5, 'limit': 20}) class MiscFunctionsTest(test.TestCase): def test_remove_trailing_version_from_href(self): fixture = 'http://www.testsite.com/v1.1' expected = 'http://www.testsite.com' actual = common.remove_trailing_version_from_href(fixture) self.assertEqual(actual, expected) def test_remove_trailing_version_from_href_2(self): fixture = 'http://www.testsite.com/compute/v1.1' expected = 'http://www.testsite.com/compute' actual = common.remove_trailing_version_from_href(fixture) self.assertEqual(actual, expected) def test_remove_trailing_version_from_href_3(self): fixture = 'http://www.testsite.com/v1.1/images/v10.5' expected = 'http://www.testsite.com/v1.1/images' actual = common.remove_trailing_version_from_href(fixture) self.assertEqual(actual, expected) def test_remove_trailing_version_from_href_bad_request(self): fixture = 'http://www.testsite.com/v1.1/images' self.assertRaises(ValueError, common.remove_trailing_version_from_href, fixture) def test_remove_trailing_version_from_href_bad_request_2(self): fixture = 'http://www.testsite.com/images/v' self.assertRaises(ValueError, common.remove_trailing_version_from_href, fixture) def test_remove_trailing_version_from_href_bad_request_3(self): fixture = 'http://www.testsite.com/v1.1images' self.assertRaises(ValueError, common.remove_trailing_version_from_href, fixture) def test_get_id_from_href_with_int_url(self): fixture = 'http://www.testsite.com/dir/45' actual = common.get_id_from_href(fixture) expected = '45' self.assertEqual(actual, expected) def test_get_id_from_href_with_int(self): fixture = '45' actual = common.get_id_from_href(fixture) expected = '45' self.assertEqual(actual, expected) def test_get_id_from_href_with_int_url_query(self): fixture = 'http://www.testsite.com/dir/45?asdf=jkl' actual = common.get_id_from_href(fixture) expected = '45' self.assertEqual(actual, expected) def test_get_id_from_href_with_uuid_url(self): fixture = 'http://www.testsite.com/dir/abc123' actual = common.get_id_from_href(fixture) expected = "abc123" self.assertEqual(actual, expected) def test_get_id_from_href_with_uuid_url_query(self): fixture = 'http://www.testsite.com/dir/abc123?asdf=jkl' actual = common.get_id_from_href(fixture) expected = "abc123" self.assertEqual(actual, expected) def test_get_id_from_href_with_uuid(self): fixture = 'abc123' actual = common.get_id_from_href(fixture) expected = 'abc123' self.assertEqual(actual, expected) def test_raise_http_conflict_for_instance_invalid_state(self): exc = exception.InstanceInvalidState(attr='fake_attr', state='fake_state', method='fake_method', instance_uuid='fake') try: common.raise_http_conflict_for_instance_invalid_state(exc, 'meow', 'fake_server_id') except webob.exc.HTTPConflict as e: self.assertEqual(six.text_type(e), "Cannot 'meow' instance fake_server_id while it is in " "fake_attr fake_state") else: self.fail("webob.exc.HTTPConflict was not raised") def test_status_from_state(self): for vm_state in (vm_states.ACTIVE, vm_states.STOPPED): for task_state in (task_states.RESIZE_PREP, task_states.RESIZE_MIGRATING, task_states.RESIZE_MIGRATED, task_states.RESIZE_FINISH): actual = common.status_from_state(vm_state, task_state) expected = 'RESIZE' self.assertEqual(expected, actual) def test_status_rebuild_from_state(self): for vm_state in (vm_states.ACTIVE, vm_states.STOPPED, vm_states.ERROR): for task_state in (task_states.REBUILDING, task_states.REBUILD_BLOCK_DEVICE_MAPPING, 
task_states.REBUILD_SPAWNING): actual = common.status_from_state(vm_state, task_state) expected = 'REBUILD' self.assertEqual(expected, actual) def test_status_migrating_from_state(self): for vm_state in (vm_states.ACTIVE, vm_states.PAUSED): task_state = task_states.MIGRATING actual = common.status_from_state(vm_state, task_state) expected = 'MIGRATING' self.assertEqual(expected, actual) def test_task_and_vm_state_from_status(self): fixture1 = ['reboot'] actual = common.task_and_vm_state_from_status(fixture1) expected = [vm_states.ACTIVE], [task_states.REBOOT_PENDING, task_states.REBOOT_STARTED, task_states.REBOOTING] self.assertEqual(expected, actual) fixture2 = ['resize'] actual = common.task_and_vm_state_from_status(fixture2) expected = ([vm_states.ACTIVE, vm_states.STOPPED], [task_states.RESIZE_FINISH, task_states.RESIZE_MIGRATED, task_states.RESIZE_MIGRATING, task_states.RESIZE_PREP]) self.assertEqual(expected, actual) fixture3 = ['resize', 'reboot'] actual = common.task_and_vm_state_from_status(fixture3) expected = ([vm_states.ACTIVE, vm_states.STOPPED], [task_states.REBOOT_PENDING, task_states.REBOOT_STARTED, task_states.REBOOTING, task_states.RESIZE_FINISH, task_states.RESIZE_MIGRATED, task_states.RESIZE_MIGRATING, task_states.RESIZE_PREP]) self.assertEqual(expected, actual) def test_is_all_tenants_true(self): for value in ('', '1', 'true', 'True'): search_opts = {'all_tenants': value} self.assertTrue(common.is_all_tenants(search_opts)) self.assertIn('all_tenants', search_opts) def test_is_all_tenants_false(self): for value in ('0', 'false', 'False'): search_opts = {'all_tenants': value} self.assertFalse(common.is_all_tenants(search_opts)) self.assertIn('all_tenants', search_opts) def test_is_all_tenants_missing(self): self.assertFalse(common.is_all_tenants({})) def test_is_all_tenants_invalid(self): search_opts = {'all_tenants': 'wonk'} self.assertRaises(exception.InvalidInput, common.is_all_tenants, search_opts) class TestCollectionLinks(test.NoDBTestCase): """Tests the _get_collection_links method.""" @mock.patch('nova.api.openstack.common.ViewBuilder._get_next_link') def test_items_less_than_limit(self, href_link_mock): items = [ {"uuid": "123"} ] req = mock.MagicMock() params = mock.PropertyMock(return_value=dict(limit=10)) type(req).params = params builder = common.ViewBuilder() results = builder._get_collection_links(req, items, "ignored", "uuid") self.assertFalse(href_link_mock.called) self.assertThat(results, matchers.HasLength(0)) @mock.patch('nova.api.openstack.common.ViewBuilder._get_next_link') def test_items_equals_given_limit(self, href_link_mock): items = [ {"uuid": "123"} ] req = mock.MagicMock() params = mock.PropertyMock(return_value=dict(limit=1)) type(req).params = params builder = common.ViewBuilder() results = builder._get_collection_links(req, items, mock.sentinel.coll_key, "uuid") href_link_mock.assert_called_once_with(req, "123", mock.sentinel.coll_key) self.assertThat(results, matchers.HasLength(1)) @mock.patch('nova.api.openstack.common.ViewBuilder._get_next_link') def test_items_equals_default_limit(self, href_link_mock): items = [ {"uuid": "123"} ] req = mock.MagicMock() params = mock.PropertyMock(return_value=dict()) type(req).params = params self.flags(max_limit=1, group='api') builder = common.ViewBuilder() results = builder._get_collection_links(req, items, mock.sentinel.coll_key, "uuid") href_link_mock.assert_called_once_with(req, "123", mock.sentinel.coll_key) self.assertThat(results, matchers.HasLength(1)) 
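# The pattern in this class: _get_collection_links emits a "next" link only when the number of items returned equals the effective limit, i.e. the smaller of the requested limit and CONF.api.max_limit.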
@mock.patch('nova.api.openstack.common.ViewBuilder._get_next_link') def test_items_equals_default_limit_with_given(self, href_link_mock): items = [ {"uuid": "123"} ] req = mock.MagicMock() # Given limit is greater than default max, only return default max params = mock.PropertyMock(return_value=dict(limit=2)) type(req).params = params self.flags(max_limit=1, group='api') builder = common.ViewBuilder() results = builder._get_collection_links(req, items, mock.sentinel.coll_key, "uuid") href_link_mock.assert_called_once_with(req, "123", mock.sentinel.coll_key) self.assertThat(results, matchers.HasLength(1)) class LinkPrefixTest(test.NoDBTestCase): def test_update_link_prefix(self): vb = common.ViewBuilder() result = vb._update_link_prefix("http://192.168.0.243:24/", "http://127.0.0.1/compute") self.assertEqual("http://127.0.0.1/compute", result) result = vb._update_link_prefix("http://foo.x.com/v1", "http://new.prefix.com") self.assertEqual("http://new.prefix.com/v1", result) result = vb._update_link_prefix( "http://foo.x.com/v1", "http://new.prefix.com:20455/new_extra_prefix") self.assertEqual("http://new.prefix.com:20455/new_extra_prefix/v1", result) class UrlJoinTest(test.NoDBTestCase): def test_url_join(self): pieces = ["one", "two", "three"] joined = common.url_join(*pieces) self.assertEqual("one/two/three", joined) def test_url_join_extra_slashes(self): pieces = ["one/", "/two//", "/three/"] joined = common.url_join(*pieces) self.assertEqual("one/two/three", joined) def test_url_join_trailing_slash(self): pieces = ["one", "two", "three", ""] joined = common.url_join(*pieces) self.assertEqual("one/two/three/", joined) def test_url_join_empty_list(self): pieces = [] joined = common.url_join(*pieces) self.assertEqual("", joined) def test_url_join_single_empty_string(self): pieces = [""] joined = common.url_join(*pieces) self.assertEqual("", joined) def test_url_join_single_slash(self): pieces = ["/"] joined = common.url_join(*pieces) self.assertEqual("", joined) class ViewBuilderLinkTest(test.NoDBTestCase): project_id = "fake" api_version = "2.1" def setUp(self): super(ViewBuilderLinkTest, self).setUp() self.request = self.req("/%s" % self.project_id) self.vb = common.ViewBuilder() def req(self, url, use_admin_context=False): return fakes.HTTPRequest.blank(url, use_admin_context=use_admin_context, version=self.api_version) def test_get_project_id(self): proj_id = self.vb._get_project_id(self.request) self.assertEqual(self.project_id, proj_id) def test_get_project_id_with_none_project_id(self): self.request.environ["nova.context"].project_id = None proj_id = self.vb._get_project_id(self.request) self.assertEqual('', proj_id) def test_get_next_link(self): identifier = "identifier" collection = "collection" next_link = self.vb._get_next_link(self.request, identifier, collection) expected = "/".join((self.request.url, "%s?marker=%s" % (collection, identifier))) self.assertEqual(expected, next_link) def test_get_href_link(self): identifier = "identifier" collection = "collection" href_link = self.vb._get_href_link(self.request, identifier, collection) expected = "/".join((self.request.url, collection, identifier)) self.assertEqual(expected, href_link) def test_get_bookmark_link(self): identifier = "identifier" collection = "collection" bookmark_link = self.vb._get_bookmark_link(self.request, identifier, collection) bmk_url = common.remove_trailing_version_from_href( self.request.application_url) expected = "/".join((bmk_url, self.project_id, collection, identifier)) self.assertEqual(expected, 
bookmark_link) nova-17.0.1/nova/tests/unit/api/openstack/common.py0000666000175000017500000000323513250073126022257 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_serialization import jsonutils import webob def webob_factory(url): """Factory for removing duplicate webob code from tests.""" base_url = url def web_request(url, method=None, body=None): req = webob.Request.blank("%s%s" % (base_url, url)) if method: req.content_type = "application/json" req.method = method if body: req.body = jsonutils.dump_as_bytes(body) return req return web_request def compare_links(actual, expected): """Compare xml atom links.""" return compare_tree_to_dict(actual, expected, ('rel', 'href', 'type')) def compare_media_types(actual, expected): """Compare xml media types.""" return compare_tree_to_dict(actual, expected, ('base', 'type')) def compare_tree_to_dict(actual, expected, keys): """Compare parts of lxml.etree objects to dicts.""" for elem, data in zip(actual, expected): for key in keys: if elem.get(key) != data.get(key): return False return True nova-17.0.1/nova/tests/unit/api/openstack/placement/0000775000175000017500000000000013250073472022364 5ustar zuulzuul00000000000000nova-17.0.1/nova/tests/unit/api/openstack/placement/test_util.py0000666000175000017500000006154013250073136024757 0ustar zuulzuul00000000000000# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""Unit tests for the utility functions used by the placement API.""" import datetime import fixtures import mock from oslo_middleware import request_id from oslo_utils import timeutils import webob import six.moves.urllib.parse as urlparse from nova.api.openstack.placement import lib as pl from nova.api.openstack.placement import microversion from nova.api.openstack.placement import util from nova.objects import resource_provider as rp_obj from nova import test from nova.tests import uuidsentinel class TestCheckAccept(test.NoDBTestCase): """Confirm behavior of util.check_accept.""" @staticmethod @util.check_accept('application/json', 'application/vnd.openstack') def handler(req): """Fake handler to test decorator.""" return True def test_fail_no_match(self): req = webob.Request.blank('/') req.accept = 'text/plain' error = self.assertRaises(webob.exc.HTTPNotAcceptable, self.handler, req) self.assertEqual( 'Only application/json, application/vnd.openstack is provided', str(error)) def test_fail_complex_no_match(self): req = webob.Request.blank('/') req.accept = 'text/html;q=0.9,text/plain,application/vnd.aws;q=0.8' error = self.assertRaises(webob.exc.HTTPNotAcceptable, self.handler, req) self.assertEqual( 'Only application/json, application/vnd.openstack is provided', str(error)) def test_success_no_accept(self): req = webob.Request.blank('/') self.assertTrue(self.handler(req)) def test_success_simple_match(self): req = webob.Request.blank('/') req.accept = 'application/json' self.assertTrue(self.handler(req)) def test_success_complex_any_match(self): req = webob.Request.blank('/') req.accept = 'application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8' self.assertTrue(self.handler(req)) def test_success_complex_lower_quality_match(self): req = webob.Request.blank('/') req.accept = 'application/xml;q=0.9,application/vnd.openstack;q=0.8' self.assertTrue(self.handler(req)) class TestExtractJSON(test.NoDBTestCase): # Although the intent of this test class is not to test that # schemas work, we may as well use a real one to ensure that # behaviors are what we expect. 
schema = { "type": "object", "properties": { "name": {"type": "string"}, "uuid": {"type": "string", "format": "uuid"} }, "required": ["name"], "additionalProperties": False } def test_not_json(self): error = self.assertRaises(webob.exc.HTTPBadRequest, util.extract_json, 'I am a string', self.schema) self.assertIn('Malformed JSON', str(error)) def test_malformed_json(self): error = self.assertRaises(webob.exc.HTTPBadRequest, util.extract_json, '{"my bytes got left behind":}', self.schema) self.assertIn('Malformed JSON', str(error)) def test_schema_mismatch(self): error = self.assertRaises(webob.exc.HTTPBadRequest, util.extract_json, '{"a": "b"}', self.schema) self.assertIn('JSON does not validate', str(error)) def test_type_invalid(self): error = self.assertRaises(webob.exc.HTTPBadRequest, util.extract_json, '{"name": 1}', self.schema) self.assertIn('JSON does not validate', str(error)) def test_format_checker(self): error = self.assertRaises(webob.exc.HTTPBadRequest, util.extract_json, '{"name": "hello", "uuid": "not a uuid"}', self.schema) self.assertIn('JSON does not validate', str(error)) def test_no_additional_properties(self): error = self.assertRaises(webob.exc.HTTPBadRequest, util.extract_json, '{"name": "hello", "cow": "moo"}', self.schema) self.assertIn('JSON does not validate', str(error)) def test_valid(self): data = util.extract_json( '{"name": "cow", ' '"uuid": "%s"}' % uuidsentinel.rp_uuid, self.schema) self.assertEqual('cow', data['name']) self.assertEqual(uuidsentinel.rp_uuid, data['uuid']) class TestJSONErrorFormatter(test.NoDBTestCase): def setUp(self): super(TestJSONErrorFormatter, self).setUp() self.environ = {} # TODO(jaypipes): Remove this when we get more than a single version # in the placement API. The fact that we only had a single version was # masking a bug in the utils code. _versions = [ '1.0', '1.1', ] mod_str = 'nova.api.openstack.placement.microversion.VERSIONS' self.useFixture(fixtures.MonkeyPatch(mod_str, _versions)) def test_status_to_int_code(self): body = '' status = '404 Not Found' title = '' result = util.json_error_formatter( body, status, title, self.environ) self.assertEqual(404, result['errors'][0]['status']) def test_strip_body_tags(self): body = '
<h1>Big Error!</h1>
' status = '400 Bad Request' title = '' result = util.json_error_formatter( body, status, title, self.environ) self.assertEqual('Big Error!', result['errors'][0]['detail']) def test_request_id_presence(self): body = '' status = '400 Bad Request' title = '' # no request id in environ, none in error result = util.json_error_formatter( body, status, title, self.environ) self.assertNotIn('request_id', result['errors'][0]) # request id in environ, request id in error self.environ[request_id.ENV_REQUEST_ID] = 'stub-id' result = util.json_error_formatter( body, status, title, self.environ) self.assertEqual('stub-id', result['errors'][0]['request_id']) def test_microversion_406_handling(self): body = '' status = '400 Bad Request' title = '' # Not a 406, no version info required. result = util.json_error_formatter( body, status, title, self.environ) self.assertNotIn('max_version', result['errors'][0]) self.assertNotIn('min_version', result['errors'][0]) # A 406 but not because of microversions (microversion # parsing was successful), no version info # required. status = '406 Not Acceptable' version_obj = microversion.parse_version_string('2.3') self.environ[microversion.MICROVERSION_ENVIRON] = version_obj result = util.json_error_formatter( body, status, title, self.environ) self.assertNotIn('max_version', result['errors'][0]) self.assertNotIn('min_version', result['errors'][0]) # Microversion parsing failed, status is 406, send version info. del self.environ[microversion.MICROVERSION_ENVIRON] result = util.json_error_formatter( body, status, title, self.environ) self.assertEqual(microversion.max_version_string(), result['errors'][0]['max_version']) self.assertEqual(microversion.min_version_string(), result['errors'][0]['min_version']) class TestRequireContent(test.NoDBTestCase): """Confirm behavior of util.require_accept.""" @staticmethod @util.require_content('application/json') def handler(req): """Fake handler to test decorator.""" return True def test_fail_no_content_type(self): req = webob.Request.blank('/') error = self.assertRaises(webob.exc.HTTPUnsupportedMediaType, self.handler, req) self.assertEqual( 'The media type None is not supported, use application/json', str(error)) def test_fail_wrong_content_type(self): req = webob.Request.blank('/') req.content_type = 'text/plain' error = self.assertRaises(webob.exc.HTTPUnsupportedMediaType, self.handler, req) self.assertEqual( 'The media type text/plain is not supported, use application/json', str(error)) def test_success_content_type(self): req = webob.Request.blank('/') req.content_type = 'application/json' self.assertTrue(self.handler(req)) class TestPlacementURLs(test.NoDBTestCase): def setUp(self): super(TestPlacementURLs, self).setUp() self.resource_provider = rp_obj.ResourceProvider( name=uuidsentinel.rp_name, uuid=uuidsentinel.rp_uuid) self.resource_class = rp_obj.ResourceClass( name='CUSTOM_BAREMETAL_GOLD', id=1000) def test_resource_provider_url(self): environ = {} expected_url = '/resource_providers/%s' % uuidsentinel.rp_uuid self.assertEqual(expected_url, util.resource_provider_url( environ, self.resource_provider)) def test_resource_provider_url_prefix(self): # SCRIPT_NAME represents the mount point of a WSGI # application when it is hosted at a path/prefix. 
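# In other words, when placement is mounted under a prefix (for example /placement), every URL the API generates must carry that prefix so clients can follow it.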
environ = {'SCRIPT_NAME': '/placement'} expected_url = ('/placement/resource_providers/%s' % uuidsentinel.rp_uuid) self.assertEqual(expected_url, util.resource_provider_url( environ, self.resource_provider)) def test_inventories_url(self): environ = {} expected_url = ('/resource_providers/%s/inventories' % uuidsentinel.rp_uuid) self.assertEqual(expected_url, util.inventory_url( environ, self.resource_provider)) def test_inventory_url(self): resource_class = 'DISK_GB' environ = {} expected_url = ('/resource_providers/%s/inventories/%s' % (uuidsentinel.rp_uuid, resource_class)) self.assertEqual(expected_url, util.inventory_url( environ, self.resource_provider, resource_class)) def test_resource_class_url(self): environ = {} expected_url = '/resource_classes/CUSTOM_BAREMETAL_GOLD' self.assertEqual(expected_url, util.resource_class_url( environ, self.resource_class)) def test_resource_class_url_prefix(self): # SCRIPT_NAME represents the mount point of a WSGI # application when it is hosted at a path/prefix. environ = {'SCRIPT_NAME': '/placement'} expected_url = '/placement/resource_classes/CUSTOM_BAREMETAL_GOLD' self.assertEqual(expected_url, util.resource_class_url( environ, self.resource_class)) class TestNormalizeResourceQsParam(test.NoDBTestCase): def test_success(self): qs = "VCPU:1" resources = util.normalize_resources_qs_param(qs) expected = { 'VCPU': 1, } self.assertEqual(expected, resources) qs = "VCPU:1,MEMORY_MB:1024,DISK_GB:100" resources = util.normalize_resources_qs_param(qs) expected = { 'VCPU': 1, 'MEMORY_MB': 1024, 'DISK_GB': 100, } self.assertEqual(expected, resources) def test_400_empty_string(self): qs = "" self.assertRaises( webob.exc.HTTPBadRequest, util.normalize_resources_qs_param, qs, ) def test_400_bad_int(self): qs = "VCPU:foo" self.assertRaises( webob.exc.HTTPBadRequest, util.normalize_resources_qs_param, qs, ) def test_400_no_amount(self): qs = "VCPU" self.assertRaises( webob.exc.HTTPBadRequest, util.normalize_resources_qs_param, qs, ) def test_400_zero_amount(self): qs = "VCPU:0" self.assertRaises( webob.exc.HTTPBadRequest, util.normalize_resources_qs_param, qs, ) class TestNormalizeTraitsQsParam(test.NoDBTestCase): def test_one(self): trait = 'HW_CPU_X86_VMX' # Various whitespace permutations for fmt in ('%s', ' %s', '%s ', ' %s ', ' %s '): self.assertEqual(set([trait]), util.normalize_traits_qs_param(fmt % trait)) def test_multiple(self): traits = ( 'HW_CPU_X86_VMX', 'HW_GPU_API_DIRECT3D_V12_0', 'HW_NIC_OFFLOAD_RX', 'CUSTOM_GOLD', 'STORAGE_DISK_SSD', ) self.assertEqual( set(traits), util.normalize_traits_qs_param('%s, %s,%s , %s , %s ' % traits)) def test_400_all_empty(self): for qs in ('', ' ', ' ', ',', ' , , '): self.assertRaises( webob.exc.HTTPBadRequest, util.normalize_traits_qs_param, qs) def test_400_some_empty(self): traits = ( 'HW_NIC_OFFLOAD_RX', 'CUSTOM_GOLD', 'STORAGE_DISK_SSD', ) for fmt in ('%s,,%s,%s', ',%s,%s,%s', '%s,%s,%s,', ' %s , %s , , %s'): self.assertRaises(webob.exc.HTTPBadRequest, util.normalize_traits_qs_param, fmt % traits) class TestParseQsResourcesAndTraits(test.NoDBTestCase): @staticmethod def do_parse(qstring): """Converts a querystring to a MultiDict, mimicking request.GET, and runs parse_qs_request_groups on it. 
""" return util.parse_qs_request_groups(webob.multidict.MultiDict( urlparse.parse_qsl(qstring))) def assertRequestGroupsEqual(self, expected, observed): self.assertEqual(len(expected), len(observed)) for exp, obs in zip(expected, observed): self.assertEqual(vars(exp), vars(obs)) def test_empty(self): self.assertRequestGroupsEqual([], self.do_parse('')) def test_unnumbered_only(self): """Unnumbered resources & traits - no numbered groupings.""" qs = ('resources=VCPU:2,MEMORY_MB:2048' '&required=HW_CPU_X86_VMX,CUSTOM_GOLD') expected = [ pl.RequestGroup( use_same_provider=False, resources={ 'VCPU': 2, 'MEMORY_MB': 2048, }, required_traits={ 'HW_CPU_X86_VMX', 'CUSTOM_GOLD', }, ), ] self.assertRequestGroupsEqual(expected, self.do_parse(qs)) def test_unnumbered_resources_only(self): """Validate the bit that can be used for 1.10 and earlier.""" qs = 'resources=VCPU:2,MEMORY_MB:2048,DISK_GB:5,CUSTOM_MAGIC:123' expected = [ pl.RequestGroup( use_same_provider=False, resources={ 'VCPU': 2, 'MEMORY_MB': 2048, 'DISK_GB': 5, 'CUSTOM_MAGIC': 123, }, ), ] self.assertRequestGroupsEqual(expected, self.do_parse(qs)) def test_numbered_only(self): # Crazy ordering and nonsequential numbers don't matter. # It's okay to have a 'resources' without a 'required'. # A trait that's repeated shows up in both spots. qs = ('resources1=VCPU:2,MEMORY_MB:2048' '&required42=CUSTOM_GOLD' '&resources99=DISK_GB:5' '&resources42=CUSTOM_MAGIC:123' '&required1=HW_CPU_X86_VMX,CUSTOM_GOLD') expected = [ pl.RequestGroup( resources={ 'VCPU': 2, 'MEMORY_MB': 2048, }, required_traits={ 'HW_CPU_X86_VMX', 'CUSTOM_GOLD', }, ), pl.RequestGroup( resources={ 'CUSTOM_MAGIC': 123, }, required_traits={ 'CUSTOM_GOLD', }, ), pl.RequestGroup( resources={ 'DISK_GB': 5, }, ), ] self.assertRequestGroupsEqual(expected, self.do_parse(qs)) def test_numbered_and_unnumbered(self): qs = ('resources=VCPU:3,MEMORY_MB:4096,DISK_GB:10' '&required=HW_CPU_X86_VMX,CUSTOM_MEM_FLASH,STORAGE_DISK_SSD' '&resources1=SRIOV_NET_VF:2' '&required1=CUSTOM_PHYSNET_PRIVATE' '&resources2=SRIOV_NET_VF:1,NET_INGRESS_BYTES_SEC:20000' ',NET_EGRESS_BYTES_SEC:10000' '&required2=CUSTOM_SWITCH_BIG,CUSTOM_PHYSNET_PROD' '&resources3=CUSTOM_MAGIC:123') expected = [ pl.RequestGroup( use_same_provider=False, resources={ 'VCPU': 3, 'MEMORY_MB': 4096, 'DISK_GB': 10, }, required_traits={ 'HW_CPU_X86_VMX', 'CUSTOM_MEM_FLASH', 'STORAGE_DISK_SSD', }, ), pl.RequestGroup( resources={ 'SRIOV_NET_VF': 2, }, required_traits={ 'CUSTOM_PHYSNET_PRIVATE', }, ), pl.RequestGroup( resources={ 'SRIOV_NET_VF': 1, 'NET_INGRESS_BYTES_SEC': 20000, 'NET_EGRESS_BYTES_SEC': 10000, }, required_traits={ 'CUSTOM_SWITCH_BIG', 'CUSTOM_PHYSNET_PROD', }, ), pl.RequestGroup( resources={ 'CUSTOM_MAGIC': 123, }, ), ] self.assertRequestGroupsEqual(expected, self.do_parse(qs)) def test_400_malformed_resources(self): # Somewhat duplicates TestNormalizeResourceQsParam.test_400*. qs = ('resources=VCPU:0,MEMORY_MB:4096,DISK_GB:10' # Bad ----------^ '&required=HW_CPU_X86_VMX,CUSTOM_MEM_FLASH,STORAGE_DISK_SSD' '&resources1=SRIOV_NET_VF:2' '&required1=CUSTOM_PHYSNET_PRIVATE' '&resources2=SRIOV_NET_VF:1,NET_INGRESS_BYTES_SEC:20000' ',NET_EGRESS_BYTES_SEC:10000' '&required2=CUSTOM_SWITCH_BIG,CUSTOM_PHYSNET_PROD' '&resources3=CUSTOM_MAGIC:123') self.assertRaises(webob.exc.HTTPBadRequest, self.do_parse, qs) def test_400_malformed_traits(self): # Somewhat duplicates TestNormalizeResourceQsParam.test_400*. 
qs = ('resources=VCPU:7,MEMORY_MB:4096,DISK_GB:10' '&required=HW_CPU_X86_VMX,CUSTOM_MEM_FLASH,STORAGE_DISK_SSD' '&resources1=SRIOV_NET_VF:2' '&required1=CUSTOM_PHYSNET_PRIVATE' '&resources2=SRIOV_NET_VF:1,NET_INGRESS_BYTES_SEC:20000' ',NET_EGRESS_BYTES_SEC:10000' '&required2=CUSTOM_SWITCH_BIG,CUSTOM_PHYSNET_PROD,' # Bad -------------------------------------------^ '&resources3=CUSTOM_MAGIC:123') self.assertRaises(webob.exc.HTTPBadRequest, self.do_parse, qs) def test_400_traits_no_resources_unnumbered(self): qs = ('resources9=VCPU:7,MEMORY_MB:4096,DISK_GB:10' # Oops ---^ '&required=HW_CPU_X86_VMX,CUSTOM_MEM_FLASH,STORAGE_DISK_SSD' '&resources1=SRIOV_NET_VF:2' '&required1=CUSTOM_PHYSNET_PRIVATE' '&resources2=SRIOV_NET_VF:1,NET_INGRESS_BYTES_SEC:20000' ',NET_EGRESS_BYTES_SEC:10000' '&required2=CUSTOM_SWITCH_BIG,CUSTOM_PHYSNET_PROD' '&resources3=CUSTOM_MAGIC:123') self.assertRaises(webob.exc.HTTPBadRequest, self.do_parse, qs) def test_400_traits_no_resources_numbered(self): qs = ('resources=VCPU:7,MEMORY_MB:4096,DISK_GB:10' '&required=HW_CPU_X86_VMX,CUSTOM_MEM_FLASH,STORAGE_DISK_SSD' '&resources11=SRIOV_NET_VF:2' # Oops ----^^ '&required1=CUSTOM_PHYSNET_PRIVATE' '&resources20=SRIOV_NET_VF:1,NET_INGRESS_BYTES_SEC:20000' # Oops ----^^ ',NET_EGRESS_BYTES_SEC:10000' '&required2=CUSTOM_SWITCH_BIG,CUSTOM_PHYSNET_PROD' '&resources3=CUSTOM_MAGIC:123') self.assertRaises(webob.exc.HTTPBadRequest, self.do_parse, qs) class TestPickLastModified(test.NoDBTestCase): def setUp(self): super(TestPickLastModified, self).setUp() self.resource_provider = rp_obj.ResourceProvider( name=uuidsentinel.rp_name, uuid=uuidsentinel.rp_uuid) def test_updated_versus_none(self): now = timeutils.utcnow(with_timezone=True) self.resource_provider.updated_at = now self.resource_provider.created_at = now chosen_time = util.pick_last_modified(None, self.resource_provider) self.assertEqual(now, chosen_time) def test_created_versus_none(self): now = timeutils.utcnow(with_timezone=True) self.resource_provider.created_at = now self.resource_provider.updated_at = None chosen_time = util.pick_last_modified(None, self.resource_provider) self.assertEqual(now, chosen_time) def test_last_modified_less(self): now = timeutils.utcnow(with_timezone=True) less = now - datetime.timedelta(seconds=300) self.resource_provider.updated_at = now self.resource_provider.created_at = now chosen_time = util.pick_last_modified(less, self.resource_provider) self.assertEqual(now, chosen_time) def test_last_modified_more(self): now = timeutils.utcnow(with_timezone=True) more = now + datetime.timedelta(seconds=300) self.resource_provider.updated_at = now self.resource_provider.created_at = now chosen_time = util.pick_last_modified(more, self.resource_provider) self.assertEqual(more, chosen_time) def test_last_modified_same(self): now = timeutils.utcnow(with_timezone=True) self.resource_provider.updated_at = now self.resource_provider.created_at = now chosen_time = util.pick_last_modified(now, self.resource_provider) self.assertEqual(now, chosen_time) def test_no_object_time_fields_less(self): # An unsaved ovo will not have the created_at or updated_at fields # present on the object at all. 
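# pick_last_modified is then expected to fall back to the current time, which is why oslo's utcnow is mocked in the three tests below.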
now = timeutils.utcnow(with_timezone=True) less = now - datetime.timedelta(seconds=300) with mock.patch('oslo_utils.timeutils.utcnow') as mock_utc: mock_utc.return_value = now chosen_time = util.pick_last_modified( less, self.resource_provider) self.assertEqual(now, chosen_time) mock_utc.assert_called_once_with(with_timezone=True) def test_no_object_time_fields_more(self): # An unsaved ovo will not have the created_at or updated_at fields # present on the object at all. now = timeutils.utcnow(with_timezone=True) more = now + datetime.timedelta(seconds=300) with mock.patch('oslo_utils.timeutils.utcnow') as mock_utc: mock_utc.return_value = now chosen_time = util.pick_last_modified( more, self.resource_provider) self.assertEqual(more, chosen_time) mock_utc.assert_called_once_with(with_timezone=True) def test_no_object_time_fields_none(self): # An unsaved ovo will not have the created_at or updated_at fields # present on the object at all. now = timeutils.utcnow(with_timezone=True) with mock.patch('oslo_utils.timeutils.utcnow') as mock_utc: mock_utc.return_value = now chosen_time = util.pick_last_modified( None, self.resource_provider) self.assertEqual(now, chosen_time) mock_utc.assert_called_once_with(with_timezone=True) nova-17.0.1/nova/tests/unit/api/openstack/placement/__init__.py0000666000175000017500000000000013250073126024461 0ustar zuulzuul00000000000000nova-17.0.1/nova/tests/unit/api/openstack/placement/test_deploy.py0000666000175000017500000000305413250073136025272 0ustar zuulzuul00000000000000# All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Unit tests for the deploy function used to build the Placement service.""" from oslo_config import cfg import webob from nova.api.openstack.placement import deploy from nova import test CONF = cfg.CONF class DeployTest(test.NoDBTestCase): def test_auth_middleware_factory(self): """Make sure that configuration settings make their way to the keystone middleware correctly. """ auth_uri = 'http://example.com/identity' authenticate_header_value = "Keystone uri='%s'" % auth_uri self.flags(auth_uri=auth_uri, group='keystone_authtoken') # ensure that the auth_token middleware is chosen self.flags(auth_strategy='keystone', group='api') app = deploy.deploy(CONF, 'nova') req = webob.Request.blank('/resource_providers', method="GET") response = req.get_response(app) self.assertEqual(authenticate_header_value, response.headers['www-authenticate']) nova-17.0.1/nova/tests/unit/api/openstack/placement/test_handler.py0000666000175000017500000001602713250073126025416 0ustar zuulzuul00000000000000# All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the # License for the specific language governing permissions and limitations # under the License. """Unit tests for the functions used by the placement API handlers.""" import mock import routes import webob from nova.api.openstack.placement import handler from nova.api.openstack.placement.handlers import root from nova.api.openstack.placement import microversion from nova import test from nova.tests import uuidsentinel # Used in tests below def start_response(*args, **kwargs): pass def _environ(path='/moo', method='GET'): return { 'PATH_INFO': path, 'REQUEST_METHOD': method, 'SERVER_NAME': 'example.com', 'SERVER_PORT': '80', 'wsgi.url_scheme': 'http', # The microversion version value is not used, but it # needs to be set to avoid a KeyError. microversion.MICROVERSION_ENVIRON: microversion.Version(1, 12), } class DispatchTest(test.NoDBTestCase): def setUp(self): super(DispatchTest, self).setUp() self.mapper = routes.Mapper() self.route_handler = mock.MagicMock() def test_no_match_null_map(self): self.assertRaises(webob.exc.HTTPNotFound, handler.dispatch, _environ(), start_response, self.mapper) def test_no_match_with_map(self): self.mapper.connect('/foobar', action='hello') self.assertRaises(webob.exc.HTTPNotFound, handler.dispatch, _environ(), start_response, self.mapper) def test_simple_match(self): self.mapper.connect('/foobar', action=self.route_handler, conditions=dict(method=['GET'])) environ = _environ(path='/foobar') handler.dispatch(environ, start_response, self.mapper) self.route_handler.assert_called_with(environ, start_response) def test_simple_match_routing_args(self): self.mapper.connect('/foobar/{id}', action=self.route_handler, conditions=dict(method=['GET'])) environ = _environ(path='/foobar/%s' % uuidsentinel.foobar) handler.dispatch(environ, start_response, self.mapper) self.route_handler.assert_called_with(environ, start_response) self.assertEqual(uuidsentinel.foobar, environ['wsgiorg.routing_args'][1]['id']) class MapperTest(test.NoDBTestCase): def setUp(self): super(MapperTest, self).setUp() declarations = { '/hello': {'GET': 'hello'} } self.mapper = handler.make_map(declarations) def test_no_match(self): environ = _environ(path='/cow') self.assertIsNone(self.mapper.match(environ=environ)) def test_match(self): environ = _environ(path='/hello') action = self.mapper.match(environ=environ)['action'] self.assertEqual('hello', action) def test_405_methods(self): environ = _environ(path='/hello', method='POST') result = self.mapper.match(environ=environ) self.assertEqual(handler.handle_405, result['action']) self.assertEqual('GET', result['_methods']) def test_405_headers(self): environ = _environ(path='/hello', method='POST') global headers, status headers = status = None def local_start_response(*args, **kwargs): global headers, status status = args[0] headers = {header[0]: header[1] for header in args[1]} handler.dispatch(environ, local_start_response, self.mapper) allow_header = headers['allow'] self.assertEqual('405 Method Not Allowed', status) self.assertEqual('GET', allow_header) # PEP 3333 requires that headers be whatever the native str # is in that version of Python. Never unicode. 
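# (Under PEP 3333 the native str is bytes on Python 2 and unicode on Python 3; the assertion below guards against handing the WSGI server the wrong type.)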
self.assertEqual(str, type(allow_header)) class PlacementLoggingTest(test.NoDBTestCase): @mock.patch("nova.api.openstack.placement.handler.LOG") def test_404_no_error_log(self, mocked_log): environ = _environ(path='/hello', method='GET') context_mock = mock.Mock() context_mock.to_policy_values.return_value = {'roles': ['admin']} environ['placement.context'] = context_mock app = handler.PlacementHandler() self.assertRaises(webob.exc.HTTPNotFound, app, environ, start_response) mocked_log.error.assert_not_called() mocked_log.exception.assert_not_called() class DeclarationsTest(test.NoDBTestCase): def setUp(self): super(DeclarationsTest, self).setUp() self.mapper = handler.make_map(handler.ROUTE_DECLARATIONS) def test_root_slash_match(self): environ = _environ(path='/') result = self.mapper.match(environ=environ) self.assertEqual(root.home, result['action']) def test_root_empty_match(self): environ = _environ(path='') result = self.mapper.match(environ=environ) self.assertEqual(root.home, result['action']) class ContentHeadersTest(test.NoDBTestCase): def setUp(self): super(ContentHeadersTest, self).setUp() self.environ = _environ(path='/') self.app = handler.PlacementHandler() def test_no_content_type(self): self.environ['CONTENT_LENGTH'] = '10' self.assertRaisesRegex(webob.exc.HTTPBadRequest, "content-type header required when " "content-length > 0", self.app, self.environ, start_response) def test_non_integer_content_length(self): self.environ['CONTENT_LENGTH'] = 'foo' self.assertRaisesRegex(webob.exc.HTTPBadRequest, "content-length header must be an integer", self.app, self.environ, start_response) def test_empty_content_type(self): self.environ['CONTENT_LENGTH'] = '10' self.environ['CONTENT_TYPE'] = '' self.assertRaisesRegex(webob.exc.HTTPBadRequest, "content-type header required when " "content-length > 0", self.app, self.environ, start_response) def test_empty_content_length_and_type_works(self): self.environ['CONTENT_LENGTH'] = '' self.environ['CONTENT_TYPE'] = '' self.app(self.environ, start_response) def test_content_length_and_type_works(self): self.environ['CONTENT_LENGTH'] = '10' self.environ['CONTENT_TYPE'] = 'foo' self.app(self.environ, start_response) nova-17.0.1/nova/tests/unit/api/openstack/placement/test_requestlog.py0000666000175000017500000000552313250073126026172 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. """Tests for the placement request log middleware.""" import mock import webob from nova.api.openstack.placement import requestlog from nova import test class TestRequestLog(test.NoDBTestCase): @staticmethod @webob.dec.wsgify def application(req): req.response.status = 200 return req.response def setUp(self): super(TestRequestLog, self).setUp() self.req = webob.Request.blank('/resource_providers?name=myrp') self.environ = self.req.environ # The blank does not include remote address, so add it. 
self.environ['REMOTE_ADDR'] = '127.0.0.1' # nor a microversion self.environ['placement.microversion'] = '2.1' def test_get_uri(self): req_uri = requestlog.RequestLog._get_uri(self.environ) self.assertEqual('/resource_providers?name=myrp', req_uri) def test_get_uri_knows_prefix(self): self.environ['SCRIPT_NAME'] = '/placement' req_uri = requestlog.RequestLog._get_uri(self.environ) self.assertEqual('/placement/resource_providers?name=myrp', req_uri) @mock.patch("nova.api.openstack.placement.requestlog.RequestLog.write_log") def test_middleware_writes_logs(self, write_log): start_response_mock = mock.MagicMock() app = requestlog.RequestLog(self.application) app(self.environ, start_response_mock) write_log.assert_called_once_with( self.environ, '/resource_providers?name=myrp', '200 OK', '0') @mock.patch("nova.api.openstack.placement.requestlog.LOG") def test_middleware_sends_message(self, mocked_log): start_response_mock = mock.MagicMock() app = requestlog.RequestLog(self.application) app(self.environ, start_response_mock) mocked_log.debug.assert_called_once_with( 'Starting request: %s "%s %s"', '127.0.0.1', 'GET', '/resource_providers?name=myrp') mocked_log.info.assert_called_once_with( '%(REMOTE_ADDR)s "%(REQUEST_METHOD)s %(REQUEST_URI)s" ' 'status: %(status)s len: %(bytes)s microversion: %(microversion)s', {'microversion': '2.1', 'status': '200', 'REQUEST_URI': '/resource_providers?name=myrp', 'REQUEST_METHOD': 'GET', 'REMOTE_ADDR': '127.0.0.1', 'bytes': '0'}) nova-17.0.1/nova/tests/unit/api/openstack/placement/test_microversion.py0000666000175000017500000001352113250073126026514 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. # See the License for the specific language governing permissions and # limitations under the License. 
"""Tests for placement microversion handling.""" import collections import operator import webob import mock # import the handlers to load up handler decorators import nova.api.openstack.placement.handler # noqa from nova.api.openstack.placement import microversion from nova import test def handler(): return True class TestMicroversionFindMethod(test.NoDBTestCase): def test_method_405(self): self.assertRaises(webob.exc.HTTPMethodNotAllowed, microversion._find_method, handler, '1.1', 405) def test_method_404(self): self.assertRaises(webob.exc.HTTPNotFound, microversion._find_method, handler, '1.1', 404) class TestMicroversionDecoration(test.NoDBTestCase): @mock.patch('nova.api.openstack.placement.microversion.VERSIONED_METHODS', new=collections.defaultdict(list)) def test_methods_structure(self): """Test that VERSIONED_METHODS gets data as expected.""" self.assertEqual(0, len(microversion.VERSIONED_METHODS)) fully_qualified_method = microversion._fully_qualified_name( handler) microversion.version_handler('1.1', '1.10')(handler) microversion.version_handler('2.0')(handler) methods_data = microversion.VERSIONED_METHODS[fully_qualified_method] stored_method_data = methods_data[-1] self.assertEqual(2, len(methods_data)) self.assertEqual(microversion.Version(1, 1), stored_method_data[0]) self.assertEqual(microversion.Version(1, 10), stored_method_data[1]) self.assertEqual(handler, stored_method_data[2]) self.assertEqual(microversion.Version(2, 0), methods_data[0][0]) def test_version_handler_float_exception(self): self.assertRaises(AttributeError, microversion.version_handler(1.1), handler) def test_version_handler_nan_exception(self): self.assertRaises(TypeError, microversion.version_handler('cow'), handler) def test_version_handler_tuple_exception(self): self.assertRaises(AttributeError, microversion.version_handler((1, 1)), handler) class TestMicroversionIntersection(test.NoDBTestCase): """Test that there are no overlaps in the versioned handlers.""" # If you add versioned handlers you need to update this value to # reflect the change. The value is the total number of methods # with different names, not the total number overall. That is, # if you add two different versions of method 'foobar' the # number only goes up by one if no other version foobar yet # exists. This operates as a simple sanity check. TOTAL_VERSIONED_METHODS = 19 def test_methods_versioned(self): methods_data = microversion.VERSIONED_METHODS self.assertEqual(self.TOTAL_VERSIONED_METHODS, len(methods_data)) @staticmethod def _check_intersection(method_info): # See check_for_versions_intersection in # nova.api.openstack.wsgi. 
pairs = [] counter = 0 for min_ver, max_ver, func in method_info: pairs.append((min_ver, 1, func)) pairs.append((max_ver, -1, func)) pairs.sort(key=operator.itemgetter(0)) for p in pairs: counter += p[1] if counter > 1: return True return False @mock.patch('nova.api.openstack.placement.microversion.VERSIONED_METHODS', new=collections.defaultdict(list)) def test_faked_intersection(self): microversion.version_handler('1.0', '1.9')(handler) microversion.version_handler('1.8', '2.0')(handler) for method_info in microversion.VERSIONED_METHODS.values(): self.assertTrue(self._check_intersection(method_info)) @mock.patch('nova.api.openstack.placement.microversion.VERSIONED_METHODS', new=collections.defaultdict(list)) def test_faked_non_intersection(self): microversion.version_handler('1.0', '1.8')(handler) microversion.version_handler('1.9', '2.0')(handler) for method_info in microversion.VERSIONED_METHODS.values(): self.assertFalse(self._check_intersection(method_info)) def test_check_real_for_intersection(self): """Check the real handlers to make sure there are no intersections.""" for method_name, method_info in microversion.VERSIONED_METHODS.items(): self.assertFalse( self._check_intersection(method_info), 'method %s has intersecting versioned handlers' % method_name) class MicroversionSequentialTest(test.NoDBTestCase): def test_microversion_sequential(self): for method_name, method_list in microversion.VERSIONED_METHODS.items(): previous_min_version = method_list[0][0] for method in method_list[1:]: previous_min_version = microversion.parse_version_string( '%s.%s' % (previous_min_version.major, previous_min_version.minor - 1)) self.assertEqual(previous_min_version, method[1], "The microversions aren't sequential in the method %s" % method_name) previous_min_version = method[0] nova-17.0.1/nova/tests/unit/api/openstack/test_legacy_v2_compatible_wrapper.py0000666000175000017500000001754413250073126027650 0ustar zuulzuul00000000000000# Copyright 2015 Intel Corporation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License.
from jsonschema import exceptions as jsonschema_exc import webob import webob.dec import nova.api.openstack from nova.api.openstack import wsgi from nova.api.validation import validators from nova import test class TestLegacyV2CompatibleWrapper(test.NoDBTestCase): def test_filter_out_microversions_request_header(self): req = webob.Request.blank('/') req.headers[wsgi.API_VERSION_REQUEST_HEADER] = '2.2' @webob.dec.wsgify def fake_app(req, *args, **kwargs): self.assertNotIn(wsgi.API_VERSION_REQUEST_HEADER, req) resp = webob.Response() return resp wrapper = nova.api.openstack.LegacyV2CompatibleWrapper(fake_app) req.get_response(wrapper) def test_filter_out_microversions_response_header(self): req = webob.Request.blank('/') @webob.dec.wsgify def fake_app(req, *args, **kwargs): resp = webob.Response() resp.status_int = 204 resp.headers[wsgi.API_VERSION_REQUEST_HEADER] = '2.3' return resp wrapper = nova.api.openstack.LegacyV2CompatibleWrapper(fake_app) response = req.get_response(wrapper) self.assertNotIn(wsgi.API_VERSION_REQUEST_HEADER, response.headers) def test_filter_out_microversions_vary_header(self): req = webob.Request.blank('/') @webob.dec.wsgify def fake_app(req, *args, **kwargs): resp = webob.Response() resp.status_int = 204 resp.headers['Vary'] = wsgi.API_VERSION_REQUEST_HEADER return resp wrapper = nova.api.openstack.LegacyV2CompatibleWrapper(fake_app) response = req.get_response(wrapper) self.assertNotIn('Vary', response.headers) def test_filter_out_microversions_vary_header_with_multi_fields(self): req = webob.Request.blank('/') @webob.dec.wsgify def fake_app(req, *args, **kwargs): resp = webob.Response() resp.status_int = 204 resp.headers['Vary'] = '%s, %s, %s' % ( wsgi.API_VERSION_REQUEST_HEADER, 'FAKE_HEADER1', 'FAKE_HEADER2') return resp wrapper = nova.api.openstack.LegacyV2CompatibleWrapper(fake_app) response = req.get_response(wrapper) self.assertEqual('FAKE_HEADER1,FAKE_HEADER2', response.headers['Vary']) def test_filter_out_microversions_no_vary_header(self): req = webob.Request.blank('/') @webob.dec.wsgify def fake_app(req, *args, **kwargs): resp = webob.Response() resp.status_int = 204 return resp wrapper = nova.api.openstack.LegacyV2CompatibleWrapper(fake_app) response = req.get_response(wrapper) self.assertNotIn('Vary', response.headers) def test_legacy_env_variable(self): req = webob.Request.blank('/') @webob.dec.wsgify(RequestClass=wsgi.Request) def fake_app(req, *args, **kwargs): self.assertTrue(req.is_legacy_v2()) resp = webob.Response() resp.status_int = 204 return resp wrapper = nova.api.openstack.LegacyV2CompatibleWrapper(fake_app) req.get_response(wrapper) class TestSoftAdditionalPropertiesValidation(test.NoDBTestCase): def setUp(self): super(TestSoftAdditionalPropertiesValidation, self).setUp() self.schema = { 'type': 'object', 'properties': { 'foo': {'type': 'string'}, 'bar': {'type': 'string'} }, 'additionalProperties': False} self.schema_allow = { 'type': 'object', 'properties': { 'foo': {'type': 'string'}, 'bar': {'type': 'string'} }, 'additionalProperties': True} self.schema_with_pattern = { 'type': 'object', 'patternProperties': { '^[a-zA-Z0-9-_:. ]{1,255}$': {'type': 'string'} }, 'additionalProperties': False} self.schema_allow_with_pattern = { 'type': 'object', 'patternProperties': { '^[a-zA-Z0-9-_:. 
]{1,255}$': {'type': 'string'} }, 'additionalProperties': True} def test_strip_extra_properties_out_without_extra_props(self): validator = validators._SchemaValidator(self.schema).validator instance = {'foo': '1'} gen = validators._soft_validate_additional_properties( validator, False, instance, self.schema) self.assertRaises(StopIteration, next, gen) self.assertEqual({'foo': '1'}, instance) def test_strip_extra_properties_out_with_extra_props(self): validator = validators._SchemaValidator(self.schema).validator instance = {'foo': '1', 'extra_foo': 'extra'} gen = validators._soft_validate_additional_properties( validator, False, instance, self.schema) self.assertRaises(StopIteration, next, gen) self.assertEqual({'foo': '1'}, instance) def test_not_strip_extra_properties_out_with_allow_extra_props(self): validator = validators._SchemaValidator(self.schema_allow).validator instance = {'foo': '1', 'extra_foo': 'extra'} gen = validators._soft_validate_additional_properties( validator, True, instance, self.schema_allow) self.assertRaises(StopIteration, next, gen) self.assertEqual({'foo': '1', 'extra_foo': 'extra'}, instance) def test_pattern_properties_with_invalid_property_and_allow_extra_props( self): validator = validators._SchemaValidator( self.schema_with_pattern).validator instance = {'foo': '1', 'b' * 300: 'extra'} gen = validators._soft_validate_additional_properties( validator, True, instance, self.schema_with_pattern) self.assertRaises(StopIteration, next, gen) def test_pattern_properties(self): validator = validators._SchemaValidator( self.schema_with_pattern).validator instance = {'foo': '1'} gen = validators._soft_validate_additional_properties( validator, False, instance, self.schema_with_pattern) self.assertRaises(StopIteration, next, gen) def test_pattern_properties_with_invalid_property(self): validator = validators._SchemaValidator( self.schema_with_pattern).validator instance = {'foo': '1', 'b' * 300: 'extra'} gen = validators._soft_validate_additional_properties( validator, False, instance, self.schema_with_pattern) exc = next(gen) self.assertIsInstance(exc, jsonschema_exc.ValidationError) self.assertIn('was', exc.message) def test_pattern_properties_with_multiple_invalid_properties(self): validator = validators._SchemaValidator( self.schema_with_pattern).validator instance = {'foo': '1', 'b' * 300: 'extra', 'c' * 300: 'extra'} gen = validators._soft_validate_additional_properties( validator, False, instance, self.schema_with_pattern) exc = next(gen) self.assertIsInstance(exc, jsonschema_exc.ValidationError) self.assertIn('were', exc.message) nova-17.0.1/nova/tests/unit/api/openstack/test_wsgi.py0000666000175000017500000011574713250073126023013 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import mock from oslo_serialization import jsonutils import six import testscenarios import webob from nova.api.openstack import api_version_request as api_version from nova.api.openstack import versioned_method from nova.api.openstack import wsgi from nova import exception from nova import test from nova.tests.unit.api.openstack import fakes from nova.tests.unit import matchers from nova.tests.unit import utils class MicroversionedTest(testscenarios.WithScenarios, test.NoDBTestCase): scenarios = [ ('legacy-microversion', { 'header_name': 'X-OpenStack-Nova-API-Version', }), ('modern-microversion', { 'header_name': 'OpenStack-API-Version', }) ] def _make_microversion_header(self, value): if 'nova' in self.header_name.lower(): return {self.header_name: value} else: return {self.header_name: 'compute %s' % value} class RequestTest(MicroversionedTest): def setUp(self): super(RequestTest, self).setUp() self.stub_out('nova.i18n.get_available_languages', lambda *args, **kwargs: ['en_GB', 'en_AU', 'de', 'zh_CN', 'en_US']) def test_content_type_missing(self): request = wsgi.Request.blank('/tests/123', method='POST') request.body = b"" self.assertIsNone(request.get_content_type()) def test_content_type_unsupported(self): request = wsgi.Request.blank('/tests/123', method='POST') request.headers["Content-Type"] = "text/html" request.body = b"asdf
" self.assertRaises(exception.InvalidContentType, request.get_content_type) def test_content_type_with_charset(self): request = wsgi.Request.blank('/tests/123') request.headers["Content-Type"] = "application/json; charset=UTF-8" result = request.get_content_type() self.assertEqual(result, "application/json") def test_content_type_accept_default(self): request = wsgi.Request.blank('/tests/123.unsupported') request.headers["Accept"] = "application/unsupported1" result = request.best_match_content_type() self.assertEqual(result, "application/json") def test_cache_and_retrieve_instances(self): request = wsgi.Request.blank('/foo') instances = [] for x in range(3): instances.append({'uuid': 'uuid%s' % x}) # Store 2 request.cache_db_instances(instances[:2]) # Store 1 request.cache_db_instance(instances[2]) self.assertEqual(request.get_db_instance('uuid0'), instances[0]) self.assertEqual(request.get_db_instance('uuid1'), instances[1]) self.assertEqual(request.get_db_instance('uuid2'), instances[2]) self.assertIsNone(request.get_db_instance('uuid3')) self.assertEqual(request.get_db_instances(), {'uuid0': instances[0], 'uuid1': instances[1], 'uuid2': instances[2]}) def test_from_request(self): request = wsgi.Request.blank('/') accepted = 'bogus;q=1.1, en-gb;q=0.7,en-us,en;q=.5,*;q=.7' request.headers = {'Accept-Language': accepted} self.assertEqual(request.best_match_language(), 'en_US') def test_asterisk(self): # asterisk should match first available if there # are not any other available matches request = wsgi.Request.blank('/') accepted = '*,es;q=.5' request.headers = {'Accept-Language': accepted} self.assertEqual(request.best_match_language(), 'en_GB') def test_prefix(self): request = wsgi.Request.blank('/') accepted = 'zh' request.headers = {'Accept-Language': accepted} self.assertEqual(request.best_match_language(), 'zh_CN') def test_secondary(self): request = wsgi.Request.blank('/') accepted = 'nn,en-gb;q=.5' request.headers = {'Accept-Language': accepted} self.assertEqual(request.best_match_language(), 'en_GB') def test_none_found(self): request = wsgi.Request.blank('/') accepted = 'nb-no' request.headers = {'Accept-Language': accepted} self.assertIsNone(request.best_match_language()) def test_no_lang_header(self): request = wsgi.Request.blank('/') accepted = '' request.headers = {'Accept-Language': accepted} self.assertIsNone(request.best_match_language()) def test_api_version_request_header_none(self): request = wsgi.Request.blank('/') request.set_api_version_request() self.assertEqual(api_version.APIVersionRequest( api_version.DEFAULT_API_VERSION), request.api_version_request) @mock.patch("nova.api.openstack.api_version_request.max_api_version") def test_api_version_request_header(self, mock_maxver): mock_maxver.return_value = api_version.APIVersionRequest("2.14") request = wsgi.Request.blank('/') request.headers = self._make_microversion_header('2.14') request.set_api_version_request() self.assertEqual(api_version.APIVersionRequest("2.14"), request.api_version_request) @mock.patch("nova.api.openstack.api_version_request.max_api_version") def test_api_version_request_header_latest(self, mock_maxver): mock_maxver.return_value = api_version.APIVersionRequest("3.5") request = wsgi.Request.blank('/') request.headers = self._make_microversion_header('latest') request.set_api_version_request() self.assertEqual(api_version.APIVersionRequest("3.5"), request.api_version_request) def test_api_version_request_header_invalid(self): request = wsgi.Request.blank('/') request.headers = 
self._make_microversion_header('2.1.3') self.assertRaises(exception.InvalidAPIVersionString, request.set_api_version_request) class ActionDispatcherTest(test.NoDBTestCase): def test_dispatch(self): serializer = wsgi.ActionDispatcher() serializer.create = lambda x: 'pants' self.assertEqual(serializer.dispatch({}, action='create'), 'pants') def test_dispatch_action_None(self): serializer = wsgi.ActionDispatcher() serializer.create = lambda x: 'pants' serializer.default = lambda x: 'trousers' self.assertEqual(serializer.dispatch({}, action=None), 'trousers') def test_dispatch_default(self): serializer = wsgi.ActionDispatcher() serializer.create = lambda x: 'pants' serializer.default = lambda x: 'trousers' self.assertEqual(serializer.dispatch({}, action='update'), 'trousers') class JSONDictSerializerTest(test.NoDBTestCase): def test_json(self): input_dict = dict(servers=dict(a=(2, 3))) expected_json = '{"servers":{"a":[2,3]}}' serializer = wsgi.JSONDictSerializer() result = serializer.serialize(input_dict) result = result.replace('\n', '').replace(' ', '') self.assertEqual(result, expected_json) class JSONDeserializerTest(test.NoDBTestCase): def test_json(self): data = """{"a": { "a1": "1", "a2": "2", "bs": ["1", "2", "3", {"c": {"c1": "1"}}], "d": {"e": "1"}, "f": "1"}}""" as_dict = { 'body': { 'a': { 'a1': '1', 'a2': '2', 'bs': ['1', '2', '3', {'c': {'c1': '1'}}], 'd': {'e': '1'}, 'f': '1', }, }, } deserializer = wsgi.JSONDeserializer() self.assertEqual(deserializer.deserialize(data), as_dict) def test_json_valid_utf8(self): data = b"""{"server": {"min_count": 1, "flavorRef": "1", "name": "\xe6\xa6\x82\xe5\xbf\xb5", "imageRef": "10bab10c-1304-47d", "max_count": 1}} """ as_dict = { 'body': { u'server': { u'min_count': 1, u'flavorRef': u'1', u'name': u'\u6982\u5ff5', u'imageRef': u'10bab10c-1304-47d', u'max_count': 1 } } } deserializer = wsgi.JSONDeserializer() self.assertEqual(deserializer.deserialize(data), as_dict) def test_json_invalid_utf8(self): """Send invalid utf-8 to JSONDeserializer.""" data = b"""{"server": {"min_count": 1, "flavorRef": "1", "name": "\xf0\x28\x8c\x28", "imageRef": "10bab10c-1304-47d", "max_count": 1}} """ deserializer = wsgi.JSONDeserializer() self.assertRaises(exception.MalformedRequestBody, deserializer.deserialize, data) class ResourceTest(MicroversionedTest): def get_req_id_header_name(self, request): header_name = 'x-openstack-request-id' if utils.get_api_version(request) < 3: header_name = 'x-compute-request-id' return header_name def test_resource_receives_api_version_request_default(self): class Controller(object): def index(self, req): if req.api_version_request != \ api_version.APIVersionRequest( api_version.DEFAULT_API_VERSION): raise webob.exc.HTTPInternalServerError() return 'success' app = fakes.TestRouter(Controller()) req = webob.Request.blank('/tests') response = req.get_response(app) self.assertEqual(b'success', response.body) self.assertEqual(response.status_int, 200) @mock.patch("nova.api.openstack.api_version_request.max_api_version") def test_resource_receives_api_version_request(self, mock_maxver): version = "2.5" mock_maxver.return_value = api_version.APIVersionRequest(version) class Controller(object): def index(self, req): if req.api_version_request != \ api_version.APIVersionRequest(version): raise webob.exc.HTTPInternalServerError() return 'success' app = fakes.TestRouter(Controller()) req = webob.Request.blank('/tests') req.headers = self._make_microversion_header(version) response = req.get_response(app) self.assertEqual(b'success', 
response.body) self.assertEqual(response.status_int, 200) def test_resource_receives_api_version_request_invalid(self): invalid_version = "2.5.3" class Controller(object): def index(self, req): return 'success' app = fakes.TestRouter(Controller()) req = webob.Request.blank('/tests') req.headers = self._make_microversion_header(invalid_version) response = req.get_response(app) self.assertEqual(400, response.status_int) def test_resource_call_with_method_get(self): class Controller(object): def index(self, req): return 'success' app = fakes.TestRouter(Controller()) # the default method is GET req = webob.Request.blank('/tests') response = req.get_response(app) self.assertEqual(b'success', response.body) self.assertEqual(response.status_int, 200) req.body = b'{"body": {"key": "value"}}' response = req.get_response(app) self.assertEqual(b'success', response.body) self.assertEqual(response.status_int, 200) req.content_type = 'application/json' response = req.get_response(app) self.assertEqual(b'success', response.body) self.assertEqual(response.status_int, 200) def test_resource_call_with_method_post(self): class Controller(object): @wsgi.expected_errors(400) def create(self, req, body): if expected_body != body: msg = "The request body invalid" raise webob.exc.HTTPBadRequest(explanation=msg) return "success" # verify the method: POST app = fakes.TestRouter(Controller()) req = webob.Request.blank('/tests', method="POST", content_type='application/json') req.body = b'{"body": {"key": "value"}}' expected_body = {'body': { "key": "value" } } response = req.get_response(app) self.assertEqual(response.status_int, 200) self.assertEqual(b'success', response.body) # verify without body expected_body = None req.body = None response = req.get_response(app) self.assertEqual(response.status_int, 200) self.assertEqual(b'success', response.body) # the body is validated in the controller expected_body = {'body': None} response = req.get_response(app) expected_unsupported_type_body = {'badRequest': {'message': 'The request body invalid', 'code': 400}} self.assertEqual(response.status_int, 400) self.assertEqual(expected_unsupported_type_body, jsonutils.loads(response.body)) def test_resource_call_with_method_put(self): class Controller(object): def update(self, req, id, body): if expected_body != body: msg = "The request body invalid" raise webob.exc.HTTPBadRequest(explanation=msg) return "success" # verify the method: PUT app = fakes.TestRouter(Controller()) req = webob.Request.blank('/tests/test_id', method="PUT", content_type='application/json') req.body = b'{"body": {"key": "value"}}' expected_body = {'body': { "key": "value" } } response = req.get_response(app) self.assertEqual(b'success', response.body) self.assertEqual(response.status_int, 200) req.body = None expected_body = None response = req.get_response(app) self.assertEqual(response.status_int, 200) # verify no content_type is contained in the request req = webob.Request.blank('/tests/test_id', method="PUT", content_type='application/xml') req.content_type = 'application/xml' req.body = b'{"body": {"key": "value"}}' response = req.get_response(app) expected_unsupported_type_body = {'badMediaType': {'message': 'Unsupported Content-Type', 'code': 415}} self.assertEqual(response.status_int, 415) self.assertEqual(expected_unsupported_type_body, jsonutils.loads(response.body)) def test_resource_call_with_method_delete(self): class Controller(object): def delete(self, req, id): return "success" # verify the method: DELETE app = 
fakes.TestRouter(Controller()) req = webob.Request.blank('/tests/test_id', method="DELETE") response = req.get_response(app) self.assertEqual(response.status_int, 200) self.assertEqual(b'success', response.body) # ignore the body req.body = b'{"body": {"key": "value"}}' response = req.get_response(app) self.assertEqual(response.status_int, 200) self.assertEqual(b'success', response.body) def test_resource_forbidden(self): class Controller(object): def index(self, req): raise exception.Forbidden() req = webob.Request.blank('/tests') app = fakes.TestRouter(Controller()) response = req.get_response(app) self.assertEqual(response.status_int, 403) def test_resource_not_authorized(self): class Controller(object): def index(self, req): raise exception.Unauthorized() req = webob.Request.blank('/tests') app = fakes.TestRouter(Controller()) self.assertRaises( exception.Unauthorized, req.get_response, app) def test_dispatch(self): class Controller(object): def index(self, req, pants=None): return pants controller = Controller() resource = wsgi.Resource(controller) method, extensions = resource.get_method(None, 'index', None, '') actual = resource.dispatch(method, None, {'pants': 'off'}) expected = 'off' self.assertEqual(actual, expected) def test_get_method_unknown_controller_method(self): class Controller(object): def index(self, req, pants=None): return pants controller = Controller() resource = wsgi.Resource(controller) self.assertRaises(AttributeError, resource.get_method, None, 'create', None, '') def test_get_method_action_json(self): class Controller(wsgi.Controller): @wsgi.action('fooAction') def _action_foo(self, req, id, body): return body controller = Controller() resource = wsgi.Resource(controller) method, extensions = resource.get_method(None, 'action', 'application/json', '{"fooAction": true}') self.assertEqual(controller._action_foo, method) def test_get_method_action_bad_body(self): class Controller(wsgi.Controller): @wsgi.action('fooAction') def _action_foo(self, req, id, body): return body controller = Controller() resource = wsgi.Resource(controller) self.assertRaises(exception.MalformedRequestBody, resource.get_method, None, 'action', 'application/json', '{}') def test_get_method_unknown_controller_action(self): class Controller(wsgi.Controller): @wsgi.action('fooAction') def _action_foo(self, req, id, body): return body controller = Controller() resource = wsgi.Resource(controller) self.assertRaises(KeyError, resource.get_method, None, 'action', 'application/json', '{"barAction": true}') def test_get_method_action_method(self): class Controller(object): def action(self, req, pants=None): return pants controller = Controller() resource = wsgi.Resource(controller) method, extensions = resource.get_method(None, 'action', 'application/xml', 'true", "$$akey$", "!akey", "") for key in invalid_keys: body = {"extra_specs": {key: "value1"}} req = self._get_request('1/os-extra_specs', use_admin_context=True) self.assertRaises(self.bad_request, self.controller.create, req, 1, body=body) @mock.patch('nova.objects.flavor._flavor_extra_specs_add') def test_create_valid_specs_key(self, mock_flavor_extra_specs): valid_keys = ("key1", "month.price", "I_am-a Key", "finance:g2") mock_flavor_extra_specs.side_effect = return_create_flavor_extra_specs for key in valid_keys: body = {"extra_specs": {key: "value1"}} req = self._get_request('1/os-extra_specs', use_admin_context=True) res_dict = self.controller.create(req, 1, body=body) self.assertEqual('value1', res_dict['extra_specs'][key]) 
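    # The update-time checks below mirror the create-time validation above:
    # extra-spec keys and values are capped at 255 characters, the body must
    # be a dict whose single key matches the key in the URI, and backend
    # failures surface as 404 (unknown flavor) or 409 (concurrent update).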
    @mock.patch('nova.objects.flavor._flavor_extra_specs_add')
    def test_update_item(self, mock_add):
        mock_add.side_effect = return_create_flavor_extra_specs
        body = {"key1": "value1"}
        req = self._get_request('1/os-extra_specs/key1',
                                use_admin_context=True)
        res_dict = self.controller.update(req, 1, 'key1', body=body)
        self.assertEqual('value1', res_dict['key1'])

    def test_update_item_no_admin(self):
        body = {"key1": "value1"}
        req = self._get_request('1/os-extra_specs/key1')
        self.assertRaises(exception.Forbidden,
                          self.controller.update, req, 1, 'key1', body=body)

    def _test_update_item_bad_request(self, body):
        req = self._get_request('1/os-extra_specs/key1',
                                use_admin_context=True)
        self.assertRaises(self.bad_request, self.controller.update,
                          req, 1, 'key1', body=body)

    def test_update_item_empty_body(self):
        self._test_update_item_bad_request('')

    def test_update_item_too_many_keys(self):
        body = {"key1": "value1", "key2": "value2"}
        self._test_update_item_bad_request(body)

    def test_update_item_non_dict_extra_specs(self):
        self._test_update_item_bad_request("non_dict")

    def test_update_item_non_string_key(self):
        self._test_update_item_bad_request({None: "value1"})

    def test_update_item_non_string_value(self):
        self._test_update_item_bad_request({"key1": None})

    def test_update_item_zero_length_key(self):
        self._test_update_item_bad_request({"": "value1"})

    def test_update_item_long_key(self):
        key = "a" * 256
        self._test_update_item_bad_request({key: "value1"})

    def test_update_item_long_value(self):
        value = "a" * 256
        self._test_update_item_bad_request({"key1": value})

    def test_update_item_body_uri_mismatch(self):
        body = {"key1": "value1"}
        req = self._get_request('1/os-extra_specs/bad',
                                use_admin_context=True)
        self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update,
                          req, 1, 'bad', body=body)

    def test_update_flavor_not_found(self):
        body = {"key1": "value1"}
        req = self._get_request('1/os-extra_specs/key1',
                                use_admin_context=True)
        with mock.patch('nova.objects.Flavor.save',
                        side_effect=exception.FlavorNotFound(flavor_id='')):
            self.assertRaises(webob.exc.HTTPNotFound, self.controller.update,
                              req, 1, 'key1', body=body)

    def test_update_flavor_db_duplicate(self):
        body = {"key1": "value1"}
        req = self._get_request('1/os-extra_specs/key1',
                                use_admin_context=True)
        with mock.patch(
                'nova.objects.Flavor.save',
                side_effect=exception.FlavorExtraSpecUpdateCreateFailed(
                    id=1, retries=5)):
            self.assertRaises(webob.exc.HTTPConflict, self.controller.update,
                              req, 1, 'key1', body=body)

    def test_update_really_long_integer_value(self):
        value = 10 ** 1000
        req = self._get_request('1/os-extra_specs/key1',
                                use_admin_context=True)
        self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update,
                          req, 1, 'key1', body={"key1": value})
nova-17.0.1/nova/tests/unit/api/openstack/compute/test_evacuate.py0000666000175000017500000004723413250073126025306 0ustar zuulzuul00000000000000# Copyright 2013 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
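# For orientation (illustrative only, not part of the original suite): the
# tests below drive EvacuateController with request bodies of the form
#
#     {"evacuate": {"host": "<target-host>",
#                   "onSharedStorage": "False",
#                   "adminPass": "MyNewPass"}}
#
# where "onSharedStorage" disappears at microversion 2.14 and a "force"
# flag appears at 2.29.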
import mock import testtools import webob from nova.api.openstack.compute import evacuate as evacuate_v21 from nova.compute import api as compute_api from nova.compute import vm_states import nova.conf from nova import exception from nova import test from nova.tests.unit.api.openstack import fakes from nova.tests.unit import fake_instance from nova.tests import uuidsentinel as uuids CONF = nova.conf.CONF def fake_compute_api(*args, **kwargs): return True def fake_compute_api_get(self, context, instance_id, **kwargs): # BAD_UUID is something that does not exist if instance_id == 'BAD_UUID': raise exception.InstanceNotFound(instance_id=instance_id) else: return fake_instance.fake_instance_obj(context, id=1, uuid=instance_id, task_state=None, host='host1', vm_state=vm_states.ACTIVE) def fake_service_get_by_compute_host(self, context, host): if host == 'bad-host': raise exception.ComputeHostNotFound(host=host) elif host == 'unmapped-host': raise exception.HostMappingNotFound(name=host) else: return { 'host_name': host, 'service': 'compute', 'zone': 'nova' } class EvacuateTestV21(test.NoDBTestCase): validation_error = exception.ValidationError _methods = ('resize', 'evacuate') def setUp(self): super(EvacuateTestV21, self).setUp() self.stub_out('nova.compute.api.API.get', fake_compute_api_get) self.stub_out('nova.compute.api.HostAPI.service_get_by_compute_host', fake_service_get_by_compute_host) self.UUID = uuids.fake for _method in self._methods: self.stub_out('nova.compute.api.API.%s' % _method, fake_compute_api) self._set_up_controller() self.admin_req = fakes.HTTPRequest.blank('', use_admin_context=True) self.req = fakes.HTTPRequest.blank('') def _set_up_controller(self): self.controller = evacuate_v21.EvacuateController() self.controller_no_ext = self.controller def _get_evacuate_response(self, json_load, uuid=None): base_json_load = {'evacuate': json_load} response = self.controller._evacuate(self.admin_req, uuid or self.UUID, body=base_json_load) return response def _check_evacuate_failure(self, exception, body, uuid=None, controller=None): controller = controller or self.controller body = {'evacuate': body} self.assertRaises(exception, controller._evacuate, self.admin_req, uuid or self.UUID, body=body) def test_evacuate_with_valid_instance(self): admin_pass = 'MyNewPass' res = self._get_evacuate_response({'host': 'my-host', 'onSharedStorage': 'False', 'adminPass': admin_pass}) self.assertEqual(admin_pass, res['adminPass']) def test_evacuate_with_invalid_instance(self): self._check_evacuate_failure(webob.exc.HTTPNotFound, {'host': 'my-host', 'onSharedStorage': 'False', 'adminPass': 'MyNewPass'}, uuid='BAD_UUID') def test_evacuate_with_active_service(self): def fake_evacuate(*args, **kwargs): raise exception.ComputeServiceInUse("Service still in use") self.stub_out('nova.compute.api.API.evacuate', fake_evacuate) self._check_evacuate_failure(webob.exc.HTTPBadRequest, {'host': 'my-host', 'onSharedStorage': 'False', 'adminPass': 'MyNewPass'}) def test_evacuate_instance_with_no_target(self): admin_pass = 'MyNewPass' res = self._get_evacuate_response({'onSharedStorage': 'False', 'adminPass': admin_pass}) self.assertEqual(admin_pass, res['adminPass']) def test_evacuate_instance_without_on_shared_storage(self): self._check_evacuate_failure(self.validation_error, {'host': 'my-host', 'adminPass': 'MyNewPass'}) def test_evacuate_instance_with_invalid_characters_host(self): host = 'abc!#' self._check_evacuate_failure(self.validation_error, {'host': host, 'onSharedStorage': 'False', 'adminPass': 
'MyNewPass'}) def test_evacuate_instance_with_too_long_host(self): host = 'a' * 256 self._check_evacuate_failure(self.validation_error, {'host': host, 'onSharedStorage': 'False', 'adminPass': 'MyNewPass'}) def test_evacuate_instance_with_invalid_on_shared_storage(self): self._check_evacuate_failure(self.validation_error, {'host': 'my-host', 'onSharedStorage': 'foo', 'adminPass': 'MyNewPass'}) def test_evacuate_instance_with_bad_target(self): self._check_evacuate_failure(webob.exc.HTTPNotFound, {'host': 'bad-host', 'onSharedStorage': 'False', 'adminPass': 'MyNewPass'}) def test_evacuate_instance_with_unmapped_target(self): self._check_evacuate_failure(webob.exc.HTTPNotFound, {'host': 'unmapped-host', 'onSharedStorage': 'False', 'adminPass': 'MyNewPass'}) def test_evacuate_instance_with_target(self): admin_pass = 'MyNewPass' res = self._get_evacuate_response({'host': 'my-host', 'onSharedStorage': 'False', 'adminPass': admin_pass}) self.assertEqual(admin_pass, res['adminPass']) @mock.patch('nova.objects.Instance.save') def test_evacuate_shared_and_pass(self, mock_save): self._check_evacuate_failure(webob.exc.HTTPBadRequest, {'host': 'bad-host', 'onSharedStorage': 'True', 'adminPass': 'MyNewPass'}) @mock.patch('nova.objects.Instance.save') def test_evacuate_not_shared_pass_generated(self, mock_save): res = self._get_evacuate_response({'host': 'my-host', 'onSharedStorage': 'False'}) self.assertEqual(CONF.password_length, len(res['adminPass'])) @mock.patch('nova.objects.Instance.save') def test_evacuate_shared(self, mock_save): self._get_evacuate_response({'host': 'my-host', 'onSharedStorage': 'True'}) def test_not_admin(self): body = {'evacuate': {'host': 'my-host', 'onSharedStorage': 'False'}} self.assertRaises(exception.PolicyNotAuthorized, self.controller._evacuate, self.req, self.UUID, body=body) def test_evacuate_to_same_host(self): self._check_evacuate_failure(webob.exc.HTTPBadRequest, {'host': 'host1', 'onSharedStorage': 'False', 'adminPass': 'MyNewPass'}) def test_evacuate_instance_with_empty_host(self): self._check_evacuate_failure(self.validation_error, {'host': '', 'onSharedStorage': 'False', 'adminPass': 'MyNewPass'}, controller=self.controller_no_ext) @mock.patch('nova.objects.Instance.save') def test_evacuate_instance_with_underscore_in_hostname(self, mock_save): admin_pass = 'MyNewPass' # NOTE: The hostname grammar in RFC952 does not allow for # underscores in hostnames. However, we should test that it # is supported because it sometimes occurs in real systems. 
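        # (RFC 1123 later relaxed parts of the RFC 952 grammar, but
        # underscores remain technically invalid in hostnames; they are
        # accepted here purely for compatibility with such systems.)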
        res = self._get_evacuate_response({'host': 'underscore_hostname',
                                           'onSharedStorage': 'False',
                                           'adminPass': admin_pass})
        self.assertEqual(admin_pass, res['adminPass'])

    def test_evacuate_disable_password_return(self):
        self._test_evacuate_enable_instance_password_conf(enable_pass=False)

    def test_evacuate_enable_password_return(self):
        self._test_evacuate_enable_instance_password_conf(enable_pass=True)

    @mock.patch('nova.objects.Instance.save')
    def _test_evacuate_enable_instance_password_conf(self, mock_save,
                                                     enable_pass):
        self.flags(enable_instance_password=enable_pass, group='api')

        res = self._get_evacuate_response({'host': 'underscore_hostname',
                                           'onSharedStorage': 'False'})
        if enable_pass:
            self.assertIn('adminPass', res)
        else:
            self.assertIsNone(res)


class EvacuatePolicyEnforcementv21(test.NoDBTestCase):

    def setUp(self):
        super(EvacuatePolicyEnforcementv21, self).setUp()
        self.controller = evacuate_v21.EvacuateController()
        self.req = fakes.HTTPRequest.blank('')
        req_context = self.req.environ['nova.context']
        self.stub_out('nova.compute.api.HostAPI.service_get_by_compute_host',
                      fake_service_get_by_compute_host)

        def fake_get_instance(self, context, id):
            return fake_instance.fake_instance_obj(
                req_context,
                project_id=req_context.project_id,
                user_id=req_context.user_id)

        self.stub_out('nova.api.openstack.common.get_instance',
                      fake_get_instance)

    def test_evacuate_policy_failed_with_other_project(self):
        rule_name = "os_compute_api:os-evacuate"
        self.policy.set_rules({rule_name: "project_id:%(project_id)s"})
        req = fakes.HTTPRequest.blank('')
        # Change the project_id in request context.
        req.environ['nova.context'].project_id = 'other-project'
        body = {'evacuate': {'host': 'my-host',
                             'onSharedStorage': 'False',
                             'adminPass': 'MyNewPass'
                             }}
        exc = self.assertRaises(
            exception.PolicyNotAuthorized,
            self.controller._evacuate, req, fakes.FAKE_UUID, body=body)
        self.assertEqual(
            "Policy doesn't allow %s to be performed." % rule_name,
            exc.format_message())

    @mock.patch('nova.compute.api.API.evacuate')
    def test_evacuate_overridden_policy_pass_with_same_project(
            self, evacuate_mock):
        rule_name = "os_compute_api:os-evacuate"
        self.policy.set_rules({rule_name: "project_id:%(project_id)s"})
        body = {'evacuate': {'host': 'my-host',
                             'onSharedStorage': 'False',
                             'adminPass': 'MyNewPass'
                             }}
        self.controller._evacuate(self.req, fakes.FAKE_UUID, body=body)
        evacuate_mock.assert_called_once_with(
            self.req.environ['nova.context'], mock.ANY, 'my-host', False,
            'MyNewPass', None)

    def test_evacuate_overridden_policy_failed_with_other_user(self):
        rule_name = "os_compute_api:os-evacuate"
        self.policy.set_rules({rule_name: "user_id:%(user_id)s"})
        req = fakes.HTTPRequest.blank('')
        # Change the user_id in request context.
        req.environ['nova.context'].user_id = 'other-user'
        body = {'evacuate': {'host': 'my-host',
                             'onSharedStorage': 'False',
                             'adminPass': 'MyNewPass'
                             }}
        exc = self.assertRaises(exception.PolicyNotAuthorized,
                                self.controller._evacuate, req,
                                fakes.FAKE_UUID, body=body)
        self.assertEqual(
            "Policy doesn't allow %s to be performed." % rule_name,
            exc.format_message())

    @mock.patch('nova.compute.api.API.evacuate')
    def test_evacuate_overridden_policy_pass_with_same_user(self,
                                                            evacuate_mock):
        rule_name = "os_compute_api:os-evacuate"
        self.policy.set_rules({rule_name: "user_id:%(user_id)s"})
        body = {'evacuate': {'host': 'my-host',
                             'onSharedStorage': 'False',
                             'adminPass': 'MyNewPass'
                             }}
        self.controller._evacuate(self.req, fakes.FAKE_UUID, body=body)
        evacuate_mock.assert_called_once_with(
            self.req.environ['nova.context'], mock.ANY, 'my-host', False,
            'MyNewPass', None)


class EvacuateTestV214(EvacuateTestV21):

    def setUp(self):
        super(EvacuateTestV214, self).setUp()
        self.admin_req = fakes.HTTPRequest.blank('', use_admin_context=True,
                                                 version='2.14')
        self.req = fakes.HTTPRequest.blank('', version='2.14')

    def _get_evacuate_response(self, json_load, uuid=None):
        json_load.pop('onSharedStorage', None)
        base_json_load = {'evacuate': json_load}
        response = self.controller._evacuate(self.admin_req,
                                             uuid or self.UUID,
                                             body=base_json_load)
        return response

    def _check_evacuate_failure(self, exception, body, uuid=None,
                                controller=None):
        controller = controller or self.controller
        body.pop('onSharedStorage', None)
        body = {'evacuate': body}
        self.assertRaises(exception,
                          controller._evacuate,
                          self.admin_req, uuid or self.UUID, body=body)

    @mock.patch.object(compute_api.API, 'evacuate')
    def test_evacuate_instance(self, mock_evacuate):
        self._get_evacuate_response({})
        admin_pass = mock_evacuate.call_args_list[0][0][4]
        on_shared_storage = mock_evacuate.call_args_list[0][0][3]
        self.assertEqual(CONF.password_length, len(admin_pass))
        self.assertIsNone(on_shared_storage)

    def test_evacuate_with_valid_instance(self):
        admin_pass = 'MyNewPass'
        res = self._get_evacuate_response({'host': 'my-host',
                                           'adminPass': admin_pass})
        self.assertIsNone(res)

    @testtools.skip('Password is not returned from Microversion 2.14')
    def test_evacuate_disable_password_return(self):
        pass

    @testtools.skip('Password is not returned from Microversion 2.14')
    def test_evacuate_enable_password_return(self):
        pass

    @testtools.skip('onSharedStorage was removed from Microversion 2.14')
    def test_evacuate_instance_with_invalid_on_shared_storage(self):
        pass

    @testtools.skip('onSharedStorage was removed from Microversion 2.14')
    @mock.patch('nova.objects.Instance.save')
    def test_evacuate_not_shared_pass_generated(self, mock_save):
        pass

    @mock.patch.object(compute_api.API, 'evacuate')
    @mock.patch('nova.objects.Instance.save')
    def test_evacuate_pass_generated(self, mock_save, mock_evacuate):
        self._get_evacuate_response({'host': 'my-host'})
        self.assertEqual(CONF.password_length,
                         len(mock_evacuate.call_args_list[0][0][4]))

    def test_evacuate_instance_without_on_shared_storage(self):
        self._get_evacuate_response({'host': 'my-host',
                                     'adminPass': 'MyNewPass'})

    def test_evacuate_instance_with_no_target(self):
        admin_pass = 'MyNewPass'
        with mock.patch.object(compute_api.API, 'evacuate') as mock_evacuate:
            self._get_evacuate_response({'adminPass': admin_pass})
            self.assertEqual(admin_pass,
                             mock_evacuate.call_args_list[0][0][4])

    def test_not_admin(self):
        body = {'evacuate': {'host': 'my-host'}}
        self.assertRaises(exception.PolicyNotAuthorized,
                          self.controller._evacuate,
                          self.req, self.UUID, body=body)

    @testtools.skip('onSharedStorage was removed from Microversion 2.14')
    @mock.patch('nova.objects.Instance.save')
    def test_evacuate_shared_and_pass(self, mock_save):
        pass

    @testtools.skip('from Microversion 2.14 it is covered with '
                    'test_evacuate_pass_generated')
    def test_evacuate_instance_with_target(self):
        pass

    @mock.patch('nova.objects.Instance.save')
    def test_evacuate_instance_with_underscore_in_hostname(self, mock_save):
        # NOTE: The hostname grammar in RFC952 does not allow for
        # underscores in hostnames. However, we should test that it
        # is supported because it sometimes occurs in real systems.
        self._get_evacuate_response({'host': 'underscore_hostname'})


class EvacuateTestV229(EvacuateTestV214):

    def setUp(self):
        super(EvacuateTestV229, self).setUp()
        self.admin_req = fakes.HTTPRequest.blank('', use_admin_context=True,
                                                 version='2.29')
        self.req = fakes.HTTPRequest.blank('', version='2.29')

    @mock.patch.object(compute_api.API, 'evacuate')
    def test_evacuate_instance(self, mock_evacuate):
        self._get_evacuate_response({})
        admin_pass = mock_evacuate.call_args_list[0][0][4]
        on_shared_storage = mock_evacuate.call_args_list[0][0][3]
        force = mock_evacuate.call_args_list[0][0][5]
        self.assertEqual(CONF.password_length, len(admin_pass))
        self.assertIsNone(on_shared_storage)
        self.assertFalse(force)

    def test_evacuate_with_valid_instance(self):
        admin_pass = 'MyNewPass'
        res = self._get_evacuate_response({'host': 'my-host',
                                           'adminPass': admin_pass,
                                           'force': 'false'})
        self.assertIsNone(res)

    @mock.patch.object(compute_api.API, 'evacuate')
    def test_evacuate_instance_with_forced_host(self, mock_evacuate):
        self._get_evacuate_response({'host': 'my-host',
                                     'force': 'true'})
        force = mock_evacuate.call_args_list[0][0][5]
        self.assertTrue(force)

    def test_forced_evacuate_with_no_host_provided(self):
        self._check_evacuate_failure(webob.exc.HTTPBadRequest,
                                     {'force': 'true'})
nova-17.0.1/nova/tests/unit/api/openstack/compute/test_disk_config.py0000666000175000017500000004441513250073126025766 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
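# For orientation (illustrative only, not part of the original suite): the
# OS-DCF:diskConfig attribute exercised below takes one of two values,
#
#     {"server": {..., "OS-DCF:diskConfig": "AUTO"}}    # grow root partition
#     {"server": {..., "OS-DCF:diskConfig": "MANUAL"}}  # leave disk as-is
#
# and may be supplied on server create, update, rebuild and resize.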
import datetime import mock from oslo_serialization import jsonutils import six from nova.api.openstack import compute from nova.compute import api as compute_api from nova.compute import flavors from nova import test from nova.tests.unit.api.openstack import fakes from nova.tests.unit import fake_instance import nova.tests.unit.image.fake MANUAL_INSTANCE_UUID = fakes.FAKE_UUID AUTO_INSTANCE_UUID = fakes.FAKE_UUID.replace('a', 'b') stub_instance = fakes.stub_instance API_DISK_CONFIG = 'OS-DCF:diskConfig' def instance_addresses(context, instance_id): return None class DiskConfigTestCaseV21(test.TestCase): def setUp(self): super(DiskConfigTestCaseV21, self).setUp() fakes.stub_out_nw_api(self) fakes.stub_out_secgroup_api(self) self._set_up_app() self._setup_fake_image_service() FAKE_INSTANCES = [ fakes.stub_instance(1, uuid=MANUAL_INSTANCE_UUID, auto_disk_config=False), fakes.stub_instance(2, uuid=AUTO_INSTANCE_UUID, auto_disk_config=True) ] def fake_instance_get(context, id_): for instance in FAKE_INSTANCES: if id_ == instance['id']: return instance self.stub_out('nova.db.instance_get', fake_instance_get) def fake_instance_get_by_uuid(context, uuid, columns_to_join=None, use_slave=False): for instance in FAKE_INSTANCES: if uuid == instance['uuid']: return instance self.stub_out('nova.db.instance_get_by_uuid', fake_instance_get_by_uuid) def fake_instance_get_all(context, *args, **kwargs): return FAKE_INSTANCES self.stub_out('nova.db.instance_get_all', fake_instance_get_all) self.stub_out('nova.db.instance_get_all_by_filters', fake_instance_get_all) self.stub_out('nova.objects.Instance.save', lambda *args, **kwargs: None) def fake_rebuild(*args, **kwargs): pass self.stub_out('nova.compute.api.API.rebuild', fake_rebuild) def fake_instance_create(context, inst_, session=None): inst = fake_instance.fake_db_instance(**{ 'id': 1, 'uuid': AUTO_INSTANCE_UUID, 'created_at': datetime.datetime(2010, 10, 10, 12, 0, 0), 'updated_at': datetime.datetime(2010, 10, 10, 12, 0, 0), 'progress': 0, 'name': 'instance-1', # this is a property 'task_state': '', 'vm_state': '', 'auto_disk_config': inst_['auto_disk_config'], 'security_groups': inst_['security_groups'], 'instance_type': flavors.get_default_flavor(), }) def fake_instance_get_for_create(context, id_, *args, **kwargs): return (inst, inst) self.stub_out('nova.db.instance_update_and_get_original', fake_instance_get_for_create) def fake_instance_get_all_for_create(context, *args, **kwargs): return [inst] self.stub_out('nova.db.instance_get_all', fake_instance_get_all_for_create) self.stub_out('nova.db.instance_get_all_by_filters', fake_instance_get_all_for_create) def fake_instance_add_security_group(context, instance_id, security_group_id): pass self.stub_out('nova.db.instance_add_security_group', fake_instance_add_security_group) return inst self.stub_out('nova.db.instance_create', fake_instance_create) def _set_up_app(self): self.app = compute.APIRouterV21() def _get_expected_msg_for_invalid_disk_config(self): if six.PY3: return ('{{"badRequest": {{"message": "Invalid input for' ' field/attribute {0}. Value: {1}. \'{1}\' is' ' not one of [\'AUTO\', \'MANUAL\']", "code": 400}}}}') else: return ('{{"badRequest": {{"message": "Invalid input for' ' field/attribute {0}. Value: {1}. 
u\'{1}\' is' ' not one of [\'AUTO\', \'MANUAL\']", "code": 400}}}}') def _setup_fake_image_service(self): self.image_service = nova.tests.unit.image.fake.stub_out_image_service( self) timestamp = datetime.datetime(2011, 1, 1, 1, 2, 3) image = {'id': '88580842-f50a-11e2-8d3a-f23c91aec05e', 'name': 'fakeimage7', 'created_at': timestamp, 'updated_at': timestamp, 'deleted_at': None, 'deleted': False, 'status': 'active', 'is_public': False, 'container_format': 'ova', 'disk_format': 'vhd', 'size': '74185822', 'properties': {'auto_disk_config': 'Disabled'}} self.image_service.create(None, image) def tearDown(self): super(DiskConfigTestCaseV21, self).tearDown() nova.tests.unit.image.fake.FakeImageService_reset() def assertDiskConfig(self, dict_, value): self.assertIn(API_DISK_CONFIG, dict_) self.assertEqual(dict_[API_DISK_CONFIG], value) def test_show_server(self): req = fakes.HTTPRequest.blank( '/fake/servers/%s' % MANUAL_INSTANCE_UUID) res = req.get_response(self.app) server_dict = jsonutils.loads(res.body)['server'] self.assertDiskConfig(server_dict, 'MANUAL') req = fakes.HTTPRequest.blank( '/fake/servers/%s' % AUTO_INSTANCE_UUID) res = req.get_response(self.app) server_dict = jsonutils.loads(res.body)['server'] self.assertDiskConfig(server_dict, 'AUTO') def test_detail_servers(self): req = fakes.HTTPRequest.blank('/fake/servers/detail') res = req.get_response(self.app) server_dicts = jsonutils.loads(res.body)['servers'] expectations = ['MANUAL', 'AUTO'] for server_dict, expected in zip(server_dicts, expectations): self.assertDiskConfig(server_dict, expected) def test_show_image(self): self.flags(group='glance', api_servers=['http://localhost:9292']) req = fakes.HTTPRequest.blank( '/fake/images/a440c04b-79fa-479c-bed1-0b816eaec379') res = req.get_response(self.app) image_dict = jsonutils.loads(res.body)['image'] self.assertDiskConfig(image_dict, 'MANUAL') req = fakes.HTTPRequest.blank( '/fake/images/70a599e0-31e7-49b7-b260-868f441e862b') res = req.get_response(self.app) image_dict = jsonutils.loads(res.body)['image'] self.assertDiskConfig(image_dict, 'AUTO') def test_detail_image(self): req = fakes.HTTPRequest.blank('/fake/images/detail') res = req.get_response(self.app) image_dicts = jsonutils.loads(res.body)['images'] expectations = ['MANUAL', 'AUTO'] for image_dict, expected in zip(image_dicts, expectations): # NOTE(sirp): image fixtures 6 and 7 are setup for # auto_disk_config testing if image_dict['id'] in (6, 7): self.assertDiskConfig(image_dict, expected) def test_create_server_override_auto(self): req = fakes.HTTPRequest.blank('/fake/servers') req.method = 'POST' req.content_type = 'application/json' body = {'server': { 'name': 'server_test', 'imageRef': 'cedef40a-ed67-4d10-800e-17455edce175', 'flavorRef': '1', API_DISK_CONFIG: 'AUTO' }} req.body = jsonutils.dump_as_bytes(body) res = req.get_response(self.app) server_dict = jsonutils.loads(res.body)['server'] self.assertDiskConfig(server_dict, 'AUTO') def test_create_server_override_manual(self): req = fakes.HTTPRequest.blank('/fake/servers') req.method = 'POST' req.content_type = 'application/json' body = {'server': { 'name': 'server_test', 'imageRef': 'cedef40a-ed67-4d10-800e-17455edce175', 'flavorRef': '1', API_DISK_CONFIG: 'MANUAL' }} req.body = jsonutils.dump_as_bytes(body) res = req.get_response(self.app) server_dict = jsonutils.loads(res.body)['server'] self.assertDiskConfig(server_dict, 'MANUAL') def test_create_server_detect_from_image(self): """If user doesn't pass in diskConfig for server, use image metadata to specify AUTO 
or MANUAL. """ req = fakes.HTTPRequest.blank('/fake/servers') req.method = 'POST' req.content_type = 'application/json' body = {'server': { 'name': 'server_test', 'imageRef': 'a440c04b-79fa-479c-bed1-0b816eaec379', 'flavorRef': '1', }} req.body = jsonutils.dump_as_bytes(body) res = req.get_response(self.app) server_dict = jsonutils.loads(res.body)['server'] self.assertDiskConfig(server_dict, 'MANUAL') req = fakes.HTTPRequest.blank('/fake/servers') req.method = 'POST' req.content_type = 'application/json' body = {'server': { 'name': 'server_test', 'imageRef': '70a599e0-31e7-49b7-b260-868f441e862b', 'flavorRef': '1', }} req.body = jsonutils.dump_as_bytes(body) res = req.get_response(self.app) server_dict = jsonutils.loads(res.body)['server'] self.assertDiskConfig(server_dict, 'AUTO') def test_create_server_detect_from_image_disabled_goes_to_manual(self): req = fakes.HTTPRequest.blank('/fake/servers') req.method = 'POST' req.content_type = 'application/json' body = {'server': { 'name': 'server_test', 'imageRef': '88580842-f50a-11e2-8d3a-f23c91aec05e', 'flavorRef': '1', }} req.body = jsonutils.dump_as_bytes(body) res = req.get_response(self.app) server_dict = jsonutils.loads(res.body)['server'] self.assertDiskConfig(server_dict, 'MANUAL') def test_create_server_errors_when_disabled_and_auto(self): req = fakes.HTTPRequest.blank('/fake/servers') req.method = 'POST' req.content_type = 'application/json' body = {'server': { 'name': 'server_test', 'imageRef': '88580842-f50a-11e2-8d3a-f23c91aec05e', 'flavorRef': '1', API_DISK_CONFIG: 'AUTO' }} req.body = jsonutils.dump_as_bytes(body) res = req.get_response(self.app) self.assertEqual(res.status_int, 400) def test_create_server_when_disabled_and_manual(self): req = fakes.HTTPRequest.blank('/fake/servers') req.method = 'POST' req.content_type = 'application/json' body = {'server': { 'name': 'server_test', 'imageRef': '88580842-f50a-11e2-8d3a-f23c91aec05e', 'flavorRef': '1', API_DISK_CONFIG: 'MANUAL' }} req.body = jsonutils.dump_as_bytes(body) res = req.get_response(self.app) server_dict = jsonutils.loads(res.body)['server'] self.assertDiskConfig(server_dict, 'MANUAL') @mock.patch('nova.api.openstack.common.get_instance') def _test_update_server_disk_config(self, uuid, disk_config, get_instance_mock): req = fakes.HTTPRequest.blank( '/fake/servers/%s' % uuid) req.method = 'PUT' req.content_type = 'application/json' body = {'server': {API_DISK_CONFIG: disk_config}} req.body = jsonutils.dump_as_bytes(body) auto_disk_config = (disk_config == 'AUTO') instance = fakes.stub_instance_obj( req.environ['nova.context'], project_id=req.environ['nova.context'].project_id, user_id=req.environ['nova.context'].user_id, auto_disk_config=auto_disk_config) get_instance_mock.return_value = instance res = req.get_response(self.app) server_dict = jsonutils.loads(res.body)['server'] self.assertDiskConfig(server_dict, disk_config) def test_update_server_override_auto(self): self._test_update_server_disk_config(AUTO_INSTANCE_UUID, 'AUTO') def test_update_server_override_manual(self): self._test_update_server_disk_config(MANUAL_INSTANCE_UUID, 'MANUAL') def test_update_server_invalid_disk_config(self): # Return BadRequest if user passes an invalid diskConfig value. 
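        # (The expected error text differs between py2 and py3 only in the
        # u'' repr prefix on the rejected value; see
        # _get_expected_msg_for_invalid_disk_config above.)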
req = fakes.HTTPRequest.blank( '/fake/servers/%s' % MANUAL_INSTANCE_UUID) req.method = 'PUT' req.content_type = 'application/json' body = {'server': {API_DISK_CONFIG: 'server_test'}} req.body = jsonutils.dump_as_bytes(body) res = req.get_response(self.app) self.assertEqual(res.status_int, 400) expected_msg = self._get_expected_msg_for_invalid_disk_config() expected_msg = expected_msg.format(API_DISK_CONFIG, 'server_test') self.assertJsonEqual(jsonutils.loads(expected_msg), jsonutils.loads(res.body)) @mock.patch('nova.api.openstack.common.get_instance') def _test_rebuild_server_disk_config(self, uuid, disk_config, get_instance_mock): req = fakes.HTTPRequest.blank( '/fake/servers/%s/action' % uuid) req.method = 'POST' req.content_type = 'application/json' auto_disk_config = (disk_config == 'AUTO') instance = fakes.stub_instance_obj( req.environ['nova.context'], project_id=req.environ['nova.context'].project_id, user_id=req.environ['nova.context'].user_id, auto_disk_config=auto_disk_config) get_instance_mock.return_value = instance body = {"rebuild": { 'imageRef': 'cedef40a-ed67-4d10-800e-17455edce175', API_DISK_CONFIG: disk_config }} req.body = jsonutils.dump_as_bytes(body) res = req.get_response(self.app) server_dict = jsonutils.loads(res.body)['server'] self.assertDiskConfig(server_dict, disk_config) def test_rebuild_server_override_auto(self): self._test_rebuild_server_disk_config(AUTO_INSTANCE_UUID, 'AUTO') def test_rebuild_server_override_manual(self): self._test_rebuild_server_disk_config(MANUAL_INSTANCE_UUID, 'MANUAL') def test_create_server_with_auto_disk_config(self): req = fakes.HTTPRequest.blank('/fake/servers') req.method = 'POST' req.content_type = 'application/json' body = {'server': { 'name': 'server_test', 'imageRef': 'cedef40a-ed67-4d10-800e-17455edce175', 'flavorRef': '1', API_DISK_CONFIG: 'AUTO' }} old_create = compute_api.API.create def create(*args, **kwargs): self.assertIn('auto_disk_config', kwargs) self.assertTrue(kwargs['auto_disk_config']) return old_create(*args, **kwargs) self.stub_out('nova.compute.api.API.create', create) req.body = jsonutils.dump_as_bytes(body) res = req.get_response(self.app) server_dict = jsonutils.loads(res.body)['server'] self.assertDiskConfig(server_dict, 'AUTO') @mock.patch('nova.api.openstack.common.get_instance') def test_rebuild_server_with_auto_disk_config(self, get_instance_mock): req = fakes.HTTPRequest.blank( '/fake/servers/%s/action' % AUTO_INSTANCE_UUID) req.method = 'POST' req.content_type = 'application/json' instance = fakes.stub_instance_obj( req.environ['nova.context'], project_id=req.environ['nova.context'].project_id, user_id=req.environ['nova.context'].user_id, auto_disk_config=True) get_instance_mock.return_value = instance body = {"rebuild": { 'imageRef': 'cedef40a-ed67-4d10-800e-17455edce175', API_DISK_CONFIG: 'AUTO' }} def rebuild(*args, **kwargs): self.assertIn('auto_disk_config', kwargs) self.assertTrue(kwargs['auto_disk_config']) self.stub_out('nova.compute.api.API.rebuild', rebuild) req.body = jsonutils.dump_as_bytes(body) res = req.get_response(self.app) server_dict = jsonutils.loads(res.body)['server'] self.assertDiskConfig(server_dict, 'AUTO') def test_resize_server_with_auto_disk_config(self): req = fakes.HTTPRequest.blank( '/fake/servers/%s/action' % AUTO_INSTANCE_UUID) req.method = 'POST' req.content_type = 'application/json' body = {"resize": { "flavorRef": "3", API_DISK_CONFIG: 'AUTO' }} def resize(*args, **kwargs): self.assertIn('auto_disk_config', kwargs) self.assertTrue(kwargs['auto_disk_config']) 
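        # As with the rebuild case above, the compute API must receive
        # auto_disk_config=True when the action body asks for AUTO.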
self.stub_out('nova.compute.api.API.resize', resize) req.body = jsonutils.dump_as_bytes(body) req.get_response(self.app) nova-17.0.1/nova/tests/unit/api/openstack/compute/test_quota_classes.py0000666000175000017500000001745313250073126026357 0ustar zuulzuul00000000000000# Copyright 2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy import webob from nova.api.openstack.compute import quota_classes \ as quota_classes_v21 from nova import exception from nova import test from nova.tests.unit.api.openstack import fakes class QuotaClassSetsTestV21(test.TestCase): validation_error = exception.ValidationError api_version = '2.1' quota_resources = {'metadata_items': 128, 'ram': 51200, 'floating_ips': 10, 'fixed_ips': -1, 'instances': 10, 'injected_files': 5, 'cores': 20, 'injected_file_content_bytes': 10240, 'security_groups': 10, 'security_group_rules': 20, 'key_pairs': 100, 'injected_file_path_bytes': 255} filtered_quotas = None def quota_set(self, class_name): quotas = copy.deepcopy(self.quota_resources) quotas['id'] = class_name return {'quota_class_set': quotas} def setUp(self): super(QuotaClassSetsTestV21, self).setUp() self.req = fakes.HTTPRequest.blank('', version=self.api_version) self._setup() def _setup(self): self.controller = quota_classes_v21.QuotaClassSetsController() def _check_filtered_extended_quota(self, quota_set): self.assertNotIn('server_groups', quota_set) self.assertNotIn('server_group_members', quota_set) self.assertEqual(10, quota_set['floating_ips']) self.assertEqual(-1, quota_set['fixed_ips']) self.assertEqual(10, quota_set['security_groups']) self.assertEqual(20, quota_set['security_group_rules']) def test_format_quota_set(self): quota_set = self.controller._format_quota_set('test_class', self.quota_resources, self.filtered_quotas) qs = quota_set['quota_class_set'] self.assertEqual(qs['id'], 'test_class') for resource, value in self.quota_resources.items(): self.assertEqual(value, qs[resource]) if self.filtered_quotas: for resource in self.filtered_quotas: self.assertNotIn(resource, qs) self._check_filtered_extended_quota(qs) def test_quotas_show(self): res_dict = self.controller.show(self.req, 'test_class') self.assertEqual(res_dict, self.quota_set('test_class')) def test_quotas_update(self): expected_body = {'quota_class_set': self.quota_resources} request_quota_resources = copy.deepcopy(self.quota_resources) request_quota_resources['server_groups'] = 10 request_quota_resources['server_group_members'] = 10 request_body = {'quota_class_set': request_quota_resources} res_dict = self.controller.update(self.req, 'test_class', body=request_body) self.assertEqual(res_dict, expected_body) def test_quotas_update_with_empty_body(self): body = {} self.assertRaises(self.validation_error, self.controller.update, self.req, 'test_class', body=body) def test_quotas_update_with_invalid_integer(self): body = {'quota_class_set': {'instances': 2 ** 31 + 1}} self.assertRaises(self.validation_error, self.controller.update, self.req, 'test_class', 
body=body) def test_quotas_update_with_long_quota_class_name(self): name = 'a' * 256 body = {'quota_class_set': {'instances': 10}} self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update, self.req, name, body=body) def test_quotas_update_with_non_integer(self): body = {'quota_class_set': {'instances': "abc"}} self.assertRaises(self.validation_error, self.controller.update, self.req, 'test_class', body=body) body = {'quota_class_set': {'instances': 50.5}} self.assertRaises(self.validation_error, self.controller.update, self.req, 'test_class', body=body) body = {'quota_class_set': { 'instances': u'\u30aa\u30fc\u30d7\u30f3'}} self.assertRaises(self.validation_error, self.controller.update, self.req, 'test_class', body=body) def test_quotas_update_with_unsupported_quota_class(self): body = {'quota_class_set': {'instances': 50, 'cores': 50, 'ram': 51200, 'unsupported': 12}} self.assertRaises(self.validation_error, self.controller.update, self.req, 'test_class', body=body) class QuotaClassSetsTestV250(QuotaClassSetsTestV21): api_version = '2.50' quota_resources = {'metadata_items': 128, 'ram': 51200, 'instances': 10, 'injected_files': 5, 'cores': 20, 'injected_file_content_bytes': 10240, 'key_pairs': 100, 'injected_file_path_bytes': 255, 'server_groups': 10, 'server_group_members': 10} filtered_quotas = quota_classes_v21.FILTERED_QUOTAS_2_50 def _check_filtered_extended_quota(self, quota_set): self.assertEqual(10, quota_set['server_groups']) self.assertEqual(10, quota_set['server_group_members']) for resource in self.filtered_quotas: self.assertNotIn(resource, quota_set) def test_quotas_update_with_filtered_quota(self): for resource in self.filtered_quotas: body = {'quota_class_set': {resource: 10}} self.assertRaises(self.validation_error, self.controller.update, self.req, 'test_class', body=body) class QuotaClassSetsTestV257(QuotaClassSetsTestV250): api_version = '2.57' def setUp(self): super(QuotaClassSetsTestV257, self).setUp() for resource in quota_classes_v21.FILTERED_QUOTAS_2_57: self.quota_resources.pop(resource, None) self.filtered_quotas.extend(quota_classes_v21.FILTERED_QUOTAS_2_57) class QuotaClassesPolicyEnforcementV21(test.NoDBTestCase): def setUp(self): super(QuotaClassesPolicyEnforcementV21, self).setUp() self.controller = quota_classes_v21.QuotaClassSetsController() self.req = fakes.HTTPRequest.blank('') def test_show_policy_failed(self): rule_name = "os_compute_api:os-quota-class-sets:show" self.policy.set_rules({rule_name: "quota_class:non_fake"}) exc = self.assertRaises( exception.PolicyNotAuthorized, self.controller.show, self.req, fakes.FAKE_UUID) self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) def test_update_policy_failed(self): rule_name = "os_compute_api:os-quota-class-sets:update" self.policy.set_rules({rule_name: "quota_class:non_fake"}) exc = self.assertRaises( exception.PolicyNotAuthorized, self.controller.update, self.req, fakes.FAKE_UUID, body={'quota_class_set': {}}) self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) nova-17.0.1/nova/tests/unit/api/openstack/compute/test_agents.py0000666000175000017500000004551613250073126024773 0ustar zuulzuul00000000000000# Copyright 2012 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock import webob.exc from nova.api.openstack.compute import agents as agents_v21 from nova import db from nova.db.sqlalchemy import models from nova import exception from nova import test from nova.tests.unit.api.openstack import fakes fake_agents_list = [{'hypervisor': 'kvm', 'os': 'win', 'architecture': 'x86', 'version': '7.0', 'url': 'http://example.com/path/to/resource', 'md5hash': 'add6bb58e139be103324d04d82d8f545', 'id': 1}, {'hypervisor': 'kvm', 'os': 'linux', 'architecture': 'x86', 'version': '16.0', 'url': 'http://example.com/path/to/resource1', 'md5hash': 'add6bb58e139be103324d04d82d8f546', 'id': 2}, {'hypervisor': 'xen', 'os': 'linux', 'architecture': 'x86', 'version': '16.0', 'url': 'http://example.com/path/to/resource2', 'md5hash': 'add6bb58e139be103324d04d82d8f547', 'id': 3}, {'hypervisor': 'xen', 'os': 'win', 'architecture': 'power', 'version': '7.0', 'url': 'http://example.com/path/to/resource3', 'md5hash': 'add6bb58e139be103324d04d82d8f548', 'id': 4}, ] def fake_agent_build_get_all(context, hypervisor): agent_build_all = [] for agent in fake_agents_list: if hypervisor and hypervisor != agent['hypervisor']: continue agent_build_ref = models.AgentBuild() agent_build_ref.update(agent) agent_build_all.append(agent_build_ref) return agent_build_all def fake_agent_build_update(context, agent_build_id, values): pass def fake_agent_build_destroy(context, agent_update_id): pass def fake_agent_build_create(context, values): values['id'] = 1 agent_build_ref = models.AgentBuild() agent_build_ref.update(values) return agent_build_ref class AgentsTestV21(test.NoDBTestCase): controller = agents_v21.AgentController() validation_error = exception.ValidationError def setUp(self): super(AgentsTestV21, self).setUp() self.stub_out("nova.db.agent_build_get_all", fake_agent_build_get_all) self.stub_out("nova.db.agent_build_update", fake_agent_build_update) self.stub_out("nova.db.agent_build_destroy", fake_agent_build_destroy) self.stub_out("nova.db.agent_build_create", fake_agent_build_create) self.req = self._get_http_request() def _get_http_request(self): return fakes.HTTPRequest.blank('') def test_agents_create(self): body = {'agent': {'hypervisor': 'kvm', 'os': 'win', 'architecture': 'x86', 'version': '7.0', 'url': 'http://example.com/path/to/resource', 'md5hash': 'add6bb58e139be103324d04d82d8f545'}} response = {'agent': {'hypervisor': 'kvm', 'os': 'win', 'architecture': 'x86', 'version': '7.0', 'url': 'http://example.com/path/to/resource', 'md5hash': 'add6bb58e139be103324d04d82d8f545', 'agent_id': 1}} res_dict = self.controller.create(self.req, body=body) self.assertEqual(res_dict, response) def _test_agents_create_key_error(self, key): body = {'agent': {'hypervisor': 'kvm', 'os': 'win', 'architecture': 'x86', 'version': '7.0', 'url': 'xxx://xxxx/xxx/xxx', 'md5hash': 'add6bb58e139be103324d04d82d8f545'}} body['agent'].pop(key) self.assertRaises(self.validation_error, self.controller.create, self.req, body=body) def test_agents_create_without_hypervisor(self): self._test_agents_create_key_error('hypervisor') def test_agents_create_without_os(self): 
    def test_agents_create_without_os(self):
        self._test_agents_create_key_error('os')

    def test_agents_create_without_architecture(self):
        self._test_agents_create_key_error('architecture')

    def test_agents_create_without_version(self):
        self._test_agents_create_key_error('version')

    def test_agents_create_without_url(self):
        self._test_agents_create_key_error('url')

    def test_agents_create_without_md5hash(self):
        self._test_agents_create_key_error('md5hash')

    def test_agents_create_with_wrong_type(self):
        body = {'agent': None}
        self.assertRaises(self.validation_error,
                          self.controller.create, self.req, body=body)

    def test_agents_create_with_empty_type(self):
        body = {}
        self.assertRaises(self.validation_error,
                          self.controller.create, self.req, body=body)

    def test_agents_create_with_existed_agent(self):
        def fake_agent_build_create_with_existed_agent(context, values):
            raise exception.AgentBuildExists(**values)

        self.stub_out('nova.db.agent_build_create',
                      fake_agent_build_create_with_existed_agent)
        body = {'agent': {'hypervisor': 'kvm',
                          'os': 'win',
                          'architecture': 'x86',
                          'version': '7.0',
                          'url': 'xxx://xxxx/xxx/xxx',
                          'md5hash': 'add6bb58e139be103324d04d82d8f545'}}
        self.assertRaises(webob.exc.HTTPConflict, self.controller.create,
                          self.req, body=body)

    def _test_agents_create_with_invalid_length(self, key):
        body = {'agent': {'hypervisor': 'kvm',
                          'os': 'win',
                          'architecture': 'x86',
                          'version': '7.0',
                          'url': 'http://example.com/path/to/resource',
                          'md5hash': 'add6bb58e139be103324d04d82d8f545'}}
        body['agent'][key] = 'x' * 256
        self.assertRaises(self.validation_error,
                          self.controller.create, self.req, body=body)

    def test_agents_create_with_invalid_length_hypervisor(self):
        self._test_agents_create_with_invalid_length('hypervisor')

    def test_agents_create_with_invalid_length_os(self):
        self._test_agents_create_with_invalid_length('os')

    def test_agents_create_with_invalid_length_architecture(self):
        self._test_agents_create_with_invalid_length('architecture')

    def test_agents_create_with_invalid_length_version(self):
        self._test_agents_create_with_invalid_length('version')

    def test_agents_create_with_invalid_length_url(self):
        self._test_agents_create_with_invalid_length('url')

    def test_agents_create_with_invalid_length_md5hash(self):
        self._test_agents_create_with_invalid_length('md5hash')

    def test_agents_delete(self):
        self.controller.delete(self.req, 1)

    def test_agents_delete_with_id_not_found(self):
        with mock.patch.object(db, 'agent_build_destroy',
                               side_effect=exception.AgentBuildNotFound(
                                   id=1)):
            self.assertRaises(webob.exc.HTTPNotFound,
                              self.controller.delete, self.req, 1)

    def test_agents_delete_string_id(self):
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.controller.delete, self.req, 'string_id')

    def _test_agents_list(self, query_string=None):
        req = fakes.HTTPRequest.blank('', use_admin_context=True,
                                      query_string=query_string)
        res_dict = self.controller.index(req)
        agents_list = [{'hypervisor': 'kvm', 'os': 'win',
                        'architecture': 'x86',
                        'version': '7.0',
                        'url': 'http://example.com/path/to/resource',
                        'md5hash': 'add6bb58e139be103324d04d82d8f545',
                        'agent_id': 1},
                       {'hypervisor': 'kvm', 'os': 'linux',
                        'architecture': 'x86',
                        'version': '16.0',
                        'url': 'http://example.com/path/to/resource1',
                        'md5hash': 'add6bb58e139be103324d04d82d8f546',
                        'agent_id': 2},
                       {'hypervisor': 'xen', 'os': 'linux',
                        'architecture': 'x86',
                        'version': '16.0',
                        'url': 'http://example.com/path/to/resource2',
                        'md5hash': 'add6bb58e139be103324d04d82d8f547',
                        'agent_id': 3},
                       {'hypervisor': 'xen', 'os': 'win',
                        'architecture': 'power',
                        'version': '7.0',
                        'url': 'http://example.com/path/to/resource3',
                        'md5hash': 'add6bb58e139be103324d04d82d8f548',
                        'agent_id': 4},
                       ]
        self.assertEqual(res_dict, {'agents': agents_list})

    def test_agents_list(self):
        self._test_agents_list()

    def test_agents_list_with_hypervisor(self):
        req = fakes.HTTPRequest.blank('', use_admin_context=True,
                                      query_string='hypervisor=kvm')
        res_dict = self.controller.index(req)
        response = [{'hypervisor': 'kvm', 'os': 'win',
                     'architecture': 'x86',
                     'version': '7.0',
                     'url': 'http://example.com/path/to/resource',
                     'md5hash': 'add6bb58e139be103324d04d82d8f545',
                     'agent_id': 1},
                    {'hypervisor': 'kvm', 'os': 'linux',
                     'architecture': 'x86',
                     'version': '16.0',
                     'url': 'http://example.com/path/to/resource1',
                     'md5hash': 'add6bb58e139be103324d04d82d8f546',
                     'agent_id': 2},
                    ]
        self.assertEqual(res_dict, {'agents': response})

    def test_agents_list_with_multi_hypervisor_filter(self):
        query_string = 'hypervisor=xen&hypervisor=kvm'
        req = fakes.HTTPRequest.blank('', use_admin_context=True,
                                      query_string=query_string)
        res_dict = self.controller.index(req)
        response = [{'hypervisor': 'kvm', 'os': 'win',
                     'architecture': 'x86',
                     'version': '7.0',
                     'url': 'http://example.com/path/to/resource',
                     'md5hash': 'add6bb58e139be103324d04d82d8f545',
                     'agent_id': 1},
                    {'hypervisor': 'kvm', 'os': 'linux',
                     'architecture': 'x86',
                     'version': '16.0',
                     'url': 'http://example.com/path/to/resource1',
                     'md5hash': 'add6bb58e139be103324d04d82d8f546',
                     'agent_id': 2},
                    ]
        self.assertEqual(res_dict, {'agents': response})

    def test_agents_list_query_allow_negative_int_as_string(self):
        req = fakes.HTTPRequest.blank('', use_admin_context=True,
                                      query_string='hypervisor=-1')
        res_dict = self.controller.index(req)
        self.assertEqual(res_dict, {'agents': []})

    def test_agents_list_query_allow_int_as_string(self):
        req = fakes.HTTPRequest.blank('', use_admin_context=True,
                                      query_string='hypervisor=1')
        res_dict = self.controller.index(req)
        self.assertEqual(res_dict, {'agents': []})

    def test_agents_list_with_unknown_filter(self):
        query_string = 'unknown_filter=abc'
        self._test_agents_list(query_string=query_string)

    def test_agents_list_with_hypervisor_and_additional_filter(self):
        req = fakes.HTTPRequest.blank(
            '', use_admin_context=True,
            query_string='hypervisor=kvm&additional_filter=abc')
        res_dict = self.controller.index(req)
        response = [{'hypervisor': 'kvm', 'os': 'win',
                     'architecture': 'x86',
                     'version': '7.0',
                     'url': 'http://example.com/path/to/resource',
                     'md5hash': 'add6bb58e139be103324d04d82d8f545',
                     'agent_id': 1},
                    {'hypervisor': 'kvm', 'os': 'linux',
                     'architecture': 'x86',
                     'version': '16.0',
                     'url': 'http://example.com/path/to/resource1',
                     'md5hash': 'add6bb58e139be103324d04d82d8f546',
                     'agent_id': 2},
                    ]
        self.assertEqual(res_dict, {'agents': response})

    def test_agents_update(self):
        body = {'para': {'version': '7.0',
                         'url': 'http://example.com/path/to/resource',
                         'md5hash': 'add6bb58e139be103324d04d82d8f545'}}
        response = {'agent': {'agent_id': 1,
                              'version': '7.0',
                              'url': 'http://example.com/path/to/resource',
                              'md5hash': 'add6bb58e139be103324d04d82d8f545'}}
        res_dict = self.controller.update(self.req, 1, body=body)
        self.assertEqual(res_dict, response)

    def _test_agents_update_key_error(self, key):
        body = {'para': {'version': '7.0',
                         'url': 'xxx://xxxx/xxx/xxx',
                         'md5hash': 'add6bb58e139be103324d04d82d8f545'}}
        body['para'].pop(key)
        self.assertRaises(self.validation_error,
                          self.controller.update, self.req, 1, body=body)

    def test_agents_update_without_version(self):
        self._test_agents_update_key_error('version')

    def test_agents_update_without_url(self):
        self._test_agents_update_key_error('url')
    def test_agents_update_without_md5hash(self):
        self._test_agents_update_key_error('md5hash')

    def test_agents_update_with_wrong_type(self):
        body = {'agent': None}
        self.assertRaises(self.validation_error,
                          self.controller.update, self.req, 1, body=body)

    def test_agents_update_with_empty(self):
        body = {}
        self.assertRaises(self.validation_error,
                          self.controller.update, self.req, 1, body=body)

    def test_agents_update_value_error(self):
        body = {'para': {'version': '7.0',
                         'url': 1111,
                         'md5hash': 'add6bb58e139be103324d04d82d8f545'}}
        self.assertRaises(self.validation_error,
                          self.controller.update, self.req, 1, body=body)

    def test_agents_update_with_string_id(self):
        body = {'para': {'version': '7.0',
                         'url': 'http://example.com/path/to/resource',
                         'md5hash': 'add6bb58e139be103324d04d82d8f545'}}
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.controller.update, self.req,
                          'string_id', body=body)

    def _test_agents_update_with_invalid_length(self, key):
        body = {'para': {'version': '7.0',
                         'url': 'http://example.com/path/to/resource',
                         'md5hash': 'add6bb58e139be103324d04d82d8f545'}}
        body['para'][key] = 'x' * 256
        self.assertRaises(self.validation_error,
                          self.controller.update, self.req, 1, body=body)

    def test_agents_update_with_invalid_length_version(self):
        self._test_agents_update_with_invalid_length('version')

    def test_agents_update_with_invalid_length_url(self):
        self._test_agents_update_with_invalid_length('url')

    def test_agents_update_with_invalid_length_md5hash(self):
        self._test_agents_update_with_invalid_length('md5hash')

    def test_agents_update_with_id_not_found(self):
        with mock.patch.object(db, 'agent_build_update',
                               side_effect=exception.AgentBuildNotFound(
                                   id=1)):
            body = {'para': {'version': '7.0',
                             'url': 'http://example.com/path/to/resource',
                             'md5hash': 'add6bb58e139be103324d04d82d8f545'}}
            self.assertRaises(webob.exc.HTTPNotFound,
                              self.controller.update, self.req, 1, body=body)


class AgentsPolicyEnforcementV21(test.NoDBTestCase):

    def setUp(self):
        super(AgentsPolicyEnforcementV21, self).setUp()
        self.controller = agents_v21.AgentController()
        self.req = fakes.HTTPRequest.blank('')

    def test_create_policy_failed(self):
        rule_name = "os_compute_api:os-agents"
        self.policy.set_rules({rule_name: "project_id:non_fake"})
        exc = self.assertRaises(
            exception.PolicyNotAuthorized,
            self.controller.create, self.req,
            body={'agent': {'hypervisor': 'kvm',
                            'os': 'win',
                            'architecture': 'x86',
                            'version': '7.0',
                            'url': 'xxx://xxxx/xxx/xxx',
                            'md5hash': 'add6bb58e139be103324d04d82d8f545'}})
        self.assertEqual(
            "Policy doesn't allow %s to be performed." % rule_name,
            exc.format_message())

    def test_index_policy_failed(self):
        rule_name = "os_compute_api:os-agents"
        self.policy.set_rules({rule_name: "project_id:non_fake"})
        exc = self.assertRaises(
            exception.PolicyNotAuthorized,
            self.controller.index, self.req)
        self.assertEqual(
            "Policy doesn't allow %s to be performed." % rule_name,
            exc.format_message())

    def test_delete_policy_failed(self):
        rule_name = "os_compute_api:os-agents"
        self.policy.set_rules({rule_name: "project_id:non_fake"})
        exc = self.assertRaises(
            exception.PolicyNotAuthorized,
            self.controller.delete, self.req, fakes.FAKE_UUID)
        self.assertEqual(
            "Policy doesn't allow %s to be performed." % rule_name,
            exc.format_message())

    def test_update_policy_failed(self):
        rule_name = "os_compute_api:os-agents"
        self.policy.set_rules({rule_name: "project_id:non_fake"})
        exc = self.assertRaises(
            exception.PolicyNotAuthorized,
            self.controller.update, self.req, fakes.FAKE_UUID,
            body={'para': {'version': '7.0',
                           'url': 'xxx://xxxx/xxx/xxx',
                           'md5hash': 'add6bb58e139be103324d04d82d8f545'}})
        self.assertEqual(
            "Policy doesn't allow %s to be performed." % rule_name,
            exc.format_message())


nova-17.0.1/nova/tests/unit/api/openstack/compute/test_floating_ip_pools.py

# Copyright (c) 2011 X.commerce, a business unit of eBay Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock

from nova.api.openstack.compute import floating_ip_pools \
    as fipp_v21
from nova import context
from nova import exception
from nova import test
from nova.tests.unit.api.openstack import fakes


def fake_get_floating_ip_pools(*args, **kwargs):
    return ['nova', 'other']


class FloatingIpPoolTestV21(test.NoDBTestCase):
    floating_ip_pools = fipp_v21

    def setUp(self):
        super(FloatingIpPoolTestV21, self).setUp()
        self.context = context.RequestContext('fake', 'fake')
        self.controller = self.floating_ip_pools.FloatingIPPoolsController()
        self.req = fakes.HTTPRequest.blank('')

    def test_translate_floating_ip_pools_view(self):
        pools = fake_get_floating_ip_pools(None, self.context)
        view = self.floating_ip_pools._translate_floating_ip_pools_view(pools)
        self.assertIn('floating_ip_pools', view)
        self.assertEqual(view['floating_ip_pools'][0]['name'],
                         pools[0])
        self.assertEqual(view['floating_ip_pools'][1]['name'],
                         pools[1])

    def test_floating_ips_pools_list(self):
        with mock.patch.object(self.controller.network_api,
                               'get_floating_ip_pools',
                               fake_get_floating_ip_pools):
            res_dict = self.controller.index(self.req)

            pools = fake_get_floating_ip_pools(None, self.context)
            response = {'floating_ip_pools': [{'name': name}
                                              for name in pools]}
            self.assertEqual(res_dict, response)


class FloatingIPPoolsPolicyEnforcementV21(test.NoDBTestCase):

    def setUp(self):
        super(FloatingIPPoolsPolicyEnforcementV21, self).setUp()
        self.controller = fipp_v21.FloatingIPPoolsController()
        self.req = fakes.HTTPRequest.blank('')

    def test_change_password_policy_failed(self):
        rule_name = "os_compute_api:os-floating-ip-pools"
        rule = {rule_name: "project:non_fake"}
        self.policy.set_rules(rule)
        exc = self.assertRaises(
            exception.PolicyNotAuthorized,
            self.controller.index, self.req)
        self.assertEqual(
            "Policy doesn't allow %s to be performed."
            % rule_name, exc.format_message())


class FloatingIpPoolDeprecationTest(test.NoDBTestCase):

    def setUp(self):
        super(FloatingIpPoolDeprecationTest, self).setUp()
        self.controller = fipp_v21.FloatingIPPoolsController()
        self.req = fakes.HTTPRequest.blank('', version='2.36')

    def test_not_found_for_fip_pool_api(self):
        self.assertRaises(exception.VersionNotFoundForAPIMethod,
                          self.controller.index, self.req)


nova-17.0.1/nova/tests/unit/api/openstack/compute/test_fixed_ips.py

# Copyright 2012 IBM Corp.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import webob

from nova.api.openstack import api_version_request
from nova.api.openstack.compute import fixed_ips as fixed_ips_v21
from nova.api.openstack import wsgi as os_wsgi
from nova import context
from nova import exception
from nova import test
from nova.tests.unit.api.openstack import fakes
from nova.tests.unit.objects import test_network
from nova.tests import uuidsentinel as uuids

fake_fixed_ips = [{'id': 1,
                   'address': '192.168.1.1',
                   'network_id': 1,
                   'virtual_interface_id': 1,
                   'instance_uuid': uuids.instance_1,
                   'allocated': False,
                   'leased': False,
                   'reserved': False,
                   'host': None,
                   'instance': None,
                   'network': test_network.fake_network,
                   'created_at': None,
                   'updated_at': None,
                   'deleted_at': None,
                   'deleted': False},
                  {'id': 2,
                   'address': '192.168.1.2',
                   'network_id': 1,
                   'virtual_interface_id': 2,
                   'instance_uuid': uuids.instance_2,
                   'allocated': False,
                   'leased': False,
                   'reserved': False,
                   'host': None,
                   'instance': None,
                   'network': test_network.fake_network,
                   'created_at': None,
                   'updated_at': None,
                   'deleted_at': None,
                   'deleted': False},
                  {'id': 3,
                   'address': '10.0.0.2',
                   'network_id': 1,
                   'virtual_interface_id': 3,
                   'instance_uuid': uuids.instance_3,
                   'allocated': False,
                   'leased': False,
                   'reserved': False,
                   'host': None,
                   'instance': None,
                   'network': test_network.fake_network,
                   'created_at': None,
                   'updated_at': None,
                   'deleted_at': None,
                   'deleted': True},
                  ]


def fake_fixed_ip_get_by_address(context, address, columns_to_join=None):
    if address == 'inv.ali.d.ip':
        msg = "Invalid fixed IP Address %s in request" % address
        raise exception.FixedIpInvalid(msg)
    for fixed_ip in fake_fixed_ips:
        if fixed_ip['address'] == address and not fixed_ip['deleted']:
            return fixed_ip
    raise exception.FixedIpNotFoundForAddress(address=address)


def fake_fixed_ip_update(context, address, values):
    fixed_ip = fake_fixed_ip_get_by_address(context, address)
    if fixed_ip is None:
        raise exception.FixedIpNotFoundForAddress(address=address)
    else:
        for key in values:
            fixed_ip[key] = values[key]


class FakeModel(object):
    """Stubs out for model."""

    def __init__(self, values):
        self.values = values

    def __getattr__(self, name):
        return self.values[name]

    def __getitem__(self, key):
        if key in self.values:
            return self.values[key]
        else:
            raise NotImplementedError()

    def __repr__(self):
        return '<FakeModel: %s>' % self.values


def fake_network_get_all(context):
    network = {'id': 1,
               'cidr': "192.168.1.0/24"}
    return [FakeModel(network)]


class FixedIpTestV21(test.NoDBTestCase):
    fixed_ips = fixed_ips_v21
    url = '/v2/fake/os-fixed-ips'
    wsgi_api_version = os_wsgi.DEFAULT_API_VERSION

    def setUp(self):
        super(FixedIpTestV21, self).setUp()

        self.stub_out("nova.db.fixed_ip_get_by_address",
                      fake_fixed_ip_get_by_address)
        self.stub_out("nova.db.fixed_ip_update", fake_fixed_ip_update)
        self.context = context.get_admin_context()
        self.controller = self.fixed_ips.FixedIPController()

    def _assert_equal(self, ret, exp):
        self.assertEqual(ret.wsgi_code, exp)

    def _get_reserve_action(self):
        return self.controller.reserve

    def _get_unreserve_action(self):
        return self.controller.unreserve

    def _get_reserved_status(self, address):
        return {}

    def test_fixed_ips_get(self):
        req = fakes.HTTPRequest.blank('%s/192.168.1.1' % self.url)
        req.api_version_request = api_version_request.APIVersionRequest(
            self.wsgi_api_version)
        res_dict = self.controller.show(req, '192.168.1.1')
        response = {'fixed_ip': {'cidr': '192.168.1.0/24',
                                 'hostname': None,
                                 'host': None,
                                 'address': '192.168.1.1'}}
        response['fixed_ip'].update(self._get_reserved_status('192.168.1.1'))
        self.assertEqual(response, res_dict, self.wsgi_api_version)

    def test_fixed_ips_get_bad_ip_fail(self):
        req = fakes.HTTPRequest.blank('%s/10.0.0.1' % self.url)
        self.assertRaises(webob.exc.HTTPNotFound, self.controller.show,
                          req, '10.0.0.1')

    def test_fixed_ips_get_invalid_ip_address(self):
        req = fakes.HTTPRequest.blank('%s/inv.ali.d.ip' % self.url)
        self.assertRaises(webob.exc.HTTPBadRequest, self.controller.show,
                          req, 'inv.ali.d.ip')

    def test_fixed_ips_get_deleted_ip_fail(self):
        req = fakes.HTTPRequest.blank('%s/10.0.0.2' % self.url)
        self.assertRaises(webob.exc.HTTPNotFound, self.controller.show,
                          req, '10.0.0.2')

    def test_fixed_ip_reserve(self):
        fake_fixed_ips[0]['reserved'] = False
        body = {'reserve': None}
        req = fakes.HTTPRequest.blank('%s/192.168.1.1/action' % self.url)
        action = self._get_reserve_action()
        result = action(req, "192.168.1.1", body=body)

        self._assert_equal(result or action, 202)
        self.assertTrue(fake_fixed_ips[0]['reserved'])

    def test_fixed_ip_reserve_bad_ip(self):
        body = {'reserve': None}
        req = fakes.HTTPRequest.blank('%s/10.0.0.1/action' % self.url)
        action = self._get_reserve_action()

        self.assertRaises(webob.exc.HTTPNotFound, action, req,
                          '10.0.0.1', body=body)

    def test_fixed_ip_reserve_invalid_ip_address(self):
        body = {'reserve': None}
        req = fakes.HTTPRequest.blank('%s/inv.ali.d.ip/action' % self.url)
        action = self._get_reserve_action()

        self.assertRaises(webob.exc.HTTPBadRequest,
                          action, req, 'inv.ali.d.ip', body=body)

    def test_fixed_ip_reserve_deleted_ip(self):
        body = {'reserve': None}
        action = self._get_reserve_action()

        req = fakes.HTTPRequest.blank('%s/10.0.0.2/action' % self.url)
        self.assertRaises(webob.exc.HTTPNotFound, action, req,
                          '10.0.0.2', body=body)

    def test_fixed_ip_unreserve(self):
        fake_fixed_ips[0]['reserved'] = True
        body = {'unreserve': None}
        req = fakes.HTTPRequest.blank('%s/192.168.1.1/action' % self.url)
        action = self._get_unreserve_action()
        result = action(req, "192.168.1.1", body=body)

        self._assert_equal(result or action, 202)
        self.assertFalse(fake_fixed_ips[0]['reserved'])

    def test_fixed_ip_unreserve_bad_ip(self):
        body = {'unreserve': None}
        req = fakes.HTTPRequest.blank('%s/10.0.0.1/action' % self.url)
        action = self._get_unreserve_action()

        self.assertRaises(webob.exc.HTTPNotFound, action, req,
                          '10.0.0.1', body=body)
    def test_fixed_ip_unreserve_invalid_ip_address(self):
        body = {'unreserve': None}
        req = fakes.HTTPRequest.blank('%s/inv.ali.d.ip/action' % self.url)
        action = self._get_unreserve_action()
        self.assertRaises(webob.exc.HTTPBadRequest,
                          action, req, 'inv.ali.d.ip', body=body)

    def test_fixed_ip_unreserve_deleted_ip(self):
        body = {'unreserve': None}
        req = fakes.HTTPRequest.blank('%s/10.0.0.2/action' % self.url)
        action = self._get_unreserve_action()
        self.assertRaises(webob.exc.HTTPNotFound, action, req,
                          '10.0.0.2', body=body)


class FixedIpTestV24(FixedIpTestV21):

    wsgi_api_version = '2.4'

    def _get_reserved_status(self, address):
        for fixed_ip in fake_fixed_ips:
            if address == fixed_ip['address']:
                return {'reserved': fixed_ip['reserved']}
        self.fail('Invalid address: %s' % address)


class FixedIpDeprecationTest(test.NoDBTestCase):

    def setUp(self):
        super(FixedIpDeprecationTest, self).setUp()
        self.req = fakes.HTTPRequest.blank('', version='2.36')
        self.controller = fixed_ips_v21.FixedIPController()

    def test_all_apis_return_not_found(self):
        self.assertRaises(exception.VersionNotFoundForAPIMethod,
                          self.controller.show, self.req, fakes.FAKE_UUID)
        self.assertRaises(exception.VersionNotFoundForAPIMethod,
                          self.controller.reserve, self.req,
                          fakes.FAKE_UUID, {})
        self.assertRaises(exception.VersionNotFoundForAPIMethod,
                          self.controller.unreserve, self.req,
                          fakes.FAKE_UUID, {})


nova-17.0.1/nova/tests/unit/api/openstack/compute/test_certificates.py

# Copyright (c) 2012 OpenStack Foundation
# All Rights Reserved.
# Copyright 2013 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import webob.exc

from nova.api.openstack.compute import certificates \
    as certificates_v21
from nova import context
from nova import test
from nova.tests.unit.api.openstack import fakes


class CertificatesTestV21(test.NoDBTestCase):
    certificates = certificates_v21
    url = '/v2/fake/os-certificates'
    certificate_show_extension = 'os_compute_api:os-certificates:show'
    certificate_create_extension = \
        'os_compute_api:os-certificates:create'

    def setUp(self):
        super(CertificatesTestV21, self).setUp()
        self.context = context.RequestContext('fake', 'fake')
        self.controller = self.certificates.CertificatesController()
        self.req = fakes.HTTPRequest.blank('')

    def test_certificates_show_root(self):
        self.assertRaises(webob.exc.HTTPGone, self.controller.show,
                          self.req, 'root')

    def test_certificates_create_certificate(self):
        self.assertRaises(webob.exc.HTTPGone, self.controller.create,
                          self.req)


nova-17.0.1/nova/tests/unit/api/openstack/compute/test_migrations.py

# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import datetime

import mock
import six
from webob import exc

from nova.api.openstack.compute import migrations as migrations_v21
from nova import context
from nova import exception
from nova import objects
from nova.objects import base
from nova import test
from nova.tests.unit.api.openstack import fakes
from nova.tests import uuidsentinel as uuids

fake_migrations = [
    # in-progress live migration
    {
        'id': 1,
        'source_node': 'node1',
        'dest_node': 'node2',
        'source_compute': 'compute1',
        'dest_compute': 'compute2',
        'dest_host': '1.2.3.4',
        'status': 'running',
        'instance_uuid': uuids.instance1,
        'old_instance_type_id': 1,
        'new_instance_type_id': 2,
        'migration_type': 'live-migration',
        'hidden': False,
        'memory_total': 123456,
        'memory_processed': 12345,
        'memory_remaining': 111111,
        'disk_total': 234567,
        'disk_processed': 23456,
        'disk_remaining': 211111,
        'created_at': datetime.datetime(2012, 10, 29, 13, 42, 2),
        'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 2),
        'deleted_at': None,
        'deleted': False,
        'uuid': uuids.migration1,
    },
    # non in-progress live migration
    {
        'id': 2,
        'source_node': 'node1',
        'dest_node': 'node2',
        'source_compute': 'compute1',
        'dest_compute': 'compute2',
        'dest_host': '1.2.3.4',
        'status': 'error',
        'instance_uuid': uuids.instance1,
        'old_instance_type_id': 1,
        'new_instance_type_id': 2,
        'migration_type': 'live-migration',
        'hidden': False,
        'memory_total': 123456,
        'memory_processed': 12345,
        'memory_remaining': 111111,
        'disk_total': 234567,
        'disk_processed': 23456,
        'disk_remaining': 211111,
        'created_at': datetime.datetime(2012, 10, 29, 13, 42, 2),
        'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 2),
        'deleted_at': None,
        'deleted': False,
        'uuid': uuids.migration2,
    },
    # in-progress resize
    {
        'id': 4,
        'source_node': 'node10',
        'dest_node': 'node20',
        'source_compute': 'compute10',
        'dest_compute': 'compute20',
        'dest_host': '5.6.7.8',
        'status': 'migrating',
        'instance_uuid': uuids.instance2,
        'old_instance_type_id': 5,
        'new_instance_type_id': 6,
        'migration_type': 'resize',
        'hidden': False,
        'memory_total': 456789,
        'memory_processed': 56789,
        'memory_remaining': 45000,
        'disk_total': 96789,
        'disk_processed': 6789,
        'disk_remaining': 96000,
        'created_at': datetime.datetime(2013, 10, 22, 13, 42, 2),
        'updated_at': datetime.datetime(2013, 10, 22, 13, 42, 2),
        'deleted_at': None,
        'deleted': False,
        'uuid': uuids.migration3,
    },
    # non in-progress resize
    {
        'id': 5,
        'source_node': 'node10',
        'dest_node': 'node20',
        'source_compute': 'compute10',
        'dest_compute': 'compute20',
        'dest_host': '5.6.7.8',
        'status': 'error',
        'instance_uuid': uuids.instance2,
        'old_instance_type_id': 5,
        'new_instance_type_id': 6,
        'migration_type': 'resize',
        'hidden': False,
        'memory_total': 456789,
        'memory_processed': 56789,
        'memory_remaining': 45000,
        'disk_total': 96789,
        'disk_processed': 6789,
        'disk_remaining': 96000,
        'created_at': datetime.datetime(2013, 10, 22, 13, 42, 2),
        'updated_at': datetime.datetime(2013, 10, 22, 13, 42, 2),
        'deleted_at': None,
        'deleted': False,
        'uuid': uuids.migration4,
    }
]

migrations_obj = base.obj_make_list(
    'fake-context', objects.MigrationList(), objects.Migration,
    fake_migrations)
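# NOTE: obj_make_list hydrates the plain fake_migrations dicts above into a
# MigrationList of Migration objects, so the controller under test sees the
# same object types it would receive from the real compute API.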

class FakeRequest(object):
    environ = {"nova.context": context.RequestContext('fake_user', 'fake',
                                                      is_admin=True)}
    GET = {}


class MigrationsTestCaseV21(test.NoDBTestCase):
    migrations = migrations_v21

    def _migrations_output(self):
        return self.controller._output(self.req, migrations_obj)

    def setUp(self):
        """Run before each test."""
        super(MigrationsTestCaseV21, self).setUp()
        self.controller = self.migrations.MigrationsController()
        self.req = fakes.HTTPRequest.blank('', use_admin_context=True)
        self.context = self.req.environ['nova.context']

    def test_index(self):
        migrations_in_progress = {'migrations': self._migrations_output()}

        for mig in migrations_in_progress['migrations']:
            self.assertIn('id', mig)
            self.assertNotIn('deleted', mig)
            self.assertNotIn('deleted_at', mig)
            self.assertNotIn('links', mig)

        filters = {'host': 'host1', 'status': 'migrating',
                   'instance_uuid': uuids.instance1,
                   'source_compute': 'host1', 'hidden': '0',
                   'migration_type': 'resize'}
        # python-novaclient actually supports sending this even though it's
        # not used in the DB API layer and is totally useless. This lets us,
        # however, test that additionalProperties=True allows it.
        unknown_filter = {'cell_name': 'ChildCell'}
        self.req.GET.update(filters)
        self.req.GET.update(unknown_filter)
        with mock.patch.object(self.controller.compute_api,
                               'get_migrations',
                               return_value=migrations_obj) as (
            mock_get_migrations
        ):
            response = self.controller.index(self.req)
            self.assertEqual(migrations_in_progress, response)
            # Only with the filters, and the unknown filter is stripped
            mock_get_migrations.assert_called_once_with(self.context,
                                                        filters)

    def test_index_query_allow_negative_int_as_string(self):
        migrations = {'migrations': self._migrations_output()}
        filters = ['host', 'status', 'cell_name', 'instance_uuid',
                   'source_compute', 'hidden', 'migration_type']

        with mock.patch.object(self.controller.compute_api,
                               'get_migrations',
                               return_value=migrations_obj):
            for fl in filters:
                req = fakes.HTTPRequest.blank('/os-migrations',
                                              use_admin_context=True,
                                              query_string='%s=-1' % fl)
                response = self.controller.index(req)
                self.assertEqual(migrations, response)

    def test_index_query_duplicate_query_parameters(self):
        migrations = {'migrations': self._migrations_output()}
        params = {'host': 'host1', 'status': 'migrating',
                  'cell_name': 'ChildCell', 'instance_uuid': uuids.instance1,
                  'source_compute': 'host1', 'hidden': '0',
                  'migration_type': 'resize'}

        with mock.patch.object(self.controller.compute_api,
                               'get_migrations',
                               return_value=migrations_obj):
            for k, v in params.items():
                req = fakes.HTTPRequest.blank(
                    '/os-migrations', use_admin_context=True,
                    query_string='%s=%s&%s=%s' % (k, v, k, v))
                response = self.controller.index(req)
                self.assertEqual(migrations, response)


class MigrationsTestCaseV223(MigrationsTestCaseV21):
    wsgi_api_version = '2.23'

    def setUp(self):
        """Run before each test."""
        super(MigrationsTestCaseV223, self).setUp()
        self.req = fakes.HTTPRequest.blank(
            '', version=self.wsgi_api_version, use_admin_context=True)

    def test_index(self):
        migrations = {'migrations': self.controller._output(
                                        self.req, migrations_obj, True)}

        for i, mig in enumerate(migrations['migrations']):
            # first item is in-progress live migration
            if i == 0:
                self.assertIn('links', mig)
            else:
                self.assertNotIn('links', mig)

            self.assertIn('migration_type', mig)
            self.assertIn('id', mig)
            self.assertNotIn('deleted', mig)
            self.assertNotIn('deleted_at', mig)

        with mock.patch.object(self.controller.compute_api,
                               'get_migrations') as m_get:
            m_get.return_value = migrations_obj
            response = self.controller.index(self.req)
            self.assertEqual(migrations, response)
            self.assertIn('links', response['migrations'][0])
            self.assertIn('migration_type', response['migrations'][0])


class MigrationsTestCaseV259(MigrationsTestCaseV223):
    wsgi_api_version = '2.59'

    def test_index(self):
        migrations = {'migrations': self.controller._output(
                                        self.req, migrations_obj,
                                        True, True)}

        for i, mig in enumerate(migrations['migrations']):
            # first item is in-progress live migration
            if i == 0:
                self.assertIn('links', mig)
            else:
                self.assertNotIn('links', mig)

            self.assertIn('migration_type', mig)
            self.assertIn('id', mig)
            self.assertIn('uuid', mig)
            self.assertNotIn('deleted', mig)
            self.assertNotIn('deleted_at', mig)

        with mock.patch.object(self.controller.compute_api,
                               'get_migrations_sorted') as m_get:
            m_get.return_value = migrations_obj
            response = self.controller.index(self.req)
            self.assertEqual(migrations, response)
            self.assertIn('links', response['migrations'][0])
            self.assertIn('migration_type', response['migrations'][0])

    @mock.patch('nova.compute.api.API.get_migrations_sorted')
    def test_index_with_invalid_marker(self, mock_migrations_get):
        """Tests detail paging with an invalid marker (not found)."""
        mock_migrations_get.side_effect = exception.MarkerNotFound(
            marker=uuids.invalid_marker)
        req = fakes.HTTPRequest.blank(
            '/os-migrations?marker=%s' % uuids.invalid_marker,
            version=self.wsgi_api_version, use_admin_context=True)
        e = self.assertRaises(exc.HTTPBadRequest,
                              self.controller.index, req)
        self.assertEqual(
            "Marker %s could not be found." % uuids.invalid_marker,
            six.text_type(e))

    def test_index_with_invalid_limit(self):
        """Tests detail paging with an invalid limit."""
        req = fakes.HTTPRequest.blank(
            '/os-migrations?limit=x', version=self.wsgi_api_version,
            use_admin_context=True)
        self.assertRaises(exception.ValidationError,
                          self.controller.index, req)
        req = fakes.HTTPRequest.blank(
            '/os-migrations?limit=-1', version=self.wsgi_api_version,
            use_admin_context=True)
        self.assertRaises(exception.ValidationError,
                          self.controller.index, req)

    def test_index_with_invalid_changes_since(self):
        """Tests detail paging with an invalid changes-since value."""
        req = fakes.HTTPRequest.blank(
            '/os-migrations?changes-since=wrong_time',
            version=self.wsgi_api_version, use_admin_context=True)
        self.assertRaises(exception.ValidationError,
                          self.controller.index, req)

    def test_index_with_unknown_query_param(self):
        """Tests detail paging with an unknown query parameter."""
        req = fakes.HTTPRequest.blank(
            '/os-migrations?foo=bar',
            version=self.wsgi_api_version, use_admin_context=True)
        ex = self.assertRaises(exception.ValidationError,
                               self.controller.index, req)
        self.assertIn('Additional properties are not allowed',
                      six.text_type(ex))
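    # NOTE: microversion 2.59 introduced paging (limit/marker) and the
    # changes-since filter for os-migrations; the tests above exercise the
    # schema validation of those new query parameters, while the test below
    # verifies that changes-since is silently dropped on an older
    # microversion (2.58) where it is not part of the request schema.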
    @mock.patch('nova.compute.api.API.get_migrations',
                return_value=objects.MigrationList())
    def test_index_with_changes_since_old_microversion(self, get_migrations):
        """Tests that the changes-since query parameter is ignored before
        microversion 2.59.
        """
        # Also use a valid filter (instance_uuid) to make sure only
        # changes-since is removed.
        req = fakes.HTTPRequest.blank(
            '/os-migrations?changes-since=2018-01-10T16:59:24.138939&'
            'instance_uuid=%s' % uuids.instance_uuid,
            version='2.58', use_admin_context=True)
        result = self.controller.index(req)
        self.assertEqual({'migrations': []}, result)
        get_migrations.assert_called_once_with(
            req.environ['nova.context'],
            {'instance_uuid': uuids.instance_uuid})


class MigrationsPolicyEnforcement(test.NoDBTestCase):

    def setUp(self):
        super(MigrationsPolicyEnforcement, self).setUp()
        self.controller = migrations_v21.MigrationsController()
        self.req = fakes.HTTPRequest.blank('')

    def test_list_policy_failed(self):
        rule_name = "os_compute_api:os-migrations:index"
        self.policy.set_rules({rule_name: "project_id:non_fake"})
        exc = self.assertRaises(
            exception.PolicyNotAuthorized,
            self.controller.index, self.req)
        self.assertEqual(
            "Policy doesn't allow %s to be performed." % rule_name,
            exc.format_message())


class MigrationsPolicyEnforcementV223(MigrationsPolicyEnforcement):
    wsgi_api_version = '2.23'

    def setUp(self):
        super(MigrationsPolicyEnforcementV223, self).setUp()
        self.req = fakes.HTTPRequest.blank('',
                                           version=self.wsgi_api_version)


class MigrationsPolicyEnforcementV259(MigrationsPolicyEnforcementV223):
    wsgi_api_version = '2.59'


nova-17.0.1/nova/tests/unit/api/openstack/compute/test_console_auth_tokens.py

# Copyright 2013 Cloudbase Solutions Srl
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock
import webob

from nova.api.openstack import api_version_request
from nova.api.openstack.compute import console_auth_tokens \
    as console_auth_tokens_v21
from nova.consoleauth import rpcapi as consoleauth_rpcapi
from nova import test
from nova.tests.unit.api.openstack import fakes


class ConsoleAuthTokensExtensionTestV21(test.NoDBTestCase):
    controller_class = console_auth_tokens_v21

    _EXPECTED_OUTPUT = {'console': {'instance_uuid': fakes.FAKE_UUID,
                                    'host': 'fake_host',
                                    'port': 'fake_port',
                                    'internal_access_path':
                                        'fake_access_path'}}

    def setUp(self):
        super(ConsoleAuthTokensExtensionTestV21, self).setUp()
        self.controller = self.controller_class.ConsoleAuthTokensController()
        self.req = fakes.HTTPRequest.blank('', use_admin_context=True)
        self.context = self.req.environ['nova.context']

    @mock.patch.object(consoleauth_rpcapi.ConsoleAuthAPI, 'check_token',
                       return_value={
                           'instance_uuid': fakes.FAKE_UUID,
                           'host': 'fake_host',
                           'port': 'fake_port',
                           'internal_access_path': 'fake_access_path',
                           'console_type': 'rdp-html5'})
    def test_get_console_connect_info(self, mock_check_token):
        output = self.controller.show(self.req, fakes.FAKE_UUID)
        self.assertEqual(self._EXPECTED_OUTPUT, output)
        mock_check_token.assert_called_once_with(self.context,
                                                 fakes.FAKE_UUID)

    @mock.patch.object(consoleauth_rpcapi.ConsoleAuthAPI, 'check_token',
                       return_value=None)
    def test_get_console_connect_info_token_not_found(self,
                                                      mock_check_token):
        self.assertRaises(webob.exc.HTTPNotFound,
                          self.controller.show, self.req, fakes.FAKE_UUID)
        mock_check_token.assert_called_once_with(self.context,
                                                 fakes.FAKE_UUID)

    @mock.patch.object(consoleauth_rpcapi.ConsoleAuthAPI, 'check_token',
                       return_value={
                           'instance_uuid': fakes.FAKE_UUID,
                           'host': 'fake_host',
                           'port': 'fake_port',
                           'internal_access_path': 'fake_access_path',
                           'console_type': 'unauthorized_console_type'})
    def test_get_console_connect_info_nonrdp_console_type(
            self, mock_check_token):
        self.assertRaises(webob.exc.HTTPUnauthorized,
                          self.controller.show, self.req, fakes.FAKE_UUID)
        mock_check_token.assert_called_once_with(self.context,
                                                 fakes.FAKE_UUID)
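# NOTE: in the base (pre-2.31) tests above, only RDP console tokens may be
# queried through this API, so a non-RDP console type raises
# HTTPUnauthorized; the 2.31 subclass below overrides that test because the
# restriction was lifted and a webmks token is expected to validate.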

class ConsoleAuthTokensExtensionTestV231(ConsoleAuthTokensExtensionTestV21):

    def setUp(self):
        super(ConsoleAuthTokensExtensionTestV231, self).setUp()
        self.req.api_version_request = api_version_request.APIVersionRequest(
            '2.31')

    @mock.patch.object(consoleauth_rpcapi.ConsoleAuthAPI, 'check_token')
    def test_get_console_connect_info_nonrdp_console_type(self, mock_check):
        mock_check.return_value = {'instance_uuid': fakes.FAKE_UUID,
                                   'host': 'fake_host',
                                   'port': 'fake_port',
                                   'internal_access_path':
                                       'fake_access_path',
                                   'console_type': 'webmks'}
        output = self.controller.show(self.req, fakes.FAKE_UUID)
        self.assertEqual(self._EXPECTED_OUTPUT, output)
        mock_check.assert_called_once_with(self.context, fakes.FAKE_UUID)


nova-17.0.1/nova/tests/unit/api/openstack/compute/test_hypervisor_status.py

# Copyright 2014 Intel Corp.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import copy

import mock

from nova.api.openstack.compute import hypervisors \
    as hypervisors_v21
from nova import objects
from nova import test
from nova.tests.unit.api.openstack.compute import test_hypervisors
from nova.tests.unit.api.openstack import fakes

TEST_HYPER = test_hypervisors.TEST_HYPERS_OBJ[0].obj_clone()
TEST_SERVICE = objects.Service(id=1,
                               host="compute1",
                               binary="nova-compute",
                               topic="compute_topic",
                               report_count=5,
                               disabled=False,
                               disabled_reason=None,
                               availability_zone="nova")


class HypervisorStatusTestV21(test.NoDBTestCase):
    def _prepare_extension(self):
        self.controller = hypervisors_v21.HypervisorsController()
        self.controller.servicegroup_api.service_is_up = mock.MagicMock(
            return_value=True)

    def _get_request(self):
        return fakes.HTTPRequest.blank('/v2/fake/os-hypervisors/detail',
                                       use_admin_context=True)

    def test_view_hypervisor_service_status(self):
        self._prepare_extension()
        req = self._get_request()
        result = self.controller._view_hypervisor(
            TEST_HYPER, TEST_SERVICE, False, req)
        self.assertEqual('enabled', result['status'])
        self.assertEqual('up', result['state'])
        self.assertEqual('enabled', result['status'])

        self.controller.servicegroup_api.service_is_up.return_value = False
        result = self.controller._view_hypervisor(
            TEST_HYPER, TEST_SERVICE, False, req)
        self.assertEqual('down', result['state'])

        hyper = copy.deepcopy(TEST_HYPER)
        service = copy.deepcopy(TEST_SERVICE)
        service.disabled = True
        result = self.controller._view_hypervisor(hyper, service, False, req)
        self.assertEqual('disabled', result['status'])

    def test_view_hypervisor_detail_status(self):
        self._prepare_extension()
        req = self._get_request()
        result = self.controller._view_hypervisor(
            TEST_HYPER, TEST_SERVICE, True, req)
        self.assertEqual('enabled', result['status'])
        self.assertEqual('up', result['state'])
        self.assertIsNone(result['service']['disabled_reason'])

        self.controller.servicegroup_api.service_is_up.return_value = False
        result = self.controller._view_hypervisor(
            TEST_HYPER, TEST_SERVICE, True, req)
        self.assertEqual('down', result['state'])

        hyper = copy.deepcopy(TEST_HYPER)
        service = copy.deepcopy(TEST_SERVICE)
        service.disabled = True
        service.disabled_reason = "fake"
        result = self.controller._view_hypervisor(hyper, service, True, req)
        self.assertEqual('disabled', result['status'])
        self.assertEqual('fake', result['service']['disabled_reason'])


nova-17.0.1/nova/tests/unit/api/openstack/compute/test_flavors.py

# Copyright 2012 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock
import six.moves.urllib.parse as urlparse
import webob

from nova.api.openstack import common
from nova.api.openstack.compute import flavors as flavors_v21
from nova import context
from nova import exception
from nova import objects
from nova import test
from nova.tests.unit.api.openstack import fakes
from nova.tests.unit import matchers

NS = "{http://docs.openstack.org/compute/api/v1.1}"
ATOMNS = "{http://www.w3.org/2005/Atom}"


def fake_get_limit_and_marker(request, max_limit=1):
    params = common.get_pagination_params(request)
    limit = params.get('limit', max_limit)
    limit = min(max_limit, limit)
    marker = params.get('marker')

    return limit, marker


def return_flavor_not_found(context, flavor_id, read_deleted=None):
    raise exception.FlavorNotFound(flavor_id=flavor_id)


class FlavorsTestV21(test.TestCase):
    _prefix = "/v2/fake"
    Controller = flavors_v21.FlavorsController
    fake_request = fakes.HTTPRequestV21
    _rspv = "v2/fake"
    _fake = "/fake"
    microversion = '2.1'
    # Flag to tell the test if a description should be expected in a
    # response.
    expect_description = False

    def setUp(self):
        super(FlavorsTestV21, self).setUp()
        fakes.stub_out_networking(self)
        fakes.stub_out_flavor_get_all(self)
        fakes.stub_out_flavor_get_by_flavor_id(self)
        self.controller = self.Controller()

    def _build_request(self, url):
        return self.fake_request.blank(
            self._prefix + url, version=self.microversion)

    def _set_expected_body(self, expected, flavor):
        # NOTE(oomichi): On v2.1 API, some extensions of v2.0 are merged
        # as core features and we can get the following parameters as the
        # default.
        expected['OS-FLV-EXT-DATA:ephemeral'] = flavor.ephemeral_gb
        expected['OS-FLV-DISABLED:disabled'] = flavor.disabled
        expected['swap'] = flavor.swap
        if self.expect_description:
            expected['description'] = flavor.description

    @mock.patch('nova.objects.Flavor.get_by_flavor_id',
                side_effect=return_flavor_not_found)
    def test_get_flavor_by_invalid_id(self, mock_get):
        req = self._build_request('/flavors/asdf')
        self.assertRaises(webob.exc.HTTPNotFound,
                          self.controller.show, req, 'asdf')

    def test_get_flavor_by_id(self):
        req = self._build_request('/flavors/1')
        flavor = self.controller.show(req, '1')
        expected = {
            "flavor": {
                "id": fakes.FLAVORS['1'].flavorid,
                "name": fakes.FLAVORS['1'].name,
                "ram": fakes.FLAVORS['1'].memory_mb,
                "disk": fakes.FLAVORS['1'].root_gb,
                "vcpus": fakes.FLAVORS['1'].vcpus,
                "os-flavor-access:is_public": True,
                "rxtx_factor": 1.0,
                "links": [
                    {
                        "rel": "self",
                        "href": "http://localhost/" + self._rspv +
                                "/flavors/1",
                    },
                    {
                        "rel": "bookmark",
                        "href": "http://localhost" + self._fake +
                                "/flavors/1",
                    },
                ],
            },
        }
        self._set_expected_body(expected['flavor'], fakes.FLAVORS['1'])
        self.assertEqual(flavor, expected)

    def test_get_flavor_with_custom_link_prefix(self):
        self.flags(compute_link_prefix='http://zoo.com:42',
                   glance_link_prefix='http://circus.com:34',
                   group='api')
        req = self._build_request('/flavors/1')
        flavor = self.controller.show(req, '1')
        expected = {
            "flavor": {
                "id": fakes.FLAVORS['1'].flavorid,
                "name": fakes.FLAVORS['1'].name,
                "ram": fakes.FLAVORS['1'].memory_mb,
                "disk": fakes.FLAVORS['1'].root_gb,
                "vcpus": fakes.FLAVORS['1'].vcpus,
                "os-flavor-access:is_public": True,
                "rxtx_factor": 1.0,
                "links": [
                    {
                        "rel": "self",
                        "href": "http://zoo.com:42/" + self._rspv +
                                "/flavors/1",
                    },
                    {
                        "rel": "bookmark",
                        "href": "http://zoo.com:42" + self._fake +
                                "/flavors/1",
                    },
                ],
            },
        }
        self._set_expected_body(expected['flavor'], fakes.FLAVORS['1'])
        self.assertEqual(expected, flavor)
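    # NOTE: each flavor view carries two links, a versioned "self" link
    # (/v2/fake/flavors/<id>) and an unversioned "bookmark" link
    # (/fake/flavors/<id>); the list tests below assert both forms as well
    # as the paging behaviour driven by the limit/marker parameters.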
    def test_get_flavor_list(self):
        req = self._build_request('/flavors')
        flavor = self.controller.index(req)
        expected = {
            "flavors": [
                {
                    "id": fakes.FLAVORS['1'].flavorid,
                    "name": fakes.FLAVORS['1'].name,
                    "links": [
                        {
                            "rel": "self",
                            "href": "http://localhost/" + self._rspv +
                                    "/flavors/1",
                        },
                        {
                            "rel": "bookmark",
                            "href": "http://localhost" + self._fake +
                                    "/flavors/1",
                        },
                    ],
                },
                {
                    "id": fakes.FLAVORS['2'].flavorid,
                    "name": fakes.FLAVORS['2'].name,
                    "links": [
                        {
                            "rel": "self",
                            "href": "http://localhost/" + self._rspv +
                                    "/flavors/2",
                        },
                        {
                            "rel": "bookmark",
                            "href": "http://localhost" + self._fake +
                                    "/flavors/2",
                        },
                    ],
                },
            ],
        }
        if self.expect_description:
            for idx, _flavor in enumerate(expected['flavors']):
                expected['flavors'][idx]['description'] = (
                    fakes.FLAVORS[_flavor['id']].description)
        self.assertEqual(flavor, expected)

    def test_get_flavor_list_with_marker(self):
        self.maxDiff = None
        url = '/flavors?limit=1&marker=1'
        req = self._build_request(url)
        flavor = self.controller.index(req)
        expected = {
            "flavors": [
                {
                    "id": fakes.FLAVORS['2'].flavorid,
                    "name": fakes.FLAVORS['2'].name,
                    "links": [
                        {
                            "rel": "self",
                            "href": "http://localhost/" + self._rspv +
                                    "/flavors/2",
                        },
                        {
                            "rel": "bookmark",
                            "href": "http://localhost" + self._fake +
                                    "/flavors/2",
                        },
                    ],
                },
            ],
            'flavors_links': [
                {'href': 'http://localhost/' + self._rspv +
                         '/flavors?limit=1&marker=2',
                 'rel': 'next'}
            ]
        }
        if self.expect_description:
            expected['flavors'][0]['description'] = (
                fakes.FLAVORS['2'].description)
        self.assertThat(flavor, matchers.DictMatches(expected))

    def test_get_flavor_list_with_invalid_marker(self):
        req = self._build_request('/flavors?marker=99999')
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.controller.index, req)

    def test_get_flavor_detail_with_limit(self):
        url = '/flavors/detail?limit=1'
        req = self._build_request(url)
        response = self.controller.detail(req)
        response_list = response["flavors"]
        response_links = response["flavors_links"]

        expected_flavors = [
            {
                "id": fakes.FLAVORS['1'].flavorid,
                "name": fakes.FLAVORS['1'].name,
                "ram": fakes.FLAVORS['1'].memory_mb,
                "disk": fakes.FLAVORS['1'].root_gb,
                "vcpus": fakes.FLAVORS['1'].vcpus,
                "os-flavor-access:is_public": True,
                "rxtx_factor": 1.0,
                "links": [
                    {
                        "rel": "self",
                        "href": "http://localhost/" + self._rspv +
                                "/flavors/1",
                    },
                    {
                        "rel": "bookmark",
                        "href": "http://localhost" + self._fake +
                                "/flavors/1",
                    },
                ],
            },
        ]
        self._set_expected_body(expected_flavors[0], fakes.FLAVORS['1'])

        self.assertEqual(response_list, expected_flavors)
        self.assertEqual(response_links[0]['rel'], 'next')

        href_parts = urlparse.urlparse(response_links[0]['href'])
        self.assertEqual('/' + self._rspv + '/flavors/detail',
                         href_parts.path)
        params = urlparse.parse_qs(href_parts.query)
        self.assertThat({'limit': ['1'], 'marker': ['1']},
                        matchers.DictMatches(params))
    def test_get_flavor_with_limit(self):
        req = self._build_request('/flavors?limit=2')
        response = self.controller.index(req)
        response_list = response["flavors"]
        response_links = response["flavors_links"]

        expected_flavors = [
            {
                "id": fakes.FLAVORS['1'].flavorid,
                "name": fakes.FLAVORS['1'].name,
                "links": [
                    {
                        "rel": "self",
                        "href": "http://localhost/" + self._rspv +
                                "/flavors/1",
                    },
                    {
                        "rel": "bookmark",
                        "href": "http://localhost" + self._fake +
                                "/flavors/1",
                    },
                ],
            },
            {
                "id": fakes.FLAVORS['2'].flavorid,
                "name": fakes.FLAVORS['2'].name,
                "links": [
                    {
                        "rel": "self",
                        "href": "http://localhost/" + self._rspv +
                                "/flavors/2",
                    },
                    {
                        "rel": "bookmark",
                        "href": "http://localhost" + self._fake +
                                "/flavors/2",
                    },
                ],
            }
        ]
        if self.expect_description:
            for idx, _flavor in enumerate(expected_flavors):
                expected_flavors[idx]['description'] = (
                    fakes.FLAVORS[_flavor['id']].description)

        self.assertEqual(response_list, expected_flavors)
        self.assertEqual(response_links[0]['rel'], 'next')

        href_parts = urlparse.urlparse(response_links[0]['href'])
        self.assertEqual('/' + self._rspv + '/flavors', href_parts.path)
        params = urlparse.parse_qs(href_parts.query)
        self.assertThat({'limit': ['2'], 'marker': ['2']},
                        matchers.DictMatches(params))

    def test_get_flavor_with_default_limit(self):
        self.stub_out('nova.api.openstack.common.get_limit_and_marker',
                      fake_get_limit_and_marker)
        self.flags(max_limit=1, group='api')
        req = fakes.HTTPRequest.blank('/v2/fake/flavors?limit=2')
        response = self.controller.index(req)
        response_list = response["flavors"]
        response_links = response["flavors_links"]

        expected_flavors = [
            {
                "id": fakes.FLAVORS['1'].flavorid,
                "name": fakes.FLAVORS['1'].name,
                "links": [
                    {
                        "rel": "self",
                        "href": "http://localhost/v2/fake/flavors/1",
                    },
                    {
                        "rel": "bookmark",
                        "href": "http://localhost/fake/flavors/1",
                    }
                ]
            }
        ]

        self.assertEqual(response_list, expected_flavors)
        self.assertEqual(response_links[0]['rel'], 'next')

        href_parts = urlparse.urlparse(response_links[0]['href'])
        self.assertEqual('/v2/fake/flavors', href_parts.path)
        params = urlparse.parse_qs(href_parts.query)
        self.assertThat({'limit': ['2'], 'marker': ['1']},
                        matchers.DictMatches(params))

    def test_get_flavor_list_detail(self):
        req = self._build_request('/flavors/detail')
        flavor = self.controller.detail(req)
        expected = {
            "flavors": [
                {
                    "id": fakes.FLAVORS['1'].flavorid,
                    "name": fakes.FLAVORS['1'].name,
                    "ram": fakes.FLAVORS['1'].memory_mb,
                    "disk": fakes.FLAVORS['1'].root_gb,
                    "vcpus": fakes.FLAVORS['1'].vcpus,
                    "os-flavor-access:is_public": True,
                    "rxtx_factor": 1.0,
                    "links": [
                        {
                            "rel": "self",
                            "href": "http://localhost/" + self._rspv +
                                    "/flavors/1",
                        },
                        {
                            "rel": "bookmark",
                            "href": "http://localhost" + self._fake +
                                    "/flavors/1",
                        },
                    ],
                },
                {
                    "id": fakes.FLAVORS['2'].flavorid,
                    "name": fakes.FLAVORS['2'].name,
                    "ram": fakes.FLAVORS['2'].memory_mb,
                    "disk": fakes.FLAVORS['2'].root_gb,
                    "vcpus": fakes.FLAVORS['2'].vcpus,
                    "os-flavor-access:is_public": True,
                    "rxtx_factor": '',
                    "links": [
                        {
                            "rel": "self",
                            "href": "http://localhost/" + self._rspv +
                                    "/flavors/2",
                        },
                        {
                            "rel": "bookmark",
                            "href": "http://localhost" + self._fake +
                                    "/flavors/2",
                        },
                    ],
                },
            ],
        }
        self._set_expected_body(expected['flavors'][0], fakes.FLAVORS['1'])
        self._set_expected_body(expected['flavors'][1], fakes.FLAVORS['2'])
        self.assertEqual(expected, flavor)

    @mock.patch('nova.objects.FlavorList.get_all',
                return_value=objects.FlavorList())
    def test_get_empty_flavor_list(self, mock_get):
        req = self._build_request('/flavors')
        flavors = self.controller.index(req)
        expected = {'flavors': []}
        self.assertEqual(flavors, expected)

    def test_get_flavor_list_filter_min_ram(self):
        # Flavor lists may be filtered by minRam.
        req = self._build_request('/flavors?minRam=512')
        flavor = self.controller.index(req)
        expected = {
            "flavors": [
                {
                    "id": fakes.FLAVORS['2'].flavorid,
                    "name": fakes.FLAVORS['2'].name,
                    "links": [
                        {
                            "rel": "self",
                            "href": "http://localhost/" + self._rspv +
                                    "/flavors/2",
                        },
                        {
                            "rel": "bookmark",
                            "href": "http://localhost" + self._fake +
                                    "/flavors/2",
                        },
                    ],
                },
            ],
        }
        if self.expect_description:
            expected['flavors'][0]['description'] = (
                fakes.FLAVORS['2'].description)
        self.assertEqual(flavor, expected)

    def test_get_flavor_list_filter_invalid_min_ram(self):
        # Ensure you cannot list flavors with invalid minRam param.
        req = self._build_request('/flavors?minRam=NaN')
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.controller.index, req)

    def test_get_flavor_list_filter_min_disk(self):
        # Flavor lists may be filtered by minDisk.
        req = self._build_request('/flavors?minDisk=20')
        flavor = self.controller.index(req)
        expected = {
            "flavors": [
                {
                    "id": fakes.FLAVORS['2'].flavorid,
                    "name": fakes.FLAVORS['2'].name,
                    "links": [
                        {
                            "rel": "self",
                            "href": "http://localhost/" + self._rspv +
                                    "/flavors/2",
                        },
                        {
                            "rel": "bookmark",
                            "href": "http://localhost" + self._fake +
                                    "/flavors/2",
                        },
                    ],
                },
            ],
        }
        if self.expect_description:
            expected['flavors'][0]['description'] = (
                fakes.FLAVORS['2'].description)
        self.assertEqual(flavor, expected)

    def test_get_flavor_list_filter_invalid_min_disk(self):
        # Ensure you cannot list flavors with invalid minDisk param.
        req = self._build_request('/flavors?minDisk=NaN')
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.controller.index, req)

    def test_get_flavor_list_detail_min_ram_and_min_disk(self):
        """Tests that filtering works on flavor details and that minRam
        and minDisk filters can be combined
        """
        req = self._build_request('/flavors/detail?minRam=256&minDisk=20')
        flavor = self.controller.detail(req)
        expected = {
            "flavors": [
                {
                    "id": fakes.FLAVORS['2'].flavorid,
                    "name": fakes.FLAVORS['2'].name,
                    "ram": fakes.FLAVORS['2'].memory_mb,
                    "disk": fakes.FLAVORS['2'].root_gb,
                    "vcpus": fakes.FLAVORS['2'].vcpus,
                    "os-flavor-access:is_public": True,
                    "rxtx_factor": '',
                    "links": [
                        {
                            "rel": "self",
                            "href": "http://localhost/" + self._rspv +
                                    "/flavors/2",
                        },
                        {
                            "rel": "bookmark",
                            "href": "http://localhost" + self._fake +
                                    "/flavors/2",
                        },
                    ],
                },
            ],
        }
        self._set_expected_body(expected['flavors'][0], fakes.FLAVORS['2'])
        self.assertEqual(expected, flavor)

    def _test_list_flavors_with_invalid_filter(
            self, url, expected_exception=exception.ValidationError):
        controller_list = self.controller.index
        if 'detail' in url:
            controller_list = self.controller.detail
        req = self.fake_request.blank(self._prefix + url)
        self.assertRaises(expected_exception, controller_list, req)

    def test_list_flavors_with_invalid_non_int_limit(self):
        self._test_list_flavors_with_invalid_filter('/flavors?limit=-9')

    def test_list_detail_flavors_with_invalid_non_int_limit(self):
        self._test_list_flavors_with_invalid_filter('/flavors/detail?limit=-9')

    def test_list_flavors_with_invalid_string_limit(self):
        self._test_list_flavors_with_invalid_filter('/flavors?limit=abc')

    def test_list_detail_flavors_with_invalid_string_limit(self):
        self._test_list_flavors_with_invalid_filter(
            '/flavors/detail?limit=abc')

    def test_list_duplicate_query_with_invalid_string_limit(self):
        self._test_list_flavors_with_invalid_filter(
            '/flavors?limit=1&limit=abc')

    def test_list_detail_duplicate_query_with_invalid_string_limit(self):
        self._test_list_flavors_with_invalid_filter(
            '/flavors/detail?limit=1&limit=abc')
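    # NOTE: the helper below sends each paging/filter query parameter twice
    # in the same request; the API tolerates such repeated parameters, so
    # the request is expected to succeed and return the normal filtered
    # flavor list rather than a validation error.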
    def _test_list_flavors_duplicate_query_parameters_validation(
            self, url, expected=None):
        controller_list = self.controller.index
        if 'detail' in url:
            controller_list = self.controller.detail
        expected_resp = [{
            "id": fakes.FLAVORS['2'].flavorid,
            "name": fakes.FLAVORS['2'].name,
            "links": [
                {
                    "rel": "self",
                    "href": "http://localhost/" + self._rspv +
                            "/flavors/2",
                },
                {
                    "rel": "bookmark",
                    "href": "http://localhost" + self._fake +
                            "/flavors/2",
                },
            ],
        }]
        if expected:
            expected_resp[0].update(expected)
        params = {
            'limit': 1,
            'marker': 1,
            'is_public': 't',
            'minRam': 2,
            'minDisk': 2,
            'sort_key': 'id',
            'sort_dir': 'asc'
        }

        for param, value in params.items():
            req = self.fake_request.blank(
                self._prefix + url + '?marker=1&%s=%s&%s=%s' %
                (param, value, param, value))
            result = controller_list(req)
            self.assertEqual(expected_resp, result['flavors'])

    def test_list_duplicate_query_parameters_validation(self):
        self._test_list_flavors_duplicate_query_parameters_validation(
            '/flavors')

    def test_list_detail_duplicate_query_parameters_validation(self):
        expected = {
            "ram": fakes.FLAVORS['2'].memory_mb,
            "disk": fakes.FLAVORS['2'].root_gb,
            "vcpus": fakes.FLAVORS['2'].vcpus,
            "os-flavor-access:is_public": True,
            "rxtx_factor": '',
            "OS-FLV-EXT-DATA:ephemeral": fakes.FLAVORS['2'].ephemeral_gb,
            "OS-FLV-DISABLED:disabled": fakes.FLAVORS['2'].disabled,
            "swap": fakes.FLAVORS['2'].swap
        }
        self._test_list_flavors_duplicate_query_parameters_validation(
            '/flavors/detail', expected)

    def _test_list_flavors_with_allowed_filter(
            self, url, expected=None):
        controller_list = self.controller.index
        if 'detail' in url:
            controller_list = self.controller.detail
        expected_resp = [{
            "id": fakes.FLAVORS['2'].flavorid,
            "name": fakes.FLAVORS['2'].name,
            "links": [
                {
                    "rel": "self",
                    "href": "http://localhost/" + self._rspv +
                            "/flavors/2",
                },
                {
                    "rel": "bookmark",
                    "href": "http://localhost" + self._fake +
                            "/flavors/2",
                },
            ],
        }]
        if expected:
            expected_resp[0].update(expected)
        req = self.fake_request.blank(self._prefix + url + '&limit=1&marker=1')
        result = controller_list(req)
        self.assertEqual(expected_resp, result['flavors'])

    def test_list_flavors_with_additional_filter(self):
        self._test_list_flavors_with_allowed_filter(
            '/flavors?limit=1&marker=1&additional=something')

    def test_list_detail_flavors_with_additional_filter(self):
        expected = {
            "ram": fakes.FLAVORS['2'].memory_mb,
            "disk": fakes.FLAVORS['2'].root_gb,
            "vcpus": fakes.FLAVORS['2'].vcpus,
            "os-flavor-access:is_public": True,
            "rxtx_factor": '',
            "OS-FLV-EXT-DATA:ephemeral": fakes.FLAVORS['2'].ephemeral_gb,
            "OS-FLV-DISABLED:disabled": fakes.FLAVORS['2'].disabled,
            "swap": fakes.FLAVORS['2'].swap
        }
        self._test_list_flavors_with_allowed_filter(
            '/flavors/detail?limit=1&marker=1&additional=something',
            expected)

    def test_list_flavors_with_min_ram_filter_as_negative_int(self):
        self._test_list_flavors_with_allowed_filter(
            '/flavors?minRam=-2')

    def test_list_detail_flavors_with_min_ram_filter_as_negative_int(self):
        expected = {
            "ram": fakes.FLAVORS['2'].memory_mb,
            "disk": fakes.FLAVORS['2'].root_gb,
            "vcpus": fakes.FLAVORS['2'].vcpus,
            "os-flavor-access:is_public": True,
            "rxtx_factor": '',
            "OS-FLV-EXT-DATA:ephemeral": fakes.FLAVORS['2'].ephemeral_gb,
            "OS-FLV-DISABLED:disabled": fakes.FLAVORS['2'].disabled,
            "swap": fakes.FLAVORS['2'].swap
        }
        self._test_list_flavors_with_allowed_filter(
            '/flavors/detail?minRam=-2', expected)

    def test_list_flavors_with_min_ram_filter_as_float(self):
        self._test_list_flavors_with_invalid_filter(
            '/flavors?minRam=1.2',
            expected_exception=webob.exc.HTTPBadRequest)

    def test_list_detail_flavors_with_min_ram_filter_as_float(self):
        self._test_list_flavors_with_invalid_filter(
            '/flavors/detail?minRam=1.2',
            expected_exception=webob.exc.HTTPBadRequest)

    def test_list_flavors_with_min_disk_filter_as_negative_int(self):
        self._test_list_flavors_with_allowed_filter('/flavors?minDisk=-2')
fakes.FLAVORS['2'].disabled, "swap": fakes.FLAVORS['2'].swap } self._test_list_flavors_with_allowed_filter( '/flavors/detail?minDisk=-2', expected) def test_list_flavors_with_min_disk_filter_as_float(self): self._test_list_flavors_with_invalid_filter( '/flavors?minDisk=1.2', expected_exception=webob.exc.HTTPBadRequest) def test_list_detail_flavors_with_min_disk_filter_as_float(self): self._test_list_flavors_with_invalid_filter( '/flavors/detail?minDisk=1.2', expected_exception=webob.exc.HTTPBadRequest) def test_list_flavors_with_is_public_filter_as_string_none(self): self._test_list_flavors_with_allowed_filter( '/flavors?is_public=none') def test_list_detail_flavors_with_is_public_filter_as_string_none(self): expected = { "ram": fakes.FLAVORS['2'].memory_mb, "disk": fakes.FLAVORS['2'].root_gb, "vcpus": fakes.FLAVORS['2'].vcpus, "os-flavor-access:is_public": True, "rxtx_factor": '', "OS-FLV-EXT-DATA:ephemeral": fakes.FLAVORS['2'].ephemeral_gb, "OS-FLV-DISABLED:disabled": fakes.FLAVORS['2'].disabled, "swap": fakes.FLAVORS['2'].swap } self._test_list_flavors_with_allowed_filter( '/flavors/detail?is_public=none', expected) def test_list_flavors_with_is_public_filter_as_valid_bool(self): self._test_list_flavors_with_allowed_filter( '/flavors?is_public=false') def test_list_detail_flavors_with_is_public_filter_as_valid_bool(self): expected = { "ram": fakes.FLAVORS['2'].memory_mb, "disk": fakes.FLAVORS['2'].root_gb, "vcpus": fakes.FLAVORS['2'].vcpus, "OS-FLV-EXT-DATA:ephemeral": fakes.FLAVORS['2'].ephemeral_gb, "os-flavor-access:is_public": True, "rxtx_factor": '', "OS-FLV-DISABLED:disabled": fakes.FLAVORS['2'].disabled, "swap": fakes.FLAVORS['2'].swap } self._test_list_flavors_with_allowed_filter( '/flavors/detail?is_public=false', expected) def test_list_flavors_with_is_public_filter_as_invalid_string(self): self._test_list_flavors_with_allowed_filter( '/flavors?is_public=invalid') def test_list_detail_flavors_with_is_public_filter_as_invalid_string(self): expected = { "ram": fakes.FLAVORS['2'].memory_mb, "disk": fakes.FLAVORS['2'].root_gb, "vcpus": fakes.FLAVORS['2'].vcpus, "os-flavor-access:is_public": True, "rxtx_factor": '', "OS-FLV-EXT-DATA:ephemeral": fakes.FLAVORS['2'].ephemeral_gb, "OS-FLV-DISABLED:disabled": fakes.FLAVORS['2'].disabled, "swap": fakes.FLAVORS['2'].swap } self._test_list_flavors_with_allowed_filter( '/flavors/detail?is_public=invalid', expected) class FlavorsTestV2_55(FlavorsTestV21): """Run the same tests as we would for v2.1 but with a description.""" microversion = '2.55' expect_description = True class FlavorsPolicyEnforcementV21(test.NoDBTestCase): def setUp(self): super(FlavorsPolicyEnforcementV21, self).setUp() self.flavor_controller = flavors_v21.FlavorsController() fakes.stub_out_flavor_get_by_flavor_id(self) fakes.stub_out_flavor_get_all(self) self.req = fakes.HTTPRequest.blank('') def test_show_flavor_access_policy_failed(self): rule_name = "os_compute_api:os-flavor-access" self.policy.set_rules({rule_name: "project:non_fake"}) resp = self.flavor_controller.show(self.req, '1') self.assertNotIn('os-flavor-access:is_public', resp['flavor']) def test_detail_flavor_access_policy_failed(self): rule_name = "os_compute_api:os-flavor-access" self.policy.set_rules({rule_name: "project:non_fake"}) resp = self.flavor_controller.detail(self.req) self.assertNotIn('os-flavor-access:is_public', resp['flavors'][0]) def test_show_flavor_rxtx_policy_failed(self): rule_name = "os_compute_api:os-flavor-rxtx" self.policy.set_rules({rule_name: "project:non_fake"}) resp = 
self.flavor_controller.show(self.req, '1') self.assertNotIn('rxtx_factor', resp['flavor']) def test_detail_flavor_rxtx_policy_failed(self): rule_name = "os_compute_api:os-flavor-rxtx" self.policy.set_rules({rule_name: "project:non_fake"}) resp = self.flavor_controller.detail(self.req) self.assertNotIn('rxtx_factor', resp['flavors'][0]) def test_create_flavor_extended_policy_failed(self): rules = {"os_compute_api:os-flavor-rxtx": "project:non_fake", "os_compute_api:os-flavor-access": "project:non_fake"} self.policy.set_rules(rules) resp = self.flavor_controller.detail(self.req) self.assertNotIn('rxtx_factor', resp['flavors'][0]) def test_update_flavor_extended_policy_failed(self): rules = {"os_compute_api:os-flavor-rxtx": "project:non_fake", "os_compute_api:os-flavor-access": "project:non_fake"} self.policy.set_rules(rules) resp = self.flavor_controller.detail(self.req) self.assertNotIn('rxtx_factor', resp['flavors'][0]) class DisabledFlavorsWithRealDBTestV21(test.TestCase): """Tests that disabled flavors should not be shown nor listed.""" Controller = flavors_v21.FlavorsController _prefix = "/v2" fake_request = fakes.HTTPRequestV21 def setUp(self): super(DisabledFlavorsWithRealDBTestV21, self).setUp() # Add a new disabled type to the list of flavors self.req = self.fake_request.blank(self._prefix + '/flavors') self.context = self.req.environ['nova.context'] self.admin_context = context.get_admin_context() self.disabled_type = self._create_disabled_instance_type() self.addCleanup(self.disabled_type.destroy) self.inst_types = objects.FlavorList.get_all(self.admin_context) self.controller = self.Controller() def _create_disabled_instance_type(self): flavor = objects.Flavor(context=self.admin_context, name='foo.disabled', flavorid='10.disabled', memory_mb=512, vcpus=2, root_gb=1, ephemeral_gb=0, swap=0, rxtx_factor=1.0, vcpu_weight=1, disabled=True, is_public=True, extra_specs={}, projects=[]) flavor.create() return flavor def test_index_should_not_list_disabled_flavors_to_user(self): self.context.is_admin = False flavor_list = self.controller.index(self.req)['flavors'] api_flavorids = set(f['id'] for f in flavor_list) db_flavorids = set(i['flavorid'] for i in self.inst_types) disabled_flavorid = str(self.disabled_type['flavorid']) self.assertIn(disabled_flavorid, db_flavorids) self.assertEqual(db_flavorids - set([disabled_flavorid]), api_flavorids) def test_index_should_list_disabled_flavors_to_admin(self): self.context.is_admin = True flavor_list = self.controller.index(self.req)['flavors'] api_flavorids = set(f['id'] for f in flavor_list) db_flavorids = set(i['flavorid'] for i in self.inst_types) disabled_flavorid = str(self.disabled_type['flavorid']) self.assertIn(disabled_flavorid, db_flavorids) self.assertEqual(db_flavorids, api_flavorids) def test_show_should_include_disabled_flavor_for_user(self): """Counterintuitively we should show disabled flavors to all users and not just admins. The reason is that, when a user performs a server-show request, we want to be able to display the pretty flavor name ('512 MB Instance') and not just the flavor-id even if the flavor id has been marked disabled. 
""" self.context.is_admin = False flavor = self.controller.show( self.req, self.disabled_type['flavorid'])['flavor'] self.assertEqual(flavor['name'], self.disabled_type['name']) def test_show_should_include_disabled_flavor_for_admin(self): self.context.is_admin = True flavor = self.controller.show( self.req, self.disabled_type['flavorid'])['flavor'] self.assertEqual(flavor['name'], self.disabled_type['name']) class ParseIsPublicTestV21(test.TestCase): Controller = flavors_v21.FlavorsController def setUp(self): super(ParseIsPublicTestV21, self).setUp() self.controller = self.Controller() def assertPublic(self, expected, is_public): self.assertIs(expected, self.controller._parse_is_public(is_public), '%s did not return %s' % (is_public, expected)) def test_None(self): self.assertPublic(True, None) def test_truthy(self): self.assertPublic(True, True) self.assertPublic(True, 't') self.assertPublic(True, 'true') self.assertPublic(True, 'yes') self.assertPublic(True, '1') def test_falsey(self): self.assertPublic(False, False) self.assertPublic(False, 'f') self.assertPublic(False, 'false') self.assertPublic(False, 'no') self.assertPublic(False, '0') def test_string_none(self): self.assertPublic(None, 'none') self.assertPublic(None, 'None') def test_other(self): self.assertRaises( webob.exc.HTTPBadRequest, self.assertPublic, None, 'other') nova-17.0.1/nova/tests/unit/api/openstack/compute/test_security_group_default_rules.py0000666000175000017500000003647613250073126031520 0ustar zuulzuul00000000000000# Copyright 2013 Metacloud, Inc # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock import webob from nova.api.openstack.compute import \ security_group_default_rules as security_group_default_rules_v21 from nova import context import nova.db from nova import exception from nova import test from nova.tests.unit.api.openstack import fakes class AttrDict(dict): def __getattr__(self, k): return self[k] def security_group_default_rule_template(**kwargs): rule = kwargs.copy() rule.setdefault('ip_protocol', 'TCP') rule.setdefault('from_port', 22) rule.setdefault('to_port', 22) rule.setdefault('cidr', '10.10.10.0/24') return rule def security_group_default_rule_db(security_group_default_rule, id=None): attrs = security_group_default_rule.copy() if id is not None: attrs['id'] = id return AttrDict(attrs) class TestSecurityGroupDefaultRulesNeutronV21(test.TestCase): controller_cls = (security_group_default_rules_v21. 
SecurityGroupDefaultRulesController) def setUp(self): self.flags(use_neutron=True) super(TestSecurityGroupDefaultRulesNeutronV21, self).setUp() self.controller = self.controller_cls() def test_create_security_group_default_rule_not_implemented_neutron(self): sgr = security_group_default_rule_template() req = fakes.HTTPRequest.blank( '/v2/fake/os-security-group-default-rules', use_admin_context=True) self.assertRaises(webob.exc.HTTPNotImplemented, self.controller.create, req, {'security_group_default_rule': sgr}) def test_security_group_default_rules_list_not_implemented_neutron(self): req = fakes.HTTPRequest.blank( '/v2/fake/os-security-group-default-rules', use_admin_context=True) self.assertRaises(webob.exc.HTTPNotImplemented, self.controller.index, req) def test_security_group_default_rules_show_not_implemented_neutron(self): req = fakes.HTTPRequest.blank( '/v2/fake/os-security-group-default-rules', use_admin_context=True) self.assertRaises(webob.exc.HTTPNotImplemented, self.controller.show, req, '602ed77c-a076-4f9b-a617-f93b847b62c5') def test_security_group_default_rules_delete_not_implemented_neutron(self): req = fakes.HTTPRequest.blank( '/v2/fake/os-security-group-default-rules', use_admin_context=True) self.assertRaises(webob.exc.HTTPNotImplemented, self.controller.delete, req, '602ed77c-a076-4f9b-a617-f93b847b62c5') class TestSecurityGroupDefaultRulesV21(test.TestCase): controller_cls = (security_group_default_rules_v21. SecurityGroupDefaultRulesController) def setUp(self): super(TestSecurityGroupDefaultRulesV21, self).setUp() self.flags(use_neutron=False) self.controller = self.controller_cls() self.req = fakes.HTTPRequest.blank( '/v2/fake/os-security-group-default-rules') def test_create_security_group_default_rule(self): sgr = security_group_default_rule_template() sgr_dict = dict(security_group_default_rule=sgr) res_dict = self.controller.create(self.req, sgr_dict) security_group_default_rule = res_dict['security_group_default_rule'] self.assertEqual(security_group_default_rule['ip_protocol'], sgr['ip_protocol']) self.assertEqual(security_group_default_rule['from_port'], sgr['from_port']) self.assertEqual(security_group_default_rule['to_port'], sgr['to_port']) self.assertEqual(security_group_default_rule['ip_range']['cidr'], sgr['cidr']) def test_create_security_group_default_rule_with_no_to_port(self): sgr = security_group_default_rule_template() del sgr['to_port'] self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, {'security_group_default_rule': sgr}) def test_create_security_group_default_rule_with_no_from_port(self): sgr = security_group_default_rule_template() del sgr['from_port'] self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, {'security_group_default_rule': sgr}) def test_create_security_group_default_rule_with_no_ip_protocol(self): sgr = security_group_default_rule_template() del sgr['ip_protocol'] self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, {'security_group_default_rule': sgr}) def test_create_security_group_default_rule_with_no_cidr(self): sgr = security_group_default_rule_template() del sgr['cidr'] res_dict = self.controller.create(self.req, {'security_group_default_rule': sgr}) security_group_default_rule = res_dict['security_group_default_rule'] self.assertNotEqual(security_group_default_rule['id'], 0) self.assertEqual(security_group_default_rule['ip_range']['cidr'], '0.0.0.0/0') def test_create_security_group_default_rule_with_blank_to_port(self): sgr = 
security_group_default_rule_template(to_port='') self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, {'security_group_default_rule': sgr}) def test_create_security_group_default_rule_with_blank_from_port(self): sgr = security_group_default_rule_template(from_port='') self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, {'security_group_default_rule': sgr}) def test_create_security_group_default_rule_with_blank_ip_protocol(self): sgr = security_group_default_rule_template(ip_protocol='') self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, {'security_group_default_rule': sgr}) def test_create_security_group_default_rule_with_blank_cidr(self): sgr = security_group_default_rule_template(cidr='') res_dict = self.controller.create(self.req, {'security_group_default_rule': sgr}) security_group_default_rule = res_dict['security_group_default_rule'] self.assertNotEqual(security_group_default_rule['id'], 0) self.assertEqual(security_group_default_rule['ip_range']['cidr'], '0.0.0.0/0') def test_create_security_group_default_rule_non_numerical_to_port(self): sgr = security_group_default_rule_template(to_port='invalid') self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, {'security_group_default_rule': sgr}) def test_create_security_group_default_rule_non_numerical_from_port(self): sgr = security_group_default_rule_template(from_port='invalid') self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, {'security_group_default_rule': sgr}) def test_create_security_group_default_rule_invalid_ip_protocol(self): sgr = security_group_default_rule_template(ip_protocol='invalid') self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, {'security_group_default_rule': sgr}) def test_create_security_group_default_rule_invalid_cidr(self): sgr = security_group_default_rule_template(cidr='10.10.2222.0/24') self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, {'security_group_default_rule': sgr}) def test_create_security_group_default_rule_invalid_to_port(self): sgr = security_group_default_rule_template(to_port='666666') self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, {'security_group_default_rule': sgr}) def test_create_security_group_default_rule_invalid_from_port(self): sgr = security_group_default_rule_template(from_port='666666') self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, {'security_group_default_rule': sgr}) def test_create_security_group_default_rule_with_no_body(self): self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, None) def test_create_duplicate_security_group_default_rule(self): sgr = security_group_default_rule_template() self.controller.create(self.req, {'security_group_default_rule': sgr}) self.assertRaises(webob.exc.HTTPConflict, self.controller.create, self.req, {'security_group_default_rule': sgr}) def test_security_group_default_rules_list(self): self.test_create_security_group_default_rule() rules = [dict(id=1, ip_protocol='TCP', from_port=22, to_port=22, ip_range=dict(cidr='10.10.10.0/24'))] expected = {'security_group_default_rules': rules} res_dict = self.controller.index(self.req) self.assertEqual(res_dict, expected) @mock.patch('nova.db.security_group_default_rule_list', side_effect=(exception. 
SecurityGroupDefaultRuleNotFound("Rule Not Found"))) def test_non_existing_security_group_default_rules_list(self, mock_sec_grp_rule): self.assertRaises(webob.exc.HTTPNotFound, self.controller.index, self.req) def test_default_security_group_default_rule_show(self): sgr = security_group_default_rule_template(id=1) self.test_create_security_group_default_rule() res_dict = self.controller.show(self.req, '1') security_group_default_rule = res_dict['security_group_default_rule'] self.assertEqual(security_group_default_rule['ip_protocol'], sgr['ip_protocol']) self.assertEqual(security_group_default_rule['to_port'], sgr['to_port']) self.assertEqual(security_group_default_rule['from_port'], sgr['from_port']) self.assertEqual(security_group_default_rule['ip_range']['cidr'], sgr['cidr']) @mock.patch('nova.db.security_group_default_rule_get', side_effect=(exception. SecurityGroupDefaultRuleNotFound("Rule Not Found"))) def test_non_existing_security_group_default_rule_show(self, mock_sec_grp_rule): self.assertRaises(webob.exc.HTTPNotFound, self.controller.show, self.req, '1') def test_delete_security_group_default_rule(self): sgr = security_group_default_rule_template(id=1) self.test_create_security_group_default_rule() self.called = False def security_group_default_rule_destroy(context, id): self.called = True def return_security_group_default_rule(context, id): self.assertEqual(sgr['id'], id) return security_group_default_rule_db(sgr) self.stub_out('nova.db.security_group_default_rule_destroy', security_group_default_rule_destroy) self.stub_out('nova.db.security_group_default_rule_get', return_security_group_default_rule) self.controller.delete(self.req, '1') self.assertTrue(self.called) @mock.patch('nova.db.security_group_default_rule_destroy', side_effect=(exception. SecurityGroupDefaultRuleNotFound("Rule Not Found"))) def test_non_existing_security_group_default_rule_delete( self, mock_sec_grp_rule): self.assertRaises(webob.exc.HTTPNotFound, self.controller.delete, self.req, '1') def test_security_group_ensure_default(self): sgr = security_group_default_rule_template(id=1) self.test_create_security_group_default_rule() ctxt = context.get_admin_context() setattr(ctxt, 'project_id', 'new_project_id') sg = nova.db.security_group_ensure_default(ctxt) rules = nova.db.security_group_rule_get_by_security_group(ctxt, sg.id) security_group_rule = rules[0] self.assertEqual(sgr['id'], security_group_rule.id) self.assertEqual(sgr['ip_protocol'], security_group_rule.protocol) self.assertEqual(sgr['from_port'], security_group_rule.from_port) self.assertEqual(sgr['to_port'], security_group_rule.to_port) self.assertEqual(sgr['cidr'], security_group_rule.cidr) class SecurityGroupDefaultRulesPolicyEnforcementV21(test.NoDBTestCase): def setUp(self): super(SecurityGroupDefaultRulesPolicyEnforcementV21, self).setUp() self.controller = (security_group_default_rules_v21. SecurityGroupDefaultRulesController()) self.req = fakes.HTTPRequest.blank('') def _common_policy_check(self, func, *arg, **kwarg): rule_name = "os_compute_api:os-security-group-default-rules" rule = {rule_name: "project:non_fake"} self.policy.set_rules(rule) exc = self.assertRaises( exception.PolicyNotAuthorized, func, *arg, **kwarg) self.assertEqual( "Policy doesn't allow %s to be performed." 
% rule_name, exc.format_message()) def test_create_policy_failed(self): self._common_policy_check(self.controller.create, self.req, {}) def test_show_policy_failed(self): self._common_policy_check( self.controller.show, self.req, fakes.FAKE_UUID) def test_delete_policy_failed(self): self._common_policy_check( self.controller.delete, self.req, fakes.FAKE_UUID) def test_index_policy_failed(self): self._common_policy_check(self.controller.index, self.req) class TestSecurityGroupDefaultRulesDeprecation(test.NoDBTestCase): def setUp(self): super(TestSecurityGroupDefaultRulesDeprecation, self).setUp() self.req = fakes.HTTPRequest.blank('', version='2.36') self.controller = (security_group_default_rules_v21. SecurityGroupDefaultRulesController()) def test_all_apis_return_not_found(self): self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.create, self.req, {}) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.show, self.req, fakes.FAKE_UUID) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.delete, self.req, fakes.FAKE_UUID) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.index, self.req) nova-17.0.1/nova/tests/unit/api/openstack/compute/test_console_output.py0000666000175000017500000001523213250073126026564 0ustar zuulzuul00000000000000# Copyright 2011 Eldar Nugaev # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
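# A minimal standalone sketch of the tail-length contract exercised by the
# fakes and tests in this module (the helper name is illustrative and not
# part of nova): a tail_length of None returns the whole console log, 0
# returns an empty log, and any other value keeps only the last
# int(tail_length) lines.
def _tail_console_log(lines, tail_length=None):
    if tail_length is None:
        pass                               # no limit: keep everything
    elif tail_length == 0:
        lines = []                         # explicit request for no output
    else:
        lines = lines[-int(tail_length):]  # keep only the tail
    return '\n'.join(lines)


assert _tail_console_log([str(i) for i in range(5)]) == '0\n1\n2\n3\n4'
assert _tail_console_log([str(i) for i in range(5)], tail_length='3') == '2\n3\n4'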
import string import mock import webob from nova.api.openstack.compute import console_output \ as console_output_v21 from nova.compute import api as compute_api from nova import exception from nova import test from nova.tests.unit.api.openstack import fakes from nova.tests.unit import fake_instance def fake_get_console_output(self, _context, _instance, tail_length): fixture = [str(i) for i in range(5)] if tail_length is None: pass elif tail_length == 0: fixture = [] else: fixture = fixture[-int(tail_length):] return '\n'.join(fixture) def fake_get_console_output_not_ready(self, _context, _instance, tail_length): raise exception.InstanceNotReady(instance_id=_instance["uuid"]) def fake_get_console_output_all_characters(self, _ctx, _instance, _tail_len): return string.printable def fake_get(self, context, instance_uuid, expected_attrs=None): return fake_instance.fake_instance_obj(context, **{'uuid': instance_uuid}) def fake_get_not_found(*args, **kwargs): raise exception.InstanceNotFound(instance_id='fake') class ConsoleOutputExtensionTestV21(test.NoDBTestCase): controller_class = console_output_v21 validation_error = exception.ValidationError def setUp(self): super(ConsoleOutputExtensionTestV21, self).setUp() self.stub_out('nova.compute.api.API.get_console_output', fake_get_console_output) self.stub_out('nova.compute.api.API.get', fake_get) self.controller = self.controller_class.ConsoleOutputController() self.req = fakes.HTTPRequest.blank('') def _get_console_output(self, length_dict=None): length_dict = length_dict or {} body = {'os-getConsoleOutput': length_dict} return self.controller.get_console_output(self.req, fakes.FAKE_UUID, body=body) def _check_console_output_failure(self, exception, body): self.assertRaises(exception, self.controller.get_console_output, self.req, fakes.FAKE_UUID, body=body) def test_get_text_console_instance_action(self): output = self._get_console_output() self.assertEqual({'output': '0\n1\n2\n3\n4'}, output) def test_get_console_output_with_tail(self): output = self._get_console_output(length_dict={'length': 3}) self.assertEqual({'output': '2\n3\n4'}, output) def test_get_console_output_with_none_length(self): output = self._get_console_output(length_dict={'length': None}) self.assertEqual({'output': '0\n1\n2\n3\n4'}, output) def test_get_console_output_with_length_as_str(self): output = self._get_console_output(length_dict={'length': '3'}) self.assertEqual({'output': '2\n3\n4'}, output) def test_get_console_output_filtered_characters(self): self.stub_out('nova.compute.api.API.get_console_output', fake_get_console_output_all_characters) output = self._get_console_output() expect = (string.digits + string.ascii_letters + string.punctuation + ' \t\n') self.assertEqual({'output': expect}, output) def test_get_text_console_no_instance(self): self.stub_out('nova.compute.api.API.get', fake_get_not_found) body = {'os-getConsoleOutput': {}} self._check_console_output_failure(webob.exc.HTTPNotFound, body) def test_get_text_console_no_instance_on_get_output(self): self.stub_out('nova.compute.api.API.get_console_output', fake_get_not_found) body = {'os-getConsoleOutput': {}} self._check_console_output_failure(webob.exc.HTTPNotFound, body) def test_get_console_output_with_non_integer_length(self): body = {'os-getConsoleOutput': {'length': 'NaN'}} self._check_console_output_failure(self.validation_error, body) def test_get_text_console_bad_body(self): body = {} self._check_console_output_failure(self.validation_error, body) def 
test_get_console_output_with_length_as_float(self): body = {'os-getConsoleOutput': {'length': 2.5}} self._check_console_output_failure(self.validation_error, body) def test_get_console_output_not_ready(self): self.stub_out('nova.compute.api.API.get_console_output', fake_get_console_output_not_ready) body = {'os-getConsoleOutput': {}} self._check_console_output_failure(webob.exc.HTTPConflict, body) def test_not_implemented(self): self.stub_out('nova.compute.api.API.get_console_output', fakes.fake_not_implemented) body = {'os-getConsoleOutput': {}} self._check_console_output_failure(webob.exc.HTTPNotImplemented, body) def test_get_console_output_with_boolean_length(self): body = {'os-getConsoleOutput': {'length': True}} self._check_console_output_failure(self.validation_error, body) @mock.patch.object(compute_api.API, 'get_console_output', side_effect=exception.ConsoleNotAvailable( instance_uuid='fake_uuid')) def test_get_console_output_not_available(self, mock_get_console_output): body = {'os-getConsoleOutput': {}} self._check_console_output_failure(webob.exc.HTTPNotFound, body) class ConsoleOutputPolicyEnforcementV21(test.NoDBTestCase): def setUp(self): super(ConsoleOutputPolicyEnforcementV21, self).setUp() self.controller = console_output_v21.ConsoleOutputController() def test_get_console_output_policy_failed(self): rule_name = "os_compute_api:os-console-output" self.policy.set_rules({rule_name: "project:non_fake"}) req = fakes.HTTPRequest.blank('') body = {'os-getConsoleOutput': {}} exc = self.assertRaises( exception.PolicyNotAuthorized, self.controller.get_console_output, req, fakes.FAKE_UUID, body=body) self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) nova-17.0.1/nova/tests/unit/api/openstack/compute/test_extended_ips.py0000666000175000017500000001111713250073126026153 0ustar zuulzuul00000000000000# Copyright 2013 Nebula, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
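# The tests in this module walk every address in a nova network-info cache
# (networks -> subnets -> fixed IPs -> floating IPs) and compare the result
# against the 'addresses' section of the API response. A minimal standalone
# sketch of that flattening (this one-VIF cache and the _sketch-prefixed
# names are illustrative; the real fixture is the NW_CACHE structure defined
# below):
_sketch_cache = [{
    'network': {
        'subnets': [{
            'ips': [{
                'address': '192.168.1.100',
                'type': 'fixed',
                'floating_ips': [{'address': '5.0.0.1', 'type': 'floating'}],
            }],
        }],
    },
}]
_sketch_ips = []
for _vif in _sketch_cache:
    for _subnet in _vif['network']['subnets']:
        for _fixed in _subnet['ips']:
            # Record the fixed IP without its nested floating list, then
            # append each floating IP as an entry of its own.
            _sketch_ips.append({'address': _fixed['address'],
                                'type': _fixed['type']})
            _sketch_ips.extend(_fixed['floating_ips'])
assert [_ip['address'] for _ip in _sketch_ips] == ['192.168.1.100', '5.0.0.1']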
from oslo_serialization import jsonutils import six from nova import objects from nova import test from nova.tests.unit.api.openstack import fakes UUID1 = '00000000-0000-0000-0000-000000000001' UUID2 = '00000000-0000-0000-0000-000000000002' UUID3 = '00000000-0000-0000-0000-000000000003' NW_CACHE = [ { 'address': 'aa:aa:aa:aa:aa:aa', 'id': 1, 'network': { 'bridge': 'br0', 'id': 1, 'label': 'private', 'subnets': [ { 'cidr': '192.168.1.0/24', 'ips': [ { 'address': '192.168.1.100', 'type': 'fixed', 'floating_ips': [ {'address': '5.0.0.1', 'type': 'floating'}, ], }, ], }, ] } }, { 'address': 'bb:bb:bb:bb:bb:bb', 'id': 2, 'network': { 'bridge': 'br1', 'id': 2, 'label': 'public', 'subnets': [ { 'cidr': '10.0.0.0/24', 'ips': [ { 'address': '10.0.0.100', 'type': 'fixed', 'floating_ips': [ {'address': '5.0.0.2', 'type': 'floating'}, ], } ], }, ] } } ] ALL_IPS = [] for cache in NW_CACHE: for subnet in cache['network']['subnets']: for fixed in subnet['ips']: sanitized = dict(fixed) sanitized.pop('floating_ips') ALL_IPS.append(sanitized) for floating in fixed['floating_ips']: ALL_IPS.append(floating) ALL_IPS.sort(key=lambda x: str(x)) def fake_compute_get(*args, **kwargs): inst = fakes.stub_instance_obj(None, 1, uuid=UUID3, nw_cache=NW_CACHE) return inst def fake_compute_get_all(*args, **kwargs): inst_list = [ fakes.stub_instance_obj(None, 1, uuid=UUID1, nw_cache=NW_CACHE), fakes.stub_instance_obj(None, 2, uuid=UUID2, nw_cache=NW_CACHE), ] return objects.InstanceList(objects=inst_list) class ExtendedIpsTestV21(test.TestCase): content_type = 'application/json' prefix = 'OS-EXT-IPS:' def setUp(self): super(ExtendedIpsTestV21, self).setUp() fakes.stub_out_nw_api(self) fakes.stub_out_secgroup_api(self) self.stub_out('nova.compute.api.API.get', fake_compute_get) self.stub_out('nova.compute.api.API.get_all', fake_compute_get_all) def _make_request(self, url): req = fakes.HTTPRequest.blank(url) req.headers['Accept'] = self.content_type res = req.get_response(fakes.wsgi_app_v21()) return res def _get_server(self, body): return jsonutils.loads(body).get('server') def _get_servers(self, body): return jsonutils.loads(body).get('servers') def _get_ips(self, server): for network in six.itervalues(server['addresses']): for ip in network: yield ip def assertServerStates(self, server): results = [] for ip in self._get_ips(server): results.append({'address': ip.get('addr'), 'type': ip.get('%stype' % self.prefix)}) self.assertJsonEqual(ALL_IPS, results) def test_show(self): url = '/v2/fake/servers/%s' % UUID3 res = self._make_request(url) self.assertEqual(res.status_int, 200) self.assertServerStates(self._get_server(res.body)) def test_detail(self): url = '/v2/fake/servers/detail' res = self._make_request(url) self.assertEqual(res.status_int, 200) for i, server in enumerate(self._get_servers(res.body)): self.assertServerStates(server) nova-17.0.1/nova/tests/unit/api/openstack/compute/test_admin_password.py0000666000175000017500000002410513250073126026513 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # Copyright 2013 IBM Corp. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. import mock import webob from nova.api.openstack.compute import admin_password as admin_password_v21 from nova import exception from nova import test from nova.tests.unit.api.openstack import fakes from nova.tests.unit import fake_instance def fake_get(self, context, id, expected_attrs=None): return fake_instance.fake_instance_obj( context, uuid=id, project_id=context.project_id, user_id=context.user_id, expected_attrs=expected_attrs) def fake_set_admin_password(self, context, instance, password=None): pass class AdminPasswordTestV21(test.NoDBTestCase): validation_error = exception.ValidationError def setUp(self): super(AdminPasswordTestV21, self).setUp() self.stub_out('nova.compute.api.API.set_admin_password', fake_set_admin_password) self.stub_out('nova.compute.api.API.get', fake_get) self.fake_req = fakes.HTTPRequest.blank('') def _get_action(self): return admin_password_v21.AdminPasswordController().change_password def _check_status(self, expected_status, res, controller_method): self.assertEqual(expected_status, controller_method.wsgi_code) def test_change_password(self): body = {'changePassword': {'adminPass': 'test'}} res = self._get_action()(self.fake_req, fakes.FAKE_UUID, body=body) self._check_status(202, res, self._get_action()) def test_change_password_empty_string(self): body = {'changePassword': {'adminPass': ''}} res = self._get_action()(self.fake_req, fakes.FAKE_UUID, body=body) self._check_status(202, res, self._get_action()) @mock.patch('nova.compute.api.API.set_admin_password', side_effect=NotImplementedError()) def test_change_password_with_non_implement(self, mock_set_admin_password): body = {'changePassword': {'adminPass': 'test'}} self.assertRaises(webob.exc.HTTPNotImplemented, self._get_action(), self.fake_req, fakes.FAKE_UUID, body=body) @mock.patch('nova.compute.api.API.get', side_effect=exception.InstanceNotFound( instance_id=fakes.FAKE_UUID)) def test_change_password_with_non_existed_instance(self, mock_get): body = {'changePassword': {'adminPass': 'test'}} self.assertRaises(webob.exc.HTTPNotFound, self._get_action(), self.fake_req, fakes.FAKE_UUID, body=body) def test_change_password_with_non_string_password(self): body = {'changePassword': {'adminPass': 1234}} self.assertRaises(self.validation_error, self._get_action(), self.fake_req, fakes.FAKE_UUID, body=body) @mock.patch('nova.compute.api.API.set_admin_password', side_effect=exception.InstancePasswordSetFailed(instance="1", reason='')) def test_change_password_failed(self, mock_set_admin_password): body = {'changePassword': {'adminPass': 'test'}} self.assertRaises(webob.exc.HTTPConflict, self._get_action(), self.fake_req, fakes.FAKE_UUID, body=body) @mock.patch('nova.compute.api.API.set_admin_password', side_effect=exception.SetAdminPasswdNotSupported(instance="1", reason='')) def test_change_password_not_supported(self, mock_set_admin_password): body = {'changePassword': {'adminPass': 'test'}} self.assertRaises(webob.exc.HTTPConflict, self._get_action(), self.fake_req, fakes.FAKE_UUID, body=body) @mock.patch('nova.compute.api.API.set_admin_password', side_effect=exception.InstanceAgentNotEnabled(instance="1", reason='')) def test_change_password_guest_agent_disabled(self, mock_set_admin_password): body = {'changePassword': {'adminPass': 'test'}} self.assertRaises(webob.exc.HTTPConflict, self._get_action(), self.fake_req, fakes.FAKE_UUID, body=body) def test_change_password_without_admin_password(self): 
body = {'changePassword': {}} self.assertRaises(self.validation_error, self._get_action(), self.fake_req, fakes.FAKE_UUID, body=body) def test_change_password_none(self): body = {'changePassword': {'adminPass': None}} self.assertRaises(self.validation_error, self._get_action(), self.fake_req, fakes.FAKE_UUID, body=body) def test_change_password_adminpass_none(self): body = {'changePassword': None} self.assertRaises(self.validation_error, self._get_action(), self.fake_req, fakes.FAKE_UUID, body=body) def test_change_password_bad_request(self): body = {'changePassword': {'pass': '12345'}} self.assertRaises(self.validation_error, self._get_action(), self.fake_req, fakes.FAKE_UUID, body=body) def test_server_change_password_pass_disabled(self): # run with enable_instance_password disabled to verify adminPass # is missing from response. See lp bug 921814 self.flags(enable_instance_password=False, group='api') body = {'changePassword': {'adminPass': '1234pass'}} res = self._get_action()(self.fake_req, fakes.FAKE_UUID, body=body) self._check_status(202, res, self._get_action()) @mock.patch('nova.compute.api.API.set_admin_password', side_effect=exception.InstanceInvalidState( instance_uuid='fake', attr='vm_state', state='stopped', method='set_admin_password')) def test_change_password_invalid_state(self, mock_set_admin_password): body = {'changePassword': {'adminPass': 'test'}} self.assertRaises(webob.exc.HTTPConflict, self._get_action(), self.fake_req, fakes.FAKE_UUID, body=body) class AdminPasswordPolicyEnforcementV21(test.NoDBTestCase): def setUp(self): super(AdminPasswordPolicyEnforcementV21, self).setUp() self.controller = admin_password_v21.AdminPasswordController() self.req = fakes.HTTPRequest.blank('') req_context = self.req.environ['nova.context'] def fake_get_instance(self, context, id): return fake_instance.fake_instance_obj( req_context, uuid=id, project_id=req_context.project_id, user_id=req_context.user_id) self.stub_out( 'nova.api.openstack.common.get_instance', fake_get_instance) def _common_policy_check(self, rules, rule_name, func, *arg, **kwarg): self.policy.set_rules(rules) exc = self.assertRaises( exception.PolicyNotAuthorized, func, *arg, **kwarg) self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) def test_change_password_policy_failed_with_other_project(self): rule_name = "os_compute_api:os-admin-password" rule = {rule_name: "project_id:%(project_id)s"} body = {'changePassword': {'adminPass': '1234pass'}} # Change the project_id in request context. req = fakes.HTTPRequest.blank('') req.environ['nova.context'].project_id = 'other-project' self._common_policy_check( rule, rule_name, self.controller.change_password, req, fakes.FAKE_UUID, body=body) @mock.patch('nova.compute.api.API.set_admin_password') def test_change_password_overridden_policy_pass_with_same_project( self, password_mock): rule_name = "os_compute_api:os-admin-password" self.policy.set_rules({rule_name: "user_id:%(user_id)s"}) body = {'changePassword': {'adminPass': '1234pass'}} self.controller.change_password(self.req, fakes.FAKE_UUID, body=body) password_mock.assert_called_once_with(self.req.environ['nova.context'], mock.ANY, '1234pass') def test_change_password_overridden_policy_failed_with_other_user(self): rule_name = "os_compute_api:os-admin-password" rule = {rule_name: "user_id:%(user_id)s"} # Change the user_id in request context.
req = fakes.HTTPRequest.blank('') req.environ['nova.context'].user_id = 'other-user' body = {'changePassword': {'adminPass': '1234pass'}} self._common_policy_check( rule, rule_name, self.controller.change_password, req, fakes.FAKE_UUID, body=body) @mock.patch('nova.compute.api.API.set_admin_password') def test_change_password_overridden_policy_pass_with_same_user( self, password_mock): rule_name = "os_compute_api:os-admin-password" self.policy.set_rules({rule_name: "user_id:%(user_id)s"}) body = {'changePassword': {'adminPass': '1234pass'}} self.controller.change_password(self.req, fakes.FAKE_UUID, body=body) password_mock.assert_called_once_with(self.req.environ['nova.context'], mock.ANY, '1234pass') nova-17.0.1/nova/tests/unit/api/openstack/compute/test_volumes.py0000666000175000017500000015341413250073126025201 0ustar zuulzuul00000000000000# Copyright 2013 Josh Durgin # Copyright 2013 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import datetime import mock from oslo_serialization import jsonutils from oslo_utils import encodeutils import six from six.moves import urllib import webob from webob import exc from nova.api.openstack import common from nova.api.openstack.compute import assisted_volume_snapshots \ as assisted_snaps_v21 from nova.api.openstack.compute import volumes as volumes_v21 from nova.compute import api as compute_api from nova.compute import flavors from nova.compute import task_states from nova.compute import vm_states import nova.conf from nova import context from nova import exception from nova import objects from nova import test from nova.tests.unit.api.openstack import fakes from nova.tests.unit import fake_block_device from nova.tests.unit import fake_instance from nova.volume import cinder CONF = nova.conf.CONF # This is the server ID. FAKE_UUID = 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa' # This is the old volume ID (to swap from). FAKE_UUID_A = '00000000-aaaa-aaaa-aaaa-000000000000' # This is the new volume ID (to swap to). FAKE_UUID_B = 'bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb' # This is a volume that is not found. 
FAKE_UUID_C = 'cccccccc-cccc-cccc-cccc-cccccccccccc' IMAGE_UUID = 'c905cedb-7281-47e4-8a62-f26bc5fc4c77' def fake_get_instance(self, context, instance_id, expected_attrs=None): return fake_instance.fake_instance_obj(context, **{'uuid': instance_id}) def fake_get_volume(self, context, id): if id == FAKE_UUID_A: status = 'in-use' attach_status = 'attached' elif id == FAKE_UUID_B: status = 'available' attach_status = 'detached' else: raise exception.VolumeNotFound(volume_id=id) return {'id': id, 'status': status, 'attach_status': attach_status} def fake_attach_volume(self, context, instance, volume_id, device, tag=None, supports_multiattach=False): pass def fake_detach_volume(self, context, instance, volume): pass def fake_swap_volume(self, context, instance, old_volume, new_volume): if old_volume['id'] != FAKE_UUID_A: raise exception.VolumeBDMNotFound(volume_id=old_volume['id']) def fake_create_snapshot(self, context, volume, name, description): return {'id': 123, 'volume_id': 'fakeVolId', 'status': 'available', 'volume_size': 123, 'created_at': '2013-01-01 00:00:01', 'display_name': 'myVolumeName', 'display_description': 'myVolumeDescription'} def fake_delete_snapshot(self, context, snapshot_id): pass def fake_compute_volume_snapshot_delete(self, context, volume_id, snapshot_id, delete_info): pass def fake_compute_volume_snapshot_create(self, context, volume_id, create_info): pass @classmethod def fake_bdm_get_by_volume_and_instance(cls, ctxt, volume_id, instance_uuid): if volume_id != FAKE_UUID_A: raise exception.VolumeBDMNotFound(volume_id=volume_id) db_bdm = fake_block_device.FakeDbBlockDeviceDict( {'id': 1, 'instance_uuid': instance_uuid, 'device_name': '/dev/fake0', 'delete_on_termination': 'False', 'source_type': 'volume', 'destination_type': 'volume', 'snapshot_id': None, 'volume_id': FAKE_UUID_A, 'volume_size': 1}) return objects.BlockDeviceMapping._from_db_object( ctxt, objects.BlockDeviceMapping(), db_bdm) class BootFromVolumeTest(test.TestCase): def setUp(self): super(BootFromVolumeTest, self).setUp() self.stubs.Set(compute_api.API, 'create', self._get_fake_compute_api_create()) fakes.stub_out_nw_api(self) self._block_device_mapping_seen = None self._legacy_bdm_seen = True def _get_fake_compute_api_create(self): def _fake_compute_api_create(cls, context, instance_type, image_href, **kwargs): self._block_device_mapping_seen = kwargs.get( 'block_device_mapping') self._legacy_bdm_seen = kwargs.get('legacy_bdm') inst_type = flavors.get_flavor_by_flavor_id(2) resv_id = None return ([{'id': 1, 'display_name': 'test_server', 'uuid': FAKE_UUID, 'instance_type': inst_type, 'access_ip_v4': '1.2.3.4', 'access_ip_v6': 'fead::1234', 'image_ref': IMAGE_UUID, 'user_id': 'fake', 'project_id': 'fake', 'created_at': datetime.datetime(2010, 10, 10, 12, 0, 0), 'updated_at': datetime.datetime(2010, 11, 11, 11, 0, 0), 'progress': 0, 'fixed_ips': [] }], resv_id) return _fake_compute_api_create def test_create_root_volume(self): body = dict(server=dict( name='test_server', imageRef=IMAGE_UUID, flavorRef=2, min_count=1, max_count=1, block_device_mapping=[dict( volume_id='ca9fe3f5-cede-43cb-8050-1672acabe348', device_name='/dev/vda', delete_on_termination=False, )] )) req = fakes.HTTPRequest.blank('/v2/fake/os-volumes_boot') req.method = 'POST' req.body = jsonutils.dump_as_bytes(body) req.headers['content-type'] = 'application/json' res = req.get_response(fakes.wsgi_app_v21()) self.assertEqual(202, res.status_int) server = jsonutils.loads(res.body)['server'] self.assertEqual(FAKE_UUID, server['id']) 
self.assertEqual(CONF.password_length, len(server['adminPass'])) self.assertEqual(1, len(self._block_device_mapping_seen)) self.assertTrue(self._legacy_bdm_seen) self.assertEqual('ca9fe3f5-cede-43cb-8050-1672acabe348', self._block_device_mapping_seen[0]['volume_id']) self.assertEqual('/dev/vda', self._block_device_mapping_seen[0]['device_name']) def test_create_root_volume_bdm_v2(self): body = dict(server=dict( name='test_server', imageRef=IMAGE_UUID, flavorRef=2, min_count=1, max_count=1, block_device_mapping_v2=[dict( source_type='volume', uuid='1', device_name='/dev/vda', boot_index=0, delete_on_termination=False, )] )) req = fakes.HTTPRequest.blank('/v2/fake/os-volumes_boot') req.method = 'POST' req.body = jsonutils.dump_as_bytes(body) req.headers['content-type'] = 'application/json' res = req.get_response(fakes.wsgi_app_v21()) self.assertEqual(202, res.status_int) server = jsonutils.loads(res.body)['server'] self.assertEqual(FAKE_UUID, server['id']) self.assertEqual(CONF.password_length, len(server['adminPass'])) self.assertEqual(1, len(self._block_device_mapping_seen)) self.assertFalse(self._legacy_bdm_seen) self.assertEqual('1', self._block_device_mapping_seen[0]['volume_id']) self.assertEqual(0, self._block_device_mapping_seen[0]['boot_index']) self.assertEqual('/dev/vda', self._block_device_mapping_seen[0]['device_name']) class VolumeApiTestV21(test.NoDBTestCase): url_prefix = '/v2/fake' def setUp(self): super(VolumeApiTestV21, self).setUp() fakes.stub_out_networking(self) self.stubs.Set(cinder.API, "delete", fakes.stub_volume_delete) self.stubs.Set(cinder.API, "get", fakes.stub_volume_get) self.stubs.Set(cinder.API, "get_all", fakes.stub_volume_get_all) self.context = context.get_admin_context() @property def app(self): return fakes.wsgi_app_v21() def test_volume_create(self): self.stubs.Set(cinder.API, "create", fakes.stub_volume_create) vol = {"size": 100, "display_name": "Volume Test Name", "display_description": "Volume Test Desc", "availability_zone": "zone1:host1"} body = {"volume": vol} req = fakes.HTTPRequest.blank(self.url_prefix + '/os-volumes') req.method = 'POST' req.body = jsonutils.dump_as_bytes(body) req.headers['content-type'] = 'application/json' resp = req.get_response(self.app) self.assertEqual(200, resp.status_int) resp_dict = jsonutils.loads(resp.body) self.assertIn('volume', resp_dict) self.assertEqual(vol['size'], resp_dict['volume']['size']) self.assertEqual(vol['display_name'], resp_dict['volume']['displayName']) self.assertEqual(vol['display_description'], resp_dict['volume']['displayDescription']) self.assertEqual(vol['availability_zone'], resp_dict['volume']['availabilityZone']) def _test_volume_translate_exception(self, cinder_exc, api_exc): """Tests that cinder exceptions are correctly translated""" def fake_volume_create(self, context, size, name, description, snapshot, **param): raise cinder_exc self.stubs.Set(cinder.API, "create", fake_volume_create) vol = {"size": '10', "display_name": "Volume Test Name", "display_description": "Volume Test Desc", "availability_zone": "zone1:host1"} body = {"volume": vol} req = fakes.HTTPRequest.blank(self.url_prefix + '/os-volumes') self.assertRaises(api_exc, volumes_v21.VolumeController().create, req, body=body) @mock.patch.object(cinder.API, 'get_snapshot') @mock.patch.object(cinder.API, 'create') def test_volume_create_bad_snapshot_id(self, mock_create, mock_get): vol = {"snapshot_id": '1', "size": 10} body = {"volume": vol} mock_get.side_effect = exception.SnapshotNotFound(snapshot_id='1') req = 
fakes.HTTPRequest.blank(self.url_prefix + '/os-volumes') self.assertRaises(webob.exc.HTTPNotFound, volumes_v21.VolumeController().create, req, body=body) def test_volume_create_bad_input(self): self._test_volume_translate_exception( exception.InvalidInput(reason='fake'), webob.exc.HTTPBadRequest) def test_volume_create_bad_quota(self): self._test_volume_translate_exception( exception.OverQuota(overs='fake'), webob.exc.HTTPForbidden) def test_volume_index(self): req = fakes.HTTPRequest.blank(self.url_prefix + '/os-volumes') resp = req.get_response(self.app) self.assertEqual(200, resp.status_int) def test_volume_detail(self): req = fakes.HTTPRequest.blank(self.url_prefix + '/os-volumes/detail') resp = req.get_response(self.app) self.assertEqual(200, resp.status_int) def test_volume_show(self): req = fakes.HTTPRequest.blank(self.url_prefix + '/os-volumes/123') resp = req.get_response(self.app) self.assertEqual(200, resp.status_int) def test_volume_show_no_volume(self): self.stubs.Set(cinder.API, "get", fakes.stub_volume_notfound) req = fakes.HTTPRequest.blank(self.url_prefix + '/os-volumes/456') resp = req.get_response(self.app) self.assertEqual(404, resp.status_int) self.assertIn('Volume 456 could not be found.', encodeutils.safe_decode(resp.body)) def test_volume_delete(self): req = fakes.HTTPRequest.blank(self.url_prefix + '/os-volumes/123') req.method = 'DELETE' resp = req.get_response(self.app) self.assertEqual(202, resp.status_int) def test_volume_delete_no_volume(self): self.stubs.Set(cinder.API, "delete", fakes.stub_volume_notfound) req = fakes.HTTPRequest.blank(self.url_prefix + '/os-volumes/456') req.method = 'DELETE' resp = req.get_response(self.app) self.assertEqual(404, resp.status_int) self.assertIn('Volume 456 could not be found.', encodeutils.safe_decode(resp.body)) def _test_list_with_invalid_filter(self, url): prefix = '/os-volumes' req = fakes.HTTPRequest.blank(prefix + url) self.assertRaises(exception.ValidationError, volumes_v21.VolumeController().index, req) def test_list_with_invalid_non_int_limit(self): self._test_list_with_invalid_filter('?limit=-9') def test_list_with_invalid_string_limit(self): self._test_list_with_invalid_filter('?limit=abc') def test_list_duplicate_query_with_invalid_string_limit(self): self._test_list_with_invalid_filter( '?limit=1&limit=abc') def test_detail_list_with_invalid_non_int_limit(self): self._test_list_with_invalid_filter('/detail?limit=-9') def test_detail_list_with_invalid_string_limit(self): self._test_list_with_invalid_filter('/detail?limit=abc') def test_detail_list_duplicate_query_with_invalid_string_limit(self): self._test_list_with_invalid_filter( '/detail?limit=1&limit=abc') def test_list_with_invalid_non_int_offset(self): self._test_list_with_invalid_filter('?offset=-9') def test_list_with_invalid_string_offset(self): self._test_list_with_invalid_filter('?offset=abc') def test_list_duplicate_query_with_invalid_string_offset(self): self._test_list_with_invalid_filter( '?offset=1&offset=abc') def test_detail_list_with_invalid_non_int_offset(self): self._test_list_with_invalid_filter('/detail?offset=-9') def test_detail_list_with_invalid_string_offset(self): self._test_list_with_invalid_filter('/detail?offset=abc') def test_detail_list_duplicate_query_with_invalid_string_offset(self): self._test_list_with_invalid_filter( '/detail?offset=1&offset=abc') def _test_list_duplicate_query_parameters_validation(self, url): params = { 'limit': 1, 'offset': 1 } for param, value in params.items(): req = fakes.HTTPRequest.blank( 
self.url_prefix + url + '?%s=%s&%s=%s' % (param, value, param, value)) resp = req.get_response(self.app) self.assertEqual(200, resp.status_int) def test_list_duplicate_query_parameters_validation(self): self._test_list_duplicate_query_parameters_validation('/os-volumes') def test_detail_list_duplicate_query_parameters_validation(self): self._test_list_duplicate_query_parameters_validation( '/os-volumes/detail') def test_list_with_additional_filter(self): req = fakes.HTTPRequest.blank(self.url_prefix + '/os-volumes?limit=1&offset=1&additional=something') resp = req.get_response(self.app) self.assertEqual(200, resp.status_int) def test_detail_list_with_additional_filter(self): req = fakes.HTTPRequest.blank(self.url_prefix + '/os-volumes/detail?limit=1&offset=1&additional=something') resp = req.get_response(self.app) self.assertEqual(200, resp.status_int) class VolumeAttachTestsV21(test.NoDBTestCase): validation_error = exception.ValidationError def setUp(self): super(VolumeAttachTestsV21, self).setUp() self.stub_out('nova.objects.BlockDeviceMapping' '.get_by_volume_and_instance', fake_bdm_get_by_volume_and_instance) self.stubs.Set(compute_api.API, 'get', fake_get_instance) self.stubs.Set(cinder.API, 'get', fake_get_volume) self.context = context.get_admin_context() self.expected_show = {'volumeAttachment': {'device': '/dev/fake0', 'serverId': FAKE_UUID, 'id': FAKE_UUID_A, 'volumeId': FAKE_UUID_A }} self.attachments = volumes_v21.VolumeAttachmentController() self.req = fakes.HTTPRequest.blank( '/v2/servers/id/os-volume_attachments/uuid') self.req.body = jsonutils.dump_as_bytes({}) self.req.headers['content-type'] = 'application/json' self.req.environ['nova.context'] = self.context def test_show(self): result = self.attachments.show(self.req, FAKE_UUID, FAKE_UUID_A) self.assertEqual(self.expected_show, result) @mock.patch.object(compute_api.API, 'get', side_effect=exception.InstanceNotFound(instance_id=FAKE_UUID)) def test_show_no_instance(self, mock_mr): self.assertRaises(exc.HTTPNotFound, self.attachments.show, self.req, FAKE_UUID, FAKE_UUID_A) @mock.patch.object(objects.BlockDeviceMapping, 'get_by_volume_and_instance', side_effect=exception.VolumeBDMNotFound( volume_id=FAKE_UUID_A)) def test_show_no_bdms(self, mock_mr): self.assertRaises(exc.HTTPNotFound, self.attachments.show, self.req, FAKE_UUID, FAKE_UUID_A) def test_show_bdms_no_mountpoint(self): FAKE_UUID_NOTEXIST = '00000000-aaaa-aaaa-aaaa-aaaaaaaaaaaa' self.assertRaises(exc.HTTPNotFound, self.attachments.show, self.req, FAKE_UUID, FAKE_UUID_NOTEXIST) def test_detach(self): self.stubs.Set(compute_api.API, 'detach_volume', fake_detach_volume) inst = fake_instance.fake_instance_obj(self.context, **{'uuid': FAKE_UUID}) with mock.patch.object(common, 'get_instance', return_value=inst) as mock_get_instance: result = self.attachments.delete(self.req, FAKE_UUID, FAKE_UUID_A) # NOTE: on v2.1, http status code is set as wsgi_code of API # method instead of status_int in a response object. 
if isinstance(self.attachments, volumes_v21.VolumeAttachmentController): status_int = self.attachments.delete.wsgi_code else: status_int = result.status_int self.assertEqual(202, status_int) mock_get_instance.assert_called_with( self.attachments.compute_api, self.context, FAKE_UUID, expected_attrs=['device_metadata']) @mock.patch.object(common, 'get_instance') def test_detach_vol_shelved_not_supported(self, mock_get_instance): inst = fake_instance.fake_instance_obj(self.context, **{'uuid': FAKE_UUID}) inst.vm_state = vm_states.SHELVED mock_get_instance.return_value = inst req = fakes.HTTPRequest.blank( '/v2/servers/id/os-volume_attachments/uuid', version='2.19') req.method = 'DELETE' req.headers['content-type'] = 'application/json' req.environ['nova.context'] = self.context self.assertRaises(webob.exc.HTTPConflict, self.attachments.delete, req, FAKE_UUID, FAKE_UUID_A) @mock.patch.object(compute_api.API, 'detach_volume') @mock.patch.object(common, 'get_instance') def test_detach_vol_shelved_supported(self, mock_get_instance, mock_detach): inst = fake_instance.fake_instance_obj(self.context, **{'uuid': FAKE_UUID}) inst.vm_state = vm_states.SHELVED mock_get_instance.return_value = inst req = fakes.HTTPRequest.blank( '/v2/servers/id/os-volume_attachments/uuid', version='2.20') req.method = 'DELETE' req.headers['content-type'] = 'application/json' req.environ['nova.context'] = self.context self.attachments.delete(req, FAKE_UUID, FAKE_UUID_A) self.assertTrue(mock_detach.called) def test_detach_vol_not_found(self): self.stubs.Set(compute_api.API, 'detach_volume', fake_detach_volume) self.assertRaises(exc.HTTPNotFound, self.attachments.delete, self.req, FAKE_UUID, FAKE_UUID_C) @mock.patch('nova.objects.BlockDeviceMapping.is_root', new_callable=mock.PropertyMock) def test_detach_vol_root(self, mock_isroot): mock_isroot.return_value = True self.assertRaises(exc.HTTPForbidden, self.attachments.delete, self.req, FAKE_UUID, FAKE_UUID_A) def test_detach_volume_from_locked_server(self): def fake_detach_volume_from_locked_server(self, context, instance, volume): raise exception.InstanceIsLocked(instance_uuid=instance['uuid']) self.stubs.Set(compute_api.API, 'detach_volume', fake_detach_volume_from_locked_server) self.assertRaises(webob.exc.HTTPConflict, self.attachments.delete, self.req, FAKE_UUID, FAKE_UUID_A) def test_attach_volume(self): self.stubs.Set(compute_api.API, 'attach_volume', fake_attach_volume) body = {'volumeAttachment': {'volumeId': FAKE_UUID_A, 'device': '/dev/fake'}} result = self.attachments.create(self.req, FAKE_UUID, body=body) self.assertEqual('00000000-aaaa-aaaa-aaaa-000000000000', result['volumeAttachment']['id']) @mock.patch.object(compute_api.API, 'attach_volume', side_effect=exception.VolumeTaggedAttachNotSupported()) def test_tagged_volume_attach_not_supported(self, mock_attach_volume): body = {'volumeAttachment': {'volumeId': FAKE_UUID_A, 'device': '/dev/fake'}} self.assertRaises(webob.exc.HTTPBadRequest, self.attachments.create, self.req, FAKE_UUID, body=body) @mock.patch.object(common, 'get_instance') def test_attach_vol_shelved_not_supported(self, mock_get_instance): body = {'volumeAttachment': {'volumeId': FAKE_UUID_A, 'device': '/dev/fake'}} inst = fake_instance.fake_instance_obj(self.context, **{'uuid': FAKE_UUID}) inst.vm_state = vm_states.SHELVED mock_get_instance.return_value = inst self.assertRaises(webob.exc.HTTPConflict, self.attachments.create, self.req, FAKE_UUID, body=body) @mock.patch.object(compute_api.API, 'attach_volume', return_value='/dev/myfake') 
@mock.patch.object(common, 'get_instance') def test_attach_vol_shelved_supported(self, mock_get_instance, mock_attach): body = {'volumeAttachment': {'volumeId': FAKE_UUID_A, 'device': '/dev/fake'}} inst = fake_instance.fake_instance_obj(self.context, **{'uuid': FAKE_UUID}) inst.vm_state = vm_states.SHELVED mock_get_instance.return_value = inst req = fakes.HTTPRequest.blank('/v2/servers/id/os-volume_attachments', version='2.20') req.method = 'POST' req.body = jsonutils.dump_as_bytes({}) req.headers['content-type'] = 'application/json' req.environ['nova.context'] = self.context result = self.attachments.create(req, FAKE_UUID, body=body) self.assertEqual('00000000-aaaa-aaaa-aaaa-000000000000', result['volumeAttachment']['id']) self.assertEqual('/dev/myfake', result['volumeAttachment']['device']) @mock.patch.object(compute_api.API, 'attach_volume', return_value='/dev/myfake') def test_attach_volume_with_auto_device(self, mock_attach): body = {'volumeAttachment': {'volumeId': FAKE_UUID_A, 'device': None}} result = self.attachments.create(self.req, FAKE_UUID, body=body) self.assertEqual('00000000-aaaa-aaaa-aaaa-000000000000', result['volumeAttachment']['id']) self.assertEqual('/dev/myfake', result['volumeAttachment']['device']) def test_attach_volume_to_locked_server(self): def fake_attach_volume_to_locked_server(self, context, instance, volume_id, device=None, tag=None, supports_multiattach=False): raise exception.InstanceIsLocked(instance_uuid=instance['uuid']) self.stubs.Set(compute_api.API, 'attach_volume', fake_attach_volume_to_locked_server) body = {'volumeAttachment': {'volumeId': FAKE_UUID_A, 'device': '/dev/fake'}} self.assertRaises(webob.exc.HTTPConflict, self.attachments.create, self.req, FAKE_UUID, body=body) def test_attach_volume_bad_id(self): self.stubs.Set(compute_api.API, 'attach_volume', fake_attach_volume) body = { 'volumeAttachment': { 'device': None, 'volumeId': 'TESTVOLUME', } } self.assertRaises(self.validation_error, self.attachments.create, self.req, FAKE_UUID, body=body) @mock.patch.object(compute_api.API, 'attach_volume', side_effect=exception.DevicePathInUse(path='/dev/sda')) def test_attach_volume_device_in_use(self, mock_attach): body = { 'volumeAttachment': { 'device': '/dev/sda', 'volumeId': FAKE_UUID_A, } } self.assertRaises(webob.exc.HTTPConflict, self.attachments.create, self.req, FAKE_UUID, body=body) def test_attach_volume_without_volumeId(self): self.stubs.Set(compute_api.API, 'attach_volume', fake_attach_volume) body = { 'volumeAttachment': { 'device': None } } self.assertRaises(self.validation_error, self.attachments.create, self.req, FAKE_UUID, body=body) def test_attach_volume_with_extra_arg(self): body = {'volumeAttachment': {'volumeId': FAKE_UUID_A, 'device': '/dev/fake', 'extra': 'extra_arg'}} self.assertRaises(self.validation_error, self.attachments.create, self.req, FAKE_UUID, body=body) @mock.patch.object(compute_api.API, 'attach_volume') def test_attach_volume_with_invalid_input(self, mock_attach): mock_attach.side_effect = exception.InvalidInput( reason='Invalid volume') body = {'volumeAttachment': {'volumeId': FAKE_UUID_A, 'device': '/dev/fake'}} req = fakes.HTTPRequest.blank('/v2/servers/id/os-volume_attachments') req.method = 'POST' req.body = jsonutils.dump_as_bytes({}) req.headers['content-type'] = 'application/json' req.environ['nova.context'] = self.context self.assertRaises(exc.HTTPBadRequest, self.attachments.create, req, FAKE_UUID, body=body) def _test_swap(self, attachments, uuid=FAKE_UUID_A, fake_func=None, body=None): fake_func = 
fake_func or fake_swap_volume self.stubs.Set(compute_api.API, 'swap_volume', fake_func) body = body or {'volumeAttachment': {'volumeId': FAKE_UUID_B}} return attachments.update(self.req, FAKE_UUID, uuid, body=body) def test_swap_volume_for_locked_server(self): def fake_swap_volume_for_locked_server(self, context, instance, old_volume, new_volume): raise exception.InstanceIsLocked(instance_uuid=instance['uuid']) self.assertRaises(webob.exc.HTTPConflict, self._test_swap, self.attachments, fake_func=fake_swap_volume_for_locked_server) def test_swap_volume(self): result = self._test_swap(self.attachments) # NOTE: on v2.1, http status code is set as wsgi_code of API # method instead of status_int in a response object. if isinstance(self.attachments, volumes_v21.VolumeAttachmentController): status_int = self.attachments.update.wsgi_code else: status_int = result.status_int self.assertEqual(202, status_int) def test_swap_volume_with_nonexistent_uri(self): self.assertRaises(exc.HTTPNotFound, self._test_swap, self.attachments, uuid=FAKE_UUID_C) @mock.patch.object(cinder.API, 'get') def test_swap_volume_with_nonexistent_dest_in_body(self, mock_update): mock_update.side_effect = [ None, exception.VolumeNotFound(volume_id=FAKE_UUID_C)] body = {'volumeAttachment': {'volumeId': FAKE_UUID_C}} self.assertRaises(exc.HTTPBadRequest, self._test_swap, self.attachments, body=body) def test_swap_volume_without_volumeId(self): body = {'volumeAttachment': {'device': '/dev/fake'}} self.assertRaises(self.validation_error, self._test_swap, self.attachments, body=body) def test_swap_volume_with_extra_arg(self): body = {'volumeAttachment': {'volumeId': FAKE_UUID_A, 'device': '/dev/fake'}} self.assertRaises(self.validation_error, self._test_swap, self.attachments, body=body) def test_swap_volume_for_bdm_not_found(self): def fake_swap_volume_for_bdm_not_found(self, context, instance, old_volume, new_volume): raise exception.VolumeBDMNotFound(volume_id=FAKE_UUID_C) self.assertRaises(webob.exc.HTTPNotFound, self._test_swap, self.attachments, fake_func=fake_swap_volume_for_bdm_not_found) def _test_list_with_invalid_filter(self, url): prefix = '/servers/id/os-volume_attachments' req = fakes.HTTPRequest.blank(prefix + url) self.assertRaises(exception.ValidationError, self.attachments.index, req, FAKE_UUID) def test_list_with_invalid_non_int_limit(self): self._test_list_with_invalid_filter('?limit=-9') def test_list_with_invalid_string_limit(self): self._test_list_with_invalid_filter('?limit=abc') def test_list_duplicate_query_with_invalid_string_limit(self): self._test_list_with_invalid_filter( '?limit=1&limit=abc') def test_list_with_invalid_non_int_offset(self): self._test_list_with_invalid_filter('?offset=-9') def test_list_with_invalid_string_offset(self): self._test_list_with_invalid_filter('?offset=abc') def test_list_duplicate_query_with_invalid_string_offset(self): self._test_list_with_invalid_filter( '?offset=1&offset=abc') @mock.patch.object(objects.BlockDeviceMappingList, 'get_by_instance_uuid') def test_list_duplicate_query_parameters_validation(self, mock_get): fake_bdms = objects.BlockDeviceMappingList() mock_get.return_value = fake_bdms params = { 'limit': 1, 'offset': 1 } for param, value in params.items(): req = fakes.HTTPRequest.blank( '/servers/id/os-volume_attachments' + '?%s=%s&%s=%s' % (param, value, param, value)) self.attachments.index(req, FAKE_UUID) @mock.patch.object(objects.BlockDeviceMappingList, 'get_by_instance_uuid') def test_list_with_additional_filter(self, mock_get): fake_bdms = 
objects.BlockDeviceMappingList() mock_get.return_value = fake_bdms req = fakes.HTTPRequest.blank( '/servers/id/os-volume_attachments?limit=1&additional=something') self.attachments.index(req, FAKE_UUID) class VolumeAttachTestsV249(test.NoDBTestCase): validation_error = exception.ValidationError def setUp(self): super(VolumeAttachTestsV249, self).setUp() self.attachments = volumes_v21.VolumeAttachmentController() self.req = fakes.HTTPRequest.blank( '/v2/servers/id/os-volume_attachments/uuid', version='2.49') def test_tagged_volume_attach_invalid_tag_comma(self): body = {'volumeAttachment': {'volumeId': FAKE_UUID_A, 'device': '/dev/fake', 'tag': ','}} self.assertRaises(exception.ValidationError, self.attachments.create, self.req, FAKE_UUID, body=body) def test_tagged_volume_attach_invalid_tag_slash(self): body = {'volumeAttachment': {'volumeId': FAKE_UUID_A, 'device': '/dev/fake', 'tag': '/'}} self.assertRaises(exception.ValidationError, self.attachments.create, self.req, FAKE_UUID, body=body) def test_tagged_volume_attach_invalid_tag_too_long(self): tag = ''.join(map(str, range(10, 41))) body = {'volumeAttachment': {'volumeId': FAKE_UUID_A, 'device': '/dev/fake', 'tag': tag}} self.assertRaises(exception.ValidationError, self.attachments.create, self.req, FAKE_UUID, body=body) @mock.patch('nova.compute.api.API.attach_volume') @mock.patch('nova.compute.api.API.get', fake_get_instance) def test_tagged_volume_attach_valid_tag(self, _): body = {'volumeAttachment': {'volumeId': FAKE_UUID_A, 'device': '/dev/fake', 'tag': 'foo'}} self.attachments.create(self.req, FAKE_UUID, body=body) class VolumeAttachTestsV260(test.NoDBTestCase): """Negative tests for attaching a multiattach volume with version 2.60.""" def setUp(self): super(VolumeAttachTestsV260, self).setUp() self.controller = volumes_v21.VolumeAttachmentController() get_instance = mock.patch('nova.compute.api.API.get') get_instance.side_effect = fake_get_instance get_instance.start() self.addCleanup(get_instance.stop) def _post_attach(self, version=None): body = {'volumeAttachment': {'volumeId': FAKE_UUID_A}} req = fakes.HTTPRequestV21.blank( '/servers/%s/os-volume_attachments' % FAKE_UUID, version=version or '2.60') req.body = jsonutils.dump_as_bytes(body) req.method = 'POST' req.headers['content-type'] = 'application/json' return self.controller.create(req, FAKE_UUID, body=body) def test_attach_with_multiattach_fails_old_microversion(self): """Tests the case that the user tries to attach with a multiattach volume but before using microversion 2.60. """ with mock.patch.object( self.controller.compute_api, 'attach_volume', side_effect= exception.MultiattachNotSupportedOldMicroversion) as attach: ex = self.assertRaises(webob.exc.HTTPBadRequest, self._post_attach, '2.59') create_kwargs = attach.call_args[1] self.assertFalse(create_kwargs['supports_multiattach']) self.assertIn('Multiattach volumes are only supported starting with ' 'compute API version 2.60', six.text_type(ex)) def test_attach_with_multiattach_fails_not_available(self): """Tests the case that the user tries to attach with a multiattach volume but before the compute hosting the instance is upgraded. This would come from reserve_block_device_name in the compute RPC API client. 
""" with mock.patch.object( self.controller.compute_api, 'attach_volume', side_effect= exception.MultiattachSupportNotYetAvailable) as attach: ex = self.assertRaises(webob.exc.HTTPConflict, self._post_attach) create_kwargs = attach.call_args[1] self.assertTrue(create_kwargs['supports_multiattach']) self.assertIn('Multiattach volume support is not yet available', six.text_type(ex)) def test_attach_with_multiattach_fails_not_supported_by_driver(self): """Tests the case that the user tries to attach with a multiattach volume but the compute hosting the instance does not support multiattach volumes. This would come from reserve_block_device_name via RPC call to the compute service. """ with mock.patch.object( self.controller.compute_api, 'attach_volume', side_effect= exception.MultiattachNotSupportedByVirtDriver( volume_id=FAKE_UUID_A)) as attach: ex = self.assertRaises(webob.exc.HTTPConflict, self._post_attach) create_kwargs = attach.call_args[1] self.assertTrue(create_kwargs['supports_multiattach']) self.assertIn("has 'multiattach' set, which is not supported for " "this instance", six.text_type(ex)) def test_attach_with_multiattach_fails_for_shelved_offloaded_server(self): """Tests the case that the user tries to attach with a multiattach volume to a shelved offloaded server which is not supported. """ with mock.patch.object( self.controller.compute_api, 'attach_volume', side_effect= exception.MultiattachToShelvedNotSupported) as attach: ex = self.assertRaises(webob.exc.HTTPBadRequest, self._post_attach) create_kwargs = attach.call_args[1] self.assertTrue(create_kwargs['supports_multiattach']) self.assertIn('Attaching multiattach volumes is not supported for ' 'shelved-offloaded instances.', six.text_type(ex)) class CommonBadRequestTestCase(object): resource = None entity_name = None controller_cls = None kwargs = {} bad_request = exc.HTTPBadRequest """ Tests of places we throw 400 Bad Request from """ def setUp(self): super(CommonBadRequestTestCase, self).setUp() self.controller = self.controller_cls() def _bad_request_create(self, body): req = fakes.HTTPRequest.blank('/v2/fake/' + self.resource) req.method = 'POST' kwargs = self.kwargs.copy() kwargs['body'] = body self.assertRaises(self.bad_request, self.controller.create, req, **kwargs) def test_create_no_body(self): self._bad_request_create(body=None) def test_create_missing_volume(self): body = {'foo': {'a': 'b'}} self._bad_request_create(body=body) def test_create_malformed_entity(self): body = {self.entity_name: 'string'} self._bad_request_create(body=body) class BadRequestVolumeTestCaseV21(CommonBadRequestTestCase, test.NoDBTestCase): resource = 'os-volumes' entity_name = 'volume' controller_cls = volumes_v21.VolumeController bad_request = exception.ValidationError @mock.patch.object(cinder.API, 'delete', side_effect=exception.InvalidInput(reason='vol attach')) def test_delete_invalid_status_volume(self, mock_delete): req = fakes.HTTPRequest.blank('/v2.1/os-volumes') req.method = 'DELETE' self.assertRaises(webob.exc.HTTPBadRequest, self.controller.delete, req, FAKE_UUID) class BadRequestSnapshotTestCaseV21(CommonBadRequestTestCase, test.NoDBTestCase): resource = 'os-snapshots' entity_name = 'snapshot' controller_cls = volumes_v21.SnapshotController bad_request = exception.ValidationError class AssistedSnapshotCreateTestCaseV21(test.NoDBTestCase): assisted_snaps = assisted_snaps_v21 bad_request = exception.ValidationError def setUp(self): super(AssistedSnapshotCreateTestCaseV21, self).setUp() self.controller = \ 
            self.assisted_snaps.AssistedVolumeSnapshotsController()
        self.stubs.Set(compute_api.API, 'volume_snapshot_create',
                       fake_compute_volume_snapshot_create)

    def test_assisted_create(self):
        req = fakes.HTTPRequest.blank('/v2/fake/os-assisted-volume-snapshots')
        body = {'snapshot': {'volume_id': '1',
                             'create_info': {'type': 'qcow2',
                                             'new_file': 'new_file',
                                             'snapshot_id': 'snapshot_id'}}}
        req.method = 'POST'
        self.controller.create(req, body=body)

    def test_assisted_create_missing_create_info(self):
        req = fakes.HTTPRequest.blank('/v2/fake/os-assisted-volume-snapshots')
        body = {'snapshot': {'volume_id': '1'}}
        req.method = 'POST'
        self.assertRaises(self.bad_request, self.controller.create,
                          req, body=body)

    def test_assisted_create_with_unexpected_attr(self):
        req = fakes.HTTPRequest.blank('/v2/fake/os-assisted-volume-snapshots')
        body = {
            'snapshot': {
                'volume_id': '1',
                'create_info': {
                    'type': 'qcow2',
                    'new_file': 'new_file',
                    'snapshot_id': 'snapshot_id'
                }
            },
            'unexpected': 0,
        }
        req.method = 'POST'
        self.assertRaises(self.bad_request, self.controller.create,
                          req, body=body)

    @mock.patch('nova.objects.BlockDeviceMapping.get_by_volume',
                side_effect=exception.VolumeBDMIsMultiAttach(volume_id='1'))
    def test_assisted_create_multiattach_fails(self, bdm_get_by_volume):
        # unset the stub on volume_snapshot_create from setUp
        self.mox.UnsetStubs()
        req = fakes.HTTPRequest.blank('/v2/fake/os-assisted-volume-snapshots')
        body = {'snapshot': {'volume_id': '1',
                             'create_info': {'type': 'qcow2',
                                             'new_file': 'new_file',
                                             'snapshot_id': 'snapshot_id'}}}
        req.method = 'POST'
        self.assertRaises(
            webob.exc.HTTPBadRequest, self.controller.create, req, body=body)

    def _test_assisted_create_instance_conflict(self, api_error):
        # unset the stub on volume_snapshot_create from setUp
        self.mox.UnsetStubs()
        req = fakes.HTTPRequest.blank('/v2/fake/os-assisted-volume-snapshots')
        body = {'snapshot': {'volume_id': '1',
                             'create_info': {'type': 'qcow2',
                                             'new_file': 'new_file',
                                             'snapshot_id': 'snapshot_id'}}}
        req.method = 'POST'
        with mock.patch.object(compute_api.API, 'volume_snapshot_create',
                               side_effect=api_error):
            self.assertRaises(
                webob.exc.HTTPBadRequest, self.controller.create,
                req, body=body)

    def test_assisted_create_instance_invalid_state(self):
        api_error = exception.InstanceInvalidState(
            instance_uuid=FAKE_UUID, attr='task_state',
            state=task_states.SHELVING_OFFLOADING,
            method='volume_snapshot_create')
        self._test_assisted_create_instance_conflict(api_error)

    def test_assisted_create_instance_not_ready(self):
        api_error = exception.InstanceNotReady(instance_id=FAKE_UUID)
        self._test_assisted_create_instance_conflict(api_error)


class AssistedSnapshotDeleteTestCaseV21(test.NoDBTestCase):
    assisted_snaps = assisted_snaps_v21

    def _check_status(self, expected_status, res, controller_method):
        self.assertEqual(expected_status, controller_method.wsgi_code)

    def setUp(self):
        super(AssistedSnapshotDeleteTestCaseV21, self).setUp()

        self.controller = \
            self.assisted_snaps.AssistedVolumeSnapshotsController()
        self.stubs.Set(compute_api.API, 'volume_snapshot_delete',
                       fake_compute_volume_snapshot_delete)

    def test_assisted_delete(self):
        params = {
            'delete_info': jsonutils.dumps({'volume_id': '1'}),
        }
        req = fakes.HTTPRequest.blank(
            '/v2/fake/os-assisted-volume-snapshots?%s'
            % urllib.parse.urlencode(params))
        req.method = 'DELETE'
        result = self.controller.delete(req, '5')
        self._check_status(204, result, self.controller.delete)

    def test_assisted_delete_missing_delete_info(self):
        req = fakes.HTTPRequest.blank('/v2/fake/os-assisted-volume-snapshots')
        req.method = 'DELETE'
        self.assertRaises(webob.exc.HTTPBadRequest, self.controller.delete,
                          req, '5')

    def _test_assisted_delete_instance_conflict(self, api_error):
        # unset the stub on volume_snapshot_delete from setUp
        self.mox.UnsetStubs()
        params = {
            'delete_info': jsonutils.dumps({'volume_id': '1'}),
        }
        req = fakes.HTTPRequest.blank(
            '/v2/fake/os-assisted-volume-snapshots?%s'
            % urllib.parse.urlencode(params))
        req.method = 'DELETE'
        with mock.patch.object(compute_api.API, 'volume_snapshot_delete',
                               side_effect=api_error):
            self.assertRaises(
                webob.exc.HTTPBadRequest, self.controller.delete, req, '5')

    def test_assisted_delete_instance_invalid_state(self):
        api_error = exception.InstanceInvalidState(
            instance_uuid=FAKE_UUID, attr='task_state',
            state=task_states.UNSHELVING,
            method='volume_snapshot_delete')
        self._test_assisted_delete_instance_conflict(api_error)

    def test_assisted_delete_instance_not_ready(self):
        api_error = exception.InstanceNotReady(instance_id=FAKE_UUID)
        self._test_assisted_delete_instance_conflict(api_error)

    def test_delete_additional_query_parameters(self):
        params = {
            'delete_info': jsonutils.dumps({'volume_id': '1'}),
            'additional': 123
        }
        req = fakes.HTTPRequest.blank(
            '/v2/fake/os-assisted-volume-snapshots?%s'
            % urllib.parse.urlencode(params))
        req.method = 'DELETE'
        self.controller.delete(req, '5')

    def test_delete_duplicate_query_parameters_validation(self):
        # NOTE: use a sequence of pairs so urlencode actually repeats the
        # delete_info parameter; a dict literal would silently collapse the
        # duplicate key and only the last value would be sent.
        params = [
            ('delete_info', jsonutils.dumps({'volume_id': '1'})),
            ('delete_info', jsonutils.dumps({'volume_id': '2'})),
        ]
        req = fakes.HTTPRequest.blank(
            '/v2/fake/os-assisted-volume-snapshots?%s'
            % urllib.parse.urlencode(params))
        req.method = 'DELETE'
        self.controller.delete(req, '5')

    def test_assisted_delete_missing_volume_id(self):
        params = {
            'delete_info': jsonutils.dumps({'something_else': '1'}),
        }
        req = fakes.HTTPRequest.blank(
            '/v2/fake/os-assisted-volume-snapshots?%s'
            % urllib.parse.urlencode(params))
        req.method = 'DELETE'
        ex = self.assertRaises(webob.exc.HTTPBadRequest,
                               self.controller.delete, req, '5')
        # This is the result of a KeyError but the only thing in the message
        # is the missing key.
        self.assertIn('volume_id', six.text_type(ex))


class TestAssistedVolumeSnapshotsPolicyEnforcementV21(test.NoDBTestCase):

    def setUp(self):
        super(TestAssistedVolumeSnapshotsPolicyEnforcementV21, self).setUp()
        self.controller = (
            assisted_snaps_v21.AssistedVolumeSnapshotsController())
        self.req = fakes.HTTPRequest.blank('')

    def test_create_assisted_volumes_snapshots_policy_failed(self):
        rule_name = "os_compute_api:os-assisted-volume-snapshots:create"
        self.policy.set_rules({rule_name: "project:non_fake"})
        body = {'snapshot': {'volume_id': '1',
                             'create_info': {'type': 'qcow2',
                                             'new_file': 'new_file',
                                             'snapshot_id': 'snapshot_id'}}}
        exc = self.assertRaises(
            exception.PolicyNotAuthorized,
            self.controller.create, self.req, body=body)
        self.assertEqual(
            "Policy doesn't allow %s to be performed." % rule_name,
            exc.format_message())

    def test_delete_assisted_volumes_snapshots_policy_failed(self):
        rule_name = "os_compute_api:os-assisted-volume-snapshots:delete"
        self.policy.set_rules({rule_name: "project:non_fake"})
        exc = self.assertRaises(
            exception.PolicyNotAuthorized,
            self.controller.delete, self.req, '5')
        self.assertEqual(
            "Policy doesn't allow %s to be performed." % rule_name,
            exc.format_message())


class TestVolumeAttachPolicyEnforcementV21(test.NoDBTestCase):

    def setUp(self):
        super(TestVolumeAttachPolicyEnforcementV21, self).setUp()
        self.controller = volumes_v21.VolumeAttachmentController()
        self.req = fakes.HTTPRequest.blank('')

    def _common_policy_check(self, rules, rule_name, func, *arg, **kwarg):
        self.policy.set_rules(rules)
        exc = self.assertRaises(
            exception.PolicyNotAuthorized, func, *arg, **kwarg)
        self.assertEqual(
            "Policy doesn't allow %s to be performed." % rule_name,
            exc.format_message())

    def test_index_volume_attach_policy_failed(self):
        rule_name = "os_compute_api:os-volumes-attachments:index"
        rules = {rule_name: "project:non_fake"}
        self._common_policy_check(rules, rule_name, self.controller.index,
                                  self.req, FAKE_UUID)

    def test_show_volume_attach_policy_failed(self):
        rule_name = "os_compute_api:os-volumes-attachments:show"
        rules = {rule_name: "project:non_fake"}
        self._common_policy_check(rules, rule_name, self.controller.show,
                                  self.req, FAKE_UUID, FAKE_UUID_A)

    def test_create_volume_attach_policy_failed(self):
        rule_name = "os_compute_api:os-volumes-attachments:create"
        rules = {rule_name: "project:non_fake"}
        body = {'volumeAttachment': {'volumeId': FAKE_UUID_A,
                                     'device': '/dev/fake'}}
        self._common_policy_check(rules, rule_name, self.controller.create,
                                  self.req, FAKE_UUID, body=body)

    def test_update_volume_attach_policy_failed(self):
        rule_name = "os_compute_api:os-volumes-attachments:update"
        rules = {rule_name: "project:non_fake"}
        body = {'volumeAttachment': {'volumeId': FAKE_UUID_B}}
        self._common_policy_check(rules, rule_name, self.controller.update,
                                  self.req, FAKE_UUID, FAKE_UUID_A, body=body)

    def test_delete_volume_attach_policy_failed(self):
        rule_name = "os_compute_api:os-volumes-attachments:delete"
        rules = {rule_name: "project:non_fake"}
        self._common_policy_check(rules, rule_name, self.controller.delete,
                                  self.req, FAKE_UUID, FAKE_UUID_A)


class TestVolumesAPIDeprecation(test.NoDBTestCase):

    def setUp(self):
        super(TestVolumesAPIDeprecation, self).setUp()
        self.controller = volumes_v21.VolumeController()
        self.req = fakes.HTTPRequest.blank('', version='2.36')

    def test_all_apis_return_not_found(self):
        self.assertRaises(exception.VersionNotFoundForAPIMethod,
            self.controller.show, self.req, fakes.FAKE_UUID)
        self.assertRaises(exception.VersionNotFoundForAPIMethod,
            self.controller.delete, self.req, fakes.FAKE_UUID)
        self.assertRaises(exception.VersionNotFoundForAPIMethod,
            self.controller.index, self.req)
        self.assertRaises(exception.VersionNotFoundForAPIMethod,
            self.controller.create, self.req, {})
        self.assertRaises(exception.VersionNotFoundForAPIMethod,
            self.controller.detail, self.req)
nova-17.0.1/nova/tests/unit/api/openstack/compute/test_neutron_security_groups.py0000666000175000017500000011532213250073126030523 0ustar zuulzuul00000000000000# Copyright 2013 Nicira, Inc.
# All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
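
# NOTE: the test cases below swap the real neutronclient for the in-memory
# MockClient defined at the bottom of this module. A minimal sketch of that
# fixture pattern, assuming a test.TestCase subclass (the class name here is
# illustrative only):
#
#     class MyNeutronSGTestCase(test.TestCase):
#         def setUp(self):
#             super(MyNeutronSGTestCase, self).setUp()
#             self.original_client = neutron_api.get_client
#             neutron_api.get_client = get_client  # module-level factory
#
#         def tearDown(self):
#             neutron_api.get_client = self.original_client
#             get_client()._reset()  # MockClient state is class-level
#             super(MyNeutronSGTestCase, self).tearDown()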
import six import mock from neutronclient.common import exceptions as n_exc from oslo_config import cfg from oslo_serialization import jsonutils from oslo_utils import encodeutils from oslo_utils import uuidutils import webob from nova.api.openstack.compute import security_groups from nova import compute from nova import context import nova.db from nova import exception from nova.network import model from nova.network.neutronv2 import api as neutron_api from nova.network.security_group import neutron_driver from nova.objects import instance as instance_obj from nova import test from nova.tests.unit.api.openstack.compute import test_security_groups from nova.tests.unit.api.openstack import fakes from nova.tests import uuidsentinel as uuids UUID_SERVER = uuids.server class TestNeutronSecurityGroupsTestCase(test.TestCase): def setUp(self): super(TestNeutronSecurityGroupsTestCase, self).setUp() cfg.CONF.set_override('use_neutron', True) self.original_client = neutron_api.get_client neutron_api.get_client = get_client def tearDown(self): neutron_api.get_client = self.original_client get_client()._reset() super(TestNeutronSecurityGroupsTestCase, self).tearDown() class TestNeutronSecurityGroupsV21( test_security_groups.TestSecurityGroupsV21, TestNeutronSecurityGroupsTestCase): # Used to override set config in the base test in test_security_groups. use_neutron = True def _create_sg_template(self, **kwargs): sg = test_security_groups.security_group_request_template(**kwargs) return self.controller.create(self.req, body={'security_group': sg}) def _create_network(self): body = {'network': {'name': 'net1'}} neutron = get_client() net = neutron.create_network(body) body = {'subnet': {'network_id': net['network']['id'], 'cidr': '10.0.0.0/24'}} neutron.create_subnet(body) return net def _create_port(self, **kwargs): body = {'port': {'binding:vnic_type': model.VNIC_TYPE_NORMAL}} fields = ['security_groups', 'device_id', 'network_id', 'port_security_enabled', 'ip_allocation'] for field in fields: if field in kwargs: body['port'][field] = kwargs[field] neutron = get_client() return neutron.create_port(body) def _create_security_group(self, **kwargs): body = {'security_group': {}} fields = ['name', 'description'] for field in fields: if field in kwargs: body['security_group'][field] = kwargs[field] neutron = get_client() return neutron.create_security_group(body) def test_create_security_group_with_no_description(self): # Neutron's security group description field is optional. pass def test_create_security_group_with_empty_description(self): # Neutron's security group description field is optional. pass def test_create_security_group_with_blank_name(self): # Neutron's security group name field is optional. pass def test_create_security_group_with_whitespace_name(self): # Neutron allows security group name to be whitespace. pass def test_create_security_group_with_blank_description(self): # Neutron's security group description field is optional. pass def test_create_security_group_with_whitespace_description(self): # Neutron allows description to be whitespace. pass def test_create_security_group_with_duplicate_name(self): # Neutron allows duplicate names for security groups. pass def test_create_security_group_non_string_name(self): # Neutron allows security group name to be non string. pass def test_create_security_group_non_string_description(self): # Neutron allows non string description. pass def test_create_security_group_quota_limit(self): # Enforced by Neutron server. 
pass def test_create_security_group_over_quota_during_recheck(self): # Enforced by Neutron server. pass def test_create_security_group_no_quota_recheck(self): # Enforced by Neutron server. pass def test_update_security_group(self): # Enforced by Neutron server. pass def test_get_security_group_list(self): self._create_sg_template().get('security_group') req = fakes.HTTPRequest.blank('/v2/fake/os-security-groups') list_dict = self.controller.index(req) self.assertEqual(len(list_dict['security_groups']), 2) def test_get_security_group_list_offset_and_limit(self): path = '/v2/fake/os-security-groups?offset=1&limit=1' self._create_sg_template().get('security_group') req = fakes.HTTPRequest.blank(path) list_dict = self.controller.index(req) self.assertEqual(len(list_dict['security_groups']), 1) def test_get_security_group_list_all_tenants(self): pass def test_get_security_group_by_instance(self): sg = self._create_sg_template().get('security_group') net = self._create_network() self._create_port( network_id=net['network']['id'], security_groups=[sg['id']], device_id=test_security_groups.FAKE_UUID1) expected = [{'rules': [], 'tenant_id': 'fake', 'id': sg['id'], 'name': 'test', 'description': 'test-description'}] self.stub_out('nova.db.instance_get_by_uuid', test_security_groups.return_server_by_uuid) req = fakes.HTTPRequest.blank('/v2/fake/servers/%s/os-security-groups' % test_security_groups.FAKE_UUID1) res_dict = self.server_controller.index( req, test_security_groups.FAKE_UUID1)['security_groups'] self.assertEqual(expected, res_dict) def test_get_security_group_by_id(self): sg = self._create_sg_template().get('security_group') req = fakes.HTTPRequest.blank('/v2/fake/os-security-groups/%s' % sg['id']) res_dict = self.controller.show(req, sg['id']) expected = {'security_group': sg} self.assertEqual(res_dict, expected) def test_delete_security_group_by_id(self): sg = self._create_sg_template().get('security_group') req = fakes.HTTPRequest.blank('/v2/fake/os-security-groups/%s' % sg['id']) self.controller.delete(req, sg['id']) def test_delete_security_group_by_admin(self): sg = self._create_sg_template().get('security_group') req = fakes.HTTPRequest.blank('/v2/fake/os-security-groups/%s' % sg['id'], use_admin_context=True) self.controller.delete(req, sg['id']) @mock.patch('nova.compute.utils.refresh_info_cache_for_instance') def test_delete_security_group_in_use(self, refresh_info_cache_mock): sg = self._create_sg_template().get('security_group') self._create_network() db_inst = fakes.stub_instance(id=1, nw_cache=[], security_groups=[]) _context = context.get_admin_context() instance = instance_obj.Instance._from_db_object( _context, instance_obj.Instance(), db_inst, expected_attrs=instance_obj.INSTANCE_DEFAULT_FIELDS) neutron = neutron_api.API() with mock.patch.object(nova.db, 'instance_get_by_uuid', return_value=db_inst): neutron.allocate_for_instance(_context, instance, False, None, security_groups=[sg['id']]) req = fakes.HTTPRequest.blank('/v2/fake/os-security-groups/%s' % sg['id']) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.delete, req, sg['id']) def test_associate_non_running_instance(self): # Neutron does not care if the instance is running or not. When the # instances is detected by neutron it will push down the security # group policy to it. pass def test_associate_already_associated_security_group_to_instance(self): # Neutron security groups does not raise an error if you update a # port adding a security group to it that was already associated # to the port. 
This is because PUT semantics are used. pass def test_associate(self): sg = self._create_sg_template().get('security_group') net = self._create_network() self._create_port( network_id=net['network']['id'], security_groups=[sg['id']], device_id=UUID_SERVER) self.stub_out('nova.db.instance_get_by_uuid', test_security_groups.return_server) body = dict(addSecurityGroup=dict(name="test")) req = fakes.HTTPRequest.blank('/v2/fake/servers/%s/action' % UUID_SERVER) self.manager._addSecurityGroup(req, UUID_SERVER, body) def test_associate_duplicate_names(self): sg1 = self._create_security_group(name='sg1', description='sg1')['security_group'] self._create_security_group(name='sg1', description='sg1')['security_group'] net = self._create_network() self._create_port( network_id=net['network']['id'], security_groups=[sg1['id']], device_id=UUID_SERVER) self.stub_out('nova.db.instance_get_by_uuid', test_security_groups.return_server) body = dict(addSecurityGroup=dict(name="sg1")) req = fakes.HTTPRequest.blank('/v2/fake/servers/%s/action' % UUID_SERVER) self.assertRaises(webob.exc.HTTPConflict, self.manager._addSecurityGroup, req, UUID_SERVER, body) def test_associate_port_security_enabled_true(self): sg = self._create_sg_template().get('security_group') net = self._create_network() self._create_port( network_id=net['network']['id'], security_groups=[sg['id']], port_security_enabled=True, device_id=UUID_SERVER) self.stub_out('nova.db.instance_get_by_uuid', test_security_groups.return_server) body = dict(addSecurityGroup=dict(name="test")) req = fakes.HTTPRequest.blank('/v2/fake/servers/%s/action' % UUID_SERVER) self.manager._addSecurityGroup(req, UUID_SERVER, body) def test_associate_port_security_enabled_false(self): self._create_sg_template().get('security_group') net = self._create_network() self._create_port( network_id=net['network']['id'], port_security_enabled=False, device_id=UUID_SERVER) self.stub_out('nova.db.instance_get_by_uuid', test_security_groups.return_server) body = dict(addSecurityGroup=dict(name="test")) req = fakes.HTTPRequest.blank('/v2/fake/servers/%s/action' % UUID_SERVER) self.assertRaises(webob.exc.HTTPBadRequest, self.manager._addSecurityGroup, req, UUID_SERVER, body) def test_associate_deferred_ip_port(self): sg = self._create_sg_template().get('security_group') net = self._create_network() self._create_port( network_id=net['network']['id'], security_groups=[sg['id']], port_security_enabled=True, ip_allocation='deferred', device_id=UUID_SERVER) self.stub_out('nova.db.instance_get_by_uuid', test_security_groups.return_server) body = dict(addSecurityGroup=dict(name="test")) req = fakes.HTTPRequest.blank('/v2/fake/servers/%s/action' % UUID_SERVER) self.manager._addSecurityGroup(req, UUID_SERVER, body) def test_disassociate_by_non_existing_security_group_name(self): self.stub_out('nova.db.instance_get_by_uuid', test_security_groups.return_server) body = dict(removeSecurityGroup=dict(name='non-existing')) req = fakes.HTTPRequest.blank('/v2/fake/servers/%s/action' % UUID_SERVER) self.assertRaises(webob.exc.HTTPNotFound, self.manager._removeSecurityGroup, req, UUID_SERVER, body) def test_disassociate_non_running_instance(self): # Neutron does not care if the instance is running or not. When the # instances is detected by neutron it will push down the security # group policy to it. 
pass def test_disassociate_already_associated_security_group_to_instance(self): # Neutron security groups does not raise an error if you update a # port adding a security group to it that was already associated # to the port. This is because PUT semantics are used. pass def test_disassociate(self): sg = self._create_sg_template().get('security_group') net = self._create_network() self._create_port( network_id=net['network']['id'], security_groups=[sg['id']], device_id=UUID_SERVER) self.stub_out('nova.db.instance_get_by_uuid', test_security_groups.return_server) body = dict(removeSecurityGroup=dict(name="test")) req = fakes.HTTPRequest.blank('/v2/fake/servers/%s/action' % UUID_SERVER) self.manager._removeSecurityGroup(req, UUID_SERVER, body) def test_get_instances_security_groups_bindings(self): servers = [{'id': test_security_groups.FAKE_UUID1}, {'id': test_security_groups.FAKE_UUID2}] sg1 = self._create_sg_template(name='test1').get('security_group') sg2 = self._create_sg_template(name='test2').get('security_group') # test name='' is replaced with id sg3 = self._create_sg_template(name='').get('security_group') net = self._create_network() self._create_port( network_id=net['network']['id'], security_groups=[sg1['id'], sg2['id']], device_id=test_security_groups.FAKE_UUID1) self._create_port( network_id=net['network']['id'], security_groups=[sg2['id'], sg3['id']], device_id=test_security_groups.FAKE_UUID2) expected = {test_security_groups.FAKE_UUID1: [{'name': sg1['name']}, {'name': sg2['name']}], test_security_groups.FAKE_UUID2: [{'name': sg2['name']}, {'name': sg3['id']}]} security_group_api = self.controller.security_group_api bindings = ( security_group_api.get_instances_security_groups_bindings( context.get_admin_context(), servers)) self.assertEqual(bindings, expected) def test_get_instance_security_groups(self): sg1 = self._create_sg_template(name='test1').get('security_group') sg2 = self._create_sg_template(name='test2').get('security_group') # test name='' is replaced with id sg3 = self._create_sg_template(name='').get('security_group') net = self._create_network() self._create_port( network_id=net['network']['id'], security_groups=[sg1['id'], sg2['id'], sg3['id']], device_id=test_security_groups.FAKE_UUID1) expected = [{'name': sg1['name']}, {'name': sg2['name']}, {'name': sg3['id']}] security_group_api = self.controller.security_group_api sgs = security_group_api.get_instance_security_groups( context.get_admin_context(), instance_obj.Instance(uuid=test_security_groups.FAKE_UUID1)) self.assertEqual(sgs, expected) @mock.patch('nova.network.security_group.neutron_driver.SecurityGroupAPI.' 
'get_instances_security_groups_bindings') def test_get_security_group_empty_for_instance(self, neutron_sg_bind_mock): servers = [{'id': test_security_groups.FAKE_UUID1}] neutron_sg_bind_mock.return_value = {} security_group_api = self.controller.security_group_api ctx = context.get_admin_context() sgs = security_group_api.get_instance_security_groups(ctx, instance_obj.Instance(uuid=test_security_groups.FAKE_UUID1)) neutron_sg_bind_mock.assert_called_once_with(ctx, servers, False) self.assertEqual([], sgs) def test_create_port_with_sg_and_port_security_enabled_true(self): sg1 = self._create_sg_template(name='test1').get('security_group') net = self._create_network() self._create_port( network_id=net['network']['id'], security_groups=[sg1['id']], port_security_enabled=True, device_id=test_security_groups.FAKE_UUID1) security_group_api = self.controller.security_group_api sgs = security_group_api.get_instance_security_groups( context.get_admin_context(), instance_obj.Instance(uuid=test_security_groups.FAKE_UUID1)) self.assertEqual(sgs, [{'name': 'test1'}]) def test_create_port_with_sg_and_port_security_enabled_false(self): sg1 = self._create_sg_template(name='test1').get('security_group') net = self._create_network() self.assertRaises(exception.SecurityGroupCannotBeApplied, self._create_port, network_id=net['network']['id'], security_groups=[sg1['id']], port_security_enabled=False, device_id=test_security_groups.FAKE_UUID1) class TestNeutronSecurityGroupRulesTestCase(TestNeutronSecurityGroupsTestCase): def setUp(self): super(TestNeutronSecurityGroupRulesTestCase, self).setUp() id1 = '11111111-1111-1111-1111-111111111111' sg_template1 = test_security_groups.security_group_template( security_group_rules=[], id=id1) id2 = '22222222-2222-2222-2222-222222222222' sg_template2 = test_security_groups.security_group_template( security_group_rules=[], id=id2) self.controller_sg = security_groups.SecurityGroupController() neutron = get_client() neutron._fake_security_groups[id1] = sg_template1 neutron._fake_security_groups[id2] = sg_template2 def tearDown(self): neutron_api.get_client = self.original_client get_client()._reset() super(TestNeutronSecurityGroupsTestCase, self).tearDown() class _TestNeutronSecurityGroupRulesBase(object): def test_create_add_existing_rules_by_cidr(self): sg = test_security_groups.security_group_template() req = fakes.HTTPRequest.blank('/v2/fake/os-security-groups') self.controller_sg.create(req, {'security_group': sg}) rule = test_security_groups.security_group_rule_template( cidr='15.0.0.0/8', parent_group_id=self.sg2['id']) req = fakes.HTTPRequest.blank('/v2/fake/os-security-group-rules') self.controller.create(req, {'security_group_rule': rule}) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, req, {'security_group_rule': rule}) def test_create_add_existing_rules_by_group_id(self): sg = test_security_groups.security_group_template() req = fakes.HTTPRequest.blank('/v2/fake/os-security-groups') self.controller_sg.create(req, {'security_group': sg}) rule = test_security_groups.security_group_rule_template( group=self.sg1['id'], parent_group_id=self.sg2['id']) req = fakes.HTTPRequest.blank('/v2/fake/os-security-group-rules') self.controller.create(req, {'security_group_rule': rule}) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, req, {'security_group_rule': rule}) def test_delete(self): rule = test_security_groups.security_group_rule_template( parent_group_id=self.sg2['id']) req = 
fakes.HTTPRequest.blank('/v2/fake/os-security-group-rules') res_dict = self.controller.create(req, {'security_group_rule': rule}) security_group_rule = res_dict['security_group_rule'] req = fakes.HTTPRequest.blank('/v2/fake/os-security-group-rules/%s' % security_group_rule['id']) self.controller.delete(req, security_group_rule['id']) def test_create_rule_quota_limit(self): # Enforced by neutron pass def test_create_rule_over_quota_during_recheck(self): # Enforced by neutron pass def test_create_rule_no_quota_recheck(self): # Enforced by neutron pass class TestNeutronSecurityGroupRulesV21( _TestNeutronSecurityGroupRulesBase, test_security_groups.TestSecurityGroupRulesV21, TestNeutronSecurityGroupRulesTestCase): # Used to override set config in the base test in test_security_groups. use_neutron = True class TestNeutronSecurityGroupsOutputTest(TestNeutronSecurityGroupsTestCase): content_type = 'application/json' def setUp(self): super(TestNeutronSecurityGroupsOutputTest, self).setUp() fakes.stub_out_nw_api(self) self.controller = security_groups.SecurityGroupController() self.stubs.Set(compute.api.API, 'get', test_security_groups.fake_compute_get) self.stubs.Set(compute.api.API, 'get_all', test_security_groups.fake_compute_get_all) self.stubs.Set(compute.api.API, 'create', test_security_groups.fake_compute_create) self.stubs.Set(neutron_driver.SecurityGroupAPI, 'get_instances_security_groups_bindings', (test_security_groups. fake_get_instances_security_groups_bindings)) def _make_request(self, url, body=None): req = fakes.HTTPRequest.blank(url) if body: req.method = 'POST' req.body = encodeutils.safe_encode(self._encode_body(body)) req.content_type = self.content_type req.headers['Accept'] = self.content_type # NOTE: This 'os-security-groups' is for enabling security_groups # attribute on response body. 
res = req.get_response(fakes.wsgi_app_v21()) return res def _encode_body(self, body): return jsonutils.dumps(body) def _get_server(self, body): return jsonutils.loads(body).get('server') def _get_servers(self, body): return jsonutils.loads(body).get('servers') def _get_groups(self, server): return server.get('security_groups') def test_create(self): url = '/v2/fake/servers' image_uuid = 'c905cedb-7281-47e4-8a62-f26bc5fc4c77' req = fakes.HTTPRequest.blank('/v2/fake/os-security-groups') security_groups = [{'name': 'fake-2-0'}, {'name': 'fake-2-1'}] for security_group in security_groups: sg = test_security_groups.security_group_template( name=security_group['name']) self.controller.create(req, {'security_group': sg}) server = dict(name='server_test', imageRef=image_uuid, flavorRef=2, security_groups=security_groups) res = self._make_request(url, {'server': server}) self.assertEqual(res.status_int, 202) server = self._get_server(res.body) for i, group in enumerate(self._get_groups(server)): name = 'fake-2-%s' % i self.assertEqual(group.get('name'), name) def test_create_server_get_default_security_group(self): url = '/v2/fake/servers' image_uuid = 'c905cedb-7281-47e4-8a62-f26bc5fc4c77' server = dict(name='server_test', imageRef=image_uuid, flavorRef=2) res = self._make_request(url, {'server': server}) self.assertEqual(res.status_int, 202) server = self._get_server(res.body) group = self._get_groups(server)[0] self.assertEqual(group.get('name'), 'default') def test_show(self): def fake_get_instance_security_groups(inst, context, id): return [{'name': 'fake-2-0'}, {'name': 'fake-2-1'}] self.stubs.Set(neutron_driver.SecurityGroupAPI, 'get_instance_security_groups', fake_get_instance_security_groups) url = '/v2/fake/servers' image_uuid = 'c905cedb-7281-47e4-8a62-f26bc5fc4c77' req = fakes.HTTPRequest.blank('/v2/fake/os-security-groups') security_groups = [{'name': 'fake-2-0'}, {'name': 'fake-2-1'}] for security_group in security_groups: sg = test_security_groups.security_group_template( name=security_group['name']) self.controller.create(req, {'security_group': sg}) server = dict(name='server_test', imageRef=image_uuid, flavorRef=2, security_groups=security_groups) res = self._make_request(url, {'server': server}) self.assertEqual(res.status_int, 202) server = self._get_server(res.body) for i, group in enumerate(self._get_groups(server)): name = 'fake-2-%s' % i self.assertEqual(group.get('name'), name) # Test that show (GET) returns the same information as create (POST) url = '/v2/fake/servers/' + test_security_groups.UUID3 res = self._make_request(url) self.assertEqual(res.status_int, 200) server = self._get_server(res.body) for i, group in enumerate(self._get_groups(server)): name = 'fake-2-%s' % i self.assertEqual(group.get('name'), name) def test_detail(self): url = '/v2/fake/servers/detail' res = self._make_request(url) self.assertEqual(res.status_int, 200) for i, server in enumerate(self._get_servers(res.body)): for j, group in enumerate(self._get_groups(server)): name = 'fake-%s-%s' % (i, j) self.assertEqual(group.get('name'), name) def test_no_instance_passthrough_404(self): def fake_compute_get(*args, **kwargs): raise exception.InstanceNotFound(instance_id='fake') self.stubs.Set(compute.api.API, 'get', fake_compute_get) url = '/v2/fake/servers/70f6db34-de8d-4fbd-aafb-4065bdfa6115' res = self._make_request(url) self.assertEqual(res.status_int, 404) def get_client(context=None, admin=False): return MockClient() class MockClient(object): # Needs to be global to survive multiple calls to 
    # get_client.
    _fake_security_groups = {}
    _fake_ports = {}
    _fake_networks = {}
    _fake_subnets = {}
    _fake_security_group_rules = {}

    def __init__(self):
        # add default security group
        if not len(self._fake_security_groups):
            ret = {'name': 'default', 'description': 'default',
                   'tenant_id': 'fake_tenant', 'security_group_rules': [],
                   'id': uuidutils.generate_uuid()}
            self._fake_security_groups[ret['id']] = ret

    def _reset(self):
        self._fake_security_groups.clear()
        self._fake_ports.clear()
        self._fake_networks.clear()
        self._fake_subnets.clear()
        self._fake_security_group_rules.clear()

    def create_security_group(self, body=None):
        s = body.get('security_group')
        if not isinstance(s.get('name', ''), six.string_types):
            msg = ('BadRequest: Invalid input for name. Reason: '
                   'None is not a valid string.')
            raise n_exc.BadRequest(message=msg)
        if not isinstance(s.get('description', ''), six.string_types):
            msg = ('BadRequest: Invalid input for description. Reason: '
                   'None is not a valid string.')
            raise n_exc.BadRequest(message=msg)
        if len(s.get('name')) > 255 or len(s.get('description')) > 255:
            msg = 'Security Group name greater than 255'
            raise n_exc.NeutronClientException(message=msg, status_code=401)
        ret = {'name': s.get('name'), 'description': s.get('description'),
               'tenant_id': 'fake', 'security_group_rules': [],
               'id': uuidutils.generate_uuid()}
        self._fake_security_groups[ret['id']] = ret
        return {'security_group': ret}

    def create_network(self, body):
        n = body.get('network')
        ret = {'status': 'ACTIVE', 'subnets': [], 'name': n.get('name'),
               'admin_state_up': n.get('admin_state_up', True),
               'tenant_id': 'fake_tenant',
               'id': uuidutils.generate_uuid()}
        if 'port_security_enabled' in n:
            ret['port_security_enabled'] = n['port_security_enabled']
        self._fake_networks[ret['id']] = ret
        return {'network': ret}

    def create_subnet(self, body):
        s = body.get('subnet')
        try:
            net = self._fake_networks[s.get('network_id')]
        except KeyError:
            msg = 'Network %s not found' % s.get('network_id')
            raise n_exc.NeutronClientException(message=msg, status_code=404)
        ret = {'name': s.get('name'), 'network_id': s.get('network_id'),
               'tenant_id': 'fake_tenant', 'cidr': s.get('cidr'),
               'id': uuidutils.generate_uuid(), 'gateway_ip': '10.0.0.1'}
        net['subnets'].append(ret['id'])
        self._fake_networks[net['id']] = net
        self._fake_subnets[ret['id']] = ret
        return {'subnet': ret}

    def create_port(self, body):
        p = body.get('port')
        ret = {'status': 'ACTIVE', 'id': uuidutils.generate_uuid(),
               'mac_address': p.get('mac_address', 'fa:16:3e:b8:f5:fb'),
               'device_id': p.get('device_id', uuidutils.generate_uuid()),
               'admin_state_up': p.get('admin_state_up', True),
               'security_groups': p.get('security_groups', []),
               'network_id': p.get('network_id'),
               'ip_allocation': p.get('ip_allocation'),
               'binding:vnic_type':
                   p.get('binding:vnic_type') or model.VNIC_TYPE_NORMAL}

        network = self._fake_networks[p['network_id']]
        if 'port_security_enabled' in p:
            ret['port_security_enabled'] = p['port_security_enabled']
        elif 'port_security_enabled' in network:
            ret['port_security_enabled'] = network['port_security_enabled']

        port_security = ret.get('port_security_enabled', True)
        # port_security must be True if security groups are present
        if not port_security and ret['security_groups']:
            raise exception.SecurityGroupCannotBeApplied()

        if network['subnets'] and p.get('ip_allocation') != 'deferred':
            ret['fixed_ips'] = [{'subnet_id': network['subnets'][0],
                                 'ip_address': '10.0.0.1'}]
        if not ret['security_groups'] and (port_security is None or
                                           port_security is True):
            for security_group in self._fake_security_groups.values():
                if security_group['name'] == 'default':
                    ret['security_groups'] = [security_group['id']]
                    break
        self._fake_ports[ret['id']] = ret
        return {'port': ret}

    def create_security_group_rule(self, body):
        # does not handle bulk case so just picks rule[0]
        r = body.get('security_group_rules')[0]
        fields = ['direction', 'protocol', 'port_range_min', 'port_range_max',
                  'ethertype', 'remote_ip_prefix', 'tenant_id',
                  'security_group_id', 'remote_group_id']
        ret = {}
        for field in fields:
            ret[field] = r.get(field)
        ret['id'] = uuidutils.generate_uuid()
        self._fake_security_group_rules[ret['id']] = ret
        return {'security_group_rules': [ret]}

    def show_security_group(self, security_group, **_params):
        try:
            sg = self._fake_security_groups[security_group]
        except KeyError:
            msg = 'Security Group %s not found' % security_group
            raise n_exc.NeutronClientException(message=msg, status_code=404)
        for security_group_rule in self._fake_security_group_rules.values():
            if security_group_rule['security_group_id'] == sg['id']:
                sg['security_group_rules'].append(security_group_rule)
        return {'security_group': sg}

    def show_security_group_rule(self, security_group_rule, **_params):
        try:
            return {'security_group_rule':
                    self._fake_security_group_rules[security_group_rule]}
        except KeyError:
            msg = 'Security Group rule %s not found' % security_group_rule
            raise n_exc.NeutronClientException(message=msg, status_code=404)

    def show_network(self, network, **_params):
        try:
            return {'network': self._fake_networks[network]}
        except KeyError:
            msg = 'Network %s not found' % network
            raise n_exc.NeutronClientException(message=msg, status_code=404)

    def show_port(self, port, **_params):
        try:
            return {'port': self._fake_ports[port]}
        except KeyError:
            msg = 'Port %s not found' % port
            raise n_exc.NeutronClientException(message=msg, status_code=404)

    def show_subnet(self, subnet, **_params):
        try:
            return {'subnet': self._fake_subnets[subnet]}
        except KeyError:
            msg = 'Subnet %s not found' % subnet
            raise n_exc.NeutronClientException(message=msg, status_code=404)

    def list_security_groups(self, **_params):
        ret = []
        for security_group in self._fake_security_groups.values():
            names = _params.get('name')
            if names:
                if not isinstance(names, list):
                    names = [names]
                for name in names:
                    if security_group.get('name') == name:
                        ret.append(security_group)
            ids = _params.get('id')
            if ids:
                if not isinstance(ids, list):
                    ids = [ids]
                for id in ids:
                    if security_group.get('id') == id:
                        ret.append(security_group)
            elif not (names or ids):
                ret.append(security_group)
        return {'security_groups': ret}

    def list_networks(self, **_params):
        # neutronv2/api.py _get_available_networks calls this assuming
        # search_opts filter "shared" is implemented and not ignored
        shared = _params.get("shared", None)
        if shared:
            return {'networks': []}
        else:
            return {'networks':
                    [network for network in self._fake_networks.values()]}

    def list_ports(self, **_params):
        ret = []
        device_id = _params.get('device_id')
        for port in self._fake_ports.values():
            if device_id:
                if port['device_id'] in device_id:
                    ret.append(port)
            else:
                ret.append(port)
        return {'ports': ret}

    def list_subnets(self, **_params):
        return {'subnets':
                [subnet for subnet in self._fake_subnets.values()]}

    def list_floatingips(self, **_params):
        return {'floatingips': []}

    def delete_security_group(self, security_group):
        self.show_security_group(security_group)
        ports = self.list_ports()
        for port in ports.get('ports'):
            for sg_port in port['security_groups']:
                if sg_port == security_group:
                    msg = ('Unable to delete Security group %s in use'
                           % security_group)
                    raise n_exc.NeutronClientException(message=msg,
                                                       status_code=409)
        del self._fake_security_groups[security_group]

    def delete_security_group_rule(self, security_group_rule):
        self.show_security_group_rule(security_group_rule)
        del self._fake_security_group_rules[security_group_rule]

    def delete_network(self, network):
        self.show_network(network)
        self._check_ports_on_network(network)
        # copy the values so entries can be deleted while iterating
        for subnet in list(self._fake_subnets.values()):
            if subnet['network_id'] == network:
                del self._fake_subnets[subnet['id']]
        del self._fake_networks[network]

    def delete_port(self, port):
        self.show_port(port)
        del self._fake_ports[port]

    def update_port(self, port, body=None):
        self.show_port(port)
        self._fake_ports[port].update(body['port'])
        return {'port': self._fake_ports[port]}

    def list_extensions(self, **_parms):
        return {'extensions': []}

    def _check_ports_on_network(self, network):
        ports = self.list_ports()
        # list_ports() returns {'ports': [...]}, so iterate the list of
        # ports rather than the keys of the response dict.
        for port in ports.get('ports'):
            if port['network_id'] == network:
                msg = ('Unable to complete operation on network %s. There is '
                       'one or more ports still in use on the network'
                       % network)
                raise n_exc.NeutronClientException(message=msg,
                                                   status_code=409)

    def find_resource(self, resource, name_or_id, project_id=None,
                      cmd_resource=None, parent_id=None, fields=None):
        if resource == 'security_group':
            # lookup first by unique id
            sg = self._fake_security_groups.get(name_or_id)
            if sg:
                return sg
            # lookup by name, raise an exception on duplicates
            res = None
            for sg in self._fake_security_groups.values():
                if sg['name'] == name_or_id:
                    if res:
                        raise n_exc.NeutronClientNoUniqueMatch(
                            resource=resource, name=name_or_id)
                    res = sg
            if res:
                return res
        raise n_exc.NotFound("Fake %s '%s' not found." % (resource,
                                                          name_or_id))
nova-17.0.1/nova/tests/unit/api/openstack/compute/test_floating_ip_dns.py0000666000175000017500000004467613250073126026653 0ustar zuulzuul00000000000000# Copyright 2011 Andrew Bogott for the Wikimedia Foundation
# All Rights Reserved.
# Copyright 2013 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock
from six.moves import urllib
import webob

from nova.api.openstack.compute import floating_ip_dns \
        as fipdns_v21
from nova import context
from nova import db
from nova import exception
from nova import network
from nova import test
from nova.tests.unit.api.openstack import fakes

name = "arbitraryname"
name2 = "anotherarbitraryname"

test_ipv4_address = '10.0.0.66'
test_ipv4_address2 = '10.0.0.67'

test_ipv6_address = 'fe80:0:0:0:0:0:a00:42'

domain = "example.org"
domain2 = "example.net"
floating_ip_id = '1'


def _quote_domain(domain):
    """Domain names tend to have .'s in them. Urllib doesn't quote dots,
    but Routes tends to choke on them, so we need an extra level of
    by-hand quoting here.
This function needs to duplicate the one in python-novaclient/novaclient/v1_1/floating_ip_dns.py """ return urllib.parse.quote(domain.replace('.', '%2E')) def network_api_get_floating_ip(self, context, id): return {'id': floating_ip_id, 'address': test_ipv4_address, 'fixed_ip': None} def network_get_dns_domains(self, context): return [{'domain': 'example.org', 'scope': 'public'}, {'domain': 'example.com', 'scope': 'public', 'project': 'project1'}, {'domain': 'private.example.com', 'scope': 'private', 'availability_zone': 'avzone'}] def network_get_dns_entries_by_address(self, context, address, domain): return [name, name2] def network_get_dns_entries_by_name(self, context, address, domain): return [test_ipv4_address] def network_add_dns_entry(self, context, address, name, dns_type, domain): return {'dns_entry': {'ip': test_ipv4_address, 'name': name, 'type': dns_type, 'domain': domain}} def network_modify_dns_entry(self, context, address, name, domain): return {'dns_entry': {'name': name, 'ip': address, 'domain': domain}} def network_create_private_dns_domain(self, context, domain, avail_zone): pass def network_create_public_dns_domain(self, context, domain, project): pass class FloatingIpDNSTestV21(test.TestCase): floating_ip_dns = fipdns_v21 def _create_floating_ip(self): """Create a floating ip object.""" host = "fake_host" db.floating_ip_create(self.context, {'address': test_ipv4_address, 'host': host}) db.floating_ip_create(self.context, {'address': test_ipv6_address, 'host': host}) def _delete_floating_ip(self): db.floating_ip_destroy(self.context, test_ipv4_address) db.floating_ip_destroy(self.context, test_ipv6_address) def _check_status(self, expected_status, res, controller_method): self.assertEqual(expected_status, controller_method.wsgi_code) def _bad_request(self): return webob.exc.HTTPBadRequest def setUp(self): super(FloatingIpDNSTestV21, self).setUp() # None of these APIs are implemented for Neutron. 
self.flags(use_neutron=False) self.stub_out("nova.network.api.API.get_dns_domains", network_get_dns_domains) self.stub_out("nova.network.api.API.get_dns_entries_by_address", network_get_dns_entries_by_address) self.stub_out("nova.network.api.API.get_dns_entries_by_name", network_get_dns_entries_by_name) self.stub_out("nova.network.api.API.get_floating_ip", network_api_get_floating_ip) self.stub_out("nova.network.api.API.add_dns_entry", network_add_dns_entry) self.stub_out("nova.network.api.API.modify_dns_entry", network_modify_dns_entry) self.stub_out("nova.network.api.API.create_public_dns_domain", network_create_public_dns_domain) self.stub_out("nova.network.api.API.create_private_dns_domain", network_create_private_dns_domain) self.context = context.get_admin_context() self._create_floating_ip() temp = self.floating_ip_dns.FloatingIPDNSDomainController() self.domain_controller = temp self.entry_controller = self.floating_ip_dns.\ FloatingIPDNSEntryController() self.admin_req = fakes.HTTPRequest.blank('', use_admin_context=True) self.req = fakes.HTTPRequest.blank('') def tearDown(self): self._delete_floating_ip() super(FloatingIpDNSTestV21, self).tearDown() def test_dns_domains_list(self): res_dict = self.domain_controller.index(self.req) entries = res_dict['domain_entries'] self.assertTrue(entries) self.assertEqual(entries[0]['domain'], "example.org") self.assertFalse(entries[0]['project']) self.assertFalse(entries[0]['availability_zone']) self.assertEqual(entries[1]['domain'], "example.com") self.assertEqual(entries[1]['project'], "project1") self.assertFalse(entries[1]['availability_zone']) self.assertEqual(entries[2]['domain'], "private.example.com") self.assertFalse(entries[2]['project']) self.assertEqual(entries[2]['availability_zone'], "avzone") def _test_get_dns_entries_by_address(self, address): entries = self.entry_controller.show(self.req, _quote_domain(domain), address) entries = entries.obj self.assertEqual(len(entries['dns_entries']), 2) self.assertEqual(entries['dns_entries'][0]['name'], name) self.assertEqual(entries['dns_entries'][1]['name'], name2) self.assertEqual(entries['dns_entries'][0]['domain'], domain) def test_get_dns_entries_by_ipv4_address(self): self._test_get_dns_entries_by_address(test_ipv4_address) def test_get_dns_entries_by_ipv6_address(self): self._test_get_dns_entries_by_address(test_ipv6_address) def test_get_dns_entries_by_name(self): entry = self.entry_controller.show(self.req, _quote_domain(domain), name) self.assertEqual(entry['dns_entry']['ip'], test_ipv4_address) self.assertEqual(entry['dns_entry']['domain'], domain) @mock.patch.object(network.api.API, "get_dns_entries_by_name", side_effect=webob.exc.HTTPNotFound()) def test_dns_entries_not_found(self, mock_get_entries): self.assertRaises(webob.exc.HTTPNotFound, self.entry_controller.show, self.req, _quote_domain(domain), 'nonexistent') self.assertTrue(mock_get_entries.called) def test_create_entry(self): body = {'dns_entry': {'ip': test_ipv4_address, 'dns_type': 'A'}} entry = self.entry_controller.update(self.req, _quote_domain(domain), name, body=body) self.assertEqual(entry['dns_entry']['ip'], test_ipv4_address) def test_create_domain(self): self._test_create_domain(self.req) def _test_create_domain(self, req): body = {'domain_entry': {'scope': 'private', 'project': 'testproject'}} self.assertRaises(self._bad_request(), self.domain_controller.update, req, _quote_domain(domain), body=body) body = {'domain_entry': {'scope': 'public', 'availability_zone': 'zone1'}} 
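# Added note (illustrative sketch, not original code): the domain_entry
# schema ties the extra field to the scope -- 'project' is only valid
# together with scope 'public', and 'availability_zone' only with scope
# 'private'. Both mismatched bodies in this test are therefore expected
# to fail with a bad request, while the matching pairings exercised
# next succeed:
#
#     {'domain_entry': {'scope': 'public', 'project': 'testproject'}}        # ok
#     {'domain_entry': {'scope': 'private', 'availability_zone': 'zone1'}}   # ok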
self.assertRaises(self._bad_request(), self.domain_controller.update, req, _quote_domain(domain), body=body) body = {'domain_entry': {'scope': 'public', 'project': 'testproject'}} entry = self.domain_controller.update(req, _quote_domain(domain), body=body) self.assertEqual(entry['domain_entry']['domain'], domain) self.assertEqual(entry['domain_entry']['scope'], 'public') self.assertEqual(entry['domain_entry']['project'], 'testproject') body = {'domain_entry': {'scope': 'private', 'availability_zone': 'zone1'}} entry = self.domain_controller.update(req, _quote_domain(domain), body=body) self.assertEqual(entry['domain_entry']['domain'], domain) self.assertEqual(entry['domain_entry']['scope'], 'private') self.assertEqual(entry['domain_entry']['availability_zone'], 'zone1') @mock.patch.object(network.api.API, "delete_dns_entry") def test_delete_entry(self, mock_del_entry): delete = self.entry_controller.delete res = delete(self.req, _quote_domain(domain), name) self._check_status(202, res, delete) mock_del_entry.assert_called_once_with(mock.ANY, name, domain) @mock.patch.object(network.api.API, "delete_dns_entry", side_effect=exception.NotFound) def test_delete_entry_notfound(self, mock_del_entry): self.assertRaises(webob.exc.HTTPNotFound, self.entry_controller.delete, self.req, _quote_domain(domain), name) self.assertTrue(mock_del_entry.called) def test_delete_domain(self): self._test_delete_domain(self.req) @mock.patch.object(network.api.API, "delete_dns_domain") def _test_delete_domain(self, req, mock_del_dom): delete = self.domain_controller.delete res = delete(req, _quote_domain(domain)) self._check_status(202, res, delete) mock_del_dom.assert_called_once_with(mock.ANY, domain) def test_delete_domain_notfound(self): self._test_delete_domain_notfound(self.req) @mock.patch.object(network.api.API, "delete_dns_domain", side_effect=exception.NotFound) def _test_delete_domain_notfound(self, req, mock_del_dom): self.assertRaises( webob.exc.HTTPNotFound, self.domain_controller.delete, req, _quote_domain(domain)) self.assertTrue(mock_del_dom.called) def test_modify(self): body = {'dns_entry': {'ip': test_ipv4_address2, 'dns_type': 'A'}} entry = self.entry_controller.update(self.req, domain, name, body=body) self.assertEqual(entry['dns_entry']['ip'], test_ipv4_address2) def test_not_implemented_dns_entry_update(self): body = {'dns_entry': {'ip': test_ipv4_address, 'dns_type': 'A'}} with mock.patch.object(network.api.API, 'modify_dns_entry', side_effect=NotImplementedError()): self.assertRaises(webob.exc.HTTPNotImplemented, self.entry_controller.update, self.req, _quote_domain(domain), name, body=body) def test_not_implemented_dns_entry_show(self): with mock.patch.object(network.api.API, 'get_dns_entries_by_name', side_effect=NotImplementedError()): self.assertRaises(webob.exc.HTTPNotImplemented, self.entry_controller.show, self.req, _quote_domain(domain), name) def test_not_implemented_delete_entry(self): with mock.patch.object(network.api.API, 'delete_dns_entry', side_effect=NotImplementedError()): self.assertRaises(webob.exc.HTTPNotImplemented, self.entry_controller.delete, self.req, _quote_domain(domain), name) def test_not_implemented_delete_domain(self): with mock.patch.object(network.api.API, 'delete_dns_domain', side_effect=NotImplementedError()): self.assertRaises(webob.exc.HTTPNotImplemented, self.domain_controller.delete, self.admin_req, _quote_domain(domain)) def test_not_implemented_create_domain(self): body = {'domain_entry': {'scope': 'private', 'availability_zone': 'zone1'}} with 
mock.patch.object(network.api.API, 'create_private_dns_domain', side_effect=NotImplementedError()): self.assertRaises(webob.exc.HTTPNotImplemented, self.domain_controller.update, self.admin_req, _quote_domain(domain), body=body) def test_not_implemented_dns_domains_list(self): with mock.patch.object(network.api.API, 'get_dns_domains', side_effect=NotImplementedError()): self.assertRaises(webob.exc.HTTPNotImplemented, self.domain_controller.index, self.req) class FloatingIPDNSDomainPolicyEnforcementV21(test.NoDBTestCase): def setUp(self): super(FloatingIPDNSDomainPolicyEnforcementV21, self).setUp() self.controller = fipdns_v21.FloatingIPDNSDomainController() self.rule_name = "os_compute_api:os-floating-ip-dns" self.policy.set_rules({self.rule_name: "project:non_fake"}) self.req = fakes.HTTPRequest.blank('') def test_get_floating_ip_dns_policy_failed(self): rule_name = "os_compute_api:os-floating-ip-dns" self.policy.set_rules({rule_name: "project:non_fake"}) exc = self.assertRaises( exception.PolicyNotAuthorized, self.controller.index, self.req) self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) def test_update_floating_ip_dns_policy_failed(self): rule_name = "os_compute_api:os-floating-ip-dns:domain:update" self.policy.set_rules({rule_name: "project:non_fake"}) body = {'domain_entry': {'scope': 'public', 'project': 'testproject'}} exc = self.assertRaises( exception.PolicyNotAuthorized, self.controller.update, self.req, _quote_domain(domain), body=body) self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) def test_delete_floating_ip_dns_policy_failed(self): rule_name = "os_compute_api:os-floating-ip-dns:domain:delete" self.policy.set_rules({rule_name: "project:non_fake"}) exc = self.assertRaises( exception.PolicyNotAuthorized, self.controller.delete, self.req, _quote_domain(domain)) self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) class FloatingIPDNSEntryPolicyEnforcementV21(test.NoDBTestCase): def setUp(self): super(FloatingIPDNSEntryPolicyEnforcementV21, self).setUp() self.controller = fipdns_v21.FloatingIPDNSEntryController() self.rule_name = "os_compute_api:os-floating-ip-dns" self.policy.set_rules({self.rule_name: "project:non_fake"}) self.req = fakes.HTTPRequest.blank('') def test_show_floating_ip_dns_entry_policy_failed(self): exc = self.assertRaises( exception.PolicyNotAuthorized, self.controller.show, self.req, _quote_domain(domain), test_ipv4_address) self.assertEqual( "Policy doesn't allow %s to be performed." % self.rule_name, exc.format_message()) def test_update_floating_ip_dns_policy_failed(self): body = {'dns_entry': {'ip': test_ipv4_address, 'dns_type': 'A'}} exc = self.assertRaises( exception.PolicyNotAuthorized, self.controller.update, self.req, _quote_domain(domain), name, body=body) self.assertEqual( "Policy doesn't allow %s to be performed." % self.rule_name, exc.format_message()) def test_delete_floating_ip_dns_policy_failed(self): exc = self.assertRaises( exception.PolicyNotAuthorized, self.controller.delete, self.req, _quote_domain(domain), name) self.assertEqual( "Policy doesn't allow %s to be performed." 
% self.rule_name, exc.format_message()) class FloatingIpDNSDomainDeprecationTest(test.NoDBTestCase): def setUp(self): super(FloatingIpDNSDomainDeprecationTest, self).setUp() self.controller = fipdns_v21.FloatingIPDNSDomainController() self.req = fakes.HTTPRequest.blank('', version='2.36') def test_all_apis_return_not_found(self): self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.index, self.req) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.update, self.req, fakes.FAKE_UUID, {}) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.delete, self.req, fakes.FAKE_UUID) class FloatingIpDNSEntryDeprecationTest(test.NoDBTestCase): def setUp(self): super(FloatingIpDNSEntryDeprecationTest, self).setUp() self.controller = fipdns_v21.FloatingIPDNSEntryController() self.req = fakes.HTTPRequest.blank('', version='2.36') def test_all_apis_return_not_found(self): self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.show, self.req, fakes.FAKE_UUID, fakes.FAKE_UUID) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.update, self.req, fakes.FAKE_UUID, fakes.FAKE_UUID, {}) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.delete, self.req, fakes.FAKE_UUID, fakes.FAKE_UUID) nova-17.0.1/nova/tests/unit/api/openstack/compute/test_api.py0000666000175000017500000001427613250073126024262 0ustar zuulzuul00000000000000# Copyright 2010 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_serialization import jsonutils from oslo_utils import encodeutils import webob.dec import webob.exc from nova.api import openstack as openstack_api from nova.api.openstack import wsgi from nova import exception from nova import test from nova.tests.unit.api.openstack import fakes class APITest(test.NoDBTestCase): @property def wsgi_app(self): return fakes.wsgi_app_v21() def _wsgi_app(self, inner_app): return openstack_api.FaultWrapper(inner_app) def test_malformed_json(self): req = fakes.HTTPRequest.blank('/') req.method = 'POST' req.body = b'{' req.headers["content-type"] = "application/json" res = req.get_response(self.wsgi_app) self.assertEqual(res.status_int, 400) def test_malformed_xml(self): req = fakes.HTTPRequest.blank('/') req.method = 'POST' req.body = b'' req.headers["content-type"] = "application/xml" res = req.get_response(self.wsgi_app) self.assertEqual(res.status_int, 415) def test_vendor_content_type_json(self): ctype = 'application/vnd.openstack.compute+json' req = fakes.HTTPRequest.blank('/') req.headers['Accept'] = ctype res = req.get_response(self.wsgi_app) self.assertEqual(res.status_int, 200) # NOTE(scottynomad): Webob's Response assumes that header values are # strings so the `res.content_type` property is broken in python3. # # Consider changing `api.openstack.wsgi.Resource._process_stack` # to encode header values in ASCII rather than UTF-8. 
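# Added illustration (a small sketch of the workaround used below, not
# original code): safe_decode keeps the comparison working whether the
# header value comes back as bytes or text:
#
#     >>> encodeutils.safe_decode(b'application/json')
#     'application/json'
#     >>> encodeutils.safe_decode(u'application/json')
#     'application/json'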
# https://tools.ietf.org/html/rfc7230#section-3.2.4 content_type = res.headers.get('Content-Type') self.assertEqual(ctype, encodeutils.safe_decode(content_type)) jsonutils.loads(res.body) def test_exceptions_are_converted_to_faults_webob_exc(self): @webob.dec.wsgify def raise_webob_exc(req): raise webob.exc.HTTPNotFound(explanation='Raised a webob.exc') # api.application = raise_webob_exc api = self._wsgi_app(raise_webob_exc) resp = fakes.HTTPRequest.blank('/').get_response(api) self.assertEqual(resp.status_int, 404, resp.text) def test_exceptions_are_converted_to_faults_api_fault(self): @webob.dec.wsgify def raise_api_fault(req): exc = webob.exc.HTTPNotFound(explanation='Raised a webob.exc') return wsgi.Fault(exc) # api.application = raise_api_fault api = self._wsgi_app(raise_api_fault) resp = fakes.HTTPRequest.blank('/').get_response(api) self.assertIn('itemNotFound', resp.text) self.assertEqual(resp.status_int, 404, resp.text) def test_exceptions_are_converted_to_faults_exception(self): @webob.dec.wsgify def fail(req): raise Exception("Threw an exception") # api.application = fail api = self._wsgi_app(fail) resp = fakes.HTTPRequest.blank('/').get_response(api) self.assertIn('{"computeFault', resp.text) self.assertEqual(resp.status_int, 500, resp.text) def _do_test_exception_safety_reflected_in_faults(self, expose): class ExceptionWithSafety(exception.NovaException): safe = expose @webob.dec.wsgify def fail(req): raise ExceptionWithSafety('some explanation') api = self._wsgi_app(fail) resp = fakes.HTTPRequest.blank('/').get_response(api) self.assertIn('{"computeFault', resp.text) expected = ('ExceptionWithSafety: some explanation' if expose else 'The server has either erred or is incapable ' 'of performing the requested operation.') self.assertIn(expected, resp.text) self.assertEqual(resp.status_int, 500, resp.text) def test_safe_exceptions_are_described_in_faults(self): self._do_test_exception_safety_reflected_in_faults(True) def test_unsafe_exceptions_are_not_described_in_faults(self): self._do_test_exception_safety_reflected_in_faults(False) def _do_test_exception_mapping(self, exception_type, msg): @webob.dec.wsgify def fail(req): raise exception_type(msg) api = self._wsgi_app(fail) resp = fakes.HTTPRequest.blank('/').get_response(api) self.assertIn(msg, resp.text) self.assertEqual(resp.status_int, exception_type.code, resp.text) if hasattr(exception_type, 'headers'): for (key, value) in exception_type.headers.items(): self.assertIn(key, resp.headers) self.assertEqual(resp.headers[key], str(value)) def test_quota_error_mapping(self): self._do_test_exception_mapping(exception.QuotaError, 'too many used') def test_non_nova_notfound_exception_mapping(self): class ExceptionWithCode(Exception): code = 404 self._do_test_exception_mapping(ExceptionWithCode, 'NotFound') def test_non_nova_exception_mapping(self): class ExceptionWithCode(Exception): code = 417 self._do_test_exception_mapping(ExceptionWithCode, 'Expectation failed') def test_exception_with_none_code_throws_500(self): class ExceptionWithNoneCode(Exception): code = None @webob.dec.wsgify def fail(req): raise ExceptionWithNoneCode() api = self._wsgi_app(fail) resp = fakes.HTTPRequest.blank('/').get_response(api) self.assertEqual(500, resp.status_int) nova-17.0.1/nova/tests/unit/api/openstack/compute/test_baremetal_nodes.py0000666000175000017500000002301413250073126026623 0ustar zuulzuul00000000000000# Copyright (c) 2013 NTT DOCOMO, INC. # All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from ironicclient import exc as ironic_exc import mock import six from webob import exc from nova.api.openstack.compute import baremetal_nodes \ as b_nodes_v21 from nova import context from nova import exception from nova import test from nova.tests.unit.api.openstack import fakes from nova.tests.unit.virt.ironic import utils as ironic_utils def fake_node(**updates): node = { 'id': 1, 'service_host': "host", 'cpus': 8, 'memory_mb': 8192, 'local_gb': 128, 'pm_address': "10.1.2.3", 'pm_user': "pm_user", 'pm_password': "pm_pass", 'terminal_port': 8000, 'interfaces': [], 'instance_uuid': 'fake-instance-uuid', } if updates: node.update(updates) return node def fake_node_ext_status(**updates): node = fake_node(uuid='fake-uuid', task_state='fake-task-state', updated_at='fake-updated-at', pxe_config_path='fake-pxe-config-path') if updates: node.update(updates) return node FAKE_IRONIC_CLIENT = ironic_utils.FakeClient() @mock.patch.object(b_nodes_v21, '_get_ironic_client', lambda *_: FAKE_IRONIC_CLIENT) class BareMetalNodesTestV21(test.NoDBTestCase): mod = b_nodes_v21 def setUp(self): super(BareMetalNodesTestV21, self).setUp() self._setup() self.context = context.get_admin_context() self.request = fakes.HTTPRequest.blank('', use_admin_context=True) def _setup(self): self.controller = b_nodes_v21.BareMetalNodeController() @mock.patch.object(FAKE_IRONIC_CLIENT.node, 'list') def test_index_ironic(self, mock_list): properties = {'cpus': 2, 'memory_mb': 1024, 'local_gb': 20} node = ironic_utils.get_test_node(properties=properties) mock_list.return_value = [node] res_dict = self.controller.index(self.request) expected_output = {'nodes': [{'memory_mb': properties['memory_mb'], 'host': 'IRONIC MANAGED', 'disk_gb': properties['local_gb'], 'interfaces': [], 'task_state': None, 'id': node.uuid, 'cpus': properties['cpus']}]} self.assertEqual(expected_output, res_dict) mock_list.assert_called_once_with(detail=True) @mock.patch.object(FAKE_IRONIC_CLIENT.node, 'list') def test_index_ironic_missing_properties(self, mock_list): properties = {'cpus': 2} node = ironic_utils.get_test_node(properties=properties) mock_list.return_value = [node] res_dict = self.controller.index(self.request) expected_output = {'nodes': [{'memory_mb': 0, 'host': 'IRONIC MANAGED', 'disk_gb': 0, 'interfaces': [], 'task_state': None, 'id': node.uuid, 'cpus': properties['cpus']}]} self.assertEqual(expected_output, res_dict) mock_list.assert_called_once_with(detail=True) def test_index_ironic_not_implemented(self): with mock.patch.object(self.mod, 'ironic_client', None): self.assertRaises(exc.HTTPNotImplemented, self.controller.index, self.request) @mock.patch.object(FAKE_IRONIC_CLIENT.node, 'list_ports') @mock.patch.object(FAKE_IRONIC_CLIENT.node, 'get') def test_show_ironic(self, mock_get, mock_list_ports): properties = {'cpus': 1, 'memory_mb': 512, 'local_gb': 10} node = ironic_utils.get_test_node(properties=properties) port = ironic_utils.get_test_port() mock_get.return_value = node 
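# Added summary (illustration of the mapping these tests exercise, not
# original code): the controller translates ironic node properties into
# the legacy baremetal view roughly as
#
#     {'cpus': 1, 'memory_mb': 512, 'local_gb': 10}
#       -> {'cpus': 1, 'memory_mb': 512, 'disk_gb': 10,
#           'host': 'IRONIC MANAGED', 'task_state': None, ...}
#
# with any missing property defaulting to 0 (see the
# "missing_properties" / "no_properties" tests nearby).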
mock_list_ports.return_value = [port] res_dict = self.controller.show(self.request, node.uuid) expected_output = {'node': {'memory_mb': properties['memory_mb'], 'instance_uuid': None, 'host': 'IRONIC MANAGED', 'disk_gb': properties['local_gb'], 'interfaces': [{'address': port.address}], 'task_state': None, 'id': node.uuid, 'cpus': properties['cpus']}} self.assertEqual(expected_output, res_dict) mock_get.assert_called_once_with(node.uuid) mock_list_ports.assert_called_once_with(node.uuid) @mock.patch.object(FAKE_IRONIC_CLIENT.node, 'list_ports') @mock.patch.object(FAKE_IRONIC_CLIENT.node, 'get') def test_show_ironic_no_properties(self, mock_get, mock_list_ports): properties = {} node = ironic_utils.get_test_node(properties=properties) port = ironic_utils.get_test_port() mock_get.return_value = node mock_list_ports.return_value = [port] res_dict = self.controller.show(self.request, node.uuid) expected_output = {'node': {'memory_mb': 0, 'instance_uuid': None, 'host': 'IRONIC MANAGED', 'disk_gb': 0, 'interfaces': [{'address': port.address}], 'task_state': None, 'id': node.uuid, 'cpus': 0}} self.assertEqual(expected_output, res_dict) mock_get.assert_called_once_with(node.uuid) mock_list_ports.assert_called_once_with(node.uuid) @mock.patch.object(FAKE_IRONIC_CLIENT.node, 'list_ports') @mock.patch.object(FAKE_IRONIC_CLIENT.node, 'get') def test_show_ironic_no_interfaces(self, mock_get, mock_list_ports): properties = {'cpus': 1, 'memory_mb': 512, 'local_gb': 10} node = ironic_utils.get_test_node(properties=properties) mock_get.return_value = node mock_list_ports.return_value = [] res_dict = self.controller.show(self.request, node.uuid) self.assertEqual([], res_dict['node']['interfaces']) mock_get.assert_called_once_with(node.uuid) mock_list_ports.assert_called_once_with(node.uuid) @mock.patch.object(FAKE_IRONIC_CLIENT.node, 'get', side_effect=ironic_exc.NotFound()) def test_show_ironic_node_not_found(self, mock_get): error = self.assertRaises(exc.HTTPNotFound, self.controller.show, self.request, 'fake-uuid') self.assertIn('fake-uuid', six.text_type(error)) def test_show_ironic_not_implemented(self): with mock.patch.object(self.mod, 'ironic_client', None): properties = {'cpus': 1, 'memory_mb': 512, 'local_gb': 10} node = ironic_utils.get_test_node(properties=properties) self.assertRaises(exc.HTTPNotImplemented, self.controller.show, self.request, node.uuid) def test_create_ironic_not_supported(self): self.assertRaises(exc.HTTPBadRequest, self.controller.create, self.request, {'node': object()}) def test_delete_ironic_not_supported(self): self.assertRaises(exc.HTTPBadRequest, self.controller.delete, self.request, 'fake-id') def test_add_interface_ironic_not_supported(self): self.assertRaises(exc.HTTPBadRequest, self.controller._add_interface, self.request, 'fake-id', 'fake-body') def test_remove_interface_ironic_not_supported(self): self.assertRaises(exc.HTTPBadRequest, self.controller._remove_interface, self.request, 'fake-id', 'fake-body') class BareMetalNodesTestDeprecation(test.NoDBTestCase): def setUp(self): super(BareMetalNodesTestDeprecation, self).setUp() self.controller = b_nodes_v21.BareMetalNodeController() self.req = fakes.HTTPRequest.blank('', version='2.36') def test_all_apis_return_not_found(self): self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.show, self.req, fakes.FAKE_UUID) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.index, self.req) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.create, self.req, {}) 
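# Added note (illustrative): these proxy APIs are capped below
# microversion 2.36, so a request built with
#
#     fakes.HTTPRequest.blank('', version='2.36')
#
# fails the microversion lookup for every action, which is what each
# assertion in this test verifies.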
self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.delete, self.req, fakes.FAKE_UUID) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller._add_interface, self.req, fakes.FAKE_UUID, {}) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller._remove_interface, self.req, fakes.FAKE_UUID, {}) nova-17.0.1/nova/tests/unit/api/openstack/compute/test_services.py0000666000175000017500000016041513250073126025331 0ustar zuulzuul00000000000000# Copyright 2012 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy import datetime import iso8601 import mock from oslo_utils import fixture as utils_fixture import six import webob.exc from nova.api.openstack.compute import services as services_v21 from nova.api.openstack import wsgi as os_wsgi from nova import availability_zones from nova.cells import utils as cells_utils from nova import compute from nova import context from nova import exception from nova import objects from nova.servicegroup.drivers import db as db_driver from nova import test from nova.tests import fixtures from nova.tests.unit.api.openstack import fakes from nova.tests.unit.objects import test_service from nova.tests import uuidsentinel # This is tied into the os-services API samples functional tests. 
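# Added overview (illustrative summary, not original code): the fixture
# list defined below models four RPC services plus two API services;
# the API ones (nova-osapi_compute, nova-metadata) must be filtered out
# of every listing:
#
#     host1: nova-scheduler (disabled, up)    nova-compute (disabled, up)
#     host2: nova-scheduler (enabled, down)   nova-compute (disabled, down)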
FAKE_UUID_COMPUTE_HOST1 = 'e81d66a4-ddd3-4aba-8a84-171d1cb4d339' fake_services_list = [ dict(test_service.fake_service, binary='nova-scheduler', host='host1', id=1, uuid=uuidsentinel.svc1, disabled=True, topic='scheduler', updated_at=datetime.datetime(2012, 10, 29, 13, 42, 2), created_at=datetime.datetime(2012, 9, 18, 2, 46, 27), last_seen_up=datetime.datetime(2012, 10, 29, 13, 42, 2), forced_down=False, disabled_reason='test1'), dict(test_service.fake_service, binary='nova-compute', host='host1', id=2, uuid=FAKE_UUID_COMPUTE_HOST1, disabled=True, topic='compute', updated_at=datetime.datetime(2012, 10, 29, 13, 42, 5), created_at=datetime.datetime(2012, 9, 18, 2, 46, 27), last_seen_up=datetime.datetime(2012, 10, 29, 13, 42, 5), forced_down=False, disabled_reason='test2'), dict(test_service.fake_service, binary='nova-scheduler', host='host2', id=3, uuid=uuidsentinel.svc3, disabled=False, topic='scheduler', updated_at=datetime.datetime(2012, 9, 19, 6, 55, 34), created_at=datetime.datetime(2012, 9, 18, 2, 46, 28), last_seen_up=datetime.datetime(2012, 9, 19, 6, 55, 34), forced_down=False, disabled_reason=None), dict(test_service.fake_service, binary='nova-compute', host='host2', id=4, uuid=uuidsentinel.svc4, disabled=True, topic='compute', updated_at=datetime.datetime(2012, 9, 18, 8, 3, 38), created_at=datetime.datetime(2012, 9, 18, 2, 46, 28), last_seen_up=datetime.datetime(2012, 9, 18, 8, 3, 38), forced_down=False, disabled_reason='test4'), # NOTE(rpodolyaka): API services are special case and must be filtered out dict(test_service.fake_service, binary='nova-osapi_compute', host='host2', id=5, uuid=uuidsentinel.svc5, disabled=False, topic=None, updated_at=None, created_at=datetime.datetime(2012, 9, 18, 2, 46, 28), last_seen_up=None, forced_down=False, disabled_reason=None), dict(test_service.fake_service, binary='nova-metadata', host='host2', id=6, uuid=uuidsentinel.svc6, disabled=False, topic=None, updated_at=None, created_at=datetime.datetime(2012, 9, 18, 2, 46, 28), last_seen_up=None, forced_down=False, disabled_reason=None), ] def fake_service_get_all(services): def service_get_all(context, filters=None, set_zones=False, all_cells=False): if set_zones or 'availability_zone' in filters: return availability_zones.set_availability_zones(context, services) return services return service_get_all def fake_db_api_service_get_all(context, disabled=None): return fake_services_list def fake_db_service_get_by_host_binary(services): def service_get_by_host_binary(context, host, binary): for service in services: if service['host'] == host and service['binary'] == binary: return service raise exception.HostBinaryNotFound(host=host, binary=binary) return service_get_by_host_binary def fake_service_get_by_host_binary(context, host, binary): fake = fake_db_service_get_by_host_binary(fake_services_list) return fake(context, host, binary) def _service_get_by_id(services, value): for service in services: if service['id'] == value: return service return None def fake_db_service_update(services): def service_update(context, service_id, values): service = _service_get_by_id(services, service_id) if service is None: raise exception.ServiceNotFound(service_id=service_id) service = copy.deepcopy(service) service.update(values) return service return service_update def fake_service_update(context, service_id, values): fake = fake_db_service_update(fake_services_list) return fake(context, service_id, values) def fake_utcnow(): return datetime.datetime(2012, 10, 29, 13, 42, 11) class ServicesTestV21(test.TestCase): 
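# Added note (illustration, not part of the original module): the
# fake_db_* helpers above are closure factories -- each captures a
# services list and returns a callable matching the nova.db signature,
# which keeps per-test data isolated, e.g.:
#
#     update = fake_db_service_update(fake_services_list)
#     update(ctxt, 1, {'disabled': False})  # returns an updated copy
#     update(ctxt, 999, {})                 # raises ServiceNotFound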
service_is_up_exc = webob.exc.HTTPInternalServerError bad_request = exception.ValidationError wsgi_api_version = os_wsgi.DEFAULT_API_VERSION def _set_up_controller(self): self.controller = services_v21.ServiceController() def setUp(self): super(ServicesTestV21, self).setUp() self.ctxt = context.get_admin_context() self.host_api = compute.HostAPI() self._set_up_controller() self.controller.host_api.service_get_all = ( mock.Mock(side_effect=fake_service_get_all(fake_services_list))) self.useFixture(utils_fixture.TimeFixture(fake_utcnow())) self.stub_out('nova.db.service_get_by_host_and_binary', fake_db_service_get_by_host_binary(fake_services_list)) self.stub_out('nova.db.service_update', fake_db_service_update(fake_services_list)) self.req = fakes.HTTPRequest.blank('') self.useFixture(fixtures.SingleCellSimple()) def _process_output(self, services, has_disabled=False, has_id=False): return services def test_services_list(self): req = fakes.HTTPRequest.blank('/fake/services', use_admin_context=True) res_dict = self.controller.index(req) response = {'services': [ {'binary': 'nova-scheduler', 'host': 'host1', 'zone': 'internal', 'status': 'disabled', 'id': 1, 'state': 'up', 'disabled_reason': 'test1', 'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 2)}, {'binary': 'nova-compute', 'host': 'host1', 'zone': 'nova', 'id': 2, 'status': 'disabled', 'disabled_reason': 'test2', 'state': 'up', 'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 5)}, {'binary': 'nova-scheduler', 'host': 'host2', 'zone': 'internal', 'id': 3, 'status': 'enabled', 'disabled_reason': None, 'state': 'down', 'updated_at': datetime.datetime(2012, 9, 19, 6, 55, 34)}, {'binary': 'nova-compute', 'host': 'host2', 'zone': 'nova', 'id': 4, 'status': 'disabled', 'disabled_reason': 'test4', 'state': 'down', 'updated_at': datetime.datetime(2012, 9, 18, 8, 3, 38)}]} self._process_output(response) self.assertEqual(res_dict, response) def test_services_list_with_host(self): req = fakes.HTTPRequest.blank('/fake/services?host=host1', use_admin_context=True) res_dict = self.controller.index(req) response = {'services': [ {'binary': 'nova-scheduler', 'host': 'host1', 'disabled_reason': 'test1', 'id': 1, 'zone': 'internal', 'status': 'disabled', 'state': 'up', 'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 2)}, {'binary': 'nova-compute', 'host': 'host1', 'zone': 'nova', 'disabled_reason': 'test2', 'id': 2, 'status': 'disabled', 'state': 'up', 'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 5)}]} self._process_output(response) self.assertEqual(res_dict, response) def test_services_list_with_service(self): req = fakes.HTTPRequest.blank('/fake/services?binary=nova-compute', use_admin_context=True) res_dict = self.controller.index(req) response = {'services': [ {'binary': 'nova-compute', 'host': 'host1', 'disabled_reason': 'test2', 'id': 2, 'zone': 'nova', 'status': 'disabled', 'state': 'up', 'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 5)}, {'binary': 'nova-compute', 'host': 'host2', 'zone': 'nova', 'disabled_reason': 'test4', 'id': 4, 'status': 'disabled', 'state': 'down', 'updated_at': datetime.datetime(2012, 9, 18, 8, 3, 38)}]} self._process_output(response) self.assertEqual(res_dict, response) def _test_services_list_with_param(self, url): req = fakes.HTTPRequest.blank(url, use_admin_context=True) res_dict = self.controller.index(req) response = {'services': [ {'binary': 'nova-compute', 'host': 'host1', 'zone': 'nova', 'disabled_reason': 'test2', 'id': 2, 'status': 'disabled', 'state': 'up', 'updated_at': 
datetime.datetime(2012, 10, 29, 13, 42, 5)}]} self._process_output(response) self.assertEqual(res_dict, response) def test_services_list_with_host_service(self): url = '/fake/services?host=host1&binary=nova-compute' self._test_services_list_with_param(url) def test_services_list_with_additional_filter(self): url = '/fake/services?host=host1&binary=nova-compute&unknown=abc' self._test_services_list_with_param(url) def test_services_list_with_unknown_filter(self): url = '/fake/services?unknown=abc' req = fakes.HTTPRequest.blank(url, use_admin_context=True) res_dict = self.controller.index(req) response = {'services': [ {'binary': 'nova-scheduler', 'disabled_reason': 'test1', 'host': 'host1', 'id': 1, 'state': 'up', 'status': 'disabled', 'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 2), 'zone': 'internal'}, {'binary': 'nova-compute', 'disabled_reason': 'test2', 'host': 'host1', 'id': 2, 'state': 'up', 'status': 'disabled', 'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 5), 'zone': 'nova'}, {'binary': 'nova-scheduler', 'disabled_reason': None, 'host': 'host2', 'id': 3, 'state': 'down', 'status': 'enabled', 'updated_at': datetime.datetime(2012, 9, 19, 6, 55, 34), 'zone': 'internal'}, {'binary': 'nova-compute', 'disabled_reason': 'test4', 'host': 'host2', 'id': 4, 'state': 'down', 'status': 'disabled', 'updated_at': datetime.datetime(2012, 9, 18, 8, 3, 38), 'zone': 'nova'}]} self._process_output(response) self.assertEqual(res_dict, response) def test_services_list_with_multiple_host_filter(self): url = '/fake/services?host=host1&host=host2' req = fakes.HTTPRequest.blank(url, use_admin_context=True) res_dict = self.controller.index(req) # 2nd query param 'host2' is used here response = {'services': [ {'binary': 'nova-scheduler', 'disabled_reason': None, 'host': 'host2', 'id': 3, 'state': 'down', 'status': 'enabled', 'updated_at': datetime.datetime(2012, 9, 19, 6, 55, 34), 'zone': 'internal'}, {'binary': 'nova-compute', 'disabled_reason': 'test4', 'host': 'host2', 'id': 4, 'state': 'down', 'status': 'disabled', 'updated_at': datetime.datetime(2012, 9, 18, 8, 3, 38), 'zone': 'nova'}]} self._process_output(response) self.assertEqual(response, res_dict) def test_services_list_with_multiple_service_filter(self): url = '/fake/services?binary=nova-compute&binary=nova-scheduler' req = fakes.HTTPRequest.blank(url, use_admin_context=True) res_dict = self.controller.index(req) # 2nd query param 'nova-scheduler' is used here response = {'services': [ {'binary': 'nova-scheduler', 'disabled_reason': 'test1', 'host': 'host1', 'id': 1, 'state': 'up', 'status': 'disabled', 'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 2), 'zone': 'internal'}, {'binary': 'nova-scheduler', 'disabled_reason': None, 'host': 'host2', 'id': 3, 'state': 'down', 'status': 'enabled', 'updated_at': datetime.datetime(2012, 9, 19, 6, 55, 34), 'zone': 'internal'}]} self.assertEqual(response, res_dict) def test_services_list_host_query_allow_int_as_string(self): req = fakes.HTTPRequest.blank('', use_admin_context=True, query_string='binary=1') res_dict = self.controller.index(req) self.assertEqual({'services': []}, res_dict) def test_services_list_service_query_allow_int_as_string(self): req = fakes.HTTPRequest.blank('', use_admin_context=True, query_string='host=1') res_dict = self.controller.index(req) self.assertEqual({'services': []}, res_dict) def test_services_list_with_host_service_dummy(self): # This is for backward compatible, need remove it when # restriction to param is enabled. 
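# Added note (illustrative): until strict query-parameter validation is
# enabled (see the comment above), unknown parameters such as 'dummy'
# are silently ignored, so this URL filters exactly like
#
#     /fake/services?host=host1&binary=nova-compute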
url = '/fake/services?host=host1&binary=nova-compute&dummy=dummy' self._test_services_list_with_param(url) def test_services_detail(self): req = fakes.HTTPRequest.blank('/fake/services', use_admin_context=True) res_dict = self.controller.index(req) response = {'services': [ {'binary': 'nova-scheduler', 'host': 'host1', 'zone': 'internal', 'status': 'disabled', 'id': 1, 'state': 'up', 'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 2), 'disabled_reason': 'test1'}, {'binary': 'nova-compute', 'host': 'host1', 'zone': 'nova', 'status': 'disabled', 'state': 'up', 'id': 2, 'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 5), 'disabled_reason': 'test2'}, {'binary': 'nova-scheduler', 'host': 'host2', 'zone': 'internal', 'status': 'enabled', 'id': 3, 'state': 'down', 'updated_at': datetime.datetime(2012, 9, 19, 6, 55, 34), 'disabled_reason': None}, {'binary': 'nova-compute', 'host': 'host2', 'zone': 'nova', 'id': 4, 'status': 'disabled', 'state': 'down', 'updated_at': datetime.datetime(2012, 9, 18, 8, 3, 38), 'disabled_reason': 'test4'}]} self._process_output(response, has_disabled=True) self.assertEqual(res_dict, response) def test_service_detail_with_host(self): req = fakes.HTTPRequest.blank('/fake/services?host=host1', use_admin_context=True) res_dict = self.controller.index(req) response = {'services': [ {'binary': 'nova-scheduler', 'host': 'host1', 'zone': 'internal', 'id': 1, 'status': 'disabled', 'state': 'up', 'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 2), 'disabled_reason': 'test1'}, {'binary': 'nova-compute', 'host': 'host1', 'zone': 'nova', 'id': 2, 'status': 'disabled', 'state': 'up', 'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 5), 'disabled_reason': 'test2'}]} self._process_output(response, has_disabled=True) self.assertEqual(res_dict, response) def test_service_detail_with_service(self): req = fakes.HTTPRequest.blank('/fake/services?binary=nova-compute', use_admin_context=True) res_dict = self.controller.index(req) response = {'services': [ {'binary': 'nova-compute', 'host': 'host1', 'zone': 'nova', 'id': 2, 'status': 'disabled', 'state': 'up', 'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 5), 'disabled_reason': 'test2'}, {'binary': 'nova-compute', 'host': 'host2', 'id': 4, 'zone': 'nova', 'status': 'disabled', 'state': 'down', 'updated_at': datetime.datetime(2012, 9, 18, 8, 3, 38), 'disabled_reason': 'test4'}]} self._process_output(response, has_disabled=True) self.assertEqual(res_dict, response) def test_service_detail_with_host_service(self): url = '/fake/services?host=host1&binary=nova-compute' req = fakes.HTTPRequest.blank(url, use_admin_context=True) res_dict = self.controller.index(req) response = {'services': [ {'binary': 'nova-compute', 'host': 'host1', 'zone': 'nova', 'status': 'disabled', 'id': 2, 'state': 'up', 'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 5), 'disabled_reason': 'test2'}]} self._process_output(response, has_disabled=True) self.assertEqual(res_dict, response) def test_services_detail_with_delete_extension(self): req = fakes.HTTPRequest.blank('/fake/services', use_admin_context=True) res_dict = self.controller.index(req) response = {'services': [ {'binary': 'nova-scheduler', 'host': 'host1', 'id': 1, 'zone': 'internal', 'disabled_reason': 'test1', 'status': 'disabled', 'state': 'up', 'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 2)}, {'binary': 'nova-compute', 'host': 'host1', 'id': 2, 'zone': 'nova', 'disabled_reason': 'test2', 'status': 'disabled', 'state': 'up', 'updated_at': datetime.datetime(2012, 
10, 29, 13, 42, 5)}, {'binary': 'nova-scheduler', 'host': 'host2', 'disabled_reason': None, 'id': 3, 'zone': 'internal', 'status': 'enabled', 'state': 'down', 'updated_at': datetime.datetime(2012, 9, 19, 6, 55, 34)}, {'binary': 'nova-compute', 'host': 'host2', 'id': 4, 'disabled_reason': 'test4', 'zone': 'nova', 'status': 'disabled', 'state': 'down', 'updated_at': datetime.datetime(2012, 9, 18, 8, 3, 38)}]} self._process_output(response, has_id=True) self.assertEqual(res_dict, response) def test_services_enable(self): def _service_update(context, service_id, values): self.assertIsNone(values['disabled_reason']) return dict(test_service.fake_service, id=service_id, **values) self.stub_out('nova.db.service_update', _service_update) body = {'host': 'host1', 'binary': 'nova-compute'} res_dict = self.controller.update(self.req, "enable", body=body) self.assertEqual(res_dict['service']['status'], 'enabled') self.assertNotIn('disabled_reason', res_dict['service']) def test_services_enable_with_invalid_host(self): body = {'host': 'invalid', 'binary': 'nova-compute'} self.assertRaises(webob.exc.HTTPNotFound, self.controller.update, self.req, "enable", body=body) def test_services_enable_with_unmapped_host(self): body = {'host': 'invalid', 'binary': 'nova-compute'} with mock.patch.object(self.controller.host_api, 'service_update') as m: m.side_effect = exception.HostMappingNotFound(name='something') self.assertRaises(webob.exc.HTTPNotFound, self.controller.update, self.req, "enable", body=body) def test_services_enable_with_invalid_binary(self): body = {'host': 'host1', 'binary': 'invalid'} self.assertRaises(webob.exc.HTTPNotFound, self.controller.update, self.req, "enable", body=body) def test_services_disable(self): body = {'host': 'host1', 'binary': 'nova-compute'} res_dict = self.controller.update(self.req, "disable", body=body) self.assertEqual(res_dict['service']['status'], 'disabled') self.assertNotIn('disabled_reason', res_dict['service']) def test_services_disable_with_invalid_host(self): body = {'host': 'invalid', 'binary': 'nova-compute'} self.assertRaises(webob.exc.HTTPNotFound, self.controller.update, self.req, "disable", body=body) def test_services_disable_with_invalid_binary(self): body = {'host': 'host1', 'binary': 'invalid'} self.assertRaises(webob.exc.HTTPNotFound, self.controller.update, self.req, "disable", body=body) def test_services_disable_log_reason(self): body = {'host': 'host1', 'binary': 'nova-compute', 'disabled_reason': 'test-reason', } res_dict = self.controller.update(self.req, "disable-log-reason", body=body) self.assertEqual(res_dict['service']['status'], 'disabled') self.assertEqual(res_dict['service']['disabled_reason'], 'test-reason') def test_mandatory_reason_field(self): body = {'host': 'host1', 'binary': 'nova-compute', } self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update, self.req, "disable-log-reason", body=body) def test_invalid_reason_field(self): reason = 'a' * 256 body = {'host': 'host1', 'binary': 'nova-compute', 'disabled_reason': reason, } self.assertRaises(self.bad_request, self.controller.update, self.req, "disable-log-reason", body=body) def test_services_delete(self): compute = self.host_api.db.service_create(self.ctxt, {'host': 'fake-compute-host', 'binary': 'nova-compute', 'topic': 'compute', 'report_count': 0}) with mock.patch.object(self.controller.host_api, 'service_delete') as service_delete: self.controller.delete(self.req, compute.id) service_delete.assert_called_once_with( self.req.environ['nova.context'], compute.id) 
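# Added note (assumption about nova's wsgi layer, for illustration):
# controller methods decorated with @wsgi.response(<code>) carry the
# expected status as a wsgi_code attribute, which is what the assertion
# below inspects instead of building a full response, e.g.:
#
#     @wsgi.response(204)
#     def delete(self, req, id): ...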
self.assertEqual(self.controller.delete.wsgi_code, 204) def test_services_delete_not_found(self): self.assertRaises(webob.exc.HTTPNotFound, self.controller.delete, self.req, 1234) def test_services_delete_invalid_id(self): self.assertRaises(webob.exc.HTTPBadRequest, self.controller.delete, self.req, 'abc') def test_services_delete_duplicate_service(self): with mock.patch.object(self.controller, 'host_api') as host_api: host_api.service_delete.side_effect = exception.ServiceNotUnique() self.assertRaises(webob.exc.HTTPBadRequest, self.controller.delete, self.req, 1234) self.assertTrue(host_api.service_delete.called) # This test is just to verify that the servicegroup API gets used when # calling the API @mock.patch.object(db_driver.DbDriver, 'is_up', side_effect=KeyError) def test_services_with_exception(self, mock_is_up): url = '/fake/services?host=host1&binary=nova-compute' req = fakes.HTTPRequest.blank(url, use_admin_context=True) self.assertRaises(self.service_is_up_exc, self.controller.index, req) class ServicesTestV211(ServicesTestV21): wsgi_api_version = '2.11' def test_services_list(self): req = fakes.HTTPRequest.blank('/fake/services', use_admin_context=True, version=self.wsgi_api_version) res_dict = self.controller.index(req) response = {'services': [ {'binary': 'nova-scheduler', 'host': 'host1', 'zone': 'internal', 'status': 'disabled', 'id': 1, 'state': 'up', 'forced_down': False, 'disabled_reason': 'test1', 'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 2)}, {'binary': 'nova-compute', 'host': 'host1', 'zone': 'nova', 'id': 2, 'status': 'disabled', 'disabled_reason': 'test2', 'state': 'up', 'forced_down': False, 'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 5)}, {'binary': 'nova-scheduler', 'host': 'host2', 'zone': 'internal', 'id': 3, 'status': 'enabled', 'disabled_reason': None, 'state': 'down', 'forced_down': False, 'updated_at': datetime.datetime(2012, 9, 19, 6, 55, 34)}, {'binary': 'nova-compute', 'host': 'host2', 'zone': 'nova', 'id': 4, 'status': 'disabled', 'disabled_reason': 'test4', 'state': 'down', 'forced_down': False, 'updated_at': datetime.datetime(2012, 9, 18, 8, 3, 38)}]} self._process_output(response) self.assertEqual(res_dict, response) def test_services_list_with_host(self): req = fakes.HTTPRequest.blank('/fake/services?host=host1', use_admin_context=True, version=self.wsgi_api_version) res_dict = self.controller.index(req) response = {'services': [ {'binary': 'nova-scheduler', 'host': 'host1', 'disabled_reason': 'test1', 'id': 1, 'zone': 'internal', 'status': 'disabled', 'state': 'up', 'forced_down': False, 'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 2)}, {'binary': 'nova-compute', 'host': 'host1', 'zone': 'nova', 'disabled_reason': 'test2', 'id': 2, 'status': 'disabled', 'state': 'up', 'forced_down': False, 'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 5)}]} self._process_output(response) self.assertEqual(res_dict, response) def test_services_list_with_service(self): req = fakes.HTTPRequest.blank('/fake/services?binary=nova-compute', version=self.wsgi_api_version, use_admin_context=True) res_dict = self.controller.index(req) response = {'services': [ {'binary': 'nova-compute', 'host': 'host1', 'disabled_reason': 'test2', 'id': 2, 'zone': 'nova', 'status': 'disabled', 'state': 'up', 'forced_down': False, 'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 5)}, {'binary': 'nova-compute', 'host': 'host2', 'zone': 'nova', 'disabled_reason': 'test4', 'id': 4, 'status': 'disabled', 'state': 'down', 'forced_down': False, 
'updated_at': datetime.datetime(2012, 9, 18, 8, 3, 38)}]} self._process_output(response) self.assertEqual(res_dict, response) def test_services_list_with_host_service(self): url = '/fake/services?host=host1&binary=nova-compute' req = fakes.HTTPRequest.blank(url, use_admin_context=True, version=self.wsgi_api_version) res_dict = self.controller.index(req) response = {'services': [ {'binary': 'nova-compute', 'host': 'host1', 'zone': 'nova', 'disabled_reason': 'test2', 'id': 2, 'status': 'disabled', 'state': 'up', 'forced_down': False, 'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 5)}]} self._process_output(response) self.assertEqual(res_dict, response) def test_services_detail(self): req = fakes.HTTPRequest.blank('/fake/services', use_admin_context=True, version=self.wsgi_api_version) res_dict = self.controller.index(req) response = {'services': [ {'binary': 'nova-scheduler', 'host': 'host1', 'zone': 'internal', 'status': 'disabled', 'id': 1, 'state': 'up', 'forced_down': False, 'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 2), 'disabled_reason': 'test1'}, {'binary': 'nova-compute', 'host': 'host1', 'zone': 'nova', 'status': 'disabled', 'state': 'up', 'id': 2, 'forced_down': False, 'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 5), 'disabled_reason': 'test2'}, {'binary': 'nova-scheduler', 'host': 'host2', 'zone': 'internal', 'status': 'enabled', 'id': 3, 'state': 'down', 'forced_down': False, 'updated_at': datetime.datetime(2012, 9, 19, 6, 55, 34), 'disabled_reason': None}, {'binary': 'nova-compute', 'host': 'host2', 'zone': 'nova', 'id': 4, 'status': 'disabled', 'state': 'down', 'forced_down': False, 'updated_at': datetime.datetime(2012, 9, 18, 8, 3, 38), 'disabled_reason': 'test4'}]} self._process_output(response, has_disabled=True) self.assertEqual(res_dict, response) def test_service_detail_with_host(self): req = fakes.HTTPRequest.blank('/fake/services?host=host1', use_admin_context=True, version=self.wsgi_api_version) res_dict = self.controller.index(req) response = {'services': [ {'binary': 'nova-scheduler', 'host': 'host1', 'zone': 'internal', 'id': 1, 'status': 'disabled', 'state': 'up', 'forced_down': False, 'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 2), 'disabled_reason': 'test1'}, {'binary': 'nova-compute', 'host': 'host1', 'zone': 'nova', 'id': 2, 'status': 'disabled', 'state': 'up', 'forced_down': False, 'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 5), 'disabled_reason': 'test2'}]} self._process_output(response, has_disabled=True) self.assertEqual(res_dict, response) def test_service_detail_with_service(self): req = fakes.HTTPRequest.blank('/fake/services?binary=nova-compute', version=self.wsgi_api_version, use_admin_context=True) res_dict = self.controller.index(req) response = {'services': [ {'binary': 'nova-compute', 'host': 'host1', 'zone': 'nova', 'id': 2, 'status': 'disabled', 'state': 'up', 'forced_down': False, 'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 5), 'disabled_reason': 'test2'}, {'binary': 'nova-compute', 'host': 'host2', 'id': 4, 'zone': 'nova', 'status': 'disabled', 'state': 'down', 'forced_down': False, 'updated_at': datetime.datetime(2012, 9, 18, 8, 3, 38), 'disabled_reason': 'test4'}]} self._process_output(response, has_disabled=True) self.assertEqual(res_dict, response) def test_service_detail_with_host_service(self): url = '/fake/services?host=host1&binary=nova-compute' req = fakes.HTTPRequest.blank(url, use_admin_context=True, version=self.wsgi_api_version) res_dict = self.controller.index(req) response 
= {'services': [ {'binary': 'nova-compute', 'host': 'host1', 'zone': 'nova', 'status': 'disabled', 'id': 2, 'state': 'up', 'forced_down': False, 'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 5), 'disabled_reason': 'test2'}]} self._process_output(response, has_disabled=True) self.assertEqual(res_dict, response) def test_services_detail_with_delete_extension(self): req = fakes.HTTPRequest.blank('/fake/services', use_admin_context=True, version=self.wsgi_api_version) res_dict = self.controller.index(req) response = {'services': [ {'binary': 'nova-scheduler', 'host': 'host1', 'id': 1, 'zone': 'internal', 'disabled_reason': 'test1', 'status': 'disabled', 'state': 'up', 'forced_down': False, 'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 2)}, {'binary': 'nova-compute', 'host': 'host1', 'id': 2, 'zone': 'nova', 'disabled_reason': 'test2', 'status': 'disabled', 'state': 'up', 'forced_down': False, 'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 5)}, {'binary': 'nova-scheduler', 'host': 'host2', 'disabled_reason': None, 'id': 3, 'zone': 'internal', 'status': 'enabled', 'state': 'down', 'forced_down': False, 'updated_at': datetime.datetime(2012, 9, 19, 6, 55, 34)}, {'binary': 'nova-compute', 'host': 'host2', 'id': 4, 'disabled_reason': 'test4', 'zone': 'nova', 'status': 'disabled', 'state': 'down', 'forced_down': False, 'updated_at': datetime.datetime(2012, 9, 18, 8, 3, 38)}]} self._process_output(response, has_id=True) self.assertEqual(res_dict, response) def test_force_down_service(self): req = fakes.HTTPRequest.blank('/fake/services', use_admin_context=True, version=self.wsgi_api_version) req_body = {"forced_down": True, "host": "host1", "binary": "nova-compute"} res_dict = self.controller.update(req, 'force-down', body=req_body) response = { "service": { "forced_down": True, "host": "host1", "binary": "nova-compute" } } self.assertEqual(response, res_dict) def test_force_down_service_with_string_forced_down(self): req = fakes.HTTPRequest.blank('/fake/services', use_admin_context=True, version=self.wsgi_api_version) req_body = {"forced_down": "True", "host": "host1", "binary": "nova-compute"} res_dict = self.controller.update(req, 'force-down', body=req_body) response = { "service": { "forced_down": True, "host": "host1", "binary": "nova-compute" } } self.assertEqual(response, res_dict) def test_force_down_service_with_invalid_parameter(self): req = fakes.HTTPRequest.blank('/fake/services', use_admin_context=True, version=self.wsgi_api_version) req_body = {"forced_down": "Invalid", "host": "host1", "binary": "nova-compute"} self.assertRaises(exception.ValidationError, self.controller.update, req, 'force-down', body=req_body) class ServicesTestV252(ServicesTestV211): """This is a boundary test to ensure that 2.52 behaves the same as 2.11.""" wsgi_api_version = '2.52' class FakeServiceGroupAPI(object): def service_is_up(self, *args, **kwargs): return True def get_updated_time(self, *args, **kwargs): return mock.sentinel.updated_time class ServicesTestV253(test.TestCase): """Tests for the 2.53 microversion in the os-services API.""" def setUp(self): super(ServicesTestV253, self).setUp() self.controller = services_v21.ServiceController() self.controller.servicegroup_api = FakeServiceGroupAPI() self.req = fakes.HTTPRequest.blank( '', version=services_v21.UUID_FOR_ID_MIN_VERSION) def assert_services_equal(self, s1, s2): for k in ('binary', 'host'): self.assertEqual(s1[k], s2[k]) def test_list_has_uuid_in_id_field(self): """Tests that a GET response includes an id field but the 
value is the service uuid rather than the id integer primary key. """ service_uuids = [s['uuid'] for s in fake_services_list] with mock.patch.object( self.controller.host_api, 'service_get_all', side_effect=fake_service_get_all(fake_services_list)): resp = self.controller.index(self.req) for service in resp['services']: # Make sure a uuid field wasn't returned. self.assertNotIn('uuid', service) # Make sure the id field is one of our known uuids. self.assertIn(service['id'], service_uuids) # Make sure this service was in our known list of fake services. expected = next(iter(filter( lambda s: s['uuid'] == service['id'], fake_services_list))) self.assert_services_equal(expected, service) def test_delete_takes_uuid_for_id(self): """Tests that a DELETE request correctly deletes a service when a valid service uuid is provided for an existing service. """ service = self.start_service( 'compute', 'fake-compute-host').service_ref with mock.patch.object(self.controller.host_api, 'service_delete') as service_delete: self.controller.delete(self.req, service.uuid) service_delete.assert_called_once_with( self.req.environ['nova.context'], service.uuid) self.assertEqual(204, self.controller.delete.wsgi_code) def test_delete_uuid_not_found(self): """Tests that we get a 404 response when attempting to delete a service that is not found by the given uuid. """ self.assertRaises(webob.exc.HTTPNotFound, self.controller.delete, self.req, uuidsentinel.svc2) def test_delete_invalid_uuid(self): """Tests that the service uuid is validated in a DELETE request.""" ex = self.assertRaises(webob.exc.HTTPBadRequest, self.controller.delete, self.req, 1234) self.assertIn('Invalid uuid', six.text_type(ex)) def test_update_invalid_service_uuid(self): """Tests that the service uuid is validated in a PUT request.""" ex = self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update, self.req, 1234, body={}) self.assertIn('Invalid uuid', six.text_type(ex)) def test_update_policy_failed(self): """Tests that policy is checked with microversion 2.53.""" rule_name = "os_compute_api:os-services" self.policy.set_rules({rule_name: "project_id:non_fake"}) exc = self.assertRaises( exception.PolicyNotAuthorized, self.controller.update, self.req, uuidsentinel.service_uuid, body={}) self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) def test_update_service_not_found(self): """Tests that we get a 404 response if the service is not found by the given uuid when handling a PUT request. """ self.assertRaises(webob.exc.HTTPNotFound, self.controller.update, self.req, uuidsentinel.service_uuid, body={}) def test_update_invalid_status(self): """Tests that jsonschema validates the status field in the request body and fails if it's not "enabled" or "disabled". """ service = self.start_service( 'compute', 'fake-compute-host').service_ref self.assertRaises( exception.ValidationError, self.controller.update, self.req, service.uuid, body={'status': 'invalid'}) def test_update_disabled_no_reason_then_enable(self): """Tests disabling a service with no reason given. Then enables it to see the change in the response body. 
""" service = self.start_service( 'compute', 'fake-compute-host').service_ref resp = self.controller.update(self.req, service.uuid, body={'status': 'disabled'}) expected_resp = { 'service': { 'status': 'disabled', 'state': 'up', 'binary': 'nova-compute', 'host': 'fake-compute-host', 'zone': 'nova', # Comes from CONF.default_availability_zone 'updated_at': mock.sentinel.updated_time, 'disabled_reason': None, 'id': service.uuid, 'forced_down': False } } self.assertDictEqual(expected_resp, resp) # Now enable the service to see the response change. req = fakes.HTTPRequest.blank( '', version=services_v21.UUID_FOR_ID_MIN_VERSION) resp = self.controller.update(req, service.uuid, body={'status': 'enabled'}) expected_resp['service']['status'] = 'enabled' self.assertDictEqual(expected_resp, resp) def test_update_enable_with_disabled_reason_fails(self): """Validates that requesting to both enable a service and set the disabled_reason results in a 400 BadRequest error. """ service = self.start_service( 'compute', 'fake-compute-host').service_ref ex = self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update, self.req, service.uuid, body={'status': 'enabled', 'disabled_reason': 'invalid'}) self.assertIn("Specifying 'disabled_reason' with status 'enabled' " "is invalid.", six.text_type(ex)) def test_update_disabled_reason_and_forced_down(self): """Tests disabling a service with a reason and forcing it down is reflected back in the response. """ service = self.start_service( 'compute', 'fake-compute-host').service_ref resp = self.controller.update(self.req, service.uuid, body={'status': 'disabled', 'disabled_reason': 'maintenance', # Also tests bool_from_string usage 'forced_down': 'yes'}) expected_resp = { 'service': { 'status': 'disabled', 'state': 'up', 'binary': 'nova-compute', 'host': 'fake-compute-host', 'zone': 'nova', # Comes from CONF.default_availability_zone 'updated_at': mock.sentinel.updated_time, 'disabled_reason': 'maintenance', 'id': service.uuid, 'forced_down': True } } self.assertDictEqual(expected_resp, resp) def test_update_forced_down_invalid_value(self): """Tests that passing an invalid value for forced_down results in a validation error. """ service = self.start_service( 'compute', 'fake-compute-host').service_ref self.assertRaises(exception.ValidationError, self.controller.update, self.req, service.uuid, body={'status': 'disabled', 'disabled_reason': 'maintenance', 'forced_down': 'invalid'}) def test_update_forced_down_invalid_service(self): """Tests that you can't update a non-nova-compute service.""" service = self.start_service( 'scheduler', 'fake-scheduler-host').service_ref ex = self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update, self.req, service.uuid, body={'forced_down': True}) self.assertEqual('Updating a nova-scheduler service is not supported. ' 'Only nova-compute services can be updated.', six.text_type(ex)) def test_update_empty_body(self): """Tests that the caller gets a 400 error if they don't request any updates. """ service = self.start_service('compute').service_ref ex = self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update, self.req, service.uuid, body={}) self.assertEqual("No updates were requested. Fields 'status' or " "'forced_down' should be specified.", six.text_type(ex)) def test_update_only_disabled_reason(self): """Tests that the caller gets a 400 error if they only specify disabled_reason but don't also specify status='disabled'. 
""" service = self.start_service('compute').service_ref ex = self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update, self.req, service.uuid, body={'disabled_reason': 'missing status'}) self.assertEqual("No updates were requested. Fields 'status' or " "'forced_down' should be specified.", six.text_type(ex)) class ServicesCellsTestV21(test.TestCase): def setUp(self): super(ServicesCellsTestV21, self).setUp() host_api = compute.cells_api.HostAPI() self._set_up_controller() self.controller.host_api = host_api self.useFixture(utils_fixture.TimeFixture(fake_utcnow())) services_list = [] for service in fake_services_list: service = service.copy() del service['version'] service_obj = objects.Service(**service) service_proxy = cells_utils.ServiceProxy(service_obj, 'cell1') services_list.append(service_proxy) host_api.cells_rpcapi.service_get_all = ( mock.Mock(side_effect=fake_service_get_all(services_list))) def _set_up_controller(self): self.controller = services_v21.ServiceController() def _process_out(self, res_dict): for res in res_dict['services']: res.pop('disabled_reason') def test_services_detail(self): req = fakes.HTTPRequest.blank('/fake/services', use_admin_context=True) res_dict = self.controller.index(req) utc = iso8601.UTC response = {'services': [ {'id': 'cell1@1', 'binary': 'nova-scheduler', 'host': 'cell1@host1', 'zone': 'internal', 'status': 'disabled', 'state': 'up', 'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 2, tzinfo=utc)}, {'id': 'cell1@2', 'binary': 'nova-compute', 'host': 'cell1@host1', 'zone': 'nova', 'status': 'disabled', 'state': 'up', 'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 5, tzinfo=utc)}, {'id': 'cell1@3', 'binary': 'nova-scheduler', 'host': 'cell1@host2', 'zone': 'internal', 'status': 'enabled', 'state': 'down', 'updated_at': datetime.datetime(2012, 9, 19, 6, 55, 34, tzinfo=utc)}, {'id': 'cell1@4', 'binary': 'nova-compute', 'host': 'cell1@host2', 'zone': 'nova', 'status': 'disabled', 'state': 'down', 'updated_at': datetime.datetime(2012, 9, 18, 8, 3, 38, tzinfo=utc)}]} self._process_out(res_dict) self.assertEqual(response, res_dict) class ServicesPolicyEnforcementV21(test.NoDBTestCase): def setUp(self): super(ServicesPolicyEnforcementV21, self).setUp() self.controller = services_v21.ServiceController() self.req = fakes.HTTPRequest.blank('') def test_update_policy_failed(self): rule_name = "os_compute_api:os-services" self.policy.set_rules({rule_name: "project_id:non_fake"}) exc = self.assertRaises( exception.PolicyNotAuthorized, self.controller.update, self.req, fakes.FAKE_UUID, body={'host': 'host1', 'binary': 'nova-compute'}) self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) def test_delete_policy_failed(self): rule_name = "os_compute_api:os-services" self.policy.set_rules({rule_name: "project_id:non_fake"}) exc = self.assertRaises( exception.PolicyNotAuthorized, self.controller.delete, self.req, fakes.FAKE_UUID) self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) def test_index_policy_failed(self): rule_name = "os_compute_api:os-services" self.policy.set_rules({rule_name: "project_id:non_fake"}) exc = self.assertRaises( exception.PolicyNotAuthorized, self.controller.index, self.req) self.assertEqual( "Policy doesn't allow %s to be performed." 
% rule_name, exc.format_message()) nova-17.0.1/nova/tests/unit/api/openstack/compute/test_flavor_access.py0000666000175000017500000004347213250073126026323 0ustar zuulzuul00000000000000# Copyright 2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import datetime import mock from webob import exc from nova.api.openstack import api_version_request as api_version from nova.api.openstack.compute import flavor_access \ as flavor_access_v21 from nova.api.openstack.compute import flavors as flavors_api from nova import context from nova import exception from nova import test from nova.tests.unit.api.openstack import fakes def generate_flavor(flavorid, ispublic): return { 'id': flavorid, 'flavorid': str(flavorid), 'root_gb': 1, 'ephemeral_gb': 1, 'name': u'test', 'created_at': datetime.datetime(2012, 1, 1, 1, 1, 1, 1), 'updated_at': None, 'memory_mb': 512, 'vcpus': 1, 'swap': 512, 'rxtx_factor': 1.0, 'disabled': False, 'extra_specs': {}, 'vcpu_weight': None, 'is_public': bool(ispublic), 'description': None } INSTANCE_TYPES = { '0': generate_flavor(0, True), '1': generate_flavor(1, True), '2': generate_flavor(2, False), '3': generate_flavor(3, False)} ACCESS_LIST = [{'flavor_id': '2', 'project_id': 'proj2'}, {'flavor_id': '2', 'project_id': 'proj3'}, {'flavor_id': '3', 'project_id': 'proj3'}] def fake_get_flavor_access_by_flavor_id(context, flavorid): res = [] for access in ACCESS_LIST: if access['flavor_id'] == flavorid: res.append(access['project_id']) return res def fake_get_flavor_by_flavor_id(context, flavorid): return INSTANCE_TYPES[flavorid] def _has_flavor_access(flavorid, projectid): for access in ACCESS_LIST: if access['flavor_id'] == flavorid and \ access['project_id'] == projectid: return True return False def fake_get_all_flavors_sorted_list(context, inactive=False, filters=None, sort_key='flavorid', sort_dir='asc', limit=None, marker=None): if filters is None or filters['is_public'] is None: return sorted(INSTANCE_TYPES.values(), key=lambda item: item[sort_key]) res = {} for k, v in INSTANCE_TYPES.items(): if filters['is_public'] and _has_flavor_access(k, context.project_id): res.update({k: v}) continue if v['is_public'] == filters['is_public']: res.update({k: v}) res = sorted(res.values(), key=lambda item: item[sort_key]) return res class FakeRequest(object): environ = {"nova.context": context.get_admin_context()} api_version_request = api_version.APIVersionRequest("2.1") def get_db_flavor(self, flavor_id): return INSTANCE_TYPES[flavor_id] def is_legacy_v2(self): return False class FakeResponse(object): obj = {'flavor': {'id': '0'}, 'flavors': [ {'id': '0'}, {'id': '2'}] } def attach(self, **kwargs): pass def fake_get_flavor_projects_from_db(context, flavorid): raise exception.FlavorNotFound(flavor_id=flavorid) class FlavorAccessTestV21(test.NoDBTestCase): api_version = "2.1" FlavorAccessController = flavor_access_v21.FlavorAccessController FlavorActionController = flavor_access_v21.FlavorActionController _prefix = "/v2/fake" validation_ex = 
exception.ValidationError def setUp(self): super(FlavorAccessTestV21, self).setUp() self.flavor_controller = flavors_api.FlavorsController() # We need to stub out verify_project_id so that it doesn't # generate an EndpointNotFound exception and result in a # server error. self.stub_out('nova.api.openstack.identity.verify_project_id', lambda ctx, project_id: True) self.req = FakeRequest() self.req.environ = {"nova.context": context.RequestContext('fake_user', 'fake')} self.stub_out('nova.objects.Flavor._flavor_get_by_flavor_id_from_db', fake_get_flavor_by_flavor_id) self.stub_out('nova.objects.flavor._flavor_get_all_from_db', fake_get_all_flavors_sorted_list) self.stub_out('nova.objects.flavor._get_projects_from_db', fake_get_flavor_access_by_flavor_id) self.flavor_access_controller = self.FlavorAccessController() self.flavor_action_controller = self.FlavorActionController() def _verify_flavor_list(self, result, expected): # result already sorted by flavor_id self.assertEqual(len(result), len(expected)) for d1, d2 in zip(result, expected): self.assertEqual(d1['id'], d2['id']) @mock.patch('nova.objects.Flavor._flavor_get_by_flavor_id_from_db', side_effect=exception.FlavorNotFound(flavor_id='foo')) def test_list_flavor_access_public(self, mock_api_get): # query os-flavor-access on public flavor should return 404 self.assertRaises(exc.HTTPNotFound, self.flavor_access_controller.index, self.req, '1') def test_list_flavor_access_private(self): expected = {'flavor_access': [ {'flavor_id': '2', 'tenant_id': 'proj2'}, {'flavor_id': '2', 'tenant_id': 'proj3'}]} result = self.flavor_access_controller.index(self.req, '2') self.assertEqual(result, expected) def test_list_flavor_with_admin_default_proj1(self): expected = {'flavors': [{'id': '0'}, {'id': '1'}]} req = fakes.HTTPRequest.blank(self._prefix + '/flavors', use_admin_context=True) req.environ['nova.context'].project_id = 'proj1' result = self.flavor_controller.index(req) self._verify_flavor_list(result['flavors'], expected['flavors']) def test_list_flavor_with_admin_default_proj2(self): expected = {'flavors': [{'id': '0'}, {'id': '1'}, {'id': '2'}]} req = fakes.HTTPRequest.blank(self._prefix + '/flavors', use_admin_context=True) req.environ['nova.context'].project_id = 'proj2' result = self.flavor_controller.index(req) self._verify_flavor_list(result['flavors'], expected['flavors']) def test_list_flavor_with_admin_ispublic_true(self): expected = {'flavors': [{'id': '0'}, {'id': '1'}]} url = self._prefix + '/flavors?is_public=true' req = fakes.HTTPRequest.blank(url, use_admin_context=True) result = self.flavor_controller.index(req) self._verify_flavor_list(result['flavors'], expected['flavors']) def test_list_flavor_with_admin_ispublic_false(self): expected = {'flavors': [{'id': '2'}, {'id': '3'}]} url = self._prefix + '/flavors?is_public=false' req = fakes.HTTPRequest.blank(url, use_admin_context=True) result = self.flavor_controller.index(req) self._verify_flavor_list(result['flavors'], expected['flavors']) def test_list_flavor_with_admin_ispublic_false_proj2(self): expected = {'flavors': [{'id': '2'}, {'id': '3'}]} url = self._prefix + '/flavors?is_public=false' req = fakes.HTTPRequest.blank(url, use_admin_context=True) req.environ['nova.context'].project_id = 'proj2' result = self.flavor_controller.index(req) self._verify_flavor_list(result['flavors'], expected['flavors']) def test_list_flavor_with_admin_ispublic_none(self): expected = {'flavors': [{'id': '0'}, {'id': '1'}, {'id': '2'}, {'id': '3'}]} url = self._prefix + 
'/flavors?is_public=none' req = fakes.HTTPRequest.blank(url, use_admin_context=True) result = self.flavor_controller.index(req) self._verify_flavor_list(result['flavors'], expected['flavors']) def test_list_flavor_with_no_admin_default(self): expected = {'flavors': [{'id': '0'}, {'id': '1'}]} req = fakes.HTTPRequest.blank(self._prefix + '/flavors', use_admin_context=False) result = self.flavor_controller.index(req) self._verify_flavor_list(result['flavors'], expected['flavors']) def test_list_flavor_with_no_admin_ispublic_true(self): expected = {'flavors': [{'id': '0'}, {'id': '1'}]} url = self._prefix + '/flavors?is_public=true' req = fakes.HTTPRequest.blank(url, use_admin_context=False) result = self.flavor_controller.index(req) self._verify_flavor_list(result['flavors'], expected['flavors']) def test_list_flavor_with_no_admin_ispublic_false(self): expected = {'flavors': [{'id': '0'}, {'id': '1'}]} url = self._prefix + '/flavors?is_public=false' req = fakes.HTTPRequest.blank(url, use_admin_context=False) result = self.flavor_controller.index(req) self._verify_flavor_list(result['flavors'], expected['flavors']) def test_list_flavor_with_no_admin_ispublic_none(self): expected = {'flavors': [{'id': '0'}, {'id': '1'}]} url = self._prefix + '/flavors?is_public=none' req = fakes.HTTPRequest.blank(url, use_admin_context=False) result = self.flavor_controller.index(req) self._verify_flavor_list(result['flavors'], expected['flavors']) def test_add_tenant_access(self): def stub_add_flavor_access(context, flavor_id, projectid): self.assertEqual(3, flavor_id, "flavor_id") self.assertEqual("proj2", projectid, "projectid") self.stub_out('nova.objects.Flavor._flavor_add_project', stub_add_flavor_access) expected = {'flavor_access': [{'flavor_id': '3', 'tenant_id': 'proj3'}]} body = {'addTenantAccess': {'tenant': 'proj2'}} req = fakes.HTTPRequest.blank(self._prefix + '/flavors/2/action', use_admin_context=True) result = self.flavor_action_controller._add_tenant_access( req, '3', body=body) self.assertEqual(result, expected) @mock.patch('nova.objects.Flavor.get_by_flavor_id', side_effect=exception.FlavorNotFound(flavor_id='1')) def test_add_tenant_access_with_flavor_not_found(self, mock_get): body = {'addTenantAccess': {'tenant': 'proj2'}} req = fakes.HTTPRequest.blank(self._prefix + '/flavors/2/action', use_admin_context=True) self.assertRaises(exc.HTTPNotFound, self.flavor_action_controller._add_tenant_access, req, '2', body=body) def test_add_tenant_access_with_no_tenant(self): req = fakes.HTTPRequest.blank(self._prefix + '/flavors/2/action', use_admin_context=True) body = {'addTenantAccess': {'foo': 'proj2'}} self.assertRaises(self.validation_ex, self.flavor_action_controller._add_tenant_access, req, '2', body=body) body = {'addTenantAccess': {'tenant': ''}} self.assertRaises(self.validation_ex, self.flavor_action_controller._add_tenant_access, req, '2', body=body) def test_add_tenant_access_with_already_added_access(self): def stub_add_flavor_access(context, flavorid, projectid): raise exception.FlavorAccessExists(flavor_id=flavorid, project_id=projectid) self.stub_out('nova.objects.Flavor._flavor_add_project', stub_add_flavor_access) body = {'addTenantAccess': {'tenant': 'proj2'}} self.assertRaises(exc.HTTPConflict, self.flavor_action_controller._add_tenant_access, self.req, '3', body=body) def test_remove_tenant_access_with_bad_access(self): def stub_remove_flavor_access(context, flavorid, projectid): raise exception.FlavorAccessNotFound(flavor_id=flavorid, project_id=projectid) 
self.stub_out('nova.objects.Flavor._flavor_del_project', stub_remove_flavor_access) body = {'removeTenantAccess': {'tenant': 'proj2'}} self.assertRaises(exc.HTTPNotFound, self.flavor_action_controller._remove_tenant_access, self.req, '3', body=body) def test_add_tenant_access_is_public(self): body = {'addTenantAccess': {'tenant': 'proj2'}} req = fakes.HTTPRequest.blank(self._prefix + '/flavors/2/action', use_admin_context=True) req.api_version_request = api_version.APIVersionRequest('2.7') self.assertRaises(exc.HTTPConflict, self.flavor_action_controller._add_tenant_access, req, '1', body=body) @mock.patch('nova.objects.Flavor._flavor_get_by_flavor_id_from_db', side_effect=exception.FlavorNotFound(flavor_id='foo')) def test_delete_tenant_access_with_no_tenant(self, mock_api_get): req = fakes.HTTPRequest.blank(self._prefix + '/flavors/2/action', use_admin_context=True) body = {'removeTenantAccess': {'foo': 'proj2'}} self.assertRaises(self.validation_ex, self.flavor_action_controller._remove_tenant_access, req, '2', body=body) body = {'removeTenantAccess': {'tenant': ''}} self.assertRaises(self.validation_ex, self.flavor_action_controller._remove_tenant_access, req, '2', body=body) @mock.patch('nova.api.openstack.identity.verify_project_id', side_effect=exc.HTTPBadRequest( explanation="Project ID proj2 is not a valid project.")) def test_add_tenant_access_with_invalid_tenant(self, mock_verify): """Tests the case that the tenant does not exist in Keystone.""" req = fakes.HTTPRequest.blank(self._prefix + '/flavors/2/action', use_admin_context=True) body = {'addTenantAccess': {'tenant': 'proj2'}} self.assertRaises(exc.HTTPBadRequest, self.flavor_action_controller._add_tenant_access, req, '2', body=body) mock_verify.assert_called_once_with( req.environ['nova.context'], 'proj2') @mock.patch('nova.api.openstack.identity.verify_project_id', side_effect=exc.HTTPBadRequest( explanation="Project ID proj2 is not a valid project.")) def test_remove_tenant_access_with_invalid_tenant(self, mock_verify): """Tests the case that the tenant does not exist in Keystone.""" req = fakes.HTTPRequest.blank(self._prefix + '/flavors/2/action', use_admin_context=True) body = {'removeTenantAccess': {'tenant': 'proj2'}} self.assertRaises(exc.HTTPBadRequest, self.flavor_action_controller._remove_tenant_access, req, '2', body=body) mock_verify.assert_called_once_with( req.environ['nova.context'], 'proj2') class FlavorAccessPolicyEnforcementV21(test.NoDBTestCase): def setUp(self): super(FlavorAccessPolicyEnforcementV21, self).setUp() self.act_controller = flavor_access_v21.FlavorActionController() self.access_controller = flavor_access_v21.FlavorAccessController() self.req = fakes.HTTPRequest.blank('') def test_add_tenant_access_policy_failed(self): rule_name = "os_compute_api:os-flavor-access:add_tenant_access" self.policy.set_rules({rule_name: "project:non_fake"}) exc = self.assertRaises( exception.PolicyNotAuthorized, self.act_controller._add_tenant_access, self.req, fakes.FAKE_UUID, body={'addTenantAccess': {'tenant': fakes.FAKE_UUID}}) self.assertEqual( "Policy doesn't allow %s to be performed." 
% rule_name, exc.format_message()) def test_remove_tenant_access_policy_failed(self): rule_name = ("os_compute_api:os-flavor-access:" "remove_tenant_access") self.policy.set_rules({rule_name: "project:non_fake"}) exc = self.assertRaises( exception.PolicyNotAuthorized, self.act_controller._remove_tenant_access, self.req, fakes.FAKE_UUID, body={'removeTenantAccess': {'tenant': fakes.FAKE_UUID}}) self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) def test_index_policy_failed(self): rule_name = "os_compute_api:os-flavor-access" self.policy.set_rules({rule_name: "project:non_fake"}) exc = self.assertRaises( exception.PolicyNotAuthorized, self.access_controller.index, self.req, fakes.FAKE_UUID) self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) nova-17.0.1/nova/tests/unit/api/openstack/compute/test_server_external_events.py0000666000175000017500000002146213250073126030300 0ustar zuulzuul00000000000000# Copyright 2014 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock import webob from nova.api.openstack.compute import server_external_events \ as server_external_events_v21 from nova import exception from nova import objects from nova.objects import instance as instance_obj from nova import test from nova.tests.unit.api.openstack import fakes from nova.tests import uuidsentinel as uuids fake_instances = { '00000000-0000-0000-0000-000000000001': objects.Instance(id=1, uuid='00000000-0000-0000-0000-000000000001', host='host1'), '00000000-0000-0000-0000-000000000002': objects.Instance(id=2, uuid='00000000-0000-0000-0000-000000000002', host='host1'), '00000000-0000-0000-0000-000000000003': objects.Instance(id=3, uuid='00000000-0000-0000-0000-000000000003', host='host2'), '00000000-0000-0000-0000-000000000004': objects.Instance(id=4, uuid='00000000-0000-0000-0000-000000000004', host=None), } fake_instance_uuids = sorted(fake_instances.keys()) MISSING_UUID = '00000000-0000-0000-0000-000000000005' fake_cells = [objects.CellMapping(uuid=uuids.cell1, database_connection="db1"), objects.CellMapping(uuid=uuids.cell2, database_connection="db2")] fake_instance_mappings = [ objects.InstanceMapping(cell_mapping=fake_cells[instance.id % 2], instance_uuid=instance.uuid) for instance in fake_instances.values()] @classmethod def fake_get_by_filters(cls, context, filters, expected_attrs=None): if expected_attrs: # This is a regression check for bug 1645479. 
expected_attrs_set = set(expected_attrs) full_expected_attrs_set = set(instance_obj.INSTANCE_OPTIONAL_ATTRS) assert expected_attrs_set.issubset(full_expected_attrs_set), \ ('%s is not a subset of %s' % (expected_attrs_set, full_expected_attrs_set)) l = objects.InstanceList(objects=[ inst for inst in fake_instances.values() if inst.uuid in filters['uuid']]) return l @classmethod def fake_get_by_instance_uuids(cls, context, uuids): mappings = [im for im in fake_instance_mappings if im.instance_uuid in uuids] return objects.InstanceMappingList(objects=mappings) @mock.patch('nova.objects.InstanceMappingList.get_by_instance_uuids', fake_get_by_instance_uuids) @mock.patch('nova.objects.InstanceList.get_by_filters', fake_get_by_filters) class ServerExternalEventsTestV21(test.NoDBTestCase): server_external_events = server_external_events_v21 invalid_error = exception.ValidationError wsgi_api_version = '2.1' def setUp(self): super(ServerExternalEventsTestV21, self).setUp() self.api = \ self.server_external_events.ServerExternalEventsController() self.event_1 = {'name': 'network-vif-plugged', 'tag': 'foo', 'server_uuid': fake_instance_uuids[0], 'status': 'completed'} self.event_2 = {'name': 'network-changed', 'server_uuid': fake_instance_uuids[1]} self.default_body = {'events': [self.event_1, self.event_2]} self.resp_event_1 = dict(self.event_1) self.resp_event_1['code'] = 200 self.resp_event_2 = dict(self.event_2) self.resp_event_2['code'] = 200 self.resp_event_2['status'] = 'completed' self.default_resp_body = {'events': [self.resp_event_1, self.resp_event_2]} self.req = fakes.HTTPRequest.blank('', use_admin_context=True, version=self.wsgi_api_version) def _assert_call(self, body, expected_uuids, expected_events): with mock.patch.object(self.api.compute_api, 'external_instance_event') as api_method: response = self.api.create(self.req, body=body) result = response.obj code = response._code self.assertEqual(1, api_method.call_count) call = api_method.call_args_list[0] args = call[0] call_instances = args[1] call_events = args[2] self.assertEqual(set(expected_uuids), set([instance.uuid for instance in call_instances])) self.assertEqual(len(expected_uuids), len(call_instances)) self.assertEqual(set(expected_events), set([event.name for event in call_events])) self.assertEqual(len(expected_events), len(call_events)) return result, code def test_create(self): result, code = self._assert_call(self.default_body, fake_instance_uuids[:2], ['network-vif-plugged', 'network-changed']) self.assertEqual(self.default_resp_body, result) self.assertEqual(200, code) def test_create_one_bad_instance(self): body = self.default_body body['events'][1]['server_uuid'] = MISSING_UUID result, code = self._assert_call(body, [fake_instance_uuids[0]], ['network-vif-plugged']) self.assertEqual('failed', result['events'][1]['status']) self.assertEqual(200, result['events'][0]['code']) self.assertEqual(404, result['events'][1]['code']) self.assertEqual(207, code) def test_create_event_instance_has_no_host(self): body = self.default_body body['events'][0]['server_uuid'] = fake_instance_uuids[-1] # the instance without host should not be passed to the compute layer result, code = self._assert_call(body, [fake_instance_uuids[1]], ['network-changed']) self.assertEqual(422, result['events'][0]['code']) self.assertEqual('failed', result['events'][0]['status']) self.assertEqual(200, result['events'][1]['code']) self.assertEqual(207, code) def test_create_no_good_instances(self): body = self.default_body body['events'][0]['server_uuid'] 
= MISSING_UUID body['events'][1]['server_uuid'] = MISSING_UUID self.assertRaises(webob.exc.HTTPNotFound, self.api.create, self.req, body=body) def test_create_bad_status(self): body = self.default_body body['events'][1]['status'] = 'foo' self.assertRaises(self.invalid_error, self.api.create, self.req, body=body) def test_create_extra_gorp(self): body = self.default_body body['events'][0]['foobar'] = 'bad stuff' self.assertRaises(self.invalid_error, self.api.create, self.req, body=body) def test_create_bad_events(self): body = {'events': 'foo'} self.assertRaises(self.invalid_error, self.api.create, self.req, body=body) def test_create_bad_body(self): body = {'foo': 'bar'} self.assertRaises(self.invalid_error, self.api.create, self.req, body=body) def test_create_unknown_events(self): self.event_1['name'] = 'unknown_event' body = {'events': self.event_1} self.assertRaises(self.invalid_error, self.api.create, self.req, body=body) @mock.patch('nova.objects.InstanceMappingList.get_by_instance_uuids', fake_get_by_instance_uuids) @mock.patch('nova.objects.InstanceList.get_by_filters', fake_get_by_filters) class ServerExternalEventsTestV251(ServerExternalEventsTestV21): wsgi_api_version = '2.51' def test_create_with_missing_tag(self): body = self.default_body body['events'][1]['name'] = 'volume-extended' result, code = self._assert_call(body, [fake_instance_uuids[0]], ['network-vif-plugged']) self.assertEqual(200, result['events'][0]['code']) self.assertEqual('completed', result['events'][0]['status']) self.assertEqual(400, result['events'][1]['code']) self.assertEqual('failed', result['events'][1]['status']) self.assertEqual(207, code) nova-17.0.1/nova/tests/unit/api/openstack/compute/test_auth.py0000666000175000017500000000621313250073126024442 0ustar zuulzuul00000000000000# Copyright 2013 IBM Corp. # Copyright 2010 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License.
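# NOTE: illustrative sketch, not part of the original module. The tests below
# wrap the v2.1 API router in NoAuthMiddleware, which authenticates purely
# from X-Auth-* request headers. The same wiring in isolation, assuming the
# nova.api.openstack modules this module itself imports are available:
def _example_build_noauth_app():
    """Build a v2.1 WSGI app behind NoAuthMiddleware, mirroring setUp()."""
    from nova.api import openstack as openstack_api
    from nova.api.openstack import auth
    from nova.api.openstack import compute
    from nova.api.openstack import urlmap
    app = urlmap.URLMap()
    # FaultWrapper converts unhandled errors to API faults; NoAuthMiddleware
    # trusts the X-Auth-User/X-Auth-Key/X-Auth-Project-Id headers directly.
    app['/v2.1'] = openstack_api.FaultWrapper(
        auth.NoAuthMiddleware(compute.APIRouterV21()))
    return app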
import testscenarios from nova.api import openstack as openstack_api from nova.api.openstack import auth from nova.api.openstack import compute from nova.api.openstack import urlmap from nova import test from nova.tests.unit.api.openstack import fakes class TestNoAuthMiddleware(testscenarios.WithScenarios, test.NoDBTestCase): scenarios = [ ('project_id', { 'expected_url': 'http://localhost/v2.1/user1_project', 'auth_middleware': auth.NoAuthMiddleware}), ('no_project_id', { 'expected_url': 'http://localhost/v2.1', 'auth_middleware': auth.NoAuthMiddlewareV2_18}), ] def setUp(self): super(TestNoAuthMiddleware, self).setUp() fakes.stub_out_networking(self) api_v21 = openstack_api.FaultWrapper( self.auth_middleware( compute.APIRouterV21() ) ) self.wsgi_app = urlmap.URLMap() self.wsgi_app['/v2.1'] = api_v21 self.req_url = '/v2.1' def test_authorize_user(self): req = fakes.HTTPRequest.blank(self.req_url, base_url='') req.headers['X-Auth-User'] = 'user1' req.headers['X-Auth-Key'] = 'user1_key' req.headers['X-Auth-Project-Id'] = 'user1_project' result = req.get_response(self.wsgi_app) self.assertEqual(result.status, '204 No Content') self.assertEqual(result.headers['X-Server-Management-Url'], self.expected_url) def test_authorize_user_trailing_slash(self): # make sure it works with trailing slash on the request self.req_url = self.req_url + '/' req = fakes.HTTPRequest.blank(self.req_url, base_url='') req.headers['X-Auth-User'] = 'user1' req.headers['X-Auth-Key'] = 'user1_key' req.headers['X-Auth-Project-Id'] = 'user1_project' result = req.get_response(self.wsgi_app) self.assertEqual(result.status, '204 No Content') self.assertEqual(result.headers['X-Server-Management-Url'], self.expected_url) def test_auth_token_no_empty_headers(self): req = fakes.HTTPRequest.blank(self.req_url, base_url='') req.headers['X-Auth-User'] = 'user1' req.headers['X-Auth-Key'] = 'user1_key' req.headers['X-Auth-Project-Id'] = 'user1_project' result = req.get_response(self.wsgi_app) self.assertEqual(result.status, '204 No Content') self.assertNotIn('X-CDN-Management-Url', result.headers) self.assertNotIn('X-Storage-Url', result.headers) nova-17.0.1/nova/tests/unit/api/openstack/compute/test_image_size.py0000666000175000017500000000656513250073126025627 0ustar zuulzuul00000000000000# Copyright 2013 Rackspace Hosting # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
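# NOTE: illustrative helper, not part of the original module. The tests below
# verify that the image-size extension reports an image's byte size under a
# namespaced response key, '<prefix>:size' with prefix 'OS-EXT-IMG-SIZE'
# (see assertImageSize below); this mirrors that lookup.
def _example_image_size(image_dict, prefix='OS-EXT-IMG-SIZE'):
    """Return the extension-reported size from an API image dict, or None."""
    return image_dict.get('%s:size' % prefix)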
from oslo_serialization import jsonutils from nova import test from nova.tests.unit.api.openstack import fakes NOW_API_FORMAT = "2010-10-11T10:30:22Z" IMAGES = [{ 'id': '123', 'name': 'public image', 'metadata': {'key1': 'value1'}, 'updated': NOW_API_FORMAT, 'created': NOW_API_FORMAT, 'status': 'ACTIVE', 'progress': 100, 'minDisk': 10, 'minRam': 128, 'size': 12345678, "links": [{ "rel": "self", "href": "http://localhost/v2/fake/images/123", }, { "rel": "bookmark", "href": "http://localhost/fake/images/123", }], }, { 'id': '124', 'name': 'queued snapshot', 'updated': NOW_API_FORMAT, 'created': NOW_API_FORMAT, 'status': 'SAVING', 'progress': 25, 'minDisk': 0, 'minRam': 0, 'size': 87654321, "links": [{ "rel": "self", "href": "http://localhost/v2/fake/images/124", }, { "rel": "bookmark", "href": "http://localhost/fake/images/124", }], }] def fake_show(*args, **kwargs): return IMAGES[0] def fake_detail(*args, **kwargs): return IMAGES class ImageSizeTestV21(test.NoDBTestCase): content_type = 'application/json' prefix = 'OS-EXT-IMG-SIZE' def setUp(self): super(ImageSizeTestV21, self).setUp() self.stub_out('nova.image.glance.GlanceImageServiceV2.show', fake_show) self.stub_out('nova.image.glance.GlanceImageServiceV2.detail', fake_detail) self.flags(api_servers=['http://localhost:9292'], group='glance') def _make_request(self, url): req = fakes.HTTPRequest.blank(url) req.headers['Accept'] = self.content_type res = req.get_response(self._get_app()) return res def _get_app(self): return fakes.wsgi_app_v21() def _get_image(self, body): return jsonutils.loads(body).get('image') def _get_images(self, body): return jsonutils.loads(body).get('images') def assertImageSize(self, image, size): self.assertEqual(size, image.get('%s:size' % self.prefix)) def test_show(self): url = '/v2/fake/images/1' res = self._make_request(url) self.assertEqual(200, res.status_int) image = self._get_image(res.body) self.assertImageSize(image, 12345678) def test_detail(self): url = '/v2/fake/images/detail' res = self._make_request(url) self.assertEqual(200, res.status_int) images = self._get_images(res.body) self.assertImageSize(images[0], 12345678) self.assertImageSize(images[1], 87654321) nova-17.0.1/nova/tests/unit/api/openstack/compute/test_scheduler_hints.py0000666000175000017500000001070213250073126026662 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
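# NOTE: illustrative constant, not part of the original module. The tests
# below build boot request bodies by hand; the contract they exercise is that
# scheduler hints travel as a sibling of 'server' in the request body, under
# the 'os:scheduler_hints' key (all values reused from the test cases below):
EXAMPLE_BOOT_BODY_WITH_HINTS = {
    'server': {
        'name': 'server_test',
        'imageRef': 'cedef40a-ed67-4d10-800e-17455edce175',
        'flavorRef': '1',
    },
    # A 'different_host' hint asks the scheduler to avoid the hosts of the
    # listed servers; 'group' (a server group UUID) is the other common hint.
    'os:scheduler_hints': {
        'different_host': '9c47bf55-e9d8-42da-94ab-7f9e80cd1857',
    },
}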
from oslo_config import cfg from oslo_serialization import jsonutils from nova.api.openstack import compute from nova import test from nova.tests.unit.api.openstack import fakes UUID = fakes.FAKE_UUID CONF = cfg.CONF class SchedulerHintsTestCaseV21(test.TestCase): def setUp(self): super(SchedulerHintsTestCaseV21, self).setUp() self.fake_instance = fakes.stub_instance_obj(None, id=1, uuid=UUID) self._set_up_router() def _set_up_router(self): self.app = compute.APIRouterV21() def _get_request(self): return fakes.HTTPRequest.blank('/fake/servers') def test_create_server_without_hints(self): def fake_create(*args, **kwargs): self.assertEqual(kwargs['scheduler_hints'], {}) return ([self.fake_instance], '') self.stub_out('nova.compute.api.API.create', fake_create) req = self._get_request() req.method = 'POST' req.content_type = 'application/json' body = {'server': { 'name': 'server_test', 'imageRef': 'cedef40a-ed67-4d10-800e-17455edce175', 'flavorRef': '1', }} req.body = jsonutils.dump_as_bytes(body) res = req.get_response(self.app) self.assertEqual(202, res.status_int) def _test_create_server_with_hint(self, hint): def fake_create(*args, **kwargs): self.assertEqual(kwargs['scheduler_hints'], hint) return ([self.fake_instance], '') self.stub_out('nova.compute.api.API.create', fake_create) req = self._get_request() req.method = 'POST' req.content_type = 'application/json' body = { 'server': { 'name': 'server_test', 'imageRef': 'cedef40a-ed67-4d10-800e-17455edce175', 'flavorRef': '1', }, 'os:scheduler_hints': hint, } req.body = jsonutils.dump_as_bytes(body) res = req.get_response(self.app) self.assertEqual(202, res.status_int) def test_create_server_with_group_hint(self): self._test_create_server_with_hint({'group': UUID}) def test_create_server_with_non_uuid_group_hint(self): self._create_server_with_scheduler_hints_bad_request( {'group': 'non-uuid'}) def test_create_server_with_different_host_hint(self): self._test_create_server_with_hint( {'different_host': '9c47bf55-e9d8-42da-94ab-7f9e80cd1857'}) self._test_create_server_with_hint( {'different_host': ['9c47bf55-e9d8-42da-94ab-7f9e80cd1857', '82412fa6-0365-43a9-95e4-d8b20e00c0de']}) def _create_server_with_scheduler_hints_bad_request(self, param): req = self._get_request() req.method = 'POST' req.content_type = 'application/json' body = { 'server': { 'name': 'server_test', 'imageRef': 'cedef40a-ed67-4d10-800e-17455edce175', 'flavorRef': '1', }, 'os:scheduler_hints': param, } req.body = jsonutils.dump_as_bytes(body) res = req.get_response(self.app) self.assertEqual(400, res.status_int) def test_create_server_bad_hints_non_dict(self): self._create_server_with_scheduler_hints_bad_request('non-dict') def test_create_server_bad_hints_long_group(self): param = {'group': 'a' * 256} self._create_server_with_scheduler_hints_bad_request(param) def test_create_server_with_bad_different_host_hint(self): param = {'different_host': 'non-server-id'} self._create_server_with_scheduler_hints_bad_request(param) param = {'different_host': ['non-server-id01', 'non-server-id02']} self._create_server_with_scheduler_hints_bad_request(param) nova-17.0.1/nova/tests/unit/api/openstack/compute/test_extension_info.py0000666000175000017500000000434013250073126026527 0ustar zuulzuul00000000000000# Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock import webob from nova.api.openstack.compute import extension_info from nova import exception from nova import policy from nova import test from nova.tests.unit.api.openstack import fakes class ExtensionInfoV21Test(test.NoDBTestCase): def setUp(self): super(ExtensionInfoV21Test, self).setUp() self.controller = extension_info.ExtensionInfoController() patcher = mock.patch.object(policy, 'authorize', return_value=True) patcher.start() self.addCleanup(patcher.stop) def test_extension_info_show_servers_not_present(self): req = fakes.HTTPRequest.blank('/extensions/servers') self.assertRaises(webob.exc.HTTPNotFound, self.controller.show, req, 'servers') class ExtensionInfoPolicyEnforcementV21(test.NoDBTestCase): def setUp(self): super(ExtensionInfoPolicyEnforcementV21, self).setUp() self.controller = extension_info.ExtensionInfoController() self.req = fakes.HTTPRequest.blank('') def _test_extension_policy_failed(self, action, *args): rule_name = "os_compute_api:extensions" self.policy.set_rules({rule_name: "project:non_fake"}) exc = self.assertRaises( exception.PolicyNotAuthorized, getattr(self.controller, action), self.req, *args) self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) def test_extension_index_policy_failed(self): self._test_extension_policy_failed('index') def test_extension_show_policy_failed(self): self._test_extension_policy_failed('show', 1) nova-17.0.1/nova/tests/unit/api/openstack/compute/test_pause_server.py0000666000175000017500000001363213250073126026207 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
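# NOTE: illustrative helper, not part of the original module. The policy
# tests below all follow one pattern: override a single policy rule through
# the test policy fixture, then assert the controller action is rejected.
# A condensed form of that pattern; 'testcase' is assumed to be a test case
# with the same 'self.policy' fixture used below, and 'exception' resolves
# lazily to this module's 'from nova import exception' import at call time:
def _example_assert_policy_blocked(testcase, rule_name, rule, action,
                                   *args, **kwargs):
    """Override one policy rule, then assert PolicyNotAuthorized is raised."""
    testcase.policy.set_rules({rule_name: rule})
    return testcase.assertRaises(
        exception.PolicyNotAuthorized, action, *args, **kwargs)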
import mock from nova.api.openstack.compute import pause_server as \ pause_server_v21 from nova import exception from nova import test from nova.tests.unit.api.openstack.compute import admin_only_action_common from nova.tests.unit.api.openstack import fakes from nova.tests.unit import fake_instance class PauseServerTestsV21(admin_only_action_common.CommonTests): pause_server = pause_server_v21 controller_name = 'PauseServerController' _api_version = '2.1' def setUp(self): super(PauseServerTestsV21, self).setUp() self.controller = getattr(self.pause_server, self.controller_name)() self.compute_api = self.controller.compute_api def _fake_controller(*args, **kwargs): return self.controller self.stubs.Set(self.pause_server, self.controller_name, _fake_controller) self.mox.StubOutWithMock(self.compute_api, 'get') def test_pause_unpause(self): self._test_actions(['_pause', '_unpause']) def test_actions_raise_on_not_implemented(self): for action in ['_pause', '_unpause']: self.mox.StubOutWithMock(self.compute_api, action.replace('_', '')) self._test_not_implemented_state(action) # Re-mock this. self.mox.StubOutWithMock(self.compute_api, 'get') def test_pause_unpause_with_non_existed_instance(self): self._test_actions_with_non_existed_instance(['_pause', '_unpause']) def test_pause_unpause_with_non_existed_instance_in_compute_api(self): self._test_actions_instance_not_found_in_compute_api(['_pause', '_unpause']) def test_pause_unpause_raise_conflict_on_invalid_state(self): self._test_actions_raise_conflict_on_invalid_state(['_pause', '_unpause']) def test_actions_with_locked_instance(self): self._test_actions_with_locked_instance(['_pause', '_unpause']) class PauseServerPolicyEnforcementV21(test.NoDBTestCase): def setUp(self): super(PauseServerPolicyEnforcementV21, self).setUp() self.controller = pause_server_v21.PauseServerController() self.req = fakes.HTTPRequest.blank('') @mock.patch('nova.api.openstack.common.get_instance') def test_pause_policy_failed_with_other_project(self, get_instance_mock): get_instance_mock.return_value = fake_instance.fake_instance_obj( self.req.environ['nova.context'], project_id=self.req.environ['nova.context'].project_id) rule_name = "os_compute_api:os-pause-server:pause" self.policy.set_rules({rule_name: "project_id:%(project_id)s"}) # Change the project_id in request context. self.req.environ['nova.context'].project_id = 'other-project' exc = self.assertRaises( exception.PolicyNotAuthorized, self.controller._pause, self.req, fakes.FAKE_UUID, body={'pause': {}}) self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) @mock.patch('nova.api.openstack.common.get_instance') def test_pause_overridden_policy_failed_with_other_user_in_same_project( self, get_instance_mock): get_instance_mock.return_value = ( fake_instance.fake_instance_obj(self.req.environ['nova.context'])) rule_name = "os_compute_api:os-pause-server:pause" self.policy.set_rules({rule_name: "user_id:%(user_id)s"}) # Change the user_id in request context. self.req.environ['nova.context'].user_id = 'other-user' exc = self.assertRaises(exception.PolicyNotAuthorized, self.controller._pause, self.req, fakes.FAKE_UUID, body={'pause': {}}) self.assertEqual( "Policy doesn't allow %s to be performed." 
% rule_name, exc.format_message()) @mock.patch('nova.compute.api.API.pause') @mock.patch('nova.api.openstack.common.get_instance') def test_pause_overridden_policy_pass_with_same_user(self, get_instance_mock, pause_mock): instance = fake_instance.fake_instance_obj( self.req.environ['nova.context'], user_id=self.req.environ['nova.context'].user_id) get_instance_mock.return_value = instance rule_name = "os_compute_api:os-pause-server:pause" self.policy.set_rules({rule_name: "user_id:%(user_id)s"}) self.controller._pause(self.req, fakes.FAKE_UUID, body={'pause': {}}) pause_mock.assert_called_once_with(self.req.environ['nova.context'], instance) def test_unpause_policy_failed(self): rule_name = "os_compute_api:os-pause-server:unpause" self.policy.set_rules({rule_name: "project:non_fake"}) exc = self.assertRaises( exception.PolicyNotAuthorized, self.controller._unpause, self.req, fakes.FAKE_UUID, body={'unpause': {}}) self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) nova-17.0.1/nova/tests/unit/api/openstack/compute/test_keypairs.py0000666000175000017500000007017413250073126025337 0ustar zuulzuul00000000000000# Copyright 2011 Eldar Nugaev # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from oslo_policy import policy as oslo_policy from oslo_serialization import jsonutils import webob from nova.api.openstack.compute import keypairs as keypairs_v21 from nova.api.openstack import wsgi as os_wsgi from nova.compute import api as compute_api from nova import context as nova_context from nova import exception from nova import objects from nova import policy from nova import quota from nova import test from nova.tests.unit.api.openstack import fakes from nova.tests.unit.objects import test_keypair from nova.tests import uuidsentinel as uuids QUOTAS = quota.QUOTAS keypair_data = { 'public_key': 'FAKE_KEY', 'fingerprint': 'FAKE_FINGERPRINT', } FAKE_UUID = 'b48316c5-71e8-45e4-9884-6c78055b9b13' def fake_keypair(name): return dict(test_keypair.fake_keypair, name=name, **keypair_data) def db_key_pair_get_all_by_user(self, user_id, limit, marker): return [fake_keypair('FAKE')] def db_key_pair_create(self, keypair): return fake_keypair(name=keypair['name']) def db_key_pair_destroy(context, user_id, name): if not (user_id and name): raise Exception() def db_key_pair_create_duplicate(context): raise exception.KeyPairExists(key_name='create_duplicate') class KeypairsTestV21(test.TestCase): base_url = '/v2/fake' validation_error = exception.ValidationError wsgi_api_version = os_wsgi.DEFAULT_API_VERSION def _setup_app_and_controller(self): self.app_server = fakes.wsgi_app_v21() self.controller = keypairs_v21.KeypairController() def setUp(self): super(KeypairsTestV21, self).setUp() fakes.stub_out_networking(self) fakes.stub_out_secgroup_api(self) self.stub_out("nova.db.key_pair_get_all_by_user", db_key_pair_get_all_by_user) self.stub_out("nova.db.key_pair_create", db_key_pair_create) self.stub_out("nova.db.key_pair_destroy", db_key_pair_destroy) 
self._setup_app_and_controller() self.req = fakes.HTTPRequest.blank('', version=self.wsgi_api_version) def test_keypair_list(self): res_dict = self.controller.index(self.req) response = {'keypairs': [{'keypair': dict(keypair_data, name='FAKE')}]} self.assertEqual(res_dict, response) def test_keypair_create(self): body = {'keypair': {'name': 'create_test'}} res_dict = self.controller.create(self.req, body=body) self.assertGreater(len(res_dict['keypair']['fingerprint']), 0) self.assertGreater(len(res_dict['keypair']['private_key']), 0) self._assert_keypair_type(res_dict) def _test_keypair_create_bad_request_case(self, body, exception): self.assertRaises(exception, self.controller.create, self.req, body=body) def test_keypair_create_with_empty_name(self): body = {'keypair': {'name': ''}} self._test_keypair_create_bad_request_case(body, self.validation_error) def test_keypair_create_with_name_too_long(self): body = { 'keypair': { 'name': 'a' * 256 } } self._test_keypair_create_bad_request_case(body, self.validation_error) def test_keypair_create_with_name_leading_trailing_spaces(self): body = { 'keypair': { 'name': ' test ' } } self._test_keypair_create_bad_request_case(body, self.validation_error) def test_keypair_create_with_name_leading_trailing_spaces_compat_mode( self): body = {'keypair': {'name': ' test '}} self.req.set_legacy_v2() res_dict = self.controller.create(self.req, body=body) self.assertEqual('test', res_dict['keypair']['name']) def test_keypair_create_with_non_alphanumeric_name(self): body = { 'keypair': { 'name': 'test/keypair' } } self._test_keypair_create_bad_request_case(body, webob.exc.HTTPBadRequest) def test_keypair_import_bad_key(self): body = { 'keypair': { 'name': 'create_test', 'public_key': 'ssh-what negative', }, } self._test_keypair_create_bad_request_case(body, webob.exc.HTTPBadRequest) def test_keypair_create_with_invalid_keypair_body(self): body = {'alpha': {'name': 'create_test'}} self._test_keypair_create_bad_request_case(body, self.validation_error) def test_keypair_import(self): body = { 'keypair': { 'name': 'create_test', 'public_key': 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDBYIznA' 'x9D7118Q1VKGpXy2HDiKyUTM8XcUuhQpo0srqb9rboUp4' 'a9NmCwpWpeElDLuva707GOUnfaBAvHBwsRXyxHJjRaI6Y' 'Qj2oLJwqvaSaWUbyT1vtryRqy6J3TecN0WINY71f4uymi' 'MZP0wby4bKBcYnac8KiCIlvkEl0ETjkOGUq8OyWRmn7lj' 'j5SESEUdBP0JnuTFKddWTU/wD6wydeJaUhBTqOlHn0kX1' 'GyqoNTE1UEhcM5ZRWgfUZfTjVyDF2kGj3vJLCJtJ8LoGc' 'j7YaN4uPg1rBle+izwE/tLonRrds+cev8p6krSSrxWOwB' 'bHkXa6OciiJDvkRzJXzf', }, } res_dict = self.controller.create(self.req, body=body) # FIXME(ja): Should we check that public_key was sent to create? 
self.assertGreater(len(res_dict['keypair']['fingerprint']), 0) self.assertNotIn('private_key', res_dict['keypair']) self._assert_keypair_type(res_dict) @mock.patch('nova.objects.Quotas.check_deltas') def test_keypair_import_quota_limit(self, mock_check): mock_check.side_effect = exception.OverQuota(overs='key_pairs', usages={'key_pairs': 100}) body = { 'keypair': { 'name': 'create_test', 'public_key': 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDBYIznA' 'x9D7118Q1VKGpXy2HDiKyUTM8XcUuhQpo0srqb9rboUp4' 'a9NmCwpWpeElDLuva707GOUnfaBAvHBwsRXyxHJjRaI6Y' 'Qj2oLJwqvaSaWUbyT1vtryRqy6J3TecN0WINY71f4uymi' 'MZP0wby4bKBcYnac8KiCIlvkEl0ETjkOGUq8OyWRmn7lj' 'j5SESEUdBP0JnuTFKddWTU/wD6wydeJaUhBTqOlHn0kX1' 'GyqoNTE1UEhcM5ZRWgfUZfTjVyDF2kGj3vJLCJtJ8LoGc' 'j7YaN4uPg1rBle+izwE/tLonRrds+cev8p6krSSrxWOwB' 'bHkXa6OciiJDvkRzJXzf', }, } ex = self.assertRaises(webob.exc.HTTPForbidden, self.controller.create, self.req, body=body) self.assertIn('Quota exceeded, too many key pairs.', ex.explanation) @mock.patch('nova.objects.Quotas.check_deltas') def test_keypair_create_quota_limit(self, mock_check): mock_check.side_effect = exception.OverQuota(overs='key_pairs', usages={'key_pairs': 100}) body = { 'keypair': { 'name': 'create_test', }, } ex = self.assertRaises(webob.exc.HTTPForbidden, self.controller.create, self.req, body=body) self.assertIn('Quota exceeded, too many key pairs.', ex.explanation) @mock.patch('nova.objects.Quotas.check_deltas') def test_keypair_create_over_quota_during_recheck(self, mock_check): # Simulate a race where the first check passes and the recheck fails. # First check occurs in compute/api. exc = exception.OverQuota(overs='key_pairs', usages={'key_pairs': 100}) mock_check.side_effect = [None, exc] body = { 'keypair': { 'name': 'create_test', }, } self.assertRaises(webob.exc.HTTPForbidden, self.controller.create, self.req, body=body) ctxt = self.req.environ['nova.context'] self.assertEqual(2, mock_check.call_count) call1 = mock.call(ctxt, {'key_pairs': 1}, ctxt.user_id) call2 = mock.call(ctxt, {'key_pairs': 0}, ctxt.user_id) mock_check.assert_has_calls([call1, call2]) # Verify we removed the key pair that was added after the first # quota check passed. key_pairs = objects.KeyPairList.get_by_user(ctxt, ctxt.user_id) names = [key_pair.name for key_pair in key_pairs] self.assertNotIn('create_test', names) @mock.patch('nova.objects.Quotas.check_deltas') def test_keypair_create_no_quota_recheck(self, mock_check): # Disable recheck_quota. self.flags(recheck_quota=False, group='quota') body = { 'keypair': { 'name': 'create_test', }, } self.controller.create(self.req, body=body) ctxt = self.req.environ['nova.context'] # check_deltas should have been called only once. 
mock_check.assert_called_once_with(ctxt, {'key_pairs': 1}, ctxt.user_id) def test_keypair_create_duplicate(self): self.stub_out("nova.objects.KeyPair.create", db_key_pair_create_duplicate) body = {'keypair': {'name': 'create_duplicate'}} ex = self.assertRaises(webob.exc.HTTPConflict, self.controller.create, self.req, body=body) self.assertIn("Key pair 'create_duplicate' already exists.", ex.explanation) @mock.patch('nova.objects.KeyPair.get_by_name') def test_keypair_delete(self, mock_get_by_name): mock_get_by_name.return_value = objects.KeyPair( nova_context.get_admin_context(), **fake_keypair('FAKE')) self.controller.delete(self.req, 'FAKE') def test_keypair_get_keypair_not_found(self): self.assertRaises(webob.exc.HTTPNotFound, self.controller.show, self.req, 'DOESNOTEXIST') def test_keypair_delete_not_found(self): def db_key_pair_get_not_found(context, user_id, name): raise exception.KeypairNotFound(user_id=user_id, name=name) self.stub_out("nova.db.key_pair_destroy", db_key_pair_get_not_found) self.assertRaises(webob.exc.HTTPNotFound, self.controller.delete, self.req, 'FAKE') def test_keypair_show(self): def _db_key_pair_get(context, user_id, name): return dict(test_keypair.fake_keypair, name='foo', public_key='XXX', fingerprint='YYY', type='ssh') self.stub_out("nova.db.key_pair_get", _db_key_pair_get) res_dict = self.controller.show(self.req, 'FAKE') self.assertEqual('foo', res_dict['keypair']['name']) self.assertEqual('XXX', res_dict['keypair']['public_key']) self.assertEqual('YYY', res_dict['keypair']['fingerprint']) self._assert_keypair_type(res_dict) def test_keypair_show_not_found(self): def _db_key_pair_get(context, user_id, name): raise exception.KeypairNotFound(user_id=user_id, name=name) self.stub_out("nova.db.key_pair_get", _db_key_pair_get) self.assertRaises(webob.exc.HTTPNotFound, self.controller.show, self.req, 'FAKE') def test_show_server(self): self.stub_out('nova.db.instance_get', fakes.fake_instance_get()) self.stub_out('nova.db.instance_get_by_uuid', fakes.fake_instance_get()) # NOTE(sdague): because of the way extensions work, we have to # also stub out the Request compute cache with a real compute # object. Delete this once we remove all the gorp of # extensions modifying the server objects. self.stub_out('nova.api.openstack.wsgi.Request.get_db_instance', fakes.fake_compute_get()) req = fakes.HTTPRequest.blank( self.base_url + '/servers/' + uuids.server) req.headers['Content-Type'] = 'application/json' response = req.get_response(self.app_server) self.assertEqual(response.status_int, 200) res_dict = jsonutils.loads(response.body) self.assertIn('key_name', res_dict['server']) self.assertEqual(res_dict['server']['key_name'], '') @mock.patch('nova.compute.api.API.get_all') def test_detail_servers(self, mock_get_all): # NOTE(danms): Orphan these fakes (no context) so that we # are sure that the API is requesting what it needs without # having to lazy-load. 
mock_get_all.return_value = objects.InstanceList( objects=[fakes.stub_instance_obj(ctxt=None, id=1), fakes.stub_instance_obj(ctxt=None, id=2)]) req = fakes.HTTPRequest.blank(self.base_url + '/servers/detail') res = req.get_response(self.app_server) server_dicts = jsonutils.loads(res.body)['servers'] self.assertEqual(len(server_dicts), 2) for server_dict in server_dicts: self.assertIn('key_name', server_dict) self.assertEqual(server_dict['key_name'], '') def _assert_keypair_type(self, res_dict): self.assertNotIn('type', res_dict['keypair']) def test_create_server_keypair_name_with_leading_trailing(self): req = fakes.HTTPRequest.blank(self.base_url + '/servers') req.method = 'POST' req.headers["content-type"] = "application/json" req.body = jsonutils.dump_as_bytes({'server': {'name': 'test', 'flavorRef': 1, 'keypair_name': ' abc ', 'imageRef': FAKE_UUID}}) res = req.get_response(self.app_server) self.assertEqual(400, res.status_code) self.assertIn(b'keypair_name', res.body) @mock.patch.object(compute_api.API, 'create') def test_create_server_keypair_name_with_leading_trailing_compat_mode( self, mock_create): mock_create.return_value = ( objects.InstanceList(objects=[ fakes.stub_instance_obj(ctxt=None, id=1)]), None) req = fakes.HTTPRequest.blank(self.base_url + '/servers') req.method = 'POST' req.headers["content-type"] = "application/json" req.body = jsonutils.dump_as_bytes({'server': {'name': 'test', 'flavorRef': 1, 'keypair_name': ' abc ', 'imageRef': FAKE_UUID}}) req.set_legacy_v2() res = req.get_response(self.app_server) self.assertEqual(202, res.status_code) class KeypairPolicyTestV21(test.NoDBTestCase): KeyPairController = keypairs_v21.KeypairController() policy_path = 'os_compute_api:os-keypairs' def setUp(self): super(KeypairPolicyTestV21, self).setUp() @staticmethod def _db_key_pair_get(context, user_id, name=None): if name is not None: return dict(test_keypair.fake_keypair, name='foo', public_key='XXX', fingerprint='YYY', type='ssh') else: return db_key_pair_get_all_by_user(context, user_id) self.stub_out("nova.objects.keypair.KeyPair._get_from_db", _db_key_pair_get) self.req = fakes.HTTPRequest.blank('') def test_keypair_list_fail_policy(self): rules = {self.policy_path + ':index': 'role:admin'} policy.set_rules(oslo_policy.Rules.from_dict(rules)) self.assertRaises(exception.Forbidden, self.KeyPairController.index, self.req) @mock.patch('nova.objects.KeyPairList.get_by_user') def test_keypair_list_pass_policy(self, mock_get): rules = {self.policy_path + ':index': ''} policy.set_rules(oslo_policy.Rules.from_dict(rules)) res = self.KeyPairController.index(self.req) self.assertIn('keypairs', res) def test_keypair_show_fail_policy(self): rules = {self.policy_path + ':show': 'role:admin'} policy.set_rules(oslo_policy.Rules.from_dict(rules)) self.assertRaises(exception.Forbidden, self.KeyPairController.show, self.req, 'FAKE') def test_keypair_show_pass_policy(self): rules = {self.policy_path + ':show': ''} policy.set_rules(oslo_policy.Rules.from_dict(rules)) res = self.KeyPairController.show(self.req, 'FAKE') self.assertIn('keypair', res) def test_keypair_create_fail_policy(self): body = {'keypair': {'name': 'create_test'}} rules = {self.policy_path + ':create': 'role:admin'} policy.set_rules(oslo_policy.Rules.from_dict(rules)) self.assertRaises(exception.Forbidden, self.KeyPairController.create, self.req, body=body) def _assert_keypair_create(self, mock_create, req): mock_create.assert_called_with(req, 'fake_user', 'create_test', 'ssh') @mock.patch.object(compute_api.KeypairAPI, 
'create_key_pair') def test_keypair_create_pass_policy(self, mock_create): keypair_obj = objects.KeyPair(name='', public_key='', fingerprint='', user_id='') mock_create.return_value = (keypair_obj, 'dummy') body = {'keypair': {'name': 'create_test'}} rules = {self.policy_path + ':create': ''} policy.set_rules(oslo_policy.Rules.from_dict(rules)) res = self.KeyPairController.create(self.req, body=body) self.assertIn('keypair', res) req = self.req.environ['nova.context'] self._assert_keypair_create(mock_create, req) def test_keypair_delete_fail_policy(self): rules = {self.policy_path + ':delete': 'role:admin'} policy.set_rules(oslo_policy.Rules.from_dict(rules)) self.assertRaises(exception.Forbidden, self.KeyPairController.delete, self.req, 'FAKE') @mock.patch('nova.objects.KeyPair.destroy_by_name') def test_keypair_delete_pass_policy(self, mock_destroy): rules = {self.policy_path + ':delete': ''} policy.set_rules(oslo_policy.Rules.from_dict(rules)) self.KeyPairController.delete(self.req, 'FAKE') class KeypairsTestV22(KeypairsTestV21): wsgi_api_version = '2.2' def test_keypair_list(self): res_dict = self.controller.index(self.req) expected = {'keypairs': [{'keypair': dict(keypair_data, name='FAKE', type='ssh')}]} self.assertEqual(expected, res_dict) def _assert_keypair_type(self, res_dict): self.assertEqual('ssh', res_dict['keypair']['type']) def test_keypair_create_with_name_leading_trailing_spaces_compat_mode( self): pass def test_create_server_keypair_name_with_leading_trailing_compat_mode( self): pass class KeypairsTestV210(KeypairsTestV22): wsgi_api_version = '2.10' def test_keypair_create_with_name_leading_trailing_spaces_compat_mode( self): pass def test_create_server_keypair_name_with_leading_trailing_compat_mode( self): pass def test_keypair_list_other_user(self): req = fakes.HTTPRequest.blank(self.base_url + '/os-keypairs?user_id=foo', version=self.wsgi_api_version, use_admin_context=True) with mock.patch.object(self.controller.api, 'get_key_pairs') as mock_g: self.controller.index(req) userid = mock_g.call_args_list[0][0][1] self.assertEqual('foo', userid) def test_keypair_list_other_user_not_admin(self): req = fakes.HTTPRequest.blank(self.base_url + '/os-keypairs?user_id=foo', version=self.wsgi_api_version) with mock.patch.object(self.controller.api, 'get_key_pairs'): self.assertRaises(exception.PolicyNotAuthorized, self.controller.index, req) def test_keypair_show_other_user(self): req = fakes.HTTPRequest.blank(self.base_url + '/os-keypairs/FAKE?user_id=foo', version=self.wsgi_api_version, use_admin_context=True) with mock.patch.object(self.controller.api, 'get_key_pair') as mock_g: self.controller.show(req, 'FAKE') userid = mock_g.call_args_list[0][0][1] self.assertEqual('foo', userid) def test_keypair_show_other_user_not_admin(self): req = fakes.HTTPRequest.blank(self.base_url + '/os-keypairs/FAKE?user_id=foo', version=self.wsgi_api_version) with mock.patch.object(self.controller.api, 'get_key_pair'): self.assertRaises(exception.PolicyNotAuthorized, self.controller.show, req, 'FAKE') def test_keypair_delete_other_user(self): req = fakes.HTTPRequest.blank(self.base_url + '/os-keypairs/FAKE?user_id=foo', version=self.wsgi_api_version, use_admin_context=True) with mock.patch.object(self.controller.api, 'delete_key_pair') as mock_g: self.controller.delete(req, 'FAKE') userid = mock_g.call_args_list[0][0][1] self.assertEqual('foo', userid) def test_keypair_delete_other_user_not_admin(self): req = fakes.HTTPRequest.blank(self.base_url + '/os-keypairs/FAKE?user_id=foo', 
version=self.wsgi_api_version) with mock.patch.object(self.controller.api, 'delete_key_pair'): self.assertRaises(exception.PolicyNotAuthorized, self.controller.delete, req, 'FAKE') def test_keypair_create_other_user(self): req = fakes.HTTPRequest.blank(self.base_url + '/os-keypairs', version=self.wsgi_api_version, use_admin_context=True) body = {'keypair': {'name': 'create_test', 'user_id': '8861f37f-034e-4ca8-8abe-6d13c074574a'}} with mock.patch.object(self.controller.api, 'create_key_pair', return_value=(mock.MagicMock(), 1)) as mock_g: res = self.controller.create(req, body=body) userid = mock_g.call_args_list[0][0][1] self.assertEqual('8861f37f-034e-4ca8-8abe-6d13c074574a', userid) self.assertIn('keypair', res) def test_keypair_import_other_user(self): req = fakes.HTTPRequest.blank(self.base_url + '/os-keypairs', version=self.wsgi_api_version, use_admin_context=True) body = {'keypair': {'name': 'create_test', 'user_id': '8861f37f-034e-4ca8-8abe-6d13c074574a', 'public_key': 'public_key'}} with mock.patch.object(self.controller.api, 'import_key_pair') as mock_g: res = self.controller.create(req, body=body) userid = mock_g.call_args_list[0][0][1] self.assertEqual('8861f37f-034e-4ca8-8abe-6d13c074574a', userid) self.assertIn('keypair', res) def test_keypair_create_other_user_not_admin(self): req = fakes.HTTPRequest.blank(self.base_url + '/os-keypairs', version=self.wsgi_api_version) body = {'keypair': {'name': 'create_test', 'user_id': '8861f37f-034e-4ca8-8abe-6d13c074574a'}} self.assertRaises(exception.PolicyNotAuthorized, self.controller.create, req, body=body) def test_keypair_list_other_user_invalid_in_old_microversion(self): req = fakes.HTTPRequest.blank(self.base_url + '/os-keypairs?user_id=foo', version="2.9", use_admin_context=True) with mock.patch.object(self.controller.api, 'get_key_pairs') as mock_g: self.controller.index(req) userid = mock_g.call_args_list[0][0][1] self.assertEqual('fake_user', userid) class KeypairsTestV235(test.TestCase): base_url = '/v2/fake' wsgi_api_version = '2.35' def _setup_app_and_controller(self): self.app_server = fakes.wsgi_app_v21() self.controller = keypairs_v21.KeypairController() def setUp(self): super(KeypairsTestV235, self).setUp() self._setup_app_and_controller() @mock.patch("nova.db.key_pair_get_all_by_user") def test_keypair_list_limit_and_marker(self, mock_kp_get): mock_kp_get.side_effect = db_key_pair_get_all_by_user req = fakes.HTTPRequest.blank( self.base_url + '/os-keypairs?limit=3&marker=fake_marker', version=self.wsgi_api_version, use_admin_context=True) res_dict = self.controller.index(req) mock_kp_get.assert_called_once_with( req.environ['nova.context'], 'fake_user', limit=3, marker='fake_marker') response = {'keypairs': [{'keypair': dict(keypair_data, name='FAKE', type='ssh')}]} self.assertEqual(res_dict, response) @mock.patch('nova.compute.api.KeypairAPI.get_key_pairs') def test_keypair_list_limit_and_marker_invalid_marker(self, mock_kp_get): mock_kp_get.side_effect = exception.MarkerNotFound(marker='unknown_kp') req = fakes.HTTPRequest.blank( self.base_url + '/os-keypairs?limit=3&marker=unknown_kp', version=self.wsgi_api_version, use_admin_context=True) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.index, req) def test_keypair_list_limit_and_marker_invalid_limit(self): req = fakes.HTTPRequest.blank( self.base_url + '/os-keypairs?limit=abc&marker=fake_marker', version=self.wsgi_api_version, use_admin_context=True) self.assertRaises(exception.ValidationError, self.controller.index, req) 
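    # [Editor's note] Not part of the original suite: a minimal,
    # self-contained sketch of the limit/marker slicing semantics that the
    # microversion 2.35 keypair tests in this class exercise. All names
    # below are hypothetical; nova's real logic lives in the DB API layer.
    @staticmethod
    def _example_paginate(items, limit=None, marker=None):
        """Return at most `limit` items appearing after `marker`."""
        start = 0
        if marker is not None:
            names = [item['name'] for item in items]
            if marker not in names:
                # In the real API this surfaces as MarkerNotFound, which
                # the controller maps to HTTP 400.
                raise ValueError('marker not found: %s' % marker)
            start = names.index(marker) + 1
        end = len(items) if limit is None else start + limit
        return items[start:end]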
@mock.patch("nova.db.key_pair_get_all_by_user") def test_keypair_list_limit_and_marker_invalid_in_old_microversion( self, mock_kp_get): mock_kp_get.side_effect = db_key_pair_get_all_by_user req = fakes.HTTPRequest.blank( self.base_url + '/os-keypairs?limit=3&marker=fake_marker', version="2.30", use_admin_context=True) self.controller.index(req) mock_kp_get.assert_called_once_with( req.environ['nova.context'], 'fake_user', limit=None, marker=None) nova-17.0.1/nova/tests/unit/api/openstack/compute/admin_only_action_common.py0000666000175000017500000002657013250073126027510 0ustar zuulzuul00000000000000# Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_utils import timeutils from oslo_utils import uuidutils import webob from nova.compute import vm_states from nova import exception from nova import test from nova.tests.unit.api.openstack import fakes from nova.tests.unit import fake_instance class CommonMixin(object): def setUp(self): super(CommonMixin, self).setUp() self.compute_api = None self.req = fakes.HTTPRequest.blank('') self.context = self.req.environ['nova.context'] def _stub_instance_get(self, uuid=None): if uuid is None: uuid = uuidutils.generate_uuid() instance = fake_instance.fake_instance_obj(self.context, id=1, uuid=uuid, vm_state=vm_states.ACTIVE, task_state=None, launched_at=timeutils.utcnow()) self.compute_api.get( self.context, uuid, expected_attrs=None).AndReturn(instance) return instance def _stub_instance_get_failure(self, exc_info, uuid=None): if uuid is None: uuid = uuidutils.generate_uuid() self.compute_api.get( self.context, uuid, expected_attrs=None).AndRaise(exc_info) return uuid def _test_non_existing_instance(self, action, body_map=None): uuid = uuidutils.generate_uuid() self._stub_instance_get_failure( exception.InstanceNotFound(instance_id=uuid), uuid=uuid) self.mox.ReplayAll() controller_function = getattr(self.controller, action) self.assertRaises(webob.exc.HTTPNotFound, controller_function, self.req, uuid, body=body_map) # Do these here instead of tearDown because this method is called # more than once for the same test case self.mox.VerifyAll() self.mox.UnsetStubs() def _test_action(self, action, body=None, method=None, compute_api_args_map=None): if method is None: method = action.replace('_', '') compute_api_args_map = compute_api_args_map or {} instance = self._stub_instance_get() args, kwargs = compute_api_args_map.get(action, ((), {})) getattr(self.compute_api, method)(self.context, instance, *args, **kwargs) self.mox.ReplayAll() controller_function = getattr(self.controller, action) res = controller_function(self.req, instance.uuid, body=body) # NOTE: on v2.1, http status code is set as wsgi_code of API # method instead of status_int in a response object. 
if self._api_version == '2.1': status_int = controller_function.wsgi_code else: status_int = res.status_int self.assertEqual(202, status_int) # Do these here instead of tearDown because this method is called # more than once for the same test case self.mox.VerifyAll() self.mox.UnsetStubs() def _test_not_implemented_state(self, action, method=None): if method is None: method = action.replace('_', '') instance = self._stub_instance_get() body = {} compute_api_args_map = {} args, kwargs = compute_api_args_map.get(action, ((), {})) getattr(self.compute_api, method)(self.context, instance, *args, **kwargs).AndRaise( NotImplementedError()) self.mox.ReplayAll() controller_function = getattr(self.controller, action) self.assertRaises(webob.exc.HTTPNotImplemented, controller_function, self.req, instance.uuid, body=body) # Do these here instead of tearDown because this method is called # more than once for the same test case self.mox.VerifyAll() self.mox.UnsetStubs() def _test_invalid_state(self, action, method=None, body_map=None, compute_api_args_map=None, exception_arg=None): if method is None: method = action.replace('_', '') if body_map is None: body_map = {} if compute_api_args_map is None: compute_api_args_map = {} instance = self._stub_instance_get() args, kwargs = compute_api_args_map.get(action, ((), {})) getattr(self.compute_api, method)(self.context, instance, *args, **kwargs).AndRaise( exception.InstanceInvalidState( attr='vm_state', instance_uuid=instance.uuid, state='foo', method=method)) self.mox.ReplayAll() controller_function = getattr(self.controller, action) ex = self.assertRaises(webob.exc.HTTPConflict, controller_function, self.req, instance.uuid, body=body_map) self.assertIn("Cannot \'%(action)s\' instance %(id)s" % {'action': exception_arg or method, 'id': instance.uuid}, ex.explanation) # Do these here instead of tearDown because this method is called # more than once for the same test case self.mox.VerifyAll() self.mox.UnsetStubs() def _test_locked_instance(self, action, method=None, body=None, compute_api_args_map=None): if method is None: method = action.replace('_', '') compute_api_args_map = compute_api_args_map or {} instance = self._stub_instance_get() args, kwargs = compute_api_args_map.get(action, ((), {})) getattr(self.compute_api, method)(self.context, instance, *args, **kwargs).AndRaise( exception.InstanceIsLocked(instance_uuid=instance.uuid)) self.mox.ReplayAll() controller_function = getattr(self.controller, action) self.assertRaises(webob.exc.HTTPConflict, controller_function, self.req, instance.uuid, body=body) # Do these here instead of tearDown because this method is called # more than once for the same test case self.mox.VerifyAll() self.mox.UnsetStubs() def _test_instance_not_found_in_compute_api(self, action, method=None, body=None, compute_api_args_map=None): if method is None: method = action.replace('_', '') compute_api_args_map = compute_api_args_map or {} instance = self._stub_instance_get() args, kwargs = compute_api_args_map.get(action, ((), {})) getattr(self.compute_api, method)(self.context, instance, *args, **kwargs).AndRaise( exception.InstanceNotFound(instance_id=instance.uuid)) self.mox.ReplayAll() controller_function = getattr(self.controller, action) self.assertRaises(webob.exc.HTTPNotFound, controller_function, self.req, instance.uuid, body=body) # Do these here instead of tearDown because this method is called # more than once for the same test case self.mox.VerifyAll() self.mox.UnsetStubs() class CommonTests(CommonMixin, 
                  test.NoDBTestCase):
    def _test_actions(self, actions, method_translations=None,
                      body_map=None, args_map=None):
        method_translations = method_translations or {}
        body_map = body_map or {}
        args_map = args_map or {}
        for action in actions:
            method = method_translations.get(action)
            body = body_map.get(action)
            self.mox.StubOutWithMock(self.compute_api,
                                     method or action.replace('_', ''))
            self._test_action(action, method=method, body=body,
                              compute_api_args_map=args_map)
            # Re-mock this.
            self.mox.StubOutWithMock(self.compute_api, 'get')

    def _test_actions_instance_not_found_in_compute_api(
            self, actions, method_translations=None, body_map=None,
            args_map=None):
        method_translations = method_translations or {}
        body_map = body_map or {}
        args_map = args_map or {}
        for action in actions:
            method = method_translations.get(action)
            body = body_map.get(action)
            self.mox.StubOutWithMock(self.compute_api,
                                     method or action.replace('_', ''))
            self._test_instance_not_found_in_compute_api(
                action, method=method, body=body,
                compute_api_args_map=args_map)
            # Re-mock this.
            self.mox.StubOutWithMock(self.compute_api, 'get')

    def _test_actions_with_non_existed_instance(self, actions,
                                                body_map=None):
        body_map = body_map or {}
        for action in actions:
            self._test_non_existing_instance(action,
                                             body_map=body_map.get(action))
            # Re-mock this.
            self.mox.StubOutWithMock(self.compute_api, 'get')

    def _test_actions_raise_conflict_on_invalid_state(
            self, actions, method_translations=None, body_map=None,
            args_map=None, exception_args=None):
        method_translations = method_translations or {}
        body_map = body_map or {}
        args_map = args_map or {}
        exception_args = exception_args or {}
        for action in actions:
            method = method_translations.get(action)
            exception_arg = exception_args.get(action)
            self.mox.StubOutWithMock(self.compute_api,
                                     method or action.replace('_', ''))
            self._test_invalid_state(action, method=method,
                                     body_map=body_map.get(action),
                                     compute_api_args_map=args_map,
                                     exception_arg=exception_arg)
            # Re-mock this.
            self.mox.StubOutWithMock(self.compute_api, 'get')

    def _test_actions_with_locked_instance(self, actions,
                                           method_translations=None,
                                           body_map=None, args_map=None):
        method_translations = method_translations or {}
        body_map = body_map or {}
        args_map = args_map or {}
        for action in actions:
            method = method_translations.get(action)
            body = body_map.get(action)
            self.mox.StubOutWithMock(self.compute_api,
                                     method or action.replace('_', ''))
            self._test_locked_instance(action, method=method, body=body,
                                       compute_api_args_map=args_map)
            # Re-mock this.
            self.mox.StubOutWithMock(self.compute_api, 'get')


nova-17.0.1/nova/tests/unit/api/openstack/compute/test_admin_actions.py

# Copyright 2011 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
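# [Editor's note] Not nova code: the CommonMixin/CommonTests harness that
# this module inherits (above) is written in the legacy mox record/replay
# style (AndReturn/AndRaise, ReplayAll, VerifyAll). The sketch below shows
# the same expectation expressed with the mock library, for readers
# unfamiliar with mox. All names are illustrative only.
def _example_mock_equivalent():
    import mock  # local import: this module does not otherwise use mock

    api = mock.Mock()
    api.get.return_value = 'instance'   # mox: api.get(...).AndReturn(...)
    assert api.get('ctxt', 'uuid') == 'instance'
    # mox verifies all recorded calls via VerifyAll(); mock asserts each
    # expectation explicitly after the code under test has run.
    api.get.assert_called_once_with('ctxt', 'uuid')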
from nova.api.openstack.compute import admin_actions as admin_actions_v21
from nova import exception
from nova import test
from nova.tests.unit.api.openstack.compute import admin_only_action_common
from nova.tests.unit.api.openstack import fakes


class AdminActionsTestV21(admin_only_action_common.CommonTests):
    admin_actions = admin_actions_v21
    _api_version = '2.1'

    def setUp(self):
        super(AdminActionsTestV21, self).setUp()
        self.controller = self.admin_actions.AdminActionsController()
        self.compute_api = self.controller.compute_api

        def _fake_controller(*args, **kwargs):
            return self.controller

        self.stubs.Set(self.admin_actions, 'AdminActionsController',
                       _fake_controller)
        self.mox.StubOutWithMock(self.compute_api, 'get')

    def test_actions(self):
        actions = ['_reset_network', '_inject_network_info']
        method_translations = {'_reset_network': 'reset_network',
                               '_inject_network_info':
                                   'inject_network_info'}
        self._test_actions(actions, method_translations)

    def test_actions_with_non_existed_instance(self):
        actions = ['_reset_network', '_inject_network_info']
        self._test_actions_with_non_existed_instance(actions)

    def test_actions_with_locked_instance(self):
        actions = ['_reset_network', '_inject_network_info']
        method_translations = {'_reset_network': 'reset_network',
                               '_inject_network_info':
                                   'inject_network_info'}
        self._test_actions_with_locked_instance(
            actions, method_translations=method_translations)


class AdminActionsPolicyEnforcementV21(test.NoDBTestCase):

    def setUp(self):
        super(AdminActionsPolicyEnforcementV21, self).setUp()
        self.controller = admin_actions_v21.AdminActionsController()
        self.req = fakes.HTTPRequest.blank('')
        self.fake_id = 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa'

    def common_policy_check(self, rule, fun_name, *arg, **kwarg):
        self.policy.set_rules(rule)
        func = getattr(self.controller, fun_name)
        exc = self.assertRaises(
            exception.PolicyNotAuthorized, func, *arg, **kwarg)
        self.assertEqual(
            "Policy doesn't allow %s to be performed." % rule.popitem()[0],
            exc.format_message())

    def test_reset_network_policy_failed(self):
        rule = {"os_compute_api:os-admin-actions:reset_network":
                    "project:non_fake"}
        self.common_policy_check(
            rule, "_reset_network", self.req, self.fake_id, body={})

    def test_inject_network_info_policy_failed(self):
        rule = {"os_compute_api:os-admin-actions:inject_network_info":
                    "project:non_fake"}
        self.common_policy_check(
            rule, "_inject_network_info", self.req, self.fake_id, body={})

    def test_reset_state_policy_failed(self):
        rule = {"os_compute_api:os-admin-actions:reset_state":
                    "project:non_fake"}
        self.common_policy_check(
            rule, "_reset_state", self.req, self.fake_id,
            body={"os-resetState": {"state": "active"}})


nova-17.0.1/nova/tests/unit/api/openstack/compute/test_serversV21.py

# Copyright 2010-2011 OpenStack Foundation
# Copyright 2011 Piston Cloud Computing, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
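# [Editor's note] Not nova code: a dependency-free sketch of the policy
# enforcement flow that the *PolicyEnforcement* tests in these modules
# exercise. A rule expression from the policy file is evaluated against
# the request credentials, and a failed match surfaces as
# PolicyNotAuthorized, which the API maps to HTTP 403. Nova's real
# enforcer is oslo.policy; the helper and names below are hypothetical
# and model only the simple 'key:value' rule form seen in these tests.
def _example_enforce(rule, creds):
    """Evaluate a single 'key:value' rule against request credentials."""
    key, _, expected = rule.partition(':')
    if creds.get(key) != expected:
        raise RuntimeError("Policy doesn't allow %s" % rule)  # -> HTTP 403

# Usage (illustrative):
#     _example_enforce('project_id:fake', {'project_id': 'fake'})   # passes
#     _example_enforce('project_id:fake', {'project_id': 'other'})  # raises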
import collections
import datetime
import ddt
import uuid

import fixtures
import iso8601
import mock
from oslo_policy import policy as oslo_policy
from oslo_serialization import base64
from oslo_serialization import jsonutils
from oslo_utils import encodeutils
from oslo_utils import timeutils
from oslo_utils import uuidutils
import six
from six.moves import range
import six.moves.urllib.parse as urlparse
import testtools
import webob

from nova.api.openstack import api_version_request
from nova.api.openstack import common
from nova.api.openstack import compute
from nova.api.openstack.compute import ips
from nova.api.openstack.compute import servers
from nova.api.openstack.compute import views
from nova.api.openstack import wsgi as os_wsgi
from nova import availability_zones
from nova.compute import api as compute_api
from nova.compute import flavors
from nova.compute import task_states
from nova.compute import vm_states
import nova.conf
from nova import context
from nova import db
from nova.db.sqlalchemy import api as db_api
from nova.db.sqlalchemy import models
from nova import exception
from nova.image import glance
from nova.network import manager
from nova import objects
from nova.objects import instance as instance_obj
from nova.objects import tag
from nova.policies import servers as server_policies
from nova import policy
from nova import test
from nova.tests import fixtures as nova_fixtures
from nova.tests.unit.api.openstack import fakes
from nova.tests.unit import fake_flavor
from nova.tests.unit import fake_instance
from nova.tests.unit import fake_network
from nova.tests.unit.image import fake
from nova.tests.unit import matchers
from nova.tests import uuidsentinel as uuids
from nova import utils as nova_utils

CONF = nova.conf.CONF

FAKE_UUID = fakes.FAKE_UUID

INSTANCE_IDS = {FAKE_UUID: 1}
FIELDS = instance_obj.INSTANCE_DEFAULT_FIELDS


def fake_gen_uuid():
    return FAKE_UUID


def return_servers_empty(context, *args, **kwargs):
    return objects.InstanceList(objects=[])


def instance_update_and_get_original(context, instance_uuid, values,
                                     columns_to_join=None):
    inst = fakes.stub_instance(INSTANCE_IDS.get(instance_uuid),
                               name=values.get('display_name'))
    inst = dict(inst, **values)
    return (inst, inst)


def instance_update(context, instance_uuid, values):
    inst = fakes.stub_instance(INSTANCE_IDS.get(instance_uuid),
                               name=values.get('display_name'))
    inst = dict(inst, **values)
    return inst


def fake_compute_api(cls, req, id):
    return True


def fake_start_stop_not_ready(self, context, instance):
    raise exception.InstanceNotReady(instance_id=instance["uuid"])


def fake_start_stop_invalid_state(self, context, instance):
    raise exception.InstanceInvalidState(
        instance_uuid=instance['uuid'], attr='fake_attr',
        method='fake_method', state='fake_state')


def fake_instance_get_by_uuid_not_found(context, uuid,
                                        columns_to_join, use_slave=False):
    raise exception.InstanceNotFound(instance_id=uuid)


def fake_instance_get_all_with_locked(context, list_locked, **kwargs):
    obj_list = []
    s_id = 0
    for locked in list_locked:
        uuid = fakes.get_fake_uuid(locked)
        s_id = s_id + 1
        kwargs['locked_by'] = None if locked == 'not_locked' else locked
        server = fakes.stub_instance_obj(context, id=s_id, uuid=uuid,
                                         **kwargs)
        obj_list.append(server)
    return objects.InstanceList(objects=obj_list)


def fake_instance_get_all_with_description(context, list_desc, **kwargs):
    obj_list = []
    s_id = 0
    for desc in list_desc:
        uuid = fakes.get_fake_uuid(desc)
        s_id = s_id + 1
        kwargs['display_description'] = desc
        server = fakes.stub_instance_obj(context,
id=s_id, uuid=uuid, **kwargs) obj_list.append(server) return objects.InstanceList(objects=obj_list) class MockSetAdminPassword(object): def __init__(self): self.instance_id = None self.password = None def __call__(self, context, instance_id, password): self.instance_id = instance_id self.password = password class ControllerTest(test.TestCase): def setUp(self): super(ControllerTest, self).setUp() self.flags(use_ipv6=False) fakes.stub_out_key_pair_funcs(self) fake.stub_out_image_service(self) return_server = fakes.fake_compute_get() return_servers = fakes.fake_compute_get_all() # Server sort keys extension is enabled in v21 so sort data is passed # to the instance API and the sorted DB API is invoked self.stubs.Set(compute_api.API, 'get_all', lambda api, *a, **k: return_servers(*a, **k)) self.stubs.Set(compute_api.API, 'get', lambda api, *a, **k: return_server(*a, **k)) self.stub_out('nova.db.instance_update_and_get_original', instance_update_and_get_original) self.flags(group='glance', api_servers=['http://localhost:9292']) self.controller = servers.ServersController() self.ips_controller = ips.IPsController() policy.reset() policy.init() fake_network.stub_out_nw_api_get_instance_nw_info(self) class ServersControllerTest(ControllerTest): wsgi_api_version = os_wsgi.DEFAULT_API_VERSION def req(self, url, use_admin_context=False): return fakes.HTTPRequest.blank(url, use_admin_context=use_admin_context, version=self.wsgi_api_version) @mock.patch('nova.objects.Instance.get_by_uuid') @mock.patch('nova.objects.InstanceMapping.get_by_instance_uuid') def test_cellsv1_instance_lookup_no_target(self, mock_get_im, mock_get_inst): self.flags(enable=True, group='cells') ctxt = context.RequestContext('fake', 'fake') self.controller._get_instance(ctxt, 'foo') self.assertFalse(mock_get_im.called) self.assertIsNone(ctxt.db_connection) @mock.patch('nova.objects.Instance.get_by_uuid') @mock.patch('nova.objects.InstanceMapping.get_by_instance_uuid') def test_instance_lookup_targets(self, mock_get_im, mock_get_inst): ctxt = context.RequestContext('fake', 'fake') mock_get_im.return_value.cell_mapping.database_connection = uuids.cell1 self.controller._get_instance(ctxt, 'foo') mock_get_im.assert_called_once_with(ctxt, 'foo') self.assertIsNotNone(ctxt.db_connection) def test_requested_networks_prefix(self): self.flags(use_neutron=True) uuid = 'br-00000000-0000-0000-0000-000000000000' requested_networks = [{'uuid': uuid}] res = self.controller._get_requested_networks(requested_networks) self.assertIn((uuid, None, None, None), res.as_tuples()) def test_requested_networks_neutronv2_enabled_with_port(self): self.flags(use_neutron=True) port = 'eeeeeeee-eeee-eeee-eeee-eeeeeeeeeeee' requested_networks = [{'port': port}] res = self.controller._get_requested_networks(requested_networks) self.assertEqual([(None, None, port, None)], res.as_tuples()) def test_requested_networks_neutronv2_enabled_with_network(self): self.flags(use_neutron=True) network = 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa' requested_networks = [{'uuid': network}] res = self.controller._get_requested_networks(requested_networks) self.assertEqual([(network, None, None, None)], res.as_tuples()) def test_requested_networks_neutronv2_enabled_with_network_and_port(self): self.flags(use_neutron=True) network = 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa' port = 'eeeeeeee-eeee-eeee-eeee-eeeeeeeeeeee' requested_networks = [{'uuid': network, 'port': port}] res = self.controller._get_requested_networks(requested_networks) self.assertEqual([(None, None, port, None)], 
res.as_tuples()) def test_requested_networks_with_duplicate_networks_nova_net(self): # duplicate networks are allowed only for nova neutron v2.0 self.flags(use_neutron=False) network = 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa' requested_networks = [{'uuid': network}, {'uuid': network}] self.assertRaises( webob.exc.HTTPBadRequest, self.controller._get_requested_networks, requested_networks) def test_requested_networks_with_neutronv2_and_duplicate_networks(self): # duplicate networks are allowed only for nova neutron v2.0 self.flags(use_neutron=True) network = 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa' requested_networks = [{'uuid': network}, {'uuid': network}] res = self.controller._get_requested_networks(requested_networks) self.assertEqual([(network, None, None, None), (network, None, None, None)], res.as_tuples()) def test_requested_networks_neutronv2_enabled_conflict_on_fixed_ip(self): self.flags(use_neutron=True) network = 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa' port = 'eeeeeeee-eeee-eeee-eeee-eeeeeeeeeeee' addr = '10.0.0.1' requested_networks = [{'uuid': network, 'fixed_ip': addr, 'port': port}] self.assertRaises( webob.exc.HTTPBadRequest, self.controller._get_requested_networks, requested_networks) def test_requested_networks_neutronv2_disabled_with_port(self): self.flags(use_neutron=False) port = 'eeeeeeee-eeee-eeee-eeee-eeeeeeeeeeee' requested_networks = [{'port': port}] self.assertRaises( webob.exc.HTTPBadRequest, self.controller._get_requested_networks, requested_networks) def test_requested_networks_api_enabled_with_v2_subclass(self): self.flags(use_neutron=True) network = 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa' port = 'eeeeeeee-eeee-eeee-eeee-eeeeeeeeeeee' requested_networks = [{'uuid': network, 'port': port}] res = self.controller._get_requested_networks(requested_networks) self.assertEqual([(None, None, port, None)], res.as_tuples()) def test_get_server_by_uuid(self): req = self.req('/fake/servers/%s' % FAKE_UUID) res_dict = self.controller.show(req, FAKE_UUID) self.assertEqual(res_dict['server']['id'], FAKE_UUID) def test_get_server_joins(self): def fake_get(_self, *args, **kwargs): expected_attrs = kwargs['expected_attrs'] self.assertEqual(['flavor', 'info_cache', 'metadata', 'numa_topology'], expected_attrs) ctxt = context.RequestContext('fake', 'fake') return fake_instance.fake_instance_obj( ctxt, expected_attrs=expected_attrs) self.stubs.Set(compute_api.API, 'get', fake_get) req = self.req('/fake/servers/%s' % FAKE_UUID) self.controller.show(req, FAKE_UUID) def test_unique_host_id(self): """Create two servers with the same host and different project_ids and check that the host_id's are unique. 
""" def return_instance_with_host(context, *args, **kwargs): project_id = uuidutils.generate_uuid() return fakes.stub_instance_obj(context, id=1, uuid=FAKE_UUID, project_id=project_id, host='fake_host') self.stubs.Set(compute_api.API, 'get', return_instance_with_host) req = self.req('/fake/servers/%s' % FAKE_UUID) with mock.patch.object(compute_api.API, 'get') as mock_get: mock_get.side_effect = return_instance_with_host server1 = self.controller.show(req, FAKE_UUID) server2 = self.controller.show(req, FAKE_UUID) self.assertNotEqual(server1['server']['hostId'], server2['server']['hostId']) def _get_server_data_dict(self, uuid, image_bookmark, flavor_bookmark, status="ACTIVE", progress=100): return { "server": { "id": uuid, "user_id": "fake_user", "tenant_id": "fake_project", "updated": "2010-11-11T11:00:00Z", "created": "2010-10-10T12:00:00Z", "progress": progress, "name": "server2", "status": status, "hostId": '', "image": { "id": "10", "links": [ { "rel": "bookmark", "href": image_bookmark, }, ], }, "flavor": { "id": "2", "links": [ { "rel": "bookmark", "href": flavor_bookmark, }, ], }, "addresses": { 'test1': [ {'version': 4, 'addr': '192.168.1.100', 'OS-EXT-IPS:type': 'fixed', 'OS-EXT-IPS-MAC:mac_addr': 'aa:aa:aa:aa:aa:aa'}, {'version': 6, 'addr': '2001:db8:0:1::1', 'OS-EXT-IPS:type': 'fixed', 'OS-EXT-IPS-MAC:mac_addr': 'aa:aa:aa:aa:aa:aa'} ] }, "metadata": { "seq": "2", }, "links": [ { "rel": "self", "href": "http://localhost/v2/fake/servers/%s" % uuid, }, { "rel": "bookmark", "href": "http://localhost/fake/servers/%s" % uuid, }, ], "OS-DCF:diskConfig": "MANUAL", "accessIPv4": '', "accessIPv6": '', } } def test_get_server_by_id(self): self.flags(use_ipv6=True) image_bookmark = "http://localhost/fake/images/10" flavor_bookmark = "http://localhost/fake/flavors/2" uuid = FAKE_UUID req = self.req('/v2/fake/servers/%s' % uuid) res_dict = self.controller.show(req, uuid) expected_server = self._get_server_data_dict(uuid, image_bookmark, flavor_bookmark, progress=0) expected_server['server']['name'] = 'server1' expected_server['server']['metadata']['seq'] = '1' self.assertThat(res_dict, matchers.DictMatches(expected_server)) def test_get_server_with_active_status_by_id(self): image_bookmark = "http://localhost/fake/images/10" flavor_bookmark = "http://localhost/fake/flavors/2" new_return_server = fakes.fake_compute_get( id=2, vm_state=vm_states.ACTIVE, progress=100) self.stubs.Set(compute_api.API, 'get', lambda api, *a, **k: new_return_server(*a, **k)) uuid = FAKE_UUID req = self.req('/fake/servers/%s' % uuid) res_dict = self.controller.show(req, uuid) expected_server = self._get_server_data_dict(uuid, image_bookmark, flavor_bookmark) self.assertThat(res_dict, matchers.DictMatches(expected_server)) def test_get_server_with_id_image_ref_by_id(self): image_ref = "10" image_bookmark = "http://localhost/fake/images/10" flavor_id = "1" flavor_bookmark = "http://localhost/fake/flavors/2" new_return_server = fakes.fake_compute_get( id=2, vm_state=vm_states.ACTIVE, image_ref=image_ref, flavor_id=flavor_id, progress=100) self.stubs.Set(compute_api.API, 'get', lambda api, *a, **k: new_return_server(*a, **k)) uuid = FAKE_UUID req = self.req('/fake/servers/%s' % uuid) res_dict = self.controller.show(req, uuid) expected_server = self._get_server_data_dict(uuid, image_bookmark, flavor_bookmark) self.assertThat(res_dict, matchers.DictMatches(expected_server)) def test_get_server_addresses_from_cache(self): pub0 = ('172.19.0.1', '172.19.0.2',) pub1 = ('1.2.3.4',) pub2 = ('b33f::fdee:ddff:fecc:bbaa',) priv0 = 
('192.168.0.3', '192.168.0.4',) def _ip(ip): return {'address': ip, 'type': 'fixed'} nw_cache = [ {'address': 'aa:aa:aa:aa:aa:aa', 'id': 1, 'network': {'bridge': 'br0', 'id': 1, 'label': 'public', 'subnets': [{'cidr': '172.19.0.0/24', 'ips': [_ip(ip) for ip in pub0]}, {'cidr': '1.2.3.0/16', 'ips': [_ip(ip) for ip in pub1]}, {'cidr': 'b33f::/64', 'ips': [_ip(ip) for ip in pub2]}]}}, {'address': 'bb:bb:bb:bb:bb:bb', 'id': 2, 'network': {'bridge': 'br1', 'id': 2, 'label': 'private', 'subnets': [{'cidr': '192.168.0.0/24', 'ips': [_ip(ip) for ip in priv0]}]}}] return_server = fakes.fake_compute_get(nw_cache=nw_cache) self.stubs.Set(compute_api.API, 'get', lambda api, *a, **k: return_server(*a, **k)) req = self.req('/fake/servers/%s/ips' % FAKE_UUID) res_dict = self.ips_controller.index(req, FAKE_UUID) expected = { 'addresses': { 'private': [ {'version': 4, 'addr': '192.168.0.3'}, {'version': 4, 'addr': '192.168.0.4'}, ], 'public': [ {'version': 4, 'addr': '172.19.0.1'}, {'version': 4, 'addr': '172.19.0.2'}, {'version': 4, 'addr': '1.2.3.4'}, {'version': 6, 'addr': 'b33f::fdee:ddff:fecc:bbaa'}, ], }, } self.assertThat(res_dict, matchers.DictMatches(expected)) # Make sure we kept the addresses in order self.assertIsInstance(res_dict['addresses'], collections.OrderedDict) labels = [vif['network']['label'] for vif in nw_cache] for index, label in enumerate(res_dict['addresses'].keys()): self.assertEqual(label, labels[index]) def test_get_server_addresses_nonexistent_network(self): url = '/v2/fake/servers/%s/ips/network_0' % FAKE_UUID req = self.req(url) self.assertRaises(webob.exc.HTTPNotFound, self.ips_controller.show, req, FAKE_UUID, 'network_0') def test_get_server_addresses_nonexistent_server(self): def fake_instance_get(*args, **kwargs): raise exception.InstanceNotFound(instance_id='fake') self.stubs.Set(compute_api.API, 'get', fake_instance_get) server_id = uuids.fake req = self.req('/fake/servers/%s/ips' % server_id) self.assertRaises(webob.exc.HTTPNotFound, self.ips_controller.index, req, server_id) def test_get_server_list_empty(self): self.stubs.Set(compute_api.API, 'get_all', return_servers_empty) req = self.req('/fake/servers') res_dict = self.controller.index(req) num_servers = len(res_dict['servers']) self.assertEqual(0, num_servers) def test_get_server_list_with_reservation_id(self): req = self.req('/fake/servers?reservation_id=foo') res_dict = self.controller.index(req) i = 0 for s in res_dict['servers']: self.assertEqual(s.get('name'), 'server%d' % (i + 1)) i += 1 def test_get_server_list_with_reservation_id_empty(self): req = self.req('/fake/servers/detail?' 'reservation_id=foo') res_dict = self.controller.detail(req) i = 0 for s in res_dict['servers']: self.assertEqual(s.get('name'), 'server%d' % (i + 1)) i += 1 def test_get_server_list_with_reservation_id_details(self): req = self.req('/fake/servers/detail?' 
'reservation_id=foo') res_dict = self.controller.detail(req) i = 0 for s in res_dict['servers']: self.assertEqual(s.get('name'), 'server%d' % (i + 1)) i += 1 def test_get_server_list(self): req = self.req('/fake/servers') res_dict = self.controller.index(req) self.assertEqual(len(res_dict['servers']), 5) for i, s in enumerate(res_dict['servers']): self.assertEqual(s['id'], fakes.get_fake_uuid(i)) self.assertEqual(s['name'], 'server%d' % (i + 1)) self.assertIsNone(s.get('image', None)) expected_links = [ { "rel": "self", "href": "http://localhost/v2/fake/servers/%s" % s['id'], }, { "rel": "bookmark", "href": "http://localhost/fake/servers/%s" % s['id'], }, ] self.assertEqual(s['links'], expected_links) def test_get_servers_with_limit(self): req = self.req('/fake/servers?limit=3') res_dict = self.controller.index(req) servers = res_dict['servers'] self.assertEqual([s['id'] for s in servers], [fakes.get_fake_uuid(i) for i in range(len(servers))]) servers_links = res_dict['servers_links'] self.assertEqual(servers_links[0]['rel'], 'next') href_parts = urlparse.urlparse(servers_links[0]['href']) self.assertEqual('/v2/fake/servers', href_parts.path) params = urlparse.parse_qs(href_parts.query) expected_params = {'limit': ['3'], 'marker': [fakes.get_fake_uuid(2)]} self.assertThat(params, matchers.DictMatches(expected_params)) def test_get_servers_with_limit_bad_value(self): req = self.req('/fake/servers?limit=aaa') self.assertRaises(exception.ValidationError, self.controller.index, req) def test_get_server_details_empty(self): self.stubs.Set(compute_api.API, 'get_all', return_servers_empty) req = self.req('/fake/servers/detail') res_dict = self.controller.detail(req) num_servers = len(res_dict['servers']) self.assertEqual(0, num_servers) def test_get_server_details_with_bad_name(self): req = self.req('/fake/servers/detail?name=%2Binstance') self.assertRaises(exception.ValidationError, self.controller.index, req) def test_get_server_details_with_limit(self): req = self.req('/fake/servers/detail?limit=3') res = self.controller.detail(req) servers = res['servers'] self.assertEqual([s['id'] for s in servers], [fakes.get_fake_uuid(i) for i in range(len(servers))]) servers_links = res['servers_links'] self.assertEqual(servers_links[0]['rel'], 'next') href_parts = urlparse.urlparse(servers_links[0]['href']) self.assertEqual('/v2/fake/servers/detail', href_parts.path) params = urlparse.parse_qs(href_parts.query) expected = {'limit': ['3'], 'marker': [fakes.get_fake_uuid(2)]} self.assertThat(params, matchers.DictMatches(expected)) def test_get_server_details_with_limit_bad_value(self): req = self.req('/fake/servers/detail?limit=aaa') self.assertRaises(exception.ValidationError, self.controller.detail, req) def test_get_server_details_with_limit_and_other_params(self): req = self.req('/fake/servers/detail' '?limit=3&blah=2:t' '&sort_key=uuid&sort_dir=asc') res = self.controller.detail(req) servers = res['servers'] self.assertEqual([s['id'] for s in servers], [fakes.get_fake_uuid(i) for i in range(len(servers))]) servers_links = res['servers_links'] self.assertEqual(servers_links[0]['rel'], 'next') href_parts = urlparse.urlparse(servers_links[0]['href']) self.assertEqual('/v2/fake/servers/detail', href_parts.path) params = urlparse.parse_qs(href_parts.query) expected = {'limit': ['3'], 'sort_key': ['uuid'], 'sort_dir': ['asc'], 'marker': [fakes.get_fake_uuid(2)]} self.assertThat(params, matchers.DictMatches(expected)) def test_get_servers_with_too_big_limit(self): req = self.req('/fake/servers?limit=30') 
res_dict = self.controller.index(req) self.assertNotIn('servers_links', res_dict) def test_get_servers_with_bad_limit(self): req = self.req('/fake/servers?limit=asdf') self.assertRaises(exception.ValidationError, self.controller.index, req) def test_get_servers_with_marker(self): url = '/v2/fake/servers?marker=%s' % fakes.get_fake_uuid(2) req = self.req(url) servers = self.controller.index(req)['servers'] self.assertEqual([s['name'] for s in servers], ["server4", "server5"]) def test_get_servers_with_limit_and_marker(self): url = ('/v2/fake/servers?limit=2&marker=%s' % fakes.get_fake_uuid(1)) req = self.req(url) servers = self.controller.index(req)['servers'] self.assertEqual([s['name'] for s in servers], ['server3', 'server4']) def test_get_servers_with_bad_marker(self): req = self.req('/fake/servers?limit=2&marker=asdf') self.assertRaises(webob.exc.HTTPBadRequest, self.controller.index, req) def test_get_servers_with_invalid_filter_param(self): req = self.req('/fake/servers?info_cache=asdf', use_admin_context=True) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.index, req) req = self.req('/fake/servers?__foo__=asdf', use_admin_context=True) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.index, req) def test_get_servers_with_invalid_regex_filter_param(self): req = self.req('/fake/servers?flavor=[[[', use_admin_context=True) self.assertRaises(exception.ValidationError, self.controller.index, req) def test_get_servers_with_empty_regex_filter_param(self): empty_string = '' req = self.req('/fake/servers?flavor=%s' % empty_string, use_admin_context=True) self.assertRaises(exception.ValidationError, self.controller.index, req) def test_get_servers_detail_with_empty_regex_filter_param(self): empty_string = '' req = self.req('/fake/servers/detail?flavor=%s' % empty_string, use_admin_context=True) self.assertRaises(exception.ValidationError, self.controller.detail, req) def test_get_servers_invalid_sort_key(self): req = self.req('/fake/servers?sort_key=foo&sort_dir=desc') self.assertRaises(exception.ValidationError, self.controller.index, req) @mock.patch.object(compute_api.API, 'get_all') def test_get_servers_ignore_sort_key(self, mock_get): req = self.req('/fake/servers?sort_key=vcpus&sort_dir=asc') self.controller.index(req) mock_get.assert_called_once_with( mock.ANY, search_opts=mock.ANY, limit=mock.ANY, marker=mock.ANY, expected_attrs=mock.ANY, sort_keys=[], sort_dirs=[]) @mock.patch.object(compute_api.API, 'get_all') def test_get_servers_ignore_sort_key_only_one_dir(self, mock_get): req = self.req( '/fake/servers?sort_key=user_id&sort_key=vcpus&sort_dir=asc') self.controller.index(req) mock_get.assert_called_once_with( mock.ANY, search_opts=mock.ANY, limit=mock.ANY, marker=mock.ANY, expected_attrs=mock.ANY, sort_keys=['user_id'], sort_dirs=['asc']) @mock.patch.object(compute_api.API, 'get_all') def test_get_servers_ignore_sort_key_with_no_sort_dir(self, mock_get): req = self.req('/fake/servers?sort_key=vcpus&sort_key=user_id') self.controller.index(req) mock_get.assert_called_once_with( mock.ANY, search_opts=mock.ANY, limit=mock.ANY, marker=mock.ANY, expected_attrs=mock.ANY, sort_keys=['user_id'], sort_dirs=[]) @mock.patch.object(compute_api.API, 'get_all') def test_get_servers_ignore_sort_key_with_bad_sort_dir(self, mock_get): req = self.req('/fake/servers?sort_key=vcpus&sort_dir=bad_dir') self.controller.index(req) mock_get.assert_called_once_with( mock.ANY, search_opts=mock.ANY, limit=mock.ANY, marker=mock.ANY, expected_attrs=mock.ANY, sort_keys=[], sort_dirs=[]) 
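    # [Editor's note] Illustrative only, not the real implementation: a
    # simplified model of the sort-parameter filtering that the
    # test_get_servers_ignore_sort_key* tests above verify. Keys outside a
    # whitelist are dropped, and a direction is kept only when it was
    # supplied alongside a key that survived. The whitelist below is
    # hypothetical; nova derives the real one from policy and config.
    @staticmethod
    def _example_filter_sort_params(sort_keys, sort_dirs,
                                    allowed=('user_id', 'node')):
        kept_keys, kept_dirs = [], []
        for i, key in enumerate(sort_keys):
            if key not in allowed:
                continue
            kept_keys.append(key)
            if i < len(sort_dirs) and sort_dirs[i] in ('asc', 'desc'):
                kept_dirs.append(sort_dirs[i])
        return kept_keys, kept_dirs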
    def test_get_servers_non_admin_with_admin_only_sort_key(self):
        req = self.req('/fake/servers?sort_key=host&sort_dir=desc')
        self.assertRaises(webob.exc.HTTPForbidden,
                          self.controller.index, req)

    @mock.patch.object(compute_api.API, 'get_all')
    def test_get_servers_admin_with_admin_only_sort_key(self, mock_get):
        req = self.req('/fake/servers?sort_key=node&sort_dir=desc',
                       use_admin_context=True)
        self.controller.detail(req)
        mock_get.assert_called_once_with(
            mock.ANY, search_opts=mock.ANY, limit=mock.ANY, marker=mock.ANY,
            expected_attrs=mock.ANY, sort_keys=['node'], sort_dirs=['desc'])

    def test_get_servers_with_bad_option(self):
        server_uuid = uuids.fake

        def fake_get_all(compute_self, context, search_opts=None,
                         limit=None, marker=None, expected_attrs=None,
                         sort_keys=None, sort_dirs=None):
            db_list = [fakes.stub_instance(100, uuid=server_uuid)]
            return instance_obj._make_instance_list(
                context, objects.InstanceList(), db_list, FIELDS)

        self.stubs.Set(compute_api.API, 'get_all', fake_get_all)

        req = self.req('/fake/servers?unknownoption=whee')
        servers = self.controller.index(req)['servers']

        self.assertEqual(len(servers), 1)
        self.assertEqual(servers[0]['id'], server_uuid)

    def test_get_servers_allows_image(self):
        server_uuid = uuids.fake

        def fake_get_all(compute_self, context, search_opts=None,
                         limit=None, marker=None, expected_attrs=None,
                         sort_keys=None, sort_dirs=None):
            self.assertIsNotNone(search_opts)
            self.assertIn('image', search_opts)
            self.assertEqual(search_opts['image'], '12345')
            db_list = [fakes.stub_instance(100, uuid=server_uuid)]
            return instance_obj._make_instance_list(
                context, objects.InstanceList(), db_list, FIELDS)

        self.stubs.Set(compute_api.API, 'get_all', fake_get_all)

        req = self.req('/fake/servers?image=12345')
        servers = self.controller.index(req)['servers']

        self.assertEqual(len(servers), 1)
        self.assertEqual(servers[0]['id'], server_uuid)

    def test_tenant_id_filter_no_admin_context(self):
        def fake_get_all(context, search_opts=None, **kwargs):
            self.assertIsNotNone(search_opts)
            self.assertNotIn('tenant_id', search_opts)
            self.assertEqual(search_opts['project_id'], 'fake')
            return [fakes.stub_instance_obj(100)]

        req = self.req('/fake/servers?tenant_id=newfake')
        with mock.patch.object(compute_api.API, 'get_all') as mock_get:
            mock_get.side_effect = fake_get_all
            servers = self.controller.index(req)['servers']
        self.assertEqual(len(servers), 1)

    def test_tenant_id_filter_admin_context(self):
        """Test tenant_id search opt is dropped if all_tenants is not set."""
        def fake_get_all(context, search_opts=None, **kwargs):
            self.assertIsNotNone(search_opts)
            self.assertNotIn('tenant_id', search_opts)
            self.assertEqual('fake', search_opts['project_id'])
            return [fakes.stub_instance_obj(100)]

        req = self.req('/fake/servers?tenant_id=newfake',
                       use_admin_context=True)
        with mock.patch.object(compute_api.API, 'get_all') as mock_get:
            mock_get.side_effect = fake_get_all
            servers = self.controller.index(req)['servers']
        self.assertEqual(len(servers), 1)

    def test_all_tenants_param_normal(self):
        def fake_get_all(context, search_opts=None, **kwargs):
            self.assertNotIn('project_id', search_opts)
            return [fakes.stub_instance_obj(100)]

        req = self.req('/fake/servers?all_tenants',
                       use_admin_context=True)
        with mock.patch.object(compute_api.API, 'get_all') as mock_get:
            mock_get.side_effect = fake_get_all
            servers = self.controller.index(req)['servers']
        self.assertEqual(len(servers), 1)

    def test_all_tenants_param_one(self):
        def fake_get_all(api, context, search_opts=None, **kwargs):
            self.assertNotIn('project_id', search_opts)
            return
[fakes.stub_instance_obj(100)] self.stubs.Set(compute_api.API, 'get_all', fake_get_all) req = self.req('/fake/servers?all_tenants=1', use_admin_context=True) servers = self.controller.index(req)['servers'] self.assertEqual(len(servers), 1) def test_all_tenants_param_zero(self): def fake_get_all(api, context, search_opts=None, **kwargs): self.assertNotIn('all_tenants', search_opts) return [fakes.stub_instance_obj(100)] self.stubs.Set(compute_api.API, 'get_all', fake_get_all) req = self.req('/fake/servers?all_tenants=0', use_admin_context=True) servers = self.controller.index(req)['servers'] self.assertEqual(len(servers), 1) def test_all_tenants_param_false(self): def fake_get_all(api, context, search_opts=None, **kwargs): self.assertNotIn('all_tenants', search_opts) return [fakes.stub_instance_obj(100)] self.stubs.Set(compute_api.API, 'get_all', fake_get_all) req = self.req('/fake/servers?all_tenants=false', use_admin_context=True) servers = self.controller.index(req)['servers'] self.assertEqual(len(servers), 1) def test_all_tenants_param_invalid(self): def fake_get_all(api, context, search_opts=None, **kwargs): self.assertNotIn('all_tenants', search_opts) return [fakes.stub_instance_obj(100)] self.stubs.Set(compute_api.API, 'get_all', fake_get_all) req = self.req('/fake/servers?all_tenants=xxx', use_admin_context=True) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.index, req) def test_admin_restricted_tenant(self): def fake_get_all(api, context, search_opts=None, **kwargs): self.assertIsNotNone(search_opts) self.assertEqual(search_opts['project_id'], 'fake') return [fakes.stub_instance_obj(100)] self.stubs.Set(compute_api.API, 'get_all', fake_get_all) req = self.req('/fake/servers', use_admin_context=True) servers = self.controller.index(req)['servers'] self.assertEqual(len(servers), 1) def test_all_tenants_pass_policy(self): def fake_get_all(api, context, search_opts=None, **kwargs): self.assertIsNotNone(search_opts) self.assertNotIn('project_id', search_opts) self.assertTrue(context.is_admin) return [fakes.stub_instance_obj(100)] self.stubs.Set(compute_api.API, 'get_all', fake_get_all) rules = { "os_compute_api:servers:index": "project_id:fake", "os_compute_api:servers:index:get_all_tenants": "project_id:fake" } policy.set_rules(oslo_policy.Rules.from_dict(rules)) req = self.req('/fake/servers?all_tenants=1') servers = self.controller.index(req)['servers'] self.assertEqual(len(servers), 1) def test_all_tenants_fail_policy(self): def fake_get_all(api, context, search_opts=None, **kwargs): self.assertIsNotNone(search_opts) return [fakes.stub_instance_obj(100)] rules = { "os_compute_api:servers:index:get_all_tenants": "project_id:non_fake", "os_compute_api:servers:get_all": "project_id:fake", } policy.set_rules(oslo_policy.Rules.from_dict(rules)) self.stubs.Set(compute_api.API, 'get_all', fake_get_all) req = self.req('/fake/servers?all_tenants=1') self.assertRaises(exception.PolicyNotAuthorized, self.controller.index, req) def test_get_servers_allows_flavor(self): server_uuid = uuids.fake def fake_get_all(compute_self, context, search_opts=None, limit=None, marker=None, expected_attrs=None, sort_keys=None, sort_dirs=None): self.assertIsNotNone(search_opts) self.assertIn('flavor', search_opts) # flavor is an integer ID self.assertEqual(search_opts['flavor'], '12345') return objects.InstanceList( objects=[fakes.stub_instance_obj(100, uuid=server_uuid)]) self.stubs.Set(compute_api.API, 'get_all', fake_get_all) req = self.req('/fake/servers?flavor=12345') servers = 
self.controller.index(req)['servers'] self.assertEqual(len(servers), 1) self.assertEqual(servers[0]['id'], server_uuid) def test_get_servers_with_bad_flavor(self): req = self.req('/fake/servers?flavor=abcde') with mock.patch.object(compute_api.API, 'get_all') as mock_get: mock_get.return_value = objects.InstanceList(objects=[]) servers = self.controller.index(req)['servers'] self.assertEqual(len(servers), 0) def test_get_server_details_with_bad_flavor(self): req = self.req('/fake/servers?flavor=abcde') with mock.patch.object(compute_api.API, 'get_all') as mock_get: mock_get.return_value = objects.InstanceList(objects=[]) servers = self.controller.detail(req)['servers'] self.assertThat(servers, testtools.matchers.HasLength(0)) def test_get_servers_allows_status(self): server_uuid = uuids.fake def fake_get_all(compute_self, context, search_opts=None, limit=None, marker=None, expected_attrs=None, sort_keys=None, sort_dirs=None): self.assertIsNotNone(search_opts) self.assertIn('vm_state', search_opts) self.assertEqual(search_opts['vm_state'], [vm_states.ACTIVE]) return objects.InstanceList( objects=[fakes.stub_instance_obj(100, uuid=server_uuid)]) self.stubs.Set(compute_api.API, 'get_all', fake_get_all) req = self.req('/fake/servers?status=active') servers = self.controller.index(req)['servers'] self.assertEqual(len(servers), 1) self.assertEqual(servers[0]['id'], server_uuid) def test_get_servers_allows_task_status(self): server_uuid = uuids.fake task_state = task_states.REBOOTING def fake_get_all(compute_self, context, search_opts=None, limit=None, marker=None, expected_attrs=None, sort_keys=None, sort_dirs=None): self.assertIsNotNone(search_opts) self.assertIn('task_state', search_opts) self.assertEqual([task_states.REBOOT_PENDING, task_states.REBOOT_STARTED, task_states.REBOOTING], search_opts['task_state']) return objects.InstanceList( objects=[fakes.stub_instance_obj(100, uuid=server_uuid, task_state=task_state)]) self.stubs.Set(compute_api.API, 'get_all', fake_get_all) req = self.req('/fake/servers?status=reboot') servers = self.controller.index(req)['servers'] self.assertEqual(len(servers), 1) self.assertEqual(servers[0]['id'], server_uuid) def test_get_servers_resize_status(self): # Test when resize status, it maps list of vm states. server_uuid = uuids.fake def fake_get_all(compute_self, context, search_opts=None, limit=None, marker=None, expected_attrs=None, sort_keys=None, sort_dirs=None): self.assertIn('vm_state', search_opts) self.assertEqual(search_opts['vm_state'], [vm_states.ACTIVE, vm_states.STOPPED]) return objects.InstanceList( objects=[fakes.stub_instance_obj(100, uuid=server_uuid)]) self.stubs.Set(compute_api.API, 'get_all', fake_get_all) req = self.req('/fake/servers?status=resize') servers = self.controller.detail(req)['servers'] self.assertEqual(len(servers), 1) self.assertEqual(servers[0]['id'], server_uuid) def test_get_servers_invalid_status(self): # Test getting servers by invalid status. 
        req = self.req('/fake/servers?status=baloney',
                       use_admin_context=False)
        servers = self.controller.index(req)['servers']
        self.assertEqual(len(servers), 0)

    def test_get_servers_deleted_status_as_user(self):
        req = self.req('/fake/servers?status=deleted',
                       use_admin_context=False)
        self.assertRaises(webob.exc.HTTPForbidden,
                          self.controller.detail, req)

    def test_get_servers_deleted_status_as_admin(self):
        server_uuid = uuids.fake

        def fake_get_all(compute_self, context, search_opts=None,
                         limit=None, marker=None, expected_attrs=None,
                         sort_keys=None, sort_dirs=None):
            self.assertIn('vm_state', search_opts)
            self.assertEqual(search_opts['vm_state'], ['deleted'])
            return objects.InstanceList(
                objects=[fakes.stub_instance_obj(100, uuid=server_uuid)])

        self.stubs.Set(compute_api.API, 'get_all', fake_get_all)

        req = self.req('/fake/servers?status=deleted',
                       use_admin_context=True)
        servers = self.controller.detail(req)['servers']
        self.assertEqual(len(servers), 1)
        self.assertEqual(servers[0]['id'], server_uuid)

    @mock.patch.object(compute_api.API, 'get_all')
    def test_get_servers_deleted_filter_str_to_bool(self, mock_get_all):
        server_uuid = uuids.fake
        db_list = objects.InstanceList(
            objects=[fakes.stub_instance_obj(100, uuid=server_uuid,
                                             vm_state='deleted')])
        mock_get_all.return_value = db_list

        req = self.req('/fake/servers?deleted=true',
                       use_admin_context=True)
        servers = self.controller.detail(req)['servers']
        self.assertEqual(1, len(servers))
        self.assertEqual(server_uuid, servers[0]['id'])

        # Assert that 'deleted' filter value is converted to boolean
        # while calling get_all() method.
        expected_search_opts = {'deleted': True, 'project_id': 'fake'}
        self.assertEqual(expected_search_opts,
                         mock_get_all.call_args[1]['search_opts'])

    @mock.patch.object(compute_api.API, 'get_all')
    def test_get_servers_deleted_filter_invalid_str(self, mock_get_all):
        server_uuid = uuids.fake
        db_list = objects.InstanceList(
            objects=[fakes.stub_instance_obj(100, uuid=server_uuid)])
        mock_get_all.return_value = db_list

        req = fakes.HTTPRequest.blank('/fake/servers?deleted=abc',
                                      use_admin_context=True)
        servers = self.controller.detail(req)['servers']
        self.assertEqual(1, len(servers))
        self.assertEqual(server_uuid, servers[0]['id'])

        # Assert that invalid 'deleted' filter value is converted to boolean
        # False while calling get_all() method.
        expected_search_opts = {'deleted': False, 'project_id': 'fake'}
        self.assertEqual(expected_search_opts,
                         mock_get_all.call_args[1]['search_opts'])

    def test_get_servers_allows_name(self):
        server_uuid = uuids.fake

        def fake_get_all(compute_self, context, search_opts=None,
                         limit=None, marker=None, expected_attrs=None,
                         sort_keys=None, sort_dirs=None):
            self.assertIsNotNone(search_opts)
            self.assertIn('name', search_opts)
            self.assertEqual(search_opts['name'], 'whee.*')
            self.assertEqual([], expected_attrs)
            return objects.InstanceList(
                objects=[fakes.stub_instance_obj(100, uuid=server_uuid)])

        self.stubs.Set(compute_api.API, 'get_all', fake_get_all)

        req = self.req('/fake/servers?name=whee.*')
        servers = self.controller.index(req)['servers']
        self.assertEqual(len(servers), 1)
        self.assertEqual(servers[0]['id'], server_uuid)

    @mock.patch.object(compute_api.API, 'get_all')
    def test_get_servers_flavor_not_found(self, get_all_mock):
        get_all_mock.side_effect = exception.FlavorNotFound(flavor_id=1)

        req = fakes.HTTPRequest.blank(
            '/fake/servers?status=active&flavor=abc')
        servers = self.controller.index(req)['servers']
        self.assertEqual(0, len(servers))

    def test_get_servers_allows_changes_since(self):
        server_uuid = uuids.fake

        def fake_get_all(compute_self, context, search_opts=None,
                         limit=None, marker=None, expected_attrs=None,
                         sort_keys=None, sort_dirs=None):
            self.assertIsNotNone(search_opts)
            self.assertIn('changes-since', search_opts)
            changes_since = datetime.datetime(2011, 1, 24, 17, 8, 1,
                                              tzinfo=iso8601.iso8601.UTC)
            self.assertEqual(search_opts['changes-since'], changes_since)
            self.assertNotIn('deleted', search_opts)
            return objects.InstanceList(
                objects=[fakes.stub_instance_obj(100, uuid=server_uuid)])

        self.stubs.Set(compute_api.API, 'get_all', fake_get_all)

        params = 'changes-since=2011-01-24T17:08:01Z'
        req = self.req('/fake/servers?%s' % params)
        servers = self.controller.index(req)['servers']
        self.assertEqual(len(servers), 1)
        self.assertEqual(servers[0]['id'], server_uuid)

    def test_get_servers_allows_changes_since_bad_value(self):
        params = 'changes-since=asdf'
        req = self.req('/fake/servers?%s' % params)
        self.assertRaises(exception.ValidationError,
                          self.controller.index, req)

    def test_get_servers_admin_filters_as_user(self):
        """Test getting servers by admin-only or unknown options when
        context is not admin. Make sure the admin and unknown options
        are stripped before they get to compute_api.get_all()
        """
        server_uuid = uuids.fake

        def fake_get_all(compute_self, context, search_opts=None,
                         limit=None, marker=None, expected_attrs=None,
                         sort_keys=None, sort_dirs=None):
            self.assertIsNotNone(search_opts)
            # Allowed by user
            self.assertIn('name', search_opts)
            self.assertIn('ip', search_opts)
            # OSAPI converts status to vm_state
            self.assertIn('vm_state', search_opts)
            # Allowed only by admins with admin API on
            self.assertNotIn('unknown_option', search_opts)
            return objects.InstanceList(
                objects=[fakes.stub_instance_obj(100, uuid=server_uuid)])

        self.stubs.Set(compute_api.API, 'get_all', fake_get_all)

        query_str = "name=foo&ip=10.*&status=active&unknown_option=meow"
        req = fakes.HTTPRequest.blank('/fake/servers?%s' % query_str)
        res = self.controller.index(req)

        servers = res['servers']
        self.assertEqual(len(servers), 1)
        self.assertEqual(servers[0]['id'], server_uuid)

    def test_get_servers_admin_options_as_admin(self):
        """Test getting servers by admin-only or unknown options when
        context is admin. All options should be passed
        """
        server_uuid = uuids.fake

        def fake_get_all(compute_self, context, search_opts=None,
                         limit=None, marker=None, expected_attrs=None,
                         sort_keys=None, sort_dirs=None):
            self.assertIsNotNone(search_opts)
            # Allowed by user
            self.assertIn('name', search_opts)
            self.assertIn('terminated_at', search_opts)
            # OSAPI converts status to vm_state
            self.assertIn('vm_state', search_opts)
            # Allowed only by admins with admin API on
            self.assertIn('ip', search_opts)
            self.assertNotIn('unknown_option', search_opts)
            return objects.InstanceList(
                objects=[fakes.stub_instance_obj(100, uuid=server_uuid)])

        self.stubs.Set(compute_api.API, 'get_all', fake_get_all)

        query_str = ("name=foo&ip=10.*&status=active&unknown_option=meow&"
                     "terminated_at=^2016-02-01.*")
        req = self.req('/fake/servers?%s' % query_str,
                       use_admin_context=True)
        servers = self.controller.index(req)['servers']
        self.assertEqual(len(servers), 1)
        self.assertEqual(servers[0]['id'], server_uuid)

    def test_get_servers_allows_ip(self):
        """Test getting servers by ip."""
        server_uuid = uuids.fake

        def fake_get_all(compute_self, context, search_opts=None,
                         limit=None, marker=None, expected_attrs=None,
                         sort_keys=None, sort_dirs=None):
            self.assertIsNotNone(search_opts)
            self.assertIn('ip', search_opts)
            self.assertEqual(search_opts['ip'], '10\..*')
            return objects.InstanceList(
                objects=[fakes.stub_instance_obj(100, uuid=server_uuid)])

        self.stubs.Set(compute_api.API, 'get_all', fake_get_all)

        req = self.req('/fake/servers?ip=10\..*')
        servers = self.controller.index(req)['servers']
        self.assertEqual(len(servers), 1)
        self.assertEqual(servers[0]['id'], server_uuid)

    def test_get_servers_admin_allows_ip6(self):
        """Test getting servers by ip6 with admin_api enabled and
        admin context
        """
        server_uuid = uuids.fake

        def fake_get_all(compute_self, context, search_opts=None,
                         limit=None, marker=None, expected_attrs=None,
                         sort_keys=None, sort_dirs=None):
            self.assertIsNotNone(search_opts)
            self.assertIn('ip6', search_opts)
            self.assertEqual(search_opts['ip6'], 'ffff.*')
            return objects.InstanceList(
                objects=[fakes.stub_instance_obj(100, uuid=server_uuid)])

        self.stubs.Set(compute_api.API, 'get_all', fake_get_all)

        req = self.req('/fake/servers?ip6=ffff.*', use_admin_context=True)
        servers = self.controller.index(req)['servers']
        self.assertEqual(len(servers), 1)
        self.assertEqual(servers[0]['id'], server_uuid)

    def test_get_servers_allows_ip6_with_new_version(self):
        """Test getting servers by ip6 with new version requested
        and no admin context
        """
        server_uuid = uuids.fake

        def fake_get_all(compute_self, context, search_opts=None,
                         limit=None, marker=None, expected_attrs=None,
                         sort_keys=None, sort_dirs=None):
            self.assertIsNotNone(search_opts)
            self.assertIn('ip6', search_opts)
            self.assertEqual(search_opts['ip6'], 'ffff.*')
            return objects.InstanceList(
                objects=[fakes.stub_instance_obj(100, uuid=server_uuid)])

        self.stubs.Set(compute_api.API, 'get_all', fake_get_all)

        req = self.req('/fake/servers?ip6=ffff.*')
        req.api_version_request = api_version_request.APIVersionRequest('2.5')
        servers = self.controller.index(req)['servers']
        self.assertEqual(len(servers), 1)
        self.assertEqual(servers[0]['id'], server_uuid)

    def test_get_servers_admin_allows_access_ip_v4(self):
        """Test getting servers by access_ip_v4 with admin_api enabled
        and admin context
        """
        server_uuid = uuids.fake

        def fake_get_all(compute_self, context, search_opts=None,
                         limit=None, marker=None, expected_attrs=None,
                         sort_keys=None, sort_dirs=None):
            self.assertIsNotNone(search_opts)
            self.assertIn('access_ip_v4', search_opts)
            self.assertEqual(search_opts['access_ip_v4'], 'ffff.*')
            return objects.InstanceList(
                objects=[fakes.stub_instance_obj(100, uuid=server_uuid)])

        self.stubs.Set(compute_api.API, 'get_all', fake_get_all)

        req = self.req('/fake/servers?access_ip_v4=ffff.*',
                       use_admin_context=True)
        servers = self.controller.index(req)['servers']
        self.assertEqual(1, len(servers))
        self.assertEqual(server_uuid, servers[0]['id'])

    def test_get_servers_admin_allows_access_ip_v6(self):
        """Test getting servers by access_ip_v6 with admin_api enabled
        and admin context
        """
        server_uuid = uuids.fake

        def fake_get_all(compute_self, context, search_opts=None,
                         limit=None, marker=None, expected_attrs=None,
                         sort_keys=None, sort_dirs=None):
            self.assertIsNotNone(search_opts)
            self.assertIn('access_ip_v6', search_opts)
            self.assertEqual(search_opts['access_ip_v6'], 'ffff.*')
            return objects.InstanceList(
                objects=[fakes.stub_instance_obj(100, uuid=server_uuid)])

        self.stubs.Set(compute_api.API, 'get_all', fake_get_all)

        req = self.req('/fake/servers?access_ip_v6=ffff.*',
                       use_admin_context=True)
        servers = self.controller.index(req)['servers']
        self.assertEqual(1, len(servers))
        self.assertEqual(server_uuid, servers[0]['id'])

    def test_get_all_server_details(self):
        expected_flavor = {
            "id": "2",
            "links": [
                {
                    "rel": "bookmark",
                    "href": 'http://localhost/fake/flavors/2',
                },
            ],
        }
        expected_image = {
            "id": "10",
            "links": [
                {
                    "rel": "bookmark",
                    "href": 'http://localhost/fake/images/10',
                },
            ],
        }
        req = self.req('/fake/servers/detail')
        res_dict = self.controller.detail(req)

        for i, s in enumerate(res_dict['servers']):
            self.assertEqual(s['id'], fakes.get_fake_uuid(i))
            self.assertEqual(s['hostId'], '')
            self.assertEqual(s['name'], 'server%d' % (i + 1))
            self.assertEqual(s['image'], expected_image)
            self.assertEqual(s['flavor'], expected_flavor)
            self.assertEqual(s['status'], 'ACTIVE')
            self.assertEqual(s['metadata']['seq'], str(i + 1))
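    # NOTE: 'hostId' in the API response is an opaque hash derived from
    # the instance's host (and project) rather than the raw hostname, so
    # instances on the same host share a hostId without the host itself
    # being exposed. The test below only relies on hostIds comparing
    # equal or unequal, not on the hashing scheme.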
""" def return_servers_with_host(*args, **kwargs): return objects.InstanceList( objects=[fakes.stub_instance_obj(None, id=i + 1, user_id='fake', project_id='fake', host=i % 2, uuid=fakes.get_fake_uuid(i)) for i in range(5)]) self.stubs.Set(compute_api.API, 'get_all', return_servers_with_host) req = self.req('/fake/servers/detail') res_dict = self.controller.detail(req) server_list = res_dict['servers'] host_ids = [server_list[0]['hostId'], server_list[1]['hostId']] self.assertTrue(host_ids[0] and host_ids[1]) self.assertNotEqual(host_ids[0], host_ids[1]) for i, s in enumerate(server_list): self.assertEqual(s['id'], fakes.get_fake_uuid(i)) self.assertEqual(s['hostId'], host_ids[i % 2]) self.assertEqual(s['name'], 'server%d' % (i + 1)) def test_get_servers_joins_services(self): def fake_get_all(compute_self, context, search_opts=None, limit=None, marker=None, expected_attrs=None, sort_keys=None, sort_dirs=None): self.assertIn('services', expected_attrs) return objects.InstanceList() self.stubs.Set(compute_api.API, 'get_all', fake_get_all) req = self.req('/fake/servers/detail', use_admin_context=True) self.assertIn('servers', self.controller.detail(req)) class ServersControllerTestV29(ServersControllerTest): wsgi_api_version = '2.9' def _get_server_data_dict(self, uuid, image_bookmark, flavor_bookmark, status="ACTIVE", progress=100): server_dict = super(ServersControllerTestV29, self)._get_server_data_dict(uuid, image_bookmark, flavor_bookmark, status, progress) server_dict['server']['locked'] = False return server_dict @mock.patch.object(compute_api.API, 'get') def _test_get_server_with_lock(self, locked_by, get_mock): image_bookmark = "http://localhost/fake/images/10" flavor_bookmark = "http://localhost/fake/flavors/2" uuid = FAKE_UUID get_mock.side_effect = fakes.fake_compute_get(id=2, locked_by=locked_by, uuid=uuid) req = self.req('/fake/servers/%s' % uuid) res_dict = self.controller.show(req, uuid) expected_server = self._get_server_data_dict(uuid, image_bookmark, flavor_bookmark, progress=0) expected_server['server']['locked'] = True if locked_by else False self.assertThat(res_dict, matchers.DictMatches(expected_server)) return res_dict def test_get_server_with_locked_by_admin(self): res_dict = self._test_get_server_with_lock('admin') self.assertTrue(res_dict['server']['locked']) def test_get_server_with_locked_by_owner(self): res_dict = self._test_get_server_with_lock('owner') self.assertTrue(res_dict['server']['locked']) def test_get_server_not_locked(self): res_dict = self._test_get_server_with_lock(None) self.assertFalse(res_dict['server']['locked']) @mock.patch.object(compute_api.API, 'get_all') def _test_list_server_detail_with_lock(self, s1_locked, s2_locked, get_all_mock): get_all_mock.return_value = fake_instance_get_all_with_locked( context, [s1_locked, s2_locked]) req = self.req('/fake/servers/detail') servers_list = self.controller.detail(req) # Check that each returned server has the same 'locked' value # and 'id' as they were created. 
class ServersControllerTestV29(ServersControllerTest):
    wsgi_api_version = '2.9'

    def _get_server_data_dict(self, uuid, image_bookmark, flavor_bookmark,
                              status="ACTIVE", progress=100):
        server_dict = super(ServersControllerTestV29,
                            self)._get_server_data_dict(uuid,
                                                        image_bookmark,
                                                        flavor_bookmark,
                                                        status, progress)
        server_dict['server']['locked'] = False
        return server_dict

    @mock.patch.object(compute_api.API, 'get')
    def _test_get_server_with_lock(self, locked_by, get_mock):
        image_bookmark = "http://localhost/fake/images/10"
        flavor_bookmark = "http://localhost/fake/flavors/2"
        uuid = FAKE_UUID
        get_mock.side_effect = fakes.fake_compute_get(id=2,
                                                      locked_by=locked_by,
                                                      uuid=uuid)

        req = self.req('/fake/servers/%s' % uuid)
        res_dict = self.controller.show(req, uuid)

        expected_server = self._get_server_data_dict(uuid,
                                                     image_bookmark,
                                                     flavor_bookmark,
                                                     progress=0)
        expected_server['server']['locked'] = True if locked_by else False
        self.assertThat(res_dict, matchers.DictMatches(expected_server))
        return res_dict

    def test_get_server_with_locked_by_admin(self):
        res_dict = self._test_get_server_with_lock('admin')
        self.assertTrue(res_dict['server']['locked'])

    def test_get_server_with_locked_by_owner(self):
        res_dict = self._test_get_server_with_lock('owner')
        self.assertTrue(res_dict['server']['locked'])

    def test_get_server_not_locked(self):
        res_dict = self._test_get_server_with_lock(None)
        self.assertFalse(res_dict['server']['locked'])

    @mock.patch.object(compute_api.API, 'get_all')
    def _test_list_server_detail_with_lock(self, s1_locked, s2_locked,
                                           get_all_mock):
        get_all_mock.return_value = fake_instance_get_all_with_locked(
            context, [s1_locked, s2_locked])

        req = self.req('/fake/servers/detail')
        servers_list = self.controller.detail(req)
        # Check that each returned server has the same 'locked' value
        # and 'id' as they were created.
        for locked in [s1_locked, s2_locked]:
            server = next(server for server in servers_list['servers']
                          if (server['id'] == fakes.get_fake_uuid(locked)))
            expected = False if locked == 'not_locked' else True
            self.assertEqual(expected, server['locked'])

    def test_list_server_detail_with_locked_s1_admin_s2_owner(self):
        self._test_list_server_detail_with_lock('admin', 'owner')

    def test_list_server_detail_with_locked_s1_owner_s2_admin(self):
        self._test_list_server_detail_with_lock('owner', 'admin')

    def test_list_server_detail_with_locked_s1_admin_s2_admin(self):
        self._test_list_server_detail_with_lock('admin', 'admin')

    def test_list_server_detail_with_locked_s1_admin_s2_not_locked(self):
        self._test_list_server_detail_with_lock('admin', 'not_locked')

    def test_list_server_detail_with_locked_s1_s2_not_locked(self):
        self._test_list_server_detail_with_lock('not_locked', 'not_locked')

    @mock.patch.object(compute_api.API, 'get_all')
    def test_get_servers_remove_non_search_options(self, get_all_mock):
        req = fakes.HTTPRequestV21.blank('/servers'
                                         '?sort_key=uuid&sort_dir=asc'
                                         '&sort_key=user_id&sort_dir=desc'
                                         '&limit=1&marker=123',
                                         use_admin_context=True)
        self.controller.index(req)
        kwargs = get_all_mock.call_args[1]
        search_opts = kwargs['search_opts']
        for key in ('sort_key', 'sort_dir', 'limit', 'marker'):
            self.assertNotIn(key, search_opts)

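# NOTE: Microversion 2.19 exposes the instance description in server
# show/detail responses; it is None when no description has been set.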
class ServersControllerTestV219(ServersControllerTest):
    wsgi_api_version = '2.19'

    def _get_server_data_dict(self, uuid, image_bookmark, flavor_bookmark,
                              status="ACTIVE", progress=100,
                              description=None):
        server_dict = super(ServersControllerTestV219,
                            self)._get_server_data_dict(uuid,
                                                        image_bookmark,
                                                        flavor_bookmark,
                                                        status, progress)
        server_dict['server']['locked'] = False
        server_dict['server']['description'] = description
        return server_dict

    @mock.patch.object(compute_api.API, 'get')
    def _test_get_server_with_description(self, description, get_mock):
        image_bookmark = "http://localhost/fake/images/10"
        flavor_bookmark = "http://localhost/fake/flavors/2"
        uuid = FAKE_UUID
        get_mock.side_effect = fakes.fake_compute_get(
            id=2, display_description=description, uuid=uuid)

        req = self.req('/fake/servers/%s' % uuid)
        res_dict = self.controller.show(req, uuid)

        expected_server = self._get_server_data_dict(uuid,
                                                     image_bookmark,
                                                     flavor_bookmark,
                                                     progress=0,
                                                     description=description)
        self.assertThat(res_dict, matchers.DictMatches(expected_server))
        return res_dict

    @mock.patch.object(compute_api.API, 'get_all')
    def _test_list_server_detail_with_descriptions(self, s1_desc, s2_desc,
                                                   get_all_mock):
        get_all_mock.return_value = fake_instance_get_all_with_description(
            context, [s1_desc, s2_desc])

        req = self.req('/fake/servers/detail')
        servers_list = self.controller.detail(req)
        # Check that each returned server has the same 'description' value
        # and 'id' as they were created.
        for desc in [s1_desc, s2_desc]:
            server = next(server for server in servers_list['servers']
                          if (server['id'] == fakes.get_fake_uuid(desc)))
            expected = desc
            self.assertEqual(expected, server['description'])

    def test_get_server_with_description(self):
        self._test_get_server_with_description('test desc')

    def test_list_server_detail_with_descriptions(self):
        self._test_list_server_detail_with_descriptions('desc1', 'desc2')


class ServersControllerTestV226(ControllerTest):
    wsgi_api_version = '2.26'

    @mock.patch.object(compute_api.API, 'get')
    def test_get_server_with_tags_by_id(self, mock_get):
        req = fakes.HTTPRequest.blank('/fake/servers/%s' % FAKE_UUID,
                                      version=self.wsgi_api_version)
        ctxt = req.environ['nova.context']
        tags = ['tag1', 'tag2']

        def fake_get(_self, *args, **kwargs):
            self.assertIn('tags', kwargs['expected_attrs'])
            fake_server = fakes.stub_instance_obj(
                ctxt, id=2, vm_state=vm_states.ACTIVE, progress=100)

            tag_list = objects.TagList(objects=[
                objects.Tag(resource_id=FAKE_UUID, tag=tag)
                for tag in tags])

            fake_server.tags = tag_list
            return fake_server

        mock_get.side_effect = fake_get

        res_dict = self.controller.show(req, FAKE_UUID)

        self.assertIn('tags', res_dict['server'])
        self.assertEqual(res_dict['server']['tags'], tags)

    @mock.patch.object(compute_api.API, 'get_all')
    def _test_get_servers_allows_tag_filters(self, filter_name,
                                             mock_get_all):
        server_uuid = uuids.fake
        req = fakes.HTTPRequest.blank('/fake/servers?%s=t1,t2' % filter_name,
                                      version=self.wsgi_api_version)
        ctxt = req.environ['nova.context']

        def fake_get_all(*a, **kw):
            self.assertIsNotNone(kw['search_opts'])
            self.assertIn(filter_name, kw['search_opts'])
            self.assertEqual(kw['search_opts'][filter_name], ['t1', 't2'])
            return objects.InstanceList(
                objects=[fakes.stub_instance_obj(ctxt, uuid=server_uuid)])

        mock_get_all.side_effect = fake_get_all

        servers = self.controller.index(req)['servers']

        self.assertEqual(len(servers), 1)
        self.assertEqual(servers[0]['id'], server_uuid)

    def test_get_servers_allows_tags_filter(self):
        self._test_get_servers_allows_tag_filters('tags')

    def test_get_servers_allows_tags_any_filter(self):
        self._test_get_servers_allows_tag_filters('tags-any')

    def test_get_servers_allows_not_tags_filter(self):
        self._test_get_servers_allows_tag_filters('not-tags')

    def test_get_servers_allows_not_tags_any_filter(self):
        self._test_get_servers_allows_tag_filters('not-tags-any')


class ServerControllerTestV238(ControllerTest):
    wsgi_api_version = '2.38'

    def _test_invalid_status(self, is_admin):
        req = fakes.HTTPRequest.blank('/fake/servers/detail?status=invalid',
                                      version=self.wsgi_api_version,
                                      use_admin_context=is_admin)
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.controller.detail, req)

    def test_list_servers_detail_invalid_status_for_admin(self):
        self._test_invalid_status(True)

    def test_list_servers_detail_invalid_status_for_non_admin(self):
        self._test_invalid_status(False)

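# NOTE: Starting with microversion 2.47 the 'flavor' entry embedded in a
# server response carries the flavor details themselves rather than an
# id/links pair. Sketch of the shape, taken from the expected_flavor used
# in the tests below:
#
#   "flavor": {
#       "original_name": "m1.small",
#       "vcpus": 1,
#       "ram": 2048,
#       "disk": 20,
#       "ephemeral": 0,
#       "swap": 0,
#       "extra_specs": {},   # present only when policy allows it
#   }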
""" wsgi_api_version = '2.47' @mock.patch.object(objects.TagList, 'get_by_resource_id') def test_get_all_server_details(self, mock_get_by_resource_id): # Fake out tags on the instances mock_get_by_resource_id.return_value = objects.TagList() expected_flavor = { 'disk': 20, 'ephemeral': 0, 'extra_specs': {}, 'original_name': u'm1.small', 'ram': 2048, 'swap': 0, 'vcpus': 1} req = fakes.HTTPRequest.blank('/fake/servers/detail', version=self.wsgi_api_version) hits = [] real_auth = policy.authorize # Wrapper for authorize to count the number of times # we authorize for extra-specs def fake_auth(context, action, target): if 'extra-specs' in action: hits.append(1) return real_auth(context, action, target) with mock.patch('nova.policy.authorize') as mock_auth: mock_auth.side_effect = fake_auth res_dict = self.controller.detail(req) # We should have found more than one servers, but only hit the # policy check once self.assertGreater(len(res_dict['servers']), 1) self.assertEqual(1, len(hits)) for i, s in enumerate(res_dict['servers']): self.assertEqual(s['flavor'], expected_flavor) @mock.patch.object(objects.TagList, 'get_by_resource_id') def test_get_all_server_details_no_extra_spec(self, mock_get_by_resource_id): # Fake out tags on the instances mock_get_by_resource_id.return_value = objects.TagList() # Set the policy so we don't have permission to index # flavor extra-specs but are able to get server details. servers_rule = 'os_compute_api:servers:detail' extraspec_rule = 'os_compute_api:os-flavor-extra-specs:index' self.policy.set_rules({ extraspec_rule: 'rule:admin_api', servers_rule: '@'}) expected_flavor = { 'disk': 20, 'ephemeral': 0, 'original_name': u'm1.small', 'ram': 2048, 'swap': 0, 'vcpus': 1} req = fakes.HTTPRequest.blank('/fake/servers/detail', version=self.wsgi_api_version) res_dict = self.controller.detail(req) for i, s in enumerate(res_dict['servers']): self.assertEqual(s['flavor'], expected_flavor) class ServersControllerDeleteTest(ControllerTest): def setUp(self): super(ServersControllerDeleteTest, self).setUp() self.server_delete_called = False def fake_delete(api, context, instance): if instance.uuid == uuids.non_existent_uuid: raise exception.InstanceNotFound(instance_id=instance.uuid) self.server_delete_called = True self.stubs.Set(compute_api.API, 'delete', fake_delete) def _create_delete_request(self, uuid): fakes.stub_out_instance_quota(self, 0, 10) req = fakes.HTTPRequestV21.blank('/fake/servers/%s' % uuid) req.method = 'DELETE' fake_get = fakes.fake_compute_get( uuid=uuid, vm_state=vm_states.ACTIVE, project_id=req.environ['nova.context'].project_id, user_id=req.environ['nova.context'].user_id) self.stub_out('nova.compute.api.API.get', lambda api, *a, **k: fake_get(*a, **k)) return req def _delete_server_instance(self, uuid=FAKE_UUID): req = self._create_delete_request(uuid) self.controller.delete(req, uuid) def test_delete_server_instance(self): self._delete_server_instance() self.assertTrue(self.server_delete_called) def test_delete_server_instance_not_found(self): self.assertRaises(webob.exc.HTTPNotFound, self._delete_server_instance, uuid=uuids.non_existent_uuid) def test_delete_server_instance_while_building(self): req = self._create_delete_request(FAKE_UUID) self.controller.delete(req, FAKE_UUID) self.assertTrue(self.server_delete_called) def test_delete_locked_server(self): req = self._create_delete_request(FAKE_UUID) self.stubs.Set(compute_api.API, 'soft_delete', fakes.fake_actions_to_locked_server) self.stubs.Set(compute_api.API, 'delete', 
class ServersControllerDeleteTest(ControllerTest):

    def setUp(self):
        super(ServersControllerDeleteTest, self).setUp()
        self.server_delete_called = False

        def fake_delete(api, context, instance):
            if instance.uuid == uuids.non_existent_uuid:
                raise exception.InstanceNotFound(instance_id=instance.uuid)
            self.server_delete_called = True

        self.stubs.Set(compute_api.API, 'delete', fake_delete)

    def _create_delete_request(self, uuid):
        fakes.stub_out_instance_quota(self, 0, 10)
        req = fakes.HTTPRequestV21.blank('/fake/servers/%s' % uuid)
        req.method = 'DELETE'

        fake_get = fakes.fake_compute_get(
            uuid=uuid,
            vm_state=vm_states.ACTIVE,
            project_id=req.environ['nova.context'].project_id,
            user_id=req.environ['nova.context'].user_id)
        self.stub_out('nova.compute.api.API.get',
                      lambda api, *a, **k: fake_get(*a, **k))

        return req

    def _delete_server_instance(self, uuid=FAKE_UUID):
        req = self._create_delete_request(uuid)
        self.controller.delete(req, uuid)

    def test_delete_server_instance(self):
        self._delete_server_instance()
        self.assertTrue(self.server_delete_called)

    def test_delete_server_instance_not_found(self):
        self.assertRaises(webob.exc.HTTPNotFound,
                          self._delete_server_instance,
                          uuid=uuids.non_existent_uuid)

    def test_delete_server_instance_while_building(self):
        req = self._create_delete_request(FAKE_UUID)
        self.controller.delete(req, FAKE_UUID)

        self.assertTrue(self.server_delete_called)

    def test_delete_locked_server(self):
        req = self._create_delete_request(FAKE_UUID)
        self.stubs.Set(compute_api.API, 'soft_delete',
                       fakes.fake_actions_to_locked_server)
        self.stubs.Set(compute_api.API, 'delete',
                       fakes.fake_actions_to_locked_server)

        self.assertRaises(webob.exc.HTTPConflict, self.controller.delete,
                          req, FAKE_UUID)

    def test_delete_server_instance_while_resize(self):
        req = self._create_delete_request(FAKE_UUID)
        fake_get = fakes.fake_compute_get(
            vm_state=vm_states.ACTIVE,
            task_state=task_states.RESIZE_PREP,
            project_id=req.environ['nova.context'].project_id,
            user_id=req.environ['nova.context'].user_id)
        self.stubs.Set(compute_api.API, 'get',
                       lambda api, *a, **k: fake_get(*a, **k))

        self.controller.delete(req, FAKE_UUID)

    def test_delete_server_instance_if_not_launched(self):
        self.flags(reclaim_instance_interval=3600)
        req = fakes.HTTPRequestV21.blank('/fake/servers/%s' % FAKE_UUID)
        req.method = 'DELETE'

        self.server_delete_called = False

        fake_get = fakes.fake_compute_get(
            launched_at=None,
            project_id=req.environ['nova.context'].project_id,
            user_id=req.environ['nova.context'].user_id)
        self.stubs.Set(compute_api.API, 'get',
                       lambda api, *a, **k: fake_get(*a, **k))

        def instance_destroy_mock(*args, **kwargs):
            self.server_delete_called = True
            deleted_at = timeutils.utcnow()
            return fake_instance.fake_db_instance(deleted_at=deleted_at)
        self.stub_out('nova.db.instance_destroy', instance_destroy_mock)

        self.controller.delete(req, FAKE_UUID)
        # delete() should be called for an instance which has never been
        # active, even if reclaim_instance_interval has been set.
        self.assertTrue(self.server_delete_called)

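# NOTE: The rebuild tests call ServersController._action_rebuild
# directly, covering schema validation (the image ref must be a bare
# UUID; name and metadata have length limits) and the mapping of image
# problems (min_ram/min_disk too large for the instance, image too
# large, image deleted) to 400 responses.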
class ServersControllerRebuildInstanceTest(ControllerTest):

    image_uuid = '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6'

    def setUp(self):
        super(ServersControllerRebuildInstanceTest, self).setUp()
        self.req = fakes.HTTPRequest.blank('/fake/servers/a/action')
        self.req.method = 'POST'
        self.req.headers["content-type"] = "application/json"
        self.req_user_id = self.req.environ['nova.context'].user_id
        self.req_project_id = self.req.environ['nova.context'].project_id
        self.useFixture(nova_fixtures.SingleCellSimple())

        def fake_get(ctrl, ctxt, uuid):
            if uuid == 'test_inst':
                raise webob.exc.HTTPNotFound(explanation='fakeout')
            return fakes.stub_instance_obj(None,
                                           vm_state=vm_states.ACTIVE,
                                           project_id=self.req_project_id,
                                           user_id=self.req_user_id)

        self.useFixture(
            fixtures.MonkeyPatch('nova.api.openstack.compute.servers.'
                                 'ServersController._get_instance',
                                 fake_get))
        fake_get = fakes.fake_compute_get(vm_state=vm_states.ACTIVE,
                                          project_id=self.req_project_id,
                                          user_id=self.req_user_id)
        self.stubs.Set(compute_api.API, 'get',
                       lambda api, *a, **k: fake_get(*a, **k))

        self.body = {
            'rebuild': {
                'name': 'new_name',
                'imageRef': self.image_uuid,
                'metadata': {
                    'open': 'stack',
                },
            },
        }

    def test_rebuild_server_with_image_not_uuid(self):
        self.body['rebuild']['imageRef'] = 'not-uuid'
        self.assertRaises(exception.ValidationError,
                          self.controller._action_rebuild,
                          self.req, FAKE_UUID,
                          body=self.body)

    def test_rebuild_server_with_image_as_full_url(self):
        image_href = ('http://localhost/v2/fake/images/'
                      '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6')
        self.body['rebuild']['imageRef'] = image_href
        self.assertRaises(exception.ValidationError,
                          self.controller._action_rebuild,
                          self.req, FAKE_UUID,
                          body=self.body)

    def test_rebuild_server_with_image_as_empty_string(self):
        self.body['rebuild']['imageRef'] = ''
        self.assertRaises(exception.ValidationError,
                          self.controller._action_rebuild,
                          self.req, FAKE_UUID,
                          body=self.body)

    def test_rebuild_instance_name_with_spaces_in_the_middle(self):
        self.body['rebuild']['name'] = 'abc def'
        self.req.body = jsonutils.dump_as_bytes(self.body)
        self.controller._action_rebuild(self.req, FAKE_UUID, body=self.body)

    def test_rebuild_instance_name_with_leading_trailing_spaces(self):
        self.body['rebuild']['name'] = ' abc def '
        self.req.body = jsonutils.dump_as_bytes(self.body)
        self.assertRaises(exception.ValidationError,
                          self.controller._action_rebuild,
                          self.req, FAKE_UUID, body=self.body)

    def test_rebuild_instance_name_with_leading_trailing_spaces_compat_mode(
            self):
        self.body['rebuild']['name'] = ' abc def '
        self.req.body = jsonutils.dump_as_bytes(self.body)
        self.req.set_legacy_v2()

        def fake_rebuild(*args, **kwargs):
            self.assertEqual('abc def', kwargs['display_name'])

        with mock.patch.object(compute_api.API, 'rebuild') as mock_rebuild:
            mock_rebuild.side_effect = fake_rebuild
            self.controller._action_rebuild(self.req, FAKE_UUID,
                                            body=self.body)

    def test_rebuild_instance_with_blank_metadata_key(self):
        self.body['rebuild']['metadata'][''] = 'world'
        self.req.body = jsonutils.dump_as_bytes(self.body)
        self.assertRaises(exception.ValidationError,
                          self.controller._action_rebuild,
                          self.req, FAKE_UUID,
                          body=self.body)

    def test_rebuild_instance_with_metadata_key_too_long(self):
        self.body['rebuild']['metadata'][('a' * 260)] = 'world'
        self.req.body = jsonutils.dump_as_bytes(self.body)
        self.assertRaises(exception.ValidationError,
                          self.controller._action_rebuild,
                          self.req, FAKE_UUID,
                          body=self.body)

    def test_rebuild_instance_with_metadata_value_too_long(self):
        self.body['rebuild']['metadata']['key1'] = ('a' * 260)
        self.req.body = jsonutils.dump_as_bytes(self.body)
        self.assertRaises(exception.ValidationError,
                          self.controller._action_rebuild,
                          self.req, FAKE_UUID,
                          body=self.body)

    def test_rebuild_instance_with_metadata_value_not_string(self):
        self.body['rebuild']['metadata']['key1'] = 1
        self.req.body = jsonutils.dump_as_bytes(self.body)
        self.assertRaises(exception.ValidationError,
                          self.controller._action_rebuild,
                          self.req, FAKE_UUID,
                          body=self.body)

    def test_rebuild_instance_fails_when_min_ram_too_small(self):
        # make min_ram larger than our instance ram size
        def fake_get_image(self, context, image_href, **kwargs):
            return dict(id='76fa36fc-c930-4bf3-8c8a-ea2a2420deb6',
                        name='public image', is_public=True,
                        status='active', properties={'key1': 'value1'},
                        min_ram="4096", min_disk="10")

        self.stubs.Set(fake._FakeImageService, 'show', fake_get_image)
        self.req.body = jsonutils.dump_as_bytes(self.body)
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.controller._action_rebuild,
                          self.req, FAKE_UUID,
                          body=self.body)

    def test_rebuild_instance_fails_when_min_disk_too_small(self):
        # make min_disk larger than our instance disk size
        def fake_get_image(self, context, image_href, **kwargs):
            return dict(id='76fa36fc-c930-4bf3-8c8a-ea2a2420deb6',
                        name='public image', is_public=True,
                        status='active', properties={'key1': 'value1'},
                        min_ram="128", min_disk="100000")

        self.stubs.Set(fake._FakeImageService, 'show', fake_get_image)
        self.req.body = jsonutils.dump_as_bytes(self.body)
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.controller._action_rebuild, self.req,
                          FAKE_UUID, body=self.body)

    def test_rebuild_instance_image_too_large(self):
        # make image size larger than our instance disk size
        size = str(1000 * (1024 ** 3))

        def fake_get_image(self, context, image_href, **kwargs):
            return dict(id='76fa36fc-c930-4bf3-8c8a-ea2a2420deb6',
                        name='public image', is_public=True,
                        status='active', size=size)

        self.stubs.Set(fake._FakeImageService, 'show', fake_get_image)
        self.req.body = jsonutils.dump_as_bytes(self.body)
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.controller._action_rebuild,
                          self.req, FAKE_UUID, body=self.body)

    def test_rebuild_instance_name_all_blank(self):
        def fake_get_image(self, context, image_href, **kwargs):
            return dict(id='76fa36fc-c930-4bf3-8c8a-ea2a2420deb6',
                        name='public image', is_public=True,
                        status='active')

        self.stubs.Set(fake._FakeImageService, 'show', fake_get_image)
        self.body['rebuild']['name'] = ' '
        self.req.body = jsonutils.dump_as_bytes(self.body)
        self.assertRaises(exception.ValidationError,
                          self.controller._action_rebuild,
                          self.req, FAKE_UUID, body=self.body)

    def test_rebuild_instance_with_deleted_image(self):
        def fake_get_image(self, context, image_href, **kwargs):
            return dict(id='76fa36fc-c930-4bf3-8c8a-ea2a2420deb6',
                        name='public image', is_public=True,
                        status='DELETED')

        self.stubs.Set(fake._FakeImageService, 'show', fake_get_image)
        self.req.body = jsonutils.dump_as_bytes(self.body)
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.controller._action_rebuild,
                          self.req, FAKE_UUID, body=self.body)

    def test_rebuild_instance_onset_file_limit_over_quota(self):
        def fake_get_image(self, context, image_href, **kwargs):
            return dict(id='76fa36fc-c930-4bf3-8c8a-ea2a2420deb6',
                        name='public image', is_public=True,
                        status='active')

        with test.nested(
            mock.patch.object(fake._FakeImageService, 'show',
                              side_effect=fake_get_image),
            mock.patch.object(self.controller.compute_api, 'rebuild',
                              side_effect=exception.OnsetFileLimitExceeded)
        ) as (
            show_mock, rebuild_mock
        ):
            self.req.body = jsonutils.dump_as_bytes(self.body)
            self.assertRaises(webob.exc.HTTPForbidden,
                              self.controller._action_rebuild,
                              self.req, FAKE_UUID, body=self.body)

    def test_rebuild_bad_personality(self):
        # Personality files have been deprecated as of v2.57
        self.req.api_version_request = \
            api_version_request.APIVersionRequest('2.56')

        body = {
            "rebuild": {
                "imageRef": self.image_uuid,
                "personality": [{
                    "path": "/path/to/file",
                    "contents": "INVALID b64",
                }]
            },
        }

        self.assertRaises(exception.ValidationError,
                          self.controller._action_rebuild,
                          self.req, FAKE_UUID, body=body)

    def test_rebuild_personality(self):
        # Personality files have been deprecated as of v2.57
        self.req.api_version_request = \
            api_version_request.APIVersionRequest('2.56')

        body = {
            "rebuild": {
                "imageRef": self.image_uuid,
                "personality": [{
                    "path": "/path/to/file",
                    "contents": base64.encode_as_text("Test String"),
                }]
            },
        }

        body = self.controller._action_rebuild(self.req, FAKE_UUID,
                                               body=body).obj

        self.assertNotIn('personality', body['server'])

    @mock.patch.object(compute_api.API, 'start')
    def test_start(self, mock_start):
        req = fakes.HTTPRequestV21.blank('/fake/servers/%s/action' %
                                         FAKE_UUID)
        body = dict(start="")
        self.controller._start_server(req, FAKE_UUID, body)
        mock_start.assert_called_once_with(mock.ANY, mock.ANY)

    @mock.patch.object(compute_api.API, 'start', fake_start_stop_not_ready)
    def test_start_not_ready(self):
        req = fakes.HTTPRequestV21.blank('/fake/servers/%s/action' %
                                         FAKE_UUID)
        body = dict(start="")
        self.assertRaises(webob.exc.HTTPConflict,
                          self.controller._start_server, req, FAKE_UUID, body)

    @mock.patch.object(
        compute_api.API, 'start', fakes.fake_actions_to_locked_server)
    def test_start_locked_server(self):
        req = fakes.HTTPRequestV21.blank('/fake/servers/%s/action' %
                                         FAKE_UUID)
        body = dict(start="")
        self.assertRaises(webob.exc.HTTPConflict,
                          self.controller._start_server, req, FAKE_UUID, body)

    @mock.patch.object(compute_api.API, 'start', fake_start_stop_invalid_state)
    def test_start_invalid(self):
        req = fakes.HTTPRequestV21.blank('/fake/servers/%s/action' %
                                         FAKE_UUID)
        body = dict(start="")
        self.assertRaises(webob.exc.HTTPConflict,
                          self.controller._start_server, req, FAKE_UUID, body)

    @mock.patch.object(compute_api.API, 'stop')
    def test_stop(self, mock_stop):
        req = fakes.HTTPRequestV21.blank('/fake/servers/%s/action' %
                                         FAKE_UUID)
        body = dict(stop="")
        self.controller._stop_server(req, FAKE_UUID, body)
        mock_stop.assert_called_once_with(mock.ANY, mock.ANY)

    @mock.patch.object(compute_api.API, 'stop', fake_start_stop_not_ready)
    def test_stop_not_ready(self):
        req = fakes.HTTPRequestV21.blank('/fake/servers/%s/action' %
                                         FAKE_UUID)
        body = dict(stop="")
        self.assertRaises(webob.exc.HTTPConflict,
                          self.controller._stop_server, req, FAKE_UUID, body)

    @mock.patch.object(
        compute_api.API, 'stop', fakes.fake_actions_to_locked_server)
    def test_stop_locked_server(self):
        req = fakes.HTTPRequestV21.blank('/fake/servers/%s/action' %
                                         FAKE_UUID)
        body = dict(stop="")
        self.assertRaises(webob.exc.HTTPConflict,
                          self.controller._stop_server, req, FAKE_UUID, body)

    @mock.patch.object(compute_api.API, 'stop', fake_start_stop_invalid_state)
    def test_stop_invalid_state(self):
        req = fakes.HTTPRequestV21.blank('/fake/servers/%s/action' %
                                         FAKE_UUID)
        body = dict(start="")
        self.assertRaises(webob.exc.HTTPConflict,
                          self.controller._stop_server, req, FAKE_UUID, body)

    @mock.patch(
        'nova.db.instance_get_by_uuid', fake_instance_get_by_uuid_not_found)
    def test_start_with_bogus_id(self):
        req = fakes.HTTPRequestV21.blank('/fake/servers/test_inst/action')
        body = dict(start="")
        self.assertRaises(webob.exc.HTTPNotFound,
                          self.controller._start_server, req, 'test_inst',
                          body)

    @mock.patch(
        'nova.db.instance_get_by_uuid', fake_instance_get_by_uuid_not_found)
    def test_stop_with_bogus_id(self):
        req = fakes.HTTPRequestV21.blank('/fake/servers/test_inst/action')
        body = dict(stop="")
        self.assertRaises(webob.exc.HTTPNotFound,
                          self.controller._stop_server, req, 'test_inst',
                          body)

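# NOTE: Microversion 2.54 adds an optional 'key_name' to the rebuild
# request body so the keypair can be replaced (or unset with None)
# during rebuild; earlier microversions reject the field.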
class ServersControllerRebuildTestV254(ServersControllerRebuildInstanceTest):

    def setUp(self):
        super(ServersControllerRebuildTestV254, self).setUp()
        fakes.stub_out_key_pair_funcs(self)
        self.req.api_version_request = \
            api_version_request.APIVersionRequest('2.54')

    def _test_set_key_name_rebuild(self, set_key_name=True):
        key_name = "key"
        fake_get = fakes.fake_compute_get(vm_state=vm_states.ACTIVE,
                                          key_name=key_name,
                                          project_id=self.req_project_id,
                                          user_id=self.req_user_id)
        with mock.patch.object(compute_api.API, 'get',
                               side_effect=fake_get):
            if set_key_name:
                self.body['rebuild']['key_name'] = key_name
            self.req.body = jsonutils.dump_as_bytes(self.body)
            server = self.controller._action_rebuild(
                self.req, FAKE_UUID,
                body=self.body).obj['server']
            self.assertEqual(server['id'], FAKE_UUID)
            self.assertEqual(server['key_name'], key_name)

    def test_rebuild_accepted_with_keypair_name(self):
        self._test_set_key_name_rebuild()

    def test_rebuild_key_not_changed(self):
        self._test_set_key_name_rebuild(set_key_name=False)

    def test_rebuild_invalid_microversion_253(self):
        self.req.api_version_request = \
            api_version_request.APIVersionRequest('2.53')
        body = {
            "rebuild": {
                "imageRef": self.image_uuid,
                "key_name": "key"
            },
        }
        excpt = self.assertRaises(exception.ValidationError,
                                  self.controller._action_rebuild,
                                  self.req, FAKE_UUID, body=body)
        self.assertIn('key_name', six.text_type(excpt))

    def test_rebuild_with_not_existed_keypair_name(self):
        body = {
            "rebuild": {
                "imageRef": self.image_uuid,
                "key_name": "nonexistentkey"
            },
        }
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.controller._action_rebuild,
                          self.req, FAKE_UUID, body=body)

    def test_rebuild_user_has_no_key_pair(self):
        def no_key_pair(context, user_id, name):
            raise exception.KeypairNotFound(user_id=user_id, name=name)
        self.stub_out('nova.db.key_pair_get', no_key_pair)
        fake_get = fakes.fake_compute_get(vm_state=vm_states.ACTIVE,
                                          key_name=None,
                                          project_id=self.req_project_id,
                                          user_id=self.req_user_id)
        with mock.patch.object(compute_api.API, 'get',
                               side_effect=fake_get):
            self.body['rebuild']['key_name'] = "a-key-name"
            self.assertRaises(webob.exc.HTTPBadRequest,
                              self.controller._action_rebuild,
                              self.req, FAKE_UUID, body=self.body)

    def test_rebuild_with_non_string_keypair_name(self):
        body = {
            "rebuild": {
                "imageRef": self.image_uuid,
                "key_name": 12345
            },
        }
        self.assertRaises(exception.ValidationError,
                          self.controller._action_rebuild,
                          self.req, FAKE_UUID, body=body)

    def test_rebuild_with_invalid_keypair_name(self):
        body = {
            "rebuild": {
                "imageRef": self.image_uuid,
                "key_name": "123\0d456"
            },
        }
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.controller._action_rebuild,
                          self.req, FAKE_UUID, body=body)

    def test_rebuild_with_empty_keypair_name(self):
        body = {
            "rebuild": {
                "imageRef": self.image_uuid,
                "key_name": ''
            },
        }
        self.assertRaises(exception.ValidationError,
                          self.controller._action_rebuild,
                          self.req, FAKE_UUID, body=body)

    def test_rebuild_with_none_keypair_name(self):
        key_name = None
        fake_get = fakes.fake_compute_get(vm_state=vm_states.ACTIVE,
                                          key_name=key_name,
                                          project_id=self.req_project_id,
                                          user_id=self.req_user_id)
        with mock.patch.object(compute_api.API, 'get',
                               side_effect=fake_get):
            with mock.patch.object(objects.KeyPair, 'get_by_name') as key_get:
                self.body['rebuild']['key_name'] = key_name
                self.req.body = jsonutils.dump_as_bytes(self.body)
                self.controller._action_rebuild(
                    self.req, FAKE_UUID, body=self.body)
                # NOTE: the API will call _get_server twice and the server
                # response will always be the same one, so we use
                # objects.KeyPair.get_by_name to verify the test.
                key_get.assert_not_called()

    def test_rebuild_with_too_large_keypair_name(self):
        body = {
            "rebuild": {
                "imageRef": self.image_uuid,
                "key_name": 256 * "k"
            },
        }
        self.assertRaises(exception.ValidationError,
                          self.controller._action_rebuild,
                          self.req, FAKE_UUID, body=body)

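# NOTE: Microversion 2.57 deprecates file injection: 'personality' is
# removed from the rebuild schema and an optional base64 'user_data'
# field (None to clear it) is accepted instead.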
class ServersControllerRebuildTestV257(ServersControllerRebuildTestV254):
    """Tests server rebuild at microversion 2.57 where user_data can be
    provided and personality files are no longer accepted.
    """

    def setUp(self):
        super(ServersControllerRebuildTestV257, self).setUp()
        self.req.api_version_request = \
            api_version_request.APIVersionRequest('2.57')

    def test_rebuild_personality(self):
        """Tests that trying to rebuild with personality files fails."""
        body = {
            "rebuild": {
                "imageRef": self.image_uuid,
                "personality": [{
                    "path": "/path/to/file",
                    "contents": base64.encode_as_text("Test String"),
                }]
            }
        }
        ex = self.assertRaises(exception.ValidationError,
                               self.controller._action_rebuild,
                               self.req, FAKE_UUID, body=body)
        self.assertIn('personality', six.text_type(ex))

    def test_rebuild_user_data_old_version(self):
        """Tests that trying to rebuild with user_data before 2.57 fails."""
        body = {
            "rebuild": {
                "imageRef": self.image_uuid,
                "user_data": "ZWNobyAiaGVsbG8gd29ybGQi"
            }
        }
        self.req.api_version_request = \
            api_version_request.APIVersionRequest('2.55')
        ex = self.assertRaises(exception.ValidationError,
                               self.controller._action_rebuild,
                               self.req, FAKE_UUID, body=body)
        self.assertIn('user_data', six.text_type(ex))

    def test_rebuild_user_data_malformed(self):
        """Tests that trying to rebuild with malformed user_data fails."""
        body = {
            "rebuild": {
                "imageRef": self.image_uuid,
                "user_data": b'invalid'
            }
        }
        ex = self.assertRaises(exception.ValidationError,
                               self.controller._action_rebuild,
                               self.req, FAKE_UUID, body=body)
        self.assertIn('user_data', six.text_type(ex))

    def test_rebuild_user_data_too_large(self):
        """Tests that passing user_data to rebuild that is too large fails."""
        body = {
            "rebuild": {
                "imageRef": self.image_uuid,
                "user_data": ('MQ==' * 16384)
            }
        }
        ex = self.assertRaises(exception.ValidationError,
                               self.controller._action_rebuild,
                               self.req, FAKE_UUID, body=body)
        self.assertIn('user_data', six.text_type(ex))

    @mock.patch.object(context.RequestContext, 'can')
    @mock.patch.object(compute_api.API, 'get')
    @mock.patch('nova.db.instance_update_and_get_original')
    def test_rebuild_reset_user_data(self, mock_update, mock_get,
                                     mock_policy):
        """Tests that passing user_data=None resets the user_data on the
        instance.
        """
        body = {
            "rebuild": {
                "imageRef": self.image_uuid,
                "user_data": None
            }
        }

        mock_get.return_value = fakes.stub_instance_obj(
            context.RequestContext(self.req_user_id, self.req_project_id),
            user_data='ZWNobyAiaGVsbG8gd29ybGQi')

        def fake_instance_update_and_get_original(
                ctxt, instance_uuid, values, **kwargs):
            # save() is called twice and the second one has system_metadata
            # in the updates, so we can ignore that one.
            if 'system_metadata' not in values:
                self.assertIn('user_data', values)
                self.assertIsNone(values['user_data'])
            return instance_update_and_get_original(
                ctxt, instance_uuid, values, **kwargs)
        mock_update.side_effect = fake_instance_update_and_get_original
        self.controller._action_rebuild(self.req, FAKE_UUID, body=body)
        self.assertEqual(2, mock_update.call_count)

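# NOTE: At microversion 2.19 the rebuild action also accepts an optional
# 'description'; passing None removes any existing description.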
class ServersControllerRebuildTestV219(ServersControllerRebuildInstanceTest):

    def setUp(self):
        super(ServersControllerRebuildTestV219, self).setUp()
        self.req.api_version_request = \
            api_version_request.APIVersionRequest('2.19')

    def _rebuild_server(self, set_desc, desc):
        fake_get = fakes.fake_compute_get(vm_state=vm_states.ACTIVE,
                                          display_description=desc,
                                          project_id=self.req_project_id,
                                          user_id=self.req_user_id)
        self.stubs.Set(compute_api.API, 'get',
                       lambda api, *a, **k: fake_get(*a, **k))

        if set_desc:
            self.body['rebuild']['description'] = desc
        self.req.body = jsonutils.dump_as_bytes(self.body)
        server = self.controller._action_rebuild(self.req, FAKE_UUID,
                                                 body=self.body).obj['server']
        self.assertEqual(server['id'], FAKE_UUID)
        self.assertEqual(server['description'], desc)

    def test_rebuild_server_with_description(self):
        self._rebuild_server(True, 'server desc')

    def test_rebuild_server_empty_description(self):
        self._rebuild_server(True, '')

    def test_rebuild_server_without_description(self):
        self._rebuild_server(False, '')

    def test_rebuild_server_remove_description(self):
        self._rebuild_server(True, None)

    def test_rebuild_server_description_too_long(self):
        self.body['rebuild']['description'] = 'x' * 256
        self.req.body = jsonutils.dump_as_bytes(self.body)
        self.assertRaises(exception.ValidationError,
                          self.controller._action_rebuild,
                          self.req, FAKE_UUID, body=self.body)

    def test_rebuild_server_description_invalid(self):
        # Invalid non-printable control char in the desc.
        self.body['rebuild']['description'] = "123\0d456"
        self.req.body = jsonutils.dump_as_bytes(self.body)
        self.assertRaises(exception.ValidationError,
                          self.controller._action_rebuild,
                          self.req, FAKE_UUID, body=self.body)

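# NOTE: The PUT /servers/{id} tests below exercise the name schema:
# embedded spaces are fine, but over-long, all-blank, or leading/trailing
# space names are rejected, except in legacy v2 compatibility mode where
# surrounding whitespace is tolerated.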
class ServersControllerUpdateTest(ControllerTest):

    def _get_request(self, body=None):
        req = fakes.HTTPRequestV21.blank('/fake/servers/%s' % FAKE_UUID)
        req.method = 'PUT'
        req.content_type = 'application/json'
        req.body = jsonutils.dump_as_bytes(body)
        fake_get = fakes.fake_compute_get(
            project_id=req.environ['nova.context'].project_id,
            user_id=req.environ['nova.context'].user_id)
        self.stub_out('nova.compute.api.API.get',
                      lambda api, *a, **k: fake_get(*a, **k))
        return req

    def test_update_server_all_attributes(self):
        body = {'server': {
                  'name': 'server_test',
               }}
        req = self._get_request(body)
        res_dict = self.controller.update(req, FAKE_UUID, body=body)

        self.assertEqual(res_dict['server']['id'], FAKE_UUID)
        self.assertEqual(res_dict['server']['name'], 'server_test')

    def test_update_server_name(self):
        body = {'server': {'name': 'server_test'}}
        req = self._get_request(body)
        res_dict = self.controller.update(req, FAKE_UUID, body=body)

        self.assertEqual(res_dict['server']['id'], FAKE_UUID)
        self.assertEqual(res_dict['server']['name'], 'server_test')

    def test_update_server_name_too_long(self):
        body = {'server': {'name': 'x' * 256}}
        req = self._get_request(body)
        self.assertRaises(exception.ValidationError, self.controller.update,
                          req, FAKE_UUID, body=body)

    def test_update_server_name_all_blank_spaces(self):
        self.stub_out('nova.db.instance_get',
                      fakes.fake_instance_get(name='server_test'))
        req = fakes.HTTPRequest.blank('/fake/servers/%s' % FAKE_UUID)
        req.method = 'PUT'
        req.content_type = 'application/json'
        body = {'server': {'name': ' ' * 64}}
        req.body = jsonutils.dump_as_bytes(body)
        self.assertRaises(exception.ValidationError, self.controller.update,
                          req, FAKE_UUID, body=body)

    def test_update_server_name_with_spaces_in_the_middle(self):
        body = {'server': {'name': 'abc def'}}
        req = self._get_request(body)
        self.controller.update(req, FAKE_UUID, body=body)

    def test_update_server_name_with_leading_trailing_spaces(self):
        self.stub_out('nova.db.instance_get',
                      fakes.fake_instance_get(name='server_test'))
        req = fakes.HTTPRequest.blank('/fake/servers/%s' % FAKE_UUID)
        req.method = 'PUT'
        req.content_type = 'application/json'
        body = {'server': {'name': ' abc def '}}
        req.body = jsonutils.dump_as_bytes(body)
        self.assertRaises(exception.ValidationError,
                          self.controller.update, req, FAKE_UUID, body=body)

    def test_update_server_name_with_leading_trailing_spaces_compat_mode(
            self):
        body = {'server': {'name': ' abc def '}}
        req = self._get_request(body)
        req.set_legacy_v2()
        self.controller.update(req, FAKE_UUID, body=body)

    def test_update_server_admin_password_extra_arg(self):
        inst_dict = dict(name='server_test', admin_password='bacon')
        body = dict(server=inst_dict)

        req = fakes.HTTPRequest.blank('/fake/servers/%s' % FAKE_UUID)
        req.method = 'PUT'
        req.content_type = "application/json"
        req.body = jsonutils.dump_as_bytes(body)
        self.assertRaises(exception.ValidationError, self.controller.update,
                          req, FAKE_UUID, body=body)

    def test_update_server_host_id(self):
        inst_dict = dict(host_id='123')
        body = dict(server=inst_dict)

        req = fakes.HTTPRequest.blank('/fake/servers/%s' % FAKE_UUID)
        req.method = 'PUT'
        req.content_type = "application/json"
        req.body = jsonutils.dump_as_bytes(body)
        self.assertRaises(exception.ValidationError, self.controller.update,
                          req, FAKE_UUID, body=body)

    def test_update_server_not_found(self):
        def fake_get(*args, **kwargs):
            raise exception.InstanceNotFound(instance_id='fake')

        self.stubs.Set(compute_api.API, 'get', fake_get)
        body = {'server': {'name': 'server_test'}}
        req = fakes.HTTPRequest.blank('/fake/servers/%s' % FAKE_UUID)
        req.method = 'PUT'
        req.content_type = "application/json"
        req.body = jsonutils.dump_as_bytes(body)
        self.assertRaises(webob.exc.HTTPNotFound, self.controller.update,
                          req, FAKE_UUID, body=body)

    @mock.patch.object(compute_api.API, 'update_instance')
    def test_update_server_not_found_on_update(self, mock_update_instance):
        def fake_update(*args, **kwargs):
            raise exception.InstanceNotFound(instance_id='fake')

        mock_update_instance.side_effect = fake_update
        body = {'server': {'name': 'server_test'}}
        req = self._get_request(body)
        self.assertRaises(webob.exc.HTTPNotFound, self.controller.update,
                          req, FAKE_UUID, body=body)

    def test_update_server_policy_fail(self):
        rule = {'compute:update': 'role:admin'}
        policy.set_rules(oslo_policy.Rules.from_dict(rule))
        body = {'server': {'name': 'server_test'}}
        req = self._get_request(body)
        self.assertRaises(exception.PolicyNotAuthorized,
                          self.controller.update, req, FAKE_UUID, body=body)

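# NOTE: The trigger_crash_dump server action was introduced in
# microversion 2.17; its request body must be exactly
# {"trigger_crash_dump": null}, as the schema tests below verify.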
class ServersControllerTriggerCrashDumpTest(ControllerTest):

    def setUp(self):
        super(ServersControllerTriggerCrashDumpTest, self).setUp()

        self.instance = fakes.stub_instance_obj(None,
                                                vm_state=vm_states.ACTIVE,
                                                project_id='fake')

        def fake_get(ctrl, ctxt, uuid):
            if uuid != FAKE_UUID:
                raise webob.exc.HTTPNotFound(explanation='fakeout')
            return self.instance

        self.useFixture(
            fixtures.MonkeyPatch('nova.api.openstack.compute.servers.'
                                 'ServersController._get_instance',
                                 fake_get))

        self.req = fakes.HTTPRequest.blank('/servers/%s/action' % FAKE_UUID)
        self.req.api_version_request = \
            api_version_request.APIVersionRequest('2.17')
        self.body = dict(trigger_crash_dump=None)

    @mock.patch.object(compute_api.API, 'trigger_crash_dump')
    def test_trigger_crash_dump(self, mock_trigger_crash_dump):
        ctxt = self.req.environ['nova.context']
        self.controller._action_trigger_crash_dump(self.req, FAKE_UUID,
                                                   body=self.body)
        mock_trigger_crash_dump.assert_called_with(ctxt, self.instance)

    def test_trigger_crash_dump_policy_failed(self):
        rule_name = "os_compute_api:servers:trigger_crash_dump"
        self.policy.set_rules({rule_name: "project_id:non_fake"})
        exc = self.assertRaises(exception.PolicyNotAuthorized,
                                self.controller._action_trigger_crash_dump,
                                self.req, FAKE_UUID, body=self.body)
        self.assertIn("os_compute_api:servers:trigger_crash_dump",
                      exc.format_message())

    @mock.patch.object(compute_api.API, 'trigger_crash_dump',
                       fake_start_stop_not_ready)
    def test_trigger_crash_dump_not_ready(self):
        self.assertRaises(webob.exc.HTTPConflict,
                          self.controller._action_trigger_crash_dump,
                          self.req, FAKE_UUID, body=self.body)

    @mock.patch.object(compute_api.API, 'trigger_crash_dump',
                       fakes.fake_actions_to_locked_server)
    def test_trigger_crash_dump_locked_server(self):
        self.assertRaises(webob.exc.HTTPConflict,
                          self.controller._action_trigger_crash_dump,
                          self.req, FAKE_UUID, body=self.body)

    @mock.patch.object(compute_api.API, 'trigger_crash_dump',
                       fake_start_stop_invalid_state)
    def test_trigger_crash_dump_invalid_state(self):
        self.assertRaises(webob.exc.HTTPConflict,
                          self.controller._action_trigger_crash_dump,
                          self.req, FAKE_UUID, body=self.body)

    def test_trigger_crash_dump_with_bogus_id(self):
        self.assertRaises(webob.exc.HTTPNotFound,
                          self.controller._action_trigger_crash_dump,
                          self.req, 'test_inst', body=self.body)

    def test_trigger_crash_dump_schema_invalid_type(self):
        self.body['trigger_crash_dump'] = 'not null'
        self.assertRaises(exception.ValidationError,
                          self.controller._action_trigger_crash_dump,
                          self.req, FAKE_UUID, body=self.body)

    def test_trigger_crash_dump_schema_extra_property(self):
        self.body['extra_property'] = 'extra'
        self.assertRaises(exception.ValidationError,
                          self.controller._action_trigger_crash_dump,
                          self.req, FAKE_UUID, body=self.body)

    @mock.patch.object(compute_api.API, 'trigger_crash_dump',
                       side_effect=exception.TriggerCrashDumpNotSupported)
    def test_trigger_crash_dump_not_supported(self,
                                              mock_trigger_crash_dump):
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.controller._action_trigger_crash_dump,
                          self.req, FAKE_UUID, body=self.body)

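# NOTE: With microversion 2.19, update requests may also carry a
# 'description'; sending None removes the existing description.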
class ServersControllerUpdateTestV219(ServersControllerUpdateTest):
    def _get_request(self, body=None):
        req = super(ServersControllerUpdateTestV219, self)._get_request(
            body=body)
        req.api_version_request = api_version_request.APIVersionRequest(
            '2.19')
        return req

    def _update_server_desc(self, set_desc, desc=None):
        body = {'server': {}}
        if set_desc:
            body['server']['description'] = desc
        req = self._get_request()
        res_dict = self.controller.update(req, FAKE_UUID, body=body)
        return res_dict

    def test_update_server_description(self):
        res_dict = self._update_server_desc(True, 'server_desc')
        self.assertEqual(res_dict['server']['id'], FAKE_UUID)
        self.assertEqual(res_dict['server']['description'], 'server_desc')

    def test_update_server_empty_description(self):
        res_dict = self._update_server_desc(True, '')
        self.assertEqual(res_dict['server']['id'], FAKE_UUID)
        self.assertEqual(res_dict['server']['description'], '')

    def test_update_server_without_description(self):
        res_dict = self._update_server_desc(False)
        self.assertEqual(res_dict['server']['id'], FAKE_UUID)
        self.assertIsNone(res_dict['server']['description'])

    def test_update_server_remove_description(self):
        res_dict = self._update_server_desc(True)
        self.assertEqual(res_dict['server']['id'], FAKE_UUID)
        self.assertIsNone(res_dict['server']['description'])

    def test_update_server_all_attributes(self):
        body = {'server': {
                  'name': 'server_test',
                  'description': 'server_desc'
               }}
        req = self._get_request(body)
        res_dict = self.controller.update(req, FAKE_UUID, body=body)

        self.assertEqual(res_dict['server']['id'], FAKE_UUID)
        self.assertEqual(res_dict['server']['name'], 'server_test')
        self.assertEqual(res_dict['server']['description'], 'server_desc')

    def test_update_server_description_too_long(self):
        body = {'server': {'description': 'x' * 256}}
        req = self._get_request(body)
        self.assertRaises(exception.ValidationError, self.controller.update,
                          req, FAKE_UUID, body=body)

    def test_update_server_description_invalid(self):
        # Invalid non-printable control char in the desc.
        body = {'server': {'description': "123\0d456"}}
        req = self._get_request(body)
        self.assertRaises(exception.ValidationError, self.controller.update,
                          req, FAKE_UUID, body=body)


class ServerStatusTest(test.TestCase):

    def setUp(self):
        super(ServerStatusTest, self).setUp()
        fakes.stub_out_nw_api(self)

        self.controller = servers.ServersController()

    def _get_with_state(self, vm_state, task_state=None):
        self.stub_out('nova.db.instance_get_by_uuid',
                      fakes.fake_instance_get(vm_state=vm_state,
                                              task_state=task_state))

        request = fakes.HTTPRequestV21.blank('/fake/servers/%s' % FAKE_UUID)
        return self.controller.show(request, FAKE_UUID)

    def test_active(self):
        response = self._get_with_state(vm_states.ACTIVE)
        self.assertEqual(response['server']['status'], 'ACTIVE')

    def test_reboot(self):
        response = self._get_with_state(vm_states.ACTIVE,
                                        task_states.REBOOTING)
        self.assertEqual(response['server']['status'], 'REBOOT')

    def test_reboot_hard(self):
        response = self._get_with_state(vm_states.ACTIVE,
                                        task_states.REBOOTING_HARD)
        self.assertEqual(response['server']['status'], 'HARD_REBOOT')

    def test_reboot_resize_policy_fail(self):
        def fake_get_server(context, req, id):
            return fakes.stub_instance(id)

        self.stubs.Set(self.controller, '_get_server', fake_get_server)

        rule = {'compute:reboot': 'role:admin'}
        policy.set_rules(oslo_policy.Rules.from_dict(rule))
        req = fakes.HTTPRequestV21.blank('/fake/servers/1234/action')
        self.assertRaises(exception.PolicyNotAuthorized,
                          self.controller._action_reboot, req, '1234',
                          body={'reboot': {'type': 'HARD'}})

    def test_rebuild(self):
        response = self._get_with_state(vm_states.ACTIVE,
                                        task_states.REBUILDING)
        self.assertEqual(response['server']['status'], 'REBUILD')

    def test_rebuild_error(self):
        response = self._get_with_state(vm_states.ERROR)
        self.assertEqual(response['server']['status'], 'ERROR')

    def test_resize(self):
        response = self._get_with_state(vm_states.ACTIVE,
                                        task_states.RESIZE_PREP)
        self.assertEqual(response['server']['status'], 'RESIZE')

    def test_confirm_resize_policy_fail(self):
        def fake_get_server(context, req, id):
            return fakes.stub_instance(id)

        self.stubs.Set(self.controller, '_get_server', fake_get_server)

        rule = {'compute:confirm_resize': 'role:admin'}
        policy.set_rules(oslo_policy.Rules.from_dict(rule))
        req = fakes.HTTPRequestV21.blank('/fake/servers/1234/action')
        self.assertRaises(exception.PolicyNotAuthorized,
                          self.controller._action_confirm_resize,
                          req, '1234', {})

    def test_verify_resize(self):
        response = self._get_with_state(vm_states.RESIZED, None)
        self.assertEqual(response['server']['status'], 'VERIFY_RESIZE')

    def test_revert_resize(self):
        response = self._get_with_state(vm_states.RESIZED,
                                        task_states.RESIZE_REVERTING)
        self.assertEqual(response['server']['status'], 'REVERT_RESIZE')

    def test_revert_resize_policy_fail(self):
        def fake_get_server(context, req, id):
            return fakes.stub_instance(id)

        self.stubs.Set(self.controller, '_get_server', fake_get_server)

        rule = {'compute:revert_resize': 'role:admin'}
        policy.set_rules(oslo_policy.Rules.from_dict(rule))
        req = fakes.HTTPRequestV21.blank('/fake/servers/1234/action')
        self.assertRaises(exception.PolicyNotAuthorized,
                          self.controller._action_revert_resize,
                          req, '1234', {})

    def test_password_update(self):
        response = self._get_with_state(vm_states.ACTIVE,
                                        task_states.UPDATING_PASSWORD)
        self.assertEqual(response['server']['status'], 'PASSWORD')

    def test_stopped(self):
        response = self._get_with_state(vm_states.STOPPED)
        self.assertEqual(response['server']['status'], 'SHUTOFF')

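# NOTE: The create tests stub out the instance DB layer (create/get/
# update) with an in-memory cache so that POST /fake/servers can run end
# to end without a real database, and they verify both the generated
# adminPass and the schema/validation failure paths.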
image_uuid = '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6' flavor_ref = 'http://localhost/123/flavors/3' def setUp(self): """Shared implementation for tests below that create instance.""" super(ServersControllerCreateTest, self).setUp() self.flags(enable_instance_password=True, group='api') self.instance_cache_num = 0 self.instance_cache_by_id = {} self.instance_cache_by_uuid = {} fakes.stub_out_nw_api(self) self.controller = servers.ServersController() def instance_create(context, inst): inst_type = flavors.get_flavor_by_flavor_id(3) image_uuid = '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6' def_image_ref = 'http://localhost/fake/images/%s' % image_uuid self.instance_cache_num += 1 instance = fake_instance.fake_db_instance(**{ 'id': self.instance_cache_num, 'display_name': inst['display_name'] or 'test', 'display_description': inst['display_description'] or '', 'uuid': FAKE_UUID, 'instance_type': inst_type, 'image_ref': inst.get('image_ref', def_image_ref), 'user_id': 'fake', 'project_id': 'fake', 'reservation_id': inst['reservation_id'], "created_at": datetime.datetime(2010, 10, 10, 12, 0, 0), "updated_at": datetime.datetime(2010, 11, 11, 11, 0, 0), "config_drive": None, "progress": 0, "fixed_ips": [], "task_state": "", "vm_state": "", "root_device_name": inst.get('root_device_name', 'vda'), }) self.instance_cache_by_id[instance['id']] = instance self.instance_cache_by_uuid[instance['uuid']] = instance return instance def instance_get(context, instance_id): """Stub for compute/api create() pulling in instance after scheduling """ return self.instance_cache_by_id[instance_id] def instance_update(context, uuid, values): instance = self.instance_cache_by_uuid[uuid] instance.update(values) return instance def server_update_and_get_original( context, instance_uuid, params, columns_to_join=None): inst = self.instance_cache_by_uuid[instance_uuid] inst.update(params) return (inst, inst) def fake_method(*args, **kwargs): pass def project_get_networks(context, user_id): return dict(id='1', host='localhost') fakes.stub_out_key_pair_funcs(self) fake.stub_out_image_service(self) self.stubs.Set(uuid, 'uuid4', fake_gen_uuid) self.stub_out('nova.db.project_get_networks', project_get_networks) self.stub_out('nova.db.instance_create', instance_create) self.stub_out('nova.db.instance_system_metadata_update', fake_method) self.stub_out('nova.db.instance_get', instance_get) self.stub_out('nova.db.instance_update', instance_update) self.stub_out('nova.db.instance_update_and_get_original', server_update_and_get_original) self.stubs.Set(manager.VlanManager, 'allocate_fixed_ip', fake_method) self.body = { 'server': { 'name': 'server_test', 'imageRef': self.image_uuid, 'flavorRef': self.flavor_ref, 'metadata': { 'hello': 'world', 'open': 'stack', }, 'networks': [{ 'uuid': 'ff608d40-75e9-48cb-b745-77bb55b5eaf2' }], }, } self.bdm = [{'delete_on_termination': 1, 'device_name': 123, 'volume_size': 1, 'volume_id': '11111111-1111-1111-1111-111111111111'}] self.req = fakes.HTTPRequest.blank('/fake/servers') self.req.method = 'POST' self.req.headers["content-type"] = "application/json" def _check_admin_password_len(self, server_dict): """utility function - check server_dict for admin_password length.""" self.assertEqual(CONF.password_length, len(server_dict["adminPass"])) def _check_admin_password_missing(self, server_dict): """utility function - check server_dict for admin_password absence.""" self.assertNotIn("adminPass", server_dict) def _test_create_instance(self, flavor=2): image_uuid = 'c905cedb-7281-47e4-8a62-f26bc5fc4c77' 
self.body['server']['imageRef'] = image_uuid self.body['server']['flavorRef'] = flavor self.req.body = jsonutils.dump_as_bytes(self.body) server = self.controller.create(self.req, body=self.body).obj['server'] self._check_admin_password_len(server) self.assertEqual(FAKE_UUID, server['id']) def test_create_instance_with_none_value_port(self): self.body['server'] = {'networks': [{'port': None, 'uuid': FAKE_UUID}]} self.body['server']['name'] = 'test' self._test_create_instance() def test_create_instance_private_flavor(self): values = { 'name': 'fake_name', 'memory_mb': 512, 'vcpus': 1, 'root_gb': 10, 'ephemeral_gb': 10, 'flavorid': '1324', 'swap': 0, 'rxtx_factor': 0.5, 'vcpu_weight': 1, 'disabled': False, 'is_public': False, } db.flavor_create(context.get_admin_context(), values) self.assertRaises(webob.exc.HTTPBadRequest, self._test_create_instance, flavor=1324) def test_create_server_bad_image_uuid(self): self.body['server']['min_count'] = 1 self.body['server']['imageRef'] = 1, self.req.body = jsonutils.dump_as_bytes(self.body) self.assertRaises(exception.ValidationError, self.controller.create, self.req, body=self.body) # TODO(cyeoh): bp-v3-api-unittests # This needs to be ported to the os-networks extension tests # def test_create_server_with_invalid_networks_parameter(self): # self.ext_mgr.extensions = {'os-networks': 'fake'} # image_href = '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6' # flavor_ref = 'http://localhost/123/flavors/3' # body = { # 'server': { # 'name': 'server_test', # 'imageRef': image_href, # 'flavorRef': flavor_ref, # 'networks': {'uuid': '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6'}, # } # } # req = fakes.HTTPRequest.blank('/fake/servers') # req.method = 'POST' # req.body = jsonutils.dump_as_bytes(body) # req.headers["content-type"] = "application/json" # self.assertRaises(webob.exc.HTTPBadRequest, # self.controller.create, # req, # body) def test_create_server_with_deleted_image(self): # Get the fake image service so we can set the status to deleted (image_service, image_id) = glance.get_remote_image_service( context, '') image_service.update(context, self.image_uuid, {'status': 'DELETED'}) self.addCleanup(image_service.update, context, self.image_uuid, {'status': 'active'}) self.body['server']['flavorRef'] = 2 self.req.body = jsonutils.dump_as_bytes(self.body) with testtools.ExpectedException( webob.exc.HTTPBadRequest, 'Image 76fa36fc-c930-4bf3-8c8a-ea2a2420deb6 is not active.'): self.controller.create(self.req, body=self.body) def test_create_server_image_too_large(self): # Get the fake image service so we can update the size of the image (image_service, image_id) = glance.get_remote_image_service( context, self.image_uuid) image = image_service.show(context, image_id) orig_size = image['size'] new_size = str(1000 * (1024 ** 3)) image_service.update(context, self.image_uuid, {'size': new_size}) self.addCleanup(image_service.update, context, self.image_uuid, {'size': orig_size}) self.body['server']['flavorRef'] = 2 self.req.body = jsonutils.dump_as_bytes(self.body) with testtools.ExpectedException( webob.exc.HTTPBadRequest, "Flavor's disk is too small for requested image."): self.controller.create(self.req, body=self.body) def test_create_instance_with_image_non_uuid(self): self.body['server']['imageRef'] = 'not-uuid' self.assertRaises(exception.ValidationError, self.controller.create, self.req, body=self.body) def test_create_instance_with_image_as_full_url(self): image_href = ('http://localhost/v2/fake/images/' '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6') 
    def test_create_instance_with_image_as_full_url(self):
        image_href = ('http://localhost/v2/fake/images/'
                      '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6')
        self.body['server']['imageRef'] = image_href
        self.assertRaises(exception.ValidationError,
                          self.controller.create,
                          self.req, body=self.body)

    def test_create_instance_with_image_as_empty_string(self):
        self.body['server']['imageRef'] = ''
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.controller.create,
                          self.req, body=self.body)

    def test_create_instance_no_key_pair(self):
        fakes.stub_out_key_pair_funcs(self, have_key_pair=False)
        self._test_create_instance()

    def _test_create_extra(self, params, no_image=False):
        self.body['server']['flavorRef'] = 2
        if no_image:
            self.body['server'].pop('imageRef', None)
        self.body['server'].update(params)
        self.req.body = jsonutils.dump_as_bytes(self.body)
        self.req.headers["content-type"] = "application/json"
        self.controller.create(self.req, body=self.body).obj['server']

    # TODO(cyeoh): bp-v3-api-unittests
    # This needs to be ported to the os-keypairs extension tests
    # def test_create_instance_with_keypairs_enabled(self):
    #     self.ext_mgr.extensions = {'os-keypairs': 'fake'}
    #     key_name = 'green'
    #
    #     params = {'key_name': key_name}
    #     old_create = compute_api.API.create
    #
    #     # NOTE(sdague): key pair goes back to the database,
    #     # so we need to stub it out for tests
    #     def key_pair_get(context, user_id, name):
    #         return {'public_key': 'FAKE_KEY',
    #                 'fingerprint': 'FAKE_FINGERPRINT',
    #                 'name': name}
    #
    #     def create(*args, **kwargs):
    #         self.assertEqual(kwargs['key_name'], key_name)
    #         return old_create(*args, **kwargs)
    #
    #     self.stub_out('nova.db.key_pair_get', key_pair_get)
    #     self.stubs.Set(compute_api.API, 'create', create)
    #     self._test_create_extra(params)

    # TODO(cyeoh): bp-v3-api-unittests
    # This needs to be ported to the os-networks extension tests
    # def test_create_instance_with_networks_enabled(self):
    #     self.ext_mgr.extensions = {'os-networks': 'fake'}
    #     net_uuid = '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6'
    #     requested_networks = [{'uuid': net_uuid}]
    #     params = {'networks': requested_networks}
    #     old_create = compute_api.API.create
    #     def create(*args, **kwargs):
    #         result = [('76fa36fc-c930-4bf3-8c8a-ea2a2420deb6', None)]
    #         self.assertEqual(kwargs['requested_networks'], result)
    #         return old_create(*args, **kwargs)
    #     self.stubs.Set(compute_api.API, 'create', create)
    #     self._test_create_extra(params)

    def test_create_instance_with_port_with_no_fixed_ips(self):
        port_id = 'eeeeeeee-eeee-eeee-eeee-eeeeeeeeeeee'
        requested_networks = [{'port': port_id}]
        params = {'networks': requested_networks}

        def fake_create(*args, **kwargs):
            raise exception.PortRequiresFixedIP(port_id=port_id)

        self.stubs.Set(compute_api.API, 'create', fake_create)
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self._test_create_extra, params)

    def test_create_instance_raise_user_data_too_large(self):
        self.body['server']['user_data'] = (b'1' * 65536)
        ex = self.assertRaises(exception.ValidationError,
                               self.controller.create,
                               self.req, body=self.body)
        # Make sure the failure was about user_data and not something else.
        self.assertIn('user_data', six.text_type(ex))
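    # Note: the negative tests below share one pattern - stub
    # compute_api.API.create to raise a specific nova exception, then let
    # _test_create_extra drive the request and assert the controller
    # translates the exception into the expected webob HTTP error, which
    # exercises the API's error-mapping layer in isolation.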
    def test_create_instance_with_network_with_no_subnet(self):
        network = 'eeeeeeee-eeee-eeee-eeee-eeeeeeeeeeee'
        requested_networks = [{'uuid': network}]
        params = {'networks': requested_networks}

        def fake_create(*args, **kwargs):
            raise exception.NetworkRequiresSubnet(network_uuid=network)

        self.stubs.Set(compute_api.API, 'create', fake_create)
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self._test_create_extra, params)

    def test_create_instance_with_non_unique_secgroup_name(self):
        network = 'eeeeeeee-eeee-eeee-eeee-eeeeeeeeeeee'
        requested_networks = [{'uuid': network}]
        params = {'networks': requested_networks,
                  'security_groups': [{'name': 'dup'}, {'name': 'dup'}]}

        def fake_create(*args, **kwargs):
            raise exception.NoUniqueMatch("No Unique match found for ...")

        self.stubs.Set(compute_api.API, 'create', fake_create)
        self.assertRaises(webob.exc.HTTPConflict,
                          self._test_create_extra, params)

    def test_create_instance_secgroup_leading_trailing_spaces(self):
        network = 'eeeeeeee-eeee-eeee-eeee-eeeeeeeeeeee'
        requested_networks = [{'uuid': network}]
        params = {'networks': requested_networks,
                  'security_groups': [{'name': ' sg '}]}
        self.assertRaises(exception.ValidationError,
                          self._test_create_extra, params)

    def test_create_instance_secgroup_leading_trailing_spaces_compat_mode(
            self):
        network = 'eeeeeeee-eeee-eeee-eeee-eeeeeeeeeeee'
        requested_networks = [{'uuid': network}]
        params = {'networks': requested_networks,
                  'security_groups': [{'name': ' sg '}]}

        def fake_create(*args, **kwargs):
            self.assertEqual([' sg '], kwargs['security_groups'])
            return (objects.InstanceList(objects=[fakes.stub_instance_obj(
                self.req.environ['nova.context'])]), None)

        self.stubs.Set(compute_api.API, 'create', fake_create)
        self.req.set_legacy_v2()
        self._test_create_extra(params)

    def test_create_instance_with_networks_disabled_neutronv2(self):
        self.flags(use_neutron=True)
        net_uuid = '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6'
        requested_networks = [{'uuid': net_uuid}]
        params = {'networks': requested_networks}
        old_create = compute_api.API.create

        def create(*args, **kwargs):
            result = [('76fa36fc-c930-4bf3-8c8a-ea2a2420deb6', None,
                       None, None)]
            self.assertEqual(result, kwargs['requested_networks'].as_tuples())
            return old_create(*args, **kwargs)

        self.stubs.Set(compute_api.API, 'create', create)
        self._test_create_extra(params)

    def test_create_instance_with_pass_disabled(self):
        # Test with admin passwords disabled. See lp bug 921814.
        self.flags(enable_instance_password=False, group='api')
        self.req.body = jsonutils.dump_as_bytes(self.body)
        res = self.controller.create(self.req, body=self.body).obj

        server = res['server']
        self._check_admin_password_missing(server)
        self.assertEqual(FAKE_UUID, server['id'])

    def test_create_instance_name_too_long(self):
        self.body['server']['name'] = 'X' * 256
        self.req.body = jsonutils.dump_as_bytes(self.body)
        self.assertRaises(exception.ValidationError, self.controller.create,
                          self.req, body=self.body)

    def test_create_instance_name_with_spaces_in_the_middle(self):
        self.body['server']['name'] = 'abc def'
        self.req.body = jsonutils.dump_as_bytes(self.body)
        self.controller.create(self.req, body=self.body)

    def test_create_instance_name_with_leading_trailing_spaces(self):
        self.body['server']['name'] = ' abc def '
        self.req.body = jsonutils.dump_as_bytes(self.body)
        self.assertRaises(exception.ValidationError,
                          self.controller.create, self.req, body=self.body)
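    # Note: the *_compat_mode tests rely on req.set_legacy_v2(), which
    # relaxes the strict v2.1 input validation; the same padded names,
    # availability zones and security group names that raise
    # ValidationError above are accepted when the request is flagged as
    # legacy v2.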
    def test_create_instance_name_with_leading_trailing_spaces_in_compat_mode(
            self):
        self.body['server']['name'] = ' abc def '
        self.req.body = jsonutils.dump_as_bytes(self.body)
        self.req.set_legacy_v2()
        self.controller.create(self.req, body=self.body)

    def test_create_instance_name_all_blank_spaces(self):
        image_uuid = '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6'
        flavor_ref = 'http://localhost/fake/flavors/3'
        body = {
            'server': {
                'name': ' ' * 64,
                'imageRef': image_uuid,
                'flavorRef': flavor_ref,
                'metadata': {
                    'hello': 'world',
                    'open': 'stack',
                },
            },
        }

        req = fakes.HTTPRequest.blank('/fake/servers')
        req.method = 'POST'
        req.body = jsonutils.dump_as_bytes(body)
        req.headers["content-type"] = "application/json"
        self.assertRaises(exception.ValidationError,
                          self.controller.create, req, body=body)

    def test_create_az_with_leading_trailing_spaces(self):
        self.body['server']['availability_zone'] = ' zone1 '
        self.req.body = jsonutils.dump_as_bytes(self.body)
        self.assertRaises(exception.ValidationError,
                          self.controller.create, self.req, body=self.body)

    def test_create_az_with_leading_trailing_spaces_in_compat_mode(
            self):
        self.body['server']['name'] = ' abc def '
        self.body['server']['availability_zones'] = ' zone1 '
        self.req.body = jsonutils.dump_as_bytes(self.body)
        self.req.set_legacy_v2()
        with mock.patch.object(availability_zones, 'get_availability_zones',
                               return_value=[' zone1 ']):
            self.controller.create(self.req, body=self.body)

    def test_create_instance(self):
        self.req.body = jsonutils.dump_as_bytes(self.body)
        res = self.controller.create(self.req, body=self.body).obj

        server = res['server']
        self._check_admin_password_len(server)
        self.assertEqual(FAKE_UUID, server['id'])

    def test_create_instance_extension_create_exception(self):
        def fake_keypair_server_create(server_dict, create_kwargs,
                                       body_deprecated_param):
            raise KeyError

        self.controller.server_create_func_list.append(
            fake_keypair_server_create)
        image_uuid = '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6'
        flavor_ref = 'http://localhost/123/flavors/3'
        body = {
            'server': {
                'name': 'server_test',
                'imageRef': image_uuid,
                'flavorRef': flavor_ref,
                'metadata': {
                    'hello': 'world',
                    'open': 'stack',
                },
            },
        }

        req = fakes.HTTPRequestV21.blank('/fake/servers')
        req.method = 'POST'
        req.body = jsonutils.dump_as_bytes(body)
        req.headers["content-type"] = "application/json"
        self.assertRaises(webob.exc.HTTPInternalServerError,
                          self.controller.create, req, body=body)
        self.controller.server_create_func_list.remove(
            fake_keypair_server_create)

    def test_create_instance_pass_disabled(self):
        self.flags(enable_instance_password=False, group='api')
        self.req.body = jsonutils.dump_as_bytes(self.body)
        res = self.controller.create(self.req, body=self.body).obj

        server = res['server']
        self._check_admin_password_missing(server)
        self.assertEqual(FAKE_UUID, server['id'])

    @mock.patch('nova.virt.hardware.numa_get_constraints')
    def _test_create_instance_numa_topology_wrong(self, exc,
                                                  numa_constraints_mock):
        numa_constraints_mock.side_effect = exc(**{
            'name': None, 'cpunum': 0, 'cpumax': 0, 'cpuset': None,
            'memsize': 0, 'memtotal': 0})
        self.req.body = jsonutils.dump_as_bytes(self.body)
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.controller.create, self.req, body=self.body)
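    # Note: the helper above instantiates each exception class with a
    # superset of the format kwargs any of them uses (nova exceptions
    # tolerate extra kwargs), so the loop below can feed it every image
    # NUMA topology error and assert each one maps to a 400.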
    def test_create_instance_numa_topology_wrong(self):
        for exc in [exception.ImageNUMATopologyIncomplete,
                    exception.ImageNUMATopologyForbidden,
                    exception.ImageNUMATopologyAsymmetric,
                    exception.ImageNUMATopologyCPUOutOfRange,
                    exception.ImageNUMATopologyCPUDuplicates,
                    exception.ImageNUMATopologyCPUsUnassigned,
                    exception.ImageNUMATopologyMemoryOutOfRange]:
            self._test_create_instance_numa_topology_wrong(exc)

    def test_create_instance_too_much_metadata(self):
        self.flags(metadata_items=1, group='quota')
        self.body['server']['metadata']['vote'] = 'fiddletown'
        self.req.body = jsonutils.dump_as_bytes(self.body)
        self.assertRaises(webob.exc.HTTPForbidden,
                          self.controller.create, self.req, body=self.body)

    def test_create_instance_metadata_key_too_long(self):
        self.flags(metadata_items=1, group='quota')
        self.body['server']['metadata'] = {('a' * 260): '12345'}
        self.req.body = jsonutils.dump_as_bytes(self.body)
        self.assertRaises(exception.ValidationError,
                          self.controller.create, self.req, body=self.body)

    def test_create_instance_metadata_value_too_long(self):
        self.flags(metadata_items=1, group='quota')
        self.body['server']['metadata'] = {'key1': ('a' * 260)}
        self.req.body = jsonutils.dump_as_bytes(self.body)
        self.assertRaises(exception.ValidationError,
                          self.controller.create, self.req, body=self.body)

    def test_create_instance_metadata_key_blank(self):
        self.flags(metadata_items=1, group='quota')
        self.body['server']['metadata'] = {'': 'abcd'}
        self.req.body = jsonutils.dump_as_bytes(self.body)
        self.assertRaises(exception.ValidationError,
                          self.controller.create, self.req, body=self.body)

    def test_create_instance_metadata_not_dict(self):
        self.flags(metadata_items=1, group='quota')
        self.body['server']['metadata'] = 'string'
        self.req.body = jsonutils.dump_as_bytes(self.body)
        self.assertRaises(exception.ValidationError,
                          self.controller.create, self.req, body=self.body)

    def test_create_instance_metadata_key_not_string(self):
        self.flags(metadata_items=1, group='quota')
        self.body['server']['metadata'] = {1: 'test'}
        self.req.body = jsonutils.dump_as_bytes(self.body)
        self.assertRaises(exception.ValidationError,
                          self.controller.create, self.req, body=self.body)

    def test_create_instance_metadata_value_not_string(self):
        self.flags(metadata_items=1, group='quota')
        self.body['server']['metadata'] = {'test': ['a', 'list']}
        self.req.body = jsonutils.dump_as_bytes(self.body)
        self.assertRaises(exception.ValidationError,
                          self.controller.create, self.req, body=self.body)

    def test_create_user_data_malformed_bad_request(self):
        params = {'user_data': 'u1234'}
        self.assertRaises(exception.ValidationError,
                          self._test_create_extra, params)

    def test_create_instance_invalid_key_name(self):
        self.body['server']['key_name'] = 'nonexistentkey'
        self.req.body = jsonutils.dump_as_bytes(self.body)
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.controller.create, self.req, body=self.body)

    def test_create_instance_valid_key_name(self):
        self.body['server']['key_name'] = 'key'
        self.req.body = jsonutils.dump_as_bytes(self.body)
        res = self.controller.create(self.req, body=self.body).obj

        self.assertEqual(FAKE_UUID, res["server"]["id"])
        self._check_admin_password_len(res["server"])

    def test_create_instance_invalid_flavor_href(self):
        flavor_ref = 'http://localhost/v2/flavors/asdf'
        self.body['server']['flavorRef'] = flavor_ref
        self.req.body = jsonutils.dump_as_bytes(self.body)
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.controller.create, self.req, body=self.body)

    def test_create_instance_invalid_flavor_id_int(self):
        flavor_ref = -1
        self.body['server']['flavorRef'] = flavor_ref
        self.req.body = jsonutils.dump_as_bytes(self.body)
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.controller.create, self.req, body=self.body)
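    # Note: as the flavorRef tests around here show, the value may be a
    # plain id or a full href that ends in the id; syntactically valid but
    # nonexistent flavors fail at lookup time with a 400, while an empty
    # string never gets that far - the schema rejects it outright.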
    @mock.patch.object(nova.compute.flavors, 'get_flavor_by_flavor_id',
                       return_value=objects.Flavor())
    @mock.patch.object(compute_api.API, 'create')
    def test_create_instance_with_non_existing_snapshot_id(
            self, mock_create, mock_get_flavor_by_flavor_id):
        mock_create.side_effect = exception.SnapshotNotFound(snapshot_id='123')

        self.body['server'] = {'name': 'server_test',
                               'flavorRef': self.flavor_ref,
                               'block_device_mapping_v2':
                                   [{'source_type': 'snapshot',
                                     'uuid': '123'}]}
        self.req.body = jsonutils.dump_as_bytes(self.body)
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.controller.create, self.req, body=self.body)

    def test_create_instance_invalid_flavor_id_empty(self):
        flavor_ref = ""
        self.body['server']['flavorRef'] = flavor_ref
        self.req.body = jsonutils.dump_as_bytes(self.body)
        self.assertRaises(exception.ValidationError,
                          self.controller.create, self.req, body=self.body)

    def test_create_instance_bad_flavor_href(self):
        flavor_ref = 'http://localhost/v2/flavors/17'
        self.body['server']['flavorRef'] = flavor_ref
        self.req.body = jsonutils.dump_as_bytes(self.body)
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.controller.create, self.req, body=self.body)

    def test_create_instance_local_href(self):
        self.req.body = jsonutils.dump_as_bytes(self.body)
        res = self.controller.create(self.req, body=self.body).obj
        server = res['server']
        self.assertEqual(FAKE_UUID, server['id'])

    def test_create_instance_admin_password(self):
        self.body['server']['flavorRef'] = 3
        self.body['server']['adminPass'] = 'testpass'
        self.req.body = jsonutils.dump_as_bytes(self.body)
        res = self.controller.create(self.req, body=self.body).obj

        server = res['server']
        self.assertEqual(server['adminPass'],
                         self.body['server']['adminPass'])

    def test_create_instance_admin_password_pass_disabled(self):
        self.flags(enable_instance_password=False, group='api')
        self.body['server']['flavorRef'] = 3
        self.body['server']['adminPass'] = 'testpass'
        self.req.body = jsonutils.dump_as_bytes(self.body)
        res = self.controller.create(self.req, body=self.body).obj

        self.assertIn('server', res)
        self.assertIn('adminPass', self.body['server'])

    def test_create_instance_admin_password_empty(self):
        self.body['server']['flavorRef'] = 3
        self.body['server']['adminPass'] = ''
        self.req.body = jsonutils.dump_as_bytes(self.body)
        # The fact that the action doesn't raise is enough validation
        self.controller.create(self.req, body=self.body)

    def test_create_location(self):
        selfhref = 'http://localhost/v2/fake/servers/%s' % FAKE_UUID
        self.req.body = jsonutils.dump_as_bytes(self.body)
        robj = self.controller.create(self.req, body=self.body)
        self.assertEqual(encodeutils.safe_decode(robj['Location']), selfhref)

    @mock.patch('nova.objects.Quotas.get_all_by_project')
    @mock.patch('nova.objects.Quotas.get_all_by_project_and_user')
    @mock.patch('nova.objects.Quotas.count_as_dict')
    def _do_test_create_instance_above_quota(self, resource, allowed, quota,
                                             expected_msg, mock_count,
                                             mock_get_all_pu, mock_get_all_p):
        count = {'project': {}, 'user': {}}
        for res in ('instances', 'ram', 'cores'):
            if res == resource:
                value = quota - allowed
                count['project'][res] = count['user'][res] = value
            else:
                count['project'][res] = count['user'][res] = 0
        mock_count.return_value = count
        mock_get_all_p.return_value = {'project_id': 'fake'}
        mock_get_all_pu.return_value = {'project_id': 'fake',
                                        'user_id': 'fake_user'}
        if resource in db_api.PER_PROJECT_QUOTAS:
            mock_get_all_p.return_value[resource] = quota
        else:
            mock_get_all_pu.return_value[resource] = quota
        fakes.stub_out_instance_quota(self, allowed, quota, resource)
        self.body['server']['flavorRef'] = 3
        self.req.body = jsonutils.dump_as_bytes(self.body)
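        # Drive the request and verify both the 403 status and the exact
        # quota-exceeded message built from the counts mocked above.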
        try:
            self.controller.create(self.req, body=self.body).obj['server']
            self.fail('expected quota to be exceeded')
        except webob.exc.HTTPForbidden as e:
            self.assertEqual(e.explanation, expected_msg)

    def test_create_instance_above_quota_instances(self):
        msg = ('Quota exceeded for instances: Requested 1, but'
               ' already used 10 of 10 instances')
        self._do_test_create_instance_above_quota('instances', 0, 10, msg)

    def test_create_instance_above_quota_ram(self):
        msg = ('Quota exceeded for ram: Requested 4096, but'
               ' already used 8192 of 10240 ram')
        self._do_test_create_instance_above_quota('ram', 2048, 10 * 1024, msg)

    def test_create_instance_above_quota_cores(self):
        msg = ('Quota exceeded for cores: Requested 2, but'
               ' already used 9 of 10 cores')
        self._do_test_create_instance_above_quota('cores', 1, 10, msg)

    def test_create_instance_above_quota_server_group_members(self):
        ctxt = self.req.environ['nova.context']
        fake_group = objects.InstanceGroup(ctxt)
        fake_group.project_id = ctxt.project_id
        fake_group.user_id = ctxt.user_id
        fake_group.create()

        real_count = fakes.QUOTAS.count_as_dict

        def fake_count(context, name, group, user_id):
            if name == 'server_group_members':
                self.assertEqual(group.uuid, fake_group.uuid)
                self.assertEqual(user_id,
                                 self.req.environ['nova.context'].user_id)
                return {'user': {'server_group_members': 10}}
            else:
                return real_count(context, name, group, user_id)

        def fake_limit_check(context, **kwargs):
            if 'server_group_members' in kwargs:
                raise exception.OverQuota(overs={})

        def fake_instance_destroy(context, uuid, constraint):
            return fakes.stub_instance(1)

        self.stubs.Set(fakes.QUOTAS, 'count_as_dict', fake_count)
        self.stubs.Set(fakes.QUOTAS, 'limit_check', fake_limit_check)
        self.stub_out('nova.db.instance_destroy', fake_instance_destroy)
        self.body['os:scheduler_hints'] = {'group': fake_group.uuid}
        self.req.body = jsonutils.dump_as_bytes(self.body)
        expected_msg = "Quota exceeded, too many servers in group"

        try:
            self.controller.create(self.req, body=self.body).obj
            self.fail('expected quota to be exceeded')
        except webob.exc.HTTPForbidden as e:
            self.assertEqual(e.explanation, expected_msg)

    def test_create_instance_with_group_hint(self):
        ctxt = self.req.environ['nova.context']
        test_group = objects.InstanceGroup(ctxt)
        test_group.project_id = ctxt.project_id
        test_group.user_id = ctxt.user_id
        test_group.create()

        def fake_instance_destroy(context, uuid, constraint):
            return fakes.stub_instance(1)

        self.stub_out('nova.db.instance_destroy', fake_instance_destroy)
        self.body['os:scheduler_hints'] = {'group': test_group.uuid}
        self.req.body = jsonutils.dump_as_bytes(self.body)
        server = self.controller.create(self.req,
                                        body=self.body).obj['server']

        test_group = objects.InstanceGroup.get_by_uuid(ctxt, test_group.uuid)
        self.assertIn(server['id'], test_group.members)

    def test_create_instance_with_group_hint_group_not_found(self):
        def fake_instance_destroy(context, uuid, constraint):
            return fakes.stub_instance(1)

        self.stub_out('nova.db.instance_destroy', fake_instance_destroy)
        self.body['os:scheduler_hints'] = {
            'group': '5b674f73-c8cf-40ef-9965-3b6fe4b304b1'}
        self.req.body = jsonutils.dump_as_bytes(self.body)
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.controller.create, self.req, body=self.body)

    def test_create_instance_with_group_hint_wrong_uuid_format(self):
        self.body['os:scheduler_hints'] = {
            'group': 'non-uuid'}
        self.req.body = jsonutils.dump_as_bytes(self.body)
        self.assertRaises(exception.ValidationError,
                          self.controller.create, self.req, body=self.body)
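    # Note: the tests that follow cover the neutron-specific failure modes
    # of server create; each stubs compute_api.API.create to raise the
    # relevant exception (port in use, port/network not found, ambiguous
    # networks, auto-allocation unavailable) and asserts the resulting
    # 4xx status code.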
    def test_create_instance_with_neutronv2_port_in_use(self):
        network = 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa'
        port = 'eeeeeeee-eeee-eeee-eeee-eeeeeeeeeeee'
        requested_networks = [{'uuid': network, 'port': port}]
        params = {'networks': requested_networks}

        def fake_create(*args, **kwargs):
            raise exception.PortInUse(port_id=port)

        self.stubs.Set(compute_api.API, 'create', fake_create)
        self.assertRaises(webob.exc.HTTPConflict,
                          self._test_create_extra, params)

    @mock.patch.object(compute_api.API, 'create')
    def test_create_instance_public_network_non_admin(self, mock_create):
        public_network_uuid = 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa'
        params = {'networks': [{'uuid': public_network_uuid}]}
        self.req.body = jsonutils.dump_as_bytes(self.body)
        mock_create.side_effect = exception.ExternalNetworkAttachForbidden(
            network_uuid=public_network_uuid)
        self.assertRaises(webob.exc.HTTPForbidden,
                          self._test_create_extra, params)

    @mock.patch.object(compute_api.API, 'create')
    def test_create_multiple_instance_with_specified_ip_neutronv2(self,
                                                                  _api_mock):
        _api_mock.side_effect = exception.InvalidFixedIpAndMaxCountRequest(
            reason="")
        network = 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa'
        port = 'eeeeeeee-eeee-eeee-eeee-eeeeeeeeeeee'
        address = '10.0.0.1'
        requested_networks = [{'uuid': network, 'fixed_ip': address,
                               'port': port}]
        params = {'networks': requested_networks}
        self.body['server']['max_count'] = 2
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self._test_create_extra, params)

    def test_create_multiple_instance_with_neutronv2_port(self):
        network = 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa'
        port = 'eeeeeeee-eeee-eeee-eeee-eeeeeeeeeeee'
        requested_networks = [{'uuid': network, 'port': port}]
        params = {'networks': requested_networks}
        self.body['server']['max_count'] = 2
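        # Multiple servers cannot share one pre-created port: the compute
        # API raises MultiplePortsNotApplicable, which the controller is
        # expected to translate into a 400 below.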
Please launch your" " instance one by one with different ports.") raise exception.MultiplePortsNotApplicable(reason=msg) self.stubs.Set(compute_api.API, 'create', fake_create) self.assertRaises(webob.exc.HTTPBadRequest, self._test_create_extra, params) def test_create_instance_with_neutronv2_not_found_network(self): network = 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa' requested_networks = [{'uuid': network}] params = {'networks': requested_networks} def fake_create(*args, **kwargs): raise exception.NetworkNotFound(network_id=network) self.stubs.Set(compute_api.API, 'create', fake_create) self.assertRaises(webob.exc.HTTPBadRequest, self._test_create_extra, params) def test_create_instance_with_neutronv2_port_not_found(self): network = 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa' port = 'eeeeeeee-eeee-eeee-eeee-eeeeeeeeeeee' requested_networks = [{'uuid': network, 'port': port}] params = {'networks': requested_networks} def fake_create(*args, **kwargs): raise exception.PortNotFound(port_id=port) self.stubs.Set(compute_api.API, 'create', fake_create) self.assertRaises(webob.exc.HTTPBadRequest, self._test_create_extra, params) @mock.patch.object(compute_api.API, 'create') def test_create_instance_with_network_ambiguous(self, mock_create): mock_create.side_effect = exception.NetworkAmbiguous() self.assertRaises(webob.exc.HTTPConflict, self._test_create_extra, {}) @mock.patch.object(compute_api.API, 'create', side_effect=exception.UnableToAutoAllocateNetwork( project_id=FAKE_UUID)) def test_create_instance_with_unable_to_auto_allocate_network(self, mock_create): self.assertRaises(webob.exc.HTTPBadRequest, self._test_create_extra, {}) @mock.patch.object(compute_api.API, 'create', side_effect=exception.ImageNotAuthorized( image_id=FAKE_UUID)) def test_create_instance_with_image_not_authorized(self, mock_create): self.assertRaises(webob.exc.HTTPBadRequest, self._test_create_extra, {}) @mock.patch.object(compute_api.API, 'create', side_effect=exception.InstanceExists( name='instance-name')) def test_create_instance_raise_instance_exists(self, mock_create): self.assertRaises(webob.exc.HTTPConflict, self.controller.create, self.req, body=self.body) @mock.patch.object(compute_api.API, 'create', side_effect=exception.InvalidBDMEphemeralSize) def test_create_instance_raise_invalid_bdm_ephsize(self, mock_create): self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, body=self.body) @mock.patch.object(compute_api.API, 'create', side_effect=exception.InvalidNUMANodesNumber( nodes='-1')) def test_create_instance_raise_invalid_numa_nodes(self, mock_create): self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, body=self.body) @mock.patch.object(compute_api.API, 'create', side_effect=exception.InvalidBDMFormat(details='')) def test_create_instance_raise_invalid_bdm_format(self, mock_create): self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, body=self.body) @mock.patch.object(compute_api.API, 'create', side_effect=exception.InvalidBDMSwapSize) def test_create_instance_raise_invalid_bdm_swapsize(self, mock_create): self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, body=self.body) @mock.patch.object(compute_api.API, 'create', side_effect=exception.InvalidBDM) def test_create_instance_raise_invalid_bdm(self, mock_create): self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, body=self.body) @mock.patch.object(compute_api.API, 'create', side_effect=exception.ImageBadRequest( image_id='dummy', 
    @mock.patch.object(compute_api.API, 'create',
                       side_effect=exception.ImageBadRequest(
                           image_id='dummy', response='dummy'))
    def test_create_instance_raise_image_bad_request(self, mock_create):
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.controller.create,
                          self.req, body=self.body)

    def test_create_instance_invalid_availability_zone(self):
        self.body['server']['availability_zone'] = 'invalid::::zone'
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.controller.create,
                          self.req, body=self.body)

    @mock.patch.object(compute_api.API, 'create',
                       side_effect=exception.FixedIpNotFoundForAddress(
                           address='dummy'))
    def test_create_instance_raise_fixed_ip_not_found_bad_request(self,
                                                                  mock_create):
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.controller.create,
                          self.req, body=self.body)

    @mock.patch('nova.virt.hardware.numa_get_constraints',
                side_effect=exception.CPUThreadPolicyConfigurationInvalid())
    def test_create_instance_raise_cpu_thread_policy_configuration_invalid(
            self, mock_numa):
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.controller.create,
                          self.req, body=self.body)

    @mock.patch('nova.virt.hardware.numa_get_constraints',
                side_effect=exception.ImageCPUPinningForbidden())
    def test_create_instance_raise_image_cpu_pinning_forbidden(
            self, mock_numa):
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.controller.create,
                          self.req, body=self.body)

    @mock.patch('nova.virt.hardware.numa_get_constraints',
                side_effect=exception.ImageCPUThreadPolicyForbidden())
    def test_create_instance_raise_image_cpu_thread_policy_forbidden(
            self, mock_numa):
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.controller.create,
                          self.req, body=self.body)

    @mock.patch('nova.virt.hardware.numa_get_constraints',
                side_effect=exception.MemoryPageSizeInvalid(pagesize='-1'))
    def test_create_instance_raise_memory_page_size_invalid(self, mock_numa):
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.controller.create,
                          self.req, body=self.body)

    @mock.patch('nova.virt.hardware.numa_get_constraints',
                side_effect=exception.MemoryPageSizeForbidden(pagesize='1',
                                                              against='2'))
    def test_create_instance_raise_memory_page_size_forbidden(self,
                                                              mock_numa):
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.controller.create,
                          self.req, body=self.body)

    @mock.patch('nova.virt.hardware.numa_get_constraints',
                side_effect=exception.RealtimeConfigurationInvalid())
    def test_create_instance_raise_realtime_configuration_invalid(
            self, mock_numa):
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.controller.create,
                          self.req, body=self.body)

    @mock.patch('nova.virt.hardware.numa_get_constraints',
                side_effect=exception.RealtimeMaskNotFoundOrInvalid())
    def test_create_instance_raise_realtime_mask_not_found_or_invalid(
            self, mock_numa):
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.controller.create,
                          self.req, body=self.body)

    @mock.patch.object(compute_api.API, 'create')
    def test_create_instance_invalid_personality(self, mock_create):
        # Personality files have been deprecated as of v2.57
        self.req.api_version_request = \
            api_version_request.APIVersionRequest('2.56')

        codec = 'utf8'
        content = encodeutils.safe_encode(
            'b25zLiINCg0KLVJpY2hhcmQgQ$$%QQmFjaA==')
        start_position = 19
        end_position = 20
        msg = 'invalid start byte'
        mock_create.side_effect = UnicodeDecodeError(codec, content,
                                                     start_position,
                                                     end_position, msg)

        self.body['server']['personality'] = [
            {
                "path": "/etc/banner.txt",
                "contents": "b25zLiINCg0KLVJpY2hhcmQgQ$$%QQmFjaA==",
            },
        ]
        self.req.body = jsonutils.dump_as_bytes(self.body)
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.controller.create, self.req, body=self.body)
    def test_create_instance_without_personality_should_get_empty_list(self):
        # Personality files have been deprecated as of v2.57
        self.req.api_version_request = \
            api_version_request.APIVersionRequest('2.56')

        old_create = compute_api.API.create

        def create(*args, **kwargs):
            self.assertEqual([], kwargs['injected_files'])
            return old_create(*args, **kwargs)

        self.stub_out('nova.compute.api.API.create', create)
        self._test_create_instance()

    def test_create_instance_with_extra_personality_arg(self):
        # Personality files have been deprecated as of v2.57
        self.req.api_version_request = \
            api_version_request.APIVersionRequest('2.56')

        self.body['server']['personality'] = [
            {
                "path": "/etc/banner.txt",
                "contents": "b25zLiINCg0KLVJpY2hhcmQgQ$$%QQmFjaA==",
                "extra_arg": "extra value"
            },
        ]
        self.assertRaises(exception.ValidationError,
                          self.controller.create,
                          self.req, body=self.body)

    @mock.patch.object(compute_api.API, 'create',
                       side_effect=exception.PciRequestAliasNotDefined(
                           alias='fake_name'))
    def test_create_instance_pci_alias_not_defined(self, mock_create):
        # Tests that PciRequestAliasNotDefined is translated to a 400 error.
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self._test_create_extra, {})


class ServersControllerCreateTestV219(ServersControllerCreateTest):
    def _create_instance_req(self, set_desc, desc=None):
        if set_desc:
            self.body['server']['description'] = desc
        self.req.body = jsonutils.dump_as_bytes(self.body)
        self.req.api_version_request = \
            api_version_request.APIVersionRequest('2.19')

    def test_create_instance_with_description(self):
        self._create_instance_req(True, 'server_desc')
        # The fact that the action doesn't raise is enough validation
        self.controller.create(self.req, body=self.body).obj

    def test_create_instance_with_none_description(self):
        self._create_instance_req(True)
        # The fact that the action doesn't raise is enough validation
        self.controller.create(self.req, body=self.body).obj

    def test_create_instance_with_empty_description(self):
        self._create_instance_req(True, '')
        # The fact that the action doesn't raise is enough validation
        self.controller.create(self.req, body=self.body).obj

    def test_create_instance_without_description(self):
        self._create_instance_req(False)
        # The fact that the action doesn't raise is enough validation
        self.controller.create(self.req, body=self.body).obj

    def test_create_instance_description_too_long(self):
        self._create_instance_req(True, 'X' * 256)
        self.assertRaises(exception.ValidationError, self.controller.create,
                          self.req, body=self.body)

    def test_create_instance_description_invalid(self):
        self._create_instance_req(True, "abc\0ddef")
        self.assertRaises(exception.ValidationError, self.controller.create,
                          self.req, body=self.body)


class ServersControllerCreateTestV232(test.NoDBTestCase):
    def setUp(self):
        super(ServersControllerCreateTestV232, self).setUp()

        self.flags(use_neutron=True)

        self.controller = servers.ServersController()

        self.body = {
            'server': {
                'name': 'device-tagging-server',
                'imageRef': '6b0edabb-8cde-4684-a3f4-978960a51378',
                'flavorRef': '2',
                'networks': [{
                    'uuid': 'ff608d40-75e9-48cb-b745-77bb55b5eaf2'
                }],
                'block_device_mapping_v2': [{
                    'uuid': '70a599e0-31e7-49b7-b260-868f441e862b',
                    'source_type': 'image',
                    'destination_type': 'volume',
                    'boot_index': 0,
                    'volume_size': '1'
                }]
            }
        }

        self.req = fakes.HTTPRequestV21.blank('/fake/servers', version='2.32')
        self.req.method = 'POST'
        self.req.headers['content-type'] = 'application/json'

    def _create_server(self):
        self.req.body = jsonutils.dump_as_bytes(self.body)
        self.controller.create(self.req, body=self.body)
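    # Note: device tags (microversion 2.32) are only honored once every
    # compute service is new enough; the tests below pin the minimum
    # service version to 13 (too old, expect 400) or 14 (new enough) via
    # get_minimum_version_all_cells to cover both sides of that gate.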
    def test_create_server_no_tags_old_compute(self):
        with test.nested(
            mock.patch('nova.objects.service.get_minimum_version_all_cells',
                       return_value=13),
            mock.patch.object(nova.compute.flavors, 'get_flavor_by_flavor_id',
                              return_value=objects.Flavor()),
            mock.patch.object(
                compute_api.API, 'create',
                return_value=(
                    [{'uuid': 'f60012d9-5ba4-4547-ab48-f94ff7e62d4e'}], 1)),
        ):
            self._create_server()

    @mock.patch('nova.objects.service.get_minimum_version_all_cells',
                return_value=13)
    def test_create_server_tagged_nic_old_compute_fails(self, get_min_ver):
        self.body['server']['networks'][0]['tag'] = 'foo'
        self.assertRaises(webob.exc.HTTPBadRequest, self._create_server)

    @mock.patch('nova.objects.service.get_minimum_version_all_cells',
                return_value=13)
    def test_create_server_tagged_bdm_old_compute_fails(self, get_min_ver):
        self.body['server']['block_device_mapping_v2'][0]['tag'] = 'foo'
        self.assertRaises(webob.exc.HTTPBadRequest, self._create_server)

    def test_create_server_tagged_nic_new_compute(self):
        with test.nested(
            mock.patch('nova.objects.service.get_minimum_version_all_cells',
                       return_value=14),
            mock.patch.object(nova.compute.flavors, 'get_flavor_by_flavor_id',
                              return_value=objects.Flavor()),
            mock.patch.object(
                compute_api.API, 'create',
                return_value=(
                    [{'uuid': 'f60012d9-5ba4-4547-ab48-f94ff7e62d4e'}], 1)),
        ):
            self.body['server']['networks'][0]['tag'] = 'foo'
            self._create_server()

    def test_create_server_tagged_bdm_new_compute(self):
        with test.nested(
            mock.patch('nova.objects.service.get_minimum_version_all_cells',
                       return_value=14),
            mock.patch.object(nova.compute.flavors, 'get_flavor_by_flavor_id',
                              return_value=objects.Flavor()),
            mock.patch.object(
                compute_api.API, 'create',
                return_value=(
                    [{'uuid': 'f60012d9-5ba4-4547-ab48-f94ff7e62d4e'}], 1)),
        ):
            self.body['server']['block_device_mapping_v2'][0]['tag'] = 'foo'
            self._create_server()


class ServersControllerCreateTestV237(test.NoDBTestCase):
    """Tests server create scenarios with the v2.37 microversion.

    These tests are mostly about testing the validation on the 2.37
    server create request with emphasis on negative scenarios.
    """

    def setUp(self):
        super(ServersControllerCreateTestV237, self).setUp()
        # Set the use_neutron flag to process requested networks.
        self.flags(use_neutron=True)
        # Create the server controller.
        self.controller = servers.ServersController()
        # Define a basic server create request body which tests can
        # customize.
        self.body = {
            'server': {
                'name': 'auto-allocate-test',
                'imageRef': '6b0edabb-8cde-4684-a3f4-978960a51378',
                'flavorRef': '2',
            },
        }
        # Create a fake request using the 2.37 microversion.
        self.req = fakes.HTTPRequestV21.blank('/fake/servers', version='2.37')
        self.req.method = 'POST'
        self.req.headers['content-type'] = 'application/json'

    def _create_server(self, networks):
        self.body['server']['networks'] = networks
        self.req.body = jsonutils.dump_as_bytes(self.body)
        return self.controller.create(self.req, body=self.body).obj['server']

    def test_create_server_auth_pre_2_37_fails(self):
        """Negative test to make sure you can't pass 'auto' before 2.37"""
        self.req.api_version_request = \
            api_version_request.APIVersionRequest('2.36')
        self.assertRaises(exception.ValidationError, self._create_server,
                          'auto')

    def test_create_server_no_requested_networks_fails(self):
        """Negative test for a server create request with no networks
        requested which should fail with the v2.37 schema validation.
        """
""" self.assertRaises(exception.ValidationError, self._create_server, None) def test_create_server_network_id_not_uuid_fails(self): """Negative test for a server create request where the requested network id is not one of the auto/none enums. """ self.assertRaises(exception.ValidationError, self._create_server, 'not-auto-or-none') def test_create_server_network_id_empty_string_fails(self): """Negative test for a server create request where the requested network id is the empty string. """ self.assertRaises(exception.ValidationError, self._create_server, '') @mock.patch.object(context.RequestContext, 'can') def test_create_server_networks_none_skip_policy(self, context_can): """Test to ensure skip checking policy rule create:attach_network, when networks is 'none' which means no network will be allocated. """ with test.nested( mock.patch('nova.objects.service.get_minimum_version_all_cells', return_value=14), mock.patch.object(nova.compute.flavors, 'get_flavor_by_flavor_id', return_value=objects.Flavor()), mock.patch.object( compute_api.API, 'create', return_value=( [{'uuid': 'f9bccadf-5ab1-4a56-9156-c00c178fe5f5'}], 1)), ): network_policy = server_policies.SERVERS % 'create:attach_network' self._create_server('none') call_list = [c for c in context_can.call_args_list if c[0][0] == network_policy] self.assertEqual(0, len(call_list)) @mock.patch.object(objects.Flavor, 'get_by_flavor_id', side_effect=exception.FlavorNotFound(flavor_id='2')) def test_create_server_auto_flavornotfound(self, get_flavor): """Tests that requesting auto networking is OK. This test short-circuits on a FlavorNotFound error. """ self.useFixture(nova_fixtures.AllServicesCurrent()) ex = self.assertRaises( webob.exc.HTTPBadRequest, self._create_server, 'auto') # make sure it was a flavor not found error and not something else self.assertIn('Flavor 2 could not be found', six.text_type(ex)) @mock.patch.object(objects.Flavor, 'get_by_flavor_id', side_effect=exception.FlavorNotFound(flavor_id='2')) def test_create_server_none_flavornotfound(self, get_flavor): """Tests that requesting none for networking is OK. This test short-circuits on a FlavorNotFound error. """ self.useFixture(nova_fixtures.AllServicesCurrent()) ex = self.assertRaises( webob.exc.HTTPBadRequest, self._create_server, 'none') # make sure it was a flavor not found error and not something else self.assertIn('Flavor 2 could not be found', six.text_type(ex)) @mock.patch.object(objects.Flavor, 'get_by_flavor_id', side_effect=exception.FlavorNotFound(flavor_id='2')) def test_create_server_multiple_specific_nics_flavornotfound(self, get_flavor): """Tests that requesting multiple specific network IDs is OK. This test short-circuits on a FlavorNotFound error. """ self.useFixture(nova_fixtures.AllServicesCurrent()) ex = self.assertRaises( webob.exc.HTTPBadRequest, self._create_server, [{'uuid': 'e3b686a8-b91d-4a61-a3fc-1b74bb619ddb'}, {'uuid': 'e0f00941-f85f-46ec-9315-96ded58c2f14'}]) # make sure it was a flavor not found error and not something else self.assertIn('Flavor 2 could not be found', six.text_type(ex)) def test_create_server_legacy_neutron_network_id_fails(self): """Tests that we no longer support the legacy br- format for a network id. 
""" uuid = 'br-00000000-0000-0000-0000-000000000000' self.assertRaises(exception.ValidationError, self._create_server, [{'uuid': uuid}]) @ddt.ddt class ServersControllerCreateTestV252(test.NoDBTestCase): def setUp(self): super(ServersControllerCreateTestV252, self).setUp() self.controller = servers.ServersController() self.body = { 'server': { 'name': 'device-tagging-server', 'imageRef': '6b0edabb-8cde-4684-a3f4-978960a51378', 'flavorRef': '2', 'networks': [{ 'uuid': 'ff608d40-75e9-48cb-b745-77bb55b5eaf2' }] } } self.req = fakes.HTTPRequestV21.blank('/fake/servers', version='2.52') self.req.method = 'POST' self.req.headers['content-type'] = 'application/json' def _create_server(self, tags): self.body['server']['tags'] = tags self.req.body = jsonutils.dump_as_bytes(self.body) return self.controller.create(self.req, body=self.body).obj['server'] def test_create_server_with_tags_pre_2_52_fails(self): """Negative test to make sure you can't pass 'tags' before 2.52""" self.req.api_version_request = \ api_version_request.APIVersionRequest('2.51') self.assertRaises( exception.ValidationError, self._create_server, ['tag1']) @ddt.data([','], ['/'], ['a' * (tag.MAX_TAG_LENGTH + 1)], ['a'] * (instance_obj.MAX_TAG_COUNT + 1), [''], [1, 2, 3], {'tag': 'tag'}) def test_create_server_with_tags_incorrect_tags(self, tags): """Negative test to incorrect tags are not allowed""" self.req.api_version_request = \ api_version_request.APIVersionRequest('2.52') self.assertRaises( exception.ValidationError, self._create_server, tags) class ServersControllerCreateTestV257(test.NoDBTestCase): """Tests that trying to create a server with personality files using microversion 2.57 fails. """ def test_create_server_with_personality_fails(self): controller = servers.ServersController() body = { 'server': { 'name': 'no-personality-files', 'imageRef': '6b0edabb-8cde-4684-a3f4-978960a51378', 'flavorRef': '2', 'networks': 'auto', 'personality': [{ 'path': '/path/to/file', 'contents': 'ZWNobyAiaGVsbG8gd29ybGQi' }] } } req = fakes.HTTPRequestV21.blank('/servers', version='2.57') req.body = jsonutils.dump_as_bytes(body) req.method = 'POST' req.headers['content-type'] = 'application/json' ex = self.assertRaises( exception.ValidationError, controller.create, req, body=body) self.assertIn('personality', six.text_type(ex)) @mock.patch('nova.compute.utils.check_num_instances_quota', new=lambda *args, **kwargs: 1) class ServersControllerCreateTestV260(test.NoDBTestCase): """Negative tests for creating a server with a multiattach volume.""" def setUp(self): super(ServersControllerCreateTestV260, self).setUp() self.useFixture(nova_fixtures.NoopQuotaDriverFixture()) self.controller = servers.ServersController() get_flavor_mock = mock.patch( 'nova.compute.flavors.get_flavor_by_flavor_id', return_value=fake_flavor.fake_flavor_obj( context.get_admin_context(), flavorid='1')) get_flavor_mock.start() self.addCleanup(get_flavor_mock.stop) reqspec_create_mock = mock.patch( 'nova.objects.RequestSpec.create') reqspec_create_mock.start() self.addCleanup(reqspec_create_mock.stop) volume_get_mock = mock.patch( 'nova.volume.cinder.API.get', return_value={'id': uuids.fake_volume_id, 'multiattach': True}) volume_get_mock.start() self.addCleanup(volume_get_mock.stop) def _post_server(self, version=None): body = { 'server': { 'name': 'multiattach', 'flavorRef': '1', 'networks': 'none', 'block_device_mapping_v2': [{ 'uuid': uuids.fake_volume_id, 'source_type': 'volume', 'destination_type': 'volume', 'boot_index': 0, 'delete_on_termination': True}] } } req 
    def _post_server(self, version=None):
        body = {
            'server': {
                'name': 'multiattach',
                'flavorRef': '1',
                'networks': 'none',
                'block_device_mapping_v2': [{
                    'uuid': uuids.fake_volume_id,
                    'source_type': 'volume',
                    'destination_type': 'volume',
                    'boot_index': 0,
                    'delete_on_termination': True}]
            }
        }
        req = fakes.HTTPRequestV21.blank(
            '/servers', version=version or '2.60')
        req.body = jsonutils.dump_as_bytes(body)
        req.method = 'POST'
        req.headers['content-type'] = 'application/json'
        return self.controller.create(req, body=body)

    def test_create_server_with_multiattach_fails_old_microversion(self):
        """Tests the case that the user tries to boot from volume with a
        multiattach volume but before using microversion 2.60.
        """
        self.useFixture(nova_fixtures.AllServicesCurrent())
        ex = self.assertRaises(webob.exc.HTTPBadRequest,
                               self._post_server, '2.59')
        self.assertIn('Multiattach volumes are only supported starting with '
                      'compute API version 2.60', six.text_type(ex))

    @mock.patch('nova.objects.service.get_minimum_version_all_cells',
                return_value=compute_api.MIN_COMPUTE_MULTIATTACH - 1)
    def test_create_server_with_multiattach_fails_not_available(
            self, mock_get_min_version_all_cells):
        """Tests the case that the user tries to boot from volume with a
        multiattach volume but before the deployment is fully upgraded.
        """
        ex = self.assertRaises(webob.exc.HTTPConflict, self._post_server)
        self.assertIn('Multiattach volume support is not yet available',
                      six.text_type(ex))


class ServersControllerCreateTestWithMock(test.TestCase):
    image_uuid = '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6'
    flavor_ref = 'http://localhost/123/flavors/3'

    def setUp(self):
        """Shared implementation for tests below that create instance."""
        super(ServersControllerCreateTestWithMock, self).setUp()

        self.flags(enable_instance_password=True, group='api')
        self.instance_cache_num = 0
        self.instance_cache_by_id = {}
        self.instance_cache_by_uuid = {}

        self.controller = servers.ServersController()

        self.body = {
            'server': {
                'name': 'server_test',
                'imageRef': self.image_uuid,
                'flavorRef': self.flavor_ref,
                'metadata': {
                    'hello': 'world',
                    'open': 'stack',
                },
            },
        }
        self.req = fakes.HTTPRequest.blank('/fake/servers')
        self.req.method = 'POST'
        self.req.headers["content-type"] = "application/json"

    def _test_create_extra(self, params, no_image=False):
        self.body['server']['flavorRef'] = 2
        if no_image:
            self.body['server'].pop('imageRef', None)
        self.body['server'].update(params)
        self.req.body = jsonutils.dump_as_bytes(self.body)
        self.req.headers["content-type"] = "application/json"
        self.controller.create(self.req, body=self.body).obj['server']

    @mock.patch.object(compute_api.API, 'create')
    def test_create_instance_with_neutronv2_fixed_ip_already_in_use(self,
            create_mock):
        network = 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa'
        address = '10.0.2.3'
        requested_networks = [{'uuid': network, 'fixed_ip': address}]
        params = {'networks': requested_networks}
        create_mock.side_effect = exception.FixedIpAlreadyInUse(
            address=address,
            instance_uuid=network)
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self._test_create_extra, params)
        self.assertEqual(1, len(create_mock.call_args_list))

    @mock.patch.object(compute_api.API, 'create')
    def test_create_instance_with_neutronv2_invalid_fixed_ip(self,
                                                             create_mock):
        self.flags(use_neutron=True)
        network = 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa'
        address = '999.0.2.3'
        requested_networks = [{'uuid': network, 'fixed_ip': address}]
        params = {'networks': requested_networks}
        self.assertRaises(exception.ValidationError,
                          self._test_create_extra, params)
        self.assertFalse(create_mock.called)

    @mock.patch.object(compute_api.API, 'create',
                       side_effect=exception.InvalidVolume(reason='error'))
    def test_create_instance_with_invalid_volume_error(self, create_mock):
        # Tests that InvalidVolume is translated to a 400 error.
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self._test_create_extra, {})
class ServersViewBuilderTest(test.TestCase):

    def setUp(self):
        super(ServersViewBuilderTest, self).setUp()
        self.flags(use_ipv6=True)
        self.flags(group='glance', api_servers=['http://localhost:9292'])
        nw_cache_info = self._generate_nw_cache_info()
        db_inst = fakes.stub_instance(
            id=1,
            image_ref="5",
            uuid="deadbeef-feed-edee-beef-d0ea7beefedd",
            display_name="test_server",
            include_fake_metadata=False,
            nw_cache=nw_cache_info)

        privates = ['172.19.0.1']
        publics = ['192.168.0.3']
        public6s = ['b33f::fdee:ddff:fecc:bbaa']

        def nw_info(*args, **kwargs):
            return [(None, {'label': 'public',
                            'ips': [dict(ip=ip) for ip in publics],
                            'ip6s': [dict(ip=ip) for ip in public6s]}),
                    (None, {'label': 'private',
                            'ips': [dict(ip=ip) for ip in privates]})]

        fakes.stub_out_nw_api_get_instance_nw_info(self, nw_info)

        self.uuid = db_inst['uuid']
        self.view_builder = views.servers.ViewBuilder()
        self.request = fakes.HTTPRequestV21.blank("/fake")
        self.request.context = context.RequestContext('fake', 'fake')
        self.instance = fake_instance.fake_instance_obj(
            self.request.context,
            expected_attrs=instance_obj.INSTANCE_DEFAULT_FIELDS,
            **db_inst)
        self.self_link = "http://localhost/v2/fake/servers/%s" % self.uuid
        self.bookmark_link = "http://localhost/fake/servers/%s" % self.uuid

    def _generate_nw_cache_info(self):
        fixed_ipv4 = ('192.168.1.100', '192.168.2.100', '192.168.3.100')
        fixed_ipv6 = ('2001:db8:0:1::1',)

        def _ip(ip):
            return {'address': ip, 'type': 'fixed'}

        nw_cache = [
            {'address': 'aa:aa:aa:aa:aa:aa',
             'id': 1,
             'network': {'bridge': 'br0',
                         'id': 1,
                         'label': 'test1',
                         'subnets': [{'cidr': '192.168.1.0/24',
                                      'ips': [_ip(fixed_ipv4[0])]},
                                     {'cidr': 'b33f::/64',
                                      'ips': [_ip(fixed_ipv6[0])]}]}},
            {'address': 'bb:bb:bb:bb:bb:bb',
             'id': 2,
             'network': {'bridge': 'br0',
                         'id': 1,
                         'label': 'test1',
                         'subnets': [{'cidr': '192.168.2.0/24',
                                      'ips': [_ip(fixed_ipv4[1])]}]}},
            {'address': 'cc:cc:cc:cc:cc:cc',
             'id': 3,
             'network': {'bridge': 'br0',
                         'id': 2,
                         'label': 'test2',
                         'subnets': [{'cidr': '192.168.3.0/24',
                                      'ips': [_ip(fixed_ipv4[2])]}]}}]
        return nw_cache

    def test_get_flavor_valid_instance_type(self):
        flavor_bookmark = "http://localhost/fake/flavors/1"
        expected = {"id": "1",
                    "links": [{"rel": "bookmark",
                               "href": flavor_bookmark}]}
        result = self.view_builder._get_flavor(self.request, self.instance,
                                               False)
        self.assertEqual(result, expected)

    def test_build_server(self):
        expected_server = {
            "server": {
                "id": self.uuid,
                "name": "test_server",
                "links": [
                    {
                        "rel": "self",
                        "href": self.self_link,
                    },
                    {
                        "rel": "bookmark",
                        "href": self.bookmark_link,
                    },
                ],
            }
        }

        output = self.view_builder.basic(self.request, self.instance)
        self.assertThat(output, matchers.DictMatches(expected_server))

    def test_build_server_with_project_id(self):
        expected_server = {
            "server": {
                "id": self.uuid,
                "name": "test_server",
                "links": [
                    {
                        "rel": "self",
                        "href": self.self_link,
                    },
                    {
                        "rel": "bookmark",
                        "href": self.bookmark_link,
                    },
                ],
            }
        }

        output = self.view_builder.basic(self.request, self.instance)
        self.assertThat(output, matchers.DictMatches(expected_server))
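    # Note: the detail-view tests below all check the 'addresses' section
    # against the network cache built in _generate_nw_cache_info(): entries
    # are grouped by network label ('test1', 'test2'), and each address
    # carries the OS-EXT-IPS type and OS-EXT-IPS-MAC attributes of its vif.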
}, "flavor": { "id": "1", "links": [ { "rel": "bookmark", "href": flavor_bookmark, }, ], }, "addresses": { 'test1': [ {'version': 4, 'addr': '192.168.1.100', 'OS-EXT-IPS:type': 'fixed', 'OS-EXT-IPS-MAC:mac_addr': 'aa:aa:aa:aa:aa:aa'}, {'version': 6, 'addr': '2001:db8:0:1::1', 'OS-EXT-IPS:type': 'fixed', 'OS-EXT-IPS-MAC:mac_addr': 'aa:aa:aa:aa:aa:aa'}, {'version': 4, 'addr': '192.168.2.100', 'OS-EXT-IPS:type': 'fixed', 'OS-EXT-IPS-MAC:mac_addr': 'bb:bb:bb:bb:bb:bb'} ], 'test2': [ {'version': 4, 'addr': '192.168.3.100', 'OS-EXT-IPS:type': 'fixed', 'OS-EXT-IPS-MAC:mac_addr': 'cc:cc:cc:cc:cc:cc'}, ] }, "metadata": {}, "links": [ { "rel": "self", "href": self.self_link, }, { "rel": "bookmark", "href": self.bookmark_link, }, ], "OS-DCF:diskConfig": "MANUAL", "accessIPv4": '', "accessIPv6": '', } } output = self.view_builder.show(self.request, self.instance) self.assertThat(output, matchers.DictMatches(expected_server)) def test_build_server_detail_with_fault(self): self.instance['vm_state'] = vm_states.ERROR self.instance['fault'] = fake_instance.fake_fault_obj( self.request.context, self.uuid) image_bookmark = "http://localhost/fake/images/5" flavor_bookmark = "http://localhost/fake/flavors/1" expected_server = { "server": { "id": self.uuid, "user_id": "fake_user", "tenant_id": "fake_project", "updated": "2010-11-11T11:00:00Z", "created": "2010-10-10T12:00:00Z", "name": "test_server", "status": "ERROR", "hostId": '', "image": { "id": "5", "links": [ { "rel": "bookmark", "href": image_bookmark, }, ], }, "flavor": { "id": "1", "links": [ { "rel": "bookmark", "href": flavor_bookmark, }, ], }, "addresses": { 'test1': [ {'version': 4, 'addr': '192.168.1.100', 'OS-EXT-IPS:type': 'fixed', 'OS-EXT-IPS-MAC:mac_addr': 'aa:aa:aa:aa:aa:aa'}, {'version': 6, 'addr': '2001:db8:0:1::1', 'OS-EXT-IPS:type': 'fixed', 'OS-EXT-IPS-MAC:mac_addr': 'aa:aa:aa:aa:aa:aa'}, {'version': 4, 'addr': '192.168.2.100', 'OS-EXT-IPS:type': 'fixed', 'OS-EXT-IPS-MAC:mac_addr': 'bb:bb:bb:bb:bb:bb'} ], 'test2': [ {'version': 4, 'addr': '192.168.3.100', 'OS-EXT-IPS:type': 'fixed', 'OS-EXT-IPS-MAC:mac_addr': 'cc:cc:cc:cc:cc:cc'}, ] }, "metadata": {}, "links": [ { "rel": "self", "href": self.self_link, }, { "rel": "bookmark", "href": self.bookmark_link, }, ], "fault": { "code": 404, "created": "2010-10-10T12:00:00Z", "message": "HTTPNotFound", "details": "Stock details for test", }, "OS-DCF:diskConfig": "MANUAL", "accessIPv4": '', "accessIPv6": '', } } self.request.context = context.RequestContext('fake', 'fake') output = self.view_builder.show(self.request, self.instance) self.assertThat(output, matchers.DictMatches(expected_server)) def test_build_server_detail_with_fault_that_has_been_deleted(self): self.instance['deleted'] = 1 self.instance['vm_state'] = vm_states.ERROR fault = fake_instance.fake_fault_obj(self.request.context, self.uuid, code=500, message="No valid host was found") self.instance['fault'] = fault expected_fault = {"code": 500, "created": "2010-10-10T12:00:00Z", "message": "No valid host was found"} self.request.context = context.RequestContext('fake', 'fake') output = self.view_builder.show(self.request, self.instance) # Regardless of vm_state deleted servers should be DELETED self.assertEqual("DELETED", output['server']['status']) self.assertThat(output['server']['fault'], matchers.DictMatches(expected_fault)) @mock.patch('nova.objects.InstanceMapping.get_by_instance_uuid') def test_build_server_detail_with_fault_no_instance_mapping(self, mock_im): self.instance['vm_state'] = vm_states.ERROR mock_im.side_effect = 
        mock_im.side_effect = exception.InstanceMappingNotFound(uuid='foo')
        self.request.context = context.RequestContext('fake', 'fake')
        self.view_builder.show(self.request, self.instance)
        mock_im.assert_called_once_with(mock.ANY, self.uuid)

    @mock.patch('nova.objects.InstanceMapping.get_by_instance_uuid')
    def test_build_server_detail_with_fault_loaded(self, mock_im):
        self.instance['vm_state'] = vm_states.ERROR
        fault = fake_instance.fake_fault_obj(self.request.context,
                                             self.uuid, code=500,
                                             message="No valid host was found")
        self.instance['fault'] = fault

        self.request.context = context.RequestContext('fake', 'fake')
        self.view_builder.show(self.request, self.instance)
        self.assertFalse(mock_im.called)

    def test_build_server_detail_with_fault_no_details_not_admin(self):
        self.instance['vm_state'] = vm_states.ERROR
        self.instance['fault'] = fake_instance.fake_fault_obj(
            self.request.context, self.uuid, code=500, message='Error')

        expected_fault = {"code": 500,
                          "created": "2010-10-10T12:00:00Z",
                          "message": "Error"}

        self.request.context = context.RequestContext('fake', 'fake')
        output = self.view_builder.show(self.request, self.instance)
        self.assertThat(output['server']['fault'],
                        matchers.DictMatches(expected_fault))

    def test_build_server_detail_with_fault_admin(self):
        self.instance['vm_state'] = vm_states.ERROR
        self.instance['fault'] = fake_instance.fake_fault_obj(
            self.request.context, self.uuid, code=500, message='Error')

        expected_fault = {"code": 500,
                          "created": "2010-10-10T12:00:00Z",
                          "message": "Error",
                          'details': 'Stock details for test'}

        self.request.environ['nova.context'].is_admin = True
        output = self.view_builder.show(self.request, self.instance)
        self.assertThat(output['server']['fault'],
                        matchers.DictMatches(expected_fault))

    def test_build_server_detail_with_fault_no_details_admin(self):
        self.instance['vm_state'] = vm_states.ERROR
        self.instance['fault'] = fake_instance.fake_fault_obj(
            self.request.context, self.uuid, code=500, message='Error',
            details='')

        expected_fault = {"code": 500,
                          "created": "2010-10-10T12:00:00Z",
                          "message": "Error"}

        self.request.environ['nova.context'].is_admin = True
        output = self.view_builder.show(self.request, self.instance)
        self.assertThat(output['server']['fault'],
                        matchers.DictMatches(expected_fault))

    def test_build_server_detail_with_fault_but_active(self):
        self.instance['vm_state'] = vm_states.ACTIVE
        self.instance['progress'] = 100
        self.instance['fault'] = fake_instance.fake_fault_obj(
            self.request.context, self.uuid)

        output = self.view_builder.show(self.request, self.instance)
        self.assertNotIn('fault', output['server'])
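    # Note: taken together, the fault tests encode the view builder's
    # disclosure rules - a fault only appears for non-ACTIVE servers, and
    # the free-form 'details' field of a 500 fault is shown to admins only
    # (the 404 fault in test_build_server_detail_with_fault includes its
    # details even for a regular user).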
'192.168.2.100', 'OS-EXT-IPS:type': 'fixed', 'OS-EXT-IPS-MAC:mac_addr': 'bb:bb:bb:bb:bb:bb'} ], 'test2': [ {'version': 4, 'addr': '192.168.3.100', 'OS-EXT-IPS:type': 'fixed', 'OS-EXT-IPS-MAC:mac_addr': 'cc:cc:cc:cc:cc:cc'}, ] }, "metadata": {}, "links": [ { "rel": "self", "href": self.self_link, }, { "rel": "bookmark", "href": self.bookmark_link, }, ], "OS-DCF:diskConfig": "MANUAL", "accessIPv4": '', "accessIPv6": '', } } output = self.view_builder.show(self.request, self.instance) self.assertThat(output, matchers.DictMatches(expected_server)) def test_build_server_detail_with_metadata(self): metadata = [] metadata.append(models.InstanceMetadata(key="Open", value="Stack")) metadata = nova_utils.metadata_to_dict(metadata) self.instance['metadata'] = metadata image_bookmark = "http://localhost/fake/images/5" flavor_bookmark = "http://localhost/fake/flavors/1" expected_server = { "server": { "id": self.uuid, "user_id": "fake_user", "tenant_id": "fake_project", "updated": "2010-11-11T11:00:00Z", "created": "2010-10-10T12:00:00Z", "progress": 0, "name": "test_server", "status": "ACTIVE", "hostId": '', "image": { "id": "5", "links": [ { "rel": "bookmark", "href": image_bookmark, }, ], }, "flavor": { "id": "1", "links": [ { "rel": "bookmark", "href": flavor_bookmark, }, ], }, "addresses": { 'test1': [ {'version': 4, 'addr': '192.168.1.100', 'OS-EXT-IPS:type': 'fixed', 'OS-EXT-IPS-MAC:mac_addr': 'aa:aa:aa:aa:aa:aa'}, {'version': 6, 'addr': '2001:db8:0:1::1', 'OS-EXT-IPS:type': 'fixed', 'OS-EXT-IPS-MAC:mac_addr': 'aa:aa:aa:aa:aa:aa'}, {'version': 4, 'addr': '192.168.2.100', 'OS-EXT-IPS:type': 'fixed', 'OS-EXT-IPS-MAC:mac_addr': 'bb:bb:bb:bb:bb:bb'} ], 'test2': [ {'version': 4, 'addr': '192.168.3.100', 'OS-EXT-IPS:type': 'fixed', 'OS-EXT-IPS-MAC:mac_addr': 'cc:cc:cc:cc:cc:cc'}, ] }, "metadata": {"Open": "Stack"}, "links": [ { "rel": "self", "href": self.self_link, }, { "rel": "bookmark", "href": self.bookmark_link, }, ], "OS-DCF:diskConfig": "MANUAL", "accessIPv4": '', "accessIPv6": '', } } output = self.view_builder.show(self.request, self.instance) self.assertThat(output, matchers.DictMatches(expected_server)) class ServersAllExtensionsTestCase(test.TestCase): """Servers tests using default API router with all extensions enabled. The intent here is to catch cases where extensions end up throwing an exception because of a malformed request before the core API gets a chance to validate the request and return a 422 response. For example, AccessIPsController extends servers.Controller:: | @wsgi.extends | def create(self, req, resp_obj, body): | context = req.environ['nova.context'] | if authorize(context) and 'server' in resp_obj.obj: | resp_obj.attach(xml=AccessIPTemplate()) | server = resp_obj.obj['server'] | self._extend_server(req, server) we want to ensure that the extension isn't barfing on an invalid body. """ def setUp(self): super(ServersAllExtensionsTestCase, self).setUp() self.app = compute.APIRouterV21() def test_create_missing_server(self): # Test create with malformed body. def fake_create(*args, **kwargs): raise test.TestingException("Should not reach the compute API.") self.stubs.Set(compute_api.API, 'create', fake_create) req = fakes.HTTPRequestV21.blank('/fake/servers') req.method = 'POST' req.content_type = 'application/json' body = {'foo': {'a': 'b'}} req.body = jsonutils.dump_as_bytes(body) res = req.get_response(self.app) self.assertEqual(400, res.status_int) def test_update_missing_server(self): # Test update with malformed body. 
        req = fakes.HTTPRequestV21.blank('/fake/servers/1')
        req.method = 'PUT'
        req.content_type = 'application/json'
        body = {'foo': {'a': 'b'}}
        req.body = jsonutils.dump_as_bytes(body)

        with mock.patch('nova.objects.Instance.save') as mock_save:
            res = req.get_response(self.app)
            self.assertFalse(mock_save.called)
        self.assertEqual(400, res.status_int)


class ServersInvalidRequestTestCase(test.TestCase):
    """Tests of places we throw 400 Bad Request from."""

    def setUp(self):
        super(ServersInvalidRequestTestCase, self).setUp()
        self.controller = servers.ServersController()

    def _invalid_server_create(self, body):
        req = fakes.HTTPRequestV21.blank('/fake/servers')
        req.method = 'POST'

        self.assertRaises(exception.ValidationError,
                          self.controller.create, req, body=body)

    def test_create_server_no_body(self):
        self._invalid_server_create(body=None)

    def test_create_server_missing_server(self):
        body = {'foo': {'a': 'b'}}
        self._invalid_server_create(body=body)

    def test_create_server_malformed_entity(self):
        body = {'server': 'string'}
        self._invalid_server_create(body=body)

    def _unprocessable_server_update(self, body):
        req = fakes.HTTPRequestV21.blank('/fake/servers/%s' % FAKE_UUID)
        req.method = 'PUT'

        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.controller.update, req, FAKE_UUID, body=body)

    def test_update_server_no_body(self):
        self._invalid_server_create(body=None)

    def test_update_server_missing_server(self):
        body = {'foo': {'a': 'b'}}
        self._invalid_server_create(body=body)

    def test_create_update_malformed_entity(self):
        body = {'server': 'string'}
        self._invalid_server_create(body=body)


# TODO(alex_xu): There isn't specified file for ips extension. Most of
# unittest related to ips extension is in this file. So put the ips policy
# enforcement tests at here until there is specified file for ips extension.
class IPsPolicyEnforcementV21(test.NoDBTestCase):

    def setUp(self):
        super(IPsPolicyEnforcementV21, self).setUp()
        self.controller = ips.IPsController()
        self.req = fakes.HTTPRequest.blank("/v2/fake")

    def test_index_policy_failed(self):
        rule_name = "os_compute_api:ips:index"
        self.policy.set_rules({rule_name: "project:non_fake"})
        exc = self.assertRaises(
            exception.PolicyNotAuthorized,
            self.controller.index, self.req, fakes.FAKE_UUID)
        self.assertEqual(
            "Policy doesn't allow %s to be performed." % rule_name,
            exc.format_message())

    def test_show_policy_failed(self):
        rule_name = "os_compute_api:ips:show"
        self.policy.set_rules({rule_name: "project:non_fake"})
        exc = self.assertRaises(
            exception.PolicyNotAuthorized,
            self.controller.show, self.req, fakes.FAKE_UUID, fakes.FAKE_UUID)
        self.assertEqual(
            "Policy doesn't allow %s to be performed." % rule_name,
            exc.format_message())


class ServersPolicyEnforcementV21(test.NoDBTestCase):

    def setUp(self):
        super(ServersPolicyEnforcementV21, self).setUp()
        self.useFixture(nova_fixtures.AllServicesCurrent())
        self.controller = servers.ServersController()
        self.req = fakes.HTTPRequest.blank('')
        self.image_uuid = '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6'

    def _common_policy_check(self, rules, rule_name, func, *arg, **kwarg):
        self.policy.set_rules(rules)
        exc = self.assertRaises(
            exception.PolicyNotAuthorized, func, *arg, **kwarg)
        self.assertEqual(
            "Policy doesn't allow %s to be performed."
% rule_name, exc.format_message()) @mock.patch.object(servers.ServersController, '_get_instance') def test_start_policy_failed(self, _get_instance_mock): _get_instance_mock.return_value = None rule_name = "os_compute_api:servers:start" rule = {rule_name: "project:non_fake"} self._common_policy_check( rule, rule_name, self.controller._start_server, self.req, FAKE_UUID, body={}) @mock.patch.object(servers.ServersController, '_get_instance') def test_trigger_crash_dump_policy_failed_with_other_project( self, _get_instance_mock): _get_instance_mock.return_value = fake_instance.fake_instance_obj( self.req.environ['nova.context']) rule_name = "os_compute_api:servers:trigger_crash_dump" rule = {rule_name: "project_id:%(project_id)s"} self.req.api_version_request =\ api_version_request.APIVersionRequest('2.17') # Change the project_id in request context. self.req.environ['nova.context'].project_id = 'other-project' self._common_policy_check( rule, rule_name, self.controller._action_trigger_crash_dump, self.req, FAKE_UUID, body={'trigger_crash_dump': None}) @mock.patch('nova.compute.api.API.trigger_crash_dump') @mock.patch.object(servers.ServersController, '_get_instance') def test_trigger_crash_dump_overridden_policy_pass_with_same_project( self, _get_instance_mock, trigger_crash_dump_mock): instance = fake_instance.fake_instance_obj( self.req.environ['nova.context'], project_id=self.req.environ['nova.context'].project_id) _get_instance_mock.return_value = instance rule_name = "os_compute_api:servers:trigger_crash_dump" self.policy.set_rules({rule_name: "project_id:%(project_id)s"}) self.req.api_version_request = ( api_version_request.APIVersionRequest('2.17')) self.controller._action_trigger_crash_dump( self.req, fakes.FAKE_UUID, body={'trigger_crash_dump': None}) trigger_crash_dump_mock.assert_called_once_with( self.req.environ['nova.context'], instance) @mock.patch.object(servers.ServersController, '_get_instance') def test_trigger_crash_dump_overridden_policy_failed_with_other_user( self, _get_instance_mock): _get_instance_mock.return_value = ( fake_instance.fake_instance_obj(self.req.environ['nova.context'])) rule_name = "os_compute_api:servers:trigger_crash_dump" self.policy.set_rules({rule_name: "user_id:%(user_id)s"}) # Change the user_id in request context. self.req.environ['nova.context'].user_id = 'other-user' self.req.api_version_request = ( api_version_request.APIVersionRequest('2.17')) exc = self.assertRaises(exception.PolicyNotAuthorized, self.controller._action_trigger_crash_dump, self.req, fakes.FAKE_UUID, body={'trigger_crash_dump': None}) self.assertEqual( "Policy doesn't allow %s to be performed." 
% rule_name, exc.format_message()) @mock.patch('nova.compute.api.API.trigger_crash_dump') @mock.patch.object(servers.ServersController, '_get_instance') def test_trigger_crash_dump_overridden_policy_pass_with_same_user( self, _get_instance_mock, trigger_crash_dump_mock): instance = fake_instance.fake_instance_obj( self.req.environ['nova.context'], user_id=self.req.environ['nova.context'].user_id) _get_instance_mock.return_value = instance rule_name = "os_compute_api:servers:trigger_crash_dump" self.policy.set_rules({rule_name: "user_id:%(user_id)s"}) self.req.api_version_request = ( api_version_request.APIVersionRequest('2.17')) self.controller._action_trigger_crash_dump( self.req, fakes.FAKE_UUID, body={'trigger_crash_dump': None}) trigger_crash_dump_mock.assert_called_once_with( self.req.environ['nova.context'], instance) def test_index_policy_failed(self): rule_name = "os_compute_api:servers:index" rule = {rule_name: "project:non_fake"} self._common_policy_check( rule, rule_name, self.controller.index, self.req) def test_detail_policy_failed(self): rule_name = "os_compute_api:servers:detail" rule = {rule_name: "project:non_fake"} self._common_policy_check( rule, rule_name, self.controller.detail, self.req) def test_detail_get_tenants_policy_failed(self): req = fakes.HTTPRequest.blank('') req.GET["all_tenants"] = "True" rule_name = "os_compute_api:servers:detail:get_all_tenants" rule = {rule_name: "project:non_fake"} self._common_policy_check( rule, rule_name, self.controller._get_servers, req, True) def test_index_get_tenants_policy_failed(self): req = fakes.HTTPRequest.blank('') req.GET["all_tenants"] = "True" rule_name = "os_compute_api:servers:index:get_all_tenants" rule = {rule_name: "project:non_fake"} self._common_policy_check( rule, rule_name, self.controller._get_servers, req, False) @mock.patch.object(common, 'get_instance') def test_show_policy_failed(self, get_instance_mock): get_instance_mock.return_value = None rule_name = "os_compute_api:servers:show" rule = {rule_name: "project:non_fake"} self._common_policy_check( rule, rule_name, self.controller.show, self.req, FAKE_UUID) @mock.patch.object(common, 'get_instance') def test_delete_policy_failed_with_other_project(self, get_instance_mock): get_instance_mock.return_value = fake_instance.fake_instance_obj( self.req.environ['nova.context']) rule_name = "os_compute_api:servers:delete" rule = {rule_name: "project_id:%(project_id)s"} # Change the project_id in request context. 
self.req.environ['nova.context'].project_id = 'other-project' self._common_policy_check( rule, rule_name, self.controller.delete, self.req, FAKE_UUID) @mock.patch('nova.compute.api.API.soft_delete') @mock.patch('nova.api.openstack.common.get_instance') def test_delete_overridden_policy_pass_with_same_project(self, get_instance_mock, soft_delete_mock): self.flags(reclaim_instance_interval=3600) instance = fake_instance.fake_instance_obj( self.req.environ['nova.context'], project_id=self.req.environ['nova.context'].project_id) get_instance_mock.return_value = instance rule_name = "os_compute_api:servers:delete" self.policy.set_rules({rule_name: "project_id:%(project_id)s"}) self.controller.delete(self.req, fakes.FAKE_UUID) soft_delete_mock.assert_called_once_with( self.req.environ['nova.context'], instance) @mock.patch('nova.api.openstack.common.get_instance') def test_delete_overridden_policy_failed_with_other_user_in_same_project( self, get_instance_mock): get_instance_mock.return_value = ( fake_instance.fake_instance_obj(self.req.environ['nova.context'])) rule_name = "os_compute_api:servers:delete" rule = {rule_name: "user_id:%(user_id)s"} # Change the user_id in request context. self.req.environ['nova.context'].user_id = 'other-user' self._common_policy_check( rule, rule_name, self.controller.delete, self.req, FAKE_UUID) @mock.patch('nova.compute.api.API.soft_delete') @mock.patch('nova.api.openstack.common.get_instance') def test_delete_overridden_policy_pass_with_same_user(self, get_instance_mock, soft_delete_mock): self.flags(reclaim_instance_interval=3600) instance = fake_instance.fake_instance_obj( self.req.environ['nova.context'], user_id=self.req.environ['nova.context'].user_id) get_instance_mock.return_value = instance rule_name = "os_compute_api:servers:delete" self.policy.set_rules({rule_name: "user_id:%(user_id)s"}) self.controller.delete(self.req, fakes.FAKE_UUID) soft_delete_mock.assert_called_once_with( self.req.environ['nova.context'], instance) @mock.patch.object(common, 'get_instance') def test_update_policy_failed_with_other_project(self, get_instance_mock): get_instance_mock.return_value = fake_instance.fake_instance_obj( self.req.environ['nova.context']) rule_name = "os_compute_api:servers:update" rule = {rule_name: "project_id:%(project_id)s"} body = {'server': {'name': 'server_test'}} # Change the project_id in request context. 
self.req.environ['nova.context'].project_id = 'other-project' self._common_policy_check( rule, rule_name, self.controller.update, self.req, FAKE_UUID, body=body) @mock.patch('nova.api.openstack.compute.views.servers.ViewBuilder.show') @mock.patch.object(compute_api.API, 'update_instance') @mock.patch.object(common, 'get_instance') def test_update_overridden_policy_pass_with_same_project( self, get_instance_mock, update_instance_mock, view_show_mock): instance = fake_instance.fake_instance_obj( self.req.environ['nova.context'], project_id=self.req.environ['nova.context'].project_id) get_instance_mock.return_value = instance rule_name = "os_compute_api:servers:update" self.policy.set_rules({rule_name: "project_id:%(project_id)s"}) body = {'server': {'name': 'server_test'}} self.controller.update(self.req, fakes.FAKE_UUID, body=body) @mock.patch.object(common, 'get_instance') def test_update_overridden_policy_failed_with_other_user_in_same_project( self, get_instance_mock): get_instance_mock.return_value = ( fake_instance.fake_instance_obj(self.req.environ['nova.context'])) rule_name = "os_compute_api:servers:update" rule = {rule_name: "user_id:%(user_id)s"} # Change the user_id in request context. self.req.environ['nova.context'].user_id = 'other-user' body = {'server': {'name': 'server_test'}} self._common_policy_check( rule, rule_name, self.controller.update, self.req, FAKE_UUID, body=body) @mock.patch('nova.api.openstack.compute.views.servers.ViewBuilder.show') @mock.patch.object(compute_api.API, 'update_instance') @mock.patch.object(common, 'get_instance') def test_update_overridden_policy_pass_with_same_user(self, get_instance_mock, update_instance_mock, view_show_mock): instance = fake_instance.fake_instance_obj( self.req.environ['nova.context'], user_id=self.req.environ['nova.context'].user_id) get_instance_mock.return_value = instance rule_name = "os_compute_api:servers:update" self.policy.set_rules({rule_name: "user_id:%(user_id)s"}) body = {'server': {'name': 'server_test'}} self.controller.update(self.req, fakes.FAKE_UUID, body=body) def test_confirm_resize_policy_failed(self): rule_name = "os_compute_api:servers:confirm_resize" rule = {rule_name: "project:non_fake"} body = {'server': {'name': 'server_test'}} self._common_policy_check( rule, rule_name, self.controller._action_confirm_resize, self.req, FAKE_UUID, body=body) def test_revert_resize_policy_failed(self): rule_name = "os_compute_api:servers:revert_resize" rule = {rule_name: "project:non_fake"} body = {'server': {'name': 'server_test'}} self._common_policy_check( rule, rule_name, self.controller._action_revert_resize, self.req, FAKE_UUID, body=body) def test_reboot_policy_failed(self): rule_name = "os_compute_api:servers:reboot" rule = {rule_name: "project:non_fake"} body = {'reboot': {'type': 'HARD'}} self._common_policy_check( rule, rule_name, self.controller._action_reboot, self.req, FAKE_UUID, body=body) @mock.patch('nova.api.openstack.common.get_instance') def test_resize_policy_failed_with_other_project(self, get_instance_mock): get_instance_mock.return_value = ( fake_instance.fake_instance_obj(self.req.environ['nova.context'])) rule_name = "os_compute_api:servers:resize" rule = {rule_name: "project_id:%(project_id)s"} body = {'resize': {'flavorRef': '1'}} # Change the project_id in request context. 
self.req.environ['nova.context'].project_id = 'other-project' self._common_policy_check( rule, rule_name, self.controller._action_resize, self.req, FAKE_UUID, body=body) @mock.patch('nova.compute.api.API.resize') @mock.patch('nova.api.openstack.common.get_instance') def test_resize_overridden_policy_pass_with_same_project(self, get_instance_mock, resize_mock): instance = fake_instance.fake_instance_obj( self.req.environ['nova.context'], project_id=self.req.environ['nova.context'].project_id) get_instance_mock.return_value = instance rule_name = "os_compute_api:servers:resize" self.policy.set_rules({rule_name: "project_id:%(project_id)s"}) body = {'resize': {'flavorRef': '1'}} self.controller._action_resize(self.req, fakes.FAKE_UUID, body=body) resize_mock.assert_called_once_with(self.req.environ['nova.context'], instance, '1') @mock.patch('nova.api.openstack.common.get_instance') def test_resize_overridden_policy_failed_with_other_user_in_same_project( self, get_instance_mock): get_instance_mock.return_value = ( fake_instance.fake_instance_obj(self.req.environ['nova.context'])) rule_name = "os_compute_api:servers:resize" rule = {rule_name: "user_id:%(user_id)s"} # Change the user_id in request context. self.req.environ['nova.context'].user_id = 'other-user' body = {'resize': {'flavorRef': '1'}} self._common_policy_check( rule, rule_name, self.controller._action_resize, self.req, FAKE_UUID, body=body) @mock.patch('nova.compute.api.API.resize') @mock.patch('nova.api.openstack.common.get_instance') def test_resize_overridden_policy_pass_with_same_user(self, get_instance_mock, resize_mock): instance = fake_instance.fake_instance_obj( self.req.environ['nova.context'], user_id=self.req.environ['nova.context'].user_id) get_instance_mock.return_value = instance rule_name = "os_compute_api:servers:resize" self.policy.set_rules({rule_name: "user_id:%(user_id)s"}) body = {'resize': {'flavorRef': '1'}} self.controller._action_resize(self.req, fakes.FAKE_UUID, body=body) resize_mock.assert_called_once_with(self.req.environ['nova.context'], instance, '1') @mock.patch('nova.api.openstack.common.get_instance') def test_rebuild_policy_failed_with_other_project(self, get_instance_mock): get_instance_mock.return_value = fake_instance.fake_instance_obj( self.req.environ['nova.context'], project_id=self.req.environ['nova.context'].project_id) rule_name = "os_compute_api:servers:rebuild" rule = {rule_name: "project_id:%(project_id)s"} body = {'rebuild': {'imageRef': self.image_uuid}} # Change the project_id in request context. self.req.environ['nova.context'].project_id = 'other-project' self._common_policy_check( rule, rule_name, self.controller._action_rebuild, self.req, FAKE_UUID, body=body) @mock.patch('nova.api.openstack.common.get_instance') def test_rebuild_overridden_policy_failed_with_other_user_in_same_project( self, get_instance_mock): get_instance_mock.return_value = ( fake_instance.fake_instance_obj(self.req.environ['nova.context'])) rule_name = "os_compute_api:servers:rebuild" rule = {rule_name: "user_id:%(user_id)s"} body = {'rebuild': {'imageRef': self.image_uuid}} # Change the user_id in request context. 
self.req.environ['nova.context'].user_id = 'other-user' self._common_policy_check( rule, rule_name, self.controller._action_rebuild, self.req, FAKE_UUID, body=body) @mock.patch('nova.api.openstack.compute.views.servers.ViewBuilder.show') @mock.patch('nova.compute.api.API.rebuild') @mock.patch('nova.api.openstack.common.get_instance') def test_rebuild_overridden_policy_pass_with_same_user(self, get_instance_mock, rebuild_mock, view_show_mock): instance = fake_instance.fake_instance_obj( self.req.environ['nova.context'], user_id=self.req.environ['nova.context'].user_id) get_instance_mock.return_value = instance rule_name = "os_compute_api:servers:rebuild" self.policy.set_rules({rule_name: "user_id:%(user_id)s"}) body = {'rebuild': {'imageRef': self.image_uuid, 'adminPass': 'dumpy_password'}} self.controller._action_rebuild(self.req, fakes.FAKE_UUID, body=body) rebuild_mock.assert_called_once_with(self.req.environ['nova.context'], instance, self.image_uuid, 'dumpy_password') def test_create_image_policy_failed(self): rule_name = "os_compute_api:servers:create_image" rule = {rule_name: "project:non_fake"} body = { 'createImage': { 'name': 'Snapshot 1', }, } self._common_policy_check( rule, rule_name, self.controller._action_create_image, self.req, FAKE_UUID, body=body) @mock.patch('nova.compute.utils.is_volume_backed_instance', return_value=True) @mock.patch.object(objects.BlockDeviceMappingList, 'get_by_instance_uuid') @mock.patch.object(servers.ServersController, '_get_server') def test_create_vol_backed_img_snapshotting_policy_blocks_project(self, mock_get_server, mock_get_uuidi, mock_is_vol_back): """Don't permit a snapshot of a volume backed instance if configured not to based on project """ rule_name = "os_compute_api:servers:create_image:allow_volume_backed" rules = { rule_name: "project:non_fake", "os_compute_api:servers:create_image": "", } body = { 'createImage': { 'name': 'Snapshot 1', }, } self._common_policy_check( rules, rule_name, self.controller._action_create_image, self.req, FAKE_UUID, body=body) @mock.patch('nova.compute.utils.is_volume_backed_instance', return_value=True) @mock.patch.object(objects.BlockDeviceMappingList, 'get_by_instance_uuid') @mock.patch.object(servers.ServersController, '_get_server') def test_create_vol_backed_img_snapshotting_policy_blocks_role(self, mock_get_server, mock_get_uuidi, mock_is_vol_back): """Don't permit a snapshot of a volume backed instance if configured not to based on role """ rule_name = "os_compute_api:servers:create_image:allow_volume_backed" rules = { rule_name: "role:non_fake", "os_compute_api:servers:create_image": "", } body = { 'createImage': { 'name': 'Snapshot 1', }, } self._common_policy_check( rules, rule_name, self.controller._action_create_image, self.req, FAKE_UUID, body=body) def _create_policy_check(self, rules, rule_name): flavor_ref = 'http://localhost/123/flavors/3' body = { 'server': { 'name': 'server_test', 'imageRef': self.image_uuid, 'flavorRef': flavor_ref, 'availability_zone': "zone1:host1:node1", 'block_device_mapping': [{'device_name': "/dev/sda1"}], 'networks': [{'uuid': 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa'}], 'metadata': { 'hello': 'world', 'open': 'stack', }, }, } self._common_policy_check( rules, rule_name, self.controller.create, self.req, body=body) def test_create_policy_failed(self): rule_name = "os_compute_api:servers:create" rules = {rule_name: "project:non_fake"} self._create_policy_check(rules, rule_name) def test_create_forced_host_policy_failed(self): rule_name = 
"os_compute_api:servers:create:forced_host" rule = {"os_compute_api:servers:create": "@", rule_name: "project:non_fake"} self._create_policy_check(rule, rule_name) def test_create_attach_volume_policy_failed(self): rule_name = "os_compute_api:servers:create:attach_volume" rules = {"os_compute_api:servers:create": "@", "os_compute_api:servers:create:forced_host": "@", rule_name: "project:non_fake"} self._create_policy_check(rules, rule_name) def test_create_attach_attach_network_policy_failed(self): rule_name = "os_compute_api:servers:create:attach_network" rules = {"os_compute_api:servers:create": "@", "os_compute_api:servers:create:forced_host": "@", "os_compute_api:servers:create:attach_volume": "@", rule_name: "project:non_fake"} self._create_policy_check(rules, rule_name) class ServersActionsJsonTestV239(test.NoDBTestCase): def setUp(self): super(ServersActionsJsonTestV239, self).setUp() self.controller = servers.ServersController() self.req = fakes.HTTPRequest.blank('', version='2.39') @mock.patch.object(common, 'check_img_metadata_properties_quota') @mock.patch.object(common, 'get_instance') def test_server_create_image_no_quota_checks(self, mock_get_instance, mock_check_quotas): # 'mock_get_instance' helps to skip the whole logic of the action, # but to make the test mock_get_instance.side_effect = webob.exc.HTTPNotFound body = { 'createImage': { 'name': 'Snapshot 1', }, } self.assertRaises(webob.exc.HTTPNotFound, self.controller._action_create_image, self.req, FAKE_UUID, body=body) # starting from version 2.39 no quota checks on Nova side are performed # for 'createImage' action after removing 'image-metadata' proxy API mock_check_quotas.assert_not_called() nova-17.0.1/nova/tests/unit/api/openstack/compute/test_instance_usage_audit_log.py0000666000175000017500000002124213250073126030517 0ustar zuulzuul00000000000000# Copyright (c) 2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import datetime from oslo_utils import fixture as utils_fixture from nova.api.openstack.compute import instance_usage_audit_log as v21_ial from nova import context from nova import exception from nova import test from nova.tests.unit.api.openstack import fakes from nova.tests.unit.objects import test_service service_base = test_service.fake_service TEST_COMPUTE_SERVICES = [dict(service_base, host='foo', topic='compute'), dict(service_base, host='bar', topic='compute'), dict(service_base, host='baz', topic='compute'), dict(service_base, host='plonk', topic='compute'), dict(service_base, host='wibble', topic='bogus'), ] begin1 = datetime.datetime(2012, 7, 4, 6, 0, 0) begin2 = end1 = datetime.datetime(2012, 7, 5, 6, 0, 0) begin3 = end2 = datetime.datetime(2012, 7, 6, 6, 0, 0) end3 = datetime.datetime(2012, 7, 7, 6, 0, 0) # test data TEST_LOGS1 = [ # all services done, no errors. 
dict(host="plonk", period_beginning=begin1, period_ending=end1, state="DONE", errors=0, task_items=23, message="test1"), dict(host="baz", period_beginning=begin1, period_ending=end1, state="DONE", errors=0, task_items=17, message="test2"), dict(host="bar", period_beginning=begin1, period_ending=end1, state="DONE", errors=0, task_items=10, message="test3"), dict(host="foo", period_beginning=begin1, period_ending=end1, state="DONE", errors=0, task_items=7, message="test4"), ] TEST_LOGS2 = [ # some still running... dict(host="plonk", period_beginning=begin2, period_ending=end2, state="DONE", errors=0, task_items=23, message="test5"), dict(host="baz", period_beginning=begin2, period_ending=end2, state="DONE", errors=0, task_items=17, message="test6"), dict(host="bar", period_beginning=begin2, period_ending=end2, state="RUNNING", errors=0, task_items=10, message="test7"), dict(host="foo", period_beginning=begin2, period_ending=end2, state="DONE", errors=0, task_items=7, message="test8"), ] TEST_LOGS3 = [ # some errors.. dict(host="plonk", period_beginning=begin3, period_ending=end3, state="DONE", errors=0, task_items=23, message="test9"), dict(host="baz", period_beginning=begin3, period_ending=end3, state="DONE", errors=2, task_items=17, message="test10"), dict(host="bar", period_beginning=begin3, period_ending=end3, state="DONE", errors=0, task_items=10, message="test11"), dict(host="foo", period_beginning=begin3, period_ending=end3, state="DONE", errors=1, task_items=7, message="test12"), ] def fake_task_log_get_all(context, task_name, begin, end, host=None, state=None): assert task_name == "instance_usage_audit" if begin == begin1 and end == end1: return TEST_LOGS1 if begin == begin2 and end == end2: return TEST_LOGS2 if begin == begin3 and end == end3: return TEST_LOGS3 raise AssertionError("Invalid date %s to %s" % (begin, end)) def fake_last_completed_audit_period(unit=None, before=None): audit_periods = [(begin3, end3), (begin2, end2), (begin1, end1)] if before is not None: for begin, end in audit_periods: if before > end: return begin, end raise AssertionError("Invalid before date %s" % (before)) return begin1, end1 class InstanceUsageAuditLogTestV21(test.NoDBTestCase): def setUp(self): super(InstanceUsageAuditLogTestV21, self).setUp() self.context = context.get_admin_context() self.useFixture( utils_fixture.TimeFixture(datetime.datetime(2012, 7, 5, 10, 0, 0))) self._set_up_controller() self.host_api = self.controller.host_api def fake_service_get_all(context, disabled): self.assertIsNone(disabled) return TEST_COMPUTE_SERVICES self.stub_out('nova.utils.last_completed_audit_period', fake_last_completed_audit_period) self.stub_out('nova.db.service_get_all', fake_service_get_all) self.stub_out('nova.db.task_log_get_all', fake_task_log_get_all) self.req = fakes.HTTPRequest.blank('') def _set_up_controller(self): self.controller = v21_ial.InstanceUsageAuditLogController() def test_index(self): result = self.controller.index(self.req) self.assertIn('instance_usage_audit_logs', result) logs = result['instance_usage_audit_logs'] self.assertEqual(57, logs['total_instances']) self.assertEqual(0, logs['total_errors']) self.assertEqual(4, len(logs['log'])) self.assertEqual(4, logs['num_hosts']) self.assertEqual(4, logs['num_hosts_done']) self.assertEqual(0, logs['num_hosts_running']) self.assertEqual(0, logs['num_hosts_not_run']) self.assertEqual("ALL hosts done. 
0 errors.", logs['overall_status']) def test_show(self): result = self.controller.show(self.req, '2012-07-05 10:00:00') self.assertIn('instance_usage_audit_log', result) logs = result['instance_usage_audit_log'] self.assertEqual(57, logs['total_instances']) self.assertEqual(0, logs['total_errors']) self.assertEqual(4, len(logs['log'])) self.assertEqual(4, logs['num_hosts']) self.assertEqual(4, logs['num_hosts_done']) self.assertEqual(0, logs['num_hosts_running']) self.assertEqual(0, logs['num_hosts_not_run']) self.assertEqual("ALL hosts done. 0 errors.", logs['overall_status']) def test_show_with_running(self): result = self.controller.show(self.req, '2012-07-06 10:00:00') self.assertIn('instance_usage_audit_log', result) logs = result['instance_usage_audit_log'] self.assertEqual(57, logs['total_instances']) self.assertEqual(0, logs['total_errors']) self.assertEqual(4, len(logs['log'])) self.assertEqual(4, logs['num_hosts']) self.assertEqual(3, logs['num_hosts_done']) self.assertEqual(1, logs['num_hosts_running']) self.assertEqual(0, logs['num_hosts_not_run']) self.assertEqual("3 of 4 hosts done. 0 errors.", logs['overall_status']) def test_show_with_errors(self): result = self.controller.show(self.req, '2012-07-07 10:00:00') self.assertIn('instance_usage_audit_log', result) logs = result['instance_usage_audit_log'] self.assertEqual(57, logs['total_instances']) self.assertEqual(3, logs['total_errors']) self.assertEqual(4, len(logs['log'])) self.assertEqual(4, logs['num_hosts']) self.assertEqual(4, logs['num_hosts_done']) self.assertEqual(0, logs['num_hosts_running']) self.assertEqual(0, logs['num_hosts_not_run']) self.assertEqual("ALL hosts done. 3 errors.", logs['overall_status']) class InstanceUsageAuditPolicyEnforcementV21(test.NoDBTestCase): def setUp(self): super(InstanceUsageAuditPolicyEnforcementV21, self).setUp() self.controller = v21_ial.InstanceUsageAuditLogController() self.req = fakes.HTTPRequest.blank('') def test_index_policy_failed(self): rule_name = "os_compute_api:os-instance-usage-audit-log" self.policy.set_rules({rule_name: "project_id:non_fake"}) exc = self.assertRaises( exception.PolicyNotAuthorized, self.controller.index, self.req) self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) def test_show_policy_failed(self): rule_name = "os_compute_api:os-instance-usage-audit-log" self.policy.set_rules({rule_name: "project_id:non_fake"}) exc = self.assertRaises( exception.PolicyNotAuthorized, self.controller.show, self.req, '2012-07-05 10:00:00') self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) nova-17.0.1/nova/tests/unit/api/openstack/compute/__init__.py0000666000175000017500000000000013250073126024165 0ustar zuulzuul00000000000000nova-17.0.1/nova/tests/unit/api/openstack/compute/test_consoles.py0000666000175000017500000002615513250073126025335 0ustar zuulzuul00000000000000# Copyright 2010-2011 OpenStack Foundation # Copyright 2011 Piston Cloud Computing, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. import datetime from oslo_policy import policy as oslo_policy from oslo_utils import timeutils import webob from nova.api.openstack.compute import consoles as consoles_v21 from nova.compute import vm_states from nova import exception from nova import policy from nova import test from nova.tests.unit.api.openstack import fakes from nova.tests.unit import matchers from nova.tests import uuidsentinel as uuids FAKE_UUID = 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa' class FakeInstanceDB(object): def __init__(self): self.instances_by_id = {} self.ids_by_uuid = {} self.max_id = 0 def return_server_by_id(self, context, id): if id not in self.instances_by_id: self._add_server(id=id) return dict(self.instances_by_id[id]) def return_server_by_uuid(self, context, uuid): if uuid not in self.ids_by_uuid: self._add_server(uuid=uuid) return dict(self.instances_by_id[self.ids_by_uuid[uuid]]) def _add_server(self, id=None, uuid=None): if id is None: id = self.max_id + 1 if uuid is None: uuid = uuids.fake instance = stub_instance(id, uuid=uuid) self.instances_by_id[id] = instance self.ids_by_uuid[uuid] = id if id > self.max_id: self.max_id = id def stub_instance(id, user_id='fake', project_id='fake', host=None, vm_state=None, task_state=None, reservation_id="", uuid=FAKE_UUID, image_ref="10", flavor_id="1", name=None, key_name='', access_ipv4=None, access_ipv6=None, progress=0): if host is not None: host = str(host) if key_name: key_data = 'FAKE' else: key_data = '' # ReservationID isn't sent back, hack it in there. server_name = name or "server%s" % id if reservation_id != "": server_name = "reservation_%s" % (reservation_id, ) instance = { "id": int(id), "created_at": datetime.datetime(2010, 10, 10, 12, 0, 0), "updated_at": datetime.datetime(2010, 11, 11, 11, 0, 0), "admin_pass": "", "user_id": user_id, "project_id": project_id, "image_ref": image_ref, "kernel_id": "", "ramdisk_id": "", "launch_index": 0, "key_name": key_name, "key_data": key_data, "vm_state": vm_state or vm_states.BUILDING, "task_state": task_state, "memory_mb": 0, "vcpus": 0, "root_gb": 0, "hostname": "", "host": host, "instance_type": {}, "user_data": "", "reservation_id": reservation_id, "mac_address": "", "launched_at": timeutils.utcnow(), "terminated_at": timeutils.utcnow(), "availability_zone": "", "display_name": server_name, "display_description": "", "locked": False, "metadata": [], "access_ip_v4": access_ipv4, "access_ip_v6": access_ipv6, "uuid": uuid, "progress": progress} return instance class ConsolesControllerTestV21(test.NoDBTestCase): def setUp(self): super(ConsolesControllerTestV21, self).setUp() self.instance_db = FakeInstanceDB() self.stub_out('nova.db.instance_get', self.instance_db.return_server_by_id) self.stub_out('nova.db.instance_get_by_uuid', self.instance_db.return_server_by_uuid) self.uuid = uuids.fake self.url = '/v2/fake/servers/%s/consoles' % self.uuid self._set_up_controller() def _set_up_controller(self): self.controller = consoles_v21.ConsolesController() def test_create_console(self): def fake_create_console(cons_self, context, instance_id): self.assertEqual(instance_id, self.uuid) return {} self.stub_out('nova.console.api.API.create_console', fake_create_console) req = fakes.HTTPRequest.blank(self.url) self.controller.create(req, self.uuid, None) def test_create_console_unknown_instance(self): def fake_create_console(cons_self, context, instance_id): raise exception.InstanceNotFound(instance_id=instance_id) 
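# NOTE: the console tests in this file all follow one pattern: swap a real
# nova entry point for a local fake with test.TestCase.stub_out('dotted.path',
# fake), then assert on the HTTP-level outcome. A minimal sketch of that
# pattern using only the standard library; stub_out_sketch is an illustrative
# name, not nova's implementation:
from unittest import mock

def stub_out_sketch(test_case, dotted_path, replacement):
    # mock.patch resolves the same kind of dotted path the tests pass to
    # stub_out, and the cleanup hook restores the original attribute once
    # the test finishes.
    patcher = mock.patch(dotted_path, replacement)
    patcher.start()
    test_case.addCleanup(patcher.stop)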
self.stub_out('nova.console.api.API.create_console', fake_create_console) req = fakes.HTTPRequest.blank(self.url) self.assertRaises(webob.exc.HTTPNotFound, self.controller.create, req, self.uuid, None) def test_show_console(self): def fake_get_console(cons_self, context, instance_id, console_id): self.assertEqual(instance_id, self.uuid) self.assertEqual(console_id, 20) pool = dict(console_type='fake_type', public_hostname='fake_hostname') return dict(id=console_id, password='fake_password', port='fake_port', pool=pool, instance_name='inst-0001') expected = {'console': {'id': 20, 'port': 'fake_port', 'host': 'fake_hostname', 'password': 'fake_password', 'instance_name': 'inst-0001', 'console_type': 'fake_type'}} self.stub_out('nova.console.api.API.get_console', fake_get_console) req = fakes.HTTPRequest.blank(self.url + '/20') res_dict = self.controller.show(req, self.uuid, '20') self.assertThat(res_dict, matchers.DictMatches(expected)) def test_show_console_unknown_console(self): def fake_get_console(cons_self, context, instance_id, console_id): raise exception.ConsoleNotFound(console_id=console_id) self.stub_out('nova.console.api.API.get_console', fake_get_console) req = fakes.HTTPRequest.blank(self.url + '/20') self.assertRaises(webob.exc.HTTPNotFound, self.controller.show, req, self.uuid, '20') def test_show_console_unknown_instance(self): def fake_get_console(cons_self, context, instance_id, console_id): raise exception.ConsoleNotFoundForInstance( instance_uuid=instance_id) self.stub_out('nova.console.api.API.get_console', fake_get_console) req = fakes.HTTPRequest.blank(self.url + '/20') self.assertRaises(webob.exc.HTTPNotFound, self.controller.show, req, self.uuid, '20') def test_list_consoles(self): def fake_get_consoles(cons_self, context, instance_id): self.assertEqual(instance_id, self.uuid) pool1 = dict(console_type='fake_type', public_hostname='fake_hostname') cons1 = dict(id=10, password='fake_password', port='fake_port', pool=pool1) pool2 = dict(console_type='fake_type2', public_hostname='fake_hostname2') cons2 = dict(id=11, password='fake_password2', port='fake_port2', pool=pool2) return [cons1, cons2] expected = {'consoles': [{'console': {'id': 10, 'console_type': 'fake_type'}}, {'console': {'id': 11, 'console_type': 'fake_type2'}}]} self.stub_out('nova.console.api.API.get_consoles', fake_get_consoles) req = fakes.HTTPRequest.blank(self.url) res_dict = self.controller.index(req, self.uuid) self.assertThat(res_dict, matchers.DictMatches(expected)) def test_delete_console(self): def fake_get_console(cons_self, context, instance_id, console_id): self.assertEqual(instance_id, self.uuid) self.assertEqual(console_id, 20) pool = dict(console_type='fake_type', public_hostname='fake_hostname') return dict(id=console_id, password='fake_password', port='fake_port', pool=pool) def fake_delete_console(cons_self, context, instance_id, console_id): self.assertEqual(instance_id, self.uuid) self.assertEqual(console_id, 20) self.stub_out('nova.console.api.API.get_console', fake_get_console) self.stub_out('nova.console.api.API.delete_console', fake_delete_console) req = fakes.HTTPRequest.blank(self.url + '/20') self.controller.delete(req, self.uuid, '20') def test_delete_console_unknown_console(self): def fake_delete_console(cons_self, context, instance_id, console_id): raise exception.ConsoleNotFound(console_id=console_id) self.stub_out('nova.console.api.API.delete_console', fake_delete_console) req = fakes.HTTPRequest.blank(self.url + '/20') self.assertRaises(webob.exc.HTTPNotFound, 
self.controller.delete, req, self.uuid, '20') def test_delete_console_unknown_instance(self): def fake_delete_console(cons_self, context, instance_id, console_id): raise exception.ConsoleNotFoundForInstance( instance_uuid=instance_id) self.stub_out('nova.console.api.API.delete_console', fake_delete_console) req = fakes.HTTPRequest.blank(self.url + '/20') self.assertRaises(webob.exc.HTTPNotFound, self.controller.delete, req, self.uuid, '20') def _test_fail_policy(self, rule, action, data=None): rules = { rule: "!", } policy.set_rules(oslo_policy.Rules.from_dict(rules)) req = fakes.HTTPRequest.blank(self.url + '/20') if data is not None: self.assertRaises(exception.PolicyNotAuthorized, action, req, self.uuid, data) else: self.assertRaises(exception.PolicyNotAuthorized, action, req, self.uuid) def test_delete_console_fail_policy(self): self._test_fail_policy("os_compute_api:os-consoles:delete", self.controller.delete, data='20') def test_create_console_fail_policy(self): self._test_fail_policy("os_compute_api:os-consoles:create", self.controller.create, data='20') def test_index_console_fail_policy(self): self._test_fail_policy("os_compute_api:os-consoles:index", self.controller.index) def test_show_console_fail_policy(self): self._test_fail_policy("os_compute_api:os-consoles:show", self.controller.show, data='20') nova-17.0.1/nova/tests/unit/api/openstack/compute/test_quotas.py0000666000175000017500000006760613250073126025032 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # Copyright 2013 IBM Corp. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock import webob from nova.api.openstack.compute import quota_sets as quotas_v21 from nova import db from nova import exception from nova import quota from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.unit.api.openstack import fakes def quota_set(id, include_server_group_quotas=True): res = {'quota_set': {'id': id, 'metadata_items': 128, 'ram': 51200, 'floating_ips': 10, 'fixed_ips': -1, 'instances': 10, 'injected_files': 5, 'cores': 20, 'injected_file_content_bytes': 10240, 'security_groups': 10, 'security_group_rules': 20, 'key_pairs': 100, 'injected_file_path_bytes': 255}} if include_server_group_quotas: res['quota_set']['server_groups'] = 10 res['quota_set']['server_group_members'] = 10 return res class BaseQuotaSetsTest(test.TestCase): def setUp(self): super(BaseQuotaSetsTest, self).setUp() # We need to stub out verify_project_id so that it doesn't # generate an EndpointNotFound exception and result in a # server error. self.stub_out('nova.api.openstack.identity.verify_project_id', lambda ctx, project_id: True) def get_delete_status_int(self, res): # NOTE: on v2.1, http status code is set as wsgi_code of API # method instead of status_int in a response object. 
        return self.controller.delete.wsgi_code


class QuotaSetsTestV21(BaseQuotaSetsTest):
    plugin = quotas_v21
    validation_error = exception.ValidationError
    include_server_group_quotas = True

    def setUp(self):
        super(QuotaSetsTestV21, self).setUp()
        self._setup_controller()
        self.default_quotas = {
            'instances': 10,
            'cores': 20,
            'ram': 51200,
            'floating_ips': 10,
            'fixed_ips': -1,
            'metadata_items': 128,
            'injected_files': 5,
            'injected_file_path_bytes': 255,
            'injected_file_content_bytes': 10240,
            'security_groups': 10,
            'security_group_rules': 20,
            'key_pairs': 100,
        }
        if self.include_server_group_quotas:
            self.default_quotas['server_groups'] = 10
            self.default_quotas['server_group_members'] = 10

    def _setup_controller(self):
        self.controller = self.plugin.QuotaSetsController()

    def _get_http_request(self, url=''):
        return fakes.HTTPRequest.blank(url)

    def test_format_quota_set(self):
        quota_set = self.controller._format_quota_set(
            '1234', self.default_quotas, [])
        qs = quota_set['quota_set']

        self.assertEqual(qs['id'], '1234')
        self.assertEqual(qs['instances'], 10)
        self.assertEqual(qs['cores'], 20)
        self.assertEqual(qs['ram'], 51200)
        self.assertEqual(qs['floating_ips'], 10)
        self.assertEqual(qs['fixed_ips'], -1)
        self.assertEqual(qs['metadata_items'], 128)
        self.assertEqual(qs['injected_files'], 5)
        self.assertEqual(qs['injected_file_path_bytes'], 255)
        self.assertEqual(qs['injected_file_content_bytes'], 10240)
        self.assertEqual(qs['security_groups'], 10)
        self.assertEqual(qs['security_group_rules'], 20)
        self.assertEqual(qs['key_pairs'], 100)
        if self.include_server_group_quotas:
            self.assertEqual(qs['server_groups'], 10)
            self.assertEqual(qs['server_group_members'], 10)

    def test_validate_quota_limit(self):
        resource = 'fake'

        # Valid - finite values
        self.assertIsNone(self.controller._validate_quota_limit(
            resource, 50, 10, 100))
        # Valid - finite limit and infinite maximum
        self.assertIsNone(self.controller._validate_quota_limit(
            resource, 50, 10, -1))
        # Valid - infinite limit and infinite maximum
        self.assertIsNone(self.controller._validate_quota_limit(
            resource, -1, 10, -1))
        # Valid - all infinite
        self.assertIsNone(self.controller._validate_quota_limit(
            resource, -1, -1, -1))

        # Invalid - limit is less than -1
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.controller._validate_quota_limit,
                          resource, -2, 10, 100)
        # Invalid - limit is less than minimum
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.controller._validate_quota_limit,
                          resource, 5, 10, 100)
        # Invalid - limit is greater than maximum
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.controller._validate_quota_limit,
                          resource, 200, 10, 100)
        # Invalid - infinite limit is greater than maximum
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.controller._validate_quota_limit,
                          resource, -1, 10, 100)
        # Invalid - limit is less than infinite minimum
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.controller._validate_quota_limit,
                          resource, 50, -1, -1)
        # Invalid - limit is larger than 0x7FFFFFFF
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.controller._validate_quota_limit,
                          resource, db.MAX_INT + 1, -1, -1)

    def test_quotas_defaults(self):
        uri = '/v2/fake_tenant/os-quota-sets/fake_tenant/defaults'
        req = fakes.HTTPRequest.blank(uri)
        res_dict = self.controller.defaults(req, 'fake_tenant')
        self.default_quotas.update({'id': 'fake_tenant'})
        expected = {'quota_set': self.default_quotas}

        self.assertEqual(res_dict, expected)

    def test_quotas_show(self):
        req = self._get_http_request()
        res_dict = self.controller.show(req, 1234)

        ref_quota_set = quota_set('1234', self.include_server_group_quotas)
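# NOTE: test_validate_quota_limit above pins down the boundary rules for
# quota limits, where -1 means "unlimited". A minimal standalone sketch of
# those rules; validate_quota_limit_sketch is an illustrative name, not
# nova's _validate_quota_limit (which raises HTTPBadRequest and also caps
# values at db.MAX_INT):
def validate_quota_limit_sketch(limit, minimum, maximum):
    # Values below -1 are never valid.
    if limit < -1:
        raise ValueError('limit must be -1 or greater')
    # Unlimited (-1) is only acceptable when the maximum is also unlimited.
    if limit == -1 and maximum != -1:
        raise ValueError('unlimited limit exceeds the finite maximum')
    # No finite limit can satisfy an unlimited minimum.
    if limit != -1 and minimum == -1:
        raise ValueError('limit is less than the unlimited minimum')
    # Finite bounds behave as plain comparisons.
    if limit != -1 and limit < minimum:
        raise ValueError('limit is less than the minimum')
    if maximum != -1 and limit > maximum:
        raise ValueError('limit is greater than the maximum')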
self.assertEqual(res_dict, ref_quota_set) def test_quotas_update(self): self.default_quotas.update({ 'instances': 50, 'cores': 50 }) body = {'quota_set': self.default_quotas} req = self._get_http_request() res_dict = self.controller.update(req, 'update_me', body=body) self.assertEqual(body, res_dict) @mock.patch('nova.objects.Quotas.create_limit') def test_quotas_update_with_good_data(self, mock_createlimit): self.default_quotas.update({}) body = {'quota_set': self.default_quotas} req = self._get_http_request() self.controller.update(req, 'update_me', body=body) self.assertEqual(len(self.default_quotas), len(mock_createlimit.mock_calls)) @mock.patch('nova.api.validation.validators._SchemaValidator.validate') @mock.patch('nova.objects.Quotas.create_limit') def test_quotas_update_with_bad_data(self, mock_createlimit, mock_validate): self.default_quotas.update({ 'instances': 50, 'cores': -50 }) body = {'quota_set': self.default_quotas} req = self._get_http_request() self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update, req, 'update_me', body=body) self.assertEqual(0, len(mock_createlimit.mock_calls)) def test_quotas_update_zero_value(self): body = {'quota_set': {'instances': 0, 'cores': 0, 'ram': 0, 'floating_ips': 0, 'metadata_items': 0, 'injected_files': 0, 'injected_file_content_bytes': 0, 'injected_file_path_bytes': 0, 'security_groups': 0, 'security_group_rules': 0, 'key_pairs': 100, 'fixed_ips': -1}} if self.include_server_group_quotas: body['quota_set']['server_groups'] = 10 body['quota_set']['server_group_members'] = 10 req = self._get_http_request() res_dict = self.controller.update(req, 'update_me', body=body) self.assertEqual(body, res_dict) def _quotas_update_bad_request_case(self, body): req = self._get_http_request() self.assertRaises(self.validation_error, self.controller.update, req, 'update_me', body=body) def test_quotas_update_invalid_key(self): body = {'quota_set': {'instances2': -2, 'cores': -2, 'ram': -2, 'floating_ips': -2, 'metadata_items': -2, 'injected_files': -2, 'injected_file_content_bytes': -2}} self._quotas_update_bad_request_case(body) def test_quotas_update_invalid_limit(self): body = {'quota_set': {'instances': -2, 'cores': -2, 'ram': -2, 'floating_ips': -2, 'fixed_ips': -2, 'metadata_items': -2, 'injected_files': -2, 'injected_file_content_bytes': -2}} self._quotas_update_bad_request_case(body) def test_quotas_update_empty_body(self): body = {} self._quotas_update_bad_request_case(body) def test_quotas_update_invalid_value_non_int(self): # when PUT non integer value self.default_quotas.update({ 'instances': 'test' }) body = {'quota_set': self.default_quotas} self._quotas_update_bad_request_case(body) def test_quotas_update_invalid_value_with_float(self): # when PUT non integer value self.default_quotas.update({ 'instances': 50.5 }) body = {'quota_set': self.default_quotas} self._quotas_update_bad_request_case(body) def test_quotas_update_invalid_value_with_unicode(self): # when PUT non integer value self.default_quotas.update({ 'instances': u'\u30aa\u30fc\u30d7\u30f3' }) body = {'quota_set': self.default_quotas} self._quotas_update_bad_request_case(body) @mock.patch.object(quota.QUOTAS, 'destroy_all_by_project') def test_quotas_delete(self, mock_destroy_all_by_project): req = self._get_http_request() res = self.controller.delete(req, 1234) self.assertEqual(202, self.get_delete_status_int(res)) mock_destroy_all_by_project.assert_called_once_with( req.environ['nova.context'], 1234) def test_update_network_quota_disabled(self): 
self.flags(enable_network_quota=False) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update, self._get_http_request(), 1234, body={'quota_set': {'networks': 1}}) def test_update_network_quota_enabled(self): self.flags(enable_network_quota=True) self.useFixture(nova_fixtures.RegisterNetworkQuota()) self.controller.update(self._get_http_request(), 1234, body={'quota_set': {'networks': 1}}) def test_duplicate_quota_filter(self): query_string = 'user_id=1&user_id=2' req = fakes.HTTPRequest.blank('', query_string=query_string) self.controller.show(req, 1234) self.controller.update(req, 1234, body={'quota_set': {}}) self.controller.detail(req, 1234) self.controller.delete(req, 1234) def test_quota_filter_negative_int_as_string(self): req = fakes.HTTPRequest.blank('', query_string='user_id=-1') self.controller.show(req, 1234) self.controller.update(req, 1234, body={'quota_set': {}}) self.controller.detail(req, 1234) self.controller.delete(req, 1234) def test_quota_filter_int_as_string(self): req = fakes.HTTPRequest.blank('', query_string='user_id=123') self.controller.show(req, 1234) self.controller.update(req, 1234, body={'quota_set': {}}) self.controller.detail(req, 1234) self.controller.delete(req, 1234) def test_unknown_quota_filter(self): query_string = 'unknown_filter=abc' req = fakes.HTTPRequest.blank('', query_string=query_string) self.controller.show(req, 1234) self.controller.update(req, 1234, body={'quota_set': {}}) self.controller.detail(req, 1234) self.controller.delete(req, 1234) def test_quota_additional_filter(self): query_string = 'user_id=1&additional_filter=2' req = fakes.HTTPRequest.blank('', query_string=query_string) self.controller.show(req, 1234) self.controller.update(req, 1234, body={'quota_set': {}}) self.controller.detail(req, 1234) self.controller.delete(req, 1234) class ExtendedQuotasTestV21(BaseQuotaSetsTest): plugin = quotas_v21 def setUp(self): super(ExtendedQuotasTestV21, self).setUp() self._setup_controller() fake_quotas = {'ram': {'limit': 51200, 'in_use': 12800, 'reserved': 12800}, 'cores': {'limit': 20, 'in_use': 10, 'reserved': 5}, 'instances': {'limit': 100, 'in_use': 0, 'reserved': 0}} def _setup_controller(self): self.controller = self.plugin.QuotaSetsController() def fake_get_quotas(self, context, id, user_id=None, usages=False): if usages: return self.fake_quotas else: return {k: v['limit'] for k, v in self.fake_quotas.items()} def fake_get_settable_quotas(self, context, project_id, user_id=None): return { 'ram': {'minimum': self.fake_quotas['ram']['in_use'] + self.fake_quotas['ram']['reserved'], 'maximum': -1}, 'cores': {'minimum': self.fake_quotas['cores']['in_use'] + self.fake_quotas['cores']['reserved'], 'maximum': -1}, 'instances': {'minimum': self.fake_quotas['instances']['in_use'] + self.fake_quotas['instances']['reserved'], 'maximum': -1}, } def _get_http_request(self, url=''): return fakes.HTTPRequest.blank(url) @mock.patch.object(quota.QUOTAS, 'get_settable_quotas') def test_quotas_update_exceed_in_used(self, get_settable_quotas): body = {'quota_set': {'cores': 10}} get_settable_quotas.side_effect = self.fake_get_settable_quotas req = self._get_http_request() self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update, req, 'update_me', body=body) @mock.patch.object(quota.QUOTAS, 'get_settable_quotas') def test_quotas_force_update_exceed_in_used(self, get_settable_quotas): with mock.patch.object(self.plugin.QuotaSetsController, '_get_quotas') as _get_quotas: body = {'quota_set': {'cores': 10, 'force': 'True'}} 
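# NOTE: the surrounding pair of tests fixes the 'force' semantics: lowering a
# quota below what is already consumed (the settable minimum, i.e.
# in_use + reserved, as built in fake_get_settable_quotas above) is rejected
# with 400 unless the body also carries 'force': 'True'. A minimal sketch of
# that decision; the helper name and the 'settable' dict shape mirror the
# fakes above and are illustrative, not nova's internals:
def check_force_update_sketch(new_limit, settable, force=False):
    # Without force, a finite new limit may not undercut current consumption.
    if not force and new_limit != -1 and new_limit < settable['minimum']:
        raise ValueError('quota %d is below current usage %d'
                         % (new_limit, settable['minimum']))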
get_settable_quotas.side_effect = self.fake_get_settable_quotas _get_quotas.side_effect = self.fake_get_quotas req = self._get_http_request() self.controller.update(req, 'update_me', body=body) @mock.patch('nova.objects.Quotas.create_limit') def test_quotas_update_good_data(self, mock_createlimit): body = {'quota_set': {'cores': 1, 'instances': 1}} req = fakes.HTTPRequest.blank('/v2/fake4/os-quota-sets/update_me', use_admin_context=True) self.controller.update(req, 'update_me', body=body) self.assertEqual(2, len(mock_createlimit.mock_calls)) @mock.patch('nova.objects.Quotas.create_limit') @mock.patch.object(quota.QUOTAS, 'get_settable_quotas') def test_quotas_update_bad_data(self, mock_gsq, mock_createlimit): body = {'quota_set': {'cores': 10, 'instances': 1}} mock_gsq.side_effect = self.fake_get_settable_quotas req = fakes.HTTPRequest.blank('/v2/fake4/os-quota-sets/update_me', use_admin_context=True) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update, req, 'update_me', body=body) self.assertEqual(0, len(mock_createlimit.mock_calls)) class UserQuotasTestV21(BaseQuotaSetsTest): plugin = quotas_v21 include_server_group_quotas = True def setUp(self): super(UserQuotasTestV21, self).setUp() self._setup_controller() def _get_http_request(self, url=''): return fakes.HTTPRequest.blank(url) def _setup_controller(self): self.controller = self.plugin.QuotaSetsController() def test_user_quotas_show(self): req = self._get_http_request('/v2/fake4/os-quota-sets/1234?user_id=1') res_dict = self.controller.show(req, 1234) ref_quota_set = quota_set('1234', self.include_server_group_quotas) self.assertEqual(res_dict, ref_quota_set) def test_user_quotas_update(self): body = {'quota_set': {'instances': 10, 'cores': 20, 'ram': 51200, 'floating_ips': 10, 'fixed_ips': -1, 'metadata_items': 128, 'injected_files': 5, 'injected_file_content_bytes': 10240, 'injected_file_path_bytes': 255, 'security_groups': 10, 'security_group_rules': 20, 'key_pairs': 100}} if self.include_server_group_quotas: body['quota_set']['server_groups'] = 10 body['quota_set']['server_group_members'] = 10 url = '/v2/fake4/os-quota-sets/update_me?user_id=1' req = self._get_http_request(url) res_dict = self.controller.update(req, 'update_me', body=body) self.assertEqual(body, res_dict) def test_user_quotas_update_exceed_project(self): body = {'quota_set': {'instances': 20}} url = '/v2/fake4/os-quota-sets/update_me?user_id=1' req = self._get_http_request(url) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update, req, 'update_me', body=body) @mock.patch.object(quota.QUOTAS, "destroy_all_by_project_and_user") def test_user_quotas_delete(self, mock_destroy_all_by_project_and_user): url = '/v2/fake4/os-quota-sets/1234?user_id=1' req = self._get_http_request(url) res = self.controller.delete(req, 1234) self.assertEqual(202, self.get_delete_status_int(res)) mock_destroy_all_by_project_and_user.assert_called_once_with( req.environ['nova.context'], 1234, '1' ) @mock.patch('nova.objects.Quotas.create_limit') def test_user_quotas_update_good_data(self, mock_createlimit): body = {'quota_set': {'instances': 1, 'cores': 1}} url = '/v2/fake4/os-quota-sets/update_me?user_id=1' req = fakes.HTTPRequest.blank(url, use_admin_context=True) self.controller.update(req, 'update_me', body=body) self.assertEqual(2, len(mock_createlimit.mock_calls)) @mock.patch('nova.objects.Quotas.create_limit') def test_user_quotas_update_bad_data(self, mock_createlimit): body = {'quota_set': {'instances': 20, 'cores': 1}} url = 
'/v2/fake4/os-quota-sets/update_me?user_id=1' req = fakes.HTTPRequest.blank(url, use_admin_context=True) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update, req, 'update_me', body=body) self.assertEqual(0, len(mock_createlimit.mock_calls)) class QuotaSetsPolicyEnforcementV21(test.NoDBTestCase): def setUp(self): super(QuotaSetsPolicyEnforcementV21, self).setUp() self.controller = quotas_v21.QuotaSetsController() self.req = fakes.HTTPRequest.blank('') def test_delete_policy_failed(self): rule_name = "os_compute_api:os-quota-sets:delete" self.policy.set_rules({rule_name: "project_id:non_fake"}) exc = self.assertRaises( exception.PolicyNotAuthorized, self.controller.delete, self.req, fakes.FAKE_UUID) self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) def test_defaults_policy_failed(self): rule_name = "os_compute_api:os-quota-sets:defaults" self.policy.set_rules({rule_name: "project_id:non_fake"}) exc = self.assertRaises( exception.PolicyNotAuthorized, self.controller.defaults, self.req, fakes.FAKE_UUID) self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) def test_show_policy_failed(self): rule_name = "os_compute_api:os-quota-sets:show" self.policy.set_rules({rule_name: "project_id:non_fake"}) exc = self.assertRaises( exception.PolicyNotAuthorized, self.controller.show, self.req, fakes.FAKE_UUID) self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) def test_detail_policy_failed(self): rule_name = "os_compute_api:os-quota-sets:detail" self.policy.set_rules({rule_name: "project_id:non_fake"}) exc = self.assertRaises( exception.PolicyNotAuthorized, self.controller.detail, self.req, fakes.FAKE_UUID) self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) def test_update_policy_failed(self): rule_name = "os_compute_api:os-quota-sets:update" self.policy.set_rules({rule_name: "project_id:non_fake"}) exc = self.assertRaises( exception.PolicyNotAuthorized, self.controller.update, self.req, fakes.FAKE_UUID, body={'quota_set': {}}) self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) class QuotaSetsTestV236(test.NoDBTestCase): microversion = '2.36' def setUp(self): super(QuotaSetsTestV236, self).setUp() # We need to stub out verify_project_id so that it doesn't # generate an EndpointNotFound exception and result in a # server error. 
self.stub_out('nova.api.openstack.identity.verify_project_id', lambda ctx, project_id: True) self.flags(enable_network_quota=True) self.useFixture(nova_fixtures.RegisterNetworkQuota()) self.old_req = fakes.HTTPRequest.blank('', version='2.1') self.filtered_quotas = ['fixed_ips', 'floating_ips', 'networks', 'security_group_rules', 'security_groups'] self.quotas = { 'cores': {'limit': 20}, 'fixed_ips': {'limit': -1}, 'floating_ips': {'limit': 10}, 'injected_file_content_bytes': {'limit': 10240}, 'injected_file_path_bytes': {'limit': 255}, 'injected_files': {'limit': 5}, 'instances': {'limit': 10}, 'key_pairs': {'limit': 100}, 'metadata_items': {'limit': 128}, 'networks': {'limit': 3}, 'ram': {'limit': 51200}, 'security_group_rules': {'limit': 20}, 'security_groups': {'limit': 10}, 'server_group_members': {'limit': 10}, 'server_groups': {'limit': 10} } self.defaults = { 'cores': 20, 'fixed_ips': -1, 'floating_ips': 10, 'injected_file_content_bytes': 10240, 'injected_file_path_bytes': 255, 'injected_files': 5, 'instances': 10, 'key_pairs': 100, 'metadata_items': 128, 'networks': 3, 'ram': 51200, 'security_group_rules': 20, 'security_groups': 10, 'server_group_members': 10, 'server_groups': 10 } self.controller = quotas_v21.QuotaSetsController() self.req = fakes.HTTPRequest.blank('', version=self.microversion) def _ensure_filtered_quotas_existed_in_old_api(self): res_dict = self.controller.show(self.old_req, 1234) for filtered in self.filtered_quotas: self.assertIn(filtered, res_dict['quota_set']) @mock.patch('nova.quota.QUOTAS.get_project_quotas') def test_quotas_show_filtered(self, mock_quotas): mock_quotas.return_value = self.quotas self._ensure_filtered_quotas_existed_in_old_api() res_dict = self.controller.show(self.req, 1234) for filtered in self.filtered_quotas: self.assertNotIn(filtered, res_dict['quota_set']) @mock.patch('nova.quota.QUOTAS.get_defaults') @mock.patch('nova.quota.QUOTAS.get_project_quotas') def test_quotas_default_filtered(self, mock_quotas, mock_defaults): mock_quotas.return_value = self.quotas self._ensure_filtered_quotas_existed_in_old_api() res_dict = self.controller.defaults(self.req, 1234) for filtered in self.filtered_quotas: self.assertNotIn(filtered, res_dict['quota_set']) @mock.patch('nova.quota.QUOTAS.get_project_quotas') def test_quotas_detail_filtered(self, mock_quotas): mock_quotas.return_value = self.quotas self._ensure_filtered_quotas_existed_in_old_api() res_dict = self.controller.detail(self.req, 1234) for filtered in self.filtered_quotas: self.assertNotIn(filtered, res_dict['quota_set']) @mock.patch('nova.quota.QUOTAS.get_project_quotas') def test_quotas_update_input_filtered(self, mock_quotas): mock_quotas.return_value = self.quotas self._ensure_filtered_quotas_existed_in_old_api() for filtered in self.filtered_quotas: self.assertRaises(exception.ValidationError, self.controller.update, self.req, 1234, body={'quota_set': {filtered: 100}}) @mock.patch('nova.objects.Quotas.create_limit') @mock.patch('nova.quota.QUOTAS.get_settable_quotas') @mock.patch('nova.quota.QUOTAS.get_project_quotas') def test_quotas_update_output_filtered(self, mock_quotas, mock_settable, mock_create_limit): mock_quotas.return_value = self.quotas mock_settable.return_value = {'cores': {'maximum': -1, 'minimum': 0}} self._ensure_filtered_quotas_existed_in_old_api() res_dict = self.controller.update(self.req, 1234, body={'quota_set': {'cores': 100}}) for filtered in self.filtered_quotas: self.assertNotIn(filtered, res_dict['quota_set']) class QuotaSetsTestV257(QuotaSetsTestV236): 
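    # Microversion 2.57 extends the 2.36 filtered set with
    # quotas_v21.FILTERED_QUOTAS_2_57 (the personality-file quotas), so the
    # inherited filtering tests re-run against the larger filtered set.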
    microversion = '2.57'

    def setUp(self):
        super(QuotaSetsTestV257, self).setUp()
        self.filtered_quotas.extend(quotas_v21.FILTERED_QUOTAS_2_57)

nova-17.0.1/nova/tests/unit/api/openstack/compute/test_access_ips.py

# Copyright 2013 IBM Corp.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from nova.api.openstack.compute import servers as servers_v21
from nova import exception
from nova import test
from nova.tests.unit.api.openstack import fakes
from nova.tests.unit.image import fake

v4_key = "accessIPv4"
v6_key = "accessIPv6"


class AccessIPsAPIValidationTestV21(test.TestCase):
    validation_error = exception.ValidationError

    def setUp(self):
        super(AccessIPsAPIValidationTestV21, self).setUp()

        def fake_save(context, **kwargs):
            pass

        def fake_rebuild(*args, **kwargs):
            pass

        fakes.stub_out_nw_api(self)
        self._set_up_controller()
        fake.stub_out_image_service(self)
        self.stub_out('nova.db.instance_get_by_uuid',
                      fakes.fake_instance_get())
        self.stub_out('nova.objects.instance.Instance.save', fake_save)
        self.stub_out('nova.compute.api.API.rebuild', fake_rebuild)
        self.req = fakes.HTTPRequest.blank('')

    def _set_up_controller(self):
        self.controller = servers_v21.ServersController()

    def _verify_update_access_ip(self, res_dict, params):
        for key, value in params.items():
            value = value or ''
            self.assertEqual(res_dict['server'][key], value)

    def _test_create(self, params):
        body = {
            'server': {
                'name': 'server_test',
                'imageRef': '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6',
                'flavorRef': 'http://localhost/123/flavors/3',
            },
        }
        body['server'].update(params)
        res_dict = self.controller.create(self.req, body=body).obj
        return res_dict

    def _test_update(self, params):
        body = {
            'server': {
            },
        }
        body['server'].update(params)
        res_dict = self.controller.update(self.req, fakes.FAKE_UUID,
                                          body=body)
        self._verify_update_access_ip(res_dict, params)

    def _test_rebuild(self, params):
        body = {
            'rebuild': {
                'imageRef': '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6',
            },
        }
        body['rebuild'].update(params)
        self.controller._action_rebuild(self.req, fakes.FAKE_UUID,
                                        body=body)

    def test_create_server_with_access_ipv4(self):
        params = {v4_key: '192.168.0.10'}
        self._test_create(params)

    def test_create_server_with_access_ip_pass_disabled(self):
        # test with admin passwords disabled See lp bug 921814
        self.flags(enable_instance_password=False, group='api')
        params = {v4_key: '192.168.0.10',
                  v6_key: '2001:db8::9abc'}
        res = self._test_create(params)
        server = res['server']
        self.assertNotIn("admin_password", server)

    def test_create_server_with_invalid_access_ipv4(self):
        params = {v4_key: '1.1.1.1.1.1'}
        self.assertRaises(self.validation_error, self._test_create, params)

    def test_create_server_with_access_ipv6(self):
        params = {v6_key: '2001:db8::9abc'}
        self._test_create(params)

    def test_create_server_with_invalid_access_ipv6(self):
        params = {v6_key: 'fe80:::::::'}
        self.assertRaises(self.validation_error, self._test_create, params)

    def test_update_server_with_access_ipv4(self):
        params = {v4_key: '192.168.0.10'}
        self._test_update(params)

    def test_update_server_with_invalid_access_ipv4(self):
        params = {v4_key: '1.1.1.1.1.1'}
        self.assertRaises(self.validation_error, self._test_update, params)

    def test_update_server_with_access_ipv6(self):
        params = {v6_key: '2001:db8::9abc'}
        self._test_update(params)

    def test_update_server_with_invalid_access_ipv6(self):
        params = {v6_key: 'fe80:::::::'}
        self.assertRaises(self.validation_error, self._test_update, params)

    def test_rebuild_server_with_access_ipv4(self):
        params = {v4_key: '192.168.0.10'}
        self._test_rebuild(params)

    def test_rebuild_server_with_invalid_access_ipv4(self):
        params = {v4_key: '1.1.1.1.1.1'}
        self.assertRaises(self.validation_error, self._test_rebuild, params)

    def test_rebuild_server_with_access_ipv6(self):
        params = {v6_key: '2001:db8::9abc'}
        self._test_rebuild(params)

    def test_rebuild_server_with_invalid_access_ipv6(self):
        params = {v6_key: 'fe80:::::::'}
        self.assertRaises(self.validation_error, self._test_rebuild, params)

nova-17.0.1/nova/tests/unit/api/openstack/compute/test_deferred_delete.py

# Copyright 2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock
import webob

from nova.api.openstack.compute import deferred_delete as dd_v21
from nova.compute import api as compute_api
from nova import context
from nova import exception
from nova import test
from nova.tests.unit.api.openstack import fakes
from nova.tests.unit import fake_instance


class FakeRequest(object):
    def __init__(self, context):
        self.environ = {'nova.context': context}


class DeferredDeleteExtensionTestV21(test.NoDBTestCase):
    ext_ver = dd_v21.DeferredDeleteController

    def setUp(self):
        super(DeferredDeleteExtensionTestV21, self).setUp()
        self.fake_input_dict = {}
        self.fake_uuid = 'fake_uuid'
        self.fake_context = context.RequestContext('fake', 'fake')
        self.fake_req = FakeRequest(self.fake_context)
        self.extension = self.ext_ver()

    @mock.patch.object(compute_api.API, 'get')
    @mock.patch.object(compute_api.API, 'force_delete')
    def test_force_delete(self, mock_force_delete, mock_get):
        instance = fake_instance.fake_instance_obj(
            self.fake_req.environ['nova.context'])
        mock_get.return_value = instance
        res = self.extension._force_delete(self.fake_req, self.fake_uuid,
                                           self.fake_input_dict)
        # NOTE: on v2.1, http status code is set as wsgi_code of API
        # method instead of status_int in a response object.
if isinstance(self.extension, dd_v21.DeferredDeleteController): status_int = self.extension._force_delete.wsgi_code else: status_int = res.status_int self.assertEqual(202, status_int) mock_get.assert_called_once_with(self.fake_context, self.fake_uuid, expected_attrs=None) mock_force_delete.assert_called_once_with(self.fake_context, instance) @mock.patch.object(compute_api.API, 'get') def test_force_delete_instance_not_found(self, mock_get): mock_get.side_effect = exception.InstanceNotFound( instance_id='instance-0000') self.assertRaises(webob.exc.HTTPNotFound, self.extension._force_delete, self.fake_req, self.fake_uuid, self.fake_input_dict) mock_get.assert_called_once_with(self.fake_context, self.fake_uuid, expected_attrs=None) @mock.patch.object(compute_api.API, 'get') @mock.patch.object(compute_api.API, 'force_delete', side_effect=exception.InstanceIsLocked( instance_uuid='fake_uuid')) def test_force_delete_instance_locked(self, mock_force_delete, mock_get): req = fakes.HTTPRequest.blank('/v2/fake/servers/fake_uuid/action') ex = self.assertRaises(webob.exc.HTTPConflict, self.extension._force_delete, req, 'fake_uuid', '') self.assertIn('Instance fake_uuid is locked', ex.explanation) @mock.patch.object(compute_api.API, 'get') @mock.patch.object(compute_api.API, 'force_delete', side_effect=exception.InstanceNotFound( instance_id='fake_uuid')) def test_force_delete_instance_notfound(self, mock_force_delete, mock_get): req = fakes.HTTPRequest.blank('/v2/fake/servers/fake_uuid/action') ex = self.assertRaises(webob.exc.HTTPNotFound, self.extension._force_delete, req, 'fake_uuid', '') self.assertIn('Instance fake_uuid could not be found', ex.explanation) @mock.patch.object(compute_api.API, 'get') @mock.patch.object(compute_api.API, 'force_delete', side_effect=exception.InstanceUnknownCell( instance_uuid='fake_uuid')) def test_force_delete_instance_cellunknown(self, mock_force_delete, mock_get): req = fakes.HTTPRequest.blank('/v2/fake/servers/fake_uuid/action') ex = self.assertRaises(webob.exc.HTTPNotFound, self.extension._force_delete, req, 'fake_uuid', '') self.assertIn('Cell is not known for instance fake_uuid', ex.explanation) @mock.patch.object(compute_api.API, 'get') @mock.patch.object(compute_api.API, 'restore') def test_restore(self, mock_restore, mock_get): instance = fake_instance.fake_instance_obj( self.fake_req.environ['nova.context']) mock_get.return_value = instance res = self.extension._restore(self.fake_req, self.fake_uuid, self.fake_input_dict) # NOTE: on v2.1, http status code is set as wsgi_code of API # method instead of status_int in a response object. 
if isinstance(self.extension, dd_v21.DeferredDeleteController): status_int = self.extension._restore.wsgi_code else: status_int = res.status_int self.assertEqual(202, status_int) mock_get.assert_called_once_with(self.fake_context, self.fake_uuid, expected_attrs=None) mock_restore.assert_called_once_with(self.fake_context, instance) @mock.patch.object(compute_api.API, 'get') def test_restore_instance_not_found(self, mock_get): mock_get.side_effect = exception.InstanceNotFound( instance_id='instance-0000') self.assertRaises(webob.exc.HTTPNotFound, self.extension._restore, self.fake_req, self.fake_uuid, self.fake_input_dict) mock_get.assert_called_once_with(self.fake_context, self.fake_uuid, expected_attrs=None) @mock.patch.object(compute_api.API, 'get') @mock.patch.object(compute_api.API, 'restore') def test_restore_raises_conflict_on_invalid_state(self, mock_restore, mock_get): instance = fake_instance.fake_instance_obj( self.fake_req.environ['nova.context']) mock_get.return_value = instance mock_restore.side_effect = exception.InstanceInvalidState( attr='fake_attr', state='fake_state', method='fake_method', instance_uuid='fake') self.assertRaises(webob.exc.HTTPConflict, self.extension._restore, self.fake_req, self.fake_uuid, self.fake_input_dict) mock_get.assert_called_once_with(self.fake_context, self.fake_uuid, expected_attrs=None) mock_restore.assert_called_once_with(self.fake_context, instance) class DeferredDeletePolicyEnforcementV21(test.NoDBTestCase): def setUp(self): super(DeferredDeletePolicyEnforcementV21, self).setUp() self.controller = dd_v21.DeferredDeleteController() self.req = fakes.HTTPRequest.blank('') def test_restore_policy_failed(self): rule_name = "os_compute_api:os-deferred-delete" self.policy.set_rules({rule_name: "project:non_fake"}) exc = self.assertRaises( exception.PolicyNotAuthorized, self.controller._restore, self.req, fakes.FAKE_UUID, body={'restore': {}}) self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) @mock.patch('nova.api.openstack.common.get_instance') def test_force_delete_policy_failed_with_other_project( self, get_instance_mock): get_instance_mock.return_value = ( fake_instance.fake_instance_obj(self.req.environ['nova.context'])) rule_name = "os_compute_api:os-deferred-delete" self.policy.set_rules({rule_name: "project_id:%(project_id)s"}) # Change the project_id in request context. self.req.environ['nova.context'].project_id = 'other-project' exc = self.assertRaises( exception.PolicyNotAuthorized, self.controller._force_delete, self.req, fakes.FAKE_UUID, body={'forceDelete': {}}) self.assertEqual( "Policy doesn't allow %s to be performed." 
            % rule_name, exc.format_message())

    @mock.patch('nova.compute.api.API.force_delete')
    @mock.patch('nova.api.openstack.common.get_instance')
    def test_force_delete_overridden_policy_pass_with_same_project(
            self, get_instance_mock, force_delete_mock):
        instance = fake_instance.fake_instance_obj(
            self.req.environ['nova.context'],
            project_id=self.req.environ['nova.context'].project_id)
        get_instance_mock.return_value = instance
        rule_name = "os_compute_api:os-deferred-delete"
        self.policy.set_rules({rule_name: "project_id:%(project_id)s"})
        self.controller._force_delete(self.req, fakes.FAKE_UUID,
                                      body={'forceDelete': {}})
        force_delete_mock.assert_called_once_with(
            self.req.environ['nova.context'], instance)

    @mock.patch('nova.api.openstack.common.get_instance')
    def test_force_delete_overridden_policy_failed_with_other_user(
            self, get_instance_mock):
        get_instance_mock.return_value = (
            fake_instance.fake_instance_obj(self.req.environ['nova.context']))
        rule_name = "os_compute_api:os-deferred-delete"
        self.policy.set_rules({rule_name: "user_id:%(user_id)s"})
        # Change the user_id in request context.
        self.req.environ['nova.context'].user_id = 'other-user'
        exc = self.assertRaises(exception.PolicyNotAuthorized,
                                self.controller._force_delete, self.req,
                                fakes.FAKE_UUID, body={'forceDelete': {}})
        self.assertEqual(
            "Policy doesn't allow %s to be performed." % rule_name,
            exc.format_message())

    @mock.patch('nova.compute.api.API.force_delete')
    @mock.patch('nova.api.openstack.common.get_instance')
    def test_force_delete_overridden_policy_pass_with_same_user(
            self, get_instance_mock, force_delete_mock):
        instance = fake_instance.fake_instance_obj(
            self.req.environ['nova.context'],
            user_id=self.req.environ['nova.context'].user_id)
        get_instance_mock.return_value = instance
        rule_name = "os_compute_api:os-deferred-delete"
        self.policy.set_rules({rule_name: "user_id:%(user_id)s"})
        self.controller._force_delete(self.req, fakes.FAKE_UUID,
                                      body={'forceDelete': {}})
        force_delete_mock.assert_called_once_with(
            self.req.environ['nova.context'], instance)

nova-17.0.1/nova/tests/unit/api/openstack/compute/test_simple_tenant_usage.py

# Copyright 2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
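# A minimal, self-contained sketch (illustrative, not part of the original
# module) of the usage arithmetic asserted throughout the tests below: each
# fake instance runs for the full [START, STOP] window, so per-tenant totals
# reduce to <count> * <resource> * HOURS, doubled once per cell by the
# default two-cell fixture (see _test_verify_index).
def _example_usage_totals(servers=5, vcpus=2, memory_mb=1024,
                          root_gb=10, ephemeral_gb=20, hours=24):
    """Hypothetical helper mirroring _test_verify_index's expectations."""
    return {
        'total_hours': servers * hours,
        'total_vcpus_usage': servers * vcpus * hours,
        'total_memory_mb_usage': servers * memory_mb * hours,
        'total_local_gb_usage': servers * (root_gb + ephemeral_gb) * hours,
    }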
import datetime import mock from oslo_policy import policy as oslo_policy from oslo_utils import timeutils from six.moves import range import webob from nova.api.openstack.compute import simple_tenant_usage as \ simple_tenant_usage_v21 from nova.compute import vm_states import nova.conf from nova import context from nova import exception from nova import objects from nova import policy from nova import test from nova.tests.unit.api.openstack import fakes from nova.tests import uuidsentinel as uuids CONF = nova.conf.CONF SERVERS = 5 TENANTS = 2 HOURS = 24 ROOT_GB = 10 EPHEMERAL_GB = 20 MEMORY_MB = 1024 VCPUS = 2 NOW = timeutils.utcnow() START = NOW - datetime.timedelta(hours=HOURS) STOP = NOW FAKE_INST_TYPE = {'id': 1, 'vcpus': VCPUS, 'root_gb': ROOT_GB, 'ephemeral_gb': EPHEMERAL_GB, 'memory_mb': MEMORY_MB, 'name': 'fakeflavor', 'flavorid': 'foo', 'rxtx_factor': 1.0, 'vcpu_weight': 1, 'swap': 0, 'created_at': None, 'updated_at': None, 'deleted_at': None, 'deleted': 0, 'disabled': False, 'is_public': True, 'extra_specs': {'foo': 'bar'}} def _fake_instance(start, end, instance_id, tenant_id, vm_state=vm_states.ACTIVE): flavor = objects.Flavor(**FAKE_INST_TYPE) return objects.Instance( deleted=False, id=instance_id, uuid=getattr(uuids, 'instance_%d' % instance_id), image_ref='1', project_id=tenant_id, user_id='fakeuser', display_name='name', instance_type_id=FAKE_INST_TYPE['id'], launched_at=start, terminated_at=end, vm_state=vm_state, memory_mb=MEMORY_MB, vcpus=VCPUS, root_gb=ROOT_GB, ephemeral_gb=EPHEMERAL_GB, flavor=flavor) def _fake_instance_deleted_flavorless(context, start, end, instance_id, tenant_id, vm_state=vm_states.ACTIVE): return objects.Instance( context=context, deleted=instance_id, id=instance_id, uuid=getattr(uuids, 'instance_%d' % instance_id), image_ref='1', project_id=tenant_id, user_id='fakeuser', display_name='name', instance_type_id=FAKE_INST_TYPE['id'], launched_at=start, terminated_at=end, deleted_at=start, vm_state=vm_state, memory_mb=MEMORY_MB, vcpus=VCPUS, root_gb=ROOT_GB, ephemeral_gb=EPHEMERAL_GB) @classmethod def fake_get_active_deleted_flavorless(cls, context, begin, end=None, project_id=None, host=None, expected_attrs=None, use_slave=False, limit=None, marker=None): # First get some normal instances to have actual usage instances = [ _fake_instance(START, STOP, x, project_id or 'faketenant_%s' % (x // SERVERS)) for x in range(TENANTS * SERVERS)] # Then get some deleted instances with no flavor to test bugs 1643444 and # 1692893 (duplicates) instances.extend([ _fake_instance_deleted_flavorless( context, START, STOP, x, project_id or 'faketenant_%s' % (x // SERVERS)) for x in range(TENANTS * SERVERS)]) return objects.InstanceList(objects=instances) @classmethod def fake_get_active_by_window_joined(cls, context, begin, end=None, project_id=None, host=None, expected_attrs=None, use_slave=False, limit=None, marker=None): return objects.InstanceList(objects=[ _fake_instance(START, STOP, x, project_id or 'faketenant_%s' % (x // SERVERS)) for x in range(TENANTS * SERVERS)]) class SimpleTenantUsageTestV21(test.TestCase): version = '2.1' policy_rule_prefix = "os_compute_api:os-simple-tenant-usage" controller = simple_tenant_usage_v21.SimpleTenantUsageController() def setUp(self): super(SimpleTenantUsageTestV21, self).setUp() self.admin_context = context.RequestContext('fakeadmin_0', 'faketenant_0', is_admin=True) self.user_context = context.RequestContext('fakeadmin_0', 'faketenant_0', is_admin=False) self.alt_user_context = context.RequestContext('fakeadmin_0', 
'faketenant_1', is_admin=False) self.num_cells = len(objects.CellMappingList.get_all( self.admin_context)) def _test_verify_index(self, start, stop, limit=None): url = '?start=%s&end=%s' if limit: url += '&limit=%s' % (limit) req = fakes.HTTPRequest.blank(url % (start.isoformat(), stop.isoformat()), version=self.version) req.environ['nova.context'] = self.admin_context res_dict = self.controller.index(req) usages = res_dict['tenant_usages'] if limit: num = 1 else: # NOTE(danms): We call our fake data mock once per cell, # and the default fixture has two cells (cell0 and cell1), # so all our math will be doubled. num = self.num_cells for i in range(TENANTS): self.assertEqual(SERVERS * HOURS * num, int(usages[i]['total_hours'])) self.assertEqual(SERVERS * (ROOT_GB + EPHEMERAL_GB) * HOURS * num, int(usages[i]['total_local_gb_usage'])) self.assertEqual(SERVERS * MEMORY_MB * HOURS * num, int(usages[i]['total_memory_mb_usage'])) self.assertEqual(SERVERS * VCPUS * HOURS * num, int(usages[i]['total_vcpus_usage'])) self.assertFalse(usages[i].get('server_usages')) if limit: self.assertIn('tenant_usages_links', res_dict) self.assertEqual('next', res_dict['tenant_usages_links'][0]['rel']) else: self.assertNotIn('tenant_usages_links', res_dict) # NOTE(artom) Test for bugs 1643444 and 1692893 (duplicates). We simulate a # situation where an instance has been deleted (moved to shadow table) and # its corresponding instance_extra row has been archived (deleted from # shadow table). @mock.patch('nova.objects.InstanceList.get_active_by_window_joined', fake_get_active_deleted_flavorless) @mock.patch.object( objects.Instance, '_load_flavor', side_effect=exception.InstanceNotFound(instance_id='fake-id')) def test_verify_index_deleted_flavorless(self, mock_load): with mock.patch.object(self.controller, '_get_flavor', return_value=None): self._test_verify_index(START, STOP) @mock.patch('nova.objects.InstanceList.get_active_by_window_joined', fake_get_active_by_window_joined) def test_verify_index(self): self._test_verify_index(START, STOP) @mock.patch('nova.objects.InstanceList.get_active_by_window_joined', fake_get_active_by_window_joined) def test_verify_index_future_end_time(self): future = NOW + datetime.timedelta(hours=HOURS) self._test_verify_index(START, future) def test_verify_show(self): self._test_verify_show(START, STOP) def test_verify_show_future_end_time(self): future = NOW + datetime.timedelta(hours=HOURS) self._test_verify_show(START, future) @mock.patch('nova.objects.InstanceList.get_active_by_window_joined', fake_get_active_by_window_joined) def _get_tenant_usages(self, detailed=''): req = fakes.HTTPRequest.blank('?detailed=%s&start=%s&end=%s' % (detailed, START.isoformat(), STOP.isoformat()), version=self.version) req.environ['nova.context'] = self.admin_context # Make sure that get_active_by_window_joined is only called with # expected_attrs=['flavor']. 
orig_get_active_by_window_joined = ( objects.InstanceList.get_active_by_window_joined) def fake_get_active_by_window_joined(context, begin, end=None, project_id=None, host=None, expected_attrs=None, use_slave=False, limit=None, marker=None): self.assertEqual(['flavor'], expected_attrs) return orig_get_active_by_window_joined(context, begin, end, project_id, host, expected_attrs, use_slave) with mock.patch.object(objects.InstanceList, 'get_active_by_window_joined', side_effect=fake_get_active_by_window_joined): res_dict = self.controller.index(req) return res_dict['tenant_usages'] def test_verify_detailed_index(self): usages = self._get_tenant_usages('1') for i in range(TENANTS): servers = usages[i]['server_usages'] for j in range(SERVERS): self.assertEqual(HOURS, int(servers[j]['hours'])) def test_verify_simple_index(self): usages = self._get_tenant_usages(detailed='0') for i in range(TENANTS): self.assertIsNone(usages[i].get('server_usages')) def test_verify_simple_index_empty_param(self): # NOTE(lzyeval): 'detailed=&start=..&end=..' usages = self._get_tenant_usages() for i in range(TENANTS): self.assertIsNone(usages[i].get('server_usages')) @mock.patch('nova.objects.InstanceList.get_active_by_window_joined', fake_get_active_by_window_joined) def _test_verify_show(self, start, stop, limit=None): tenant_id = 1 url = '?start=%s&end=%s' if limit: url += '&limit=%s' % (limit) req = fakes.HTTPRequest.blank(url % (start.isoformat(), stop.isoformat()), version=self.version) req.environ['nova.context'] = self.user_context res_dict = self.controller.show(req, tenant_id) if limit: num = 1 else: # NOTE(danms): We call our fake data mock once per cell, # and the default fixture has two cells (cell0 and cell1), # so all our math will be doubled. num = self.num_cells usage = res_dict['tenant_usage'] servers = usage['server_usages'] self.assertEqual(TENANTS * SERVERS * num, len(usage['server_usages'])) server_uuids = [getattr(uuids, 'instance_%d' % x) for x in range(SERVERS)] for j in range(SERVERS): delta = STOP - START # NOTE(javeme): cast seconds from float to int for clarity uptime = int(delta.total_seconds()) self.assertEqual(uptime, int(servers[j]['uptime'])) self.assertEqual(HOURS, int(servers[j]['hours'])) self.assertIn(servers[j]['instance_id'], server_uuids) if limit: self.assertIn('tenant_usage_links', res_dict) self.assertEqual('next', res_dict['tenant_usage_links'][0]['rel']) else: self.assertNotIn('tenant_usage_links', res_dict) def test_verify_show_cannot_view_other_tenant(self): req = fakes.HTTPRequest.blank('?start=%s&end=%s' % (START.isoformat(), STOP.isoformat()), version=self.version) req.environ['nova.context'] = self.alt_user_context rules = { self.policy_rule_prefix + ":show": [ ["role:admin"], ["project_id:%(project_id)s"]] } policy.set_rules(oslo_policy.Rules.from_dict(rules)) try: self.assertRaises(exception.PolicyNotAuthorized, self.controller.show, req, 'faketenant_0') finally: policy.reset() def test_get_tenants_usage_with_bad_start_date(self): future = NOW + datetime.timedelta(hours=HOURS) req = fakes.HTTPRequest.blank('?start=%s&end=%s' % (future.isoformat(), NOW.isoformat()), version=self.version) req.environ['nova.context'] = self.user_context self.assertRaises(webob.exc.HTTPBadRequest, self.controller.show, req, 'faketenant_0') def test_get_tenants_usage_with_invalid_start_date(self): req = fakes.HTTPRequest.blank('?start=%s&end=%s' % ("xxxx", NOW.isoformat()), version=self.version) req.environ['nova.context'] = self.user_context 
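        # "xxxx" is not a valid ISO 8601 timestamp, so parsing it (see
        # parse_strtime in SimpleTenantUsageUtilsV21 below) raises
        # InvalidStrTime, which the API surfaces as HTTP 400.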
self.assertRaises(webob.exc.HTTPBadRequest, self.controller.show, req, 'faketenant_0') def _test_get_tenants_usage_with_one_date(self, date_url_param): req = fakes.HTTPRequest.blank('?%s' % date_url_param, version=self.version) req.environ['nova.context'] = self.user_context res = self.controller.show(req, 'faketenant_0') self.assertIn('tenant_usage', res) def test_get_tenants_usage_with_no_start_date(self): self._test_get_tenants_usage_with_one_date( 'end=%s' % (NOW + datetime.timedelta(5)).isoformat()) def test_get_tenants_usage_with_no_end_date(self): self._test_get_tenants_usage_with_one_date( 'start=%s' % (NOW - datetime.timedelta(5)).isoformat()) def test_index_additional_query_parameters(self): req = fakes.HTTPRequest.blank('?start=%s&end=%s&additional=1' % (START.isoformat(), STOP.isoformat()), version=self.version) res = self.controller.index(req) self.assertIn('tenant_usages', res) def _test_index_duplicate_query_parameters_validation(self, params): for param, value in params.items(): req = fakes.HTTPRequest.blank('?start=%s&%s=%s&%s=%s' % (START.isoformat(), param, value, param, value), version=self.version) res = self.controller.index(req) self.assertIn('tenant_usages', res) def test_index_duplicate_query_parameters_validation(self): params = { 'start': START.isoformat(), 'end': STOP.isoformat(), 'detailed': 1 } self._test_index_duplicate_query_parameters_validation(params) def test_show_additional_query_parameters(self): req = fakes.HTTPRequest.blank('?start=%s&end=%s&additional=1' % (START.isoformat(), STOP.isoformat()), version=self.version) res = self.controller.show(req, 1) self.assertIn('tenant_usage', res) def _test_show_duplicate_query_parameters_validation(self, params): for param, value in params.items(): req = fakes.HTTPRequest.blank('?start=%s&%s=%s&%s=%s' % (START.isoformat(), param, value, param, value), version=self.version) res = self.controller.show(req, 1) self.assertIn('tenant_usage', res) def test_show_duplicate_query_parameters_validation(self): params = { 'start': START.isoformat(), 'end': STOP.isoformat() } self._test_show_duplicate_query_parameters_validation(params) class SimpleTenantUsageTestV40(SimpleTenantUsageTestV21): version = '2.40' @mock.patch('nova.objects.InstanceList.get_active_by_window_joined', fake_get_active_by_window_joined) def test_next_links_show(self): self._test_verify_show(START, STOP, limit=SERVERS * TENANTS) @mock.patch('nova.objects.InstanceList.get_active_by_window_joined', fake_get_active_by_window_joined) def test_next_links_index(self): self._test_verify_index(START, STOP, limit=SERVERS * TENANTS) @mock.patch('nova.objects.InstanceList.get_active_by_window_joined', fake_get_active_by_window_joined) def test_index_duplicate_query_parameters_validation(self): params = { 'start': START.isoformat(), 'end': STOP.isoformat(), 'detailed': 1, 'limit': 1, 'marker': 1 } self._test_index_duplicate_query_parameters_validation(params) @mock.patch('nova.objects.InstanceList.get_active_by_window_joined', fake_get_active_by_window_joined) def test_show_duplicate_query_parameters_validation(self): params = { 'start': START.isoformat(), 'end': STOP.isoformat(), 'limit': 1, 'marker': 1 } self._test_show_duplicate_query_parameters_validation(params) class SimpleTenantUsageLimitsTestV21(test.TestCase): version = '2.1' def setUp(self): super(SimpleTenantUsageLimitsTestV21, self).setUp() self.controller = simple_tenant_usage_v21.SimpleTenantUsageController() self.tenant_id = 1 def _get_request(self, url): url = url % (START.isoformat(), 
STOP.isoformat()) return fakes.HTTPRequest.blank(url, version=self.version) def assert_limit(self, mock_get, limit): mock_get.assert_called_with( mock.ANY, mock.ANY, mock.ANY, mock.ANY, expected_attrs=['flavor'], limit=1000, marker=None) @mock.patch('nova.objects.InstanceList.get_active_by_window_joined') def test_limit_defaults_to_conf_max_limit_show(self, mock_get): req = self._get_request('?start=%s&end=%s') self.controller.show(req, self.tenant_id) self.assert_limit(mock_get, CONF.api.max_limit) @mock.patch('nova.objects.InstanceList.get_active_by_window_joined') def test_limit_defaults_to_conf_max_limit_index(self, mock_get): req = self._get_request('?start=%s&end=%s') self.controller.index(req) self.assert_limit(mock_get, CONF.api.max_limit) class SimpleTenantUsageLimitsTestV240(SimpleTenantUsageLimitsTestV21): version = '2.40' def assert_limit_and_marker(self, mock_get, limit, marker): # NOTE(danms): Make sure we called at least once with the marker mock_get.assert_any_call( mock.ANY, mock.ANY, mock.ANY, mock.ANY, expected_attrs=['flavor'], limit=3, marker=marker) @mock.patch('nova.objects.InstanceList.get_active_by_window_joined') def test_limit_and_marker_show(self, mock_get): req = self._get_request('?start=%s&end=%s&limit=3&marker=some-marker') self.controller.show(req, self.tenant_id) self.assert_limit_and_marker(mock_get, 3, 'some-marker') @mock.patch('nova.objects.InstanceList.get_active_by_window_joined') def test_limit_and_marker_index(self, mock_get): req = self._get_request('?start=%s&end=%s&limit=3&marker=some-marker') self.controller.index(req) self.assert_limit_and_marker(mock_get, 3, 'some-marker') @mock.patch('nova.objects.InstanceList.get_active_by_window_joined') def test_marker_not_found_show(self, mock_get): mock_get.side_effect = exception.MarkerNotFound(marker='some-marker') req = self._get_request('?start=%s&end=%s&limit=3&marker=some-marker') self.assertRaises( webob.exc.HTTPBadRequest, self.controller.show, req, 1) @mock.patch('nova.objects.InstanceList.get_active_by_window_joined') def test_marker_not_found_index(self, mock_get): mock_get.side_effect = exception.MarkerNotFound(marker='some-marker') req = self._get_request('?start=%s&end=%s&limit=3&marker=some-marker') self.assertRaises( webob.exc.HTTPBadRequest, self.controller.index, req) def test_index_with_invalid_non_int_limit(self): req = self._get_request('?start=%s&end=%s&limit=-3') self.assertRaises(exception.ValidationError, self.controller.index, req) def test_index_with_invalid_string_limit(self): req = self._get_request('?start=%s&end=%s&limit=abc') self.assertRaises(exception.ValidationError, self.controller.index, req) def test_index_duplicate_query_with_invalid_string_limit(self): req = self._get_request('?start=%s&end=%s&limit=3&limit=abc') self.assertRaises(exception.ValidationError, self.controller.index, req) def test_show_with_invalid_non_int_limit(self): req = self._get_request('?start=%s&end=%s&limit=-3') self.assertRaises(exception.ValidationError, self.controller.show, req) def test_show_with_invalid_string_limit(self): req = self._get_request('?start=%s&end=%s&limit=abc') self.assertRaises(exception.ValidationError, self.controller.show, req) def test_show_duplicate_query_with_invalid_string_limit(self): req = self._get_request('?start=%s&end=%s&limit=3&limit=abc') self.assertRaises(exception.ValidationError, self.controller.show, req) class SimpleTenantUsageControllerTestV21(test.TestCase): controller = simple_tenant_usage_v21.SimpleTenantUsageController() def setUp(self): 
        super(SimpleTenantUsageControllerTestV21, self).setUp()
        self.context = context.RequestContext('fakeuser', 'fake-project')

        self.inst_obj = _fake_instance(START, STOP,
                                       instance_id=1,
                                       tenant_id=self.context.project_id,
                                       vm_state=vm_states.DELETED)

    @mock.patch('nova.objects.Instance.get_flavor',
                side_effect=exception.NotFound())
    def test_get_flavor_from_non_deleted_with_id_fails(self,
                                                       fake_get_flavor):
        # If an instance is not deleted and missing type information from
        # instance.flavor, then that's a bug
        self.assertRaises(exception.NotFound,
                          self.controller._get_flavor, self.context,
                          self.inst_obj, {})

    @mock.patch('nova.objects.Instance.get_flavor',
                side_effect=exception.NotFound())
    def test_get_flavor_from_deleted_with_notfound(self, fake_get_flavor):
        # If the flavor is not found from the instance and the instance is
        # deleted, attempt to look it up from the DB and if found we're OK.
        self.inst_obj.deleted = 1
        flavor = self.controller._get_flavor(self.context, self.inst_obj, {})
        self.assertEqual(objects.Flavor, type(flavor))
        self.assertEqual(FAKE_INST_TYPE['id'], flavor.id)

    @mock.patch('nova.objects.Instance.get_flavor',
                side_effect=exception.NotFound())
    def test_get_flavor_from_deleted_with_id_of_deleted(self,
                                                        fake_get_flavor):
        # Verify the legacy behavior of instance_type_id pointing to a
        # missing type being non-fatal
        self.inst_obj.deleted = 1
        self.inst_obj.instance_type_id = 99
        flavor = self.controller._get_flavor(self.context, self.inst_obj, {})
        self.assertIsNone(flavor)


class SimpleTenantUsageUtilsV21(test.NoDBTestCase):
    simple_tenant_usage = simple_tenant_usage_v21

    def test_valid_string(self):
        dt = self.simple_tenant_usage.parse_strtime(
            "2014-02-21T13:47:20.824060", "%Y-%m-%dT%H:%M:%S.%f")
        self.assertEqual(datetime.datetime(
            microsecond=824060, second=20, minute=47, hour=13,
            day=21, month=2, year=2014), dt)

    def test_invalid_string(self):
        self.assertRaises(exception.InvalidStrTime,
                          self.simple_tenant_usage.parse_strtime,
                          "2014-02-21 13:47:20.824060",
                          "%Y-%m-%dT%H:%M:%S.%f")

nova-17.0.1/nova/tests/unit/api/openstack/compute/test_extended_status.py

# Copyright 2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
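# Based on the assertions below, the OS-EXT-STS extension decorates each
# server view with three keys (values here are illustrative only; the stubs
# use placeholder states such as "kayaking"):
#
#     "OS-EXT-STS:vm_state":    the server's vm_state string
#     "OS-EXT-STS:power_state": an integer power state
#     "OS-EXT-STS:task_state":  the task_state string, or None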
from oslo_serialization import jsonutils from nova import exception from nova import objects from nova.objects import instance as instance_obj from nova import test from nova.tests.unit.api.openstack import fakes from nova.tests.unit import fake_instance UUID1 = '00000000-0000-0000-0000-000000000001' UUID2 = '00000000-0000-0000-0000-000000000002' UUID3 = '00000000-0000-0000-0000-000000000003' def fake_compute_get(*args, **kwargs): inst = fakes.stub_instance(1, uuid=UUID3, task_state="kayaking", vm_state="slightly crunchy", power_state=1) return fake_instance.fake_instance_obj(args[1], **inst) def fake_compute_get_all(*args, **kwargs): db_list = [ fakes.stub_instance(1, uuid=UUID1, task_state="task-1", vm_state="vm-1", power_state=1), fakes.stub_instance(2, uuid=UUID2, task_state="task-2", vm_state="vm-2", power_state=2), ] fields = instance_obj.INSTANCE_DEFAULT_FIELDS return instance_obj._make_instance_list(args[1], objects.InstanceList(), db_list, fields) class ExtendedStatusTestV21(test.TestCase): content_type = 'application/json' prefix = 'OS-EXT-STS:' fake_url = '/v2/fake' def _set_flags(self): pass def _make_request(self, url): req = fakes.HTTPRequest.blank(url) req.headers['Accept'] = self.content_type res = req.get_response(fakes.wsgi_app_v21()) return res def setUp(self): super(ExtendedStatusTestV21, self).setUp() fakes.stub_out_nw_api(self) fakes.stub_out_secgroup_api(self) self.stub_out('nova.compute.api.API.get', fake_compute_get) self.stub_out('nova.compute.api.API.get_all', fake_compute_get_all) self._set_flags() return_server = fakes.fake_instance_get() self.stub_out('nova.db.instance_get_by_uuid', return_server) def _get_server(self, body): return jsonutils.loads(body).get('server') def _get_servers(self, body): return jsonutils.loads(body).get('servers') def assertServerStates(self, server, vm_state, power_state, task_state): self.assertEqual(server.get('%svm_state' % self.prefix), vm_state) self.assertEqual(int(server.get('%spower_state' % self.prefix)), power_state) self.assertEqual(server.get('%stask_state' % self.prefix), task_state) def test_show(self): url = self.fake_url + '/servers/%s' % UUID3 res = self._make_request(url) self.assertEqual(res.status_int, 200) self.assertServerStates(self._get_server(res.body), vm_state='slightly crunchy', power_state=1, task_state='kayaking') def test_detail(self): url = self.fake_url + '/servers/detail' res = self._make_request(url) self.assertEqual(res.status_int, 200) for i, server in enumerate(self._get_servers(res.body)): self.assertServerStates(server, vm_state='vm-%s' % (i + 1), power_state=(i + 1), task_state='task-%s' % (i + 1)) def test_no_instance_passthrough_404(self): def fake_compute_get(*args, **kwargs): raise exception.InstanceNotFound(instance_id='fake') self.stub_out('nova.compute.api.API.get', fake_compute_get) url = self.fake_url + '/servers/70f6db34-de8d-4fbd-aafb-4065bdfa6115' res = self._make_request(url) self.assertEqual(res.status_int, 404) nova-17.0.1/nova/tests/unit/api/openstack/compute/test_extended_availability_zone.py0000666000175000017500000001256213250073126031072 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_serialization import jsonutils from nova import availability_zones from nova.compute import vm_states from nova import exception from nova import objects from nova.objects import instance as instance_obj from nova import test from nova.tests.unit.api.openstack import fakes from nova.tests.unit import fake_instance UUID1 = '00000000-0000-0000-0000-000000000001' UUID2 = '00000000-0000-0000-0000-000000000002' UUID3 = '00000000-0000-0000-0000-000000000003' def fake_compute_get_az(*args, **kwargs): inst = fakes.stub_instance(1, uuid=UUID3, host="get-host", vm_state=vm_states.ACTIVE, availability_zone='fakeaz') return fake_instance.fake_instance_obj(args[1], **inst) def fake_compute_get_empty(*args, **kwargs): inst = fakes.stub_instance(1, uuid=UUID3, host="", vm_state=vm_states.ACTIVE, availability_zone='fakeaz') return fake_instance.fake_instance_obj(args[1], **inst) def fake_compute_get(*args, **kwargs): inst = fakes.stub_instance(1, uuid=UUID3, host="get-host", vm_state=vm_states.ACTIVE) return fake_instance.fake_instance_obj(args[1], **inst) def fake_compute_get_all(*args, **kwargs): inst1 = fakes.stub_instance(1, uuid=UUID1, host="all-host", vm_state=vm_states.ACTIVE) inst2 = fakes.stub_instance(2, uuid=UUID2, host="all-host", vm_state=vm_states.ACTIVE) db_list = [inst1, inst2] fields = instance_obj.INSTANCE_DEFAULT_FIELDS return instance_obj._make_instance_list(args[1], objects.InstanceList(), db_list, fields) def fake_get_host_availability_zone(context, host): return host def fake_get_no_host_availability_zone(context, host): return None class ExtendedAvailabilityZoneTestV21(test.TestCase): content_type = 'application/json' prefix = 'OS-EXT-AZ:' base_url = '/v2/fake/servers/' def setUp(self): super(ExtendedAvailabilityZoneTestV21, self).setUp() availability_zones.reset_cache() fakes.stub_out_nw_api(self) fakes.stub_out_secgroup_api(self) self.stub_out('nova.compute.api.API.get', fake_compute_get) self.stub_out('nova.compute.api.API.get_all', fake_compute_get_all) self.stub_out('nova.availability_zones.get_host_availability_zone', fake_get_host_availability_zone) return_server = fakes.fake_instance_get() self.stub_out('nova.db.instance_get_by_uuid', return_server) def _make_request(self, url): req = fakes.HTTPRequest.blank(url) req.headers['Accept'] = self.content_type res = req.get_response(fakes.wsgi_app_v21()) return res def _get_server(self, body): return jsonutils.loads(body).get('server') def _get_servers(self, body): return jsonutils.loads(body).get('servers') def assertAvailabilityZone(self, server, az): self.assertEqual(server.get('%savailability_zone' % self.prefix), az) def test_show_no_host_az(self): self.stub_out('nova.compute.api.API.get', fake_compute_get_az) self.stub_out('nova.availability_zones.get_host_availability_zone', fake_get_no_host_availability_zone) url = self.base_url + UUID3 res = self._make_request(url) self.assertEqual(res.status_int, 200) self.assertAvailabilityZone(self._get_server(res.body), '') def test_show_empty_host_az(self): self.stub_out('nova.compute.api.API.get', fake_compute_get_empty) url = self.base_url + UUID3 res = 
self._make_request(url) self.assertEqual(res.status_int, 200) self.assertAvailabilityZone(self._get_server(res.body), 'fakeaz') def test_show(self): url = self.base_url + UUID3 res = self._make_request(url) self.assertEqual(res.status_int, 200) self.assertAvailabilityZone(self._get_server(res.body), 'get-host') def test_detail(self): url = self.base_url + 'detail' res = self._make_request(url) self.assertEqual(res.status_int, 200) for i, server in enumerate(self._get_servers(res.body)): self.assertAvailabilityZone(server, 'all-host') def test_no_instance_passthrough_404(self): def fake_compute_get(*args, **kwargs): raise exception.InstanceNotFound(instance_id='fake') self.stub_out('nova.compute.api.API.get', fake_compute_get) url = self.base_url + '70f6db34-de8d-4fbd-aafb-4065bdfa6115' res = self._make_request(url) self.assertEqual(res.status_int, 404) nova-17.0.1/nova/tests/unit/api/openstack/compute/test_flavorextradata.py0000666000175000017500000000643113250073126026672 0ustar zuulzuul00000000000000# Copyright 2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_serialization import jsonutils from nova import test from nova.tests.unit.api.openstack import fakes class FlavorExtraDataTestV21(test.NoDBTestCase): base_url = '/v2/fake/flavors' def setUp(self): super(FlavorExtraDataTestV21, self).setUp() fakes.stub_out_flavor_get_all(self) fakes.stub_out_flavor_get_by_flavor_id(self) @property def app(self): return fakes.wsgi_app_v21() def _verify_flavor_response(self, flavor, expected): for key in expected: self.assertEqual(flavor[key], expected[key]) def test_show(self): expected = { 'flavor': { 'id': fakes.FLAVORS['1'].flavorid, 'name': fakes.FLAVORS['1'].name, 'ram': fakes.FLAVORS['1'].memory_mb, 'vcpus': fakes.FLAVORS['1'].vcpus, 'disk': fakes.FLAVORS['1'].root_gb, 'OS-FLV-EXT-DATA:ephemeral': fakes.FLAVORS['1'].ephemeral_gb, } } url = self.base_url + '/1' req = fakes.HTTPRequest.blank(url) req.headers['Content-Type'] = 'application/json' res = req.get_response(self.app) body = jsonutils.loads(res.body) self._verify_flavor_response(body['flavor'], expected['flavor']) def test_detail(self): expected = [ { 'id': fakes.FLAVORS['1'].flavorid, 'name': fakes.FLAVORS['1'].name, 'ram': fakes.FLAVORS['1'].memory_mb, 'vcpus': fakes.FLAVORS['1'].vcpus, 'disk': fakes.FLAVORS['1'].root_gb, 'OS-FLV-EXT-DATA:ephemeral': fakes.FLAVORS['1'].ephemeral_gb, 'rxtx_factor': fakes.FLAVORS['1'].rxtx_factor or u'', 'os-flavor-access:is_public': fakes.FLAVORS['1'].is_public, }, { 'id': fakes.FLAVORS['2'].flavorid, 'name': fakes.FLAVORS['2'].name, 'ram': fakes.FLAVORS['2'].memory_mb, 'vcpus': fakes.FLAVORS['2'].vcpus, 'disk': fakes.FLAVORS['2'].root_gb, 'OS-FLV-EXT-DATA:ephemeral': fakes.FLAVORS['2'].ephemeral_gb, 'rxtx_factor': fakes.FLAVORS['2'].rxtx_factor or u'', 'os-flavor-access:is_public': fakes.FLAVORS['2'].is_public, }, ] url = self.base_url + '/detail' req = fakes.HTTPRequest.blank(url) req.headers['Content-Type'] = 'application/json' res = req.get_response(self.app) 
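        # The response body is raw JSON; decode it and compare each flavor
        # entry key-by-key (note the expected dicts above fall back to u''
        # when rxtx_factor is falsy, mirroring "rxtx_factor or u''").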
body = jsonutils.loads(res.body) for i, flavor in enumerate(body['flavors']): self._verify_flavor_response(flavor, expected[i]) nova-17.0.1/nova/tests/unit/api/openstack/compute/test_aggregates.py0000666000175000017500000010432013250073126025610 0ustar zuulzuul00000000000000# Copyright (c) 2012 Citrix Systems, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Tests for the aggregates admin api.""" import mock from webob import exc from nova.api.openstack import api_version_request from nova.api.openstack.compute import aggregates as aggregates_v21 from nova.compute import api as compute_api from nova import context from nova import exception from nova import objects from nova.objects import base as obj_base from nova import test from nova.tests.unit.api.openstack import fakes from nova.tests import uuidsentinel def _make_agg_obj(agg_dict): return objects.Aggregate(**agg_dict) def _make_agg_list(agg_list): return objects.AggregateList(objects=[_make_agg_obj(a) for a in agg_list]) def _transform_aggregate_az(agg_dict): # the Aggregate object looks for availability_zone within metadata, # so if availability_zone is in the top-level dict, move it down into # metadata. We also have to delete the key from the top-level dict because # availability_zone is a read-only property on the Aggregate object md = agg_dict.get('metadata', {}) if 'availability_zone' in agg_dict: md['availability_zone'] = agg_dict['availability_zone'] del agg_dict['availability_zone'] agg_dict['metadata'] = md return agg_dict def _transform_aggregate_list_azs(agg_list): for agg_dict in agg_list: yield _transform_aggregate_az(agg_dict) AGGREGATE_LIST = [ {"name": "aggregate1", "id": "1", "metadata": {"availability_zone": "nova1"}}, {"name": "aggregate2", "id": "2", "metadata": {"availability_zone": "nova1"}}, {"name": "aggregate3", "id": "3", "metadata": {"availability_zone": "nova2"}}, {"name": "aggregate1", "id": "4", "metadata": {"availability_zone": "nova1"}}] AGGREGATE_LIST = _make_agg_list(AGGREGATE_LIST) AGGREGATE = {"name": "aggregate1", "id": "1", "metadata": {"foo": "bar", "availability_zone": "nova1"}, "hosts": ["host1", "host2"]} AGGREGATE = _make_agg_obj(AGGREGATE) FORMATTED_AGGREGATE = {"name": "aggregate1", "id": "1", "metadata": {"availability_zone": "nova1"}} FORMATTED_AGGREGATE = _make_agg_obj(FORMATTED_AGGREGATE) class FakeRequest(object): environ = {"nova.context": context.get_admin_context()} class AggregateTestCaseV21(test.NoDBTestCase): """Test Case for aggregates admin api.""" add_host = 'self.controller._add_host' remove_host = 'self.controller._remove_host' set_metadata = 'self.controller._set_metadata' bad_request = exception.ValidationError def _set_up(self): self.controller = aggregates_v21.AggregateController() self.req = fakes.HTTPRequest.blank('/v2/os-aggregates', use_admin_context=True) self.user_req = fakes.HTTPRequest.blank('/v2/os-aggregates') self.context = self.req.environ['nova.context'] def setUp(self): super(AggregateTestCaseV21, self).setUp() self._set_up() def 
test_index(self): def _list_aggregates(context): if context is None: raise Exception() return AGGREGATE_LIST with mock.patch.object(self.controller.api, 'get_aggregate_list', side_effect=_list_aggregates) as mock_list: result = self.controller.index(self.req) result = _transform_aggregate_list_azs(result['aggregates']) self._assert_agg_data(AGGREGATE_LIST, _make_agg_list(result)) self.assertTrue(mock_list.called) def test_index_no_admin(self): self.assertRaises(exception.PolicyNotAuthorized, self.controller.index, self.user_req) def test_create(self): with mock.patch.object(self.controller.api, 'create_aggregate', return_value=AGGREGATE) as mock_create: result = self.controller.create(self.req, body={"aggregate": {"name": "test", "availability_zone": "nova1"}}) result = _transform_aggregate_az(result['aggregate']) self._assert_agg_data(FORMATTED_AGGREGATE, _make_agg_obj(result)) mock_create.assert_called_once_with(self.context, 'test', 'nova1') def test_create_no_admin(self): self.assertRaises(exception.PolicyNotAuthorized, self.controller.create, self.user_req, body={"aggregate": {"name": "test", "availability_zone": "nova1"}}) def test_create_with_duplicate_aggregate_name(self): side_effect = exception.AggregateNameExists(aggregate_name="test") with mock.patch.object(self.controller.api, 'create_aggregate', side_effect=side_effect) as mock_create: self.assertRaises(exc.HTTPConflict, self.controller.create, self.req, body={"aggregate": {"name": "test", "availability_zone": "nova1"}}) mock_create.assert_called_once_with(self.context, 'test', 'nova1') @mock.patch.object(compute_api.AggregateAPI, 'create_aggregate') def test_create_with_unmigrated_aggregates(self, mock_create_aggregate): mock_create_aggregate.side_effect = \ exception.ObjectActionError(action='create', reason='main database still contains aggregates') self.assertRaises(exc.HTTPConflict, self.controller.create, self.req, body={"aggregate": {"name": "test", "availability_zone": "nova1"}}) def test_create_with_incorrect_availability_zone(self): side_effect = exception.InvalidAggregateAction( action='create_aggregate', aggregate_id="'N/A'", reason='invalid zone') with mock.patch.object(self.controller.api, 'create_aggregate', side_effect=side_effect) as mock_create: self.assertRaises(exc.HTTPBadRequest, self.controller.create, self.req, body={"aggregate": {"name": "test", "availability_zone": "nova_bad"}}) mock_create.assert_called_once_with(self.context, 'test', 'nova_bad') def test_create_with_no_aggregate(self): self.assertRaises(self.bad_request, self.controller.create, self.req, body={"foo": {"name": "test", "availability_zone": "nova1"}}) def test_create_with_no_name(self): self.assertRaises(self.bad_request, self.controller.create, self.req, body={"aggregate": {"foo": "test", "availability_zone": "nova1"}}) def test_create_name_with_leading_trailing_spaces(self): self.assertRaises(self.bad_request, self.controller.create, self.req, body={"aggregate": {"name": " test ", "availability_zone": "nova1"}}) def test_create_name_with_leading_trailing_spaces_compat_mode(self): def fake_mock_aggs(context, name, az): self.assertEqual('test', name) return AGGREGATE with mock.patch.object(compute_api.AggregateAPI, 'create_aggregate') as mock_aggs: mock_aggs.side_effect = fake_mock_aggs self.req.set_legacy_v2() self.controller.create(self.req, body={"aggregate": {"name": " test ", "availability_zone": "nova1"}}) def test_create_with_no_availability_zone(self): with mock.patch.object(self.controller.api, 'create_aggregate', 
return_value=AGGREGATE) as mock_create: result = self.controller.create(self.req, body={"aggregate": {"name": "test"}}) result = _transform_aggregate_az(result['aggregate']) self._assert_agg_data(FORMATTED_AGGREGATE, _make_agg_obj(result)) mock_create.assert_called_once_with(self.context, 'test', None) def test_create_with_null_name(self): self.assertRaises(self.bad_request, self.controller.create, self.req, body={"aggregate": {"name": "", "availability_zone": "nova1"}}) def test_create_with_name_too_long(self): self.assertRaises(self.bad_request, self.controller.create, self.req, body={"aggregate": {"name": "x" * 256, "availability_zone": "nova1"}}) def test_create_with_availability_zone_too_long(self): self.assertRaises(self.bad_request, self.controller.create, self.req, body={"aggregate": {"name": "test", "availability_zone": "x" * 256}}) def test_create_with_availability_zone_invalid(self): self.assertRaises(self.bad_request, self.controller.create, self.req, body={"aggregate": {"name": "test", "availability_zone": "bad:az"}}) def test_create_availability_zone_with_leading_trailing_spaces(self): self.assertRaises(self.bad_request, self.controller.create, self.req, body={"aggregate": {"name": "test", "availability_zone": " nova1 "}}) def test_create_availability_zone_with_leading_trailing_spaces_compat_mode( self): def fake_mock_aggs(context, name, az): self.assertEqual('nova1', az) return AGGREGATE with mock.patch.object(compute_api.AggregateAPI, 'create_aggregate') as mock_aggs: mock_aggs.side_effect = fake_mock_aggs self.req.set_legacy_v2() self.controller.create(self.req, body={"aggregate": {"name": "test", "availability_zone": " nova1 "}}) def test_create_with_empty_availability_zone(self): self.assertRaises(self.bad_request, self.controller.create, self.req, body={"aggregate": {"name": "test", "availability_zone": ""}}) @mock.patch('nova.compute.api.AggregateAPI.create_aggregate') def test_create_with_none_availability_zone(self, mock_create_agg): mock_create_agg.return_value = objects.Aggregate( self.context, name='test', uuid=uuidsentinel.aggregate, hosts=[], metadata={}) body = {"aggregate": {"name": "test", "availability_zone": None}} result = self.controller.create(self.req, body=body) mock_create_agg.assert_called_once_with(self.context, 'test', None) self.assertEqual(result['aggregate']['name'], 'test') def test_create_with_extra_invalid_arg(self): self.assertRaises(self.bad_request, self.controller.create, self.req, body={"name": "test", "availability_zone": "nova1", "foo": 'bar'}) def test_show(self): with mock.patch.object(self.controller.api, 'get_aggregate', return_value=AGGREGATE) as mock_get: aggregate = self.controller.show(self.req, "1") aggregate = _transform_aggregate_az(aggregate['aggregate']) self._assert_agg_data(AGGREGATE, _make_agg_obj(aggregate)) mock_get.assert_called_once_with(self.context, '1') def test_show_no_admin(self): self.assertRaises(exception.PolicyNotAuthorized, self.controller.show, self.user_req, "1") def test_show_with_invalid_id(self): side_effect = exception.AggregateNotFound(aggregate_id='2') with mock.patch.object(self.controller.api, 'get_aggregate', side_effect=side_effect) as mock_get: self.assertRaises(exc.HTTPNotFound, self.controller.show, self.req, "2") mock_get.assert_called_once_with(self.context, '2') def test_update(self): body = {"aggregate": {"name": "new_name", "availability_zone": "nova1"}} with mock.patch.object(self.controller.api, 'update_aggregate', return_value=AGGREGATE) as mock_update: result = 
self.controller.update(self.req, "1", body=body) result = _transform_aggregate_az(result['aggregate']) self._assert_agg_data(AGGREGATE, _make_agg_obj(result)) mock_update.assert_called_once_with(self.context, '1', body["aggregate"]) def test_update_no_admin(self): body = {"aggregate": {"availability_zone": "nova"}} self.assertRaises(exception.PolicyNotAuthorized, self.controller.update, self.user_req, "1", body=body) def test_update_with_only_name(self): body = {"aggregate": {"name": "new_name"}} with mock.patch.object(self.controller.api, 'update_aggregate', return_value=AGGREGATE) as mock_update: result = self.controller.update(self.req, "1", body=body) result = _transform_aggregate_az(result['aggregate']) self._assert_agg_data(AGGREGATE, _make_agg_obj(result)) mock_update.assert_called_once_with(self.context, '1', body["aggregate"]) def test_update_with_only_availability_zone(self): body = {"aggregate": {"availability_zone": "nova1"}} with mock.patch.object(self.controller.api, 'update_aggregate', return_value=AGGREGATE) as mock_update: result = self.controller.update(self.req, "1", body=body) result = _transform_aggregate_az(result['aggregate']) self._assert_agg_data(AGGREGATE, _make_agg_obj(result)) mock_update.assert_called_once_with(self.context, '1', body["aggregate"]) def test_update_with_no_updates(self): test_metadata = {"aggregate": {}} self.assertRaises(self.bad_request, self.controller.update, self.req, "2", body=test_metadata) def test_update_with_no_update_key(self): test_metadata = {"asdf": {}} self.assertRaises(self.bad_request, self.controller.update, self.req, "2", body=test_metadata) def test_update_with_wrong_updates(self): test_metadata = {"aggregate": {"status": "disable", "foo": "bar"}} self.assertRaises(self.bad_request, self.controller.update, self.req, "2", body=test_metadata) def test_update_with_null_name(self): test_metadata = {"aggregate": {"name": ""}} self.assertRaises(self.bad_request, self.controller.update, self.req, "2", body=test_metadata) def test_update_with_name_too_long(self): test_metadata = {"aggregate": {"name": "x" * 256}} self.assertRaises(self.bad_request, self.controller.update, self.req, "2", body=test_metadata) def test_update_with_availability_zone_too_long(self): test_metadata = {"aggregate": {"availability_zone": "x" * 256}} self.assertRaises(self.bad_request, self.controller.update, self.req, "2", body=test_metadata) def test_update_with_availability_zone_invalid(self): test_metadata = {"aggregate": {"availability_zone": "bad:az"}} self.assertRaises(self.bad_request, self.controller.update, self.req, "2", body=test_metadata) def test_update_with_empty_availability_zone(self): test_metadata = {"aggregate": {"availability_zone": ""}} self.assertRaises(self.bad_request, self.controller.update, self.req, "2", body=test_metadata) @mock.patch('nova.compute.api.AggregateAPI.update_aggregate') def test_update_with_none_availability_zone(self, mock_update_agg): agg_id = uuidsentinel.aggregate mock_update_agg.return_value = objects.Aggregate(self.context, name='test', uuid=agg_id, hosts=[], metadata={}) body = {"aggregate": {"name": "test", "availability_zone": None}} result = self.controller.update(self.req, agg_id, body=body) mock_update_agg.assert_called_once_with(self.context, agg_id, body['aggregate']) self.assertEqual(result['aggregate']['name'], 'test') def test_update_with_bad_aggregate(self): body = {"aggregate": {"name": "test_name"}} side_effect = exception.AggregateNotFound(aggregate_id=2) with mock.patch.object(self.controller.api, 
'update_aggregate', side_effect=side_effect) as mock_update: self.assertRaises(exc.HTTPNotFound, self.controller.update, self.req, "2", body=body) mock_update.assert_called_once_with(self.context, '2', body["aggregate"]) def test_update_with_duplicated_name(self): body = {"aggregate": {"name": "test_name"}} side_effect = exception.AggregateNameExists(aggregate_name="test_name") with mock.patch.object(self.controller.api, 'update_aggregate', side_effect=side_effect) as mock_update: self.assertRaises(exc.HTTPConflict, self.controller.update, self.req, "2", body=body) mock_update.assert_called_once_with(self.context, '2', body["aggregate"]) def test_invalid_action(self): body = {"append_host": {"host": "host1"}} self.assertRaises(self.bad_request, eval(self.add_host), self.req, "1", body=body) def test_update_with_invalid_action(self): with mock.patch.object(self.controller.api, "update_aggregate", side_effect=exception.InvalidAggregateAction( action='invalid', aggregate_id='agg1', reason= "not empty")): body = {"aggregate": {"availability_zone": "nova"}} self.assertRaises(exc.HTTPBadRequest, self.controller.update, self.req, "1", body=body) def test_add_host(self): with mock.patch.object(self.controller.api, 'add_host_to_aggregate', return_value=AGGREGATE) as mock_add: aggregate = eval(self.add_host)(self.req, "1", body={"add_host": {"host": "host1"}}) aggregate = _transform_aggregate_az(aggregate['aggregate']) self._assert_agg_data(AGGREGATE, _make_agg_obj(aggregate)) mock_add.assert_called_once_with(self.context, "1", "host1") def test_add_host_no_admin(self): self.assertRaises(exception.PolicyNotAuthorized, eval(self.add_host), self.user_req, "1", body={"add_host": {"host": "host1"}}) def test_add_host_with_already_added_host(self): side_effect = exception.AggregateHostExists(aggregate_id="1", host="host1") with mock.patch.object(self.controller.api, 'add_host_to_aggregate', side_effect=side_effect) as mock_add: self.assertRaises(exc.HTTPConflict, eval(self.add_host), self.req, "1", body={"add_host": {"host": "host1"}}) mock_add.assert_called_once_with(self.context, "1", "host1") def test_add_host_with_bad_aggregate(self): side_effect = exception.AggregateNotFound( aggregate_id="bogus_aggregate") with mock.patch.object(self.controller.api, 'add_host_to_aggregate', side_effect=side_effect) as mock_add: self.assertRaises(exc.HTTPNotFound, eval(self.add_host), self.req, "bogus_aggregate", body={"add_host": {"host": "host1"}}) mock_add.assert_called_once_with(self.context, "bogus_aggregate", "host1") def test_add_host_with_bad_host(self): side_effect = exception.ComputeHostNotFound(host="bogus_host") with mock.patch.object(self.controller.api, 'add_host_to_aggregate', side_effect=side_effect) as mock_add: self.assertRaises(exc.HTTPNotFound, eval(self.add_host), self.req, "1", body={"add_host": {"host": "bogus_host"}}) mock_add.assert_called_once_with(self.context, "1", "bogus_host") def test_add_host_with_missing_host(self): self.assertRaises(self.bad_request, eval(self.add_host), self.req, "1", body={"add_host": {"asdf": "asdf"}}) def test_add_host_with_invalid_format_host(self): self.assertRaises(self.bad_request, eval(self.add_host), self.req, "1", body={"add_host": {"host": "a" * 300}}) def test_add_host_with_invalid_name_host(self): self.assertRaises(self.bad_request, eval(self.add_host), self.req, "1", body={"add_host": {"host": "bad:host"}}) def test_add_host_with_multiple_hosts(self): self.assertRaises(self.bad_request, eval(self.add_host), self.req, "1", body={"add_host": {"host": 
["host1", "host2"]}}) def test_add_host_raises_key_error(self): with mock.patch.object(self.controller.api, 'add_host_to_aggregate', side_effect=KeyError) as mock_add: self.assertRaises(exc.HTTPInternalServerError, eval(self.add_host), self.req, "1", body={"add_host": {"host": "host1"}}) mock_add.assert_called_once_with(self.context, "1", "host1") def test_add_host_with_invalid_request(self): self.assertRaises(self.bad_request, eval(self.add_host), self.req, "1", body={"add_host": "1"}) def test_add_host_with_non_string(self): self.assertRaises(self.bad_request, eval(self.add_host), self.req, "1", body={"add_host": {"host": 1}}) def test_remove_host(self): return_value = _make_agg_obj({'metadata': {}}) with mock.patch.object(self.controller.api, 'remove_host_from_aggregate', return_value=return_value) as mock_rem: eval(self.remove_host)(self.req, "1", body={"remove_host": {"host": "host1"}}) mock_rem.assert_called_once_with(self.context, "1", "host1") def test_remove_host_no_admin(self): self.assertRaises(exception.PolicyNotAuthorized, eval(self.remove_host), self.user_req, "1", body={"remove_host": {"host": "host1"}}) def test_remove_host_with_bad_aggregate(self): side_effect = exception.AggregateNotFound( aggregate_id="bogus_aggregate") with mock.patch.object(self.controller.api, 'remove_host_from_aggregate', side_effect=side_effect) as mock_rem: self.assertRaises(exc.HTTPNotFound, eval(self.remove_host), self.req, "bogus_aggregate", body={"remove_host": {"host": "host1"}}) mock_rem.assert_called_once_with(self.context, "bogus_aggregate", "host1") def test_remove_host_with_host_not_in_aggregate(self): side_effect = exception.AggregateHostNotFound(aggregate_id="1", host="host1") with mock.patch.object(self.controller.api, 'remove_host_from_aggregate', side_effect=side_effect) as mock_rem: self.assertRaises(exc.HTTPNotFound, eval(self.remove_host), self.req, "1", body={"remove_host": {"host": "host1"}}) mock_rem.assert_called_once_with(self.context, "1", "host1") def test_remove_host_with_bad_host(self): side_effect = exception.ComputeHostNotFound(host="bogushost") with mock.patch.object(self.controller.api, 'remove_host_from_aggregate', side_effect=side_effect) as mock_rem: self.assertRaises(exc.HTTPNotFound, eval(self.remove_host), self.req, "1", body={"remove_host": {"host": "bogushost"}}) mock_rem.assert_called_once_with(self.context, "1", "bogushost") def test_remove_host_with_missing_host(self): self.assertRaises(self.bad_request, eval(self.remove_host), self.req, "1", body={"asdf": "asdf"}) def test_remove_host_with_multiple_hosts(self): self.assertRaises(self.bad_request, eval(self.remove_host), self.req, "1", body={"remove_host": {"host": ["host1", "host2"]}}) def test_remove_host_with_extra_param(self): self.assertRaises(self.bad_request, eval(self.remove_host), self.req, "1", body={"remove_host": {"asdf": "asdf", "host": "asdf"}}) def test_remove_host_with_invalid_request(self): self.assertRaises(self.bad_request, eval(self.remove_host), self.req, "1", body={"remove_host": "1"}) def test_remove_host_with_missing_host_empty(self): self.assertRaises(self.bad_request, eval(self.remove_host), self.req, "1", body={"remove_host": {}}) def test_set_metadata(self): body = {"set_metadata": {"metadata": {"foo": "bar"}}} with mock.patch.object(self.controller.api, 'update_aggregate_metadata', return_value=AGGREGATE) as mock_update: result = eval(self.set_metadata)(self.req, "1", body=body) result = _transform_aggregate_az(result['aggregate']) self._assert_agg_data(AGGREGATE, 
_make_agg_obj(result)) mock_update.assert_called_once_with(self.context, "1", body["set_metadata"]['metadata']) def test_set_metadata_delete(self): body = {"set_metadata": {"metadata": {"foo": None}}} with mock.patch.object(self.controller.api, 'update_aggregate_metadata') as mocked: mocked.return_value = AGGREGATE result = eval(self.set_metadata)(self.req, "1", body=body) result = _transform_aggregate_az(result['aggregate']) self._assert_agg_data(AGGREGATE, _make_agg_obj(result)) mocked.assert_called_once_with(self.context, "1", body["set_metadata"]["metadata"]) def test_set_metadata_no_admin(self): self.assertRaises(exception.PolicyNotAuthorized, eval(self.set_metadata), self.user_req, "1", body={"set_metadata": {"metadata": {"foo": "bar"}}}) def test_set_metadata_with_bad_aggregate(self): body = {"set_metadata": {"metadata": {"foo": "bar"}}} side_effect = exception.AggregateNotFound(aggregate_id="bad_aggregate") with mock.patch.object(self.controller.api, 'update_aggregate_metadata', side_effect=side_effect) as mock_update: self.assertRaises(exc.HTTPNotFound, eval(self.set_metadata), self.req, "bad_aggregate", body=body) mock_update.assert_called_once_with(self.context, "bad_aggregate", body["set_metadata"]['metadata']) def test_set_metadata_with_missing_metadata(self): body = {"asdf": {"foo": "bar"}} self.assertRaises(self.bad_request, eval(self.set_metadata), self.req, "1", body=body) def test_set_metadata_with_extra_params(self): body = {"metadata": {"foo": "bar"}, "asdf": {"foo": "bar"}} self.assertRaises(self.bad_request, eval(self.set_metadata), self.req, "1", body=body) def test_set_metadata_without_dict(self): body = {"set_metadata": {'metadata': 1}} self.assertRaises(self.bad_request, eval(self.set_metadata), self.req, "1", body=body) def test_set_metadata_with_empty_key(self): body = {"set_metadata": {"metadata": {"": "value"}}} self.assertRaises(self.bad_request, eval(self.set_metadata), self.req, "1", body=body) def test_set_metadata_with_key_too_long(self): body = {"set_metadata": {"metadata": {"x" * 256: "value"}}} self.assertRaises(self.bad_request, eval(self.set_metadata), self.req, "1", body=body) def test_set_metadata_with_value_too_long(self): body = {"set_metadata": {"metadata": {"key": "x" * 256}}} self.assertRaises(self.bad_request, eval(self.set_metadata), self.req, "1", body=body) def test_set_metadata_with_string(self): body = {"set_metadata": {"metadata": "test"}} self.assertRaises(self.bad_request, eval(self.set_metadata), self.req, "1", body=body) def test_delete_aggregate(self): with mock.patch.object(self.controller.api, 'delete_aggregate') as mock_del: self.controller.delete(self.req, "1") mock_del.assert_called_once_with(self.context, "1") def test_delete_aggregate_no_admin(self): self.assertRaises(exception.PolicyNotAuthorized, self.controller.delete, self.user_req, "1") def test_delete_aggregate_with_bad_aggregate(self): side_effect = exception.AggregateNotFound( aggregate_id="bogus_aggregate") with mock.patch.object(self.controller.api, 'delete_aggregate', side_effect=side_effect) as mock_del: self.assertRaises(exc.HTTPNotFound, self.controller.delete, self.req, "bogus_aggregate") mock_del.assert_called_once_with(self.context, "bogus_aggregate") def test_delete_aggregate_with_host(self): with mock.patch.object(self.controller.api, "delete_aggregate", side_effect=exception.InvalidAggregateAction( action="delete", aggregate_id="agg1", reason="not empty")): self.assertRaises(exc.HTTPBadRequest, self.controller.delete, self.req, "agg1") def 
test_marshall_aggregate(self): # _marshall_aggregate() just basically turns the aggregate returned # from the AggregateAPI into a dict, so this tests that transform. # We would expect the dictionary that comes out is the same one # that we pump into the aggregate object in the first place agg = {'name': 'aggregate1', 'id': 1, 'uuid': uuidsentinel.aggregate, 'metadata': {'foo': 'bar', 'availability_zone': 'nova'}, 'hosts': ['host1', 'host2']} agg_obj = _make_agg_obj(agg) # _marshall_aggregate() puts all fields and obj_extra_fields in the # top-level dict, so we need to put availability_zone at the top also agg['availability_zone'] = 'nova' avr_v240 = api_version_request.APIVersionRequest("2.40") avr_v241 = api_version_request.APIVersionRequest("2.41") req = mock.MagicMock(api_version_request=avr_v241) marshalled_agg = self.controller._marshall_aggregate(req, agg_obj) self.assertEqual(agg, marshalled_agg['aggregate']) req = mock.MagicMock(api_version_request=avr_v240) marshalled_agg = self.controller._marshall_aggregate(req, agg_obj) # UUID isn't in microversion 2.40 and before del agg['uuid'] self.assertEqual(agg, marshalled_agg['aggregate']) def _assert_agg_data(self, expected, actual): self.assertTrue(obj_base.obj_equal_prims(expected, actual), "The aggregate objects were not equal") nova-17.0.1/nova/tests/unit/api/openstack/compute/test_fping.py0000666000175000017500000001631513250073126024610 0ustar zuulzuul00000000000000# Copyright 2011 Grid Dynamics # Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock import webob from nova.api.openstack.compute import fping as fping_v21 from nova import exception from nova import test from nova.tests.unit.api.openstack import fakes FAKE_UUID = fakes.FAKE_UUID def execute(*cmd, **args): return "".join(["%s is alive" % ip for ip in cmd[1:]]) class FpingTestV21(test.TestCase): controller_cls = fping_v21.FpingController def setUp(self): super(FpingTestV21, self).setUp() self.flags(use_ipv6=False) return_server = fakes.fake_instance_get() return_servers = fakes.fake_instance_get_all_by_filters() self.stub_out("nova.compute.instance_list.get_instances_sorted", return_servers) self.stub_out("nova.db.instance_get_by_uuid", return_server) self.stub_out('nova.utils.execute', execute) self.stub_out("nova.api.openstack.compute.fping.FpingController." 
"check_fping", lambda self: None) self.controller = self.controller_cls() def _get_url(self): return "/v2/1234" def test_fping_index(self): req = fakes.HTTPRequest.blank(self._get_url() + "/os-fping") res_dict = self.controller.index(req) self.assertIn("servers", res_dict) for srv in res_dict["servers"]: for key in "project_id", "id", "alive": self.assertIn(key, srv) def test_fping_index_policy(self): req = fakes.HTTPRequest.blank(self._get_url() + "os-fping?all_tenants=1") self.assertRaises(exception.Forbidden, self.controller.index, req) req = fakes.HTTPRequest.blank(self._get_url() + "/os-fping?all_tenants=1") req.environ["nova.context"].is_admin = True res_dict = self.controller.index(req) self.assertIn("servers", res_dict) def test_fping_index_include(self): req = fakes.HTTPRequest.blank(self._get_url() + "/os-fping") res_dict = self.controller.index(req) ids = [srv["id"] for srv in res_dict["servers"]] req = fakes.HTTPRequest.blank(self._get_url() + "/os-fping?include=%s" % ids[0]) res_dict = self.controller.index(req) self.assertEqual(len(res_dict["servers"]), 1) self.assertEqual(res_dict["servers"][0]["id"], ids[0]) def test_fping_index_exclude(self): req = fakes.HTTPRequest.blank(self._get_url() + "/os-fping") res_dict = self.controller.index(req) ids = [srv["id"] for srv in res_dict["servers"]] req = fakes.HTTPRequest.blank(self._get_url() + "/os-fping?exclude=%s" % ",".join(ids[1:])) res_dict = self.controller.index(req) self.assertEqual(len(res_dict["servers"]), 1) self.assertEqual(res_dict["servers"][0]["id"], ids[0]) def test_fping_index_with_negative_int_filters(self): req = fakes.HTTPRequest.blank(self._get_url() + '/os-fping?all_tenants=-1&include=-21&exclude=-3', use_admin_context=True) self.controller.index(req) def test_fping_index_with_string_filter(self): req = fakes.HTTPRequest.blank(self._get_url() + '/os-fping?all_tenants=abc&include=abc1&exclude=abc2', use_admin_context=True) self.controller.index(req) def test_fping_index_duplicate_query_parameters_validation(self): params = { 'all_tenants': 1, 'include': 'UUID1', 'exclude': 'UUID2' } for param, value in params.items(): req = fakes.HTTPRequest.blank(self._get_url() + '/os-fping?%s=%s&%s=%s' % (param, value, param, value), use_admin_context=True) self.controller.index(req) def test_fping_index_pagination_with_additional_filter(self): req = fakes.HTTPRequest.blank(self._get_url() + '/os-fping?all_tenants=1&include=UUID1&additional=3', use_admin_context=True) self.controller.index(req) def test_fping_show(self): req = fakes.HTTPRequest.blank(self._get_url() + "os-fping/%s" % FAKE_UUID) res_dict = self.controller.show(req, FAKE_UUID) self.assertIn("server", res_dict) srv = res_dict["server"] for key in "project_id", "id", "alive": self.assertIn(key, srv) @mock.patch('nova.db.instance_get_by_uuid') def test_fping_show_with_not_found(self, mock_get_instance): mock_get_instance.side_effect = exception.InstanceNotFound( instance_id='') req = fakes.HTTPRequest.blank(self._get_url() + "os-fping/%s" % FAKE_UUID) self.assertRaises(webob.exc.HTTPNotFound, self.controller.show, req, FAKE_UUID) class FpingPolicyEnforcementV21(test.NoDBTestCase): def setUp(self): super(FpingPolicyEnforcementV21, self).setUp() self.controller = fping_v21.FpingController() self.req = fakes.HTTPRequest.blank('') def common_policy_check(self, rule, func, *arg, **kwarg): self.policy.set_rules(rule) exc = self.assertRaises( exception.PolicyNotAuthorized, func, *arg, **kwarg) self.assertEqual( "Policy doesn't allow %s to be performed." 
% rule.popitem()[0], exc.format_message()) def test_list_policy_failed(self): rule = {"os_compute_api:os-fping": "project:non_fake"} self.common_policy_check(rule, self.controller.index, self.req) self.req.GET.update({"all_tenants": "True"}) rule = {"os_compute_api:os-fping:all_tenants": "project:non_fake"} self.common_policy_check(rule, self.controller.index, self.req) def test_show_policy_failed(self): rule = {"os_compute_api:os-fping": "project:non_fake"} self.common_policy_check( rule, self.controller.show, self.req, FAKE_UUID) class FpingTestDeprecation(test.NoDBTestCase): def setUp(self): super(FpingTestDeprecation, self).setUp() self.controller = fping_v21.FpingController() self.req = fakes.HTTPRequest.blank('', version='2.36') def test_all_apis_return_not_found(self): self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.show, self.req, fakes.FAKE_UUID) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.index, self.req) nova-17.0.1/nova/tests/unit/api/openstack/compute/test_flavor_disabled.py0000666000175000017500000000452413250073126026624 0ustar zuulzuul00000000000000# Copyright 2012 Nebula, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_serialization import jsonutils from nova import test from nova.tests.unit.api.openstack import fakes def fake_get_db_flavor(req, flavorid): return fakes.FLAVORS[flavorid] class FlavorDisabledTestV21(test.NoDBTestCase): base_url = '/v2/fake/flavors' content_type = 'application/json' prefix = "OS-FLV-DISABLED:" def setUp(self): super(FlavorDisabledTestV21, self).setUp() fakes.stub_out_nw_api(self) fakes.stub_out_flavor_get_all(self) fakes.stub_out_flavor_get_by_flavor_id(self) self.stub_out('nova.api.openstack.wsgi.Request.get_db_flavor', fake_get_db_flavor) def _make_request(self, url): req = fakes.HTTPRequest.blank(url) req.headers['Accept'] = self.content_type res = req.get_response(fakes.wsgi_app_v21()) return res def _get_flavor(self, body): return jsonutils.loads(body).get('flavor') def _get_flavors(self, body): return jsonutils.loads(body).get('flavors') def assertFlavorDisabled(self, flavor, disabled): self.assertEqual(flavor.get('%sdisabled' % self.prefix), disabled) def test_show(self): url = self.base_url + '/1' res = self._make_request(url) self.assertEqual(res.status_int, 200) self.assertFlavorDisabled(self._get_flavor(res.body), fakes.FLAVORS['1'].disabled) def test_detail(self): url = self.base_url + '/detail' res = self._make_request(url) self.assertEqual(res.status_int, 200) flavors = self._get_flavors(res.body) self.assertFlavorDisabled(flavors[0], fakes.FLAVORS['1'].disabled) self.assertFlavorDisabled(flavors[1], fakes.FLAVORS['2'].disabled) nova-17.0.1/nova/tests/unit/api/openstack/compute/test_networks.py0000666000175000017500000007730213250073126025364 0ustar zuulzuul00000000000000# Copyright 2011 Grid Dynamics # Copyright 2011 OpenStack Foundation # All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy import datetime import math import iso8601 import mock import netaddr from oslo_config import cfg import webob from nova.api.openstack.compute import networks as networks_v21 from nova.api.openstack.compute import networks_associate \ as networks_associate_v21 import nova.context from nova import exception from nova.network import manager from nova.network.neutronv2 import api as neutron from nova import objects from nova import test from nova.tests.unit.api.openstack import fakes from nova.tests import uuidsentinel as uuids import nova.utils CONF = cfg.CONF FAKE_NETWORK_PROJECT_ID = '6133f8b603924f45bc0c9e21f6df12fa' UTC = iso8601.UTC FAKE_NETWORKS = [ { 'bridge': 'br100', 'vpn_public_port': 1000, 'dhcp_start': '10.0.0.3', 'bridge_interface': 'eth0', 'updated_at': datetime.datetime(2011, 8, 16, 9, 26, 13, 48257, tzinfo=UTC), 'id': 1, 'uuid': uuids.network_1, 'cidr_v6': None, 'deleted_at': None, 'gateway': '10.0.0.1', 'label': 'mynet_0', 'project_id': FAKE_NETWORK_PROJECT_ID, 'rxtx_base': None, 'vpn_private_address': '10.0.0.2', 'deleted': False, 'vlan': 100, 'broadcast': '10.0.0.7', 'netmask': '255.255.255.248', 'injected': False, 'cidr': '10.0.0.0/29', 'vpn_public_address': '127.0.0.1', 'multi_host': False, 'dns1': None, 'dns2': None, 'host': 'nsokolov-desktop', 'gateway_v6': None, 'netmask_v6': None, 'priority': None, 'created_at': datetime.datetime(2011, 8, 15, 6, 19, 19, 387525, tzinfo=UTC), 'mtu': None, 'dhcp_server': '10.0.0.1', 'enable_dhcp': True, 'share_address': False, }, { 'bridge': 'br101', 'vpn_public_port': 1001, 'dhcp_start': '10.0.0.11', 'bridge_interface': 'eth0', 'updated_at': None, 'id': 2, 'cidr_v6': None, 'uuid': uuids.network_2, 'deleted_at': None, 'gateway': '10.0.0.9', 'label': 'mynet_1', 'project_id': None, 'vpn_private_address': '10.0.0.10', 'deleted': False, 'vlan': 101, 'broadcast': '10.0.0.15', 'rxtx_base': None, 'netmask': '255.255.255.248', 'injected': False, 'cidr': '10.0.0.10/29', 'vpn_public_address': None, 'multi_host': False, 'dns1': None, 'dns2': None, 'host': None, 'gateway_v6': None, 'netmask_v6': None, 'priority': None, 'created_at': datetime.datetime(2011, 8, 15, 6, 19, 19, 885495, tzinfo=UTC), 'mtu': None, 'dhcp_server': '10.0.0.9', 'enable_dhcp': True, 'share_address': False, }, ] FAKE_USER_NETWORKS = [ { 'id': 1, 'cidr': '10.0.0.0/29', 'netmask': '255.255.255.248', 'gateway': '10.0.0.1', 'broadcast': '10.0.0.7', 'dns1': None, 'dns2': None, 'cidr_v6': None, 'gateway_v6': None, 'label': 'mynet_0', 'netmask_v6': None, 'uuid': uuids.network_1, }, { 'id': 2, 'cidr': '10.0.0.10/29', 'netmask': '255.255.255.248', 'gateway': '10.0.0.9', 'broadcast': '10.0.0.15', 'dns1': None, 'dns2': None, 'cidr_v6': None, 'gateway_v6': None, 'label': 'mynet_1', 'netmask_v6': None, 'uuid': uuids.network_2, }, ] NEW_NETWORK = { "network": { "bridge_interface": "eth0", "cidr": "10.20.105.0/24", "label": "new net 111", "vlan_start": 111, "multi_host": False, 'dhcp_server': '10.0.0.1', 'enable_dhcp': True, 'share_address': False, 
} } class FakeNetworkAPI(object): _sentinel = object() def __init__(self): self.networks = copy.deepcopy(FAKE_NETWORKS) def delete(self, context, network_id): if network_id == 'always_delete': return True if network_id == -1: raise exception.NetworkInUse(network_id=network_id) for i, network in enumerate(self.networks): if network['id'] == network_id: del self.networks[0] return True raise exception.NetworkNotFoundForUUID(uuid=network_id) def disassociate(self, context, network_uuid): for network in self.networks: if network.get('uuid') == network_uuid: network['project_id'] = None return True raise exception.NetworkNotFound(network_id=network_uuid) def associate(self, context, network_uuid, host=_sentinel, project=_sentinel): for network in self.networks: if network.get('uuid') == network_uuid: if host is not FakeNetworkAPI._sentinel: network['host'] = host if project is not FakeNetworkAPI._sentinel: network['project_id'] = project return True raise exception.NetworkNotFound(network_id=network_uuid) def add_network_to_project(self, context, project_id, network_uuid=None): if network_uuid: for network in self.networks: if network.get('project_id', None) is None: network['project_id'] = project_id return return for network in self.networks: if network.get('uuid') == network_uuid: network['project_id'] = project_id return def get_all(self, context): return self._fake_db_network_get_all(context, project_only=True) def _fake_db_network_get_all(self, context, project_only="allow_none"): project_id = context.project_id nets = self.networks if nova.context.is_user_context(context) and project_only: if project_only == 'allow_none': nets = [n for n in self.networks if (n['project_id'] == project_id or n['project_id'] is None)] else: nets = [n for n in self.networks if n['project_id'] == project_id] objs = [objects.Network._from_db_object(context, objects.Network(), net) for net in nets] return objects.NetworkList(objects=objs) def get(self, context, network_id): for network in self.networks: if network.get('uuid') == network_id: if 'injected' in network and network['injected'] is None: # NOTE: This is a workaround for passing unit tests. # When using nova-network, 'injected' value should be # boolean because of the definition of objects.Network(). # However, 'injected' value can be None if neutron. # So here changes the value to False just for passing # following _from_db_object(). 
network['injected'] = False return objects.Network._from_db_object(context, objects.Network(), network) raise exception.NetworkNotFound(network_id=network_id) def create(self, context, **kwargs): subnet_bits = int(math.ceil(math.log(kwargs.get( 'network_size', CONF.network_size), 2))) fixed_net_v4 = netaddr.IPNetwork(kwargs['cidr']) prefixlen_v4 = 32 - subnet_bits subnets_v4 = list(fixed_net_v4.subnet( prefixlen_v4, count=kwargs.get('num_networks', CONF.num_networks))) new_networks = [] new_id = max((net['id'] for net in self.networks)) for index, subnet_v4 in enumerate(subnets_v4): new_id += 1 net = {'id': new_id, 'uuid': uuids.fake} net['cidr'] = str(subnet_v4) net['netmask'] = str(subnet_v4.netmask) net['gateway'] = kwargs.get('gateway') or str(subnet_v4[1]) net['broadcast'] = str(subnet_v4.broadcast) net['dhcp_start'] = str(subnet_v4[2]) for key in FAKE_NETWORKS[0]: net.setdefault(key, kwargs.get(key)) new_networks.append(net) self.networks += new_networks return new_networks # NOTE(vish): tests that network create Exceptions actually return # the proper error responses class NetworkCreateExceptionsTestV21(test.TestCase): validation_error = exception.ValidationError class PassthroughAPI(object): def __init__(self): self.network_manager = manager.FlatDHCPManager() def create(self, *args, **kwargs): if kwargs['label'] == 'fail_NetworkNotCreated': raise exception.NetworkNotCreated(req='fake_fail') return self.network_manager.create_networks(*args, **kwargs) def setUp(self): super(NetworkCreateExceptionsTestV21, self).setUp() self._setup() fakes.stub_out_networking(self) self.new_network = copy.deepcopy(NEW_NETWORK) def _setup(self): self.req = fakes.HTTPRequest.blank('') self.controller = networks_v21.NetworkController(self.PassthroughAPI()) def test_network_create_bad_vlan(self): self.new_network['network']['vlan_start'] = 'foo' self.assertRaises(self.validation_error, self.controller.create, self.req, body=self.new_network) def test_network_create_no_cidr(self): del self.new_network['network']['cidr'] self.assertRaises(self.validation_error, self.controller.create, self.req, body=self.new_network) def test_network_create_no_label(self): del self.new_network['network']['label'] self.assertRaises(self.validation_error, self.controller.create, self.req, body=self.new_network) def test_network_create_label_too_long(self): self.new_network['network']['label'] = "x" * 256 self.assertRaises(self.validation_error, self.controller.create, self.req, body=self.new_network) def test_network_create_invalid_fixed_cidr(self): self.new_network['network']['fixed_cidr'] = 'foo' self.assertRaises(self.validation_error, self.controller.create, self.req, body=self.new_network) def test_network_create_invalid_start(self): self.new_network['network']['allowed_start'] = 'foo' self.assertRaises(self.validation_error, self.controller.create, self.req, body=self.new_network) def test_network_create_bad_cidr(self): self.new_network['network']['cidr'] = '128.0.0.0/900' self.assertRaises(self.validation_error, self.controller.create, self.req, body=self.new_network) def test_network_create_handle_network_not_created(self): self.new_network['network']['label'] = 'fail_NetworkNotCreated' self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, body=self.new_network) @mock.patch.object(objects.NetworkList, 'get_all') def test_network_create_cidr_conflict(self, mock_get_all): def fake_get_all(context): ret = objects.NetworkList(context=context, objects=[]) net = objects.Network(cidr='10.0.0.0/23') 
ret.objects.append(net) return ret mock_get_all.side_effect = fake_get_all self.new_network['network']['cidr'] = '10.0.0.0/24' self.assertRaises(webob.exc.HTTPConflict, self.controller.create, self.req, body=self.new_network) def test_network_create_vlan_conflict(self): @staticmethod def get_all(context): ret = objects.NetworkList(context=context, objects=[]) net = objects.Network(cidr='10.0.0.0/24', vlan=100) ret.objects.append(net) return ret def fake_create(context): raise exception.DuplicateVlan(vlan=100) self.stub_out('nova.objects.NetworkList.get_all', get_all) self.stub_out('nova.objects.Network.create', fake_create) self.new_network['network']['cidr'] = '20.0.0.0/24' self.assertRaises(webob.exc.HTTPConflict, self.controller.create, self.req, body=self.new_network) class NetworksTestV21(test.NoDBTestCase): validation_error = exception.ValidationError def setUp(self): super(NetworksTestV21, self).setUp() self.fake_network_api = FakeNetworkAPI() self._setup() fakes.stub_out_networking(self) self.new_network = copy.deepcopy(NEW_NETWORK) self.non_admin_req = fakes.HTTPRequest.blank( '', project_id=fakes.FAKE_PROJECT_ID) self.admin_req = fakes.HTTPRequest.blank('', project_id=fakes.FAKE_PROJECT_ID, use_admin_context=True) def _setup(self): self.controller = networks_v21.NetworkController( self.fake_network_api) self.neutron_ctrl = networks_v21.NetworkController( neutron.API()) self.req = fakes.HTTPRequest.blank('', project_id=fakes.FAKE_PROJECT_ID) def _check_status(self, res, method, code): self.assertEqual(method.wsgi_code, code) @staticmethod def network_uuid_to_id(network): network['id'] = network['uuid'] del network['uuid'] def test_network_list_all_as_user(self): self.maxDiff = None res_dict = self.controller.index(self.non_admin_req) self.assertEqual(res_dict, {'networks': []}) project_id = self.req.environ["nova.context"].project_id cxt = self.req.environ["nova.context"] uuid = FAKE_NETWORKS[0]['uuid'] self.fake_network_api.associate(context=cxt, network_uuid=uuid, project=project_id) res_dict = self.controller.index(self.non_admin_req) expected = [copy.deepcopy(FAKE_USER_NETWORKS[0])] for network in expected: self.network_uuid_to_id(network) self.assertEqual({'networks': expected}, res_dict) def test_network_list_all_as_admin(self): res_dict = self.controller.index(self.admin_req) expected = copy.deepcopy(FAKE_NETWORKS) for network in expected: self.network_uuid_to_id(network) self.assertEqual({'networks': expected}, res_dict) def test_network_disassociate(self): uuid = FAKE_NETWORKS[0]['uuid'] disassociate = self.controller._disassociate_host_and_project res = disassociate(self.req, uuid, {'disassociate': None}) self._check_status(res, disassociate, 202) self.assertIsNone(self.fake_network_api.networks[0]['project_id']) self.assertIsNone(self.fake_network_api.networks[0]['host']) def test_network_disassociate_not_found(self): self.assertRaises(webob.exc.HTTPNotFound, self.controller._disassociate_host_and_project, self.req, 100, {'disassociate': None}) def test_network_get_as_user(self): uuid = FAKE_USER_NETWORKS[0]['uuid'] res_dict = self.controller.show(self.non_admin_req, uuid) expected = {'network': copy.deepcopy(FAKE_USER_NETWORKS[0])} self.network_uuid_to_id(expected['network']) self.assertEqual(expected, res_dict) def test_network_get_as_admin(self): uuid = FAKE_NETWORKS[0]['uuid'] res_dict = self.controller.show(self.admin_req, uuid) expected = {'network': copy.deepcopy(FAKE_NETWORKS[0])} self.network_uuid_to_id(expected['network']) self.assertEqual(expected, res_dict) 
def test_network_get_not_found(self): self.assertRaises(webob.exc.HTTPNotFound, self.controller.show, self.req, 100) def test_network_delete(self): delete_method = self.controller.delete res = delete_method(self.req, 1) self._check_status(res, delete_method, 202) def test_network_delete_not_found(self): self.assertRaises(webob.exc.HTTPNotFound, self.controller.delete, self.req, 100) def test_network_delete_in_use(self): self.assertRaises(webob.exc.HTTPConflict, self.controller.delete, self.req, -1) def test_network_add(self): uuid = FAKE_NETWORKS[1]['uuid'] add = self.controller.add res = add(self.req, body={'id': uuid}) self._check_status(res, add, 202) res_dict = self.controller.show(self.admin_req, uuid) self.assertEqual(res_dict['network']['project_id'], fakes.FAKE_PROJECT_ID) @mock.patch('nova.tests.unit.api.openstack.compute.test_networks.' 'FakeNetworkAPI.add_network_to_project', side_effect=exception.NoMoreNetworks) def test_network_add_no_more_networks_fail(self, mock_add): uuid = FAKE_NETWORKS[1]['uuid'] self.assertRaises(webob.exc.HTTPBadRequest, self.controller.add, self.req, body={'id': uuid}) @mock.patch('nova.tests.unit.api.openstack.compute.test_networks.' 'FakeNetworkAPI.add_network_to_project', side_effect=exception. NetworkNotFoundForUUID(uuid=fakes.FAKE_PROJECT_ID)) def test_network_add_network_not_found_networks_fail(self, mock_add): uuid = FAKE_NETWORKS[1]['uuid'] self.assertRaises(webob.exc.HTTPBadRequest, self.controller.add, self.req, body={'id': uuid}) def test_network_add_network_without_body(self): self.assertRaises(self.validation_error, self.controller.add, self.req, body=None) def test_network_add_network_with_invalid_id(self): self.assertRaises(exception.ValidationError, self.controller.add, self.req, body={'id': 123}) def test_network_add_network_with_extra_arg(self): uuid = FAKE_NETWORKS[1]['uuid'] self.assertRaises(exception.ValidationError, self.controller.add, self.req, body={'id': uuid, 'extra_arg': 123}) def test_network_add_network_with_none_id(self): add = self.controller.add res = add(self.req, body={'id': None}) self._check_status(res, add, 202) def test_network_create(self): res_dict = self.controller.create(self.req, body=self.new_network) self.assertIn('network', res_dict) uuid = res_dict['network']['id'] res_dict = self.controller.show(self.req, uuid) self.assertTrue(res_dict['network']['label']. 
startswith(NEW_NETWORK['network']['label'])) def test_network_create_large(self): self.new_network['network']['cidr'] = '128.0.0.0/4' res_dict = self.controller.create(self.req, body=self.new_network) self.assertEqual(res_dict['network']['cidr'], self.new_network['network']['cidr']) def test_network_neutron_disassociate_not_implemented(self): uuid = FAKE_NETWORKS[1]['uuid'] self.assertRaises(webob.exc.HTTPNotImplemented, self.neutron_ctrl._disassociate_host_and_project, self.req, uuid, {'disassociate': None}) class NetworksAssociateTestV21(test.NoDBTestCase): def setUp(self): super(NetworksAssociateTestV21, self).setUp() self.fake_network_api = FakeNetworkAPI() self._setup() fakes.stub_out_networking(self) self.admin_req = fakes.HTTPRequest.blank('', use_admin_context=True) def _setup(self): self.controller = networks_v21.NetworkController(self.fake_network_api) self.associate_controller = networks_associate_v21\ .NetworkAssociateActionController(self.fake_network_api) self.neutron_assoc_ctrl = ( networks_associate_v21.NetworkAssociateActionController( neutron.API())) self.req = fakes.HTTPRequest.blank('') def _check_status(self, res, method, code): self.assertEqual(method.wsgi_code, code) def test_network_disassociate_host_only(self): uuid = FAKE_NETWORKS[0]['uuid'] disassociate = self.associate_controller._disassociate_host_only res = disassociate( self.req, uuid, {'disassociate_host': None}) self._check_status(res, disassociate, 202) self.assertIsNotNone(self.fake_network_api.networks[0]['project_id']) self.assertIsNone(self.fake_network_api.networks[0]['host']) def test_network_disassociate_project_only(self): uuid = FAKE_NETWORKS[0]['uuid'] disassociate = self.associate_controller._disassociate_project_only res = disassociate(self.req, uuid, {'disassociate_project': None}) self._check_status(res, disassociate, 202) self.assertIsNone(self.fake_network_api.networks[0]['project_id']) self.assertIsNotNone(self.fake_network_api.networks[0]['host']) def test_network_disassociate_project_network_delete(self): uuid = FAKE_NETWORKS[1]['uuid'] disassociate = self.associate_controller._disassociate_project_only res = disassociate( self.req, uuid, {'disassociate_project': None}) self._check_status(res, disassociate, 202) self.assertIsNone(self.fake_network_api.networks[1]['project_id']) delete = self.controller.delete res = delete(self.req, 1) # NOTE: On v2.1 code, delete method doesn't return anything and # the status code is decorated on wsgi_code of the method. 
self.assertIsNone(res) self.assertEqual(202, delete.wsgi_code) def test_network_associate_project_delete_fail(self): uuid = FAKE_NETWORKS[0]['uuid'] req = fakes.HTTPRequest.blank('/v2/1234/os-networks/%s/action' % uuid) self.assertRaises(webob.exc.HTTPConflict, self.controller.delete, req, -1) def test_network_associate_with_host(self): uuid = FAKE_NETWORKS[1]['uuid'] associate = self.associate_controller._associate_host res = associate(self.req, uuid, body={'associate_host': "TestHost"}) self._check_status(res, associate, 202) res_dict = self.controller.show(self.admin_req, uuid) self.assertEqual(res_dict['network']['host'], 'TestHost') def test_network_neutron_associate_not_implemented(self): uuid = FAKE_NETWORKS[1]['uuid'] self.assertRaises(webob.exc.HTTPNotImplemented, self.neutron_assoc_ctrl._associate_host, self.req, uuid, body={'associate_host': "TestHost"}) def _test_network_neutron_associate_host_validation_failed(self, body): uuid = FAKE_NETWORKS[1]['uuid'] self.assertRaises(exception.ValidationError, self.associate_controller._associate_host, self.req, uuid, body=body) def test_network_neutron_associate_host_non_string(self): self._test_network_neutron_associate_host_validation_failed( {'associate_host': 123}) def test_network_neutron_associate_host_empty_body(self): self._test_network_neutron_associate_host_validation_failed({}) def test_network_neutron_associate_bad_associate_host_key(self): self._test_network_neutron_associate_host_validation_failed( {'badassociate_host': "TestHost"}) def test_network_neutron_associate_host_extra_arg(self): self._test_network_neutron_associate_host_validation_failed( {'associate_host': "TestHost", 'extra_arg': "extra_arg"}) def test_network_neutron_disassociate_project_not_implemented(self): uuid = FAKE_NETWORKS[1]['uuid'] self.assertRaises(webob.exc.HTTPNotImplemented, self.neutron_assoc_ctrl._disassociate_project_only, self.req, uuid, {'disassociate_project': None}) def test_network_neutron_disassociate_host_not_implemented(self): uuid = FAKE_NETWORKS[1]['uuid'] self.assertRaises(webob.exc.HTTPNotImplemented, self.neutron_assoc_ctrl._disassociate_host_only, self.req, uuid, {'disassociate_host': None}) class NetworksEnforcementV21(test.NoDBTestCase): def setUp(self): super(NetworksEnforcementV21, self).setUp() self.controller = networks_v21.NetworkController() self.req = fakes.HTTPRequest.blank('') def test_show_policy_failed(self): rule_name = 'os_compute_api:os-networks:view' self.policy.set_rules({rule_name: "project:non_fake"}) exc = self.assertRaises( exception.PolicyNotAuthorized, self.controller.show, self.req, fakes.FAKE_UUID) self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) def test_index_policy_failed(self): rule_name = 'os_compute_api:os-networks:view' self.policy.set_rules({rule_name: "project:non_fake"}) exc = self.assertRaises( exception.PolicyNotAuthorized, self.controller.index, self.req) self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) def test_create_policy_failed(self): rule_name = 'os_compute_api:os-networks' self.policy.set_rules({rule_name: "project:non_fake"}) exc = self.assertRaises( exception.PolicyNotAuthorized, self.controller.create, self.req, body=NEW_NETWORK) self.assertEqual( "Policy doesn't allow %s to be performed." 
% rule_name, exc.format_message()) def test_delete_policy_failed(self): rule_name = 'os_compute_api:os-networks' self.policy.set_rules({rule_name: "project:non_fake"}) exc = self.assertRaises( exception.PolicyNotAuthorized, self.controller.delete, self.req, fakes.FAKE_UUID) self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) def test_add_policy_failed(self): rule_name = 'os_compute_api:os-networks' self.policy.set_rules({rule_name: "project:non_fake"}) exc = self.assertRaises( exception.PolicyNotAuthorized, self.controller.add, self.req, body={'id': fakes.FAKE_UUID}) self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) def test_disassociate_policy_failed(self): rule_name = 'os_compute_api:os-networks' self.policy.set_rules({rule_name: "project:non_fake"}) exc = self.assertRaises( exception.PolicyNotAuthorized, self.controller._disassociate_host_and_project, self.req, fakes.FAKE_UUID, body={'network': {}}) self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) class NetworksAssociateEnforcementV21(test.NoDBTestCase): def setUp(self): super(NetworksAssociateEnforcementV21, self).setUp() self.controller = (networks_associate_v21. NetworkAssociateActionController()) self.req = fakes.HTTPRequest.blank('') def test_disassociate_host_policy_failed(self): rule_name = 'os_compute_api:os-networks-associate' self.policy.set_rules({rule_name: "project:non_fake"}) exc = self.assertRaises( exception.PolicyNotAuthorized, self.controller._disassociate_host_only, self.req, fakes.FAKE_UUID, body={'disassociate_host': {}}) self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) def test_disassociate_project_only_policy_failed(self): rule_name = 'os_compute_api:os-networks-associate' self.policy.set_rules({rule_name: "project:non_fake"}) exc = self.assertRaises( exception.PolicyNotAuthorized, self.controller._disassociate_project_only, self.req, fakes.FAKE_UUID, body={'disassociate_project': {}}) self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) def test_disassociate_host_only_policy_failed(self): rule_name = 'os_compute_api:os-networks-associate' self.policy.set_rules({rule_name: "project:non_fake"}) exc = self.assertRaises( exception.PolicyNotAuthorized, self.controller._associate_host, self.req, fakes.FAKE_UUID, body={'associate_host': 'fake_host'}) self.assertEqual( "Policy doesn't allow %s to be performed." 
% rule_name, exc.format_message()) class NetworksDeprecationTest(test.NoDBTestCase): def setUp(self): super(NetworksDeprecationTest, self).setUp() self.controller = networks_v21.NetworkController() self.req = fakes.HTTPRequest.blank('', version='2.36') def test_all_api_return_not_found(self): self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.show, self.req, fakes.FAKE_UUID) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.delete, self.req, fakes.FAKE_UUID) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.index, self.req) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller._disassociate_host_and_project, self.req, {}) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.add, self.req, {}) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.create, self.req, {}) class NetworksAssociateDeprecationTest(test.NoDBTestCase): def setUp(self): super(NetworksAssociateDeprecationTest, self).setUp() self.controller = networks_associate_v21\ .NetworkAssociateActionController() self.req = fakes.HTTPRequest.blank('', version='2.36') def test_all_api_return_not_found(self): self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller._associate_host, self.req, fakes.FAKE_UUID, {}) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller._disassociate_project_only, self.req, fakes.FAKE_UUID, {}) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller._disassociate_host_only, self.req, fakes.FAKE_UUID, {}) nova-17.0.1/nova/tests/unit/api/openstack/compute/test_instance_actions.py0000666000175000017500000003051313250073126027025 0ustar zuulzuul00000000000000# Copyright 2013 Rackspace Hosting # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import copy import mock from oslo_policy import policy as oslo_policy import six from webob import exc from nova.api.openstack.compute import instance_actions as instance_actions_v21 from nova.api.openstack import wsgi as os_wsgi from nova.compute import api as compute_api from nova.db.sqlalchemy import models from nova import exception from nova import objects from nova import policy from nova import test from nova.tests.unit.api.openstack import fakes from nova.tests.unit import fake_server_actions from nova.tests import uuidsentinel as uuids FAKE_UUID = fake_server_actions.FAKE_UUID FAKE_REQUEST_ID = fake_server_actions.FAKE_REQUEST_ID1 FAKE_EVENT_ID = fake_server_actions.FAKE_ACTION_ID1 FAKE_REQUEST_NOTFOUND_ID = 'req-' + uuids.req_not_found def format_action(action, expect_traceback=True): '''Remove keys that aren't serialized.''' to_delete = ('id', 'finish_time', 'created_at', 'updated_at', 'deleted_at', 'deleted') for key in to_delete: if key in action: del(action[key]) if 'start_time' in action: # NOTE(danms): Without WSGI above us, these will be just stringified action['start_time'] = str(action['start_time'].replace(tzinfo=None)) for event in action.get('events', []): format_event(event, expect_traceback) return action def format_event(event, expect_traceback=True): '''Remove keys that aren't serialized.''' to_delete = ['id', 'created_at', 'updated_at', 'deleted_at', 'deleted', 'action_id'] if not expect_traceback: to_delete.append('traceback') for key in to_delete: if key in event: del(event[key]) if 'start_time' in event: # NOTE(danms): Without WSGI above us, these will be just stringified event['start_time'] = str(event['start_time'].replace(tzinfo=None)) if 'finish_time' in event: # NOTE(danms): Without WSGI above us, these will be just stringified event['finish_time'] = str(event['finish_time'].replace(tzinfo=None)) return event class InstanceActionsPolicyTestV21(test.NoDBTestCase): instance_actions = instance_actions_v21 def setUp(self): super(InstanceActionsPolicyTestV21, self).setUp() self.controller = self.instance_actions.InstanceActionsController() def _get_http_req(self, action): fake_url = '/123/servers/12/%s' % action return fakes.HTTPRequest.blank(fake_url) def _get_instance_other_project(self, req): context = req.environ['nova.context'] project_id = '%s_unequal' % context.project_id return objects.Instance(project_id=project_id) def _set_policy_rules(self): rules = {'compute:get': '', 'os_compute_api:os-instance-actions': 'project_id:%(project_id)s'} policy.set_rules(oslo_policy.Rules.from_dict(rules)) @mock.patch('nova.api.openstack.common.get_instance') def test_list_actions_restricted_by_project(self, mock_instance_get): self._set_policy_rules() req = self._get_http_req('os-instance-actions') mock_instance_get.return_value = self._get_instance_other_project(req) self.assertRaises(exception.Forbidden, self.controller.index, req, uuids.fake) @mock.patch('nova.api.openstack.common.get_instance') def test_get_action_restricted_by_project(self, mock_instance_get): self._set_policy_rules() req = self._get_http_req('os-instance-actions/1') mock_instance_get.return_value = self._get_instance_other_project(req) self.assertRaises(exception.Forbidden, self.controller.show, req, uuids.fake, '1') class InstanceActionsTestV21(test.NoDBTestCase): instance_actions = instance_actions_v21 wsgi_api_version = os_wsgi.DEFAULT_API_VERSION expect_events_non_admin = False def fake_get(self, context, instance_uuid, expected_attrs=None): return objects.Instance(uuid=instance_uuid) def 
setUp(self): super(InstanceActionsTestV21, self).setUp() self.controller = self.instance_actions.InstanceActionsController() self.fake_actions = copy.deepcopy(fake_server_actions.FAKE_ACTIONS) self.fake_events = copy.deepcopy(fake_server_actions.FAKE_EVENTS) self.stubs.Set(compute_api.API, 'get', self.fake_get) def _get_http_req(self, action, use_admin_context=False): fake_url = '/123/servers/12/%s' % action return fakes.HTTPRequest.blank(fake_url, use_admin_context=use_admin_context, version=self.wsgi_api_version) def _set_policy_rules(self): rules = {'compute:get': '', 'os_compute_api:os-instance-actions': '', 'os_compute_api:os-instance-actions:events': 'is_admin:True'} policy.set_rules(oslo_policy.Rules.from_dict(rules)) def test_list_actions(self): def fake_get_actions(context, uuid, limit=None, marker=None, filters=None): actions = [] for act in six.itervalues(self.fake_actions[uuid]): action = models.InstanceAction() action.update(act) actions.append(action) return actions self.stub_out('nova.db.actions_get', fake_get_actions) req = self._get_http_req('os-instance-actions') res_dict = self.controller.index(req, FAKE_UUID) for res in res_dict['instanceActions']: fake_action = self.fake_actions[FAKE_UUID][res['request_id']] self.assertEqual(format_action(fake_action), format_action(res)) def test_get_action_with_events_allowed(self): def fake_get_action(context, uuid, request_id): action = models.InstanceAction() action.update(self.fake_actions[uuid][request_id]) return action def fake_get_events(context, action_id): events = [] for evt in self.fake_events[action_id]: event = models.InstanceActionEvent() event.update(evt) events.append(event) return events self.stub_out('nova.db.action_get_by_request_id', fake_get_action) self.stub_out('nova.db.action_events_get', fake_get_events) req = self._get_http_req('os-instance-actions/1', use_admin_context=True) res_dict = self.controller.show(req, FAKE_UUID, FAKE_REQUEST_ID) fake_action = self.fake_actions[FAKE_UUID][FAKE_REQUEST_ID] fake_events = self.fake_events[fake_action['id']] fake_action['events'] = fake_events self.assertEqual(format_action(fake_action), format_action(res_dict['instanceAction'])) def test_get_action_with_events_not_allowed(self): def fake_get_action(context, uuid, request_id): return self.fake_actions[uuid][request_id] def fake_get_events(context, action_id): return self.fake_events[action_id] self.stub_out('nova.db.action_get_by_request_id', fake_get_action) self.stub_out('nova.db.action_events_get', fake_get_events) self._set_policy_rules() req = self._get_http_req('os-instance-actions/1') res_dict = self.controller.show(req, FAKE_UUID, FAKE_REQUEST_ID) fake_action = self.fake_actions[FAKE_UUID][FAKE_REQUEST_ID] if self.expect_events_non_admin: fake_event = fake_server_actions.FAKE_EVENTS[FAKE_EVENT_ID] fake_action['events'] = copy.deepcopy(fake_event) # By default, non-admins are not allowed to see traceback details. 
        self.assertEqual(format_action(fake_action, expect_traceback=False),
                         format_action(res_dict['instanceAction'],
                                       expect_traceback=False))

    def test_action_not_found(self):
        def fake_no_action(context, uuid, action_id):
            return None

        self.stub_out('nova.db.action_get_by_request_id', fake_no_action)
        req = self._get_http_req('os-instance-actions/1')
        self.assertRaises(exc.HTTPNotFound, self.controller.show, req,
                          FAKE_UUID, FAKE_REQUEST_ID)

    def test_index_instance_not_found(self):
        def fake_get(self, context, instance_uuid, expected_attrs=None):
            raise exception.InstanceNotFound(instance_id=instance_uuid)
        self.stubs.Set(compute_api.API, 'get', fake_get)
        req = self._get_http_req('os-instance-actions')
        self.assertRaises(exc.HTTPNotFound, self.controller.index, req,
                          FAKE_UUID)

    def test_show_instance_not_found(self):
        def fake_get(self, context, instance_uuid, expected_attrs=None):
            raise exception.InstanceNotFound(instance_id=instance_uuid)
        self.stubs.Set(compute_api.API, 'get', fake_get)
        req = self._get_http_req('os-instance-actions/fake')
        self.assertRaises(exc.HTTPNotFound, self.controller.show, req,
                          FAKE_UUID, 'fake')


class InstanceActionsTestV221(InstanceActionsTestV21):
    wsgi_api_version = "2.21"

    def fake_get(self, context, instance_uuid, expected_attrs=None):
        self.assertEqual('yes', context.read_deleted)
        return objects.Instance(uuid=instance_uuid)


class InstanceActionsTestV251(InstanceActionsTestV221):
    wsgi_api_version = "2.51"
    expect_events_non_admin = True


class InstanceActionsTestV258(InstanceActionsTestV251):
    wsgi_api_version = "2.58"

    @mock.patch('nova.objects.InstanceActionList.get_by_instance_uuid')
    def test_get_action_with_invalid_marker(self, mock_actions_get):
        """Tests detail paging with an invalid marker (not found)."""
        mock_actions_get.side_effect = exception.MarkerNotFound(
            marker=FAKE_REQUEST_NOTFOUND_ID)
        req = self._get_http_req('os-instance-actions?'
                                 'marker=%s' % FAKE_REQUEST_NOTFOUND_ID)
        self.assertRaises(exc.HTTPBadRequest,
                          self.controller.index, req, FAKE_UUID)

    def test_get_action_with_invalid_limit(self):
        """Tests get paging with an invalid limit."""
        req = self._get_http_req('os-instance-actions?limit=x')
        self.assertRaises(exception.ValidationError,
                          self.controller.index, req)
        req = self._get_http_req('os-instance-actions?limit=-1')
        self.assertRaises(exception.ValidationError,
                          self.controller.index, req)

    def test_get_action_with_invalid_change_since(self):
        """Tests get paging with an invalid changes-since value."""
        req = self._get_http_req('os-instance-actions?'
                                 'changes-since=wrong_time')
        ex = self.assertRaises(exception.ValidationError,
                               self.controller.index, req)
        self.assertIn('Invalid input for query parameters changes-since',
                      six.text_type(ex))

    def test_get_action_with_invalid_params(self):
        """Tests get paging with an unknown query parameter."""
        req = self._get_http_req('os-instance-actions?'
                                 'wrong_params=xxx')
        ex = self.assertRaises(exception.ValidationError,
                               self.controller.index, req)
        self.assertIn('Additional properties are not allowed',
                      six.text_type(ex))

    def test_get_action_with_multi_params(self):
        """Tests get paging with multiple markers."""
        req = self._get_http_req('os-instance-actions?marker=A&marker=B')
        ex = self.assertRaises(exception.ValidationError,
                               self.controller.index, req)
        self.assertIn('Invalid input for query parameters marker',
                      six.text_type(ex))
nova-17.0.1/nova/tests/unit/api/openstack/compute/test_urlmap.py0000666000175000017500000001172513250073126025005 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from oslo_serialization import jsonutils

from nova import test
from nova.tests.unit.api.openstack import fakes
import nova.tests.unit.image.fake


class UrlmapTest(test.NoDBTestCase):
    def setUp(self):
        super(UrlmapTest, self).setUp()
        nova.tests.unit.image.fake.stub_out_image_service(self)

    def tearDown(self):
        super(UrlmapTest, self).tearDown()
        nova.tests.unit.image.fake.FakeImageService_reset()

    def test_path_version_v2(self):
        # Test URL path specifying v2 returns v2 content.
        req = fakes.HTTPRequest.blank('/v2/')
        req.accept = "application/json"
        res = req.get_response(fakes.wsgi_app_v21(v2_compatible=True))
        self.assertEqual(200, res.status_int)
        self.assertEqual("application/json", res.content_type)
        body = jsonutils.loads(res.body)
        self.assertEqual('v2.0', body['version']['id'])

    def test_content_type_version_v2(self):
        # Test Content-Type specifying v2 returns v2 content.
        req = fakes.HTTPRequest.blank('/')
        req.content_type = "application/json;version=2"
        req.accept = "application/json"
        res = req.get_response(fakes.wsgi_app_v21(v2_compatible=True))
        self.assertEqual(200, res.status_int)
        self.assertEqual("application/json", res.content_type)
        body = jsonutils.loads(res.body)
        self.assertEqual('v2.0', body['version']['id'])

    def test_accept_version_v2(self):
        # Test Accept header specifying v2 returns v2 content.
        req = fakes.HTTPRequest.blank('/')
        req.accept = "application/json;version=2"
        res = req.get_response(fakes.wsgi_app_v21(v2_compatible=True))
        self.assertEqual(200, res.status_int)
        self.assertEqual("application/json", res.content_type)
        body = jsonutils.loads(res.body)
        self.assertEqual('v2.0', body['version']['id'])

    def test_accept_content_type(self):
        # Test Accept header specifying JSON returns JSON content.
        url = '/v2/fake/images/cedef40a-ed67-4d10-800e-17455edce175'
        req = fakes.HTTPRequest.blank(url)
        req.accept = "application/xml;q=0.8, application/json"
        res = req.get_response(fakes.wsgi_app_v21())
        self.assertEqual(200, res.status_int)
        self.assertEqual("application/json", res.content_type)
        body = jsonutils.loads(res.body)
        self.assertEqual('cedef40a-ed67-4d10-800e-17455edce175',
                         body['image']['id'])

    def test_path_version_v21(self):
        # Test URL path specifying v2.1 returns v2.1 content.
        req = fakes.HTTPRequest.blank('/v2.1/')
        req.accept = "application/json"
        res = req.get_response(fakes.wsgi_app_v21())
        self.assertEqual(200, res.status_int)
        self.assertEqual("application/json", res.content_type)
        body = jsonutils.loads(res.body)
        self.assertEqual('v2.1', body['version']['id'])

    def test_content_type_version_v21(self):
        # Test Content-Type specifying v2.1 returns v2.1 content.
req = fakes.HTTPRequest.blank('/') req.content_type = "application/json;version=2.1" req.accept = "application/json" res = req.get_response(fakes.wsgi_app_v21()) self.assertEqual(200, res.status_int) self.assertEqual("application/json", res.content_type) body = jsonutils.loads(res.body) self.assertEqual('v2.1', body['version']['id']) def test_accept_version_v21(self): # Test Accept header specifying v2.1 returns v2.1 content. req = fakes.HTTPRequest.blank('/') req.accept = "application/json;version=2.1" res = req.get_response(fakes.wsgi_app_v21()) self.assertEqual(200, res.status_int) self.assertEqual("application/json", res.content_type) body = jsonutils.loads(res.body) self.assertEqual('v2.1', body['version']['id']) def test_accept_content_type_v21(self): # Test Accept header specifying JSON returns JSON content. req = fakes.HTTPRequest.blank('/') req.content_type = "application/json;version=2.1" req.accept = "application/xml;q=0.8, application/json" res = req.get_response(fakes.wsgi_app_v21()) self.assertEqual(200, res.status_int) self.assertEqual("application/json", res.content_type) body = jsonutils.loads(res.body) self.assertEqual('v2.1', body['version']['id']) nova-17.0.1/nova/tests/unit/api/openstack/compute/test_server_tags.py0000666000175000017500000003565413250073126026040 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
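# NOTE(editor): Illustrative sketch only -- not nova code (Python 3
# assumed). The tests below exercise the server-tag validation rules: a
# tag must be a non-empty string no longer than tag_obj.MAX_TAG_LENGTH,
# must not contain ',' or '/', and an instance may carry at most
# instance.MAX_TAG_COUNT tags. The default of 60 below is an assumption,
# not imported from nova.


def illustrative_tag_is_valid(tag, max_length=60):
    """Return True if ``tag`` satisfies the constraints asserted below."""
    return (isinstance(tag, str)
            and 0 < len(tag) <= max_length
            and ',' not in tag
            and '/' not in tag)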
import mock from webob import exc from nova.api.openstack.compute import server_tags from nova.api.openstack.compute import servers from nova.compute import vm_states from nova import context from nova.db.sqlalchemy import models from nova import exception from nova import objects from nova.objects import instance from nova.objects import tag as tag_obj from nova import test from nova.tests.unit.api.openstack import fakes from nova.tests.unit import fake_instance UUID = 'b48316c5-71e8-45e4-9884-6c78055b9b13' TAG1 = 'tag1' TAG2 = 'tag2' TAG3 = 'tag3' TAGS = [TAG1, TAG2, TAG3] NON_EXISTING_UUID = '123' def return_server(compute_api, context, instance_id, expected_attrs=None): return fake_instance.fake_instance_obj(context, vm_state=vm_states.ACTIVE) def return_invalid_server(compute_api, context, instance_id, expected_attrs=None): return fake_instance.fake_instance_obj(context, vm_state=vm_states.BUILDING) class ServerTagsTest(test.TestCase): api_version = '2.26' def setUp(self): super(ServerTagsTest, self).setUp() self.controller = server_tags.ServerTagsController() inst_map = objects.InstanceMapping( cell_mapping=objects.CellMappingList.get_all( context.get_admin_context())[1]) self.stub_out('nova.objects.InstanceMapping.get_by_instance_uuid', lambda s, c, u: inst_map) def _get_tag(self, tag_name): tag = models.Tag() tag.tag = tag_name tag.resource_id = UUID return tag def _get_request(self, url, method): request = fakes.HTTPRequest.blank(url, version=self.api_version) request.method = method return request @mock.patch('nova.db.instance_tag_exists') def test_show(self, mock_exists): mock_exists.return_value = True req = self._get_request( '/v2/fake/servers/%s/tags/%s' % (UUID, TAG1), 'GET') self.controller.show(req, UUID, TAG1) mock_exists.assert_called_once_with(mock.ANY, UUID, TAG1) @mock.patch('nova.db.instance_tag_get_by_instance_uuid') def test_index(self, mock_db_get_inst_tags): fake_tags = [self._get_tag(tag) for tag in TAGS] mock_db_get_inst_tags.return_value = fake_tags req = self._get_request('/v2/fake/servers/%s/tags' % UUID, 'GET') res = self.controller.index(req, UUID) self.assertEqual(TAGS, res.get('tags')) mock_db_get_inst_tags.assert_called_once_with(mock.ANY, UUID) @mock.patch('nova.notifications.base.send_instance_update_notification') @mock.patch('nova.db.instance_tag_set') def test_update_all(self, mock_db_set_inst_tags, mock_notify): self.stub_out('nova.api.openstack.common.get_instance', return_server) fake_tags = [self._get_tag(tag) for tag in TAGS] mock_db_set_inst_tags.return_value = fake_tags req = self._get_request( '/v2/fake/servers/%s/tags' % UUID, 'PUT') res = self.controller.update_all(req, UUID, body={'tags': TAGS}) self.assertEqual(TAGS, res['tags']) mock_db_set_inst_tags.assert_called_once_with(mock.ANY, UUID, TAGS) self.assertEqual(1, mock_notify.call_count) def test_update_all_too_many_tags(self): self.stub_out('nova.api.openstack.common.get_instance', return_server) fake_tags = {'tags': [str(i) for i in range( instance.MAX_TAG_COUNT + 1)]} req = self._get_request( '/v2/fake/servers/%s/tags' % UUID, 'PUT') self.assertRaises(exception.ValidationError, self.controller.update_all, req, UUID, body=fake_tags) def test_update_all_forbidden_characters(self): self.stub_out('nova.api.openstack.common.get_instance', return_server) req = self._get_request('/v2/fake/servers/%s/tags' % UUID, 'PUT') for tag in ['tag,1', 'tag/1']: self.assertRaises(exception.ValidationError, self.controller.update_all, req, UUID, body={'tags': [tag, 'tag2']}) def 
test_update_all_invalid_tag_type(self):
        req = self._get_request('/v2/fake/servers/%s/tags' % UUID, 'PUT')
        self.assertRaises(exception.ValidationError,
                          self.controller.update_all, req, UUID,
                          body={'tags': [1]})

    def test_update_all_tags_with_one_tag_empty_string(self):
        req = self._get_request('/v2/fake/servers/%s/tags' % UUID, 'PUT')
        self.assertRaises(exception.ValidationError,
                          self.controller.update_all, req, UUID,
                          body={'tags': ['tag1', '']})

    def test_update_all_too_long_tag(self):
        self.stub_out('nova.api.openstack.common.get_instance',
                      return_server)
        req = self._get_request('/v2/fake/servers/%s/tags' % UUID, 'PUT')
        tag = "a" * (tag_obj.MAX_TAG_LENGTH + 1)
        self.assertRaises(exception.ValidationError,
                          self.controller.update_all, req, UUID,
                          body={'tags': [tag]})

    def test_update_all_invalid_tag_list_type(self):
        req = self._get_request('/v2/fake/servers/%s/tags' % UUID, 'PUT')
        self.assertRaises(exception.ValidationError,
                          self.controller.update_all, req, UUID,
                          body={'tags': {'tag': 'tag'}})

    def test_update_all_invalid_instance_state(self):
        self.stub_out('nova.api.openstack.common.get_instance',
                      return_invalid_server)
        req = self._get_request('/v2/fake/servers/%s/tags' % UUID, 'PUT')
        self.assertRaises(exc.HTTPConflict, self.controller.update_all,
                          req, UUID, body={'tags': TAGS})

    @mock.patch('nova.db.instance_tag_exists')
    def test_show_non_existing_tag(self, mock_exists):
        mock_exists.return_value = False
        req = self._get_request(
            '/v2/fake/servers/%s/tags/%s' % (UUID, TAG1), 'GET')
        self.assertRaises(exc.HTTPNotFound, self.controller.show, req,
                          UUID, TAG1)

    @mock.patch('nova.notifications.base.send_instance_update_notification')
    @mock.patch('nova.db.instance_tag_add')
    @mock.patch('nova.db.instance_tag_get_by_instance_uuid')
    def test_update(self, mock_db_get_inst_tags, mock_db_add_inst_tags,
                    mock_notify):
        self.stub_out('nova.api.openstack.common.get_instance',
                      return_server)
        mock_db_get_inst_tags.return_value = [self._get_tag(TAG1)]
        mock_db_add_inst_tags.return_value = self._get_tag(TAG2)
        url = '/v2/fake/servers/%s/tags/%s' % (UUID, TAG2)
        location = 'http://localhost' + url
        req = self._get_request(url, 'PUT')
        res = self.controller.update(req, UUID, TAG2, body=None)
        self.assertEqual(201, res.status_int)
        self.assertEqual(0, len(res.body))
        self.assertEqual(location, res.headers['Location'])
        mock_db_add_inst_tags.assert_called_once_with(mock.ANY, UUID, TAG2)
        self.assertEqual(2, mock_db_get_inst_tags.call_count)
        self.assertEqual(1, mock_notify.call_count)

    @mock.patch('nova.db.instance_tag_get_by_instance_uuid')
    def test_update_existing_tag(self, mock_db_get_inst_tags):
        self.stub_out('nova.api.openstack.common.get_instance',
                      return_server)
        mock_db_get_inst_tags.return_value = [self._get_tag(TAG1)]
        req = self._get_request(
            '/v2/fake/servers/%s/tags/%s' % (UUID, TAG1), 'PUT')
        res = self.controller.update(req, UUID, TAG1, body=None)
        self.assertEqual(204, res.status_int)
        self.assertEqual(0, len(res.body))
        mock_db_get_inst_tags.assert_called_once_with(mock.ANY, UUID)

    @mock.patch('nova.db.instance_tag_get_by_instance_uuid')
    def test_update_tag_limit_exceed(self, mock_db_get_inst_tags):
        self.stub_out('nova.api.openstack.common.get_instance',
                      return_server)
        fake_tags = [self._get_tag(str(i))
                     for i in range(instance.MAX_TAG_COUNT)]
        mock_db_get_inst_tags.return_value = fake_tags
        req = self._get_request(
            '/v2/fake/servers/%s/tags/%s' % (UUID, TAG2), 'PUT')
        self.assertRaises(exc.HTTPBadRequest, self.controller.update,
                          req, UUID, TAG2, body=None)

    @mock.patch('nova.db.instance_tag_get_by_instance_uuid')
    def
test_update_too_long_tag(self, mock_db_get_inst_tags): self.stub_out('nova.api.openstack.common.get_instance', return_server) mock_db_get_inst_tags.return_value = [] tag = "a" * (tag_obj.MAX_TAG_LENGTH + 1) req = self._get_request( '/v2/fake/servers/%s/tags/%s' % (UUID, tag), 'PUT') self.assertRaises(exc.HTTPBadRequest, self.controller.update, req, UUID, tag, body=None) @mock.patch('nova.db.instance_tag_get_by_instance_uuid') def test_update_forbidden_characters(self, mock_db_get_inst_tags): self.stub_out('nova.api.openstack.common.get_instance', return_server) mock_db_get_inst_tags.return_value = [] for tag in ['tag,1', 'tag/1']: req = self._get_request( '/v2/fake/servers/%s/tags/%s' % (UUID, tag), 'PUT') self.assertRaises(exc.HTTPBadRequest, self.controller.update, req, UUID, tag, body=None) def test_update_invalid_instance_state(self): self.stub_out('nova.api.openstack.common.get_instance', return_invalid_server) req = self._get_request( '/v2/fake/servers/%s/tags/%s' % (UUID, TAG1), 'PUT') self.assertRaises(exc.HTTPConflict, self.controller.update, req, UUID, TAG1, body=None) @mock.patch('nova.db.instance_tag_get_by_instance_uuid') @mock.patch('nova.notifications.base.send_instance_update_notification') @mock.patch('nova.db.instance_tag_delete') def test_delete(self, mock_db_delete_inst_tags, mock_notify, mock_db_get_inst_tags): self.stub_out('nova.api.openstack.common.get_instance', return_server) req = self._get_request( '/v2/fake/servers/%s/tags/%s' % (UUID, TAG2), 'DELETE') self.controller.delete(req, UUID, TAG2) mock_db_delete_inst_tags.assert_called_once_with(mock.ANY, UUID, TAG2) mock_db_get_inst_tags.assert_called_once_with(mock.ANY, UUID) self.assertEqual(1, mock_notify.call_count) @mock.patch('nova.db.instance_tag_delete') def test_delete_non_existing_tag(self, mock_db_delete_inst_tags): self.stub_out('nova.api.openstack.common.get_instance', return_server) def fake_db_delete_tag(context, instance_uuid, tag): self.assertEqual(UUID, instance_uuid) self.assertEqual(TAG1, tag) raise exception.InstanceTagNotFound(instance_id=instance_uuid, tag=tag) mock_db_delete_inst_tags.side_effect = fake_db_delete_tag req = self._get_request( '/v2/fake/servers/%s/tags/%s' % (UUID, TAG1), 'DELETE') self.assertRaises(exc.HTTPNotFound, self.controller.delete, req, UUID, TAG1) def test_delete_invalid_instance_state(self): self.stub_out('nova.api.openstack.common.get_instance', return_invalid_server) req = self._get_request( '/v2/fake/servers/%s/tags/%s' % (UUID, TAG2), 'DELETE') self.assertRaises(exc.HTTPConflict, self.controller.delete, req, UUID, TAG1) @mock.patch('nova.notifications.base.send_instance_update_notification') @mock.patch('nova.db.instance_tag_delete_all') def test_delete_all(self, mock_db_delete_inst_tags, mock_notify): self.stub_out('nova.api.openstack.common.get_instance', return_server) req = self._get_request('/v2/fake/servers/%s/tags' % UUID, 'DELETE') self.controller.delete_all(req, UUID) mock_db_delete_inst_tags.assert_called_once_with(mock.ANY, UUID) self.assertEqual(1, mock_notify.call_count) def test_delete_all_invalid_instance_state(self): self.stub_out('nova.api.openstack.common.get_instance', return_invalid_server) req = self._get_request('/v2/fake/servers/%s/tags' % UUID, 'DELETE') self.assertRaises(exc.HTTPConflict, self.controller.delete_all, req, UUID) def test_show_non_existing_instance(self): req = self._get_request( '/v2/fake/servers/%s/tags/%s' % (NON_EXISTING_UUID, TAG1), 'GET') self.assertRaises(exc.HTTPNotFound, self.controller.show, req, NON_EXISTING_UUID, 
TAG1) def test_show_with_details_information_non_existing_instance(self): req = self._get_request( '/v2/fake/servers/%s' % NON_EXISTING_UUID, 'GET') servers_controller = servers.ServersController() self.assertRaises(exc.HTTPNotFound, servers_controller.show, req, NON_EXISTING_UUID) def test_index_non_existing_instance(self): req = self._get_request( 'v2/fake/servers/%s/tags' % NON_EXISTING_UUID, 'GET') self.assertRaises(exc.HTTPNotFound, self.controller.index, req, NON_EXISTING_UUID) def test_update_non_existing_instance(self): req = self._get_request( '/v2/fake/servers/%s/tags/%s' % (NON_EXISTING_UUID, TAG1), 'PUT') self.assertRaises(exc.HTTPNotFound, self.controller.update, req, NON_EXISTING_UUID, TAG1, body=None) def test_update_all_non_existing_instance(self): req = self._get_request( '/v2/fake/servers/%s/tags' % NON_EXISTING_UUID, 'PUT') self.assertRaises(exc.HTTPNotFound, self.controller.update_all, req, NON_EXISTING_UUID, body={'tags': TAGS}) def test_delete_non_existing_instance(self): req = self._get_request( '/v2/fake/servers/%s/tags/%s' % (NON_EXISTING_UUID, TAG1), 'DELETE') self.assertRaises(exc.HTTPNotFound, self.controller.delete, req, NON_EXISTING_UUID, TAG1) def test_delete_all_non_existing_instance(self): req = self._get_request( '/v2/fake/servers/%s/tags' % NON_EXISTING_UUID, 'DELETE') self.assertRaises(exc.HTTPNotFound, self.controller.delete_all, req, NON_EXISTING_UUID) nova-17.0.1/nova/tests/unit/api/openstack/compute/test_extended_ips_mac.py0000666000175000017500000001155613250073126027002 0ustar zuulzuul00000000000000# Copyright 2013 IBM Corp. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
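# NOTE(editor): Illustrative helper only -- not nova code. With the
# OS-EXT-IPS-MAC attribute tested below, every entry under a server's
# "addresses" carries the MAC address of its port as
# "OS-EXT-IPS-MAC:mac_addr" alongside "addr". A minimal reader for that
# response shape:


def illustrative_addr_mac_pairs(server):
    """Yield (address, mac) tuples from a server dict of the shape above."""
    for ips in server.get('addresses', {}).values():
        for ip in ips:
            yield ip.get('addr'), ip.get('OS-EXT-IPS-MAC:mac_addr')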
from oslo_serialization import jsonutils import six from nova import objects from nova import test from nova.tests.unit.api.openstack import fakes UUID1 = '00000000-0000-0000-0000-000000000001' UUID2 = '00000000-0000-0000-0000-000000000002' UUID3 = '00000000-0000-0000-0000-000000000003' NW_CACHE = [ { 'address': 'aa:aa:aa:aa:aa:aa', 'id': 1, 'network': { 'bridge': 'br0', 'id': 1, 'label': 'private', 'subnets': [ { 'cidr': '192.168.1.0/24', 'ips': [ { 'address': '192.168.1.100', 'type': 'fixed', 'floating_ips': [ {'address': '5.0.0.1', 'type': 'floating'}, ], }, ], }, ] } }, { 'address': 'bb:bb:bb:bb:bb:bb', 'id': 2, 'network': { 'bridge': 'br1', 'id': 2, 'label': 'public', 'subnets': [ { 'cidr': '10.0.0.0/24', 'ips': [ { 'address': '10.0.0.100', 'type': 'fixed', 'floating_ips': [ {'address': '5.0.0.2', 'type': 'floating'}, ], } ], }, ] } } ] ALL_IPS = [] for cache in NW_CACHE: for subnet in cache['network']['subnets']: for fixed in subnet['ips']: sanitized = dict(fixed) sanitized['mac_address'] = cache['address'] sanitized.pop('floating_ips') sanitized.pop('type') ALL_IPS.append(sanitized) for floating in fixed['floating_ips']: sanitized = dict(floating) sanitized['mac_address'] = cache['address'] sanitized.pop('type') ALL_IPS.append(sanitized) ALL_IPS.sort(key=lambda x: '%s-%s' % (x['address'], x['mac_address'])) def fake_compute_get(*args, **kwargs): inst = fakes.stub_instance_obj(None, 1, uuid=UUID3, nw_cache=NW_CACHE) return inst def fake_compute_get_all(*args, **kwargs): inst_list = [ fakes.stub_instance_obj(None, 1, uuid=UUID1, nw_cache=NW_CACHE), fakes.stub_instance_obj(None, 2, uuid=UUID2, nw_cache=NW_CACHE), ] return objects.InstanceList(objects=inst_list) class ExtendedIpsMacTestV21(test.TestCase): content_type = 'application/json' prefix = 'OS-EXT-IPS-MAC:' def setUp(self): super(ExtendedIpsMacTestV21, self).setUp() fakes.stub_out_nw_api(self) fakes.stub_out_secgroup_api(self) self.stub_out('nova.compute.api.API.get', fake_compute_get) self.stub_out('nova.compute.api.API.get_all', fake_compute_get_all) def _make_request(self, url): req = fakes.HTTPRequest.blank(url) req.headers['Accept'] = self.content_type res = req.get_response(fakes.wsgi_app_v21()) return res def _get_server(self, body): return jsonutils.loads(body).get('server') def _get_servers(self, body): return jsonutils.loads(body).get('servers') def _get_ips(self, server): for network in six.itervalues(server['addresses']): for ip in network: yield ip def assertServerStates(self, server): results = [] for ip in self._get_ips(server): results.append({'address': ip.get('addr'), 'mac_address': ip.get('%smac_addr' % self.prefix)}) self.assertJsonEqual(ALL_IPS, results) def test_show(self): url = '/v2/fake/servers/%s' % UUID3 res = self._make_request(url) self.assertEqual(200, res.status_int) self.assertServerStates(self._get_server(res.body)) def test_detail(self): url = '/v2/fake/servers/detail' res = self._make_request(url) self.assertEqual(200, res.status_int) for _i, server in enumerate(self._get_servers(res.body)): self.assertServerStates(server) nova-17.0.1/nova/tests/unit/api/openstack/compute/test_config_drive.py0000666000175000017500000001551213250073126026141 0ustar zuulzuul00000000000000# Copyright 2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from oslo_config import cfg from oslo_serialization import jsonutils from nova.api.openstack.compute import servers as servers_v21 from nova.compute import api as compute_api from nova import exception from nova import objects from nova import test from nova.tests.unit.api.openstack import fakes from nova.tests.unit.image import fake from nova.tests import uuidsentinel as uuids CONF = cfg.CONF class ConfigDriveTestV21(test.TestCase): base_url = '/v2/fake/servers/' def _setup_wsgi(self): self.app = fakes.wsgi_app_v21() def setUp(self): super(ConfigDriveTestV21, self).setUp() fakes.stub_out_networking(self) fake.stub_out_image_service(self) fakes.stub_out_secgroup_api(self) self._setup_wsgi() def test_show(self): self.stub_out('nova.db.instance_get', fakes.fake_instance_get()) self.stub_out('nova.db.instance_get_by_uuid', fakes.fake_instance_get()) # NOTE(sdague): because of the way extensions work, we have to # also stub out the Request compute cache with a real compute # object. Delete this once we remove all the gorp of # extensions modifying the server objects. self.stub_out('nova.api.openstack.wsgi.Request.get_db_instance', fakes.fake_compute_get()) req = fakes.HTTPRequest.blank(self.base_url + uuids.sentinel) req.headers['Content-Type'] = 'application/json' response = req.get_response(self.app) self.assertEqual(response.status_int, 200) res_dict = jsonutils.loads(response.body) self.assertIn('config_drive', res_dict['server']) @mock.patch('nova.compute.api.API.get_all') def test_detail_servers(self, mock_get_all): # NOTE(danms): Orphan these fakes (no context) so that we # are sure that the API is requesting what it needs without # having to lazy-load. 
mock_get_all.return_value = objects.InstanceList( objects=[fakes.stub_instance_obj(ctxt=None, id=1), fakes.stub_instance_obj(ctxt=None, id=2)]) req = fakes.HTTPRequest.blank(self.base_url + 'detail') res = req.get_response(self.app) server_dicts = jsonutils.loads(res.body)['servers'] self.assertNotEqual(len(server_dicts), 0) for server_dict in server_dicts: self.assertIn('config_drive', server_dict) class ServersControllerCreateTestV21(test.TestCase): base_url = '/v2/fake/' bad_request = exception.ValidationError def _set_up_controller(self): self.controller = servers_v21.ServersController() def _verify_config_drive(self, **kwargs): self.assertNotIn('config_drive', kwargs) def _initialize_extension(self): pass def setUp(self): """Shared implementation for tests below that create instance.""" super(ServersControllerCreateTestV21, self).setUp() self.instance_cache_num = 0 fakes.stub_out_nw_api(self) self._set_up_controller() fake.stub_out_image_service(self) def create_db_entry_for_new_instance(*args, **kwargs): instance = args[4] instance.uuid = fakes.FAKE_UUID return instance self.stub_out('nova.compute.api.API.create_db_entry_for_new_instance', create_db_entry_for_new_instance) def _test_create_extra(self, params): image_uuid = 'c905cedb-7281-47e4-8a62-f26bc5fc4c77' server = dict(name='server_test', imageRef=image_uuid, flavorRef=2) server.update(params) body = dict(server=server) req = fakes.HTTPRequest.blank(self.base_url + 'servers') req.method = 'POST' req.body = jsonutils.dump_as_bytes(body) req.headers["content-type"] = "application/json" server = self.controller.create(req, body=body).obj['server'] def _create_instance_body_of_config_drive(self, param): self._initialize_extension() def create(*args, **kwargs): self.assertIn('config_drive', kwargs) return old_create(*args, **kwargs) old_create = compute_api.API.create self.stub_out('nova.compute.api.API.create', create) image_href = '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6' flavor_ref = ('http://localhost' + self.base_url + 'flavors/3') body = { 'server': { 'name': 'config_drive_test', 'imageRef': image_href, 'flavorRef': flavor_ref, 'config_drive': param, }, } req = fakes.HTTPRequest.blank(self.base_url + 'servers') req.method = 'POST' req.body = jsonutils.dump_as_bytes(body) req.headers["content-type"] = "application/json" return req, body def test_create_instance_with_config_drive(self): param = True req, body = self._create_instance_body_of_config_drive(param) res = self.controller.create(req, body=body).obj server = res['server'] self.assertEqual(fakes.FAKE_UUID, server['id']) def test_create_instance_with_config_drive_as_boolean_string(self): param = 'false' req, body = self._create_instance_body_of_config_drive(param) res = self.controller.create(req, body=body).obj server = res['server'] self.assertEqual(fakes.FAKE_UUID, server['id']) def test_create_instance_with_bad_config_drive(self): param = 12345 req, body = self._create_instance_body_of_config_drive(param) self.assertRaises(self.bad_request, self.controller.create, req, body=body) def test_create_instance_without_config_drive(self): param = True req, body = self._create_instance_body_of_config_drive(param) del body['server']['config_drive'] res = self.controller.create(req, body=body).obj server = res['server'] self.assertEqual(fakes.FAKE_UUID, server['id']) def test_create_instance_with_empty_config_drive(self): param = '' req, body = self._create_instance_body_of_config_drive(param) self.assertRaises(exception.ValidationError, self.controller.create, req, body=body) 
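# NOTE(editor): Illustrative sketch only -- not nova's actual validation,
# which is JSON-Schema based (Python 3 assumed). The tests above show the
# accepted "config_drive" values: booleans and boolean-like strings such
# as 'false' pass, while other types (12345) and the empty string are
# rejected. oslo_utils.strutils.bool_from_string(..., strict=True) raises
# ValueError for unrecognized strings, which models that contract:

from oslo_utils import strutils


def illustrative_check_config_drive(value):
    """Return a bool, raising ValueError for values the API rejects."""
    if isinstance(value, bool):
        return value
    if isinstance(value, str) and value:
        return strutils.bool_from_string(value, strict=True)
    raise ValueError('config_drive must be a boolean or boolean string')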
nova-17.0.1/nova/tests/unit/api/openstack/compute/test_extended_server_attributes.py0000666000175000017500000002357713250073126031151 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from oslo_config import cfg from oslo_serialization import jsonutils from nova.api.openstack import wsgi as os_wsgi from nova import compute from nova import exception from nova import objects from nova import test from nova.tests.unit.api.openstack import fakes NAME_FMT = cfg.CONF.instance_name_template UUID1 = '00000000-0000-0000-0000-000000000001' UUID2 = '00000000-0000-0000-0000-000000000002' UUID3 = '00000000-0000-0000-0000-000000000003' UUID4 = '00000000-0000-0000-0000-000000000004' UUID5 = '00000000-0000-0000-0000-000000000005' def fake_services(host): service_list = [objects.Service(id=0, host=host, forced_down=True, binary='nova-compute')] return objects.ServiceList(objects=service_list) def fake_compute_get(*args, **kwargs): return fakes.stub_instance_obj( None, 1, uuid=UUID3, host="host-fake", node="node-fake", reservation_id="r-1", launch_index=0, kernel_id=UUID4, ramdisk_id=UUID5, display_name="hostname-1", root_device_name="/dev/vda", user_data="userdata", services=fake_services("host-fake")) def fake_compute_get_all(*args, **kwargs): inst_list = [ fakes.stub_instance_obj( None, 1, uuid=UUID1, host="host-1", node="node-1", reservation_id="r-1", launch_index=0, kernel_id=UUID4, ramdisk_id=UUID5, display_name="hostname-1", root_device_name="/dev/vda", user_data="userdata", services=fake_services("host-1")), fakes.stub_instance_obj( None, 2, uuid=UUID2, host="host-2", node="node-2", reservation_id="r-2", launch_index=1, kernel_id=UUID4, ramdisk_id=UUID5, display_name="hostname-2", root_device_name="/dev/vda", user_data="userdata", services=fake_services("host-2")), ] return objects.InstanceList(objects=inst_list) class ExtendedServerAttributesTestV21(test.TestCase): content_type = 'application/json' prefix = 'OS-EXT-SRV-ATTR:' fake_url = '/v2/fake' wsgi_api_version = os_wsgi.DEFAULT_API_VERSION def setUp(self): super(ExtendedServerAttributesTestV21, self).setUp() fakes.stub_out_nw_api(self) fakes.stub_out_secgroup_api(self) self.stub_out('nova.compute.api.API.get', fake_compute_get) self.stub_out('nova.compute.api.API.get_all', fake_compute_get_all) self.stub_out('nova.db.instance_get_by_uuid', fake_compute_get) def _make_request(self, url): req = fakes.HTTPRequest.blank(url) req.headers['Accept'] = self.content_type req.headers = {os_wsgi.API_VERSION_REQUEST_HEADER: 'compute %s' % self.wsgi_api_version} res = req.get_response( fakes.wsgi_app_v21()) return res def _get_server(self, body): return jsonutils.loads(body).get('server') def _get_servers(self, body): return jsonutils.loads(body).get('servers') def assertServerAttributes(self, server, host, node, instance_name): self.assertEqual(server.get('%shost' % self.prefix), host) self.assertEqual(server.get('%sinstance_name' % self.prefix), instance_name) 
self.assertEqual(server.get('%shypervisor_hostname' % self.prefix), node) def test_show(self): url = self.fake_url + '/servers/%s' % UUID3 res = self._make_request(url) self.assertEqual(res.status_int, 200) self.assertServerAttributes(self._get_server(res.body), host='host-fake', node='node-fake', instance_name=NAME_FMT % 1) def test_detail(self): url = self.fake_url + '/servers/detail' res = self._make_request(url) self.assertEqual(res.status_int, 200) for i, server in enumerate(self._get_servers(res.body)): self.assertServerAttributes(server, host='host-%s' % (i + 1), node='node-%s' % (i + 1), instance_name=NAME_FMT % (i + 1)) @mock.patch.object(compute.api.API, 'get_all') def test_detail_empty_instance_list_invalid_status(self, mock_get_all_method): mock_get_all_method.return_value = objects.InstanceList(objects=[]) url = "%s%s" % (self.fake_url, '/servers/detail?status=invalid_status') res = self._make_request(url) # check status code 200 with empty instance list self.assertEqual(200, res.status_int) self.assertEqual(0, len(self._get_servers(res.body))) def test_no_instance_passthrough_404(self): def fake_compute_get(*args, **kwargs): raise exception.InstanceNotFound(instance_id='fake') self.stub_out('nova.compute.api.API.get', fake_compute_get) url = self.fake_url + '/servers/70f6db34-de8d-4fbd-aafb-4065bdfa6115' res = self._make_request(url) self.assertEqual(res.status_int, 404) class ExtendedServerAttributesTestV23(ExtendedServerAttributesTestV21): wsgi_api_version = '2.3' def assertServerAttributes(self, server, host, node, instance_name, reservation_id, launch_index, kernel_id, ramdisk_id, hostname, root_device_name, user_data): super(ExtendedServerAttributesTestV23, self).assertServerAttributes( server, host, node, instance_name) self.assertEqual(server.get('%sreservation_id' % self.prefix), reservation_id) self.assertEqual(server.get('%slaunch_index' % self.prefix), launch_index) self.assertEqual(server.get('%skernel_id' % self.prefix), kernel_id) self.assertEqual(server.get('%sramdisk_id' % self.prefix), ramdisk_id) self.assertEqual(server.get('%shostname' % self.prefix), hostname) self.assertEqual(server.get('%sroot_device_name' % self.prefix), root_device_name) self.assertEqual(server.get('%suser_data' % self.prefix), user_data) def test_show(self): url = self.fake_url + '/servers/%s' % UUID3 res = self._make_request(url) self.assertEqual(res.status_int, 200) self.assertServerAttributes(self._get_server(res.body), host='host-fake', node='node-fake', instance_name=NAME_FMT % 1, reservation_id="r-1", launch_index=0, kernel_id=UUID4, ramdisk_id=UUID5, hostname="hostname-1", root_device_name="/dev/vda", user_data="userdata") def test_detail(self): url = self.fake_url + '/servers/detail' res = self._make_request(url) self.assertEqual(res.status_int, 200) for i, server in enumerate(self._get_servers(res.body)): self.assertServerAttributes(server, host='host-%s' % (i + 1), node='node-%s' % (i + 1), instance_name=NAME_FMT % (i + 1), reservation_id="r-%s" % (i + 1), launch_index=i, kernel_id=UUID4, ramdisk_id=UUID5, hostname="hostname-%s" % (i + 1), root_device_name="/dev/vda", user_data="userdata") class ExtendedServerAttributesTestV216(ExtendedServerAttributesTestV21): wsgi_api_version = '2.16' def assertServerAttributes(self, server, host, node, instance_name, host_status): super(ExtendedServerAttributesTestV216, self).assertServerAttributes( server, host, node, instance_name) self.assertEqual(server.get('host_status'), host_status) def test_show(self): url = self.fake_url + 
'/servers/%s' % UUID3 res = self._make_request(url) self.assertEqual(res.status_int, 200) self.assertServerAttributes(self._get_server(res.body), host='host-fake', node='node-fake', instance_name=NAME_FMT % 1, host_status="DOWN") def test_detail(self): url = self.fake_url + '/servers/detail' res = self._make_request(url) self.assertEqual(res.status_int, 200) for i, server in enumerate(self._get_servers(res.body)): self.assertServerAttributes(server, host='host-%s' % (i + 1), node='node-%s' % (i + 1), instance_name=NAME_FMT % (i + 1), host_status="DOWN") nova-17.0.1/nova/tests/unit/api/openstack/compute/test_extended_hypervisors.py0000666000175000017500000001154413250073126027761 0ustar zuulzuul00000000000000# Copyright 2014 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy import mock from nova.api.openstack.compute import hypervisors \ as hypervisors_v21 from nova import exception from nova import objects from nova import test from nova.tests.unit.api.openstack.compute import test_hypervisors from nova.tests.unit.api.openstack import fakes def fake_compute_node_get(context, compute_id): for hyper in test_hypervisors.TEST_HYPERS_OBJ: if hyper.id == int(compute_id): return hyper raise exception.ComputeHostNotFound(host=compute_id) def fake_compute_node_get_all(context, limit=None, marker=None): return test_hypervisors.TEST_HYPERS_OBJ def fake_service_get_by_compute_host(context, host): for service in test_hypervisors.TEST_SERVICES: if service.host == host: return service class ExtendedHypervisorsTestV21(test.NoDBTestCase): DETAIL_HYPERS_DICTS = copy.deepcopy(test_hypervisors.TEST_HYPERS) del DETAIL_HYPERS_DICTS[0]['service_id'] del DETAIL_HYPERS_DICTS[1]['service_id'] del DETAIL_HYPERS_DICTS[0]['host'] del DETAIL_HYPERS_DICTS[1]['host'] del DETAIL_HYPERS_DICTS[0]['uuid'] del DETAIL_HYPERS_DICTS[1]['uuid'] DETAIL_HYPERS_DICTS[0].update({'state': 'up', 'status': 'enabled', 'service': dict(id=1, host='compute1', disabled_reason=None)}) DETAIL_HYPERS_DICTS[1].update({'state': 'up', 'status': 'enabled', 'service': dict(id=2, host='compute2', disabled_reason=None)}) def _set_up_controller(self): self.controller = hypervisors_v21.HypervisorsController() def _get_request(self): return fakes.HTTPRequest.blank('/v2/fake/os-hypervisors/detail', use_admin_context=True) def setUp(self): super(ExtendedHypervisorsTestV21, self).setUp() self._set_up_controller() def test_view_hypervisor_detail_noservers(self): with mock.patch.object(self.controller.servicegroup_api, 'service_is_up', return_value=True) as mock_service_is_up: req = self._get_request() result = self.controller._view_hypervisor( test_hypervisors.TEST_HYPERS_OBJ[0], test_hypervisors.TEST_SERVICES[0], True, req) self.assertEqual(self.DETAIL_HYPERS_DICTS[0], result) self.assertTrue(mock_service_is_up.called) @mock.patch.object(objects.Service, 'get_by_compute_host', side_effect=fake_service_get_by_compute_host) def test_detail(self, mock_get_by_host): with test.nested( mock.patch.object(self.controller.host_api, 'compute_node_get_all', 
side_effect=fake_compute_node_get_all), mock.patch.object(self.controller.servicegroup_api, 'service_is_up', return_value=True), ) as (mock_node_get_all, mock_service_is_up): req = self._get_request() result = self.controller.detail(req) self.assertEqual(dict(hypervisors=self.DETAIL_HYPERS_DICTS), result) self.assertTrue(mock_service_is_up.called) self.assertTrue(mock_get_by_host.called) self.assertTrue(mock_node_get_all.called) @mock.patch.object(objects.Service, 'get_by_compute_host', side_effect=fake_service_get_by_compute_host) def test_show_withid(self, mock_get_by_host): with test.nested( mock.patch.object(self.controller.host_api, 'compute_node_get', side_effect=fake_compute_node_get), mock.patch.object(self.controller.servicegroup_api, 'service_is_up', return_value=True), ) as (mock_node_get, mock_service_is_up): req = self._get_request() result = self.controller.show(req, '1') self.assertEqual(dict(hypervisor=self.DETAIL_HYPERS_DICTS[0]), result) self.assertTrue(mock_service_is_up.called) self.assertTrue(mock_get_by_host.called) self.assertTrue(mock_node_get.called) nova-17.0.1/nova/tests/unit/api/openstack/compute/test_server_reset_state.py0000666000175000017500000001214013250073126027405 0ustar zuulzuul00000000000000# Copyright 2015 NEC Corporation. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
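# NOTE(editor): Illustrative model only -- not nova code. The
# os-resetState admin action tested below accepts a body of the form
# {"os-resetState": {"state": "active"}} (or "error"), resets the
# instance's vm_state, clears its task_state, and replies with HTTP 202;
# any other state is rejected as a validation error.


def illustrative_reset_state(instance, state):
    """Apply the transition the tests below assert on ``instance.save()``."""
    if state not in ('active', 'error'):
        raise ValueError('unsupported reset state: %s' % state)
    instance.vm_state = state
    instance.task_state = None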
import mock from oslo_utils import uuidutils import webob from nova.api.openstack.compute import admin_actions \ as admin_actions_v21 from nova.compute import vm_states from nova import exception from nova import objects from nova import test from nova.tests.unit.api.openstack import fakes class ResetStateTestsV21(test.NoDBTestCase): admin_act = admin_actions_v21 bad_request = exception.ValidationError def setUp(self): super(ResetStateTestsV21, self).setUp() self.uuid = uuidutils.generate_uuid() self.admin_api = self.admin_act.AdminActionsController() self.compute_api = self.admin_api.compute_api self.request = self._get_request() self.context = self.request.environ['nova.context'] self.instance = self._create_instance() def _create_instance(self): instance = objects.Instance() instance.uuid = self.uuid instance.vm_state = 'fake' instance.task_state = 'fake' instance.obj_reset_changes() return instance def _check_instance_state(self, expected): self.assertEqual(set(expected.keys()), self.instance.obj_what_changed()) for k, v in expected.items(): self.assertEqual(v, getattr(self.instance, k), "Instance.%s doesn't match" % k) self.instance.obj_reset_changes() def _get_request(self): return fakes.HTTPRequest.blank('') def test_no_state(self): self.assertRaises(self.bad_request, self.admin_api._reset_state, self.request, self.uuid, body={"os-resetState": None}) def test_bad_state(self): self.assertRaises(self.bad_request, self.admin_api._reset_state, self.request, self.uuid, body={"os-resetState": {"state": "spam"}}) def test_no_instance(self): self.compute_api.get = mock.MagicMock( side_effect=exception.InstanceNotFound(instance_id='inst_ud')) self.assertRaises(webob.exc.HTTPNotFound, self.admin_api._reset_state, self.request, self.uuid, body={"os-resetState": {"state": "active"}}) self.compute_api.get.assert_called_once_with( self.context, self.uuid, expected_attrs=None) def test_reset_active(self): expected = dict(vm_state=vm_states.ACTIVE, task_state=None) self.instance.save = mock.MagicMock( side_effect=lambda **kw: self._check_instance_state(expected)) self.compute_api.get = mock.MagicMock(return_value=self.instance) body = {"os-resetState": {"state": "active"}} result = self.admin_api._reset_state(self.request, self.uuid, body=body) # NOTE: on v2.1, http status code is set as wsgi_code of API # method instead of status_int in a response object. if isinstance(self.admin_api, admin_actions_v21.AdminActionsController): status_int = self.admin_api._reset_state.wsgi_code else: status_int = result.status_int self.assertEqual(202, status_int) self.compute_api.get.assert_called_once_with( self.context, self.instance.uuid, expected_attrs=None) self.instance.save.assert_called_once_with(admin_state_reset=True) def test_reset_error(self): expected = dict(vm_state=vm_states.ERROR, task_state=None) self.instance.save = mock.MagicMock( side_effect=lambda **kw: self._check_instance_state(expected)) self.compute_api.get = mock.MagicMock(return_value=self.instance) body = {"os-resetState": {"state": "error"}} result = self.admin_api._reset_state(self.request, self.uuid, body=body) # NOTE: on v2.1, http status code is set as wsgi_code of API # method instead of status_int in a response object. 
if isinstance(self.admin_api, admin_actions_v21.AdminActionsController): status_int = self.admin_api._reset_state.wsgi_code else: status_int = result.status_int self.assertEqual(202, status_int) self.compute_api.get.assert_called_once_with( self.context, self.instance.uuid, expected_attrs=None) self.instance.save.assert_called_once_with(admin_state_reset=True) nova-17.0.1/nova/tests/unit/api/openstack/compute/test_used_limits.py0000666000175000017500000002331413250073126026023 0ustar zuulzuul00000000000000# Copyright 2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock import webob from nova.api.openstack import api_version_request from nova.api.openstack.compute import used_limits \ as used_limits_v21 from nova.api.openstack import wsgi import nova.context from nova import exception from nova.policies import used_limits as ul_policies from nova import quota from nova import test class FakeRequest(object): def __init__(self, context, reserved=False): self.environ = {'nova.context': context} self.reserved = reserved self.api_version_request = api_version_request.min_api_version() if reserved: self.GET = webob.request.MultiDict({'reserved': 1}) else: self.GET = webob.request.MultiDict({}) def is_legacy_v2(self): return False class UsedLimitsTestCaseV21(test.NoDBTestCase): used_limit_extension = "os_compute_api:os-used-limits" include_server_group_quotas = True def setUp(self): """Run before each test.""" super(UsedLimitsTestCaseV21, self).setUp() self._set_up_controller() self.fake_context = nova.context.RequestContext('fake', 'fake') def _set_up_controller(self): self.controller = used_limits_v21.UsedLimitsController() patcher = self.mock_can = mock.patch('nova.context.RequestContext.can') self.mock_can = patcher.start() self.addCleanup(patcher.stop) def _do_test_used_limits(self, reserved): fake_req = FakeRequest(self.fake_context, reserved=reserved) obj = { "limits": { "rate": [], "absolute": {}, }, } res = wsgi.ResponseObject(obj) quota_map = { 'totalRAMUsed': 'ram', 'totalCoresUsed': 'cores', 'totalInstancesUsed': 'instances', 'totalFloatingIpsUsed': 'floating_ips', 'totalSecurityGroupsUsed': 'security_groups', 'totalServerGroupsUsed': 'server_groups', } limits = {} expected_abs_limits = [] for display_name, q in quota_map.items(): limits[q] = {'limit': len(display_name), 'in_use': len(display_name) / 2, 'reserved': 0} if (self.include_server_group_quotas or display_name != 'totalServerGroupsUsed'): expected_abs_limits.append(display_name) def stub_get_project_quotas(context, project_id, usages=True): return limits self.stub_out('nova.quota.QUOTAS.get_project_quotas', stub_get_project_quotas) self.controller.index(fake_req, res) abs_limits = res.obj['limits']['absolute'] for limit in expected_abs_limits: value = abs_limits[limit] r = limits[quota_map[limit]]['reserved'] if reserved else 0 self.assertEqual(limits[quota_map[limit]]['in_use'] + r, value) def test_used_limits_basic(self): self._do_test_used_limits(False) def 
test_used_limits_with_reserved(self):
        self._do_test_used_limits(True)

    def test_admin_can_fetch_limits_for_a_given_tenant_id(self):
        project_id = "123456"
        user_id = "A1234"
        tenant_id = 'abcd'
        self.fake_context.project_id = project_id
        self.fake_context.user_id = user_id
        obj = {
            "limits": {
                "rate": [],
                "absolute": {},
            },
        }
        target = {
            "project_id": tenant_id,
            "user_id": user_id
        }
        fake_req = FakeRequest(self.fake_context)
        fake_req.GET = webob.request.MultiDict({'tenant_id': tenant_id})

        with mock.patch.object(quota.QUOTAS, 'get_project_quotas',
                               return_value={}) as mock_get_quotas:
            res = wsgi.ResponseObject(obj)
            self.controller.index(fake_req, res)
            self.mock_can.assert_called_once_with(
                ul_policies.BASE_POLICY_NAME, target)
            mock_get_quotas.assert_called_once_with(self.fake_context,
                tenant_id, usages=True)

    def _test_admin_can_fetch_used_limits_for_own_project(self, req_get):
        project_id = "123456"
        if 'tenant_id' in req_get:
            project_id = req_get['tenant_id']
        user_id = "A1234"
        self.fake_context.project_id = project_id
        self.fake_context.user_id = user_id
        obj = {
            "limits": {
                "rate": [],
                "absolute": {},
            },
        }
        fake_req = FakeRequest(self.fake_context)
        fake_req.GET = webob.request.MultiDict(req_get)

        with mock.patch.object(quota.QUOTAS, 'get_project_quotas',
                               return_value={}) as mock_get_quotas:
            res = wsgi.ResponseObject(obj)
            self.controller.index(fake_req, res)
            mock_get_quotas.assert_called_once_with(self.fake_context,
                project_id, usages=True)

    def test_admin_can_fetch_used_limits_for_own_project(self):
        req_get = {}
        self._test_admin_can_fetch_used_limits_for_own_project(req_get)

    def test_admin_can_fetch_used_limits_for_dummy_only(self):
        # For backward compatibility we allow additional params to be sent
        # via req.GET; this can be removed once restrictions are added to
        # the query parameters later.
        req_get = {'dummy': 'dummy'}
        self._test_admin_can_fetch_used_limits_for_own_project(req_get)

    def test_admin_can_fetch_used_limits_with_positive_int(self):
        req_get = {'tenant_id': 123}
        self._test_admin_can_fetch_used_limits_for_own_project(req_get)

    def test_admin_can_fetch_used_limits_with_negative_int(self):
        req_get = {'tenant_id': -1}
        self._test_admin_can_fetch_used_limits_for_own_project(req_get)

    def test_admin_can_fetch_used_limits_with_unknown_param(self):
        req_get = {'tenant_id': '123', 'unknown': 'unknown'}
        self._test_admin_can_fetch_used_limits_for_own_project(req_get)

    def test_non_admin_cannot_fetch_used_limits_for_any_other_project(self):
        project_id = "123456"
        user_id = "A1234"
        tenant_id = "abcd"
        self.fake_context.project_id = project_id
        self.fake_context.user_id = user_id
        obj = {
            "limits": {
                "rate": [],
                "absolute": {},
            },
        }
        target = {
            "project_id": tenant_id,
            "user_id": user_id
        }
        fake_req = FakeRequest(self.fake_context)
        fake_req.GET = webob.request.MultiDict({'tenant_id': tenant_id})
        self.mock_can.side_effect = exception.PolicyNotAuthorized(
            action=self.used_limit_extension)
        res = wsgi.ResponseObject(obj)
        self.assertRaises(exception.PolicyNotAuthorized,
                          self.controller.index, fake_req, res)
        self.mock_can.assert_called_once_with(ul_policies.BASE_POLICY_NAME,
                                              target)

    def test_used_limits_fetched_for_context_project_id(self):
        project_id = "123456"
        self.fake_context.project_id = project_id
        obj = {
            "limits": {
                "rate": [],
                "absolute": {},
            },
        }
        fake_req = FakeRequest(self.fake_context)
        with mock.patch.object(quota.QUOTAS, 'get_project_quotas',
                               return_value={}) as mock_get_quotas:
            res = wsgi.ResponseObject(obj)
            self.controller.index(fake_req, res)
            mock_get_quotas.assert_called_once_with(self.fake_context,
                project_id, usages=True)

    def
test_used_ram_added(self): fake_req = FakeRequest(self.fake_context) obj = { "limits": { "rate": [], "absolute": { "maxTotalRAMSize": 512, }, }, } res = wsgi.ResponseObject(obj) def stub_get_project_quotas(context, project_id, usages=True): return {'ram': {'limit': 512, 'in_use': 256}} with mock.patch.object(quota.QUOTAS, 'get_project_quotas', side_effect=stub_get_project_quotas ) as mock_get_quotas: self.controller.index(fake_req, res) abs_limits = res.obj['limits']['absolute'] self.assertIn('totalRAMUsed', abs_limits) self.assertEqual(256, abs_limits['totalRAMUsed']) self.assertEqual(1, mock_get_quotas.call_count) def test_no_ram_quota(self): fake_req = FakeRequest(self.fake_context) obj = { "limits": { "rate": [], "absolute": {}, }, } res = wsgi.ResponseObject(obj) with mock.patch.object(quota.QUOTAS, 'get_project_quotas', return_value={}) as mock_get_quotas: self.controller.index(fake_req, res) abs_limits = res.obj['limits']['absolute'] self.assertNotIn('totalRAMUsed', abs_limits) self.assertEqual(1, mock_get_quotas.call_count) nova-17.0.1/nova/tests/unit/api/openstack/compute/test_server_migrations.py0000666000175000017500000003713613250073126027253 0ustar zuulzuul00000000000000# Copyright 2016 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
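# NOTE(editor): Illustrative helper only -- not nova code. The fake
# migration records below carry live-migration progress counters
# (memory_total/memory_processed/memory_remaining and their disk_*
# counterparts, all in bytes); a percent-complete figure can be derived
# from them as follows (hypothetical helper):


def illustrative_progress(migration):
    """Return (memory_pct, disk_pct) for a migration-like mapping."""
    def pct(done, total):
        return 100.0 * done / total if total else 0.0
    return (pct(migration['memory_processed'], migration['memory_total']),
            pct(migration['disk_processed'], migration['disk_total']))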
import copy import datetime import mock import webob from nova.api.openstack.compute import server_migrations from nova import exception from nova import objects from nova.objects import base from nova import test from nova.tests.unit.api.openstack import fakes from nova.tests import uuidsentinel as uuids SERVER_UUID = uuids.server_uuid fake_migrations = [ { 'id': 1234, 'source_node': 'node1', 'dest_node': 'node2', 'source_compute': 'compute1', 'dest_compute': 'compute2', 'dest_host': '1.2.3.4', 'status': 'running', 'instance_uuid': SERVER_UUID, 'old_instance_type_id': 1, 'new_instance_type_id': 2, 'migration_type': 'live-migration', 'hidden': False, 'memory_total': 123456, 'memory_processed': 12345, 'memory_remaining': 111111, 'disk_total': 234567, 'disk_processed': 23456, 'disk_remaining': 211111, 'created_at': datetime.datetime(2012, 10, 29, 13, 42, 2), 'updated_at': datetime.datetime(2012, 10, 29, 13, 42, 2), 'deleted_at': None, 'deleted': False, 'uuid': uuids.migration1, }, { 'id': 5678, 'source_node': 'node10', 'dest_node': 'node20', 'source_compute': 'compute10', 'dest_compute': 'compute20', 'dest_host': '5.6.7.8', 'status': 'running', 'instance_uuid': SERVER_UUID, 'old_instance_type_id': 5, 'new_instance_type_id': 6, 'migration_type': 'live-migration', 'hidden': False, 'memory_total': 456789, 'memory_processed': 56789, 'memory_remaining': 400000, 'disk_total': 96789, 'disk_processed': 6789, 'disk_remaining': 90000, 'created_at': datetime.datetime(2013, 10, 22, 13, 42, 2), 'updated_at': datetime.datetime(2013, 10, 22, 13, 42, 2), 'deleted_at': None, 'deleted': False, 'uuid': uuids.migration2, } ] migrations_obj = base.obj_make_list( 'fake-context', objects.MigrationList(), objects.Migration, fake_migrations) class ServerMigrationsTestsV21(test.NoDBTestCase): wsgi_api_version = '2.22' def setUp(self): super(ServerMigrationsTestsV21, self).setUp() self.req = fakes.HTTPRequest.blank('', version=self.wsgi_api_version) self.context = self.req.environ['nova.context'] self.controller = server_migrations.ServerMigrationsController() self.compute_api = self.controller.compute_api def test_force_complete_succeeded(self): @mock.patch.object(self.compute_api, 'live_migrate_force_complete') @mock.patch.object(self.compute_api, 'get') def _do_test(compute_api_get, live_migrate_force_complete): self.controller._force_complete(self.req, '1', '1', body={'force_complete': None}) live_migrate_force_complete.assert_called_once_with( self.context, compute_api_get(), '1') _do_test() def _test_force_complete_failed_with_exception(self, fake_exc, expected_exc): @mock.patch.object(self.compute_api, 'live_migrate_force_complete', side_effect=fake_exc) @mock.patch.object(self.compute_api, 'get') def _do_test(compute_api_get, live_migrate_force_complete): self.assertRaises(expected_exc, self.controller._force_complete, self.req, '1', '1', body={'force_complete': None}) _do_test() def test_force_complete_instance_not_migrating(self): self._test_force_complete_failed_with_exception( exception.InstanceInvalidState(instance_uuid='', state='', attr='', method=''), webob.exc.HTTPConflict) def test_force_complete_migration_not_found(self): self._test_force_complete_failed_with_exception( exception.MigrationNotFoundByStatus(instance_id='', status=''), webob.exc.HTTPBadRequest) def test_force_complete_instance_is_locked(self): self._test_force_complete_failed_with_exception( exception.InstanceIsLocked(instance_uuid=''), webob.exc.HTTPConflict) def test_force_complete_invalid_migration_state(self): 
self._test_force_complete_failed_with_exception( exception.InvalidMigrationState(migration_id='', instance_uuid='', state='', method=''), webob.exc.HTTPBadRequest) def test_force_complete_instance_not_found(self): self._test_force_complete_failed_with_exception( exception.InstanceNotFound(instance_id=''), webob.exc.HTTPNotFound) def test_force_complete_unexpected_error(self): self._test_force_complete_failed_with_exception( exception.NovaException(), webob.exc.HTTPInternalServerError) class ServerMigrationsTestsV223(ServerMigrationsTestsV21): wsgi_api_version = '2.23' def setUp(self): super(ServerMigrationsTestsV223, self).setUp() self.req = fakes.HTTPRequest.blank('', version=self.wsgi_api_version, use_admin_context=True) self.context = self.req.environ['nova.context'] @mock.patch('nova.compute.api.API.get_migrations_in_progress_by_instance') @mock.patch('nova.compute.api.API.get') def test_index(self, m_get_instance, m_get_mig): migrations = [server_migrations.output(mig) for mig in migrations_obj] migrations_in_progress = {'migrations': migrations} for mig in migrations_in_progress['migrations']: self.assertIn('id', mig) self.assertNotIn('deleted', mig) self.assertNotIn('deleted_at', mig) m_get_mig.return_value = migrations_obj response = self.controller.index(self.req, SERVER_UUID) self.assertEqual(migrations_in_progress, response) m_get_instance.assert_called_once_with(self.context, SERVER_UUID, expected_attrs=None) @mock.patch('nova.compute.api.API.get') def test_index_invalid_instance(self, m_get_instance): m_get_instance.side_effect = exception.InstanceNotFound(instance_id=1) self.assertRaises(webob.exc.HTTPNotFound, self.controller.index, self.req, SERVER_UUID) m_get_instance.assert_called_once_with(self.context, SERVER_UUID, expected_attrs=None) @mock.patch('nova.compute.api.API.get_migration_by_id_and_instance') @mock.patch('nova.compute.api.API.get') def test_show(self, m_get_instance, m_get_mig): migrations = [server_migrations.output(mig) for mig in migrations_obj] m_get_mig.return_value = migrations_obj[0] response = self.controller.show(self.req, SERVER_UUID, migrations_obj[0].id) self.assertEqual(migrations[0], response['migration']) m_get_instance.assert_called_once_with(self.context, SERVER_UUID, expected_attrs=None) @mock.patch('nova.compute.api.API.get_migration_by_id_and_instance') @mock.patch('nova.compute.api.API.get') def test_show_migration_non_progress(self, m_get_instance, m_get_mig): non_progress_mig = copy.deepcopy(migrations_obj[0]) non_progress_mig.status = "reverted" m_get_mig.return_value = non_progress_mig self.assertRaises(webob.exc.HTTPNotFound, self.controller.show, self.req, SERVER_UUID, non_progress_mig.id) m_get_instance.assert_called_once_with(self.context, SERVER_UUID, expected_attrs=None) @mock.patch('nova.compute.api.API.get_migration_by_id_and_instance') @mock.patch('nova.compute.api.API.get') def test_show_migration_not_live_migration(self, m_get_instance, m_get_mig): non_progress_mig = copy.deepcopy(migrations_obj[0]) non_progress_mig.migration_type = "resize" m_get_mig.return_value = non_progress_mig self.assertRaises(webob.exc.HTTPNotFound, self.controller.show, self.req, SERVER_UUID, non_progress_mig.id) m_get_instance.assert_called_once_with(self.context, SERVER_UUID, expected_attrs=None) @mock.patch('nova.compute.api.API.get_migration_by_id_and_instance') @mock.patch('nova.compute.api.API.get') def test_show_migration_not_exist(self, m_get_instance, m_get_mig): m_get_mig.side_effect = exception.MigrationNotFoundForInstance( 
migration_id=migrations_obj[0].id, instance_id=SERVER_UUID) self.assertRaises(webob.exc.HTTPNotFound, self.controller.show, self.req, SERVER_UUID, migrations_obj[0].id) m_get_instance.assert_called_once_with(self.context, SERVER_UUID, expected_attrs=None) @mock.patch('nova.compute.api.API.get') def test_show_migration_invalid_instance(self, m_get_instance): m_get_instance.side_effect = exception.InstanceNotFound(instance_id=1) self.assertRaises(webob.exc.HTTPNotFound, self.controller.show, self.req, SERVER_UUID, migrations_obj[0].id) m_get_instance.assert_called_once_with(self.context, SERVER_UUID, expected_attrs=None) class ServerMigrationsTestsV224(ServerMigrationsTestsV21): wsgi_api_version = '2.24' def setUp(self): super(ServerMigrationsTestsV224, self).setUp() self.req = fakes.HTTPRequest.blank('', version=self.wsgi_api_version, use_admin_context=True) self.context = self.req.environ['nova.context'] def test_cancel_live_migration_succeeded(self): @mock.patch.object(self.compute_api, 'live_migrate_abort') @mock.patch.object(self.compute_api, 'get') def _do_test(mock_get, mock_abort): self.controller.delete(self.req, 'server-id', 'migration-id') mock_abort.assert_called_once_with(self.context, mock_get(), 'migration-id') _do_test() def _test_cancel_live_migration_failed(self, fake_exc, expected_exc): @mock.patch.object(self.compute_api, 'live_migrate_abort', side_effect=fake_exc) @mock.patch.object(self.compute_api, 'get') def _do_test(mock_get, mock_abort): self.assertRaises(expected_exc, self.controller.delete, self.req, 'server-id', 'migration-id') _do_test() def test_cancel_live_migration_invalid_state(self): self._test_cancel_live_migration_failed( exception.InstanceInvalidState(instance_uuid='', state='', attr='', method=''), webob.exc.HTTPConflict) def test_cancel_live_migration_migration_not_found(self): self._test_cancel_live_migration_failed( exception.MigrationNotFoundForInstance(migration_id='', instance_id=''), webob.exc.HTTPNotFound) def test_cancel_live_migration_invalid_migration_state(self): self._test_cancel_live_migration_failed( exception.InvalidMigrationState(migration_id='', instance_uuid='', state='', method=''), webob.exc.HTTPBadRequest) def test_cancel_live_migration_instance_not_found(self): self.assertRaises(webob.exc.HTTPNotFound, self.controller.delete, self.req, 'server-id', 'migration-id') class ServerMigrationsPolicyEnforcementV21(test.NoDBTestCase): wsgi_api_version = '2.22' def setUp(self): super(ServerMigrationsPolicyEnforcementV21, self).setUp() self.controller = server_migrations.ServerMigrationsController() self.req = fakes.HTTPRequest.blank('', version=self.wsgi_api_version) def test_force_complete_policy_failed(self): rule_name = "os_compute_api:servers:migrations:force_complete" self.policy.set_rules({rule_name: "project:non_fake"}) body_args = {'force_complete': None} exc = self.assertRaises(exception.PolicyNotAuthorized, self.controller._force_complete, self.req, fakes.FAKE_UUID, fakes.FAKE_UUID, body=body_args) self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) class ServerMigrationsPolicyEnforcementV223( ServerMigrationsPolicyEnforcementV21): wsgi_api_version = '2.23' def test_migration_index_failed(self): rule_name = "os_compute_api:servers:migrations:index" self.policy.set_rules({rule_name: "project:non_fake"}) exc = self.assertRaises(exception.PolicyNotAuthorized, self.controller.index, self.req, fakes.FAKE_UUID) self.assertEqual("Policy doesn't allow %s to be performed." 
% rule_name, exc.format_message()) def test_migration_show_failed(self): rule_name = "os_compute_api:servers:migrations:show" self.policy.set_rules({rule_name: "project:non_fake"}) exc = self.assertRaises(exception.PolicyNotAuthorized, self.controller.show, self.req, fakes.FAKE_UUID, 1) self.assertEqual("Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) class ServerMigrationsPolicyEnforcementV224( ServerMigrationsPolicyEnforcementV223): wsgi_api_version = '2.24' def test_migrate_delete_failed(self): rule_name = "os_compute_api:servers:migrations:delete" self.policy.set_rules({rule_name: "project:non_fake"}) self.assertRaises(exception.PolicyNotAuthorized, self.controller.delete, self.req, fakes.FAKE_UUID, '10') nova-17.0.1/nova/tests/unit/api/openstack/compute/test_hosts.py0000666000175000017500000004747013250073126024653 0ustar zuulzuul00000000000000# Copyright (c) 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock import testtools import webob.exc from nova.api.openstack.compute import hosts as os_hosts_v21 from nova.compute import power_state from nova.compute import vm_states from nova import context as context_maker from nova import db from nova import exception from nova import test from nova.tests import fixtures from nova.tests.unit.api.openstack import fakes from nova.tests.unit import fake_hosts from nova.tests import uuidsentinel def stub_service_get_all(context, disabled=None): return fake_hosts.SERVICES_LIST def stub_service_get_by_host_and_binary(context, host_name, binary): for service in stub_service_get_all(context): if service['host'] == host_name and service['binary'] == binary: return service def stub_set_host_enabled(context, host_name, enabled): """Simulates three possible behaviours for VM drivers or compute drivers when enabling or disabling a host. 
'enabled' means new instances can go to this host 'disabled' means they can't """ results = {True: "enabled", False: "disabled"} if host_name == "notimplemented": # The vm driver for this host doesn't support this feature raise NotImplementedError() elif host_name == "dummydest": # The host does not exist raise exception.ComputeHostNotFound(host=host_name) elif host_name == "service_not_available": # The service is not available raise exception.ComputeServiceUnavailable(host=host_name) elif host_name == "host_c2": # Simulate a failure return results[not enabled] else: # Do the right thing return results[enabled] def stub_set_host_maintenance(context, host_name, mode): # We'll simulate success and failure by assuming # that 'host_c1' always succeeds, and 'host_c2' # always fails results = {True: "on_maintenance", False: "off_maintenance"} if host_name == "notimplemented": # The vm driver for this host doesn't support this feature raise NotImplementedError() elif host_name == "dummydest": # The host does not exist raise exception.ComputeHostNotFound(host=host_name) elif host_name == "service_not_available": # The service is not available raise exception.ComputeServiceUnavailable(host=host_name) elif host_name == "host_c2": # Simulate a failure return results[not mode] else: # Do the right thing return results[mode] def stub_host_power_action(context, host_name, action): if host_name == "notimplemented": raise NotImplementedError() elif host_name == "dummydest": # The host does not exist raise exception.ComputeHostNotFound(host=host_name) elif host_name == "service_not_available": # The service is not available raise exception.ComputeServiceUnavailable(host=host_name) return action def _create_instance(**kwargs): """Create a test instance.""" ctxt = context_maker.get_admin_context() return db.instance_create(ctxt, _create_instance_dict(**kwargs)) def _create_instance_dict(**kwargs): """Create a dictionary for a test instance.""" inst = {} inst['image_ref'] = 'cedef40a-ed67-4d10-800e-17455edce175' inst['reservation_id'] = 'r-fakeres' inst['user_id'] = kwargs.get('user_id', 'admin') inst['project_id'] = kwargs.get('project_id', 'fake') inst['instance_type_id'] = '1' if 'host' in kwargs: inst['host'] = kwargs.get('host') inst['vcpus'] = kwargs.get('vcpus', 1) inst['memory_mb'] = kwargs.get('memory_mb', 20) inst['root_gb'] = kwargs.get('root_gb', 30) inst['ephemeral_gb'] = kwargs.get('ephemeral_gb', 30) inst['vm_state'] = kwargs.get('vm_state', vm_states.ACTIVE) inst['power_state'] = kwargs.get('power_state', power_state.RUNNING) inst['task_state'] = kwargs.get('task_state', None) inst['availability_zone'] = kwargs.get('availability_zone', None) inst['ami_launch_index'] = 0 inst['launched_on'] = kwargs.get('launched_on', 'dummy') return inst class HostTestCaseV21(test.TestCase): """Test Case for hosts.""" validation_ex = exception.ValidationError Controller = os_hosts_v21.HostController policy_ex = exception.PolicyNotAuthorized def _setup_stubs(self): # Pretend we have fake_hosts.HOST_LIST in the DB self.stub_out('nova.db.service_get_all', stub_service_get_all) # Only hosts in our fake DB exist self.stub_out('nova.db.service_get_by_host_and_binary', stub_service_get_by_host_and_binary) # 'host_c1' always succeeds, and 'host_c2' always fails self.stubs.Set(self.hosts_api, 'set_host_enabled', stub_set_host_enabled) # 'host_c1' always succeeds, and 'host_c2' always fails self.stubs.Set(self.hosts_api, 'set_host_maintenance', stub_set_host_maintenance) self.stubs.Set(self.hosts_api, 'host_power_action',
stub_host_power_action) def setUp(self): super(HostTestCaseV21, self).setUp() self.controller = self.Controller() self.hosts_api = self.controller.api self.req = fakes.HTTPRequest.blank('', use_admin_context=True) self.useFixture(fixtures.SingleCellSimple()) self._setup_stubs() def _test_host_update(self, host, key, val, expected_value): body = {key: val} result = self.controller.update(self.req, host, body=body) self.assertEqual(result[key], expected_value) def test_list_hosts(self): """Verify that the compute hosts are returned.""" result = self.controller.index(self.req) self.assertIn('hosts', result) hosts = result['hosts'] self.assertEqual(fake_hosts.HOST_LIST, hosts) def test_list_host_with_multi_filter(self): query_string = 'zone=nova1&zone=nova' req = fakes.HTTPRequest.blank('', use_admin_context=True, query_string=query_string) result = self.controller.index(req) self.assertIn('hosts', result) hosts = result['hosts'] self.assertEqual(fake_hosts.HOST_LIST_NOVA_ZONE, hosts) def test_list_host_query_allow_negative_int_as_string(self): req = fakes.HTTPRequest.blank('', use_admin_context=True, query_string='zone=-1') result = self.controller.index(req) self.assertIn('hosts', result) hosts = result['hosts'] self.assertEqual([], hosts) def test_list_host_query_allow_int_as_string(self): req = fakes.HTTPRequest.blank('', use_admin_context=True, query_string='zone=123') result = self.controller.index(req) self.assertIn('hosts', result) hosts = result['hosts'] self.assertEqual([], hosts) def test_list_host_with_unknown_filter(self): query_string = 'unknown_filter=abc' req = fakes.HTTPRequest.blank('', use_admin_context=True, query_string=query_string) result = self.controller.index(req) self.assertIn('hosts', result) hosts = result['hosts'] self.assertEqual(fake_hosts.HOST_LIST, hosts) def test_list_host_with_hypervisor_and_additional_filter(self): query_string = 'zone=nova&additional_filter=nova2' req = fakes.HTTPRequest.blank('', use_admin_context=True, query_string=query_string) result = self.controller.index(req) self.assertIn('hosts', result) hosts = result['hosts'] self.assertEqual(fake_hosts.HOST_LIST_NOVA_ZONE, hosts) def test_disable_host(self): self._test_host_update('host_c1', 'status', 'disable', 'disabled') self._test_host_update('host_c2', 'status', 'disable', 'enabled') def test_enable_host(self): self._test_host_update('host_c1', 'status', 'enable', 'enabled') self._test_host_update('host_c2', 'status', 'enable', 'disabled') def test_enable_maintenance(self): self._test_host_update('host_c1', 'maintenance_mode', 'enable', 'on_maintenance') def test_disable_maintenance(self): self._test_host_update('host_c1', 'maintenance_mode', 'disable', 'off_maintenance') def _test_host_update_notimpl(self, key, val): def stub_service_get_all_notimpl(self, req): return [{'host': 'notimplemented', 'topic': None, 'availability_zone': None}] self.stub_out('nova.db.service_get_all', stub_service_get_all_notimpl) body = {key: val} self.assertRaises(webob.exc.HTTPNotImplemented, self.controller.update, self.req, 'notimplemented', body=body) def test_disable_host_notimpl(self): self._test_host_update_notimpl('status', 'disable') def test_enable_maintenance_notimpl(self): self._test_host_update_notimpl('maintenance_mode', 'enable') def test_host_startup(self): result = self.controller.startup(self.req, "host_c1") self.assertEqual(result["power_action"], "startup") def test_host_shutdown(self): result = self.controller.shutdown(self.req, "host_c1") self.assertEqual(result["power_action"], 
"shutdown") def test_host_reboot(self): result = self.controller.reboot(self.req, "host_c1") self.assertEqual(result["power_action"], "reboot") def _test_host_power_action_notimpl(self, method): self.assertRaises(webob.exc.HTTPNotImplemented, method, self.req, "notimplemented") def test_host_startup_notimpl(self): self._test_host_power_action_notimpl(self.controller.startup) def test_host_shutdown_notimpl(self): self._test_host_power_action_notimpl(self.controller.shutdown) def test_host_reboot_notimpl(self): self._test_host_power_action_notimpl(self.controller.reboot) def test_host_status_bad_host(self): # A host given as an argument does not exist. self.req.environ["nova.context"].is_admin = True dest = 'dummydest' with testtools.ExpectedException(webob.exc.HTTPNotFound, ".*%s.*" % dest): self.controller.update(self.req, dest, body={'status': 'enable'}) def test_host_maintenance_bad_host(self): # A host given as an argument does not exist. self.req.environ["nova.context"].is_admin = True dest = 'dummydest' with testtools.ExpectedException(webob.exc.HTTPNotFound, ".*%s.*" % dest): self.controller.update(self.req, dest, body={'maintenance_mode': 'enable'}) def test_host_power_action_bad_host(self): # A host given as an argument does not exist. self.req.environ["nova.context"].is_admin = True dest = 'dummydest' with testtools.ExpectedException(webob.exc.HTTPNotFound, ".*%s.*" % dest): self.controller.reboot(self.req, dest) def test_host_status_bad_status(self): # A host given as an argument does not exist. self.req.environ["nova.context"].is_admin = True dest = 'service_not_available' with testtools.ExpectedException(webob.exc.HTTPBadRequest, ".*%s.*" % dest): self.controller.update(self.req, dest, body={'status': 'enable'}) def test_host_maintenance_bad_status(self): # A host given as an argument does not exist. self.req.environ["nova.context"].is_admin = True dest = 'service_not_available' with testtools.ExpectedException(webob.exc.HTTPBadRequest, ".*%s.*" % dest): self.controller.update(self.req, dest, body={'maintenance_mode': 'enable'}) def test_host_power_action_bad_status(self): # A host given as an argument does not exist. self.req.environ["nova.context"].is_admin = True dest = 'service_not_available' with testtools.ExpectedException(webob.exc.HTTPBadRequest, ".*%s.*" % dest): self.controller.reboot(self.req, dest) def test_bad_status_value(self): bad_body = {"status": "bad"} self.assertRaises(self.validation_ex, self.controller.update, self.req, "host_c1", body=bad_body) bad_body2 = {"status": "disablabc"} self.assertRaises(self.validation_ex, self.controller.update, self.req, "host_c1", body=bad_body2) def test_bad_update_key(self): bad_body = {"crazy": "bad"} self.assertRaises(self.validation_ex, self.controller.update, self.req, "host_c1", body=bad_body) def test_bad_update_key_and_correct_update_key(self): bad_body = {"status": "disable", "crazy": "bad"} self.assertRaises(self.validation_ex, self.controller.update, self.req, "host_c1", body=bad_body) def test_good_update_keys(self): body = {"status": "disable", "maintenance_mode": "enable"} result = self.controller.update(self.req, 'host_c1', body=body) self.assertEqual(result["host"], "host_c1") self.assertEqual(result["status"], "disabled") self.assertEqual(result["maintenance_mode"], "on_maintenance") def test_show_host_not_exist(self): # A host given as an argument does not exist. 
self.req.environ["nova.context"].is_admin = True dest = 'dummydest' with testtools.ExpectedException(webob.exc.HTTPNotFound, ".*%s.*" % dest): self.controller.show(self.req, dest) def _create_compute_service(self): """Create compute-manager(ComputeNode and Service record).""" ctxt = self.req.environ["nova.context"] dic = {'host': 'dummy', 'binary': 'nova-compute', 'topic': 'compute', 'report_count': 0} s_ref = db.service_create(ctxt, dic) dic = {'service_id': s_ref['id'], 'host': s_ref['host'], 'uuid': uuidsentinel.compute_node, 'vcpus': 16, 'memory_mb': 32, 'local_gb': 100, 'vcpus_used': 16, 'memory_mb_used': 32, 'local_gb_used': 10, 'hypervisor_type': 'qemu', 'hypervisor_version': 12003, 'cpu_info': '', 'stats': ''} db.compute_node_create(ctxt, dic) return db.service_get(ctxt, s_ref['id']) def test_show_no_project(self): """No instances are running on the given host.""" ctxt = context_maker.get_admin_context() s_ref = self._create_compute_service() result = self.controller.show(self.req, s_ref['host']) proj = ['(total)', '(used_now)', '(used_max)'] column = ['host', 'project', 'cpu', 'memory_mb', 'disk_gb'] self.assertEqual(len(result['host']), 3) for resource in result['host']: self.assertIn(resource['resource']['project'], proj) self.assertEqual(len(resource['resource']), 5) self.assertEqual(set(column), set(resource['resource'].keys())) db.service_destroy(ctxt, s_ref['id']) def test_show_works_correctly(self): """show() works correctly as expected.""" ctxt = context_maker.get_admin_context() s_ref = self._create_compute_service() i_ref1 = _create_instance(project_id='p-01', host=s_ref['host']) i_ref2 = _create_instance(project_id='p-02', vcpus=3, host=s_ref['host']) result = self.controller.show(self.req, s_ref['host']) proj = ['(total)', '(used_now)', '(used_max)', 'p-01', 'p-02'] column = ['host', 'project', 'cpu', 'memory_mb', 'disk_gb'] self.assertEqual(len(result['host']), 5) for resource in result['host']: self.assertIn(resource['resource']['project'], proj) self.assertEqual(len(resource['resource']), 5) self.assertEqual(set(column), set(resource['resource'].keys())) db.service_destroy(ctxt, s_ref['id']) db.instance_destroy(ctxt, i_ref1['uuid']) db.instance_destroy(ctxt, i_ref2['uuid']) def test_show_late_host_mapping_gone(self): s_ref = self._create_compute_service() with mock.patch.object(self.controller.api, 'instance_get_all_by_host') as m: m.side_effect = exception.HostMappingNotFound(name='something') self.assertRaises(webob.exc.HTTPNotFound, self.controller.show, self.req, s_ref['host']) def test_list_hosts_with_zone(self): query_string = 'zone=nova' req = fakes.HTTPRequest.blank('', use_admin_context=True, query_string=query_string) result = self.controller.index(req) self.assertIn('hosts', result) hosts = result['hosts'] self.assertEqual(fake_hosts.HOST_LIST_NOVA_ZONE, hosts) class HostsPolicyEnforcementV21(test.NoDBTestCase): def setUp(self): super(HostsPolicyEnforcementV21, self).setUp() self.controller = os_hosts_v21.HostController() self.req = fakes.HTTPRequest.blank('') def test_index_policy_failed(self): rule_name = "os_compute_api:os-hosts" self.policy.set_rules({rule_name: "project_id:non_fake"}) exc = self.assertRaises( exception.PolicyNotAuthorized, self.controller.index, self.req) self.assertEqual( "Policy doesn't allow %s to be performed." 
% rule_name, exc.format_message()) def test_show_policy_failed(self): rule_name = "os_compute_api:os-hosts" self.policy.set_rules({rule_name: "project_id:non_fake"}) exc = self.assertRaises( exception.PolicyNotAuthorized, self.controller.show, self.req, 1) self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) class HostControllerDeprecationTest(test.NoDBTestCase): def setUp(self): super(HostControllerDeprecationTest, self).setUp() self.controller = os_hosts_v21.HostController() self.req = fakes.HTTPRequest.blank('', version='2.43') def test_not_found_for_all_host_api(self): self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.show, self.req, fakes.FAKE_UUID) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.startup, self.req, fakes.FAKE_UUID) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.shutdown, self.req, fakes.FAKE_UUID) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.reboot, self.req, fakes.FAKE_UUID) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.index, self.req) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.update, self.req, fakes.FAKE_UUID, body={}) nova-17.0.1/nova/tests/unit/api/openstack/compute/test_snapshots.py0000666000175000017500000002302013250073126025516 0ustar zuulzuul00000000000000# Copyright 2011 Denali Systems, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
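# NOTE: these tests drive the os-snapshots controller with every cinder.API
# call stubbed out, so no real volume service is involved. A minimal sketch
# of the wiring done in setUp() below (the fakes.* stub names come from this
# test tree):
#
#     self.stub_out("nova.volume.cinder.API.get_snapshot",
#                   fakes.stub_snapshot_get)
#     self.stub_out("nova.volume.cinder.API.delete_snapshot",
#                   fakes.stub_snapshot_delete)
#
# These volume-proxy APIs are capped at microversion 2.35;
# TestSnapshotAPIDeprecation at the bottom asserts that from 2.36 on they
# raise VersionNotFoundForAPIMethod.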
import mock import webob from nova.api.openstack.compute import volumes as volumes_v21 from nova import exception from nova import test from nova.tests.unit.api.openstack import fakes from nova.volume import cinder FAKE_UUID = 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa' class SnapshotApiTestV21(test.NoDBTestCase): controller = volumes_v21.SnapshotController() validation_error = exception.ValidationError def setUp(self): super(SnapshotApiTestV21, self).setUp() fakes.stub_out_networking(self) self.stub_out("nova.volume.cinder.API.create_snapshot", fakes.stub_snapshot_create) self.stub_out("nova.volume.cinder.API.create_snapshot_force", fakes.stub_snapshot_create) self.stub_out("nova.volume.cinder.API.delete_snapshot", fakes.stub_snapshot_delete) self.stub_out("nova.volume.cinder.API.get_snapshot", fakes.stub_snapshot_get) self.stub_out("nova.volume.cinder.API.get_all_snapshots", fakes.stub_snapshot_get_all) self.stub_out("nova.volume.cinder.API.get", fakes.stub_volume_get) self.req = fakes.HTTPRequest.blank('') def _test_snapshot_create(self, force): snapshot = {"volume_id": '12', "force": force, "display_name": "Snapshot Test Name", "display_description": "Snapshot Test Desc"} body = dict(snapshot=snapshot) resp_dict = self.controller.create(self.req, body=body) self.assertIn('snapshot', resp_dict) self.assertEqual(snapshot['display_name'], resp_dict['snapshot']['displayName']) self.assertEqual(snapshot['display_description'], resp_dict['snapshot']['displayDescription']) self.assertEqual(snapshot['volume_id'], resp_dict['snapshot']['volumeId']) def test_snapshot_create(self): self._test_snapshot_create(False) def test_snapshot_create_force(self): self._test_snapshot_create(True) def test_snapshot_create_invalid_force_param(self): body = {'snapshot': {'volume_id': '1', 'force': '**&&^^%%$$##@@'}} self.assertRaises(self.validation_error, self.controller.create, self.req, body=body) def test_snapshot_delete(self): snapshot_id = '123' delete = self.controller.delete result = delete(self.req, snapshot_id) # NOTE: on v2.1, http status code is set as wsgi_code of API # method instead of status_int in a response object. 
if isinstance(self.controller, volumes_v21.SnapshotController): status_int = delete.wsgi_code else: status_int = result.status_int self.assertEqual(202, status_int) @mock.patch.object(cinder.API, 'delete_snapshot', side_effect=exception.SnapshotNotFound(snapshot_id=FAKE_UUID)) def test_delete_snapshot_not_exists(self, mock_mr): self.assertRaises(webob.exc.HTTPNotFound, self.controller.delete, self.req, FAKE_UUID) def test_snapshot_delete_invalid_id(self): self.assertRaises(webob.exc.HTTPNotFound, self.controller.delete, self.req, '-1') def test_snapshot_show(self): snapshot_id = '123' resp_dict = self.controller.show(self.req, snapshot_id) self.assertIn('snapshot', resp_dict) self.assertEqual(str(snapshot_id), resp_dict['snapshot']['id']) def test_snapshot_show_invalid_id(self): self.assertRaises(webob.exc.HTTPNotFound, self.controller.show, self.req, '-1') def test_snapshot_detail(self): resp_dict = self.controller.detail(self.req) self.assertIn('snapshots', resp_dict) resp_snapshots = resp_dict['snapshots'] self.assertEqual(3, len(resp_snapshots)) resp_snapshot = resp_snapshots.pop() self.assertEqual(102, resp_snapshot['id']) def test_snapshot_detail_offset_and_limit(self): path = '/v2/fake/os-snapshots/detail?offset=1&limit=1' req = fakes.HTTPRequest.blank(path) resp_dict = self.controller.detail(req) self.assertIn('snapshots', resp_dict) resp_snapshots = resp_dict['snapshots'] self.assertEqual(1, len(resp_snapshots)) resp_snapshot = resp_snapshots.pop() self.assertEqual(101, resp_snapshot['id']) def test_snapshot_index(self): resp_dict = self.controller.index(self.req) self.assertIn('snapshots', resp_dict) resp_snapshots = resp_dict['snapshots'] self.assertEqual(3, len(resp_snapshots)) def test_snapshot_index_offset_and_limit(self): path = '/v2/fake/os-snapshots?offset=1&limit=1' req = fakes.HTTPRequest.blank(path) resp_dict = self.controller.index(req) self.assertIn('snapshots', resp_dict) resp_snapshots = resp_dict['snapshots'] self.assertEqual(1, len(resp_snapshots)) def _test_list_with_invalid_filter(self, url): prefix = '/os-snapshots' req = fakes.HTTPRequest.blank(prefix + url) controller_list = self.controller.index if 'detail' in url: controller_list = self.controller.detail self.assertRaises(exception.ValidationError, controller_list, req) def test_list_with_invalid_non_int_limit(self): self._test_list_with_invalid_filter('?limit=-9') def test_list_with_invalid_string_limit(self): self._test_list_with_invalid_filter('?limit=abc') def test_list_duplicate_query_with_invalid_string_limit(self): self._test_list_with_invalid_filter( '?limit=1&limit=abc') def test_detail_list_with_invalid_non_int_limit(self): self._test_list_with_invalid_filter('/detail?limit=-9') def test_detail_list_with_invalid_string_limit(self): self._test_list_with_invalid_filter('/detail?limit=abc') def test_detail_list_duplicate_query_with_invalid_string_limit(self): self._test_list_with_invalid_filter( '/detail?limit=1&limit=abc') def test_list_with_invalid_non_int_offset(self): self._test_list_with_invalid_filter('?offset=-9') def test_list_with_invalid_string_offset(self): self._test_list_with_invalid_filter('?offset=abc') def test_list_duplicate_query_with_invalid_string_offset(self): self._test_list_with_invalid_filter( '?offset=1&offset=abc') def test_detail_list_with_invalid_non_int_offset(self): self._test_list_with_invalid_filter('/detail?offset=-9') def test_detail_list_with_invalid_string_offset(self): self._test_list_with_invalid_filter('/detail?offset=abc') def 
test_detail_list_duplicate_query_with_invalid_string_offset(self): self._test_list_with_invalid_filter( '/detail?offset=1&offset=abc') def _test_list_duplicate_query_parameters_validation(self, url): params = { 'limit': 1, 'offset': 1 } controller_list = self.controller.index if 'detail' in url: controller_list = self.controller.detail for param, value in params.items(): req = fakes.HTTPRequest.blank( url + '?%s=%s&%s=%s' % (param, value, param, value)) controller_list(req) def test_list_duplicate_query_parameters_validation(self): self._test_list_duplicate_query_parameters_validation('/os-snapshots') def test_detail_list_duplicate_query_parameters_validation(self): self._test_list_duplicate_query_parameters_validation( '/os-snapshots/detail') def test_list_with_additional_filter(self): req = fakes.HTTPRequest.blank( '/os-snapshots?limit=1&offset=1&additional=something') self.controller.index(req) def test_detail_list_with_additional_filter(self): req = fakes.HTTPRequest.blank( '/os-snapshots/detail?limit=1&offset=1&additional=something') self.controller.detail(req) class TestSnapshotAPIDeprecation(test.NoDBTestCase): def setUp(self): super(TestSnapshotAPIDeprecation, self).setUp() self.controller = volumes_v21.SnapshotController() self.req = fakes.HTTPRequest.blank('', version='2.36') def test_all_apis_return_not_found(self): self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.show, self.req, fakes.FAKE_UUID) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.delete, self.req, fakes.FAKE_UUID) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.index, self.req) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.create, self.req, {}) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.detail, self.req) nova-17.0.1/nova/tests/unit/api/openstack/compute/test_server_usage.py0000666000175000017500000001127213250073126026174 0ustar zuulzuul00000000000000# Copyright 2013 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
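# NOTE: the OS-SRV-USG extension decorates server responses with
# '<prefix>launched_at' and '<prefix>terminated_at' ISO 8601 timestamps.
# A minimal sketch of the check performed by assertServerUsage below,
# assuming `server` is a decoded server dict from the response:
#
#     launched = timeutils.parse_isotime(server['OS-SRV-USG:launched_at'])
#     self.assertEqual(DATE1, timeutils.normalize_time(launched))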
import datetime from oslo_serialization import jsonutils from oslo_utils import fixture as utils_fixture from oslo_utils import timeutils from nova import exception from nova import objects from nova.objects import instance as instance_obj from nova import test from nova.tests.unit.api.openstack import fakes from nova.tests.unit import fake_instance UUID1 = '00000000-0000-0000-0000-000000000001' UUID2 = '00000000-0000-0000-0000-000000000002' UUID3 = '00000000-0000-0000-0000-000000000003' DATE1 = datetime.datetime(year=2013, month=4, day=5, hour=12) DATE2 = datetime.datetime(year=2013, month=4, day=5, hour=13) DATE3 = datetime.datetime(year=2013, month=4, day=5, hour=14) def fake_compute_get(*args, **kwargs): inst = fakes.stub_instance(1, uuid=UUID3, launched_at=DATE1, terminated_at=DATE2) return fake_instance.fake_instance_obj(args[1], **inst) def fake_compute_get_all(*args, **kwargs): db_list = [ fakes.stub_instance(2, uuid=UUID1, launched_at=DATE2, terminated_at=DATE3), fakes.stub_instance(3, uuid=UUID2, launched_at=DATE1, terminated_at=DATE3), ] fields = instance_obj.INSTANCE_DEFAULT_FIELDS return instance_obj._make_instance_list(args[1], objects.InstanceList(), db_list, fields) class ServerUsageTestV21(test.TestCase): content_type = 'application/json' prefix = 'OS-SRV-USG:' _prefix = "/v2/fake" def setUp(self): super(ServerUsageTestV21, self).setUp() fakes.stub_out_nw_api(self) fakes.stub_out_secgroup_api(self) self.stub_out('nova.compute.api.API.get', fake_compute_get) self.stub_out('nova.compute.api.API.get_all', fake_compute_get_all) return_server = fakes.fake_instance_get() self.stub_out('nova.db.instance_get_by_uuid', return_server) def _make_request(self, url): req = fakes.HTTPRequest.blank(url) req.accept = self.content_type res = req.get_response(self._get_app()) return res def _get_app(self): return fakes.wsgi_app_v21() def _get_server(self, body): return jsonutils.loads(body).get('server') def _get_servers(self, body): return jsonutils.loads(body).get('servers') def assertServerUsage(self, server, launched_at, terminated_at): resp_launched_at = timeutils.parse_isotime( server.get('%slaunched_at' % self.prefix)) self.assertEqual(timeutils.normalize_time(resp_launched_at), launched_at) resp_terminated_at = timeutils.parse_isotime( server.get('%sterminated_at' % self.prefix)) self.assertEqual(timeutils.normalize_time(resp_terminated_at), terminated_at) def test_show(self): url = self._prefix + ('/servers/%s' % UUID3) res = self._make_request(url) self.assertEqual(res.status_int, 200) self.useFixture(utils_fixture.TimeFixture()) self.assertServerUsage(self._get_server(res.body), launched_at=DATE1, terminated_at=DATE2) def test_detail(self): url = self._prefix + '/servers/detail' res = self._make_request(url) self.assertEqual(res.status_int, 200) servers = self._get_servers(res.body) self.assertServerUsage(servers[0], launched_at=DATE2, terminated_at=DATE3) self.assertServerUsage(servers[1], launched_at=DATE1, terminated_at=DATE3) def test_no_instance_passthrough_404(self): def fake_compute_get(*args, **kwargs): raise exception.InstanceNotFound(instance_id='fake') self.stub_out('nova.compute.api.API.get', fake_compute_get) url = self._prefix + '/servers/70f6db34-de8d-4fbd-aafb-4065bdfa6115' res = self._make_request(url) self.assertEqual(res.status_int, 404) nova-17.0.1/nova/tests/unit/api/openstack/compute/test_security_groups.py0000666000175000017500000021445213250073126026755 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # Copyright 2012 Justin Santa Barbara 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from oslo_config import cfg from oslo_serialization import jsonutils from oslo_utils import encodeutils import webob from nova.api.openstack.compute import security_groups as \ secgroups_v21 from nova.api.openstack import wsgi from nova import compute from nova.compute import power_state from nova import context as context_maker import nova.db from nova import exception from nova import objects from nova import test from nova.tests.unit.api.openstack import fakes from nova.tests.unit import fake_instance from nova.tests import uuidsentinel as uuids CONF = cfg.CONF FAKE_UUID1 = 'a47ae74e-ab08-447f-8eee-ffd43fc46c16' FAKE_UUID2 = 'c6e6430a-6563-4efa-9542-5e93c9e97d18' UUID_SERVER = uuids.server class AttrDict(dict): def __getattr__(self, k): return self[k] def security_group_request_template(**kwargs): sg = kwargs.copy() sg.setdefault('name', 'test') sg.setdefault('description', 'test-description') return sg def security_group_template(**kwargs): sg = kwargs.copy() sg.setdefault('tenant_id', '123') sg.setdefault('name', 'test') sg.setdefault('description', 'test-description') return sg def security_group_db(security_group, id=None): attrs = security_group.copy() if 'tenant_id' in attrs: attrs['project_id'] = attrs.pop('tenant_id') if id is not None: attrs['id'] = id attrs.setdefault('rules', []) attrs.setdefault('instances', []) return AttrDict(attrs) def security_group_rule_template(**kwargs): rule = kwargs.copy() rule.setdefault('ip_protocol', 'tcp') rule.setdefault('from_port', 22) rule.setdefault('to_port', 22) rule.setdefault('parent_group_id', 2) return rule def security_group_rule_db(rule, id=None): attrs = rule.copy() if 'ip_protocol' in attrs: attrs['protocol'] = attrs.pop('ip_protocol') return AttrDict(attrs) def return_server(context, server_id, columns_to_join=None, use_slave=False): return fake_instance.fake_db_instance( **{'id': 1, 'power_state': 0x01, 'host': "localhost", 'uuid': server_id, 'name': 'asdf'}) def return_server_by_uuid(context, server_uuid, columns_to_join=None, use_slave=False): return fake_instance.fake_db_instance( **{'id': 1, 'power_state': 0x01, 'host': "localhost", 'uuid': server_uuid, 'name': 'asdf'}) def return_non_running_server(context, server_id, columns_to_join=None): return fake_instance.fake_db_instance( **{'id': 1, 'power_state': power_state.SHUTDOWN, 'uuid': server_id, 'host': "localhost", 'name': 'asdf'}) def return_security_group_by_name(context, project_id, group_name, columns_to_join=None): return {'id': 1, 'name': group_name, "instances": [{'id': 1, 'uuid': UUID_SERVER}]} def return_security_group_without_instances(context, project_id, group_name): return {'id': 1, 'name': group_name} def return_server_nonexistent(context, server_id, columns_to_join=None): raise exception.InstanceNotFound(instance_id=server_id) class TestSecurityGroupsV21(test.TestCase): secgrp_ctl_cls = secgroups_v21.SecurityGroupController server_secgrp_ctl_cls = secgroups_v21.ServerSecurityGroupController secgrp_act_ctl_cls = 
secgroups_v21.SecurityGroupActionController # This class is subclassed by Neutron security group API tests so we need # to be able to override this before creating the controller object. use_neutron = False def setUp(self): super(TestSecurityGroupsV21, self).setUp() # Neutron security groups are tested in test_neutron_security_groups.py self.flags(use_neutron=self.use_neutron) self.controller = self.secgrp_ctl_cls() self.server_controller = self.server_secgrp_ctl_cls() self.manager = self.secgrp_act_ctl_cls() # This needs to be done here to set fake_id because the derived # class needs to be called first if it wants to set # 'security_group_api' and this setUp method needs to be called. if self.controller.security_group_api.id_is_uuid: self.fake_id = '11111111-1111-1111-1111-111111111111' else: self.fake_id = '11111111' self.req = fakes.HTTPRequest.blank('') self.admin_req = fakes.HTTPRequest.blank('', use_admin_context=True) def _assert_security_groups_in_use(self, project_id, user_id, in_use): context = context_maker.get_admin_context() count = objects.Quotas.count_as_dict(context, 'security_groups', project_id, user_id) self.assertEqual(in_use, count['project']['security_groups']) self.assertEqual(in_use, count['user']['security_groups']) def test_create_security_group(self): sg = security_group_request_template() res_dict = self.controller.create(self.req, {'security_group': sg}) self.assertEqual(res_dict['security_group']['name'], 'test') self.assertEqual(res_dict['security_group']['description'], 'test-description') def test_create_security_group_with_no_name(self): sg = security_group_request_template() del sg['name'] self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, {'security_group': sg}) def test_create_security_group_with_no_description(self): sg = security_group_request_template() del sg['description'] self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, {'security_group': sg}) def test_create_security_group_with_empty_description(self): sg = security_group_request_template() sg['description'] = "" try: self.controller.create(self.req, {'security_group': sg}) self.fail('Should have raised BadRequest exception') except webob.exc.HTTPBadRequest as exc: self.assertEqual('description has a minimum character requirement' ' of 1.', exc.explanation) except exception.InvalidInput: self.fail('Should have raised BadRequest exception instead of InvalidInput') def test_create_security_group_with_blank_name(self): sg = security_group_request_template(name='') self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, {'security_group': sg}) def test_create_security_group_with_whitespace_name(self): sg = security_group_request_template(name=' ') self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, {'security_group': sg}) def test_create_security_group_with_blank_description(self): sg = security_group_request_template(description='') self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, {'security_group': sg}) def test_create_security_group_with_whitespace_description(self): sg = security_group_request_template(description=' ') self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, {'security_group': sg}) def test_create_security_group_with_duplicate_name(self): sg = security_group_request_template() # FIXME: Stub out _get instead of creating twice self.controller.create(self.req, {'security_group': sg}) self.assertRaises(webob.exc.HTTPBadRequest,
self.controller.create, self.req, {'security_group': sg}) def test_create_security_group_with_no_body(self): self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, None) def test_create_security_group_with_no_security_group(self): body = {'no-securityGroup': None} self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, body) def test_create_security_group_above_255_characters_name(self): sg = security_group_request_template(name='1234567890' * 26) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, {'security_group': sg}) def test_create_security_group_above_255_characters_description(self): sg = security_group_request_template(description='1234567890' * 26) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, {'security_group': sg}) def test_create_security_group_non_string_name(self): sg = security_group_request_template(name=12) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, {'security_group': sg}) def test_create_security_group_non_string_description(self): sg = security_group_request_template(description=12) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, {'security_group': sg}) def test_create_security_group_quota_limit(self): for num in range(1, CONF.quota.security_groups): name = 'test%s' % num sg = security_group_request_template(name=name) res_dict = self.controller.create(self.req, {'security_group': sg}) self.assertEqual(res_dict['security_group']['name'], name) sg = security_group_request_template() self.assertRaises(webob.exc.HTTPForbidden, self.controller.create, self.req, {'security_group': sg}) @mock.patch('nova.objects.quotas.Quotas.check_deltas') def test_create_security_group_over_quota_during_recheck(self, check_mock): # Simulate a race where the first check passes and the recheck fails. check_mock.side_effect = [None, exception.OverQuota(overs='security_groups')] ctxt = self.req.environ['nova.context'] sg = security_group_request_template() self.assertRaises(webob.exc.HTTPForbidden, self.controller.create, self.req, {'security_group': sg}) self.assertEqual(2, check_mock.call_count) call1 = mock.call(ctxt, {'security_groups': 1}, ctxt.project_id, user_id=ctxt.user_id) call2 = mock.call(ctxt, {'security_groups': 0}, ctxt.project_id, user_id=ctxt.user_id) check_mock.assert_has_calls([call1, call2]) # Verify we removed the security group that was added after the first # quota check passed. self.assertRaises(exception.SecurityGroupNotFound, objects.SecurityGroup.get_by_name, ctxt, ctxt.project_id, sg['name']) @mock.patch('nova.objects.quotas.Quotas.check_deltas') def test_create_security_group_no_quota_recheck(self, check_mock): # Disable recheck_quota. self.flags(recheck_quota=False, group='quota') ctxt = self.req.environ['nova.context'] sg = security_group_request_template() self.controller.create(self.req, {'security_group': sg}) # check_deltas should have been called only once. 
check_mock.assert_called_once_with(ctxt, {'security_groups': 1}, ctxt.project_id, user_id=ctxt.user_id) def test_get_security_group_list(self): self._test_get_security_group_list() def test_get_security_group_list_offset_and_limit(self): self._test_get_security_group_list(limited=True) def _test_get_security_group_list(self, limited=False): groups = [] for i, name in enumerate(['default', 'test']): sg = security_group_template(id=i + 1, name=name, description=name + '-desc', rules=[]) groups.append(sg) if limited: expected = {'security_groups': [groups[1]]} else: expected = {'security_groups': groups} def return_security_groups(context, project_id): return [security_group_db(sg) for sg in groups] self.stub_out('nova.db.security_group_get_by_project', return_security_groups) path = '/v2/fake/os-security-groups' if limited: path += '?offset=1&limit=1' req = fakes.HTTPRequest.blank(path, use_admin_context=True) res_dict = self.controller.index(req) self.assertEqual(res_dict, expected) def test_get_security_group_list_missing_group_id_rule(self): groups = [] rule1 = security_group_rule_template(cidr='10.2.3.124/24', parent_group_id=1, group_id={}, id=88, protocol='TCP') rule2 = security_group_rule_template(cidr='10.2.3.125/24', parent_group_id=1, id=99, protocol=88, group_id='HAS_BEEN_DELETED') sg = security_group_template(id=1, name='test', description='test-desc', rules=[rule1, rule2]) groups.append(sg) # An expected rule here needs to be created as the api returns # different attributes on the rule for a response than what was # passed in. For example: # "cidr": "0.0.0.0/0" ->"ip_range": {"cidr": "0.0.0.0/0"} expected_rule = security_group_rule_template( ip_range={'cidr': '10.2.3.124/24'}, parent_group_id=1, group={}, id=88, ip_protocol='TCP') expected = security_group_template(id=1, name='test', description='test-desc', rules=[expected_rule]) expected = {'security_groups': [expected]} def return_security_groups(context, project, search_opts): return [security_group_db(sg) for sg in groups] self.stubs.Set(self.controller.security_group_api, 'list', return_security_groups) res_dict = self.controller.index(self.req) self.assertEqual(res_dict, expected) def test_get_security_group_list_all_tenants(self): all_groups = [] tenant_groups = [] for i, name in enumerate(['default', 'test']): sg = security_group_template(id=i + 1, name=name, description=name + '-desc', rules=[]) all_groups.append(sg) if name == 'default': tenant_groups.append(sg) all = {'security_groups': all_groups} tenant_specific = {'security_groups': tenant_groups} def return_all_security_groups(context): return [security_group_db(sg) for sg in all_groups] self.stub_out('nova.db.security_group_get_all', return_all_security_groups) def return_tenant_security_groups(context, project_id): return [security_group_db(sg) for sg in tenant_groups] self.stub_out('nova.db.security_group_get_by_project', return_tenant_security_groups) path = '/v2/fake/os-security-groups' req = fakes.HTTPRequest.blank(path, use_admin_context=True) res_dict = self.controller.index(req) self.assertEqual(res_dict, tenant_specific) req = fakes.HTTPRequest.blank('%s?all_tenants=1' % path, use_admin_context=True) res_dict = self.controller.index(req) self.assertEqual(res_dict, all) def test_get_security_group_by_instance(self): groups = [] for i, name in enumerate(['default', 'test']): # Create two rules per group to test that we don't perform # redundant group lookups. For the default group, the rule group_id # is the group itself. 
For the test group, the rule group_id points # to a non-existent group. group_id = i + 1 if name == 'default' else 'HAS_BEEN_DELETED' rule1 = security_group_rule_template( cidr='10.2.3.125/24', parent_group_id=1, id=99, protocol='TCP', group_id=group_id) rule2 = security_group_rule_template( cidr='10.2.3.126/24', parent_group_id=1, id=77, protocol='UDP', group_id=group_id) sg = security_group_template( id=i + 1, name=name, description=name + '-desc', rules=[rule1, rule2], tenant_id='fake') groups.append(sg) # An expected rule here needs to be created as the api returns # different attributes on the rule for a response than what was # passed in. expected_rule1 = security_group_rule_template( ip_range={}, parent_group_id=1, ip_protocol='TCP', group={'name': 'default', 'tenant_id': 'fake'}, id=99) expected_rule2 = security_group_rule_template( ip_range={}, parent_group_id=1, ip_protocol='UDP', group={'name': 'default', 'tenant_id': 'fake'}, id=77) expected_group1 = security_group_template( id=1, name='default', description='default-desc', rules=[expected_rule1, expected_rule2], tenant_id='fake') expected_group2 = security_group_template( id=2, name='test', description='test-desc', rules=[], tenant_id='fake') expected = {'security_groups': [expected_group1, expected_group2]} def return_instance(context, server_id, columns_to_join=None, use_slave=False): self.assertEqual(server_id, FAKE_UUID1) return return_server_by_uuid(context, server_id) self.stub_out('nova.db.instance_get_by_uuid', return_instance) def return_security_groups(context, instance_uuid): self.assertEqual(instance_uuid, FAKE_UUID1) return [security_group_db(sg) for sg in groups] self.stub_out('nova.db.security_group_get_by_instance', return_security_groups) # Stub out the security group API get() method to assert that we only # call it at most once per group ID. original_sg_get = self.server_controller.security_group_api.get queried_group_ids = [] def fake_security_group_api_get(_self, context, name=None, id=None, map_exception=False): if id in queried_group_ids: self.fail('Queried security group %s more than once.' 
% id) queried_group_ids.append(id) return original_sg_get(context, id=id) self.stub_out('nova.compute.api.SecurityGroupAPI.get', fake_security_group_api_get) res_dict = self.server_controller.index(self.req, FAKE_UUID1) self.assertEqual(expected, res_dict) @mock.patch('nova.db.instance_get_by_uuid') @mock.patch('nova.db.security_group_get_by_instance', return_value=[]) def test_get_security_group_empty_for_instance(self, mock_sec_group, mock_db_get_ins): expected = {'security_groups': []} def return_instance(context, server_id, columns_to_join=None, use_slave=False): self.assertEqual(server_id, FAKE_UUID1) return return_server_by_uuid(context, server_id) mock_db_get_ins.side_effect = return_instance res_dict = self.server_controller.index(self.req, FAKE_UUID1) self.assertEqual(expected, res_dict) mock_sec_group.assert_called_once_with( self.req.environ['nova.context'], FAKE_UUID1) def test_get_security_group_by_instance_non_existing(self): self.stub_out('nova.db.instance_get', return_server_nonexistent) self.stub_out('nova.db.instance_get_by_uuid', return_server_nonexistent) self.assertRaises(webob.exc.HTTPNotFound, self.server_controller.index, self.req, '1') def test_get_security_group_by_instance_invalid_id(self): self.assertRaises(webob.exc.HTTPNotFound, self.server_controller.index, self.req, 'invalid') def test_get_security_group_by_id(self): sg = security_group_template(id=2, rules=[]) def return_security_group(context, group_id, columns_to_join=None): self.assertEqual(sg['id'], group_id) return security_group_db(sg) self.stub_out('nova.db.security_group_get', return_security_group) res_dict = self.controller.show(self.req, '2') expected = {'security_group': sg} self.assertEqual(res_dict, expected) def test_get_security_group_by_invalid_id(self): self.assertRaises(webob.exc.HTTPBadRequest, self.controller.delete, self.req, 'invalid') def test_get_security_group_by_non_existing_id(self): self.assertRaises(webob.exc.HTTPNotFound, self.controller.delete, self.req, self.fake_id) def test_update_security_group(self): sg = security_group_template(id=2, rules=[]) sg_update = security_group_template(id=2, rules=[], name='update_name', description='update_desc') def return_security_group(context, group_id, columns_to_join=None): self.assertEqual(sg['id'], group_id) return security_group_db(sg) def return_update_security_group(context, group_id, values, columns_to_join=None): self.assertEqual(sg_update['id'], group_id) self.assertEqual(sg_update['name'], values['name']) self.assertEqual(sg_update['description'], values['description']) return security_group_db(sg_update) self.stub_out('nova.db.security_group_update', return_update_security_group) self.stub_out('nova.db.security_group_get', return_security_group) res_dict = self.controller.update(self.req, '2', {'security_group': sg_update}) expected = {'security_group': sg_update} self.assertEqual(res_dict, expected) def test_update_security_group_name_to_default(self): sg = security_group_template(id=2, rules=[], name='default') def return_security_group(context, group_id, columns_to_join=None): self.assertEqual(sg['id'], group_id) return security_group_db(sg) self.stub_out('nova.db.security_group_get', return_security_group) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update, self.req, '2', {'security_group': sg}) def test_update_default_security_group_fail(self): sg = security_group_template() self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update, self.req, '1', {'security_group': sg}) def 
test_delete_security_group_by_id(self): sg = security_group_template(id=1, project_id='fake_project', user_id='fake_user', rules=[]) self.called = False def security_group_destroy(context, id): self.called = True def return_security_group(context, group_id, columns_to_join=None): self.assertEqual(sg['id'], group_id) return security_group_db(sg) self.stub_out('nova.db.security_group_destroy', security_group_destroy) self.stub_out('nova.db.security_group_get', return_security_group) self.controller.delete(self.req, '1') self.assertTrue(self.called) def test_delete_security_group_by_admin(self): sg = security_group_request_template() self.controller.create(self.req, {'security_group': sg}) context = self.req.environ['nova.context'] # Ensure quota usage for security group is correct. self._assert_security_groups_in_use(context.project_id, context.user_id, 2) # Delete the security group by admin. self.controller.delete(self.admin_req, '2') # Ensure quota for security group in use is released. self._assert_security_groups_in_use(context.project_id, context.user_id, 1) def test_delete_security_group_by_invalid_id(self): self.assertRaises(webob.exc.HTTPBadRequest, self.controller.delete, self.req, 'invalid') def test_delete_security_group_by_non_existing_id(self): self.assertRaises(webob.exc.HTTPNotFound, self.controller.delete, self.req, self.fake_id) def test_delete_security_group_in_use(self): sg = security_group_template(id=1, rules=[]) def security_group_in_use(context, id): return True def return_security_group(context, group_id, columns_to_join=None): self.assertEqual(sg['id'], group_id) return security_group_db(sg) self.stub_out('nova.db.security_group_in_use', security_group_in_use) self.stub_out('nova.db.security_group_get', return_security_group) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.delete, self.req, '1') def _test_list_with_invalid_filter( self, url, expected_exception=exception.ValidationError): prefix = '/os-security-groups' req = fakes.HTTPRequest.blank(prefix + url) self.assertRaises(expected_exception, self.controller.index, req) def test_list_with_invalid_non_int_limit(self): self._test_list_with_invalid_filter('?limit=-9') def test_list_with_invalid_string_limit(self): self._test_list_with_invalid_filter('?limit=abc') def test_list_duplicate_query_with_invalid_string_limit(self): self._test_list_with_invalid_filter( '?limit=1&limit=abc') def test_list_with_invalid_non_int_offset(self): self._test_list_with_invalid_filter('?offset=-9') def test_list_with_invalid_string_offset(self): self._test_list_with_invalid_filter('?offset=abc') def test_list_duplicate_query_with_invalid_string_offset(self): self._test_list_with_invalid_filter( '?offset=1&offset=abc') def test_list_duplicate_query_parameters_validation(self): params = { 'limit': 1, 'offset': 1, 'all_tenants': 1 } for param, value in params.items(): req = fakes.HTTPRequest.blank( '/os-security-groups' + '?%s=%s&%s=%s' % (param, value, param, value)) self.controller.index(req) def test_list_with_additional_filter(self): req = fakes.HTTPRequest.blank( '/os-security-groups?limit=1&offset=1&additional=something') self.controller.index(req) def test_list_all_tenants_filter_as_string(self): req = fakes.HTTPRequest.blank( '/os-security-groups?all_tenants=abc') self.controller.index(req) def test_list_all_tenants_filter_as_positive_int(self): req = fakes.HTTPRequest.blank( '/os-security-groups?all_tenants=1') self.controller.index(req) def test_list_all_tenants_filter_as_negative_int(self): req = 
fakes.HTTPRequest.blank( '/os-security-groups?all_tenants=-1') self.controller.index(req) def test_associate_by_non_existing_security_group_name(self): self.stub_out('nova.db.instance_get', return_server) self.assertEqual(return_server(None, '1'), nova.db.instance_get(None, '1')) body = dict(addSecurityGroup=dict(name='non-existing')) self.assertRaises(webob.exc.HTTPNotFound, self.manager._addSecurityGroup, self.req, '1', body) def test_associate_by_invalid_server_id(self): body = dict(addSecurityGroup=dict(name='test')) self.assertRaises(webob.exc.HTTPNotFound, self.manager._addSecurityGroup, self.req, 'invalid', body) def test_associate_without_body(self): self.stub_out('nova.db.instance_get', return_server) body = dict(addSecurityGroup=None) self.assertRaises(webob.exc.HTTPBadRequest, self.manager._addSecurityGroup, self.req, '1', body) def test_associate_no_security_group_name(self): self.stub_out('nova.db.instance_get', return_server) body = dict(addSecurityGroup=dict()) self.assertRaises(webob.exc.HTTPBadRequest, self.manager._addSecurityGroup, self.req, '1', body) def test_associate_security_group_name_with_whitespaces(self): self.stub_out('nova.db.instance_get', return_server) body = dict(addSecurityGroup=dict(name=" ")) self.assertRaises(webob.exc.HTTPBadRequest, self.manager._addSecurityGroup, self.req, '1', body) def test_associate_non_existing_instance(self): self.stub_out('nova.db.instance_get', return_server_nonexistent) self.stub_out('nova.db.instance_get_by_uuid', return_server_nonexistent) body = dict(addSecurityGroup=dict(name="test")) self.assertRaises(webob.exc.HTTPNotFound, self.manager._addSecurityGroup, self.req, '1', body) def test_associate_non_running_instance(self): self.stub_out('nova.db.instance_get', return_non_running_server) self.stub_out('nova.db.instance_get_by_uuid', return_non_running_server) self.stub_out('nova.db.security_group_get_by_name', return_security_group_without_instances) body = dict(addSecurityGroup=dict(name="test")) self.manager._addSecurityGroup(self.req, UUID_SERVER, body) def test_associate_already_associated_security_group_to_instance(self): self.stub_out('nova.db.instance_get', return_server) self.stub_out('nova.db.instance_get_by_uuid', return_server_by_uuid) self.stub_out('nova.db.security_group_get_by_name', return_security_group_by_name) body = dict(addSecurityGroup=dict(name="test")) self.assertRaises(webob.exc.HTTPBadRequest, self.manager._addSecurityGroup, self.req, UUID_SERVER, body) @mock.patch.object(nova.db, 'instance_add_security_group') def test_associate(self, mock_add_security_group): self.stub_out('nova.db.instance_get', return_server) self.stub_out('nova.db.instance_get_by_uuid', return_server_by_uuid) self.stub_out('nova.db.security_group_get_by_name', return_security_group_without_instances) body = dict(addSecurityGroup=dict(name="test")) self.manager._addSecurityGroup(self.req, UUID_SERVER, body) mock_add_security_group.assert_called_once_with(mock.ANY, mock.ANY, mock.ANY) def test_disassociate_by_non_existing_security_group_name(self): self.stub_out('nova.db.instance_get', return_server) self.assertEqual(return_server(None, '1'), nova.db.instance_get(None, '1')) body = dict(removeSecurityGroup=dict(name='non-existing')) self.assertRaises(webob.exc.HTTPNotFound, self.manager._removeSecurityGroup, self.req, UUID_SERVER, body) def test_disassociate_by_invalid_server_id(self): self.stub_out('nova.db.security_group_get_by_name', return_security_group_by_name) body = dict(removeSecurityGroup=dict(name='test')) 
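        # A non-UUID server id in the URL should surface as HTTPNotFound
        # from the instance lookup.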
        self.assertRaises(webob.exc.HTTPNotFound,
                          self.manager._removeSecurityGroup, self.req,
                          'invalid', body)

    def test_disassociate_without_body(self):
        self.stub_out('nova.db.instance_get', return_server)
        body = dict(removeSecurityGroup=None)
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.manager._removeSecurityGroup, self.req,
                          '1', body)

    def test_disassociate_no_security_group_name(self):
        self.stub_out('nova.db.instance_get', return_server)
        body = dict(removeSecurityGroup=dict())
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.manager._removeSecurityGroup, self.req,
                          '1', body)

    def test_disassociate_security_group_name_with_whitespaces(self):
        self.stub_out('nova.db.instance_get', return_server)
        body = dict(removeSecurityGroup=dict(name=" "))
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.manager._removeSecurityGroup, self.req,
                          '1', body)

    def test_disassociate_non_existing_instance(self):
        self.stub_out('nova.db.instance_get', return_server_nonexistent)
        self.stub_out('nova.db.security_group_get_by_name',
                      return_security_group_by_name)
        body = dict(removeSecurityGroup=dict(name="test"))
        self.assertRaises(webob.exc.HTTPNotFound,
                          self.manager._removeSecurityGroup, self.req,
                          '1', body)

    def test_disassociate_non_running_instance(self):
        self.stub_out('nova.db.instance_get', return_non_running_server)
        self.stub_out('nova.db.instance_get_by_uuid',
                      return_non_running_server)
        self.stub_out('nova.db.security_group_get_by_name',
                      return_security_group_by_name)
        body = dict(removeSecurityGroup=dict(name="test"))
        self.manager._removeSecurityGroup(self.req, UUID_SERVER, body)

    def test_disassociate_not_associated_security_group_to_instance(self):
        self.stub_out('nova.db.instance_get', return_server)
        self.stub_out('nova.db.instance_get_by_uuid', return_server_by_uuid)
        self.stub_out('nova.db.security_group_get_by_name',
                      return_security_group_without_instances)
        body = dict(removeSecurityGroup=dict(name="test"))
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.manager._removeSecurityGroup, self.req,
                          UUID_SERVER, body)

    @mock.patch.object(nova.db, 'instance_remove_security_group')
    def test_disassociate(self, mock_remove_sec_group):
        self.stub_out('nova.db.instance_get', return_server)
        self.stub_out('nova.db.instance_get_by_uuid', return_server_by_uuid)
        self.stub_out('nova.db.security_group_get_by_name',
                      return_security_group_by_name)
        body = dict(removeSecurityGroup=dict(name="test"))
        self.manager._removeSecurityGroup(self.req, UUID_SERVER, body)
        mock_remove_sec_group.assert_called_once_with(mock.ANY, mock.ANY,
                                                      mock.ANY)


class TestSecurityGroupRulesV21(test.TestCase):
    secgrp_ctl_cls = secgroups_v21.SecurityGroupRulesController
    # This class is subclassed by Neutron security group API tests so we need
    # to be able to override this before creating the controller object.
use_neutron = False def setUp(self): super(TestSecurityGroupRulesV21, self).setUp() # Neutron security groups are tested in test_neutron_security_groups.py self.flags(use_neutron=self.use_neutron) self.controller = self.secgrp_ctl_cls() if self.controller.security_group_api.id_is_uuid: id1 = '11111111-1111-1111-1111-111111111111' id2 = '22222222-2222-2222-2222-222222222222' self.invalid_id = '33333333-3333-3333-3333-333333333333' else: id1 = 1 id2 = 2 self.invalid_id = '33333333' self.sg1 = security_group_template(id=id1) self.sg2 = security_group_template( id=id2, name='authorize_revoke', description='authorize-revoke testing') db1 = security_group_db(self.sg1) db2 = security_group_db(self.sg2) def return_security_group(context, group_id, columns_to_join=None): if group_id == db1['id']: return db1 if group_id == db2['id']: return db2 raise exception.SecurityGroupNotFound(security_group_id=group_id) self.stub_out('nova.db.security_group_get', return_security_group) self.parent_security_group = db2 self.req = fakes.HTTPRequest.blank('') def test_create_by_cidr(self): rule = security_group_rule_template(cidr='10.2.3.124/24', parent_group_id=self.sg2['id']) res_dict = self.controller.create(self.req, {'security_group_rule': rule}) security_group_rule = res_dict['security_group_rule'] self.assertNotEqual(security_group_rule['id'], 0) self.assertEqual(security_group_rule['parent_group_id'], self.sg2['id']) self.assertEqual(security_group_rule['ip_range']['cidr'], "10.2.3.124/24") def test_create_by_group_id(self): rule = security_group_rule_template(group_id=self.sg1['id'], parent_group_id=self.sg2['id']) res_dict = self.controller.create(self.req, {'security_group_rule': rule}) security_group_rule = res_dict['security_group_rule'] self.assertNotEqual(security_group_rule['id'], 0) self.assertEqual(security_group_rule['parent_group_id'], self.sg2['id']) def test_create_by_same_group_id(self): rule1 = security_group_rule_template(group_id=self.sg1['id'], from_port=80, to_port=80, parent_group_id=self.sg2['id']) self.parent_security_group['rules'] = [security_group_rule_db(rule1)] rule2 = security_group_rule_template(group_id=self.sg1['id'], from_port=81, to_port=81, parent_group_id=self.sg2['id']) res_dict = self.controller.create(self.req, {'security_group_rule': rule2}) security_group_rule = res_dict['security_group_rule'] self.assertNotEqual(security_group_rule['id'], 0) self.assertEqual(security_group_rule['parent_group_id'], self.sg2['id']) self.assertEqual(security_group_rule['from_port'], 81) self.assertEqual(security_group_rule['to_port'], 81) def test_create_none_value_from_to_port(self): rule = {'parent_group_id': self.sg1['id'], 'group_id': self.sg1['id']} res_dict = self.controller.create(self.req, {'security_group_rule': rule}) security_group_rule = res_dict['security_group_rule'] self.assertIsNone(security_group_rule['from_port']) self.assertIsNone(security_group_rule['to_port']) self.assertEqual(security_group_rule['group']['name'], 'test') self.assertEqual(security_group_rule['parent_group_id'], self.sg1['id']) def test_create_none_value_from_to_port_icmp(self): rule = {'parent_group_id': self.sg1['id'], 'group_id': self.sg1['id'], 'ip_protocol': 'ICMP'} res_dict = self.controller.create(self.req, {'security_group_rule': rule}) security_group_rule = res_dict['security_group_rule'] self.assertEqual(security_group_rule['ip_protocol'], 'ICMP') self.assertEqual(security_group_rule['from_port'], -1) self.assertEqual(security_group_rule['to_port'], -1) 
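        # -1/-1 follows the EC2-style convention for "all ICMP types and
        # codes" when no ports are supplied.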
self.assertEqual(security_group_rule['group']['name'], 'test') self.assertEqual(security_group_rule['parent_group_id'], self.sg1['id']) def test_create_none_value_from_to_port_tcp(self): rule = {'parent_group_id': self.sg1['id'], 'group_id': self.sg1['id'], 'ip_protocol': 'TCP'} res_dict = self.controller.create(self.req, {'security_group_rule': rule}) security_group_rule = res_dict['security_group_rule'] self.assertEqual(security_group_rule['ip_protocol'], 'TCP') self.assertEqual(security_group_rule['from_port'], 1) self.assertEqual(security_group_rule['to_port'], 65535) self.assertEqual(security_group_rule['group']['name'], 'test') self.assertEqual(security_group_rule['parent_group_id'], self.sg1['id']) def test_create_by_invalid_cidr_json(self): rule = security_group_rule_template( ip_protocol="tcp", from_port=22, to_port=22, parent_group_id=self.sg2['id'], cidr="10.2.3.124/2433") self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, {'security_group_rule': rule}) def test_create_by_invalid_tcp_port_json(self): rule = security_group_rule_template( ip_protocol="tcp", from_port=75534, to_port=22, parent_group_id=self.sg2['id'], cidr="10.2.3.124/24") self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, {'security_group_rule': rule}) def test_create_by_invalid_icmp_port_json(self): rule = security_group_rule_template( ip_protocol="icmp", from_port=1, to_port=256, parent_group_id=self.sg2['id'], cidr="10.2.3.124/24") self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, {'security_group_rule': rule}) def test_create_add_existing_rules_by_cidr(self): rule = security_group_rule_template(cidr='10.0.0.0/24', parent_group_id=self.sg2['id']) self.parent_security_group['rules'] = [security_group_rule_db(rule)] self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, {'security_group_rule': rule}) def test_create_add_existing_rules_by_group_id(self): rule = security_group_rule_template(group_id=1) self.parent_security_group['rules'] = [security_group_rule_db(rule)] self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, {'security_group_rule': rule}) def test_create_with_no_body(self): self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, None) def test_create_with_no_security_group_rule_in_body(self): rules = {'test': 'test'} self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, rules) def test_create_with_invalid_parent_group_id(self): rule = security_group_rule_template(parent_group_id='invalid') self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, {'security_group_rule': rule}) def test_create_with_non_existing_parent_group_id(self): rule = security_group_rule_template(group_id=None, parent_group_id=self.invalid_id) self.assertRaises(webob.exc.HTTPNotFound, self.controller.create, self.req, {'security_group_rule': rule}) def test_create_with_non_existing_group_id(self): rule = security_group_rule_template(group_id='invalid', parent_group_id=self.sg2['id']) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, {'security_group_rule': rule}) def test_create_with_invalid_protocol(self): rule = security_group_rule_template(ip_protocol='invalid-protocol', cidr='10.2.2.0/24', parent_group_id=self.sg2['id']) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, {'security_group_rule': rule}) def test_create_with_no_protocol(self): rule = 
security_group_rule_template(cidr='10.2.2.0/24', parent_group_id=self.sg2['id']) del rule['ip_protocol'] self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, {'security_group_rule': rule}) def test_create_with_invalid_from_port(self): rule = security_group_rule_template(from_port='666666', cidr='10.2.2.0/24', parent_group_id=self.sg2['id']) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, {'security_group_rule': rule}) def test_create_with_invalid_to_port(self): rule = security_group_rule_template(to_port='666666', cidr='10.2.2.0/24', parent_group_id=self.sg2['id']) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, {'security_group_rule': rule}) def test_create_with_non_numerical_from_port(self): rule = security_group_rule_template(from_port='invalid', cidr='10.2.2.0/24', parent_group_id=self.sg2['id']) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, {'security_group_rule': rule}) def test_create_with_non_numerical_to_port(self): rule = security_group_rule_template(to_port='invalid', cidr='10.2.2.0/24', parent_group_id=self.sg2['id']) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, {'security_group_rule': rule}) def test_create_with_no_from_port(self): rule = security_group_rule_template(cidr='10.2.2.0/24', parent_group_id=self.sg2['id']) del rule['from_port'] self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, {'security_group_rule': rule}) def test_create_with_no_to_port(self): rule = security_group_rule_template(cidr='10.2.2.0/24', parent_group_id=self.sg2['id']) del rule['to_port'] self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, {'security_group_rule': rule}) def test_create_with_invalid_cidr(self): rule = security_group_rule_template(cidr='10.2.2222.0/24', parent_group_id=self.sg2['id']) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, {'security_group_rule': rule}) def test_create_with_no_cidr_group(self): rule = security_group_rule_template(parent_group_id=self.sg2['id']) res_dict = self.controller.create(self.req, {'security_group_rule': rule}) security_group_rule = res_dict['security_group_rule'] self.assertNotEqual(security_group_rule['id'], 0) self.assertEqual(security_group_rule['parent_group_id'], self.parent_security_group['id']) self.assertEqual(security_group_rule['ip_range']['cidr'], "0.0.0.0/0") def test_create_with_invalid_group_id(self): rule = security_group_rule_template(group_id='invalid', parent_group_id=self.sg2['id']) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, {'security_group_rule': rule}) def test_create_with_empty_group_id(self): rule = security_group_rule_template(group_id='', parent_group_id=self.sg2['id']) self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, {'security_group_rule': rule}) def test_create_with_nonexist_group_id(self): rule = security_group_rule_template(group_id=self.invalid_id, parent_group_id=self.sg2['id']) self.assertRaises(webob.exc.HTTPNotFound, self.controller.create, self.req, {'security_group_rule': rule}) def test_create_with_same_group_parent_id_and_group_id(self): rule = security_group_rule_template(group_id=self.sg1['id'], parent_group_id=self.sg1['id']) res_dict = self.controller.create(self.req, {'security_group_rule': rule}) security_group_rule = res_dict['security_group_rule'] self.assertNotEqual(security_group_rule['id'], 0) 
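        # A security group may legitimately name itself as the rule's source
        # group; the rule is created rather than rejected.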
self.assertEqual(security_group_rule['parent_group_id'], self.sg1['id']) self.assertEqual(security_group_rule['group']['name'], self.sg1['name']) def _test_create_with_no_ports_and_no_group(self, proto): rule = {'ip_protocol': proto, 'parent_group_id': self.sg2['id']} self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, {'security_group_rule': rule}) def _test_create_with_no_ports(self, proto): rule = {'ip_protocol': proto, 'parent_group_id': self.sg2['id'], 'group_id': self.sg1['id']} res_dict = self.controller.create(self.req, {'security_group_rule': rule}) security_group_rule = res_dict['security_group_rule'] expected_rule = { 'from_port': 1, 'group': {'tenant_id': '123', 'name': 'test'}, 'ip_protocol': proto, 'to_port': 65535, 'parent_group_id': self.sg2['id'], 'ip_range': {}, 'id': security_group_rule['id'] } if proto == 'icmp': expected_rule['to_port'] = -1 expected_rule['from_port'] = -1 self.assertEqual(expected_rule, security_group_rule) def test_create_with_no_ports_icmp(self): self._test_create_with_no_ports_and_no_group('icmp') self._test_create_with_no_ports('icmp') def test_create_with_no_ports_tcp(self): self._test_create_with_no_ports_and_no_group('tcp') self._test_create_with_no_ports('tcp') def test_create_with_no_ports_udp(self): self._test_create_with_no_ports_and_no_group('udp') self._test_create_with_no_ports('udp') def _test_create_with_ports(self, proto, from_port, to_port): rule = { 'ip_protocol': proto, 'from_port': from_port, 'to_port': to_port, 'parent_group_id': self.sg2['id'], 'group_id': self.sg1['id'] } res_dict = self.controller.create(self.req, {'security_group_rule': rule}) security_group_rule = res_dict['security_group_rule'] expected_rule = { 'from_port': from_port, 'group': {'tenant_id': '123', 'name': 'test'}, 'ip_protocol': proto, 'to_port': to_port, 'parent_group_id': self.sg2['id'], 'ip_range': {}, 'id': security_group_rule['id'] } self.assertEqual(proto, security_group_rule['ip_protocol']) self.assertEqual(from_port, security_group_rule['from_port']) self.assertEqual(to_port, security_group_rule['to_port']) self.assertEqual(expected_rule, security_group_rule) def test_create_with_ports_icmp(self): self._test_create_with_ports('icmp', 0, 1) self._test_create_with_ports('icmp', 0, 0) self._test_create_with_ports('icmp', 1, 0) def test_create_with_ports_tcp(self): self._test_create_with_ports('tcp', 1, 1) self._test_create_with_ports('tcp', 1, 65535) self._test_create_with_ports('tcp', 65535, 65535) def test_create_with_ports_udp(self): self._test_create_with_ports('udp', 1, 1) self._test_create_with_ports('udp', 1, 65535) self._test_create_with_ports('udp', 65535, 65535) def test_delete(self): rule = security_group_rule_template(id=self.sg2['id'], parent_group_id=self.sg2['id']) def security_group_rule_get(context, id): return security_group_rule_db(rule) def security_group_rule_destroy(context, id): pass self.stub_out('nova.db.security_group_rule_get', security_group_rule_get) self.stub_out('nova.db.security_group_rule_destroy', security_group_rule_destroy) self.controller.delete(self.req, self.sg2['id']) def test_delete_invalid_rule_id(self): self.assertRaises(webob.exc.HTTPBadRequest, self.controller.delete, self.req, 'invalid') def test_delete_non_existing_rule_id(self): self.assertRaises(webob.exc.HTTPNotFound, self.controller.delete, self.req, self.invalid_id) def test_create_rule_quota_limit(self): for num in range(100, 100 + CONF.quota.security_group_rules): rule = { 'ip_protocol': 'tcp', 'from_port': num, 'to_port': 
num, 'parent_group_id': self.sg2['id'], 'group_id': self.sg1['id'] } self.controller.create(self.req, {'security_group_rule': rule}) rule = { 'ip_protocol': 'tcp', 'from_port': '121', 'to_port': '121', 'parent_group_id': self.sg2['id'], 'group_id': self.sg1['id'] } self.assertRaises(webob.exc.HTTPForbidden, self.controller.create, self.req, {'security_group_rule': rule}) @mock.patch('nova.objects.Quotas.check_deltas') def test_create_rule_over_quota_during_recheck(self, mock_check): # Simulate a race where the first check passes and the recheck fails. # First check occurs in compute/api. exc = exception.OverQuota(overs='security_group_rules', usages={'security_group_rules': 100}) mock_check.side_effect = [None, exc] rule = { 'ip_protocol': 'tcp', 'from_port': '121', 'to_port': '121', 'parent_group_id': self.sg2['id'], 'group_id': self.sg1['id'] } self.assertRaises(webob.exc.HTTPForbidden, self.controller.create, self.req, {'security_group_rule': rule}) ctxt = self.req.environ['nova.context'] self.assertEqual(2, mock_check.call_count) # parent_group_id is used for adding the rules. call1 = mock.call(ctxt, {'security_group_rules': 1}, self.sg2['id']) call2 = mock.call(ctxt, {'security_group_rules': 0}, self.sg2['id']) mock_check.assert_has_calls([call1, call2]) # Verify we removed the rule that was added after the first quota check # passed. rules = objects.SecurityGroupRuleList.get_by_security_group_id( ctxt, self.sg1['id']) self.assertEqual(0, len(rules)) @mock.patch('nova.objects.Quotas.check_deltas') def test_create_rule_no_quota_recheck(self, mock_check): # Disable recheck_quota. self.flags(recheck_quota=False, group='quota') rule = { 'ip_protocol': 'tcp', 'from_port': '121', 'to_port': '121', 'parent_group_id': self.sg2['id'], 'group_id': self.sg1['id'] } self.controller.create(self.req, {'security_group_rule': rule}) ctxt = self.req.environ['nova.context'] # check_deltas should have been called only once. # parent_group_id is used for adding the rules. 
mock_check.assert_called_once_with(ctxt, {'security_group_rules': 1}, self.sg2['id']) def test_create_rule_cidr_allow_all(self): rule = security_group_rule_template(cidr='0.0.0.0/0', parent_group_id=self.sg2['id']) res_dict = self.controller.create(self.req, {'security_group_rule': rule}) security_group_rule = res_dict['security_group_rule'] self.assertNotEqual(security_group_rule['id'], 0) self.assertEqual(security_group_rule['parent_group_id'], self.parent_security_group['id']) self.assertEqual(security_group_rule['ip_range']['cidr'], "0.0.0.0/0") def test_create_rule_cidr_ipv6_allow_all(self): rule = security_group_rule_template(cidr='::/0', parent_group_id=self.sg2['id']) res_dict = self.controller.create(self.req, {'security_group_rule': rule}) security_group_rule = res_dict['security_group_rule'] self.assertNotEqual(security_group_rule['id'], 0) self.assertEqual(security_group_rule['parent_group_id'], self.parent_security_group['id']) self.assertEqual(security_group_rule['ip_range']['cidr'], "::/0") def test_create_rule_cidr_allow_some(self): rule = security_group_rule_template(cidr='15.0.0.0/8', parent_group_id=self.sg2['id']) res_dict = self.controller.create(self.req, {'security_group_rule': rule}) security_group_rule = res_dict['security_group_rule'] self.assertNotEqual(security_group_rule['id'], 0) self.assertEqual(security_group_rule['parent_group_id'], self.parent_security_group['id']) self.assertEqual(security_group_rule['ip_range']['cidr'], "15.0.0.0/8") def test_create_rule_cidr_bad_netmask(self): rule = security_group_rule_template(cidr='15.0.0.0/0') self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, {'security_group_rule': rule}) UUID1 = '00000000-0000-0000-0000-000000000001' UUID2 = '00000000-0000-0000-0000-000000000002' UUID3 = '00000000-0000-0000-0000-000000000003' def fake_compute_get_all(*args, **kwargs): base = {'id': 1, 'description': 'foo', 'user_id': 'bar', 'project_id': 'baz', 'deleted': False, 'deleted_at': None, 'updated_at': None, 'created_at': None} inst_list = [ fakes.stub_instance_obj( None, 1, uuid=UUID1, security_groups=[dict(base, **{'name': 'fake-0-0'}), dict(base, **{'name': 'fake-0-1'})]), fakes.stub_instance_obj( None, 2, uuid=UUID2, security_groups=[dict(base, **{'name': 'fake-1-0'}), dict(base, **{'name': 'fake-1-1'})]) ] return objects.InstanceList(objects=inst_list) def fake_compute_get(*args, **kwargs): secgroups = objects.SecurityGroupList() secgroups.objects = [ objects.SecurityGroup(name='fake-2-0'), objects.SecurityGroup(name='fake-2-1'), ] inst = fakes.stub_instance_obj(None, 1, uuid=UUID3) inst.security_groups = secgroups return inst def fake_compute_create(*args, **kwargs): return ([fake_compute_get(*args, **kwargs)], '') def fake_get_instances_security_groups_bindings(inst, context, servers): groups = {UUID1: [{'name': 'fake-0-0'}, {'name': 'fake-0-1'}], UUID2: [{'name': 'fake-1-0'}, {'name': 'fake-1-1'}], UUID3: [{'name': 'fake-2-0'}, {'name': 'fake-2-1'}]} result = {} for server in servers: result[server['id']] = groups.get(server['id']) return result class SecurityGroupsOutputTestV21(test.TestCase): base_url = '/v2/fake/servers' content_type = 'application/json' def setUp(self): super(SecurityGroupsOutputTestV21, self).setUp() # Neutron security groups are tested in test_neutron_security_groups.py self.flags(use_neutron=False) fakes.stub_out_nw_api(self) self.stubs.Set(compute.api.API, 'get', fake_compute_get) self.stubs.Set(compute.api.API, 'get_all', fake_compute_get_all) self.stubs.Set(compute.api.API, 
'create', fake_compute_create) self.app = self._setup_app() def _setup_app(self): return fakes.wsgi_app_v21() def _make_request(self, url, body=None): req = fakes.HTTPRequest.blank(url) if body: req.method = 'POST' req.body = encodeutils.safe_encode(self._encode_body(body)) req.content_type = self.content_type req.headers['Accept'] = self.content_type res = req.get_response(self.app) return res def _encode_body(self, body): return jsonutils.dumps(body) def _get_server(self, body): return jsonutils.loads(body).get('server') def _get_servers(self, body): return jsonutils.loads(body).get('servers') def _get_groups(self, server): return server.get('security_groups') def test_create(self): image_uuid = 'c905cedb-7281-47e4-8a62-f26bc5fc4c77' server = dict(name='server_test', imageRef=image_uuid, flavorRef=2) res = self._make_request(self.base_url, {'server': server}) self.assertEqual(res.status_int, 202) server = self._get_server(res.body) for i, group in enumerate(self._get_groups(server)): name = 'fake-2-%s' % i self.assertEqual(group.get('name'), name) def test_show(self): url = self.base_url + '/' + UUID3 res = self._make_request(url) self.assertEqual(res.status_int, 200) server = self._get_server(res.body) for i, group in enumerate(self._get_groups(server)): name = 'fake-2-%s' % i self.assertEqual(group.get('name'), name) def test_detail(self): url = self.base_url + '/detail' res = self._make_request(url) self.assertEqual(res.status_int, 200) for i, server in enumerate(self._get_servers(res.body)): for j, group in enumerate(self._get_groups(server)): name = 'fake-%s-%s' % (i, j) self.assertEqual(group.get('name'), name) def test_no_instance_passthrough_404(self): def fake_compute_get(*args, **kwargs): raise exception.InstanceNotFound(instance_id='fake') self.stubs.Set(compute.api.API, 'get', fake_compute_get) url = self.base_url + '/70f6db34-de8d-4fbd-aafb-4065bdfa6115' res = self._make_request(url) self.assertEqual(res.status_int, 404) class SecurityGroupsOutputPolicyEnforcementV21(test.NoDBTestCase): def setUp(self): super(SecurityGroupsOutputPolicyEnforcementV21, self).setUp() self.controller = secgroups_v21.SecurityGroupsOutputController() self.req = fakes.HTTPRequest.blank('') self.rule_name = "os_compute_api:os-security-groups" self.rule = {self.rule_name: "project:non_fake"} self.policy.set_rules(self.rule) self.fake_res = wsgi.ResponseObject({ 'server': {'id': '0'}, 'servers': [{'id': '0'}, {'id': '2'}]}) @mock.patch('nova.policy.authorize') def test_show_policy_softauth_is_called(self, mock_authorize): mock_authorize.return_value = False self.controller.show(self.req, self.fake_res, FAKE_UUID1) self.assertTrue(mock_authorize.called) @mock.patch.object(nova.network.security_group.openstack_driver, "is_neutron_security_groups") def test_show_policy_failed(self, is_neutron_security_groups): self.controller.show(self.req, self.fake_res, FAKE_UUID1) self.assertFalse(is_neutron_security_groups.called) @mock.patch('nova.policy.authorize') def test_create_policy_softauth_is_called(self, mock_authorize): mock_authorize.return_value = False self.controller.show(self.req, self.fake_res, {}) self.assertTrue(mock_authorize.called) @mock.patch.object(nova.network.security_group.openstack_driver, "is_neutron_security_groups") def test_create_policy_failed(self, is_neutron_security_groups): self.controller.create(self.req, self.fake_res, {}) self.assertFalse(is_neutron_security_groups.called) @mock.patch('nova.policy.authorize') def test_detail_policy_softauth_is_called(self, mock_authorize): 
mock_authorize.return_value = False self.controller.detail(self.req, self.fake_res) self.assertTrue(mock_authorize.called) @mock.patch.object(nova.network.security_group.openstack_driver, "is_neutron_security_groups") def test_detail_policy_failed(self, is_neutron_security_groups): self.controller.detail(self.req, self.fake_res) self.assertFalse(is_neutron_security_groups.called) class PolicyEnforcementV21(test.NoDBTestCase): def setUp(self): super(PolicyEnforcementV21, self).setUp() self.req = fakes.HTTPRequest.blank('') self.rule_name = "os_compute_api:os-security-groups" self.rule = {self.rule_name: "project:non_fake"} def _common_policy_check(self, func, *arg, **kwarg): self.policy.set_rules(self.rule) exc = self.assertRaises( exception.PolicyNotAuthorized, func, *arg, **kwarg) self.assertEqual( "Policy doesn't allow %s to be performed." % self.rule_name, exc.format_message()) class SecurityGroupPolicyEnforcementV21(PolicyEnforcementV21): def setUp(self): super(SecurityGroupPolicyEnforcementV21, self).setUp() self.controller = secgroups_v21.SecurityGroupController() def test_create_policy_failed(self): self._common_policy_check(self.controller.create, self.req, {}) def test_show_policy_failed(self): self._common_policy_check(self.controller.show, self.req, FAKE_UUID1) def test_delete_policy_failed(self): self._common_policy_check(self.controller.delete, self.req, FAKE_UUID1) def test_index_policy_failed(self): self._common_policy_check(self.controller.index, self.req) def test_update_policy_failed(self): self._common_policy_check( self.controller.update, self.req, FAKE_UUID1, {}) class ServerSecurityGroupPolicyEnforcementV21(PolicyEnforcementV21): def setUp(self): super(ServerSecurityGroupPolicyEnforcementV21, self).setUp() self.controller = secgroups_v21.ServerSecurityGroupController() def test_index_policy_failed(self): self._common_policy_check(self.controller.index, self.req, FAKE_UUID1) class SecurityGroupRulesPolicyEnforcementV21(PolicyEnforcementV21): def setUp(self): super(SecurityGroupRulesPolicyEnforcementV21, self).setUp() self.controller = secgroups_v21.SecurityGroupRulesController() def test_create_policy_failed(self): self._common_policy_check(self.controller.create, self.req, {}) def test_delete_policy_failed(self): self._common_policy_check(self.controller.delete, self.req, FAKE_UUID1) class SecurityGroupActionPolicyEnforcementV21(PolicyEnforcementV21): def setUp(self): super(SecurityGroupActionPolicyEnforcementV21, self).setUp() self.controller = secgroups_v21.SecurityGroupActionController() def test_add_security_group_policy_failed(self): self._common_policy_check( self.controller._addSecurityGroup, self.req, FAKE_UUID1, {}) def test_remove_security_group_policy_failed(self): self._common_policy_check( self.controller._removeSecurityGroup, self.req, FAKE_UUID1, {}) class TestSecurityGroupsDeprecation(test.NoDBTestCase): def setUp(self): super(TestSecurityGroupsDeprecation, self).setUp() self.controller = secgroups_v21.SecurityGroupController() self.req = fakes.HTTPRequest.blank('', version='2.36') def test_all_apis_return_not_found(self): self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.show, self.req, fakes.FAKE_UUID) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.delete, self.req, fakes.FAKE_UUID) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.index, self.req) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.update, self.req, fakes.FAKE_UUID, {}) 
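        # create is gated by the same 2.36 deprecation and must 404 as well.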
self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.create, self.req, {}) class TestSecurityGroupRulesDeprecation(test.NoDBTestCase): def setUp(self): super(TestSecurityGroupRulesDeprecation, self).setUp() self.controller = secgroups_v21.SecurityGroupRulesController() self.req = fakes.HTTPRequest.blank('', version='2.36') def test_all_apis_return_not_found(self): self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.create, self.req, {}) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.delete, self.req, fakes.FAKE_UUID) nova-17.0.1/nova/tests/unit/api/openstack/compute/test_user_data.py0000666000175000017500000001450413250073126025452 0ustar zuulzuul00000000000000# Copyright 2012 OpenStack Foundation # Copyright 2013 IBM Corp. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import datetime import uuid import mock from oslo_config import cfg from oslo_serialization import base64 from oslo_serialization import jsonutils from nova.api.openstack.compute import servers from nova.api.openstack.compute import user_data from nova.compute import flavors from nova import exception from nova.network import manager from nova import test from nova.tests.unit.api.openstack import fakes from nova.tests.unit import fake_instance from nova.tests.unit.image import fake CONF = cfg.CONF FAKE_UUID = fakes.FAKE_UUID def fake_gen_uuid(): return FAKE_UUID def return_security_group(context, instance_id, security_group_id): pass class ServersControllerCreateTest(test.TestCase): def setUp(self): """Shared implementation for tests below that create instance.""" super(ServersControllerCreateTest, self).setUp() self.flags(enable_instance_password=True, group='api') self.instance_cache_num = 0 self.instance_cache_by_id = {} self.instance_cache_by_uuid = {} # Network API needs to be stubbed out before creating the controllers. 
fakes.stub_out_nw_api(self) self.controller = servers.ServersController() def instance_create(context, inst): inst_type = flavors.get_flavor_by_flavor_id(3) image_uuid = '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6' def_image_ref = 'http://localhost/images/%s' % image_uuid self.instance_cache_num += 1 instance = fake_instance.fake_db_instance(**{ 'id': self.instance_cache_num, 'display_name': inst['display_name'] or 'test', 'uuid': FAKE_UUID, 'instance_type': inst_type, 'access_ip_v4': '1.2.3.4', 'access_ip_v6': 'fead::1234', 'image_ref': inst.get('image_ref', def_image_ref), 'user_id': 'fake', 'project_id': 'fake', 'reservation_id': inst['reservation_id'], "created_at": datetime.datetime(2010, 10, 10, 12, 0, 0), "updated_at": datetime.datetime(2010, 11, 11, 11, 0, 0), user_data.ATTRIBUTE_NAME: None, "progress": 0, "fixed_ips": [], "task_state": "", "vm_state": "", "root_device_name": inst.get('root_device_name', 'vda'), }) self.instance_cache_by_id[instance['id']] = instance self.instance_cache_by_uuid[instance['uuid']] = instance return instance def instance_get(context, instance_id): """Stub for compute/api create() pulling in instance after scheduling """ return self.instance_cache_by_id[instance_id] def instance_update(context, uuid, values): instance = self.instance_cache_by_uuid[uuid] instance.update(values) return instance def server_update(context, instance_uuid, params): inst = self.instance_cache_by_uuid[instance_uuid] inst.update(params) return (inst, inst) def fake_method(*args, **kwargs): pass def project_get_networks(context, user_id): return dict(id='1', host='localhost') fakes.stub_out_key_pair_funcs(self) fake.stub_out_image_service(self) self.stubs.Set(uuid, 'uuid4', fake_gen_uuid) self.stub_out('nova.db.instance_add_security_group', return_security_group) self.stub_out('nova.db.project_get_networks', project_get_networks) self.stub_out('nova.db.instance_create', instance_create) self.stub_out('nova.db.instance_system_metadata_update', fake_method) self.stub_out('nova.db.instance_get', instance_get) self.stub_out('nova.db.instance_update', instance_update) self.stub_out('nova.db.instance_update_and_get_original', server_update) self.stubs.Set(manager.VlanManager, 'allocate_fixed_ip', fake_method) def _test_create_extra(self, params, no_image=False, legacy_v2=False): image_uuid = 'c905cedb-7281-47e4-8a62-f26bc5fc4c77' server = dict(name='server_test', imageRef=image_uuid, flavorRef=2) if no_image: server.pop('imageRef', None) server.update(params) body = dict(server=server) req = fakes.HTTPRequestV21.blank('/servers') req.method = 'POST' req.body = jsonutils.dump_as_bytes(body) req.headers["content-type"] = "application/json" if legacy_v2: req.set_legacy_v2() server = self.controller.create(req, body=body).obj['server'] return server def test_create_instance_with_user_data(self): value = base64.encode_as_text("A random string") params = {user_data.ATTRIBUTE_NAME: value} server = self._test_create_extra(params) self.assertEqual(FAKE_UUID, server['id']) def test_create_instance_with_bad_user_data(self): value = "A random string" params = {user_data.ATTRIBUTE_NAME: value} self.assertRaises(exception.ValidationError, self._test_create_extra, params) @mock.patch('nova.compute.api.API.create') def test_create_instance_with_none_allowd_for_v20_compat_mode(self, mock_create): def create(context, *args, **kwargs): self.assertIsNone(kwargs['user_data']) return ([fakes.stub_instance_obj(context)], None) mock_create.side_effect = create params = {user_data.ATTRIBUTE_NAME: None} 
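        # In legacy v2 compatibility mode a null user_data is accepted and
        # handed to compute.api.API.create() as None, which the stubbed
        # create above asserts.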
self._test_create_extra(params, legacy_v2=True) nova-17.0.1/nova/tests/unit/api/openstack/compute/test_server_metadata.py0000666000175000017500000007626013250073126026660 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from oslo_config import cfg from oslo_serialization import jsonutils from oslo_utils import timeutils import six import webob from nova.api.openstack.compute import server_metadata \ as server_metadata_v21 from nova.compute import rpcapi as compute_rpcapi from nova.compute import vm_states import nova.db from nova import exception from nova import objects from nova import test from nova.tests.unit.api.openstack import fakes from nova.tests.unit import fake_instance from nova.tests import uuidsentinel as uuids CONF = cfg.CONF def return_create_instance_metadata_max(context, server_id, metadata, delete): return stub_max_server_metadata() def return_create_instance_metadata(context, server_id, metadata, delete): return stub_server_metadata() def fake_instance_save(inst, **kwargs): inst.metadata = stub_server_metadata() inst.obj_reset_changes() def return_server_metadata(context, server_id): if not isinstance(server_id, six.string_types) or not len(server_id) == 36: msg = 'id %s must be a uuid in return server metadata' % server_id raise Exception(msg) return stub_server_metadata() def return_empty_server_metadata(context, server_id): return {} def delete_server_metadata(context, server_id, key): pass def stub_server_metadata(): metadata = { "key1": "value1", "key2": "value2", "key3": "value3", } return metadata def stub_max_server_metadata(): metadata = {"metadata": {}} for num in range(CONF.quota.metadata_items): metadata['metadata']['key%i' % num] = "blah" return metadata def return_server(context, server_id, columns_to_join=None): return fake_instance.fake_db_instance( **{'id': server_id, 'uuid': '0cc3346e-9fef-4445-abe6-5d2b2690ec64', 'name': 'fake', 'locked': False, 'launched_at': timeutils.utcnow(), 'vm_state': vm_states.ACTIVE}) def return_server_by_uuid(context, server_uuid, columns_to_join=None, use_slave=False): return fake_instance.fake_db_instance( **{'id': 1, 'uuid': '0cc3346e-9fef-4445-abe6-5d2b2690ec64', 'name': 'fake', 'locked': False, 'launched_at': timeutils.utcnow(), 'metadata': stub_server_metadata(), 'vm_state': vm_states.ACTIVE}) def return_server_nonexistent(context, server_id, columns_to_join=None, use_slave=False): raise exception.InstanceNotFound(instance_id=server_id) def fake_change_instance_metadata(self, context, instance, diff): pass class ServerMetaDataTestV21(test.TestCase): validation_ex = exception.ValidationError validation_ex_large = validation_ex def setUp(self): super(ServerMetaDataTestV21, self).setUp() fakes.stub_out_key_pair_funcs(self) self.stub_out('nova.db.instance_get', return_server) self.stub_out('nova.db.instance_get_by_uuid', return_server_by_uuid) self.stub_out('nova.db.instance_metadata_get', return_server_metadata) 
self.stubs.Set(compute_rpcapi.ComputeAPI, 'change_instance_metadata', fake_change_instance_metadata) self._set_up_resources() def _set_up_resources(self): self.controller = server_metadata_v21.ServerMetadataController() self.uuid = uuids.fake self.url = '/fake/servers/%s/metadata' % self.uuid def _get_request(self, param_url=''): return fakes.HTTPRequestV21.blank(self.url + param_url) def test_index(self): req = self._get_request() res_dict = self.controller.index(req, self.uuid) expected = { 'metadata': { 'key1': 'value1', 'key2': 'value2', 'key3': 'value3', }, } self.assertEqual(expected, res_dict) def test_index_nonexistent_server(self): self.stub_out('nova.db.instance_metadata_get', return_server_nonexistent) req = self._get_request() self.assertRaises(webob.exc.HTTPNotFound, self.controller.index, req, self.url) def test_index_no_data(self): self.stub_out('nova.db.instance_metadata_get', return_empty_server_metadata) req = self._get_request() res_dict = self.controller.index(req, self.uuid) expected = {'metadata': {}} self.assertEqual(expected, res_dict) def test_show(self): req = self._get_request('/key2') res_dict = self.controller.show(req, self.uuid, 'key2') expected = {"meta": {'key2': 'value2'}} self.assertEqual(expected, res_dict) def test_show_nonexistent_server(self): self.stub_out('nova.db.instance_metadata_get', return_server_nonexistent) req = self._get_request('/key2') self.assertRaises(webob.exc.HTTPNotFound, self.controller.show, req, self.uuid, 'key2') def test_show_meta_not_found(self): self.stub_out('nova.db.instance_metadata_get', return_empty_server_metadata) req = self._get_request('/key6') self.assertRaises(webob.exc.HTTPNotFound, self.controller.show, req, self.uuid, 'key6') def test_delete(self): self.stub_out('nova.db.instance_metadata_get', return_server_metadata) self.stub_out('nova.db.instance_metadata_delete', delete_server_metadata) req = self._get_request('/key2') req.method = 'DELETE' res = self.controller.delete(req, self.uuid, 'key2') self.assertIsNone(res) def test_delete_nonexistent_server(self): self.stub_out('nova.db.instance_get_by_uuid', return_server_nonexistent) req = self._get_request('/key1') req.method = 'DELETE' self.assertRaises(webob.exc.HTTPNotFound, self.controller.delete, req, self.uuid, 'key1') def test_delete_meta_not_found(self): self.stub_out('nova.db.instance_metadata_get', return_empty_server_metadata) req = self._get_request('/key6') req.method = 'DELETE' self.assertRaises(webob.exc.HTTPNotFound, self.controller.delete, req, self.uuid, 'key6') def test_create(self): self.stubs.Set(objects.Instance, 'save', fake_instance_save) req = self._get_request() req.method = 'POST' req.content_type = "application/json" body = {"metadata": {"key9": "value9"}} req.body = jsonutils.dump_as_bytes(body) res_dict = self.controller.create(req, self.uuid, body=body) body['metadata'].update({ "key1": "value1", "key2": "value2", "key3": "value3", }) self.assertEqual(body, res_dict) def test_create_empty_body(self): self.stub_out('nova.db.instance_metadata_update', return_create_instance_metadata) req = self._get_request() req.method = 'POST' req.headers["content-type"] = "application/json" self.assertRaises(self.validation_ex, self.controller.create, req, self.uuid, body=None) def test_create_item_empty_key(self): self.stub_out('nova.db.instance_metadata_update', return_create_instance_metadata) req = self._get_request('/key1') req.method = 'PUT' body = {"metadata": {"": "value1"}} req.body = jsonutils.dump_as_bytes(body) 
req.headers["content-type"] = "application/json" self.assertRaises(self.validation_ex, self.controller.create, req, self.uuid, body=body) def test_create_item_non_dict(self): self.stub_out('nova.db.instance_metadata_update', return_create_instance_metadata) req = self._get_request('/key1') req.method = 'PUT' body = {"metadata": None} req.body = jsonutils.dump_as_bytes(body) req.headers["content-type"] = "application/json" self.assertRaises(self.validation_ex, self.controller.create, req, self.uuid, body=body) def test_create_item_key_too_long(self): self.stub_out('nova.db.instance_metadata_update', return_create_instance_metadata) req = self._get_request('/key1') req.method = 'PUT' body = {"metadata": {("a" * 260): "value1"}} req.body = jsonutils.dump_as_bytes(body) req.headers["content-type"] = "application/json" self.assertRaises(self.validation_ex_large, self.controller.create, req, self.uuid, body=body) def test_create_malformed_container(self): self.stub_out('nova.db.instance_metadata_update', return_create_instance_metadata) req = fakes.HTTPRequest.blank(self.url + '/key1') req.method = 'PUT' body = {"meta": {}} req.body = jsonutils.dump_as_bytes(body) req.headers["content-type"] = "application/json" self.assertRaises(self.validation_ex, self.controller.create, req, self.uuid, body=body) def test_create_malformed_data(self): self.stub_out('nova.db.instance_metadata_update', return_create_instance_metadata) req = fakes.HTTPRequest.blank(self.url + '/key1') req.method = 'PUT' body = {"metadata": ['asdf']} req.body = jsonutils.dump_as_bytes(body) req.headers["content-type"] = "application/json" self.assertRaises(self.validation_ex, self.controller.create, req, self.uuid, body=body) def test_create_nonexistent_server(self): self.stub_out('nova.db.instance_get_by_uuid', return_server_nonexistent) req = self._get_request() req.method = 'POST' body = {"metadata": {"key1": "value1"}} req.body = jsonutils.dump_as_bytes(body) req.headers["content-type"] = "application/json" self.assertRaises(webob.exc.HTTPNotFound, self.controller.create, req, self.uuid, body=body) def test_update_metadata(self): self.stubs.Set(objects.Instance, 'save', fake_instance_save) req = self._get_request() req.method = 'POST' req.content_type = 'application/json' expected = { 'metadata': { 'key1': 'updatedvalue', 'key29': 'newkey', } } req.body = jsonutils.dump_as_bytes(expected) response = self.controller.update_all(req, self.uuid, body=expected) self.assertEqual(expected, response) def test_update_all(self): self.stubs.Set(objects.Instance, 'save', fake_instance_save) req = self._get_request() req.method = 'PUT' req.content_type = "application/json" expected = { 'metadata': { 'key10': 'value10', 'key99': 'value99', }, } req.body = jsonutils.dump_as_bytes(expected) res_dict = self.controller.update_all(req, self.uuid, body=expected) self.assertEqual(expected, res_dict) def test_update_all_empty_container(self): self.stubs.Set(objects.Instance, 'save', fake_instance_save) req = self._get_request() req.method = 'PUT' req.content_type = "application/json" expected = {'metadata': {}} req.body = jsonutils.dump_as_bytes(expected) res_dict = self.controller.update_all(req, self.uuid, body=expected) self.assertEqual(expected, res_dict) def test_update_all_empty_body_item(self): self.stub_out('nova.db.instance_metadata_update', return_create_instance_metadata) req = fakes.HTTPRequest.blank(self.url + '/key1') req.method = 'PUT' req.headers["content-type"] = "application/json" self.assertRaises(self.validation_ex, 
self.controller.update_all, req, self.uuid, body=None) def test_update_all_with_non_dict_item(self): self.stub_out('nova.db.instance_metadata_update', return_create_instance_metadata) req = fakes.HTTPRequest.blank(self.url + '/bad') req.method = 'PUT' body = {"metadata": None} req.body = jsonutils.dump_as_bytes(body) req.headers["content-type"] = "application/json" self.assertRaises(self.validation_ex, self.controller.update_all, req, self.uuid, body=body) def test_update_all_malformed_container(self): self.stub_out('nova.db.instance_metadata_update', return_create_instance_metadata) req = self._get_request() req.method = 'PUT' req.content_type = "application/json" expected = {'meta': {}} req.body = jsonutils.dump_as_bytes(expected) self.assertRaises(self.validation_ex, self.controller.update_all, req, self.uuid, body=expected) def test_update_all_malformed_data(self): self.stub_out('nova.db.instance_metadata_update', return_create_instance_metadata) req = self._get_request() req.method = 'PUT' req.content_type = "application/json" expected = {'metadata': ['asdf']} req.body = jsonutils.dump_as_bytes(expected) self.assertRaises(self.validation_ex, self.controller.update_all, req, self.uuid, body=expected) def test_update_all_nonexistent_server(self): self.stub_out('nova.db.instance_get', return_server_nonexistent) req = self._get_request() req.method = 'PUT' req.content_type = "application/json" body = {'metadata': {'key10': 'value10'}} req.body = jsonutils.dump_as_bytes(body) self.assertRaises(webob.exc.HTTPNotFound, self.controller.update_all, req, '100', body=body) def test_update_all_non_dict(self): self.stub_out('nova.db.instance_metadata_update', return_create_instance_metadata) req = self._get_request() req.method = 'PUT' body = {"metadata": None} req.body = jsonutils.dump_as_bytes(body) req.headers["content-type"] = "application/json" self.assertRaises(self.validation_ex, self.controller.update_all, req, self.uuid, body=body) def test_update_item(self): self.stubs.Set(objects.Instance, 'save', fake_instance_save) req = self._get_request('/key1') req.method = 'PUT' body = {"meta": {"key1": "value1"}} req.body = jsonutils.dump_as_bytes(body) req.headers["content-type"] = "application/json" res_dict = self.controller.update(req, self.uuid, 'key1', body=body) expected = {"meta": {'key1': 'value1'}} self.assertEqual(expected, res_dict) def test_update_item_nonexistent_server(self): self.stub_out('nova.db.instance_get_by_uuid', return_server_nonexistent) req = self._get_request('/key1') req.method = 'PUT' body = {"meta": {"key1": "value1"}} req.body = jsonutils.dump_as_bytes(body) req.headers["content-type"] = "application/json" self.assertRaises(webob.exc.HTTPNotFound, self.controller.update, req, self.uuid, 'key1', body=body) def test_update_item_empty_body(self): self.stub_out('nova.db.instance_metadata_update', return_create_instance_metadata) req = self._get_request('/key1') req.method = 'PUT' req.headers["content-type"] = "application/json" self.assertRaises(self.validation_ex, self.controller.update, req, self.uuid, 'key1', body=None) def test_update_malformed_container(self): self.stub_out('nova.db.instance_metadata_update', return_create_instance_metadata) req = fakes.HTTPRequest.blank(self.url) req.method = 'PUT' expected = {'meta': {}} req.body = jsonutils.dump_as_bytes(expected) req.headers["content-type"] = "application/json" self.assertRaises(self.validation_ex, self.controller.update, req, self.uuid, 'key1', body=expected) def test_update_malformed_data(self): 
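        # "metadata" must map to an object; a list payload is rejected by
        # the request schema.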
self.stub_out('nova.db.instance_metadata_update', return_create_instance_metadata) req = fakes.HTTPRequest.blank(self.url) req.method = 'PUT' expected = {'metadata': ['asdf']} req.body = jsonutils.dump_as_bytes(expected) req.headers["content-type"] = "application/json" self.assertRaises(self.validation_ex, self.controller.update, req, self.uuid, 'key1', body=expected) def test_update_item_empty_key(self): self.stub_out('nova.db.instance_metadata_update', return_create_instance_metadata) req = self._get_request('/key1') req.method = 'PUT' body = {"meta": {"": "value1"}} req.body = jsonutils.dump_as_bytes(body) req.headers["content-type"] = "application/json" self.assertRaises(self.validation_ex, self.controller.update, req, self.uuid, '', body=body) def test_update_item_key_too_long(self): self.stub_out('nova.db.instance_metadata_update', return_create_instance_metadata) req = self._get_request('/key1') req.method = 'PUT' body = {"meta": {("a" * 260): "value1"}} req.body = jsonutils.dump_as_bytes(body) req.headers["content-type"] = "application/json" self.assertRaises(self.validation_ex_large, self.controller.update, req, self.uuid, ("a" * 260), body=body) def test_update_item_value_too_long(self): self.stub_out('nova.db.instance_metadata_update', return_create_instance_metadata) req = self._get_request('/key1') req.method = 'PUT' body = {"meta": {"key1": ("a" * 260)}} req.body = jsonutils.dump_as_bytes(body) req.headers["content-type"] = "application/json" self.assertRaises(self.validation_ex_large, self.controller.update, req, self.uuid, "key1", body=body) def test_update_item_too_many_keys(self): self.stub_out('nova.db.instance_metadata_update', return_create_instance_metadata) req = self._get_request('/key1') req.method = 'PUT' body = {"meta": {"key1": "value1", "key2": "value2"}} req.body = jsonutils.dump_as_bytes(body) req.headers["content-type"] = "application/json" self.assertRaises(self.validation_ex, self.controller.update, req, self.uuid, 'key1', body=body) def test_update_item_body_uri_mismatch(self): self.stub_out('nova.db.instance_metadata_update', return_create_instance_metadata) req = self._get_request('/bad') req.method = 'PUT' body = {"meta": {"key1": "value1"}} req.body = jsonutils.dump_as_bytes(body) req.headers["content-type"] = "application/json" self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update, req, self.uuid, 'bad', body=body) def test_update_item_non_dict(self): self.stub_out('nova.db.instance_metadata_update', return_create_instance_metadata) req = self._get_request('/bad') req.method = 'PUT' body = {"meta": None} req.body = jsonutils.dump_as_bytes(body) req.headers["content-type"] = "application/json" self.assertRaises(self.validation_ex, self.controller.update, req, self.uuid, 'bad', body=body) def test_update_empty_container(self): self.stub_out('nova.db.instance_metadata_update', return_create_instance_metadata) req = fakes.HTTPRequest.blank(self.url) req.method = 'PUT' expected = {'metadata': {}} req.body = jsonutils.dump_as_bytes(expected) req.headers["content-type"] = "application/json" self.assertRaises(self.validation_ex, self.controller.update, req, self.uuid, 'bad', body=expected) def test_too_many_metadata_items_on_create(self): self.stub_out('nova.db.instance_metadata_update', return_create_instance_metadata) data = {"metadata": {}} for num in range(CONF.quota.metadata_items + 1): data['metadata']['key%i' % num] = "blah" req = self._get_request() req.method = 'POST' req.body = jsonutils.dump_as_bytes(data) req.headers["content-type"] 
= "application/json" self.assertRaises(webob.exc.HTTPForbidden, self.controller.create, req, self.uuid, body=data) def test_invalid_metadata_items_on_create(self): self.stub_out('nova.db.instance_metadata_update', return_create_instance_metadata) req = self._get_request() req.method = 'POST' req.headers["content-type"] = "application/json" # test for long key data = {"metadata": {"a" * 260: "value1"}} req.body = jsonutils.dump_as_bytes(data) self.assertRaises(self.validation_ex_large, self.controller.create, req, self.uuid, body=data) # test for long value data = {"metadata": {"key": "v" * 260}} req.body = jsonutils.dump_as_bytes(data) self.assertRaises(self.validation_ex_large, self.controller.create, req, self.uuid, body=data) # test for empty key. data = {"metadata": {"": "value1"}} req.body = jsonutils.dump_as_bytes(data) self.assertRaises(self.validation_ex, self.controller.create, req, self.uuid, body=data) def test_too_many_metadata_items_on_update_item(self): self.stub_out('nova.db.instance_metadata_update', return_create_instance_metadata) data = {"metadata": {}} for num in range(CONF.quota.metadata_items + 1): data['metadata']['key%i' % num] = "blah" req = self._get_request() req.method = 'PUT' req.body = jsonutils.dump_as_bytes(data) req.headers["content-type"] = "application/json" self.assertRaises(webob.exc.HTTPForbidden, self.controller.update_all, req, self.uuid, body=data) def test_invalid_metadata_items_on_update_item(self): self.stub_out('nova.db.instance_metadata_update', return_create_instance_metadata) self.stub_out('nova.db.instance_metadata_update', return_create_instance_metadata) data = {"metadata": {}} for num in range(CONF.quota.metadata_items + 1): data['metadata']['key%i' % num] = "blah" req = self._get_request() req.method = 'PUT' req.body = jsonutils.dump_as_bytes(data) req.headers["content-type"] = "application/json" # test for long key data = {"metadata": {"a" * 260: "value1"}} req.body = jsonutils.dump_as_bytes(data) self.assertRaises(self.validation_ex_large, self.controller.update_all, req, self.uuid, body=data) # test for long value data = {"metadata": {"key": "v" * 260}} req.body = jsonutils.dump_as_bytes(data) self.assertRaises(self.validation_ex_large, self.controller.update_all, req, self.uuid, body=data) # test for empty key. 
data = {"metadata": {"": "value1"}} req.body = jsonutils.dump_as_bytes(data) self.assertRaises(self.validation_ex, self.controller.update_all, req, self.uuid, body=data) class BadStateServerMetaDataTestV21(test.TestCase): def setUp(self): super(BadStateServerMetaDataTestV21, self).setUp() fakes.stub_out_key_pair_funcs(self) self.stub_out('nova.db.instance_metadata_get', return_server_metadata) self.stubs.Set(compute_rpcapi.ComputeAPI, 'change_instance_metadata', fake_change_instance_metadata) self.stub_out('nova.db.instance_get', self._return_server_in_build) self.stub_out('nova.db.instance_get_by_uuid', self._return_server_in_build_by_uuid) self.stub_out('nova.db.instance_metadata_delete', delete_server_metadata) self._set_up_resources() def _set_up_resources(self): self.controller = server_metadata_v21.ServerMetadataController() self.uuid = uuids.fake self.url = '/fake/servers/%s/metadata' % self.uuid def _get_request(self, param_url=''): return fakes.HTTPRequestV21.blank(self.url + param_url) def test_invalid_state_on_delete(self): req = self._get_request('/key2') req.method = 'DELETE' self.assertRaises(webob.exc.HTTPConflict, self.controller.delete, req, self.uuid, 'key2') def test_invalid_state_on_update_metadata(self): self.stub_out('nova.db.instance_metadata_update', return_create_instance_metadata) req = self._get_request() req.method = 'POST' req.content_type = 'application/json' expected = { 'metadata': { 'key1': 'updatedvalue', 'key29': 'newkey', } } req.body = jsonutils.dump_as_bytes(expected) self.assertRaises(webob.exc.HTTPConflict, self.controller.update_all, req, self.uuid, body=expected) def _return_server_in_build(self, context, server_id, columns_to_join=None): return fake_instance.fake_db_instance( **{'id': server_id, 'uuid': '0cc3346e-9fef-4445-abe6-5d2b2690ec64', 'name': 'fake', 'locked': False, 'vm_state': vm_states.BUILDING}) def _return_server_in_build_by_uuid(self, context, server_uuid, columns_to_join=None, use_slave=False): return fake_instance.fake_db_instance( **{'id': 1, 'uuid': '0cc3346e-9fef-4445-abe6-5d2b2690ec64', 'name': 'fake', 'locked': False, 'vm_state': vm_states.BUILDING}) @mock.patch.object(nova.compute.api.API, 'update_instance_metadata', side_effect=exception.InstanceIsLocked(instance_uuid=0)) def test_instance_lock_update_metadata(self, mock_update): req = self._get_request() req.method = 'POST' req.content_type = 'application/json' expected = { 'metadata': { 'keydummy': 'newkey', } } req.body = jsonutils.dump_as_bytes(expected) self.assertRaises(webob.exc.HTTPConflict, self.controller.update_all, req, self.uuid, body=expected) class ServerMetaPolicyEnforcementV21(test.NoDBTestCase): def setUp(self): super(ServerMetaPolicyEnforcementV21, self).setUp() self.controller = server_metadata_v21.ServerMetadataController() self.req = fakes.HTTPRequest.blank('') def test_create_policy_failed(self): rule_name = "os_compute_api:server-metadata:create" self.policy.set_rules({rule_name: "project:non_fake"}) exc = self.assertRaises( exception.PolicyNotAuthorized, self.controller.create, self.req, fakes.FAKE_UUID, body={'metadata': {}}) self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) def test_index_policy_failed(self): rule_name = "os_compute_api:server-metadata:index" self.policy.set_rules({rule_name: "project:non_fake"}) exc = self.assertRaises( exception.PolicyNotAuthorized, self.controller.index, self.req, fakes.FAKE_UUID) self.assertEqual( "Policy doesn't allow %s to be performed." 
            % rule_name, exc.format_message())

    def test_update_policy_failed(self):
        rule_name = "os_compute_api:server-metadata:update"
        self.policy.set_rules({rule_name: "project:non_fake"})
        exc = self.assertRaises(
            exception.PolicyNotAuthorized,
            self.controller.update, self.req, fakes.FAKE_UUID,
            fakes.FAKE_UUID, body={'meta': {'fake_meta': 'fake_meta'}})
        self.assertEqual(
            "Policy doesn't allow %s to be performed." % rule_name,
            exc.format_message())

    def test_update_all_policy_failed(self):
        rule_name = "os_compute_api:server-metadata:update_all"
        self.policy.set_rules({rule_name: "project:non_fake"})
        exc = self.assertRaises(
            exception.PolicyNotAuthorized,
            self.controller.update_all, self.req, fakes.FAKE_UUID,
            body={'metadata': {}})
        self.assertEqual(
            "Policy doesn't allow %s to be performed." % rule_name,
            exc.format_message())

    def test_delete_policy_failed(self):
        rule_name = "os_compute_api:server-metadata:delete"
        self.policy.set_rules({rule_name: "project:non_fake"})
        exc = self.assertRaises(
            exception.PolicyNotAuthorized,
            self.controller.delete, self.req, fakes.FAKE_UUID,
            fakes.FAKE_UUID)
        self.assertEqual(
            "Policy doesn't allow %s to be performed." % rule_name,
            exc.format_message())

    def test_show_policy_failed(self):
        rule_name = "os_compute_api:server-metadata:show"
        self.policy.set_rules({rule_name: "project:non_fake"})
        exc = self.assertRaises(
            exception.PolicyNotAuthorized,
            self.controller.show, self.req, fakes.FAKE_UUID,
            fakes.FAKE_UUID)
        self.assertEqual(
            "Policy doesn't allow %s to be performed." % rule_name,
            exc.format_message())
nova-17.0.1/nova/tests/unit/api/openstack/compute/test_server_password.py0000666000175000017500000000655213250073126026737 0ustar zuulzuul00000000000000
# Copyright 2012 Nebula, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
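# NOTE: the tests below never touch real instance metadata; setUp() stubs
# out the nova.api.metadata.password helpers.  For context, those helpers
# persist the (already encrypted) password by splitting it across a handful
# of fixed-size system_metadata items.  A minimal sketch of that chunking
# idea -- the helper name and the exact chunk size/count are illustrative
# assumptions here, not a copy of nova's implementation:
def _example_convert_password(password, chunk_length=255, chunks=4):
    """Split a password into password_0..password_N metadata items."""
    return {'password_%d' % i:
            password[i * chunk_length:(i + 1) * chunk_length]
            for i in range(chunks)}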
import mock

from nova.api.metadata import password
from nova.api.openstack.compute import server_password \
    as server_password_v21
from nova import compute
from nova import exception
from nova import test
from nova.tests.unit.api.openstack import fakes
from nova.tests.unit import fake_instance


class ServerPasswordTestV21(test.NoDBTestCase):
    content_type = 'application/json'
    server_password = server_password_v21
    delete_call = 'clear'

    def setUp(self):
        super(ServerPasswordTestV21, self).setUp()
        fakes.stub_out_nw_api(self)
        self.stubs.Set(
            compute.api.API, 'get',
            lambda self, ctxt, *a, **kw: fake_instance.fake_instance_obj(
                ctxt,
                system_metadata={},
                expected_attrs=['system_metadata']))
        self.password = 'fakepass'
        self.controller = self.server_password.ServerPasswordController()
        self.fake_req = fakes.HTTPRequest.blank('')

        def fake_extract_password(instance):
            return self.password

        def fake_convert_password(context, password):
            self.password = password
            return {}

        self.stubs.Set(password, 'extract_password', fake_extract_password)
        self.stubs.Set(password, 'convert_password', fake_convert_password)

    def test_get_password(self):
        res = self.controller.index(self.fake_req, 'fake')
        self.assertEqual(res['password'], 'fakepass')

    def test_reset_password(self):
        # Resolve the delete method by name so subclasses can substitute
        # their own controller method via the delete_call attribute.
        delete = getattr(self.controller, self.delete_call)
        with mock.patch('nova.objects.Instance._save_flavor'):
            delete(self.fake_req, 'fake')
            self.assertEqual(204, delete.wsgi_code)

        res = self.controller.index(self.fake_req, 'fake')
        self.assertEqual(res['password'], '')


class ServerPasswordPolicyEnforcementV21(test.NoDBTestCase):

    def setUp(self):
        super(ServerPasswordPolicyEnforcementV21, self).setUp()
        self.controller = server_password_v21.ServerPasswordController()
        self.req = fakes.HTTPRequest.blank('')

    def _test_policy_failed(self, method, rule_name):
        self.policy.set_rules({rule_name: "project:non_fake"})
        exc = self.assertRaises(
            exception.PolicyNotAuthorized, method, self.req,
            fakes.FAKE_UUID)
        self.assertEqual(
            "Policy doesn't allow %s to be performed." % rule_name,
            exc.format_message())

    def test_get_password_policy_failed(self):
        rule_name = "os_compute_api:os-server-password"
        self._test_policy_failed(self.controller.index, rule_name)

    def test_clear_password_policy_failed(self):
        rule_name = "os_compute_api:os-server-password"
        self._test_policy_failed(self.controller.clear, rule_name)
nova-17.0.1/nova/tests/unit/api/openstack/compute/microversions.py0000666000175000017500000001131313250073126025341 0ustar zuulzuul00000000000000
# Copyright 2014 IBM Corp.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
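# The controllers below exercise nova's microversion dispatch: a controller
# method may be defined several times, each guarded by
# @wsgi.Controller.api_version("<min>"[, "<max>"]), and the definition whose
# version range matches the request's microversion is the one invoked.  The
# trailing "# noqa" markers silence flake8's redefinition warning (F811)
# that the repeated "def index"/"def _create" definitions would otherwise
# trigger.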
"""Microversions Test Extension""" import functools import webob from nova.api.openstack.compute import routes from nova.api.openstack import wsgi from nova.api import validation from nova.tests.unit.api.openstack.compute import dummy_schema class MicroversionsController(wsgi.Controller): @wsgi.Controller.api_version("2.1") def index(self, req): data = {'param': 'val'} return data @wsgi.Controller.api_version("2.2") # noqa def index(self, req): data = {'param': 'val2'} return data @wsgi.Controller.api_version("3.0") # noqa def index(self, req): raise webob.exc.HTTPBadRequest() # We have a second example controller here to help check # for accidental dependencies between API controllers # due to base class changes class MicroversionsController2(wsgi.Controller): @wsgi.Controller.api_version("2.2", "2.5") def index(self, req): data = {'param': 'controller2_val1'} return data @wsgi.Controller.api_version("2.5", "3.1") # noqa @wsgi.response(202) def index(self, req): data = {'param': 'controller2_val2'} return data class MicroversionsController3(wsgi.Controller): @wsgi.Controller.api_version("2.1") @validation.schema(dummy_schema.dummy) def create(self, req, body): data = {'param': 'create_val1'} return data @wsgi.Controller.api_version("2.1") @validation.schema(dummy_schema.dummy, "2.3", "2.8") @validation.schema(dummy_schema.dummy2, "2.9") def update(self, req, id, body): data = {'param': 'update_val1'} return data @wsgi.Controller.api_version("2.1", "2.2") @wsgi.response(202) @wsgi.action('foo') def _foo(self, req, id, body): data = {'foo': 'bar'} return data class MicroversionsController4(wsgi.Controller): @wsgi.Controller.api_version("2.1") def _create(self, req): data = {'param': 'controller4_val1'} return data @wsgi.Controller.api_version("2.2") # noqa def _create(self, req): data = {'param': 'controller4_val2'} return data def create(self, req, body): return self._create(req) class MicroversionsExtendsBaseController(wsgi.Controller): @wsgi.Controller.api_version("2.1") def show(self, req, id): return {'base_param': 'base_val'} class MicroversionsExtendsController1(wsgi.Controller): @wsgi.Controller.api_version("2.3") @wsgi.extends def show(self, req, resp_obj, id): resp_obj.obj['extend_ctrlr1'] = 'val_1' class MicroversionsExtendsController2(wsgi.Controller): @wsgi.Controller.api_version("2.4") @wsgi.extends def show(self, req, resp_obj, id): resp_obj.obj['extend_ctrlr2'] = 'val_2' class MicroversionsExtendsController3(wsgi.Controller): @wsgi.Controller.api_version("2.2", "2.3") @wsgi.extends def show(self, req, resp_obj, id): resp_obj.obj['extend_ctrlr3'] = 'val_3' mv_controller = functools.partial(routes._create_controller, MicroversionsController, [], []) mv2_controller = functools.partial(routes._create_controller, MicroversionsController2, [], []) mv3_controller = functools.partial(routes._create_controller, MicroversionsController3, [], []) mv4_controller = functools.partial(routes._create_controller, MicroversionsController4, [], []) mv5_controller = functools.partial(routes._create_controller, MicroversionsExtendsBaseController, [ MicroversionsExtendsController1, MicroversionsExtendsController2, MicroversionsExtendsController3 ], []) ROUTES = ( ('/microversions', { 'GET': [mv_controller, 'index'] }), ('/microversions2', { 'GET': [mv2_controller, 'index'] }), ('/microversions3', { 'POST': [mv3_controller, 'create'] }), ('/microversions3/{id}', { 'PUT': [mv3_controller, 'update'] }), ('/microversions3/{id}/action', { 'POST': [mv3_controller, 'action'] }), ('/microversions4', { 
'POST': [mv4_controller, 'create'] }), ('/microversions5/{id}', { 'GET': [mv5_controller, 'show'] }), ) nova-17.0.1/nova/tests/unit/api/openstack/compute/test_migrate_server.py0000666000175000017500000005742213250073136026530 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # Copyright 2013 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from oslo_utils import uuidutils import six import webob from nova.api.openstack import api_version_request from nova.api.openstack.compute import migrate_server as \ migrate_server_v21 from nova import exception from nova import objects from nova import test from nova.tests.unit.api.openstack.compute import admin_only_action_common from nova.tests.unit.api.openstack import fakes from nova.tests import uuidsentinel as uuids class MigrateServerTestsV21(admin_only_action_common.CommonTests): migrate_server = migrate_server_v21 controller_name = 'MigrateServerController' validation_error = exception.ValidationError _api_version = '2.1' disk_over_commit = False force = None async = False host_name = None def setUp(self): super(MigrateServerTestsV21, self).setUp() self.controller = getattr(self.migrate_server, self.controller_name)() self.compute_api = self.controller.compute_api def _fake_controller(*args, **kwargs): return self.controller self.stubs.Set(self.migrate_server, self.controller_name, _fake_controller) self.mox.StubOutWithMock(self.compute_api, 'get') def _get_migration_body(self, **kwargs): return {'os-migrateLive': self._get_params(**kwargs)} def _get_params(self, **kwargs): return {'host': kwargs.get('host'), 'block_migration': kwargs.get('block_migration') or False, 'disk_over_commit': self.disk_over_commit} def test_migrate(self): method_translations = {'_migrate': 'resize', '_migrate_live': 'live_migrate'} body_map = {'_migrate_live': self._get_migration_body(host='hostname')} args_map = {'_migrate_live': ((False, self.disk_over_commit, 'hostname', self.force, self.async), {}), '_migrate': ((), {'host_name': self.host_name})} self._test_actions(['_migrate', '_migrate_live'], body_map=body_map, method_translations=method_translations, args_map=args_map) def test_migrate_none_hostname(self): method_translations = {'_migrate': 'resize', '_migrate_live': 'live_migrate'} body_map = {'_migrate_live': self._get_migration_body(host=None)} args_map = {'_migrate_live': ((False, self.disk_over_commit, None, self.force, self.async), {}), '_migrate': ((), {'host_name': None})} self._test_actions(['_migrate', '_migrate_live'], body_map=body_map, method_translations=method_translations, args_map=args_map) def test_migrate_with_non_existed_instance(self): body_map = {'_migrate_live': self._get_migration_body(host='hostname')} self._test_actions_with_non_existed_instance( ['_migrate', '_migrate_live'], body_map=body_map) def test_migrate_raise_conflict_on_invalid_state(self): method_translations = {'_migrate': 'resize', '_migrate_live': 'live_migrate'} body_map = {'_migrate_live': self._get_migration_body(host='hostname')} args_map = 
{'_migrate_live': ((False, self.disk_over_commit, 'hostname', self.force, self.async), {}), '_migrate': ((), {'host_name': self.host_name})} exception_arg = {'_migrate': 'migrate', '_migrate_live': 'os-migrateLive'} self._test_actions_raise_conflict_on_invalid_state( ['_migrate', '_migrate_live'], body_map=body_map, args_map=args_map, method_translations=method_translations, exception_args=exception_arg) def test_actions_with_locked_instance(self): method_translations = {'_migrate': 'resize', '_migrate_live': 'live_migrate'} body_map = {'_migrate_live': self._get_migration_body(host='hostname')} args_map = {'_migrate_live': ((False, self.disk_over_commit, 'hostname', self.force, self.async), {}), '_migrate': ((), {'host_name': self.host_name})} self._test_actions_with_locked_instance( ['_migrate', '_migrate_live'], body_map=body_map, args_map=args_map, method_translations=method_translations) def _test_migrate_exception(self, exc_info, expected_result): self.mox.StubOutWithMock(self.compute_api, 'resize') instance = self._stub_instance_get() self.compute_api.resize( self.context, instance, host_name=self.host_name).AndRaise(exc_info) self.mox.ReplayAll() self.assertRaises(expected_result, self.controller._migrate, self.req, instance['uuid'], body={'migrate': None}) def test_migrate_too_many_instances(self): exc_info = exception.TooManyInstances(overs='', req='', used=0, allowed=0) self._test_migrate_exception(exc_info, webob.exc.HTTPForbidden) def _test_migrate_live_succeeded(self, param): self.mox.StubOutWithMock(self.compute_api, 'live_migrate') instance = self._stub_instance_get() self.compute_api.live_migrate(self.context, instance, False, self.disk_over_commit, 'hostname', self.force, self.async) self.mox.ReplayAll() live_migrate_method = self.controller._migrate_live live_migrate_method(self.req, instance.uuid, body={'os-migrateLive': param}) self.assertEqual(202, live_migrate_method.wsgi_code) def test_migrate_live_enabled(self): param = self._get_params(host='hostname') self._test_migrate_live_succeeded(param) def test_migrate_live_enabled_with_string_param(self): param = {'host': 'hostname', 'block_migration': "False", 'disk_over_commit': "False"} self._test_migrate_live_succeeded(param) def test_migrate_live_without_host(self): body = self._get_migration_body() del body['os-migrateLive']['host'] self.assertRaises(self.validation_error, self.controller._migrate_live, self.req, fakes.FAKE_UUID, body=body) def test_migrate_live_without_block_migration(self): body = self._get_migration_body() del body['os-migrateLive']['block_migration'] self.assertRaises(self.validation_error, self.controller._migrate_live, self.req, fakes.FAKE_UUID, body=body) def test_migrate_live_without_disk_over_commit(self): body = {'os-migrateLive': {'host': 'hostname', 'block_migration': False}} self.assertRaises(self.validation_error, self.controller._migrate_live, self.req, fakes.FAKE_UUID, body=body) def test_migrate_live_with_invalid_block_migration(self): body = self._get_migration_body(block_migration='foo') self.assertRaises(self.validation_error, self.controller._migrate_live, self.req, fakes.FAKE_UUID, body=body) def test_migrate_live_with_invalid_disk_over_commit(self): body = {'os-migrateLive': {'host': 'hostname', 'block_migration': False, 'disk_over_commit': "foo"}} self.assertRaises(self.validation_error, self.controller._migrate_live, self.req, fakes.FAKE_UUID, body=body) def test_migrate_live_missing_dict_param(self): body = self._get_migration_body(host='hostname') del 
body['os-migrateLive']['host'] body['os-migrateLive']['dummy'] = 'hostname' self.assertRaises(self.validation_error, self.controller._migrate_live, self.req, fakes.FAKE_UUID, body=body) def _test_migrate_live_failed_with_exception( self, fake_exc, uuid=None, expected_exc=webob.exc.HTTPBadRequest, check_response=True): self.mox.StubOutWithMock(self.compute_api, 'live_migrate') instance = self._stub_instance_get(uuid=uuid) self.compute_api.live_migrate(self.context, instance, False, self.disk_over_commit, 'hostname', self.force, self.async ).AndRaise(fake_exc) self.mox.ReplayAll() body = self._get_migration_body(host='hostname') ex = self.assertRaises(expected_exc, self.controller._migrate_live, self.req, instance.uuid, body=body) if check_response: self.assertIn(six.text_type(fake_exc), ex.explanation) def test_migrate_live_compute_service_unavailable(self): self._test_migrate_live_failed_with_exception( exception.ComputeServiceUnavailable(host='host')) def test_migrate_live_compute_service_not_found(self): self._test_migrate_live_failed_with_exception( exception.ComputeHostNotFound(host='host')) def test_migrate_live_invalid_hypervisor_type(self): self._test_migrate_live_failed_with_exception( exception.InvalidHypervisorType()) def test_migrate_live_invalid_cpu_info(self): self._test_migrate_live_failed_with_exception( exception.InvalidCPUInfo(reason="")) def test_migrate_live_unable_to_migrate_to_self(self): uuid = uuidutils.generate_uuid() self._test_migrate_live_failed_with_exception( exception.UnableToMigrateToSelf(instance_id=uuid, host='host'), uuid=uuid) def test_migrate_live_destination_hypervisor_too_old(self): self._test_migrate_live_failed_with_exception( exception.DestinationHypervisorTooOld()) def test_migrate_live_no_valid_host(self): self._test_migrate_live_failed_with_exception( exception.NoValidHost(reason='')) def test_migrate_live_invalid_local_storage(self): self._test_migrate_live_failed_with_exception( exception.InvalidLocalStorage(path='', reason='')) def test_migrate_live_invalid_shared_storage(self): self._test_migrate_live_failed_with_exception( exception.InvalidSharedStorage(path='', reason='')) def test_migrate_live_hypervisor_unavailable(self): self._test_migrate_live_failed_with_exception( exception.HypervisorUnavailable(host="")) def test_migrate_live_instance_not_active(self): self._test_migrate_live_failed_with_exception( exception.InstanceInvalidState( instance_uuid='', state='', attr='', method=''), expected_exc=webob.exc.HTTPConflict, check_response=False) def test_migrate_live_pre_check_error(self): self._test_migrate_live_failed_with_exception( exception.MigrationPreCheckError(reason='')) def test_migrate_live_migration_precheck_client_exception(self): self._test_migrate_live_failed_with_exception( exception.MigrationPreCheckClientException(reason=''), expected_exc=webob.exc.HTTPInternalServerError, check_response=False) def test_migrate_live_migration_with_unexpected_error(self): self._test_migrate_live_failed_with_exception( exception.MigrationError(reason=''), expected_exc=webob.exc.HTTPInternalServerError, check_response=False) class MigrateServerTestsV225(MigrateServerTestsV21): # We don't have disk_over_commit in v2.25 disk_over_commit = None def setUp(self): super(MigrateServerTestsV225, self).setUp() self.req.api_version_request = api_version_request.APIVersionRequest( '2.25') def _get_params(self, **kwargs): return {'host': kwargs.get('host'), 'block_migration': kwargs.get('block_migration') or False} def 
test_migrate_live_enabled_with_string_param(self):
        param = {'host': 'hostname',
                 'block_migration': "False"}
        self._test_migrate_live_succeeded(param)

    def test_migrate_live_without_disk_over_commit(self):
        pass

    def test_migrate_live_with_invalid_disk_over_commit(self):
        pass

    def test_live_migrate_block_migration_auto(self):
        method_translations = {'_migrate_live': 'live_migrate'}
        body_map = {'_migrate_live':
                    {'os-migrateLive': {'host': 'hostname',
                                        'block_migration': 'auto'}}}
        args_map = {'_migrate_live': ((None, None, 'hostname', self.force,
                                       self.async), {})}
        self._test_actions(['_migrate_live'], body_map=body_map,
                           method_translations=method_translations,
                           args_map=args_map)

    def test_migrate_live_with_disk_over_commit_raise(self):
        body = {'os-migrateLive':
                {'host': 'hostname',
                 'block_migration': 'auto',
                 'disk_over_commit': False}}
        self.assertRaises(self.validation_error,
                          self.controller._migrate_live,
                          self.req, fakes.FAKE_UUID, body=body)

    def test_migrate_live_migration_with_old_nova_not_supported(self):
        self._test_migrate_live_failed_with_exception(
            exception.LiveMigrationWithOldNovaNotSupported())


class MigrateServerTestsV230(MigrateServerTestsV225):

    force = False

    def setUp(self):
        super(MigrateServerTestsV230, self).setUp()
        self.req.api_version_request = api_version_request.APIVersionRequest(
            '2.30')

    def _test_live_migrate(self, force=False):
        literal_force = 'true' if force else 'false'
        method_translations = {'_migrate_live': 'live_migrate'}
        body_map = {'_migrate_live':
                    {'os-migrateLive': {'host': 'hostname',
                                        'block_migration': 'auto',
                                        'force': literal_force}}}
        args_map = {'_migrate_live': ((None, None, 'hostname', force,
                                       self.async), {})}
        self._test_actions(['_migrate_live'], body_map=body_map,
                           method_translations=method_translations,
                           args_map=args_map)

    def test_live_migrate(self):
        self._test_live_migrate()

    def test_live_migrate_with_forced_host(self):
        self._test_live_migrate(force=True)

    def test_forced_live_migrate_with_no_provided_host(self):
        body = {'os-migrateLive': {'force': 'true'}}
        self.assertRaises(self.validation_error,
                          self.controller._migrate_live,
                          self.req, fakes.FAKE_UUID, body=body)


class MigrateServerTestsV234(MigrateServerTestsV230):
    async = True

    def setUp(self):
        super(MigrateServerTestsV234, self).setUp()
        self.req.api_version_request = api_version_request.APIVersionRequest(
            '2.34')

    # NOTE(tdurakov): for REST API version 2.34 and above, the tests below
    # are not valid, as those checks are made in the background.
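    # Concretely: with microversion 2.34 the API replies 202 before the
    # pre-checks have run, so the synchronous failure assertions inherited
    # from the older test classes no longer apply and are overridden as
    # no-ops below.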
def test_migrate_live_compute_service_unavailable(self): pass def test_migrate_live_compute_service_not_found(self): pass def test_migrate_live_invalid_hypervisor_type(self): pass def test_migrate_live_invalid_cpu_info(self): pass def test_migrate_live_unable_to_migrate_to_self(self): pass def test_migrate_live_destination_hypervisor_too_old(self): pass def test_migrate_live_no_valid_host(self): pass def test_migrate_live_invalid_local_storage(self): pass def test_migrate_live_invalid_shared_storage(self): pass def test_migrate_live_hypervisor_unavailable(self): pass def test_migrate_live_instance_not_active(self): pass def test_migrate_live_pre_check_error(self): pass def test_migrate_live_migration_precheck_client_exception(self): pass def test_migrate_live_migration_with_unexpected_error(self): pass def test_migrate_live_migration_with_old_nova_not_supported(self): pass def test_migrate_live_unexpected_error(self): exc = exception.NoValidHost(reason="No valid host found") self.mox.StubOutWithMock(self.compute_api, 'live_migrate') instance = self._stub_instance_get() self.compute_api.live_migrate(self.context, instance, None, self.disk_over_commit, 'hostname', self.force, self.async).AndRaise(exc) self.mox.ReplayAll() body = {'os-migrateLive': {'host': 'hostname', 'block_migration': 'auto'}} self.assertRaises(webob.exc.HTTPInternalServerError, self.controller._migrate_live, self.req, instance.uuid, body=body) class MigrateServerTestsV256(MigrateServerTestsV234): host_name = 'fake-host' method_translations = {'_migrate': 'resize'} body_map = {'_migrate': {'migrate': {'host': host_name}}} args_map = {'_migrate': ((), {'host_name': host_name})} def setUp(self): super(MigrateServerTestsV256, self).setUp() self.req.api_version_request = api_version_request.APIVersionRequest( '2.56') def _test_migrate_validation_error(self, body): self.assertRaises(self.validation_error, self.controller._migrate, self.req, fakes.FAKE_UUID, body=body) def _test_migrate_exception(self, exc_info, expected_result): @mock.patch.object(self.compute_api, 'get') @mock.patch.object(self.compute_api, 'resize', side_effect=exc_info) def _test(mock_resize, mock_get): instance = objects.Instance(uuid=uuids.instance) self.assertRaises(expected_result, self.controller._migrate, self.req, instance['uuid'], body={'migrate': {'host': self.host_name}}) _test() def test_migrate(self): self._test_actions(['_migrate'], body_map=self.body_map, method_translations=self.method_translations, args_map=self.args_map) def test_migrate_without_host(self): # The request body is: '{"migrate": null}' body_map = {'_migrate': {'migrate': None}} args_map = {'_migrate': ((), {'host_name': None})} self._test_actions(['_migrate'], body_map=body_map, method_translations=self.method_translations, args_map=args_map) def test_migrate_none_hostname(self): # The request body is: '{"migrate": {"host": null}}' body_map = {'_migrate': {'migrate': {'host': None}}} args_map = {'_migrate': ((), {'host_name': None})} self._test_actions(['_migrate'], body_map=body_map, method_translations=self.method_translations, args_map=args_map) def test_migrate_with_non_existed_instance(self): self._test_actions_with_non_existed_instance( ['_migrate'], body_map=self.body_map) def test_migrate_raise_conflict_on_invalid_state(self): exception_arg = {'_migrate': 'migrate'} self._test_actions_raise_conflict_on_invalid_state( ['_migrate'], body_map=self.body_map, args_map=self.args_map, method_translations=self.method_translations, exception_args=exception_arg) def 
test_actions_with_locked_instance(self): self._test_actions_with_locked_instance( ['_migrate'], body_map=self.body_map, args_map=self.args_map, method_translations=self.method_translations) def test_migrate_without_migrate_object(self): self._test_migrate_validation_error({}) def test_migrate_invalid_migrate_object(self): self._test_migrate_validation_error({'migrate': 'fake-host'}) def test_migrate_with_additional_property(self): self._test_migrate_validation_error( {'migrate': {'host': self.host_name, 'additional': 'foo'}}) def test_migrate_with_host_length_more_than_255(self): self._test_migrate_validation_error( {'migrate': {'host': 'a' * 256}}) def test_migrate_nonexistent_host(self): exc_info = exception.ComputeHostNotFound(host='nonexistent_host') self._test_migrate_exception(exc_info, webob.exc.HTTPBadRequest) def test_migrate_no_request_spec(self): exc_info = exception.CannotMigrateWithTargetHost() self._test_migrate_exception(exc_info, webob.exc.HTTPConflict) def test_migrate_to_same_host(self): exc_info = exception.CannotMigrateToSameHost() self._test_migrate_exception(exc_info, webob.exc.HTTPBadRequest) class MigrateServerPolicyEnforcementV21(test.NoDBTestCase): def setUp(self): super(MigrateServerPolicyEnforcementV21, self).setUp() self.controller = migrate_server_v21.MigrateServerController() self.req = fakes.HTTPRequest.blank('') def test_migrate_policy_failed(self): rule_name = "os_compute_api:os-migrate-server:migrate" self.policy.set_rules({rule_name: "project:non_fake"}) exc = self.assertRaises( exception.PolicyNotAuthorized, self.controller._migrate, self.req, fakes.FAKE_UUID, body={'migrate': {}}) self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) def test_migrate_live_policy_failed(self): rule_name = "os_compute_api:os-migrate-server:migrate_live" self.policy.set_rules({rule_name: "project:non_fake"}) body_args = {'os-migrateLive': {'host': 'hostname', 'block_migration': False, 'disk_over_commit': False}} exc = self.assertRaises( exception.PolicyNotAuthorized, self.controller._migrate_live, self.req, fakes.FAKE_UUID, body=body_args) self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) nova-17.0.1/nova/tests/unit/api/openstack/compute/test_limits.py0000666000175000017500000002534013250073126025004 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Tests dealing with HTTP rate-limiting. 
""" import mock from oslo_serialization import jsonutils from oslo_utils import encodeutils from six.moves import http_client as httplib from six.moves import StringIO from nova.api.openstack.compute import limits as limits_v21 from nova.api.openstack.compute import views from nova.api.openstack import wsgi import nova.context from nova import exception from nova import test from nova.tests.unit.api.openstack import fakes from nova.tests.unit import matchers class BaseLimitTestSuite(test.NoDBTestCase): """Base test suite which provides relevant stubs and time abstraction.""" def setUp(self): super(BaseLimitTestSuite, self).setUp() self.time = 0.0 self.absolute_limits = {} def stub_get_project_quotas(context, project_id, usages=True): return {k: dict(limit=v) for k, v in self.absolute_limits.items()} mock_get_project_quotas = mock.patch.object( nova.quota.QUOTAS, "get_project_quotas", side_effect = stub_get_project_quotas) mock_get_project_quotas.start() self.addCleanup(mock_get_project_quotas.stop) def _get_time(self): """Return the "time" according to this test suite.""" return self.time class LimitsControllerTestV21(BaseLimitTestSuite): """Tests for `limits.LimitsController` class.""" limits_controller = limits_v21.LimitsController def setUp(self): """Run before each test.""" super(LimitsControllerTestV21, self).setUp() self.controller = wsgi.Resource(self.limits_controller()) self.ctrler = self.limits_controller() def _get_index_request(self, accept_header="application/json", tenant_id=None): """Helper to set routing arguments.""" request = fakes.HTTPRequest.blank('', version='2.1') if tenant_id: request = fakes.HTTPRequest.blank('/?tenant_id=%s' % tenant_id, version='2.1') request.accept = accept_header request.environ["wsgiorg.routing_args"] = (None, { "action": "index", "controller": "", }) context = nova.context.RequestContext('testuser', 'testproject') request.environ["nova.context"] = context return request def test_empty_index_json(self): # Test getting empty limit details in JSON. request = self._get_index_request() response = request.get_response(self.controller) expected = { "limits": { "rate": [], "absolute": {}, }, } body = jsonutils.loads(response.body) self.assertEqual(expected, body) def test_index_json(self): self._test_index_json() def test_index_json_by_tenant(self): self._test_index_json('faketenant') def _test_index_json(self, tenant_id=None): # Test getting limit details in JSON. 
request = self._get_index_request(tenant_id=tenant_id) context = request.environ["nova.context"] if tenant_id is None: tenant_id = context.project_id self.absolute_limits = { 'ram': 512, 'instances': 5, 'cores': 21, 'key_pairs': 10, 'floating_ips': 10, 'security_groups': 10, 'security_group_rules': 20, } expected = { "limits": { "rate": [], "absolute": { "maxTotalRAMSize": 512, "maxTotalInstances": 5, "maxTotalCores": 21, "maxTotalKeypairs": 10, "maxTotalFloatingIps": 10, "maxSecurityGroups": 10, "maxSecurityGroupRules": 20, }, }, } def _get_project_quotas(context, project_id, usages=True): return {k: dict(limit=v) for k, v in self.absolute_limits.items()} with mock.patch('nova.quota.QUOTAS.get_project_quotas') as \ get_project_quotas: get_project_quotas.side_effect = _get_project_quotas response = request.get_response(self.controller) body = jsonutils.loads(response.body) self.assertEqual(expected, body) get_project_quotas.assert_called_once_with(context, tenant_id, usages=False) class FakeHttplibSocket(object): """Fake `httplib.HTTPResponse` replacement.""" def __init__(self, response_string): """Initialize new `FakeHttplibSocket`.""" self._buffer = StringIO(response_string) def makefile(self, _mode, _other): """Returns the socket's internal buffer.""" return self._buffer class FakeHttplibConnection(object): """Fake `httplib.HTTPConnection`.""" def __init__(self, app, host): """Initialize `FakeHttplibConnection`.""" self.app = app self.host = host def request(self, method, path, body="", headers=None): """Requests made via this connection actually get translated and routed into our WSGI app, we then wait for the response and turn it back into an `httplib.HTTPResponse`. """ if not headers: headers = {} req = fakes.HTTPRequest.blank(path) req.method = method req.headers = headers req.host = self.host req.body = encodeutils.safe_encode(body) resp = str(req.get_response(self.app)) resp = "HTTP/1.0 %s" % resp sock = FakeHttplibSocket(resp) self.http_response = httplib.HTTPResponse(sock) self.http_response.begin() def getresponse(self): """Return our generated response from the request.""" return self.http_response class LimitsViewBuilderTest(test.NoDBTestCase): def setUp(self): super(LimitsViewBuilderTest, self).setUp() self.view_builder = views.limits.ViewBuilder() self.rate_limits = [] self.absolute_limits = {"metadata_items": 1, "injected_files": 5, "injected_file_content_bytes": 5} def test_build_limits(self): expected_limits = {"limits": { "rate": [], "absolute": {"maxServerMeta": 1, "maxImageMeta": 1, "maxPersonality": 5, "maxPersonalitySize": 5}}} output = self.view_builder.build(self.absolute_limits) self.assertThat(output, matchers.DictMatches(expected_limits)) def test_build_limits_empty_limits(self): expected_limits = {"limits": {"rate": [], "absolute": {}}} abs_limits = {} output = self.view_builder.build(abs_limits) self.assertThat(output, matchers.DictMatches(expected_limits)) class LimitsPolicyEnforcementV21(test.NoDBTestCase): def setUp(self): super(LimitsPolicyEnforcementV21, self).setUp() self.controller = limits_v21.LimitsController() def test_limits_index_policy_failed(self): rule_name = "os_compute_api:limits" self.policy.set_rules({rule_name: "project:non_fake"}) req = fakes.HTTPRequest.blank('') exc = self.assertRaises( exception.PolicyNotAuthorized, self.controller.index, req=req) self.assertEqual( "Policy doesn't allow %s to be performed." 
            % rule_name, exc.format_message())


class LimitsControllerTestV236(BaseLimitTestSuite):

    def setUp(self):
        super(LimitsControllerTestV236, self).setUp()
        self.controller = limits_v21.LimitsController()
        self.req = fakes.HTTPRequest.blank("/?tenant_id=faketenant",
                                           version='2.36')

    def test_index_filtered(self):
        absolute_limits = {
            'ram': 512,
            'instances': 5,
            'cores': 21,
            'key_pairs': 10,
            'floating_ips': 10,
            'security_groups': 10,
            'security_group_rules': 20,
        }

        def _get_project_quotas(context, project_id, usages=True):
            return {k: dict(limit=v) for k, v in absolute_limits.items()}

        with mock.patch('nova.quota.QUOTAS.get_project_quotas') as \
                get_project_quotas:
            get_project_quotas.side_effect = _get_project_quotas
            response = self.controller.index(self.req)
            expected_response = {
                "limits": {
                    "rate": [],
                    "absolute": {
                        "maxTotalRAMSize": 512,
                        "maxTotalInstances": 5,
                        "maxTotalCores": 21,
                        "maxTotalKeypairs": 10,
                    },
                },
            }
            self.assertEqual(expected_response, response)


class LimitsControllerTestV239(BaseLimitTestSuite):

    def setUp(self):
        super(LimitsControllerTestV239, self).setUp()
        self.controller = limits_v21.LimitsController()
        self.req = fakes.HTTPRequest.blank("/?tenant_id=faketenant",
                                           version='2.39')

    def test_index_filtered_no_max_image_meta(self):
        absolute_limits = {
            "metadata_items": 1,
        }

        def _get_project_quotas(context, project_id, usages=True):
            return {k: dict(limit=v) for k, v in absolute_limits.items()}

        with mock.patch('nova.quota.QUOTAS.get_project_quotas') as \
                get_project_quotas:
            get_project_quotas.side_effect = _get_project_quotas
            response = self.controller.index(self.req)
            # Starting from version 2.39 there is no 'maxImageMeta' field
            # in the response after removing the 'image-metadata' proxy API.
            expected_response = {
                "limits": {
                    "rate": [],
                    "absolute": {
                        "maxServerMeta": 1,
                    },
                },
            }
            self.assertEqual(expected_response, response)
nova-17.0.1/nova/tests/unit/api/openstack/compute/test_server_groups.py0000666000175000017500000006655113250073126026415 0ustar zuulzuul00000000000000
# Copyright (c) 2014 Cisco Systems, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
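# The tests below drive ServerGroupController with request bodies of the
# shape the os-server-groups API expects; a representative create payload
# (values illustrative) looks like:
#
#     {'server_group': {'name': 'test', 'policies': ['anti-affinity']}}
#
# At the API versions tested here only 'affinity' and 'anti-affinity' pass
# schema validation as policies.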
import mock from oslo_utils import uuidutils import webob from nova.api.openstack.compute import server_groups as sg_v21 from nova import context import nova.db from nova import exception from nova import objects from nova.policies import server_groups as sg_policies from nova import test from nova.tests import fixtures from nova.tests.unit.api.openstack import fakes from nova.tests.unit import policy_fixture from nova.tests import uuidsentinel class AttrDict(dict): def __getattr__(self, k): return self[k] def server_group_template(**kwargs): sgroup = kwargs.copy() sgroup.setdefault('name', 'test') return sgroup def server_group_resp_template(**kwargs): sgroup = kwargs.copy() sgroup.setdefault('name', 'test') sgroup.setdefault('policies', []) sgroup.setdefault('members', []) return sgroup def server_group_db(sg): attrs = sg.copy() if 'id' in attrs: attrs['uuid'] = attrs.pop('id') if 'policies' in attrs: policies = attrs.pop('policies') attrs['policies'] = policies else: attrs['policies'] = [] if 'members' in attrs: members = attrs.pop('members') attrs['members'] = members else: attrs['members'] = [] attrs['deleted'] = 0 attrs['deleted_at'] = None attrs['created_at'] = None attrs['updated_at'] = None if 'user_id' not in attrs: attrs['user_id'] = fakes.FAKE_USER_ID if 'project_id' not in attrs: attrs['project_id'] = fakes.FAKE_PROJECT_ID attrs['id'] = 7 return AttrDict(attrs) class ServerGroupTestV21(test.NoDBTestCase): USES_DB_SELF = True validation_error = exception.ValidationError def setUp(self): super(ServerGroupTestV21, self).setUp() self._setup_controller() self.req = fakes.HTTPRequest.blank('') self.admin_req = fakes.HTTPRequest.blank('', use_admin_context=True) self.foo_req = fakes.HTTPRequest.blank('', project_id='foo') self.policy = self.useFixture(policy_fixture.RealPolicyFixture()) self.useFixture(fixtures.Database(database='api')) cells = fixtures.CellDatabases() cells.add_cell_database(uuidsentinel.cell1) cells.add_cell_database(uuidsentinel.cell2) self.useFixture(cells) ctxt = context.get_admin_context() self.cells = {} for uuid in (uuidsentinel.cell1, uuidsentinel.cell2): cm = objects.CellMapping(context=ctxt, uuid=uuid, database_connection=uuid, transport_url=uuid) cm.create() self.cells[cm.uuid] = cm def _setup_controller(self): self.controller = sg_v21.ServerGroupController() def test_create_server_group_with_no_policies(self): sgroup = server_group_template() self.assertRaises(self.validation_error, self.controller.create, self.req, body={'server_group': sgroup}) def _create_server_group_normal(self, policies): sgroup = server_group_template() sgroup['policies'] = policies res_dict = self.controller.create(self.req, body={'server_group': sgroup}) self.assertEqual(res_dict['server_group']['name'], 'test') self.assertTrue(uuidutils.is_uuid_like(res_dict['server_group']['id'])) self.assertEqual(res_dict['server_group']['policies'], policies) def test_create_server_group(self): policies = ['affinity', 'anti-affinity'] for policy in policies: self._create_server_group_normal([policy]) def test_create_server_group_rbac_default(self): sgroup = server_group_template() sgroup['policies'] = ['affinity'] # test as admin self.controller.create(self.admin_req, body={'server_group': sgroup}) # test as non-admin self.controller.create(self.req, body={'server_group': sgroup}) def test_create_server_group_rbac_admin_only(self): sgroup = server_group_template() sgroup['policies'] = ['affinity'] # override policy to restrict to admin rule_name = sg_policies.POLICY_ROOT % 'create' rules = 
{rule_name: 'is_admin:True'} self.policy.set_rules(rules, overwrite=False) # check for success as admin self.controller.create(self.admin_req, body={'server_group': sgroup}) # check for failure as non-admin exc = self.assertRaises(exception.PolicyNotAuthorized, self.controller.create, self.req, body={'server_group': sgroup}) self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) def _create_instance(self, ctx, cell): with context.target_cell(ctx, cell) as cctx: instance = objects.Instance(context=cctx, image_ref=uuidsentinel.fake_image_ref, node='node1', reservation_id='a', host='host1', project_id='fake', vm_state='fake', system_metadata={'key': 'value'}) instance.create() im = objects.InstanceMapping(context=ctx, project_id=ctx.project_id, user_id=ctx.user_id, cell_mapping=cell, instance_uuid=instance.uuid) im.create() return instance def _create_instance_group(self, context, members): ig = objects.InstanceGroup(context=context, name='fake_name', user_id='fake_user', project_id='fake', members=members) ig.create() return ig.uuid def _create_groups_and_instances(self, ctx): cell1 = self.cells[uuidsentinel.cell1] cell2 = self.cells[uuidsentinel.cell2] instances = [self._create_instance(ctx, cell=cell1), self._create_instance(ctx, cell=cell2), self._create_instance(ctx, cell=None)] members = [instance.uuid for instance in instances] ig_uuid = self._create_instance_group(ctx, members) return (ig_uuid, instances, members) def _test_list_server_group_all(self, api_version='2.1'): self._test_list_server_group(api_version=api_version, limited='', path='/os-server-groups?all_projects=True') def _test_list_server_group_offset_and_limit(self, api_version='2.1'): self._test_list_server_group(api_version=api_version, limited='&offset=1&limit=1', path='/os-server-groups?all_projects=True') @mock.patch.object(nova.db, 'instance_group_get_all_by_project_id') @mock.patch.object(nova.db, 'instance_group_get_all') def _test_list_server_group(self, mock_get_all, mock_get_by_project, path, api_version='2.1', limited=None): policies = ['anti-affinity'] members = [] metadata = {} # always empty names = ['default-x', 'test'] p_id = fakes.FAKE_PROJECT_ID u_id = fakes.FAKE_USER_ID if api_version >= '2.13': sg1 = server_group_resp_template(id=uuidsentinel.sg1_id, name=names[0], policies=policies, members=members, metadata=metadata, project_id=p_id, user_id=u_id) sg2 = server_group_resp_template(id=uuidsentinel.sg2_id, name=names[1], policies=policies, members=members, metadata=metadata, project_id=p_id, user_id=u_id) else: sg1 = server_group_resp_template(id=uuidsentinel.sg1_id, name=names[0], policies=policies, members=members, metadata=metadata) sg2 = server_group_resp_template(id=uuidsentinel.sg2_id, name=names[1], policies=policies, members=members, metadata=metadata) tenant_groups = [sg2] all_groups = [sg1, sg2] if limited: all = {'server_groups': [sg2]} tenant_specific = {'server_groups': []} else: all = {'server_groups': all_groups} tenant_specific = {'server_groups': tenant_groups} def return_all_server_groups(): return [server_group_db(sg) for sg in all_groups] mock_get_all.return_value = return_all_server_groups() def return_tenant_server_groups(): return [server_group_db(sg) for sg in tenant_groups] mock_get_by_project.return_value = return_tenant_server_groups() path = '/os-server-groups?all_projects=True' if limited: path += limited req = fakes.HTTPRequest.blank(path, version=api_version) admin_req = fakes.HTTPRequest.blank(path, use_admin_context=True, 
version=api_version) # test as admin res_dict = self.controller.index(admin_req) self.assertEqual(all, res_dict) # test as non-admin res_dict = self.controller.index(req) self.assertEqual(tenant_specific, res_dict) @mock.patch.object(nova.db, 'instance_group_get_all_by_project_id') def _test_list_server_group_by_tenant(self, mock_get_by_project, api_version='2.1'): policies = ['anti-affinity'] members = [] metadata = {} # always empty names = ['default-x', 'test'] p_id = fakes.FAKE_PROJECT_ID u_id = fakes.FAKE_USER_ID if api_version >= '2.13': sg1 = server_group_resp_template(id=uuidsentinel.sg1_id, name=names[0], policies=policies, members=members, metadata=metadata, project_id=p_id, user_id=u_id) sg2 = server_group_resp_template(id=uuidsentinel.sg2_id, name=names[1], policies=policies, members=members, metadata=metadata, project_id=p_id, user_id=u_id) else: sg1 = server_group_resp_template(id=uuidsentinel.sg1_id, name=names[0], policies=policies, members=members, metadata=metadata) sg2 = server_group_resp_template(id=uuidsentinel.sg2_id, name=names[1], policies=policies, members=members, metadata=metadata) groups = [sg1, sg2] expected = {'server_groups': groups} def return_server_groups(): return [server_group_db(sg) for sg in groups] return_get_by_project = return_server_groups() mock_get_by_project.return_value = return_get_by_project path = '/os-server-groups' req = fakes.HTTPRequest.blank(path, version=api_version) res_dict = self.controller.index(req) self.assertEqual(expected, res_dict) def test_display_members(self): ctx = context.RequestContext('fake_user', 'fake') (ig_uuid, instances, members) = self._create_groups_and_instances(ctx) res_dict = self.controller.show(self.req, ig_uuid) result_members = res_dict['server_group']['members'] self.assertEqual(3, len(result_members)) for member in members: self.assertIn(member, result_members) def test_display_members_with_nonexistent_group(self): self.assertRaises(webob.exc.HTTPNotFound, self.controller.show, self.req, uuidsentinel.group) def test_display_active_members_only(self): ctx = context.RequestContext('fake_user', 'fake') (ig_uuid, instances, members) = self._create_groups_and_instances(ctx) # delete an instance im = objects.InstanceMapping.get_by_instance_uuid(ctx, instances[1].uuid) with context.target_cell(ctx, im.cell_mapping) as cctxt: instances[1]._context = cctxt instances[1].destroy() # check that the instance does not exist self.assertRaises(exception.InstanceNotFound, objects.Instance.get_by_uuid, ctx, instances[1].uuid) res_dict = self.controller.show(self.req, ig_uuid) result_members = res_dict['server_group']['members'] # check that only the active instance is displayed self.assertEqual(2, len(result_members)) self.assertIn(instances[0].uuid, result_members) def test_display_members_rbac_default(self): ctx = context.RequestContext('fake_user', 'fake') ig_uuid = self._create_groups_and_instances(ctx)[0] # test as admin self.controller.show(self.admin_req, ig_uuid) # test as non-admin, same project self.controller.show(self.req, ig_uuid) # test as non-admin, different project self.assertRaises(webob.exc.HTTPNotFound, self.controller.show, self.foo_req, ig_uuid) def test_display_members_rbac_admin_only(self): ctx = context.RequestContext('fake_user', 'fake') ig_uuid = self._create_groups_and_instances(ctx)[0] # override policy to restrict to admin rule_name = sg_policies.POLICY_ROOT % 'show' rules = {rule_name: 'is_admin:True'} self.policy.set_rules(rules, overwrite=False) # check for success as admin 
self.controller.show(self.admin_req, ig_uuid) # check for failure as non-admin exc = self.assertRaises(exception.PolicyNotAuthorized, self.controller.show, self.req, ig_uuid) self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) def test_create_server_group_with_non_alphanumeric_in_name(self): # The fix for bug #1434335 expanded the allowable character set # for server group names to include non-alphanumeric characters # if they are printable. sgroup = server_group_template(name='good* $%name', policies=['affinity']) res_dict = self.controller.create(self.req, body={'server_group': sgroup}) self.assertEqual(res_dict['server_group']['name'], 'good* $%name') def test_create_server_group_with_illegal_name(self): # blank name sgroup = server_group_template(name='', policies=['test_policy']) self.assertRaises(self.validation_error, self.controller.create, self.req, body={'server_group': sgroup}) # name with length 256 sgroup = server_group_template(name='1234567890' * 26, policies=['test_policy']) self.assertRaises(self.validation_error, self.controller.create, self.req, body={'server_group': sgroup}) # non-string name sgroup = server_group_template(name=12, policies=['test_policy']) self.assertRaises(self.validation_error, self.controller.create, self.req, body={'server_group': sgroup}) # name with leading spaces sgroup = server_group_template(name=' leading spaces', policies=['test_policy']) self.assertRaises(self.validation_error, self.controller.create, self.req, body={'server_group': sgroup}) # name with trailing spaces sgroup = server_group_template(name='trailing space ', policies=['test_policy']) self.assertRaises(self.validation_error, self.controller.create, self.req, body={'server_group': sgroup}) # name with all spaces sgroup = server_group_template(name=' ', policies=['test_policy']) self.assertRaises(self.validation_error, self.controller.create, self.req, body={'server_group': sgroup}) # name with unprintable character sgroup = server_group_template(name='bad\x00name', policies=['test_policy']) self.assertRaises(self.validation_error, self.controller.create, self.req, body={'server_group': sgroup}) # name with out of range char U0001F4A9 sgroup = server_group_template(name=u"\U0001F4A9", policies=['affinity']) self.assertRaises(self.validation_error, self.controller.create, self.req, body={'server_group': sgroup}) def test_create_server_group_with_illegal_policies(self): # blank policy sgroup = server_group_template(name='fake-name', policies='') self.assertRaises(self.validation_error, self.controller.create, self.req, body={'server_group': sgroup}) # policy as integer sgroup = server_group_template(name='fake-name', policies=7) self.assertRaises(self.validation_error, self.controller.create, self.req, body={'server_group': sgroup}) # policy as string sgroup = server_group_template(name='fake-name', policies='invalid') self.assertRaises(self.validation_error, self.controller.create, self.req, body={'server_group': sgroup}) # policy as None sgroup = server_group_template(name='fake-name', policies=None) self.assertRaises(self.validation_error, self.controller.create, self.req, body={'server_group': sgroup}) def test_create_server_group_conflicting_policies(self): sgroup = server_group_template() policies = ['anti-affinity', 'affinity'] sgroup['policies'] = policies self.assertRaises(self.validation_error, self.controller.create, self.req, body={'server_group': sgroup}) def test_create_server_group_with_duplicate_policies(self): sgroup = 
server_group_template() policies = ['affinity', 'affinity'] sgroup['policies'] = policies self.assertRaises(self.validation_error, self.controller.create, self.req, body={'server_group': sgroup}) def test_create_server_group_not_supported(self): sgroup = server_group_template() policies = ['storage-affinity', 'anti-affinity', 'rack-affinity'] sgroup['policies'] = policies self.assertRaises(self.validation_error, self.controller.create, self.req, body={'server_group': sgroup}) def test_create_server_group_with_no_body(self): self.assertRaises(self.validation_error, self.controller.create, self.req, body=None) def test_create_server_group_with_no_server_group(self): body = {'no-instanceGroup': None} self.assertRaises(self.validation_error, self.controller.create, self.req, body=body) def test_list_server_group_by_tenant(self): self._test_list_server_group_by_tenant(api_version='2.1') def test_list_server_group_all_v20(self): self._test_list_server_group_all(api_version='2.0') def test_list_server_group_all(self): self._test_list_server_group_all(api_version='2.1') def test_list_server_group_offset_and_limit(self): self._test_list_server_group_offset_and_limit(api_version='2.1') def test_list_server_groups_rbac_default(self): # test as admin self.controller.index(self.admin_req) # test as non-admin self.controller.index(self.req) def test_list_server_group_multiple_param(self): self._test_list_server_group(api_version='2.1', limited='&offset=2&limit=2&limit=1&offset=1', path='/os-server-groups?all_projects=False&all_projects=True') def test_list_server_group_additional_param(self): self._test_list_server_group(api_version='2.1', limited='&offset=1&limit=1', path='/os-server-groups?dummy=False&all_projects=True') def test_list_server_group_param_as_int(self): self._test_list_server_group(api_version='2.1', limited='&offset=1&limit=1', path='/os-server-groups?all_projects=1') def test_list_server_group_negative_int_as_offset(self): self.assertRaises(exception.ValidationError, self._test_list_server_group, api_version='2.1', limited='&offset=-1', path='/os-server-groups?all_projects=1') def test_list_server_group_string_int_as_offset(self): self.assertRaises(exception.ValidationError, self._test_list_server_group, api_version='2.1', limited='&offset=dummy', path='/os-server-groups?all_projects=1') def test_list_server_group_multiparam_string_as_offset(self): self.assertRaises(exception.ValidationError, self._test_list_server_group, api_version='2.1', limited='&offset=dummy&offset=1', path='/os-server-groups?all_projects=1') def test_list_server_group_negative_int_as_limit(self): self.assertRaises(exception.ValidationError, self._test_list_server_group, api_version='2.1', limited='&limit=-1', path='/os-server-groups?all_projects=1') def test_list_server_group_string_int_as_limit(self): self.assertRaises(exception.ValidationError, self._test_list_server_group, api_version='2.1', limited='&limit=dummy', path='/os-server-groups?all_projects=1') def test_list_server_group_multiparam_string_as_limit(self): self.assertRaises(exception.ValidationError, self._test_list_server_group, api_version='2.1', limited='&limit=dummy&limit=1', path='/os-server-groups?all_projects=1') def test_list_server_groups_rbac_admin_only(self): # override policy to restrict to admin rule_name = sg_policies.POLICY_ROOT % 'index' rules = {rule_name: 'is_admin:True'} self.policy.set_rules(rules, overwrite=False) # check for success as admin self.controller.index(self.admin_req) # check for failure as non-admin exc = 
self.assertRaises(exception.PolicyNotAuthorized, self.controller.index, self.req) self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) def test_delete_server_group_by_id(self): sg = server_group_template(id=uuidsentinel.sg1_id) self.called = False def server_group_delete(context, id): self.called = True def return_server_group(context, group_id): self.assertEqual(sg['id'], group_id) return server_group_db(sg) self.stub_out('nova.db.instance_group_delete', server_group_delete) self.stub_out('nova.db.instance_group_get', return_server_group) resp = self.controller.delete(self.req, uuidsentinel.sg1_id) self.assertTrue(self.called) # NOTE: on v2.1, http status code is set as wsgi_code of API # method instead of status_int in a response object. if isinstance(self.controller, sg_v21.ServerGroupController): status_int = self.controller.delete.wsgi_code else: status_int = resp.status_int self.assertEqual(204, status_int) def test_delete_non_existing_server_group(self): self.assertRaises(webob.exc.HTTPNotFound, self.controller.delete, self.req, 'invalid') def test_delete_server_group_rbac_default(self): ctx = context.RequestContext('fake_user', 'fake') # test as admin ig_uuid = self._create_groups_and_instances(ctx)[0] self.controller.delete(self.admin_req, ig_uuid) # test as non-admin ig_uuid = self._create_groups_and_instances(ctx)[0] self.controller.delete(self.req, ig_uuid) def test_delete_server_group_rbac_admin_only(self): ctx = context.RequestContext('fake_user', 'fake') # override policy to restrict to admin rule_name = sg_policies.POLICY_ROOT % 'delete' rules = {rule_name: 'is_admin:True'} self.policy.set_rules(rules, overwrite=False) # check for success as admin ig_uuid = self._create_groups_and_instances(ctx)[0] self.controller.delete(self.admin_req, ig_uuid) # check for failure as non-admin ig_uuid = self._create_groups_and_instances(ctx)[0] exc = self.assertRaises(exception.PolicyNotAuthorized, self.controller.delete, self.req, ig_uuid) self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) class ServerGroupTestV213(ServerGroupTestV21): wsgi_api_version = '2.13' def _setup_controller(self): self.controller = sg_v21.ServerGroupController() def test_list_server_group_all(self): self._test_list_server_group_all(api_version='2.13') def test_list_server_group_offset_and_limit(self): self._test_list_server_group_offset_and_limit(api_version='2.13') def test_list_server_group_by_tenant(self): self._test_list_server_group_by_tenant(api_version='2.13') nova-17.0.1/nova/tests/unit/api/openstack/compute/test_cloudpipe.py0000666000175000017500000000331413250073126025464 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from oslo_utils import uuidutils
from webob import exc

from nova.api.openstack.compute import cloudpipe as cloudpipe_v21
from nova import test
from nova.tests.unit.api.openstack import fakes


project_id = uuidutils.generate_uuid(dashed=False)


class CloudpipeTestV21(test.NoDBTestCase):
    cloudpipe = cloudpipe_v21
    url = '/v2/fake/os-cloudpipe'

    def setUp(self):
        super(CloudpipeTestV21, self).setUp()
        self.controller = self.cloudpipe.CloudpipeController()
        self.req = fakes.HTTPRequest.blank('')

    def test_cloudpipe_list(self):
        self.assertRaises(exc.HTTPGone, self.controller.index, self.req)

    def test_cloudpipe_create(self):
        body = {'cloudpipe': {'project_id': project_id}}
        self.assertRaises(exc.HTTPGone, self.controller.create,
                          self.req, body=body)

    def test_cloudpipe_configure_project(self):
        body = {"configure_project": {"vpn_ip": "1.2.3.4", "vpn_port": 222}}
        self.assertRaises(exc.HTTPGone, self.controller.update,
                          self.req, 'configure-project', body=body)
nova-17.0.1/nova/tests/unit/api/openstack/compute/test_images.py0000666000175000017500000004617113250073136024756 0ustar zuulzuul00000000000000
# Copyright 2010 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
Tests of the new image services, both as a service layer,
and as a WSGI layer
"""

import copy

import mock
import six.moves.urllib.parse as urlparse
import webob

from nova.api.openstack.compute import images as images_v21
from nova.api.openstack.compute.views import images as images_view
from nova import exception
from nova.image import glance
from nova import test
from nova.tests.unit.api.openstack import fakes
from nova.tests.unit import image_fixtures
from nova.tests.unit import matchers

NS = "{http://docs.openstack.org/compute/api/v1.1}"
ATOMNS = "{http://www.w3.org/2005/Atom}"
NOW_API_FORMAT = "2010-10-11T10:30:22Z"
IMAGE_FIXTURES = image_fixtures.get_image_fixtures()


class ImagesControllerTestV21(test.NoDBTestCase):
    """Test of the OpenStack API /images application controller w/Glance.
""" image_controller_class = images_v21.ImagesController url_base = '/v2/fake' bookmark_base = '/fake' http_request = fakes.HTTPRequestV21 def setUp(self): """Run before each test.""" super(ImagesControllerTestV21, self).setUp() self.flags(api_servers=['http://localhost:9292'], group='glance') fakes.stub_out_networking(self) fakes.stub_out_key_pair_funcs(self) fakes.stub_out_compute_api_snapshot(self.stubs) fakes.stub_out_compute_api_backup(self.stubs) self.controller = self.image_controller_class() self.url_prefix = "http://localhost%s/images" % self.url_base self.bookmark_prefix = "http://localhost%s/images" % self.bookmark_base self.uuid = 'fa95aaf5-ab3b-4cd8-88c0-2be7dd051aaf' self.server_uuid = "aa640691-d1a7-4a67-9d3c-d35ee6b3cc74" self.server_href = ( "http://localhost%s/servers/%s" % (self.url_base, self.server_uuid)) self.server_bookmark = ( "http://localhost%s/servers/%s" % (self.bookmark_base, self.server_uuid)) self.alternate = "%s/images/%s" self.expected_image_123 = { "image": {'id': '123', 'name': 'public image', 'metadata': {'key1': 'value1'}, 'updated': NOW_API_FORMAT, 'created': NOW_API_FORMAT, 'status': 'ACTIVE', 'minDisk': 10, 'progress': 100, 'minRam': 128, "links": [{ "rel": "self", "href": "%s/123" % self.url_prefix }, { "rel": "bookmark", "href": "%s/123" % self.bookmark_prefix }, { "rel": "alternate", "type": "application/vnd.openstack.image", "href": self.alternate % (glance.generate_glance_url('ctx'), 123), }], }, } self.expected_image_124 = { "image": {'id': '124', 'name': 'queued snapshot', 'metadata': { u'instance_uuid': self.server_uuid, u'user_id': u'fake', }, 'updated': NOW_API_FORMAT, 'created': NOW_API_FORMAT, 'status': 'SAVING', 'progress': 25, 'minDisk': 0, 'minRam': 0, 'server': { 'id': self.server_uuid, "links": [{ "rel": "self", "href": self.server_href, }, { "rel": "bookmark", "href": self.server_bookmark, }], }, "links": [{ "rel": "self", "href": "%s/124" % self.url_prefix }, { "rel": "bookmark", "href": "%s/124" % self.bookmark_prefix }, { "rel": "alternate", "type": "application/vnd.openstack.image", "href": self.alternate % (glance.generate_glance_url('ctx'), 124), }], }, } @mock.patch('nova.image.api.API.get', return_value=IMAGE_FIXTURES[0]) def test_get_image(self, get_mocked): request = self.http_request.blank(self.url_base + 'images/123') actual_image = self.controller.show(request, '123') self.assertThat(actual_image, matchers.DictMatches(self.expected_image_123)) get_mocked.assert_called_once_with(mock.ANY, '123') @mock.patch('nova.image.api.API.get', return_value=IMAGE_FIXTURES[1]) def test_get_image_with_custom_prefix(self, _get_mocked): self.flags(compute_link_prefix='https://zoo.com:42', glance_link_prefix='http://circus.com:34', group='api') fake_req = self.http_request.blank(self.url_base + 'images/124') actual_image = self.controller.show(fake_req, '124') expected_image = self.expected_image_124 expected_image["image"]["links"][0]["href"] = ( "https://zoo.com:42%s/images/124" % self.url_base) expected_image["image"]["links"][1]["href"] = ( "https://zoo.com:42%s/images/124" % self.bookmark_base) expected_image["image"]["links"][2]["href"] = ( "http://circus.com:34/images/124") expected_image["image"]["server"]["links"][0]["href"] = ( "https://zoo.com:42%s/servers/%s" % (self.url_base, self.server_uuid)) expected_image["image"]["server"]["links"][1]["href"] = ( "https://zoo.com:42%s/servers/%s" % (self.bookmark_base, self.server_uuid)) self.assertThat(actual_image, matchers.DictMatches(expected_image)) 
@mock.patch('nova.image.api.API.get', side_effect=exception.ImageNotFound(image_id='')) def test_get_image_404(self, _get_mocked): fake_req = self.http_request.blank(self.url_base + 'images/unknown') self.assertRaises(webob.exc.HTTPNotFound, self.controller.show, fake_req, 'unknown') @mock.patch('nova.image.api.API.get_all', return_value=IMAGE_FIXTURES) def test_get_image_details(self, get_all_mocked): request = self.http_request.blank(self.url_base + 'images/detail') response = self.controller.detail(request) get_all_mocked.assert_called_once_with(mock.ANY, filters={}) response_list = response["images"] image_125 = copy.deepcopy(self.expected_image_124["image"]) image_125['id'] = '125' image_125['name'] = 'saving snapshot' image_125['progress'] = 50 image_125["links"][0]["href"] = "%s/125" % self.url_prefix image_125["links"][1]["href"] = "%s/125" % self.bookmark_prefix image_125["links"][2]["href"] = ( "%s/images/125" % glance.generate_glance_url('ctx')) image_126 = copy.deepcopy(self.expected_image_124["image"]) image_126['id'] = '126' image_126['name'] = 'active snapshot' image_126['status'] = 'ACTIVE' image_126['progress'] = 100 image_126["links"][0]["href"] = "%s/126" % self.url_prefix image_126["links"][1]["href"] = "%s/126" % self.bookmark_prefix image_126["links"][2]["href"] = ( "%s/images/126" % glance.generate_glance_url('ctx')) image_127 = copy.deepcopy(self.expected_image_124["image"]) image_127['id'] = '127' image_127['name'] = 'killed snapshot' image_127['status'] = 'ERROR' image_127['progress'] = 0 image_127["links"][0]["href"] = "%s/127" % self.url_prefix image_127["links"][1]["href"] = "%s/127" % self.bookmark_prefix image_127["links"][2]["href"] = ( "%s/images/127" % glance.generate_glance_url('ctx')) image_128 = copy.deepcopy(self.expected_image_124["image"]) image_128['id'] = '128' image_128['name'] = 'deleted snapshot' image_128['status'] = 'DELETED' image_128['progress'] = 0 image_128["links"][0]["href"] = "%s/128" % self.url_prefix image_128["links"][1]["href"] = "%s/128" % self.bookmark_prefix image_128["links"][2]["href"] = ( "%s/images/128" % glance.generate_glance_url('ctx')) image_129 = copy.deepcopy(self.expected_image_124["image"]) image_129['id'] = '129' image_129['name'] = 'pending_delete snapshot' image_129['status'] = 'DELETED' image_129['progress'] = 0 image_129["links"][0]["href"] = "%s/129" % self.url_prefix image_129["links"][1]["href"] = "%s/129" % self.bookmark_prefix image_129["links"][2]["href"] = ( "%s/images/129" % glance.generate_glance_url('ctx')) image_130 = copy.deepcopy(self.expected_image_123["image"]) image_130['id'] = '130' image_130['name'] = None image_130['metadata'] = {} image_130['minDisk'] = 0 image_130['minRam'] = 0 image_130["links"][0]["href"] = "%s/130" % self.url_prefix image_130["links"][1]["href"] = "%s/130" % self.bookmark_prefix image_130["links"][2]["href"] = ( "%s/images/130" % glance.generate_glance_url('ctx')) image_131 = copy.deepcopy(self.expected_image_123["image"]) image_131['id'] = '131' image_131['name'] = None image_131['metadata'] = {} image_131['minDisk'] = 0 image_131['minRam'] = 0 image_131["links"][0]["href"] = "%s/131" % self.url_prefix image_131["links"][1]["href"] = "%s/131" % self.bookmark_prefix image_131["links"][2]["href"] = ( "%s/images/131" % glance.generate_glance_url('ctx')) expected = [self.expected_image_123["image"], self.expected_image_124["image"], image_125, image_126, image_127, image_128, image_129, image_130, image_131] self.assertThat(expected, matchers.DictListMatches(response_list)) 
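
    # Two mock.patch idioms recur in this file: return_value feeds the
    # controller canned image data, while side_effect=SomeException(...)
    # makes the patched image-API call raise, letting the tests assert
    # that internal exceptions (e.g. ImageNotFound) are translated into
    # HTTP errors such as 404.  A self-contained toy illustration (a
    # sketch, not nova code):
    #
    #     import mock
    #
    #     fetch = mock.Mock(return_value={'id': '123'})
    #     assert fetch('123') == {'id': '123'}   # canned data comes back
    #
    #     fetch = mock.Mock(side_effect=KeyError('123'))
    #     try:
    #         fetch('123')                       # raises instead
    #     except KeyError:
    #         pass
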
@mock.patch('nova.image.api.API.get_all') def test_get_image_details_with_limit(self, get_all_mocked): request = self.http_request.blank(self.url_base + 'images/detail?limit=2') self.controller.detail(request) get_all_mocked.assert_called_once_with(mock.ANY, limit=2, filters={}) @mock.patch('nova.image.api.API.get_all') def test_get_image_details_with_limit_and_page_size(self, get_all_mocked): request = self.http_request.blank( self.url_base + 'images/detail?limit=2&page_size=1') self.controller.detail(request) get_all_mocked.assert_called_once_with(mock.ANY, limit=2, filters={}, page_size=1) @mock.patch('nova.image.api.API.get_all') def _detail_request(self, filters, request, get_all_mocked): self.controller.detail(request) get_all_mocked.assert_called_once_with(mock.ANY, filters=filters) def test_image_detail_filter_with_name(self): filters = {'name': 'testname'} request = self.http_request.blank(self.url_base + 'images/detail' '?name=testname') self._detail_request(filters, request) def test_image_detail_filter_with_status(self): filters = {'status': 'active'} request = self.http_request.blank(self.url_base + 'images/detail' '?status=ACTIVE') self._detail_request(filters, request) def test_image_detail_filter_with_property(self): filters = {'property-test': '3'} request = self.http_request.blank(self.url_base + 'images/detail' '?property-test=3') self._detail_request(filters, request) def test_image_detail_filter_server_href(self): filters = {'property-instance_uuid': self.uuid} request = self.http_request.blank( self.url_base + 'images/detail?server=' + self.uuid) self._detail_request(filters, request) def test_image_detail_filter_server_uuid(self): filters = {'property-instance_uuid': self.uuid} request = self.http_request.blank( self.url_base + 'images/detail?server=' + self.uuid) self._detail_request(filters, request) def test_image_detail_filter_changes_since(self): filters = {'changes-since': '2011-01-24T17:08Z'} request = self.http_request.blank(self.url_base + 'images/detail' '?changes-since=2011-01-24T17:08Z') self._detail_request(filters, request) def test_image_detail_filter_with_type(self): filters = {'property-image_type': 'BASE'} request = self.http_request.blank( self.url_base + 'images/detail?type=BASE') self._detail_request(filters, request) def test_image_detail_filter_not_supported(self): filters = {'status': 'active'} request = self.http_request.blank( self.url_base + 'images/detail?status=' 'ACTIVE&UNSUPPORTEDFILTER=testname') self._detail_request(filters, request) def test_image_detail_no_filters(self): filters = {} request = self.http_request.blank(self.url_base + 'images/detail') self._detail_request(filters, request) @mock.patch('nova.image.api.API.get_all', side_effect=exception.Invalid) def test_image_detail_invalid_marker(self, _get_all_mocked): request = self.http_request.blank(self.url_base + '?marker=invalid') self.assertRaises(webob.exc.HTTPBadRequest, self.controller.detail, request) def test_generate_alternate_link(self): view = images_view.ViewBuilder() request = self.http_request.blank(self.url_base + 'images/1') generated_url = view._get_alternate_link(request, 1) actual_url = "%s/images/1" % glance.generate_glance_url('ctx') self.assertEqual(generated_url, actual_url) def _check_response(self, controller_method, response, expected_code): self.assertEqual(expected_code, controller_method.wsgi_code) @mock.patch('nova.image.api.API.delete') def test_delete_image(self, delete_mocked): request = self.http_request.blank(self.url_base + 'images/124') 
        request.method = 'DELETE'
        delete_method = self.controller.delete
        response = delete_method(request, '124')
        self._check_response(delete_method, response, 204)
        delete_mocked.assert_called_once_with(mock.ANY, '124')

    @mock.patch('nova.image.api.API.delete',
                side_effect=exception.ImageNotAuthorized(image_id='123'))
    def test_delete_deleted_image(self, _delete_mocked):
        # If you try to delete a deleted image, you get back 403 Forbidden.
        request = self.http_request.blank(self.url_base + 'images/123')
        request.method = 'DELETE'
        self.assertRaises(webob.exc.HTTPForbidden, self.controller.delete,
                          request, '123')

    @mock.patch('nova.image.api.API.delete',
                side_effect=exception.ImageNotFound(image_id='123'))
    def test_delete_image_not_found(self, _delete_mocked):
        request = self.http_request.blank(self.url_base + 'images/300')
        request.method = 'DELETE'
        self.assertRaises(webob.exc.HTTPNotFound,
                          self.controller.delete, request, '300')

    @mock.patch('nova.image.api.API.get_all',
                return_value=[IMAGE_FIXTURES[0]])
    def test_get_image_next_link(self, get_all_mocked):
        request = self.http_request.blank(
            self.url_base + 'images?limit=1')
        response = self.controller.index(request)
        response_links = response['images_links']
        href_parts = urlparse.urlparse(response_links[0]['href'])
        self.assertEqual(self.url_base + '/images', href_parts.path)
        params = urlparse.parse_qs(href_parts.query)
        self.assertThat({'limit': ['1'], 'marker': [IMAGE_FIXTURES[0]['id']]},
                        matchers.DictMatches(params))

    @mock.patch('nova.image.api.API.get_all',
                return_value=[IMAGE_FIXTURES[0]])
    def test_get_image_details_next_link(self, get_all_mocked):
        request = self.http_request.blank(
            self.url_base + 'images/detail?limit=1')
        response = self.controller.detail(request)
        response_links = response['images_links']
        href_parts = urlparse.urlparse(response_links[0]['href'])
        self.assertEqual(self.url_base + '/images/detail', href_parts.path)
        params = urlparse.parse_qs(href_parts.query)
        self.assertThat({'limit': ['1'], 'marker': [IMAGE_FIXTURES[0]['id']]},
                        matchers.DictMatches(params))


class ImagesControllerDeprecationTest(test.NoDBTestCase):

    def setUp(self):
        super(ImagesControllerDeprecationTest, self).setUp()
        self.controller = images_v21.ImagesController()
        self.req = fakes.HTTPRequest.blank('', version='2.36')

    def test_not_found_for_all_images_api(self):
        self.assertRaises(exception.VersionNotFoundForAPIMethod,
                          self.controller.show, self.req, fakes.FAKE_UUID)
        self.assertRaises(exception.VersionNotFoundForAPIMethod,
                          self.controller.delete, self.req, fakes.FAKE_UUID)
        self.assertRaises(exception.VersionNotFoundForAPIMethod,
                          self.controller.index, self.req)
        self.assertRaises(exception.VersionNotFoundForAPIMethod,
                          self.controller.detail, self.req)
nova-17.0.1/nova/tests/unit/api/openstack/compute/test_availability_zone.py0000666000175000017500000002625213250073136027214 0ustar zuulzuul00000000000000
# Copyright 2012 IBM Corp.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
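
# The fixtures below fabricate nova-compute services in "zone-1"/"zone-2"
# plus infrastructure services in the reserved "internal" zone, and the
# controller is expected to report one entry per zone with its state.
# Conceptually the index view is a reduction like this -- a sketch with a
# hypothetical helper name, not nova's availability_zones module:
#
#     def summarize_zones(services):
#         """Map zone name -> state; a zone counts as available if at
#         least one service in it is enabled."""
#         zones = {}
#         for svc in services:
#             zone = zones.setdefault(svc['availability_zone'],
#                                     {'available': False})
#             if not svc['disabled']:
#                 zone['available'] = True
#         zones.pop('internal', None)  # the index view hides 'internal'
#         return zones
#
# The detail view, by contrast, keeps "internal" and adds per-host,
# per-service state, as test_availability_zone_detail asserts.
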
import datetime import iso8601 import mock from oslo_config import cfg from nova.api.openstack.compute import availability_zone as az_v21 from nova.api.openstack.compute import servers as servers_v21 from nova import availability_zones from nova.compute import api as compute_api from nova import context from nova import db from nova import exception from nova import test from nova.tests.unit.api.openstack import fakes from nova.tests.unit.image import fake from nova.tests.unit import matchers from nova.tests.unit.objects import test_service from nova.tests import uuidsentinel CONF = cfg.CONF FAKE_UUID = fakes.FAKE_UUID def fake_service_get_all(context, disabled=None): def __fake_service(binary, availability_zone, created_at, updated_at, host, disabled): return dict(test_service.fake_service, binary=binary, availability_zone=availability_zone, available_zones=availability_zone, created_at=created_at, updated_at=updated_at, host=host, disabled=disabled) if disabled: return [__fake_service("nova-compute", "zone-2", datetime.datetime(2012, 11, 14, 9, 53, 25, 0), datetime.datetime(2012, 12, 26, 14, 45, 25, 0), "fake_host-1", True), __fake_service("nova-scheduler", "internal", datetime.datetime(2012, 11, 14, 9, 57, 3, 0), datetime.datetime(2012, 12, 26, 14, 45, 25, 0), "fake_host-1", True), __fake_service("nova-network", "internal", datetime.datetime(2012, 11, 16, 7, 25, 46, 0), datetime.datetime(2012, 12, 26, 14, 45, 24, 0), "fake_host-2", True)] else: return [__fake_service("nova-compute", "zone-1", datetime.datetime(2012, 11, 14, 9, 53, 25, 0), datetime.datetime(2012, 12, 26, 14, 45, 25, 0), "fake_host-1", False), __fake_service("nova-sched", "internal", datetime.datetime(2012, 11, 14, 9, 57, 3, 0), datetime.datetime(2012, 12, 26, 14, 45, 25, 0), "fake_host-1", False), __fake_service("nova-network", "internal", datetime.datetime(2012, 11, 16, 7, 25, 46, 0), datetime.datetime(2012, 12, 26, 14, 45, 24, 0), "fake_host-2", False)] class AvailabilityZoneApiTestV21(test.NoDBTestCase): availability_zone = az_v21 def setUp(self): super(AvailabilityZoneApiTestV21, self).setUp() availability_zones.reset_cache() fakes.stub_out_nw_api(self) self.stub_out('nova.db.service_get_all', fake_service_get_all) self.stub_out('nova.availability_zones.set_availability_zones', lambda c, services: services) self.stub_out('nova.servicegroup.API.service_is_up', lambda s, service: service['binary'] != u"nova-network") self.controller = self.availability_zone.AvailabilityZoneController() self.req = fakes.HTTPRequest.blank('') def test_filtered_availability_zones(self): zones = ['zone1', 'internal'] expected = [{'zoneName': 'zone1', 'zoneState': {'available': True}, "hosts": None}] result = self.controller._get_filtered_availability_zones(zones, True) self.assertEqual(result, expected) expected = [{'zoneName': 'zone1', 'zoneState': {'available': False}, "hosts": None}] result = self.controller._get_filtered_availability_zones(zones, False) self.assertEqual(result, expected) def test_availability_zone_index(self): resp_dict = self.controller.index(self.req) self.assertIn('availabilityZoneInfo', resp_dict) zones = resp_dict['availabilityZoneInfo'] self.assertEqual(len(zones), 2) self.assertEqual(zones[0]['zoneName'], u'zone-1') self.assertTrue(zones[0]['zoneState']['available']) self.assertIsNone(zones[0]['hosts']) self.assertEqual(zones[1]['zoneName'], u'zone-2') self.assertFalse(zones[1]['zoneState']['available']) self.assertIsNone(zones[1]['hosts']) def test_availability_zone_detail(self): resp_dict = 
self.controller.detail(self.req) self.assertIn('availabilityZoneInfo', resp_dict) zones = resp_dict['availabilityZoneInfo'] self.assertEqual(len(zones), 3) timestamp = iso8601.parse_date("2012-12-26T14:45:25Z") nova_network_timestamp = iso8601.parse_date("2012-12-26T14:45:24Z") expected = [{'zoneName': 'zone-1', 'zoneState': {'available': True}, 'hosts': {'fake_host-1': { 'nova-compute': {'active': True, 'available': True, 'updated_at': timestamp}}}}, {'zoneName': 'internal', 'zoneState': {'available': True}, 'hosts': {'fake_host-1': { 'nova-sched': {'active': True, 'available': True, 'updated_at': timestamp}}, 'fake_host-2': { 'nova-network': { 'active': True, 'available': False, 'updated_at': nova_network_timestamp}}}}, {'zoneName': 'zone-2', 'zoneState': {'available': False}, 'hosts': None}] self.assertEqual(expected, zones) @mock.patch.object(availability_zones, 'get_availability_zones', return_value=[['nova'], []]) def test_availability_zone_detail_no_services(self, mock_get_az): expected_response = {'availabilityZoneInfo': [{'zoneState': {'available': True}, 'hosts': {}, 'zoneName': 'nova'}]} resp_dict = self.controller.detail(self.req) self.assertThat(resp_dict, matchers.DictMatches(expected_response)) class ServersControllerCreateTestV21(test.TestCase): base_url = '/v2/fake/' def setUp(self): """Shared implementation for tests below that create instance.""" super(ServersControllerCreateTestV21, self).setUp() self.instance_cache_num = 0 fakes.stub_out_nw_api(self) self._set_up_controller() def create_db_entry_for_new_instance(*args, **kwargs): instance = args[4] instance.uuid = FAKE_UUID return instance fake.stub_out_image_service(self) self.stub_out('nova.compute.api.API.create_db_entry_for_new_instance', create_db_entry_for_new_instance) self.req = fakes.HTTPRequest.blank('') def _set_up_controller(self): self.controller = servers_v21.ServersController() def _create_instance_with_availability_zone(self, zone_name): def create(*args, **kwargs): self.assertIn('availability_zone', kwargs) self.assertEqual('nova', kwargs['availability_zone']) return old_create(*args, **kwargs) old_create = compute_api.API.create self.stub_out('nova.compute.api.API.create', create) image_href = '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6' flavor_ref = ('http://localhost' + self.base_url + 'flavors/3') body = { 'server': { 'name': 'server_test', 'imageRef': image_href, 'flavorRef': flavor_ref, 'metadata': { 'hello': 'world', 'open': 'stack', }, 'availability_zone': zone_name, }, } admin_context = context.get_admin_context() db.service_create(admin_context, {'host': 'host1_zones', 'binary': "nova-compute", 'topic': 'compute', 'report_count': 0}) agg = db.aggregate_create(admin_context, {'name': 'agg1', 'uuid': uuidsentinel.agg_uuid}, {'availability_zone': 'nova'}) db.aggregate_host_add(admin_context, agg['id'], 'host1_zones') return self.req, body def test_create_instance_with_availability_zone(self): zone_name = 'nova' req, body = self._create_instance_with_availability_zone(zone_name) res = self.controller.create(req, body=body).obj server = res['server'] self.assertEqual(fakes.FAKE_UUID, server['id']) def test_create_instance_with_invalid_availability_zone_too_long(self): zone_name = 'a' * 256 req, body = self._create_instance_with_availability_zone(zone_name) self.assertRaises(exception.ValidationError, self.controller.create, req, body=body) def test_create_instance_with_invalid_availability_zone_too_short(self): zone_name = '' req, body = self._create_instance_with_availability_zone(zone_name) 
        self.assertRaises(exception.ValidationError,
                          self.controller.create, req, body=body)

    def test_create_instance_with_invalid_availability_zone_not_str(self):
        zone_name = 111
        req, body = self._create_instance_with_availability_zone(zone_name)
        self.assertRaises(exception.ValidationError,
                          self.controller.create, req, body=body)

    def test_create_instance_without_availability_zone(self):
        image_href = '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6'
        flavor_ref = ('http://localhost' + self.base_url + 'flavors/3')
        body = {
            'server': {
                'name': 'server_test',
                'imageRef': image_href,
                'flavorRef': flavor_ref,
                'metadata': {
                    'hello': 'world',
                    'open': 'stack',
                },
            },
        }
        res = self.controller.create(self.req, body=body).obj
        server = res['server']
        self.assertEqual(fakes.FAKE_UUID, server['id'])
nova-17.0.1/nova/tests/unit/api/openstack/compute/test_extended_volumes.py0000666000175000017500000002271113250073126027054 0ustar zuulzuul00000000000000
# Copyright 2013 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock
from oslo_serialization import jsonutils

from nova.api.openstack.compute import (extended_volumes as
                                        extended_volumes_v21)
from nova.api.openstack import wsgi as os_wsgi
from nova import compute
from nova import context as nova_context
from nova import objects
from nova.objects import instance as instance_obj
from nova import test
from nova.tests.unit.api.openstack import fakes
from nova.tests.unit import fake_block_device
from nova.tests.unit import fake_instance
from nova.tests import uuidsentinel as uuids
from nova import volume

UUID1 = '00000000-0000-0000-0000-000000000001'
UUID2 = '00000000-0000-0000-0000-000000000002'
UUID3 = '00000000-0000-0000-0000-000000000003'


def fake_compute_get(*args, **kwargs):
    inst = fakes.stub_instance(1, uuid=UUID1)
    return fake_instance.fake_instance_obj(args[1], **inst)


def fake_compute_get_all(*args, **kwargs):
    db_list = [
        fakes.stub_instance(1, uuid=UUID1),
        fakes.stub_instance(2, uuid=UUID2),
    ]
    fields = instance_obj.INSTANCE_DEFAULT_FIELDS
    return instance_obj._make_instance_list(args[1],
                                            objects.InstanceList(),
                                            db_list, fields)


def fake_bdms_get_all_by_instance_uuids(*args, **kwargs):
    return [
        fake_block_device.FakeDbBlockDeviceDict({
            'id': 1,
            'volume_id': 'some_volume_1',
            'instance_uuid': UUID1,
            'source_type': 'volume',
            'destination_type': 'volume',
            'delete_on_termination': True,
        }),
        fake_block_device.FakeDbBlockDeviceDict({
            'id': 2,
            'volume_id': 'some_volume_2',
            'instance_uuid': UUID2,
            'source_type': 'volume',
            'destination_type': 'volume',
            'delete_on_termination': False,
        }),
        fake_block_device.FakeDbBlockDeviceDict({
            'id': 3,
            'volume_id': 'some_volume_3',
            'instance_uuid': UUID2,
            'source_type': 'volume',
            'destination_type': 'volume',
            'delete_on_termination': False,
        }),
    ]


def fake_volume_get(*args, **kwargs):
    pass


class ExtendedVolumesTestV21(test.TestCase):
    content_type = 'application/json'
    prefix = 'os-extended-volumes:'
    exp_volumes_show = [{'id': 'some_volume_1'}]
    exp_volumes_detail = [
        [{'id': 'some_volume_1'}],
        [{'id': 'some_volume_2'},
         {'id': 'some_volume_3'}],
    ]
    wsgi_api_version = os_wsgi.DEFAULT_API_VERSION

    def setUp(self):
        super(ExtendedVolumesTestV21, self).setUp()
        fakes.stub_out_nw_api(self)
        fakes.stub_out_secgroup_api(self)
        self.stubs.Set(compute.api.API, 'get', fake_compute_get)
        self.stubs.Set(compute.api.API, 'get_all', fake_compute_get_all)
        self.stub_out(
            'nova.db.block_device_mapping_get_all_by_instance_uuids',
            fake_bdms_get_all_by_instance_uuids)
        self._setUp()
        self.app = self._setup_app()
        return_server = fakes.fake_instance_get()
        self.stub_out('nova.db.instance_get_by_uuid', return_server)

    def _setup_app(self):
        return fakes.wsgi_app_v21()

    def _setUp(self):
        self.Controller = extended_volumes_v21.ExtendedVolumesController()
        self.stubs.Set(volume.cinder.API, 'get', fake_volume_get)

    def _make_request(self, url, body=None):
        req = fakes.HTTPRequest.blank('/v2/fake/servers' + url)
        req.headers['Accept'] = self.content_type
        req.headers = {os_wsgi.API_VERSION_REQUEST_HEADER:
                       'compute %s' % self.wsgi_api_version}
        if body:
            req.body = jsonutils.dump_as_bytes(body)
            req.method = 'POST'
        req.content_type = self.content_type
        res = req.get_response(self.app)
        return res

    def _get_server(self, body):
        return jsonutils.loads(body).get('server')

    def _get_servers(self, body):
        return jsonutils.loads(body).get('servers')

    def test_show(self):
        res = self._make_request('/%s' % UUID1)

        self.assertEqual(200, res.status_int)
        server = self._get_server(res.body)
        actual = server.get('%svolumes_attached' % self.prefix)
        self.assertEqual(self.exp_volumes_show, actual)

    @mock.patch.object(objects.InstanceMappingList, 'get_by_instance_uuids')
    def test_detail(self, mock_get_by_instance_uuids):
        mock_get_by_instance_uuids.return_value = [
            objects.InstanceMapping(
                instance_uuid=UUID1,
                cell_mapping=objects.CellMapping(
                    uuid=uuids.cell1, transport_url='fake://nowhere/',
                    database_connection=uuids.cell1)),
            objects.InstanceMapping(
                instance_uuid=UUID2,
                cell_mapping=objects.CellMapping(
                    uuid=uuids.cell1, transport_url='fake://nowhere/',
                    database_connection=uuids.cell1))]
        res = self._make_request('/detail')

        mock_get_by_instance_uuids.assert_called_once_with(
            test.MatchType(nova_context.RequestContext), [UUID1, UUID2])
        self.assertEqual(200, res.status_int)
        for i, server in enumerate(self._get_servers(res.body)):
            actual = server.get('%svolumes_attached' % self.prefix)
            self.assertEqual(self.exp_volumes_detail[i], actual)

    @mock.patch.object(objects.InstanceMappingList, 'get_by_instance_uuids')
    @mock.patch('nova.context.scatter_gather_cells')
    def test_detail_with_cell_failures(self, mock_sg,
                                       mock_get_by_instance_uuids):
        mock_get_by_instance_uuids.return_value = [
            objects.InstanceMapping(
                instance_uuid=UUID1,
                cell_mapping=objects.CellMapping(
                    uuid=uuids.cell1, transport_url='fake://nowhere/',
                    database_connection=uuids.cell1)),
            objects.InstanceMapping(
                instance_uuid=UUID2,
                cell_mapping=objects.CellMapping(
                    uuid=uuids.cell2, transport_url='fake://nowhere/',
                    database_connection=uuids.cell2))
        ]
        bdm = fake_bdms_get_all_by_instance_uuids()
        fake_bdm = fake_block_device.fake_bdm_object(
            nova_context.RequestContext, bdm[0])
        mock_sg.return_value = {
            uuids.cell1: {UUID1: [fake_bdm]},
            uuids.cell2: nova_context.raised_exception_sentinel
        }
        res = self._make_request('/detail')

        mock_get_by_instance_uuids.assert_called_once_with(
            test.MatchType(nova_context.RequestContext), [UUID1, UUID2])
        self.assertEqual(200, res.status_int)
        # we would get an empty list for the second instance, which is in
        # the down cell; however, this would be printed in the logs.
        for i, server in enumerate(self._get_servers(res.body)):
            actual = server.get('%svolumes_attached' % self.prefix)
            if i == 0:
                self.assertEqual(self.exp_volumes_detail[i], actual)
            else:
                self.assertEqual([], actual)


class ExtendedVolumesTestV23(ExtendedVolumesTestV21):
    exp_volumes_show = [
        {'id': 'some_volume_1', 'delete_on_termination': True},
    ]
    exp_volumes_detail = [
        [
            {'id': 'some_volume_1', 'delete_on_termination': True},
        ],
        [
            {'id': 'some_volume_2', 'delete_on_termination': False},
            {'id': 'some_volume_3', 'delete_on_termination': False},
        ],
    ]
    wsgi_api_version = '2.3'


class ExtendedVolumesEnforcementV21(test.NoDBTestCase):

    def setUp(self):
        super(ExtendedVolumesEnforcementV21, self).setUp()
        self.controller = extended_volumes_v21.ExtendedVolumesController()
        self.req = fakes.HTTPRequest.blank('')

    @mock.patch.object(extended_volumes_v21.ExtendedVolumesController,
                       '_extend_server')
    def test_extend_show_policy_failed(self, mock_extend):
        rule_name = 'os_compute_api:os-extended-volumes'
        self.policy.set_rules({rule_name: "project:non_fake"})
        # Pass ResponseObj as None, the code shouldn't touch the None.
        self.controller.show(self.req, None, fakes.FAKE_UUID)
        self.assertFalse(mock_extend.called)

    @mock.patch.object(extended_volumes_v21.ExtendedVolumesController,
                       '_extend_server')
    def test_extend_detail_policy_failed(self, mock_extend):
        rule_name = 'os_compute_api:os-extended-volumes'
        self.policy.set_rules({rule_name: "project:non_fake"})
        # Pass ResponseObj as None, the code shouldn't touch the None.
        self.controller.detail(self.req, None)
        self.assertFalse(mock_extend.called)
nova-17.0.1/nova/tests/unit/api/openstack/compute/test_cloudpipe_update.py0000666000175000017500000000254613250073126027034 0ustar zuulzuul00000000000000
# Copyright 2012 IBM Corp.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import webob

from nova.api.openstack.compute import cloudpipe as clup_v21
from nova import test
from nova.tests.unit.api.openstack import fakes


class CloudpipeUpdateTestV21(test.NoDBTestCase):

    def setUp(self):
        super(CloudpipeUpdateTestV21, self).setUp()
        self.controller = clup_v21.CloudpipeController()
        self.req = fakes.HTTPRequest.blank('')

    def _check_status(self, expected_status, res, controller_method):
        self.assertEqual(expected_status, controller_method.wsgi_code)

    def test_cloudpipe_configure_project(self):
        body = {"configure_project": {"vpn_ip": "1.2.3.4", "vpn_port": 222}}
        self.assertRaises(webob.exc.HTTPGone, self.controller.update,
                          self.req, 'configure-project', body=body)
nova-17.0.1/nova/tests/unit/api/openstack/compute/test_server_actions.py0000666000175000017500000013671313250073136026535 0ustar zuulzuul00000000000000
# Copyright 2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from oslo_utils import uuidutils import webob from nova.api.openstack.compute import servers as servers_v21 from nova.compute import api as compute_api from nova.compute import task_states from nova.compute import vm_states import nova.conf from nova import exception from nova import image from nova.image import glance from nova import objects from nova import test from nova.tests.unit.api.openstack import fakes from nova.tests.unit import fake_block_device from nova.tests.unit import fake_instance from nova.tests.unit.image import fake from nova.tests import uuidsentinel as uuids CONF = nova.conf.CONF FAKE_UUID = fakes.FAKE_UUID INSTANCE_IDS = {FAKE_UUID: 1} def return_server_not_found(*arg, **kwarg): raise exception.InstanceNotFound(instance_id=FAKE_UUID) def instance_update_and_get_original(context, instance_uuid, values, columns_to_join=None, ): inst = fakes.stub_instance(INSTANCE_IDS[instance_uuid], host='fake_host') inst = dict(inst, **values) return (inst, inst) def instance_update(context, instance_uuid, kwargs): inst = fakes.stub_instance(INSTANCE_IDS[instance_uuid], host='fake_host') return inst class MockSetAdminPassword(object): def __init__(self): self.instance_id = None self.password = None def __call__(self, context, instance, password): self.instance_id = instance['uuid'] self.password = password class ServerActionsControllerTestV21(test.TestCase): image_uuid = '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6' image_base_url = 'http://localhost:9292/images/' image_href = image_base_url + '/' + image_uuid servers = servers_v21 validation_error = exception.ValidationError request_too_large_error = exception.ValidationError image_url = None def setUp(self): super(ServerActionsControllerTestV21, self).setUp() self.flags(group='glance', api_servers=['http://localhost:9292']) self.stub_out('nova.db.instance_get_by_uuid', fakes.fake_instance_get(vm_state=vm_states.ACTIVE, host='fake_host')) self.stub_out('nova.db.instance_update_and_get_original', instance_update_and_get_original) fakes.stub_out_nw_api(self) fakes.stub_out_compute_api_snapshot(self.stubs) fake.stub_out_image_service(self) self.flags(allow_instance_snapshots=True, enable_instance_password=True, group='api') self._image_href = '155d900f-4e14-4e4c-a73d-069cbf4541e6' self.controller = self._get_controller() self.compute_api = self.controller.compute_api self.req = fakes.HTTPRequest.blank('') self.context = self.req.environ['nova.context'] self.image_api = image.API() def _get_controller(self): return self.servers.ServersController() def _set_fake_extension(self): pass def _test_locked_instance(self, action, method=None, body_map=None, compute_api_args_map=None): if body_map is None: body_map = {} if compute_api_args_map is None: compute_api_args_map = {} args, kwargs = compute_api_args_map.get(action, ((), {})) uuid = uuidutils.generate_uuid() context = self.req.environ['nova.context'] instance = fake_instance.fake_db_instance( id=1, uuid=uuid, vm_state=vm_states.ACTIVE, task_state=None, project_id=context.project_id, user_id=context.user_id) instance = objects.Instance._from_db_object( self.context, 
objects.Instance(), instance) with test.nested( mock.patch.object(compute_api.API, 'get', return_value=instance), mock.patch.object(compute_api.API, method, side_effect=exception.InstanceIsLocked( instance_uuid=instance['uuid'])), ) as (mock_get, mock_method): controller_function = 'self.controller.' + action self.assertRaises(webob.exc.HTTPConflict, eval(controller_function), self.req, instance['uuid'], body=body_map.get(action)) mock_get.assert_called_once_with(self.context, uuid, expected_attrs=['flavor', 'numa_topology']) mock_method.assert_called_once_with(self.context, instance, *args, **kwargs) def test_actions_with_locked_instance(self): actions = ['_action_resize', '_action_confirm_resize', '_action_revert_resize', '_action_reboot', '_action_rebuild'] method_translations = {'_action_resize': 'resize', '_action_confirm_resize': 'confirm_resize', '_action_revert_resize': 'revert_resize', '_action_reboot': 'reboot', '_action_rebuild': 'rebuild'} body_map = {'_action_resize': {'resize': {'flavorRef': '2'}}, '_action_reboot': {'reboot': {'type': 'HARD'}}, '_action_rebuild': {'rebuild': { 'imageRef': self.image_uuid, 'adminPass': 'TNc53Dr8s7vw'}}} args_map = {'_action_resize': (('2'), {}), '_action_confirm_resize': ((), {}), '_action_reboot': (('HARD',), {}), '_action_rebuild': ((self.image_uuid, 'TNc53Dr8s7vw'), {})} for action in actions: method = method_translations.get(action) self._test_locked_instance(action, method=method, body_map=body_map, compute_api_args_map=args_map) def test_reboot_hard(self): body = dict(reboot=dict(type="HARD")) self.controller._action_reboot(self.req, FAKE_UUID, body=body) def test_reboot_soft(self): body = dict(reboot=dict(type="SOFT")) self.controller._action_reboot(self.req, FAKE_UUID, body=body) def test_reboot_incorrect_type(self): body = dict(reboot=dict(type="NOT_A_TYPE")) self.assertRaises(self.validation_error, self.controller._action_reboot, self.req, FAKE_UUID, body=body) def test_reboot_missing_type(self): body = dict(reboot=dict()) self.assertRaises(self.validation_error, self.controller._action_reboot, self.req, FAKE_UUID, body=body) def test_reboot_none(self): body = dict(reboot=dict(type=None)) self.assertRaises(self.validation_error, self.controller._action_reboot, self.req, FAKE_UUID, body=body) def test_reboot_not_found(self): self.stub_out('nova.db.instance_get_by_uuid', return_server_not_found) body = dict(reboot=dict(type="HARD")) self.assertRaises(webob.exc.HTTPNotFound, self.controller._action_reboot, self.req, uuids.fake, body=body) def test_reboot_raises_conflict_on_invalid_state(self): body = dict(reboot=dict(type="HARD")) def fake_reboot(*args, **kwargs): raise exception.InstanceInvalidState(attr='fake_attr', state='fake_state', method='fake_method', instance_uuid='fake') self.stub_out('nova.compute.api.API.reboot', fake_reboot) self.assertRaises(webob.exc.HTTPConflict, self.controller._action_reboot, self.req, FAKE_UUID, body=body) def test_reboot_soft_with_soft_in_progress_raises_conflict(self): body = dict(reboot=dict(type="SOFT")) self.stub_out('nova.db.instance_get_by_uuid', fakes.fake_instance_get(vm_state=vm_states.ACTIVE, task_state=task_states.REBOOTING)) self.assertRaises(webob.exc.HTTPConflict, self.controller._action_reboot, self.req, FAKE_UUID, body=body) def test_reboot_hard_with_soft_in_progress_does_not_raise(self): body = dict(reboot=dict(type="HARD")) self.stub_out('nova.db.instance_get_by_uuid', fakes.fake_instance_get(vm_state=vm_states.ACTIVE, task_state=task_states.REBOOTING)) 
self.controller._action_reboot(self.req, FAKE_UUID, body=body) def test_reboot_hard_with_hard_in_progress(self): body = dict(reboot=dict(type="HARD")) self.stub_out('nova.db.instance_get_by_uuid', fakes.fake_instance_get(vm_state=vm_states.ACTIVE, task_state=task_states.REBOOTING_HARD)) self.controller._action_reboot(self.req, FAKE_UUID, body=body) def test_reboot_soft_with_hard_in_progress_raises_conflict(self): body = dict(reboot=dict(type="SOFT")) self.stub_out('nova.db.instance_get_by_uuid', fakes.fake_instance_get(vm_state=vm_states.ACTIVE, task_state=task_states.REBOOTING_HARD)) self.assertRaises(webob.exc.HTTPConflict, self.controller._action_reboot, self.req, FAKE_UUID, body=body) def _test_rebuild_preserve_ephemeral(self, value=None): self._set_fake_extension() return_server = fakes.fake_instance_get(image_ref='2', vm_state=vm_states.ACTIVE, host='fake_host') self.stub_out('nova.db.instance_get_by_uuid', return_server) body = { "rebuild": { "imageRef": self._image_href, }, } if value is not None: body['rebuild']['preserve_ephemeral'] = value with mock.patch.object(compute_api.API, 'rebuild') as mock_rebuild: self.controller._action_rebuild(self.req, FAKE_UUID, body=body) if value is not None: mock_rebuild.assert_called_once_with(self.context, mock.ANY, self._image_href, mock.ANY, preserve_ephemeral=value) else: mock_rebuild.assert_called_once_with(self.context, mock.ANY, self._image_href, mock.ANY) def test_rebuild_preserve_ephemeral_true(self): self._test_rebuild_preserve_ephemeral(True) def test_rebuild_preserve_ephemeral_false(self): self._test_rebuild_preserve_ephemeral(False) def test_rebuild_preserve_ephemeral_default(self): self._test_rebuild_preserve_ephemeral() def test_rebuild_accepted_minimum(self): return_server = fakes.fake_instance_get(image_ref='2', vm_state=vm_states.ACTIVE, host='fake_host') self.stub_out('nova.db.instance_get_by_uuid', return_server) self_href = 'http://localhost/v2/servers/%s' % FAKE_UUID body = { "rebuild": { "imageRef": self._image_href, }, } robj = self.controller._action_rebuild(self.req, FAKE_UUID, body=body) body = robj.obj self.assertEqual(body['server']['image']['id'], '2') self.assertEqual(len(body['server']['adminPass']), CONF.password_length) self.assertEqual(robj['location'], self_href.encode('utf-8')) def test_rebuild_instance_with_image_uuid(self): info = dict(image_href_in_call=None) def rebuild(self2, context, instance, image_href, *args, **kwargs): info['image_href_in_call'] = image_href self.stub_out('nova.db.instance_get', fakes.fake_instance_get(vm_state=vm_states.ACTIVE)) self.stub_out('nova.compute.api.API.rebuild', rebuild) # proper local hrefs must start with 'http://localhost/v2/' body = { 'rebuild': { 'imageRef': self.image_uuid, }, } self.controller._action_rebuild(self.req, FAKE_UUID, body=body) self.assertEqual(info['image_href_in_call'], self.image_uuid) def test_rebuild_instance_with_image_href_uses_uuid(self): # proper local hrefs must start with 'http://localhost/v2/' body = { 'rebuild': { 'imageRef': self.image_href, }, } self.assertRaises(exception.ValidationError, self.controller._action_rebuild, self.req, FAKE_UUID, body=body) def test_rebuild_accepted_minimum_pass_disabled(self): # run with enable_instance_password disabled to verify adminPass # is missing from response. 
See lp bug 921814 self.flags(enable_instance_password=False, group='api') return_server = fakes.fake_instance_get(image_ref='2', vm_state=vm_states.ACTIVE, host='fake_host') self.stub_out('nova.db.instance_get_by_uuid', return_server) self_href = 'http://localhost/v2/servers/%s' % FAKE_UUID body = { "rebuild": { "imageRef": self._image_href, }, } robj = self.controller._action_rebuild(self.req, FAKE_UUID, body=body) body = robj.obj self.assertEqual(body['server']['image']['id'], '2') self.assertNotIn("adminPass", body['server']) self.assertEqual(robj['location'], self_href.encode('utf-8')) def test_rebuild_raises_conflict_on_invalid_state(self): body = { "rebuild": { "imageRef": self._image_href, }, } def fake_rebuild(*args, **kwargs): raise exception.InstanceInvalidState(attr='fake_attr', state='fake_state', method='fake_method', instance_uuid='fake') self.stub_out('nova.compute.api.API.rebuild', fake_rebuild) self.assertRaises(webob.exc.HTTPConflict, self.controller._action_rebuild, self.req, FAKE_UUID, body=body) def test_rebuild_accepted_with_metadata(self): metadata = {'new': 'metadata'} return_server = fakes.fake_instance_get(metadata=metadata, vm_state=vm_states.ACTIVE, host='fake_host') self.stub_out('nova.db.instance_get_by_uuid', return_server) body = { "rebuild": { "imageRef": self._image_href, "metadata": metadata, }, } body = self.controller._action_rebuild(self.req, FAKE_UUID, body=body).obj self.assertEqual(body['server']['metadata'], metadata) def test_rebuild_accepted_with_bad_metadata(self): body = { "rebuild": { "imageRef": self._image_href, "metadata": "stack", }, } self.assertRaises(self.validation_error, self.controller._action_rebuild, self.req, FAKE_UUID, body=body) def test_rebuild_with_too_large_metadata(self): body = { "rebuild": { "imageRef": self._image_href, "metadata": { 256 * "k": "value" } } } self.assertRaises(self.request_too_large_error, self.controller._action_rebuild, self.req, FAKE_UUID, body=body) def test_rebuild_bad_entity(self): body = { "rebuild": { "imageId": self._image_href, }, } self.assertRaises(self.validation_error, self.controller._action_rebuild, self.req, FAKE_UUID, body=body) def test_rebuild_admin_pass(self): return_server = fakes.fake_instance_get(image_ref='2', vm_state=vm_states.ACTIVE, host='fake_host') self.stub_out('nova.db.instance_get_by_uuid', return_server) body = { "rebuild": { "imageRef": self._image_href, "adminPass": "asdf", }, } body = self.controller._action_rebuild(self.req, FAKE_UUID, body=body).obj self.assertEqual(body['server']['image']['id'], '2') self.assertEqual(body['server']['adminPass'], 'asdf') def test_rebuild_admin_pass_pass_disabled(self): # run with enable_instance_password disabled to verify adminPass # is missing from response. 
See lp bug 921814 self.flags(enable_instance_password=False, group='api') return_server = fakes.fake_instance_get(image_ref='2', vm_state=vm_states.ACTIVE, host='fake_host') self.stub_out('nova.db.instance_get_by_uuid', return_server) body = { "rebuild": { "imageRef": self._image_href, "adminPass": "asdf", }, } body = self.controller._action_rebuild(self.req, FAKE_UUID, body=body).obj self.assertEqual(body['server']['image']['id'], '2') self.assertNotIn('adminPass', body['server']) def test_rebuild_server_not_found(self): def server_not_found(self, instance_id, columns_to_join=None, use_slave=False): raise exception.InstanceNotFound(instance_id=instance_id) self.stub_out('nova.db.instance_get_by_uuid', server_not_found) body = { "rebuild": { "imageRef": self._image_href, }, } self.assertRaises(webob.exc.HTTPNotFound, self.controller._action_rebuild, self.req, FAKE_UUID, body=body) def test_rebuild_with_bad_image(self): body = { "rebuild": { "imageRef": "foo", }, } self.assertRaises(exception.ValidationError, self.controller._action_rebuild, self.req, FAKE_UUID, body=body) def test_rebuild_accessIP(self): attributes = { 'access_ip_v4': '172.19.0.1', 'access_ip_v6': 'fe80::1', } body = { "rebuild": { "imageRef": self._image_href, "accessIPv4": "172.19.0.1", "accessIPv6": "fe80::1", }, } data = {'changes': {}} orig_get = compute_api.API.get def wrap_get(*args, **kwargs): data['instance'] = orig_get(*args, **kwargs) return data['instance'] def fake_save(context, **kwargs): data['changes'].update(data['instance'].obj_get_changes()) self.stub_out('nova.compute.api.API.get', wrap_get) self.stub_out('nova.objects.Instance.save', fake_save) self.controller._action_rebuild(self.req, FAKE_UUID, body=body) self.assertEqual(self._image_href, data['changes']['image_ref']) self.assertEqual("", data['changes']['kernel_id']) self.assertEqual("", data['changes']['ramdisk_id']) self.assertEqual(task_states.REBUILDING, data['changes']['task_state']) self.assertEqual(0, data['changes']['progress']) for attr, value in attributes.items(): self.assertEqual(value, str(data['changes'][attr])) def test_rebuild_when_kernel_not_exists(self): def return_image_meta(*args, **kwargs): image_meta_table = { '2': {'id': 2, 'status': 'active', 'container_format': 'ari'}, '155d900f-4e14-4e4c-a73d-069cbf4541e6': {'id': 3, 'status': 'active', 'container_format': 'raw', 'properties': {'kernel_id': 1, 'ramdisk_id': 2}}, } image_id = args[2] try: image_meta = image_meta_table[str(image_id)] except KeyError: raise exception.ImageNotFound(image_id=image_id) return image_meta self.stub_out('nova.tests.unit.image.fake._FakeImageService.show', return_image_meta) body = { "rebuild": { "imageRef": "155d900f-4e14-4e4c-a73d-069cbf4541e6", }, } self.assertRaises(webob.exc.HTTPBadRequest, self.controller._action_rebuild, self.req, FAKE_UUID, body=body) def test_rebuild_proper_kernel_ram(self): instance_meta = {'kernel_id': None, 'ramdisk_id': None} orig_get = compute_api.API.get def wrap_get(*args, **kwargs): inst = orig_get(*args, **kwargs) instance_meta['instance'] = inst return inst def fake_save(context, **kwargs): instance = instance_meta['instance'] for key in instance_meta.keys(): if key in instance.obj_what_changed(): instance_meta[key] = instance[key] def return_image_meta(*args, **kwargs): image_meta_table = { '1': {'id': 1, 'status': 'active', 'container_format': 'aki'}, '2': {'id': 2, 'status': 'active', 'container_format': 'ari'}, '155d900f-4e14-4e4c-a73d-069cbf4541e6': {'id': 3, 'status': 'active', 'container_format': 'raw', 
'properties': {'kernel_id': 1, 'ramdisk_id': 2}}, } image_id = args[2] try: image_meta = image_meta_table[str(image_id)] except KeyError: raise exception.ImageNotFound(image_id=image_id) return image_meta self.stub_out('nova.tests.unit.image.fake._FakeImageService.show', return_image_meta) self.stub_out('nova.compute.api.API.get', wrap_get) self.stub_out('nova.objects.Instance.save', fake_save) body = { "rebuild": { "imageRef": "155d900f-4e14-4e4c-a73d-069cbf4541e6", }, } self.controller._action_rebuild(self.req, FAKE_UUID, body=body).obj self.assertEqual(instance_meta['kernel_id'], '1') self.assertEqual(instance_meta['ramdisk_id'], '2') @mock.patch.object(compute_api.API, 'rebuild') def test_rebuild_instance_raise_auto_disk_config_exc(self, mock_rebuild): body = { "rebuild": { "imageRef": self._image_href, }, } mock_rebuild.side_effect = exception.AutoDiskConfigDisabledByImage( image='dummy') self.assertRaises(webob.exc.HTTPBadRequest, self.controller._action_rebuild, self.req, FAKE_UUID, body=body) def test_resize_server(self): body = dict(resize=dict(flavorRef="http://localhost/3")) self.resize_called = False def resize_mock(*args): self.resize_called = True self.stub_out('nova.compute.api.API.resize', resize_mock) self.controller._action_resize(self.req, FAKE_UUID, body=body) self.assertTrue(self.resize_called) def test_resize_server_no_flavor(self): body = dict(resize=dict()) self.assertRaises(self.validation_error, self.controller._action_resize, self.req, FAKE_UUID, body=body) def test_resize_server_no_flavor_ref(self): body = dict(resize=dict(flavorRef=None)) self.assertRaises(self.validation_error, self.controller._action_resize, self.req, FAKE_UUID, body=body) def test_resize_server_with_extra_arg(self): body = dict(resize=dict(favorRef="http://localhost/3", extra_arg="extra_arg")) self.assertRaises(self.validation_error, self.controller._action_resize, self.req, FAKE_UUID, body=body) def test_resize_server_invalid_flavor_ref(self): body = dict(resize=dict(flavorRef=1.2)) self.assertRaises(self.validation_error, self.controller._action_resize, self.req, FAKE_UUID, body=body) def test_resize_with_server_not_found(self): body = dict(resize=dict(flavorRef="http://localhost/3")) self.stub_out('nova.compute.api.API.get', return_server_not_found) self.assertRaises(webob.exc.HTTPNotFound, self.controller._action_resize, self.req, FAKE_UUID, body=body) def test_resize_with_image_exceptions(self): body = dict(resize=dict(flavorRef="http://localhost/3")) self.resize_called = 0 image_id = 'fake_image_id' exceptions = [ (exception.ImageNotAuthorized(image_id=image_id), webob.exc.HTTPUnauthorized), (exception.ImageNotFound(image_id=image_id), webob.exc.HTTPBadRequest), (exception.Invalid, webob.exc.HTTPBadRequest), (exception.NoValidHost(reason='Bad host'), webob.exc.HTTPBadRequest), (exception.AutoDiskConfigDisabledByImage(image=image_id), webob.exc.HTTPBadRequest), ] raised, expected = map(iter, zip(*exceptions)) def _fake_resize(obj, context, instance, flavor_id): self.resize_called += 1 raise next(raised) self.stub_out('nova.compute.api.API.resize', _fake_resize) for call_no in range(len(exceptions)): next_exception = next(expected) actual = self.assertRaises(next_exception, self.controller._action_resize, self.req, FAKE_UUID, body=body) if (isinstance(exceptions[call_no][0], exception.NoValidHost)): self.assertEqual(actual.explanation, 'No valid host was found. 
Bad host') elif (isinstance(exceptions[call_no][0], exception.AutoDiskConfigDisabledByImage)): self.assertEqual(actual.explanation, 'Requested image fake_image_id has automatic' ' disk resize disabled.') self.assertEqual(self.resize_called, call_no + 1) @mock.patch('nova.compute.api.API.resize', side_effect=exception.CannotResizeDisk(reason='')) def test_resize_raises_cannot_resize_disk(self, mock_resize): body = dict(resize=dict(flavorRef="http://localhost/3")) self.assertRaises(webob.exc.HTTPBadRequest, self.controller._action_resize, self.req, FAKE_UUID, body=body) @mock.patch('nova.compute.api.API.resize', side_effect=exception.FlavorNotFound(reason='', flavor_id='fake_id')) def test_resize_raises_flavor_not_found(self, mock_resize): body = dict(resize=dict(flavorRef="http://localhost/3")) self.assertRaises(webob.exc.HTTPBadRequest, self.controller._action_resize, self.req, FAKE_UUID, body=body) def test_resize_with_too_many_instances(self): body = dict(resize=dict(flavorRef="http://localhost/3")) def fake_resize(*args, **kwargs): raise exception.TooManyInstances(message="TooManyInstance") self.stub_out('nova.compute.api.API.resize', fake_resize) self.assertRaises(webob.exc.HTTPForbidden, self.controller._action_resize, self.req, FAKE_UUID, body=body) def test_resize_raises_conflict_on_invalid_state(self): body = dict(resize=dict(flavorRef="http://localhost/3")) def fake_resize(*args, **kwargs): raise exception.InstanceInvalidState(attr='fake_attr', state='fake_state', method='fake_method', instance_uuid='fake') self.stub_out('nova.compute.api.API.resize', fake_resize) self.assertRaises(webob.exc.HTTPConflict, self.controller._action_resize, self.req, FAKE_UUID, body=body) @mock.patch('nova.compute.api.API.resize', side_effect=exception.NoValidHost(reason='')) def test_resize_raises_no_valid_host(self, mock_resize): body = dict(resize=dict(flavorRef="http://localhost/3")) self.assertRaises(webob.exc.HTTPBadRequest, self.controller._action_resize, self.req, FAKE_UUID, body=body) @mock.patch.object(compute_api.API, 'resize') def test_resize_instance_raise_auto_disk_config_exc(self, mock_resize): mock_resize.side_effect = exception.AutoDiskConfigDisabledByImage( image='dummy') body = dict(resize=dict(flavorRef="http://localhost/3")) self.assertRaises(webob.exc.HTTPBadRequest, self.controller._action_resize, self.req, FAKE_UUID, body=body) @mock.patch('nova.compute.api.API.resize', side_effect=exception.PciRequestAliasNotDefined( alias='fake_name')) def test_resize_pci_alias_not_defined(self, mock_resize): # Tests that PciRequestAliasNotDefined is translated to a 400 error. 
        body = dict(resize=dict(flavorRef="http://localhost/3"))
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.controller._action_resize,
                          self.req, FAKE_UUID, body=body)

    def test_confirm_resize_server(self):
        body = dict(confirmResize=None)

        self.confirm_resize_called = False

        def cr_mock(*args):
            self.confirm_resize_called = True

        self.stub_out('nova.compute.api.API.confirm_resize', cr_mock)

        self.controller._action_confirm_resize(self.req, FAKE_UUID,
                                               body=body)

        self.assertTrue(self.confirm_resize_called)

    def test_confirm_resize_migration_not_found(self):
        body = dict(confirmResize=None)

        def confirm_resize_mock(*args):
            raise exception.MigrationNotFoundByStatus(instance_id=1,
                                                      status='finished')

        self.stub_out('nova.compute.api.API.confirm_resize',
                      confirm_resize_mock)

        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.controller._action_confirm_resize,
                          self.req, FAKE_UUID, body=body)

    def test_confirm_resize_raises_conflict_on_invalid_state(self):
        body = dict(confirmResize=None)

        def fake_confirm_resize(*args, **kwargs):
            raise exception.InstanceInvalidState(attr='fake_attr',
                state='fake_state', method='fake_method',
                instance_uuid='fake')

        self.stub_out('nova.compute.api.API.confirm_resize',
                      fake_confirm_resize)

        self.assertRaises(webob.exc.HTTPConflict,
                          self.controller._action_confirm_resize,
                          self.req, FAKE_UUID, body=body)

    def test_revert_resize_migration_not_found(self):
        body = dict(revertResize=None)

        def revert_resize_mock(*args):
            raise exception.MigrationNotFoundByStatus(instance_id=1,
                                                      status='finished')

        self.stub_out('nova.compute.api.API.revert_resize',
                      revert_resize_mock)

        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.controller._action_revert_resize,
                          self.req, FAKE_UUID, body=body)

    def test_revert_resize_server_not_found(self):
        body = dict(revertResize=None)

        self.assertRaises(webob.exc.HTTPNotFound,
                          self.controller._action_revert_resize,
                          self.req, "bad_server_id", body=body)

    def test_revert_resize_server(self):
        body = dict(revertResize=None)

        self.revert_resize_called = False

        def revert_mock(*args):
            self.revert_resize_called = True

        self.stub_out('nova.compute.api.API.revert_resize', revert_mock)

        body = self.controller._action_revert_resize(self.req, FAKE_UUID,
                                                     body=body)

        self.assertTrue(self.revert_resize_called)

    def test_revert_resize_raises_conflict_on_invalid_state(self):
        body = dict(revertResize=None)

        def fake_revert_resize(*args, **kwargs):
            raise exception.InstanceInvalidState(attr='fake_attr',
                state='fake_state', method='fake_method',
                instance_uuid='fake')

        self.stub_out('nova.compute.api.API.revert_resize',
                      fake_revert_resize)

        self.assertRaises(webob.exc.HTTPConflict,
                          self.controller._action_revert_resize,
                          self.req, FAKE_UUID, body=body)

    def test_create_image(self):
        body = {
            'createImage': {
                'name': 'Snapshot 1',
            },
        }

        response = self.controller._action_create_image(self.req, FAKE_UUID,
                                                        body=body)

        location = response.headers['Location']
        self.assertEqual(self.image_url + '123' if self.image_url else
                         self.image_api.generate_image_url('123',
                                                           self.context),
                         location)

    def test_create_image_v2_45(self):
        """Tests the createImage server action API with the 2.45
        microversion where there is a response body but no Location header.
""" body = { 'createImage': { 'name': 'Snapshot 1', }, } req = fakes.HTTPRequest.blank('', version='2.45') response = self.controller._action_create_image(req, FAKE_UUID, body=body) self.assertIsInstance(response, dict) self.assertEqual('123', response['image_id']) def test_create_image_name_too_long(self): long_name = 'a' * 260 body = { 'createImage': { 'name': long_name, }, } self.assertRaises(self.validation_error, self.controller._action_create_image, self.req, FAKE_UUID, body=body) def _do_test_create_volume_backed_image( self, extra_properties, mock_vol_create_side_effect=None): def _fake_id(x): return '%s-%s-%s-%s' % (x * 8, x * 4, x * 4, x * 12) body = dict(createImage=dict(name='snapshot_of_volume_backed')) if extra_properties: body['createImage']['metadata'] = extra_properties image_service = glance.get_default_image_service() bdm = [dict(volume_id=_fake_id('a'), volume_size=1, device_name='vda', delete_on_termination=False)] def fake_block_device_mapping_get_all_by_instance(context, inst_id, use_slave=False): return [fake_block_device.FakeDbBlockDeviceDict( {'volume_id': _fake_id('a'), 'source_type': 'snapshot', 'destination_type': 'volume', 'volume_size': 1, 'device_name': 'vda', 'snapshot_id': 1, 'boot_index': 0, 'delete_on_termination': False, 'no_device': None})] self.stub_out('nova.db.block_device_mapping_get_all_by_instance', fake_block_device_mapping_get_all_by_instance) system_metadata = dict(image_kernel_id=_fake_id('b'), image_ramdisk_id=_fake_id('c'), image_root_device_name='/dev/vda', image_block_device_mapping=str(bdm), image_container_format='ami') instance = fakes.fake_instance_get(image_ref=uuids.fake, vm_state=vm_states.ACTIVE, root_device_name='/dev/vda', system_metadata=system_metadata) self.stub_out('nova.db.instance_get_by_uuid', instance) volume = dict(id=_fake_id('a'), size=1, host='fake', display_description='fake') snapshot = dict(id=_fake_id('d')) with test.nested( mock.patch.object( self.controller.compute_api.volume_api, 'get_absolute_limits', return_value={'totalSnapshotsUsed': 0, 'maxTotalSnapshots': 10}), mock.patch.object(self.controller.compute_api.compute_rpcapi, 'quiesce_instance', side_effect=exception.InstanceQuiesceNotSupported( instance_id='fake', reason='test')), mock.patch.object(self.controller.compute_api.volume_api, 'get', return_value=volume), mock.patch.object(self.controller.compute_api.volume_api, 'create_snapshot_force', return_value=snapshot), ) as (mock_get_limits, mock_quiesce, mock_vol_get, mock_vol_create): if mock_vol_create_side_effect: mock_vol_create.side_effect = mock_vol_create_side_effect response = self.controller._action_create_image(self.req, FAKE_UUID, body=body) location = response.headers['Location'] image_id = location.replace(self.image_url or self.image_api.generate_image_url('', self.context), '') image = image_service.show(None, image_id) self.assertEqual(image['name'], 'snapshot_of_volume_backed') properties = image['properties'] self.assertEqual(properties['kernel_id'], _fake_id('b')) self.assertEqual(properties['ramdisk_id'], _fake_id('c')) self.assertEqual(properties['root_device_name'], '/dev/vda') self.assertTrue(properties['bdm_v2']) bdms = properties['block_device_mapping'] self.assertEqual(len(bdms), 1) self.assertEqual(bdms[0]['boot_index'], 0) self.assertEqual(bdms[0]['source_type'], 'snapshot') self.assertEqual(bdms[0]['destination_type'], 'volume') self.assertEqual(bdms[0]['snapshot_id'], snapshot['id']) self.assertEqual('/dev/vda', bdms[0]['device_name']) for fld in ('connection_info', 'id', 
'instance_uuid'): self.assertNotIn(fld, bdms[0]) for k in extra_properties.keys(): self.assertEqual(properties[k], extra_properties[k]) mock_quiesce.assert_called_once_with(mock.ANY, mock.ANY) mock_vol_get.assert_called_once_with(mock.ANY, volume['id']) mock_vol_create.assert_called_once_with(mock.ANY, volume['id'], mock.ANY, mock.ANY) def test_create_volume_backed_image_no_metadata(self): self._do_test_create_volume_backed_image({}) def test_create_volume_backed_image_with_metadata(self): self._do_test_create_volume_backed_image(dict(ImageType='Gold', ImageVersion='2.0')) def test_create_volume_backed_image_cinder_over_quota(self): self.assertRaises( webob.exc.HTTPForbidden, self._do_test_create_volume_backed_image, {}, mock_vol_create_side_effect=exception.OverQuota( overs='snapshot')) def _test_create_volume_backed_image_with_metadata_from_volume( self, extra_metadata=None): def _fake_id(x): return '%s-%s-%s-%s' % (x * 8, x * 4, x * 4, x * 12) body = dict(createImage=dict(name='snapshot_of_volume_backed')) if extra_metadata: body['createImage']['metadata'] = extra_metadata image_service = glance.get_default_image_service() def fake_block_device_mapping_get_all_by_instance(context, inst_id, use_slave=False): return [fake_block_device.FakeDbBlockDeviceDict( {'volume_id': _fake_id('a'), 'source_type': 'snapshot', 'destination_type': 'volume', 'volume_size': 1, 'device_name': 'vda', 'snapshot_id': 1, 'boot_index': 0, 'delete_on_termination': False, 'no_device': None})] self.stub_out('nova.db.block_device_mapping_get_all_by_instance', fake_block_device_mapping_get_all_by_instance) instance = fakes.fake_instance_get( image_ref='', vm_state=vm_states.ACTIVE, root_device_name='/dev/vda', system_metadata={'image_test_key1': 'test_value1', 'image_test_key2': 'test_value2'}) self.stub_out('nova.db.instance_get_by_uuid', instance) volume = dict(id=_fake_id('a'), size=1, host='fake', display_description='fake') snapshot = dict(id=_fake_id('d')) with test.nested( mock.patch.object( self.controller.compute_api.volume_api, 'get_absolute_limits', return_value={'totalSnapshotsUsed': 0, 'maxTotalSnapshots': 10}), mock.patch.object(self.controller.compute_api.compute_rpcapi, 'quiesce_instance', side_effect=exception.InstanceQuiesceNotSupported( instance_id='fake', reason='test')), mock.patch.object(self.controller.compute_api.volume_api, 'get', return_value=volume), mock.patch.object(self.controller.compute_api.volume_api, 'create_snapshot_force', return_value=snapshot), ) as (mock_get_limits, mock_quiesce, mock_vol_get, mock_vol_create): response = self.controller._action_create_image(self.req, FAKE_UUID, body=body) location = response.headers['Location'] image_id = location.replace(self.image_base_url, '') image = image_service.show(None, image_id) properties = image['properties'] self.assertEqual(properties['test_key1'], 'test_value1') self.assertEqual(properties['test_key2'], 'test_value2') if extra_metadata: for key, val in extra_metadata.items(): self.assertEqual(properties[key], val) mock_quiesce.assert_called_once_with(mock.ANY, mock.ANY) mock_vol_get.assert_called_once_with(mock.ANY, volume['id']) mock_vol_create.assert_called_once_with(mock.ANY, volume['id'], mock.ANY, mock.ANY) def test_create_vol_backed_img_with_meta_from_vol_without_extra_meta(self): self._test_create_volume_backed_image_with_metadata_from_volume() def test_create_vol_backed_img_with_meta_from_vol_with_extra_meta(self): self._test_create_volume_backed_image_with_metadata_from_volume( extra_metadata={'a': 'b'}) def 
test_create_image_snapshots_disabled(self):
        """Don't permit a snapshot if the allow_instance_snapshots flag
        is False.
        """
        self.flags(allow_instance_snapshots=False, group='api')
        body = {
            'createImage': {
                'name': 'Snapshot 1',
            },
        }
        self.assertRaises(webob.exc.HTTPBadRequest,
                          self.controller._action_create_image,
                          self.req, FAKE_UUID, body=body)

    def test_create_image_with_metadata(self):
        body = {
            'createImage': {
                'name': 'Snapshot 1',
                'metadata': {'key': 'asdf'},
            },
        }

        response = self.controller._action_create_image(self.req, FAKE_UUID,
                                                        body=body)

        location = response.headers['Location']
        self.assertEqual(self.image_url + '123' if self.image_url else
                         self.image_api.generate_image_url('123',
                                                           self.context),
                         location)

    def test_create_image_with_too_much_metadata(self):
        body = {
            'createImage': {
                'name': 'Snapshot 1',
                'metadata': {},
            },
        }
        for num in range(CONF.quota.metadata_items + 1):
            body['createImage']['metadata']['foo%i' % num] = "bar"

        self.assertRaises(webob.exc.HTTPForbidden,
                          self.controller._action_create_image,
                          self.req, FAKE_UUID, body=body)

    def test_create_image_no_name(self):
        body = {
            'createImage': {},
        }
        self.assertRaises(self.validation_error,
                          self.controller._action_create_image,
                          self.req, FAKE_UUID, body=body)

    def test_create_image_blank_name(self):
        body = {
            'createImage': {
                'name': '',
            }
        }
        self.assertRaises(self.validation_error,
                          self.controller._action_create_image,
                          self.req, FAKE_UUID, body=body)

    def test_create_image_bad_metadata(self):
        body = {
            'createImage': {
                'name': 'geoff',
                'metadata': 'henry',
            },
        }
        self.assertRaises(self.validation_error,
                          self.controller._action_create_image,
                          self.req, FAKE_UUID, body=body)

    def test_create_image_raises_conflict_on_invalid_state(self):
        def snapshot(*args, **kwargs):
            raise exception.InstanceInvalidState(attr='fake_attr',
                state='fake_state', method='fake_method',
                instance_uuid='fake')
        self.stub_out('nova.compute.api.API.snapshot', snapshot)

        body = {
            "createImage": {
                "name": "test_snapshot",
            },
        }
        self.assertRaises(webob.exc.HTTPConflict,
                          self.controller._action_create_image,
                          self.req, FAKE_UUID, body=body)
nova-17.0.1/nova/tests/unit/api/openstack/compute/test_create_backup.py0000666000175000017500000004330313250073126026272 0ustar zuulzuul00000000000000
# Copyright 2011 OpenStack Foundation
# Copyright 2013 IBM Corp.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
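# As a reading aid for the tests below, this illustrative helper (an
# editor's sketch, not used anywhere in this suite) builds the request
# body shape that every createBackup test in this module starts from;
# the tests then vary name, backup_type, rotation and metadata to probe
# validation and quota behaviour.
def _example_create_backup_body(name='Backup 1', backup_type='daily',
                                rotation=1, metadata=None):
    """Illustrative only: build a createBackup server action body."""
    body = {'createBackup': {'name': name,
                             'backup_type': backup_type,
                             'rotation': rotation}}
    if metadata is not None:
        body['createBackup']['metadata'] = metadata
    return body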
import mock from oslo_utils import timeutils import six import webob from nova.api.openstack import common from nova.api.openstack.compute import create_backup \ as create_backup_v21 from nova.compute import api from nova.compute import utils as compute_utils from nova import exception from nova import test from nova.tests.unit.api.openstack.compute import admin_only_action_common from nova.tests.unit.api.openstack import fakes from nova.tests.unit import fake_instance class CreateBackupTestsV21(admin_only_action_common.CommonMixin, test.NoDBTestCase): create_backup = create_backup_v21 controller_name = 'CreateBackupController' validation_error = exception.ValidationError def setUp(self): super(CreateBackupTestsV21, self).setUp() self.controller = getattr(self.create_backup, self.controller_name)() self.compute_api = self.controller.compute_api patch_get = mock.patch.object(self.compute_api, 'get') self.mock_get = patch_get.start() self.addCleanup(patch_get.stop) @mock.patch.object(common, 'check_img_metadata_properties_quota') @mock.patch.object(api.API, 'backup') def test_create_backup_with_metadata(self, mock_backup, mock_check_image): metadata = {'123': 'asdf'} body = { 'createBackup': { 'name': 'Backup 1', 'backup_type': 'daily', 'rotation': 1, 'metadata': metadata, }, } image = dict(id='fake-image-id', status='ACTIVE', name='Backup 1', properties=metadata) instance = fake_instance.fake_instance_obj(self.context) self.mock_get.return_value = instance mock_backup.return_value = image res = self.controller._create_backup(self.req, instance.uuid, body=body) mock_check_image.assert_called_once_with(self.context, metadata) mock_backup.assert_called_once_with(self.context, instance, 'Backup 1', 'daily', 1, extra_properties=metadata) self.assertEqual(202, res.status_int) self.assertIn('fake-image-id', res.headers['Location']) def test_create_backup_no_name(self): # Name is required for backups. body = { 'createBackup': { 'backup_type': 'daily', 'rotation': 1, }, } self.assertRaises(self.validation_error, self.controller._create_backup, self.req, fakes.FAKE_UUID, body=body) def test_create_backup_name_with_leading_trailing_spaces(self): body = { 'createBackup': { 'name': ' test ', 'backup_type': 'daily', 'rotation': 1, }, } self.assertRaises(self.validation_error, self.controller._create_backup, self.req, fakes.FAKE_UUID, body=body) @mock.patch.object(common, 'check_img_metadata_properties_quota') @mock.patch.object(api.API, 'backup') def test_create_backup_name_with_leading_trailing_spaces_compat_mode( self, mock_backup, mock_check_image): body = { 'createBackup': { 'name': ' test ', 'backup_type': 'daily', 'rotation': 1, }, } image = dict(id='fake-image-id', status='ACTIVE', name='Backup 1', properties={}) instance = fake_instance.fake_instance_obj(self.context) self.mock_get.return_value = instance mock_backup.return_value = image self.req.set_legacy_v2() self.controller._create_backup(self.req, instance.uuid, body=body) mock_check_image.assert_called_once_with(self.context, {}) mock_backup.assert_called_once_with(self.context, instance, 'test', 'daily', 1, extra_properties={}) def test_create_backup_no_rotation(self): # Rotation is required for backup requests. 
body = { 'createBackup': { 'name': 'Backup 1', 'backup_type': 'daily', }, } self.assertRaises(self.validation_error, self.controller._create_backup, self.req, fakes.FAKE_UUID, body=body) def test_create_backup_negative_rotation(self): """Rotation must be greater than or equal to zero for backup requests """ body = { 'createBackup': { 'name': 'Backup 1', 'backup_type': 'daily', 'rotation': -1, }, } self.assertRaises(self.validation_error, self.controller._create_backup, self.req, fakes.FAKE_UUID, body=body) def test_create_backup_negative_rotation_with_string_number(self): body = { 'createBackup': { 'name': 'Backup 1', 'backup_type': 'daily', 'rotation': '-1', }, } self.assertRaises(self.validation_error, self.controller._create_backup, self.req, fakes.FAKE_UUID, body=body) def test_create_backup_rotation_with_empty_string(self): body = { 'createBackup': { 'name': 'Backup 1', 'backup_type': 'daily', 'rotation': '', }, } self.assertRaises(self.validation_error, self.controller._create_backup, self.req, fakes.FAKE_UUID, body=body) def test_create_backup_no_backup_type(self): # Backup Type (daily or weekly) is required for backup requests. body = { 'createBackup': { 'name': 'Backup 1', 'rotation': 1, }, } self.assertRaises(self.validation_error, self.controller._create_backup, self.req, fakes.FAKE_UUID, body=body) def test_create_backup_non_dict_metadata(self): body = { 'createBackup': { 'name': 'Backup 1', 'backup_type': 'daily', 'rotation': 1, 'metadata': 'non_dict', }, } self.assertRaises(self.validation_error, self.controller._create_backup, self.req, fakes.FAKE_UUID, body=body) def test_create_backup_bad_entity(self): body = {'createBackup': 'go'} self.assertRaises(self.validation_error, self.controller._create_backup, self.req, fakes.FAKE_UUID, body=body) @mock.patch.object(common, 'check_img_metadata_properties_quota') @mock.patch.object(api.API, 'backup') def test_create_backup_rotation_is_zero(self, mock_backup, mock_check_image): # The happy path for creating backups if rotation is zero. body = { 'createBackup': { 'name': 'Backup 1', 'backup_type': 'daily', 'rotation': 0, }, } image = dict(id='fake-image-id', status='ACTIVE', name='Backup 1', properties={}) instance = fake_instance.fake_instance_obj(self.context) self.mock_get.return_value = instance mock_backup.return_value = image res = self.controller._create_backup(self.req, instance.uuid, body=body) mock_check_image.assert_called_once_with(self.context, {}) mock_backup.assert_called_once_with(self.context, instance, 'Backup 1', 'daily', 0, extra_properties={}) self.assertEqual(202, res.status_int) self.assertNotIn('Location', res.headers) @mock.patch.object(common, 'check_img_metadata_properties_quota') @mock.patch.object(api.API, 'backup') def test_create_backup_rotation_is_positive(self, mock_backup, mock_check_image): # The happy path for creating backups if rotation is positive. 
body = { 'createBackup': { 'name': 'Backup 1', 'backup_type': 'daily', 'rotation': 1, }, } image = dict(id='fake-image-id', status='ACTIVE', name='Backup 1', properties={}) instance = fake_instance.fake_instance_obj(self.context) self.mock_get.return_value = instance mock_backup.return_value = image res = self.controller._create_backup(self.req, instance.uuid, body=body) mock_check_image.assert_called_once_with(self.context, {}) mock_backup.assert_called_once_with(self.context, instance, 'Backup 1', 'daily', 1, extra_properties={}) self.assertEqual(202, res.status_int) self.assertIn('fake-image-id', res.headers['Location']) @mock.patch.object(common, 'check_img_metadata_properties_quota') @mock.patch.object(api.API, 'backup') def test_create_backup_rotation_is_string_number( self, mock_backup, mock_check_image): body = { 'createBackup': { 'name': 'Backup 1', 'backup_type': 'daily', 'rotation': '1', }, } image = dict(id='fake-image-id', status='ACTIVE', name='Backup 1', properties={}) instance = fake_instance.fake_instance_obj(self.context) self.mock_get.return_value = instance mock_backup.return_value = image res = self.controller._create_backup(self.req, instance['uuid'], body=body) mock_check_image.assert_called_once_with(self.context, {}) mock_backup.assert_called_once_with(self.context, instance, 'Backup 1', 'daily', 1, extra_properties={}) self.assertEqual(202, res.status_int) self.assertIn('fake-image-id', res.headers['Location']) @mock.patch.object(common, 'check_img_metadata_properties_quota') @mock.patch.object(api.API, 'backup', return_value=dict( id='fake-image-id', status='ACTIVE', name='Backup 1', properties={})) def test_create_backup_v2_45(self, mock_backup, mock_check_image): """Tests the 2.45 microversion to ensure the Location header is not in the response. 
""" body = { 'createBackup': { 'name': 'Backup 1', 'backup_type': 'daily', 'rotation': '1', }, } instance = fake_instance.fake_instance_obj(self.context) self.mock_get.return_value = instance req = fakes.HTTPRequest.blank('', version='2.45') res = self.controller._create_backup(req, instance['uuid'], body=body) self.assertIsInstance(res, dict) self.assertEqual('fake-image-id', res['image_id']) @mock.patch.object(common, 'check_img_metadata_properties_quota') @mock.patch.object(api.API, 'backup') def test_create_backup_raises_conflict_on_invalid_state(self, mock_backup, mock_check_image): body_map = { 'createBackup': { 'name': 'Backup 1', 'backup_type': 'daily', 'rotation': 1, }, } instance = fake_instance.fake_instance_obj(self.context) self.mock_get.return_value = instance mock_backup.side_effect = exception.InstanceInvalidState( attr='vm_state', instance_uuid=instance.uuid, state='foo', method='backup') ex = self.assertRaises(webob.exc.HTTPConflict, self.controller._create_backup, self.req, instance.uuid, body=body_map) self.assertIn("Cannot 'createBackup' instance %(id)s" % {'id': instance.uuid}, ex.explanation) @mock.patch.object(common, 'check_img_metadata_properties_quota') def test_create_backup_with_non_existed_instance(self, mock_check_image): body_map = { 'createBackup': { 'name': 'Backup 1', 'backup_type': 'daily', 'rotation': 1, }, } uuid = fakes.FAKE_UUID self.mock_get.side_effect = exception.InstanceNotFound( instance_id=uuid) self.assertRaises(webob.exc.HTTPNotFound, self.controller._create_backup, self.req, uuid, body=body_map) mock_check_image.assert_called_once_with(self.context, {}) def test_create_backup_with_invalid_create_backup(self): body = { 'createBackupup': { 'name': 'Backup 1', 'backup_type': 'daily', 'rotation': 1, }, } self.assertRaises(self.validation_error, self.controller._create_backup, self.req, fakes.FAKE_UUID, body=body) @mock.patch.object(common, 'check_img_metadata_properties_quota') @mock.patch.object(compute_utils, 'is_volume_backed_instance', return_value=True) def test_backup_volume_backed_instance(self, mock_is_volume_backed, mock_check_image): body = { 'createBackup': { 'name': 'BackupMe', 'backup_type': 'daily', 'rotation': 3 }, } updates = {'vm_state': 'active', 'task_state': None, 'launched_at': timeutils.utcnow()} instance = fake_instance.fake_instance_obj(self.context, **updates) instance.image_ref = None self.mock_get.return_value = instance ex = self.assertRaises(webob.exc.HTTPBadRequest, self.controller._create_backup, self.req, instance['uuid'], body=body) mock_check_image.assert_called_once_with(self.context, {}) mock_is_volume_backed.assert_called_once_with(self.context, instance) self.assertIn('Backup is not supported for volume-backed instances', six.text_type(ex)) class CreateBackupPolicyEnforcementv21(test.NoDBTestCase): def setUp(self): super(CreateBackupPolicyEnforcementv21, self).setUp() self.controller = create_backup_v21.CreateBackupController() self.req = fakes.HTTPRequest.blank('') def test_create_backup_policy_failed(self): rule_name = "os_compute_api:os-create-backup" self.policy.set_rules({rule_name: "project:non_fake"}) metadata = {'123': 'asdf'} body = { 'createBackup': { 'name': 'Backup 1', 'backup_type': 'daily', 'rotation': 1, 'metadata': metadata, }, } exc = self.assertRaises( exception.PolicyNotAuthorized, self.controller._create_backup, self.req, fakes.FAKE_UUID, body=body) self.assertEqual( "Policy doesn't allow %s to be performed." 
            % rule_name, exc.format_message())


class CreateBackupTestsV239(test.NoDBTestCase):

    def setUp(self):
        super(CreateBackupTestsV239, self).setUp()
        self.controller = create_backup_v21.CreateBackupController()
        self.req = fakes.HTTPRequest.blank('', version='2.39')

    @mock.patch.object(common, 'check_img_metadata_properties_quota')
    @mock.patch.object(common, 'get_instance')
    def test_create_backup_no_quota_checks(self, mock_get_instance,
                                           mock_check_quotas):
        # 'mock_get_instance' raises HTTPNotFound so the rest of the action
        # logic is skipped; the only point of this test is that no quota
        # check is performed before that happens.
        mock_get_instance.side_effect = webob.exc.HTTPNotFound
        metadata = {'123': 'asdf'}
        body = {
            'createBackup': {
                'name': 'Backup 1',
                'backup_type': 'daily',
                'rotation': 1,
                'metadata': metadata,
            },
        }
        self.assertRaises(webob.exc.HTTPNotFound,
                          self.controller._create_backup,
                          self.req, fakes.FAKE_UUID, body=body)
        # Starting from version 2.39, no quota checks are performed on the
        # Nova side for the 'createBackup' action, after the removal of the
        # 'image-metadata' proxy API.
        mock_check_quotas.assert_not_called()
nova-17.0.1/nova/tests/unit/api/openstack/compute/test_multiple_create.py0000666000175000017500000004267513250073126026673 0ustar zuulzuul00000000000000
# Copyright 2013 IBM Corp.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import webob

from nova.api.openstack.compute import block_device_mapping \
    as block_device_mapping_v21
from nova.api.openstack.compute import multiple_create as multiple_create_v21
from nova.api.openstack.compute import servers as servers_v21
from nova.compute import api as compute_api
import nova.conf
from nova import exception
from nova import test
from nova.tests.unit.api.openstack import fakes
from nova.tests.unit.image import fake

CONF = nova.conf.CONF


def return_security_group(context, instance_id, security_group_id):
    pass


class MultiCreateExtensionTestV21(test.TestCase):
    validation_error = exception.ValidationError

    def setUp(self):
        """Shared implementation for tests below that create an instance."""
        super(MultiCreateExtensionTestV21, self).setUp()
        self.flags(enable_instance_password=True, group='api')
        self.instance_cache_num = 0
        self.instance_cache_by_id = {}
        self.instance_cache_by_uuid = {}

        # Network API needs to be stubbed out before creating the
        # controllers.
fakes.stub_out_nw_api(self) self.controller = servers_v21.ServersController() def instance_get(context, instance_id): """Stub for compute/api create() pulling in instance after scheduling """ return self.instance_cache_by_id[instance_id] def instance_update(context, uuid, values): instance = self.instance_cache_by_uuid[uuid] instance.update(values) return instance def server_update(context, instance_uuid, params, columns_to_join=None): inst = self.instance_cache_by_uuid[instance_uuid] inst.update(params) return (inst, inst) def fake_method(*args, **kwargs): pass def project_get_networks(context, user_id): return dict(id='1', host='localhost') def create_db_entry_for_new_instance(*args, **kwargs): instance = args[4] self.instance_cache_by_uuid[instance.uuid] = instance return instance fakes.stub_out_key_pair_funcs(self) fake.stub_out_image_service(self) self.stub_out('nova.db.instance_add_security_group', return_security_group) self.stub_out('nova.db.project_get_networks', project_get_networks) self.stub_out('nova.compute.api.API.create_db_entry_for_new_instance', create_db_entry_for_new_instance) self.stub_out('nova.db.instance_system_metadata_update', fake_method) self.stub_out('nova.db.instance_get', instance_get) self.stub_out('nova.db.instance_update', instance_update) self.stub_out('nova.db.instance_update_and_get_original', server_update) self.stub_out('nova.network.manager.VlanManager.allocate_fixed_ip', fake_method) self.req = fakes.HTTPRequest.blank('') def _test_create_extra(self, params, no_image=False): image_uuid = 'c905cedb-7281-47e4-8a62-f26bc5fc4c77' server = dict(name='server_test', imageRef=image_uuid, flavorRef=2) if no_image: server.pop('imageRef', None) server.update(params) body = dict(server=server) server = self.controller.create(self.req, body=body).obj['server'] def test_multiple_create_with_string_type_min_and_max(self): min_count = '2' max_count = '3' params = { multiple_create_v21.MIN_ATTRIBUTE_NAME: min_count, multiple_create_v21.MAX_ATTRIBUTE_NAME: max_count, } old_create = compute_api.API.create def create(*args, **kwargs): self.assertIsInstance(kwargs['min_count'], int) self.assertIsInstance(kwargs['max_count'], int) self.assertEqual(kwargs['min_count'], 2) self.assertEqual(kwargs['max_count'], 3) return old_create(*args, **kwargs) self.stub_out('nova.compute.api.API.create', create) self._test_create_extra(params) def test_create_instance_with_multiple_create_enabled(self): min_count = 2 max_count = 3 params = { multiple_create_v21.MIN_ATTRIBUTE_NAME: min_count, multiple_create_v21.MAX_ATTRIBUTE_NAME: max_count, } old_create = compute_api.API.create def create(*args, **kwargs): self.assertEqual(kwargs['min_count'], 2) self.assertEqual(kwargs['max_count'], 3) return old_create(*args, **kwargs) self.stub_out('nova.compute.api.API.create', create) self._test_create_extra(params) def test_create_instance_invalid_negative_min(self): image_href = '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6' flavor_ref = 'http://localhost/123/flavors/3' body = { 'server': { multiple_create_v21.MIN_ATTRIBUTE_NAME: -1, 'name': 'server_test', 'imageRef': image_href, 'flavorRef': flavor_ref, } } self.assertRaises(self.validation_error, self.controller.create, self.req, body=body) def test_create_instance_invalid_negative_max(self): image_href = '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6' flavor_ref = 'http://localhost/123/flavors/3' body = { 'server': { multiple_create_v21.MAX_ATTRIBUTE_NAME: -1, 'name': 'server_test', 'imageRef': image_href, 'flavorRef': flavor_ref, } } 
self.assertRaises(self.validation_error, self.controller.create, self.req, body=body) def test_create_instance_with_blank_min(self): image_href = '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6' flavor_ref = 'http://localhost/123/flavors/3' body = { 'server': { multiple_create_v21.MIN_ATTRIBUTE_NAME: '', 'name': 'server_test', 'imageRef': image_href, 'flavorRef': flavor_ref, } } self.assertRaises(self.validation_error, self.controller.create, self.req, body=body) def test_create_instance_with_blank_max(self): image_href = '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6' flavor_ref = 'http://localhost/123/flavors/3' body = { 'server': { multiple_create_v21.MAX_ATTRIBUTE_NAME: '', 'name': 'server_test', 'imageRef': image_href, 'flavorRef': flavor_ref, } } self.assertRaises(self.validation_error, self.controller.create, self.req, body=body) def test_create_instance_invalid_min_greater_than_max(self): image_href = '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6' flavor_ref = 'http://localhost/123/flavors/3' body = { 'server': { multiple_create_v21.MIN_ATTRIBUTE_NAME: 4, multiple_create_v21.MAX_ATTRIBUTE_NAME: 2, 'name': 'server_test', 'imageRef': image_href, 'flavorRef': flavor_ref, } } self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.req, body=body) def test_create_instance_invalid_alpha_min(self): image_href = '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6' flavor_ref = 'http://localhost/123/flavors/3' body = { 'server': { multiple_create_v21.MIN_ATTRIBUTE_NAME: 'abcd', 'name': 'server_test', 'imageRef': image_href, 'flavorRef': flavor_ref, } } self.assertRaises(self.validation_error, self.controller.create, self.req, body=body) def test_create_instance_invalid_alpha_max(self): image_href = '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6' flavor_ref = 'http://localhost/123/flavors/3' body = { 'server': { multiple_create_v21.MAX_ATTRIBUTE_NAME: 'abcd', 'name': 'server_test', 'imageRef': image_href, 'flavorRef': flavor_ref, } } self.assertRaises(self.validation_error, self.controller.create, self.req, body=body) def test_create_multiple_instances(self): """Test creating multiple instances but not asking for reservation_id """ image_href = '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6' flavor_ref = 'http://localhost/123/flavors/3' body = { 'server': { multiple_create_v21.MIN_ATTRIBUTE_NAME: 2, 'name': 'server_test', 'imageRef': image_href, 'flavorRef': flavor_ref, 'metadata': {'hello': 'world', 'open': 'stack'}, } } res = self.controller.create(self.req, body=body).obj instance_uuids = self.instance_cache_by_uuid.keys() self.assertIn(res["server"]["id"], instance_uuids) self._check_admin_password_len(res["server"]) def test_create_multiple_instances_pass_disabled(self): """Test creating multiple instances but not asking for reservation_id """ self.flags(enable_instance_password=False, group='api') image_href = '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6' flavor_ref = 'http://localhost/123/flavors/3' body = { 'server': { multiple_create_v21.MIN_ATTRIBUTE_NAME: 2, 'name': 'server_test', 'imageRef': image_href, 'flavorRef': flavor_ref, 'metadata': {'hello': 'world', 'open': 'stack'}, } } res = self.controller.create(self.req, body=body).obj instance_uuids = self.instance_cache_by_uuid.keys() self.assertIn(res["server"]["id"], instance_uuids) self._check_admin_password_missing(res["server"]) def _check_admin_password_len(self, server_dict): """utility function - check server_dict for admin_password length.""" self.assertEqual(CONF.password_length, len(server_dict["adminPass"])) def _check_admin_password_missing(self, server_dict): 
"""utility function - check server_dict for admin_password absence.""" self.assertNotIn("admin_password", server_dict) def _create_multiple_instances_resv_id_return(self, resv_id_return): """Test creating multiple instances with asking for reservation_id """ image_href = '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6' flavor_ref = 'http://localhost/123/flavors/3' body = { 'server': { multiple_create_v21.MIN_ATTRIBUTE_NAME: 2, 'name': 'server_test', 'imageRef': image_href, 'flavorRef': flavor_ref, 'metadata': {'hello': 'world', 'open': 'stack'}, multiple_create_v21.RRID_ATTRIBUTE_NAME: resv_id_return } } res = self.controller.create(self.req, body=body) reservation_id = res.obj['reservation_id'] self.assertNotEqual(reservation_id, "") self.assertIsNotNone(reservation_id) self.assertGreater(len(reservation_id), 1) def test_create_multiple_instances_with_resv_id_return(self): self._create_multiple_instances_resv_id_return(True) def test_create_multiple_instances_with_string_resv_id_return(self): self._create_multiple_instances_resv_id_return("True") def test_create_multiple_instances_with_multiple_volume_bdm(self): """Test that a BadRequest is raised if multiple instances are requested with a list of block device mappings for volumes. """ min_count = 2 bdm = [{'source_type': 'volume', 'uuid': 'vol-xxxx'}, {'source_type': 'volume', 'uuid': 'vol-yyyy'} ] params = { block_device_mapping_v21.ATTRIBUTE_NAME: bdm, multiple_create_v21.MIN_ATTRIBUTE_NAME: min_count } old_create = compute_api.API.create def create(*args, **kwargs): self.assertEqual(kwargs['min_count'], 2) self.assertEqual(len(kwargs['block_device_mapping']), 2) return old_create(*args, **kwargs) self.stub_out('nova.compute.api.API.create', create) exc = self.assertRaises(webob.exc.HTTPBadRequest, self._test_create_extra, params, no_image=True) self.assertEqual("Cannot attach one or more volumes to multiple " "instances", exc.explanation) def test_create_multiple_instances_with_single_volume_bdm(self): """Test that a BadRequest is raised if multiple instances are requested to boot from a single volume. 
""" min_count = 2 bdm = [{'source_type': 'volume', 'uuid': 'vol-xxxx'}] params = { block_device_mapping_v21.ATTRIBUTE_NAME: bdm, multiple_create_v21.MIN_ATTRIBUTE_NAME: min_count } old_create = compute_api.API.create def create(*args, **kwargs): self.assertEqual(kwargs['min_count'], 2) self.assertEqual(kwargs['block_device_mapping'][0]['volume_id'], 'vol-xxxx') return old_create(*args, **kwargs) self.stub_out('nova.compute.api.API.create', create) exc = self.assertRaises(webob.exc.HTTPBadRequest, self._test_create_extra, params, no_image=True) self.assertEqual("Cannot attach one or more volumes to multiple " "instances", exc.explanation) def test_create_multiple_instance_with_non_integer_max_count(self): image_href = '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6' flavor_ref = 'http://localhost/123/flavors/3' body = { 'server': { multiple_create_v21.MAX_ATTRIBUTE_NAME: 2.5, 'name': 'server_test', 'imageRef': image_href, 'flavorRef': flavor_ref, 'metadata': {'hello': 'world', 'open': 'stack'}, } } self.assertRaises(self.validation_error, self.controller.create, self.req, body=body) def test_create_multiple_instance_with_non_integer_min_count(self): image_href = '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6' flavor_ref = 'http://localhost/123/flavors/3' body = { 'server': { multiple_create_v21.MIN_ATTRIBUTE_NAME: 2.5, 'name': 'server_test', 'imageRef': image_href, 'flavorRef': flavor_ref, 'metadata': {'hello': 'world', 'open': 'stack'}, } } self.assertRaises(self.validation_error, self.controller.create, self.req, body=body) def test_create_multiple_instance_max_count_overquota_min_count_ok(self): self.flags(instances=3, group='quota') image_href = '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6' flavor_ref = 'http://localhost/123/flavors/3' body = { 'server': { multiple_create_v21.MIN_ATTRIBUTE_NAME: 2, multiple_create_v21.MAX_ATTRIBUTE_NAME: 5, 'name': 'server_test', 'imageRef': image_href, 'flavorRef': flavor_ref, } } res = self.controller.create(self.req, body=body).obj instance_uuids = self.instance_cache_by_uuid.keys() self.assertIn(res["server"]["id"], instance_uuids) def test_create_multiple_instance_max_count_overquota_min_count_over(self): self.flags(instances=3, group='quota') image_href = '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6' flavor_ref = 'http://localhost/123/flavors/3' body = { 'server': { multiple_create_v21.MIN_ATTRIBUTE_NAME: 4, multiple_create_v21.MAX_ATTRIBUTE_NAME: 5, 'name': 'server_test', 'imageRef': image_href, 'flavorRef': flavor_ref, } } self.assertRaises(webob.exc.HTTPForbidden, self.controller.create, self.req, body=body) nova-17.0.1/nova/tests/unit/api/openstack/compute/test_image_metadata.py0000666000175000017500000004120413250073126026422 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import copy import mock from oslo_serialization import jsonutils import webob from nova.api.openstack.compute import image_metadata as image_metadata_v21 from nova import exception from nova import test from nova.tests.unit.api.openstack import fakes from nova.tests.unit import image_fixtures IMAGE_FIXTURES = image_fixtures.get_image_fixtures() CHK_QUOTA_STR = 'nova.api.openstack.common.check_img_metadata_properties_quota' def get_image_123(): return copy.deepcopy(IMAGE_FIXTURES)[0] class ImageMetaDataTestV21(test.NoDBTestCase): controller_class = image_metadata_v21.ImageMetadataController invalid_request = exception.ValidationError def setUp(self): super(ImageMetaDataTestV21, self).setUp() self.controller = self.controller_class() @mock.patch('nova.image.api.API.get', return_value=get_image_123()) def test_index(self, get_all_mocked): req = fakes.HTTPRequest.blank('/v2/fake/images/123/metadata') res_dict = self.controller.index(req, '123') expected = {'metadata': {'key1': 'value1'}} self.assertEqual(res_dict, expected) get_all_mocked.assert_called_once_with(mock.ANY, '123') @mock.patch('nova.image.api.API.get', return_value=get_image_123()) def test_show(self, get_mocked): req = fakes.HTTPRequest.blank('/v2/fake/images/123/metadata/key1') res_dict = self.controller.show(req, '123', 'key1') self.assertIn('meta', res_dict) self.assertEqual(len(res_dict['meta']), 1) self.assertEqual('value1', res_dict['meta']['key1']) get_mocked.assert_called_once_with(mock.ANY, '123') @mock.patch('nova.image.api.API.get', return_value=get_image_123()) def test_show_not_found(self, _get_mocked): req = fakes.HTTPRequest.blank('/v2/fake/images/123/metadata/key9') self.assertRaises(webob.exc.HTTPNotFound, self.controller.show, req, '123', 'key9') @mock.patch('nova.image.api.API.get', side_effect=exception.ImageNotFound(image_id='100')) def test_show_image_not_found(self, _get_mocked): req = fakes.HTTPRequest.blank('/v2/fake/images/100/metadata/key1') self.assertRaises(webob.exc.HTTPNotFound, self.controller.show, req, '100', 'key9') @mock.patch(CHK_QUOTA_STR) @mock.patch('nova.image.api.API.update') @mock.patch('nova.image.api.API.get', return_value=get_image_123()) def test_create(self, get_mocked, update_mocked, quota_mocked): mock_result = copy.deepcopy(get_image_123()) mock_result['properties']['key7'] = 'value7' update_mocked.return_value = mock_result req = fakes.HTTPRequest.blank('/v2/fake/images/123/metadata') req.method = 'POST' body = {"metadata": {"key7": "value7"}} req.body = jsonutils.dump_as_bytes(body) req.headers["content-type"] = "application/json" res = self.controller.create(req, '123', body=body) get_mocked.assert_called_once_with(mock.ANY, '123') expected = copy.deepcopy(get_image_123()) expected['properties'] = { 'key1': 'value1', # existing meta 'key7': 'value7' # new meta } quota_mocked.assert_called_once_with(mock.ANY, expected["properties"]) update_mocked.assert_called_once_with(mock.ANY, '123', expected, data=None, purge_props=True) expected_output = {'metadata': {'key1': 'value1', 'key7': 'value7'}} self.assertEqual(expected_output, res) @mock.patch(CHK_QUOTA_STR) @mock.patch('nova.image.api.API.update') @mock.patch('nova.image.api.API.get', side_effect=exception.ImageNotFound(image_id='100')) def test_create_image_not_found(self, _get_mocked, update_mocked, quota_mocked): req = fakes.HTTPRequest.blank('/v2/fake/images/100/metadata') req.method = 'POST' body = {"metadata": {"key7": "value7"}} req.body = jsonutils.dump_as_bytes(body) req.headers["content-type"] = "application/json" 
self.assertRaises(webob.exc.HTTPNotFound, self.controller.create, req, '100', body=body) self.assertFalse(quota_mocked.called) self.assertFalse(update_mocked.called) @mock.patch(CHK_QUOTA_STR) @mock.patch('nova.image.api.API.update') @mock.patch('nova.image.api.API.get', return_value=get_image_123()) def test_update_all(self, get_mocked, update_mocked, quota_mocked): req = fakes.HTTPRequest.blank('/v2/fake/images/123/metadata') req.method = 'PUT' body = {"metadata": {"key9": "value9"}} req.body = jsonutils.dump_as_bytes(body) req.headers["content-type"] = "application/json" res = self.controller.update_all(req, '123', body=body) get_mocked.assert_called_once_with(mock.ANY, '123') expected = copy.deepcopy(get_image_123()) expected['properties'] = { 'key9': 'value9' # replace meta } quota_mocked.assert_called_once_with(mock.ANY, expected["properties"]) update_mocked.assert_called_once_with(mock.ANY, '123', expected, data=None, purge_props=True) expected_output = {'metadata': {'key9': 'value9'}} self.assertEqual(expected_output, res) @mock.patch(CHK_QUOTA_STR) @mock.patch('nova.image.api.API.get', side_effect=exception.ImageNotFound(image_id='100')) def test_update_all_image_not_found(self, _get_mocked, quota_mocked): req = fakes.HTTPRequest.blank('/v2/fake/images/100/metadata') req.method = 'PUT' body = {"metadata": {"key9": "value9"}} req.body = jsonutils.dump_as_bytes(body) req.headers["content-type"] = "application/json" self.assertRaises(webob.exc.HTTPNotFound, self.controller.update_all, req, '100', body=body) self.assertFalse(quota_mocked.called) @mock.patch(CHK_QUOTA_STR) @mock.patch('nova.image.api.API.update') @mock.patch('nova.image.api.API.get', return_value=get_image_123()) def test_update_item(self, _get_mocked, update_mocked, quota_mocked): req = fakes.HTTPRequest.blank('/v2/fake/images/123/metadata/key1') req.method = 'PUT' body = {"meta": {"key1": "zz"}} req.body = jsonutils.dump_as_bytes(body) req.headers["content-type"] = "application/json" res = self.controller.update(req, '123', 'key1', body=body) expected = copy.deepcopy(get_image_123()) expected['properties'] = { 'key1': 'zz' # changed meta } quota_mocked.assert_called_once_with(mock.ANY, expected["properties"]) update_mocked.assert_called_once_with(mock.ANY, '123', expected, data=None, purge_props=True) expected_output = {'meta': {'key1': 'zz'}} self.assertEqual(res, expected_output) @mock.patch(CHK_QUOTA_STR) @mock.patch('nova.image.api.API.get', side_effect=exception.ImageNotFound(image_id='100')) def test_update_item_image_not_found(self, _get_mocked, quota_mocked): req = fakes.HTTPRequest.blank('/v2/fake/images/100/metadata/key1') req.method = 'PUT' body = {"meta": {"key1": "zz"}} req.body = jsonutils.dump_as_bytes(body) req.headers["content-type"] = "application/json" self.assertRaises(webob.exc.HTTPNotFound, self.controller.update, req, '100', 'key1', body=body) self.assertFalse(quota_mocked.called) @mock.patch(CHK_QUOTA_STR) @mock.patch('nova.image.api.API.update') @mock.patch('nova.image.api.API.get') def test_update_item_bad_body(self, get_mocked, update_mocked, quota_mocked): req = fakes.HTTPRequest.blank('/v2/fake/images/123/metadata/key1') req.method = 'PUT' body = {"key1": "zz"} req.body = b'' req.headers["content-type"] = "application/json" self.assertRaises(self.invalid_request, self.controller.update, req, '123', 'key1', body=body) self.assertFalse(get_mocked.called) self.assertFalse(quota_mocked.called) self.assertFalse(update_mocked.called) @mock.patch(CHK_QUOTA_STR, 
side_effect=webob.exc.HTTPBadRequest()) @mock.patch('nova.image.api.API.update') @mock.patch('nova.image.api.API.get') def test_update_item_too_many_keys(self, get_mocked, update_mocked, _quota_mocked): req = fakes.HTTPRequest.blank('/v2/fake/images/123/metadata/key1') req.method = 'PUT' body = {"meta": {"foo": "bar"}} req.body = jsonutils.dump_as_bytes(body) req.headers["content-type"] = "application/json" self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update, req, '123', 'key1', body=body) self.assertFalse(get_mocked.called) self.assertFalse(update_mocked.called) @mock.patch(CHK_QUOTA_STR) @mock.patch('nova.image.api.API.update') @mock.patch('nova.image.api.API.get', return_value=get_image_123()) def test_update_item_body_uri_mismatch(self, _get_mocked, update_mocked, quota_mocked): req = fakes.HTTPRequest.blank('/v2/fake/images/123/metadata/bad') req.method = 'PUT' body = {"meta": {"key1": "value1"}} req.body = jsonutils.dump_as_bytes(body) req.headers["content-type"] = "application/json" self.assertRaises(webob.exc.HTTPBadRequest, self.controller.update, req, '123', 'bad', body=body) self.assertFalse(quota_mocked.called) self.assertFalse(update_mocked.called) @mock.patch('nova.image.api.API.update') @mock.patch('nova.image.api.API.get', return_value=get_image_123()) def test_delete(self, _get_mocked, update_mocked): req = fakes.HTTPRequest.blank('/v2/fake/images/123/metadata/key1') req.method = 'DELETE' res = self.controller.delete(req, '123', 'key1') expected = copy.deepcopy(get_image_123()) expected['properties'] = {} update_mocked.assert_called_once_with(mock.ANY, '123', expected, data=None, purge_props=True) self.assertIsNone(res) @mock.patch('nova.image.api.API.get', return_value=get_image_123()) def test_delete_not_found(self, _get_mocked): req = fakes.HTTPRequest.blank('/v2/fake/images/123/metadata/blah') req.method = 'DELETE' self.assertRaises(webob.exc.HTTPNotFound, self.controller.delete, req, '123', 'blah') @mock.patch('nova.image.api.API.get', side_effect=exception.ImageNotFound(image_id='100')) def test_delete_image_not_found(self, _get_mocked): req = fakes.HTTPRequest.blank('/v2/fake/images/100/metadata/key1') req.method = 'DELETE' self.assertRaises(webob.exc.HTTPNotFound, self.controller.delete, req, '100', 'key1') @mock.patch(CHK_QUOTA_STR, side_effect=webob.exc.HTTPForbidden(explanation='')) @mock.patch('nova.image.api.API.update') @mock.patch('nova.image.api.API.get', return_value=get_image_123()) def test_too_many_metadata_items_on_create(self, _get_mocked, update_mocked, _quota_mocked): body = {"metadata": {"foo": "bar"}} req = fakes.HTTPRequest.blank('/v2/fake/images/123/metadata') req.method = 'POST' req.body = jsonutils.dump_as_bytes(body) req.headers["content-type"] = "application/json" self.assertRaises(webob.exc.HTTPForbidden, self.controller.create, req, '123', body=body) self.assertFalse(update_mocked.called) @mock.patch(CHK_QUOTA_STR, side_effect=webob.exc.HTTPForbidden(explanation='')) @mock.patch('nova.image.api.API.update') @mock.patch('nova.image.api.API.get', return_value=get_image_123()) def test_too_many_metadata_items_on_put(self, _get_mocked, update_mocked, _quota_mocked): req = fakes.HTTPRequest.blank('/v2/fake/images/123/metadata/blah') req.method = 'PUT' body = {"meta": {"blah": "blah", "blah1": "blah1"}} req.body = jsonutils.dump_as_bytes(body) req.headers["content-type"] = "application/json" self.assertRaises(self.invalid_request, self.controller.update, req, '123', 'blah', body=body) self.assertFalse(update_mocked.called) 
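    # The two "too many metadata items" tests above probe different
    # layers: on create, the shared quota helper referenced by
    # CHK_QUOTA_STR is patched to raise HTTPForbidden and the controller
    # must surface it without calling the image API; on a single-item
    # PUT, the schema rejects a multi-key "meta" body before the quota
    # helper is ever reached. A minimal sketch of the create-side
    # pattern (illustrative, using only names defined in this module):
    #
    #     with mock.patch(CHK_QUOTA_STR,
    #                     side_effect=webob.exc.HTTPForbidden()):
    #         self.assertRaises(webob.exc.HTTPForbidden,
    #                           self.controller.create, req, '123',
    #                           body={'metadata': {'foo': 'bar'}})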
    @mock.patch('nova.image.api.API.get',
                side_effect=exception.ImageNotAuthorized(image_id='123'))
    def test_image_not_authorized_update(self, _get_mocked):
        req = fakes.HTTPRequest.blank('/v2/fake/images/123/metadata/key1')
        req.method = 'PUT'
        body = {"meta": {"key1": "value1"}}
        req.body = jsonutils.dump_as_bytes(body)
        req.headers["content-type"] = "application/json"

        self.assertRaises(webob.exc.HTTPForbidden,
                          self.controller.update, req, '123', 'key1',
                          body=body)

    @mock.patch('nova.image.api.API.get',
                side_effect=exception.ImageNotAuthorized(image_id='123'))
    def test_image_not_authorized_update_all(self, _get_mocked):
        image_id = 131
        # see nova.tests.unit.api.openstack.fakes:_make_image_fixtures
        req = fakes.HTTPRequest.blank('/v2/fake/images/%s/metadata/key1'
                                      % image_id)
        req.method = 'PUT'
        body = {"metadata": {"key1": "value1"}}
        req.body = jsonutils.dump_as_bytes(body)
        req.headers["content-type"] = "application/json"

        self.assertRaises(webob.exc.HTTPForbidden,
                          self.controller.update_all, req, image_id,
                          body=body)

    @mock.patch('nova.image.api.API.get',
                side_effect=exception.ImageNotAuthorized(image_id='123'))
    def test_image_not_authorized_create(self, _get_mocked):
        image_id = 131
        # see nova.tests.unit.api.openstack.fakes:_make_image_fixtures
        req = fakes.HTTPRequest.blank('/v2/fake/images/%s/metadata/key1'
                                      % image_id)
        req.method = 'POST'
        body = {"metadata": {"key1": "value1"}}
        req.body = jsonutils.dump_as_bytes(body)
        req.headers["content-type"] = "application/json"

        self.assertRaises(webob.exc.HTTPForbidden,
                          self.controller.create, req, image_id,
                          body=body)


class ImageMetadataControllerV239(test.NoDBTestCase):

    def setUp(self):
        super(ImageMetadataControllerV239, self).setUp()
        self.controller = image_metadata_v21.ImageMetadataController()
        self.req = fakes.HTTPRequest.blank('', version='2.39')

    def test_not_found_for_all_image_metadata_api(self):
        self.assertRaises(exception.VersionNotFoundForAPIMethod,
                          self.controller.index, self.req)
        self.assertRaises(exception.VersionNotFoundForAPIMethod,
                          self.controller.show, self.req, fakes.FAKE_UUID)
        self.assertRaises(exception.VersionNotFoundForAPIMethod,
                          self.controller.create, self.req,
                          fakes.FAKE_UUID, {'metadata': {}})
        self.assertRaises(exception.VersionNotFoundForAPIMethod,
                          self.controller.update, self.req,
                          fakes.FAKE_UUID, 'id', {'metadata': {}})
        self.assertRaises(exception.VersionNotFoundForAPIMethod,
                          self.controller.update_all, self.req,
                          fakes.FAKE_UUID, {'metadata': {}})
        self.assertRaises(exception.VersionNotFoundForAPIMethod,
                          self.controller.delete, self.req, fakes.FAKE_UUID)
nova-17.0.1/nova/tests/unit/api/openstack/compute/test_flavor_manage.py0000666000175000017500000007262713250073126026306 0ustar zuulzuul00000000000000
# Copyright 2011 Andrew Bogott for the Wikimedia Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
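# A reading aid for the tests below (an illustrative sketch, not used by
# the suite): a flavor-manage create request wraps the required fields
# (name, ram, vcpus, disk) plus the optional extras these tests vary,
# mirroring the request_body built in FlavorManageTestV21.setUp() below.
def _example_flavor_create_body(flavorid='1234'):
    """Illustrative only: a flavor-create request body."""
    return {'flavor': {'name': 'test',
                       'ram': 512,
                       'vcpus': 2,
                       'disk': 1,
                       'OS-FLV-EXT-DATA:ephemeral': 1,
                       'id': flavorid,
                       'swap': 512,
                       'rxtx_factor': 1,
                       'os-flavor-access:is_public': True}}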
import mock from oslo_serialization import jsonutils import six import webob from nova.api.openstack import api_version_request from nova.api.openstack.compute import flavor_access as flavor_access_v21 from nova.api.openstack.compute import flavor_manage as flavormanage_v21 from nova.compute import flavors from nova import db from nova import exception from nova import objects from nova import policy from nova import test from nova.tests.unit.api.openstack import fakes def fake_create(newflavor): newflavor['flavorid'] = 1234 newflavor["name"] = 'test' newflavor["memory_mb"] = 512 newflavor["vcpus"] = 2 newflavor["root_gb"] = 1 newflavor["ephemeral_gb"] = 1 newflavor["swap"] = 512 newflavor["rxtx_factor"] = 1.0 newflavor["is_public"] = True newflavor["disabled"] = False class FlavorManageTestV21(test.NoDBTestCase): controller = flavormanage_v21.FlavorManageController() validation_error = exception.ValidationError base_url = '/v2/fake/flavors' microversion = '2.1' def setUp(self): super(FlavorManageTestV21, self).setUp() self.stub_out("nova.objects.Flavor.create", fake_create) self.request_body = { "flavor": { "name": "test", "ram": 512, "vcpus": 2, "disk": 1, "OS-FLV-EXT-DATA:ephemeral": 1, "id": six.text_type('1234'), "swap": 512, "rxtx_factor": 1, "os-flavor-access:is_public": True, } } self.expected_flavor = self.request_body def _get_http_request(self, url=''): return fakes.HTTPRequest.blank(url, version=self.microversion, use_admin_context=True) @property def app(self): return fakes.wsgi_app_v21() @mock.patch('nova.objects.Flavor.destroy') def test_delete(self, mock_destroy): res = self.controller._delete(self._get_http_request(), 1234) # NOTE: on v2.1, http status code is set as wsgi_code of API # method instead of status_int in a response object. if isinstance(self.controller, flavormanage_v21.FlavorManageController): status_int = self.controller._delete.wsgi_code else: status_int = res.status_int self.assertEqual(202, status_int) # subsequent delete should fail mock_destroy.side_effect = exception.FlavorNotFound(flavor_id=1234) self.assertRaises(webob.exc.HTTPNotFound, self.controller._delete, self._get_http_request(), 1234) def _test_create_missing_parameter(self, parameter): body = { "flavor": { "name": "azAZ09. 
-_", "ram": 512, "vcpus": 2, "disk": 1, "OS-FLV-EXT-DATA:ephemeral": 1, "id": six.text_type('1234'), "swap": 512, "rxtx_factor": 1, "os-flavor-access:is_public": True, } } del body['flavor'][parameter] self.assertRaises(self.validation_error, self.controller._create, self._get_http_request(), body=body) def test_create_missing_name(self): self._test_create_missing_parameter('name') def test_create_missing_ram(self): self._test_create_missing_parameter('ram') def test_create_missing_vcpus(self): self._test_create_missing_parameter('vcpus') def test_create_missing_disk(self): self._test_create_missing_parameter('disk') def _create_flavor_success_case(self, body, req=None): req = req if req else self._get_http_request(url=self.base_url) req.headers['Content-Type'] = 'application/json' req.headers['X-OpenStack-Nova-API-Version'] = self.microversion req.method = 'POST' req.body = jsonutils.dump_as_bytes(body) res = req.get_response(self.app) self.assertEqual(200, res.status_code) return jsonutils.loads(res.body) def test_create(self): body = self._create_flavor_success_case(self.request_body) for key in self.expected_flavor["flavor"]: self.assertEqual(body["flavor"][key], self.expected_flavor["flavor"][key]) def test_create_public_default(self): del self.request_body['flavor']['os-flavor-access:is_public'] body = self._create_flavor_success_case(self.request_body) for key in self.expected_flavor["flavor"]: self.assertEqual(body["flavor"][key], self.expected_flavor["flavor"][key]) def test_create_without_flavorid(self): del self.request_body['flavor']['id'] body = self._create_flavor_success_case(self.request_body) for key in self.expected_flavor["flavor"]: self.assertEqual(body["flavor"][key], self.expected_flavor["flavor"][key]) def _create_flavor_bad_request_case(self, body): self.assertRaises(self.validation_error, self.controller._create, self._get_http_request(), body=body) def test_create_invalid_name(self): self.request_body['flavor']['name'] = 'bad !@#!$%\x00 name' self._create_flavor_bad_request_case(self.request_body) def test_create_flavor_name_is_whitespace(self): self.request_body['flavor']['name'] = ' ' self._create_flavor_bad_request_case(self.request_body) def test_create_with_name_too_long(self): self.request_body['flavor']['name'] = 'a' * 256 self._create_flavor_bad_request_case(self.request_body) def test_create_with_name_leading_trailing_spaces(self): self.request_body['flavor']['name'] = ' test ' self._create_flavor_bad_request_case(self.request_body) def test_create_with_name_leading_trailing_spaces_compat_mode(self): req = self._get_http_request(url=self.base_url) req.set_legacy_v2() self.request_body['flavor']['name'] = ' test ' body = self._create_flavor_success_case(self.request_body, req) self.assertEqual('test', body['flavor']['name']) def test_create_without_flavorname(self): del self.request_body['flavor']['name'] self._create_flavor_bad_request_case(self.request_body) def test_create_empty_body(self): body = { "flavor": {} } self._create_flavor_bad_request_case(body) def test_create_no_body(self): body = {} self._create_flavor_bad_request_case(body) def test_create_invalid_format_body(self): body = { "flavor": [] } self._create_flavor_bad_request_case(body) def test_create_invalid_flavorid(self): self.request_body['flavor']['id'] = "!@#!$#!$^#&^$&" self._create_flavor_bad_request_case(self.request_body) def test_create_check_flavor_id_length(self): MAX_LENGTH = 255 self.request_body['flavor']['id'] = "a" * (MAX_LENGTH + 1) 
self._create_flavor_bad_request_case(self.request_body) def test_create_with_leading_trailing_whitespaces_in_flavor_id(self): self.request_body['flavor']['id'] = " bad_id " self._create_flavor_bad_request_case(self.request_body) def test_create_without_ram(self): del self.request_body['flavor']['ram'] self._create_flavor_bad_request_case(self.request_body) def test_create_with_0_ram(self): self.request_body['flavor']['ram'] = 0 self._create_flavor_bad_request_case(self.request_body) def test_create_with_ram_exceed_max_limit(self): self.request_body['flavor']['ram'] = db.MAX_INT + 1 self._create_flavor_bad_request_case(self.request_body) def test_create_without_vcpus(self): del self.request_body['flavor']['vcpus'] self._create_flavor_bad_request_case(self.request_body) def test_create_with_0_vcpus(self): self.request_body['flavor']['vcpus'] = 0 self._create_flavor_bad_request_case(self.request_body) def test_create_with_vcpus_exceed_max_limit(self): self.request_body['flavor']['vcpus'] = db.MAX_INT + 1 self._create_flavor_bad_request_case(self.request_body) def test_create_without_disk(self): del self.request_body['flavor']['disk'] self._create_flavor_bad_request_case(self.request_body) def test_create_with_minus_disk(self): self.request_body['flavor']['disk'] = -1 self._create_flavor_bad_request_case(self.request_body) def test_create_with_disk_exceed_max_limit(self): self.request_body['flavor']['disk'] = db.MAX_INT + 1 self._create_flavor_bad_request_case(self.request_body) def test_create_with_minus_ephemeral(self): self.request_body['flavor']['OS-FLV-EXT-DATA:ephemeral'] = -1 self._create_flavor_bad_request_case(self.request_body) def test_create_with_ephemeral_exceed_max_limit(self): self.request_body['flavor'][ 'OS-FLV-EXT-DATA:ephemeral'] = db.MAX_INT + 1 self._create_flavor_bad_request_case(self.request_body) def test_create_with_minus_swap(self): self.request_body['flavor']['swap'] = -1 self._create_flavor_bad_request_case(self.request_body) def test_create_with_swap_exceed_max_limit(self): self.request_body['flavor']['swap'] = db.MAX_INT + 1 self._create_flavor_bad_request_case(self.request_body) def test_create_with_minus_rxtx_factor(self): self.request_body['flavor']['rxtx_factor'] = -1 self._create_flavor_bad_request_case(self.request_body) def test_create_with_rxtx_factor_exceed_max_limit(self): self.request_body['flavor']['rxtx_factor'] = db.SQL_SP_FLOAT_MAX * 2 self._create_flavor_bad_request_case(self.request_body) def test_create_with_non_boolean_is_public(self): self.request_body['flavor']['os-flavor-access:is_public'] = 123 self._create_flavor_bad_request_case(self.request_body) def test_flavor_exists_exception_returns_409(self): expected = { "flavor": { "name": "test", "ram": 512, "vcpus": 2, "disk": 1, "OS-FLV-EXT-DATA:ephemeral": 1, "id": 1235, "swap": 512, "rxtx_factor": 1, "os-flavor-access:is_public": True, } } def fake_create(name, memory_mb, vcpus, root_gb, ephemeral_gb, flavorid, swap, rxtx_factor, is_public, description): raise exception.FlavorExists(name=name) self.stub_out('nova.compute.flavors.create', fake_create) self.assertRaises(webob.exc.HTTPConflict, self.controller._create, self._get_http_request(), body=expected) def test_invalid_memory_mb(self): """Check negative and decimal number can't be accepted.""" self.assertRaises(exception.InvalidInput, flavors.create, "abc", -512, 2, 1, 1, 1234, 512, 1, True) self.assertRaises(exception.InvalidInput, flavors.create, "abcd", 512.2, 2, 1, 1, 1234, 512, 1, True) self.assertRaises(exception.InvalidInput, 
flavors.create, "abcde", None, 2, 1, 1, 1234, 512, 1, True) self.assertRaises(exception.InvalidInput, flavors.create, "abcdef", 512, 2, None, 1, 1234, 512, 1, True) self.assertRaises(exception.InvalidInput, flavors.create, "abcdef", "test_memory_mb", 2, None, 1, 1234, 512, 1, True) def test_create_with_description(self): """With microversion <2.55 this should return a failure.""" self.request_body['flavor']['description'] = 'invalid' ex = self.assertRaises( self.validation_error, self.controller._create, self._get_http_request(), body=self.request_body) self.assertIn('description', six.text_type(ex)) def test_flavor_update_description(self): """With microversion <2.55 this should return a failure.""" flavor = self._create_flavor_success_case(self.request_body)['flavor'] self.assertRaises( exception.VersionNotFoundForAPIMethod, self.controller._update, self._get_http_request(), flavor['id'], body={'flavor': {'description': 'nope'}}) class FlavorManageTestV2_55(FlavorManageTestV21): microversion = '2.55' def setUp(self): super(FlavorManageTestV2_55, self).setUp() # Send a description in POST /flavors requests. self.request_body['flavor']['description'] = 'test description' def test_create_with_description(self): # test_create already tests this. pass @mock.patch('nova.objects.Flavor.get_by_flavor_id') @mock.patch('nova.objects.Flavor.save') def test_flavor_update_description(self, mock_flavor_save, mock_get): """Tests updating a flavor description.""" # First create a flavor. flavor = self._create_flavor_success_case(self.request_body)['flavor'] self.assertEqual('test description', flavor['description']) mock_get.return_value = objects.Flavor( flavorid=flavor['id'], name=flavor['name'], memory_mb=flavor['ram'], vcpus=flavor['vcpus'], root_gb=flavor['disk'], swap=flavor['swap'], ephemeral_gb=flavor['OS-FLV-EXT-DATA:ephemeral'], disabled=flavor['OS-FLV-DISABLED:disabled'], is_public=flavor['os-flavor-access:is_public'], rxtx_factor=flavor['rxtx_factor'], description=flavor['description']) # Now null out the flavor description. flavor = self.controller._update( self._get_http_request(), flavor['id'], body={'flavor': {'description': None}})['flavor'] self.assertIsNone(flavor['description']) mock_get.assert_called_once_with( test.MatchType(fakes.FakeRequestContext), flavor['id']) mock_flavor_save.assert_called_once_with() @mock.patch('nova.objects.Flavor.get_by_flavor_id', side_effect=exception.FlavorNotFound(flavor_id='notfound')) def test_flavor_update_not_found(self, mock_get): """Tests that a 404 is returned if the flavor is not found.""" self.assertRaises(webob.exc.HTTPNotFound, self.controller._update, self._get_http_request(), 'notfound', body={'flavor': {'description': None}}) def test_flavor_update_missing_description(self): """Tests that a schema validation error is raised if no description is provided in the update request body. """ self.assertRaises(self.validation_error, self.controller._update, self._get_http_request(), 'invalid', body={'flavor': {}}) def test_create_with_invalid_description(self): # NOTE(mriedem): Intentionally not using ddt for this since ddt will # create a test name that has 65536 'a's in the name which blows up # the console output. 
        for description in ('bad !@#!$%\x00 description',  # printable chars
                            'a' * 65536):  # maxLength
            self.request_body['flavor']['description'] = description
            self.assertRaises(self.validation_error, self.controller._create,
                              self._get_http_request(),
                              body=self.request_body)

    @mock.patch('nova.objects.Flavor.get_by_flavor_id')
    @mock.patch('nova.objects.Flavor.save')
    def test_update_with_invalid_description(self, mock_flavor_save,
                                             mock_get):
        # First create a flavor.
        flavor = self._create_flavor_success_case(self.request_body)['flavor']
        self.assertEqual('test description', flavor['description'])
        mock_get.return_value = objects.Flavor(
            flavorid=flavor['id'], name=flavor['name'],
            memory_mb=flavor['ram'], vcpus=flavor['vcpus'],
            root_gb=flavor['disk'], swap=flavor['swap'],
            ephemeral_gb=flavor['OS-FLV-EXT-DATA:ephemeral'],
            disabled=flavor['OS-FLV-DISABLED:disabled'],
            is_public=flavor['os-flavor-access:is_public'],
            description=flavor['description'])
        # NOTE(mriedem): Intentionally not using ddt for this since ddt will
        # create a test name that has 65536 'a's in the name which blows up
        # the console output.
        for description in ('bad !@#!$%\x00 description',  # printable chars
                            'a' * 65536):  # maxLength
            self.request_body['flavor']['description'] = description
            self.assertRaises(self.validation_error, self.controller._update,
                              self._get_http_request(), flavor['id'],
                              body={'flavor': {'description': description}})


class PrivateFlavorManageTestV21(test.TestCase):
    controller = flavormanage_v21.FlavorManageController()
    base_url = '/v2/fake/flavors'

    def setUp(self):
        super(PrivateFlavorManageTestV21, self).setUp()
        self.flavor_access_controller = (flavor_access_v21.
                                         FlavorAccessController())
        self.expected = {
            "flavor": {
                "name": "test",
                "ram": 512,
                "vcpus": 2,
                "disk": 1,
                "OS-FLV-EXT-DATA:ephemeral": 1,
                "swap": 512,
                "rxtx_factor": 1
            }
        }

    @property
    def app(self):
        return fakes.wsgi_app_v21(fake_auth_context=self._get_http_request().
                                  environ['nova.context'])

    def _get_http_request(self, url=''):
        return fakes.HTTPRequest.blank(url)

    def _get_response(self):
        req = self._get_http_request(self.base_url)
        req.headers['Content-Type'] = 'application/json'
        req.method = 'POST'
        req.body = jsonutils.dump_as_bytes(self.expected)
        res = req.get_response(self.app)
        return jsonutils.loads(res.body)

    def test_create_private_flavor_should_not_grant_flavor_access(self):
        self.expected["flavor"]["os-flavor-access:is_public"] = False
        body = self._get_response()
        for key in self.expected["flavor"]:
            self.assertEqual(body["flavor"][key],
                             self.expected["flavor"][key])
        # A normal user can't access a non-public flavor without an access
        # grant, so an admin context is needed here.
flavor_access_body = self.flavor_access_controller.index( fakes.HTTPRequest.blank('', use_admin_context=True), body["flavor"]["id"]) expected_flavor_access_body = { "tenant_id": 'fake', "flavor_id": "%s" % body["flavor"]["id"] } self.assertNotIn(expected_flavor_access_body, flavor_access_body["flavor_access"]) def test_create_public_flavor_should_not_create_flavor_access(self): self.expected["flavor"]["os-flavor-access:is_public"] = True body = self._get_response() for key in self.expected["flavor"]: self.assertEqual(body["flavor"][key], self.expected["flavor"][key]) class FlavorManagerPolicyEnforcementV21(test.TestCase): def setUp(self): super(FlavorManagerPolicyEnforcementV21, self).setUp() self.controller = flavormanage_v21.FlavorManageController() self.adm_req = fakes.HTTPRequest.blank('', use_admin_context=True) self.req = fakes.HTTPRequest.blank('') def test_create_policy_failed(self): rule_name = "os_compute_api:os-flavor-manage" self.policy.set_rules({rule_name: "project:non_fake"}) exc = self.assertRaises( exception.PolicyNotAuthorized, self.controller._create, self.req, body={"flavor": { "name": "test", "ram": 512, "vcpus": 2, "disk": 1, "swap": 512, "rxtx_factor": 1, }}) # The deprecated action is being enforced since the rule that is # configured is different than the default rule self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) def test_delete_policy_failed(self): rule_name = "os_compute_api:os-flavor-manage" self.policy.set_rules({rule_name: "project:non_fake"}) exc = self.assertRaises( exception.PolicyNotAuthorized, self.controller._delete, self.req, fakes.FAKE_UUID) # The deprecated action is being enforced since the rule that is # configured is different than the default rule self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) @mock.patch.object(policy.LOG, 'warning') def test_create_policy_rbac_inherit_default(self, mock_warning): """Test to verify inherited rule is working. The rule of the deprecated action is not set to the default, so the deprecated action is being enforced """ default_flavor_policy = "os_compute_api:os-flavor-manage" create_flavor_policy = "os_compute_api:os-flavor-manage:create" rules = {default_flavor_policy: 'is_admin:True', create_flavor_policy: 'rule:%s' % default_flavor_policy, "os_compute_api:os-flavor-access": "project:non_fake"} self.policy.set_rules(rules) body = { "flavor": { "name": "azAZ09. -_", "ram": 512, "vcpus": 2, "disk": 1, "OS-FLV-EXT-DATA:ephemeral": 1, "id": six.text_type('1234'), "swap": 512, "rxtx_factor": 1, "os-flavor-access:is_public": True, } } # check for success as admin self.controller._create(self.adm_req, body=body) # check for failure as non-admin exc = self.assertRaises(exception.PolicyNotAuthorized, self.controller._create, self.req, body=body) # The deprecated action is being enforced since the rule that is # configured is different than the default rule self.assertEqual( "Policy doesn't allow %s to be performed." % default_flavor_policy, exc.format_message()) mock_warning.assert_called_with("Start using the new " "action '{0}'. The existing action '{1}' is being deprecated and " "will be removed in future release.".format(create_flavor_policy, default_flavor_policy)) @mock.patch.object(policy.LOG, 'warning') def test_delete_policy_rbac_inherit_default(self, mock_warning): """Test to verify inherited rule is working. 
The rule of the deprecated action is not set to the default, so the deprecated action is being enforced """ default_flavor_policy = "os_compute_api:os-flavor-manage" create_flavor_policy = "os_compute_api:os-flavor-manage:create" delete_flavor_policy = "os_compute_api:os-flavor-manage:delete" rules = {default_flavor_policy: 'is_admin:True', create_flavor_policy: 'rule:%s' % default_flavor_policy, delete_flavor_policy: 'rule:%s' % default_flavor_policy} self.policy.set_rules(rules) body = { "flavor": { "name": "azAZ09. -_", "ram": 512, "vcpus": 2, "disk": 1, "OS-FLV-EXT-DATA:ephemeral": 1, "id": six.text_type('1234'), "swap": 512, "rxtx_factor": 1, "os-flavor-access:is_public": True, } } self.flavor = self.controller._create(self.adm_req, body=body) mock_warning.assert_called_once_with("Start using the new " "action '{0}'. The existing action '{1}' is being deprecated and " "will be removed in future release.".format(create_flavor_policy, default_flavor_policy)) # check for success as admin flavor = self.flavor self.controller._delete(self.adm_req, flavor['flavor']['id']) # check for failure as non-admin flavor = self.flavor exc = self.assertRaises(exception.PolicyNotAuthorized, self.controller._delete, self.req, flavor['flavor']['id']) # The deprecated action is being enforced since the rule that is # configured is different than the default rule self.assertEqual( "Policy doesn't allow %s to be performed." % default_flavor_policy, exc.format_message()) mock_warning.assert_called_with("Start using the new " "action '{0}'. The existing action '{1}' is being deprecated and " "will be removed in future release.".format(delete_flavor_policy, default_flavor_policy)) def test_create_policy_rbac_no_change_to_default_action_rule(self): """Test to verify the correct action is being enforced. When the rule configured for the deprecated action is the same as the default, the new action should be enforced. """ default_flavor_policy = "os_compute_api:os-flavor-manage" create_flavor_policy = "os_compute_api:os-flavor-manage:create" # The default rule of the deprecated action is admin_api rules = {default_flavor_policy: 'rule:admin_api', create_flavor_policy: 'rule:%s' % default_flavor_policy} self.policy.set_rules(rules) body = { "flavor": { "name": "azAZ09. -_", "ram": 512, "vcpus": 2, "disk": 1, "OS-FLV-EXT-DATA:ephemeral": 1, "id": six.text_type('1234'), "swap": 512, "rxtx_factor": 1, "os-flavor-access:is_public": True, } } exc = self.assertRaises(exception.PolicyNotAuthorized, self.controller._create, self.req, body=body) self.assertEqual( "Policy doesn't allow %s to be performed." % create_flavor_policy, exc.format_message()) def test_delete_policy_rbac_change_to_default_action_rule(self): """Test to verify the correct action is being enforced. When the rule configured for the deprecated action is the same as the default, the new action should be enforced. """ default_flavor_policy = "os_compute_api:os-flavor-manage" create_flavor_policy = "os_compute_api:os-flavor-manage:create" delete_flavor_policy = "os_compute_api:os-flavor-manage:delete" # The default rule of the deprecated action is admin_api # Set the rule of the create flavor action to is_admin:True so that # admin context can be used to create a flavor rules = {default_flavor_policy: 'rule:admin_api', create_flavor_policy: 'is_admin:True', delete_flavor_policy: 'rule:%s' % default_flavor_policy} self.policy.set_rules(rules) body = { "flavor": { "name": "azAZ09. 
-_", "ram": 512, "vcpus": 2, "disk": 1, "OS-FLV-EXT-DATA:ephemeral": 1, "id": six.text_type('1234'), "swap": 512, "rxtx_factor": 1, "os-flavor-access:is_public": True, } } flavor = self.controller._create(self.adm_req, body=body) exc = self.assertRaises(exception.PolicyNotAuthorized, self.controller._delete, self.req, flavor['flavor']['id']) self.assertEqual( "Policy doesn't allow %s to be performed." % delete_flavor_policy, exc.format_message()) def test_flavor_update_non_admin_fails(self): """Tests that trying to update a flavor as a non-admin fails due to the default policy. """ self.req.api_version_request = api_version_request.APIVersionRequest( '2.55') exc = self.assertRaises( exception.PolicyNotAuthorized, self.controller._update, self.req, 'fake_id', body={"flavor": {"description": "not authorized"}}) self.assertEqual( "Policy doesn't allow os_compute_api:os-flavor-manage:update to " "be performed.", exc.format_message()) nova-17.0.1/nova/tests/unit/api/openstack/compute/test_floating_ips.py0000666000175000017500000013276213250073126026170 0ustar zuulzuul00000000000000# Copyright (c) 2011 X.commerce, a business unit of eBay Inc. # Copyright 2011 Eldar Nugaev # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock import six import webob from nova.api.openstack.compute import floating_ips as fips_v21 from nova import compute from nova import context from nova import db from nova import exception from nova import network from nova import objects from nova.objects import base as obj_base from nova import test from nova.tests.unit.api.openstack import fakes from nova.tests.unit import fake_network from nova.tests import uuidsentinel as uuids FAKE_UUID = 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa' TEST_INST = 1 WRONG_INST = 9999 def network_api_get_floating_ip(self, context, id): return {'id': 1, 'address': '10.10.10.10', 'pool': 'nova', 'fixed_ip_id': None} def network_api_get_floating_ip_by_address(self, context, address): return {'id': 1, 'address': '10.10.10.10', 'pool': 'nova', 'fixed_ip_id': 10} def network_api_get_floating_ips_by_project(self, context): return [{'id': 1, 'address': '10.10.10.10', 'pool': 'nova', 'fixed_ip': {'address': '10.0.0.1', 'instance_uuid': FAKE_UUID, 'instance': objects.Instance( **{'uuid': FAKE_UUID})}}, {'id': 2, 'pool': 'nova', 'interface': 'eth0', 'address': '10.10.10.11', 'fixed_ip': None}] def compute_api_get(self, context, instance_id, expected_attrs=None): return objects.Instance(uuid=FAKE_UUID, id=instance_id, instance_type_id=1, host='bob') def network_api_allocate(self, context): return '10.10.10.10' def network_api_release(self, context, address): pass def compute_api_associate(self, context, instance_id, address): pass def network_api_associate(self, context, floating_address, fixed_address): pass def network_api_disassociate(self, context, instance, floating_address): pass def fake_instance_get(context, instance_id): return objects.Instance(**{ "id": 1, "uuid": uuids.fake, "name": 'fake', "user_id": 'fakeuser', "project_id": '123'}) 
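# NOTE: editorial aside, not part of the upstream test module. The
# module-level fakes above (network_api_get_floating_ip and friends) are
# installed over the real API methods in each test case's setUp(), e.g.:
#
#     self.stubs.Set(network.api.API, "get_floating_ip",
#                    network_api_get_floating_ip)
#
# Each fake takes 'self' as its first parameter so it can stand in for a
# method on the API class, and returns canned data such as the '10.10.10.10'
# floating IP these tests assert against.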
def stub_nw_info(test): def get_nw_info_for_instance(instance): return fake_network.fake_get_instance_nw_info(test) return get_nw_info_for_instance def get_instance_by_floating_ip_addr(self, context, address): return None class FloatingIpTestNeutronV21(test.NoDBTestCase): floating_ips = fips_v21 def setUp(self): super(FloatingIpTestNeutronV21, self).setUp() self.flags(use_neutron=True) self.controller = self.floating_ips.FloatingIPController() def test_floatingip_delete(self): req = fakes.HTTPRequest.blank('') fip_val = {'address': '1.1.1.1', 'fixed_ip_id': '192.168.1.2'} with test.nested( mock.patch.object(self.controller.network_api, 'disassociate_floating_ip'), mock.patch.object(self.controller.network_api, 'disassociate_and_release_floating_ip'), mock.patch.object(self.controller.network_api, 'release_floating_ip'), mock.patch.object(self.controller.network_api, 'get_instance_id_by_floating_address', return_value=None), mock.patch.object(self.controller.network_api, 'get_floating_ip', return_value=fip_val)) as ( disoc_fip, dis_and_del, rel_fip, _, _): self.controller.delete(req, 1) self.assertFalse(disoc_fip.called) self.assertFalse(rel_fip.called) # Only disassociate_and_release_floating_ip is # called if using neutron self.assertTrue(dis_and_del.called) def _test_floatingip_delete_not_found(self, ex, expect_ex=webob.exc.HTTPNotFound): req = fakes.HTTPRequest.blank('') with mock.patch.object(self.controller.network_api, 'get_floating_ip', side_effect=ex): self.assertRaises(expect_ex, self.controller.delete, req, 1) def test_floatingip_delete_not_found_ip(self): ex = exception.FloatingIpNotFound(id=1) self._test_floatingip_delete_not_found(ex) def test_floatingip_delete_not_found(self): ex = exception.NotFound self._test_floatingip_delete_not_found(ex) def test_floatingip_delete_invalid_id(self): ex = exception.InvalidID(id=1) self._test_floatingip_delete_not_found(ex, webob.exc.HTTPBadRequest) def _test_floatingip_delete_error_disassociate(self, raised_exc, expected_exc): """Ensure that various exceptions are correctly transformed. Handle the myriad exceptions that could be raised from the 'disassociate_and_release_floating_ip' call. 
""" req = fakes.HTTPRequest.blank('') with mock.patch.object(self.controller.network_api, 'get_floating_ip', return_value={'address': 'foo'}), \ mock.patch.object(self.controller.network_api, 'get_instance_id_by_floating_address', return_value=None), \ mock.patch.object(self.controller.network_api, 'disassociate_and_release_floating_ip', side_effect=raised_exc): self.assertRaises(expected_exc, self.controller.delete, req, 1) def test_floatingip_delete_error_disassociate_1(self): raised_exc = exception.Forbidden expected_exc = webob.exc.HTTPForbidden self._test_floatingip_delete_error_disassociate(raised_exc, expected_exc) def test_floatingip_delete_error_disassociate_2(self): raised_exc = exception.CannotDisassociateAutoAssignedFloatingIP expected_exc = webob.exc.HTTPForbidden self._test_floatingip_delete_error_disassociate(raised_exc, expected_exc) def test_floatingip_delete_error_disassociate_3(self): raised_exc = exception.FloatingIpNotFoundForAddress(address='1.1.1.1') expected_exc = webob.exc.HTTPNotFound self._test_floatingip_delete_error_disassociate(raised_exc, expected_exc) class FloatingIpTestV21(test.TestCase): floating_ip = "10.10.10.10" floating_ip_2 = "10.10.10.11" floating_ips = fips_v21 validation_error = exception.ValidationError def _create_floating_ips(self, floating_ips=None): """Create a floating IP object.""" if floating_ips is None: floating_ips = [self.floating_ip] elif not isinstance(floating_ips, (list, tuple)): floating_ips = [floating_ips] dict_ = {'pool': 'nova', 'host': 'fake_host'} return db.floating_ip_bulk_create( self.context, [dict(address=ip, **dict_) for ip in floating_ips], ) def _delete_floating_ip(self): db.floating_ip_destroy(self.context, self.floating_ip) def setUp(self): super(FloatingIpTestV21, self).setUp() self.flags(use_neutron=False) self.stubs.Set(compute.api.API, "get", compute_api_get) self.stubs.Set(network.api.API, "get_floating_ip", network_api_get_floating_ip) self.stubs.Set(network.api.API, "get_floating_ip_by_address", network_api_get_floating_ip_by_address) self.stubs.Set(network.api.API, "get_floating_ips_by_project", network_api_get_floating_ips_by_project) self.stubs.Set(network.api.API, "release_floating_ip", network_api_release) self.stubs.Set(network.api.API, "disassociate_floating_ip", network_api_disassociate) self.stubs.Set(network.api.API, "get_instance_id_by_floating_address", get_instance_by_floating_ip_addr) self.stubs.Set(objects.Instance, "get_network_info", stub_nw_info(self)) fake_network.stub_out_nw_api_get_instance_nw_info(self) self.stub_out('nova.db.instance_get', fake_instance_get) self.context = context.get_admin_context() self._create_floating_ips() self.controller = self.floating_ips.FloatingIPController() self.manager = self.floating_ips.\ FloatingIPActionController() self.fake_req = fakes.HTTPRequest.blank('') def tearDown(self): self._delete_floating_ip() super(FloatingIpTestV21, self).tearDown() def test_floatingip_delete(self): fip_val = {'address': '1.1.1.1', 'fixed_ip_id': '192.168.1.2'} with test.nested( mock.patch.object(self.controller.network_api, 'disassociate_floating_ip'), mock.patch.object(self.controller.network_api, 'release_floating_ip'), mock.patch.object(self.controller.network_api, 'get_instance_id_by_floating_address', return_value=None), mock.patch.object(self.controller.network_api, 'get_floating_ip', return_value=fip_val)) as ( disoc_fip, rel_fip, _, _): self.controller.delete(self.fake_req, 1) self.assertTrue(disoc_fip.called) self.assertTrue(rel_fip.called) def 
_test_floatingip_delete_not_found(self, ex, expect_ex=webob.exc.HTTPNotFound): with mock.patch.object(self.controller.network_api, 'get_floating_ip', side_effect=ex): self.assertRaises(expect_ex, self.controller.delete, self.fake_req, 1) def test_floatingip_delete_not_found_ip(self): ex = exception.FloatingIpNotFound(id=1) self._test_floatingip_delete_not_found(ex) def test_floatingip_delete_not_found(self): ex = exception.NotFound self._test_floatingip_delete_not_found(ex) def test_floatingip_delete_invalid_id(self): ex = exception.InvalidID(id=1) self._test_floatingip_delete_not_found(ex, webob.exc.HTTPBadRequest) def test_translate_floating_ip_view(self): floating_ip_address = self.floating_ip floating_ip = db.floating_ip_get_by_address(self.context, floating_ip_address) # NOTE(vish): network_get uses the id not the address floating_ip = db.floating_ip_get(self.context, floating_ip['id']) floating_obj = objects.FloatingIP() objects.FloatingIP._from_db_object(self.context, floating_obj, floating_ip) view = self.floating_ips._translate_floating_ip_view(floating_obj) self.assertIn('floating_ip', view) self.assertTrue(view['floating_ip']['id']) self.assertEqual(view['floating_ip']['ip'], floating_obj.address) self.assertIsNone(view['floating_ip']['fixed_ip']) self.assertIsNone(view['floating_ip']['instance_id']) def test_translate_floating_ip_view_neutronesque(self): uuid = 'ca469a10-fa76-11e5-86aa-5e5517507c66' fixed_id = 'ae900cf4-fb73-11e5-86aa-5e5517507c66' floating_ip = objects.floating_ip.NeutronFloatingIP(id=uuid, address='1.2.3.4', pool='pool', context='ctxt', fixed_ip_id=fixed_id) view = self.floating_ips._translate_floating_ip_view(floating_ip) self.assertEqual(uuid, view['floating_ip']['id']) def test_translate_floating_ip_view_dict(self): floating_ip = {'id': 0, 'address': '10.0.0.10', 'pool': 'nova', 'fixed_ip': None} view = self.floating_ips._translate_floating_ip_view(floating_ip) self.assertIn('floating_ip', view) def test_translate_floating_ip_view_obj(self): fip = objects.FixedIP(address='192.168.1.2', instance_uuid=FAKE_UUID) floater = self._build_floating_ip('10.0.0.2', fip) result = self.floating_ips._translate_floating_ip_view(floater) expected = self._build_expected(floater, fip.address, fip.instance_uuid) self._test_result(expected, result) def test_translate_floating_ip_bad_address(self): fip = objects.FixedIP(instance_uuid=FAKE_UUID) floater = self._build_floating_ip('10.0.0.2', fip) result = self.floating_ips._translate_floating_ip_view(floater) expected = self._build_expected(floater, None, fip.instance_uuid) self._test_result(expected, result) def test_translate_floating_ip_bad_instance_id(self): fip = objects.FixedIP(address='192.168.1.2') floater = self._build_floating_ip('10.0.0.2', fip) result = self.floating_ips._translate_floating_ip_view(floater) expected = self._build_expected(floater, fip.address, None) self._test_result(expected, result) def test_translate_floating_ip_bad_instance_and_address(self): fip = objects.FixedIP() floater = self._build_floating_ip('10.0.0.2', fip) result = self.floating_ips._translate_floating_ip_view(floater) expected = self._build_expected(floater, None, None) self._test_result(expected, result) def test_translate_floating_ip_null_fixed(self): floater = self._build_floating_ip('10.0.0.2', None) result = self.floating_ips._translate_floating_ip_view(floater) expected = self._build_expected(floater, None, None) self._test_result(expected, result) def test_translate_floating_ip_unset_fixed(self): floater = 
objects.FloatingIP(id=1, address='10.0.0.2', pool='foo') result = self.floating_ips._translate_floating_ip_view(floater) expected = self._build_expected(floater, None, None) self._test_result(expected, result) def test_translate_floating_ips_view(self): mock_trans = mock.Mock() mock_trans.return_value = {'floating_ip': 'foo'} self.floating_ips._translate_floating_ip_view = mock_trans fip1 = objects.FixedIP(address='192.168.1.2', instance_uuid=FAKE_UUID) fip2 = objects.FixedIP(address='192.168.1.3', instance_uuid=FAKE_UUID) floaters = [self._build_floating_ip('10.0.0.2', fip1), self._build_floating_ip('10.0.0.3', fip2)] result = self.floating_ips._translate_floating_ips_view(floaters) called_floaters = [call[0][0] for call in mock_trans.call_args_list] self.assertTrue(any(obj_base.obj_equal_prims(floaters[0], f) for f in called_floaters), "_translate_floating_ip_view was not called with all " "floating ips") self.assertTrue(any(obj_base.obj_equal_prims(floaters[1], f) for f in called_floaters), "_translate_floating_ip_view was not called with all " "floating ips") expected_result = {'floating_ips': ['foo', 'foo']} self.assertEqual(expected_result, result) def test_floating_ips_list(self): res_dict = self.controller.index(self.fake_req) response = {'floating_ips': [{'instance_id': FAKE_UUID, 'ip': '10.10.10.10', 'pool': 'nova', 'fixed_ip': '10.0.0.1', 'id': 1}, {'instance_id': None, 'ip': '10.10.10.11', 'pool': 'nova', 'fixed_ip': None, 'id': 2}]} self.assertEqual(res_dict, response) def test_floating_ip_release_nonexisting(self): def fake_get_floating_ip(*args, **kwargs): raise exception.FloatingIpNotFound(id=id) self.stubs.Set(network.api.API, "get_floating_ip", fake_get_floating_ip) ex = self.assertRaises(webob.exc.HTTPNotFound, self.controller.delete, self.fake_req, '9876') self.assertIn("Floating IP not found for ID 9876", ex.explanation) def test_floating_ip_release_race_cond(self): def fake_get_floating_ip(*args, **kwargs): return {'fixed_ip_id': 1, 'address': self.floating_ip} def fake_get_instance_by_floating_ip_addr(*args, **kwargs): return 'test-inst' def fake_disassociate_floating_ip(*args, **kwargs): raise exception.FloatingIpNotAssociated(args[3]) self.stubs.Set(network.api.API, "get_floating_ip", fake_get_floating_ip) self.stubs.Set(self.floating_ips, "get_instance_by_floating_ip_addr", fake_get_instance_by_floating_ip_addr) self.stubs.Set(self.floating_ips, "disassociate_floating_ip", fake_disassociate_floating_ip) delete = self.controller.delete res = delete(self.fake_req, '9876') # NOTE: on v2.1, http status code is set as wsgi_code of API # method instead of status_int in a response object. 
if isinstance(self.controller, fips_v21.FloatingIPController): status_int = delete.wsgi_code else: status_int = res.status_int self.assertEqual(status_int, 202) def test_floating_ip_show(self): res_dict = self.controller.show(self.fake_req, 1) self.assertEqual(res_dict['floating_ip']['id'], 1) self.assertEqual(res_dict['floating_ip']['ip'], '10.10.10.10') self.assertIsNone(res_dict['floating_ip']['instance_id']) def test_floating_ip_show_not_found(self): def fake_get_floating_ip(*args, **kwargs): raise exception.FloatingIpNotFound(id='fake') self.stubs.Set(network.api.API, "get_floating_ip", fake_get_floating_ip) ex = self.assertRaises(webob.exc.HTTPNotFound, self.controller.show, self.fake_req, '9876') self.assertIn("Floating IP not found for ID 9876", ex.explanation) def test_show_associated_floating_ip(self): def get_floating_ip(self, context, id): return {'id': 1, 'address': '10.10.10.10', 'pool': 'nova', 'fixed_ip': {'address': '10.0.0.1', 'instance_uuid': FAKE_UUID, 'instance': {'uuid': FAKE_UUID}}} self.stubs.Set(network.api.API, "get_floating_ip", get_floating_ip) res_dict = self.controller.show(self.fake_req, 1) self.assertEqual(res_dict['floating_ip']['id'], 1) self.assertEqual(res_dict['floating_ip']['ip'], '10.10.10.10') self.assertEqual(res_dict['floating_ip']['fixed_ip'], '10.0.0.1') self.assertEqual(res_dict['floating_ip']['instance_id'], FAKE_UUID) def test_recreation_of_floating_ip(self): self._delete_floating_ip() self._create_floating_ips() def test_floating_ip_in_bulk_creation(self): self._delete_floating_ip() self._create_floating_ips([self.floating_ip, self.floating_ip_2]) all_ips = db.floating_ip_get_all(self.context) ip_list = [ip['address'] for ip in all_ips] self.assertIn(self.floating_ip, ip_list) self.assertIn(self.floating_ip_2, ip_list) def test_fail_floating_ip_in_bulk_creation(self): self.assertRaises(exception.FloatingIpExists, self._create_floating_ips, [self.floating_ip, self.floating_ip_2]) all_ips = db.floating_ip_get_all(self.context) ip_list = [ip['address'] for ip in all_ips] self.assertIn(self.floating_ip, ip_list) self.assertNotIn(self.floating_ip_2, ip_list) def test_floating_ip_allocate_no_free_ips(self): def fake_allocate(*args, **kwargs): raise exception.NoMoreFloatingIps() self.stubs.Set(network.api.API, "allocate_floating_ip", fake_allocate) ex = self.assertRaises(webob.exc.HTTPNotFound, self.controller.create, self.fake_req) self.assertIn('No more floating IPs', ex.explanation) def test_floating_ip_allocate_no_free_ips_pool(self): def fake_allocate(*args, **kwargs): raise exception.NoMoreFloatingIps() self.stubs.Set(network.api.API, "allocate_floating_ip", fake_allocate) ex = self.assertRaises(webob.exc.HTTPNotFound, self.controller.create, self.fake_req, {'pool': 'non_existent_pool'}) self.assertIn('No more floating IPs in pool non_existent_pool', ex.explanation) @mock.patch.object(network.api.API, 'allocate_floating_ip', side_effect=exception.FloatingIpBadRequest( 'Bad floatingip request: Network ' 'c8f0e88f-ae41-47cb-be6c-d8256ba80576 does not contain any ' 'IPv4 subnet')) def test_floating_ip_allocate_no_ipv4_subnet(self, allocate_mock): ex = self.assertRaises(webob.exc.HTTPBadRequest, self.controller.create, self.fake_req, {'pool': 'non_existent_pool'}) self.assertIn("does not contain any IPv4 subnet", six.text_type(ex)) @mock.patch('nova.network.api.API.allocate_floating_ip', side_effect=exception.FloatingIpLimitExceeded()) def test_floating_ip_allocate_over_quota(self, allocate_mock): ex = self.assertRaises(webob.exc.HTTPForbidden, 
self.controller.create, self.fake_req) self.assertIn('IP allocation over quota', ex.explanation) @mock.patch('nova.objects.FloatingIP.deallocate') @mock.patch('nova.objects.FloatingIP.allocate_address') @mock.patch('nova.objects.quotas.Quotas.check_deltas') def test_floating_ip_allocate_over_quota_during_recheck(self, check_mock, alloc_mock, dealloc_mock): ctxt = self.fake_req.environ['nova.context'] # Simulate a race where the first check passes and the recheck fails. check_mock.side_effect = [None, exception.OverQuota(overs='floating_ips')] self.assertRaises(webob.exc.HTTPForbidden, self.controller.create, self.fake_req) self.assertEqual(2, check_mock.call_count) call1 = mock.call(ctxt, {'floating_ips': 1}, ctxt.project_id) call2 = mock.call(ctxt, {'floating_ips': 0}, ctxt.project_id) check_mock.assert_has_calls([call1, call2]) # Verify we removed the floating IP that was added after the first # quota check passed. dealloc_mock.assert_called_once_with(ctxt, alloc_mock.return_value.address) @mock.patch('nova.objects.FloatingIP.allocate_address') @mock.patch('nova.objects.quotas.Quotas.check_deltas') def test_floating_ip_allocate_no_quota_recheck(self, check_mock, alloc_mock): # Disable recheck_quota. self.flags(recheck_quota=False, group='quota') ctxt = self.fake_req.environ['nova.context'] self.controller.create(self.fake_req) # check_deltas should have been called only once. check_mock.assert_called_once_with(ctxt, {'floating_ips': 1}, ctxt.project_id) @mock.patch('nova.network.api.API.allocate_floating_ip', side_effect=exception.FloatingIpLimitExceeded()) def test_floating_ip_allocate_quota_exceed_in_pool(self, allocate_mock): ex = self.assertRaises(webob.exc.HTTPForbidden, self.controller.create, self.fake_req, {'pool': 'non_existent_pool'}) self.assertIn('IP allocation over quota in pool non_existent_pool.', ex.explanation) @mock.patch('nova.network.api.API.allocate_floating_ip', side_effect=exception.FloatingIpPoolNotFound()) def test_floating_ip_create_with_unknown_pool(self, allocate_mock): ex = self.assertRaises(webob.exc.HTTPNotFound, self.controller.create, self.fake_req, {'pool': 'non_existent_pool'}) self.assertIn('Floating IP pool not found.', ex.explanation) def test_floating_ip_allocate(self): def fake1(*args, **kwargs): pass def fake2(*args, **kwargs): return {'id': 1, 'address': '10.10.10.10', 'pool': 'nova'} self.stubs.Set(network.api.API, "allocate_floating_ip", fake1) self.stubs.Set(network.api.API, "get_floating_ip_by_address", fake2) res_dict = self.controller.create(self.fake_req) ip = res_dict['floating_ip'] expected = { "id": 1, "instance_id": None, "ip": "10.10.10.10", "fixed_ip": None, "pool": 'nova'} self.assertEqual(ip, expected) def test_floating_ip_release(self): self.controller.delete(self.fake_req, 1) def _test_floating_ip_associate(self, fixed_address): def fake_associate_floating_ip(*args, **kwargs): self.assertEqual(fixed_address, kwargs['fixed_address']) self.stubs.Set(network.api.API, "associate_floating_ip", fake_associate_floating_ip) body = dict(addFloatingIp=dict(address=self.floating_ip)) rsp = self.manager._add_floating_ip(self.fake_req, TEST_INST, body=body) self.assertEqual(202, rsp.status_int) def test_floating_ip_associate(self): self._test_floating_ip_associate(fixed_address='192.168.1.100') @mock.patch.object(network.model.NetworkInfo, 'fixed_ips') def test_associate_floating_ip_v4v6_fixed_ip(self, fixed_ips_mock): fixed_address = '192.168.1.100' fixed_ips_mock.return_value = [{'address': 'fc00:2001:db8::100'}, {'address': ''}, 
{'address': fixed_address}] self._test_floating_ip_associate(fixed_address=fixed_address) @mock.patch.object(network.model.NetworkInfo, 'fixed_ips', return_value=[{'address': 'fc00:2001:db8::100'}]) def test_associate_floating_ip_v6_fixed_ip(self, fixed_ips_mock): body = dict(addFloatingIp=dict(address=self.floating_ip)) self.assertRaises(webob.exc.HTTPBadRequest, self.manager._add_floating_ip, self.fake_req, TEST_INST, body=body) def test_floating_ip_associate_invalid_instance(self): def fake_get(self, context, id, expected_attrs=None): raise exception.InstanceNotFound(instance_id=id) self.stubs.Set(compute.api.API, "get", fake_get) body = dict(addFloatingIp=dict(address=self.floating_ip)) self.assertRaises(webob.exc.HTTPNotFound, self.manager._add_floating_ip, self.fake_req, 'test_inst', body=body) def test_associate_not_allocated_floating_ip_to_instance(self): def fake_associate_floating_ip(self, context, instance, floating_address, fixed_address, affect_auto_assigned=False): raise exception.FloatingIpNotFoundForAddress( address=floating_address) self.stubs.Set(network.api.API, "associate_floating_ip", fake_associate_floating_ip) floating_ip = '10.10.10.11' body = dict(addFloatingIp=dict(address=floating_ip)) ex = self.assertRaises(webob.exc.HTTPNotFound, self.manager._add_floating_ip, self.fake_req, TEST_INST, body=body) self.assertIn("floating IP not found", ex.explanation) @mock.patch.object(network.api.API, 'associate_floating_ip', side_effect=exception.Forbidden) def test_associate_floating_ip_forbidden(self, associate_mock): body = dict(addFloatingIp=dict(address='10.10.10.11')) self.assertRaises(webob.exc.HTTPForbidden, self.manager._add_floating_ip, self.fake_req, TEST_INST, body=body) @mock.patch.object(network.api.API, 'associate_floating_ip', side_effect=exception.FloatingIpAssociateFailed( address='10.10.10.11')) def test_associate_floating_ip_failed(self, associate_mock): body = dict(addFloatingIp=dict(address='10.10.10.11')) self.assertRaises(webob.exc.HTTPBadRequest, self.manager._add_floating_ip, self.fake_req, TEST_INST, body=body) def test_associate_floating_ip_bad_address_key(self): body = dict(addFloatingIp=dict(bad_address='10.10.10.11')) req = fakes.HTTPRequest.blank('/v2/fake/servers/test_inst/action') self.assertRaises(self.validation_error, self.manager._add_floating_ip, req, 'test_inst', body=body) def test_associate_floating_ip_bad_addfloatingip_key(self): body = dict(bad_addFloatingIp=dict(address='10.10.10.11')) req = fakes.HTTPRequest.blank('/v2/fake/servers/test_inst/action') self.assertRaises(self.validation_error, self.manager._add_floating_ip, req, 'test_inst', body=body) def test_floating_ip_disassociate(self): def get_instance_by_floating_ip_addr(self, context, address): if address == '10.10.10.10': return TEST_INST self.stubs.Set(network.api.API, "get_instance_id_by_floating_address", get_instance_by_floating_ip_addr) body = dict(removeFloatingIp=dict(address='10.10.10.10')) rsp = self.manager._remove_floating_ip(self.fake_req, TEST_INST, body=body) self.assertEqual(202, rsp.status_int) def test_floating_ip_disassociate_missing(self): body = dict(removeFloatingIp=dict(address='10.10.10.10')) self.assertRaises(webob.exc.HTTPConflict, self.manager._remove_floating_ip, self.fake_req, 'test_inst', body=body) def test_floating_ip_associate_non_existent_ip(self): def fake_network_api_associate(self, context, instance, floating_address=None, fixed_address=None): floating_ips = ["10.10.10.10", "10.10.10.11"] if floating_address not in floating_ips: raise 
exception.FloatingIpNotFoundForAddress( address=floating_address) self.stubs.Set(network.api.API, "associate_floating_ip", fake_network_api_associate) body = dict(addFloatingIp=dict(address='1.1.1.1')) self.assertRaises(webob.exc.HTTPNotFound, self.manager._add_floating_ip, self.fake_req, TEST_INST, body=body) def test_floating_ip_disassociate_non_existent_ip(self): def network_api_get_floating_ip_by_address(self, context, floating_address): floating_ips = ["10.10.10.10", "10.10.10.11"] if floating_address not in floating_ips: raise exception.FloatingIpNotFoundForAddress( address=floating_address) self.stubs.Set(network.api.API, "get_floating_ip_by_address", network_api_get_floating_ip_by_address) body = dict(removeFloatingIp=dict(address='1.1.1.1')) self.assertRaises(webob.exc.HTTPNotFound, self.manager._remove_floating_ip, self.fake_req, TEST_INST, body=body) def test_floating_ip_disassociate_wrong_instance_uuid(self): def get_instance_by_floating_ip_addr(self, context, address): if address == '10.10.10.10': return TEST_INST self.stubs.Set(network.api.API, "get_instance_id_by_floating_address", get_instance_by_floating_ip_addr) wrong_uuid = 'aaaaaaaa-ffff-ffff-ffff-aaaaaaaaaaaa' body = dict(removeFloatingIp=dict(address='10.10.10.10')) self.assertRaises(webob.exc.HTTPConflict, self.manager._remove_floating_ip, self.fake_req, wrong_uuid, body=body) def test_floating_ip_disassociate_wrong_instance_id(self): def get_instance_by_floating_ip_addr(self, context, address): if address == '10.10.10.10': return WRONG_INST self.stubs.Set(network.api.API, "get_instance_id_by_floating_address", get_instance_by_floating_ip_addr) body = dict(removeFloatingIp=dict(address='10.10.10.10')) self.assertRaises(webob.exc.HTTPConflict, self.manager._remove_floating_ip, self.fake_req, TEST_INST, body=body) def test_floating_ip_disassociate_auto_assigned(self): def fake_get_floating_ip_addr_auto_assigned(self, context, address): return {'id': 1, 'address': '10.10.10.10', 'pool': 'nova', 'fixed_ip_id': 10, 'auto_assigned': 1} def get_instance_by_floating_ip_addr(self, context, address): if address == '10.10.10.10': return TEST_INST def network_api_disassociate(self, context, instance, floating_address): raise exception.CannotDisassociateAutoAssignedFloatingIP() self.stubs.Set(network.api.API, "get_floating_ip_by_address", fake_get_floating_ip_addr_auto_assigned) self.stubs.Set(network.api.API, "get_instance_id_by_floating_address", get_instance_by_floating_ip_addr) self.stubs.Set(network.api.API, "disassociate_floating_ip", network_api_disassociate) body = dict(removeFloatingIp=dict(address='10.10.10.10')) self.assertRaises(webob.exc.HTTPForbidden, self.manager._remove_floating_ip, self.fake_req, TEST_INST, body=body) def test_floating_ip_disassociate_map_authorization_exc(self): def fake_get_floating_ip_addr_auto_assigned(self, context, address): return {'id': 1, 'address': '10.10.10.10', 'pool': 'nova', 'fixed_ip_id': 10, 'auto_assigned': 1} def get_instance_by_floating_ip_addr(self, context, address): if address == '10.10.10.10': return TEST_INST def network_api_disassociate(self, context, instance, address): raise exception.Forbidden() self.stubs.Set(network.api.API, "get_floating_ip_by_address", fake_get_floating_ip_addr_auto_assigned) self.stubs.Set(network.api.API, "get_instance_id_by_floating_address", get_instance_by_floating_ip_addr) self.stubs.Set(network.api.API, "disassociate_floating_ip", network_api_disassociate) body = dict(removeFloatingIp=dict(address='10.10.10.10')) 
self.assertRaises(webob.exc.HTTPForbidden, self.manager._remove_floating_ip, self.fake_req, TEST_INST, body=body) # these are a few bad param tests def test_bad_address_param_in_remove_floating_ip(self): body = dict(removeFloatingIp=dict(badparam='11.0.0.1')) self.assertRaises(self.validation_error, self.manager._remove_floating_ip, self.fake_req, TEST_INST, body=body) def test_missing_dict_param_in_remove_floating_ip(self): body = dict(removeFloatingIp='11.0.0.1') self.assertRaises(self.validation_error, self.manager._remove_floating_ip, self.fake_req, TEST_INST, body=body) def test_missing_dict_param_in_add_floating_ip(self): body = dict(addFloatingIp='11.0.0.1') self.assertRaises(self.validation_error, self.manager._add_floating_ip, self.fake_req, TEST_INST, body=body) def _build_floating_ip(self, address, fixed_ip): floating = objects.FloatingIP(id=1, address=address, pool='foo', fixed_ip=fixed_ip) return floating def _build_expected(self, floating_ip, fixed_ip, instance_id): return {'floating_ip': {'id': floating_ip.id, 'ip': floating_ip.address, 'pool': floating_ip.pool, 'fixed_ip': fixed_ip, 'instance_id': instance_id}} def _test_result(self, expected, actual): expected_fl = expected['floating_ip'] actual_fl = actual['floating_ip'] self.assertEqual(expected_fl, actual_fl) class ExtendedFloatingIpTestV21(test.TestCase): floating_ip = "10.10.10.10" floating_ip_2 = "10.10.10.11" floating_ips = fips_v21 def _create_floating_ips(self, floating_ips=None): """Create a floating IP object.""" if floating_ips is None: floating_ips = [self.floating_ip] elif not isinstance(floating_ips, (list, tuple)): floating_ips = [floating_ips] dict_ = {'pool': 'nova', 'host': 'fake_host'} return db.floating_ip_bulk_create( self.context, [dict(address=ip, **dict_) for ip in floating_ips], ) def _delete_floating_ip(self): db.floating_ip_destroy(self.context, self.floating_ip) def setUp(self): super(ExtendedFloatingIpTestV21, self).setUp() self.stubs.Set(compute.api.API, "get", compute_api_get) self.stubs.Set(network.api.API, "get_floating_ip", network_api_get_floating_ip) self.stubs.Set(network.api.API, "get_floating_ip_by_address", network_api_get_floating_ip_by_address) self.stubs.Set(network.api.API, "get_floating_ips_by_project", network_api_get_floating_ips_by_project) self.stubs.Set(network.api.API, "release_floating_ip", network_api_release) self.stubs.Set(network.api.API, "disassociate_floating_ip", network_api_disassociate) self.stubs.Set(network.api.API, "get_instance_id_by_floating_address", get_instance_by_floating_ip_addr) self.stubs.Set(objects.Instance, "get_network_info", stub_nw_info(self)) fake_network.stub_out_nw_api_get_instance_nw_info(self) self.stub_out('nova.db.instance_get', fake_instance_get) self.context = context.get_admin_context() self._create_floating_ips() self.controller = self.floating_ips.FloatingIPController() self.manager = self.floating_ips.\ FloatingIPActionController() self.fake_req = fakes.HTTPRequest.blank('') def tearDown(self): self._delete_floating_ip() super(ExtendedFloatingIpTestV21, self).tearDown() def test_extended_floating_ip_associate_fixed(self): fixed_address = '192.168.1.101' def fake_associate_floating_ip(*args, **kwargs): self.assertEqual(fixed_address, kwargs['fixed_address']) body = dict(addFloatingIp=dict(address=self.floating_ip, fixed_address=fixed_address)) with mock.patch.object(self.manager.network_api, 'associate_floating_ip', fake_associate_floating_ip): rsp = self.manager._add_floating_ip(self.fake_req, TEST_INST, body=body) 
self.assertEqual(202, rsp.status_int) def test_extended_floating_ip_associate_fixed_not_allocated(self): def fake_associate_floating_ip(*args, **kwargs): pass self.stubs.Set(network.api.API, "associate_floating_ip", fake_associate_floating_ip) body = dict(addFloatingIp=dict(address=self.floating_ip, fixed_address='11.11.11.11')) ex = self.assertRaises(webob.exc.HTTPBadRequest, self.manager._add_floating_ip, self.fake_req, TEST_INST, body=body) self.assertIn("Specified fixed address not assigned to instance", ex.explanation) class FloatingIPPolicyEnforcementV21(test.NoDBTestCase): def setUp(self): super(FloatingIPPolicyEnforcementV21, self).setUp() self.controller = fips_v21.FloatingIPController() self.req = fakes.HTTPRequest.blank('') def _common_policy_check(self, func, *arg, **kwarg): rule_name = "os_compute_api:os-floating-ips" rule = {rule_name: "project:non_fake"} self.policy.set_rules(rule) exc = self.assertRaises( exception.PolicyNotAuthorized, func, *arg, **kwarg) self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) def test_index_policy_failed(self): self._common_policy_check(self.controller.index, self.req) def test_show_policy_failed(self): self._common_policy_check(self.controller.show, self.req, FAKE_UUID) def test_create_policy_failed(self): self._common_policy_check(self.controller.create, self.req) def test_delete_policy_failed(self): self._common_policy_check(self.controller.delete, self.req, FAKE_UUID) class FloatingIPActionPolicyEnforcementV21(test.NoDBTestCase): def setUp(self): super(FloatingIPActionPolicyEnforcementV21, self).setUp() self.controller = fips_v21.FloatingIPActionController() self.req = fakes.HTTPRequest.blank('') def _common_policy_check(self, func, *arg, **kwarg): rule_name = "os_compute_api:os-floating-ips" rule = {rule_name: "project:non_fake"} self.policy.set_rules(rule) exc = self.assertRaises( exception.PolicyNotAuthorized, func, *arg, **kwarg) self.assertEqual( "Policy doesn't allow %s to be performed." 
% rule_name, exc.format_message()) def test_add_policy_failed(self): body = dict(addFloatingIp=dict(address='10.10.10.11')) self._common_policy_check( self.controller._add_floating_ip, self.req, FAKE_UUID, body=body) def test_remove_policy_failed(self): body = dict(removeFloatingIp=dict(address='10.10.10.10')) self._common_policy_check( self.controller._remove_floating_ip, self.req, FAKE_UUID, body=body) class FloatingIpsDeprecationTest(test.NoDBTestCase): def setUp(self): super(FloatingIpsDeprecationTest, self).setUp() self.req = fakes.HTTPRequest.blank('', version='2.36') self.controller = fips_v21.FloatingIPController() def test_all_apis_return_not_found(self): self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.show, self.req, fakes.FAKE_UUID) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.index, self.req) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.create, self.req, {}) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.delete, self.req, fakes.FAKE_UUID) class FloatingIpActionDeprecationTest(test.NoDBTestCase): def setUp(self): super(FloatingIpActionDeprecationTest, self).setUp() self.req = fakes.HTTPRequest.blank('', version='2.44') self.controller = fips_v21.FloatingIPActionController() def test_add_floating_ip_not_found(self): body = dict(addFloatingIp=dict(address='10.10.10.11')) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller._add_floating_ip, self.req, FAKE_UUID, body=body) def test_remove_floating_ip_not_found(self): body = dict(removeFloatingIp=dict(address='10.10.10.10')) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller._remove_floating_ip, self.req, FAKE_UUID, body=body) nova-17.0.1/nova/tests/unit/api/openstack/compute/test_server_start_stop.py0000666000175000017500000002212513250073126027271 0ustar zuulzuul00000000000000# Copyright (c) 2012 Midokura Japan K.K. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import mock from oslo_policy import policy as oslo_policy import six import webob from nova.api.openstack.compute import servers \ as server_v21 from nova.compute import api as compute_api from nova import db from nova import exception from nova import policy from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.unit.api.openstack import fakes from nova.tests import uuidsentinel as uuids class ServerStartStopTestV21(test.TestCase): def setUp(self): super(ServerStartStopTestV21, self).setUp() self._setup_controller() self.req = fakes.HTTPRequest.blank('') self.useFixture(nova_fixtures.SingleCellSimple()) self.stub_out('nova.db.instance_get_by_uuid', fakes.fake_instance_get()) def _setup_controller(self): self.controller = server_v21.ServersController() @mock.patch.object(compute_api.API, 'start') def test_start(self, start_mock): body = dict(start="") self.controller._start_server(self.req, uuids.instance, body) start_mock.assert_called_once_with(mock.ANY, mock.ANY) @mock.patch.object(compute_api.API, 'start', side_effect=exception.InstanceNotReady( instance_id=uuids.instance)) def test_start_not_ready(self, start_mock): body = dict(start="") self.assertRaises(webob.exc.HTTPConflict, self.controller._start_server, self.req, uuids.instance, body) @mock.patch.object(compute_api.API, 'start', side_effect=exception.InstanceIsLocked( instance_uuid=uuids.instance)) def test_start_locked_server(self, start_mock): body = dict(start="") self.assertRaises(webob.exc.HTTPConflict, self.controller._start_server, self.req, uuids.instance, body) @mock.patch.object(compute_api.API, 'start', side_effect=exception.InstanceIsLocked( instance_uuid=uuids.instance)) def test_start_invalid_state(self, start_mock): body = dict(start="") ex = self.assertRaises(webob.exc.HTTPConflict, self.controller._start_server, self.req, uuids.instance, body) self.assertIn('is locked', six.text_type(ex)) @mock.patch.object(compute_api.API, 'stop') def test_stop(self, stop_mock): body = dict(stop="") self.controller._stop_server(self.req, uuids.instance, body) stop_mock.assert_called_once_with(mock.ANY, mock.ANY) @mock.patch.object(compute_api.API, 'stop', side_effect=exception.InstanceNotReady( instance_id=uuids.instance)) def test_stop_not_ready(self, stop_mock): body = dict(stop="") self.assertRaises(webob.exc.HTTPConflict, self.controller._stop_server, self.req, uuids.instance, body) @mock.patch.object(compute_api.API, 'stop', side_effect=exception.InstanceIsLocked( instance_uuid=uuids.instance)) def test_stop_locked_server(self, stop_mock): body = dict(stop="") ex = self.assertRaises(webob.exc.HTTPConflict, self.controller._stop_server, self.req, uuids.instance, body) self.assertIn('is locked', six.text_type(ex)) @mock.patch.object(compute_api.API, 'stop', side_effect=exception.InstanceIsLocked( instance_uuid=uuids.instance)) def test_stop_invalid_state(self, stop_mock): body = dict(start="") self.assertRaises(webob.exc.HTTPConflict, self.controller._stop_server, self.req, uuids.instance, body) @mock.patch.object(db, 'instance_get_by_uuid', side_effect=exception.InstanceNotFound( instance_id=uuids.instance)) def test_start_with_bogus_id(self, get_mock): body = dict(start="") self.assertRaises(webob.exc.HTTPNotFound, self.controller._start_server, self.req, uuids.instance, body) @mock.patch.object(db, 'instance_get_by_uuid', side_effect=exception.InstanceNotFound( instance_id=uuids.instance)) def test_stop_with_bogus_id(self, get_mock): body = dict(stop="") 
self.assertRaises(webob.exc.HTTPNotFound, self.controller._stop_server, self.req, uuids.instance, body) class ServerStartStopPolicyEnforcementV21(test.TestCase): start_policy = "os_compute_api:servers:start" stop_policy = "os_compute_api:servers:stop" def setUp(self): super(ServerStartStopPolicyEnforcementV21, self).setUp() self.controller = server_v21.ServersController() self.req = fakes.HTTPRequest.blank('') self.useFixture(nova_fixtures.SingleCellSimple()) self.stub_out( 'nova.db.instance_get_by_uuid', fakes.fake_instance_get( project_id=self.req.environ['nova.context'].project_id)) def test_start_policy_failed(self): rules = { self.start_policy: "project_id:non_fake" } policy.set_rules(oslo_policy.Rules.from_dict(rules)) body = dict(start="") exc = self.assertRaises(exception.PolicyNotAuthorized, self.controller._start_server, self.req, uuids.instance, body) self.assertIn(self.start_policy, exc.format_message()) def test_start_overridden_policy_failed_with_other_user_in_same_project( self): rules = { self.start_policy: "user_id:%(user_id)s" } policy.set_rules(oslo_policy.Rules.from_dict(rules)) # Change the user_id in request context. self.req.environ['nova.context'].user_id = 'other-user' body = dict(start="") exc = self.assertRaises(exception.PolicyNotAuthorized, self.controller._start_server, self.req, uuids.instance, body) self.assertIn(self.start_policy, exc.format_message()) @mock.patch('nova.compute.api.API.start') def test_start_overridden_policy_pass_with_same_user(self, start_mock): rules = { self.start_policy: "user_id:%(user_id)s" } policy.set_rules(oslo_policy.Rules.from_dict(rules)) body = dict(start="") self.controller._start_server(self.req, uuids.instance, body) start_mock.assert_called_once_with(mock.ANY, mock.ANY) def test_stop_policy_failed_with_other_project(self): rules = { self.stop_policy: "project_id:%(project_id)s" } policy.set_rules(oslo_policy.Rules.from_dict(rules)) body = dict(stop="") # Change the project_id in request context. self.req.environ['nova.context'].project_id = 'other-project' exc = self.assertRaises(exception.PolicyNotAuthorized, self.controller._stop_server, self.req, uuids.instance, body) self.assertIn(self.stop_policy, exc.format_message()) @mock.patch('nova.compute.api.API.stop') def test_stop_overridden_policy_pass_with_same_project(self, stop_mock): rules = { self.stop_policy: "project_id:%(project_id)s" } policy.set_rules(oslo_policy.Rules.from_dict(rules)) body = dict(stop="") self.controller._stop_server(self.req, uuids.instance, body) stop_mock.assert_called_once_with(mock.ANY, mock.ANY) def test_stop_overridden_policy_failed_with_other_user_in_same_project( self): rules = { self.stop_policy: "user_id:%(user_id)s" } policy.set_rules(oslo_policy.Rules.from_dict(rules)) # Change the user_id in request context. 
self.req.environ['nova.context'].user_id = 'other-user' body = dict(stop="") exc = self.assertRaises(exception.PolicyNotAuthorized, self.controller._stop_server, self.req, uuids.instance, body) self.assertIn(self.stop_policy, exc.format_message()) @mock.patch('nova.compute.api.API.stop') def test_stop_overridden_policy_pass_with_same_user(self, stop_mock): rules = { self.stop_policy: "user_id:%(user_id)s" } policy.set_rules(oslo_policy.Rules.from_dict(rules)) body = dict(stop="") self.controller._stop_server(self.req, uuids.instance, body) stop_mock.assert_called_once_with(mock.ANY, mock.ANY) nova-17.0.1/nova/tests/unit/api/openstack/compute/test_tenant_networks.py0000666000175000017500000003614513250073126026735 0ustar zuulzuul00000000000000# Copyright 2014 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy import mock from oslo_config import cfg import webob from nova.api.openstack.compute import tenant_networks \ as networks_v21 from nova import exception from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.unit.api.openstack import fakes CONF = cfg.CONF NETWORKS = [ { "id": 1, "cidr": "10.20.105.0/24", "label": "new net 1" }, { "id": 2, "cidr": "10.20.105.0/24", "label": "new net 2" } ] DEFAULT_NETWORK = [ { "id": 3, "cidr": "None", "label": "default" } ] NETWORKS_WITH_DEFAULT_NET = copy.deepcopy(NETWORKS) NETWORKS_WITH_DEFAULT_NET.extend(DEFAULT_NETWORK) DEFAULT_TENANT_ID = CONF.api.neutron_default_tenant_id def fake_network_api_get_all(context): if (context.project_id == DEFAULT_TENANT_ID): return DEFAULT_NETWORK else: return NETWORKS class TenantNetworksTestV21(test.NoDBTestCase): ctrlr = networks_v21.TenantNetworkController validation_error = exception.ValidationError use_neutron = False def setUp(self): super(TenantNetworksTestV21, self).setUp() # os-tenant-networks only supports Neutron when listing networks or # showing details about a network, create and delete operations # result in a 503 and 500 response, respectively. 
self.flags(enable_network_quota=True, use_neutron=self.use_neutron) self.useFixture(nova_fixtures.RegisterNetworkQuota()) self.controller = self.ctrlr() self.req = fakes.HTTPRequest.blank('') self.original_value = CONF.api.use_neutron_default_nets def tearDown(self): super(TenantNetworksTestV21, self).tearDown() CONF.set_override("use_neutron_default_nets", self.original_value, group='api') def _fake_network_api_create(self, context, **kwargs): self.assertEqual(context.project_id, kwargs['project_id']) return NETWORKS @mock.patch('nova.network.api.API.disassociate') @mock.patch('nova.network.api.API.delete') def _test_network_delete_exception(self, delete_ex, disassociate_ex, expex, delete_mock, disassociate_mock): ctxt = self.req.environ['nova.context'] if delete_mock: delete_mock.side_effect = delete_ex if disassociate_ex: disassociate_mock.side_effect = disassociate_ex if self.use_neutron: expex = webob.exc.HTTPInternalServerError self.assertRaises(expex, self.controller.delete, self.req, 1) if not self.use_neutron: disassociate_mock.assert_called_once_with(ctxt, 1) if not disassociate_ex: delete_mock.assert_called_once_with(ctxt, 1) def test_network_delete_exception_network_not_found(self): ex = exception.NetworkNotFound(network_id=1) expex = webob.exc.HTTPNotFound self._test_network_delete_exception(None, ex, expex) def test_network_delete_exception_policy_failed(self): ex = exception.PolicyNotAuthorized(action='dummy') expex = webob.exc.HTTPForbidden self._test_network_delete_exception(ex, None, expex) def test_network_delete_exception_network_in_use(self): ex = exception.NetworkInUse(network_id=1) expex = webob.exc.HTTPConflict self._test_network_delete_exception(ex, None, expex) @mock.patch('nova.network.api.API.delete') @mock.patch('nova.network.api.API.disassociate') def test_network_delete(self, disassociate_mock, delete_mock): ctxt = self.req.environ['nova.context'] delete_method = self.controller.delete res = delete_method(self.req, 1) # NOTE: on v2.1, http status code is set as wsgi_code of API # method instead of status_int in a response object. 
if isinstance(self.controller, networks_v21.TenantNetworkController): status_int = delete_method.wsgi_code else: status_int = res.status_int self.assertEqual(202, status_int) disassociate_mock.assert_called_once_with(ctxt, 1) delete_mock.assert_called_once_with(ctxt, 1) def test_network_show(self): with mock.patch.object(self.controller.network_api, 'get', return_value=NETWORKS[0]): res = self.controller.show(self.req, 1) self.assertEqual(NETWORKS[0], res['network']) def test_network_show_not_found(self): ctxt = self.req.environ['nova.context'] with mock.patch.object(self.controller.network_api, 'get', side_effect=exception.NetworkNotFound( network_id=1)) as get_mock: self.assertRaises(webob.exc.HTTPNotFound, self.controller.show, self.req, 1) get_mock.assert_called_once_with(ctxt, 1) def _test_network_index(self, default_net=True): CONF.set_override("use_neutron_default_nets", default_net, group='api') expected = NETWORKS if default_net: expected = NETWORKS_WITH_DEFAULT_NET with mock.patch.object(self.controller.network_api, 'get_all', side_effect=fake_network_api_get_all): res = self.controller.index(self.req) self.assertEqual(expected, res['networks']) def test_network_index_with_default_net(self): self._test_network_index() def test_network_index_without_default_net(self): self._test_network_index(default_net=False) @mock.patch('nova.objects.Quotas.check_deltas') @mock.patch('nova.network.api.API.create') def test_network_create(self, create_mock, check_mock): create_mock.side_effect = self._fake_network_api_create body = copy.deepcopy(NETWORKS[0]) del body['id'] body = {'network': body} res = self.controller.create(self.req, body=body) self.assertEqual(NETWORKS[0], res['network']) @mock.patch('nova.objects.Quotas.check_deltas') @mock.patch('nova.network.api.API.delete') @mock.patch('nova.network.api.API.create') def test_network_create_quota_error_during_recheck(self, create_mock, delete_mock, check_mock): create_mock.side_effect = self._fake_network_api_create ctxt = self.req.environ['nova.context'] # Simulate a race where the first check passes and the recheck fails. check_mock.side_effect = [None, exception.OverQuota(overs='networks')] body = copy.deepcopy(NETWORKS[0]) del body['id'] body = {'network': body} self.assertRaises(webob.exc.HTTPForbidden, self.controller.create, self.req, body=body) self.assertEqual(2, check_mock.call_count) call1 = mock.call(ctxt, {'networks': 1}, ctxt.project_id) call2 = mock.call(ctxt, {'networks': 0}, ctxt.project_id) check_mock.assert_has_calls([call1, call2]) # Verify we removed the network that was added after the first quota # check passed. delete_mock.assert_called_once_with(ctxt, NETWORKS[0]['id']) @mock.patch('nova.objects.Quotas.check_deltas') @mock.patch('nova.network.api.API.create') def test_network_create_no_quota_recheck(self, create_mock, check_mock): create_mock.side_effect = self._fake_network_api_create ctxt = self.req.environ['nova.context'] # Disable recheck_quota. self.flags(recheck_quota=False, group='quota') body = copy.deepcopy(NETWORKS[0]) del body['id'] body = {'network': body} self.controller.create(self.req, body=body) # check_deltas should have been called only once. 
check_mock.assert_called_once_with(ctxt, {'networks': 1}, ctxt.project_id) @mock.patch('nova.objects.Quotas.check_deltas') def test_network_create_quota_error(self, check_mock): ctxt = self.req.environ['nova.context'] check_mock.side_effect = exception.OverQuota(overs='networks') body = {'network': {"cidr": "10.20.105.0/24", "label": "new net 1"}} self.assertRaises(webob.exc.HTTPForbidden, self.controller.create, self.req, body=body) check_mock.assert_called_once_with(ctxt, {'networks': 1}, ctxt.project_id) @mock.patch('nova.objects.Quotas.check_deltas') @mock.patch('nova.network.api.API.create') def _test_network_create_exception(self, ex, expex, create_mock, check_mock): ctxt = self.req.environ['nova.context'] create_mock.side_effect = ex body = {'network': {"cidr": "10.20.105.0/24", "label": "new net 1"}} if self.use_neutron: expex = webob.exc.HTTPServiceUnavailable self.assertRaises(expex, self.controller.create, self.req, body=body) check_mock.assert_called_once_with(ctxt, {'networks': 1}, ctxt.project_id) def test_network_create_exception_policy_failed(self): ex = exception.PolicyNotAuthorized(action='dummy') expex = webob.exc.HTTPForbidden self._test_network_create_exception(ex, expex) def test_network_create_exception_conflictcidr(self): ex = exception.CidrConflict(cidr='dummy', other='dummy') expex = webob.exc.HTTPConflict self._test_network_create_exception(ex, expex) def test_network_create_exception_service_unavailable(self): ex = Exception expex = webob.exc.HTTPServiceUnavailable self._test_network_create_exception(ex, expex) def test_network_create_empty_body(self): self.assertRaises(exception.ValidationError, self.controller.create, self.req, body={}) def test_network_create_without_cidr(self): body = {'network': {"label": "new net 1"}} self.assertRaises(self.validation_error, self.controller.create, self.req, body=body) def test_network_create_bad_format_cidr(self): body = {'network': {"cidr": "123", "label": "new net 1"}} self.assertRaises(self.validation_error, self.controller.create, self.req, body=body) def test_network_create_empty_network(self): body = {'network': {}} self.assertRaises(self.validation_error, self.controller.create, self.req, body=body) def test_network_create_without_label(self): body = {'network': {"cidr": "10.20.105.0/24"}} self.assertRaises(self.validation_error, self.controller.create, self.req, body=body) class TenantNeutronNetworksTestV21(TenantNetworksTestV21): use_neutron = True def test_network_create(self): self.assertRaises( webob.exc.HTTPServiceUnavailable, super(TenantNeutronNetworksTestV21, self).test_network_create) def test_network_create_quota_error_during_recheck(self): self.assertRaises( webob.exc.HTTPServiceUnavailable, super(TenantNeutronNetworksTestV21, self) .test_network_create_quota_error_during_recheck) def test_network_create_no_quota_recheck(self): self.assertRaises( webob.exc.HTTPServiceUnavailable, super(TenantNeutronNetworksTestV21, self) .test_network_create_no_quota_recheck) def test_network_delete(self): self.assertRaises( webob.exc.HTTPInternalServerError, super(TenantNeutronNetworksTestV21, self).test_network_delete) class TenantNetworksEnforcementV21(test.NoDBTestCase): def setUp(self): super(TenantNetworksEnforcementV21, self).setUp() self.controller = networks_v21.TenantNetworkController() self.req = fakes.HTTPRequest.blank('') def test_create_policy_failed(self): rule_name = 'os_compute_api:os-tenant-networks' self.policy.set_rules({rule_name: "project:non_fake"}) exc = self.assertRaises( 
exception.PolicyNotAuthorized, self.controller.create, self.req, body={'network': {'label': 'test', 'cidr': '10.0.0.0/32'}}) self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) def test_index_policy_failed(self): rule_name = 'os_compute_api:os-tenant-networks' self.policy.set_rules({rule_name: "project:non_fake"}) exc = self.assertRaises( exception.PolicyNotAuthorized, self.controller.index, self.req) self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) def test_delete_policy_failed(self): rule_name = 'os_compute_api:os-tenant-networks' self.policy.set_rules({rule_name: "project:non_fake"}) exc = self.assertRaises( exception.PolicyNotAuthorized, self.controller.delete, self.req, fakes.FAKE_UUID) self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) def test_show_policy_failed(self): rule_name = 'os_compute_api:os-tenant-networks' self.policy.set_rules({rule_name: "project:non_fake"}) exc = self.assertRaises( exception.PolicyNotAuthorized, self.controller.show, self.req, fakes.FAKE_UUID) self.assertEqual( "Policy doesn't allow %s to be performed." % rule_name, exc.format_message()) class TenantNetworksDeprecationTest(test.NoDBTestCase): def setUp(self): super(TenantNetworksDeprecationTest, self).setUp() self.controller = networks_v21.TenantNetworkController() self.req = fakes.HTTPRequest.blank('', version='2.36') def test_all_apis_return_not_found(self): self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.index, self.req) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.show, self.req, fakes.FAKE_UUID) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.delete, self.req, fakes.FAKE_UUID) self.assertRaises(exception.VersionNotFoundForAPIMethod, self.controller.create, self.req, {}) nova-17.0.1/nova/tests/unit/api/openstack/test_requestlog.py0000666000175000017500000001335013250073126024217 0ustar zuulzuul00000000000000# Copyright 2017 IBM Corp. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock import fixtures as fx import testtools from nova.tests import fixtures from nova.tests.unit import conf_fixture """Test request logging middleware under various conditions. The request logging middleware is needed when running under something other than eventlet. While Nova grew up on eventlet, and it's wsgi server, it meant that our user facing data (the log stream) was a mix of what Nova was emitting, and what eventlet.wsgi was emitting on our behalf. When running under uwsgi we want to make sure that we have equivalent coverage. All these tests use GET / to hit an endpoint that doesn't require the database setup. We have to do a bit of mocking to make that work. 
""" class TestRequestLogMiddleware(testtools.TestCase): def setUp(self): super(TestRequestLogMiddleware, self).setUp() # this is the minimal set of magic mocks needed to convince # the API service it can start on it's own without a database. mocks = ['nova.objects.Service.get_by_host_and_binary', 'nova.objects.Service.create'] self.stdlog = fixtures.StandardLogging() self.useFixture(self.stdlog) for m in mocks: p = mock.patch(m) self.addCleanup(p.stop) p.start() @mock.patch('nova.api.openstack.requestlog.RequestLog._should_emit') def test_logs_requests(self, emit): """Ensure requests are logged. Make a standard request for / and ensure there is a log entry. """ emit.return_value = True self.useFixture(conf_fixture.ConfFixture()) self.useFixture(fixtures.RPCFixture('nova.test')) api = self.useFixture(fixtures.OSAPIFixture()).api resp = api.api_request('/', strip_version=True) log1 = ('INFO [nova.api.openstack.requestlog] 127.0.0.1 ' '"GET /v2" status: 204 len: 0 microversion: - time:') self.assertIn(log1, self.stdlog.logger.output) # the content length might vary, but the important part is # what we log is what we return to the user (which turns out # to excitingly not be the case with eventlet!) content_length = resp.headers['content-length'] log2 = ('INFO [nova.api.openstack.requestlog] 127.0.0.1 ' '"GET /" status: 200 len: %s' % content_length) self.assertIn(log2, self.stdlog.logger.output) @mock.patch('nova.api.openstack.requestlog.RequestLog._should_emit') def test_logs_mv(self, emit): """Ensure logs register microversion if passed. This makes sure that microversion logging actually shows up when appropriate. """ emit.return_value = True self.useFixture(conf_fixture.ConfFixture()) # NOTE(sdague): all these tests are using the self.useFixture( fx.MonkeyPatch( 'nova.api.openstack.compute.versions.' 'Versions.support_api_request_version', True)) self.useFixture(fixtures.RPCFixture('nova.test')) api = self.useFixture(fixtures.OSAPIFixture()).api api.microversion = '2.25' resp = api.api_request('/', strip_version=True) content_length = resp.headers['content-length'] log1 = ('INFO [nova.api.openstack.requestlog] 127.0.0.1 ' '"GET /" status: 200 len: %s microversion: 2.25 time:' % content_length) self.assertIn(log1, self.stdlog.logger.output) @mock.patch('nova.api.openstack.compute.versions.Versions.index') @mock.patch('nova.api.openstack.requestlog.RequestLog._should_emit') def test_logs_under_exception(self, emit, v_index): """Ensure that logs still emit under unexpected failure. If we get an unexpected failure all the way up to the top, we should still have a record of that request via the except block. """ emit.return_value = True v_index.side_effect = Exception("Unexpected Error") self.useFixture(conf_fixture.ConfFixture()) self.useFixture(fixtures.RPCFixture('nova.test')) api = self.useFixture(fixtures.OSAPIFixture()).api api.api_request('/', strip_version=True) log1 = ('INFO [nova.api.openstack.requestlog] 127.0.0.1 "GET /"' ' status: 500 len: 0 microversion: - time:') self.assertIn(log1, self.stdlog.logger.output) @mock.patch('nova.api.openstack.requestlog.RequestLog._should_emit') def test_no_log_under_eventlet(self, emit): """Ensure that logs don't end up under eventlet. We still set the _should_emit return value directly to prevent the situation where eventlet is removed from tests and this preventing that. NOTE(sdague): this test can be deleted when eventlet is no longer supported for the wsgi stack in Nova. 
""" emit.return_value = False self.useFixture(conf_fixture.ConfFixture()) self.useFixture(fixtures.RPCFixture('nova.test')) api = self.useFixture(fixtures.OSAPIFixture()).api api.api_request('/', strip_version=True) self.assertNotIn("nova.api.openstack.requestlog", self.stdlog.logger.output) nova-17.0.1/nova/tests/unit/api/openstack/test_mapper.py0000666000175000017500000000317013250073126023310 0ustar zuulzuul00000000000000# Copyright 2013 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import webob from nova.api import openstack as openstack_api from nova import test from nova.tests.unit.api.openstack import fakes class MapperTest(test.NoDBTestCase): def test_resource_project_prefix(self): class Controller(object): def index(self, req): return 'foo' app = fakes.TestRouter(Controller(), openstack_api.ProjectMapper()) req = webob.Request.blank('/1234/tests') resp = req.get_response(app) self.assertEqual(b'foo', resp.body) self.assertEqual(resp.status_int, 200) def test_resource_no_project_prefix(self): class Controller(object): def index(self, req): return 'foo' app = fakes.TestRouter(Controller(), openstack_api.PlainMapper()) req = webob.Request.blank('/tests') resp = req.get_response(app) self.assertEqual(b'foo', resp.body) self.assertEqual(resp.status_int, 200) nova-17.0.1/nova/tests/unit/api/openstack/fakes.py0000666000175000017500000006152413250073136022066 0ustar zuulzuul00000000000000# Copyright 2010 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import datetime from oslo_serialization import jsonutils from oslo_utils import timeutils from oslo_utils import uuidutils import routes import six from six.moves import range import webob.dec from nova.api import auth as api_auth from nova.api import openstack as openstack_api from nova.api.openstack import api_version_request as api_version from nova.api.openstack import compute from nova.api.openstack.compute import versions from nova.api.openstack import urlmap from nova.api.openstack import wsgi as os_wsgi from nova.compute import api as compute_api from nova.compute import flavors from nova.compute import vm_states import nova.conf from nova import context from nova.db.sqlalchemy import models from nova import exception as exc from nova.network.security_group import security_group_base from nova import objects from nova.objects import base from nova import quota from nova.tests.unit import fake_block_device from nova.tests.unit import fake_network from nova.tests.unit.objects import test_keypair from nova import utils from nova import wsgi CONF = nova.conf.CONF QUOTAS = quota.QUOTAS FAKE_UUID = 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa' FAKE_PROJECT_ID = '6a6a9c9eee154e9cb8cec487b98d36ab' FAKE_USER_ID = '5fae60f5cf4642609ddd31f71748beac' FAKE_UUIDS = {} @webob.dec.wsgify def fake_wsgi(self, req): return self.application def wsgi_app_v21(fake_auth_context=None, v2_compatible=False, custom_routes=None): inner_app_v21 = compute.APIRouterV21(custom_routes=custom_routes) if v2_compatible: inner_app_v21 = openstack_api.LegacyV2CompatibleWrapper(inner_app_v21) if fake_auth_context is not None: ctxt = fake_auth_context else: ctxt = context.RequestContext('fake', 'fake', auth_token=True) api_v21 = openstack_api.FaultWrapper( api_auth.InjectContext(ctxt, inner_app_v21)) mapper = urlmap.URLMap() mapper['/v2'] = api_v21 mapper['/v2.1'] = api_v21 mapper['/'] = openstack_api.FaultWrapper(versions.Versions()) return mapper def stub_out_key_pair_funcs(testcase, have_key_pair=True, **kwargs): def key_pair(context, user_id): return [dict(test_keypair.fake_keypair, name='key', public_key='public_key', **kwargs)] def one_key_pair(context, user_id, name): if name in ['key', 'new-key']: return dict(test_keypair.fake_keypair, name=name, public_key='public_key', **kwargs) else: raise exc.KeypairNotFound(user_id=user_id, name=name) def no_key_pair(context, user_id): return [] if have_key_pair: testcase.stub_out('nova.db.key_pair_get_all_by_user', key_pair) testcase.stub_out('nova.db.key_pair_get', one_key_pair) else: testcase.stub_out('nova.db.key_pair_get_all_by_user', no_key_pair) def stub_out_instance_quota(test, allowed, quota, resource='instances'): def fake_reserve(context, **deltas): requested = deltas.pop(resource, 0) if requested > allowed: quotas = dict(instances=1, cores=1, ram=1) quotas[resource] = quota usages = dict(instances=dict(in_use=0, reserved=0), cores=dict(in_use=0, reserved=0), ram=dict(in_use=0, reserved=0)) usages[resource]['in_use'] = (quotas[resource] * 9 // 10 - allowed) usages[resource]['reserved'] = quotas[resource] // 10 raise exc.OverQuota(overs=[resource], quotas=quotas, usages=usages) test.stub_out('nova.quota.QUOTAS.reserve', fake_reserve) def stub_out_networking(test): def get_my_ip(): return '127.0.0.1' test.stub_out('oslo_utils.netutils.get_my_ipv4', get_my_ip) def stub_out_compute_api_snapshot(stubs): def snapshot(self, context, instance, name, extra_properties=None): # emulate glance rejecting image names which are too long if len(name) > 256: raise exc.Invalid return 
dict(id='123', status='ACTIVE', name=name, properties=extra_properties) stubs.Set(compute_api.API, 'snapshot', snapshot) class stub_out_compute_api_backup(object): def __init__(self, stubs): self.stubs = stubs self.extra_props_last_call = None stubs.Set(compute_api.API, 'backup', self.backup) def backup(self, context, instance, name, backup_type, rotation, extra_properties=None): self.extra_props_last_call = extra_properties props = dict(backup_type=backup_type, rotation=rotation) props.update(extra_properties or {}) return dict(id='123', status='ACTIVE', name=name, properties=props) def stub_out_nw_api_get_instance_nw_info(test, num_networks=1, func=None): fake_network.stub_out_nw_api_get_instance_nw_info(test) def stub_out_nw_api(test, cls=None, private=None, publics=None): if not private: private = '192.168.0.3' if not publics: publics = ['1.2.3.4'] class Fake(object): def __init__(self): pass def get_instance_nw_info(*args, **kwargs): pass def get_floating_ips_by_fixed_address(*args, **kwargs): return publics def validate_networks(self, context, networks, max_count): return max_count def create_pci_requests_for_sriov_ports(self, context, system_metadata, requested_networks): pass if cls is None: cls = Fake if CONF.use_neutron: test.stub_out('nova.network.neutronv2.api.API', cls) else: test.stub_out('nova.network.api.API', cls) fake_network.stub_out_nw_api_get_instance_nw_info(test) def stub_out_secgroup_api(test, security_groups=None): class FakeSecurityGroupAPI(security_group_base.SecurityGroupBase): """This handles both nova-network and neutron style security group APIs """ def get_instances_security_groups_bindings( self, context, servers, detailed=False): # This method shouldn't be called unless using neutron. if not CONF.use_neutron: raise Exception('Invalid security group API call for nova-net') instances_security_group_bindings = {} if servers: instances_security_group_bindings = { server['id']: [] for server in servers } return instances_security_group_bindings def get_instance_security_groups( self, context, instance, detailed=False): return security_groups if security_groups is not None else [] if CONF.use_neutron: test.stub_out( 'nova.network.security_group.neutron_driver.SecurityGroupAPI', FakeSecurityGroupAPI) else: test.stub_out( 'nova.compute.api.SecurityGroupAPI', FakeSecurityGroupAPI) class FakeToken(object): id_count = 0 def __getitem__(self, key): return getattr(self, key) def __init__(self, **kwargs): FakeToken.id_count += 1 self.id = FakeToken.id_count for k, v in kwargs.items(): setattr(self, k, v) class FakeRequestContext(context.RequestContext): def __init__(self, *args, **kwargs): kwargs['auth_token'] = kwargs.get('auth_token', 'fake_auth_token') super(FakeRequestContext, self).__init__(*args, **kwargs) class HTTPRequest(os_wsgi.Request): @classmethod def blank(cls, *args, **kwargs): defaults = {'base_url': 'http://localhost/v2'} use_admin_context = kwargs.pop('use_admin_context', False) project_id = kwargs.pop('project_id', 'fake') version = kwargs.pop('version', os_wsgi.DEFAULT_API_VERSION) defaults.update(kwargs) out = super(HTTPRequest, cls).blank(*args, **defaults) out.environ['nova.context'] = FakeRequestContext( user_id='fake_user', project_id=project_id, is_admin=use_admin_context) out.api_version_request = api_version.APIVersionRequest(version) return out class HTTPRequestV21(HTTPRequest): pass class TestRouter(wsgi.Router): def __init__(self, controller, mapper=None): if not mapper: mapper = routes.Mapper() mapper.resource("test", "tests", 
controller=os_wsgi.Resource(controller)) super(TestRouter, self).__init__(mapper) class FakeAuthDatabase(object): data = {} @staticmethod def auth_token_get(context, token_hash): return FakeAuthDatabase.data.get(token_hash, None) @staticmethod def auth_token_create(context, token): fake_token = FakeToken(created_at=timeutils.utcnow(), **token) FakeAuthDatabase.data[fake_token.token_hash] = fake_token FakeAuthDatabase.data['id_%i' % fake_token.id] = fake_token return fake_token @staticmethod def auth_token_destroy(context, token_id): token = FakeAuthDatabase.data.get('id_%i' % token_id) if token and token.token_hash in FakeAuthDatabase.data: del FakeAuthDatabase.data[token.token_hash] del FakeAuthDatabase.data['id_%i' % token_id] def create_info_cache(nw_cache): if nw_cache is None: pub0 = ('192.168.1.100',) pub1 = ('2001:db8:0:1::1',) def _ip(ip): return {'address': ip, 'type': 'fixed'} nw_cache = [ {'address': 'aa:aa:aa:aa:aa:aa', 'id': 1, 'network': {'bridge': 'br0', 'id': 1, 'label': 'test1', 'subnets': [{'cidr': '192.168.1.0/24', 'ips': [_ip(ip) for ip in pub0]}, {'cidr': 'b33f::/64', 'ips': [_ip(ip) for ip in pub1]}]}}] if not isinstance(nw_cache, six.string_types): nw_cache = jsonutils.dumps(nw_cache) return { "info_cache": { "network_info": nw_cache, "deleted": False, "created_at": None, "deleted_at": None, "updated_at": None, } } def get_fake_uuid(token=0): if token not in FAKE_UUIDS: FAKE_UUIDS[token] = uuidutils.generate_uuid() return FAKE_UUIDS[token] def fake_instance_get(**kwargs): def _return_server(context, uuid, columns_to_join=None, use_slave=False): if 'project_id' not in kwargs: kwargs['project_id'] = 'fake' return stub_instance(1, **kwargs) return _return_server def fake_compute_get(**kwargs): def _return_server_obj(context, uuid, expected_attrs=None): return stub_instance_obj(context, **kwargs) return _return_server_obj def fake_actions_to_locked_server(self, context, instance, *args, **kwargs): raise exc.InstanceIsLocked(instance_uuid=instance['uuid']) def fake_instance_get_all_by_filters(num_servers=5, **kwargs): def _return_servers(context, *args, **kwargs): servers_list = [] marker = None limit = None found_marker = False if "marker" in kwargs: marker = kwargs["marker"] if "limit" in kwargs: limit = kwargs["limit"] if 'columns_to_join' in kwargs: kwargs.pop('columns_to_join') if 'use_slave' in kwargs: kwargs.pop('use_slave') if 'sort_keys' in kwargs: kwargs.pop('sort_keys') if 'sort_dirs' in kwargs: kwargs.pop('sort_dirs') for i in range(num_servers): uuid = get_fake_uuid(i) server = stub_instance(id=i + 1, uuid=uuid, **kwargs) servers_list.append(server) if marker is not None and uuid == marker: found_marker = True servers_list = [] if marker is not None and not found_marker: raise exc.MarkerNotFound(marker=marker) if limit is not None: servers_list = servers_list[:limit] return servers_list return _return_servers def fake_compute_get_all(num_servers=5, **kwargs): def _return_servers_objs(context, search_opts=None, limit=None, marker=None, expected_attrs=None, sort_keys=None, sort_dirs=None): db_insts = fake_instance_get_all_by_filters()(None, limit=limit, marker=marker) expected = ['metadata', 'system_metadata', 'flavor', 'info_cache', 'security_groups'] return base.obj_make_list(context, objects.InstanceList(), objects.Instance, db_insts, expected_attrs=expected) return _return_servers_objs def stub_instance(id=1, user_id=None, project_id=None, host=None, node=None, vm_state=None, task_state=None, reservation_id="", uuid=FAKE_UUID, image_ref="10", flavor_id="1", 
name=None, key_name='', access_ipv4=None, access_ipv6=None, progress=0, auto_disk_config=False, display_name=None, display_description=None, include_fake_metadata=True, config_drive=None, power_state=None, nw_cache=None, metadata=None, security_groups=None, root_device_name=None, limit=None, marker=None, launched_at=timeutils.utcnow(), terminated_at=timeutils.utcnow(), availability_zone='', locked_by=None, cleaned=False, memory_mb=0, vcpus=0, root_gb=0, ephemeral_gb=0, instance_type=None, launch_index=0, kernel_id="", ramdisk_id="", user_data=None, system_metadata=None, services=None): if user_id is None: user_id = 'fake_user' if project_id is None: project_id = 'fake_project' if metadata: metadata = [{'key': k, 'value': v} for k, v in metadata.items()] elif include_fake_metadata: metadata = [models.InstanceMetadata(key='seq', value=str(id))] else: metadata = [] inst_type = flavors.get_flavor_by_flavor_id(int(flavor_id)) sys_meta = flavors.save_flavor_info({}, inst_type) sys_meta.update(system_metadata or {}) if host is not None: host = str(host) if key_name: key_data = 'FAKE' else: key_data = '' if security_groups is None: security_groups = [{"id": 1, "name": "test", "description": "Foo:", "project_id": "project", "user_id": "user", "created_at": None, "updated_at": None, "deleted_at": None, "deleted": False}] # ReservationID isn't sent back, hack it in there. server_name = name or "server%s" % id if reservation_id != "": server_name = "reservation_%s" % (reservation_id, ) info_cache = create_info_cache(nw_cache) if instance_type is None: instance_type = flavors.get_default_flavor() flavorinfo = jsonutils.dumps({ 'cur': instance_type.obj_to_primitive(), 'old': None, 'new': None, }) instance = { "id": int(id), "created_at": datetime.datetime(2010, 10, 10, 12, 0, 0), "updated_at": datetime.datetime(2010, 11, 11, 11, 0, 0), "deleted_at": datetime.datetime(2010, 12, 12, 10, 0, 0), "deleted": None, "user_id": user_id, "project_id": project_id, "image_ref": image_ref, "kernel_id": kernel_id, "ramdisk_id": ramdisk_id, "launch_index": launch_index, "key_name": key_name, "key_data": key_data, "config_drive": config_drive, "vm_state": vm_state or vm_states.ACTIVE, "task_state": task_state, "power_state": power_state, "memory_mb": memory_mb, "vcpus": vcpus, "root_gb": root_gb, "ephemeral_gb": ephemeral_gb, "ephemeral_key_uuid": None, "hostname": display_name or server_name, "host": host, "node": node, "instance_type_id": 1, "instance_type": inst_type, "user_data": user_data, "reservation_id": reservation_id, "mac_address": "", "launched_at": launched_at, "terminated_at": terminated_at, "availability_zone": availability_zone, "display_name": display_name or server_name, "display_description": display_description, "locked": locked_by is not None, "locked_by": locked_by, "metadata": metadata, "access_ip_v4": access_ipv4, "access_ip_v6": access_ipv6, "uuid": uuid, "progress": progress, "auto_disk_config": auto_disk_config, "name": "instance-%s" % id, "shutdown_terminate": True, "disable_terminate": False, "security_groups": security_groups, "root_device_name": root_device_name, "system_metadata": utils.dict_to_metadata(sys_meta), "pci_devices": [], "vm_mode": "", "default_swap_device": "", "default_ephemeral_device": "", "launched_on": "", "cell_name": "", "architecture": "", "os_type": "", "extra": {"numa_topology": None, "pci_requests": None, "flavor": flavorinfo, }, "cleaned": cleaned, "services": services, "tags": []} instance.update(info_cache) instance['info_cache']['instance_uuid'] = 
instance['uuid'] return instance def stub_instance_obj(ctxt, *args, **kwargs): db_inst = stub_instance(*args, **kwargs) expected = ['metadata', 'system_metadata', 'flavor', 'info_cache', 'security_groups', 'tags'] inst = objects.Instance._from_db_object(ctxt, objects.Instance(), db_inst, expected_attrs=expected) inst.fault = None if db_inst["services"] is not None: # This ensures services there if one wanted so inst.services = db_inst["services"] return inst def stub_volume(id, **kwargs): volume = { 'id': id, 'user_id': 'fakeuser', 'project_id': 'fakeproject', 'host': 'fakehost', 'size': 1, 'availability_zone': 'fakeaz', 'status': 'fakestatus', 'attach_status': 'attached', 'name': 'vol name', 'display_name': 'displayname', 'display_description': 'displaydesc', 'created_at': datetime.datetime(1999, 1, 1, 1, 1, 1), 'snapshot_id': None, 'volume_type_id': 'fakevoltype', 'volume_metadata': [], 'volume_type': {'name': 'vol_type_name'}, 'multiattach': False, 'attachments': {'fakeuuid': {'mountpoint': '/'}, 'fakeuuid2': {'mountpoint': '/dev/sdb'} } } volume.update(kwargs) return volume def stub_volume_create(self, context, size, name, description, snapshot, **param): vol = stub_volume('1') vol['size'] = size vol['display_name'] = name vol['display_description'] = description try: vol['snapshot_id'] = snapshot['id'] except (KeyError, TypeError): vol['snapshot_id'] = None vol['availability_zone'] = param.get('availability_zone', 'fakeaz') return vol def stub_volume_update(self, context, *args, **param): pass def stub_volume_delete(self, context, *args, **param): pass def stub_volume_get(self, context, volume_id): return stub_volume(volume_id) def stub_volume_notfound(self, context, volume_id): raise exc.VolumeNotFound(volume_id=volume_id) def stub_volume_get_all(context, search_opts=None): return [stub_volume(100, project_id='fake'), stub_volume(101, project_id='superfake'), stub_volume(102, project_id='superduperfake')] def stub_volume_check_attach(self, context, *args, **param): pass def stub_snapshot(id, **kwargs): snapshot = { 'id': id, 'volume_id': 12, 'status': 'available', 'volume_size': 100, 'created_at': timeutils.utcnow(), 'display_name': 'Default name', 'display_description': 'Default description', 'project_id': 'fake' } snapshot.update(kwargs) return snapshot def stub_snapshot_create(self, context, volume_id, name, description): return stub_snapshot(100, volume_id=volume_id, display_name=name, display_description=description) def stub_compute_volume_snapshot_create(self, context, volume_id, create_info): return {'snapshot': {'id': "421752a6-acf6-4b2d-bc7a-119f9148cd8c", 'volumeId': volume_id}} def stub_snapshot_delete(self, context, snapshot_id): if snapshot_id == '-1': raise exc.SnapshotNotFound(snapshot_id=snapshot_id) def stub_compute_volume_snapshot_delete(self, context, volume_id, snapshot_id, delete_info): pass def stub_snapshot_get(self, context, snapshot_id): if snapshot_id == '-1': raise exc.SnapshotNotFound(snapshot_id=snapshot_id) return stub_snapshot(snapshot_id) def stub_snapshot_get_all(self, context): return [stub_snapshot(100, project_id='fake'), stub_snapshot(101, project_id='superfake'), stub_snapshot(102, project_id='superduperfake')] def stub_bdm_get_all_by_instance_uuids(context, instance_uuids, use_slave=False): i = 1 result = [] for instance_uuid in instance_uuids: for x in range(2): # add two BDMs per instance result.append(fake_block_device.FakeDbBlockDeviceDict({ 'id': i, 'source_type': 'volume', 'destination_type': 'volume', 'volume_id': 'volume_id%d' % (i), 
'instance_uuid': instance_uuid, })) i += 1 return result def fake_not_implemented(*args, **kwargs): raise NotImplementedError() FLAVORS = { '1': objects.Flavor( id=1, name='flavor 1', memory_mb=256, vcpus=1, root_gb=10, ephemeral_gb=20, flavorid='1', swap=10, rxtx_factor=1.0, vcpu_weight=None, disabled=False, is_public=True, description=None ), '2': objects.Flavor( id=2, name='flavor 2', memory_mb=512, vcpus=1, root_gb=20, ephemeral_gb=10, flavorid='2', swap=5, rxtx_factor=None, vcpu_weight=None, disabled=True, is_public=True, description='flavor 2 description' ), } def stub_out_flavor_get_by_flavor_id(test): @staticmethod def fake_get_by_flavor_id(context, flavor_id, read_deleted=None): return FLAVORS[flavor_id] test.stub_out('nova.objects.Flavor.get_by_flavor_id', fake_get_by_flavor_id) def stub_out_flavor_get_all(test): @staticmethod def fake_get_all(context, inactive=False, filters=None, sort_key='flavorid', sort_dir='asc', limit=None, marker=None): if marker in ['99999']: raise exc.MarkerNotFound(marker) def reject_min(db_attr, filter_attr): return (filter_attr in filters and getattr(flavor, db_attr) < int(filters[filter_attr])) filters = filters or {} res = [] for flavor in FLAVORS.values(): if reject_min('memory_mb', 'min_memory_mb'): continue elif reject_min('root_gb', 'min_root_gb'): continue res.append(flavor) res = sorted(res, key=lambda item: getattr(item, sort_key)) output = [] marker_found = True if marker is None else False for flavor in res: if not marker_found and marker == flavor.flavorid: marker_found = True elif marker_found: if limit is None or len(output) < int(limit): output.append(flavor) return objects.FlavorList(objects=output) test.stub_out('nova.objects.FlavorList.get_all', fake_get_all) nova-17.0.1/nova/tests/unit/api/test_auth.py0000666000175000017500000001325713250073126021005 0ustar zuulzuul00000000000000# Copyright (c) 2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import mock from oslo_middleware import request_id from oslo_serialization import jsonutils import webob import webob.exc import nova.api.auth import nova.conf from nova import test CONF = nova.conf.CONF class TestNovaKeystoneContextMiddleware(test.NoDBTestCase): def setUp(self): super(TestNovaKeystoneContextMiddleware, self).setUp() @webob.dec.wsgify() def fake_app(req): self.context = req.environ['nova.context'] return webob.Response() self.context = None self.middleware = nova.api.auth.NovaKeystoneContext(fake_app) self.request = webob.Request.blank('/') self.request.headers['X_TENANT_ID'] = 'testtenantid' self.request.headers['X_AUTH_TOKEN'] = 'testauthtoken' self.request.headers['X_SERVICE_CATALOG'] = jsonutils.dumps({}) def test_no_user_or_user_id(self): response = self.request.get_response(self.middleware) self.assertEqual(response.status, '401 Unauthorized') def test_user_id_only(self): self.request.headers['X_USER_ID'] = 'testuserid' response = self.request.get_response(self.middleware) self.assertEqual(response.status, '200 OK') self.assertEqual(self.context.user_id, 'testuserid') def test_invalid_service_catalog(self): self.request.headers['X_USER_ID'] = 'testuser' self.request.headers['X_SERVICE_CATALOG'] = "bad json" response = self.request.get_response(self.middleware) self.assertEqual(response.status, '500 Internal Server Error') def test_request_id_extracted_from_env(self): req_id = 'dummy-request-id' self.request.headers['X_PROJECT_ID'] = 'testtenantid' self.request.headers['X_USER_ID'] = 'testuserid' self.request.environ[request_id.ENV_REQUEST_ID] = req_id self.request.get_response(self.middleware) self.assertEqual(req_id, self.context.request_id) class TestKeystoneMiddlewareRoles(test.NoDBTestCase): def setUp(self): super(TestKeystoneMiddlewareRoles, self).setUp() @webob.dec.wsgify() def role_check_app(req): context = req.environ['nova.context'] if "knight" in context.roles and "bad" not in context.roles: return webob.Response(status="200 Role Match") elif not context.roles: return webob.Response(status="200 No Roles") else: raise webob.exc.HTTPBadRequest("unexpected role header") self.middleware = nova.api.auth.NovaKeystoneContext(role_check_app) self.request = webob.Request.blank('/') self.request.headers['X_USER_ID'] = 'testuser' self.request.headers['X_TENANT_ID'] = 'testtenantid' self.request.headers['X_AUTH_TOKEN'] = 'testauthtoken' self.request.headers['X_SERVICE_CATALOG'] = jsonutils.dumps({}) self.roles = "pawn, knight, rook" def test_roles(self): self.request.headers['X_ROLES'] = 'pawn,knight,rook' response = self.request.get_response(self.middleware) self.assertEqual(response.status, '200 Role Match') def test_roles_empty(self): self.request.headers['X_ROLES'] = '' response = self.request.get_response(self.middleware) self.assertEqual(response.status, '200 No Roles') def test_no_role_headers(self): # Test with no role headers set. 
response = self.request.get_response(self.middleware) self.assertEqual(response.status, '200 No Roles') class TestPipeLineFactory(test.NoDBTestCase): class FakeFilter(object): def __init__(self, name): self.name = name self.obj = None def __call__(self, obj): self.obj = obj return self class FakeApp(object): def __init__(self, name): self.name = name class FakeLoader(object): def get_filter(self, name): return TestPipeLineFactory.FakeFilter(name) def get_app(self, name): return TestPipeLineFactory.FakeApp(name) def _test_pipeline(self, pipeline, app): for p in pipeline.split()[:-1]: self.assertEqual(app.name, p) self.assertIsInstance(app, TestPipeLineFactory.FakeFilter) app = app.obj self.assertEqual(app.name, pipeline.split()[-1]) self.assertIsInstance(app, TestPipeLineFactory.FakeApp) def test_pipeline_factory_v21(self): fake_pipeline = 'test1 test2 test3' CONF.set_override('auth_strategy', 'noauth2', group='api') app = nova.api.auth.pipeline_factory_v21( TestPipeLineFactory.FakeLoader(), None, noauth2=fake_pipeline) self._test_pipeline(fake_pipeline, app) @mock.patch('oslo_log.versionutils.report_deprecated_feature') def test_pipeline_factory_legacy_v2_deprecated(self, mock_report_deprecated): fake_pipeline = 'test1 test2 test3' nova.api.auth.pipeline_factory(TestPipeLineFactory.FakeLoader(), None, noauth2=fake_pipeline) self.assertTrue(mock_report_deprecated.called) nova-17.0.1/nova/tests/unit/api/test_wsgi.py0000666000175000017500000000503213250073126021005 0ustar zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # Copyright 2010 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Test WSGI basics and provide some helper functions for other WSGI tests. """ import fixtures import routes from six.moves import StringIO import webob from nova import test from nova import wsgi class Test(test.NoDBTestCase): def test_debug(self): """This tests optional middleware we have which dumps the requests to stdout. 
""" self.output = StringIO() self.useFixture(fixtures.MonkeyPatch('sys.stdout', self.output)) class Application(wsgi.Application): """Dummy application to test debug.""" def __call__(self, environ, start_response): start_response("200", [("X-Test", "checking")]) return [b'Test result'] application = wsgi.Debug(Application()) result = webob.Request.blank('/').get_response(application) self.assertEqual(result.body, b"Test result") self.assertIn( '**************************************** REQUEST ENVIRON', self.output.getvalue()) def test_router(self): class Application(wsgi.Application): """Test application to call from router.""" def __call__(self, environ, start_response): start_response("200", []) return ['Router result'] class Router(wsgi.Router): """Test router.""" def __init__(self): mapper = routes.Mapper() mapper.connect("/test", controller=Application()) super(Router, self).__init__(mapper) result = webob.Request.blank('/test').get_response(Router()) self.assertEqual(result.body, "Router result") result = webob.Request.blank('/bad').get_response(Router()) self.assertNotEqual(result.body, "Router result") nova-17.0.1/nova/tests/unit/api/__init__.py0000666000175000017500000000000013250073126020522 0ustar zuulzuul00000000000000nova-17.0.1/nova/tests/unit/api/test_compute_req_id.py0000666000175000017500000000264513250073126023042 0ustar zuulzuul00000000000000# Copyright (c) 2014 IBM Corp. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_context import context from testtools import matchers import webob import webob.dec from nova.api import compute_req_id from nova import test ENV_REQUEST_ID = 'openstack.request_id' class RequestIdTest(test.NoDBTestCase): def test_generate_request_id(self): @webob.dec.wsgify def application(req): return req.environ[ENV_REQUEST_ID] app = compute_req_id.ComputeReqIdMiddleware(application) req = webob.Request.blank('/test') req_id = context.generate_request_id() req.environ[ENV_REQUEST_ID] = req_id res = req.get_response(app) res_id = res.headers.get(compute_req_id.HTTP_RESP_HEADER_REQUEST_ID) self.assertThat(res_id, matchers.StartsWith('req-')) self.assertEqual(res_id.encode('utf-8'), res.body) nova-17.0.1/nova/tests/unit/test_flavors.py0000666000175000017500000003623113250073126020744 0ustar zuulzuul00000000000000# Copyright 2011 Ken Pepple # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
""" Unit Tests for flavors code """ from nova.compute import flavors from nova import context from nova import db from nova import exception from nova import objects from nova.objects import base as obj_base from nova import test class InstanceTypeTestCase(test.TestCase): """Test cases for flavor code.""" def test_will_not_get_bad_default_instance_type(self): # ensures error raised on bad default flavor. self.flags(default_flavor='unknown_flavor') self.assertRaises(exception.FlavorNotFound, flavors.get_default_flavor) def test_flavor_get_by_None_name_returns_default(self): # Ensure get by name returns default flavor with no name. default = flavors.get_default_flavor() actual = flavors.get_flavor_by_name(None) self.assertIsInstance(default, objects.Flavor) self.assertIsInstance(actual, objects.Flavor) self.assertEqual(default.flavorid, actual.flavorid) def test_will_not_get_flavor_with_bad_name(self): # Ensure get by name returns default flavor with bad name. self.assertRaises(exception.FlavorNotFound, flavors.get_flavor_by_name, 10000) def test_will_not_get_instance_by_unknown_flavor_id(self): # Ensure get by flavor raises error with wrong flavorid. self.assertRaises(exception.FlavorNotFound, flavors.get_flavor_by_flavor_id, 'unknown_flavor') def test_will_get_instance_by_flavor_id(self): default_instance_type = flavors.get_default_flavor() flavorid = default_instance_type.flavorid fetched = flavors.get_flavor_by_flavor_id(flavorid) self.assertIsInstance(fetched, objects.Flavor) self.assertEqual(default_instance_type.flavorid, fetched.flavorid) class InstanceTypeToolsTest(test.TestCase): def _dict_to_metadata(self, data): return [{'key': key, 'value': value} for key, value in data.items()] def _test_extract_flavor(self, prefix): instance_type = flavors.get_default_flavor() instance_type_p = obj_base.obj_to_primitive(instance_type) metadata = {} flavors.save_flavor_info(metadata, instance_type, prefix) instance = {'system_metadata': self._dict_to_metadata(metadata)} _instance_type = flavors.extract_flavor(instance, prefix) _instance_type_p = obj_base.obj_to_primitive(_instance_type) props = flavors.system_metadata_flavor_props.keys() for key in list(instance_type_p.keys()): if key not in props: del instance_type_p[key] self.assertEqual(instance_type_p, _instance_type_p) def test_extract_flavor(self): self._test_extract_flavor('') def test_extract_flavor_no_sysmeta(self): instance = {} prefix = '' result = flavors.extract_flavor(instance, prefix) self.assertIsNone(result) def test_extract_flavor_prefix(self): self._test_extract_flavor('foo_') def test_save_flavor_info(self): instance_type = flavors.get_default_flavor() example = {} example_prefix = {} for key in flavors.system_metadata_flavor_props.keys(): example['instance_type_%s' % key] = instance_type[key] example_prefix['fooinstance_type_%s' % key] = instance_type[key] metadata = {} flavors.save_flavor_info(metadata, instance_type) self.assertEqual(example, metadata) metadata = {} flavors.save_flavor_info(metadata, instance_type, 'foo') self.assertEqual(example_prefix, metadata) def test_delete_flavor_info(self): instance_type = flavors.get_default_flavor() metadata = {} flavors.save_flavor_info(metadata, instance_type) flavors.save_flavor_info(metadata, instance_type, '_') flavors.delete_flavor_info(metadata, '', '_') self.assertEqual(metadata, {}) def test_flavor_numa_extras_are_saved(self): instance_type = flavors.get_default_flavor() instance_type['extra_specs'] = { 'hw:numa_mem.0': '123', 'hw:numa_cpus.0': '456', 'hw:numa_mem.1': 
'789', 'hw:numa_cpus.1': 'ABC', 'foo': 'bar', } sysmeta = flavors.save_flavor_info({}, instance_type) _instance_type = flavors.extract_flavor({'system_metadata': sysmeta}) expected_extra_specs = { 'hw:numa_mem.0': '123', 'hw:numa_cpus.0': '456', 'hw:numa_mem.1': '789', 'hw:numa_cpus.1': 'ABC', } self.assertEqual(expected_extra_specs, _instance_type['extra_specs']) flavors.delete_flavor_info(sysmeta, '') self.assertEqual({}, sysmeta) class InstanceTypeFilteringTest(test.TestCase): """Test cases for the filter option available for instance_type_get_all.""" def setUp(self): super(InstanceTypeFilteringTest, self).setUp() self.context = context.get_admin_context() def assertFilterResults(self, filters, expected): inst_types = objects.FlavorList.get_all( self.context, filters=filters) inst_names = [i.name for i in inst_types] self.assertEqual(inst_names, expected) def test_no_filters(self): filters = None expected = ['m1.tiny', 'm1.small', 'm1.medium', 'm1.large', 'm1.xlarge', 'm1.tiny.specs'] self.assertFilterResults(filters, expected) def test_min_memory_mb_filter(self): # Exclude tiny instance which is 512 MB. filters = dict(min_memory_mb=513) expected = ['m1.small', 'm1.medium', 'm1.large', 'm1.xlarge'] self.assertFilterResults(filters, expected) def test_min_root_gb_filter(self): # Exclude everything but large and xlarge which have >= 80 GB. filters = dict(min_root_gb=80) expected = ['m1.large', 'm1.xlarge'] self.assertFilterResults(filters, expected) def test_min_memory_mb_AND_root_gb_filter(self): # Exclude everything but xlarge, which has >= 16384 MB and >= 80 GB. filters = dict(min_memory_mb=16384, min_root_gb=80) expected = ['m1.xlarge'] self.assertFilterResults(filters, expected) class CreateInstanceTypeTest(test.TestCase): def assertInvalidInput(self, *create_args, **create_kwargs): self.assertRaises(exception.InvalidInput, flavors.create, *create_args, **create_kwargs) def test_create_with_valid_name(self): # Names can contain alphanumeric and [_.- ] flavors.create('azAZ09. -_', 64, 1, 120) # And they are not limited to ascii characters # E.g.: m1.huge in simplified Chinese flavors.create(u'm1.\u5DE8\u5927', 6400, 100, 12000) def test_name_with_special_characters(self): # Names can contain all printable characters flavors.create('_foo.bar-123', 64, 1, 120) # Ensure flavor creation raises InvalidInput for invalid characters. self.assertInvalidInput('foobar\x00', 64, 1, 120) def test_name_with_non_printable_characters(self): # Names cannot contain non-printable characters self.assertInvalidInput(u'm1.\u0868 #', 64, 1, 120) def test_name_length_checks(self): MAX_LEN = 255 # Flavor name with 255 characters or less is valid. flavors.create('a' * MAX_LEN, 64, 1, 120) # Flavor name which is more than 255 characters will cause error. self.assertInvalidInput('a' * (MAX_LEN + 1), 64, 1, 120) # Flavor name which is empty should cause an error. self.assertInvalidInput('', 64, 1, 120) def test_all_whitespace_flavor_names_rejected(self): self.assertInvalidInput(' ', 64, 1, 120) def test_flavorid_with_invalid_characters(self): # Ensure Flavor ID can only contain [a-zA-Z0-9_.- ] self.assertInvalidInput('a', 64, 1, 120, flavorid=u'\u2605') self.assertInvalidInput('a', 64, 1, 120, flavorid='%%$%$@#$#@$@#$^%') def test_flavorid_length_checks(self): MAX_LEN = 255 # Flavor ID which is more than 255 characters will cause error.
self.assertInvalidInput('a', 64, 1, 120, flavorid='a' * (MAX_LEN + 1)) def test_memory_must_be_positive_db_integer(self): self.assertInvalidInput('flavor1', 'foo', 1, 120) self.assertInvalidInput('flavor1', -1, 1, 120) self.assertInvalidInput('flavor1', 0, 1, 120) self.assertInvalidInput('flavor1', db.MAX_INT + 1, 1, 120) flavors.create('flavor1', 1, 1, 120) def test_vcpus_must_be_positive_db_integer(self): self.assertInvalidInput('flavor1', 64, 'foo', 120) self.assertInvalidInput('flavor1', 64, -1, 120) self.assertInvalidInput('flavor1', 64, 0, 120) self.assertInvalidInput('flavor1', 64, db.MAX_INT + 1, 120) flavors.create('flavor1', 64, 1, 120) def test_root_gb_must_be_nonnegative_db_integer(self): self.assertInvalidInput('flavor1', 64, 1, 'foo') self.assertInvalidInput('flavor1', 64, 1, -1) self.assertInvalidInput('flavor1', 64, 1, db.MAX_INT + 1) flavors.create('flavor1', 64, 1, 0) flavors.create('flavor2', 64, 1, 120) def test_ephemeral_gb_must_be_nonnegative_db_integer(self): self.assertInvalidInput('flavor1', 64, 1, 120, ephemeral_gb='foo') self.assertInvalidInput('flavor1', 64, 1, 120, ephemeral_gb=-1) self.assertInvalidInput('flavor1', 64, 1, 120, ephemeral_gb=db.MAX_INT + 1) flavors.create('flavor1', 64, 1, 120, ephemeral_gb=0) flavors.create('flavor2', 64, 1, 120, ephemeral_gb=120) def test_swap_must_be_nonnegative_db_integer(self): self.assertInvalidInput('flavor1', 64, 1, 120, swap='foo') self.assertInvalidInput('flavor1', 64, 1, 120, swap=-1) self.assertInvalidInput('flavor1', 64, 1, 120, swap=db.MAX_INT + 1) flavors.create('flavor1', 64, 1, 120, swap=0) flavors.create('flavor2', 64, 1, 120, swap=1) def test_rxtx_factor_must_be_positive_float(self): self.assertInvalidInput('flavor1', 64, 1, 120, rxtx_factor='foo') self.assertInvalidInput('flavor1', 64, 1, 120, rxtx_factor=-1.0) self.assertInvalidInput('flavor1', 64, 1, 120, rxtx_factor=0.0) flavor = flavors.create('flavor1', 64, 1, 120, rxtx_factor=1.0) self.assertEqual(1.0, flavor.rxtx_factor) flavor = flavors.create('flavor2', 64, 1, 120, rxtx_factor=1.1) self.assertEqual(1.1, flavor.rxtx_factor) def test_rxtx_factor_must_be_within_sql_float_range(self): _context = context.get_admin_context() db.flavor_get_all(_context) # We do * 10 since this is an approximation and we need to make sure # the difference is noticeable.
over_rxtx_factor = db.SQL_SP_FLOAT_MAX * 10 self.assertInvalidInput('flavor1', 64, 1, 120, rxtx_factor=over_rxtx_factor) flavor = flavors.create('flavor2', 64, 1, 120, rxtx_factor=db.SQL_SP_FLOAT_MAX) self.assertEqual(db.SQL_SP_FLOAT_MAX, flavor.rxtx_factor) def test_is_public_must_be_valid_bool_string(self): self.assertInvalidInput('flavor1', 64, 1, 120, is_public='foo') flavors.create('flavor1', 64, 1, 120, is_public='TRUE') flavors.create('flavor2', 64, 1, 120, is_public='False') flavors.create('flavor3', 64, 1, 120, is_public='Yes') flavors.create('flavor4', 64, 1, 120, is_public='No') flavors.create('flavor5', 64, 1, 120, is_public='Y') flavors.create('flavor6', 64, 1, 120, is_public='N') flavors.create('flavor7', 64, 1, 120, is_public='1') flavors.create('flavor8', 64, 1, 120, is_public='0') flavors.create('flavor9', 64, 1, 120, is_public='true') def test_flavorid_populated(self): flavor1 = flavors.create('flavor1', 64, 1, 120) self.assertIsNotNone(flavor1.flavorid) flavor2 = flavors.create('flavor2', 64, 1, 120, flavorid='') self.assertIsNotNone(flavor2.flavorid) flavor3 = flavors.create('flavor3', 64, 1, 120, flavorid='foo') self.assertEqual('foo', flavor3.flavorid) def test_default_values(self): flavor1 = flavors.create('flavor1', 64, 1, 120) self.assertIsNotNone(flavor1.flavorid) self.assertEqual(flavor1.ephemeral_gb, 0) self.assertEqual(flavor1.swap, 0) self.assertEqual(flavor1.rxtx_factor, 1.0) def test_basic_create(self): # Ensure instance types can be created. ctxt = context.get_admin_context() original_list = objects.FlavorList.get_all(ctxt) # Create new type and make sure values stick flavor = flavors.create('flavor', 64, 1, 120) self.assertEqual(flavor.name, 'flavor') self.assertEqual(flavor.memory_mb, 64) self.assertEqual(flavor.vcpus, 1) self.assertEqual(flavor.root_gb, 120) # Ensure new type shows up in list new_list = objects.FlavorList.get_all(ctxt) self.assertNotEqual(len(original_list), len(new_list), 'flavor was not created') def test_create_then_delete(self): ctxt = context.get_admin_context() original_list = objects.FlavorList.get_all(ctxt) flavor = flavors.create('flavor', 64, 1, 120) # Ensure new type shows up in list new_list = objects.FlavorList.get_all(ctxt) self.assertNotEqual(len(original_list), len(new_list), 'instance type was not created') flavor.destroy() self.assertRaises(exception.FlavorNotFound, objects.Flavor.get_by_name, ctxt, flavor.name) # Deleted instance should not be in list anymore new_list = objects.FlavorList.get_all(ctxt) self.assertEqual(len(original_list), len(new_list)) for i, f in enumerate(original_list): self.assertIsInstance(f, objects.Flavor) self.assertEqual(f.flavorid, new_list[i].flavorid) def test_duplicate_names_fail(self): # Ensures that name duplicates raise FlavorExists flavors.create('flavor', 256, 1, 120, 200, 'flavor1') self.assertRaises(exception.FlavorExists, flavors.create, 'flavor', 64, 1, 120) def test_duplicate_flavorids_fail(self): # Ensures that flavorid duplicates raise FlavorExists flavors.create('flavor1', 64, 1, 120, flavorid='flavorid') self.assertRaises(exception.FlavorIdExists, flavors.create, 'flavor2', 64, 1, 120, flavorid='flavorid') nova-17.0.1/nova/tests/unit/test_ipv6.py0000666000175000017500000000634713250073126020161 0ustar zuulzuul00000000000000# Copyright (c) 2011 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Test suite for IPv6.""" from nova import ipv6 from nova import test class IPv6RFC2462TestCase(test.NoDBTestCase): """Unit tests for IPv6 rfc2462 backend operations.""" def setUp(self): super(IPv6RFC2462TestCase, self).setUp() self.flags(ipv6_backend='rfc2462') ipv6.reset_backend() def test_to_global(self): addr = ipv6.to_global('2001:db8::', '02:16:3e:33:44:55', 'test') self.assertEqual(addr, '2001:db8::16:3eff:fe33:4455') def test_to_mac(self): mac = ipv6.to_mac('2001:db8::216:3eff:fe33:4455') self.assertEqual(mac, '00:16:3e:33:44:55') def test_to_global_with_bad_mac(self): bad_mac = '02:16:3e:33:44:5Z' expected_msg = 'Bad mac for to_global_ipv6: %s' % bad_mac err = self.assertRaises(TypeError, ipv6.to_global, '2001:db8::', bad_mac, 'test') self.assertEqual(expected_msg, str(err)) def test_to_global_with_bad_prefix(self): bad_prefix = '2001::1::2' expected_msg = 'Bad prefix for to_global_ipv6: %s' % bad_prefix err = self.assertRaises(TypeError, ipv6.to_global, bad_prefix, '02:16:3e:33:44:55', 'test') self.assertEqual(expected_msg, str(err)) class IPv6AccountIdentifierTestCase(test.NoDBTestCase): """Unit tests for IPv6 account_identifier backend operations.""" def setUp(self): super(IPv6AccountIdentifierTestCase, self).setUp() self.flags(ipv6_backend='account_identifier') ipv6.reset_backend() def test_to_global(self): addr = ipv6.to_global('2001:db8::', '02:16:3e:33:44:55', 'test') self.assertEqual(addr, '2001:db8::a94a:8fe5:ff33:4455') def test_to_mac(self): mac = ipv6.to_mac('2001:db8::a94a:8fe5:ff33:4455') self.assertEqual(mac, '02:16:3e:33:44:55') def test_to_global_with_bad_mac(self): bad_mac = '02:16:3e:33:44:5Z' expected_msg = 'Bad mac for to_global_ipv6: %s' % bad_mac err = self.assertRaises(TypeError, ipv6.to_global, '2001:db8::', bad_mac, 'test') self.assertEqual(expected_msg, str(err)) def test_to_global_with_bad_prefix(self): bad_prefix = '2001::1::2' expected_msg = 'Bad prefix for to_global_ipv6: %s' % bad_prefix err = self.assertRaises(TypeError, ipv6.to_global, bad_prefix, '02:16:3e:33:44:55', 'test') self.assertEqual(expected_msg, str(err)) nova-17.0.1/nova/tests/unit/fake_flavor.py0000666000175000017500000000330513250073126020504 0ustar zuulzuul00000000000000# Copyright 2015 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License.
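# Usage sketch (illustrative only; the helpers are defined below). Tests
# build a Flavor without touching the database, overriding just the fields
# they care about; unset fields fall back to None when nullable, otherwise
# to the field default ('ctxt' here is any RequestContext):
#
#     db_dict = fake_db_flavor(name='m1.test', memory_mb=2048)
#     flavor = fake_flavor_obj(ctxt, name='m1.test', memory_mb=2048)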
from nova import objects from nova.objects import fields def fake_db_flavor(**updates): db_flavor = { 'id': 1, 'name': 'fake_flavor', 'memory_mb': 1024, 'vcpus': 1, 'root_gb': 100, 'ephemeral_gb': 0, 'flavorid': 'abc', 'swap': 0, 'disabled': False, 'is_public': True, 'extra_specs': {}, 'projects': [], 'description': None } for name, field in objects.Flavor.fields.items(): if name in db_flavor: continue if field.nullable: db_flavor[name] = None elif field.default != fields.UnspecifiedDefault: db_flavor[name] = field.default else: raise Exception('fake_db_flavor needs help with %s' % name) if updates: db_flavor.update(updates) return db_flavor def fake_flavor_obj(context, **updates): expected_attrs = updates.pop('expected_attrs', None) return objects.Flavor._from_db_object(context, objects.Flavor(), fake_db_flavor(**updates), expected_attrs=expected_attrs) nova-17.0.1/nova/tests/unit/console/0000775000175000017500000000000013250073472017316 5ustar zuulzuul00000000000000nova-17.0.1/nova/tests/unit/console/securityproxy/0000775000175000017500000000000013250073472022267 5ustar zuulzuul00000000000000nova-17.0.1/nova/tests/unit/console/securityproxy/test_rfb.py0000666000175000017500000002424013250073126024451 0ustar zuulzuul00000000000000# Copyright (c) 2014-2016 Red Hat, Inc # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
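# The tests below drive the RFB 3.8 security handshake byte by byte over
# mocked tenant/compute sockets. As a sketch of the version negotiation
# they encode (hypothetical helper, assuming only the fixed 12-byte
# "RFB xxx.yyy\n" banner the tests use):
#
#     def parse_rfb_version(banner):
#         # "RFB 003.008\n" -> 3.8, "RFB 012.034\n" -> 12.34
#         return float('%d.%d' % (int(banner[4:7]), int(banner[8:11])))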
"""Tests the Console Security Proxy Framework.""" import six import mock from nova.console.rfb import auth from nova.console.rfb import authnone from nova.console.securityproxy import rfb from nova import exception from nova import test class RFBSecurityProxyTestCase(test.NoDBTestCase): """Test case for the base RFBSecurityProxy.""" def setUp(self): super(RFBSecurityProxyTestCase, self).setUp() self.manager = mock.Mock() self.tenant_sock = mock.Mock() self.compute_sock = mock.Mock() self.tenant_sock.recv.side_effect = [] self.compute_sock.recv.side_effect = [] self.expected_manager_calls = [] self.expected_tenant_calls = [] self.expected_compute_calls = [] self.proxy = rfb.RFBSecurityProxy() def _assert_expected_calls(self): self.assertEqual(self.expected_manager_calls, self.manager.mock_calls) self.assertEqual(self.expected_tenant_calls, self.tenant_sock.mock_calls) self.assertEqual(self.expected_compute_calls, self.compute_sock.mock_calls) def _version_handshake(self): full_version_str = "RFB 003.008\n" self._expect_compute_recv(auth.VERSION_LENGTH, full_version_str) self._expect_compute_send(full_version_str) self._expect_tenant_send(full_version_str) self._expect_tenant_recv(auth.VERSION_LENGTH, full_version_str) def _to_binary(self, val): if not isinstance(val, six.binary_type): val = six.binary_type(val, 'utf-8') return val def _expect_tenant_send(self, val): val = self._to_binary(val) self.expected_tenant_calls.append(mock.call.sendall(val)) def _expect_compute_send(self, val): val = self._to_binary(val) self.expected_compute_calls.append(mock.call.sendall(val)) def _expect_tenant_recv(self, amt, ret_val): ret_val = self._to_binary(ret_val) self.expected_tenant_calls.append(mock.call.recv(amt)) self.tenant_sock.recv.side_effect = ( list(self.tenant_sock.recv.side_effect) + [ret_val]) def _expect_compute_recv(self, amt, ret_val): ret_val = self._to_binary(ret_val) self.expected_compute_calls.append(mock.call.recv(amt)) self.compute_sock.recv.side_effect = ( list(self.compute_sock.recv.side_effect) + [ret_val]) def test_fail(self): """Validate behavior for invalid initial message from tenant. The spec defines the sequence that should be used in the handshaking process. Anything outside of this is invalid. """ self._expect_tenant_send("\x00\x00\x00\x01\x00\x00\x00\x04blah") self.proxy._fail(self.tenant_sock, None, 'blah') self._assert_expected_calls() def test_fail_server_message(self): """Validate behavior for invalid initial message from server. The spec defines the sequence that should be used in the handshaking process. Anything outside of this is invalid. """ self._expect_tenant_send("\x00\x00\x00\x01\x00\x00\x00\x04blah") self._expect_compute_send("\x00") self.proxy._fail(self.tenant_sock, self.compute_sock, 'blah') self._assert_expected_calls() def test_parse_version(self): """Validate behavior of version parser.""" res = self.proxy._parse_version("RFB 012.034\n") self.assertEqual(12.34, res) def test_fails_on_compute_version(self): """Validate behavior for unsupported compute RFB version. We only support RFB protocol version 3.8. """ for full_version_str in ["RFB 003.007\n", "RFB 003.009\n"]: self._expect_compute_recv(auth.VERSION_LENGTH, full_version_str) ex = self.assertRaises(exception.SecurityProxyNegotiationFailed, self.proxy.connect, self.tenant_sock, self.compute_sock) self.assertIn('version 3.8, but server', six.text_type(ex)) self._assert_expected_calls() def test_fails_on_tenant_version(self): """Validate behavior for unsupported tenant RFB version. 
We only support RFB protocol version 3.8. """ full_version_str = "RFB 003.008\n" for full_version_str_invalid in ["RFB 003.007\n", "RFB 003.009\n"]: self._expect_compute_recv(auth.VERSION_LENGTH, full_version_str) self._expect_compute_send(full_version_str) self._expect_tenant_send(full_version_str) self._expect_tenant_recv(auth.VERSION_LENGTH, full_version_str_invalid) ex = self.assertRaises(exception.SecurityProxyNegotiationFailed, self.proxy.connect, self.tenant_sock, self.compute_sock) self.assertIn('version 3.8, but tenant', six.text_type(ex)) self._assert_expected_calls() def test_fails_on_sec_type_cnt_zero(self): """Validate behavior if a server returns 0 supported security types. This indicates an arbitrary server-side failure, and the cause of that failure should be decoded and reported in the exception. """ self.proxy._fail = mock.Mock() self._version_handshake() self._expect_compute_recv(1, "\x00") self._expect_compute_recv(4, "\x00\x00\x00\x06") self._expect_compute_recv(6, "cheese") self._expect_tenant_send("\x00\x00\x00\x00\x06cheese") ex = self.assertRaises(exception.SecurityProxyNegotiationFailed, self.proxy.connect, self.tenant_sock, self.compute_sock) self.assertIn('cheese', six.text_type(ex)) self._assert_expected_calls() @mock.patch.object(authnone.RFBAuthSchemeNone, "security_handshake") def test_full_run(self, mock_handshake): """Validate correct behavior.""" new_sock = mock.MagicMock() mock_handshake.return_value = new_sock self._version_handshake() self._expect_compute_recv(1, "\x02") self._expect_compute_recv(2, "\x01\x02") self._expect_tenant_send("\x01\x01") self._expect_tenant_recv(1, "\x01") self._expect_compute_send("\x01") self.assertEqual(new_sock, self.proxy.connect( self.tenant_sock, self.compute_sock)) mock_handshake.assert_called_once_with(self.compute_sock) self._assert_expected_calls() def test_client_auth_invalid_fails(self): """Validate behavior if no security types are supported.""" self.proxy._fail = self.manager.proxy._fail self.proxy.security_handshake = self.manager.proxy.security_handshake self._version_handshake() self._expect_compute_recv(1, "\x02") self._expect_compute_recv(2, "\x01\x02") self._expect_tenant_send("\x01\x01") self._expect_tenant_recv(1, "\x02") self.expected_manager_calls.append( mock.call.proxy._fail(self.tenant_sock, self.compute_sock, "Only the security type " "None (1) is supported")) self.assertRaises(exception.SecurityProxyNegotiationFailed, self.proxy.connect, self.tenant_sock, self.compute_sock) self._assert_expected_calls() def test_exception_in_choose_security_type_fails(self): """Validate behavior if a given security type isn't supported.""" self.proxy._fail = self.manager.proxy._fail self.proxy.security_handshake = self.manager.proxy.security_handshake self._version_handshake() self._expect_compute_recv(1, "\x02") self._expect_compute_recv(2, "\x02\x05") self._expect_tenant_send("\x01\x01") self._expect_tenant_recv(1, "\x01") self.expected_manager_calls.extend([ mock.call.proxy._fail( self.tenant_sock, self.compute_sock, 'Unable to negotiate security with server')]) self.assertRaises(exception.SecurityProxyNegotiationFailed, self.proxy.connect, self.tenant_sock, self.compute_sock) self._assert_expected_calls() @mock.patch.object(authnone.RFBAuthSchemeNone, "security_handshake") def test_exception_security_handshake_fails(self, mock_auth): """Validate behavior if the security handshake fails for any reason.""" self.proxy._fail = self.manager.proxy._fail self._version_handshake() self._expect_compute_recv(1, "\x02")
self._expect_compute_recv(2, "\x01\x02") self._expect_tenant_send("\x01\x01") self._expect_tenant_recv(1, "\x01") self._expect_compute_send("\x01") ex = exception.RFBAuthHandshakeFailed(reason="crackers") mock_auth.side_effect = ex self.expected_manager_calls.extend([ mock.call.proxy._fail(self.tenant_sock, None, 'Unable to negotiate security with server')]) self.assertRaises(exception.SecurityProxyNegotiationFailed, self.proxy.connect, self.tenant_sock, self.compute_sock) mock_auth.assert_called_once_with(self.compute_sock) self._assert_expected_calls() nova-17.0.1/nova/tests/unit/console/securityproxy/__init__.py0000666000175000017500000000000013250073126024364 0ustar zuulzuul00000000000000nova-17.0.1/nova/tests/unit/console/test_type.py0000666000175000017500000000400013250073126021700 0ustar zuulzuul00000000000000# All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.console import type as ctype from nova import test class TypeTestCase(test.NoDBTestCase): def test_console(self): c = ctype.Console(host='127.0.0.1', port=8945) self.assertTrue(hasattr(c, 'host')) self.assertTrue(hasattr(c, 'port')) self.assertTrue(hasattr(c, 'internal_access_path')) self.assertEqual('127.0.0.1', c.host) self.assertEqual(8945, c.port) self.assertIsNone(c.internal_access_path) self.assertEqual({ 'host': '127.0.0.1', 'port': 8945, 'internal_access_path': None, 'token': 'a-token', 'access_url': 'an-url'}, c.get_connection_info('a-token', 'an-url')) def test_console_vnc(self): c = ctype.ConsoleVNC(host='127.0.0.1', port=8945) self.assertIsInstance(c, ctype.Console) def test_console_rdp(self): c = ctype.ConsoleRDP(host='127.0.0.1', port=8945) self.assertIsInstance(c, ctype.Console) def test_console_spice(self): c = ctype.ConsoleSpice(host='127.0.0.1', port=8945, tlsPort=6547) self.assertIsInstance(c, ctype.Console) self.assertEqual(6547, c.tlsPort) self.assertEqual( 6547, c.get_connection_info('a-token', 'an-url')['tlsPort']) def test_console_serial(self): c = ctype.ConsoleSerial(host='127.0.0.1', port=8945) self.assertIsInstance(c, ctype.Console) nova-17.0.1/nova/tests/unit/console/test_websocketproxy.py0000666000175000017500000004244113250073126024022 0ustar zuulzuul00000000000000# All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""Tests for nova websocketproxy.""" import mock import socket from nova.console.securityproxy import base from nova.console import websocketproxy from nova import exception from nova import test class NovaProxyRequestHandlerBaseTestCase(test.NoDBTestCase): def setUp(self): super(NovaProxyRequestHandlerBaseTestCase, self).setUp() self.flags(allowed_origins=['allowed-origin-example-1.net', 'allowed-origin-example-2.net'], group='console') self.server = websocketproxy.NovaWebSocketProxy() self.wh = websocketproxy.NovaProxyRequestHandlerBase() self.wh.server = self.server self.wh.socket = mock.MagicMock() self.wh.msg = mock.MagicMock() self.wh.do_proxy = mock.MagicMock() self.wh.headers = mock.MagicMock() fake_header = { 'cookie': 'token="123-456-789"', 'Origin': 'https://example.net:6080', 'Host': 'example.net:6080', } fake_header_ipv6 = { 'cookie': 'token="123-456-789"', 'Origin': 'https://[2001:db8::1]:6080', 'Host': '[2001:db8::1]:6080', } fake_header_bad_token = { 'cookie': 'token="XXX"', 'Origin': 'https://example.net:6080', 'Host': 'example.net:6080', } fake_header_bad_origin = { 'cookie': 'token="123-456-789"', 'Origin': 'https://bad-origin-example.net:6080', 'Host': 'example.net:6080', } fake_header_allowed_origin = { 'cookie': 'token="123-456-789"', 'Origin': 'https://allowed-origin-example-2.net:6080', 'Host': 'example.net:6080', } fake_header_blank_origin = { 'cookie': 'token="123-456-789"', 'Origin': '', 'Host': 'example.net:6080', } fake_header_no_origin = { 'cookie': 'token="123-456-789"', 'Host': 'example.net:6080', } fake_header_http = { 'cookie': 'token="123-456-789"', 'Origin': 'http://example.net:6080', 'Host': 'example.net:6080', } fake_header_malformed_cookie = { 'cookie': '?=!; token="123-456-789"', 'Origin': 'https://example.net:6080', 'Host': 'example.net:6080', } @mock.patch('nova.consoleauth.rpcapi.ConsoleAuthAPI.check_token') def test_new_websocket_client(self, check_token): check_token.return_value = { 'host': 'node1', 'port': '10000', 'console_type': 'novnc', 'access_url': 'https://example.net:6080' } self.wh.socket.return_value = '' self.wh.path = "http://127.0.0.1/?token=123-456-789" self.wh.headers = self.fake_header self.wh.new_websocket_client() check_token.assert_called_with(mock.ANY, token="123-456-789") self.wh.socket.assert_called_with('node1', 10000, connect=True) self.wh.do_proxy.assert_called_with('') @mock.patch('nova.consoleauth.rpcapi.ConsoleAuthAPI.check_token') def test_new_websocket_client_ipv6_url(self, check_token): check_token.return_value = { 'host': 'node1', 'port': '10000', 'console_type': 'novnc', 'access_url': 'https://[2001:db8::1]:6080' } self.wh.socket.return_value = '' self.wh.path = "http://[2001:db8::1]/?token=123-456-789" self.wh.headers = self.fake_header_ipv6 self.wh.new_websocket_client() check_token.assert_called_with(mock.ANY, token="123-456-789") self.wh.socket.assert_called_with('node1', 10000, connect=True) self.wh.do_proxy.assert_called_with('') @mock.patch('nova.consoleauth.rpcapi.ConsoleAuthAPI.check_token') def test_new_websocket_client_token_invalid(self, check_token): check_token.return_value = False self.wh.path = "http://127.0.0.1/?token=XXX" self.wh.headers = self.fake_header_bad_token self.assertRaises(exception.InvalidToken, self.wh.new_websocket_client) check_token.assert_called_with(mock.ANY, token="XXX") @mock.patch('nova.consoleauth.rpcapi.ConsoleAuthAPI.check_token') def test_new_websocket_client_internal_access_path(self, check_token): check_token.return_value = { 'host': 'node1', 'port': '10000', 
'internal_access_path': 'vmid', 'console_type': 'novnc', 'access_url': 'https://example.net:6080' } tsock = mock.MagicMock() tsock.recv.return_value = "HTTP/1.1 200 OK\r\n\r\n" self.wh.socket.return_value = tsock self.wh.path = "http://127.0.0.1/?token=123-456-789" self.wh.headers = self.fake_header self.wh.new_websocket_client() check_token.assert_called_with(mock.ANY, token="123-456-789") self.wh.socket.assert_called_with('node1', 10000, connect=True) tsock.send.assert_called_with(test.MatchType(bytes)) self.wh.do_proxy.assert_called_with(tsock) @mock.patch('nova.consoleauth.rpcapi.ConsoleAuthAPI.check_token') def test_new_websocket_client_internal_access_path_err(self, check_token): check_token.return_value = { 'host': 'node1', 'port': '10000', 'internal_access_path': 'xxx', 'console_type': 'novnc', 'access_url': 'https://example.net:6080' } tsock = mock.MagicMock() tsock.recv.return_value = "HTTP/1.1 500 Internal Server Error\r\n\r\n" self.wh.socket.return_value = tsock self.wh.path = "http://127.0.0.1/?token=123-456-789" self.wh.headers = self.fake_header self.assertRaises(exception.InvalidConnectionInfo, self.wh.new_websocket_client) check_token.assert_called_with(mock.ANY, token="123-456-789") @mock.patch('nova.consoleauth.rpcapi.ConsoleAuthAPI.check_token') def test_new_websocket_client_internal_access_path_rfb(self, check_token): check_token.return_value = { 'host': 'node1', 'port': '10000', 'internal_access_path': 'vmid', 'console_type': 'novnc', 'access_url': 'https://example.net:6080' } tsock = mock.MagicMock() HTTP_RESP = "HTTP/1.1 200 OK\r\n\r\n" RFB_MSG = "RFB 003.003\n" # RFB negotiation message may arrive earlier. tsock.recv.side_effect = [HTTP_RESP + RFB_MSG, HTTP_RESP] self.wh.socket.return_value = tsock self.wh.path = "http://127.0.0.1/?token=123-456-789" self.wh.headers = self.fake_header self.wh.new_websocket_client() check_token.assert_called_with(mock.ANY, token="123-456-789") self.wh.socket.assert_called_with('node1', 10000, connect=True) tsock.recv.assert_has_calls([mock.call(4096, socket.MSG_PEEK), mock.call(len(HTTP_RESP))]) self.wh.do_proxy.assert_called_with(tsock) @mock.patch.object(websocketproxy, 'sys') @mock.patch('nova.consoleauth.rpcapi.ConsoleAuthAPI.check_token') def test_new_websocket_client_py273_good_scheme( self, check_token, mock_sys): mock_sys.version_info.return_value = (2, 7, 3) check_token.return_value = { 'host': 'node1', 'port': '10000', 'console_type': 'novnc', 'access_url': 'https://example.net:6080' } self.wh.socket.return_value = '' self.wh.path = "http://127.0.0.1/?token=123-456-789" self.wh.headers = self.fake_header self.wh.new_websocket_client() check_token.assert_called_with(mock.ANY, token="123-456-789") self.wh.socket.assert_called_with('node1', 10000, connect=True) self.wh.do_proxy.assert_called_with('') @mock.patch.object(websocketproxy, 'sys') @mock.patch('nova.consoleauth.rpcapi.ConsoleAuthAPI.check_token') def test_new_websocket_client_py273_special_scheme( self, check_token, mock_sys): mock_sys.version_info = (2, 7, 3) check_token.return_value = { 'host': 'node1', 'port': '10000', 'console_type': 'novnc' } self.wh.socket.return_value = '' self.wh.path = "ws://127.0.0.1/?token=123-456-789" self.wh.headers = self.fake_header self.assertRaises(exception.NovaException, self.wh.new_websocket_client) @mock.patch('socket.getfqdn') def test_address_string_doesnt_do_reverse_dns_lookup(self, getfqdn): request_mock = mock.MagicMock() request_mock.makefile().readline.side_effect = [ b'GET /vnc.html?token=123-456-789 HTTP/1.1\r\n', b'' ] 
server_mock = mock.MagicMock() client_address = ('8.8.8.8', 54321) handler = websocketproxy.NovaProxyRequestHandler( request_mock, client_address, server_mock) handler.log_message('log message using client address context info') self.assertFalse(getfqdn.called) # no reverse dns look up self.assertEqual(handler.address_string(), '8.8.8.8') # plain address @mock.patch('nova.consoleauth.rpcapi.ConsoleAuthAPI.check_token') def test_new_websocket_client_novnc_bad_origin_header(self, check_token): check_token.return_value = { 'host': 'node1', 'port': '10000', 'console_type': 'novnc' } self.wh.path = "http://127.0.0.1/" self.wh.headers = self.fake_header_bad_origin self.assertRaises(exception.ValidationError, self.wh.new_websocket_client) @mock.patch('nova.consoleauth.rpcapi.ConsoleAuthAPI.check_token') def test_new_websocket_client_novnc_allowed_origin_header(self, check_token): check_token.return_value = { 'host': 'node1', 'port': '10000', 'console_type': 'novnc', 'access_url': 'https://example.net:6080' } self.wh.socket.return_value = '' self.wh.path = "http://127.0.0.1/" self.wh.headers = self.fake_header_allowed_origin self.wh.new_websocket_client() check_token.assert_called_with(mock.ANY, token="123-456-789") self.wh.socket.assert_called_with('node1', 10000, connect=True) self.wh.do_proxy.assert_called_with('') @mock.patch('nova.consoleauth.rpcapi.ConsoleAuthAPI.check_token') def test_new_websocket_client_novnc_blank_origin_header(self, check_token): check_token.return_value = { 'host': 'node1', 'port': '10000', 'console_type': 'novnc' } self.wh.path = "http://127.0.0.1/" self.wh.headers = self.fake_header_blank_origin self.assertRaises(exception.ValidationError, self.wh.new_websocket_client) @mock.patch('nova.consoleauth.rpcapi.ConsoleAuthAPI.check_token') def test_new_websocket_client_novnc_no_origin_header(self, check_token): check_token.return_value = { 'host': 'node1', 'port': '10000', 'console_type': 'novnc' } self.wh.socket.return_value = '' self.wh.path = "http://127.0.0.1/" self.wh.headers = self.fake_header_no_origin self.wh.new_websocket_client() check_token.assert_called_with(mock.ANY, token="123-456-789") self.wh.socket.assert_called_with('node1', 10000, connect=True) self.wh.do_proxy.assert_called_with('') @mock.patch('nova.consoleauth.rpcapi.ConsoleAuthAPI.check_token') def test_new_websocket_client_novnc_https_origin_proto_http(self, check_token): check_token.return_value = { 'host': 'node1', 'port': '10000', 'console_type': 'novnc', 'access_url': 'http://example.net:6080' } self.wh.path = "https://127.0.0.1/" self.wh.headers = self.fake_header self.assertRaises(exception.ValidationError, self.wh.new_websocket_client) @mock.patch('nova.consoleauth.rpcapi.ConsoleAuthAPI.check_token') def test_new_websocket_client_novnc_https_origin_proto_ws(self, check_token): check_token.return_value = { 'host': 'node1', 'port': '10000', 'console_type': 'serial', 'access_url': 'ws://example.net:6080' } self.wh.path = "https://127.0.0.1/" self.wh.headers = self.fake_header self.assertRaises(exception.ValidationError, self.wh.new_websocket_client) @mock.patch('nova.consoleauth.rpcapi.ConsoleAuthAPI.check_token') def test_new_websocket_client_novnc_bad_console_type(self, check_token): check_token.return_value = { 'host': 'node1', 'port': '10000', 'console_type': 'bad-console-type' } self.wh.path = "http://127.0.0.1/" self.wh.headers = self.fake_header self.assertRaises(exception.ValidationError, self.wh.new_websocket_client) @mock.patch('nova.consoleauth.rpcapi.ConsoleAuthAPI.check_token') def 
test_malformed_cookie(self, check_token): check_token.return_value = { 'host': 'node1', 'port': '10000', 'console_type': 'novnc', 'access_url': 'https://example.net:6080' } self.wh.socket.return_value = '' self.wh.path = "http://127.0.0.1/" self.wh.headers = self.fake_header_malformed_cookie self.wh.new_websocket_client() check_token.assert_called_with(mock.ANY, token="123-456-789") self.wh.socket.assert_called_with('node1', 10000, connect=True) self.wh.do_proxy.assert_called_with('') class NovaWebsocketSecurityProxyTestCase(test.NoDBTestCase): def setUp(self): super(NovaWebsocketSecurityProxyTestCase, self).setUp() self.flags(allowed_origins=['allowed-origin-example-1.net', 'allowed-origin-example-2.net'], group='console') self.server = websocketproxy.NovaWebSocketProxy( security_proxy=mock.MagicMock( spec=base.SecurityProxy) ) self.wh = websocketproxy.NovaProxyRequestHandlerBase() self.wh.server = self.server self.wh.path = "http://127.0.0.1/?token=123-456-789" self.wh.socket = mock.MagicMock() self.wh.msg = mock.MagicMock() self.wh.do_proxy = mock.MagicMock() self.wh.headers = mock.MagicMock() def get_header(header): if header == 'cookie': return 'token="123-456-789"' elif header == 'Origin': return 'https://example.net:6080' elif header == 'Host': return 'example.net:6080' else: return self.wh.headers.get = get_header @mock.patch('nova.console.websocketproxy.TenantSock.close') @mock.patch('nova.console.websocketproxy.TenantSock.finish_up') @mock.patch('nova.consoleauth.rpcapi.ConsoleAuthAPI.check_token', return_value=True) def test_proxy_connect_ok(self, check_token, mock_finish, mock_close): check_token.return_value = { 'host': 'node1', 'port': '10000', 'console_type': 'novnc', 'access_url': 'https://example.net:6080' } sock = mock.MagicMock( spec=websocketproxy.TenantSock) self.server.security_proxy.connect.return_value = sock self.wh.new_websocket_client() self.wh.do_proxy.assert_called_with(sock) mock_finish.assert_called_with() self.assertEqual(len(mock_close.calls), 0) @mock.patch('nova.console.websocketproxy.TenantSock.close') @mock.patch('nova.console.websocketproxy.TenantSock.finish_up') @mock.patch('nova.consoleauth.rpcapi.ConsoleAuthAPI.check_token', return_value=True) def test_proxy_connect_err(self, check_token, mock_finish, mock_close): check_token.return_value = { 'host': 'node1', 'port': '10000', 'console_type': 'novnc', 'access_url': 'https://example.net:6080' } ex = exception.SecurityProxyNegotiationFailed("Wibble") self.server.security_proxy.connect.side_effect = ex self.assertRaises(exception.SecurityProxyNegotiationFailed, self.wh.new_websocket_client) self.assertEqual(len(self.wh.do_proxy.calls), 0) mock_close.assert_called_with() self.assertEqual(len(mock_finish.calls), 0) nova-17.0.1/nova/tests/unit/console/test_rpcapi.py0000666000175000017500000000426413250073126022211 0ustar zuulzuul00000000000000# Copyright 2012, Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
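# Both tests below funnel through _test_console_api(), which asserts that
# each public ConsoleAPI method turns into a single RPC cast of the same
# name on the console topic. Sketched expectation (mirrors the assertions
# only, with 'ctxt' being any RequestContext):
#
#     rpcapi = console_rpcapi.ConsoleAPI()
#     rpcapi.add_console(ctxt, instance_id='i')
#     # -> rpcapi.client.prepare(), then cast(ctxt, 'add_console', instance_id='i')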
""" Unit Tests for nova.console.rpcapi """ import mock from nova.console import rpcapi as console_rpcapi from nova import context from nova import test class ConsoleRpcAPITestCase(test.NoDBTestCase): def _test_console_api(self, method, rpc_method, **kwargs): ctxt = context.RequestContext('fake_user', 'fake_project') rpcapi = console_rpcapi.ConsoleAPI() self.assertIsNotNone(rpcapi.client) self.assertEqual(rpcapi.client.target.topic, console_rpcapi.RPC_TOPIC) orig_prepare = rpcapi.client.prepare with test.nested( mock.patch.object(rpcapi.client, rpc_method), mock.patch.object(rpcapi.client, 'prepare'), mock.patch.object(rpcapi.client, 'can_send_version'), ) as ( rpc_mock, prepare_mock, csv_mock ): prepare_mock.return_value = rpcapi.client rpc_mock.return_value = 'foo' if rpc_method == 'call' else None csv_mock.side_effect = ( lambda v: orig_prepare().can_send_version()) retval = getattr(rpcapi, method)(ctxt, **kwargs) self.assertEqual(retval, rpc_mock.return_value) prepare_mock.assert_called_once_with() rpc_mock.assert_called_once_with(ctxt, method, **kwargs) def test_add_console(self): self._test_console_api('add_console', instance_id='i', rpc_method='cast') def test_remove_console(self): self._test_console_api('remove_console', console_id='i', rpc_method='cast') nova-17.0.1/nova/tests/unit/console/test_console.py0000666000175000017500000002063113250073126022371 0ustar zuulzuul00000000000000# Copyright (c) 2010 OpenStack Foundation # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
"""Tests For Console proxy.""" import fixtures import mock from nova.compute import rpcapi as compute_rpcapi import nova.conf from nova.console import api as console_api from nova.console import manager as console_manager from nova import context from nova import db from nova import exception from nova import objects from nova import test from nova.tests.unit import fake_instance from nova.tests.unit import fake_xvp_console_proxy CONF = nova.conf.CONF class ConsoleTestCase(test.TestCase): """Test case for console proxy manager.""" def setUp(self): super(ConsoleTestCase, self).setUp() self.useFixture(fixtures.MonkeyPatch( 'nova.console.manager.xvp.XVPConsoleProxy', fake_xvp_console_proxy.FakeConsoleProxy)) self.console = console_manager.ConsoleProxyManager() self.user_id = 'fake' self.project_id = 'fake' self.context = context.RequestContext(self.user_id, self.project_id) self.host = 'test_compute_host' self.pool_info = {'address': '127.0.0.1', 'username': 'test', 'password': '1234pass'} def test_reset(self): with mock.patch('nova.compute.rpcapi.ComputeAPI') as mock_rpc: old_rpcapi = self.console.compute_rpcapi self.console.reset() mock_rpc.assert_called_once_with() self.assertNotEqual(old_rpcapi, self.console.compute_rpcapi) def _create_instance(self): """Create a test instance.""" inst = {} inst['image_id'] = 1 inst['reservation_id'] = 'r-fakeres' inst['user_id'] = self.user_id inst['project_id'] = self.project_id inst['instance_type_id'] = 1 inst['ami_launch_index'] = 0 return fake_instance.fake_instance_obj(self.context, **inst) @mock.patch.object(compute_rpcapi.ComputeAPI, 'get_console_pool_info') def test_get_pool_for_instance_host(self, mock_get): mock_get.return_value = self.pool_info pool = self.console._get_pool_for_instance_host(self.context, self.host) self.assertEqual(pool['compute_host'], self.host) @mock.patch.object(compute_rpcapi.ComputeAPI, 'get_console_pool_info') def test_get_pool_creates_new_pool_if_needed(self, mock_get): mock_get.return_value = self.pool_info self.assertRaises(exception.NotFound, db.console_pool_get_by_host_type, self.context, self.host, self.console.host, self.console.driver.console_type) pool = self.console._get_pool_for_instance_host(self.context, self.host) pool2 = db.console_pool_get_by_host_type(self.context, self.host, self.console.host, self.console.driver.console_type) self.assertEqual(pool['id'], pool2['id']) def test_get_pool_does_not_create_new_pool_if_exists(self): pool_info = {'address': '127.0.0.1', 'username': 'test', 'password': '1234pass', 'host': self.console.host, 'console_type': self.console.driver.console_type, 'compute_host': 'sometesthostname'} new_pool = db.console_pool_create(self.context, pool_info) pool = self.console._get_pool_for_instance_host(self.context, 'sometesthostname') self.assertEqual(pool['id'], new_pool['id']) @mock.patch.object(compute_rpcapi.ComputeAPI, 'get_console_pool_info') @mock.patch('nova.objects.instance.Instance.get_by_id') def test_add_console(self, mock_id, mock_get): mock_get.return_value = self.pool_info instance = self._create_instance() mock_id.return_value = instance self.console.add_console(self.context, instance.id) pool = db.console_pool_get_by_host_type(self.context, instance.host, self.console.host, self.console.driver.console_type) console_instances = [con['instance_uuid'] for con in pool['consoles']] self.assertIn(instance.uuid, console_instances) @mock.patch.object(compute_rpcapi.ComputeAPI, 'get_console_pool_info') @mock.patch('nova.objects.instance.Instance.get_by_id') def 
test_add_console_does_not_duplicate(self, mock_id, mock_get): mock_get.return_value = self.pool_info instance = self._create_instance() mock_id.return_value = instance cons1 = self.console.add_console(self.context, instance.id) cons2 = self.console.add_console(self.context, instance.id) self.assertEqual(cons1, cons2) @mock.patch.object(compute_rpcapi.ComputeAPI, 'get_console_pool_info') @mock.patch('nova.objects.instance.Instance.get_by_id') def test_remove_console(self, mock_id, mock_get): mock_get.return_value = self.pool_info instance = self._create_instance() mock_id.return_value = instance console_id = self.console.add_console(self.context, instance.id) self.console.remove_console(self.context, console_id) self.assertRaises(exception.NotFound, db.console_get, self.context, console_id) class ConsoleAPITestCase(test.NoDBTestCase): """Test case for console API.""" def setUp(self): super(ConsoleAPITestCase, self).setUp() self.context = context.RequestContext('fake', 'fake') self.console_api = console_api.API() self.fake_uuid = '00000000-aaaa-bbbb-cccc-000000000000' self.fake_instance = { 'id': 1, 'uuid': self.fake_uuid, 'host': 'fake_host' } self.fake_console = { 'pool': {'host': 'fake_host'}, 'id': 'fake_id' } def _fake_db_console_get(_ctxt, _console_uuid, _instance_uuid): return self.fake_console self.stub_out('nova.db.console_get', _fake_db_console_get) def _fake_db_console_get_all_by_instance(_ctxt, _instance_uuid, columns_to_join): return [self.fake_console] self.stub_out('nova.db.console_get_all_by_instance', _fake_db_console_get_all_by_instance) def test_get_consoles(self): console = self.console_api.get_consoles(self.context, self.fake_uuid) self.assertEqual(console, [self.fake_console]) def test_get_console(self): console = self.console_api.get_console(self.context, self.fake_uuid, 'fake_id') self.assertEqual(console, self.fake_console) @mock.patch('nova.console.rpcapi.ConsoleAPI.remove_console') def test_delete_console(self, mock_remove): self.console_api.delete_console(self.context, self.fake_uuid, 'fake_id') mock_remove.assert_called_once_with(self.context, 'fake_id') @mock.patch.object(compute_rpcapi.ComputeAPI, 'get_console_topic', return_value='compute.fake_host') @mock.patch.object(objects.Instance, 'get_by_uuid') def test_create_console(self, mock_get_instance_by_uuid, mock_get_console_topic): mock_get_instance_by_uuid.return_value = objects.Instance( **self.fake_instance) self.console_api.create_console(self.context, self.fake_uuid) mock_get_console_topic.assert_called_once_with(self.context, 'fake_host') nova-17.0.1/nova/tests/unit/console/test_serial.py0000666000175000017500000001047713250073126022215 0ustar zuulzuul00000000000000# All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
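# The allocation tests below pin down acquire_port() semantics: walk the
# configured port_range for the requested host, skip ports already in
# ALLOCATED_PORTS or failing the bind probe, then record and return the
# first free one. A minimal sketch of that loop (assumes only what the
# tests assert, not the module's actual code):
#
#     for port in range(start, stop):
#         if (host, port) in ALLOCATED_PORTS:
#             continue
#         try:
#             _verify_port(host, port)
#         except exception.SocketPortInUseException:
#             continue                      # probe failed; try the next port
#         ALLOCATED_PORTS.add((host, port))
#         return port
#     raise exception.SocketPortRangeExhaustedException(host=host)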
"""Tests for Serial Console.""" import socket import mock import six.moves from nova.console import serial from nova import exception from nova import test class SerialTestCase(test.NoDBTestCase): def setUp(self): super(SerialTestCase, self).setUp() serial.ALLOCATED_PORTS = set() def test_get_port_range(self): start, stop = serial._get_port_range() self.assertEqual(10000, start) self.assertEqual(20000, stop) def test_get_port_range_customized(self): self.flags(port_range='30000:40000', group='serial_console') start, stop = serial._get_port_range() self.assertEqual(30000, start) self.assertEqual(40000, stop) def test_get_port_range_bad_range(self): self.flags(port_range='40000:30000', group='serial_console') start, stop = serial._get_port_range() self.assertEqual(10000, start) self.assertEqual(20000, stop) @mock.patch('socket.socket') def test_verify_port(self, fake_socket): s = mock.MagicMock() fake_socket.return_value = s serial._verify_port('127.0.0.1', 10) s.bind.assert_called_once_with(('127.0.0.1', 10)) @mock.patch('socket.socket') def test_verify_port_in_use(self, fake_socket): s = mock.MagicMock() s.bind.side_effect = socket.error() fake_socket.return_value = s self.assertRaises( exception.SocketPortInUseException, serial._verify_port, '127.0.0.1', 10) s.bind.assert_called_once_with(('127.0.0.1', 10)) @mock.patch('nova.console.serial._verify_port', lambda x, y: None) def test_acquire_port(self): start, stop = 15, 20 self.flags( port_range='%d:%d' % (start, stop), group='serial_console') for port in six.moves.range(start, stop): self.assertEqual(port, serial.acquire_port('127.0.0.1')) for port in six.moves.range(start, stop): self.assertEqual(port, serial.acquire_port('127.0.0.2')) self.assertEqual(10, len(serial.ALLOCATED_PORTS)) @mock.patch('nova.console.serial._verify_port') def test_acquire_port_in_use(self, fake_verify_port): def port_10000_already_used(host, port): if port == 10000 and host == '127.0.0.1': raise exception.SocketPortInUseException( port=port, host=host, error="already in use") fake_verify_port.side_effect = port_10000_already_used self.assertEqual(10001, serial.acquire_port('127.0.0.1')) self.assertEqual(10000, serial.acquire_port('127.0.0.2')) self.assertNotIn(('127.0.0.1', 10000), serial.ALLOCATED_PORTS) self.assertIn(('127.0.0.1', 10001), serial.ALLOCATED_PORTS) self.assertIn(('127.0.0.2', 10000), serial.ALLOCATED_PORTS) @mock.patch('nova.console.serial._verify_port') def test_acquire_port_not_ble_to_bind_at_any_port(self, fake_verify_port): start, stop = 15, 20 self.flags( port_range='%d:%d' % (start, stop), group='serial_console') fake_verify_port.side_effect = ( exception.SocketPortRangeExhaustedException(host='127.0.0.1')) self.assertRaises( exception.SocketPortRangeExhaustedException, serial.acquire_port, '127.0.0.1') def test_release_port(self): serial.ALLOCATED_PORTS.add(('127.0.0.1', 100)) serial.ALLOCATED_PORTS.add(('127.0.0.2', 100)) self.assertEqual(2, len(serial.ALLOCATED_PORTS)) serial.release_port('127.0.0.1', 100) self.assertEqual(1, len(serial.ALLOCATED_PORTS)) serial.release_port('127.0.0.2', 100) self.assertEqual(0, len(serial.ALLOCATED_PORTS)) nova-17.0.1/nova/tests/unit/console/__init__.py0000666000175000017500000000000013250073126021413 0ustar zuulzuul00000000000000nova-17.0.1/nova/tests/unit/console/rfb/0000775000175000017500000000000013250073472020067 5ustar zuulzuul00000000000000nova-17.0.1/nova/tests/unit/console/rfb/test_authnone.py0000666000175000017500000000215613250073126023323 0ustar zuulzuul00000000000000# Copyright (c) 
2014-2016 Red Hat, Inc # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from nova.console.rfb import auth from nova.console.rfb import authnone from nova import test class RFBAuthSchemeNoneTestCase(test.NoDBTestCase): def test_handshake(self): scheme = authnone.RFBAuthSchemeNone() sock = mock.MagicMock() ret = scheme.security_handshake(sock) self.assertEqual(sock, ret) def test_types(self): scheme = authnone.RFBAuthSchemeNone() self.assertEqual(auth.AuthType.NONE, scheme.security_type()) nova-17.0.1/nova/tests/unit/console/rfb/test_auth.py0000666000175000017500000000453713250073126022450 0ustar zuulzuul00000000000000# Copyright (c) 2016 Red Hat, Inc # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from nova.console.rfb import auth from nova.console.rfb import authnone from nova.console.rfb import auths from nova import exception from nova import test class RFBAuthSchemeListTestCase(test.NoDBTestCase): def setUp(self): super(RFBAuthSchemeListTestCase, self).setUp() self.flags(auth_schemes=["none", "vencrypt"], group="vnc") def test_load_ok(self): schemelist = auths.RFBAuthSchemeList() security_types = sorted(schemelist.schemes.keys()) self.assertEqual([auth.AuthType.NONE, auth.AuthType.VENCRYPT], security_types) def test_load_unknown(self): """Ensure invalid auth schemes are not supported. We're really testing oslo_config option validation here, but this case is esoteric enough to warrant this.
""" self.assertRaises(ValueError, self.flags, auth_schemes=['none', 'wibble'], group='vnc') def test_find_scheme_ok(self): schemelist = auths.RFBAuthSchemeList() scheme = schemelist.find_scheme( [auth.AuthType.TIGHT, auth.AuthType.NONE]) self.assertIsInstance(scheme, authnone.RFBAuthSchemeNone) def test_find_scheme_fail(self): schemelist = auths.RFBAuthSchemeList() self.assertRaises(exception.RFBAuthNoAvailableScheme, schemelist.find_scheme, [auth.AuthType.TIGHT]) def test_find_scheme_priority(self): schemelist = auths.RFBAuthSchemeList() tight = mock.MagicMock(spec=auth.RFBAuthScheme) schemelist.schemes[auth.AuthType.TIGHT] = tight scheme = schemelist.find_scheme( [auth.AuthType.TIGHT, auth.AuthType.NONE]) self.assertEqual(tight, scheme) nova-17.0.1/nova/tests/unit/console/rfb/__init__.py0000666000175000017500000000000013250073126022164 0ustar zuulzuul00000000000000nova-17.0.1/nova/tests/unit/console/rfb/test_authvencrypt.py0000666000175000017500000001512613250073126024237 0ustar zuulzuul00000000000000# Copyright (c) 2016 Red Hat, Inc # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import ssl import struct import mock from nova.console.rfb import auth from nova.console.rfb import authvencrypt from nova import exception from nova import test class RFBAuthSchemeVeNCryptTestCase(test.NoDBTestCase): def setUp(self): super(RFBAuthSchemeVeNCryptTestCase, self).setUp() self.scheme = authvencrypt.RFBAuthSchemeVeNCrypt() self.compute_sock = mock.MagicMock() self.compute_sock.recv.side_effect = [] self.expected_calls = [] self.flags(vencrypt_ca_certs="/certs/ca.pem", group="vnc") def _expect_send(self, val): self.expected_calls.append(mock.call.sendall(val)) def _expect_recv(self, amt, ret_val): self.expected_calls.append(mock.call.recv(amt)) self.compute_sock.recv.side_effect = ( list(self.compute_sock.recv.side_effect) + [ret_val]) @mock.patch.object(ssl, "wrap_socket", return_value="wrapped") def test_security_handshake_with_x509(self, mock_socket): self.flags(vencrypt_client_key='/certs/keyfile', vencrypt_client_cert='/certs/cert.pem', group="vnc") self._expect_recv(1, "\x00") self._expect_recv(1, "\x02") self._expect_send(b"\x00\x02") self._expect_recv(1, "\x00") self._expect_recv(1, "\x02") subtypes_raw = [authvencrypt.AuthVeNCryptSubtype.X509NONE, authvencrypt.AuthVeNCryptSubtype.X509VNC] subtypes = struct.pack('!2I', *subtypes_raw) self._expect_recv(8, subtypes) self._expect_send(struct.pack('!I', subtypes_raw[0])) self._expect_recv(1, "\x01") self.assertEqual("wrapped", self.scheme.security_handshake( self.compute_sock)) mock_socket.assert_called_once_with( self.compute_sock, keyfile='/certs/keyfile', certfile='/certs/cert.pem', server_side=False, cert_reqs=ssl.CERT_REQUIRED, ca_certs='/certs/ca.pem') self.assertEqual(self.expected_calls, self.compute_sock.mock_calls) @mock.patch.object(ssl, "wrap_socket", return_value="wrapped") def test_security_handshake_without_x509(self, mock_socket): self._expect_recv(1, "\x00") self._expect_recv(1, "\x02") self._expect_send(b"\x00\x02") self._expect_recv(1, "\x00") 
self._expect_recv(1, "\x02") subtypes_raw = [authvencrypt.AuthVeNCryptSubtype.X509NONE, authvencrypt.AuthVeNCryptSubtype.X509VNC] subtypes = struct.pack('!2I', *subtypes_raw) self._expect_recv(8, subtypes) self._expect_send(struct.pack('!I', subtypes_raw[0])) self._expect_recv(1, "\x01") self.assertEqual("wrapped", self.scheme.security_handshake( self.compute_sock)) mock_socket.assert_called_once_with( self.compute_sock, keyfile=None, certfile=None, server_side=False, cert_reqs=ssl.CERT_REQUIRED, ca_certs='/certs/ca.pem' ) self.assertEqual(self.expected_calls, self.compute_sock.mock_calls) def _test_security_handshake_fails(self): self.assertRaises(exception.RFBAuthHandshakeFailed, self.scheme.security_handshake, self.compute_sock) self.assertEqual(self.expected_calls, self.compute_sock.mock_calls) def test_security_handshake_fails_on_low_version(self): self._expect_recv(1, "\x00") self._expect_recv(1, "\x01") self._test_security_handshake_fails() def test_security_handshake_fails_on_cant_use_version(self): self._expect_recv(1, "\x00") self._expect_recv(1, "\x02") self._expect_send(b"\x00\x02") self._expect_recv(1, "\x01") self._test_security_handshake_fails() def test_security_handshake_fails_on_missing_subauth(self): self._expect_recv(1, "\x00") self._expect_recv(1, "\x02") self._expect_send(b"\x00\x02") self._expect_recv(1, "\x00") self._expect_recv(1, "\x01") subtypes_raw = [authvencrypt.AuthVeNCryptSubtype.X509VNC] subtypes = struct.pack('!I', *subtypes_raw) self._expect_recv(4, subtypes) self._test_security_handshake_fails() def test_security_handshake_fails_on_auth_not_accepted(self): self._expect_recv(1, "\x00") self._expect_recv(1, "\x02") self._expect_send(b"\x00\x02") self._expect_recv(1, "\x00") self._expect_recv(1, "\x02") subtypes_raw = [authvencrypt.AuthVeNCryptSubtype.X509NONE, authvencrypt.AuthVeNCryptSubtype.X509VNC] subtypes = struct.pack('!2I', *subtypes_raw) self._expect_recv(8, subtypes) self._expect_send(struct.pack('!I', subtypes_raw[0])) self._expect_recv(1, "\x00") self._test_security_handshake_fails() @mock.patch.object(ssl, "wrap_socket") def test_security_handshake_fails_on_ssl_failure(self, mock_socket): self._expect_recv(1, "\x00") self._expect_recv(1, "\x02") self._expect_send(b"\x00\x02") self._expect_recv(1, "\x00") self._expect_recv(1, "\x02") subtypes_raw = [authvencrypt.AuthVeNCryptSubtype.X509NONE, authvencrypt.AuthVeNCryptSubtype.X509VNC] subtypes = struct.pack('!2I', *subtypes_raw) self._expect_recv(8, subtypes) self._expect_send(struct.pack('!I', subtypes_raw[0])) self._expect_recv(1, "\x01") mock_socket.side_effect = ssl.SSLError("cheese") self._test_security_handshake_fails() mock_socket.assert_called_once_with( self.compute_sock, keyfile=None, certfile=None, server_side=False, cert_reqs=ssl.CERT_REQUIRED, ca_certs='/certs/ca.pem' ) def test_types(self): scheme = authvencrypt.RFBAuthSchemeVeNCrypt() self.assertEqual(auth.AuthType.VENCRYPT, scheme.security_type()) nova-17.0.1/nova/tests/unit/fake_network_cache_model.py0000666000175000017500000000707413250073126023216 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.network import model def new_ip(ip_dict=None, version=4): if version == 6: new_ip = dict(address='fd00::1:100', version=6) elif version == 4: new_ip = dict(address='192.168.1.100') ip_dict = ip_dict or {} new_ip.update(ip_dict) return model.IP(**new_ip) def new_fixed_ip(ip_dict=None, version=4): if version == 6: new_fixed_ip = dict(address='fd00::1:100', version=6) elif version == 4: new_fixed_ip = dict(address='192.168.1.100') ip_dict = ip_dict or {} new_fixed_ip.update(ip_dict) return model.FixedIP(**new_fixed_ip) def new_route(route_dict=None, version=4): if version == 6: new_route = dict( cidr='::/48', gateway=new_ip(dict(address='fd00::1:1'), version=6), interface='eth0') elif version == 4: new_route = dict( cidr='0.0.0.0/24', gateway=new_ip(dict(address='192.168.1.1')), interface='eth0') route_dict = route_dict or {} new_route.update(route_dict) return model.Route(**new_route) def new_subnet(subnet_dict=None, version=4): if version == 6: new_subnet = dict( cidr='fd00::/48', dns=[new_ip(dict(address='1:2:3:4::'), version=6), new_ip(dict(address='2:3:4:5::'), version=6)], gateway=new_ip(dict(address='fd00::1'), version=6), ips=[new_fixed_ip(dict(address='fd00::2'), version=6), new_fixed_ip(dict(address='fd00::3'), version=6)], routes=[new_route(version=6)], version=6) elif version == 4: new_subnet = dict( cidr='10.10.0.0/24', dns=[new_ip(dict(address='1.2.3.4')), new_ip(dict(address='2.3.4.5'))], gateway=new_ip(dict(address='10.10.0.1')), ips=[new_fixed_ip(dict(address='10.10.0.2')), new_fixed_ip(dict(address='10.10.0.3'))], routes=[new_route()]) subnet_dict = subnet_dict or {} new_subnet.update(subnet_dict) return model.Subnet(**new_subnet) def new_network(network_dict=None, version=4): if version == 6: new_net = dict( id=1, bridge='br0', label='public', subnets=[new_subnet(version=6), new_subnet(dict(cidr='ffff:ffff:ffff:ffff::'), version=6)]) elif version == 4: new_net = dict( id=1, bridge='br0', label='public', subnets=[new_subnet(), new_subnet(dict(cidr='255.255.255.255'))]) network_dict = network_dict or {} new_net.update(network_dict) return model.Network(**new_net) def new_vif(vif_dict=None, version=4): vif = dict( id=1, address='aa:aa:aa:aa:aa:aa', type='bridge', network=new_network(version=version)) vif_dict = vif_dict or {} vif.update(vif_dict) return model.VIF(**vif) nova-17.0.1/nova/tests/unit/test_wsgi.py0000666000175000017500000002761313250073126020245 0ustar zuulzuul00000000000000# Copyright 2011 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. """Unit tests for `nova.wsgi`.""" import os.path import socket import tempfile import eventlet import eventlet.wsgi import mock from oslo_config import cfg import requests import six import testtools import webob import nova.exception from nova import test from nova.tests.unit import utils import nova.wsgi SSL_CERT_DIR = os.path.normpath(os.path.join( os.path.dirname(os.path.abspath(__file__)), 'ssl_cert')) CONF = cfg.CONF class TestLoaderNothingExists(test.NoDBTestCase): """Loader tests where os.path.exists always returns False.""" def setUp(self): super(TestLoaderNothingExists, self).setUp() self.stub_out('os.path.exists', lambda _: False) def test_relpath_config_not_found(self): self.flags(api_paste_config='api-paste.ini', group='wsgi') self.assertRaises( nova.exception.ConfigNotFound, nova.wsgi.Loader, ) def test_abspath_config_not_found(self): self.flags(api_paste_config='/etc/nova/api-paste.ini', group='wsgi') self.assertRaises( nova.exception.ConfigNotFound, nova.wsgi.Loader, ) class TestLoaderNormalFilesystem(test.NoDBTestCase): """Loader tests with normal filesystem (unmodified os.path module).""" _paste_config = """ [app:test_app] use = egg:Paste#static document_root = /tmp """ def setUp(self): super(TestLoaderNormalFilesystem, self).setUp() self.config = tempfile.NamedTemporaryFile(mode="w+t") self.config.write(self._paste_config.lstrip()) self.config.seek(0) self.config.flush() self.loader = nova.wsgi.Loader(self.config.name) def test_config_found(self): self.assertEqual(self.config.name, self.loader.config_path) def test_app_not_found(self): self.assertRaises( nova.exception.PasteAppNotFound, self.loader.load_app, "nonexistent app", ) def test_app_found(self): url_parser = self.loader.load_app("test_app") self.assertEqual("/tmp", url_parser.directory) def tearDown(self): self.config.close() super(TestLoaderNormalFilesystem, self).tearDown() class TestWSGIServer(test.NoDBTestCase): """WSGI server tests.""" def test_no_app(self): server = nova.wsgi.Server("test_app", None) self.assertEqual("test_app", server.name) def test_custom_max_header_line(self): self.flags(max_header_line=4096, group='wsgi') # Default is 16384 nova.wsgi.Server("test_custom_max_header_line", None) self.assertEqual(CONF.wsgi.max_header_line, eventlet.wsgi.MAX_HEADER_LINE) def test_start_random_port(self): server = nova.wsgi.Server("test_random_port", None, host="127.0.0.1", port=0) server.start() self.assertNotEqual(0, server.port) server.stop() server.wait() @testtools.skipIf(not utils.is_ipv6_supported(), "no ipv6 support") def test_start_random_port_with_ipv6(self): server = nova.wsgi.Server("test_random_port", None, host="::1", port=0) server.start() self.assertEqual("::1", server.host) self.assertNotEqual(0, server.port) server.stop() server.wait() @testtools.skipIf(not utils.is_linux(), 'SO_REUSEADDR behaves differently ' 'on OSX and BSD, see bugs ' '1436895 and 1467145') def test_socket_options_for_simple_server(self): # test that normal socket options are set properly self.flags(tcp_keepidle=500, group='wsgi') server = nova.wsgi.Server("test_socket_options", None, host="127.0.0.1", port=0) server.start() sock = server._socket self.assertEqual(1, sock.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR)) self.assertEqual(1, sock.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE)) if hasattr(socket, 'TCP_KEEPIDLE'): self.assertEqual(CONF.wsgi.tcp_keepidle, sock.getsockopt(socket.IPPROTO_TCP,
socket.TCP_KEEPIDLE)) server.stop() server.wait() def test_server_pool_waitall(self): # test pools waitall method gets called while stopping server server = nova.wsgi.Server("test_server", None, host="127.0.0.1") server.start() with mock.patch.object(server._pool, 'waitall') as mock_waitall: server.stop() server.wait() mock_waitall.assert_called_once_with() def test_uri_length_limit(self): server = nova.wsgi.Server("test_uri_length_limit", None, host="127.0.0.1", max_url_len=16384) server.start() uri = "http://127.0.0.1:%d/%s" % (server.port, 10000 * 'x') resp = requests.get(uri, proxies={"http": ""}) eventlet.sleep(0) self.assertNotEqual(resp.status_code, requests.codes.REQUEST_URI_TOO_LARGE) uri = "http://127.0.0.1:%d/%s" % (server.port, 20000 * 'x') resp = requests.get(uri, proxies={"http": ""}) eventlet.sleep(0) self.assertEqual(resp.status_code, requests.codes.REQUEST_URI_TOO_LARGE) server.stop() server.wait() def test_reset_pool_size_to_default(self): server = nova.wsgi.Server("test_resize", None, host="127.0.0.1", max_url_len=16384) server.start() # Stopping the server, which in turn sets pool size to 0 server.stop() self.assertEqual(server._pool.size, 0) # Resetting pool size to default server.reset() server.start() self.assertEqual(server._pool.size, CONF.wsgi.default_pool_size) def test_client_socket_timeout(self): self.flags(client_socket_timeout=5, group='wsgi') # mocking eventlet spawn method to check it is called with # configured 'client_socket_timeout' value. with mock.patch.object(eventlet, 'spawn') as mock_spawn: server = nova.wsgi.Server("test_app", None, host="127.0.0.1", port=0) server.start() _, kwargs = mock_spawn.call_args self.assertEqual(CONF.wsgi.client_socket_timeout, kwargs['socket_timeout']) server.stop() def test_keep_alive(self): self.flags(keep_alive=False, group='wsgi') # mocking eventlet spawn method to check it is called with # configured 'keep_alive' value. 
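# A rough sketch of the call being intercepted (assumed shape, based
# on the assertion below): Server.start() ends up doing something like
#     eventlet.spawn(eventlet.wsgi.server, sock, app,
#                    keepalive=CONF.wsgi.keep_alive, ...)
# so inspecting mock_spawn.call_args is enough to verify the option
# was threaded through.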
with mock.patch.object(eventlet, 'spawn') as mock_spawn: server = nova.wsgi.Server("test_app", None, host="127.0.0.1", port=0) server.start() _, kwargs = mock_spawn.call_args self.assertEqual(CONF.wsgi.keep_alive, kwargs['keepalive']) server.stop() @testtools.skipIf(six.PY3, "bug/1482633: test hangs on Python 3") class TestWSGIServerWithSSL(test.NoDBTestCase): """WSGI server with SSL tests.""" def setUp(self): super(TestWSGIServerWithSSL, self).setUp() self.flags(enabled_ssl_apis=['fake_ssl']) self.flags( ssl_cert_file=os.path.join(SSL_CERT_DIR, 'certificate.crt'), ssl_key_file=os.path.join(SSL_CERT_DIR, 'privatekey.key'), group='wsgi') def test_ssl_server(self): def test_app(env, start_response): start_response('200 OK', {}) return ['PONG'] fake_ssl_server = nova.wsgi.Server("fake_ssl", test_app, host="127.0.0.1", port=0, use_ssl=True) fake_ssl_server.start() self.assertNotEqual(0, fake_ssl_server.port) response = requests.post( 'https://127.0.0.1:%s/' % fake_ssl_server.port, verify=os.path.join(SSL_CERT_DIR, 'ca.crt'), data='PING') self.assertEqual(response.text, 'PONG') fake_ssl_server.stop() fake_ssl_server.wait() def test_two_servers(self): def test_app(env, start_response): start_response('200 OK', {}) return ['PONG'] fake_ssl_server = nova.wsgi.Server("fake_ssl", test_app, host="127.0.0.1", port=0, use_ssl=True) fake_ssl_server.start() self.assertNotEqual(0, fake_ssl_server.port) fake_server = nova.wsgi.Server("fake", test_app, host="127.0.0.1", port=0) fake_server.start() self.assertNotEqual(0, fake_server.port) response = requests.post( 'https://127.0.0.1:%s/' % fake_ssl_server.port, verify=os.path.join(SSL_CERT_DIR, 'ca.crt'), data='PING') self.assertEqual(response.text, 'PONG') response = requests.post('http://127.0.0.1:%s/' % fake_server.port, data='PING') self.assertEqual(response.text, 'PONG') fake_ssl_server.stop() fake_ssl_server.wait() fake_server.stop() fake_server.wait() @testtools.skipIf(not utils.is_linux(), 'SO_REUSEADDR behaves differently ' 'on OSX and BSD, see bugs ' '1436895 and 1467145') def test_socket_options_for_ssl_server(self): # test normal socket options has set properly self.flags(tcp_keepidle=500, group='wsgi') server = nova.wsgi.Server("test_socket_options", None, host="127.0.0.1", port=0, use_ssl=True) server.start() sock = server._socket self.assertEqual(1, sock.getsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR)) self.assertEqual(1, sock.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE)) if hasattr(socket, 'TCP_KEEPIDLE'): self.assertEqual(CONF.wsgi.tcp_keepidle, sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE)) server.stop() server.wait() @testtools.skipIf(not utils.is_ipv6_supported(), "no ipv6 support") def test_app_using_ipv6_and_ssl(self): greetings = 'Hello, World!!!' @webob.dec.wsgify def hello_world(req): return greetings server = nova.wsgi.Server("fake_ssl", hello_world, host="::1", port=0, use_ssl=True) server.start() response = requests.get('https://[::1]:%d/' % server.port, verify=os.path.join(SSL_CERT_DIR, 'ca.crt')) self.assertEqual(greetings, response.text) server.stop() server.wait() nova-17.0.1/nova/tests/unit/test_api_validation.py0000666000175000017500000014651313250073136022261 0ustar zuulzuul00000000000000# Copyright 2013 NEC Corporation. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy import re import sys import fixtures from jsonschema import exceptions as jsonschema_exc import six from nova.api.openstack import api_version_request as api_version from nova.api import validation from nova.api.validation import parameter_types from nova.api.validation import validators from nova import exception from nova import test from nova.tests.unit.api.openstack import fakes query_schema = { 'type': 'object', 'properties': { 'foo': parameter_types.single_param({'type': 'string', 'format': 'uuid'}), 'foos': parameter_types.multi_params({'type': 'string'}) }, 'patternProperties': { "^_": parameter_types.multi_params({'type': 'string'})}, 'additionalProperties': True } class FakeQueryParametersController(object): @validation.query_schema(query_schema, '2.3') def get(self, req): return list(set(req.GET.keys())) class RegexFormatFakeController(object): schema = { 'type': 'object', 'properties': { 'foo': { 'format': 'regex', }, }, } @validation.schema(request_body_schema=schema) def post(self, req, body): return 'Validation succeeded.' class FakeRequest(object): api_version_request = api_version.APIVersionRequest("2.1") environ = {} legacy_v2 = False def is_legacy_v2(self): return self.legacy_v2 class ValidationRegex(test.NoDBTestCase): def test_cell_names(self): cellre = re.compile(parameter_types.valid_cell_name_regex.regex) self.assertTrue(cellre.search('foo')) self.assertFalse(cellre.search('foo.bar')) self.assertFalse(cellre.search('foo@bar')) self.assertFalse(cellre.search('foo!bar')) self.assertFalse(cellre.search(' foo!bar')) self.assertFalse(cellre.search('\nfoo!bar')) def test_build_regex_range(self): # this is much easier to think about if we only use the ascii # subset because it's a printable range we can think # about. The algorithm works for all ranges. def _get_all_chars(): for i in range(0x7F): yield six.unichr(i) self.useFixture(fixtures.MonkeyPatch( 'nova.api.validation.parameter_types._get_all_chars', _get_all_chars)) r = parameter_types._build_regex_range(ws=False) self.assertEqual(r, re.escape('!') + '-' + re.escape('~')) # if we allow whitespace the range starts earlier r = parameter_types._build_regex_range(ws=True) self.assertEqual(r, re.escape(' ') + '-' + re.escape('~')) # excluding a character will give us 2 ranges r = parameter_types._build_regex_range(ws=True, exclude=['A']) self.assertEqual(r, re.escape(' ') + '-' + re.escape('@') + 'B' + '-' + re.escape('~')) # inverting which gives us all the initial unprintable characters. r = parameter_types._build_regex_range(ws=False, invert=True) self.assertEqual(r, re.escape('\x00') + '-' + re.escape(' ')) # excluding characters that create a singleton. Naively this would be: # ' -@B-BD-~' which seems to work, but ' -@BD-~' is more natural. r = parameter_types._build_regex_range(ws=True, exclude=['A', 'C']) self.assertEqual(r, re.escape(' ') + '-' + re.escape('@') + 'B' + 'D' + '-' + re.escape('~')) # ws=True means the positive regex has printable whitespaces, # so the inverse will not. The inverse will include things we # exclude. 
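# A worked instance of the statement above: excluding A, B, C and Z
# while inverting yields the non-printable range plus the excluded
# letters themselves, i.e. '\x00'-'\x1f' followed by 'A-CZ', which is
# exactly what the assertion below spells out.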
r = parameter_types._build_regex_range( ws=True, exclude=['A', 'B', 'C', 'Z'], invert=True) self.assertEqual(r, re.escape('\x00') + '-' + re.escape('\x1f') + 'A-CZ') class APIValidationTestCase(test.NoDBTestCase): post_schema = None def setUp(self): super(APIValidationTestCase, self).setUp() self.post = None if self.post_schema is not None: @validation.schema(request_body_schema=self.post_schema) def post(req, body): return 'Validation succeeded.' self.post = post def check_validation_error(self, method, body, expected_detail, req=None): if not req: req = FakeRequest() try: method(body=body, req=req) except exception.ValidationError as ex: self.assertEqual(400, ex.kwargs['code']) if isinstance(expected_detail, list): self.assertIn(ex.kwargs['detail'], expected_detail, 'Exception details did not match expected') elif not re.match(expected_detail, ex.kwargs['detail']): self.assertEqual(expected_detail, ex.kwargs['detail'], 'Exception details did not match expected') except Exception as ex: self.fail('An unexpected exception happened: %s' % ex) else: self.fail('No exception was raised.') class FormatCheckerTestCase(test.NoDBTestCase): def _format_checker(self, format, value, error_message): format_checker = validators.FormatChecker() exc = self.assertRaises(jsonschema_exc.FormatError, format_checker.check, value, format) self.assertIsInstance(exc.cause, exception.InvalidName) self.assertEqual(error_message, exc.cause.format_message()) def test_format_checker_failed_with_non_string_name(self): error_message = ("An invalid 'name' value was provided. The name must " "be: printable characters. " "Can not start or end with whitespace.") self._format_checker("name", " ", error_message) self._format_checker("name", None, error_message) def test_format_checker_failed_with_non_string_cell_name(self): error_message = ("An invalid 'name' value was provided. " "The name must be: printable characters except " "!, ., @. Can not start or end with whitespace.") self._format_checker("cell_name", None, error_message) def test_format_checker_failed_name_with_leading_trailing_spaces(self): error_message = ("An invalid 'name' value was provided. " "The name must be: printable characters with at " "least one non space character") self._format_checker("name_with_leading_trailing_spaces", None, error_message) def test_format_checker_failed_cell_name_with_leading_trailing_spaces( self): error_message = ("An invalid 'name' value was provided. " "The name must be: printable characters except" " !, ., @, with at least one non space character") self._format_checker("cell_name_with_leading_trailing_spaces", None, error_message) class MicroversionsSchemaTestCase(APIValidationTestCase): def setUp(self): super(MicroversionsSchemaTestCase, self).setUp() schema_v21_int = { 'type': 'object', 'properties': { 'foo': { 'type': 'integer', } } } schema_v20_str = copy.deepcopy(schema_v21_int) schema_v20_str['properties']['foo'] = {'type': 'string'} @validation.schema(schema_v20_str, '2.0', '2.0') @validation.schema(schema_v21_int, '2.1') def post(req, body): return 'Validation succeeded.' self.post = post def test_validate_v2compatible_request(self): req = FakeRequest() req.legacy_v2 = True self.assertEqual(self.post(body={'foo': 'bar'}, req=req), 'Validation succeeded.') detail = ("Invalid input for field/attribute foo. Value: 1.
" "1 is not of type 'string'") self.check_validation_error(self.post, body={'foo': 1}, expected_detail=detail, req=req) def test_validate_v21_request(self): req = FakeRequest() self.assertEqual(self.post(body={'foo': 1}, req=req), 'Validation succeeded.') detail = ("Invalid input for field/attribute foo. Value: bar. " "'bar' is not of type 'integer'") self.check_validation_error(self.post, body={'foo': 'bar'}, expected_detail=detail, req=req) def test_validate_v2compatible_request_with_none_min_version(self): schema_none = { 'type': 'object', 'properties': { 'foo': { 'type': 'integer' } } } @validation.schema(schema_none) def post(req, body): return 'Validation succeeded.' req = FakeRequest() req.legacy_v2 = True self.assertEqual('Validation succeeded.', post(body={'foo': 1}, req=req)) detail = ("Invalid input for field/attribute foo. Value: bar. " "'bar' is not of type 'integer'") self.check_validation_error(post, body={'foo': 'bar'}, expected_detail=detail, req=req) class QueryParamsSchemaTestCase(test.NoDBTestCase): def setUp(self): super(QueryParamsSchemaTestCase, self).setUp() self.controller = FakeQueryParametersController() def test_validate_request(self): req = fakes.HTTPRequest.blank("/tests?foo=%s" % fakes.FAKE_UUID) req.api_version_request = api_version.APIVersionRequest("2.3") self.assertEqual(['foo'], self.controller.get(req)) def test_validate_request_failed(self): # parameter 'foo' expect a UUID req = fakes.HTTPRequest.blank("/tests?foo=abc") req.api_version_request = api_version.APIVersionRequest("2.3") ex = self.assertRaises(exception.ValidationError, self.controller.get, req) if six.PY3: self.assertEqual("Invalid input for query parameters foo. Value: " "abc. 'abc' is not a 'uuid'", six.text_type(ex)) else: self.assertEqual("Invalid input for query parameters foo. Value: " "abc. u'abc' is not a 'uuid'", six.text_type(ex)) def test_validate_request_with_multiple_values(self): req = fakes.HTTPRequest.blank("/tests?foos=abc") req.api_version_request = api_version.APIVersionRequest("2.3") self.assertEqual(['foos'], self.controller.get(req)) req = fakes.HTTPRequest.blank("/tests?foos=abc&foos=def") self.assertEqual(['foos'], self.controller.get(req)) def test_validate_request_with_multiple_values_fails(self): req = fakes.HTTPRequest.blank( "/tests?foo=%s&foo=%s" % (fakes.FAKE_UUID, fakes.FAKE_UUID)) req.api_version_request = api_version.APIVersionRequest("2.3") self.assertRaises(exception.ValidationError, self.controller.get, req) def test_strip_out_additional_properties(self): req = fakes.HTTPRequest.blank( "/tests?foos=abc&foo=%s&bar=123&-bar=456" % fakes.FAKE_UUID) req.api_version_request = api_version.APIVersionRequest("2.3") res = self.controller.get(req) res.sort() self.assertEqual(['foo', 'foos'], res) def test_no_strip_out_additional_properties_when_not_match_version(self): req = fakes.HTTPRequest.blank( "/tests?foos=abc&foo=%s&bar=123&bar=456" % fakes.FAKE_UUID) # The JSON-schema matches to the API version 2.3 and above. Request # with version 2.1 to ensure there isn't no strip out for additional # parameters when schema didn't match the request version. 
req.api_version_request = api_version.APIVersionRequest("2.1") res = self.controller.get(req) res.sort() self.assertEqual(['bar', 'foo', 'foos'], res) def test_strip_out_correct_pattern_retained(self): req = fakes.HTTPRequest.blank( "/tests?foos=abc&foo=%s&bar=123&_foo_=456" % fakes.FAKE_UUID) req.api_version_request = api_version.APIVersionRequest("2.3") res = self.controller.get(req) res.sort() self.assertEqual(['_foo_', 'foo', 'foos'], res) class RequiredDisableTestCase(APIValidationTestCase): post_schema = { 'type': 'object', 'properties': { 'foo': { 'type': 'integer', }, }, } def test_validate_required_disable(self): self.assertEqual(self.post(body={'foo': 1}, req=FakeRequest()), 'Validation succeeded.') self.assertEqual(self.post(body={'abc': 1}, req=FakeRequest()), 'Validation succeeded.') class RequiredEnableTestCase(APIValidationTestCase): post_schema = { 'type': 'object', 'properties': { 'foo': { 'type': 'integer', }, }, 'required': ['foo'] } def test_validate_required_enable(self): self.assertEqual(self.post(body={'foo': 1}, req=FakeRequest()), 'Validation succeeded.') def test_validate_required_enable_fails(self): detail = "'foo' is a required property" self.check_validation_error(self.post, body={'abc': 1}, expected_detail=detail) class AdditionalPropertiesEnableTestCase(APIValidationTestCase): post_schema = { 'type': 'object', 'properties': { 'foo': { 'type': 'integer', }, }, 'required': ['foo'], } def test_validate_additionalProperties_enable(self): self.assertEqual(self.post(body={'foo': 1}, req=FakeRequest()), 'Validation succeeded.') self.assertEqual(self.post(body={'foo': 1, 'ext': 1}, req=FakeRequest()), 'Validation succeeded.') class AdditionalPropertiesDisableTestCase(APIValidationTestCase): post_schema = { 'type': 'object', 'properties': { 'foo': { 'type': 'integer', }, }, 'required': ['foo'], 'additionalProperties': False, } def test_validate_additionalProperties_disable(self): self.assertEqual(self.post(body={'foo': 1}, req=FakeRequest()), 'Validation succeeded.') def test_validate_additionalProperties_disable_fails(self): detail = "Additional properties are not allowed ('ext' was unexpected)" self.check_validation_error(self.post, body={'foo': 1, 'ext': 1}, expected_detail=detail) class PatternPropertiesTestCase(APIValidationTestCase): post_schema = { 'patternProperties': { '^[a-zA-Z0-9]{1,10}$': { 'type': 'string' }, }, 'additionalProperties': False, } def test_validate_patternProperties(self): self.assertEqual('Validation succeeded.', self.post(body={'foo': 'bar'}, req=FakeRequest())) def test_validate_patternProperties_fails(self): details = [ "Additional properties are not allowed ('__' was unexpected)", "'__' does not match any of the regexes: '^[a-zA-Z0-9]{1,10}$'" ] self.check_validation_error(self.post, body={'__': 'bar'}, expected_detail=details) details = [ "'' does not match any of the regexes: '^[a-zA-Z0-9]{1,10}$'", "Additional properties are not allowed ('' was unexpected)" ] self.check_validation_error(self.post, body={'': 'bar'}, expected_detail=details) details = [ ("'0123456789a' does not match any of the regexes: " "'^[a-zA-Z0-9]{1,10}$'"), ("Additional properties are not allowed ('0123456789a' was" " unexpected)") ] self.check_validation_error(self.post, body={'0123456789a': 'bar'}, expected_detail=details) # Note(jrosenboom): This is referencing an internal python error # string, which is no stable interface. We need a patch in the # jsonschema library in order to fix this properly. 
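# What happens here, informally: while evaluating patternProperties,
# jsonschema hands each body key to the re module, so the None key
# raises a TypeError whose message wording differs between
# interpreter versions, hence the switch below.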
if sys.version[:3] == '3.5': detail = "expected string or bytes-like object" else: detail = "expected string or buffer" self.check_validation_error(self.post, body={None: 'bar'}, expected_detail=detail) class StringTestCase(APIValidationTestCase): post_schema = { 'type': 'object', 'properties': { 'foo': { 'type': 'string', }, }, } def test_validate_string(self): self.assertEqual(self.post(body={'foo': 'abc'}, req=FakeRequest()), 'Validation succeeded.') self.assertEqual(self.post(body={'foo': '0'}, req=FakeRequest()), 'Validation succeeded.') self.assertEqual(self.post(body={'foo': ''}, req=FakeRequest()), 'Validation succeeded.') def test_validate_string_fails(self): detail = ("Invalid input for field/attribute foo. Value: 1." " 1 is not of type 'string'") self.check_validation_error(self.post, body={'foo': 1}, expected_detail=detail) detail = ("Invalid input for field/attribute foo. Value: 1.5." " 1.5 is not of type 'string'") self.check_validation_error(self.post, body={'foo': 1.5}, expected_detail=detail) detail = ("Invalid input for field/attribute foo. Value: True." " True is not of type 'string'") self.check_validation_error(self.post, body={'foo': True}, expected_detail=detail) class StringLengthTestCase(APIValidationTestCase): post_schema = { 'type': 'object', 'properties': { 'foo': { 'type': 'string', 'minLength': 1, 'maxLength': 10, }, }, } def test_validate_string_length(self): self.assertEqual(self.post(body={'foo': '0'}, req=FakeRequest()), 'Validation succeeded.') self.assertEqual(self.post(body={'foo': '0123456789'}, req=FakeRequest()), 'Validation succeeded.') def test_validate_string_length_fails(self): detail = ("Invalid input for field/attribute foo. Value: ." " '' is too short") self.check_validation_error(self.post, body={'foo': ''}, expected_detail=detail) detail = ("Invalid input for field/attribute foo. Value: 0123456789a." " '0123456789a' is too long") self.check_validation_error(self.post, body={'foo': '0123456789a'}, expected_detail=detail) class IntegerTestCase(APIValidationTestCase): post_schema = { 'type': 'object', 'properties': { 'foo': { 'type': ['integer', 'string'], 'pattern': '^[0-9]+$', }, }, } def test_validate_integer(self): self.assertEqual(self.post(body={'foo': 1}, req=FakeRequest()), 'Validation succeeded.') self.assertEqual(self.post(body={'foo': '1'}, req=FakeRequest()), 'Validation succeeded.') self.assertEqual(self.post(body={'foo': '0123456789'}, req=FakeRequest()), 'Validation succeeded.') def test_validate_integer_fails(self): detail = ("Invalid input for field/attribute foo. Value: abc." " 'abc' does not match '^[0-9]+$'") self.check_validation_error(self.post, body={'foo': 'abc'}, expected_detail=detail) detail = ("Invalid input for field/attribute foo. Value: True." " True is not of type 'integer', 'string'") self.check_validation_error(self.post, body={'foo': True}, expected_detail=detail) detail = ("Invalid input for field/attribute foo. Value: 0xffff." " '0xffff' does not match '^[0-9]+$'") self.check_validation_error(self.post, body={'foo': '0xffff'}, expected_detail=detail) detail = ("Invalid input for field/attribute foo. Value: 1.0." " 1.0 is not of type 'integer', 'string'") self.check_validation_error(self.post, body={'foo': 1.0}, expected_detail=detail) detail = ("Invalid input for field/attribute foo. Value: 1.0." 
" '1.0' does not match '^[0-9]+$'") self.check_validation_error(self.post, body={'foo': '1.0'}, expected_detail=detail) class IntegerRangeTestCase(APIValidationTestCase): post_schema = { 'type': 'object', 'properties': { 'foo': { 'type': ['integer', 'string'], 'pattern': '^[0-9]+$', 'minimum': 1, 'maximum': 10, }, }, } def test_validate_integer_range(self): self.assertEqual(self.post(body={'foo': 1}, req=FakeRequest()), 'Validation succeeded.') self.assertEqual(self.post(body={'foo': 10}, req=FakeRequest()), 'Validation succeeded.') self.assertEqual(self.post(body={'foo': '1'}, req=FakeRequest()), 'Validation succeeded.') def test_validate_integer_range_fails(self): detail = ("Invalid input for field/attribute foo. Value: 0." " 0(.0)? is less than the minimum of 1") self.check_validation_error(self.post, body={'foo': 0}, expected_detail=detail) detail = ("Invalid input for field/attribute foo. Value: 11." " 11(.0)? is greater than the maximum of 10") self.check_validation_error(self.post, body={'foo': 11}, expected_detail=detail) detail = ("Invalid input for field/attribute foo. Value: 0." " 0(.0)? is less than the minimum of 1") self.check_validation_error(self.post, body={'foo': '0'}, expected_detail=detail) detail = ("Invalid input for field/attribute foo. Value: 11." " 11(.0)? is greater than the maximum of 10") self.check_validation_error(self.post, body={'foo': '11'}, expected_detail=detail) class BooleanTestCase(APIValidationTestCase): post_schema = { 'type': 'object', 'properties': { 'foo': parameter_types.boolean, }, } def test_validate_boolean(self): self.assertEqual('Validation succeeded.', self.post(body={'foo': True}, req=FakeRequest())) self.assertEqual('Validation succeeded.', self.post(body={'foo': False}, req=FakeRequest())) self.assertEqual('Validation succeeded.', self.post(body={'foo': 'True'}, req=FakeRequest())) self.assertEqual('Validation succeeded.', self.post(body={'foo': 'False'}, req=FakeRequest())) self.assertEqual('Validation succeeded.', self.post(body={'foo': '1'}, req=FakeRequest())) self.assertEqual('Validation succeeded.', self.post(body={'foo': '0'}, req=FakeRequest())) def test_validate_boolean_fails(self): enum_boolean = ("[True, 'True', 'TRUE', 'true', '1', 'ON', 'On'," " 'on', 'YES', 'Yes', 'yes'," " False, 'False', 'FALSE', 'false', '0', 'OFF', 'Off'," " 'off', 'NO', 'No', 'no']") detail = ("Invalid input for field/attribute foo. Value: bar." " 'bar' is not one of %s") % enum_boolean self.check_validation_error(self.post, body={'foo': 'bar'}, expected_detail=detail) detail = ("Invalid input for field/attribute foo. Value: 2." " '2' is not one of %s") % enum_boolean self.check_validation_error(self.post, body={'foo': '2'}, expected_detail=detail) class HostnameTestCase(APIValidationTestCase): post_schema = { 'type': 'object', 'properties': { 'foo': parameter_types.hostname, }, } def test_validate_hostname(self): self.assertEqual('Validation succeeded.', self.post(body={'foo': 'localhost'}, req=FakeRequest())) self.assertEqual('Validation succeeded.', self.post(body={'foo': 'localhost.localdomain.com'}, req=FakeRequest())) self.assertEqual('Validation succeeded.', self.post(body={'foo': 'my-host'}, req=FakeRequest())) self.assertEqual('Validation succeeded.', self.post(body={'foo': 'my_host'}, req=FakeRequest())) def test_validate_hostname_fails(self): detail = ("Invalid input for field/attribute foo. Value: True." 
" True is not of type 'string'") self.check_validation_error(self.post, body={'foo': True}, expected_detail=detail) detail = ("Invalid input for field/attribute foo. Value: 1." " 1 is not of type 'string'") self.check_validation_error(self.post, body={'foo': 1}, expected_detail=detail) detail = ("Invalid input for field/attribute foo. Value: my$host." " 'my$host' does not match '^[a-zA-Z0-9-._]*$'") self.check_validation_error(self.post, body={'foo': 'my$host'}, expected_detail=detail) class HostnameIPaddressTestCase(APIValidationTestCase): post_schema = { 'type': 'object', 'properties': { 'foo': parameter_types.hostname_or_ip_address, }, } def test_validate_hostname_or_ip_address(self): self.assertEqual('Validation succeeded.', self.post(body={'foo': 'localhost'}, req=FakeRequest())) self.assertEqual('Validation succeeded.', self.post(body={'foo': 'localhost.localdomain.com'}, req=FakeRequest())) self.assertEqual('Validation succeeded.', self.post(body={'foo': 'my-host'}, req=FakeRequest())) self.assertEqual('Validation succeeded.', self.post(body={'foo': 'my_host'}, req=FakeRequest())) self.assertEqual('Validation succeeded.', self.post(body={'foo': '192.168.10.100'}, req=FakeRequest())) self.assertEqual('Validation succeeded.', self.post(body={'foo': '2001:db8::9abc'}, req=FakeRequest())) def test_validate_hostname_or_ip_address_fails(self): detail = ("Invalid input for field/attribute foo. Value: True." " True is not of type 'string'") self.check_validation_error(self.post, body={'foo': True}, expected_detail=detail) detail = ("Invalid input for field/attribute foo. Value: 1." " 1 is not of type 'string'") self.check_validation_error(self.post, body={'foo': 1}, expected_detail=detail) detail = ("Invalid input for field/attribute foo. Value: my$host." " 'my$host' does not match '^[a-zA-Z0-9-_.:]*$'") self.check_validation_error(self.post, body={'foo': 'my$host'}, expected_detail=detail) class CellNameTestCase(APIValidationTestCase): post_schema = { 'type': 'object', 'properties': { 'foo': parameter_types.cell_name, }, } def test_validate_name(self): self.assertEqual('Validation succeeded.', self.post(body={'foo': 'abc'}, req=FakeRequest())) self.assertEqual('Validation succeeded.', self.post(body={'foo': 'my server'}, req=FakeRequest())) self.assertEqual('Validation succeeded.', self.post(body={'foo': u'\u0434'}, req=FakeRequest())) self.assertEqual('Validation succeeded.', self.post(body={'foo': u'\u0434\u2006\ufffd'}, req=FakeRequest())) def test_validate_name_fails(self): error = ("An invalid 'name' value was provided. The name must be: " "printable characters except !, ., @. 
" "Can not start or end with whitespace.") should_fail = (' ', ' server', 'server ', u'a\xa0', # trailing unicode space u'\uffff', # non-printable unicode 'abc!def', 'abc.def', 'abc@def') for item in should_fail: self.check_validation_error(self.post, body={'foo': item}, expected_detail=error) # four-byte unicode, if supported by this python build try: self.check_validation_error(self.post, body={'foo': u'\U00010000'}, expected_detail=error) except ValueError: pass class CellNameLeadingTrailingSpacesTestCase(APIValidationTestCase): post_schema = { 'type': 'object', 'properties': { 'foo': parameter_types.cell_name_leading_trailing_spaces, }, } def test_validate_name(self): self.assertEqual('Validation succeeded.', self.post(body={'foo': 'abc'}, req=FakeRequest())) self.assertEqual('Validation succeeded.', self.post(body={'foo': 'my server'}, req=FakeRequest())) self.assertEqual('Validation succeeded.', self.post(body={'foo': u'\u0434'}, req=FakeRequest())) self.assertEqual('Validation succeeded.', self.post(body={'foo': u'\u0434\u2006\ufffd'}, req=FakeRequest())) self.assertEqual('Validation succeeded.', self.post(body={'foo': ' my server'}, req=FakeRequest())) self.assertEqual('Validation succeeded.', self.post(body={'foo': 'my server '}, req=FakeRequest())) def test_validate_name_fails(self): error = ("An invalid 'name' value was provided. The name must be: " "printable characters except !, ., @, " "with at least one non space character") should_fail = ( ' ', u'\uffff', # non-printable unicode 'abc!def', 'abc.def', 'abc@def') for item in should_fail: self.check_validation_error(self.post, body={'foo': item}, expected_detail=error) # four-byte unicode, if supported by this python build try: self.check_validation_error(self.post, body={'foo': u'\U00010000'}, expected_detail=error) except ValueError: pass class NameTestCase(APIValidationTestCase): post_schema = { 'type': 'object', 'properties': { 'foo': parameter_types.name, }, } def test_validate_name(self): self.assertEqual('Validation succeeded.', self.post(body={'foo': 'm1.small'}, req=FakeRequest())) self.assertEqual('Validation succeeded.', self.post(body={'foo': 'my server'}, req=FakeRequest())) self.assertEqual('Validation succeeded.', self.post(body={'foo': 'a'}, req=FakeRequest())) self.assertEqual('Validation succeeded.', self.post(body={'foo': u'\u0434'}, req=FakeRequest())) self.assertEqual('Validation succeeded.', self.post(body={'foo': u'\u0434\u2006\ufffd'}, req=FakeRequest())) def test_validate_name_fails(self): error = ("An invalid 'name' value was provided. The name must be: " "printable characters. 
" "Can not start or end with whitespace.") should_fail = (' ', ' server', 'server ', u'a\xa0', # trailing unicode space u'\uffff', # non-printable unicode ) for item in should_fail: self.check_validation_error(self.post, body={'foo': item}, expected_detail=error) # four-byte unicode, if supported by this python build try: self.check_validation_error(self.post, body={'foo': u'\U00010000'}, expected_detail=error) except ValueError: pass class NameWithLeadingTrailingSpacesTestCase(APIValidationTestCase): post_schema = { 'type': 'object', 'properties': { 'foo': parameter_types.name_with_leading_trailing_spaces, }, } def test_validate_name(self): self.assertEqual('Validation succeeded.', self.post(body={'foo': 'm1.small'}, req=FakeRequest())) self.assertEqual('Validation succeeded.', self.post(body={'foo': 'my server'}, req=FakeRequest())) self.assertEqual('Validation succeeded.', self.post(body={'foo': 'a'}, req=FakeRequest())) self.assertEqual('Validation succeeded.', self.post(body={'foo': u'\u0434'}, req=FakeRequest())) self.assertEqual('Validation succeeded.', self.post(body={'foo': u'\u0434\u2006\ufffd'}, req=FakeRequest())) self.assertEqual('Validation succeeded.', self.post(body={'foo': ' abc '}, req=FakeRequest())) self.assertEqual('Validation succeeded.', self.post(body={'foo': 'abc abc abc'}, req=FakeRequest())) self.assertEqual('Validation succeeded.', self.post(body={'foo': ' abc abc abc '}, req=FakeRequest())) # leading unicode space self.assertEqual('Validation succeeded.', self.post(body={'foo': '\xa0abc'}, req=FakeRequest())) def test_validate_name_fails(self): error = ("An invalid 'name' value was provided. The name must be: " "printable characters with at least one non space character") should_fail = ( ' ', u'\xa0', # unicode space u'\uffff', # non-printable unicode ) for item in should_fail: self.check_validation_error(self.post, body={'foo': item}, expected_detail=error) # four-byte unicode, if supported by this python build try: self.check_validation_error(self.post, body={'foo': u'\U00010000'}, expected_detail=error) except ValueError: pass class NoneTypeTestCase(APIValidationTestCase): post_schema = { 'type': 'object', 'properties': { 'foo': parameter_types.none } } def test_validate_none(self): self.assertEqual('Validation succeeded.', self.post(body={'foo': 'None'}, req=FakeRequest())) self.assertEqual('Validation succeeded.', self.post(body={'foo': None}, req=FakeRequest())) self.assertEqual('Validation succeeded.', self.post(body={'foo': {}}, req=FakeRequest())) def test_validate_none_fails(self): detail = ("Invalid input for field/attribute foo. Value: ." " '' is not one of ['None', None, {}]") self.check_validation_error(self.post, body={'foo': ''}, expected_detail=detail) detail = ("Invalid input for field/attribute foo. Value: " "{'key': 'val'}. {'key': 'val'} is not one of " "['None', None, {}]") self.check_validation_error(self.post, body={'foo': {'key': 'val'}}, expected_detail=detail) class NameOrNoneTestCase(APIValidationTestCase): post_schema = { 'type': 'object', 'properties': { 'foo': parameter_types.name_or_none } } def test_valid(self): self.assertEqual('Validation succeeded.', self.post(body={'foo': None}, req=FakeRequest())) self.assertEqual('Validation succeeded.', self.post(body={'foo': '1'}, req=FakeRequest())) def test_validate_fails(self): detail = ("Invalid input for field/attribute foo. Value: 1234. 
1234 " "is not valid under any of the given schemas") self.check_validation_error(self.post, body={'foo': 1234}, expected_detail=detail) detail = ("Invalid input for field/attribute foo. Value: . '' " "is not valid under any of the given schemas") self.check_validation_error(self.post, body={'foo': ''}, expected_detail=detail) too_long_name = 256 * "k" detail = ("Invalid input for field/attribute foo. Value: %s. " "'%s' is not valid under any of the " "given schemas") % (too_long_name, too_long_name) self.check_validation_error(self.post, body={'foo': too_long_name}, expected_detail=detail) class TcpUdpPortTestCase(APIValidationTestCase): post_schema = { 'type': 'object', 'properties': { 'foo': parameter_types.tcp_udp_port, }, } def test_validate_tcp_udp_port(self): self.assertEqual('Validation succeeded.', self.post(body={'foo': 1024}, req=FakeRequest())) self.assertEqual('Validation succeeded.', self.post(body={'foo': '1024'}, req=FakeRequest())) def test_validate_tcp_udp_port_fails(self): detail = ("Invalid input for field/attribute foo. Value: True." " True is not of type 'integer', 'string'") self.check_validation_error(self.post, body={'foo': True}, expected_detail=detail) detail = ("Invalid input for field/attribute foo. Value: 65536." " 65536(.0)? is greater than the maximum of 65535") self.check_validation_error(self.post, body={'foo': 65536}, expected_detail=detail) class CidrFormatTestCase(APIValidationTestCase): post_schema = { 'type': 'object', 'properties': { 'foo': { 'type': 'string', 'format': 'cidr', }, }, } def test_validate_cidr(self): self.assertEqual('Validation succeeded.', self.post( body={'foo': '192.168.10.0/24'}, req=FakeRequest() )) def test_validate_cidr_fails(self): detail = ("Invalid input for field/attribute foo." " Value: bar." " 'bar' is not a 'cidr'") self.check_validation_error(self.post, body={'foo': 'bar'}, expected_detail=detail) detail = ("Invalid input for field/attribute foo." " Value: . '' is not a 'cidr'") self.check_validation_error(self.post, body={'foo': ''}, expected_detail=detail) detail = ("Invalid input for field/attribute foo." " Value: 192.168.1.0. '192.168.1.0' is not a 'cidr'") self.check_validation_error(self.post, body={'foo': '192.168.1.0'}, expected_detail=detail) detail = ("Invalid input for field/attribute foo." " Value: 192.168.1.0 /24." " '192.168.1.0 /24' is not a 'cidr'") self.check_validation_error(self.post, body={'foo': '192.168.1.0 /24'}, expected_detail=detail) class DatetimeTestCase(APIValidationTestCase): post_schema = { 'type': 'object', 'properties': { 'foo': { 'type': 'string', 'format': 'date-time', }, }, } def test_validate_datetime(self): self.assertEqual('Validation succeeded.', self.post( body={'foo': '2014-01-14T01:00:00Z'}, req=FakeRequest() )) def test_validate_datetime_fails(self): detail = ("Invalid input for field/attribute foo." " Value: 2014-13-14T01:00:00Z." " '2014-13-14T01:00:00Z' is not a 'date-time'") self.check_validation_error(self.post, body={'foo': '2014-13-14T01:00:00Z'}, expected_detail=detail) detail = ("Invalid input for field/attribute foo." " Value: bar. 'bar' is not a 'date-time'") self.check_validation_error(self.post, body={'foo': 'bar'}, expected_detail=detail) detail = ("Invalid input for field/attribute foo. Value: 1." 
" '1' is not a 'date-time'") self.check_validation_error(self.post, body={'foo': '1'}, expected_detail=detail) class UuidTestCase(APIValidationTestCase): post_schema = { 'type': 'object', 'properties': { 'foo': { 'type': 'string', 'format': 'uuid', }, }, } def test_validate_uuid(self): self.assertEqual('Validation succeeded.', self.post( body={'foo': '70a599e0-31e7-49b7-b260-868f441e862b'}, req=FakeRequest() )) def test_validate_uuid_fails(self): detail = ("Invalid input for field/attribute foo." " Value: 70a599e031e749b7b260868f441e862." " '70a599e031e749b7b260868f441e862' is not a 'uuid'") self.check_validation_error(self.post, body={'foo': '70a599e031e749b7b260868f441e862'}, expected_detail=detail) detail = ("Invalid input for field/attribute foo. Value: 1." " '1' is not a 'uuid'") self.check_validation_error(self.post, body={'foo': '1'}, expected_detail=detail) detail = ("Invalid input for field/attribute foo. Value: abc." " 'abc' is not a 'uuid'") self.check_validation_error(self.post, body={'foo': 'abc'}, expected_detail=detail) class UriTestCase(APIValidationTestCase): post_schema = { 'type': 'object', 'properties': { 'foo': { 'type': 'string', 'format': 'uri', }, }, } def test_validate_uri(self): self.assertEqual('Validation succeeded.', self.post( body={'foo': 'http://localhost:8774/v2/servers'}, req=FakeRequest() )) self.assertEqual('Validation succeeded.', self.post( body={'foo': 'http://[::1]:8774/v2/servers'}, req=FakeRequest() )) def test_validate_uri_fails(self): base_detail = ("Invalid input for field/attribute foo. Value: {0}. " "'{0}' is not a 'uri'") invalid_uri = 'http://localhost:8774/v2/servers##' self.check_validation_error(self.post, body={'foo': invalid_uri}, expected_detail=base_detail.format( invalid_uri)) invalid_uri = 'http://[fdf8:01]:8774/v2/servers' self.check_validation_error(self.post, body={'foo': invalid_uri}, expected_detail=base_detail.format( invalid_uri)) invalid_uri = '1' self.check_validation_error(self.post, body={'foo': invalid_uri}, expected_detail=base_detail.format( invalid_uri)) invalid_uri = 'abc' self.check_validation_error(self.post, body={'foo': invalid_uri}, expected_detail=base_detail.format( invalid_uri)) class Ipv4TestCase(APIValidationTestCase): post_schema = { 'type': 'object', 'properties': { 'foo': { 'type': 'string', 'format': 'ipv4', }, }, } def test_validate_ipv4(self): self.assertEqual('Validation succeeded.', self.post( body={'foo': '192.168.0.100'}, req=FakeRequest() )) def test_validate_ipv4_fails(self): detail = ("Invalid input for field/attribute foo. Value: abc." " 'abc' is not a 'ipv4'") self.check_validation_error(self.post, body={'foo': 'abc'}, expected_detail=detail) detail = ("Invalid input for field/attribute foo. Value: localhost." " 'localhost' is not a 'ipv4'") self.check_validation_error(self.post, body={'foo': 'localhost'}, expected_detail=detail) detail = ("Invalid input for field/attribute foo." " Value: 2001:db8::1234:0:0:9abc." " '2001:db8::1234:0:0:9abc' is not a 'ipv4'") self.check_validation_error(self.post, body={'foo': '2001:db8::1234:0:0:9abc'}, expected_detail=detail) class Ipv6TestCase(APIValidationTestCase): post_schema = { 'type': 'object', 'properties': { 'foo': { 'type': 'string', 'format': 'ipv6', }, }, } def test_validate_ipv6(self): self.assertEqual('Validation succeeded.', self.post( body={'foo': '2001:db8::1234:0:0:9abc'}, req=FakeRequest() )) def test_validate_ipv6_fails(self): detail = ("Invalid input for field/attribute foo. Value: abc." 
" 'abc' is not a 'ipv6'") self.check_validation_error(self.post, body={'foo': 'abc'}, expected_detail=detail) detail = ("Invalid input for field/attribute foo. Value: localhost." " 'localhost' is not a 'ipv6'") self.check_validation_error(self.post, body={'foo': 'localhost'}, expected_detail=detail) detail = ("Invalid input for field/attribute foo." " Value: 192.168.0.100. '192.168.0.100' is not a 'ipv6'") self.check_validation_error(self.post, body={'foo': '192.168.0.100'}, expected_detail=detail) class Base64TestCase(APIValidationTestCase): post_schema = { 'type': 'object', 'properties': { 'foo': { 'type': 'string', 'format': 'base64', }, }, } def test_validate_base64(self): self.assertEqual('Validation succeeded.', self.post(body={'foo': 'aGVsbG8gd29ybGQ='}, req=FakeRequest())) # 'aGVsbG8gd29ybGQ=' is the base64 code of 'hello world' def test_validate_base64_fails(self): value = 'A random string' detail = ("Invalid input for field/attribute foo. " "Value: %s. '%s' is not a 'base64'") % (value, value) self.check_validation_error(self.post, body={'foo': value}, expected_detail=detail) class RegexFormatTestCase(APIValidationTestCase): def setUp(self): super(RegexFormatTestCase, self).setUp() self.controller = RegexFormatFakeController() def test_validate_regex(self): req = fakes.HTTPRequest.blank("") self.assertEqual('Validation succeeded.', self.controller.post(req, body={'foo': u'Myserver'})) def test_validate_regex_fails(self): value = 1 req = fakes.HTTPRequest.blank("") detail = ("Invalid input for field/attribute foo. " "Value: %s. %s is not a 'regex'") % (value, value) self.check_validation_error(self.controller.post, req=req, body={'foo': value}, expected_detail=detail) nova-17.0.1/nova/tests/unit/test_test.py0000666000175000017500000002337113250073126020250 0ustar zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Tests for the testing base code.""" from oslo_log import log as logging import oslo_messaging as messaging import six import nova.conf from nova import exception from nova import rpc from nova import test from nova.tests import fixtures LOG = logging.getLogger(__name__) CONF = nova.conf.CONF class IsolationTestCase(test.TestCase): """Ensure that things are cleaned up after failed tests. These tests don't really do much here, but if isolation fails a bunch of other tests should fail. """ def test_service_isolation(self): self.useFixture(fixtures.ServiceFixture('compute')) def test_rpc_consumer_isolation(self): class NeverCalled(object): def __getattribute__(self, name): if name == 'target': # oslo.messaging 5.31.0 explicitly looks for 'target' # on the endpoint and checks it's type, so we can't avoid # it here, just ignore it if that's the case. return assert False, "I should never get called. 
name: %s" % name server = rpc.get_server(messaging.Target(topic='compute', server=CONF.host), endpoints=[NeverCalled()]) server.start() class JsonTestCase(test.NoDBTestCase): def test_compare_dict_string(self): expected = { "employees": [ {"firstName": "Anna", "lastName": "Smith"}, {"firstName": "John", "lastName": "Doe"}, {"firstName": "Peter", "lastName": "Jones"} ], "locations": set(['Boston', 'Mumbai', 'Beijing', 'Perth']) } actual = """{ "employees": [ { "lastName": "Doe", "firstName": "John" }, { "lastName": "Smith", "firstName": "Anna" }, { "lastName": "Jones", "firstName": "Peter" } ], "locations": [ "Perth", "Boston", "Mumbai", "Beijing" ] }""" self.assertJsonEqual(expected, actual) def test_fail_on_list_length(self): expected = { 'top': { 'l1': { 'l2': ['a', 'b', 'c'] } } } actual = { 'top': { 'l1': { 'l2': ['c', 'a', 'b', 'd'] } } } try: self.assertJsonEqual(expected, actual) except Exception as e: # error reported is going to be a cryptic length failure # on the level2 structure. self.assertEqual( ("3 != 4: path: root.top.l1.l2. Different list items\n" "expected=['a', 'b', 'c']\n" "observed=['a', 'b', 'c', 'd']\n" "difference=['d']"), e.difference) self.assertIn( "actual:\n{'top': {'l1': {'l2': ['c', 'a', 'b', 'd']}}}", six.text_type(e)) self.assertIn( "expected:\n{'top': {'l1': {'l2': ['a', 'b', 'c']}}}", six.text_type(e)) else: self.fail("This should have raised a mismatch exception") def test_fail_on_dict_length(self): expected = { 'top': { 'l1': { 'l2': {'a': 1, 'b': 2, 'c': 3} } } } actual = { 'top': { 'l1': { 'l2': {'a': 1, 'b': 2} } } } try: self.assertJsonEqual(expected, actual) except Exception as e: self.assertEqual( ("3 != 2: path: root.top.l1.l2. Different dict key sets\n" "expected=['a', 'b', 'c']\n" "observed=['a', 'b']\n" "difference=['c']"), e.difference) else: self.fail("This should have raised a mismatch exception") def test_fail_on_dict_keys(self): expected = { 'top': { 'l1': { 'l2': {'a': 1, 'b': 2, 'c': 3} } } } actual = { 'top': { 'l1': { 'l2': {'a': 1, 'b': 2, 'd': 3} } } } try: self.assertJsonEqual(expected, actual) except Exception as e: self.assertIn( "path: root.top.l1.l2. 
Dict keys are not equal", e.difference) else: self.fail("This should have raised a mismatch exception") def test_fail_on_list_value(self): expected = { 'top': { 'l1': { 'l2': ['a', 'b', 'c'] } } } actual = { 'top': { 'l1': { 'l2': ['c', 'a', 'd'] } } } try: self.assertJsonEqual(expected, actual) except Exception as e: self.assertEqual( "'b' != 'c': path: root.top.l1.l2[1]", e.difference) self.assertIn( "actual:\n{'top': {'l1': {'l2': ['c', 'a', 'd']}}}", six.text_type(e)) self.assertIn( "expected:\n{'top': {'l1': {'l2': ['a', 'b', 'c']}}}", six.text_type(e)) else: self.fail("This should have raised a mismatch exception") def test_fail_on_dict_value(self): expected = { 'top': { 'l1': { 'l2': {'a': 1, 'b': 2, 'c': 3} } } } actual = { 'top': { 'l1': { 'l2': {'a': 1, 'b': 2, 'c': 4} } } } try: self.assertJsonEqual(expected, actual, 'test message') except Exception as e: self.assertEqual( "3 != 4: path: root.top.l1.l2.c", e.difference) self.assertIn("actual:\n{'top': {'l1': {'l2': {", six.text_type(e)) self.assertIn( "expected:\n{'top': {'l1': {'l2': {", six.text_type(e)) self.assertIn( "message: test message\n", six.text_type(e)) else: self.fail("This should have raised a mismatch exception") def test_compare_scalars(self): with self.assertRaisesRegex(AssertionError, 'True != False'): self.assertJsonEqual(True, False) class BadLogTestCase(test.NoDBTestCase): """Make sure a mis-formatted debug log will get caught.""" def test_bad_debug_log(self): self.assertRaises(KeyError, LOG.debug, "this is a misformated %(log)s", {'nothing': 'nothing'}) class MatchTypeTestCase(test.NoDBTestCase): def test_match_type_simple(self): matcher = test.MatchType(dict) self.assertEqual(matcher, {}) self.assertEqual(matcher, {"hello": "world"}) self.assertEqual(matcher, {"hello": ["world"]}) self.assertNotEqual(matcher, []) self.assertNotEqual(matcher, [{"hello": "world"}]) self.assertNotEqual(matcher, 123) self.assertNotEqual(matcher, "foo") def test_match_type_object(self): class Hello(object): pass class World(object): pass matcher = test.MatchType(Hello) self.assertEqual(matcher, Hello()) self.assertNotEqual(matcher, World()) self.assertNotEqual(matcher, 123) self.assertNotEqual(matcher, "foo") class ContainKeyValueTestCase(test.NoDBTestCase): def test_contain_key_value_normal(self): matcher = test.ContainKeyValue('foo', 'bar') self.assertEqual(matcher, {123: 'nova', 'foo': 'bar'}) self.assertNotEqual(matcher, {'foo': 123}) self.assertNotEqual(matcher, {}) def test_contain_key_value_exception(self): matcher = test.ContainKeyValue('foo', 'bar') # Raise TypeError self.assertNotEqual(matcher, 123) self.assertNotEqual(matcher, 'foo') # Raise KeyError self.assertNotEqual(matcher, {1: 2, '3': 4, 5: '6'}) self.assertNotEqual(matcher, {'bar': 'foo'}) class NovaExceptionReraiseFormatErrorTestCase(test.NoDBTestCase): """Test that format errors are reraised in tests.""" def test_format_error_in_nova_exception(self): class FakeImageException(exception.NovaException): msg_fmt = 'Image %(image_id)s has wrong type %(type)s.' 
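        # NOTE: outside of tests NovaException logs and swallows bad
        # format kwargs; the test base class is expected to flip that to
        # re-raise (hence this class's name), so the assertRaises(KeyError,
        # ...) checks below only pass under the test fixture. Illustrative:
        #
        #     FakeImageException(image_id='i', type='qcow2')  # formats fine
        #     FakeImageException(bogus='x')  # KeyError when run under tests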
# wrong kwarg ex = self.assertRaises(KeyError, FakeImageException, bogus='wrongkwarg') self.assertIn('image_id', six.text_type(ex)) # no kwarg ex = self.assertRaises(KeyError, FakeImageException) self.assertIn('image_id', six.text_type(ex)) # not enough kwargs ex = self.assertRaises(KeyError, FakeImageException, image_id='image') self.assertIn('type', six.text_type(ex)) nova-17.0.1/nova/tests/unit/test_configdrive2.py0000666000175000017500000001172513250073136021653 0ustar zuulzuul00000000000000# Copyright 2012 Michael Still and Canonical Inc # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os import tempfile import mock from oslo_config import cfg from oslo_utils import fileutils from nova import context from nova import test from nova.tests.unit import fake_instance from nova import utils from nova.virt import configdrive CONF = cfg.CONF class FakeInstanceMD(object): def metadata_for_config_drive(self): yield ('this/is/a/path/hello', 'This is some content') yield ('this/is/a/path/hi', b'This is some other content') class ConfigDriveTestCase(test.NoDBTestCase): @mock.patch.object(utils, 'execute', return_value=None) def test_create_configdrive_iso(self, mock_execute): CONF.set_override('config_drive_format', 'iso9660') imagefile = None try: with configdrive.ConfigDriveBuilder(FakeInstanceMD()) as c: (fd, imagefile) = tempfile.mkstemp(prefix='cd_iso_') os.close(fd) c.make_drive(imagefile) mock_execute.assert_called_once_with('genisoimage', '-o', mock.ANY, '-ldots', '-allow-lowercase', '-allow-multidot', '-l', '-publisher', mock.ANY, '-quiet', '-J', '-r', '-V', 'config-2', mock.ANY, attempts=1, run_as_root=False) finally: if imagefile: fileutils.delete_if_exists(imagefile) @mock.patch.object(utils, 'mkfs', return_value=None) @mock.patch('nova.privsep.fs.mount', return_value=('', '')) @mock.patch('nova.privsep.fs.umount', return_value=None) @mock.patch.object(utils, 'trycmd', return_value=(None, None)) def test_create_configdrive_vfat(self, mock_trycmd, mock_umount, mock_mount, mock_mkfs): CONF.set_override('config_drive_format', 'vfat') imagefile = None try: with configdrive.ConfigDriveBuilder(FakeInstanceMD()) as c: (fd, imagefile) = tempfile.mkstemp(prefix='cd_vfat_') os.close(fd) c.make_drive(imagefile) mock_mkfs.assert_called_once_with('vfat', mock.ANY, label='config-2') mock_mount.assert_called_once() mock_umount.assert_called_once() # NOTE(mikal): we can't check for a VFAT output here because the # filesystem creation stuff has been mocked out because it # requires root permissions finally: if imagefile: fileutils.delete_if_exists(imagefile) def test_config_drive_required_by_image_property(self): inst = fake_instance.fake_instance_obj(context.get_admin_context()) inst.config_drive = '' inst.system_metadata = { utils.SM_IMAGE_PROP_PREFIX + 'img_config_drive': 'mandatory'} self.assertTrue(configdrive.required_by(inst)) inst.system_metadata = { utils.SM_IMAGE_PROP_PREFIX + 'img_config_drive': 'optional'} self.assertFalse(configdrive.required_by(inst)) 
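    # NOTE: taken together, the two tests below pin down that
    # update_instance() is one-way -- it can set config_drive when
    # required_by() says so, but it never clears a value already set:
    #
    #   required_by   config_drive before   config_drive after
    #   False         ''                    ''
    #   False         True                  True
    #   True          ''                    True
    #   True          True                  True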
@mock.patch.object(configdrive, 'required_by', return_value=False) def test_config_drive_update_instance_required_by_false(self, mock_required): inst = fake_instance.fake_instance_obj(context.get_admin_context()) inst.config_drive = '' configdrive.update_instance(inst) self.assertEqual('', inst.config_drive) inst.config_drive = True configdrive.update_instance(inst) self.assertTrue(inst.config_drive) @mock.patch.object(configdrive, 'required_by', return_value=True) def test_config_drive_update_instance(self, mock_required): inst = fake_instance.fake_instance_obj(context.get_admin_context()) inst.config_drive = '' configdrive.update_instance(inst) self.assertTrue(inst.config_drive) inst.config_drive = True configdrive.update_instance(inst) self.assertTrue(inst.config_drive) nova-17.0.1/nova/tests/unit/servicegroup/0000775000175000017500000000000013250073472020371 5ustar zuulzuul00000000000000nova-17.0.1/nova/tests/unit/servicegroup/test_mc_servicegroup.py0000666000175000017500000000726413250073126025205 0ustar zuulzuul00000000000000# Copyright (c) 2013 Akira Yoshiyama # # This is derived from test_db_servicegroup.py. # Copyright 2012 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import iso8601 import mock from nova import servicegroup from nova import test from oslo_utils import timeutils class MemcachedServiceGroupTestCase(test.NoDBTestCase): @mock.patch('nova.cache_utils.get_memcached_client') def setUp(self, mgc_mock): super(MemcachedServiceGroupTestCase, self).setUp() self.mc_client = mock.MagicMock() mgc_mock.return_value = self.mc_client self.flags(servicegroup_driver='mc') self.servicegroup_api = servicegroup.API() def test_is_up(self): service_ref = { 'host': 'fake-host', 'topic': 'compute' } self.mc_client.get.return_value = None self.assertFalse(self.servicegroup_api.service_is_up(service_ref)) self.mc_client.get.assert_called_once_with('compute:fake-host') self.mc_client.reset_mock() self.mc_client.get.return_value = True self.assertTrue(self.servicegroup_api.service_is_up(service_ref)) self.mc_client.get.assert_called_once_with('compute:fake-host') def test_join(self): service = mock.MagicMock(report_interval=1) self.servicegroup_api.join('fake-host', 'fake-topic', service) fn = self.servicegroup_api._driver._report_state service.tg.add_timer.assert_called_once_with(1, fn, 5, service) def test_report_state(self): service_ref = { 'host': 'fake-host', 'topic': 'compute' } service = mock.MagicMock(model_disconnected=False, service_ref=service_ref) fn = self.servicegroup_api._driver._report_state fn(service) self.mc_client.set.assert_called_once_with('compute:fake-host', mock.ANY) def test_get_updated_time(self): updated_at_time = timeutils.parse_strtime("2016-04-18T02:56:25.198871") service_ref = { 'host': 'fake-host', 'topic': 'compute', 'updated_at': updated_at_time.replace(tzinfo=iso8601.UTC) } self.mc_client.get.return_value = None self.assertEqual(service_ref['updated_at'], self.servicegroup_api.get_updated_time(service_ref)) 
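        # NOTE: memcached returned no entry above, so the record's own
        # updated_at is passed through unchanged; the two cases below
        # cover a newer and an older cached timestamp respectively.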
self.mc_client.get.assert_called_once_with('compute:fake-host') self.mc_client.reset_mock() retval = timeutils.utcnow() self.mc_client.get.return_value = retval self.assertEqual(retval.replace(tzinfo=iso8601.UTC), self.servicegroup_api.get_updated_time(service_ref)) self.mc_client.get.assert_called_once_with('compute:fake-host') self.mc_client.reset_mock() service_ref['updated_at'] = \ retval.replace(tzinfo=iso8601.UTC) self.mc_client.get.return_value = updated_at_time self.assertEqual(service_ref['updated_at'], self.servicegroup_api.get_updated_time(service_ref)) self.mc_client.get.assert_called_once_with('compute:fake-host') nova-17.0.1/nova/tests/unit/servicegroup/test_db_servicegroup.py0000666000175000017500000001061213250073126025162 0ustar zuulzuul00000000000000# Copyright 2012 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock import oslo_messaging as messaging from oslo_utils import fixture as utils_fixture from oslo_utils import timeutils from nova import objects from nova import servicegroup from nova import test class DBServiceGroupTestCase(test.NoDBTestCase): def setUp(self): super(DBServiceGroupTestCase, self).setUp() self.down_time = 15 self.flags(service_down_time=self.down_time, servicegroup_driver='db') self.servicegroup_api = servicegroup.API() def test_is_up(self): now = timeutils.utcnow() service = objects.Service( host='fake-host', topic='compute', binary='nova-compute', created_at=now, updated_at=now, last_seen_up=now, forced_down=False, ) time_fixture = self.useFixture(utils_fixture.TimeFixture(now)) # Up (equal) result = self.servicegroup_api.service_is_up(service) self.assertTrue(result) # Up time_fixture.advance_time_seconds(self.down_time) result = self.servicegroup_api.service_is_up(service) self.assertTrue(result) # Down time_fixture.advance_time_seconds(1) result = self.servicegroup_api.service_is_up(service) self.assertFalse(result) # "last_seen_up" says down, "updated_at" says up. # This can happen if we do a service disable/enable while it's down. service.updated_at = timeutils.utcnow() result = self.servicegroup_api.service_is_up(service) self.assertFalse(result) # "last_seen_up" is none before compute node reports its status, # just use 'created_at' as last_heartbeat. 
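        # NOTE: i.e. the liveness check falls back through
        # last_seen_up -> updated_at -> created_at, using the first
        # timestamp that is actually set.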
        service.last_seen_up = None
        service.created_at = timeutils.utcnow()
        result = self.servicegroup_api.service_is_up(service)
        self.assertTrue(result)

    def test_join(self):
        service = mock.MagicMock(report_interval=1)
        self.servicegroup_api.join('fake-host', 'fake-topic', service)
        fn = self.servicegroup_api._driver._report_state
        service.tg.add_timer.assert_called_once_with(1, fn, 5, service)

    @mock.patch.object(objects.Service, 'save')
    def test_report_state(self, upd_mock):
        service_ref = objects.Service(host='fake-host', topic='compute',
                                      report_count=10)
        service = mock.MagicMock(model_disconnected=False,
                                 service_ref=service_ref)
        fn = self.servicegroup_api._driver._report_state
        fn(service)
        upd_mock.assert_called_once_with()
        self.assertEqual(11, service_ref.report_count)
        self.assertFalse(service.model_disconnected)

    @mock.patch.object(objects.Service, 'save')
    def _test_report_state_error(self, exc_cls, upd_mock):
        upd_mock.side_effect = exc_cls("service save failed")
        service_ref = objects.Service(host='fake-host', topic='compute',
                                      report_count=10)
        service = mock.MagicMock(model_disconnected=False,
                                 service_ref=service_ref)
        fn = self.servicegroup_api._driver._report_state
        fn(service)
        # fail if exception not caught
        self.assertTrue(service.model_disconnected)

    def test_report_state_error_handling_timeout(self):
        self._test_report_state_error(messaging.MessagingTimeout)

    def test_report_state_unexpected_error(self):
        self._test_report_state_error(RuntimeError)

    def test_get_updated_time(self):
        retval = "2016-11-02T22:40:31.000000"
        service_ref = {
            'host': 'fake-host',
            'topic': 'compute',
            'updated_at': retval
        }
        result = self.servicegroup_api.get_updated_time(service_ref)
        self.assertEqual(retval, result)
nova-17.0.1/nova/tests/unit/servicegroup/test_api.py0000666000175000017500000000465113250073126022557 0ustar zuulzuul00000000000000# Copyright 2015 Intel Corp.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
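# NOTE: the tests below exercise the driver-agnostic servicegroup.API
# facade with the driver's methods mocked out per test, so they assert
# pure delegation and need no database fixture. Illustrative call flow
# (variable names hypothetical):
#
#     api = servicegroup.API()    # driver chosen by servicegroup_driver
#     api.join(member, group)     # -> driver.join(member, group, None)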
""" Test the base class for the servicegroup API """ import mock from nova import servicegroup from nova import test class ServiceGroupApiTestCase(test.NoDBTestCase): def setUp(self): super(ServiceGroupApiTestCase, self).setUp() self.flags(servicegroup_driver='db') self.servicegroup_api = servicegroup.API() self.driver = self.servicegroup_api._driver def test_join(self): """""" member = {'host': "fake-host", "topic": "compute"} group = "group" self.driver.join = mock.MagicMock(return_value=None) result = self.servicegroup_api.join(member, group) self.assertIsNone(result) self.driver.join.assert_called_with(member, group, None) def test_service_is_up(self): """""" member = {"host": "fake-host", "topic": "compute", "forced_down": False} for retval in (True, False): driver = self.servicegroup_api._driver driver.is_up = mock.MagicMock(return_value=retval) result = self.servicegroup_api.service_is_up(member) self.assertIs(result, retval) driver.is_up.assert_called_with(member) member["forced_down"] = True driver = self.servicegroup_api._driver driver.is_up = mock.MagicMock() result = self.servicegroup_api.service_is_up(member) self.assertIs(result, False) driver.is_up.assert_not_called() def test_get_updated_time(self): member = {"host": "fake-host", "topic": "compute", "forced_down": False} retval = "2016-11-02T22:40:31.000000" driver = self.servicegroup_api._driver driver.updated_time = mock.MagicMock(return_value=retval) result = self.servicegroup_api.get_updated_time(member) self.assertEqual(retval, result) nova-17.0.1/nova/tests/unit/servicegroup/__init__.py0000666000175000017500000000000013250073126022466 0ustar zuulzuul00000000000000nova-17.0.1/nova/tests/unit/test_instance_types_extra_specs.py0000666000175000017500000001061213250073126024713 0ustar zuulzuul00000000000000# Copyright 2011 University of Southern California # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
""" Unit Tests for instance types extra specs code """ from nova import context from nova import objects from nova.objects import fields from nova import test class InstanceTypeExtraSpecsTestCase(test.TestCase): def setUp(self): super(InstanceTypeExtraSpecsTestCase, self).setUp() self.context = context.get_admin_context() flavor = objects.Flavor(context=self.context, name="cg1.4xlarge", memory_mb=22000, vcpus=8, root_gb=1690, ephemeral_gb=2000, flavorid=105) self.specs = dict(cpu_arch=fields.Architecture.X86_64, cpu_model="Nehalem", xpu_arch="fermi", xpus="2", xpu_model="Tesla 2050") flavor.extra_specs = self.specs flavor.create() self.flavor = flavor self.instance_type_id = flavor.id self.flavorid = flavor.flavorid def tearDown(self): # Remove the instance type from the database self.flavor.destroy() super(InstanceTypeExtraSpecsTestCase, self).tearDown() def test_instance_type_specs_get(self): flavor = objects.Flavor.get_by_flavor_id(self.context, self.flavorid) self.assertEqual(self.specs, flavor.extra_specs) def test_flavor_extra_specs_delete(self): del self.specs["xpu_model"] del self.flavor.extra_specs['xpu_model'] self.flavor.save() flavor = objects.Flavor.get_by_flavor_id(self.context, self.flavorid) self.assertEqual(self.specs, flavor.extra_specs) def test_instance_type_extra_specs_update(self): self.specs["cpu_model"] = "Sandy Bridge" self.flavor.extra_specs["cpu_model"] = "Sandy Bridge" self.flavor.save() flavor = objects.Flavor.get_by_flavor_id(self.context, self.flavorid) self.assertEqual(self.specs, flavor.extra_specs) def test_instance_type_extra_specs_create(self): net_attrs = { "net_arch": "ethernet", "net_mbps": "10000" } self.specs.update(net_attrs) self.flavor.extra_specs.update(net_attrs) self.flavor.save() flavor = objects.Flavor.get_by_flavor_id(self.context, self.flavorid) self.assertEqual(self.specs, flavor.extra_specs) def test_instance_type_get_with_extra_specs(self): flavor = objects.Flavor.get_by_id(self.context, 5) self.assertEqual(flavor.extra_specs, {}) def test_instance_type_get_by_name_with_extra_specs(self): flavor = objects.Flavor.get_by_name(self.context, "cg1.4xlarge") self.assertEqual(flavor.extra_specs, self.specs) flavor = objects.Flavor.get_by_name(self.context, "m1.small") self.assertEqual(flavor.extra_specs, {}) def test_instance_type_get_by_flavor_id_with_extra_specs(self): flavor = objects.Flavor.get_by_flavor_id(self.context, 105) self.assertEqual(flavor.extra_specs, self.specs) flavor = objects.Flavor.get_by_flavor_id(self.context, 2) self.assertEqual(flavor.extra_specs, {}) def test_instance_type_get_all(self): flavors = objects.FlavorList.get_all(self.context) name2specs = {flavor.name: flavor.extra_specs for flavor in flavors} self.assertEqual(name2specs['cg1.4xlarge'], self.specs) self.assertEqual(name2specs['m1.small'], {}) nova-17.0.1/nova/tests/unit/fake_pci_device_pools.py0000666000175000017500000000266113250073126022525 0ustar zuulzuul00000000000000# Copyright 2014 IBM Corp. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from nova.objects import pci_device_pool # This represents the format that PCI device pool info was stored in the DB # before this info was made into objects. fake_pool_dict = { 'product_id': 'fake-product', 'vendor_id': 'fake-vendor', 'numa_node': 1, 't1': 'v1', 't2': 'v2', 'count': 2, } fake_pool = pci_device_pool.PciDevicePool(count=5, product_id='foo', vendor_id='bar', numa_node=0, tags={'t1': 'v1', 't2': 'v2'}) fake_pool_primitive = fake_pool.obj_to_primitive() fake_pool_list = pci_device_pool.PciDevicePoolList(objects=[fake_pool]) fake_pool_list_primitive = fake_pool_list.obj_to_primitive() nova-17.0.1/nova/tests/unit/test_iptables_network.py0000666000175000017500000003162013250073126022641 0ustar zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Unit Tests for network code.""" import mock import six from nova.network import linux_net from nova import test class IptablesManagerTestCase(test.NoDBTestCase): binary_name = linux_net.get_binary_name() sample_filter = ['#Generated by iptables-save on Fri Feb 18 15:17:05 2011', '*filter', ':INPUT ACCEPT [2223527:305688874]', ':FORWARD ACCEPT [0:0]', ':OUTPUT ACCEPT [2172501:140856656]', ':iptables-top-rule - [0:0]', ':iptables-bottom-rule - [0:0]', ':%s-FORWARD - [0:0]' % (binary_name), ':%s-INPUT - [0:0]' % (binary_name), ':%s-OUTPUT - [0:0]' % (binary_name), ':%s-local - [0:0]' % (binary_name), ':nova-filter-top - [0:0]', '[0:0] -A FORWARD -j nova-filter-top', '[0:0] -A OUTPUT -j nova-filter-top', '[0:0] -A nova-filter-top -j %s-local' % (binary_name), '[0:0] -A INPUT -j %s-INPUT' % (binary_name), '[0:0] -A OUTPUT -j %s-OUTPUT' % (binary_name), '[0:0] -A FORWARD -j %s-FORWARD' % (binary_name), '[0:0] -A INPUT -i virbr0 -p udp -m udp --dport 53 ' '-j ACCEPT', '[0:0] -A INPUT -i virbr0 -p tcp -m tcp --dport 53 ' '-j ACCEPT', '[0:0] -A INPUT -i virbr0 -p udp -m udp --dport 67 ' '-j ACCEPT', '[0:0] -A INPUT -i virbr0 -p tcp -m tcp --dport 67 ' '-j ACCEPT', '[0:0] -A FORWARD -s 192.168.122.0/24 -i virbr0 ' '-j ACCEPT', '[0:0] -A FORWARD -i virbr0 -o virbr0 -j ACCEPT', '[0:0] -A FORWARD -o virbr0 -j REJECT --reject-with ' 'icmp-port-unreachable', '[0:0] -A FORWARD -i virbr0 -j REJECT --reject-with ' 'icmp-port-unreachable', 'COMMIT', '# Completed on Fri Feb 18 15:17:05 2011'] sample_nat = ['# Generated by iptables-save on Fri Feb 18 15:17:05 2011', '*nat', ':PREROUTING ACCEPT [3936:762355]', ':INPUT ACCEPT [2447:225266]', ':OUTPUT ACCEPT [63491:4191863]', ':POSTROUTING ACCEPT [63112:4108641]', ':%s-OUTPUT - [0:0]' % (binary_name), ':%s-POSTROUTING - [0:0]' % (binary_name), ':%s-PREROUTING - [0:0]' % (binary_name), ':%s-float-snat - [0:0]' % (binary_name), ':%s-snat - [0:0]' % (binary_name), ':nova-postrouting-bottom - [0:0]', '[0:0] -A PREROUTING -j %s-PREROUTING' % (binary_name), '[0:0] -A OUTPUT -j %s-OUTPUT' % (binary_name), '[0:0] -A POSTROUTING -j %s-POSTROUTING' % 
(binary_name), '[0:0] -A nova-postrouting-bottom ' '-j %s-snat' % (binary_name), '[0:0] -A %s-snat ' '-j %s-float-snat' % (binary_name, binary_name), '[0:0] -A POSTROUTING -j nova-postrouting-bottom', 'COMMIT', '# Completed on Fri Feb 18 15:17:05 2011'] def setUp(self): super(IptablesManagerTestCase, self).setUp() self.manager = linux_net.IptablesManager() def test_duplicate_rules_no_dirty(self): table = self.manager.ipv4['filter'] table.dirty = False num_rules = len(table.rules) table.add_rule('FORWARD', '-s 1.2.3.4/5 -j DROP') self.assertEqual(len(table.rules), num_rules + 1) self.assertTrue(table.dirty) table.dirty = False num_rules = len(table.rules) table.add_rule('FORWARD', '-s 1.2.3.4/5 -j DROP') self.assertEqual(len(table.rules), num_rules) self.assertFalse(table.dirty) def test_clean_tables_no_apply(self): for table in six.itervalues(self.manager.ipv4): table.dirty = False for table in six.itervalues(self.manager.ipv6): table.dirty = False with mock.patch.object(self.manager, '_apply') as mock_apply: self.manager.apply() self.assertFalse(mock_apply.called) def test_filter_rules_are_wrapped(self): current_lines = self.sample_filter table = self.manager.ipv4['filter'] table.add_rule('FORWARD', '-s 1.2.3.4/5 -j DROP') new_lines = self.manager._modify_rules(current_lines, table, 'filter') self.assertIn('[0:0] -A %s-FORWARD ' '-s 1.2.3.4/5 -j DROP' % self.binary_name, new_lines) table.remove_rule('FORWARD', '-s 1.2.3.4/5 -j DROP') new_lines = self.manager._modify_rules(current_lines, table, 'filter') self.assertNotIn('[0:0] -A %s-FORWARD ' '-s 1.2.3.4/5 -j DROP' % self.binary_name, new_lines) def test_remove_rules_regex(self): current_lines = self.sample_nat table = self.manager.ipv4['nat'] table.add_rule('float-snat', '-s 10.0.0.1 -j SNAT --to 10.10.10.10' ' -d 10.0.0.1') table.add_rule('float-snat', '-s 10.0.0.1 -j SNAT --to 10.10.10.10' ' -o eth0') table.add_rule('PREROUTING', '-d 10.10.10.10 -j DNAT --to 10.0.0.1') table.add_rule('OUTPUT', '-d 10.10.10.10 -j DNAT --to 10.0.0.1') table.add_rule('float-snat', '-s 10.0.0.10 -j SNAT --to 10.10.10.11' ' -d 10.0.0.10') table.add_rule('float-snat', '-s 10.0.0.10 -j SNAT --to 10.10.10.11' ' -o eth0') table.add_rule('PREROUTING', '-d 10.10.10.11 -j DNAT --to 10.0.0.10') table.add_rule('OUTPUT', '-d 10.10.10.11 -j DNAT --to 10.0.0.10') new_lines = self.manager._modify_rules(current_lines, table, 'nat') self.assertEqual(len(new_lines) - len(current_lines), 8) regex = '.*\s+%s(/32|\s+|$)' num_removed = table.remove_rules_regex(regex % '10.10.10.10') self.assertEqual(num_removed, 4) new_lines = self.manager._modify_rules(current_lines, table, 'nat') self.assertEqual(len(new_lines) - len(current_lines), 4) num_removed = table.remove_rules_regex(regex % '10.10.10.11') self.assertEqual(num_removed, 4) new_lines = self.manager._modify_rules(current_lines, table, 'nat') self.assertEqual(current_lines, new_lines) def test_nat_rules(self): current_lines = self.sample_nat new_lines = self.manager._modify_rules(current_lines, self.manager.ipv4['nat'], 'nat') for line in [':%s-OUTPUT - [0:0]' % (self.binary_name), ':%s-float-snat - [0:0]' % (self.binary_name), ':%s-snat - [0:0]' % (self.binary_name), ':%s-PREROUTING - [0:0]' % (self.binary_name), ':%s-POSTROUTING - [0:0]' % (self.binary_name)]: self.assertIn(line, new_lines, "One of our chains went" " missing.") seen_lines = set() for line in new_lines: line = line.strip() self.assertNotIn(line, seen_lines, "Duplicate line: %s" % line) seen_lines.add(line) last_postrouting_line = '' for line in 
new_lines: if line.startswith('[0:0] -A POSTROUTING'): last_postrouting_line = line self.assertIn('-j nova-postrouting-bottom', last_postrouting_line, "Last POSTROUTING rule does not jump to " "nova-postouting-bottom: %s" % last_postrouting_line) for chain in ['POSTROUTING', 'PREROUTING', 'OUTPUT']: self.assertTrue('[0:0] -A %s -j %s-%s' % (chain, self.binary_name, chain) in new_lines, "Built-in chain %s not wrapped" % (chain,)) def test_filter_rules(self): current_lines = self.sample_filter new_lines = self.manager._modify_rules(current_lines, self.manager.ipv4['filter'], 'nat') for line in [':%s-FORWARD - [0:0]' % (self.binary_name), ':%s-INPUT - [0:0]' % (self.binary_name), ':%s-local - [0:0]' % (self.binary_name), ':%s-OUTPUT - [0:0]' % (self.binary_name)]: self.assertIn(line, new_lines, "One of our chains went" " missing.") seen_lines = set() for line in new_lines: line = line.strip() self.assertNotIn(line, seen_lines, "Duplicate line: %s" % line) seen_lines.add(line) for chain in ['FORWARD', 'OUTPUT']: for line in new_lines: if line.startswith('[0:0] -A %s' % chain): self.assertIn('-j nova-filter-top', line, "First %s rule does not " "jump to nova-filter-top" % chain) break self.assertTrue('[0:0] -A nova-filter-top ' '-j %s-local' % self.binary_name in new_lines, "nova-filter-top does not jump to wrapped local chain") for chain in ['INPUT', 'OUTPUT', 'FORWARD']: self.assertTrue('[0:0] -A %s -j %s-%s' % (chain, self.binary_name, chain) in new_lines, "Built-in chain %s not wrapped" % (chain,)) def test_missing_table(self): current_lines = [] new_lines = self.manager._modify_rules(current_lines, self.manager.ipv4['filter'], 'filter') for line in ['*filter', 'COMMIT']: self.assertIn(line, new_lines, "One of iptables key lines " "went missing.") self.assertGreater(len(new_lines), 4, "No iptables rules added") msg = "iptables rules not generated in the correct order" self.assertEqual("#Generated by nova", new_lines[0], msg) self.assertEqual("*filter", new_lines[1], msg) self.assertEqual("COMMIT", new_lines[-2], msg) self.assertEqual("#Completed by nova", new_lines[-1], msg) def test_iptables_top_order(self): # Test iptables_top_regex current_lines = list(self.sample_filter) current_lines[12:12] = ['[0:0] -A FORWARD -j iptables-top-rule'] self.flags(iptables_top_regex='-j iptables-top-rule') new_lines = self.manager._modify_rules(current_lines, self.manager.ipv4['filter'], 'filter') self.assertEqual(current_lines, new_lines) def test_iptables_bottom_order(self): # Test iptables_bottom_regex current_lines = list(self.sample_filter) current_lines[26:26] = ['[0:0] -A FORWARD -j iptables-bottom-rule'] self.flags(iptables_bottom_regex='-j iptables-bottom-rule') new_lines = self.manager._modify_rules(current_lines, self.manager.ipv4['filter'], 'filter') self.assertEqual(current_lines, new_lines) def test_iptables_preserve_order(self): # Test both iptables_top_regex and iptables_bottom_regex current_lines = list(self.sample_filter) current_lines[12:12] = ['[0:0] -A FORWARD -j iptables-top-rule'] current_lines[27:27] = ['[0:0] -A FORWARD -j iptables-bottom-rule'] self.flags(iptables_top_regex='-j iptables-top-rule') self.flags(iptables_bottom_regex='-j iptables-bottom-rule') new_lines = self.manager._modify_rules(current_lines, self.manager.ipv4['filter'], 'filter') self.assertEqual(current_lines, new_lines) nova-17.0.1/nova/tests/unit/test_test_utils.py0000666000175000017500000000444113250073126021465 0ustar zuulzuul00000000000000# Copyright 2010 OpenStack Foundation # # Licensed under the Apache 
# License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import errno
import socket
import tempfile

import fixtures

from nova import db
from nova import test
from nova.tests.unit import utils as test_utils


class TestUtilsTestCase(test.TestCase):
    def test_get_test_admin_context(self):
        # get_test_admin_context's return value behaves like admin context.
        ctxt = test_utils.get_test_admin_context()

        # TODO(soren): This should verify the full interface context
        # objects expose.
        self.assertTrue(ctxt.is_admin)

    def test_get_test_instance(self):
        # get_test_instance's return value looks like an instance_ref.
        instance_ref = test_utils.get_test_instance()
        ctxt = test_utils.get_test_admin_context()
        db.instance_get(ctxt, instance_ref['id'])

    def test_ipv6_supported(self):
        self.assertIn(test_utils.is_ipv6_supported(), (False, True))

        def fake_open(path):
            raise IOError

        def fake_socket_fail(x, y):
            e = socket.error()
            e.errno = errno.EAFNOSUPPORT
            raise e

        def fake_socket_ok(x, y):
            return tempfile.TemporaryFile()

        with fixtures.MonkeyPatch('socket.socket', fake_socket_fail):
            self.assertFalse(test_utils.is_ipv6_supported())

        with fixtures.MonkeyPatch('socket.socket', fake_socket_ok):
            with fixtures.MonkeyPatch('sys.platform', 'windows'):
                self.assertTrue(test_utils.is_ipv6_supported())

            with fixtures.MonkeyPatch('sys.platform', 'linux2'):
                with fixtures.MonkeyPatch('six.moves.builtins.open',
                                          fake_open):
                    self.assertFalse(test_utils.is_ipv6_supported())
nova-17.0.1/nova/tests/unit/test_matchers.py0000666000175000017500000003333113250073126021074 0ustar zuulzuul00000000000000# Copyright 2012 Hewlett-Packard Development Company, L.P.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
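# NOTE: a minimal sketch of the testtools matcher protocol the cases
# below rely on: match() returns None on success, otherwise a Mismatch
# whose describe() output is what describe_examples asserts against.
#
#     mismatch = matchers.DictMatches({'foo': 'DONTCARE'}).match({'foo': 1})
#     # mismatch is None because 'DONTCARE' matches any value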
from collections import OrderedDict import pprint import testtools from testtools.tests.matchers import helpers from nova.tests.unit import matchers class TestDictMatches(testtools.TestCase, helpers.TestMatchersInterface): matches_dict = OrderedDict(sorted({'foo': 'bar', 'baz': 'DONTCARE', 'cat': {'tabby': True, 'fluffy': False}}.items())) matches_matcher = matchers.DictMatches( matches_dict ) matches_matches = [ {'foo': 'bar', 'baz': 'noox', 'cat': {'tabby': True, 'fluffy': False}}, {'foo': 'bar', 'baz': 'quux', 'cat': {'tabby': True, 'fluffy': False}}, ] matches_mismatches = [ {}, {'foo': 'bar', 'baz': 'qux'}, {'foo': 'bop', 'baz': 'qux', 'cat': {'tabby': True, 'fluffy': False}}, {'foo': 'bar', 'baz': 'quux', 'cat': {'tabby': True, 'fluffy': True}}, {'foo': 'bar', 'cat': {'tabby': True, 'fluffy': False}}, ] str_examples = [ ('DictMatches(%s)' % (pprint.pformat(matches_dict)), matches_matcher), ] describe_examples = [ ("Keys in d1 and not d2: {0}. Keys in d2 and not d1: []" .format(str(sorted(matches_dict.keys()))), {}, matches_matcher), ("Dictionaries do not match at fluffy. d1: False d2: True", {'foo': 'bar', 'baz': 'quux', 'cat': {'tabby': True, 'fluffy': True}}, matches_matcher), ("Dictionaries do not match at foo. d1: bar d2: bop", {'foo': 'bop', 'baz': 'quux', 'cat': {'tabby': True, 'fluffy': False}}, matches_matcher), ] class TestDictListMatches(testtools.TestCase, helpers.TestMatchersInterface): matches_matcher = matchers.DictListMatches( [{'foo': 'bar', 'baz': 'DONTCARE', 'cat': {'tabby': True, 'fluffy': False}}, {'dog': 'yorkie'}, ]) matches_matches = [ [{'foo': 'bar', 'baz': 'qoox', 'cat': {'tabby': True, 'fluffy': False}}, {'dog': 'yorkie'}], [{'foo': 'bar', 'baz': False, 'cat': {'tabby': True, 'fluffy': False}}, {'dog': 'yorkie'}], ] matches_mismatches = [ [], {}, [{'foo': 'bar', 'baz': 'qoox', 'cat': {'tabby': True, 'fluffy': True}}, {'dog': 'yorkie'}], [{'foo': 'bar', 'baz': False, 'cat': {'tabby': True, 'fluffy': False}}, {'cat': 'yorkie'}], [{'foo': 'bop', 'baz': False, 'cat': {'tabby': True, 'fluffy': False}}, {'dog': 'yorkie'}], ] str_examples = [ ("DictListMatches([{'baz': 'DONTCARE', 'cat':" " {'fluffy': False, 'tabby': True}, 'foo': 'bar'},\n" " {'dog': 'yorkie'}])", matches_matcher), ] describe_examples = [ ("Length mismatch: len(L1)=2 != len(L2)=0", {}, matches_matcher), ("Dictionaries do not match at fluffy. d1: True d2: False", [{'foo': 'bar', 'baz': 'qoox', 'cat': {'tabby': True, 'fluffy': True}}, {'dog': 'yorkie'}], matches_matcher), ] class TestIsSubDictOf(testtools.TestCase, helpers.TestMatchersInterface): matches_matcher = matchers.IsSubDictOf( OrderedDict(sorted({'foo': 'bar', 'baz': 'DONTCARE', 'cat': {'tabby': True, 'fluffy': False}}.items())) ) matches_matches = [ {'foo': 'bar', 'baz': 'noox', 'cat': {'tabby': True, 'fluffy': False}}, {'foo': 'bar', 'baz': 'quux'} ] matches_mismatches = [ {'foo': 'bop', 'baz': 'qux', 'cat': {'tabby': True, 'fluffy': False}}, {'foo': 'bar', 'baz': 'quux', 'cat': {'tabby': True, 'fluffy': True}}, {'foo': 'bar', 'cat': {'tabby': True, 'fluffy': False}, 'dog': None}, ] str_examples = [ ("IsSubDictOf({0})".format( str(OrderedDict(sorted({'foo': 'bar', 'baz': 'DONTCARE', 'cat': {'tabby': True, 'fluffy': False}}.items())))), matches_matcher), ] describe_examples = [ ("Dictionaries do not match at fluffy. d1: False d2: True", {'foo': 'bar', 'baz': 'quux', 'cat': {'tabby': True, 'fluffy': True}}, matches_matcher), ("Dictionaries do not match at foo. 
d1: bar d2: bop", {'foo': 'bop', 'baz': 'quux', 'cat': {'tabby': True, 'fluffy': False}}, matches_matcher), ] class TestXMLMatches(testtools.TestCase, helpers.TestMatchersInterface): matches_matcher = matchers.XMLMatches(""" some text here some other text here child 1 child 2 DONTCARE """, allow_mixed_nodes=False) matches_matches = [""" some text here some other text here child 1 child 2 child 3 """, """ some text here some other text here child 1 child 2 blah """, ] matches_mismatches = [""" some text here mismatch text child 1 child 2 child 3 """, """ some text here some other text here child 1 child 2 child 3 """, """ some text here some other text here child 1 child 2 child 3 """, """ some text here some other text here child 1 child 4 child 2 child 3 """, """ some text here some other text here child 1 child 2 """, """ some text here some other text here child 1 child 2 child 3 child 4 """, """ some text here some other text here child 2 child 1 DONTCARE """, """ some text here some other text here child 1 child 2 DONTCARE """, ] str_examples = [ ("XMLMatches('\\n" "\\n" " some text here\\n" " some other text here\\n" " \\n" " \\n" " \\n" " child 1\\n" " child 2\\n" " DONTCARE\\n" " \\n" " \\n" "')", matches_matcher), ] describe_examples = [ ("/root/text[1]: XML text value mismatch: expected text value: " "['some other text here']; actual value: ['mismatch text']", """ some text here mismatch text child 1 child 2 child 3 """, matches_matcher), ("/root/attrs[2]: XML attributes mismatch: keys only in expected: " "key2; keys only in actual: key3", """ some text here some other text here child 1 child 2 child 3 """, matches_matcher), ("/root/attrs[2]: XML attribute value mismatch: expected value of " "attribute key1: 'spam'; actual value: 'quux'", """ some text here some other text here child 1 child 2 child 3 """, matches_matcher), ("/root/children[3]: XML tag mismatch at index 1: expected tag " "; actual tag ", """ some text here some other text here child 1 child 4 child 2 child 3 """, matches_matcher), ("/root/children[3]: XML expected child element not " "present at index 2", """ some text here some other text here child 1 child 2 """, matches_matcher), ("/root/children[3]: XML unexpected child element " "present at index 3", """ some text here some other text here child 1 child 2 child 3 child 4 """, matches_matcher), ("/root/children[3]: XML tag mismatch at index 0: " "expected tag ; actual tag ", """ some text here some other text here child 2 child 1 child 3 """, matches_matcher), ("/: XML information mismatch(version, encoding) " "expected version 1.0, expected encoding UTF-8; " "actual version 1.1, actual encoding UTF-8", """ some text here some other text here child 1 child 2 DONTCARE """, matches_matcher), ] class TestXMLMatchesUnorderedNodes(testtools.TestCase, helpers.TestMatchersInterface): matches_matcher = matchers.XMLMatches(""" some text here some other text here DONTCARE child 2 child 1 """, allow_mixed_nodes=True) matches_matches = [""" some text here child 1 child 2 child 3 some other text here """, ] matches_mismatches = [""" some text here mismatch text child 1 child 2 child 3 """, ] describe_examples = [ ("/root: XML expected child element not present at index 4", """ some text here mismatch text child 1 child 2 child 3 """, matches_matcher), ] str_examples = [] nova-17.0.1/nova/tests/unit/__init__.py0000666000175000017500000000213313250073126017762 0ustar zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the 
National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ :mod:`nova.tests.unit` -- Nova Unittests ===================================================== .. automodule:: nova.tests.unit :platform: Unix """ from nova import objects # NOTE(comstud): Make sure we have all of the objects loaded. We do this # at module import time, because we may be using mock decorators in our # tests that run at import time. objects.register_all() nova-17.0.1/nova/tests/unit/fake_policy.py0000666000175000017500000001245113250073126020514 0ustar zuulzuul00000000000000# Copyright (c) 2012 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. policy_data = """ { "context_is_admin": "role:admin or role:administrator", "os_compute_api:servers:show:host_status": "", "os_compute_api:servers:migrations:force_complete": "", "os_compute_api:os-admin-actions:inject_network_info": "", "os_compute_api:os-admin-actions:reset_network": "", "os_compute_api:os-admin-actions:reset_state": "", "os_compute_api:os-admin-password": "", "os_compute_api:os-agents": "", "os_compute_api:os-attach-interfaces": "", "os_compute_api:os-baremetal-nodes": "", "os_compute_api:os-cells": "", "os_compute_api:os-console-output": "", "os_compute_api:os-remote-consoles": "", "os_compute_api:os-consoles:create": "", "os_compute_api:os-consoles:delete": "", "os_compute_api:os-consoles:index": "", "os_compute_api:os-consoles:show": "", "os_compute_api:os-create-backup": "", "os_compute_api:os-deferred-delete": "", "os_compute_api:os-extended-server-attributes": "", "os_compute_api:ips:index": "", "os_compute_api:ips:show": "", "os_compute_api:extensions": "", "os_compute_api:os-fixed-ips": "", "os_compute_api:os-flavor-access:remove_tenant_access": "", "os_compute_api:os-flavor-access:add_tenant_access": "", "os_compute_api:os-flavor-extra-specs:index": "", "os_compute_api:os-flavor-extra-specs:show": "", "os_compute_api:os-flavor-manage": "", "os_compute_api:os-flavor-manage:create": "", "os_compute_api:os-flavor-manage:delete": "", "os_compute_api:os-floating-ip-dns": "", "os_compute_api:os-floating-ip-dns:domain:update": "", "os_compute_api:os-floating-ip-dns:domain:delete": "", "os_compute_api:os-floating-ip-pools": "", "os_compute_api:os-floating-ips": "", "os_compute_api:os-floating-ips-bulk": "", "os_compute_api:os-fping": "", "os_compute_api:os-instance-actions": "", "os_compute_api:os-instance-usage-audit-log": "", "os_compute_api:os-lock-server:lock": "", "os_compute_api:os-lock-server:unlock": 
"", "os_compute_api:os-migrate-server:migrate": "", "os_compute_api:os-migrate-server:migrate_live": "", "os_compute_api:os-multinic": "", "os_compute_api:os-networks": "", "os_compute_api:os-networks:view": "", "os_compute_api:os-networks-associate": "", "os_compute_api:os-tenant-networks": "", "os_compute_api:os-pause-server:pause": "", "os_compute_api:os-pause-server:unpause": "", "os_compute_api:os-quota-sets:show": "", "os_compute_api:os-quota-sets:update": "", "os_compute_api:os-quota-sets:delete": "", "os_compute_api:os-quota-sets:detail": "", "os_compute_api:os-quota-sets:defaults": "", "os_compute_api:os-quota-class-sets:update": "", "os_compute_api:os-quota-class-sets:show": "", "os_compute_api:os-rescue": "", "os_compute_api:os-security-group-default-rules": "", "os_compute_api:os-server-diagnostics": "", "os_compute_api:os-server-password": "", "os_compute_api:os-server-tags:index": "", "os_compute_api:os-server-tags:show": "", "os_compute_api:os-server-tags:update": "", "os_compute_api:os-server-tags:update_all": "", "os_compute_api:os-server-tags:delete": "", "os_compute_api:os-server-tags:delete_all": "", "os_compute_api:os-server-groups": "", "os_compute_api:os-server-groups:show": "", "os_compute_api:os-server-groups:index": "", "os_compute_api:os-server-groups:create": "", "os_compute_api:os-server-groups:delete": "", "os_compute_api:os-services": "", "os_compute_api:os-shelve:shelve": "", "os_compute_api:os-shelve:shelve_offload": "", "os_compute_api:os-simple-tenant-usage:show": "", "os_compute_api:os-simple-tenant-usage:list": "", "os_compute_api:os-shelve:unshelve": "", "os_compute_api:os-suspend-server:suspend": "", "os_compute_api:os-suspend-server:resume": "", "os_compute_api:os-virtual-interfaces": "", "os_compute_api:os-volumes": "", "os_compute_api:os-volumes-attachments:index": "", "os_compute_api:os-volumes-attachments:show": "", "os_compute_api:os-volumes-attachments:create": "", "os_compute_api:os-volumes-attachments:update": "", "os_compute_api:os-volumes-attachments:delete": "", "os_compute_api:os-availability-zone:list": "", "os_compute_api:os-availability-zone:detail": "", "os_compute_api:limits": "", "os_compute_api:os-assisted-volume-snapshots:create": "", "os_compute_api:os-assisted-volume-snapshots:delete": "", "os_compute_api:server-metadata:create": "", "os_compute_api:server-metadata:update": "", "os_compute_api:server-metadata:update_all": "", "os_compute_api:server-metadata:delete": "", "os_compute_api:server-metadata:show": "", "os_compute_api:server-metadata:index": "" } """ nova-17.0.1/nova/tests/unit/test_loadables.py0000666000175000017500000001246113250073126021215 0ustar zuulzuul00000000000000# Copyright 2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Tests For Loadable class handling. 
""" from nova import exception from nova import test from nova.tests.unit import fake_loadables class LoadablesTestCase(test.NoDBTestCase): def setUp(self): super(LoadablesTestCase, self).setUp() self.fake_loader = fake_loadables.FakeLoader() # The name that we imported above for testing self.test_package = 'nova.tests.unit.fake_loadables' def test_loader_init(self): self.assertEqual(self.fake_loader.package, self.test_package) # Test the path of the module ending_path = '/' + self.test_package.replace('.', '/') self.assertTrue(self.fake_loader.path.endswith(ending_path)) self.assertEqual(self.fake_loader.loadable_cls_type, fake_loadables.FakeLoadable) def _compare_classes(self, classes, expected): class_names = [cls.__name__ for cls in classes] self.assertEqual(set(class_names), set(expected)) def test_get_all_classes(self): classes = self.fake_loader.get_all_classes() expected_class_names = ['FakeLoadableSubClass1', 'FakeLoadableSubClass2', 'FakeLoadableSubClass5', 'FakeLoadableSubClass6'] self._compare_classes(classes, expected_class_names) def test_get_matching_classes(self): prefix = self.test_package test_classes = [prefix + '.fake_loadable1.FakeLoadableSubClass1', prefix + '.fake_loadable2.FakeLoadableSubClass5'] classes = self.fake_loader.get_matching_classes(test_classes) expected_class_names = ['FakeLoadableSubClass1', 'FakeLoadableSubClass5'] self._compare_classes(classes, expected_class_names) def test_get_matching_classes_with_underscore(self): prefix = self.test_package test_classes = [prefix + '.fake_loadable1.FakeLoadableSubClass1', prefix + '.fake_loadable2._FakeLoadableSubClass7'] self.assertRaises(exception.ClassNotFound, self.fake_loader.get_matching_classes, test_classes) def test_get_matching_classes_with_wrong_type1(self): prefix = self.test_package test_classes = [prefix + '.fake_loadable1.FakeLoadableSubClass4', prefix + '.fake_loadable2.FakeLoadableSubClass5'] self.assertRaises(exception.ClassNotFound, self.fake_loader.get_matching_classes, test_classes) def test_get_matching_classes_with_wrong_type2(self): prefix = self.test_package test_classes = [prefix + '.fake_loadable1.FakeLoadableSubClass1', prefix + '.fake_loadable2.FakeLoadableSubClass8'] self.assertRaises(exception.ClassNotFound, self.fake_loader.get_matching_classes, test_classes) def test_get_matching_classes_with_one_function(self): prefix = self.test_package test_classes = [prefix + '.fake_loadable1.return_valid_classes', prefix + '.fake_loadable2.FakeLoadableSubClass5'] classes = self.fake_loader.get_matching_classes(test_classes) expected_class_names = ['FakeLoadableSubClass1', 'FakeLoadableSubClass2', 'FakeLoadableSubClass5'] self._compare_classes(classes, expected_class_names) def test_get_matching_classes_with_two_functions(self): prefix = self.test_package test_classes = [prefix + '.fake_loadable1.return_valid_classes', prefix + '.fake_loadable2.return_valid_class'] classes = self.fake_loader.get_matching_classes(test_classes) expected_class_names = ['FakeLoadableSubClass1', 'FakeLoadableSubClass2', 'FakeLoadableSubClass6'] self._compare_classes(classes, expected_class_names) def test_get_matching_classes_with_function_including_invalids(self): # When using a method, no checking is done on valid classes. 
prefix = self.test_package test_classes = [prefix + '.fake_loadable1.return_invalid_classes', prefix + '.fake_loadable2.return_valid_class'] classes = self.fake_loader.get_matching_classes(test_classes) expected_class_names = ['FakeLoadableSubClass1', '_FakeLoadableSubClass3', 'FakeLoadableSubClass4', 'FakeLoadableSubClass6'] self._compare_classes(classes, expected_class_names) nova-17.0.1/nova/tests/unit/scheduler/0000775000175000017500000000000013250073472017632 5ustar zuulzuul00000000000000nova-17.0.1/nova/tests/unit/scheduler/test_scheduler_utils.py0000666000175000017500000004450413250073126024446 0ustar zuulzuul00000000000000# Copyright (c) 2013 Rackspace Hosting # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Tests For Scheduler Utils """ import mock import six from nova.compute import flavors from nova.compute import utils as compute_utils from nova import exception from nova import objects from nova.scheduler import utils as scheduler_utils from nova import test from nova.tests.unit import fake_instance from nova.tests.unit.objects import test_flavor from nova.tests import uuidsentinel as uuids class SchedulerUtilsTestCase(test.NoDBTestCase): """Test case for scheduler utils methods.""" def setUp(self): super(SchedulerUtilsTestCase, self).setUp() self.context = 'fake-context' def test_build_request_spec_without_image(self): instance = {'uuid': uuids.instance} instance_type = objects.Flavor(**test_flavor.fake_flavor) with mock.patch.object(flavors, 'extract_flavor') as mock_extract: mock_extract.return_value = instance_type request_spec = scheduler_utils.build_request_spec(None, [instance]) mock_extract.assert_called_once_with({'uuid': uuids.instance}) self.assertEqual({}, request_spec['image']) def test_build_request_spec_with_object(self): instance_type = objects.Flavor() instance = fake_instance.fake_instance_obj(self.context) with mock.patch.object(instance, 'get_flavor') as mock_get: mock_get.return_value = instance_type request_spec = scheduler_utils.build_request_spec(None, [instance]) mock_get.assert_called_once_with() self.assertIsInstance(request_spec['instance_properties'], dict) @mock.patch('nova.rpc.LegacyValidatingNotifier') @mock.patch.object(compute_utils, 'add_instance_fault_from_exc') @mock.patch.object(objects.Instance, 'save') def _test_set_vm_state_and_notify(self, mock_save, mock_add, mock_notifier, request_spec, payload_request_spec): expected_uuid = uuids.instance updates = dict(vm_state='fake-vm-state') service = 'fake-service' method = 'fake-method' exc_info = 'exc_info' payload = dict(request_spec=payload_request_spec, instance_properties=payload_request_spec.get( 'instance_properties', {}), instance_id=expected_uuid, state='fake-vm-state', method=method, reason=exc_info) event_type = '%s.%s' % (service, method) scheduler_utils.set_vm_state_and_notify(self.context, expected_uuid, service, method, updates, exc_info, request_spec) mock_save.assert_called_once_with() mock_add.assert_called_once_with(self.context, mock.ANY, exc_info, mock.ANY) 
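        # NOTE: the isinstance checks below pin the positional contract
        # of add_instance_fault_from_exc -- the second argument must be
        # an Instance object and the fourth a tuple (as produced by
        # sys.exc_info()).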
self.assertIsInstance(mock_add.call_args[0][1], objects.Instance) self.assertIsInstance(mock_add.call_args[0][3], tuple) mock_notifier.return_value.error.assert_called_once_with(self.context, event_type, payload) def test_set_vm_state_and_notify_request_spec_dict(self): """Tests passing a legacy dict format request spec to set_vm_state_and_notify. """ request_spec = dict(instance_properties=dict(uuid=uuids.instance)) # The request_spec in the notification payload should be unchanged. self._test_set_vm_state_and_notify( request_spec=request_spec, payload_request_spec=request_spec) def test_set_vm_state_and_notify_request_spec_object(self): """Tests passing a RequestSpec object to set_vm_state_and_notify.""" request_spec = objects.RequestSpec.from_primitives( self.context, dict(instance_properties=dict(uuid=uuids.instance)), filter_properties=dict()) # The request_spec in the notification payload should be converted # to the legacy format. self._test_set_vm_state_and_notify( request_spec=request_spec, payload_request_spec=request_spec.to_legacy_request_spec_dict()) def test_set_vm_state_and_notify_request_spec_none(self): """Tests passing None for the request_spec to set_vm_state_and_notify. """ # The request_spec in the notification payload should be changed to # just an empty dict. self._test_set_vm_state_and_notify( request_spec=None, payload_request_spec={}) def test_build_filter_properties(self): sched_hints = {'hint': ['over-there']} forced_host = 'forced-host1' forced_node = 'forced-node1' instance_type = objects.Flavor() filt_props = scheduler_utils.build_filter_properties(sched_hints, forced_host, forced_node, instance_type) self.assertEqual(sched_hints, filt_props['scheduler_hints']) self.assertEqual([forced_host], filt_props['force_hosts']) self.assertEqual([forced_node], filt_props['force_nodes']) self.assertEqual(instance_type, filt_props['instance_type']) def test_build_filter_properties_no_forced_host_no_force_node(self): sched_hints = {'hint': ['over-there']} forced_host = None forced_node = None instance_type = objects.Flavor() filt_props = scheduler_utils.build_filter_properties(sched_hints, forced_host, forced_node, instance_type) self.assertEqual(sched_hints, filt_props['scheduler_hints']) self.assertEqual(instance_type, filt_props['instance_type']) self.assertNotIn('forced_host', filt_props) self.assertNotIn('forced_node', filt_props) def _test_populate_filter_props(self, selection_obj=True, with_retry=True, force_hosts=None, force_nodes=None, no_limits=None): if force_hosts is None: force_hosts = [] if force_nodes is None: force_nodes = [] if with_retry: if ((len(force_hosts) == 1 and len(force_nodes) <= 1) or (len(force_nodes) == 1 and len(force_hosts) <= 1)): filter_properties = dict(force_hosts=force_hosts, force_nodes=force_nodes) elif len(force_hosts) > 1 or len(force_nodes) > 1: filter_properties = dict(retry=dict(hosts=[]), force_hosts=force_hosts, force_nodes=force_nodes) else: filter_properties = dict(retry=dict(hosts=[])) else: filter_properties = dict() if no_limits: fake_limits = None else: fake_limits = objects.SchedulerLimits(vcpu=1, disk_gb=2, memory_mb=3, numa_topology=None) selection = objects.Selection(service_host="fake-host", nodename="fake-node", limits=fake_limits) if not selection_obj: selection = selection.to_dict() fake_limits = fake_limits.to_dict() scheduler_utils.populate_filter_properties(filter_properties, selection) enable_retry_force_hosts = not force_hosts or len(force_hosts) > 1 enable_retry_force_nodes = not force_nodes or 
len(force_nodes) > 1 if with_retry or enable_retry_force_hosts or enable_retry_force_nodes: # So we can check for 2 hosts scheduler_utils.populate_filter_properties(filter_properties, selection) if force_hosts: expected_limits = None elif no_limits: expected_limits = {} elif isinstance(fake_limits, objects.SchedulerLimits): expected_limits = fake_limits.to_dict() else: expected_limits = fake_limits self.assertEqual(expected_limits, filter_properties.get('limits')) if (with_retry and enable_retry_force_hosts and enable_retry_force_nodes): self.assertEqual([['fake-host', 'fake-node'], ['fake-host', 'fake-node']], filter_properties['retry']['hosts']) else: self.assertNotIn('retry', filter_properties) def test_populate_filter_props(self): self._test_populate_filter_props() def test_populate_filter_props_host_dict(self): self._test_populate_filter_props(selection_obj=False) def test_populate_filter_props_no_retry(self): self._test_populate_filter_props(with_retry=False) def test_populate_filter_props_force_hosts_no_retry(self): self._test_populate_filter_props(force_hosts=['force-host']) def test_populate_filter_props_force_nodes_no_retry(self): self._test_populate_filter_props(force_nodes=['force-node']) def test_populate_filter_props_multi_force_hosts_with_retry(self): self._test_populate_filter_props(force_hosts=['force-host1', 'force-host2']) def test_populate_filter_props_multi_force_nodes_with_retry(self): self._test_populate_filter_props(force_nodes=['force-node1', 'force-node2']) def test_populate_filter_props_no_limits(self): self._test_populate_filter_props(no_limits=True) def test_populate_retry_exception_at_max_attempts(self): self.flags(max_attempts=2, group='scheduler') msg = 'The exception text was preserved!' filter_properties = dict(retry=dict(num_attempts=2, hosts=[], exc_reason=[msg])) nvh = self.assertRaises(exception.MaxRetriesExceeded, scheduler_utils.populate_retry, filter_properties, uuids.instance) # make sure 'msg' is a substring of the complete exception text self.assertIn(msg, six.text_type(nvh)) def _check_parse_options(self, opts, sep, converter, expected): good = scheduler_utils.parse_options(opts, sep=sep, converter=converter) for item in expected: self.assertIn(item, good) def test_parse_options(self): # check normal self._check_parse_options(['foo=1', 'bar=-2.1'], '=', float, [('foo', 1.0), ('bar', -2.1)]) # check convert error self._check_parse_options(['foo=a1', 'bar=-2.1'], '=', float, [('bar', -2.1)]) # check separator missing self._check_parse_options(['foo', 'bar=-2.1'], '=', float, [('bar', -2.1)]) # check key missing self._check_parse_options(['=5', 'bar=-2.1'], '=', float, [('bar', -2.1)]) def test_validate_filters_configured(self): self.flags(enabled_filters='FakeFilter1,FakeFilter2', group='filter_scheduler') self.assertTrue(scheduler_utils.validate_filter('FakeFilter1')) self.assertTrue(scheduler_utils.validate_filter('FakeFilter2')) self.assertFalse(scheduler_utils.validate_filter('FakeFilter3')) def test_validate_weighers_configured(self): self.flags(weight_classes=[ 'ServerGroupSoftAntiAffinityWeigher', 'FakeFilter1'], group='filter_scheduler') self.assertTrue(scheduler_utils.validate_weigher( 'ServerGroupSoftAntiAffinityWeigher')) self.assertTrue(scheduler_utils.validate_weigher('FakeFilter1')) self.assertFalse(scheduler_utils.validate_weigher( 'ServerGroupSoftAffinityWeigher')) def test_validate_weighers_configured_all_weighers(self): self.assertTrue(scheduler_utils.validate_weigher( 'ServerGroupSoftAffinityWeigher')) 
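        # (Context, hedged: the default for filter_scheduler.weight_classes is
        # ['nova.scheduler.weights.all_weighers'], so with the option left
        # unset every in-tree weigher validates. A rough sketch of the check,
        # under that assumption:
        #
        #     def validate_weigher(weigher):
        #         weight_classes = CONF.filter_scheduler.weight_classes
        #         if 'nova.scheduler.weights.all_weighers' in weight_classes:
        #             return True
        #         return weigher in weight_classes
        # )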
self.assertTrue(scheduler_utils.validate_weigher( 'ServerGroupSoftAntiAffinityWeigher')) def _create_server_group(self, policy='anti-affinity'): instance = fake_instance.fake_instance_obj(self.context, params={'host': 'hostA'}) group = objects.InstanceGroup() group.name = 'pele' group.uuid = uuids.fake group.members = [instance.uuid] group.policies = [policy] return group def _get_group_details(self, group, policy=None): group_hosts = ['hostB'] with test.nested( mock.patch.object(objects.InstanceGroup, 'get_by_instance_uuid', return_value=group), mock.patch.object(objects.InstanceGroup, 'get_hosts', return_value=['hostA']), ) as (get_group, get_hosts): scheduler_utils._SUPPORTS_ANTI_AFFINITY = None scheduler_utils._SUPPORTS_AFFINITY = None group_info = scheduler_utils._get_group_details( self.context, 'fake_uuid', group_hosts) self.assertEqual( (set(['hostA', 'hostB']), [policy], group.members), group_info) def test_get_group_details(self): for policy in ['affinity', 'anti-affinity', 'soft-affinity', 'soft-anti-affinity']: group = self._create_server_group(policy) self._get_group_details(group, policy=policy) def test_get_group_details_with_no_instance_uuid(self): group_info = scheduler_utils._get_group_details(self.context, None) self.assertIsNone(group_info) def _get_group_details_with_filter_not_configured(self, policy): self.flags(enabled_filters=['fake'], group='filter_scheduler') self.flags(weight_classes=['fake'], group='filter_scheduler') instance = fake_instance.fake_instance_obj(self.context, params={'host': 'hostA'}) group = objects.InstanceGroup() group.uuid = uuids.fake group.members = [instance.uuid] group.policies = [policy] with test.nested( mock.patch.object(objects.InstanceGroup, 'get_by_instance_uuid', return_value=group), ) as (get_group,): scheduler_utils._SUPPORTS_ANTI_AFFINITY = None scheduler_utils._SUPPORTS_AFFINITY = None scheduler_utils._SUPPORTS_SOFT_AFFINITY = None scheduler_utils._SUPPORTS_SOFT_ANTI_AFFINITY = None self.assertRaises(exception.UnsupportedPolicyException, scheduler_utils._get_group_details, self.context, uuids.instance) def test_get_group_details_with_filter_not_configured(self): policies = ['anti-affinity', 'affinity', 'soft-affinity', 'soft-anti-affinity'] for policy in policies: self._get_group_details_with_filter_not_configured(policy) @mock.patch.object(scheduler_utils, '_get_group_details') def test_setup_instance_group_in_request_spec(self, mock_ggd): mock_ggd.return_value = scheduler_utils.GroupDetails( hosts=set(['hostA', 'hostB']), policies=['policy'], members=['instance1']) spec = objects.RequestSpec(instance_uuid=uuids.instance) spec.instance_group = objects.InstanceGroup(hosts=['hostC']) scheduler_utils.setup_instance_group(self.context, spec) mock_ggd.assert_called_once_with(self.context, uuids.instance, ['hostC']) # Given it returns a list from a set, make sure it's sorted. self.assertEqual(['hostA', 'hostB'], sorted(spec.instance_group.hosts)) self.assertEqual(['policy'], spec.instance_group.policies) self.assertEqual(['instance1'], spec.instance_group.members) @mock.patch.object(scheduler_utils, '_get_group_details') def test_setup_instance_group_with_no_group(self, mock_ggd): mock_ggd.return_value = None spec = objects.RequestSpec(instance_uuid=uuids.instance) spec.instance_group = objects.InstanceGroup(hosts=['hostC']) scheduler_utils.setup_instance_group(self.context, spec) mock_ggd.assert_called_once_with(self.context, uuids.instance, ['hostC']) # Make sure the field isn't touched by the caller. 
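        # (Hedged reading of the contract: setup_instance_group() only
        # mutates the spec when group details are found, roughly
        #
        #     group_info = _get_group_details(context, instance_uuid, hosts)
        #     if group_info is not None:
        #         spec.instance_group.hosts = list(group_info.hosts)
        #         spec.instance_group.policies = group_info.policies
        #         spec.instance_group.members = group_info.members
        #
        # so with a None return the assertions that follow must hold.)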
self.assertFalse(spec.instance_group.obj_attr_is_set('policies')) self.assertEqual(['hostC'], spec.instance_group.hosts) @mock.patch.object(scheduler_utils, '_get_group_details') def test_setup_instance_group_with_filter_not_configured(self, mock_ggd): mock_ggd.side_effect = exception.NoValidHost(reason='whatever') spec = {'instance_properties': {'uuid': uuids.instance}} spec = objects.RequestSpec(instance_uuid=uuids.instance) spec.instance_group = objects.InstanceGroup(hosts=['hostC']) self.assertRaises(exception.NoValidHost, scheduler_utils.setup_instance_group, self.context, spec) nova-17.0.1/nova/tests/unit/scheduler/test_ironic_host_manager.py0000666000175000017500000006507213250073126025265 0ustar zuulzuul00000000000000# Copyright (c) 2014 OpenStack Foundation # Copyright (c) 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Tests For IronicHostManager """ import mock from nova import context from nova import exception from nova import objects from nova.objects import base as obj_base from nova.scheduler import filters from nova.scheduler import host_manager from nova.scheduler import ironic_host_manager from nova import test from nova.tests.unit.scheduler import fakes from nova.tests.unit.scheduler import ironic_fakes from nova.tests import uuidsentinel as uuids class FakeFilterClass1(filters.BaseHostFilter): def host_passes(self, host_state, filter_properties): pass class FakeFilterClass2(filters.BaseHostFilter): def host_passes(self, host_state, filter_properties): pass class IronicHostManagerTestCase(test.NoDBTestCase): """Test case for IronicHostManager class.""" @mock.patch.object(ironic_host_manager.IronicHostManager, '_init_instance_info') @mock.patch.object(host_manager.HostManager, '_init_aggregates') def setUp(self, mock_init_agg, mock_init_inst): super(IronicHostManagerTestCase, self).setUp() self.host_manager = ironic_host_manager.IronicHostManager() @mock.patch.object(host_manager.HostManager, '_init_instance_info') @mock.patch.object(host_manager.HostManager, '_init_aggregates') def test_manager_public_api_signatures(self, mock_init_aggs, mock_init_inst): self.assertPublicAPISignatures(host_manager.HostManager(), self.host_manager) def test_state_public_api_signatures(self): self.assertPublicAPISignatures( host_manager.HostState("dummy", "dummy", uuids.cell), ironic_host_manager.IronicNodeState("dummy", "dummy", uuids.cell) ) @mock.patch('nova.objects.ServiceList.get_by_binary') @mock.patch('nova.objects.ComputeNodeList.get_all') @mock.patch('nova.objects.InstanceList.get_by_host') def test_get_all_host_states(self, mock_get_by_host, mock_get_all, mock_get_by_binary): mock_get_all.return_value = ironic_fakes.COMPUTE_NODES mock_get_by_binary.return_value = ironic_fakes.SERVICES context = 'fake_context' hosts = self.host_manager.get_all_host_states(context) self.assertEqual(0, mock_get_by_host.call_count) # get_all_host_states returns a generator, so make a map from it host_states_map = {(state.host, state.nodename): state for state in 
hosts} self.assertEqual(len(host_states_map), 4) for i in range(4): compute_node = ironic_fakes.COMPUTE_NODES[i] host = compute_node.host node = compute_node.hypervisor_hostname state_key = (host, node) self.assertEqual(host_states_map[state_key].service, obj_base.obj_to_primitive( ironic_fakes.get_service_by_host(host))) self.assertEqual(compute_node.stats, host_states_map[state_key].stats) self.assertEqual(compute_node.free_ram_mb, host_states_map[state_key].free_ram_mb) self.assertEqual(compute_node.free_disk_gb * 1024, host_states_map[state_key].free_disk_mb) def test_is_ironic_compute(self): ironic = ironic_fakes.COMPUTE_NODES[0] self.assertTrue(self.host_manager._is_ironic_compute(ironic)) non_ironic = fakes.COMPUTE_NODES[0] self.assertFalse(self.host_manager._is_ironic_compute(non_ironic)) @mock.patch.object(host_manager.HostManager, '_get_instance_info') def test_get_instance_info_ironic_compute_return_empty_instance_dict(self, mock_get_instance_info): compute_node = ironic_fakes.COMPUTE_NODES[0] rv = self.host_manager._get_instance_info('fake_context', compute_node) # for ironic compute nodes we always return an empty dict self.assertEqual({}, rv) # base class implementation is overridden and not called self.assertFalse(mock_get_instance_info.called) @mock.patch.object(host_manager.HostManager, '_get_instance_info') def test_get_instance_info_non_ironic_compute_call_super_class(self, mock_get_instance_info): expected_rv = {uuids.fake_instance_uuid: objects.Instance()} mock_get_instance_info.return_value = expected_rv compute_node = fakes.COMPUTE_NODES[0] rv = self.host_manager._get_instance_info('fake_context', compute_node) # for a non-ironic compute we call the base class implementation mock_get_instance_info.assert_called_once_with('fake_context', compute_node) # we return exactly what the base class implementation returned self.assertIs(expected_rv, rv) @mock.patch.object(host_manager.HostManager, '_init_instance_info') @mock.patch.object(objects.ComputeNodeList, 'get_all') def test_init_instance_info(self, mock_get_all, mock_base_init_instance_info): cn1 = objects.ComputeNode(**{'hypervisor_type': 'ironic'}) cn2 = objects.ComputeNode(**{'hypervisor_type': 'qemu'}) cn3 = objects.ComputeNode(**{'hypervisor_type': 'qemu'}) cell = objects.CellMappingList.get_all(context.get_admin_context())[0] self.host_manager.cells = [cell] mock_get_all.return_value.objects = [cn1, cn2, cn3] self.host_manager._init_instance_info() # ensure we filter out ironic nodes before calling the base class impl mock_base_init_instance_info.assert_called_once_with( {cell: [cn2, cn3]}) @mock.patch.object(host_manager.HostManager, '_init_instance_info') @mock.patch.object(objects.ComputeNodeList, 'get_all') def test_init_instance_info_compute_nodes(self, mock_get_all, mock_base_init_instance_info): cn1 = objects.ComputeNode(**{'hypervisor_type': 'ironic'}) cn2 = objects.ComputeNode(**{'hypervisor_type': 'qemu'}) cell = objects.CellMapping() self.host_manager._init_instance_info(computes_by_cell={ cell: [cn1, cn2]}) # check we don't try to get nodes list if it was passed explicitly self.assertFalse(mock_get_all.called) # ensure we filter out ironic nodes before calling the base class impl mock_base_init_instance_info.assert_called_once_with({cell: [cn2]}) class IronicHostManagerChangedNodesTestCase(test.NoDBTestCase): """Test case for IronicHostManager class.""" @mock.patch.object(ironic_host_manager.IronicHostManager, '_init_instance_info') @mock.patch.object(host_manager.HostManager, '_init_aggregates') def 
setUp(self, mock_init_agg, mock_init_inst): super(IronicHostManagerChangedNodesTestCase, self).setUp() self.host_manager = ironic_host_manager.IronicHostManager() ironic_driver = "nova.virt.ironic.driver.IronicDriver" supported_instances = [ objects.HVSpec.from_list(["i386", "baremetal", "baremetal"])] self.compute_node = objects.ComputeNode( id=1, local_gb=10, memory_mb=1024, vcpus=1, vcpus_used=0, local_gb_used=0, memory_mb_used=0, updated_at=None, cpu_info='baremetal cpu', stats=dict( ironic_driver=ironic_driver, cpu_arch='i386'), supported_hv_specs=supported_instances, free_disk_gb=10, free_ram_mb=1024, hypervisor_type='ironic', hypervisor_version=1, hypervisor_hostname='fake_host', cpu_allocation_ratio=16.0, ram_allocation_ratio=1.5, disk_allocation_ratio=1.0, uuid=uuids.compute_node_uuid) @mock.patch.object(ironic_host_manager.IronicNodeState, '__init__') def test_create_ironic_node_state(self, init_mock): init_mock.return_value = None compute = objects.ComputeNode(**{'hypervisor_type': 'ironic'}) host_state = self.host_manager.host_state_cls('fake-host', 'fake-node', uuids.cell, compute=compute) self.assertIs(ironic_host_manager.IronicNodeState, type(host_state)) @mock.patch.object(host_manager.HostState, '__init__') def test_create_non_ironic_host_state(self, init_mock): init_mock.return_value = None compute = objects.ComputeNode(**{'cpu_info': 'other cpu'}) host_state = self.host_manager.host_state_cls('fake-host', 'fake-node', uuids.cell, compute=compute) self.assertIs(host_manager.HostState, type(host_state)) @mock.patch.object(host_manager.HostState, '__init__') def test_create_host_state_null_compute(self, init_mock): init_mock.return_value = None host_state = self.host_manager.host_state_cls('fake-host', 'fake-node', uuids.cell) self.assertIs(host_manager.HostState, type(host_state)) @mock.patch('nova.objects.ServiceList.get_by_binary') @mock.patch('nova.objects.ComputeNodeList.get_all') def test_get_all_host_states_after_delete_one(self, mock_get_all, mock_get_by_binary): getter = (lambda n: n.hypervisor_hostname if 'hypervisor_hostname' in n else None) running_nodes = [n for n in ironic_fakes.COMPUTE_NODES if getter(n) != 'node4uuid'] mock_get_all.side_effect = [ ironic_fakes.COMPUTE_NODES, running_nodes] mock_get_by_binary.side_effect = [ ironic_fakes.SERVICES, ironic_fakes.SERVICES] context = 'fake_context' # first call: all nodes hosts = self.host_manager.get_all_host_states(context) # get_all_host_states returns a generator, so make a map from it host_states_map = {(state.host, state.nodename): state for state in hosts} self.assertEqual(4, len(host_states_map)) # second call: just running nodes hosts = self.host_manager.get_all_host_states(context) host_states_map = {(state.host, state.nodename): state for state in hosts} self.assertEqual(3, len(host_states_map)) @mock.patch('nova.objects.ServiceList.get_by_binary') @mock.patch('nova.objects.ComputeNodeList.get_all') def test_get_all_host_states_after_delete_all(self, mock_get_all, mock_get_by_binary): mock_get_all.side_effect = [ ironic_fakes.COMPUTE_NODES, []] mock_get_by_binary.side_effect = [ ironic_fakes.SERVICES, ironic_fakes.SERVICES] context = 'fake_context' # first call: all nodes hosts = self.host_manager.get_all_host_states(context) # get_all_host_states returns a generator, so make a map from it host_states_map = {(state.host, state.nodename): state for state in hosts} self.assertEqual(len(host_states_map), 4) # second call: no nodes self.host_manager.get_all_host_states(context) host_states_map = 
{(state.host, state.nodename): state for state in hosts} self.assertEqual(len(host_states_map), 0) def test_update_from_compute_node(self): host = ironic_host_manager.IronicNodeState("fakehost", "fakenode", uuids.cell) host.update(compute=self.compute_node) self.assertEqual(1024, host.free_ram_mb) self.assertEqual(1024, host.total_usable_ram_mb) self.assertEqual(10240, host.free_disk_mb) self.assertEqual(1, host.vcpus_total) self.assertEqual(0, host.vcpus_used) self.assertEqual(self.compute_node.stats, host.stats) self.assertEqual('ironic', host.hypervisor_type) self.assertEqual(1, host.hypervisor_version) self.assertEqual('fake_host', host.hypervisor_hostname) # Make sure the uuid is set since that's needed for the allocation # requests (claims to Placement) made in the FilterScheduler. self.assertEqual(self.compute_node.uuid, host.uuid) def test_update_from_compute_node_not_ready(self): """Tests that we ignore a compute node that does not have its free_disk_gb field set yet from the compute resource tracker. """ host = ironic_host_manager.IronicNodeState("fakehost", "fakenode", uuids.cell) self.compute_node.free_disk_gb = None host.update(compute=self.compute_node) self.assertEqual(0, host.free_disk_mb) def test_consume_identical_instance_from_compute(self): host = ironic_host_manager.IronicNodeState("fakehost", "fakenode", uuids.cell) host.update(compute=self.compute_node) self.assertIsNone(host.updated) spec_obj = objects.RequestSpec( flavor=objects.Flavor(root_gb=10, ephemeral_gb=0, memory_mb=1024, vcpus=1), uuid=uuids.instance) host.consume_from_request(spec_obj) self.assertEqual(1, host.vcpus_used) self.assertEqual(0, host.free_ram_mb) self.assertEqual(0, host.free_disk_mb) self.assertIsNotNone(host.updated) def test_consume_larger_instance_from_compute(self): host = ironic_host_manager.IronicNodeState("fakehost", "fakenode", uuids.cell) host.update(compute=self.compute_node) self.assertIsNone(host.updated) spec_obj = objects.RequestSpec( flavor=objects.Flavor(root_gb=20, ephemeral_gb=0, memory_mb=2048, vcpus=2)) host.consume_from_request(spec_obj) self.assertEqual(1, host.vcpus_used) self.assertEqual(0, host.free_ram_mb) self.assertEqual(0, host.free_disk_mb) self.assertIsNotNone(host.updated) def test_consume_smaller_instance_from_compute(self): host = ironic_host_manager.IronicNodeState("fakehost", "fakenode", uuids.cell) host.update(compute=self.compute_node) self.assertIsNone(host.updated) spec_obj = objects.RequestSpec( flavor=objects.Flavor(root_gb=5, ephemeral_gb=0, memory_mb=512, vcpus=1)) host.consume_from_request(spec_obj) self.assertEqual(1, host.vcpus_used) self.assertEqual(0, host.free_ram_mb) self.assertEqual(0, host.free_disk_mb) self.assertIsNotNone(host.updated) class IronicHostManagerTestFilters(test.NoDBTestCase): """Test filters work for IronicHostManager.""" @mock.patch.object(ironic_host_manager.IronicHostManager, '_init_instance_info') @mock.patch.object(host_manager.HostManager, '_init_aggregates') def setUp(self, mock_init_agg, mock_init_inst): super(IronicHostManagerTestFilters, self).setUp() self.flags(available_filters=[ __name__ + '.FakeFilterClass1', __name__ + '.FakeFilterClass2'], group='filter_scheduler') self.flags(enabled_filters=['FakeFilterClass1'], group='filter_scheduler') self.flags(baremetal_enabled_filters=['FakeFilterClass2'], group='filter_scheduler') self.host_manager = ironic_host_manager.IronicHostManager() cell = uuids.cell self.fake_hosts = [ironic_host_manager.IronicNodeState( 'fake_host%s' % x, 'fake-node', cell) for x in range(1, 
5)] self.fake_hosts += [ironic_host_manager.IronicNodeState( 'fake_multihost', 'fake-node%s' % x, cell) for x in range(1, 5)] def test_enabled_filters(self): enabled_filters = self.host_manager.enabled_filters self.assertEqual(1, len(enabled_filters)) self.assertIsInstance(enabled_filters[0], FakeFilterClass1) def test_choose_host_filters_not_found(self): self.assertRaises(exception.SchedulerHostFilterNotFound, self.host_manager._choose_host_filters, 'FakeFilterClass3') def test_choose_host_filters(self): # Test we return 1 correct filter object host_filters = self.host_manager._choose_host_filters( ['FakeFilterClass2']) self.assertEqual(1, len(host_filters)) self.assertIsInstance(host_filters[0], FakeFilterClass2) def test_host_manager_enabled_filters(self): enabled_filters = self.host_manager.enabled_filters self.assertEqual(1, len(enabled_filters)) self.assertIsInstance(enabled_filters[0], FakeFilterClass1) @mock.patch.object(ironic_host_manager.IronicHostManager, '_init_instance_info') @mock.patch.object(host_manager.HostManager, '_init_aggregates') def test_host_manager_enabled_filters_uses_baremetal(self, mock_init_agg, mock_init_inst): self.flags(use_baremetal_filters=True, group='filter_scheduler') host_manager = ironic_host_manager.IronicHostManager() # ensure the defaults come from scheduler.baremetal_enabled_filters # and not enabled_filters enabled_filters = host_manager.enabled_filters self.assertEqual(1, len(enabled_filters)) self.assertIsInstance(enabled_filters[0], FakeFilterClass2) def test_load_filters(self): # without scheduler.use_baremetal_filters filters = self.host_manager._load_filters() self.assertEqual(['FakeFilterClass1'], filters) def test_load_filters_baremetal(self): # with scheduler.use_baremetal_filters self.flags(use_baremetal_filters=True, group='filter_scheduler') filters = self.host_manager._load_filters() self.assertEqual(['FakeFilterClass2'], filters) def _mock_get_filtered_hosts(self, info): info['got_objs'] = [] info['got_fprops'] = [] def fake_filter_one(_self, obj, filter_props): info['got_objs'].append(obj) info['got_fprops'].append(filter_props) return True self.stub_out(__name__ + '.FakeFilterClass1._filter_one', fake_filter_one) def _verify_result(self, info, result, filters=True): for x in info['got_fprops']: self.assertEqual(x, info['expected_fprops']) if filters: self.assertEqual(set(info['expected_objs']), set(info['got_objs'])) self.assertEqual(set(info['expected_objs']), set(result)) def test_get_filtered_hosts(self): fake_properties = objects.RequestSpec( instance_uuid=uuids.instance, ignore_hosts=[], force_hosts=[], force_nodes=[]) info = {'expected_objs': self.fake_hosts, 'expected_fprops': fake_properties} self._mock_get_filtered_hosts(info) result = self.host_manager.get_filtered_hosts(self.fake_hosts, fake_properties) self._verify_result(info, result) def test_get_filtered_hosts_with_ignore(self): fake_properties = objects.RequestSpec( instance_uuid=uuids.instance, ignore_hosts=['fake_host1', 'fake_host3', 'fake_host5', 'fake_multihost'], force_hosts=[], force_nodes=[]) # [1] and [3] are host2 and host4 info = {'expected_objs': [self.fake_hosts[1], self.fake_hosts[3]], 'expected_fprops': fake_properties} self._mock_get_filtered_hosts(info) result = self.host_manager.get_filtered_hosts(self.fake_hosts, fake_properties) self._verify_result(info, result) def test_get_filtered_hosts_with_force_hosts(self): fake_properties = objects.RequestSpec( instance_uuid=uuids.instance, ignore_hosts=[], force_hosts=['fake_host1', 'fake_host3', 
'fake_host5'], force_nodes=[]) # [0] and [2] are host1 and host3 info = {'expected_objs': [self.fake_hosts[0], self.fake_hosts[2]], 'expected_fprops': fake_properties} self._mock_get_filtered_hosts(info) result = self.host_manager.get_filtered_hosts(self.fake_hosts, fake_properties) self._verify_result(info, result, False) def test_get_filtered_hosts_with_no_matching_force_hosts(self): fake_properties = objects.RequestSpec( instance_uuid=uuids.instance, ignore_hosts=[], force_hosts=['fake_host5', 'fake_host6'], force_nodes=[]) info = {'expected_objs': [], 'expected_fprops': fake_properties} self._mock_get_filtered_hosts(info) result = self.host_manager.get_filtered_hosts(self.fake_hosts, fake_properties) self._verify_result(info, result, False) def test_get_filtered_hosts_with_ignore_and_force_hosts(self): # Ensure ignore_hosts processed before force_hosts in host filters. fake_properties = objects.RequestSpec( instance_uuid=uuids.instance, ignore_hosts=['fake_host1'], force_hosts=['fake_host3', 'fake_host1'], force_nodes=[]) # only fake_host3 should be left. info = {'expected_objs': [self.fake_hosts[2]], 'expected_fprops': fake_properties} self._mock_get_filtered_hosts(info) result = self.host_manager.get_filtered_hosts(self.fake_hosts, fake_properties) self._verify_result(info, result, False) def test_get_filtered_hosts_with_force_host_and_many_nodes(self): # Ensure all nodes returned for a host with many nodes fake_properties = objects.RequestSpec( instance_uuid=uuids.instance, ignore_hosts=[], force_hosts=['fake_multihost'], force_nodes=[]) info = {'expected_objs': [self.fake_hosts[4], self.fake_hosts[5], self.fake_hosts[6], self.fake_hosts[7]], 'expected_fprops': fake_properties} self._mock_get_filtered_hosts(info) result = self.host_manager.get_filtered_hosts(self.fake_hosts, fake_properties) self._verify_result(info, result, False) def test_get_filtered_hosts_with_force_nodes(self): fake_properties = objects.RequestSpec( instance_uuid=uuids.instance, ignore_hosts=[], force_hosts=[], force_nodes=['fake-node2', 'fake-node4', 'fake-node9']) # [5] is fake-node2, [7] is fake-node4 info = {'expected_objs': [self.fake_hosts[5], self.fake_hosts[7]], 'expected_fprops': fake_properties} self._mock_get_filtered_hosts(info) result = self.host_manager.get_filtered_hosts(self.fake_hosts, fake_properties) self._verify_result(info, result, False) def test_get_filtered_hosts_with_force_hosts_and_nodes(self): # Ensure only overlapping results if both force host and node fake_properties = objects.RequestSpec( instance_uuid=uuids.instance, ignore_hosts=[], force_hosts=['fake_host1', 'fake_multihost'], force_nodes=['fake-node2', 'fake-node9']) # [5] is fake-node2 info = {'expected_objs': [self.fake_hosts[5]], 'expected_fprops': fake_properties} self._mock_get_filtered_hosts(info) result = self.host_manager.get_filtered_hosts(self.fake_hosts, fake_properties) self._verify_result(info, result, False) def test_get_filtered_hosts_with_force_hosts_and_wrong_nodes(self): # Ensure non-overlapping force_node and force_host yield no result fake_properties = objects.RequestSpec( instance_uuid=uuids.instance, ignore_hosts=[], force_hosts=['fake_multihost'], force_nodes=['fake-node']) info = {'expected_objs': [], 'expected_fprops': fake_properties} self._mock_get_filtered_hosts(info) result = self.host_manager.get_filtered_hosts(self.fake_hosts, fake_properties) self._verify_result(info, result, False) def test_get_filtered_hosts_with_ignore_hosts_and_force_nodes(self): # Ensure ignore_hosts can coexist with 
force_nodes fake_properties = objects.RequestSpec( instance_uuid=uuids.instance, ignore_hosts=['fake_host1', 'fake_host2'], force_hosts=[], force_nodes=['fake-node4', 'fake-node2']) info = {'expected_objs': [self.fake_hosts[5], self.fake_hosts[7]], 'expected_fprops': fake_properties} self._mock_get_filtered_hosts(info) result = self.host_manager.get_filtered_hosts(self.fake_hosts, fake_properties) self._verify_result(info, result, False) def test_get_filtered_hosts_with_ignore_hosts_and_force_same_nodes(self): # Ensure ignore_hosts is processed before force_nodes fake_properties = objects.RequestSpec( instance_uuid=uuids.instance, ignore_hosts=['fake_multihost'], force_hosts=[], force_nodes=['fake_node4', 'fake_node2']) info = {'expected_objs': [], 'expected_fprops': fake_properties} self._mock_get_filtered_hosts(info) result = self.host_manager.get_filtered_hosts(self.fake_hosts, fake_properties) self._verify_result(info, result, False) nova-17.0.1/nova/tests/unit/scheduler/test_chance_scheduler.py0000666000175000017500000001513313250073126024523 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Tests For Chance Scheduler. """ import mock from nova import exception from nova import objects from nova.scheduler import chance from nova.scheduler import host_manager from nova.tests.unit.scheduler import test_scheduler from nova.tests import uuidsentinel as uuids def _generate_fake_hosts(num): hosts = [] for i in range(num): fake_host_state = host_manager.HostState("host%s" % i, "fake_node", uuids.cell) fake_host_state.uuid = getattr(uuids, "host%s" % i) fake_host_state.limits = {} hosts.append(fake_host_state) return hosts class ChanceSchedulerTestCase(test_scheduler.SchedulerTestCase): """Test case for Chance Scheduler.""" driver_cls = chance.ChanceScheduler def test_filter_hosts_avoid(self): """Test to make sure _filter_hosts() filters original hosts if avoid_original_host is True. """ hosts = ['host1', 'host2', 'host3'] spec_obj = objects.RequestSpec(ignore_hosts=['host2']) filtered = self.driver._filter_hosts(hosts, spec_obj=spec_obj) self.assertEqual(filtered, ['host1', 'host3']) def test_filter_hosts_no_avoid(self): """Test to make sure _filter_hosts() does not filter original hosts if avoid_original_host is False. 
""" hosts = ['host1', 'host2', 'host3'] spec_obj = objects.RequestSpec(ignore_hosts=[]) filtered = self.driver._filter_hosts(hosts, spec_obj=spec_obj) self.assertEqual(filtered, hosts) @mock.patch("nova.scheduler.chance.ChanceScheduler.hosts_up") def test_select_destinations(self, mock_hosts_up): mock_hosts_up.return_value = _generate_fake_hosts(4) spec_obj = objects.RequestSpec(num_instances=2, ignore_hosts=None) dests = self.driver.select_destinations(self.context, spec_obj, [uuids.instance1, uuids.instance2], {}, mock.sentinel.provider_summaries) self.assertEqual(2, len(dests)) # Test that different hosts were returned self.assertIsNot(dests[0], dests[1]) @mock.patch("nova.scheduler.chance.ChanceScheduler._filter_hosts") @mock.patch("nova.scheduler.chance.ChanceScheduler.hosts_up") def test_select_destinations_no_valid_host(self, mock_hosts_up, mock_filter): mock_hosts_up.return_value = _generate_fake_hosts(4) mock_filter.return_value = [] spec_obj = objects.RequestSpec(num_instances=1) spec_obj.instance_uuid = uuids.instance self.assertRaises(exception.NoValidHost, self.driver.select_destinations, self.context, spec_obj, [spec_obj.instance_uuid], {}, mock.sentinel.provider_summaries) @mock.patch("nova.scheduler.chance.ChanceScheduler.hosts_up") def test_schedule_success_single_instance(self, mock_hosts_up): hosts = _generate_fake_hosts(20) mock_hosts_up.return_value = hosts spec_obj = objects.RequestSpec(num_instances=1, ignore_hosts=None) spec_obj.instance_uuid = uuids.instance # Set the max_attempts to 2 attempts = 2 expected = attempts self.flags(max_attempts=attempts, group="scheduler") selected_hosts = self.driver._schedule(self.context, "compute", spec_obj, [spec_obj.instance_uuid], return_alternates=True) self.assertEqual(1, len(selected_hosts)) for host_list in selected_hosts: self.assertEqual(expected, len(host_list)) # Now set max_attempts to a number larger than the available hosts. It # should return a host_list containing only as many hosts as there are # to choose from. attempts = len(hosts) + 1 expected = len(hosts) self.flags(max_attempts=attempts, group="scheduler") selected_hosts = self.driver._schedule(self.context, "compute", spec_obj, [spec_obj.instance_uuid], return_alternates=True) self.assertEqual(1, len(selected_hosts)) for host_list in selected_hosts: self.assertEqual(expected, len(host_list)) # Now verify that if we pass False for return_alternates, that we only # get one host in the host_list. 
attempts = 5 expected = 1 self.flags(max_attempts=attempts, group="scheduler") selected_hosts = self.driver._schedule(self.context, "compute", spec_obj, [spec_obj.instance_uuid], return_alternates=False) self.assertEqual(1, len(selected_hosts)) for host_list in selected_hosts: self.assertEqual(expected, len(host_list)) @mock.patch("nova.scheduler.chance.ChanceScheduler.hosts_up") def test_schedule_success_multiple_instances(self, mock_hosts_up): hosts = _generate_fake_hosts(20) mock_hosts_up.return_value = hosts num_instances = 4 spec_obj = objects.RequestSpec(num_instances=num_instances, ignore_hosts=None) instance_uuids = [getattr(uuids, "inst%s" % i) for i in range(num_instances)] spec_obj.instance_uuid = instance_uuids[0] # Set the max_attempts to 2 attempts = 2 self.flags(max_attempts=attempts, group="scheduler") selected_hosts = self.driver._schedule(self.context, "compute", spec_obj, instance_uuids, return_alternates=True) self.assertEqual(num_instances, len(selected_hosts)) for host_list in selected_hosts: self.assertEqual(attempts, len(host_list)) # Verify that none of the selected hosts appear as alternates # Set the max_attempts to 5 to get 4 alternates per instance attempts = 4 self.flags(max_attempts=attempts, group="scheduler") result = self.driver._schedule(self.context, "compute", spec_obj, instance_uuids) selected = [host_list[0] for host_list in result] for host_list in result: for sel in selected: self.assertNotIn(sel, host_list[1:]) nova-17.0.1/nova/tests/unit/scheduler/test_host_filters.py0000666000175000017500000000276113250073126023754 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Tests For Scheduler Host Filters. """ from nova.scheduler import filters from nova.scheduler.filters import all_hosts_filter from nova.scheduler.filters import compute_filter from nova import test from nova.tests.unit.scheduler import fakes class HostFiltersTestCase(test.NoDBTestCase): def test_filter_handler(self): # Double check at least a couple of known filters exist filter_handler = filters.HostFilterHandler() classes = filter_handler.get_matching_classes( ['nova.scheduler.filters.all_filters']) self.assertIn(all_hosts_filter.AllHostsFilter, classes) self.assertIn(compute_filter.ComputeFilter, classes) def test_all_host_filter(self): filt_cls = all_hosts_filter.AllHostsFilter() host = fakes.FakeHostState('host1', 'node1', {}) self.assertTrue(filt_cls.host_passes(host, {})) nova-17.0.1/nova/tests/unit/scheduler/test_caching_scheduler.py0000666000175000017500000002732713250073126024706 0ustar zuulzuul00000000000000# Copyright (c) 2014 Rackspace Hosting # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from oslo_utils import timeutils from six.moves import range from nova import exception from nova import objects from nova.scheduler import caching_scheduler from nova.scheduler import host_manager from nova.tests.unit.scheduler import test_scheduler from nova.tests import uuidsentinel as uuids ENABLE_PROFILER = False class CachingSchedulerTestCase(test_scheduler.SchedulerTestCase): """Test case for Caching Scheduler.""" driver_cls = caching_scheduler.CachingScheduler @mock.patch.object(caching_scheduler.CachingScheduler, "_get_up_hosts") def test_run_periodic_tasks_loads_hosts(self, mock_up_hosts): mock_up_hosts.return_value = [] context = mock.Mock() self.driver.run_periodic_tasks(context) self.assertTrue(mock_up_hosts.called) self.assertEqual([], self.driver.all_host_states) context.elevated.assert_called_with() @mock.patch.object(caching_scheduler.CachingScheduler, "_get_up_hosts") def test_get_all_host_states_returns_cached_value(self, mock_up_hosts): self.driver.all_host_states = {uuids.cell: []} self.driver._get_all_host_states(self.context, None, mock.sentinel.provider_uuids) self.assertFalse(mock_up_hosts.called) self.assertEqual({uuids.cell: []}, self.driver.all_host_states) @mock.patch.object(caching_scheduler.CachingScheduler, "_get_up_hosts") def test_get_all_host_states_loads_hosts(self, mock_up_hosts): host_state = self._get_fake_host_state() mock_up_hosts.return_value = {uuids.cell: [host_state]} result = self.driver._get_all_host_states(self.context, None, mock.sentinel.provider_uuids) self.assertTrue(mock_up_hosts.called) self.assertEqual({uuids.cell: [host_state]}, self.driver.all_host_states) self.assertEqual([host_state], list(result)) def test_get_up_hosts(self): with mock.patch.object(self.driver.host_manager, "get_all_host_states") as mock_get_hosts: host_state = self._get_fake_host_state() mock_get_hosts.return_value = [host_state] result = self.driver._get_up_hosts(self.context) self.assertTrue(mock_get_hosts.called) self.assertEqual({uuids.cell: [host_state]}, result) def test_select_destination_raises_with_no_hosts(self): spec_obj = self._get_fake_request_spec() self.driver.all_host_states = {uuids.cell: []} self.assertRaises(exception.NoValidHost, self.driver.select_destinations, self.context, spec_obj, [spec_obj.instance_uuid], {}, {}) @mock.patch('nova.db.instance_extra_get_by_instance_uuid', return_value={'numa_topology': None, 'pci_requests': None}) def test_select_destination_works(self, mock_get_extra): spec_obj = self._get_fake_request_spec() fake_host = self._get_fake_host_state() self.driver.all_host_states = {uuids.cell: [fake_host]} result = self._test_select_destinations(spec_obj) self.assertEqual(1, len(result)) self.assertEqual(result[0][0].service_host, fake_host.host) def _test_select_destinations(self, spec_obj): provider_summaries = {} for cell_hosts in self.driver.all_host_states.values(): for hs in cell_hosts: provider_summaries[hs.uuid] = hs return self.driver.select_destinations( self.context, spec_obj, [spec_obj.instance_uuid], {}, provider_summaries) def _get_fake_request_spec(self): # NOTE(sbauza): Prevent to stub the 
Flavor.get_by_id call just by # directly providing a Flavor object flavor = objects.Flavor( flavorid="small", memory_mb=512, root_gb=1, ephemeral_gb=1, vcpus=1, swap=0, ) instance_properties = { "os_type": "linux", "project_id": "1234", } request_spec = objects.RequestSpec( flavor=flavor, num_instances=1, ignore_hosts=None, force_hosts=None, force_nodes=None, retry=None, availability_zone=None, image=None, instance_group=None, pci_requests=None, numa_topology=None, instance_uuid='aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa', **instance_properties ) return request_spec def _get_fake_host_state(self, index=0): host_state = host_manager.HostState( 'host_%s' % index, 'node_%s' % index, uuids.cell) host_state.uuid = getattr(uuids, 'host_%s' % index) host_state.free_ram_mb = 50000 host_state.total_usable_ram_mb = 50000 host_state.free_disk_mb = 4096 host_state.total_usable_disk_gb = 4 host_state.service = { "disabled": False, "updated_at": timeutils.utcnow(), "created_at": timeutils.utcnow(), } host_state.cpu_allocation_ratio = 16.0 host_state.ram_allocation_ratio = 1.5 host_state.disk_allocation_ratio = 1.0 host_state.metrics = objects.MonitorMetricList(objects=[]) return host_state @mock.patch('nova.db.instance_extra_get_by_instance_uuid', return_value={'numa_topology': None, 'pci_requests': None}) def test_performance_check_select_destination(self, mock_get_extra): hosts = 2 requests = 1 self.flags(service_down_time=240) spec_obj = self._get_fake_request_spec() host_states = [] for x in range(hosts): host_state = self._get_fake_host_state(x) host_states.append(host_state) self.driver.all_host_states = {uuids.cell: host_states} provider_summaries = {hs.uuid: hs for hs in host_states} def run_test(): a = timeutils.utcnow() for x in range(requests): self.driver.select_destinations(self.context, spec_obj, [spec_obj.instance_uuid], {}, provider_summaries) b = timeutils.utcnow() c = b - a seconds = (c.days * 24 * 60 * 60 + c.seconds) microseconds = seconds * 1000 + c.microseconds / 1000.0 per_request_ms = microseconds / requests return per_request_ms per_request_ms = None if ENABLE_PROFILER: import pycallgraph from pycallgraph import output config = pycallgraph.Config(max_depth=10) config.trace_filter = pycallgraph.GlobbingFilter(exclude=[ 'pycallgraph.*', 'unittest.*', 'testtools.*', 'nova.tests.unit.*', ]) graphviz = output.GraphvizOutput(output_file='scheduler.png') with pycallgraph.PyCallGraph(output=graphviz): per_request_ms = run_test() else: per_request_ms = run_test() # This has proved to be around 1 ms on a random dev box # But this is here so you can do simply performance testing easily. self.assertLess(per_request_ms, 1000) def test_request_single_cell(self): spec_obj = self._get_fake_request_spec() spec_obj.requested_destination = objects.Destination( cell=objects.CellMapping(uuid=uuids.cell2)) host_states_cell1 = [self._get_fake_host_state(i) for i in range(1, 5)] host_states_cell2 = [self._get_fake_host_state(i) for i in range(5, 10)] self.driver.all_host_states = { uuids.cell1: host_states_cell1, uuids.cell2: host_states_cell2, } provider_summaries = { cn.uuid: cn for cn in host_states_cell1 + host_states_cell2 } d = self.driver.select_destinations(self.context, spec_obj, [spec_obj.instance_uuid], {}, provider_summaries) self.assertIn(d[0][0].service_host, [hs.host for hs in host_states_cell2]) @mock.patch("nova.scheduler.host_manager.HostState.consume_from_request") @mock.patch("nova.scheduler.caching_scheduler.CachingScheduler." 
"_get_sorted_hosts") @mock.patch("nova.scheduler.caching_scheduler.CachingScheduler." "_get_all_host_states") def test_alternates_same_cell(self, mock_get_all_hosts, mock_sorted, mock_consume): """Tests getting hosts plus alternates where the hosts are spread across two cells. """ all_host_states = [] for num in range(10): host_name = "host%s" % num cell_uuid = uuids.cell1 if num % 2 else uuids.cell2 hs = host_manager.HostState(host_name, "node%s" % num, cell_uuid) hs.uuid = getattr(uuids, host_name) all_host_states.append(hs) mock_get_all_hosts.return_value = all_host_states # There are two instances, so _get_sorted_hosts will be called once # per instance, and then once again before picking alternates. mock_sorted.side_effect = [all_host_states, list(reversed(all_host_states)), all_host_states] total_returned = 3 self.flags(max_attempts=total_returned, group="scheduler") instance_uuids = [uuids.inst1, uuids.inst2] num_instances = len(instance_uuids) spec_obj = objects.RequestSpec( num_instances=num_instances, flavor=objects.Flavor(memory_mb=512, root_gb=512, ephemeral_gb=0, swap=0, vcpus=1), project_id=uuids.project_id, instance_group=None) dests = self.driver._schedule(self.context, spec_obj, instance_uuids, None, None, return_alternates=True) # There should be max_attempts hosts per instance (1 selected, 2 alts) self.assertEqual(total_returned, len(dests[0])) self.assertEqual(total_returned, len(dests[1])) # Verify that the two selected hosts are not in the same cell. self.assertNotEqual(dests[0][0].cell_uuid, dests[1][0].cell_uuid) for dest in dests: selected_host = dest[0] selected_cell_uuid = selected_host.cell_uuid for alternate in dest[1:]: self.assertEqual(alternate.cell_uuid, selected_cell_uuid) if __name__ == '__main__': # A handy tool to help profile the schedulers performance ENABLE_PROFILER = True import testtools suite = testtools.ConcurrentTestSuite() test = "test_performance_check_select_destination" test_case = CachingSchedulerTestCase(test) suite.addTest(test_case) runner = testtools.TextTestResult.TextTestRunner() runner.run(suite) nova-17.0.1/nova/tests/unit/scheduler/ironic_fakes.py0000666000175000017500000001132213250073126022635 0ustar zuulzuul00000000000000# Copyright 2014 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Fake nodes for Ironic host manager tests. 
""" from nova import objects from nova.tests import uuidsentinel as uuids COMPUTE_NODES = [ objects.ComputeNode( id=1, local_gb=10, memory_mb=1024, vcpus=1, vcpus_used=0, local_gb_used=0, memory_mb_used=0, updated_at=None, cpu_info='baremetal cpu', host='host1', hypervisor_hostname='node1uuid', host_ip='127.0.0.1', hypervisor_version=1, hypervisor_type='ironic', stats=dict(ironic_driver= "nova.virt.ironic.driver.IronicDriver", cpu_arch='i386'), supported_hv_specs=[objects.HVSpec.from_list( ["i386", "baremetal", "baremetal"])], free_disk_gb=10, free_ram_mb=1024, cpu_allocation_ratio=16.0, ram_allocation_ratio=1.5, disk_allocation_ratio=1.0, uuid=uuids.compute_node_1), objects.ComputeNode( id=2, local_gb=20, memory_mb=2048, vcpus=1, vcpus_used=0, local_gb_used=0, memory_mb_used=0, updated_at=None, cpu_info='baremetal cpu', host='host2', hypervisor_hostname='node2uuid', host_ip='127.0.0.1', hypervisor_version=1, hypervisor_type='ironic', stats=dict(ironic_driver= "nova.virt.ironic.driver.IronicDriver", cpu_arch='i386'), supported_hv_specs=[objects.HVSpec.from_list( ["i386", "baremetal", "baremetal"])], free_disk_gb=20, free_ram_mb=2048, cpu_allocation_ratio=16.0, ram_allocation_ratio=1.5, disk_allocation_ratio=1.0, uuid=uuids.compute_node_2), objects.ComputeNode( id=3, local_gb=30, memory_mb=3072, vcpus=1, vcpus_used=0, local_gb_used=0, memory_mb_used=0, updated_at=None, cpu_info='baremetal cpu', host='host3', hypervisor_hostname='node3uuid', host_ip='127.0.0.1', hypervisor_version=1, hypervisor_type='ironic', stats=dict(ironic_driver= "nova.virt.ironic.driver.IronicDriver", cpu_arch='i386'), supported_hv_specs=[objects.HVSpec.from_list( ["i386", "baremetal", "baremetal"])], free_disk_gb=30, free_ram_mb=3072, cpu_allocation_ratio=16.0, ram_allocation_ratio=1.5, disk_allocation_ratio=1.0, uuid=uuids.compute_node_3), objects.ComputeNode( id=4, local_gb=40, memory_mb=4096, vcpus=1, vcpus_used=0, local_gb_used=0, memory_mb_used=0, updated_at=None, cpu_info='baremetal cpu', host='host4', hypervisor_hostname='node4uuid', host_ip='127.0.0.1', hypervisor_version=1, hypervisor_type='ironic', stats=dict(ironic_driver= "nova.virt.ironic.driver.IronicDriver", cpu_arch='i386'), supported_hv_specs=[objects.HVSpec.from_list( ["i386", "baremetal", "baremetal"])], free_disk_gb=40, free_ram_mb=4096, cpu_allocation_ratio=16.0, ram_allocation_ratio=1.5, disk_allocation_ratio=1.0, uuid=uuids.compute_node_4), # Broken entry objects.ComputeNode( id=5, local_gb=50, memory_mb=5120, vcpus=1, host='fake', cpu_info='baremetal cpu', stats=dict(ironic_driver= "nova.virt.ironic.driver.IronicDriver", cpu_arch='i386'), supported_hv_specs=[objects.HVSpec.from_list( ["i386", "baremetal", "baremetal"])], free_disk_gb=50, free_ram_mb=5120, hypervisor_hostname='fake-hyp', uuid=uuids.compute_node_5), ] SERVICES = [ objects.Service(host='host1', disabled=False), objects.Service(host='host2', disabled=True), objects.Service(host='host3', disabled=False), objects.Service(host='host4', disabled=False), ] def get_service_by_host(host): services = [service for service in SERVICES if service.host == host] return services[0] nova-17.0.1/nova/tests/unit/scheduler/test_host_manager.py0000666000175000017500000020227113250073126023714 0ustar zuulzuul00000000000000# Copyright (c) 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Tests For HostManager """ import collections import contextlib import datetime import mock from oslo_serialization import jsonutils from oslo_utils import versionutils import six import nova from nova.compute import task_states from nova.compute import vm_states from nova import context as nova_context from nova import exception from nova import objects from nova.objects import base as obj_base from nova.pci import stats as pci_stats from nova.scheduler import filters from nova.scheduler import host_manager from nova import test from nova.tests import fixtures from nova.tests.unit import fake_instance from nova.tests.unit import matchers from nova.tests.unit.scheduler import fakes from nova.tests import uuidsentinel as uuids class FakeFilterClass1(filters.BaseHostFilter): def host_passes(self, host_state, filter_properties): pass class FakeFilterClass2(filters.BaseHostFilter): def host_passes(self, host_state, filter_properties): pass class HostManagerTestCase(test.NoDBTestCase): """Test case for HostManager class.""" @mock.patch.object(host_manager.HostManager, '_init_instance_info') @mock.patch.object(host_manager.HostManager, '_init_aggregates') def setUp(self, mock_init_agg, mock_init_inst): super(HostManagerTestCase, self).setUp() self.flags(available_filters=[ __name__ + '.FakeFilterClass1', __name__ + '.FakeFilterClass2'], group='filter_scheduler') self.flags(enabled_filters=['FakeFilterClass1'], group='filter_scheduler') self.host_manager = host_manager.HostManager() cell = uuids.cell self.fake_hosts = [host_manager.HostState('fake_host%s' % x, 'fake-node', cell) for x in range(1, 5)] self.fake_hosts += [host_manager.HostState('fake_multihost', 'fake-node%s' % x, cell) for x in range(1, 5)] self.useFixture(fixtures.SpawnIsSynchronousFixture()) def test_load_filters(self): filters = self.host_manager._load_filters() self.assertEqual(filters, ['FakeFilterClass1']) @mock.patch.object(nova.objects.InstanceList, 'get_by_filters') @mock.patch.object(nova.objects.ComputeNodeList, 'get_all') def test_init_instance_info_batches(self, mock_get_all, mock_get_by_filters): cn_list = objects.ComputeNodeList() for num in range(22): host_name = 'host_%s' % num cn_list.objects.append(objects.ComputeNode(host=host_name)) mock_get_all.return_value = cn_list self.host_manager._init_instance_info() self.assertEqual(mock_get_by_filters.call_count, 3) @mock.patch.object(nova.objects.InstanceList, 'get_by_filters') @mock.patch.object(nova.objects.ComputeNodeList, 'get_all') def test_init_instance_info(self, mock_get_all, mock_get_by_filters): cn1 = objects.ComputeNode(host='host1') cn2 = objects.ComputeNode(host='host2') inst1 = objects.Instance(host='host1', uuid=uuids.instance_1) inst2 = objects.Instance(host='host1', uuid=uuids.instance_2) inst3 = objects.Instance(host='host2', uuid=uuids.instance_3) mock_get_all.return_value = objects.ComputeNodeList(objects=[cn1, cn2]) mock_get_by_filters.return_value = objects.InstanceList( objects=[inst1, inst2, inst3]) hm = self.host_manager hm._instance_info = {} hm._init_instance_info() self.assertEqual(len(hm._instance_info), 2) fake_info = 
hm._instance_info['host1'] self.assertIn(uuids.instance_1, fake_info['instances']) self.assertIn(uuids.instance_2, fake_info['instances']) self.assertNotIn(uuids.instance_3, fake_info['instances']) exp_filters = {'deleted': False, 'host': [u'host1', u'host2']} mock_get_by_filters.assert_called_once_with(mock.ANY, exp_filters) @mock.patch.object(nova.objects.InstanceList, 'get_by_filters') @mock.patch.object(nova.objects.ComputeNodeList, 'get_all') def test_init_instance_info_compute_nodes(self, mock_get_all, mock_get_by_filters): cn1 = objects.ComputeNode(host='host1') cn2 = objects.ComputeNode(host='host2') inst1 = objects.Instance(host='host1', uuid=uuids.instance_1) inst2 = objects.Instance(host='host1', uuid=uuids.instance_2) inst3 = objects.Instance(host='host2', uuid=uuids.instance_3) cell = objects.CellMapping(database_connection='', target_url='') mock_get_by_filters.return_value = objects.InstanceList( objects=[inst1, inst2, inst3]) hm = self.host_manager hm._instance_info = {} hm._init_instance_info({cell: [cn1, cn2]}) self.assertEqual(len(hm._instance_info), 2) fake_info = hm._instance_info['host1'] self.assertIn(uuids.instance_1, fake_info['instances']) self.assertIn(uuids.instance_2, fake_info['instances']) self.assertNotIn(uuids.instance_3, fake_info['instances']) exp_filters = {'deleted': False, 'host': [u'host1', u'host2']} mock_get_by_filters.assert_called_once_with(mock.ANY, exp_filters) # should not be called if the list of nodes was passed explicitly self.assertFalse(mock_get_all.called) def test_enabled_filters(self): enabled_filters = self.host_manager.enabled_filters self.assertEqual(1, len(enabled_filters)) self.assertIsInstance(enabled_filters[0], FakeFilterClass1) @mock.patch.object(host_manager.HostManager, '_init_instance_info') @mock.patch.object(objects.AggregateList, 'get_all') def test_init_aggregates_no_aggs(self, agg_get_all, mock_init_info): agg_get_all.return_value = [] self.host_manager = host_manager.HostManager() self.assertEqual({}, self.host_manager.aggs_by_id) self.assertEqual({}, self.host_manager.host_aggregates_map) @mock.patch.object(host_manager.HostManager, '_init_instance_info') @mock.patch.object(objects.AggregateList, 'get_all') def test_init_aggregates_one_agg_no_hosts(self, agg_get_all, mock_init_info): fake_agg = objects.Aggregate(id=1, hosts=[]) agg_get_all.return_value = [fake_agg] self.host_manager = host_manager.HostManager() self.assertEqual({1: fake_agg}, self.host_manager.aggs_by_id) self.assertEqual({}, self.host_manager.host_aggregates_map) @mock.patch.object(host_manager.HostManager, '_init_instance_info') @mock.patch.object(objects.AggregateList, 'get_all') def test_init_aggregates_one_agg_with_hosts(self, agg_get_all, mock_init_info): fake_agg = objects.Aggregate(id=1, hosts=['fake-host']) agg_get_all.return_value = [fake_agg] self.host_manager = host_manager.HostManager() self.assertEqual({1: fake_agg}, self.host_manager.aggs_by_id) self.assertEqual({'fake-host': set([1])}, self.host_manager.host_aggregates_map) def test_update_aggregates(self): fake_agg = objects.Aggregate(id=1, hosts=['fake-host']) self.host_manager.update_aggregates([fake_agg]) self.assertEqual({1: fake_agg}, self.host_manager.aggs_by_id) self.assertEqual({'fake-host': set([1])}, self.host_manager.host_aggregates_map) def test_update_aggregates_remove_hosts(self): fake_agg = objects.Aggregate(id=1, hosts=['fake-host']) self.host_manager.update_aggregates([fake_agg]) self.assertEqual({1: fake_agg}, self.host_manager.aggs_by_id) 
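        # HostManager keeps two views of the same data in sync: aggs_by_id
        # (aggregate id -> Aggregate) and host_aggregates_map (host name ->
        # set of aggregate ids). A minimal plain-dict sketch of the invariant
        # the paired assertions in these aggregate tests check (names below
        # are illustrative stand-ins, not the real attributes):
        #
        #     aggs_by_id = {1: fake_agg}        # fake_agg.hosts == ['fake-host']
        #     host_aggregates_map = {'fake-host': {1}}
        #     # after the host is removed from the aggregate, the id set is
        #     # emptied but the host key itself is kept:
        #     fake_agg.hosts = []
        #     host_aggregates_map == {'fake-host': set()}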
self.assertEqual({'fake-host': set([1])}, self.host_manager.host_aggregates_map) # Let's remove the host from the aggregate and update again fake_agg.hosts = [] self.host_manager.update_aggregates([fake_agg]) self.assertEqual({1: fake_agg}, self.host_manager.aggs_by_id) self.assertEqual({'fake-host': set([])}, self.host_manager.host_aggregates_map) def test_delete_aggregate(self): fake_agg = objects.Aggregate(id=1, hosts=['fake-host']) self.host_manager.host_aggregates_map = collections.defaultdict( set, {'fake-host': set([1])}) self.host_manager.aggs_by_id = {1: fake_agg} self.host_manager.delete_aggregate(fake_agg) self.assertEqual({}, self.host_manager.aggs_by_id) self.assertEqual({'fake-host': set([])}, self.host_manager.host_aggregates_map) def test_choose_host_filters_not_found(self): self.assertRaises(exception.SchedulerHostFilterNotFound, self.host_manager._choose_host_filters, 'FakeFilterClass3') def test_choose_host_filters(self): # Test we return 1 correct filter object host_filters = self.host_manager._choose_host_filters( ['FakeFilterClass2']) self.assertEqual(1, len(host_filters)) self.assertIsInstance(host_filters[0], FakeFilterClass2) def _mock_get_filtered_hosts(self, info): info['got_objs'] = [] info['got_fprops'] = [] def fake_filter_one(_self, obj, filter_props): info['got_objs'].append(obj) info['got_fprops'].append(filter_props) return True self.stub_out(__name__ + '.FakeFilterClass1._filter_one', fake_filter_one) def _verify_result(self, info, result, filters=True): for x in info['got_fprops']: self.assertEqual(x, info['expected_fprops']) if filters: self.assertEqual(set(info['expected_objs']), set(info['got_objs'])) self.assertEqual(set(info['expected_objs']), set(result)) def test_get_filtered_hosts(self): fake_properties = objects.RequestSpec(ignore_hosts=[], instance_uuid=uuids.instance, force_hosts=[], force_nodes=[]) info = {'expected_objs': self.fake_hosts, 'expected_fprops': fake_properties} self._mock_get_filtered_hosts(info) result = self.host_manager.get_filtered_hosts(self.fake_hosts, fake_properties) self._verify_result(info, result) def test_get_filtered_hosts_with_requested_destination(self): dest = objects.Destination(host='fake_host1', node='fake-node') fake_properties = objects.RequestSpec(requested_destination=dest, ignore_hosts=[], instance_uuid=uuids.fake_uuid1, force_hosts=[], force_nodes=[]) info = {'expected_objs': [self.fake_hosts[0]], 'expected_fprops': fake_properties} self._mock_get_filtered_hosts(info) result = self.host_manager.get_filtered_hosts(self.fake_hosts, fake_properties) self._verify_result(info, result) def test_get_filtered_hosts_with_wrong_requested_destination(self): dest = objects.Destination(host='dummy', node='fake-node') fake_properties = objects.RequestSpec(requested_destination=dest, ignore_hosts=[], instance_uuid=uuids.fake_uuid1, force_hosts=[], force_nodes=[]) info = {'expected_objs': [], 'expected_fprops': fake_properties} self._mock_get_filtered_hosts(info) result = self.host_manager.get_filtered_hosts(self.fake_hosts, fake_properties) self._verify_result(info, result) def test_get_filtered_hosts_with_ignore(self): fake_properties = objects.RequestSpec( instance_uuid=uuids.instance, ignore_hosts=['fake_host1', 'fake_host3', 'fake_host5', 'fake_multihost'], force_hosts=[], force_nodes=[]) # [1] and [3] are host2 and host4 info = {'expected_objs': [self.fake_hosts[1], self.fake_hosts[3]], 'expected_fprops': fake_properties} self._mock_get_filtered_hosts(info) result = 
self.host_manager.get_filtered_hosts(self.fake_hosts,
                fake_properties)
        self._verify_result(info, result)

    def test_get_filtered_hosts_with_ignore_case_insensitive(self):
        fake_properties = objects.RequestSpec(
            instance_uuid=uuids.fakehost,
            ignore_hosts=['FAKE_HOST1', 'FaKe_HoSt3', 'Fake_Multihost'],
            force_hosts=[],
            force_nodes=[])

        # [1] and [3] are host2 and host4
        info = {'expected_objs': [self.fake_hosts[1], self.fake_hosts[3]],
                'expected_fprops': fake_properties}

        self._mock_get_filtered_hosts(info)

        result = self.host_manager.get_filtered_hosts(self.fake_hosts,
                fake_properties)
        self._verify_result(info, result)

    def test_get_filtered_hosts_with_force_hosts(self):
        fake_properties = objects.RequestSpec(
            instance_uuid=uuids.instance,
            ignore_hosts=[],
            force_hosts=['fake_host1', 'fake_host3', 'fake_host5'],
            force_nodes=[])

        # [0] and [2] are host1 and host3
        info = {'expected_objs': [self.fake_hosts[0], self.fake_hosts[2]],
                'expected_fprops': fake_properties}

        self._mock_get_filtered_hosts(info)

        result = self.host_manager.get_filtered_hosts(self.fake_hosts,
                fake_properties)
        self._verify_result(info, result, False)

    def test_get_filtered_hosts_with_force_case_insensitive(self):
        fake_properties = objects.RequestSpec(
            instance_uuid=uuids.fakehost,
            ignore_hosts=[],
            force_hosts=['FAKE_HOST1', 'FaKe_HoSt3', 'fake_host4',
                         'faKe_host5'],
            force_nodes=[])

        # [0], [2] and [3] are host1, host3 and host4
        info = {'expected_objs': [self.fake_hosts[0], self.fake_hosts[2],
                                  self.fake_hosts[3]],
                'expected_fprops': fake_properties}

        self._mock_get_filtered_hosts(info)

        result = self.host_manager.get_filtered_hosts(self.fake_hosts,
                fake_properties)
        self._verify_result(info, result, False)

    def test_get_filtered_hosts_with_no_matching_force_hosts(self):
        fake_properties = objects.RequestSpec(
            instance_uuid=uuids.instance,
            ignore_hosts=[],
            force_hosts=['fake_host5', 'fake_host6'],
            force_nodes=[])

        info = {'expected_objs': [],
                'expected_fprops': fake_properties}

        self._mock_get_filtered_hosts(info)

        with mock.patch.object(self.host_manager.filter_handler,
                'get_filtered_objects') as fake_filter:
            result = self.host_manager.get_filtered_hosts(self.fake_hosts,
                fake_properties)
            self.assertFalse(fake_filter.called)

        self._verify_result(info, result, False)

    def test_get_filtered_hosts_with_ignore_and_force_hosts(self):
        # Ensure ignore_hosts is processed before force_hosts in the host
        # filters.
        fake_properties = objects.RequestSpec(
            instance_uuid=uuids.instance,
            ignore_hosts=['fake_host1'],
            force_hosts=['fake_host3', 'fake_host1'],
            force_nodes=[])

        # only fake_host3 should be left.
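        # (_get_filtered_hosts() applies ignore_hosts before it matches
        # force_hosts, so a host named in both lists never survives to the
        # forcing step. A rough sketch of that ordering, with hypothetical
        # local names:
        #
        #     hosts = [h for h in hosts if h.host.lower() not in ignored]
        #     hosts = [h for h in hosts if h.host.lower() in forced]
        #
        # which is why fake_host1 drops out here even though it is forced.)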
info = {'expected_objs': [self.fake_hosts[2]], 'expected_fprops': fake_properties} self._mock_get_filtered_hosts(info) result = self.host_manager.get_filtered_hosts(self.fake_hosts, fake_properties) self._verify_result(info, result, False) def test_get_filtered_hosts_with_force_host_and_many_nodes(self): # Ensure all nodes returned for a host with many nodes fake_properties = objects.RequestSpec( instance_uuid=uuids.instance, ignore_hosts=[], force_hosts=['fake_multihost'], force_nodes=[]) info = {'expected_objs': [self.fake_hosts[4], self.fake_hosts[5], self.fake_hosts[6], self.fake_hosts[7]], 'expected_fprops': fake_properties} self._mock_get_filtered_hosts(info) result = self.host_manager.get_filtered_hosts(self.fake_hosts, fake_properties) self._verify_result(info, result, False) def test_get_filtered_hosts_with_force_nodes(self): fake_properties = objects.RequestSpec( instance_uuid=uuids.instance, ignore_hosts=[], force_hosts=[], force_nodes=['fake-node2', 'fake-node4', 'fake-node9']) # [5] is fake-node2, [7] is fake-node4 info = {'expected_objs': [self.fake_hosts[5], self.fake_hosts[7]], 'expected_fprops': fake_properties} self._mock_get_filtered_hosts(info) result = self.host_manager.get_filtered_hosts(self.fake_hosts, fake_properties) self._verify_result(info, result, False) def test_get_filtered_hosts_with_force_hosts_and_nodes(self): # Ensure only overlapping results if both force host and node fake_properties = objects.RequestSpec( instance_uuid=uuids.instance, ignore_hosts=[], force_hosts=['fake-host1', 'fake_multihost'], force_nodes=['fake-node2', 'fake-node9']) # [5] is fake-node2 info = {'expected_objs': [self.fake_hosts[5]], 'expected_fprops': fake_properties} self._mock_get_filtered_hosts(info) result = self.host_manager.get_filtered_hosts(self.fake_hosts, fake_properties) self._verify_result(info, result, False) def test_get_filtered_hosts_with_force_hosts_and_wrong_nodes(self): # Ensure non-overlapping force_node and force_host yield no result fake_properties = objects.RequestSpec( instance_uuid=uuids.instance, ignore_hosts=[], force_hosts=['fake_multihost'], force_nodes=['fake-node']) info = {'expected_objs': [], 'expected_fprops': fake_properties} self._mock_get_filtered_hosts(info) result = self.host_manager.get_filtered_hosts(self.fake_hosts, fake_properties) self._verify_result(info, result, False) def test_get_filtered_hosts_with_ignore_hosts_and_force_nodes(self): # Ensure ignore_hosts can coexist with force_nodes fake_properties = objects.RequestSpec( instance_uuid=uuids.instance, ignore_hosts=['fake_host1', 'fake_host2'], force_hosts=[], force_nodes=['fake-node4', 'fake-node2']) info = {'expected_objs': [self.fake_hosts[5], self.fake_hosts[7]], 'expected_fprops': fake_properties} self._mock_get_filtered_hosts(info) result = self.host_manager.get_filtered_hosts(self.fake_hosts, fake_properties) self._verify_result(info, result, False) def test_get_filtered_hosts_with_ignore_hosts_and_force_same_nodes(self): # Ensure ignore_hosts is processed before force_nodes fake_properties = objects.RequestSpec( instance_uuid=uuids.instance, ignore_hosts=['fake_multihost'], force_hosts=[], force_nodes=['fake_node4', 'fake_node2']) info = {'expected_objs': [], 'expected_fprops': fake_properties} self._mock_get_filtered_hosts(info) result = self.host_manager.get_filtered_hosts(self.fake_hosts, fake_properties) self._verify_result(info, result, False) @mock.patch('nova.scheduler.host_manager.LOG') @mock.patch('nova.objects.ServiceList.get_by_binary') 
@mock.patch('nova.objects.ComputeNodeList.get_all') @mock.patch('nova.objects.InstanceList.get_by_host') def test_get_all_host_states(self, mock_get_by_host, mock_get_all, mock_get_by_binary, mock_log): mock_get_by_host.return_value = objects.InstanceList() mock_get_all.return_value = fakes.COMPUTE_NODES mock_get_by_binary.return_value = fakes.SERVICES context = 'fake_context' # get_all_host_states returns a generator, so make a map from it host_states_map = {(state.host, state.nodename): state for state in self.host_manager.get_all_host_states(context)} self.assertEqual(4, len(host_states_map)) calls = [ mock.call( "Host %(hostname)s has more disk space than database " "expected (%(physical)s GB > %(database)s GB)", {'physical': 3333, 'database': 3072, 'hostname': 'node3'} ), mock.call( "No compute service record found for host %(host)s", {'host': 'fake'} ) ] self.assertEqual(calls, mock_log.warning.call_args_list) # Check that .service is set properly for i in range(4): compute_node = fakes.COMPUTE_NODES[i] host = compute_node.host node = compute_node.hypervisor_hostname state_key = (host, node) self.assertEqual(host_states_map[state_key].service, obj_base.obj_to_primitive(fakes.get_service_by_host(host))) self.assertEqual(host_states_map[('host1', 'node1')].free_ram_mb, 512) # 511GB self.assertEqual(host_states_map[('host1', 'node1')].free_disk_mb, 524288) self.assertEqual(host_states_map[('host2', 'node2')].free_ram_mb, 1024) # 1023GB self.assertEqual(host_states_map[('host2', 'node2')].free_disk_mb, 1048576) self.assertEqual(host_states_map[('host3', 'node3')].free_ram_mb, 3072) # 3071GB self.assertEqual(host_states_map[('host3', 'node3')].free_disk_mb, 3145728) self.assertThat( objects.NUMATopology.obj_from_db_obj( host_states_map[('host3', 'node3')].numa_topology )._to_dict(), matchers.DictMatches(fakes.NUMA_TOPOLOGY._to_dict())) self.assertEqual(host_states_map[('host4', 'node4')].free_ram_mb, 8192) # 8191GB self.assertEqual(host_states_map[('host4', 'node4')].free_disk_mb, 8388608) @mock.patch.object(nova.objects.InstanceList, 'get_by_host') @mock.patch.object(host_manager.HostState, '_update_from_compute_node') @mock.patch.object(objects.ComputeNodeList, 'get_all') @mock.patch.object(objects.ServiceList, 'get_by_binary') def test_get_all_host_states_with_no_aggs(self, svc_get_by_binary, cn_get_all, update_from_cn, mock_get_by_host): svc_get_by_binary.return_value = [objects.Service(host='fake')] cn_get_all.return_value = [ objects.ComputeNode(host='fake', hypervisor_hostname='fake')] mock_get_by_host.return_value = objects.InstanceList() self.host_manager.host_aggregates_map = collections.defaultdict(set) hosts = self.host_manager.get_all_host_states('fake-context') # get_all_host_states returns a generator, so make a map from it host_states_map = {(state.host, state.nodename): state for state in hosts} host_state = host_states_map[('fake', 'fake')] self.assertEqual([], host_state.aggregates) @mock.patch.object(nova.objects.InstanceList, 'get_by_host') @mock.patch.object(host_manager.HostState, '_update_from_compute_node') @mock.patch.object(objects.ComputeNodeList, 'get_all') @mock.patch.object(objects.ServiceList, 'get_by_binary') def test_get_all_host_states_with_matching_aggs(self, svc_get_by_binary, cn_get_all, update_from_cn, mock_get_by_host): svc_get_by_binary.return_value = [objects.Service(host='fake')] cn_get_all.return_value = [ objects.ComputeNode(host='fake', hypervisor_hostname='fake')] mock_get_by_host.return_value = objects.InstanceList() fake_agg = 
objects.Aggregate(id=1) self.host_manager.host_aggregates_map = collections.defaultdict( set, {'fake': set([1])}) self.host_manager.aggs_by_id = {1: fake_agg} hosts = self.host_manager.get_all_host_states('fake-context') # get_all_host_states returns a generator, so make a map from it host_states_map = {(state.host, state.nodename): state for state in hosts} host_state = host_states_map[('fake', 'fake')] self.assertEqual([fake_agg], host_state.aggregates) @mock.patch.object(nova.objects.InstanceList, 'get_by_host') @mock.patch.object(host_manager.HostState, '_update_from_compute_node') @mock.patch.object(objects.ComputeNodeList, 'get_all') @mock.patch.object(objects.ServiceList, 'get_by_binary') def test_get_all_host_states_with_not_matching_aggs(self, svc_get_by_binary, cn_get_all, update_from_cn, mock_get_by_host): svc_get_by_binary.return_value = [objects.Service(host='fake'), objects.Service(host='other')] cn_get_all.return_value = [ objects.ComputeNode(host='fake', hypervisor_hostname='fake'), objects.ComputeNode(host='other', hypervisor_hostname='other')] mock_get_by_host.return_value = objects.InstanceList() fake_agg = objects.Aggregate(id=1) self.host_manager.host_aggregates_map = collections.defaultdict( set, {'other': set([1])}) self.host_manager.aggs_by_id = {1: fake_agg} hosts = self.host_manager.get_all_host_states('fake-context') # get_all_host_states returns a generator, so make a map from it host_states_map = {(state.host, state.nodename): state for state in hosts} host_state = host_states_map[('fake', 'fake')] self.assertEqual([], host_state.aggregates) @mock.patch.object(nova.objects.InstanceList, 'get_by_host', return_value=objects.InstanceList()) @mock.patch.object(host_manager.HostState, '_update_from_compute_node') @mock.patch.object(objects.ComputeNodeList, 'get_all') @mock.patch.object(objects.ServiceList, 'get_by_binary') def test_get_all_host_states_corrupt_aggregates_info(self, svc_get_by_binary, cn_get_all, update_from_cn, mock_get_by_host): """Regression test for bug 1605804 A host can be in multiple host-aggregates at the same time. When a host gets removed from an aggregate in thread A and this aggregate gets deleted in thread B, there can be a race-condition where the mapping data in the host_manager can get out of sync for a moment. This test simulates this condition for the bug-fix. 
""" host_a = 'host_a' host_b = 'host_b' svc_get_by_binary.return_value = [objects.Service(host=host_a), objects.Service(host=host_b)] cn_get_all.return_value = [ objects.ComputeNode(host=host_a, hypervisor_hostname=host_a), objects.ComputeNode(host=host_b, hypervisor_hostname=host_b)] aggregate = objects.Aggregate(id=1) aggregate.hosts = [host_a, host_b] aggr_list = objects.AggregateList() aggr_list.objects = [aggregate] self.host_manager.update_aggregates(aggr_list) aggregate.hosts = [host_a] self.host_manager.delete_aggregate(aggregate) self.host_manager.get_all_host_states('fake-context') @mock.patch('nova.objects.ServiceList.get_by_binary') @mock.patch('nova.objects.ComputeNodeList.get_all') @mock.patch('nova.objects.InstanceList.get_by_host') def test_get_all_host_states_updated(self, mock_get_by_host, mock_get_all_comp, mock_get_svc_by_binary): mock_get_all_comp.return_value = fakes.COMPUTE_NODES mock_get_svc_by_binary.return_value = fakes.SERVICES context = 'fake_context' hm = self.host_manager inst1 = objects.Instance(uuid=uuids.instance) cn1 = objects.ComputeNode(host='host1') hm._instance_info = {'host1': {'instances': {uuids.instance: inst1}, 'updated': True}} host_state = host_manager.HostState('host1', cn1, uuids.cell) self.assertFalse(host_state.instances) mock_get_by_host.return_value = None host_state.update( inst_dict=hm._get_instance_info(context, cn1)) self.assertFalse(mock_get_by_host.called) self.assertTrue(host_state.instances) self.assertEqual(host_state.instances[uuids.instance], inst1) @mock.patch('nova.objects.ServiceList.get_by_binary') @mock.patch('nova.objects.ComputeNodeList.get_all') @mock.patch('nova.objects.InstanceList.get_by_host') def test_get_all_host_states_not_updated(self, mock_get_by_host, mock_get_all_comp, mock_get_svc_by_binary): mock_get_all_comp.return_value = fakes.COMPUTE_NODES mock_get_svc_by_binary.return_value = fakes.SERVICES context = 'fake_context' hm = self.host_manager inst1 = objects.Instance(uuid=uuids.instance) cn1 = objects.ComputeNode(host='host1') hm._instance_info = {'host1': {'instances': {uuids.instance: inst1}, 'updated': False}} host_state = host_manager.HostState('host1', cn1, uuids.cell) self.assertFalse(host_state.instances) mock_get_by_host.return_value = objects.InstanceList(objects=[inst1]) host_state.update( inst_dict=hm._get_instance_info(context, cn1)) mock_get_by_host.assert_called_once_with(context, cn1.host) self.assertTrue(host_state.instances) self.assertEqual(host_state.instances[uuids.instance], inst1) @mock.patch('nova.objects.InstanceList.get_by_host') def test_recreate_instance_info(self, mock_get_by_host): host_name = 'fake_host' inst1 = fake_instance.fake_instance_obj('fake_context', uuid=uuids.instance_1, host=host_name) inst2 = fake_instance.fake_instance_obj('fake_context', uuid=uuids.instance_2, host=host_name) orig_inst_dict = {inst1.uuid: inst1, inst2.uuid: inst2} new_inst_list = objects.InstanceList(objects=[inst1, inst2]) mock_get_by_host.return_value = new_inst_list self.host_manager._instance_info = { host_name: { 'instances': orig_inst_dict, 'updated': True, }} self.host_manager._recreate_instance_info('fake_context', host_name) new_info = self.host_manager._instance_info[host_name] self.assertEqual(len(new_info['instances']), len(new_inst_list)) self.assertFalse(new_info['updated']) def test_update_instance_info(self): host_name = 'fake_host' inst1 = fake_instance.fake_instance_obj('fake_context', uuid=uuids.instance_1, host=host_name) inst2 = fake_instance.fake_instance_obj('fake_context', 
uuid=uuids.instance_2,
                                                host=host_name)
        orig_inst_dict = {inst1.uuid: inst1,
                          inst2.uuid: inst2}
        self.host_manager._instance_info = {
                host_name: {
                    'instances': orig_inst_dict,
                    'updated': False,
                }}
        inst3 = fake_instance.fake_instance_obj('fake_context',
                                                uuid=uuids.instance_3,
                                                host=host_name)
        inst4 = fake_instance.fake_instance_obj('fake_context',
                                                uuid=uuids.instance_4,
                                                host=host_name)
        update = objects.InstanceList(objects=[inst3, inst4])
        self.host_manager.update_instance_info('fake_context', host_name,
                                               update)
        new_info = self.host_manager._instance_info[host_name]
        self.assertEqual(len(new_info['instances']), 4)
        self.assertTrue(new_info['updated'])

    def test_update_instance_info_unknown_host(self):
        self.host_manager._recreate_instance_info = mock.MagicMock()
        host_name = 'fake_host'
        inst1 = fake_instance.fake_instance_obj('fake_context',
                                                uuid=uuids.instance_1,
                                                host=host_name)
        inst2 = fake_instance.fake_instance_obj('fake_context',
                                                uuid=uuids.instance_2,
                                                host=host_name)
        orig_inst_dict = {inst1.uuid: inst1,
                          inst2.uuid: inst2}
        self.host_manager._instance_info = {
                host_name: {
                    'instances': orig_inst_dict,
                    'updated': False,
                }}

        bad_host = 'bad_host'
        inst3 = fake_instance.fake_instance_obj('fake_context',
                                                uuid=uuids.instance_3,
                                                host=bad_host)
        inst_list3 = objects.InstanceList(objects=[inst3])
        self.host_manager.update_instance_info('fake_context', bad_host,
                                               inst_list3)
        new_info = self.host_manager._instance_info[host_name]
        self.host_manager._recreate_instance_info.assert_called_once_with(
                'fake_context', bad_host)
        self.assertEqual(len(new_info['instances']), len(orig_inst_dict))
        self.assertFalse(new_info['updated'])

    @mock.patch('nova.objects.HostMapping.get_by_host',
                side_effect=exception.HostMappingNotFound(name='host1'))
    def test_update_instance_info_unknown_host_mapping_not_found(
            self, get_by_host):
        """Tests the case where update_instance_info is called with an
        unregistered host, so the host manager attempts to recreate the
        instance list, but no host mapping is found for the given host
        (it might have just started and not yet been discovered for
        cells v2).
""" ctxt = nova_context.RequestContext() instance_info = objects.InstanceList() self.host_manager.update_instance_info(ctxt, 'host1', instance_info) self.assertDictEqual( {}, self.host_manager._instance_info['host1']['instances']) get_by_host.assert_called_once_with(ctxt, 'host1') def test_delete_instance_info(self): host_name = 'fake_host' inst1 = fake_instance.fake_instance_obj('fake_context', uuid=uuids.instance_1, host=host_name) inst2 = fake_instance.fake_instance_obj('fake_context', uuid=uuids.instance_2, host=host_name) orig_inst_dict = {inst1.uuid: inst1, inst2.uuid: inst2} self.host_manager._instance_info = { host_name: { 'instances': orig_inst_dict, 'updated': False, }} self.host_manager.delete_instance_info('fake_context', host_name, inst1.uuid) new_info = self.host_manager._instance_info[host_name] self.assertEqual(len(new_info['instances']), 1) self.assertTrue(new_info['updated']) def test_delete_instance_info_unknown_host(self): self.host_manager._recreate_instance_info = mock.MagicMock() host_name = 'fake_host' inst1 = fake_instance.fake_instance_obj('fake_context', uuid=uuids.instance_1, host=host_name) inst2 = fake_instance.fake_instance_obj('fake_context', uuid=uuids.instance_2, host=host_name) orig_inst_dict = {inst1.uuid: inst1, inst2.uuid: inst2} self.host_manager._instance_info = { host_name: { 'instances': orig_inst_dict, 'updated': False, }} bad_host = 'bad_host' self.host_manager.delete_instance_info('fake_context', bad_host, uuids.instance_1) new_info = self.host_manager._instance_info[host_name] self.host_manager._recreate_instance_info.assert_called_once_with( 'fake_context', bad_host) self.assertEqual(len(new_info['instances']), len(orig_inst_dict)) self.assertFalse(new_info['updated']) def test_sync_instance_info(self): self.host_manager._recreate_instance_info = mock.MagicMock() host_name = 'fake_host' inst1 = fake_instance.fake_instance_obj('fake_context', uuid=uuids.instance_1, host=host_name) inst2 = fake_instance.fake_instance_obj('fake_context', uuid=uuids.instance_2, host=host_name) orig_inst_dict = {inst1.uuid: inst1, inst2.uuid: inst2} self.host_manager._instance_info = { host_name: { 'instances': orig_inst_dict, 'updated': False, }} self.host_manager.sync_instance_info('fake_context', host_name, [uuids.instance_2, uuids.instance_1]) new_info = self.host_manager._instance_info[host_name] self.assertFalse(self.host_manager._recreate_instance_info.called) self.assertTrue(new_info['updated']) def test_sync_instance_info_fail(self): self.host_manager._recreate_instance_info = mock.MagicMock() host_name = 'fake_host' inst1 = fake_instance.fake_instance_obj('fake_context', uuid=uuids.instance_1, host=host_name) inst2 = fake_instance.fake_instance_obj('fake_context', uuid=uuids.instance_2, host=host_name) orig_inst_dict = {inst1.uuid: inst1, inst2.uuid: inst2} self.host_manager._instance_info = { host_name: { 'instances': orig_inst_dict, 'updated': False, }} self.host_manager.sync_instance_info('fake_context', host_name, [uuids.instance_2, uuids.instance_1, 'new']) new_info = self.host_manager._instance_info[host_name] self.host_manager._recreate_instance_info.assert_called_once_with( 'fake_context', host_name) self.assertFalse(new_info['updated']) @mock.patch('nova.objects.CellMappingList.get_all') @mock.patch('nova.objects.ComputeNodeList.get_all') @mock.patch('nova.objects.ServiceList.get_by_binary') def test_get_computes_for_cells(self, mock_sl, mock_cn, mock_cm): cells = [ objects.CellMapping(uuid=uuids.cell1, db_connection='none://1', 
transport_url='none://'), objects.CellMapping(uuid=uuids.cell2, db_connection='none://2', transport_url='none://'), ] mock_cm.return_value = cells mock_sl.side_effect = [ [objects.ServiceList(host='foo')], [objects.ServiceList(host='bar')], ] mock_cn.side_effect = [ [objects.ComputeNode(host='foo')], [objects.ComputeNode(host='bar')], ] context = nova_context.RequestContext('fake', 'fake') cns, srv = self.host_manager._get_computes_for_cells(context, cells) self.assertEqual({uuids.cell1: ['foo'], uuids.cell2: ['bar']}, {cell: [cn.host for cn in computes] for cell, computes in cns.items()}) self.assertEqual(['bar', 'foo'], sorted(list(srv.keys()))) @mock.patch('nova.objects.CellMappingList.get_all') @mock.patch('nova.objects.ComputeNodeList.get_all_by_uuids') @mock.patch('nova.objects.ServiceList.get_by_binary') def test_get_computes_for_cells_uuid(self, mock_sl, mock_cn, mock_cm): cells = [ objects.CellMapping(uuid=uuids.cell1, db_connection='none://1', transport_url='none://'), objects.CellMapping(uuid=uuids.cell2, db_connection='none://2', transport_url='none://'), ] mock_cm.return_value = cells mock_sl.side_effect = [ [objects.ServiceList(host='foo')], [objects.ServiceList(host='bar')], ] mock_cn.side_effect = [ [objects.ComputeNode(host='foo')], [objects.ComputeNode(host='bar')], ] context = nova_context.RequestContext('fake', 'fake') cns, srv = self.host_manager._get_computes_for_cells(context, cells, []) self.assertEqual({uuids.cell1: ['foo'], uuids.cell2: ['bar']}, {cell: [cn.host for cn in computes] for cell, computes in cns.items()}) self.assertEqual(['bar', 'foo'], sorted(list(srv.keys()))) @mock.patch('nova.context.target_cell') @mock.patch('nova.objects.CellMappingList.get_all') @mock.patch('nova.objects.ComputeNodeList.get_all') @mock.patch('nova.objects.ServiceList.get_by_binary') def test_get_computes_for_cells_limit_to_cell(self, mock_sl, mock_cn, mock_cm, mock_target): host_manager.LOG.debug = host_manager.LOG.error cells = [ objects.CellMapping(uuid=uuids.cell1, database_connection='none://1', transport_url='none://'), objects.CellMapping(uuid=uuids.cell2, database_connection='none://2', transport_url='none://'), ] mock_sl.return_value = [objects.ServiceList(host='foo')] mock_cn.return_value = [objects.ComputeNode(host='foo')] mock_cm.return_value = cells @contextlib.contextmanager def fake_set_target(context, cell): yield mock.sentinel.cctxt mock_target.side_effect = fake_set_target context = nova_context.RequestContext('fake', 'fake') cns, srv = self.host_manager._get_computes_for_cells( context, cells=cells[1:]) self.assertEqual({uuids.cell2: ['foo']}, {cell: [cn.host for cn in computes] for cell, computes in cns.items()}) self.assertEqual(['foo'], list(srv.keys())) # NOTE(danms): We have two cells, but we should only have # targeted one if we honored the only-cell destination requirement, # and only looked up services and compute nodes in one mock_target.assert_called_once_with(context, cells[1]) mock_cn.assert_called_once_with(mock.sentinel.cctxt) mock_sl.assert_called_once_with(mock.sentinel.cctxt, 'nova-compute', include_disabled=True) class HostManagerChangedNodesTestCase(test.NoDBTestCase): """Test case for HostManager class.""" @mock.patch.object(host_manager.HostManager, '_init_instance_info') @mock.patch.object(host_manager.HostManager, '_init_aggregates') def setUp(self, mock_init_agg, mock_init_inst): super(HostManagerChangedNodesTestCase, self).setUp() self.host_manager = host_manager.HostManager() self.fake_hosts = [ host_manager.HostState('host1', 
'node1', uuids.cell), host_manager.HostState('host2', 'node2', uuids.cell), host_manager.HostState('host3', 'node3', uuids.cell), host_manager.HostState('host4', 'node4', uuids.cell) ] @mock.patch('nova.objects.ServiceList.get_by_binary') @mock.patch('nova.objects.ComputeNodeList.get_all') @mock.patch('nova.objects.InstanceList.get_by_host') def test_get_all_host_states(self, mock_get_by_host, mock_get_all, mock_get_by_binary): mock_get_by_host.return_value = objects.InstanceList() mock_get_all.return_value = fakes.COMPUTE_NODES mock_get_by_binary.return_value = fakes.SERVICES context = 'fake_context' # get_all_host_states returns a generator, so make a map from it host_states_map = {(state.host, state.nodename): state for state in self.host_manager.get_all_host_states(context)} self.assertEqual(len(host_states_map), 4) @mock.patch('nova.objects.ServiceList.get_by_binary') @mock.patch('nova.objects.ComputeNodeList.get_all') @mock.patch('nova.objects.InstanceList.get_by_host') def test_get_all_host_states_after_delete_one(self, mock_get_by_host, mock_get_all, mock_get_by_binary): getter = (lambda n: n.hypervisor_hostname if 'hypervisor_hostname' in n else None) running_nodes = [n for n in fakes.COMPUTE_NODES if getter(n) != 'node4'] mock_get_by_host.return_value = objects.InstanceList() mock_get_all.side_effect = [fakes.COMPUTE_NODES, running_nodes] mock_get_by_binary.side_effect = [fakes.SERVICES, fakes.SERVICES] context = 'fake_context' # first call: all nodes hosts = self.host_manager.get_all_host_states(context) # get_all_host_states returns a generator, so make a map from it host_states_map = {(state.host, state.nodename): state for state in hosts} self.assertEqual(len(host_states_map), 4) # second call: just running nodes hosts = self.host_manager.get_all_host_states(context) host_states_map = {(state.host, state.nodename): state for state in hosts} self.assertEqual(len(host_states_map), 3) @mock.patch('nova.objects.ServiceList.get_by_binary') @mock.patch('nova.objects.ComputeNodeList.get_all') @mock.patch('nova.objects.InstanceList.get_by_host') def test_get_all_host_states_after_delete_all(self, mock_get_by_host, mock_get_all, mock_get_by_binary): mock_get_by_host.return_value = objects.InstanceList() mock_get_all.side_effect = [fakes.COMPUTE_NODES, []] mock_get_by_binary.side_effect = [fakes.SERVICES, fakes.SERVICES] context = 'fake_context' # first call: all nodes hosts = self.host_manager.get_all_host_states(context) # get_all_host_states returns a generator, so make a map from it host_states_map = {(state.host, state.nodename): state for state in hosts} self.assertEqual(len(host_states_map), 4) # second call: no nodes hosts = self.host_manager.get_all_host_states(context) host_states_map = {(state.host, state.nodename): state for state in hosts} self.assertEqual(len(host_states_map), 0) @mock.patch('nova.objects.ServiceList.get_by_binary') @mock.patch('nova.objects.ComputeNodeList.get_all_by_uuids') @mock.patch('nova.objects.InstanceList.get_by_host') def test_get_host_states_by_uuids(self, mock_get_by_host, mock_get_all, mock_get_by_binary): mock_get_by_host.return_value = objects.InstanceList() mock_get_all.side_effect = [fakes.COMPUTE_NODES, []] mock_get_by_binary.side_effect = [fakes.SERVICES, fakes.SERVICES] # Request 1: all nodes can satisfy the request hosts1 = self.host_manager.get_host_states_by_uuids( mock.sentinel.ctxt1, mock.sentinel.uuids1, objects.RequestSpec()) # get_host_states_by_uuids returns a generator so convert the values # into an iterator host_states1 = 
iter(hosts1) # Request 2: no nodes can satisfy the request hosts2 = self.host_manager.get_host_states_by_uuids( mock.sentinel.ctxt2, mock.sentinel.uuids2, objects.RequestSpec()) host_states2 = iter(hosts2) # Fake a concurrent request that is still processing the first result # to make sure all nodes are still available candidates to Request 1. num_hosts1 = len(list(host_states1)) self.assertEqual(4, num_hosts1) # Verify that no nodes are available to Request 2. num_hosts2 = len(list(host_states2)) self.assertEqual(0, num_hosts2) class HostStateTestCase(test.NoDBTestCase): """Test case for HostState class.""" # update_from_compute_node() and consume_from_request() are tested # in HostManagerTestCase.test_get_all_host_states() @mock.patch('nova.utils.synchronized', side_effect=lambda a: lambda f: lambda *args: f(*args)) def test_stat_consumption_from_compute_node(self, sync_mock): stats = { 'num_instances': '5', 'num_proj_12345': '3', 'num_proj_23456': '1', 'num_vm_%s' % vm_states.BUILDING: '2', 'num_vm_%s' % vm_states.SUSPENDED: '1', 'num_task_%s' % task_states.RESIZE_MIGRATING: '1', 'num_task_%s' % task_states.MIGRATING: '2', 'num_os_type_linux': '4', 'num_os_type_windoze': '1', 'io_workload': '42', } hyper_ver_int = versionutils.convert_version_to_int('6.0.0') compute = objects.ComputeNode( uuid=uuids.cn1, stats=stats, memory_mb=1, free_disk_gb=0, local_gb=0, local_gb_used=0, free_ram_mb=0, vcpus=0, vcpus_used=0, disk_available_least=None, updated_at=datetime.datetime(2015, 11, 11, 11, 0, 0), host_ip='127.0.0.1', hypervisor_type='htype', hypervisor_hostname='hostname', cpu_info='cpu_info', supported_hv_specs=[], hypervisor_version=hyper_ver_int, numa_topology=None, pci_device_pools=None, metrics=None, cpu_allocation_ratio=16.0, ram_allocation_ratio=1.5, disk_allocation_ratio=1.0) host = host_manager.HostState("fakehost", "fakenode", uuids.cell) host.update(compute=compute) sync_mock.assert_called_once_with(("fakehost", "fakenode")) self.assertEqual(5, host.num_instances) self.assertEqual(42, host.num_io_ops) self.assertEqual(10, len(host.stats)) self.assertEqual('127.0.0.1', str(host.host_ip)) self.assertEqual('htype', host.hypervisor_type) self.assertEqual('hostname', host.hypervisor_hostname) self.assertEqual('cpu_info', host.cpu_info) self.assertEqual([], host.supported_instances) self.assertEqual(hyper_ver_int, host.hypervisor_version) def test_stat_consumption_from_compute_node_non_pci(self): stats = { 'num_instances': '5', 'num_proj_12345': '3', 'num_proj_23456': '1', 'num_vm_%s' % vm_states.BUILDING: '2', 'num_vm_%s' % vm_states.SUSPENDED: '1', 'num_task_%s' % task_states.RESIZE_MIGRATING: '1', 'num_task_%s' % task_states.MIGRATING: '2', 'num_os_type_linux': '4', 'num_os_type_windoze': '1', 'io_workload': '42', } hyper_ver_int = versionutils.convert_version_to_int('6.0.0') compute = objects.ComputeNode( uuid=uuids.cn1, stats=stats, memory_mb=0, free_disk_gb=0, local_gb=0, local_gb_used=0, free_ram_mb=0, vcpus=0, vcpus_used=0, disk_available_least=None, updated_at=datetime.datetime(2015, 11, 11, 11, 0, 0), host_ip='127.0.0.1', hypervisor_type='htype', hypervisor_hostname='hostname', cpu_info='cpu_info', supported_hv_specs=[], hypervisor_version=hyper_ver_int, numa_topology=None, pci_device_pools=None, metrics=None, cpu_allocation_ratio=16.0, ram_allocation_ratio=1.5, disk_allocation_ratio=1.0) host = host_manager.HostState("fakehost", "fakenode", uuids.cell) host.update(compute=compute) self.assertEqual([], host.pci_stats.pools) self.assertEqual(hyper_ver_int, 
host.hypervisor_version) def test_stat_consumption_from_compute_node_rescue_unshelving(self): stats = { 'num_instances': '5', 'num_proj_12345': '3', 'num_proj_23456': '1', 'num_vm_%s' % vm_states.BUILDING: '2', 'num_vm_%s' % vm_states.SUSPENDED: '1', 'num_task_%s' % task_states.UNSHELVING: '1', 'num_task_%s' % task_states.RESCUING: '2', 'num_os_type_linux': '4', 'num_os_type_windoze': '1', 'io_workload': '42', } hyper_ver_int = versionutils.convert_version_to_int('6.0.0') compute = objects.ComputeNode( uuid=uuids.cn1, stats=stats, memory_mb=0, free_disk_gb=0, local_gb=0, local_gb_used=0, free_ram_mb=0, vcpus=0, vcpus_used=0, disk_available_least=None, updated_at=datetime.datetime(2015, 11, 11, 11, 0, 0), host_ip='127.0.0.1', hypervisor_type='htype', hypervisor_hostname='hostname', cpu_info='cpu_info', supported_hv_specs=[], hypervisor_version=hyper_ver_int, numa_topology=None, pci_device_pools=None, metrics=None, cpu_allocation_ratio=16.0, ram_allocation_ratio=1.5, disk_allocation_ratio=1.0) host = host_manager.HostState("fakehost", "fakenode", uuids.cell) host.update(compute=compute) self.assertEqual(5, host.num_instances) self.assertEqual(42, host.num_io_ops) self.assertEqual(10, len(host.stats)) self.assertEqual([], host.pci_stats.pools) self.assertEqual(hyper_ver_int, host.hypervisor_version) @mock.patch('nova.utils.synchronized', side_effect=lambda a: lambda f: lambda *args: f(*args)) @mock.patch('nova.virt.hardware.get_host_numa_usage_from_instance') @mock.patch('nova.objects.Instance') @mock.patch('nova.virt.hardware.numa_fit_instance_to_host') @mock.patch('nova.virt.hardware.host_topology_and_format_from_host') def test_stat_consumption_from_instance(self, host_topo_mock, numa_fit_mock, instance_init_mock, numa_usage_mock, sync_mock): fake_numa_topology = objects.InstanceNUMATopology( cells=[objects.InstanceNUMACell()]) fake_host_numa_topology = mock.Mock() fake_instance = objects.Instance(numa_topology=fake_numa_topology) host_topo_mock.return_value = (fake_host_numa_topology, True) numa_usage_mock.return_value = fake_host_numa_topology numa_fit_mock.return_value = fake_numa_topology instance_init_mock.return_value = fake_instance spec_obj = objects.RequestSpec( instance_uuid=uuids.instance, flavor=objects.Flavor(root_gb=0, ephemeral_gb=0, memory_mb=0, vcpus=0), numa_topology=fake_numa_topology, pci_requests=objects.InstancePCIRequests(requests=[])) host = host_manager.HostState("fakehost", "fakenode", uuids.cell) self.assertIsNone(host.updated) host.consume_from_request(spec_obj) numa_fit_mock.assert_called_once_with(fake_host_numa_topology, fake_numa_topology, limits=None, pci_requests=None, pci_stats=None) numa_usage_mock.assert_called_once_with(host, fake_instance) sync_mock.assert_called_once_with(("fakehost", "fakenode")) self.assertEqual(fake_host_numa_topology, host.numa_topology) self.assertIsNotNone(host.updated) second_numa_topology = objects.InstanceNUMATopology( cells=[objects.InstanceNUMACell()]) spec_obj = objects.RequestSpec( instance_uuid=uuids.instance, flavor=objects.Flavor(root_gb=0, ephemeral_gb=0, memory_mb=0, vcpus=0), numa_topology=second_numa_topology, pci_requests=objects.InstancePCIRequests(requests=[])) second_host_numa_topology = mock.Mock() numa_usage_mock.return_value = second_host_numa_topology numa_fit_mock.return_value = second_numa_topology host.consume_from_request(spec_obj) self.assertEqual(2, host.num_instances) self.assertEqual(2, host.num_io_ops) self.assertEqual(2, numa_usage_mock.call_count) self.assertEqual(((host, fake_instance),), 
numa_usage_mock.call_args) self.assertEqual(second_host_numa_topology, host.numa_topology) self.assertIsNotNone(host.updated) def test_stat_consumption_from_instance_pci(self): inst_topology = objects.InstanceNUMATopology( cells = [objects.InstanceNUMACell( cpuset=set([0]), memory=512, id=0)]) fake_requests = [{'request_id': uuids.request_id, 'count': 1, 'spec': [{'vendor_id': '8086'}]}] fake_requests_obj = objects.InstancePCIRequests( requests=[objects.InstancePCIRequest(**r) for r in fake_requests], instance_uuid=uuids.instance) req_spec = objects.RequestSpec( instance_uuid=uuids.instance, project_id='12345', numa_topology=inst_topology, pci_requests=fake_requests_obj, flavor=objects.Flavor(root_gb=0, ephemeral_gb=0, memory_mb=512, vcpus=1)) host = host_manager.HostState("fakehost", "fakenode", uuids.cell) self.assertIsNone(host.updated) host.pci_stats = pci_stats.PciDeviceStats( [objects.PciDevicePool(vendor_id='8086', product_id='15ed', numa_node=1, count=1)]) host.numa_topology = fakes.NUMA_TOPOLOGY host.consume_from_request(req_spec) self.assertIsInstance(req_spec.numa_topology, objects.InstanceNUMATopology) self.assertEqual(512, host.numa_topology.cells[1].memory_usage) self.assertEqual(1, host.numa_topology.cells[1].cpu_usage) self.assertEqual(0, len(host.pci_stats.pools)) self.assertIsNotNone(host.updated) def test_stat_consumption_from_instance_with_pci_exception(self): fake_requests = [{'request_id': uuids.request_id, 'count': 3, 'spec': [{'vendor_id': '8086'}]}] fake_requests_obj = objects.InstancePCIRequests( requests=[objects.InstancePCIRequest(**r) for r in fake_requests], instance_uuid=uuids.instance) req_spec = objects.RequestSpec( instance_uuid=uuids.instance, project_id='12345', numa_topology=None, pci_requests=fake_requests_obj, flavor=objects.Flavor(root_gb=0, ephemeral_gb=0, memory_mb=1024, vcpus=1)) host = host_manager.HostState("fakehost", "fakenode", uuids.cell) self.assertIsNone(host.updated) fake_updated = mock.sentinel.fake_updated host.updated = fake_updated host.pci_stats = pci_stats.PciDeviceStats() with mock.patch.object(host.pci_stats, 'apply_requests', side_effect=exception.PciDeviceRequestFailed): host.consume_from_request(req_spec) self.assertEqual(fake_updated, host.updated) def test_resources_consumption_from_compute_node(self): _ts_now = datetime.datetime(2015, 11, 11, 11, 0, 0) metrics = [ dict(name='cpu.frequency', value=1.0, source='source1', timestamp=_ts_now), dict(name='numa.membw.current', numa_membw_values={"0": 10, "1": 43}, source='source2', timestamp=_ts_now), ] hyper_ver_int = versionutils.convert_version_to_int('6.0.0') compute = objects.ComputeNode( uuid=uuids.cn1, metrics=jsonutils.dumps(metrics), memory_mb=0, free_disk_gb=0, local_gb=0, local_gb_used=0, free_ram_mb=0, vcpus=0, vcpus_used=0, disk_available_least=None, updated_at=datetime.datetime(2015, 11, 11, 11, 0, 0), host_ip='127.0.0.1', hypervisor_type='htype', hypervisor_hostname='hostname', cpu_info='cpu_info', supported_hv_specs=[], hypervisor_version=hyper_ver_int, numa_topology=fakes.NUMA_TOPOLOGY._to_json(), stats=None, pci_device_pools=None, cpu_allocation_ratio=16.0, ram_allocation_ratio=1.5, disk_allocation_ratio=1.0) host = host_manager.HostState("fakehost", "fakenode", uuids.cell) host.update(compute=compute) self.assertEqual(len(host.metrics), 2) self.assertEqual(1.0, host.metrics.to_list()[0]['value']) self.assertEqual('source1', host.metrics[0].source) self.assertEqual('cpu.frequency', host.metrics[0].name) self.assertEqual('numa.membw.current', host.metrics[1].name) 
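        # The compute node's ``metrics`` column is a JSON blob; HostState is
        # expected to deserialize it into a MonitorMetricList, which is why
        # these assertions mix object access (host.metrics[1].name) with the
        # to_list() primitive view. A sketch of the round-trip under test,
        # reusing the fixture values defined above:
        #
        #     blob = jsonutils.dumps([{'name': 'cpu.frequency', 'value': 1.0,
        #                              'source': 'source1',
        #                              'timestamp': _ts_now}])
        #     mlist = objects.MonitorMetricList.from_json(blob)
        #     assert mlist[0].name == 'cpu.frequency'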
self.assertEqual('source2', host.metrics.to_list()[1]['source']) self.assertEqual({'0': 10, '1': 43}, host.metrics[1].numa_membw_values) self.assertIsInstance(host.numa_topology, six.string_types) def test_stat_consumption_from_compute_node_not_ready(self): compute = objects.ComputeNode(free_ram_mb=100, uuid=uuids.compute_node_uuid) host = host_manager.HostState("fakehost", "fakenode", uuids.cell) host._update_from_compute_node(compute) # Because compute record not ready, the update of free ram # will not happen and the value will still be 0 self.assertEqual(0, host.free_ram_mb) nova-17.0.1/nova/tests/unit/scheduler/test_rpcapi.py0000666000175000017500000001556513250073126022533 0ustar zuulzuul00000000000000# Copyright 2013 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Unit Tests for nova.scheduler.rpcapi """ import mock from oslo_config import cfg from nova import context from nova import exception as exc from nova import objects from nova.scheduler import rpcapi as scheduler_rpcapi from nova import test from nova.tests import uuidsentinel as uuids CONF = cfg.CONF class SchedulerRpcAPITestCase(test.NoDBTestCase): def _test_scheduler_api(self, method, rpc_method, expected_args=None, **kwargs): ctxt = context.RequestContext('fake_user', 'fake_project') rpcapi = scheduler_rpcapi.SchedulerAPI() self.assertIsNotNone(rpcapi.client) self.assertEqual(rpcapi.client.target.topic, scheduler_rpcapi.RPC_TOPIC) expected_retval = 'foo' if rpc_method == 'call' else None expected_version = kwargs.pop('version', None) expected_fanout = kwargs.pop('fanout', None) expected_kwargs = kwargs.copy() if expected_args: expected_kwargs = expected_args prepare_kwargs = {} if expected_fanout: prepare_kwargs['fanout'] = True if expected_version: prepare_kwargs['version'] = expected_version # NOTE(sbauza): We need to persist the method before mocking it orig_prepare = rpcapi.client.prepare def fake_can_send_version(version=None): return orig_prepare(version=version).can_send_version() @mock.patch.object(rpcapi.client, rpc_method, return_value=expected_retval) @mock.patch.object(rpcapi.client, 'prepare', return_value=rpcapi.client) @mock.patch.object(rpcapi.client, 'can_send_version', side_effect=fake_can_send_version) def do_test(mock_csv, mock_prepare, mock_rpc_method): retval = getattr(rpcapi, method)(ctxt, **kwargs) self.assertEqual(retval, expected_retval) mock_prepare.assert_called_once_with(**prepare_kwargs) mock_rpc_method.assert_called_once_with(ctxt, method, **expected_kwargs) do_test() def test_select_destinations(self): fake_spec = objects.RequestSpec() self._test_scheduler_api('select_destinations', rpc_method='call', expected_args={'spec_obj': fake_spec, 'instance_uuids': [uuids.instance], 'return_objects': True, 'return_alternates': True}, spec_obj=fake_spec, instance_uuids=[uuids.instance], return_objects=True, return_alternates=True, version='4.5') def test_select_destinations_4_4(self): self.flags(scheduler='4.4', group='upgrade_levels') fake_spec = objects.RequestSpec() 
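        # These pinned-version variants (4.4 here, then 4.3 and 4.0 below)
        # verify that the client degrades its arguments to what the pinned
        # RPC version understands: 4.5 sends return_objects and
        # return_alternates, 4.4 sends instance_uuids without them, 4.3
        # sends only spec_obj, and 4.0 falls back to the legacy dict forms.
        # Conceptually (a sketch, not the exact client code):
        #
        #     if client.can_send_version('4.5'):
        #         kwargs.update(return_objects=..., return_alternates=...)
        #     elif client.can_send_version('4.4'):
        #         kwargs['instance_uuids'] = instance_uuids
        #     # else: convert the spec to the legacy request_spec dict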
self._test_scheduler_api('select_destinations', rpc_method='call', expected_args={'spec_obj': fake_spec, 'instance_uuids': [uuids.instance]}, spec_obj=fake_spec, instance_uuids=[uuids.instance], return_objects=False, return_alternates=False, version='4.4') def test_select_destinations_4_3(self): self.flags(scheduler='4.3', group='upgrade_levels') fake_spec = objects.RequestSpec() self._test_scheduler_api('select_destinations', rpc_method='call', expected_args={'spec_obj': fake_spec}, spec_obj=fake_spec, instance_uuids=[uuids.instance], return_alternates=False, version='4.3') def test_select_destinations_old_with_new_params(self): self.flags(scheduler='4.4', group='upgrade_levels') fake_spec = objects.RequestSpec() ctxt = context.RequestContext('fake_user', 'fake_project') rpcapi = scheduler_rpcapi.SchedulerAPI() self.assertRaises(exc.SelectionObjectsWithOldRPCVersionNotSupported, rpcapi.select_destinations, ctxt, fake_spec, ['fake_uuids'], return_objects=True, return_alternates=True) self.assertRaises(exc.SelectionObjectsWithOldRPCVersionNotSupported, rpcapi.select_destinations, ctxt, fake_spec, ['fake_uuids'], return_objects=False, return_alternates=True) self.assertRaises(exc.SelectionObjectsWithOldRPCVersionNotSupported, rpcapi.select_destinations, ctxt, fake_spec, ['fake_uuids'], return_objects=True, return_alternates=False) @mock.patch.object(objects.RequestSpec, 'to_legacy_filter_properties_dict') @mock.patch.object(objects.RequestSpec, 'to_legacy_request_spec_dict') def test_select_destinations_with_old_manager(self, to_spec, to_props): self.flags(scheduler='4.0', group='upgrade_levels') to_spec.return_value = 'fake_request_spec' to_props.return_value = 'fake_prop' fake_spec = objects.RequestSpec() self._test_scheduler_api('select_destinations', rpc_method='call', expected_args={'request_spec': 'fake_request_spec', 'filter_properties': 'fake_prop'}, spec_obj=fake_spec, instance_uuids=[uuids.instance], version='4.0') def test_update_aggregates(self): self._test_scheduler_api('update_aggregates', rpc_method='cast', aggregates='aggregates', version='4.1', fanout=True) def test_delete_aggregate(self): self._test_scheduler_api('delete_aggregate', rpc_method='cast', aggregate='aggregate', version='4.1', fanout=True) def test_update_instance_info(self): self._test_scheduler_api('update_instance_info', rpc_method='cast', host_name='fake_host', instance_info='fake_instance', fanout=True, version='4.2') def test_delete_instance_info(self): self._test_scheduler_api('delete_instance_info', rpc_method='cast', host_name='fake_host', instance_uuid='fake_uuid', fanout=True, version='4.2') def test_sync_instance_info(self): self._test_scheduler_api('sync_instance_info', rpc_method='cast', host_name='fake_host', instance_uuids=['fake1', 'fake2'], fanout=True, version='4.2') nova-17.0.1/nova/tests/unit/scheduler/client/0000775000175000017500000000000013250073472021110 5ustar zuulzuul00000000000000nova-17.0.1/nova/tests/unit/scheduler/client/test_query.py0000666000175000017500000000562713250073126023676 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. import mock from nova import context from nova import objects from nova.scheduler.client import query from nova import test from nova.tests import uuidsentinel as uuids class SchedulerQueryClientTestCase(test.NoDBTestCase): def setUp(self): super(SchedulerQueryClientTestCase, self).setUp() self.context = context.get_admin_context() self.client = query.SchedulerQueryClient() def test_constructor(self): self.assertIsNotNone(self.client.scheduler_rpcapi) @mock.patch('nova.scheduler.rpcapi.SchedulerAPI.select_destinations') def test_select_destinations(self, mock_select_destinations): fake_spec = objects.RequestSpec() fake_spec.instance_uuid = uuids.instance self.client.select_destinations( context=self.context, spec_obj=fake_spec, instance_uuids=[fake_spec.instance_uuid], return_objects=True, return_alternates=True, ) mock_select_destinations.assert_called_once_with(self.context, fake_spec, [fake_spec.instance_uuid], True, True) @mock.patch('nova.scheduler.rpcapi.SchedulerAPI.select_destinations') def test_select_destinations_old_call(self, mock_select_destinations): fake_spec = objects.RequestSpec() fake_spec.instance_uuid = uuids.instance self.client.select_destinations( context=self.context, spec_obj=fake_spec, instance_uuids=[fake_spec.instance_uuid] ) mock_select_destinations.assert_called_once_with(self.context, fake_spec, [fake_spec.instance_uuid], False, False) @mock.patch('nova.scheduler.rpcapi.SchedulerAPI.update_aggregates') def test_update_aggregates(self, mock_update_aggs): aggregates = [objects.Aggregate(id=1)] self.client.update_aggregates( context=self.context, aggregates=aggregates) mock_update_aggs.assert_called_once_with( self.context, aggregates) @mock.patch('nova.scheduler.rpcapi.SchedulerAPI.delete_aggregate') def test_delete_aggregate(self, mock_delete_agg): aggregate = objects.Aggregate(id=1) self.client.delete_aggregate( context=self.context, aggregate=aggregate) mock_delete_agg.assert_called_once_with( self.context, aggregate) nova-17.0.1/nova/tests/unit/scheduler/client/__init__.py0000666000175000017500000000000013250073126023205 0ustar zuulzuul00000000000000nova-17.0.1/nova/tests/unit/scheduler/client/test_report.py0000666000175000017500000047134113250073136024045 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
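# The SafeConnectedTestCase below exercises the @safe_connect decorator used
# by SchedulerReportClient: each keystoneauth1 failure (EndpointNotFound,
# MissingAuthPlugin, Unauthorized, ConnectFailure, DiscoveryFailure) must be
# swallowed rather than raised, and must not poison later calls. A minimal
# sketch of that pattern, not the module's real decorator; only the
# EndpointNotFound branch additionally forces endpoint rediscovery, as the
# tests assert:
import functools

from keystoneauth1 import exceptions as _ks_exc_sketch


def _safe_connect_sketch(f):
    @functools.wraps(f)
    def wrapper(self, *args, **kwargs):
        try:
            return f(self, *args, **kwargs)
        except _ks_exc_sketch.EndpointNotFound:
            # Recreate the client so the next call retries discovery.
            self._create_client()
        except (_ks_exc_sketch.MissingAuthPlugin,
                _ks_exc_sketch.Unauthorized,
                _ks_exc_sketch.ConnectFailure,
                _ks_exc_sketch.DiscoveryFailure):
            pass  # the real client logs a rate-limited warning here
    return wrapper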
import time from keystoneauth1 import exceptions as ks_exc import mock import requests import six from six.moves.urllib import parse import nova.conf from nova import context from nova import exception from nova import objects from nova.objects import fields from nova.scheduler.client import report from nova.scheduler import utils as scheduler_utils from nova import test from nova.tests import uuidsentinel as uuids CONF = nova.conf.CONF class SafeConnectedTestCase(test.NoDBTestCase): """Test the safe_connect decorator for the scheduler client.""" def setUp(self): super(SafeConnectedTestCase, self).setUp() self.context = context.get_admin_context() with mock.patch('keystoneauth1.loading.load_auth_from_conf_options'): self.client = report.SchedulerReportClient() @mock.patch('keystoneauth1.session.Session.request') def test_missing_endpoint(self, req): """Test EndpointNotFound behavior. A missing endpoint entry should not explode. """ req.side_effect = ks_exc.EndpointNotFound() self.client._get_resource_provider("fake") # reset the call count to demonstrate that future calls still # work req.reset_mock() self.client._get_resource_provider("fake") self.assertTrue(req.called) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_create_client') @mock.patch('keystoneauth1.session.Session.request') def test_missing_endpoint_create_client(self, req, create_client): """Test EndpointNotFound retry behavior. A missing endpoint should cause _create_client to be called. """ req.side_effect = ks_exc.EndpointNotFound() self.client._get_resource_provider("fake") # This is the second time _create_client is called, but the first since # the mock was created. self.assertTrue(create_client.called) @mock.patch('keystoneauth1.session.Session.request') def test_missing_auth(self, req): """Test Missing Auth handled correctly. A missing auth configuration should not explode. """ req.side_effect = ks_exc.MissingAuthPlugin() self.client._get_resource_provider("fake") # reset the call count to demonstrate that future calls still # work req.reset_mock() self.client._get_resource_provider("fake") self.assertTrue(req.called) @mock.patch('keystoneauth1.session.Session.request') def test_unauthorized(self, req): """Test Unauthorized handled correctly. An unauthorized configuration should not explode. """ req.side_effect = ks_exc.Unauthorized() self.client._get_resource_provider("fake") # reset the call count to demonstrate that future calls still # work req.reset_mock() self.client._get_resource_provider("fake") self.assertTrue(req.called) @mock.patch('keystoneauth1.session.Session.request') def test_connect_fail(self, req): """Test Connect Failure handled correctly. If we get a connect failure, this is transient, and we expect that this will end up working correctly later. 
""" req.side_effect = ks_exc.ConnectFailure() self.client._get_resource_provider("fake") # reset the call count to demonstrate that future calls do # work req.reset_mock() self.client._get_resource_provider("fake") self.assertTrue(req.called) @mock.patch.object(report, 'LOG') def test_warning_limit(self, mock_log): # Assert that __init__ initializes _warn_count as we expect self.assertEqual(0, self.client._warn_count) mock_self = mock.MagicMock() mock_self._warn_count = 0 for i in range(0, report.WARN_EVERY + 3): report.warn_limit(mock_self, 'warning') mock_log.warning.assert_has_calls([mock.call('warning'), mock.call('warning')]) @mock.patch('keystoneauth1.session.Session.request') def test_failed_discovery(self, req): """Test DiscoveryFailure behavior. Failed discovery should not blow up. """ req.side_effect = ks_exc.DiscoveryFailure() self.client._get_resource_provider("fake") # reset the call count to demonstrate that future calls still # work req.reset_mock() self.client._get_resource_provider("fake") self.assertTrue(req.called) class TestConstructor(test.NoDBTestCase): @mock.patch('keystoneauth1.loading.load_session_from_conf_options') @mock.patch('keystoneauth1.loading.load_auth_from_conf_options') def test_constructor(self, load_auth_mock, load_sess_mock): client = report.SchedulerReportClient() load_auth_mock.assert_called_once_with(CONF, 'placement') load_sess_mock.assert_called_once_with(CONF, 'placement', auth=load_auth_mock.return_value) self.assertEqual(['internal', 'public'], client._client.interface) self.assertEqual({'accept': 'application/json'}, client._client.additional_headers) @mock.patch('keystoneauth1.loading.load_session_from_conf_options') @mock.patch('keystoneauth1.loading.load_auth_from_conf_options') def test_constructor_admin_interface(self, load_auth_mock, load_sess_mock): self.flags(valid_interfaces='admin', group='placement') client = report.SchedulerReportClient() load_auth_mock.assert_called_once_with(CONF, 'placement') load_sess_mock.assert_called_once_with(CONF, 'placement', auth=load_auth_mock.return_value) self.assertEqual(['admin'], client._client.interface) self.assertEqual({'accept': 'application/json'}, client._client.additional_headers) class SchedulerReportClientTestCase(test.NoDBTestCase): def setUp(self): super(SchedulerReportClientTestCase, self).setUp() self.context = context.get_admin_context() self.ks_adap_mock = mock.Mock() self.compute_node = objects.ComputeNode( uuid=uuids.compute_node, hypervisor_hostname='foo', vcpus=8, cpu_allocation_ratio=16.0, memory_mb=1024, ram_allocation_ratio=1.5, local_gb=10, disk_allocation_ratio=1.0, ) with test.nested( mock.patch('keystoneauth1.adapter.Adapter', return_value=self.ks_adap_mock), mock.patch('keystoneauth1.loading.load_auth_from_conf_options') ): self.client = report.SchedulerReportClient() def _init_provider_tree(self, generation_override=None, resources_override=None): cn = self.compute_node resources = resources_override if resources_override is None: resources = { 'VCPU': { 'total': cn.vcpus, 'reserved': 0, 'min_unit': 1, 'max_unit': cn.vcpus, 'step_size': 1, 'allocation_ratio': cn.cpu_allocation_ratio, }, 'MEMORY_MB': { 'total': cn.memory_mb, 'reserved': 512, 'min_unit': 1, 'max_unit': cn.memory_mb, 'step_size': 1, 'allocation_ratio': cn.ram_allocation_ratio, }, 'DISK_GB': { 'total': cn.local_gb, 'reserved': 0, 'min_unit': 1, 'max_unit': cn.local_gb, 'step_size': 1, 'allocation_ratio': cn.disk_allocation_ratio, }, } generation = generation_override or 1 rp_uuid = 
self.client._provider_tree.new_root( cn.hypervisor_hostname, cn.uuid, generation, ) self.client._provider_tree.update_inventory(rp_uuid, resources, generation) def _validate_provider(self, name_or_uuid, **kwargs): """Validates existence and values of a provider in this client's _provider_tree. :param name_or_uuid: The name or UUID of the provider to validate. :param kwargs: Optional keyword arguments of ProviderData attributes whose values are to be validated. """ found = self.client._provider_tree.data(name_or_uuid) # If kwargs provided, their names indicate ProviderData attributes for attr, expected in kwargs.items(): try: self.assertEqual(getattr(found, attr), expected) except AttributeError: self.fail("Provider with name or UUID %s doesn't have " "attribute %s (expected value: %s)" % (name_or_uuid, attr, expected)) class TestPutAllocations(SchedulerReportClientTestCase): @mock.patch('nova.scheduler.client.report.SchedulerReportClient.put') def test_put_allocations(self, mock_put): mock_put.return_value.status_code = 204 mock_put.return_value.text = "cool" rp_uuid = mock.sentinel.rp consumer_uuid = mock.sentinel.consumer data = {"MEMORY_MB": 1024} expected_url = "/allocations/%s" % consumer_uuid resp = self.client.put_allocations(self.context, rp_uuid, consumer_uuid, data, mock.sentinel.project_id, mock.sentinel.user_id) self.assertTrue(resp) mock_put.assert_called_once_with( expected_url, mock.ANY, version='1.8', global_request_id=self.context.global_id) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.put') def test_put_allocations_fail_fallback_succeeds(self, mock_put): not_acceptable = mock.Mock() not_acceptable.status_code = 406 not_acceptable.text = 'microversion not supported' ok_request = mock.Mock() ok_request.status_code = 204 ok_request.text = 'cool' mock_put.side_effect = [not_acceptable, ok_request] rp_uuid = mock.sentinel.rp consumer_uuid = mock.sentinel.consumer data = {"MEMORY_MB": 1024} expected_url = "/allocations/%s" % consumer_uuid resp = self.client.put_allocations(self.context, rp_uuid, consumer_uuid, data, mock.sentinel.project_id, mock.sentinel.user_id) self.assertTrue(resp) # Should fall back to earlier way if 1.8 fails. 
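        # Illustrative aside (not nova's code): the two calls asserted just
        # below pin down a version-fallback pattern -- PUT with microversion
        # 1.8 first, then retry the identical PUT with no version if
        # placement answers 406. Schematically, with a hypothetical `put`
        # callable:
        #
        #     def put_allocations_with_fallback(put, url, payload):
        #         resp = put(url, payload, version='1.8')
        #         if resp.status_code == 406:   # microversion unsupported
        #             resp = put(url, payload)  # legacy form, no version
        #         return resp.status_code == 204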
        call1 = mock.call(expected_url, mock.ANY, version='1.8',
                          global_request_id=self.context.global_id)
        call2 = mock.call(expected_url, mock.ANY)
        self.assertEqual(2, mock_put.call_count)
        mock_put.assert_has_calls([call1, call2])

    @mock.patch.object(report.LOG, 'warning')
    @mock.patch('nova.scheduler.client.report.SchedulerReportClient.put')
    def test_put_allocations_fail(self, mock_put, mock_warn):
        mock_put.return_value.status_code = 400
        mock_put.return_value.text = "not cool"

        rp_uuid = mock.sentinel.rp
        consumer_uuid = mock.sentinel.consumer
        data = {"MEMORY_MB": 1024}
        expected_url = "/allocations/%s" % consumer_uuid
        resp = self.client.put_allocations(self.context, rp_uuid,
                                           consumer_uuid, data,
                                           mock.sentinel.project_id,
                                           mock.sentinel.user_id)
        self.assertFalse(resp)
        mock_put.assert_called_once_with(
            expected_url, mock.ANY, version='1.8',
            global_request_id=self.context.global_id)
        log_msg = mock_warn.call_args[0][0]
        self.assertIn("Unable to submit allocation for instance", log_msg)

    @mock.patch('nova.scheduler.client.report.SchedulerReportClient.put')
    def test_put_allocations_retries_conflict(self, mock_put):
        failed = mock.MagicMock()
        failed.status_code = 409
        failed.text = "concurrently updated"

        succeeded = mock.MagicMock()
        succeeded.status_code = 204

        mock_put.side_effect = (failed, succeeded)

        rp_uuid = mock.sentinel.rp
        consumer_uuid = mock.sentinel.consumer
        data = {"MEMORY_MB": 1024}
        expected_url = "/allocations/%s" % consumer_uuid
        resp = self.client.put_allocations(self.context, rp_uuid,
                                           consumer_uuid, data,
                                           mock.sentinel.project_id,
                                           mock.sentinel.user_id)
        self.assertTrue(resp)
        mock_put.assert_has_calls([
            mock.call(expected_url, mock.ANY, version='1.8',
                      global_request_id=self.context.global_id)] * 2)

    @mock.patch('nova.scheduler.client.report.SchedulerReportClient.put')
    def test_put_allocations_retry_gives_up(self, mock_put):
        failed = mock.MagicMock()
        failed.status_code = 409
        failed.text = "concurrently updated"

        mock_put.return_value = failed

        rp_uuid = mock.sentinel.rp
        consumer_uuid = mock.sentinel.consumer
        data = {"MEMORY_MB": 1024}
        expected_url = "/allocations/%s" % consumer_uuid
        resp = self.client.put_allocations(self.context, rp_uuid,
                                           consumer_uuid, data,
                                           mock.sentinel.project_id,
                                           mock.sentinel.user_id)
        self.assertFalse(resp)
        mock_put.assert_has_calls([
            mock.call(expected_url, mock.ANY, version='1.8',
                      global_request_id=self.context.global_id)] * 3)

    def test_claim_resources_success_with_old_version(self):
        get_resp_mock = mock.Mock(status_code=200)
        get_resp_mock.json.return_value = {
            'allocations': {},  # build instance, not move
        }
        self.ks_adap_mock.get.return_value = get_resp_mock
        resp_mock = mock.Mock(status_code=204)
        self.ks_adap_mock.put.return_value = resp_mock
        consumer_uuid = uuids.consumer_uuid
        alloc_req = {
            'allocations': [
                {
                    'resource_provider': {
                        'uuid': uuids.cn1
                    },
                    'resources': {
                        'VCPU': 1,
                        'MEMORY_MB': 1024,
                    }
                },
            ],
        }

        project_id = uuids.project_id
        user_id = uuids.user_id
        res = self.client.claim_resources(
            self.context, consumer_uuid, alloc_req, project_id, user_id)

        expected_url = "/allocations/%s" % consumer_uuid
        expected_payload = {
            'allocations': {
                alloc['resource_provider']['uuid']: {
                    'resources': alloc['resources']
                }
                for alloc in alloc_req['allocations']
            }
        }
        expected_payload['project_id'] = project_id
        expected_payload['user_id'] = user_id
        self.ks_adap_mock.put.assert_called_once_with(
            expected_url, microversion='1.12', json=expected_payload,
            raise_exc=False,
            headers={'X-Openstack-Request-Id': self.context.global_id})

        self.assertTrue(res)

    def test_claim_resources_success(self):
get_resp_mock = mock.Mock(status_code=200) get_resp_mock.json.return_value = { 'allocations': {}, # build instance, not move } self.ks_adap_mock.get.return_value = get_resp_mock resp_mock = mock.Mock(status_code=204) self.ks_adap_mock.put.return_value = resp_mock consumer_uuid = uuids.consumer_uuid alloc_req = { 'allocations': { uuids.cn1: { 'resources': { 'VCPU': 1, 'MEMORY_MB': 1024, } }, }, } project_id = uuids.project_id user_id = uuids.user_id res = self.client.claim_resources(self.context, consumer_uuid, alloc_req, project_id, user_id, allocation_request_version='1.12') expected_url = "/allocations/%s" % consumer_uuid expected_payload = {'allocations': { rp_uuid: alloc for rp_uuid, alloc in alloc_req['allocations'].items()}} expected_payload['project_id'] = project_id expected_payload['user_id'] = user_id self.ks_adap_mock.put.assert_called_once_with( expected_url, microversion='1.12', json=expected_payload, raise_exc=False, headers={'X-Openstack-Request-Id': self.context.global_id}) self.assertTrue(res) def test_claim_resources_success_move_operation_no_shared(self): """Tests that when a move operation is detected (existing allocations for the same instance UUID) that we end up constructing an appropriate allocation that contains the original resources on the source host as well as the resources on the destination host. """ get_resp_mock = mock.Mock(status_code=200) get_resp_mock.json.return_value = { 'allocations': { uuids.source: { 'resource_provider_generation': 42, 'resources': { 'VCPU': 1, 'MEMORY_MB': 1024, }, }, }, } self.ks_adap_mock.get.return_value = get_resp_mock resp_mock = mock.Mock(status_code=204) self.ks_adap_mock.put.return_value = resp_mock consumer_uuid = uuids.consumer_uuid alloc_req = { 'allocations': { uuids.destination: { 'resources': { 'VCPU': 1, 'MEMORY_MB': 1024 } }, }, } project_id = uuids.project_id user_id = uuids.user_id res = self.client.claim_resources(self.context, consumer_uuid, alloc_req, project_id, user_id, allocation_request_version='1.12') expected_url = "/allocations/%s" % consumer_uuid # New allocation should include resources claimed on both the source # and destination hosts expected_payload = { 'allocations': { uuids.source: { 'resources': { 'VCPU': 1, 'MEMORY_MB': 1024 } }, uuids.destination: { 'resources': { 'VCPU': 1, 'MEMORY_MB': 1024 } }, }, } expected_payload['project_id'] = project_id expected_payload['user_id'] = user_id self.ks_adap_mock.put.assert_called_once_with( expected_url, microversion='1.12', json=mock.ANY, raise_exc=False, headers={'X-Openstack-Request-Id': self.context.global_id}) # We have to pull the json body from the mock call_args to validate # it separately otherwise hash seed issues get in the way. actual_payload = self.ks_adap_mock.put.call_args[1]['json'] self.assertEqual(expected_payload, actual_payload) self.assertTrue(res) def test_claim_resources_success_move_operation_with_shared(self): """Tests that when a move operation is detected (existing allocations for the same instance UUID) that we end up constructing an appropriate allocation that contains the original resources on the source host as well as the resources on the destination host but that when a shared storage provider is claimed against in both the original allocation as well as the new allocation request, we don't double that allocation resource request up. 
""" get_resp_mock = mock.Mock(status_code=200) get_resp_mock.json.return_value = { 'allocations': { uuids.source: { 'resource_provider_generation': 42, 'resources': { 'VCPU': 1, 'MEMORY_MB': 1024, }, }, uuids.shared_storage: { 'resource_provider_generation': 42, 'resources': { 'DISK_GB': 100, }, }, }, } self.ks_adap_mock.get.return_value = get_resp_mock resp_mock = mock.Mock(status_code=204) self.ks_adap_mock.put.return_value = resp_mock consumer_uuid = uuids.consumer_uuid alloc_req = { 'allocations': { uuids.destination: { 'resources': { 'VCPU': 1, 'MEMORY_MB': 1024, } }, uuids.shared_storage: { 'resources': { 'DISK_GB': 100, } }, } } project_id = uuids.project_id user_id = uuids.user_id res = self.client.claim_resources(self.context, consumer_uuid, alloc_req, project_id, user_id, allocation_request_version='1.12') expected_url = "/allocations/%s" % consumer_uuid # New allocation should include resources claimed on both the source # and destination hosts but not have a doubled-up request for the disk # resources on the shared provider expected_payload = { 'allocations': { uuids.source: { 'resources': { 'VCPU': 1, 'MEMORY_MB': 1024 } }, uuids.shared_storage: { 'resources': { 'DISK_GB': 100 } }, uuids.destination: { 'resources': { 'VCPU': 1, 'MEMORY_MB': 1024 } }, }, } expected_payload['project_id'] = project_id expected_payload['user_id'] = user_id self.ks_adap_mock.put.assert_called_once_with( expected_url, microversion='1.12', json=mock.ANY, raise_exc=False, headers={'X-Openstack-Request-Id': self.context.global_id}) # We have to pull the allocations from the json body from the # mock call_args to validate it separately otherwise hash seed # issues get in the way. actual_payload = self.ks_adap_mock.put.call_args[1]['json'] self.assertEqual(expected_payload, actual_payload) self.assertTrue(res) def test_claim_resources_success_resize_to_same_host_no_shared(self): """Tests that when a resize to the same host operation is detected (existing allocations for the same instance UUID and same resource provider) that we end up constructing an appropriate allocation that contains the original resources on the source host as well as the resources on the destination host, which in this case are the same. """ get_current_allocations_resp_mock = mock.Mock(status_code=200) get_current_allocations_resp_mock.json.return_value = { 'allocations': { uuids.same_host: { 'resource_provider_generation': 42, 'resources': { 'VCPU': 1, 'MEMORY_MB': 1024, 'DISK_GB': 20 }, }, }, } self.ks_adap_mock.get.return_value = get_current_allocations_resp_mock put_allocations_resp_mock = mock.Mock(status_code=204) self.ks_adap_mock.put.return_value = put_allocations_resp_mock consumer_uuid = uuids.consumer_uuid # This is the resize-up allocation where VCPU, MEMORY_MB and DISK_GB # are all being increased but on the same host. We also throw a custom # resource class in the new allocation to make sure it's not lost and # that we don't have a KeyError when merging the allocations. alloc_req = { 'allocations': { uuids.same_host: { 'resources': { 'VCPU': 2, 'MEMORY_MB': 2048, 'DISK_GB': 40, 'CUSTOM_FOO': 1 } }, }, } project_id = uuids.project_id user_id = uuids.user_id res = self.client.claim_resources(self.context, consumer_uuid, alloc_req, project_id, user_id, allocation_request_version='1.12') expected_url = "/allocations/%s" % consumer_uuid # New allocation should include doubled resources claimed on the same # host. 
expected_payload = { 'allocations': { uuids.same_host: { 'resources': { 'VCPU': 3, 'MEMORY_MB': 3072, 'DISK_GB': 60, 'CUSTOM_FOO': 1 } }, }, } expected_payload['project_id'] = project_id expected_payload['user_id'] = user_id self.ks_adap_mock.put.assert_called_once_with( expected_url, microversion='1.12', json=mock.ANY, raise_exc=False, headers={'X-Openstack-Request-Id': self.context.global_id}) # We have to pull the json body from the mock call_args to validate # it separately otherwise hash seed issues get in the way. actual_payload = self.ks_adap_mock.put.call_args[1]['json'] self.assertEqual(expected_payload, actual_payload) self.assertTrue(res) def test_claim_resources_success_resize_to_same_host_with_shared(self): """Tests that when a resize to the same host operation is detected (existing allocations for the same instance UUID and same resource provider) that we end up constructing an appropriate allocation that contains the original resources on the source host as well as the resources on the destination host, which in this case are the same. This test adds the fun wrinkle of throwing a shared storage provider in the mix when doing resize to the same host. """ get_current_allocations_resp_mock = mock.Mock(status_code=200) get_current_allocations_resp_mock.json.return_value = { 'allocations': { uuids.same_host: { 'resource_provider_generation': 42, 'resources': { 'VCPU': 1, 'MEMORY_MB': 1024 }, }, uuids.shared_storage: { 'resource_provider_generation': 42, 'resources': { 'DISK_GB': 20, }, }, }, } self.ks_adap_mock.get.return_value = get_current_allocations_resp_mock put_allocations_resp_mock = mock.Mock(status_code=204) self.ks_adap_mock.put.return_value = put_allocations_resp_mock consumer_uuid = uuids.consumer_uuid # This is the resize-up allocation where VCPU, MEMORY_MB and DISK_GB # are all being increased but DISK_GB is on a shared storage provider. alloc_req = { 'allocations': { uuids.same_host: { 'resources': { 'VCPU': 2, 'MEMORY_MB': 2048 } }, uuids.shared_storage: { 'resources': { 'DISK_GB': 40, } }, }, } project_id = uuids.project_id user_id = uuids.user_id res = self.client.claim_resources(self.context, consumer_uuid, alloc_req, project_id, user_id, allocation_request_version='1.12') expected_url = "/allocations/%s" % consumer_uuid # New allocation should include doubled resources claimed on the same # host. expected_payload = { 'allocations': { uuids.same_host: { 'resources': { 'VCPU': 3, 'MEMORY_MB': 3072 } }, uuids.shared_storage: { 'resources': { 'DISK_GB': 60 } }, }, } expected_payload['project_id'] = project_id expected_payload['user_id'] = user_id self.ks_adap_mock.put.assert_called_once_with( expected_url, microversion='1.12', json=mock.ANY, raise_exc=False, headers={'X-Openstack-Request-Id': self.context.global_id}) # We have to pull the json body from the mock call_args to validate # it separately otherwise hash seed issues get in the way. actual_payload = self.ks_adap_mock.put.call_args[1]['json'] self.assertEqual(expected_payload, actual_payload) self.assertTrue(res) def test_claim_resources_fail_retry_success(self): get_resp_mock = mock.Mock(status_code=200) get_resp_mock.json.return_value = { 'allocations': {}, # build instance, not move } self.ks_adap_mock.get.return_value = get_resp_mock resp_mocks = [ mock.Mock( status_code=409, text='Inventory changed while attempting to allocate: ' 'Another thread concurrently updated the data. 
' 'Please retry your update'), mock.Mock(status_code=204), ] self.ks_adap_mock.put.side_effect = resp_mocks consumer_uuid = uuids.consumer_uuid alloc_req = { 'allocations': { uuids.cn1: { 'resources': { 'VCPU': 1, 'MEMORY_MB': 1024, } }, }, } project_id = uuids.project_id user_id = uuids.user_id res = self.client.claim_resources(self.context, consumer_uuid, alloc_req, project_id, user_id, allocation_request_version='1.12') expected_url = "/allocations/%s" % consumer_uuid expected_payload = { 'allocations': {rp_uuid: res for rp_uuid, res in alloc_req['allocations'].items()} } expected_payload['project_id'] = project_id expected_payload['user_id'] = user_id # We should have exactly two calls to the placement API that look # identical since we're retrying the same HTTP request expected_calls = [ mock.call(expected_url, microversion='1.12', json=expected_payload, raise_exc=False, headers={'X-Openstack-Request-Id': self.context.global_id})] * 2 self.assertEqual(len(expected_calls), self.ks_adap_mock.put.call_count) self.ks_adap_mock.put.assert_has_calls(expected_calls) self.assertTrue(res) @mock.patch.object(report.LOG, 'warning') def test_claim_resources_failure(self, mock_log): get_resp_mock = mock.Mock(status_code=200) get_resp_mock.json.return_value = { 'allocations': {}, # build instance, not move } self.ks_adap_mock.get.return_value = get_resp_mock resp_mock = mock.Mock(status_code=409, text='not cool') self.ks_adap_mock.put.return_value = resp_mock consumer_uuid = uuids.consumer_uuid alloc_req = { 'allocations': { uuids.cn1: { 'resources': { 'VCPU': 1, 'MEMORY_MB': 1024, } }, }, } project_id = uuids.project_id user_id = uuids.user_id res = self.client.claim_resources(self.context, consumer_uuid, alloc_req, project_id, user_id, allocation_request_version='1.12') expected_url = "/allocations/%s" % consumer_uuid expected_payload = { 'allocations': {rp_uuid: res for rp_uuid, res in alloc_req['allocations'].items()} } expected_payload['project_id'] = project_id expected_payload['user_id'] = user_id self.ks_adap_mock.put.assert_called_once_with( expected_url, microversion='1.12', json=expected_payload, raise_exc=False, headers={'X-Openstack-Request-Id': self.context.global_id}) self.assertFalse(res) self.assertTrue(mock_log.called) def test_remove_provider_from_inst_alloc_no_shared(self): """Tests that the method which manipulates an existing doubled-up allocation for a move operation to remove the source host results in sending placement the proper payload to PUT /allocations/{consumer_uuid} call. """ get_resp_mock = mock.Mock(status_code=200) get_resp_mock.json.return_value = { 'allocations': { uuids.source: { 'resource_provider_generation': 42, 'resources': { 'VCPU': 1, 'MEMORY_MB': 1024, }, }, uuids.destination: { 'resource_provider_generation': 42, 'resources': { 'VCPU': 1, 'MEMORY_MB': 1024, }, }, }, } self.ks_adap_mock.get.return_value = get_resp_mock resp_mock = mock.Mock(status_code=204) self.ks_adap_mock.put.return_value = resp_mock consumer_uuid = uuids.consumer_uuid project_id = uuids.project_id user_id = uuids.user_id res = self.client.remove_provider_from_instance_allocation( self.context, consumer_uuid, uuids.source, user_id, project_id, mock.Mock()) expected_url = "/allocations/%s" % consumer_uuid # New allocations should only include the destination... 
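        # Illustrative aside (not nova's code): the expected payload below is
        # simply the current allocation with every entry for the source
        # provider filtered out, re-keyed into the list-of-dicts form used by
        # the 1.10 microversion. A sketch (helper name hypothetical):
        #
        #     def without_source(allocations, source_rp):
        #         return [
        #             {'resource_provider': {'uuid': rp},
        #              'resources': body['resources']}
        #             for rp, body in allocations.items()
        #             if rp != source_rp
        #         ]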
expected_payload = { 'allocations': [ { 'resource_provider': { 'uuid': uuids.destination, }, 'resources': { 'VCPU': 1, 'MEMORY_MB': 1024, }, }, ], } expected_payload['project_id'] = project_id expected_payload['user_id'] = user_id # We have to pull the json body from the mock call_args to validate # it separately otherwise hash seed issues get in the way. actual_payload = self.ks_adap_mock.put.call_args[1]['json'] sort_by_uuid = lambda x: x['resource_provider']['uuid'] expected_allocations = sorted(expected_payload['allocations'], key=sort_by_uuid) actual_allocations = sorted(actual_payload['allocations'], key=sort_by_uuid) self.assertEqual(expected_allocations, actual_allocations) self.ks_adap_mock.put.assert_called_once_with( expected_url, microversion='1.10', json=mock.ANY, raise_exc=False, headers={'X-Openstack-Request-Id': self.context.global_id}) self.assertTrue(res) def test_remove_provider_from_inst_alloc_with_shared(self): """Tests that the method which manipulates an existing doubled-up allocation with DISK_GB being consumed from a shared storage provider for a move operation to remove the source host results in sending placement the proper payload to PUT /allocations/{consumer_uuid} call. """ get_resp_mock = mock.Mock(status_code=200) get_resp_mock.json.return_value = { 'allocations': { uuids.source: { 'resource_provider_generation': 42, 'resources': { 'VCPU': 1, 'MEMORY_MB': 1024, }, }, uuids.shared_storage: { 'resource_provider_generation': 42, 'resources': { 'DISK_GB': 100, }, }, uuids.destination: { 'resource_provider_generation': 42, 'resources': { 'VCPU': 1, 'MEMORY_MB': 1024, }, }, }, } self.ks_adap_mock.get.return_value = get_resp_mock resp_mock = mock.Mock(status_code=204) self.ks_adap_mock.put.return_value = resp_mock consumer_uuid = uuids.consumer_uuid project_id = uuids.project_id user_id = uuids.user_id res = self.client.remove_provider_from_instance_allocation( self.context, consumer_uuid, uuids.source, user_id, project_id, mock.Mock()) expected_url = "/allocations/%s" % consumer_uuid # New allocations should only include the destination... expected_payload = { 'allocations': [ { 'resource_provider': { 'uuid': uuids.shared_storage, }, 'resources': { 'DISK_GB': 100, }, }, { 'resource_provider': { 'uuid': uuids.destination, }, 'resources': { 'VCPU': 1, 'MEMORY_MB': 1024, }, }, ], } expected_payload['project_id'] = project_id expected_payload['user_id'] = user_id # We have to pull the json body from the mock call_args to validate # it separately otherwise hash seed issues get in the way. actual_payload = self.ks_adap_mock.put.call_args[1]['json'] sort_by_uuid = lambda x: x['resource_provider']['uuid'] expected_allocations = sorted(expected_payload['allocations'], key=sort_by_uuid) actual_allocations = sorted(actual_payload['allocations'], key=sort_by_uuid) self.assertEqual(expected_allocations, actual_allocations) self.ks_adap_mock.put.assert_called_once_with( expected_url, microversion='1.10', json=mock.ANY, raise_exc=False, headers={'X-Openstack-Request-Id': self.context.global_id}) self.assertTrue(res) def test_remove_provider_from_inst_alloc_no_source(self): """Tests that if remove_provider_from_instance_allocation() fails to find any allocations for the source host, it just returns True and does not attempt to rewrite the allocation for the consumer. 
""" get_resp_mock = mock.Mock(status_code=200) # Act like the allocations already did not include the source host for # some reason get_resp_mock.json.return_value = { 'allocations': { uuids.shared_storage: { 'resource_provider_generation': 42, 'resources': { 'DISK_GB': 100, }, }, uuids.destination: { 'resource_provider_generation': 42, 'resources': { 'VCPU': 1, 'MEMORY_MB': 1024, }, }, }, } self.ks_adap_mock.get.return_value = get_resp_mock consumer_uuid = uuids.consumer_uuid project_id = uuids.project_id user_id = uuids.user_id res = self.client.remove_provider_from_instance_allocation( self.context, consumer_uuid, uuids.source, user_id, project_id, mock.Mock()) self.ks_adap_mock.get.assert_called() self.ks_adap_mock.put.assert_not_called() self.assertTrue(res) def test_remove_provider_from_inst_alloc_fail_get_allocs(self): """Tests that we gracefully exit with False from remove_provider_from_instance_allocation() if the call to get the existing allocations fails for some reason """ get_resp_mock = mock.Mock(status_code=500) self.ks_adap_mock.get.return_value = get_resp_mock consumer_uuid = uuids.consumer_uuid project_id = uuids.project_id user_id = uuids.user_id res = self.client.remove_provider_from_instance_allocation( self.context, consumer_uuid, uuids.source, user_id, project_id, mock.Mock()) self.ks_adap_mock.get.assert_called() self.ks_adap_mock.put.assert_not_called() self.assertFalse(res) class TestSetAndClearAllocations(SchedulerReportClientTestCase): def setUp(self): super(TestSetAndClearAllocations, self).setUp() # We want to reuse the mock throughout the class, but with # different return values. self.mock_post = mock.patch( 'nova.scheduler.client.report.SchedulerReportClient.post').start() self.addCleanup(self.mock_post.stop) self.mock_post.return_value.status_code = 204 self.rp_uuid = mock.sentinel.rp self.consumer_uuid = mock.sentinel.consumer self.data = {"MEMORY_MB": 1024} self.project_id = mock.sentinel.project_id self.user_id = mock.sentinel.user_id self.expected_url = '/allocations' def test_url_microversion(self): expected_microversion = '1.13' resp = self.client.set_and_clear_allocations( self.context, self.rp_uuid, self.consumer_uuid, self.data, self.project_id, self.user_id) self.assertTrue(resp) self.mock_post.assert_called_once_with( self.expected_url, mock.ANY, version=expected_microversion, global_request_id=self.context.global_id) def test_payload_no_clear(self): expected_payload = { self.consumer_uuid: { 'user_id': self.user_id, 'project_id': self.project_id, 'allocations': { self.rp_uuid: { 'resources': { 'MEMORY_MB': 1024 } } } } } resp = self.client.set_and_clear_allocations( self.context, self.rp_uuid, self.consumer_uuid, self.data, self.project_id, self.user_id) self.assertTrue(resp) args, kwargs = self.mock_post.call_args payload = args[1] self.assertEqual(expected_payload, payload) def test_payload_with_clear(self): expected_payload = { self.consumer_uuid: { 'user_id': self.user_id, 'project_id': self.project_id, 'allocations': { self.rp_uuid: { 'resources': { 'MEMORY_MB': 1024 } } } }, mock.sentinel.migration_uuid: { 'user_id': self.user_id, 'project_id': self.project_id, 'allocations': {} } } resp = self.client.set_and_clear_allocations( self.context, self.rp_uuid, self.consumer_uuid, self.data, self.project_id, self.user_id, consumer_to_clear=mock.sentinel.migration_uuid) self.assertTrue(resp) args, kwargs = self.mock_post.call_args payload = args[1] self.assertEqual(expected_payload, payload) def test_409_concurrent_update(self): 
        self.mock_post.return_value.status_code = 409
        self.mock_post.return_value.text = 'concurrently updated'

        resp = self.client.set_and_clear_allocations(
            self.context, self.rp_uuid, self.consumer_uuid, self.data,
            self.project_id, self.user_id,
            consumer_to_clear=mock.sentinel.migration_uuid)

        self.assertFalse(resp)
        # Post was attempted three times.
        self.assertEqual(3, self.mock_post.call_count)

    @mock.patch('nova.scheduler.client.report.LOG.warning')
    def test_not_409_failure(self, mock_log):
        error_message = 'placement not there'
        self.mock_post.return_value.status_code = 503
        self.mock_post.return_value.text = error_message

        resp = self.client.set_and_clear_allocations(
            self.context, self.rp_uuid, self.consumer_uuid, self.data,
            self.project_id, self.user_id,
            consumer_to_clear=mock.sentinel.migration_uuid)

        self.assertFalse(resp)
        args, kwargs = mock_log.call_args
        log_message = args[0]
        log_args = args[1]
        self.assertIn('Unable to post allocations', log_message)
        self.assertEqual(error_message, log_args['text'])


class TestProviderOperations(SchedulerReportClientTestCase):
    @mock.patch('nova.scheduler.client.report.SchedulerReportClient.'
                '_create_resource_provider')
    @mock.patch('nova.scheduler.client.report.SchedulerReportClient.'
                '_get_provider_aggregates')
    @mock.patch('nova.scheduler.client.report.SchedulerReportClient.'
                '_get_provider_traits')
    @mock.patch('nova.scheduler.client.report.SchedulerReportClient.'
                '_get_sharing_providers')
    @mock.patch('nova.scheduler.client.report.SchedulerReportClient.'
                '_get_providers_in_tree')
    def test_ensure_resource_provider_exists_in_cache(self, get_rpt_mock,
            get_shr_mock, get_trait_mock, get_agg_mock, create_rp_mock):
        # Override the client object's cache to contain a resource provider
        # object for the compute host and check that
        # _ensure_resource_provider() doesn't call _get_resource_provider() or
        # _create_resource_provider()
        cn = self.compute_node
        self.client._provider_tree.new_root(
            cn.hypervisor_hostname,
            cn.uuid,
            1,
        )
        get_agg_mock.side_effect = [
            set([uuids.agg1, uuids.agg2]),
            set([uuids.agg1, uuids.agg3]),
            set([uuids.agg2]),
        ]
        get_trait_mock.side_effect = [
            set(['CUSTOM_GOLD', 'CUSTOM_SILVER']),
            set(),
            set(['CUSTOM_BRONZE'])
        ]
        get_shr_mock.return_value = [
            {
                'uuid': uuids.shr1,
                'name': 'sharing1',
                'generation': 1,
            },
            {
                'uuid': uuids.shr2,
                'name': 'sharing2',
                'generation': 2,
            },
        ]

        self.client._ensure_resource_provider(self.context, cn.uuid)

        get_shr_mock.assert_called_once_with(set([uuids.agg1, uuids.agg2]))
        self.assertTrue(self.client._provider_tree.exists(uuids.shr1))
        self.assertTrue(self.client._provider_tree.exists(uuids.shr2))
        # _get_provider_aggregates and _traits were called thrice: once for
        # the compute RP and once for each of the sharing RPs.
expected_calls = [mock.call(uuid) for uuid in (cn.uuid, uuids.shr1, uuids.shr2)] get_agg_mock.assert_has_calls(expected_calls) get_trait_mock.assert_has_calls(expected_calls) # The compute RP is associated with aggs 1 and 2 self.assertFalse(self.client._provider_tree.have_aggregates_changed( uuids.compute_node, [uuids.agg1, uuids.agg2])) # The first sharing RP is associated with agg1 and agg3 self.assertFalse(self.client._provider_tree.have_aggregates_changed( uuids.shr1, [uuids.agg1, uuids.agg3])) # And the second with just agg2 self.assertFalse(self.client._provider_tree.have_aggregates_changed( uuids.shr2, [uuids.agg2])) # The compute RP has gold and silver traits self.assertFalse(self.client._provider_tree.have_traits_changed( uuids.compute_node, ['CUSTOM_GOLD', 'CUSTOM_SILVER'])) # The first sharing RP has none self.assertFalse(self.client._provider_tree.have_traits_changed( uuids.shr1, [])) # The second has bronze self.assertFalse(self.client._provider_tree.have_traits_changed( uuids.shr2, ['CUSTOM_BRONZE'])) # These were not called because we had the root provider in the cache. self.assertFalse(get_rpt_mock.called) self.assertFalse(create_rp_mock.called) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_create_resource_provider') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_get_provider_aggregates') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_get_provider_traits') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_get_sharing_providers') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_get_providers_in_tree') def test_ensure_resource_provider_get(self, get_rpt_mock, get_shr_mock, get_trait_mock, get_agg_mock, create_rp_mock): # No resource provider exists in the client's cache, so validate that # if we get the resource provider from the placement API that we don't # try to create the resource provider. get_rpt_mock.return_value = [{ 'uuid': uuids.compute_node, 'name': mock.sentinel.name, 'generation': 1, }] get_agg_mock.return_value = set([uuids.agg1]) get_trait_mock.return_value = set(['CUSTOM_GOLD']) get_shr_mock.return_value = [] self.client._ensure_resource_provider(self.context, uuids.compute_node) get_rpt_mock.assert_called_once_with(uuids.compute_node) self.assertTrue(self.client._provider_tree.exists(uuids.compute_node)) get_agg_mock.assert_called_once_with(uuids.compute_node) self.assertTrue( self.client._provider_tree.in_aggregates(uuids.compute_node, [uuids.agg1])) self.assertFalse( self.client._provider_tree.in_aggregates(uuids.compute_node, [uuids.agg2])) get_trait_mock.assert_called_once_with(uuids.compute_node) self.assertTrue( self.client._provider_tree.has_traits(uuids.compute_node, ['CUSTOM_GOLD'])) self.assertFalse( self.client._provider_tree.has_traits(uuids.compute_node, ['CUSTOM_SILVER'])) get_shr_mock.assert_called_once_with(set([uuids.agg1])) self.assertTrue(self.client._provider_tree.exists(uuids.compute_node)) self.assertFalse(create_rp_mock.called) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_create_resource_provider') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_refresh_associations') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_get_providers_in_tree') def test_ensure_resource_provider_create_fail(self, get_rpt_mock, refresh_mock, create_rp_mock): # No resource provider exists in the client's cache, and # _create_provider raises, indicating there was an error with the # create call. 
Ensure we don't populate the resource provider cache get_rpt_mock.return_value = [] create_rp_mock.side_effect = exception.ResourceProviderCreationFailed( name=uuids.compute_node) self.assertRaises( exception.ResourceProviderCreationFailed, self.client._ensure_resource_provider, self.context, uuids.compute_node) get_rpt_mock.assert_called_once_with(uuids.compute_node) create_rp_mock.assert_called_once_with( self.context, uuids.compute_node, uuids.compute_node, parent_provider_uuid=None) self.assertFalse(self.client._provider_tree.exists(uuids.compute_node)) self.assertFalse(refresh_mock.called) self.assertRaises( ValueError, self.client._provider_tree.in_aggregates, uuids.compute_node, []) self.assertRaises( ValueError, self.client._provider_tree.has_traits, uuids.compute_node, []) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_create_resource_provider') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_refresh_associations') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_get_providers_in_tree') def test_ensure_resource_provider_create(self, get_rpt_mock, refresh_mock, create_rp_mock): # No resource provider exists in the client's cache and no resource # provider was returned from the placement API, so verify that in this # case we try to create the resource provider via the placement API. get_rpt_mock.return_value = [] create_rp_mock.return_value = { 'uuid': uuids.compute_node, 'name': 'compute-name', 'generation': 1, } self.assertEqual( uuids.compute_node, self.client._ensure_resource_provider(self.context, uuids.compute_node)) self._validate_provider(uuids.compute_node, name='compute-name', generation=1, parent_uuid=None, aggregates=set(), traits=set()) # We don't refresh for a just-created provider refresh_mock.assert_not_called() get_rpt_mock.assert_called_once_with(uuids.compute_node) create_rp_mock.assert_called_once_with( self.context, uuids.compute_node, uuids.compute_node, # name param defaults to UUID if None parent_provider_uuid=None, ) self.assertTrue(self.client._provider_tree.exists(uuids.compute_node)) create_rp_mock.reset_mock() self.assertEqual( uuids.compute_node, self.client._ensure_resource_provider(self.context, uuids.compute_node)) self._validate_provider(uuids.compute_node, name='compute-name', generation=1, parent_uuid=None) # Shouldn't be called now that provider is in cache... self.assertFalse(create_rp_mock.called) # Validate the path where we specify a name (don't default to the UUID) self.client._ensure_resource_provider( self.context, uuids.cn2, 'a-name') create_rp_mock.assert_called_once_with( self.context, uuids.cn2, 'a-name', parent_provider_uuid=None) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_refresh_associations', new=mock.Mock()) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_create_resource_provider') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_get_providers_in_tree') def test_ensure_resource_provider_tree(self, get_rpt_mock, create_rp_mock): """Test _ensure_resource_provider with a tree of providers.""" def _create_resource_provider(context, uuid, name, parent_provider_uuid=None): """Mock side effect for creating the RP with the specified args.""" return { 'uuid': uuid, 'name': name, 'generation': 0, 'parent_provider_uuid': parent_provider_uuid } create_rp_mock.side_effect = _create_resource_provider # Not initially in the placement database, so we have to create it. 
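        # Illustrative aside (not nova's code): the surrounding tests pin
        # down the order of operations in _ensure_resource_provider --
        # consult the local cache, then GET the provider tree from placement,
        # and only create the provider when neither yields it. Schematically
        # (all names below are hypothetical stand-ins):
        #
        #     def ensure_provider_sketch(tree, get_in_tree, create, uuid):
        #         if tree.exists(uuid):
        #             return uuid  # cache hit: no placement API calls
        #         rps = get_in_tree(uuid) or [create(uuid)]
        #         for rp in rps:
        #             tree.remember(rp)  # populate the cache
        #         return uuid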
get_rpt_mock.return_value = [] # Create the root root = self.client._ensure_resource_provider(self.context, uuids.root) self.assertEqual(uuids.root, root) # Now create a child child1 = self.client._ensure_resource_provider( self.context, uuids.child1, name='junior', parent_provider_uuid=uuids.root) self.assertEqual(uuids.child1, child1) # If we re-ensure the child, we get the object from the tree, not a # newly-created one - i.e. the early .find() works like it should. self.assertIs(child1, self.client._ensure_resource_provider(self.context, uuids.child1)) # Make sure we can create a grandchild grandchild = self.client._ensure_resource_provider( self.context, uuids.grandchild, parent_provider_uuid=uuids.child1) self.assertEqual(uuids.grandchild, grandchild) # Now create a second child of the root and make sure it doesn't wind # up in some crazy wrong place like under child1 or grandchild child2 = self.client._ensure_resource_provider( self.context, uuids.child2, parent_provider_uuid=uuids.root) self.assertEqual(uuids.child2, child2) # At this point we should get all the providers. self.assertEqual( set([uuids.root, uuids.child1, uuids.child2, uuids.grandchild]), set(self.client._provider_tree.get_provider_uuids())) @mock.patch('nova.compute.provider_tree.ProviderTree.exists') @mock.patch('nova.compute.provider_tree.ProviderTree.get_provider_uuids') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_refresh_associations') def test_ensure_resource_provider_refresh_local(self, mock_refresh, mock_gpu, mock_exists): """Make sure refreshes are called with the appropriate UUIDs and flags when the local cache already has the provider in it. """ mock_exists.return_value = True tree_uuids = [uuids.root, uuids.one, uuids.two] mock_gpu.return_value = tree_uuids self.assertEqual(uuids.root, self.client._ensure_resource_provider(self.context, uuids.root)) mock_exists.assert_called_once_with(uuids.root) mock_gpu.assert_called_once_with(uuids.root) mock_refresh.assert_has_calls( [mock.call(uuid, force=False) for uuid in tree_uuids]) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_get_providers_in_tree') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_refresh_associations') def test_ensure_resource_provider_refresh_fetch(self, mock_refresh, mock_gpit): """Make sure refreshes are called with the appropriate UUIDs and flags when we fetch the provider tree from placement. """ tree_uuids = set([uuids.root, uuids.one, uuids.two]) mock_gpit.return_value = [{'uuid': u, 'name': u, 'generation': 42} for u in tree_uuids] self.assertEqual(uuids.root, self.client._ensure_resource_provider(self.context, uuids.root)) mock_gpit.assert_called_once_with(uuids.root) mock_refresh.assert_has_calls( [mock.call(uuid, generation=42, force=True) for uuid in tree_uuids]) self.assertEqual(tree_uuids, set(self.client._provider_tree.get_provider_uuids())) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_get_providers_in_tree') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_create_resource_provider') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 
'_refresh_associations') def test_ensure_resource_provider_refresh_create(self, mock_refresh, mock_create, mock_gpit): """Make sure refresh is not called when we create the RP.""" mock_gpit.return_value = [] mock_create.return_value = {'name': 'cn', 'uuid': uuids.cn, 'generation': 42} self.assertEqual(uuids.root, self.client._ensure_resource_provider(self.context, uuids.root)) mock_gpit.assert_called_once_with(uuids.root) mock_create.assert_called_once_with(self.context, uuids.root, uuids.root, parent_provider_uuid=None) mock_refresh.assert_not_called() self.assertEqual([uuids.cn], self.client._provider_tree.get_provider_uuids()) def test_get_allocation_candidates(self): resp_mock = mock.Mock(status_code=200) json_data = { 'allocation_requests': mock.sentinel.alloc_reqs, 'provider_summaries': mock.sentinel.p_sums, } resources = scheduler_utils.ResourceRequest.from_extra_specs({ 'resources:VCPU': '1', 'resources:MEMORY_MB': '1024', 'resources1:DISK_GB': '30', 'trait:CUSTOM_TRAIT1': 'required', 'trait:CUSTOM_TRAIT2': 'preferred', }) expected_path = '/allocation_candidates' expected_query = {'resources': ['MEMORY_MB:1024,VCPU:1'], 'required': ['CUSTOM_TRAIT1'], 'limit': ['1000']} resp_mock.json.return_value = json_data self.ks_adap_mock.get.return_value = resp_mock alloc_reqs, p_sums, allocation_request_version = \ self.client.get_allocation_candidates(resources) self.ks_adap_mock.get.assert_called_once_with( mock.ANY, raise_exc=False, microversion='1.17') url = self.ks_adap_mock.get.call_args[0][0] split_url = parse.urlsplit(url) query = parse.parse_qs(split_url.query) self.assertEqual(expected_path, split_url.path) self.assertEqual(expected_query, query) self.assertEqual(mock.sentinel.alloc_reqs, alloc_reqs) self.assertEqual(mock.sentinel.p_sums, p_sums) def test_get_allocation_candidates_with_no_trait(self): resp_mock = mock.Mock(status_code=200) json_data = { 'allocation_requests': mock.sentinel.alloc_reqs, 'provider_summaries': mock.sentinel.p_sums, } resources = scheduler_utils.ResourceRequest.from_extra_specs({ 'resources:VCPU': '1', 'resources:MEMORY_MB': '1024', 'resources1:DISK_GB': '30', }) expected_path = '/allocation_candidates' expected_query = {'resources': ['MEMORY_MB:1024,VCPU:1'], 'limit': ['1000']} resp_mock.json.return_value = json_data self.ks_adap_mock.get.return_value = resp_mock alloc_reqs, p_sums, allocation_request_version = \ self.client.get_allocation_candidates(resources) self.ks_adap_mock.get.assert_called_once_with( mock.ANY, raise_exc=False, microversion='1.17') url = self.ks_adap_mock.get.call_args[0][0] split_url = parse.urlsplit(url) query = parse.parse_qs(split_url.query) self.assertEqual(expected_path, split_url.path) self.assertEqual(expected_query, query) self.assertEqual(mock.sentinel.alloc_reqs, alloc_reqs) def test_get_allocation_candidates_not_found(self): # Ensure _get_resource_provider() just returns None when the placement # API doesn't find a resource provider matching a UUID resp_mock = mock.Mock(status_code=404) self.ks_adap_mock.get.return_value = resp_mock expected_path = '/allocation_candidates' expected_query = {'resources': ['MEMORY_MB:1024'], 'limit': ['100']} # Make sure we're also honoring the configured limit self.flags(max_placement_results=100, group='scheduler') resources = scheduler_utils.ResourceRequest.from_extra_specs( {'resources:MEMORY_MB': '1024'}) res = self.client.get_allocation_candidates(resources) self.ks_adap_mock.get.assert_called_once_with( mock.ANY, raise_exc=False, microversion='1.17') url = 
self.ks_adap_mock.get.call_args[0][0]
        split_url = parse.urlsplit(url)
        query = parse.parse_qs(split_url.query)
        self.assertEqual(expected_path, split_url.path)
        self.assertEqual(expected_query, query)
        self.assertIsNone(res[0])

    def test_get_resource_provider_found(self):
        # Ensure _get_resource_provider() returns a dict of resource provider
        # if it finds a resource provider record from the placement API
        uuid = uuids.compute_node
        resp_mock = mock.Mock(status_code=200)
        json_data = {
            'uuid': uuid,
            'name': uuid,
            'generation': 42,
            'parent_provider_uuid': None,
        }
        resp_mock.json.return_value = json_data
        self.ks_adap_mock.get.return_value = resp_mock

        result = self.client._get_resource_provider(uuid)

        expected_provider_dict = dict(
            uuid=uuid,
            name=uuid,
            generation=42,
            parent_provider_uuid=None,
        )
        expected_url = '/resource_providers/' + uuid
        self.ks_adap_mock.get.assert_called_once_with(
            expected_url, raise_exc=False, microversion='1.14')
        self.assertEqual(expected_provider_dict, result)

    def test_get_resource_provider_not_found(self):
        # Ensure _get_resource_provider() just returns None when the placement
        # API doesn't find a resource provider matching a UUID
        resp_mock = mock.Mock(status_code=404)
        self.ks_adap_mock.get.return_value = resp_mock

        uuid = uuids.compute_node
        result = self.client._get_resource_provider(uuid)

        expected_url = '/resource_providers/' + uuid
        self.ks_adap_mock.get.assert_called_once_with(
            expected_url, raise_exc=False, microversion='1.14')
        self.assertIsNone(result)

    @mock.patch.object(report.LOG, 'error')
    def test_get_resource_provider_error(self, logging_mock):
        # Ensure _get_resource_provider() raises when communicating with the
        # placement API results in an error we can't deal with
        resp_mock = mock.Mock(status_code=503)
        self.ks_adap_mock.get.return_value = resp_mock
        self.ks_adap_mock.get.return_value.headers = {
            'x-openstack-request-id': uuids.request_id}

        uuid = uuids.compute_node
        self.assertRaises(
            exception.ResourceProviderRetrievalFailed,
            self.client._get_resource_provider, uuid)

        expected_url = '/resource_providers/' + uuid
        self.ks_adap_mock.get.assert_called_once_with(
            expected_url, raise_exc=False, microversion='1.14')
        # A 503 Service Unavailable should trigger an error log that includes
        # the placement request id, and _get_resource_provider() should raise
        # ResourceProviderRetrievalFailed
        self.assertTrue(logging_mock.called)
        self.assertEqual(uuids.request_id,
                         logging_mock.call_args[0][1]['placement_req_id'])

    @mock.patch('nova.scheduler.client.report.SchedulerReportClient.'
'_get_provider_traits') def test_get_sharing_providers(self, mock_get_traits): # Ensure _get_sharing_providers() returns a list of resource # provider dicts if it finds resource provider records from the # placement API resp_mock = mock.Mock(status_code=200) rpjson = [ { 'uuid': uuids.compute_node, 'name': 'compute_host', 'generation': 42, 'parent_provider_uuid': None, 'root_provider_uuid': None, 'links': [], }, { 'uuid': uuids.sharing, 'name': 'storage_provider', 'generation': 42, 'parent_provider_uuid': None, 'root_provider_uuid': None, 'links': [], }, ] resp_mock.json.return_value = {'resource_providers': rpjson} self.ks_adap_mock.get.return_value = resp_mock mock_get_traits.side_effect = [ set(['MISC_SHARES_VIA_AGGREGATE', 'CUSTOM_FOO']), set(['CUSTOM_BAR']), ] result = self.client._get_sharing_providers([uuids.agg1, uuids.agg2]) expected_url = ('/resource_providers?member_of=in:' + ','.join((uuids.agg1, uuids.agg2))) self.ks_adap_mock.get.assert_called_once_with( expected_url, raise_exc=False, microversion='1.3') self.assertEqual(rpjson[:1], result) def test_get_sharing_providers_emptylist(self): self.assertEqual( [], self.client._get_sharing_providers([])) self.ks_adap_mock.get.assert_not_called() @mock.patch.object(report.LOG, 'error') def test_get_sharing_providers_error(self, logging_mock): # Ensure _get_sharing_providers() logs an error and raises if the # placement API call doesn't respond 200 resp_mock = mock.Mock(status_code=503) self.ks_adap_mock.get.return_value = resp_mock self.ks_adap_mock.get.return_value.headers = { 'x-openstack-request-id': uuids.request_id} uuid = uuids.agg self.assertRaises(exception.ResourceProviderRetrievalFailed, self.client._get_sharing_providers, [uuid]) expected_url = '/resource_providers?member_of=in:' + uuid self.ks_adap_mock.get.assert_called_once_with( expected_url, raise_exc=False, microversion='1.3') # A 503 Service Unavailable should trigger an error log that # includes the placement request id self.assertTrue(logging_mock.called) self.assertEqual(uuids.request_id, logging_mock.call_args[0][1]['placement_req_id']) def test_get_providers_in_tree(self): # Ensure _get_providers_in_tree() returns a list of resource # provider dicts if it finds a resource provider record from the # placement API root = uuids.compute_node child = uuids.child resp_mock = mock.Mock(status_code=200) rpjson = [ { 'uuid': root, 'name': 'daddy', 'generation': 42, 'parent_provider_uuid': None, }, { 'uuid': child, 'name': 'junior', 'generation': 42, 'parent_provider_uuid': root, }, ] resp_mock.json.return_value = {'resource_providers': rpjson} self.ks_adap_mock.get.return_value = resp_mock result = self.client._get_providers_in_tree(root) expected_url = '/resource_providers?in_tree=' + root self.ks_adap_mock.get.assert_called_once_with( expected_url, raise_exc=False, microversion='1.14') self.assertEqual(rpjson, result) @mock.patch.object(report.LOG, 'error') def test_get_providers_in_tree_error(self, logging_mock): # Ensure _get_providers_in_tree() logs an error and raises if the # placement API call doesn't respond 200 resp_mock = mock.Mock(status_code=503) self.ks_adap_mock.get.return_value = resp_mock self.ks_adap_mock.get.return_value.headers = { 'x-openstack-request-id': 'req-' + uuids.request_id} uuid = uuids.compute_node self.assertRaises(exception.ResourceProviderRetrievalFailed, self.client._get_providers_in_tree, uuid) expected_url = '/resource_providers?in_tree=' + uuid self.ks_adap_mock.get.assert_called_once_with( expected_url, raise_exc=False, 
microversion='1.14') # A 503 Service Unavailable should trigger an error log that includes # the placement request id self.assertTrue(logging_mock.called) self.assertEqual('req-' + uuids.request_id, logging_mock.call_args[0][1]['placement_req_id']) def test_create_resource_provider(self): """Test that _create_resource_provider() sends a dict of resource provider information without a parent provider UUID. """ uuid = uuids.compute_node name = 'computehost' resp_mock = mock.Mock(status_code=201) self.ks_adap_mock.post.return_value = resp_mock self.client._create_resource_provider(self.context, uuid, name) expected_payload = { 'uuid': uuid, 'name': name, } expected_url = '/resource_providers' self.ks_adap_mock.post.assert_called_once_with( expected_url, json=expected_payload, raise_exc=False, microversion='1.14', headers={'X-Openstack-Request-Id': self.context.global_id}) def test_create_resource_provider_with_parent(self): """Test that when specifying a parent provider UUID, that the parent_provider_uuid part of the payload is properly specified. """ parent_uuid = uuids.parent uuid = uuids.compute_node name = 'computehost' resp_mock = mock.Mock(status_code=201) self.ks_adap_mock.post.return_value = resp_mock result = self.client._create_resource_provider( self.context, uuid, name, parent_provider_uuid=parent_uuid, ) expected_payload = { 'uuid': uuid, 'name': name, 'parent_provider_uuid': parent_uuid, } expected_provider_dict = dict( uuid=uuid, name=name, generation=0, parent_provider_uuid=parent_uuid, ) expected_url = '/resource_providers' self.ks_adap_mock.post.assert_called_once_with( expected_url, json=expected_payload, raise_exc=False, microversion='1.14', headers={'X-Openstack-Request-Id': self.context.global_id}) self.assertEqual(expected_provider_dict, result) @mock.patch.object(report.LOG, 'info') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_get_resource_provider') def test_create_resource_provider_concurrent_create(self, get_rp_mock, logging_mock): # Ensure _create_resource_provider() returns a dict of resource # provider gotten from _get_resource_provider() if the call to create # the resource provider in the placement API returned a 409 Conflict, # indicating another thread concurrently created the resource provider # record. uuid = uuids.compute_node name = 'computehost' self.ks_adap_mock.post.return_value = mock.Mock( status_code=409, headers={'x-openstack-request-id': uuids.request_id}, text='not a name conflict') get_rp_mock.return_value = mock.sentinel.get_rp result = self.client._create_resource_provider(self.context, uuid, name) expected_payload = { 'uuid': uuid, 'name': name, } expected_url = '/resource_providers' self.ks_adap_mock.post.assert_called_once_with( expected_url, json=expected_payload, raise_exc=False, microversion='1.14', headers={'X-Openstack-Request-Id': self.context.global_id}) self.assertEqual(mock.sentinel.get_rp, result) # The 409 response will produce a message to the info log. self.assertTrue(logging_mock.called) self.assertEqual(uuids.request_id, logging_mock.call_args[0][1]['placement_req_id']) def test_create_resource_provider_name_conflict(self): # When the API call to create the resource provider fails 409 with a # name conflict, we raise an exception. 
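        # Illustrative aside (not nova's code): a 409 from the create call is
        # disambiguated by its body text -- a duplicate-name conflict is
        # fatal, while a duplicate-UUID race means another thread already
        # created the record, so it is fetched and reused. Sketch (names
        # hypothetical; RuntimeError stands in for
        # exception.ResourceProviderCreationFailed):
        #
        #     def on_create_conflict(resp_text, refetch):
        #         if 'Conflicting resource provider name' in resp_text:
        #             raise RuntimeError('name conflict')
        #         return refetch()  # reuse the concurrently created record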
        self.ks_adap_mock.post.return_value = mock.Mock(
            status_code=409,
            text='Conflicting resource provider name: foo already '
                 'exists.')

        self.assertRaises(
            exception.ResourceProviderCreationFailed,
            self.client._create_resource_provider, self.context,
            uuids.compute_node, 'foo')

    @mock.patch.object(report.LOG, 'error')
    def test_create_resource_provider_error(self, logging_mock):
        # Ensure _create_resource_provider() raises when communicating with
        # the placement API results in an error we can't deal with
        uuid = uuids.compute_node
        name = 'computehost'
        resp_mock = mock.Mock(status_code=503)
        self.ks_adap_mock.post.return_value = resp_mock
        self.ks_adap_mock.post.return_value.headers = {
            'x-openstack-request-id': uuids.request_id}

        self.assertRaises(
            exception.ResourceProviderCreationFailed,
            self.client._create_resource_provider, self.context, uuid, name)

        expected_payload = {
            'uuid': uuid,
            'name': name,
        }
        expected_url = '/resource_providers'
        self.ks_adap_mock.post.assert_called_once_with(
            expected_url, json=expected_payload, raise_exc=False,
            microversion='1.14',
            headers={'X-Openstack-Request-Id': self.context.global_id})
        # A 503 Service Unavailable should log an error that includes the
        # placement request id, and _create_resource_provider() should raise
        # ResourceProviderCreationFailed
        self.assertTrue(logging_mock.called)
        self.assertEqual(uuids.request_id,
                         logging_mock.call_args[0][1]['placement_req_id'])

    def test_put_empty(self):
        # A simple put with an empty (not None) payload should send the empty
        # payload through.
        # Bug #1744786
        url = '/resource_providers/%s/aggregates' % uuids.foo
        self.client.put(url, [])
        self.ks_adap_mock.put.assert_called_once_with(
            url, json=[], raise_exc=False, microversion=None, headers={})

    def test_delete_provider(self):
        delete_mock = requests.Response()
        self.ks_adap_mock.delete.return_value = delete_mock

        for status_code in (204, 404):
            delete_mock.status_code = status_code
            # Seed the caches
            self.client._provider_tree.new_root('compute', uuids.root, 0)
            self.client.association_refresh_time[uuids.root] = 1234

            self.client._delete_provider(uuids.root, global_request_id='gri')

            self.ks_adap_mock.delete.assert_called_once_with(
                '/resource_providers/' + uuids.root,
                headers={'X-Openstack-Request-Id': 'gri'}, microversion=None,
                raise_exc=False)
            self.assertFalse(self.client._provider_tree.exists(uuids.root))
            self.assertNotIn(uuids.root, self.client.association_refresh_time)

            self.ks_adap_mock.delete.reset_mock()

    def test_delete_provider_fail(self):
        delete_mock = requests.Response()
        self.ks_adap_mock.delete.return_value = delete_mock
        resp_exc_map = {409: exception.ResourceProviderInUse,
                        503: exception.ResourceProviderDeletionFailed}

        for status_code, exc in resp_exc_map.items():
            delete_mock.status_code = status_code
            self.assertRaises(exc, self.client._delete_provider, uuids.root)
            self.ks_adap_mock.delete.assert_called_once_with(
                '/resource_providers/' + uuids.root, microversion=None,
                headers={}, raise_exc=False)
            self.ks_adap_mock.delete.reset_mock()

    def test_set_aggregates_for_provider(self):
        aggs = [uuids.agg1, uuids.agg2]
        resp_mock = mock.Mock(status_code=200)
        resp_mock.json.return_value = {
            'aggregates': aggs,
        }
        self.ks_adap_mock.put.return_value = resp_mock

        # Prime the provider tree cache
        self.client._provider_tree.new_root('rp', uuids.rp, 0)
        self.assertEqual(set(),
                         self.client._provider_tree.data(uuids.rp).aggregates)

        self.client.set_aggregates_for_provider(self.context, uuids.rp, aggs)

        self.ks_adap_mock.put.assert_called_once_with(
            '/resource_providers/%s/aggregates' % uuids.rp,
json=aggs, raise_exc=False, microversion='1.1', headers={'X-Openstack-Request-Id': self.context.global_id}) # Cache was updated self.assertEqual(set(aggs), self.client._provider_tree.data(uuids.rp).aggregates) def test_set_aggregates_for_provider_fail(self): self.ks_adap_mock.put.return_value = mock.Mock(status_code=503) self.assertRaises( exception.ResourceProviderUpdateFailed, self.client.set_aggregates_for_provider, self.context, uuids.rp, []) class TestAggregates(SchedulerReportClientTestCase): def test_get_provider_aggregates_found(self): uuid = uuids.compute_node resp_mock = mock.Mock(status_code=200) aggs = [ uuids.agg1, uuids.agg2, ] resp_mock.json.return_value = {'aggregates': aggs} self.ks_adap_mock.get.return_value = resp_mock result = self.client._get_provider_aggregates(uuid) expected_url = '/resource_providers/' + uuid + '/aggregates' self.ks_adap_mock.get.assert_called_once_with( expected_url, raise_exc=False, microversion='1.1') self.assertEqual(set(aggs), result) @mock.patch.object(report.LOG, 'error') def test_get_provider_aggregates_error(self, log_mock): """Test that when the placement API returns any error when looking up a provider's aggregates, we raise an exception. """ uuid = uuids.compute_node resp_mock = mock.Mock(headers={ 'x-openstack-request-id': uuids.request_id}) self.ks_adap_mock.get.return_value = resp_mock for status_code in (400, 404, 503): resp_mock.status_code = status_code self.assertRaises( exception.ResourceProviderAggregateRetrievalFailed, self.client._get_provider_aggregates, uuid) expected_url = '/resource_providers/' + uuid + '/aggregates' self.ks_adap_mock.get.assert_called_once_with( expected_url, raise_exc=False, microversion='1.1') self.assertTrue(log_mock.called) self.assertEqual(uuids.request_id, log_mock.call_args[0][1]['placement_req_id']) self.ks_adap_mock.get.reset_mock() log_mock.reset_mock() class TestTraits(SchedulerReportClientTestCase): trait_api_kwargs = {'raise_exc': False, 'microversion': '1.6'} def test_get_provider_traits_found(self): uuid = uuids.compute_node resp_mock = mock.Mock(status_code=200) traits = [ 'CUSTOM_GOLD', 'CUSTOM_SILVER', ] resp_mock.json.return_value = {'traits': traits} self.ks_adap_mock.get.return_value = resp_mock result = self.client._get_provider_traits(uuid) expected_url = '/resource_providers/' + uuid + '/traits' self.ks_adap_mock.get.assert_called_once_with( expected_url, **self.trait_api_kwargs) self.assertEqual(set(traits), result) @mock.patch.object(report.LOG, 'error') def test_get_provider_traits_error(self, log_mock): """Test that when the placement API returns any error when looking up a provider's traits, we raise an exception. """ uuid = uuids.compute_node resp_mock = mock.Mock(headers={ 'x-openstack-request-id': uuids.request_id}) self.ks_adap_mock.get.return_value = resp_mock for status_code in (400, 404, 503): resp_mock.status_code = status_code self.assertRaises( exception.ResourceProviderTraitRetrievalFailed, self.client._get_provider_traits, uuid) expected_url = '/resource_providers/' + uuid + '/traits' self.ks_adap_mock.get.assert_called_once_with( expected_url, **self.trait_api_kwargs) self.assertTrue(log_mock.called) self.assertEqual(uuids.request_id, log_mock.call_args[0][1]['placement_req_id']) self.ks_adap_mock.get.reset_mock() log_mock.reset_mock() def test_ensure_traits(self): """Successful paths, various permutations of traits existing or needing to be created. 
""" standard_traits = ['HW_NIC_OFFLOAD_UCS', 'HW_NIC_OFFLOAD_RDMA'] custom_traits = ['CUSTOM_GOLD', 'CUSTOM_SILVER'] all_traits = standard_traits + custom_traits get_mock = mock.Mock(status_code=200) self.ks_adap_mock.get.return_value = get_mock # Request all traits; custom traits need to be created get_mock.json.return_value = {'traits': standard_traits} self.client._ensure_traits(self.context, all_traits) self.ks_adap_mock.get.assert_called_once_with( '/traits?name=in:' + ','.join(all_traits), **self.trait_api_kwargs) self.ks_adap_mock.put.assert_has_calls( [mock.call('/traits/' + trait, headers={'X-Openstack-Request-Id': self.context.global_id}, **self.trait_api_kwargs) for trait in custom_traits], any_order=True) self.ks_adap_mock.reset_mock() # Request standard traits; no traits need to be created get_mock.json.return_value = {'traits': standard_traits} self.client._ensure_traits(self.context, standard_traits) self.ks_adap_mock.get.assert_called_once_with( '/traits?name=in:' + ','.join(standard_traits), **self.trait_api_kwargs) self.ks_adap_mock.put.assert_not_called() self.ks_adap_mock.reset_mock() # Request no traits - short circuit self.client._ensure_traits(self.context, None) self.client._ensure_traits(self.context, []) self.ks_adap_mock.get.assert_not_called() self.ks_adap_mock.put.assert_not_called() def test_ensure_traits_fail_retrieval(self): self.ks_adap_mock.get.return_value = mock.Mock(status_code=400) self.assertRaises(exception.TraitRetrievalFailed, self.client._ensure_traits, self.context, ['FOO']) self.ks_adap_mock.get.assert_called_once_with( '/traits?name=in:FOO', **self.trait_api_kwargs) self.ks_adap_mock.put.assert_not_called() def test_ensure_traits_fail_creation(self): get_mock = mock.Mock(status_code=200) get_mock.json.return_value = {'traits': []} self.ks_adap_mock.get.return_value = get_mock put_mock = requests.Response() put_mock.status_code = 400 self.ks_adap_mock.put.return_value = put_mock self.assertRaises(exception.TraitCreationFailed, self.client._ensure_traits, self.context, ['FOO']) self.ks_adap_mock.get.assert_called_once_with( '/traits?name=in:FOO', **self.trait_api_kwargs) self.ks_adap_mock.put.assert_called_once_with( '/traits/FOO', headers={'X-Openstack-Request-Id': self.context.global_id}, **self.trait_api_kwargs) def test_set_traits_for_provider(self): traits = ['HW_NIC_OFFLOAD_UCS', 'HW_NIC_OFFLOAD_RDMA'] # Make _ensure_traits succeed without PUTting get_mock = mock.Mock(status_code=200) get_mock.json.return_value = {'traits': traits} self.ks_adap_mock.get.return_value = get_mock # Prime the provider tree cache self.client._provider_tree.new_root('rp', uuids.rp, 0) # Mock the /rp/{u}/traits PUT to succeed put_mock = mock.Mock(status_code=200) put_mock.json.return_value = {'traits': traits, 'resource_provider_generation': 1} self.ks_adap_mock.put.return_value = put_mock # Invoke self.client.set_traits_for_provider(self.context, uuids.rp, traits) # Verify API calls self.ks_adap_mock.get.assert_called_once_with( '/traits?name=in:' + ','.join(traits), **self.trait_api_kwargs) self.ks_adap_mock.put.assert_called_once_with( '/resource_providers/%s/traits' % uuids.rp, json={'traits': traits, 'resource_provider_generation': 0}, headers={'X-Openstack-Request-Id': self.context.global_id}, **self.trait_api_kwargs) # And ensure the provider tree cache was updated appropriately self.assertFalse( self.client._provider_tree.have_traits_changed(uuids.rp, traits)) # Validate the generation self.assertEqual( 1, 
self.client._provider_tree.data(uuids.rp).generation) def test_set_traits_for_provider_fail(self): traits = ['HW_NIC_OFFLOAD_UCS', 'HW_NIC_OFFLOAD_RDMA'] get_mock = mock.Mock() self.ks_adap_mock.get.return_value = get_mock # Prime the provider tree cache self.client._provider_tree.new_root('rp', uuids.rp, 0) # _ensure_traits exception bubbles up get_mock.status_code = 400 self.assertRaises( exception.TraitRetrievalFailed, self.client.set_traits_for_provider, self.context, uuids.rp, traits) self.ks_adap_mock.put.assert_not_called() get_mock.status_code = 200 get_mock.json.return_value = {'traits': traits} # Conflict self.ks_adap_mock.put.return_value = mock.Mock(status_code=409) self.assertRaises( exception.ResourceProviderUpdateConflict, self.client.set_traits_for_provider, self.context, uuids.rp, traits) # Other error self.ks_adap_mock.put.return_value = mock.Mock(status_code=503) self.assertRaises( exception.ResourceProviderUpdateFailed, self.client.set_traits_for_provider, self.context, uuids.rp, traits) class TestAssociations(SchedulerReportClientTestCase): @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_get_provider_aggregates') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_get_provider_traits') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_get_sharing_providers') def test_refresh_associations_no_last(self, mock_shr_get, mock_trait_get, mock_agg_get): """Test that associations are refreshed when stale.""" uuid = uuids.compute_node # Seed the provider tree so _refresh_associations finds the provider self.client._provider_tree.new_root('compute', uuid, 1) mock_agg_get.return_value = set([uuids.agg1]) mock_trait_get.return_value = set(['CUSTOM_GOLD']) self.client._refresh_associations(uuid) mock_agg_get.assert_called_once_with(uuid) mock_trait_get.assert_called_once_with(uuid) mock_shr_get.assert_called_once_with(mock_agg_get.return_value) self.assertIn(uuid, self.client.association_refresh_time) self.assertTrue( self.client._provider_tree.in_aggregates(uuid, [uuids.agg1])) self.assertFalse( self.client._provider_tree.in_aggregates(uuid, [uuids.agg2])) self.assertTrue( self.client._provider_tree.has_traits(uuid, ['CUSTOM_GOLD'])) self.assertFalse( self.client._provider_tree.has_traits(uuid, ['CUSTOM_SILVER'])) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_get_provider_aggregates') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_get_provider_traits') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 
'_get_sharing_providers') def test_refresh_associations_no_refresh_sharing(self, mock_shr_get, mock_trait_get, mock_agg_get): """Test refresh_sharing=False.""" uuid = uuids.compute_node # Seed the provider tree so _refresh_associations finds the provider self.client._provider_tree.new_root('compute', uuid, 1) mock_agg_get.return_value = set([uuids.agg1]) mock_trait_get.return_value = set(['CUSTOM_GOLD']) self.client._refresh_associations(uuid, refresh_sharing=False) mock_agg_get.assert_called_once_with(uuid) mock_trait_get.assert_called_once_with(uuid) mock_shr_get.assert_not_called() self.assertIn(uuid, self.client.association_refresh_time) self.assertTrue( self.client._provider_tree.in_aggregates(uuid, [uuids.agg1])) self.assertFalse( self.client._provider_tree.in_aggregates(uuid, [uuids.agg2])) self.assertTrue( self.client._provider_tree.has_traits(uuid, ['CUSTOM_GOLD'])) self.assertFalse( self.client._provider_tree.has_traits(uuid, ['CUSTOM_SILVER'])) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_get_provider_aggregates') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_get_provider_traits') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_get_sharing_providers') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_associations_stale') def test_refresh_associations_not_stale(self, mock_stale, mock_shr_get, mock_trait_get, mock_agg_get): """Test that refresh associations is not called when the map is not stale. """ mock_stale.return_value = False uuid = uuids.compute_node self.client._refresh_associations(uuid) mock_agg_get.assert_not_called() mock_trait_get.assert_not_called() mock_shr_get.assert_not_called() self.assertFalse(self.client.association_refresh_time) @mock.patch.object(report.LOG, 'debug') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_get_provider_aggregates') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_get_provider_traits') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_get_sharing_providers') def test_refresh_associations_time(self, mock_shr_get, mock_trait_get, mock_agg_get, log_mock): """Test that refresh associations is called when the map is stale.""" uuid = uuids.compute_node # Seed the provider tree so _refresh_associations finds the provider self.client._provider_tree.new_root('compute', uuid, 1) mock_agg_get.return_value = set([]) mock_trait_get.return_value = set([]) mock_shr_get.return_value = [] # Called a first time because association_refresh_time is empty. now = time.time() self.client._refresh_associations(uuid) mock_agg_get.assert_called_once_with(uuid) mock_trait_get.assert_called_once_with(uuid) mock_shr_get.assert_called_once_with(set()) log_mock.assert_has_calls([ mock.call('Refreshing aggregate associations for resource ' 'provider %s, aggregates: %s', uuid, 'None'), mock.call('Refreshing trait associations for resource ' 'provider %s, traits: %s', uuid, 'None') ]) self.assertIn(uuid, self.client.association_refresh_time) # Clear call count. mock_agg_get.reset_mock() mock_trait_get.reset_mock() mock_shr_get.reset_mock() with mock.patch('time.time') as mock_future: # Not called a second time because not enough time has passed. mock_future.return_value = now + report.ASSOCIATION_REFRESH / 2 self.client._refresh_associations(uuid) mock_agg_get.assert_not_called() mock_trait_get.assert_not_called() mock_shr_get.assert_not_called() # Called because time has passed. 
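            # (report.ASSOCIATION_REFRESH is the client's staleness window
            # in seconds; stepping the mocked clock just past it makes
            # _associations_stale() return True again.)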
mock_future.return_value = now + report.ASSOCIATION_REFRESH + 1 self.client._refresh_associations(uuid) mock_agg_get.assert_called_once_with(uuid) mock_trait_get.assert_called_once_with(uuid) mock_shr_get.assert_called_once_with(set()) class TestComputeNodeToInventoryDict(test.NoDBTestCase): def test_compute_node_inventory(self): uuid = uuids.compute_node name = 'computehost' compute_node = objects.ComputeNode(uuid=uuid, hypervisor_hostname=name, vcpus=2, cpu_allocation_ratio=16.0, memory_mb=1024, ram_allocation_ratio=1.5, local_gb=10, disk_allocation_ratio=1.0) self.flags(reserved_host_memory_mb=1000) self.flags(reserved_host_disk_mb=200) self.flags(reserved_host_cpus=1) result = report._compute_node_to_inventory_dict(compute_node) expected = { 'VCPU': { 'total': compute_node.vcpus, 'reserved': CONF.reserved_host_cpus, 'min_unit': 1, 'max_unit': compute_node.vcpus, 'step_size': 1, 'allocation_ratio': compute_node.cpu_allocation_ratio, }, 'MEMORY_MB': { 'total': compute_node.memory_mb, 'reserved': CONF.reserved_host_memory_mb, 'min_unit': 1, 'max_unit': compute_node.memory_mb, 'step_size': 1, 'allocation_ratio': compute_node.ram_allocation_ratio, }, 'DISK_GB': { 'total': compute_node.local_gb, 'reserved': 1, # this is ceil(1000/1024) 'min_unit': 1, 'max_unit': compute_node.local_gb, 'step_size': 1, 'allocation_ratio': compute_node.disk_allocation_ratio, }, } self.assertEqual(expected, result) def test_compute_node_inventory_empty(self): uuid = uuids.compute_node name = 'computehost' compute_node = objects.ComputeNode(uuid=uuid, hypervisor_hostname=name, vcpus=0, cpu_allocation_ratio=16.0, memory_mb=0, ram_allocation_ratio=1.5, local_gb=0, disk_allocation_ratio=1.0) result = report._compute_node_to_inventory_dict(compute_node) self.assertEqual({}, result) class TestInventory(SchedulerReportClientTestCase): @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_ensure_resource_provider') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_delete_inventory') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_update_inventory') def test_update_compute_node(self, mock_ui, mock_delete, mock_erp): cn = self.compute_node self.client.update_compute_node(self.context, cn) mock_erp.assert_called_once_with(self.context, cn.uuid, cn.hypervisor_hostname) expected_inv_data = { 'VCPU': { 'total': 8, 'reserved': CONF.reserved_host_cpus, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 16.0, }, 'MEMORY_MB': { 'total': 1024, 'reserved': 512, 'min_unit': 1, 'max_unit': 1024, 'step_size': 1, 'allocation_ratio': 1.5, }, 'DISK_GB': { 'total': 10, 'reserved': 0, 'min_unit': 1, 'max_unit': 10, 'step_size': 1, 'allocation_ratio': 1.0, }, } mock_ui.assert_called_once_with( self.context, cn.uuid, expected_inv_data, ) self.assertFalse(mock_delete.called) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_ensure_resource_provider') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_delete_inventory') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_update_inventory') def test_update_compute_node_no_inv(self, mock_ui, mock_delete, mock_erp): """Ensure that if there are no inventory records, that we call _delete_inventory() instead of _update_inventory(). 
""" cn = self.compute_node cn.vcpus = 0 cn.memory_mb = 0 cn.local_gb = 0 self.client.update_compute_node(self.context, cn) mock_erp.assert_called_once_with(self.context, cn.uuid, cn.hypervisor_hostname) mock_delete.assert_called_once_with(self.context, cn.uuid) self.assertFalse(mock_ui.called) @mock.patch.object(report.LOG, 'info') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'put') @mock.patch.object(report.LOG, 'info') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'delete') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'get') def test_delete_inventory(self, mock_get, mock_delete, mock_debug, mock_put, mock_info): cn = self.compute_node # Make sure the resource provider exists for preventing to call the API self._init_provider_tree() mock_get.return_value.json.return_value = { 'resource_provider_generation': 1, 'inventories': { 'VCPU': {'total': 16}, } } mock_delete.return_value.status_code = 204 mock_delete.return_value.headers = {'x-openstack-request-id': uuids.request_id} result = self.client._delete_inventory(self.context, cn.uuid) self.assertIsNone(result) self.assertFalse(mock_put.called) self.assertEqual(uuids.request_id, mock_info.call_args[0][1]['placement_req_id']) mock_delete.assert_called_once_with( '/resource_providers/%s/inventories' % cn.uuid, version='1.5', global_request_id=self.context.global_id) @mock.patch('nova.scheduler.client.report._extract_inventory_in_use') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'delete') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'get') def test_delete_inventory_already_no_inventory(self, mock_get, mock_delete, mock_extract): cn = self.compute_node # Make sure the resource provider exists for preventing to call the API self._init_provider_tree() mock_get.return_value.json.return_value = { 'resource_provider_generation': 1, 'inventories': { } } result = self.client._delete_inventory(self.context, cn.uuid) self.assertIsNone(result) self.assertFalse(mock_delete.called) self.assertFalse(mock_extract.called) self._validate_provider(cn.uuid, generation=1) @mock.patch.object(report.LOG, 'info') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'put') @mock.patch.object(report.LOG, 'debug') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'delete') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'get') def test_delete_inventory_put(self, mock_get, mock_delete, mock_debug, mock_put, mock_info): cn = self.compute_node # Make sure the resource provider exists for preventing to call the API self._init_provider_tree() mock_get.return_value.json.return_value = { 'resource_provider_generation': 1, 'inventories': { 'DISK_GB': {'total': 10}, } } mock_delete.return_value.status_code = 406 mock_put.return_value.status_code = 200 mock_put.return_value.json.return_value = { 'resource_provider_generation': 44, 'inventories': { } } mock_put.return_value.headers = {'x-openstack-request-id': uuids.request_id} result = self.client._delete_inventory(self.context, cn.uuid) self.assertIsNone(result) self.assertTrue(mock_debug.called) self.assertTrue(mock_put.called) self.assertTrue(mock_info.called) self.assertEqual(uuids.request_id, mock_info.call_args[0][1]['placement_req_id']) self._validate_provider(cn.uuid, generation=44) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'put') @mock.patch.object(report.LOG, 'debug') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 
'delete') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'get') def test_delete_inventory_put_failover(self, mock_get, mock_delete, mock_debug, mock_put): cn = self.compute_node # Make sure the resource provider exists for preventing to call the API self._init_provider_tree() mock_get.return_value.json.return_value = { 'resource_provider_generation': 42, 'inventories': { 'DISK_GB': {'total': 10}, } } mock_delete.return_value.status_code = 406 mock_put.return_value.status_code = 200 self.client._delete_inventory(self.context, cn.uuid) self.assertTrue(mock_debug.called) exp_url = '/resource_providers/%s/inventories' % cn.uuid payload = { 'resource_provider_generation': 42, 'inventories': {}, } mock_put.assert_called_once_with( exp_url, payload, global_request_id=self.context.global_id) @mock.patch.object(report.LOG, 'error') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'put') @mock.patch.object(report.LOG, 'debug') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'delete') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'get') def test_delete_inventory_put_failover_in_use(self, mock_get, mock_delete, mock_debug, mock_put, mock_error): cn = self.compute_node # Make sure the resource provider exists for preventing to call the API self._init_provider_tree() mock_get.return_value.json.return_value = { 'resource_provider_generation': 42, 'inventories': { 'DISK_GB': {'total': 10}, } } mock_delete.return_value.status_code = 406 mock_put.return_value.status_code = 409 mock_put.return_value.text = ( 'There was a *fake* failure: inventory in use' ) mock_put.return_value.json.return_value = { 'resource_provider_generation': 44, 'inventories': { } } mock_put.return_value.headers = {'x-openstack-request-id': uuids.request_id} self.client._delete_inventory(self.context, cn.uuid) self.assertTrue(mock_debug.called) exp_url = '/resource_providers/%s/inventories' % cn.uuid payload = { 'resource_provider_generation': 42, 'inventories': {}, } mock_put.assert_called_once_with( exp_url, payload, global_request_id=self.context.global_id) self.assertTrue(mock_error.called) self.assertEqual(uuids.request_id, mock_error.call_args[0][1]['placement_req_id']) @mock.patch.object(report.LOG, 'warning') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'delete') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'get') def test_delete_inventory_inventory_in_use(self, mock_get, mock_delete, mock_warn): cn = self.compute_node # Make sure the resource provider exists for preventing to call the API self._init_provider_tree() mock_get.return_value.json.return_value = { 'resource_provider_generation': 1, 'inventories': { 'VCPU': {'total': 16}, 'MEMORY_MB': {'total': 1024}, 'DISK_GB': {'total': 10}, } } mock_delete.return_value.status_code = 409 mock_delete.return_value.headers = {'x-openstack-request-id': uuids.request_id} rc_str = "VCPU, MEMORY_MB" in_use_exc = exception.InventoryInUse( resource_classes=rc_str, resource_provider=cn.uuid, ) fault_text = """ 409 Conflict There was a conflict when trying to complete your request. 
update conflict: %s """ % six.text_type(in_use_exc) mock_delete.return_value.text = fault_text mock_delete.return_value.json.return_value = { 'resource_provider_generation': 44, 'inventories': { } } result = self.client._delete_inventory(self.context, cn.uuid) self.assertIsNone(result) self.assertTrue(mock_warn.called) self.assertEqual(uuids.request_id, mock_warn.call_args[0][1]['placement_req_id']) @mock.patch.object(report.LOG, 'debug') @mock.patch.object(report.LOG, 'info') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'delete') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'get') def test_delete_inventory_inventory_404(self, mock_get, mock_delete, mock_info, mock_debug): """Test that when we attempt to delete all the inventory for a resource provider but another thread has already deleted that resource provider, that we simply remove the resource provider from our local cache and return. """ cn = self.compute_node # Make sure the resource provider exists for preventing to call the API self._init_provider_tree() self.client.association_refresh_time[uuids.cn] = mock.Mock() mock_get.return_value.json.return_value = { 'resource_provider_generation': 1, 'inventories': { 'VCPU': {'total': 16}, 'MEMORY_MB': {'total': 1024}, 'DISK_GB': {'total': 10}, } } mock_delete.return_value.status_code = 404 mock_delete.return_value.headers = {'x-openstack-request-id': uuids.request_id} result = self.client._delete_inventory(self.context, cn.uuid) self.assertIsNone(result) self.assertFalse(self.client._provider_tree.exists(cn.uuid)) self.assertTrue(mock_debug.called) self.assertNotIn(cn.uuid, self.client.association_refresh_time) self.assertIn('deleted by another thread', mock_debug.call_args[0][0]) self.assertEqual(uuids.request_id, mock_debug.call_args[0][1]['placement_req_id']) @mock.patch.object(report.LOG, 'error') @mock.patch.object(report.LOG, 'warning') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'delete') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'get') def test_delete_inventory_inventory_error(self, mock_get, mock_delete, mock_warn, mock_error): cn = self.compute_node # Make sure the resource provider exists for preventing to call the API self._init_provider_tree() mock_get.return_value.json.return_value = { 'resource_provider_generation': 1, 'inventories': { 'VCPU': {'total': 16}, 'MEMORY_MB': {'total': 1024}, 'DISK_GB': {'total': 10}, } } mock_delete.return_value.status_code = 409 mock_delete.return_value.text = ( 'There was a failure' ) mock_delete.return_value.json.return_value = { 'resource_provider_generation': 44, 'inventories': { } } mock_delete.return_value.headers = {'x-openstack-request-id': uuids.request_id} result = self.client._delete_inventory(self.context, cn.uuid) self.assertIsNone(result) self.assertFalse(mock_warn.called) self.assertTrue(mock_error.called) self.assertEqual(uuids.request_id, mock_error.call_args[0][1]['placement_req_id']) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'get') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 
'put') def test_update_inventory_initial_empty(self, mock_put, mock_get): # Ensure _update_inventory() returns a list of Inventories objects # after creating or updating the existing values uuid = uuids.compute_node compute_node = self.compute_node # Make sure the resource provider exists for preventing to call the API self._init_provider_tree(resources_override={}) mock_get.return_value.json.return_value = { 'resource_provider_generation': 43, 'inventories': { 'VCPU': {'total': 16}, 'MEMORY_MB': {'total': 1024}, 'DISK_GB': {'total': 10}, } } mock_put.return_value.status_code = 200 mock_put.return_value.json.return_value = { 'resource_provider_generation': 44, 'inventories': { 'VCPU': {'total': 16}, 'MEMORY_MB': {'total': 1024}, 'DISK_GB': {'total': 10}, } } inv_data = report._compute_node_to_inventory_dict(compute_node) result = self.client._update_inventory_attempt( self.context, compute_node.uuid, inv_data ) self.assertTrue(result) exp_url = '/resource_providers/%s/inventories' % uuid mock_get.assert_called_once_with(exp_url) # Updated with the new inventory from the PUT call self._validate_provider(uuid, generation=44) expected = { # Called with the newly-found generation from the existing # inventory 'resource_provider_generation': 43, 'inventories': { 'VCPU': { 'total': 8, 'reserved': CONF.reserved_host_cpus, 'min_unit': 1, 'max_unit': compute_node.vcpus, 'step_size': 1, 'allocation_ratio': compute_node.cpu_allocation_ratio, }, 'MEMORY_MB': { 'total': 1024, 'reserved': CONF.reserved_host_memory_mb, 'min_unit': 1, 'max_unit': compute_node.memory_mb, 'step_size': 1, 'allocation_ratio': compute_node.ram_allocation_ratio, }, 'DISK_GB': { 'total': 10, 'reserved': CONF.reserved_host_disk_mb * 1024, 'min_unit': 1, 'max_unit': compute_node.local_gb, 'step_size': 1, 'allocation_ratio': compute_node.disk_allocation_ratio, }, } } mock_put.assert_called_once_with( exp_url, expected, global_request_id=self.context.global_id) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'get') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'put') def test_update_inventory(self, mock_put, mock_get): # Ensure _update_inventory() returns a list of Inventories objects # after creating or updating the existing values uuid = uuids.compute_node compute_node = self.compute_node # Make sure the resource provider exists for preventing to call the API self._init_provider_tree() new_vcpus_total = 240 mock_get.return_value.json.return_value = { 'resource_provider_generation': 43, 'inventories': { 'VCPU': {'total': 16}, 'MEMORY_MB': {'total': 1024}, 'DISK_GB': {'total': 10}, } } mock_put.return_value.status_code = 200 mock_put.return_value.json.return_value = { 'resource_provider_generation': 44, 'inventories': { 'VCPU': {'total': new_vcpus_total}, 'MEMORY_MB': {'total': 1024}, 'DISK_GB': {'total': 10}, } } inv_data = report._compute_node_to_inventory_dict(compute_node) # Make a change to trigger the update... 
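        # (Without a delta between the GET response and inv_data, the
        # attempt would short-circuit without PUTting; see
        # test_update_inventory_no_update below.)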
inv_data['VCPU']['total'] = new_vcpus_total result = self.client._update_inventory_attempt( self.context, compute_node.uuid, inv_data ) self.assertTrue(result) exp_url = '/resource_providers/%s/inventories' % uuid mock_get.assert_called_once_with(exp_url) # Updated with the new inventory from the PUT call self._validate_provider(uuid, generation=44) expected = { # Called with the newly-found generation from the existing # inventory 'resource_provider_generation': 43, 'inventories': { 'VCPU': { 'total': new_vcpus_total, 'reserved': 0, 'min_unit': 1, 'max_unit': compute_node.vcpus, 'step_size': 1, 'allocation_ratio': compute_node.cpu_allocation_ratio, }, 'MEMORY_MB': { 'total': 1024, 'reserved': CONF.reserved_host_memory_mb, 'min_unit': 1, 'max_unit': compute_node.memory_mb, 'step_size': 1, 'allocation_ratio': compute_node.ram_allocation_ratio, }, 'DISK_GB': { 'total': 10, 'reserved': CONF.reserved_host_disk_mb * 1024, 'min_unit': 1, 'max_unit': compute_node.local_gb, 'step_size': 1, 'allocation_ratio': compute_node.disk_allocation_ratio, }, } } mock_put.assert_called_once_with( exp_url, expected, global_request_id=self.context.global_id) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'get') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'put') def test_update_inventory_no_update(self, mock_put, mock_get): """Simulate situation where scheduler client is first starting up and ends up loading information from the placement API via a GET against the resource provider's inventory but has no local cached inventory information for a resource provider. """ uuid = uuids.compute_node compute_node = self.compute_node # Make sure the resource provider exists for preventing to call the API self._init_provider_tree(generation_override=42, resources_override={}) mock_get.return_value.json.return_value = { 'resource_provider_generation': 43, 'inventories': { 'VCPU': { 'total': 8, 'reserved': CONF.reserved_host_cpus, 'min_unit': 1, 'max_unit': compute_node.vcpus, 'step_size': 1, 'allocation_ratio': compute_node.cpu_allocation_ratio, }, 'MEMORY_MB': { 'total': 1024, 'reserved': CONF.reserved_host_memory_mb, 'min_unit': 1, 'max_unit': compute_node.memory_mb, 'step_size': 1, 'allocation_ratio': compute_node.ram_allocation_ratio, }, 'DISK_GB': { 'total': 10, 'reserved': CONF.reserved_host_disk_mb * 1024, 'min_unit': 1, 'max_unit': compute_node.local_gb, 'step_size': 1, 'allocation_ratio': compute_node.disk_allocation_ratio, }, } } inv_data = report._compute_node_to_inventory_dict(compute_node) result = self.client._update_inventory_attempt( self.context, compute_node.uuid, inv_data ) self.assertTrue(result) exp_url = '/resource_providers/%s/inventories' % uuid mock_get.assert_called_once_with(exp_url) # No update so put should not be called self.assertFalse(mock_put.called) # Make sure we updated the generation from the inventory records self._validate_provider(uuid, generation=43) @mock.patch.object(report.LOG, 'info') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_get_inventory') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'put') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 
'_ensure_resource_provider') def test_update_inventory_concurrent_update(self, mock_ensure, mock_put, mock_get, mock_info): # Ensure _update_inventory() returns a list of Inventories objects # after creating or updating the existing values uuid = uuids.compute_node compute_node = self.compute_node # Make sure the resource provider exists for preventing to call the API self.client._provider_tree.new_root( compute_node.hypervisor_hostname, compute_node.uuid, 42, ) mock_get.return_value = { 'resource_provider_generation': 42, 'inventories': {}, } mock_put.return_value.status_code = 409 mock_put.return_value.text = 'Does not match inventory in use' mock_put.return_value.headers = {'x-openstack-request-id': uuids.request_id} inv_data = report._compute_node_to_inventory_dict(compute_node) result = self.client._update_inventory_attempt( self.context, compute_node.uuid, inv_data ) self.assertFalse(result) # Invalidated the cache self.assertFalse(self.client._provider_tree.exists(uuid)) # Refreshed our resource provider mock_ensure.assert_called_once_with(self.context, uuid) # Logged the request id in the log message self.assertEqual(uuids.request_id, mock_info.call_args[0][1]['placement_req_id']) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_get_inventory') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'put') def test_update_inventory_inventory_in_use(self, mock_put, mock_get): # Ensure _update_inventory() returns a list of Inventories objects # after creating or updating the existing values uuid = uuids.compute_node compute_node = self.compute_node # Make sure the resource provider exists for preventing to call the API self.client._provider_tree.new_root( compute_node.hypervisor_hostname, compute_node.uuid, 42, ) mock_get.return_value = { 'resource_provider_generation': 42, 'inventories': {}, } mock_put.return_value.status_code = 409 mock_put.return_value.text = ( "update conflict: Inventory for VCPU on " "resource provider 123 in use" ) inv_data = report._compute_node_to_inventory_dict(compute_node) self.assertRaises( exception.InventoryInUse, self.client._update_inventory_attempt, self.context, compute_node.uuid, inv_data, ) # Did NOT invalidate the cache self.assertTrue(self.client._provider_tree.exists(uuid)) @mock.patch.object(report.LOG, 'info') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_get_inventory') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'put') def test_update_inventory_unknown_response(self, mock_put, mock_get, mock_info): # Ensure _update_inventory() returns a list of Inventories objects # after creating or updating the existing values uuid = uuids.compute_node compute_node = self.compute_node # Make sure the resource provider exists for preventing to call the API self.client._provider_tree.new_root( compute_node.hypervisor_hostname, compute_node.uuid, 42, ) mock_get.return_value = { 'resource_provider_generation': 42, 'inventories': {}, } mock_put.return_value.status_code = 234 mock_put.return_value.headers = {'x-openstack-request-id': uuids.request_id} inv_data = report._compute_node_to_inventory_dict(compute_node) result = self.client._update_inventory_attempt( self.context, compute_node.uuid, inv_data ) self.assertFalse(result) # No cache invalidation self.assertTrue(self.client._provider_tree.exists(uuid)) @mock.patch.object(report.LOG, 'warning') @mock.patch.object(report.LOG, 'debug') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 
                '_get_inventory')
    @mock.patch('nova.scheduler.client.report.SchedulerReportClient.'
                'put')
    def test_update_inventory_failed(self, mock_put, mock_get,
                                     mock_debug, mock_warn):
        # Ensure _update_inventory_attempt() returns False and logs the
        # placement request id when the PUT to placement fails outright.
        uuid = uuids.compute_node
        compute_node = self.compute_node
        # Make sure the resource provider is in the tree cache so we don't
        # call the API to (re)create it.
        self.client._provider_tree.new_root(
            compute_node.hypervisor_hostname,
            compute_node.uuid,
            42,
        )
        mock_get.return_value = {
            'resource_provider_generation': 42,
            'inventories': {},
        }
        try:
            mock_put.return_value.__nonzero__.return_value = False
        except AttributeError:
            # Thanks py3
            mock_put.return_value.__bool__.return_value = False
        mock_put.return_value.headers = {'x-openstack-request-id':
                                         uuids.request_id}

        inv_data = report._compute_node_to_inventory_dict(compute_node)
        result = self.client._update_inventory_attempt(
            self.context, compute_node.uuid, inv_data
        )
        self.assertFalse(result)

        # No cache invalidation
        self.assertTrue(self.client._provider_tree.exists(uuid))

        # Logged the request id in the log messages
        self.assertEqual(uuids.request_id,
                         mock_debug.call_args[0][1]['placement_req_id'])
        self.assertEqual(uuids.request_id,
                         mock_warn.call_args[0][1]['placement_req_id'])

    @mock.patch('nova.scheduler.client.report.SchedulerReportClient.'
                '_ensure_resource_provider')
    @mock.patch('nova.scheduler.client.report.SchedulerReportClient.'
                '_update_inventory_attempt')
    @mock.patch('time.sleep')
    def test_update_inventory_fails_and_then_succeeds(self, mock_sleep,
                                                      mock_update,
                                                      mock_ensure):
        # Ensure _update_inventory() retries when an attempt fails with a
        # conflict and returns True once a retry succeeds.
        cn = self.compute_node
        mock_update.side_effect = (False, True)

        self.client._provider_tree.new_root(
            cn.hypervisor_hostname,
            cn.uuid,
            42,
        )
        result = self.client._update_inventory(
            self.context, cn.uuid, mock.sentinel.inv_data
        )
        self.assertTrue(result)

        # Only slept once
        mock_sleep.assert_called_once_with(1)

    @mock.patch('nova.scheduler.client.report.SchedulerReportClient.'
                '_ensure_resource_provider')
    @mock.patch('nova.scheduler.client.report.SchedulerReportClient.'
                '_update_inventory_attempt')
    @mock.patch('time.sleep')
    def test_update_inventory_never_succeeds(self, mock_sleep,
                                             mock_update,
                                             mock_ensure):
        # Ensure _update_inventory() gives up and returns False after
        # every retry attempt fails.
        cn = self.compute_node
        mock_update.side_effect = (False, False, False)

        self.client._provider_tree.new_root(
            cn.hypervisor_hostname,
            cn.uuid,
            42,
        )
        result = self.client._update_inventory(
            self.context, cn.uuid, mock.sentinel.inv_data
        )
        self.assertFalse(result)

        # Slept three times
        mock_sleep.assert_has_calls([mock.call(1), mock.call(1),
                                     mock.call(1)])

        # Three attempts to update
        mock_update.assert_has_calls([
            mock.call(self.context, cn.uuid, mock.sentinel.inv_data),
            mock.call(self.context, cn.uuid, mock.sentinel.inv_data),
            mock.call(self.context, cn.uuid, mock.sentinel.inv_data),
        ])
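    # NOTE(sketch): a hedged illustration of the retry behaviour the two
    # tests above depend on. The helper name and max_attempts parameter
    # are hypothetical; the real logic lives in _update_inventory(),
    # which sleeps a second between failed attempts.
    def _sketch_update_inventory_with_retries(self, attempt, context,
                                              rp_uuid, inv_data,
                                              max_attempts=3):
        for _ in range(max_attempts):
            # attempt() returns True on success, False on a (retryable)
            # failure such as a generation conflict.
            if attempt(context, rp_uuid, inv_data):
                return True
            time.sleep(1)
        return False

    @mock.patch('nova.scheduler.client.report.SchedulerReportClient.'
                '_update_inventory')
    @mock.patch('nova.scheduler.client.report.SchedulerReportClient.'
                '_delete_inventory')
    @mock.patch('nova.scheduler.client.report.SchedulerReportClient.'
                '_ensure_resource_class')
    @mock.patch('nova.scheduler.client.report.SchedulerReportClient.'
                '_get_or_create_resource_class')
    @mock.patch('nova.scheduler.client.report.SchedulerReportClient.'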
'_ensure_resource_provider') def test_set_inventory_for_provider_no_custom(self, mock_erp, mock_gocr, mock_erc, mock_del, mock_upd): """Tests that inventory records of all standard resource classes are passed to the report client's _update_inventory() method. """ inv_data = { 'VCPU': { 'total': 24, 'reserved': 0, 'min_unit': 1, 'max_unit': 24, 'step_size': 1, 'allocation_ratio': 1.0, }, 'MEMORY_MB': { 'total': 1024, 'reserved': 0, 'min_unit': 1, 'max_unit': 1024, 'step_size': 1, 'allocation_ratio': 1.0, }, 'DISK_GB': { 'total': 100, 'reserved': 0, 'min_unit': 1, 'max_unit': 100, 'step_size': 1, 'allocation_ratio': 1.0, }, } self.client.set_inventory_for_provider( self.context, mock.sentinel.rp_uuid, mock.sentinel.rp_name, inv_data, ) mock_erp.assert_called_once_with( self.context, mock.sentinel.rp_uuid, mock.sentinel.rp_name, parent_provider_uuid=None, ) # No custom resource classes to ensure... self.assertFalse(mock_erc.called) self.assertFalse(mock_gocr.called) mock_upd.assert_called_once_with( self.context, mock.sentinel.rp_uuid, inv_data, ) self.assertFalse(mock_del.called) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_update_inventory') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_delete_inventory') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_ensure_resource_class') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_get_or_create_resource_class') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_ensure_resource_provider') def test_set_inventory_for_provider_no_inv(self, mock_erp, mock_gocr, mock_erc, mock_del, mock_upd): """Tests that passing empty set of inventory records triggers a delete of inventory for the provider. """ inv_data = {} self.client.set_inventory_for_provider( self.context, mock.sentinel.rp_uuid, mock.sentinel.rp_name, inv_data, ) mock_erp.assert_called_once_with( self.context, mock.sentinel.rp_uuid, mock.sentinel.rp_name, parent_provider_uuid=None, ) self.assertFalse(mock_gocr.called) self.assertFalse(mock_erc.called) self.assertFalse(mock_upd.called) mock_del.assert_called_once_with(self.context, mock.sentinel.rp_uuid) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_update_inventory') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_delete_inventory') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_ensure_resource_class') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_get_or_create_resource_class') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_ensure_resource_provider') def test_set_inventory_for_provider_with_custom(self, mock_erp, mock_gocr, mock_erc, mock_del, mock_upd): """Tests that inventory records that include a custom resource class are passed to the report client's _update_inventory() method and that the custom resource class is auto-created. 
""" inv_data = { 'VCPU': { 'total': 24, 'reserved': 0, 'min_unit': 1, 'max_unit': 24, 'step_size': 1, 'allocation_ratio': 1.0, }, 'MEMORY_MB': { 'total': 1024, 'reserved': 0, 'min_unit': 1, 'max_unit': 1024, 'step_size': 1, 'allocation_ratio': 1.0, }, 'DISK_GB': { 'total': 100, 'reserved': 0, 'min_unit': 1, 'max_unit': 100, 'step_size': 1, 'allocation_ratio': 1.0, }, 'CUSTOM_IRON_SILVER': { 'total': 1, 'reserved': 0, 'min_unit': 1, 'max_unit': 1, 'step_size': 1, 'allocation_ratio': 1.0, } } self.client.set_inventory_for_provider( self.context, mock.sentinel.rp_uuid, mock.sentinel.rp_name, inv_data, ) mock_erp.assert_called_once_with( self.context, mock.sentinel.rp_uuid, mock.sentinel.rp_name, parent_provider_uuid=None, ) mock_erc.assert_called_once_with(self.context, 'CUSTOM_IRON_SILVER') mock_upd.assert_called_once_with( self.context, mock.sentinel.rp_uuid, inv_data, ) self.assertFalse(mock_gocr.called) self.assertFalse(mock_del.called) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_delete_inventory', new=mock.Mock()) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_ensure_resource_class', new=mock.Mock()) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_ensure_resource_provider') def test_set_inventory_for_provider_with_parent(self, mock_erp): """Ensure parent UUID is sent through.""" self.client.set_inventory_for_provider( self.context, uuids.child, 'junior', {}, parent_provider_uuid=uuids.parent) mock_erp.assert_called_once_with( self.context, uuids.child, 'junior', parent_provider_uuid=uuids.parent) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'put') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_get_or_create_resource_class') def test_ensure_resource_class_microversion_failover(self, mock_gocr, mock_put): mock_put.return_value.status_code = 406 self.client._ensure_resource_class(self.context, 'CUSTOM_IRON_SILVER') mock_gocr.assert_called_once_with(self.context, 'CUSTOM_IRON_SILVER') mock_put.assert_called_once_with( '/resource_classes/CUSTOM_IRON_SILVER', None, version='1.7', global_request_id=self.context.global_id) class TestAllocations(SchedulerReportClientTestCase): @mock.patch('nova.compute.utils.is_volume_backed_instance') def test_instance_to_allocations_dict(self, mock_vbi): mock_vbi.return_value = False inst = objects.Instance( uuid=uuids.inst, flavor=objects.Flavor(root_gb=10, swap=1023, ephemeral_gb=100, memory_mb=1024, vcpus=2, extra_specs={})) result = report._instance_to_allocations_dict(inst) expected = { 'MEMORY_MB': 1024, 'VCPU': 2, 'DISK_GB': 111, } self.assertEqual(expected, result) @mock.patch('nova.compute.utils.is_volume_backed_instance') def test_instance_to_allocations_dict_overrides(self, mock_vbi): """Test that resource overrides in an instance's flavor extra_specs are reported to placement. 
""" mock_vbi.return_value = False specs = { 'resources:CUSTOM_DAN': '123', 'resources:%s' % fields.ResourceClass.VCPU: '4', 'resources:NOTATHING': '456', 'resources:NOTEVENANUMBER': 'catfood', 'resources:': '7', 'resources:ferret:weasel': 'smelly', 'foo': 'bar', } inst = objects.Instance( uuid=uuids.inst, flavor=objects.Flavor(root_gb=10, swap=1023, ephemeral_gb=100, memory_mb=1024, vcpus=2, extra_specs=specs)) result = report._instance_to_allocations_dict(inst) expected = { 'MEMORY_MB': 1024, 'VCPU': 4, 'DISK_GB': 111, 'CUSTOM_DAN': 123, } self.assertEqual(expected, result) @mock.patch('nova.compute.utils.is_volume_backed_instance') def test_instance_to_allocations_dict_boot_from_volume(self, mock_vbi): mock_vbi.return_value = True inst = objects.Instance( uuid=uuids.inst, flavor=objects.Flavor(root_gb=10, swap=1, ephemeral_gb=100, memory_mb=1024, vcpus=2, extra_specs={})) result = report._instance_to_allocations_dict(inst) expected = { 'MEMORY_MB': 1024, 'VCPU': 2, 'DISK_GB': 101, } self.assertEqual(expected, result) @mock.patch('nova.compute.utils.is_volume_backed_instance') def test_instance_to_allocations_dict_zero_disk(self, mock_vbi): mock_vbi.return_value = True inst = objects.Instance( uuid=uuids.inst, flavor=objects.Flavor(root_gb=10, swap=0, ephemeral_gb=0, memory_mb=1024, vcpus=2, extra_specs={})) result = report._instance_to_allocations_dict(inst) expected = { 'MEMORY_MB': 1024, 'VCPU': 2, } self.assertEqual(expected, result) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'put') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'get') @mock.patch('nova.scheduler.client.report.' '_instance_to_allocations_dict') def test_update_instance_allocation_new(self, mock_a, mock_get, mock_put): cn = objects.ComputeNode(uuid=uuids.cn) inst = objects.Instance(uuid=uuids.inst, project_id=uuids.project, user_id=uuids.user) mock_get.return_value.json.return_value = {'allocations': {}} expected = { 'allocations': [ {'resource_provider': {'uuid': cn.uuid}, 'resources': mock_a.return_value}], 'project_id': inst.project_id, 'user_id': inst.user_id, } self.client.update_instance_allocation(self.context, cn, inst, 1) mock_put.assert_called_once_with( '/allocations/%s' % inst.uuid, expected, version='1.8', global_request_id=self.context.global_id) self.assertTrue(mock_get.called) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'put') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'get') @mock.patch('nova.scheduler.client.report.' '_instance_to_allocations_dict') def test_update_instance_allocation_existing(self, mock_a, mock_get, mock_put): cn = objects.ComputeNode(uuid=uuids.cn) inst = objects.Instance(uuid=uuids.inst) mock_get.return_value.json.return_value = {'allocations': { cn.uuid: { 'generation': 2, 'resources': { 'DISK_GB': 123, 'MEMORY_MB': 456, } }} } mock_a.return_value = { 'DISK_GB': 123, 'MEMORY_MB': 456, } self.client.update_instance_allocation(self.context, cn, inst, 1) self.assertFalse(mock_put.called) mock_get.assert_called_once_with( '/allocations/%s' % inst.uuid) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'get') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'put') @mock.patch('nova.scheduler.client.report.' 
'_instance_to_allocations_dict') @mock.patch.object(report.LOG, 'warning') def test_update_instance_allocation_new_failed(self, mock_warn, mock_a, mock_put, mock_get): cn = objects.ComputeNode(uuid=uuids.cn) inst = objects.Instance(uuid=uuids.inst, project_id=uuids.project, user_id=uuids.user) try: mock_put.return_value.__nonzero__.return_value = False except AttributeError: # NOTE(danms): LOL @ py3 mock_put.return_value.__bool__.return_value = False self.client.update_instance_allocation(self.context, cn, inst, 1) self.assertTrue(mock_warn.called) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'delete') def test_update_instance_allocation_delete(self, mock_delete): cn = objects.ComputeNode(uuid=uuids.cn) inst = objects.Instance(uuid=uuids.inst) self.client.update_instance_allocation(self.context, cn, inst, -1) mock_delete.assert_called_once_with( '/allocations/%s' % inst.uuid, global_request_id=self.context.global_id) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'delete') @mock.patch.object(report.LOG, 'warning') def test_update_instance_allocation_delete_failed(self, mock_warn, mock_delete): cn = objects.ComputeNode(uuid=uuids.cn) inst = objects.Instance(uuid=uuids.inst) try: mock_delete.return_value.__nonzero__.return_value = False except AttributeError: # NOTE(danms): LOL @ py3 mock_delete.return_value.__bool__.return_value = False self.client.update_instance_allocation(self.context, cn, inst, -1) self.assertTrue(mock_warn.called) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'delete') @mock.patch('nova.scheduler.client.report.LOG') def test_delete_allocation_for_instance_ignore_404(self, mock_log, mock_delete): """Tests that we don't log a warning on a 404 response when trying to delete an allocation record. """ mock_response = mock.MagicMock(status_code=404) try: mock_response.__nonzero__.return_value = False except AttributeError: # py3 uses __bool__ mock_response.__bool__.return_value = False mock_delete.return_value = mock_response self.client.delete_allocation_for_instance(self.context, uuids.rp_uuid) # make sure we didn't screw up the logic or the mock mock_log.info.assert_not_called() # make sure warning wasn't called for the 404 mock_log.warning.assert_not_called() @mock.patch("nova.scheduler.client.report.SchedulerReportClient." "delete") @mock.patch("nova.scheduler.client.report.SchedulerReportClient." "delete_allocation_for_instance") @mock.patch("nova.objects.InstanceList.get_by_host_and_node") def test_delete_resource_provider_cascade(self, mock_by_host, mock_del_alloc, mock_delete): self.client._provider_tree.new_root(uuids.cn, uuids.cn, 1) cn = objects.ComputeNode(uuid=uuids.cn, host="fake_host", hypervisor_hostname="fake_hostname", ) inst1 = objects.Instance(uuid=uuids.inst1) inst2 = objects.Instance(uuid=uuids.inst2) mock_by_host.return_value = objects.InstanceList( objects=[inst1, inst2]) resp_mock = mock.Mock(status_code=204) mock_delete.return_value = resp_mock self.client.delete_resource_provider(self.context, cn, cascade=True) self.assertEqual(2, mock_del_alloc.call_count) exp_url = "/resource_providers/%s" % uuids.cn mock_delete.assert_called_once_with( exp_url, global_request_id=self.context.global_id) self.assertFalse(self.client._provider_tree.exists(uuids.cn)) @mock.patch("nova.scheduler.client.report.SchedulerReportClient." "delete") @mock.patch("nova.scheduler.client.report.SchedulerReportClient." 
"delete_allocation_for_instance") @mock.patch("nova.objects.InstanceList.get_by_host_and_node") def test_delete_resource_provider_no_cascade(self, mock_by_host, mock_del_alloc, mock_delete): self.client._provider_tree.new_root(uuids.cn, uuids.cn, 1) self.client.association_refresh_time[uuids.cn] = mock.Mock() cn = objects.ComputeNode(uuid=uuids.cn, host="fake_host", hypervisor_hostname="fake_hostname", ) inst1 = objects.Instance(uuid=uuids.inst1) inst2 = objects.Instance(uuid=uuids.inst2) mock_by_host.return_value = objects.InstanceList( objects=[inst1, inst2]) resp_mock = mock.Mock(status_code=204) mock_delete.return_value = resp_mock self.client.delete_resource_provider(self.context, cn) mock_del_alloc.assert_not_called() exp_url = "/resource_providers/%s" % uuids.cn mock_delete.assert_called_once_with( exp_url, global_request_id=self.context.global_id) self.assertNotIn(uuids.cn, self.client.association_refresh_time) @mock.patch("nova.scheduler.client.report.SchedulerReportClient." "delete") @mock.patch('nova.scheduler.client.report.LOG') def test_delete_resource_provider_log_calls(self, mock_log, mock_delete): # First, check a successful call self.client._provider_tree.new_root(uuids.cn, uuids.cn, 1) cn = objects.ComputeNode(uuid=uuids.cn, host="fake_host", hypervisor_hostname="fake_hostname", ) resp_mock = mock.MagicMock(status_code=204) try: resp_mock.__nonzero__.return_value = True except AttributeError: # py3 uses __bool__ resp_mock.__bool__.return_value = True mock_delete.return_value = resp_mock self.client.delete_resource_provider(self.context, cn) # With a 204, only the info should be called self.assertEqual(1, mock_log.info.call_count) self.assertEqual(0, mock_log.warning.call_count) # Now check a 404 response mock_log.reset_mock() resp_mock.status_code = 404 try: resp_mock.__nonzero__.return_value = False except AttributeError: # py3 uses __bool__ resp_mock.__bool__.return_value = False self.client.delete_resource_provider(self.context, cn) # With a 404, neither log message should be called self.assertEqual(0, mock_log.info.call_count) self.assertEqual(0, mock_log.warning.call_count) # Finally, check a 409 response mock_log.reset_mock() resp_mock.status_code = 409 self.client.delete_resource_provider(self.context, cn) # With a 409, only the warning should be called self.assertEqual(0, mock_log.info.call_count) self.assertEqual(1, mock_log.error.call_count) class TestResourceClass(SchedulerReportClientTestCase): @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_create_resource_class') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.get') def test_get_or_create_existing(self, mock_get, mock_crc): resp_mock = mock.Mock(status_code=200) mock_get.return_value = resp_mock rc_name = 'CUSTOM_FOO' result = self.client._get_or_create_resource_class(self.context, rc_name) mock_get.assert_called_once_with( '/resource_classes/' + rc_name, version="1.2", ) self.assertFalse(mock_crc.called) self.assertEqual(rc_name, result) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 
'_create_resource_class') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.get') def test_get_or_create_not_existing(self, mock_get, mock_crc): resp_mock = mock.Mock(status_code=404) mock_get.return_value = resp_mock rc_name = 'CUSTOM_FOO' result = self.client._get_or_create_resource_class(self.context, rc_name) mock_get.assert_called_once_with( '/resource_classes/' + rc_name, version="1.2", ) mock_crc.assert_called_once_with(self.context, rc_name) self.assertEqual(rc_name, result) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' '_create_resource_class') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.get') def test_get_or_create_bad_get(self, mock_get, mock_crc): resp_mock = mock.Mock(status_code=500, text='server error') mock_get.return_value = resp_mock rc_name = 'CUSTOM_FOO' result = self.client._get_or_create_resource_class(self.context, rc_name) mock_get.assert_called_once_with( '/resource_classes/' + rc_name, version="1.2", ) self.assertFalse(mock_crc.called) self.assertIsNone(result) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.post') def test_create_resource_class(self, mock_post): resp_mock = mock.Mock(status_code=201) mock_post.return_value = resp_mock rc_name = 'CUSTOM_FOO' result = self.client._create_resource_class(self.context, rc_name) mock_post.assert_called_once_with( '/resource_classes', {'name': rc_name}, version="1.2", global_request_id=self.context.global_id ) self.assertIsNone(result) @mock.patch('nova.scheduler.client.report.LOG.info') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.post') def test_create_resource_class_concurrent_write(self, mock_post, mock_log): resp_mock = mock.Mock(status_code=409) mock_post.return_value = resp_mock rc_name = 'CUSTOM_FOO' result = self.client._create_resource_class(self.context, rc_name) mock_post.assert_called_once_with( '/resource_classes', {'name': rc_name}, version="1.2", global_request_id=self.context.global_id ) self.assertIsNone(result) self.assertIn('Another thread already', mock_log.call_args[0][0]) nova-17.0.1/nova/tests/unit/scheduler/__init__.py0000666000175000017500000000000013250073126021727 0ustar zuulzuul00000000000000nova-17.0.1/nova/tests/unit/scheduler/test_scheduler.py0000666000175000017500000004526413250073136023233 0ustar zuulzuul00000000000000# Copyright 2010 United States Government as represented by the # Administrator of the National Aeronautics and Space Administration. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
""" Tests For Scheduler """ import mock import oslo_messaging as messaging import nova.conf from nova import context from nova import objects from nova.scheduler import caching_scheduler from nova.scheduler import chance from nova.scheduler import filter_scheduler from nova.scheduler import host_manager from nova.scheduler import ironic_host_manager from nova.scheduler import manager from nova import servicegroup from nova import test from nova.tests.unit import fake_server_actions from nova.tests.unit.scheduler import fakes from nova.tests import uuidsentinel as uuids CONF = nova.conf.CONF class SchedulerManagerInitTestCase(test.NoDBTestCase): """Test case for scheduler manager initiation.""" manager_cls = manager.SchedulerManager @mock.patch.object(host_manager.HostManager, '_init_instance_info') @mock.patch.object(host_manager.HostManager, '_init_aggregates') def test_init_using_default_schedulerdriver(self, mock_init_agg, mock_init_inst): driver = self.manager_cls().driver self.assertIsInstance(driver, filter_scheduler.FilterScheduler) @mock.patch.object(host_manager.HostManager, '_init_instance_info') @mock.patch.object(host_manager.HostManager, '_init_aggregates') def test_init_using_chance_schedulerdriver(self, mock_init_agg, mock_init_inst): self.flags(driver='chance_scheduler', group='scheduler') driver = self.manager_cls().driver self.assertIsInstance(driver, chance.ChanceScheduler) @mock.patch.object(host_manager.HostManager, '_init_instance_info') @mock.patch.object(host_manager.HostManager, '_init_aggregates') def test_init_using_caching_schedulerdriver(self, mock_init_agg, mock_init_inst): self.flags(driver='caching_scheduler', group='scheduler') driver = self.manager_cls().driver self.assertIsInstance(driver, caching_scheduler.CachingScheduler) @mock.patch.object(host_manager.HostManager, '_init_instance_info') @mock.patch.object(host_manager.HostManager, '_init_aggregates') def test_init_nonexist_schedulerdriver(self, mock_init_agg, mock_init_inst): self.flags(driver='nonexist_scheduler', group='scheduler') # The entry point has to be defined in setup.cfg and nova-scheduler has # to be deployed again before using a custom value. self.assertRaises(RuntimeError, self.manager_cls) class SchedulerManagerTestCase(test.NoDBTestCase): """Test case for scheduler manager.""" manager_cls = manager.SchedulerManager driver_cls = fakes.FakeScheduler driver_plugin_name = 'fake_scheduler' @mock.patch.object(host_manager.HostManager, '_init_instance_info') @mock.patch.object(host_manager.HostManager, '_init_aggregates') def setUp(self, mock_init_agg, mock_init_inst): super(SchedulerManagerTestCase, self).setUp() self.flags(driver=self.driver_plugin_name, group='scheduler') with mock.patch.object(host_manager.HostManager, '_init_aggregates'): self.manager = self.manager_cls() self.context = context.RequestContext('fake_user', 'fake_project') self.topic = 'fake_topic' self.fake_args = (1, 2, 3) self.fake_kwargs = {'cat': 'meow', 'dog': 'woof'} fake_server_actions.stub_out_action_events(self) def test_1_correct_init(self): # Correct scheduler driver manager = self.manager self.assertIsInstance(manager.driver, self.driver_cls) @mock.patch('nova.scheduler.utils.resources_from_request_spec') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 
'get_allocation_candidates') def test_select_destination(self, mock_get_ac, mock_rfrs): fake_spec = objects.RequestSpec() fake_spec.instance_uuid = uuids.instance fake_version = "9.42" place_res = (fakes.ALLOC_REQS, mock.sentinel.p_sums, fake_version) mock_get_ac.return_value = place_res expected_alloc_reqs_by_rp_uuid = { cn.uuid: [fakes.ALLOC_REQS[x]] for x, cn in enumerate(fakes.COMPUTE_NODES) } with mock.patch.object(self.manager.driver, 'select_destinations' ) as select_destinations: self.manager.select_destinations(None, spec_obj=fake_spec, instance_uuids=[fake_spec.instance_uuid]) select_destinations.assert_called_once_with(None, fake_spec, [fake_spec.instance_uuid], expected_alloc_reqs_by_rp_uuid, mock.sentinel.p_sums, fake_version, False) mock_get_ac.assert_called_once_with(mock_rfrs.return_value) # Now call select_destinations() with True values for the params # introduced in RPC version 4.5 select_destinations.reset_mock() self.manager.select_destinations(None, spec_obj=fake_spec, instance_uuids=[fake_spec.instance_uuid], return_objects=True, return_alternates=True) select_destinations.assert_called_once_with(None, fake_spec, [fake_spec.instance_uuid], expected_alloc_reqs_by_rp_uuid, mock.sentinel.p_sums, fake_version, True) @mock.patch('nova.scheduler.utils.resources_from_request_spec') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'get_allocation_candidates') def test_select_destination_return_objects(self, mock_get_ac, mock_rfrs): fake_spec = objects.RequestSpec() fake_spec.instance_uuid = uuids.instance fake_version = "9.42" place_res = (fakes.ALLOC_REQS, mock.sentinel.p_sums, fake_version) mock_get_ac.return_value = place_res expected_alloc_reqs_by_rp_uuid = { cn.uuid: [fakes.ALLOC_REQS[x]] for x, cn in enumerate(fakes.COMPUTE_NODES) } with mock.patch.object(self.manager.driver, 'select_destinations' ) as select_destinations: sel_obj = objects.Selection(service_host="fake_host", nodename="fake_node", compute_node_uuid=uuids.compute_node, cell_uuid=uuids.cell, limits=None) select_destinations.return_value = [[sel_obj]] # Pass True; should get the Selection object back. dests = self.manager.select_destinations(None, spec_obj=fake_spec, instance_uuids=[fake_spec.instance_uuid], return_objects=True, return_alternates=True) sel_host = dests[0][0] self.assertIsInstance(sel_host, objects.Selection) # Since both return_objects and return_alternates are True, the # driver should have been called with True for return_alternates. select_destinations.assert_called_once_with(None, fake_spec, [fake_spec.instance_uuid], expected_alloc_reqs_by_rp_uuid, mock.sentinel.p_sums, fake_version, True) # Now pass False for return objects, but keep return_alternates as # True. Verify that the manager converted the Selection object back # to a dict. select_destinations.reset_mock() dests = self.manager.select_destinations(None, spec_obj=fake_spec, instance_uuids=[fake_spec.instance_uuid], return_objects=False, return_alternates=True) sel_host = dests[0] self.assertIsInstance(sel_host, dict) # Even though return_alternates was passed as True, since # return_objects was False, the driver should have been called with # return_alternates as False. select_destinations.assert_called_once_with(None, fake_spec, [fake_spec.instance_uuid], expected_alloc_reqs_by_rp_uuid, mock.sentinel.p_sums, fake_version, False) @mock.patch('nova.scheduler.utils.resources_from_request_spec') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 
'get_allocation_candidates') def _test_select_destination(self, get_allocation_candidates_response, mock_get_ac, mock_rfrs): fake_spec = objects.RequestSpec() fake_spec.instance_uuid = uuids.instance place_res = get_allocation_candidates_response mock_get_ac.return_value = place_res with mock.patch.object(self.manager.driver, 'select_destinations' ) as select_destinations: self.assertRaises(messaging.rpc.dispatcher.ExpectedException, self.manager.select_destinations, None, spec_obj=fake_spec, instance_uuids=[fake_spec.instance_uuid]) select_destinations.assert_not_called() mock_get_ac.assert_called_once_with(mock_rfrs.return_value) def test_select_destination_old_placement(self): """Tests that we will raise NoValidhost when the scheduler report client's get_allocation_candidates() returns None, None as it would if placement service hasn't been upgraded before scheduler. """ place_res = (None, None, None) self._test_select_destination(place_res) def test_select_destination_placement_connect_fails(self): """Tests that we will raise NoValidHost when the scheduler report client's get_allocation_candidates() returns None, which it would if the connection to Placement failed and the safe_connect decorator returns None. """ place_res = None self._test_select_destination(place_res) def test_select_destination_no_candidates(self): """Tests that we will raise NoValidHost when the scheduler report client's get_allocation_candidates() returns [], {} which it would if placement service hasn't yet had compute nodes populate inventory. """ place_res = ([], {}, None) self._test_select_destination(place_res) @mock.patch('nova.scheduler.utils.resources_from_request_spec') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'get_allocation_candidates') def test_select_destination_with_4_3_client(self, mock_get_ac, mock_rfrs): fake_spec = objects.RequestSpec() place_res = (fakes.ALLOC_REQS, mock.sentinel.p_sums, "42.0") mock_get_ac.return_value = place_res expected_alloc_reqs_by_rp_uuid = { cn.uuid: [fakes.ALLOC_REQS[x]] for x, cn in enumerate(fakes.COMPUTE_NODES) } with mock.patch.object(self.manager.driver, 'select_destinations' ) as select_destinations: self.manager.select_destinations(None, spec_obj=fake_spec) select_destinations.assert_called_once_with(None, fake_spec, None, expected_alloc_reqs_by_rp_uuid, mock.sentinel.p_sums, "42.0", False) mock_get_ac.assert_called_once_with(mock_rfrs.return_value) # TODO(sbauza): Remove that test once the API v4 is removed @mock.patch('nova.scheduler.utils.resources_from_request_spec') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 
'get_allocation_candidates') @mock.patch.object(objects.RequestSpec, 'from_primitives') def test_select_destination_with_old_client(self, from_primitives, mock_get_ac, mock_rfrs): fake_spec = objects.RequestSpec() fake_spec.instance_uuid = uuids.instance from_primitives.return_value = fake_spec place_res = (fakes.ALLOC_REQS, mock.sentinel.p_sums, "42.0") mock_get_ac.return_value = place_res expected_alloc_reqs_by_rp_uuid = { cn.uuid: [fakes.ALLOC_REQS[x]] for x, cn in enumerate(fakes.COMPUTE_NODES) } with mock.patch.object(self.manager.driver, 'select_destinations' ) as select_destinations: self.manager.select_destinations(None, request_spec='fake_spec', filter_properties='fake_props', instance_uuids=[fake_spec.instance_uuid]) select_destinations.assert_called_once_with(None, fake_spec, [fake_spec.instance_uuid], expected_alloc_reqs_by_rp_uuid, mock.sentinel.p_sums, "42.0", False) mock_get_ac.assert_called_once_with(mock_rfrs.return_value) def test_update_aggregates(self): with mock.patch.object(self.manager.driver.host_manager, 'update_aggregates' ) as update_aggregates: self.manager.update_aggregates(None, aggregates='agg') update_aggregates.assert_called_once_with('agg') def test_delete_aggregate(self): with mock.patch.object(self.manager.driver.host_manager, 'delete_aggregate' ) as delete_aggregate: self.manager.delete_aggregate(None, aggregate='agg') delete_aggregate.assert_called_once_with('agg') def test_update_instance_info(self): with mock.patch.object(self.manager.driver.host_manager, 'update_instance_info') as mock_update: self.manager.update_instance_info(mock.sentinel.context, mock.sentinel.host_name, mock.sentinel.instance_info) mock_update.assert_called_once_with(mock.sentinel.context, mock.sentinel.host_name, mock.sentinel.instance_info) def test_delete_instance_info(self): with mock.patch.object(self.manager.driver.host_manager, 'delete_instance_info') as mock_delete: self.manager.delete_instance_info(mock.sentinel.context, mock.sentinel.host_name, mock.sentinel.instance_uuid) mock_delete.assert_called_once_with(mock.sentinel.context, mock.sentinel.host_name, mock.sentinel.instance_uuid) def test_sync_instance_info(self): with mock.patch.object(self.manager.driver.host_manager, 'sync_instance_info') as mock_sync: self.manager.sync_instance_info(mock.sentinel.context, mock.sentinel.host_name, mock.sentinel.instance_uuids) mock_sync.assert_called_once_with(mock.sentinel.context, mock.sentinel.host_name, mock.sentinel.instance_uuids) @mock.patch('nova.objects.host_mapping.discover_hosts') def test_discover_hosts(self, mock_discover): cm1 = objects.CellMapping(name='cell1') cm2 = objects.CellMapping(name='cell2') mock_discover.return_value = [objects.HostMapping(host='a', cell_mapping=cm1), objects.HostMapping(host='b', cell_mapping=cm2)] self.manager._discover_hosts_in_cells(mock.sentinel.context) class SchedulerInitTestCase(test.NoDBTestCase): """Test case for base scheduler driver initiation.""" driver_cls = fakes.FakeScheduler @mock.patch.object(host_manager.HostManager, '_init_instance_info') @mock.patch.object(host_manager.HostManager, '_init_aggregates') def test_init_using_default_hostmanager(self, mock_init_agg, mock_init_inst): manager = self.driver_cls().host_manager self.assertIsInstance(manager, host_manager.HostManager) @mock.patch.object(ironic_host_manager.IronicHostManager, '_init_instance_info') @mock.patch.object(host_manager.HostManager, '_init_aggregates') def test_init_using_ironic_hostmanager(self, mock_init_agg, mock_init_inst): 
self.flags(host_manager='ironic_host_manager', group='scheduler') manager = self.driver_cls().host_manager self.assertIsInstance(manager, ironic_host_manager.IronicHostManager) class SchedulerTestCase(test.NoDBTestCase): """Test case for base scheduler driver class.""" # So we can subclass this test and re-use tests if we need. driver_cls = fakes.FakeScheduler @mock.patch.object(host_manager.HostManager, '_init_instance_info') @mock.patch.object(host_manager.HostManager, '_init_aggregates') def setUp(self, mock_init_agg, mock_init_inst): super(SchedulerTestCase, self).setUp() self.driver = self.driver_cls() self.context = context.RequestContext('fake_user', 'fake_project') self.topic = 'fake_topic' self.servicegroup_api = servicegroup.API() @mock.patch('nova.objects.ServiceList.get_by_topic') @mock.patch('nova.servicegroup.API.service_is_up') def test_hosts_up(self, mock_service_is_up, mock_get_by_topic): service1 = objects.Service(host='host1') service2 = objects.Service(host='host2') services = objects.ServiceList(objects=[service1, service2]) mock_get_by_topic.return_value = services mock_service_is_up.side_effect = [False, True] result = self.driver.hosts_up(self.context, self.topic) self.assertEqual(result, ['host2']) mock_get_by_topic.assert_called_once_with(self.context, self.topic) calls = [mock.call(service1), mock.call(service2)] self.assertEqual(calls, mock_service_is_up.call_args_list) nova-17.0.1/nova/tests/unit/scheduler/test_client.py0000666000175000017500000001327713250073126022531 0ustar zuulzuul00000000000000# Copyright (c) 2014 Red Hat, Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import mock import oslo_messaging as messaging from nova import objects from nova.scheduler import client as scheduler_client from nova.scheduler.client import query as scheduler_query_client from nova.scheduler.client import report as scheduler_report_client from nova import test from nova.tests import uuidsentinel as uuids """Tests for Scheduler Client.""" class SchedulerClientTestCase(test.NoDBTestCase): def setUp(self): super(SchedulerClientTestCase, self).setUp() self.client = scheduler_client.SchedulerClient() def test_constructor(self): self.assertIsNotNone(self.client.queryclient) self.assertIsNotNone(self.client.reportclient) @mock.patch.object(scheduler_query_client.SchedulerQueryClient, 'select_destinations') def test_select_destinations(self, mock_select_destinations): fake_spec = objects.RequestSpec() fake_spec.instance_uuid = uuids.instance self.assertIsNone(self.client.queryclient.instance) self.client.select_destinations('ctxt', fake_spec, [fake_spec.instance_uuid]) self.assertIsNotNone(self.client.queryclient.instance) mock_select_destinations.assert_called_once_with('ctxt', fake_spec, [fake_spec.instance_uuid], False, False) @mock.patch.object(scheduler_query_client.SchedulerQueryClient, 'select_destinations', side_effect=messaging.MessagingTimeout()) def test_select_destinations_timeout(self, mock_select_destinations): # check if the scheduler service times out properly fake_spec = objects.RequestSpec() fake_spec.instance_uuid = uuids.instance fake_args = ['ctxt', fake_spec, [fake_spec.instance_uuid], False, False] self.assertRaises(messaging.MessagingTimeout, self.client.select_destinations, *fake_args) mock_select_destinations.assert_has_calls([mock.call(*fake_args)] * 2) @mock.patch.object(scheduler_query_client.SchedulerQueryClient, 'select_destinations', side_effect=[ messaging.MessagingTimeout(), mock.DEFAULT]) def test_select_destinations_timeout_once(self, mock_select_destinations): # scenario: the scheduler service times out & recovers after failure fake_spec = objects.RequestSpec() fake_spec.instance_uuid = uuids.instance fake_args = ['ctxt', fake_spec, [fake_spec.instance_uuid], False, False] self.client.select_destinations(*fake_args) mock_select_destinations.assert_has_calls([mock.call(*fake_args)] * 2) @mock.patch.object(scheduler_query_client.SchedulerQueryClient, 'update_aggregates') def test_update_aggregates(self, mock_update_aggs): aggregates = [objects.Aggregate(id=1)] self.client.update_aggregates( context='context', aggregates=aggregates) mock_update_aggs.assert_called_once_with( 'context', aggregates) @mock.patch.object(scheduler_query_client.SchedulerQueryClient, 'delete_aggregate') def test_delete_aggregate(self, mock_delete_agg): aggregate = objects.Aggregate(id=1) self.client.delete_aggregate( context='context', aggregate=aggregate) mock_delete_agg.assert_called_once_with( 'context', aggregate) @mock.patch.object(scheduler_report_client.SchedulerReportClient, 'update_compute_node') def test_update_compute_node(self, mock_update_compute_node): self.assertIsNone(self.client.reportclient.instance) self.client.update_compute_node(mock.sentinel.ctx, mock.sentinel.cn) self.assertIsNotNone(self.client.reportclient.instance) mock_update_compute_node.assert_called_once_with( mock.sentinel.ctx, mock.sentinel.cn) @mock.patch.object(scheduler_report_client.SchedulerReportClient, 'set_inventory_for_provider') def test_set_inventory_for_provider(self, mock_set): self.client.set_inventory_for_provider( mock.sentinel.ctx, mock.sentinel.rp_uuid, 
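
    # Both timeout tests above rely on the client retrying
    # select_destinations() once on messaging.MessagingTimeout (hence
    # exactly two recorded calls in each): a single timeout that recovers
    # succeeds silently, while back-to-back timeouts re-raise to the caller.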

    @mock.patch.object(scheduler_query_client.SchedulerQueryClient,
                       'update_aggregates')
    def test_update_aggregates(self, mock_update_aggs):
        aggregates = [objects.Aggregate(id=1)]
        self.client.update_aggregates(
            context='context',
            aggregates=aggregates)
        mock_update_aggs.assert_called_once_with(
            'context', aggregates)

    @mock.patch.object(scheduler_query_client.SchedulerQueryClient,
                       'delete_aggregate')
    def test_delete_aggregate(self, mock_delete_agg):
        aggregate = objects.Aggregate(id=1)
        self.client.delete_aggregate(
            context='context',
            aggregate=aggregate)
        mock_delete_agg.assert_called_once_with(
            'context', aggregate)

    @mock.patch.object(scheduler_report_client.SchedulerReportClient,
                       'update_compute_node')
    def test_update_compute_node(self, mock_update_compute_node):
        self.assertIsNone(self.client.reportclient.instance)

        self.client.update_compute_node(mock.sentinel.ctx, mock.sentinel.cn)

        self.assertIsNotNone(self.client.reportclient.instance)
        mock_update_compute_node.assert_called_once_with(
            mock.sentinel.ctx, mock.sentinel.cn)

    @mock.patch.object(scheduler_report_client.SchedulerReportClient,
                       'set_inventory_for_provider')
    def test_set_inventory_for_provider(self, mock_set):
        self.client.set_inventory_for_provider(
            mock.sentinel.ctx,
            mock.sentinel.rp_uuid,
            mock.sentinel.rp_name,
            mock.sentinel.inv_data,
        )
        mock_set.assert_called_once_with(
            mock.sentinel.ctx,
            mock.sentinel.rp_uuid,
            mock.sentinel.rp_name,
            mock.sentinel.inv_data,
            parent_provider_uuid=None,
        )

        # Pass the optional parent_provider_uuid
        mock_set.reset_mock()
        self.client.set_inventory_for_provider(
            mock.sentinel.ctx,
            mock.sentinel.child_uuid,
            mock.sentinel.child_name,
            mock.sentinel.inv_data2,
            parent_provider_uuid=mock.sentinel.rp_uuid,
        )
        mock_set.assert_called_once_with(
            mock.sentinel.ctx,
            mock.sentinel.child_uuid,
            mock.sentinel.child_name,
            mock.sentinel.inv_data2,
            parent_provider_uuid=mock.sentinel.rp_uuid,
        )
nova-17.0.1/nova/tests/unit/scheduler/weights/0000775000175000017500000000000013250073472021304 5ustar zuulzuul00000000000000
nova-17.0.1/nova/tests/unit/scheduler/weights/test_weights_ram.py0000666000175000017500000001005413250073126025224 0ustar zuulzuul00000000000000# Copyright 2011-2012 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
Tests For Scheduler RAM weights.
"""

from nova.scheduler import weights
from nova.scheduler.weights import ram
from nova import test
from nova.tests.unit.scheduler import fakes


class RamWeigherTestCase(test.NoDBTestCase):
    def setUp(self):
        super(RamWeigherTestCase, self).setUp()
        self.weight_handler = weights.HostWeightHandler()
        self.weighers = [ram.RAMWeigher()]

    def _get_weighed_host(self, hosts, weight_properties=None):
        if weight_properties is None:
            weight_properties = {}
        return self.weight_handler.get_weighed_objects(self.weighers,
                hosts, weight_properties)[0]

    def _get_all_hosts(self):
        host_values = [
            ('host1', 'node1', {'free_ram_mb': 512}),
            ('host2', 'node2', {'free_ram_mb': 1024}),
            ('host3', 'node3', {'free_ram_mb': 3072}),
            ('host4', 'node4', {'free_ram_mb': 8192})
        ]
        return [fakes.FakeHostState(host, node, values)
                for host, node, values in host_values]

    def test_default_of_spreading_first(self):
        hostinfo_list = self._get_all_hosts()

        # host1: free_ram_mb=512
        # host2: free_ram_mb=1024
        # host3: free_ram_mb=3072
        # host4: free_ram_mb=8192
        # so, host4 should win:
        weighed_host = self._get_weighed_host(hostinfo_list)
        self.assertEqual(1.0, weighed_host.weight)
        self.assertEqual('host4', weighed_host.obj.host)

    def test_ram_filter_multiplier1(self):
        self.flags(ram_weight_multiplier=0.0, group='filter_scheduler')
        hostinfo_list = self._get_all_hosts()

        # host1: free_ram_mb=512
        # host2: free_ram_mb=1024
        # host3: free_ram_mb=3072
        # host4: free_ram_mb=8192
        # a multiplier of 0.0 gives every host the same weight, so no
        # particular host is expected to win:
        weighed_host = self._get_weighed_host(hostinfo_list)
        self.assertEqual(0.0, weighed_host.weight)

    def test_ram_filter_multiplier2(self):
        self.flags(ram_weight_multiplier=2.0, group='filter_scheduler')
        hostinfo_list = self._get_all_hosts()

        # host1: free_ram_mb=512
        # host2: free_ram_mb=1024
        # host3: free_ram_mb=3072
        # host4: free_ram_mb=8192
        # so, host4 should win:
        weighed_host = self._get_weighed_host(hostinfo_list)
        self.assertEqual(1.0 * 2, weighed_host.weight)
        self.assertEqual('host4', weighed_host.obj.host)

    def test_ram_filter_negative(self):
        self.flags(ram_weight_multiplier=1.0, group='filter_scheduler')
        hostinfo_list = self._get_all_hosts()
        host_attr = {'id': 100, 'memory_mb': 8192, 'free_ram_mb': -512}
        host_state = fakes.FakeHostState('negative', 'negative', host_attr)
        hostinfo_list = list(hostinfo_list) + [host_state]

        # host1: free_ram_mb=512
        # host2: free_ram_mb=1024
        # host3: free_ram_mb=3072
        # host4: free_ram_mb=8192
        # negativehost: free_ram_mb=-512
        # so, host4 should win
        weights = self.weight_handler.get_weighed_objects(self.weighers,
                hostinfo_list, {})

        weighed_host = weights[0]
        self.assertEqual(1, weighed_host.weight)
        self.assertEqual('host4', weighed_host.obj.host)

        # and negativehost should lose
        weighed_host = weights[-1]
        self.assertEqual(0, weighed_host.weight)
        self.assertEqual('negative', weighed_host.obj.host)
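
# A minimal sketch (not part of the original file) of the min-max
# normalization the assertions above assume: the weight handler maps the
# best host to `multiplier` and the worst to 0.0.  The helper name below
# is illustrative only, not a nova API.
def _normalized_weight_example(value, minval, maxval, multiplier=1.0):
    # with equal values there is nothing to spread on, so everything is 0.0
    if maxval == minval:
        return 0.0
    return multiplier * (value - minval) / float(maxval - minval)

# e.g. _normalized_weight_example(8192, -512, 8192) == 1.0 (host4 wins) and
# _normalized_weight_example(-512, -512, 8192) == 0.0 (the negative host
# loses), matching test_ram_filter_negative above.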
nova-17.0.1/nova/tests/unit/scheduler/weights/test_weights_affinity.py0000666000175000017500000001442413250073126026263 0ustar zuulzuul00000000000000# Copyright (c) 2015 Ericsson AB
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock

from nova import objects
from nova.scheduler import weights
from nova.scheduler.weights import affinity
from nova import test
from nova.tests.unit.scheduler import fakes


class SoftWeigherTestBase(test.NoDBTestCase):

    def setUp(self):
        super(SoftWeigherTestBase, self).setUp()
        self.weight_handler = weights.HostWeightHandler()
        self.weighers = []

    def _get_weighed_host(self, hosts, policy):
        request_spec = objects.RequestSpec(
            instance_group=objects.InstanceGroup(
                policies=[policy],
                members=['member1',
                         'member2',
                         'member3',
                         'member4',
                         'member5',
                         'member6',
                         'member7']))
        return self.weight_handler.get_weighed_objects(self.weighers,
                hosts, request_spec)[0]

    def _get_all_hosts(self):
        host_values = [
            ('host1', 'node1', {'instances': {
                'member1': mock.sentinel,
                'instance13': mock.sentinel
            }}),
            ('host2', 'node2', {'instances': {
                'member2': mock.sentinel,
                'member3': mock.sentinel,
                'member4': mock.sentinel,
                'member5': mock.sentinel,
                'instance14': mock.sentinel
            }}),
            ('host3', 'node3', {'instances': {
                'instance15': mock.sentinel
            }}),
            ('host4', 'node4', {'instances': {
                'member6': mock.sentinel,
                'member7': mock.sentinel,
                'instance16': mock.sentinel
            }})]
        return [fakes.FakeHostState(host, node, values)
                for host, node, values in host_values]

    def _do_test(self, policy, expected_weight, expected_host):
        hostinfo_list = self._get_all_hosts()
        weighed_host = self._get_weighed_host(hostinfo_list, policy)
        self.assertEqual(expected_weight, weighed_host.weight)
        if expected_host:
            self.assertEqual(expected_host, weighed_host.obj.host)


class SoftAffinityWeigherTestCase(SoftWeigherTestBase):

    def setUp(self):
        super(SoftAffinityWeigherTestCase, self).setUp()
        self.weighers = [affinity.ServerGroupSoftAffinityWeigher()]

    def test_soft_affinity_weight_multiplier_by_default(self):
        self._do_test(policy='soft-affinity',
                      expected_weight=1.0,
                      expected_host='host2')

    def test_soft_affinity_weight_multiplier_zero_value(self):
        # a multiplier of 0 gives every host the same weight, so no
        # particular host is expected
        self.flags(soft_affinity_weight_multiplier=0.0,
                   group='filter_scheduler')
        self._do_test(policy='soft-affinity',
                      expected_weight=0.0,
                      expected_host=None)

    def test_soft_affinity_weight_multiplier_positive_value(self):
        self.flags(soft_affinity_weight_multiplier=2.0,
                   group='filter_scheduler')
        self._do_test(policy='soft-affinity',
                      expected_weight=2.0,
                      expected_host='host2')

    @mock.patch.object(affinity, 'LOG')
    def test_soft_affinity_weight_multiplier_negative_value(self, mock_log):
        self.flags(soft_affinity_weight_multiplier=-1.0,
                   group='filter_scheduler')
        self._do_test(policy='soft-affinity',
                      expected_weight=0.0,
                      expected_host='host3')
        # call twice and assert that only one warning is emitted
        self._do_test(policy='soft-affinity',
                      expected_weight=0.0,
                      expected_host='host3')
        self.assertEqual(1, mock_log.warning.call_count)
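
# A short note on what these weighers do (a summary of the behavior the
# tests above and below assert, not new test logic): the soft-(anti-)
# affinity weighers count how many members of the server group in the
# request spec already run on each candidate host.  Soft affinity favors
# the host with the most members (host2 above, with four); soft
# anti-affinity favors the host with the fewest (host3, with none).  A
# negative multiplier is treated as invalid, with a single warning logged.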

class SoftAntiAffinityWeigherTestCase(SoftWeigherTestBase):

    def setUp(self):
        super(SoftAntiAffinityWeigherTestCase, self).setUp()
        self.weighers = [affinity.ServerGroupSoftAntiAffinityWeigher()]

    def test_soft_anti_affinity_weight_multiplier_by_default(self):
        self._do_test(policy='soft-anti-affinity',
                      expected_weight=1.0,
                      expected_host='host3')

    def test_soft_anti_affinity_weight_multiplier_zero_value(self):
        # a multiplier of 0 gives every host the same weight, so no
        # particular host is expected
        self.flags(soft_anti_affinity_weight_multiplier=0.0,
                   group='filter_scheduler')
        self._do_test(policy='soft-anti-affinity',
                      expected_weight=0.0,
                      expected_host=None)

    def test_soft_anti_affinity_weight_multiplier_positive_value(self):
        self.flags(soft_anti_affinity_weight_multiplier=2.0,
                   group='filter_scheduler')
        self._do_test(policy='soft-anti-affinity',
                      expected_weight=2.0,
                      expected_host='host3')

    @mock.patch.object(affinity, 'LOG')
    def test_soft_anti_affinity_weight_multiplier_negative_value(self,
                                                                 mock_log):
        self.flags(soft_anti_affinity_weight_multiplier=-1.0,
                   group='filter_scheduler')
        self._do_test(policy='soft-anti-affinity',
                      expected_weight=0.0,
                      expected_host='host2')
        # call twice and assert that only one warning is emitted
        self._do_test(policy='soft-anti-affinity',
                      expected_weight=0.0,
                      expected_host='host2')
        self.assertEqual(1, mock_log.warning.call_count)
nova-17.0.1/nova/tests/unit/scheduler/weights/test_weights_pci.py0000666000175000017500000001512313250073126025222 0ustar zuulzuul00000000000000# Copyright (c) 2016, Red Hat Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""Tests for Scheduler PCI weights."""

import copy

from nova import objects
from nova.pci import stats
from nova.scheduler import weights
from nova.scheduler.weights import pci
from nova import test
from nova.tests.unit import fake_pci_device_pools as fake_pci
from nova.tests.unit.scheduler import fakes


class PCIWeigherTestCase(test.NoDBTestCase):
    def setUp(self):
        super(PCIWeigherTestCase, self).setUp()
        self.weight_handler = weights.HostWeightHandler()
        self.weighers = [pci.PCIWeigher()]

    def _get_weighed_hosts(self, hosts, request_spec):
        return self.weight_handler.get_weighed_objects(self.weighers,
                hosts, request_spec)

    def _get_all_hosts(self, host_values):
        def _create_pci_pool(count):
            test_dict = copy.copy(fake_pci.fake_pool_dict)
            test_dict['count'] = count
            return objects.PciDevicePool.from_dict(test_dict)

        def _create_pci_stats(counts):
            if counts is None:  # the pci_stats column is nullable
                return None

            pools = [_create_pci_pool(count) for count in counts]
            return stats.PciDeviceStats(pools)

        return [fakes.FakeHostState(
            host,
            node,
            {'pci_stats': _create_pci_stats(values)})
            for host, node, values in host_values]

    def test_multiplier_no_pci_empty_hosts(self):
        """Test weigher with a no PCI device instance on no PCI device hosts.

        Ensure that the host with no PCI devices receives the highest
        weighting.
        """
        hosts = [
            ('host1', 'node1', [3, 1]),  # 4 devs
            ('host2', 'node2', []),  # 0 devs
        ]
        hostinfo_list = self._get_all_hosts(hosts)

        # we don't request PCI devices
        spec_obj = objects.RequestSpec(pci_requests=None)

        # host2, which has the least PCI devices, should win
        weighed_host = self._get_weighed_hosts(hostinfo_list, spec_obj)[0]
        self.assertEqual(1.0, weighed_host.weight)
        self.assertEqual('host2', weighed_host.obj.host)

    def test_multiplier_no_pci_non_empty_hosts(self):
        """Test weigher with a no PCI device instance on PCI device hosts.

        Ensure that the host with the least PCI devices receives the highest
        weighting.
        """
        hosts = [
            ('host1', 'node1', [2, 2, 2]),  # 6 devs
            ('host2', 'node2', [3, 1]),  # 4 devs
        ]
        hostinfo_list = self._get_all_hosts(hosts)

        # we don't request PCI devices
        spec_obj = objects.RequestSpec(pci_requests=None)

        # host2, which has the least free PCI devices, should win
        weighed_host = self._get_weighed_hosts(hostinfo_list, spec_obj)[0]
        self.assertEqual(1.0, weighed_host.weight)
        self.assertEqual('host2', weighed_host.obj.host)

    def test_multiplier_with_pci(self):
        """Test weigher with a PCI device instance and a multiplier.

        Ensure that the host with the smallest number of free PCI devices
        capable of meeting the requirements of the instance is chosen,
        enforcing a stacking (rather than spreading) behavior.
        """
        # none of the hosts will have less than the number of devices required
        # by the instance: the NUMATopologyFilter takes care of this for us
        hosts = [
            ('host1', 'node1', [4, 1]),  # 5 devs
            ('host2', 'node2', [10]),  # 10 devs
            ('host3', 'node3', [1, 1, 1, 1]),  # 4 devs
        ]
        hostinfo_list = self._get_all_hosts(hosts)

        # we request PCI devices
        request = objects.InstancePCIRequest(count=4,
            spec=[{'vendor_id': '8086'}])
        requests = objects.InstancePCIRequests(requests=[request])
        spec_obj = objects.RequestSpec(pci_requests=requests)

        # host3, which has the least free PCI devices, should win
        weighed_host = self._get_weighed_hosts(hostinfo_list, spec_obj)[0]
        self.assertEqual(1.0, weighed_host.weight)
        self.assertEqual('host3', weighed_host.obj.host)

    def test_multiplier_with_many_pci(self):
        """Test weigher with a PCI device instance and huge hosts.

        Ensure that the weigher gracefully degrades when the number of PCI
        devices on the host exceeds MAX_DEVS.
        """
        hosts = [
            ('host1', 'node1', [500]),  # 500 devs
            ('host2', 'node2', [2000]),  # 2000 devs
        ]
        hostinfo_list = self._get_all_hosts(hosts)

        # we request PCI devices
        request = objects.InstancePCIRequest(count=4,
            spec=[{'vendor_id': '8086'}])
        requests = objects.InstancePCIRequests(requests=[request])
        spec_obj = objects.RequestSpec(pci_requests=requests)

        # we do not know the host as all have same weight
        weighed_hosts = self._get_weighed_hosts(hostinfo_list, spec_obj)
        for weighed_host in weighed_hosts:
            # the weigher normalizes all weights to 0 if they're all equal
            self.assertEqual(0.0, weighed_host.weight)

    def test_multiplier_none(self):
        """Test weigher with a PCI device instance and a 0.0 multiplier.

        Ensure that the 0.0 multiplier disables the weigher entirely.
        """
        self.flags(pci_weight_multiplier=0.0, group='filter_scheduler')
        hosts = [
            ('host1', 'node1', [4, 1]),  # 5 devs
            ('host2', 'node2', [10]),  # 10 devs
            ('host3', 'node3', [1, 1, 1, 1]),  # 4 devs
        ]
        hostinfo_list = self._get_all_hosts(hosts)

        request = objects.InstancePCIRequest(count=1,
            spec=[{'vendor_id': '8086'}])
        requests = objects.InstancePCIRequests(requests=[request])
        spec_obj = objects.RequestSpec(pci_requests=requests)

        # we do not know the host as all have same weight
        weighed_hosts = self._get_weighed_hosts(hostinfo_list, spec_obj)
        for weighed_host in weighed_hosts:
            # the weigher normalizes all weights to 0 if they're all equal
            self.assertEqual(0.0, weighed_host.weight)
nova-17.0.1/nova/tests/unit/scheduler/weights/test_weights_ioopsweight.py0000666000175000017500000000542213250073126027011 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
Tests For Scheduler IoOpsWeigher weights
"""

from nova.scheduler import weights
from nova.scheduler.weights import io_ops
from nova import test
from nova.tests.unit.scheduler import fakes


class IoOpsWeigherTestCase(test.NoDBTestCase):

    def setUp(self):
        super(IoOpsWeigherTestCase, self).setUp()
        self.weight_handler = weights.HostWeightHandler()
        self.weighers = [io_ops.IoOpsWeigher()]

    def _get_weighed_host(self, hosts, io_ops_weight_multiplier):
        if io_ops_weight_multiplier is not None:
            self.flags(io_ops_weight_multiplier=io_ops_weight_multiplier,
                       group='filter_scheduler')
        return self.weight_handler.get_weighed_objects(self.weighers,
                hosts, {})[0]

    def _get_all_hosts(self):
        host_values = [
            ('host1', 'node1', {'num_io_ops': 1}),
            ('host2', 'node2', {'num_io_ops': 2}),
            ('host3', 'node3', {'num_io_ops': 0}),
            ('host4', 'node4', {'num_io_ops': 4})
        ]
        return [fakes.FakeHostState(host, node, values)
                for host, node, values in host_values]

    def _do_test(self, io_ops_weight_multiplier, expected_weight,
                 expected_host):
        hostinfo_list = self._get_all_hosts()
        weighed_host = self._get_weighed_host(hostinfo_list,
                                              io_ops_weight_multiplier)
        self.assertEqual(weighed_host.weight, expected_weight)
        if expected_host:
            self.assertEqual(weighed_host.obj.host, expected_host)

    def test_io_ops_weight_multiplier_by_default(self):
        self._do_test(io_ops_weight_multiplier=None,
                      expected_weight=0.0,
                      expected_host='host3')

    def test_io_ops_weight_multiplier_zero_value(self):
        # a multiplier of 0 gives every host the same weight, so no
        # particular host is expected
        self._do_test(io_ops_weight_multiplier=0.0,
                      expected_weight=0.0,
                      expected_host=None)

    def test_io_ops_weight_multiplier_positive_value(self):
        self._do_test(io_ops_weight_multiplier=2.0,
                      expected_weight=2.0,
                      expected_host='host4')
nova-17.0.1/nova/tests/unit/scheduler/weights/test_weights_hosts.py0000666000175000017500000000325013250073126025605 0ustar zuulzuul00000000000000# Copyright 2011-2014 IBM
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
Tests For Scheduler weights.
"""

from nova.scheduler import weights
from nova.scheduler.weights import affinity
from nova.scheduler.weights import io_ops
from nova.scheduler.weights import metrics
from nova.scheduler.weights import ram
from nova import test
from nova.tests.unit import matchers
from nova.tests.unit.scheduler import fakes


class TestWeighedHost(test.NoDBTestCase):
    def test_dict_conversion(self):
        host_state = fakes.FakeHostState('somehost', None, {})
        host = weights.WeighedHost(host_state, 'someweight')
        expected = {'weight': 'someweight',
                    'host': 'somehost'}
        self.assertThat(host.to_dict(), matchers.DictMatches(expected))

    def test_all_weighers(self):
        classes = weights.all_weighers()
        self.assertIn(ram.RAMWeigher, classes)
        self.assertIn(metrics.MetricsWeigher, classes)
        self.assertIn(io_ops.IoOpsWeigher, classes)
        self.assertIn(affinity.ServerGroupSoftAffinityWeigher, classes)
        self.assertIn(affinity.ServerGroupSoftAntiAffinityWeigher, classes)
nova-17.0.1/nova/tests/unit/scheduler/weights/__init__.py0000666000175000017500000000000013250073126023401 0ustar zuulzuul00000000000000
nova-17.0.1/nova/tests/unit/scheduler/weights/test_weights_metrics.py0000666000175000017500000001656413250073126026117 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
Tests For Scheduler metrics weights.
"""

from nova import exception
from nova.objects import fields
from nova.objects import monitor_metric
from nova.scheduler import weights
from nova.scheduler.weights import metrics
from nova import test
from nova.tests.unit.scheduler import fakes

idle = fields.MonitorMetricType.CPU_IDLE_TIME
kernel = fields.MonitorMetricType.CPU_KERNEL_TIME
user = fields.MonitorMetricType.CPU_USER_TIME


class MetricsWeigherTestCase(test.NoDBTestCase):
    def setUp(self):
        super(MetricsWeigherTestCase, self).setUp()
        self.weight_handler = weights.HostWeightHandler()
        self.weighers = [metrics.MetricsWeigher()]

    def _get_weighed_host(self, hosts, setting, weight_properties=None):
        if not weight_properties:
            weight_properties = {}
        self.flags(weight_setting=setting, group='metrics')
        self.weighers[0]._parse_setting()
        return self.weight_handler.get_weighed_objects(self.weighers,
                hosts, weight_properties)[0]

    def _get_all_hosts(self):
        def fake_metric(name, value):
            return monitor_metric.MonitorMetric(name=name, value=value)

        def fake_list(objs):
            m_list = [fake_metric(name, val) for name, val in objs]
            return monitor_metric.MonitorMetricList(objects=m_list)

        host_values = [
            ('host1', 'node1', {'metrics': fake_list([(idle, 512),
                                                      (kernel, 1)])}),
            ('host2', 'node2', {'metrics': fake_list([(idle, 1024),
                                                      (kernel, 2)])}),
            ('host3', 'node3', {'metrics': fake_list([(idle, 3072),
                                                      (kernel, 1)])}),
            ('host4', 'node4', {'metrics': fake_list([(idle, 8192),
                                                      (kernel, 0)])}),
            ('host5', 'node5', {'metrics': fake_list([(idle, 768),
                                                      (kernel, 0),
                                                      (user, 1)])}),
            ('host6', 'node6', {'metrics': fake_list([(idle, 2048),
                                                      (kernel, 0),
                                                      (user, 2)])}),
        ]
        return [fakes.FakeHostState(host, node, values)
                for host, node, values in host_values]

    def _do_test(self, settings, expected_weight, expected_host):
        hostinfo_list = self._get_all_hosts()
        weighed_host = self._get_weighed_host(hostinfo_list, settings)
        self.assertEqual(expected_weight, weighed_host.weight)
        self.assertEqual(expected_host, weighed_host.obj.host)

    def test_single_resource_no_metrics(self):
        setting = [idle + '=1']
        hostinfo_list = [fakes.FakeHostState('host1', 'node1',
                                             {'metrics': None}),
                         fakes.FakeHostState('host2', 'node2',
                                             {'metrics': None})]
        self.assertRaises(exception.ComputeHostMetricNotFound,
                          self._get_weighed_host,
                          hostinfo_list,
                          setting)

    def test_single_resource(self):
        # host1: idle=512
        # host2: idle=1024
        # host3: idle=3072
        # host4: idle=8192
        # so, host4 should win:
        setting = [idle + '=1']
        self._do_test(setting, 1.0, 'host4')

    def test_multiple_resource(self):
        # host1: idle=512, kernel=1
        # host2: idle=1024, kernel=2
        # host3: idle=3072, kernel=1
        # host4: idle=8192, kernel=0
        # so, host2 should win:
        setting = [idle + '=0.0001', kernel + '=1']
        self._do_test(setting, 1.0, 'host2')

    def test_single_resource_duplicate_setting(self):
        # host1: idle=512
        # host2: idle=1024
        # host3: idle=3072
        # host4: idle=8192
        # so, host1 should win (sum of settings is negative):
        setting = [idle + '=-2', idle + '=1']
        self._do_test(setting, 1.0, 'host1')

    def test_single_resource_negative_ratio(self):
        # host1: idle=512
        # host2: idle=1024
        # host3: idle=3072
        # host4: idle=8192
        # so, host1 should win:
        setting = [idle + '=-1']
        self._do_test(setting, 1.0, 'host1')

    def test_multiple_resource_missing_ratio(self):
        # host1: idle=512, kernel=1
        # host2: idle=1024, kernel=2
        # host3: idle=3072, kernel=1
        # host4: idle=8192, kernel=0
        # so, host4 should win:
        setting = [idle + '=0.0001', kernel]
        self._do_test(setting, 1.0, 'host4')

    def test_multiple_resource_wrong_ratio(self):
        # host1: idle=512, kernel=1
        # host2: idle=1024, kernel=2
        # host3: idle=3072, kernel=1
        # host4: idle=8192, kernel=0
        # so, host4 should win:
        setting = [idle + '=0.0001', kernel + ' = 2.0t']
        self._do_test(setting, 1.0, 'host4')

    def _check_parsing_result(self, weigher, setting, results):
        self.flags(weight_setting=setting, group='metrics')
        weigher._parse_setting()
        self.assertEqual(len(weigher.setting), len(results))
        for item in results:
            self.assertIn(item, weigher.setting)

    def test_parse_setting(self):
        weigher = self.weighers[0]
        self._check_parsing_result(weigher,
                                   [idle + '=1'],
                                   [(idle, 1.0)])
        self._check_parsing_result(weigher,
                                   [idle + '=1', kernel + '=-2.1'],
                                   [(idle, 1.0), (kernel, -2.1)])
        self._check_parsing_result(weigher,
                                   [idle + '=a1', kernel + '=-2.1'],
                                   [(kernel, -2.1)])
        self._check_parsing_result(weigher,
                                   [idle, kernel + '=-2.1'],
                                   [(kernel, -2.1)])
        self._check_parsing_result(weigher,
                                   ['=5', kernel + '=-2.1'],
                                   [(kernel, -2.1)])

    def test_metric_not_found_required(self):
        setting = [idle + '=1', user + '=2']
        self.assertRaises(exception.ComputeHostMetricNotFound,
                          self._do_test,
                          setting,
                          8192,
                          'host4')

    def test_metric_not_found_non_required(self):
        # host1: idle=512, kernel=1
        # host2: idle=1024, kernel=2
        # host3: idle=3072, kernel=1
        # host4: idle=8192, kernel=0
        # host5: idle=768, kernel=0, user=1
        # host6: idle=2048, kernel=0, user=2
        # so, host5 should win:
        self.flags(required=False, group='metrics')
        setting = [idle + '=0.0001', user + '=-1']
        self._do_test(setting, 1.0, 'host5')
nova-17.0.1/nova/tests/unit/scheduler/weights/test_weights_disk.py0000666000175000017500000001014313250073126025376 0ustar zuulzuul00000000000000# Copyright 2011-2016 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
Tests For Scheduler disk weights.
"""

from nova.scheduler import weights
from nova.scheduler.weights import disk
from nova import test
from nova.tests.unit.scheduler import fakes


class DiskWeigherTestCase(test.NoDBTestCase):
    def setUp(self):
        super(DiskWeigherTestCase, self).setUp()
        self.weight_handler = weights.HostWeightHandler()
        self.weighers = [disk.DiskWeigher()]

    def _get_weighed_host(self, hosts, weight_properties=None):
        if weight_properties is None:
            weight_properties = {}
        return self.weight_handler.get_weighed_objects(self.weighers,
                hosts, weight_properties)[0]

    def _get_all_hosts(self):
        host_values = [
            ('host1', 'node1', {'free_disk_mb': 5120}),
            ('host2', 'node2', {'free_disk_mb': 10240}),
            ('host3', 'node3', {'free_disk_mb': 30720}),
            ('host4', 'node4', {'free_disk_mb': 81920})
        ]
        return [fakes.FakeHostState(host, node, values)
                for host, node, values in host_values]

    def test_default_of_spreading_first(self):
        hostinfo_list = self._get_all_hosts()
        # host1: free_disk_mb=5120
        # host2: free_disk_mb=10240
        # host3: free_disk_mb=30720
        # host4: free_disk_mb=81920
        # so, host4 should win:
        weighed_host = self._get_weighed_host(hostinfo_list)
        self.assertEqual(1.0, weighed_host.weight)
        self.assertEqual('host4', weighed_host.obj.host)

    def test_disk_filter_multiplier1(self):
        self.flags(disk_weight_multiplier=0.0, group='filter_scheduler')
        hostinfo_list = self._get_all_hosts()
        # host1: free_disk_mb=5120
        # host2: free_disk_mb=10240
        # host3: free_disk_mb=30720
        # host4: free_disk_mb=81920
        # a multiplier of 0.0 gives every host the same weight:
        weighed_host = self._get_weighed_host(hostinfo_list)
        self.assertEqual(0.0, weighed_host.weight)

    def test_disk_filter_multiplier2(self):
        self.flags(disk_weight_multiplier=2.0, group='filter_scheduler')
        hostinfo_list = self._get_all_hosts()
        # host1: free_disk_mb=5120
        # host2: free_disk_mb=10240
        # host3: free_disk_mb=30720
        # host4: free_disk_mb=81920
        # so, host4 should win:
        weighed_host = self._get_weighed_host(hostinfo_list)
        self.assertEqual(1.0 * 2, weighed_host.weight)
        self.assertEqual('host4', weighed_host.obj.host)

    def test_disk_filter_negative(self):
        self.flags(disk_weight_multiplier=1.0, group='filter_scheduler')
        hostinfo_list = self._get_all_hosts()
        host_attr = {'id': 100, 'disk_mb': 81920, 'free_disk_mb': -5120}
        host_state = fakes.FakeHostState('negative', 'negative', host_attr)
        hostinfo_list = list(hostinfo_list) + [host_state]

        # host1: free_disk_mb=5120
        # host2: free_disk_mb=10240
        # host3: free_disk_mb=30720
        # host4: free_disk_mb=81920
        # negativehost: free_disk_mb=-5120
        # so, host4 should win
        weights = self.weight_handler.get_weighed_objects(self.weighers,
                hostinfo_list, {})

        weighed_host = weights[0]
        self.assertEqual(1, weighed_host.weight)
        self.assertEqual('host4', weighed_host.obj.host)

        # and negativehost should lose
        weighed_host = weights[-1]
        self.assertEqual(0, weighed_host.weight)
        self.assertEqual('negative', weighed_host.obj.host)
nova-17.0.1/nova/tests/unit/scheduler/test_filter_scheduler.py0000666000175000017500000013454613250073126024571 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
Tests For Filter Scheduler.
"""

import mock
from oslo_serialization import jsonutils

from nova import exception
from nova import objects
from nova.scheduler import client
from nova.scheduler.client import report
from nova.scheduler import filter_scheduler
from nova.scheduler import host_manager
from nova.scheduler import utils as scheduler_utils
from nova.scheduler import weights
from nova import test  # noqa
from nova.tests.unit.scheduler import test_scheduler
from nova.tests import uuidsentinel as uuids

fake_numa_limit = objects.NUMATopologyLimits(cpu_allocation_ratio=1.0,
        ram_allocation_ratio=1.0)
fake_limit = {"memory_mb": 1024, "disk_gb": 100, "vcpus": 2,
        "numa_topology": fake_numa_limit}
fake_limit_obj = objects.SchedulerLimits.from_dict(fake_limit)
fake_alloc = {"allocations": [
        {"resource_provider": {"uuid": uuids.compute_node},
         "resources": {"VCPU": 1,
                       "MEMORY_MB": 1024,
                       "DISK_GB": 100}
        }]}
fake_alloc_version = "1.23"
json_alloc = jsonutils.dumps(fake_alloc)
fake_selection = objects.Selection(service_host="fake_host",
        nodename="fake_node", compute_node_uuid=uuids.compute_node,
        cell_uuid=uuids.cell, limits=fake_limit_obj,
        allocation_request=json_alloc,
        allocation_request_version=fake_alloc_version)


class FilterSchedulerTestCase(test_scheduler.SchedulerTestCase):
    """Test case for Filter Scheduler."""

    driver_cls = filter_scheduler.FilterScheduler

    @mock.patch('nova.scheduler.client.SchedulerClient')
    def setUp(self, mock_client):
        pc_client = mock.Mock(spec=report.SchedulerReportClient)
        sched_client = mock.Mock(spec=client.SchedulerClient)
        sched_client.reportclient = pc_client
        mock_client.return_value = sched_client
        self.placement_client = pc_client
        super(FilterSchedulerTestCase, self).setUp()

    @mock.patch('nova.scheduler.utils.claim_resources')
    @mock.patch('nova.scheduler.filter_scheduler.FilterScheduler.'
                '_get_all_host_states')
    @mock.patch('nova.scheduler.filter_scheduler.FilterScheduler.'
                '_get_sorted_hosts')
    def test_schedule_placement_bad_comms(self, mock_get_hosts,
            mock_get_all_states, mock_claim):
        """If there was a problem communicating with the Placement service,
        alloc_reqs_by_rp_uuid will be None and we need to avoid trying to
        claim in the Placement API.
        """
        spec_obj = objects.RequestSpec(
            num_instances=1,
            flavor=objects.Flavor(memory_mb=512,
                                  root_gb=512,
                                  ephemeral_gb=0,
                                  swap=0,
                                  vcpus=1),
            project_id=uuids.project_id,
            instance_group=None)

        host_state = mock.Mock(spec=host_manager.HostState, host="fake_host",
                uuid=uuids.cn1, cell_uuid=uuids.cell, nodename="fake_node",
                limits={})
        all_host_states = [host_state]
        mock_get_all_states.return_value = all_host_states
        mock_get_hosts.return_value = all_host_states

        instance_uuids = [uuids.instance]
        ctx = mock.Mock()
        selected_hosts = self.driver._schedule(ctx, spec_obj, instance_uuids,
                None, mock.sentinel.provider_summaries)

        expected_hosts = [[objects.Selection.from_host_state(host_state)]]
        mock_get_all_states.assert_called_once_with(
            ctx.elevated.return_value, spec_obj,
            mock.sentinel.provider_summaries)
        mock_get_hosts.assert_called_once_with(spec_obj, all_host_states, 0)

        self.assertEqual(len(selected_hosts), 1)
        self.assertEqual(expected_hosts, selected_hosts)

        # Ensure that we have consumed the resources on the chosen host states
        host_state.consume_from_request.assert_called_once_with(spec_obj)

        # And ensure we never called claim_resources()
        self.assertFalse(mock_claim.called)

    @mock.patch('nova.scheduler.utils.claim_resources')
    @mock.patch('nova.scheduler.filter_scheduler.FilterScheduler.'
                '_get_all_host_states')
    @mock.patch('nova.scheduler.filter_scheduler.FilterScheduler.'
                '_get_sorted_hosts')
    def test_schedule_old_conductor(self, mock_get_hosts,
            mock_get_all_states, mock_claim):
        """Old conductor can call scheduler without the instance_uuids
        parameter. When this happens, we need to ensure we do not attempt
        to claim resources in the placement API since obviously we need
        instance UUIDs to perform those claims.
        """
        spec_obj = objects.RequestSpec(
            num_instances=1,
            flavor=objects.Flavor(memory_mb=512,
                                  root_gb=512,
                                  ephemeral_gb=0,
                                  swap=0,
                                  vcpus=1),
            project_id=uuids.project_id,
            instance_group=None)

        host_state = mock.Mock(spec=host_manager.HostState, host="fake_host",
                nodename="fake_node", uuid=uuids.cn1, limits={},
                cell_uuid=uuids.cell)
        all_host_states = [host_state]
        mock_get_all_states.return_value = all_host_states
        mock_get_hosts.return_value = all_host_states

        instance_uuids = None
        ctx = mock.Mock()
        selected_hosts = self.driver._schedule(ctx, spec_obj, instance_uuids,
                mock.sentinel.alloc_reqs_by_rp_uuid,
                mock.sentinel.provider_summaries)

        mock_get_all_states.assert_called_once_with(
            ctx.elevated.return_value, spec_obj,
            mock.sentinel.provider_summaries)
        mock_get_hosts.assert_called_once_with(spec_obj, all_host_states, 0)

        self.assertEqual(len(selected_hosts), 1)
        expected_host = objects.Selection.from_host_state(host_state)
        self.assertEqual([[expected_host]], selected_hosts)

        # Ensure that we have consumed the resources on the chosen host states
        host_state.consume_from_request.assert_called_once_with(spec_obj)

        # And ensure we never called claim_resources()
        self.assertFalse(mock_claim.called)

    @mock.patch('nova.scheduler.utils.claim_resources')
    @mock.patch('nova.scheduler.filter_scheduler.FilterScheduler.'
                '_get_all_host_states')
    @mock.patch('nova.scheduler.filter_scheduler.FilterScheduler.'
                '_get_sorted_hosts')
    def _test_schedule_successful_claim(self, mock_get_hosts,
            mock_get_all_states, mock_claim, num_instances=1):
        spec_obj = objects.RequestSpec(
            num_instances=num_instances,
            flavor=objects.Flavor(memory_mb=512,
                                  root_gb=512,
                                  ephemeral_gb=0,
                                  swap=0,
                                  vcpus=1),
            project_id=uuids.project_id,
            instance_group=None)

        host_state = mock.Mock(spec=host_manager.HostState,
                host="fake_host", nodename="fake_node", uuid=uuids.cn1,
                cell_uuid=uuids.cell1, limits={})
        all_host_states = [host_state]
        mock_get_all_states.return_value = all_host_states
        mock_get_hosts.return_value = all_host_states
        mock_claim.return_value = True

        instance_uuids = [uuids.instance]
        fake_alloc = {"allocations": [
                {"resource_provider": {"uuid": uuids.cn1},
                 "resources": {"VCPU": 1,
                               "MEMORY_MB": 1024,
                               "DISK_GB": 100}
                }]}
        alloc_reqs_by_rp_uuid = {uuids.cn1: [fake_alloc]}
        ctx = mock.Mock()
        selected_hosts = self.driver._schedule(ctx, spec_obj, instance_uuids,
                alloc_reqs_by_rp_uuid, mock.sentinel.provider_summaries)

        sel_obj = objects.Selection.from_host_state(host_state,
                allocation_request=fake_alloc)
        expected_selection = [[sel_obj]]
        mock_get_all_states.assert_called_once_with(
            ctx.elevated.return_value, spec_obj,
            mock.sentinel.provider_summaries)
        mock_get_hosts.assert_called()
        mock_claim.assert_called_once_with(ctx.elevated.return_value,
                self.placement_client, spec_obj, uuids.instance,
                alloc_reqs_by_rp_uuid[uuids.cn1][0],
                allocation_request_version=None)

        self.assertEqual(len(selected_hosts), 1)
        self.assertEqual(expected_selection, selected_hosts)

        # Ensure that we have consumed the resources on the chosen host states
        host_state.consume_from_request.assert_called_once_with(spec_obj)

    def test_schedule_successful_claim(self):
        self._test_schedule_successful_claim()

    def test_schedule_old_reqspec_and_move_operation(self):
        """This test is for verifying that in case of a move operation with
        an original RequestSpec created for 3 concurrent instances, we only
        verify the instance that is moved.
        """
        self._test_schedule_successful_claim(num_instances=3)

    @mock.patch('nova.scheduler.filter_scheduler.FilterScheduler.'
                '_cleanup_allocations')
    @mock.patch('nova.scheduler.utils.claim_resources')
    @mock.patch('nova.scheduler.filter_scheduler.FilterScheduler.'
                '_get_all_host_states')
    @mock.patch('nova.scheduler.filter_scheduler.FilterScheduler.'
                '_get_sorted_hosts')
    def test_schedule_unsuccessful_claim(self, mock_get_hosts,
            mock_get_all_states, mock_claim, mock_cleanup):
        """Tests that we raise NoValidHost if we are unable to successfully
        claim resources for the instance
        """
        spec_obj = objects.RequestSpec(
            num_instances=1,
            flavor=objects.Flavor(memory_mb=512,
                                  root_gb=512,
                                  ephemeral_gb=0,
                                  swap=0,
                                  vcpus=1),
            project_id=uuids.project_id,
            instance_group=None)

        host_state = mock.Mock(spec=host_manager.HostState,
                host=mock.sentinel.host, uuid=uuids.cn1,
                cell_uuid=uuids.cell1)
        all_host_states = [host_state]
        mock_get_all_states.return_value = all_host_states
        mock_get_hosts.return_value = all_host_states
        mock_claim.return_value = False

        instance_uuids = [uuids.instance]
        alloc_reqs_by_rp_uuid = {
            uuids.cn1: [{"allocations": mock.sentinel.alloc_req}],
        }
        ctx = mock.Mock()
        fake_version = "1.99"
        self.assertRaises(exception.NoValidHost, self.driver._schedule, ctx,
                spec_obj, instance_uuids, alloc_reqs_by_rp_uuid,
                mock.sentinel.provider_summaries,
                allocation_request_version=fake_version)
        mock_get_all_states.assert_called_once_with(
            ctx.elevated.return_value, spec_obj,
            mock.sentinel.provider_summaries)
        mock_get_hosts.assert_called_once_with(spec_obj, all_host_states, 0)
        mock_claim.assert_called_once_with(ctx.elevated.return_value,
                self.placement_client, spec_obj, uuids.instance,
                alloc_reqs_by_rp_uuid[uuids.cn1][0],
                allocation_request_version=fake_version)

        mock_cleanup.assert_not_called()
        # Ensure that we have not consumed the resources on the chosen host
        # states
        self.assertFalse(host_state.consume_from_request.called)

    @mock.patch('nova.scheduler.filter_scheduler.FilterScheduler.'
                '_cleanup_allocations')
    @mock.patch('nova.scheduler.utils.claim_resources')
    @mock.patch('nova.scheduler.filter_scheduler.FilterScheduler.'
                '_get_all_host_states')
    @mock.patch('nova.scheduler.filter_scheduler.FilterScheduler.'
                '_get_sorted_hosts')
    def test_schedule_not_all_instance_clean_claimed(self, mock_get_hosts,
            mock_get_all_states, mock_claim, mock_cleanup):
        """Tests that we clean up previously-allocated instances if not all
        instances could be scheduled
        """
        spec_obj = objects.RequestSpec(
            num_instances=2,
            flavor=objects.Flavor(memory_mb=512,
                                  root_gb=512,
                                  ephemeral_gb=0,
                                  swap=0,
                                  vcpus=1),
            project_id=uuids.project_id,
            instance_group=None)

        host_state = mock.Mock(spec=host_manager.HostState,
                host="fake_host", nodename="fake_node", uuid=uuids.cn1,
                cell_uuid=uuids.cell1, limits={}, updated='fake')
        all_host_states = [host_state]
        mock_get_all_states.return_value = all_host_states
        mock_get_hosts.side_effect = [
            all_host_states,  # first instance: return all the hosts (only 1)
            [],  # second: act as if no more hosts that meet criteria
            all_host_states,  # the final call when creating alternates
        ]
        mock_claim.return_value = True

        instance_uuids = [uuids.instance1, uuids.instance2]
        fake_alloc = {"allocations": [
                {"resource_provider": {"uuid": uuids.cn1},
                 "resources": {"VCPU": 1,
                               "MEMORY_MB": 1024,
                               "DISK_GB": 100}
                }]}
        alloc_reqs_by_rp_uuid = {uuids.cn1: [fake_alloc]}
        ctx = mock.Mock()
        self.assertRaises(exception.NoValidHost, self.driver._schedule, ctx,
                spec_obj, instance_uuids, alloc_reqs_by_rp_uuid,
                mock.sentinel.provider_summaries)

        # Ensure we cleaned up the first successfully-claimed instance
        mock_cleanup.assert_called_once_with(ctx, [uuids.instance1])

    @mock.patch('nova.scheduler.utils.claim_resources')
    @mock.patch('nova.scheduler.filter_scheduler.FilterScheduler.'
                '_get_all_host_states')
    @mock.patch('nova.scheduler.filter_scheduler.FilterScheduler.'
                '_get_sorted_hosts')
    def test_selection_alloc_requests_for_alts(self, mock_get_hosts,
            mock_get_all_states, mock_claim):
        spec_obj = objects.RequestSpec(
            num_instances=1,
            flavor=objects.Flavor(memory_mb=512,
                                  root_gb=512,
                                  ephemeral_gb=0,
                                  swap=0,
                                  vcpus=1),
            project_id=uuids.project_id,
            instance_group=None)

        host_state0 = mock.Mock(spec=host_manager.HostState,
                host="fake_host0", nodename="fake_node0", uuid=uuids.cn0,
                cell_uuid=uuids.cell, limits={})
        host_state1 = mock.Mock(spec=host_manager.HostState,
                host="fake_host1", nodename="fake_node1", uuid=uuids.cn1,
                cell_uuid=uuids.cell, limits={})
        host_state2 = mock.Mock(spec=host_manager.HostState,
                host="fake_host2", nodename="fake_node2", uuid=uuids.cn2,
                cell_uuid=uuids.cell, limits={})
        all_host_states = [host_state0, host_state1, host_state2]
        mock_get_all_states.return_value = all_host_states
        mock_get_hosts.return_value = all_host_states
        mock_claim.return_value = True

        instance_uuids = [uuids.instance0]
        fake_alloc0 = {"allocations": [
                {"resource_provider": {"uuid": uuids.cn0},
                 "resources": {"VCPU": 1,
                               "MEMORY_MB": 1024,
                               "DISK_GB": 100}
                }]}
        fake_alloc1 = {"allocations": [
                {"resource_provider": {"uuid": uuids.cn1},
                 "resources": {"VCPU": 1,
                               "MEMORY_MB": 1024,
                               "DISK_GB": 100}
                }]}
        fake_alloc2 = {"allocations": [
                {"resource_provider": {"uuid": uuids.cn2},
                 "resources": {"VCPU": 1,
                               "MEMORY_MB": 1024,
                               "DISK_GB": 100}
                }]}
        alloc_reqs_by_rp_uuid = {uuids.cn0: [fake_alloc0],
                uuids.cn1: [fake_alloc1], uuids.cn2: [fake_alloc2]}
        ctx = mock.Mock()
        selected_hosts = self.driver._schedule(ctx, spec_obj, instance_uuids,
                alloc_reqs_by_rp_uuid, mock.sentinel.provider_summaries,
                return_alternates=True)

        sel0 = objects.Selection.from_host_state(host_state0,
                allocation_request=fake_alloc0)
        sel1 = objects.Selection.from_host_state(host_state1,
                allocation_request=fake_alloc1)
        sel2 = objects.Selection.from_host_state(host_state2,
                allocation_request=fake_alloc2)
        expected_selection = [[sel0, sel1, sel2]]
        self.assertEqual(expected_selection, selected_hosts)
'_get_sorted_hosts') def test_selection_alloc_requests_no_alts(self, mock_get_hosts, mock_get_all_states, mock_claim): spec_obj = objects.RequestSpec( num_instances=1, flavor=objects.Flavor(memory_mb=512, root_gb=512, ephemeral_gb=0, swap=0, vcpus=1), project_id=uuids.project_id, instance_group=None) host_state0 = mock.Mock(spec=host_manager.HostState, host="fake_host0", nodename="fake_node0", uuid=uuids.cn0, cell_uuid=uuids.cell, limits={}) host_state1 = mock.Mock(spec=host_manager.HostState, host="fake_host1", nodename="fake_node1", uuid=uuids.cn1, cell_uuid=uuids.cell, limits={}) host_state2 = mock.Mock(spec=host_manager.HostState, host="fake_host2", nodename="fake_node2", uuid=uuids.cn2, cell_uuid=uuids.cell, limits={}) all_host_states = [host_state0, host_state1, host_state2] mock_get_all_states.return_value = all_host_states mock_get_hosts.return_value = all_host_states mock_claim.return_value = True instance_uuids = [uuids.instance0] fake_alloc0 = {"allocations": [ {"resource_provider": {"uuid": uuids.cn0}, "resources": {"VCPU": 1, "MEMORY_MB": 1024, "DISK_GB": 100} }]} fake_alloc1 = {"allocations": [ {"resource_provider": {"uuid": uuids.cn1}, "resources": {"VCPU": 1, "MEMORY_MB": 1024, "DISK_GB": 100} }]} fake_alloc2 = {"allocations": [ {"resource_provider": {"uuid": uuids.cn2}, "resources": {"VCPU": 1, "MEMORY_MB": 1024, "DISK_GB": 100} }]} alloc_reqs_by_rp_uuid = {uuids.cn0: [fake_alloc0], uuids.cn1: [fake_alloc1], uuids.cn2: [fake_alloc2]} ctx = mock.Mock() selected_hosts = self.driver._schedule(ctx, spec_obj, instance_uuids, alloc_reqs_by_rp_uuid, mock.sentinel.provider_summaries, return_alternates=False) sel0 = objects.Selection.from_host_state(host_state0, allocation_request=fake_alloc0) expected_selection = [[sel0]] self.assertEqual(expected_selection, selected_hosts) @mock.patch('nova.scheduler.utils.claim_resources') @mock.patch('nova.scheduler.filter_scheduler.FilterScheduler.' '_get_all_host_states') @mock.patch('nova.scheduler.filter_scheduler.FilterScheduler.' 
'_get_sorted_hosts') def test_schedule_instance_group(self, mock_get_hosts, mock_get_all_states, mock_claim): """Test that since the request spec object contains an instance group object, that upon choosing a host in the primary schedule loop, that we update the request spec's instance group information """ num_instances = 2 ig = objects.InstanceGroup(hosts=[]) spec_obj = objects.RequestSpec( num_instances=num_instances, flavor=objects.Flavor(memory_mb=512, root_gb=512, ephemeral_gb=0, swap=0, vcpus=1), project_id=uuids.project_id, instance_group=ig) hs1 = mock.Mock(spec=host_manager.HostState, host='host1', nodename="node1", limits={}, uuid=uuids.cn1, cell_uuid=uuids.cell1) hs2 = mock.Mock(spec=host_manager.HostState, host='host2', nodename="node2", limits={}, uuid=uuids.cn2, cell_uuid=uuids.cell2) all_host_states = [hs1, hs2] mock_get_all_states.return_value = all_host_states mock_claim.return_value = True alloc_reqs_by_rp_uuid = { uuids.cn1: [{"allocations": "fake_cn1_alloc"}], uuids.cn2: [{"allocations": "fake_cn2_alloc"}], } # Simulate host 1 and host 2 being randomly returned first by # _get_sorted_hosts() in the two iterations for each instance in # num_instances mock_get_hosts.side_effect = ([hs2, hs1], [hs1, hs2], [hs2, hs1], [hs1, hs2]) instance_uuids = [ getattr(uuids, 'instance%d' % x) for x in range(num_instances) ] ctx = mock.Mock() self.driver._schedule(ctx, spec_obj, instance_uuids, alloc_reqs_by_rp_uuid, mock.sentinel.provider_summaries) # Check that we called claim_resources() for both the first and second # host state claim_calls = [ mock.call(ctx.elevated.return_value, self.placement_client, spec_obj, uuids.instance0, alloc_reqs_by_rp_uuid[uuids.cn2][0], allocation_request_version=None), mock.call(ctx.elevated.return_value, self.placement_client, spec_obj, uuids.instance1, alloc_reqs_by_rp_uuid[uuids.cn1][0], allocation_request_version=None), ] mock_claim.assert_has_calls(claim_calls) # Check that _get_sorted_hosts() is called twice and that the # second time, we pass it the hosts that were returned from # _get_sorted_hosts() the first time sorted_host_calls = [ mock.call(spec_obj, all_host_states, 0), mock.call(spec_obj, [hs2, hs1], 1), ] mock_get_hosts.assert_has_calls(sorted_host_calls) # The instance group object should have both host1 and host2 in its # instance group hosts list and there should not be any "changes" to # save in the instance group object self.assertEqual(['host2', 'host1'], ig.hosts) self.assertEqual({}, ig.obj_get_changes()) @mock.patch('random.choice', side_effect=lambda x: x[1]) @mock.patch('nova.scheduler.host_manager.HostManager.get_weighed_hosts') @mock.patch('nova.scheduler.host_manager.HostManager.get_filtered_hosts') def test_get_sorted_hosts(self, mock_filt, mock_weighed, mock_rand): """Tests the call that returns a sorted list of hosts by calling the host manager's filtering and weighing routines """ self.flags(host_subset_size=2, group='filter_scheduler') hs1 = mock.Mock(spec=host_manager.HostState, host='host1', cell_uuid=uuids.cell1) hs2 = mock.Mock(spec=host_manager.HostState, host='host2', cell_uuid=uuids.cell2) all_host_states = [hs1, hs2] mock_weighed.return_value = [ weights.WeighedHost(hs1, 1.0), weights.WeighedHost(hs2, 1.0), ] results = self.driver._get_sorted_hosts(mock.sentinel.spec, all_host_states, mock.sentinel.index) mock_filt.assert_called_once_with(all_host_states, mock.sentinel.spec, mock.sentinel.index) mock_weighed.assert_called_once_with(mock_filt.return_value, mock.sentinel.spec) # We override random.choice() to 
pick the **second** element of the # returned weighed hosts list, which is the host state #2. This tests # the code path that combines the randomly-chosen host with the # remaining list of weighed host state objects self.assertEqual([hs2, hs1], results) @mock.patch('random.choice', side_effect=lambda x: x[0]) @mock.patch('nova.scheduler.host_manager.HostManager.get_weighed_hosts') @mock.patch('nova.scheduler.host_manager.HostManager.get_filtered_hosts') def test_get_sorted_hosts_subset_less_than_num_weighed(self, mock_filt, mock_weighed, mock_rand): """Tests that when we have >1 weighed hosts but a host subset size of 1, that we always pick the first host in the weighed host """ self.flags(host_subset_size=1, group='filter_scheduler') hs1 = mock.Mock(spec=host_manager.HostState, host='host1', cell_uuid=uuids.cell1) hs2 = mock.Mock(spec=host_manager.HostState, host='host2', cell_uuid=uuids.cell2) all_host_states = [hs1, hs2] mock_weighed.return_value = [ weights.WeighedHost(hs1, 1.0), weights.WeighedHost(hs2, 1.0), ] results = self.driver._get_sorted_hosts(mock.sentinel.spec, all_host_states, mock.sentinel.index) mock_filt.assert_called_once_with(all_host_states, mock.sentinel.spec, mock.sentinel.index) mock_weighed.assert_called_once_with(mock_filt.return_value, mock.sentinel.spec) # We should be randomly selecting only from a list of one host state mock_rand.assert_called_once_with([hs1]) self.assertEqual([hs1, hs2], results) @mock.patch('random.choice', side_effect=lambda x: x[0]) @mock.patch('nova.scheduler.host_manager.HostManager.get_weighed_hosts') @mock.patch('nova.scheduler.host_manager.HostManager.get_filtered_hosts') def test_get_sorted_hosts_subset_greater_than_num_weighed(self, mock_filt, mock_weighed, mock_rand): """Hosts should still be chosen if host subset size is larger than number of weighed hosts. """ self.flags(host_subset_size=20, group='filter_scheduler') hs1 = mock.Mock(spec=host_manager.HostState, host='host1', cell_uuid=uuids.cell1) hs2 = mock.Mock(spec=host_manager.HostState, host='host2', cell_uuid=uuids.cell2) all_host_states = [hs1, hs2] mock_weighed.return_value = [ weights.WeighedHost(hs1, 1.0), weights.WeighedHost(hs2, 1.0), ] results = self.driver._get_sorted_hosts(mock.sentinel.spec, all_host_states, mock.sentinel.index) mock_filt.assert_called_once_with(all_host_states, mock.sentinel.spec, mock.sentinel.index) mock_weighed.assert_called_once_with(mock_filt.return_value, mock.sentinel.spec) # We overrode random.choice() to return the first element in the list, # so even though we had a host_subset_size greater than the number of # weighed hosts (2), we just random.choice() on the entire set of # weighed hosts and thus return [hs1, hs2] self.assertEqual([hs1, hs2], results) @mock.patch('random.shuffle', side_effect=lambda x: x.reverse()) @mock.patch('nova.scheduler.host_manager.HostManager.get_weighed_hosts') @mock.patch('nova.scheduler.host_manager.HostManager.get_filtered_hosts') def test_get_sorted_hosts_shuffle_top_equal(self, mock_filt, mock_weighed, mock_shuffle): """Tests that top best weighed hosts are shuffled when enabled. 
""" self.flags(host_subset_size=1, group='filter_scheduler') self.flags(shuffle_best_same_weighed_hosts=True, group='filter_scheduler') hs1 = mock.Mock(spec=host_manager.HostState, host='host1') hs2 = mock.Mock(spec=host_manager.HostState, host='host2') hs3 = mock.Mock(spec=host_manager.HostState, host='host3') hs4 = mock.Mock(spec=host_manager.HostState, host='host4') all_host_states = [hs1, hs2, hs3, hs4] mock_weighed.return_value = [ weights.WeighedHost(hs1, 1.0), weights.WeighedHost(hs2, 1.0), weights.WeighedHost(hs3, 0.5), weights.WeighedHost(hs4, 0.5), ] results = self.driver._get_sorted_hosts(mock.sentinel.spec, all_host_states, mock.sentinel.index) mock_filt.assert_called_once_with(all_host_states, mock.sentinel.spec, mock.sentinel.index) mock_weighed.assert_called_once_with(mock_filt.return_value, mock.sentinel.spec) # We override random.shuffle() to reverse the list, thus the # head of the list should become [host#2, host#1] # (as the host_subset_size is 1) and the tail should stay the same. self.assertEqual([hs2, hs1, hs3, hs4], results) def test_cleanup_allocations(self): instance_uuids = [] # Check we don't do anything if there's no instance UUIDs to cleanup # allocations for pc = self.placement_client self.driver._cleanup_allocations(self.context, instance_uuids) self.assertFalse(pc.delete_allocation_for_instance.called) instance_uuids = [uuids.instance1, uuids.instance2] self.driver._cleanup_allocations(self.context, instance_uuids) exp_calls = [mock.call(self.context, uuids.instance1), mock.call(self.context, uuids.instance2)] pc.delete_allocation_for_instance.assert_has_calls(exp_calls) def test_add_retry_host(self): retry = dict(num_attempts=1, hosts=[]) filter_properties = dict(retry=retry) host = "fakehost" node = "fakenode" scheduler_utils._add_retry_host(filter_properties, host, node) hosts = filter_properties['retry']['hosts'] self.assertEqual(1, len(hosts)) self.assertEqual([host, node], hosts[0]) def test_post_select_populate(self): # Test addition of certain filter props after a node is selected. retry = {'hosts': [], 'num_attempts': 1} filter_properties = {'retry': retry} selection = objects.Selection(service_host="host", nodename="node", cell_uuid=uuids.cell) scheduler_utils.populate_filter_properties(filter_properties, selection) self.assertEqual(['host', 'node'], filter_properties['retry']['hosts'][0]) @mock.patch('nova.scheduler.filter_scheduler.FilterScheduler.' '_schedule') def test_select_destinations_match_num_instances(self, mock_schedule): """Tests that the select_destinations() method returns the list of hosts from the _schedule() method when the number of returned hosts equals the number of instance UUIDs passed in. """ spec_obj = objects.RequestSpec( flavor=objects.Flavor(memory_mb=512, root_gb=512, ephemeral_gb=0, swap=0, vcpus=1), project_id=uuids.project_id, num_instances=1) mock_schedule.return_value = [[fake_selection]] dests = self.driver.select_destinations(self.context, spec_obj, [mock.sentinel.instance_uuid], mock.sentinel.alloc_reqs_by_rp_uuid, mock.sentinel.p_sums, mock.sentinel.ar_version) mock_schedule.assert_called_once_with(self.context, spec_obj, [mock.sentinel.instance_uuid], mock.sentinel.alloc_reqs_by_rp_uuid, mock.sentinel.p_sums, mock.sentinel.ar_version, False) self.assertEqual([[fake_selection]], dests) @mock.patch('nova.scheduler.filter_scheduler.FilterScheduler.' 
'_schedule') def test_select_destinations_for_move_ops(self, mock_schedule): """Tests that the select_destinations() method verifies the number of hosts returned from the _schedule() method against the number of instance UUIDs passed as a parameter and not against the RequestSpec num_instances field since the latter could be wrong in case of a move operation. """ spec_obj = objects.RequestSpec( flavor=objects.Flavor(memory_mb=512, root_gb=512, ephemeral_gb=0, swap=0, vcpus=1), project_id=uuids.project_id, num_instances=2) mock_schedule.return_value = [[fake_selection]] dests = self.driver.select_destinations(self.context, spec_obj, [mock.sentinel.instance_uuid], mock.sentinel.alloc_reqs_by_rp_uuid, mock.sentinel.p_sums, mock.sentinel.ar_version) mock_schedule.assert_called_once_with(self.context, spec_obj, [mock.sentinel.instance_uuid], mock.sentinel.alloc_reqs_by_rp_uuid, mock.sentinel.p_sums, mock.sentinel.ar_version, False) self.assertEqual([[fake_selection]], dests) @mock.patch('nova.scheduler.utils.claim_resources', return_value=True) @mock.patch('nova.scheduler.filter_scheduler.FilterScheduler.' '_get_all_host_states') @mock.patch('nova.scheduler.filter_scheduler.FilterScheduler.' '_get_sorted_hosts') def test_schedule_fewer_num_instances(self, mock_get_hosts, mock_get_all_states, mock_claim): """Tests that the _schedule() method properly handles resetting host state objects and raising NoValidHost when there are not enough hosts available. """ spec_obj = objects.RequestSpec( num_instances=2, flavor=objects.Flavor(memory_mb=512, root_gb=512, ephemeral_gb=0, swap=0, vcpus=1), project_id=uuids.project_id, instance_group=None) host_state = mock.Mock(spec=host_manager.HostState, host="fake_host", uuid=uuids.cn1, cell_uuid=uuids.cell, nodename="fake_node", limits={}, updated="Not None") all_host_states = [host_state] mock_get_all_states.return_value = all_host_states mock_get_hosts.side_effect = [all_host_states, []] instance_uuids = [uuids.inst1, uuids.inst2] fake_allocs_by_rp = {uuids.cn1: [{}]} self.assertRaises(exception.NoValidHost, self.driver._schedule, self.context, spec_obj, instance_uuids, fake_allocs_by_rp, mock.sentinel.p_sums) self.assertIsNone(host_state.updated) @mock.patch("nova.scheduler.host_manager.HostState.consume_from_request") @mock.patch('nova.scheduler.utils.claim_resources') @mock.patch("nova.scheduler.filter_scheduler.FilterScheduler." "_get_sorted_hosts") @mock.patch("nova.scheduler.filter_scheduler.FilterScheduler." 
"_get_all_host_states") def _test_alternates_returned(self, mock_get_all_hosts, mock_sorted, mock_claim, mock_consume, num_instances=2, num_alternates=2): all_host_states = [] alloc_reqs = {} for num in range(10): host_name = "host%s" % num hs = host_manager.HostState(host_name, "node%s" % num, uuids.cell) hs.uuid = getattr(uuids, host_name) all_host_states.append(hs) alloc_reqs[hs.uuid] = [{}] mock_get_all_hosts.return_value = all_host_states mock_sorted.return_value = all_host_states mock_claim.return_value = True total_returned = num_alternates + 1 self.flags(max_attempts=total_returned, group="scheduler") instance_uuids = [getattr(uuids, "inst%s" % num) for num in range(num_instances)] spec_obj = objects.RequestSpec( num_instances=num_instances, flavor=objects.Flavor(memory_mb=512, root_gb=512, ephemeral_gb=0, swap=0, vcpus=1), project_id=uuids.project_id, instance_group=None) dests = self.driver._schedule(self.context, spec_obj, instance_uuids, alloc_reqs, None, return_alternates=True) self.assertEqual(num_instances, len(dests)) # Filtering and weighing hosts should be called num_instances + 1 times # unless num_instances == 1. self.assertEqual(num_instances + 1 if num_instances > 1 else 1, mock_sorted.call_count, 'Unexpected number of calls to filter hosts for %s ' 'instances.' % num_instances) selected_hosts = [dest[0] for dest in dests] for dest in dests: self.assertEqual(total_returned, len(dest)) # Verify that there are no duplicates among a destination self.assertEqual(len(dest), len(set(dest))) # Verify that none of the selected hosts appear in the alternates. for alt in dest[1:]: self.assertNotIn(alt, selected_hosts) def test_alternates_returned(self): self._test_alternates_returned(num_instances=1, num_alternates=1) self._test_alternates_returned(num_instances=3, num_alternates=0) self._test_alternates_returned(num_instances=1, num_alternates=4) self._test_alternates_returned(num_instances=2, num_alternates=3) self._test_alternates_returned(num_instances=8, num_alternates=8) @mock.patch("nova.scheduler.host_manager.HostState.consume_from_request") @mock.patch('nova.scheduler.utils.claim_resources') @mock.patch("nova.scheduler.filter_scheduler.FilterScheduler." "_get_sorted_hosts") @mock.patch("nova.scheduler.filter_scheduler.FilterScheduler." "_get_all_host_states") def test_alternates_same_cell(self, mock_get_all_hosts, mock_sorted, mock_claim, mock_consume): """Tests getting alternates plus claims where the hosts are spread across two cells. """ all_host_states = [] alloc_reqs = {} for num in range(10): host_name = "host%s" % num cell_uuid = uuids.cell1 if num % 2 else uuids.cell2 hs = host_manager.HostState(host_name, "node%s" % num, cell_uuid) hs.uuid = getattr(uuids, host_name) all_host_states.append(hs) alloc_reqs[hs.uuid] = [{}] mock_get_all_hosts.return_value = all_host_states # There are two instances so _get_sorted_hosts is called once per # instance and then once again before picking alternates. 
        mock_sorted.side_effect = [all_host_states,
                                   list(reversed(all_host_states)),
                                   all_host_states]
        mock_claim.return_value = True
        total_returned = 3
        self.flags(max_attempts=total_returned, group="scheduler")
        instance_uuids = [uuids.inst1, uuids.inst2]
        num_instances = len(instance_uuids)
        spec_obj = objects.RequestSpec(
            num_instances=num_instances,
            flavor=objects.Flavor(memory_mb=512,
                                  root_gb=512,
                                  ephemeral_gb=0,
                                  swap=0,
                                  vcpus=1),
            project_id=uuids.project_id,
            instance_group=None)
        dests = self.driver._schedule(self.context, spec_obj,
                instance_uuids, alloc_reqs, None, return_alternates=True)
        # There should be max_attempts hosts per instance (1 selected, 2 alts)
        self.assertEqual(total_returned, len(dests[0]))
        self.assertEqual(total_returned, len(dests[1]))
        # Verify that the two selected hosts are not in the same cell.
        self.assertNotEqual(dests[0][0].cell_uuid, dests[1][0].cell_uuid)
        for dest in dests:
            selected_host = dest[0]
            selected_cell_uuid = selected_host.cell_uuid
            for alternate in dest[1:]:
                self.assertEqual(alternate.cell_uuid, selected_cell_uuid)

    @mock.patch("nova.scheduler.host_manager.HostState.consume_from_request")
    @mock.patch('nova.scheduler.utils.claim_resources')
    @mock.patch("nova.scheduler.filter_scheduler.FilterScheduler."
                "_get_sorted_hosts")
    @mock.patch("nova.scheduler.filter_scheduler.FilterScheduler."
                "_get_all_host_states")
    def _test_not_enough_alternates(self, mock_get_all_hosts, mock_sorted,
            mock_claim, mock_consume, num_hosts, max_attempts):
        all_host_states = []
        alloc_reqs = {}
        for num in range(num_hosts):
            host_name = "host%s" % num
            hs = host_manager.HostState(host_name, "node%s" % num,
                    uuids.cell)
            hs.uuid = getattr(uuids, host_name)
            all_host_states.append(hs)
            alloc_reqs[hs.uuid] = [{}]
        mock_get_all_hosts.return_value = all_host_states
        mock_sorted.return_value = all_host_states
        mock_claim.return_value = True
        # Set the total returned to more than the number of available hosts
        self.flags(max_attempts=max_attempts, group="scheduler")
        instance_uuids = [uuids.inst1, uuids.inst2]
        num_instances = len(instance_uuids)
        spec_obj = objects.RequestSpec(
            num_instances=num_instances,
            flavor=objects.Flavor(memory_mb=512,
                                  root_gb=512,
                                  ephemeral_gb=0,
                                  swap=0,
                                  vcpus=1),
            project_id=uuids.project_id,
            instance_group=None)
        dests = self.driver._schedule(self.context, spec_obj,
                instance_uuids, alloc_reqs, None, return_alternates=True)
        self.assertEqual(num_instances, len(dests))
        selected_hosts = [dest[0] for dest in dests]
        # The number returned for each destination should be the lesser of
        # the number of available hosts and the max_attempts setting.
        expected_number = min(num_hosts, max_attempts)
        for dest in dests:
            self.assertEqual(expected_number, len(dest))
            # Verify that there are no duplicates among a destination
            self.assertEqual(len(dest), len(set(dest)))
            # Verify that none of the selected hosts appear in the alternates.
for alt in dest[1:]: self.assertNotIn(alt, selected_hosts) def test_not_enough_alternates(self): self._test_not_enough_alternates(num_hosts=100, max_attempts=5) self._test_not_enough_alternates(num_hosts=5, max_attempts=5) self._test_not_enough_alternates(num_hosts=3, max_attempts=5) self._test_not_enough_alternates(num_hosts=20, max_attempts=5) @mock.patch.object(filter_scheduler.FilterScheduler, '_schedule') def test_select_destinations_notifications(self, mock_schedule): mock_schedule.return_value = ([[mock.Mock()]], [[mock.Mock()]]) with mock.patch.object(self.driver.notifier, 'info') as mock_info: expected = {'num_instances': 1, 'instance_properties': {'uuid': uuids.instance}, 'instance_type': {}, 'image': {}} spec_obj = objects.RequestSpec(num_instances=1, instance_uuid=uuids.instance) self.driver.select_destinations(self.context, spec_obj, [uuids.instance], {}, None) expected = [ mock.call(self.context, 'scheduler.select_destinations.start', dict(request_spec=expected)), mock.call(self.context, 'scheduler.select_destinations.end', dict(request_spec=expected))] self.assertEqual(expected, mock_info.call_args_list) def test_get_all_host_states_provider_summaries_is_none(self): """Tests that HostManager.get_host_states_by_uuids is called with compute_uuids being None when the incoming provider_summaries is None. """ with mock.patch.object(self.driver.host_manager, 'get_host_states_by_uuids') as get_host_states: self.driver._get_all_host_states( mock.sentinel.ctxt, mock.sentinel.spec_obj, None) # Make sure get_host_states_by_uuids was called with # compute_uuids being None. get_host_states.assert_called_once_with( mock.sentinel.ctxt, None, mock.sentinel.spec_obj) def test_get_all_host_states_provider_summaries_is_empty(self): """Tests that HostManager.get_host_states_by_uuids is called with compute_uuids being [] when the incoming provider_summaries is {}. """ with mock.patch.object(self.driver.host_manager, 'get_host_states_by_uuids') as get_host_states: self.driver._get_all_host_states( mock.sentinel.ctxt, mock.sentinel.spec_obj, {}) # Make sure get_host_states_by_uuids was called with # compute_uuids being []. get_host_states.assert_called_once_with( mock.sentinel.ctxt, [], mock.sentinel.spec_obj) nova-17.0.1/nova/tests/unit/scheduler/test_utils.py0000666000175000017500000005663713250073136022423 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
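
# The cases below cover the scheduler utility helpers: building a
# ResourceRequest (resource classes plus required traits) from a flavor and
# its "resources*:"/"trait*:" extra_specs, merging resource dicts, and
# claiming allocations in placement on behalf of an instance.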
import mock from nova.api.openstack.placement import lib as plib from nova import context as nova_context from nova import exception from nova import objects from nova.scheduler.client import report from nova.scheduler import utils from nova import test from nova.tests.unit import fake_instance from nova.tests import uuidsentinel as uuids class TestUtils(test.NoDBTestCase): def setUp(self): super(TestUtils, self).setUp() self.context = nova_context.get_admin_context() def assertResourceRequestsEqual(self, expected, observed): ex_by_id = expected._rg_by_id ob_by_id = observed._rg_by_id self.assertEqual(set(ex_by_id), set(ob_by_id)) for ident in ex_by_id: self.assertEqual(vars(ex_by_id[ident]), vars(ob_by_id[ident])) def _test_resources_from_request_spec(self, flavor, expected): fake_spec = objects.RequestSpec(flavor=flavor) resources = utils.resources_from_request_spec(fake_spec) self.assertResourceRequestsEqual(expected, resources) def test_resources_from_request_spec(self): flavor = objects.Flavor(vcpus=1, memory_mb=1024, root_gb=10, ephemeral_gb=5, swap=0) expected_resources = utils.ResourceRequest() expected_resources._rg_by_id[None] = plib.RequestGroup( use_same_provider=False, resources={ 'VCPU': 1, 'MEMORY_MB': 1024, 'DISK_GB': 15, } ) self._test_resources_from_request_spec(flavor, expected_resources) def test_resources_from_request_spec_with_no_disk(self): flavor = objects.Flavor(vcpus=1, memory_mb=1024, root_gb=0, ephemeral_gb=0, swap=0) expected_resources = utils.ResourceRequest() expected_resources._rg_by_id[None] = plib.RequestGroup( use_same_provider=False, resources={ 'VCPU': 1, 'MEMORY_MB': 1024, } ) self._test_resources_from_request_spec(flavor, expected_resources) def test_get_resources_from_request_spec_custom_resource_class(self): flavor = objects.Flavor(vcpus=1, memory_mb=1024, root_gb=10, ephemeral_gb=5, swap=0, extra_specs={"resources:CUSTOM_TEST_CLASS": 1}) expected_resources = utils.ResourceRequest() expected_resources._rg_by_id[None] = plib.RequestGroup( use_same_provider=False, resources={ "VCPU": 1, "MEMORY_MB": 1024, "DISK_GB": 15, "CUSTOM_TEST_CLASS": 1, } ) self._test_resources_from_request_spec(flavor, expected_resources) def test_get_resources_from_request_spec_override_flavor_amounts(self): flavor = objects.Flavor(vcpus=1, memory_mb=1024, root_gb=10, ephemeral_gb=5, swap=0, extra_specs={ "resources:VCPU": 99, "resources:MEMORY_MB": 99, "resources:DISK_GB": 99}) expected_resources = utils.ResourceRequest() expected_resources._rg_by_id[None] = plib.RequestGroup( use_same_provider=False, resources={ "VCPU": 99, "MEMORY_MB": 99, "DISK_GB": 99, } ) self._test_resources_from_request_spec(flavor, expected_resources) def test_get_resources_from_request_spec_remove_flavor_amounts(self): flavor = objects.Flavor(vcpus=1, memory_mb=1024, root_gb=10, ephemeral_gb=5, swap=0, extra_specs={ "resources:VCPU": 0, "resources:DISK_GB": 0}) expected_resources = utils.ResourceRequest() expected_resources._rg_by_id[None] = plib.RequestGroup( use_same_provider=False, resources={ "MEMORY_MB": 1024, } ) self._test_resources_from_request_spec(flavor, expected_resources) def test_get_resources_from_request_spec_vgpu(self): flavor = objects.Flavor(vcpus=1, memory_mb=1024, root_gb=10, ephemeral_gb=0, swap=0, extra_specs={ "resources:VGPU": 1, "resources:VGPU_DISPLAY_HEAD": 1}) expected_resources = utils.ResourceRequest() expected_resources._rg_by_id[None] = plib.RequestGroup( use_same_provider=False, resources={ "VCPU": 1, "MEMORY_MB": 1024, "DISK_GB": 10, "VGPU": 1, "VGPU_DISPLAY_HEAD": 
1, } ) self._test_resources_from_request_spec(flavor, expected_resources) def test_get_resources_from_request_spec_bad_std_resource_class(self): flavor = objects.Flavor(vcpus=1, memory_mb=1024, root_gb=10, ephemeral_gb=5, swap=0, extra_specs={ "resources:DOESNT_EXIST": 0}) fake_spec = objects.RequestSpec(flavor=flavor) with mock.patch("nova.scheduler.utils.LOG.warning") as mock_log: utils.resources_from_request_spec(fake_spec) mock_log.assert_called_once() args = mock_log.call_args[0] self.assertEqual(args[0], "Received an invalid ResourceClass " "'%(key)s' in extra_specs.") self.assertEqual(args[1], {"key": "DOESNT_EXIST"}) def test_get_resources_from_request_spec_granular(self): flavor = objects.Flavor( vcpus=1, memory_mb=1024, root_gb=10, ephemeral_gb=0, swap=0, extra_specs={'resources1:VGPU': '1', 'resources1:VGPU_DISPLAY_HEAD': '2', # Replace 'resources3:VCPU': '2', # Stay separate (don't sum) 'resources42:SRIOV_NET_VF': '1', 'resources24:SRIOV_NET_VF': '2', # Ignore 'some:bogus': 'value', # Custom in the unnumbered group (merge with DISK_GB) 'resources:CUSTOM_THING': '123', # Traits make it through 'trait3:CUSTOM_SILVER': 'required', 'trait3:CUSTOM_GOLD': 'required', # Delete standard 'resources86:MEMORY_MB': '0', # Standard and custom zeroes don't make it through 'resources:IPV4_ADDRESS': '0', 'resources:CUSTOM_FOO': '0', # Bogus values don't make it through 'resources1:MEMORY_MB': 'bogus'}) expected_resources = utils.ResourceRequest() expected_resources._rg_by_id[None] = plib.RequestGroup( use_same_provider=False, resources={ 'DISK_GB': 10, 'CUSTOM_THING': 123, } ) expected_resources._rg_by_id['1'] = plib.RequestGroup( resources={ 'VGPU': 1, 'VGPU_DISPLAY_HEAD': 2, } ) expected_resources._rg_by_id['3'] = plib.RequestGroup( resources={ 'VCPU': 2, }, required_traits={ 'CUSTOM_GOLD', 'CUSTOM_SILVER', } ) expected_resources._rg_by_id['24'] = plib.RequestGroup( resources={ 'SRIOV_NET_VF': 2, }, ) expected_resources._rg_by_id['42'] = plib.RequestGroup( resources={ 'SRIOV_NET_VF': 1, } ) self._test_resources_from_request_spec(flavor, expected_resources) @mock.patch("nova.scheduler.utils.ResourceRequest.from_extra_specs") def test_process_extra_specs_granular_called(self, mock_proc): flavor = objects.Flavor(vcpus=1, memory_mb=1024, root_gb=10, ephemeral_gb=5, swap=0, extra_specs={"resources:CUSTOM_TEST_CLASS": 1}) fake_spec = objects.RequestSpec(flavor=flavor) utils.resources_from_request_spec(fake_spec) mock_proc.assert_called_once() @mock.patch("nova.scheduler.utils.ResourceRequest.from_extra_specs") def test_process_extra_specs_granular_not_called(self, mock_proc): flavor = objects.Flavor(vcpus=1, memory_mb=1024, root_gb=10, ephemeral_gb=5, swap=0) fake_spec = objects.RequestSpec(flavor=flavor) utils.resources_from_request_spec(fake_spec) mock_proc.assert_not_called() def test_process_missing_extra_specs_value(self): flavor = objects.Flavor( vcpus=1, memory_mb=1024, root_gb=10, ephemeral_gb=5, swap=0, extra_specs={"resources:CUSTOM_TEST_CLASS": ""}) fake_spec = objects.RequestSpec(flavor=flavor) utils.resources_from_request_spec(fake_spec) @mock.patch('nova.compute.utils.is_volume_backed_instance', return_value=False) def test_resources_from_flavor_no_bfv(self, mock_is_bfv): flavor = objects.Flavor(vcpus=1, memory_mb=1024, root_gb=10, ephemeral_gb=5, swap=1024, extra_specs={}) instance = objects.Instance() expected = { 'VCPU': 1, 'MEMORY_MB': 1024, 'DISK_GB': 16, } actual = utils.resources_from_flavor(instance, flavor) self.assertEqual(expected, actual) 
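
    # A sketch of a follow-on case (not in the original suite): the expected
    # DISK_GB above implies swap is converted from MiB to GiB with a ceiling
    # (1024 MiB of swap adds 1 GiB on top of root_gb + ephemeral_gb).
    # Assuming that rounding behavior, a swap size that is not a multiple of
    # 1024 should still add a whole gigabyte:
    @mock.patch('nova.compute.utils.is_volume_backed_instance',
                return_value=False)
    def test_resources_from_flavor_swap_rounded_up(self, mock_is_bfv):
        flavor = objects.Flavor(vcpus=1, memory_mb=1024, root_gb=10,
                                ephemeral_gb=5, swap=512, extra_specs={})
        instance = objects.Instance()
        # 10 (root) + 5 (ephemeral) + ceil(512 / 1024.0) = 16
        self.assertEqual(
            16, utils.resources_from_flavor(instance, flavor)['DISK_GB'])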
@mock.patch('nova.compute.utils.is_volume_backed_instance', return_value=True) def test_resources_from_flavor_bfv(self, mock_is_bfv): flavor = objects.Flavor(vcpus=1, memory_mb=1024, root_gb=10, ephemeral_gb=5, swap=1024, extra_specs={}) instance = objects.Instance() expected = { 'VCPU': 1, 'MEMORY_MB': 1024, 'DISK_GB': 6, # No root disk... } actual = utils.resources_from_flavor(instance, flavor) self.assertEqual(expected, actual) @mock.patch('nova.compute.utils.is_volume_backed_instance', return_value=False) def test_resources_from_flavor_with_override(self, mock_is_bfv): flavor = objects.Flavor( vcpus=1, memory_mb=1024, root_gb=10, ephemeral_gb=5, swap=1024, extra_specs={ # Replace 'resources:VCPU': '2', # Sum up 'resources42:SRIOV_NET_VF': '1', 'resources24:SRIOV_NET_VF': '2', # Ignore 'some:bogus': 'value', # Custom 'resources:CUSTOM_THING': '123', # Ignore 'trait:CUSTOM_GOLD': 'required', # Delete standard 'resources86:MEMORY_MB': 0, # Standard and custom zeroes don't make it through 'resources:IPV4_ADDRESS': 0, 'resources:CUSTOM_FOO': 0}) instance = objects.Instance() expected = { 'VCPU': 2, 'DISK_GB': 16, 'CUSTOM_THING': 123, 'SRIOV_NET_VF': 3, } actual = utils.resources_from_flavor(instance, flavor) self.assertEqual(expected, actual) def test_resource_request_from_extra_specs(self): extra_specs = { 'resources:VCPU': '2', 'resources:MEMORY_MB': '2048', 'trait:HW_CPU_X86_AVX': 'required', # Key skipped because no colons 'nocolons': '42', 'trait:CUSTOM_MAGIC': 'required', # Resource skipped because invalid resource class name 'resources86:CUTSOM_MISSPELLED': '86', 'resources1:SRIOV_NET_VF': '1', # Resource skipped because non-int-able value 'resources86:CUSTOM_FOO': 'seven', # Resource skipped because negative value 'resources86:CUSTOM_NEGATIVE': '-7', 'resources1:IPV4_ADDRESS': '1', # Trait skipped because unsupported value 'trait86:CUSTOM_GOLD': 'preferred', 'trait1:CUSTOM_PHYSNET_NET1': 'required', 'resources2:SRIOV_NET_VF': '1', 'resources2:IPV4_ADDRESS': '2', 'trait2:CUSTOM_PHYSNET_NET2': 'required', 'trait2:HW_NIC_ACCEL_SSL': 'required', # Groupings that don't quite match the patterns are ignored 'resources_5:SRIOV_NET_VF': '7', 'traitFoo:HW_NIC_ACCEL_SSL': 'required', # Solo resource, no corresponding traits 'resources3:DISK_GB': '5', } # Build up a ResourceRequest from the inside to compare against. 
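        # Keys sharing a numeric suffix ("resources1:", "trait1:") form one
        # numbered request group that must be satisfied by a single
        # provider; unnumbered keys land in the default (None) group, and
        # malformed suffixes such as "resources_5" or "traitFoo" are
        # silently dropped.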
expected = utils.ResourceRequest() expected._rg_by_id[None] = plib.RequestGroup( use_same_provider=False, resources={ 'VCPU': 2, 'MEMORY_MB': 2048, }, required_traits={ 'HW_CPU_X86_AVX', 'CUSTOM_MAGIC', } ) expected._rg_by_id['1'] = plib.RequestGroup( resources={ 'SRIOV_NET_VF': 1, 'IPV4_ADDRESS': 1, }, required_traits={ 'CUSTOM_PHYSNET_NET1', } ) expected._rg_by_id['2'] = plib.RequestGroup( resources={ 'SRIOV_NET_VF': 1, 'IPV4_ADDRESS': 2, }, required_traits={ 'CUSTOM_PHYSNET_NET2', 'HW_NIC_ACCEL_SSL', } ) expected._rg_by_id['3'] = plib.RequestGroup( resources={ 'DISK_GB': 5, } ) self.assertResourceRequestsEqual( expected, utils.ResourceRequest.from_extra_specs(extra_specs)) def test_merge_resources(self): resources = { 'VCPU': 1, 'MEMORY_MB': 1024, } new_resources = { 'VCPU': 2, 'MEMORY_MB': 2048, 'CUSTOM_FOO': 1, } doubled = { 'VCPU': 3, 'MEMORY_MB': 3072, 'CUSTOM_FOO': 1, } saved_orig = dict(resources) utils.merge_resources(resources, new_resources) # Check to see that we've doubled our resources self.assertEqual(doubled, resources) # and then removed those doubled resources utils.merge_resources(resources, saved_orig, -1) self.assertEqual(new_resources, resources) def test_merge_resources_zero(self): """Test 0 value resources are ignored.""" resources = { 'VCPU': 1, 'MEMORY_MB': 1024, } new_resources = { 'VCPU': 2, 'MEMORY_MB': 2048, 'DISK_GB': 0, } # The result should not include the zero valued resource. doubled = { 'VCPU': 3, 'MEMORY_MB': 3072, } utils.merge_resources(resources, new_resources) self.assertEqual(doubled, resources) def test_merge_resources_original_zeroes(self): """Confirm that merging that result in a zero in the original excludes the zeroed resource class. """ resources = { 'VCPU': 3, 'MEMORY_MB': 1023, 'DISK_GB': 1, } new_resources = { 'VCPU': 1, 'MEMORY_MB': 512, 'DISK_GB': 1, } merged = { 'VCPU': 2, 'MEMORY_MB': 511, } utils.merge_resources(resources, new_resources, -1) self.assertEqual(merged, resources) def test_claim_resources_on_destination_no_source_allocations(self): """Tests the negative scenario where the instance does not have allocations in Placement on the source compute node so no claim is attempted on the destination compute node. """ reportclient = report.SchedulerReportClient() instance = fake_instance.fake_instance_obj(self.context) source_node = objects.ComputeNode( uuid=uuids.source_node, host=instance.host) dest_node = objects.ComputeNode(uuid=uuids.dest_node, host='dest-host') @mock.patch.object(reportclient, 'get_allocations_for_consumer_by_provider', return_value={}) @mock.patch.object(reportclient, 'claim_resources', new_callable=mock.NonCallableMock) def test(mock_claim, mock_get_allocs): utils.claim_resources_on_destination( self.context, reportclient, instance, source_node, dest_node) mock_get_allocs.assert_called_once_with( uuids.source_node, instance.uuid) test() def test_claim_resources_on_destination_claim_fails(self): """Tests the negative scenario where the resource allocation claim on the destination compute node fails, resulting in an error. """ reportclient = report.SchedulerReportClient() instance = fake_instance.fake_instance_obj(self.context) source_node = objects.ComputeNode( uuid=uuids.source_node, host=instance.host) dest_node = objects.ComputeNode(uuid=uuids.dest_node, host='dest-host') source_res_allocs = { 'VCPU': instance.vcpus, 'MEMORY_MB': instance.memory_mb, # This would really include ephemeral and swap too but we're lazy. 
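            # (so DISK_GB below is just root_gb, not
            # root_gb + ephemeral_gb + ceil(swap / 1024))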
'DISK_GB': instance.root_gb } dest_alloc_request = { 'allocations': { uuids.dest_node: { 'resources': source_res_allocs } } } @mock.patch.object(reportclient, 'get_allocations_for_consumer_by_provider', return_value=source_res_allocs) @mock.patch.object(reportclient, 'claim_resources', return_value=False) def test(mock_claim, mock_get_allocs): # NOTE(danms): Don't pass source_node_allocations here to test # that they are fetched if needed. self.assertRaises(exception.NoValidHost, utils.claim_resources_on_destination, self.context, reportclient, instance, source_node, dest_node) mock_get_allocs.assert_called_once_with( uuids.source_node, instance.uuid) mock_claim.assert_called_once_with( self.context, instance.uuid, dest_alloc_request, instance.project_id, instance.user_id, allocation_request_version='1.12') test() def test_claim_resources_on_destination(self): """Happy path test where everything is successful.""" reportclient = report.SchedulerReportClient() instance = fake_instance.fake_instance_obj(self.context) source_node = objects.ComputeNode( uuid=uuids.source_node, host=instance.host) dest_node = objects.ComputeNode(uuid=uuids.dest_node, host='dest-host') source_res_allocs = { 'VCPU': instance.vcpus, 'MEMORY_MB': instance.memory_mb, # This would really include ephemeral and swap too but we're lazy. 'DISK_GB': instance.root_gb } dest_alloc_request = { 'allocations': { uuids.dest_node: { 'resources': source_res_allocs } } } @mock.patch.object(reportclient, 'get_allocations_for_consumer_by_provider') @mock.patch.object(reportclient, 'claim_resources', return_value=True) def test(mock_claim, mock_get_allocs): utils.claim_resources_on_destination( self.context, reportclient, instance, source_node, dest_node, source_res_allocs) self.assertFalse(mock_get_allocs.called) mock_claim.assert_called_once_with( self.context, instance.uuid, dest_alloc_request, instance.project_id, instance.user_id, allocation_request_version='1.12') test() @mock.patch('nova.scheduler.client.report.SchedulerReportClient') @mock.patch('nova.scheduler.utils.request_is_rebuild') def test_claim_resources(self, mock_is_rebuild, mock_client): """Tests that when claim_resources() is called, that we appropriately call the placement client to claim resources for the instance. """ mock_is_rebuild.return_value = False ctx = mock.Mock(user_id=uuids.user_id) spec_obj = mock.Mock(project_id=uuids.project_id) instance_uuid = uuids.instance alloc_req = mock.sentinel.alloc_req mock_client.claim_resources.return_value = True res = utils.claim_resources(ctx, mock_client, spec_obj, instance_uuid, alloc_req) mock_client.claim_resources.assert_called_once_with( ctx, uuids.instance, mock.sentinel.alloc_req, uuids.project_id, uuids.user_id, allocation_request_version=None) self.assertTrue(res) @mock.patch('nova.scheduler.client.report.SchedulerReportClient') @mock.patch('nova.scheduler.utils.request_is_rebuild') def test_claim_resouces_for_policy_check(self, mock_is_rebuild, mock_client): mock_is_rebuild.return_value = True ctx = mock.Mock(user_id=uuids.user_id) res = utils.claim_resources(ctx, None, mock.sentinel.spec_obj, mock.sentinel.instance_uuid, []) self.assertTrue(res) mock_is_rebuild.assert_called_once_with(mock.sentinel.spec_obj) self.assertFalse(mock_client.claim_resources.called) nova-17.0.1/nova/tests/unit/scheduler/test_filters.py0000666000175000017500000002555313250073126022723 0ustar zuulzuul00000000000000# Copyright 2012 OpenStack Foundation # All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Tests For Scheduler Host Filters. """ import inspect import mock from six.moves import range from nova import filters from nova import loadables from nova import objects from nova import test from nova.tests import uuidsentinel as uuids class Filter1(filters.BaseFilter): """Test Filter class #1.""" pass class Filter2(filters.BaseFilter): """Test Filter class #2.""" pass class FiltersTestCase(test.NoDBTestCase): def setUp(self): super(FiltersTestCase, self).setUp() with mock.patch.object(loadables.BaseLoader, "__init__") as mock_load: mock_load.return_value = None self.filter_handler = filters.BaseFilterHandler(filters.BaseFilter) @mock.patch('nova.filters.BaseFilter._filter_one') def test_filter_all(self, mock_filter_one): mock_filter_one.side_effect = [True, False, True] filter_obj_list = ['obj1', 'obj2', 'obj3'] spec_obj = objects.RequestSpec() base_filter = filters.BaseFilter() result = base_filter.filter_all(filter_obj_list, spec_obj) self.assertTrue(inspect.isgenerator(result)) self.assertEqual(['obj1', 'obj3'], list(result)) @mock.patch('nova.filters.BaseFilter._filter_one') def test_filter_all_recursive_yields(self, mock_filter_one): # Test filter_all() allows generators from previous filter_all()s. # filter_all() yields results. We want to make sure that we can # call filter_all() with generators returned from previous calls # to filter_all(). filter_obj_list = ['obj1', 'obj2', 'obj3'] spec_obj = objects.RequestSpec() base_filter = filters.BaseFilter() # The order that _filter_one is going to get called gets # confusing because we will be recursively yielding things.. # We are going to simulate the first call to filter_all() # returning False for 'obj2'. So, 'obj1' will get yielded # 'total_iterations' number of times before the first filter_all() # call gets to processing 'obj2'. We then return 'False' for it. # After that, 'obj3' gets yielded 'total_iterations' number of # times. mock_results = [] total_iterations = 200 for x in range(total_iterations): mock_results.append(True) mock_results.append(False) for x in range(total_iterations): mock_results.append(True) mock_filter_one.side_effect = mock_results objs = iter(filter_obj_list) for x in range(total_iterations): # Pass in generators returned from previous calls. 
objs = base_filter.filter_all(objs, spec_obj) self.assertTrue(inspect.isgenerator(objs)) self.assertEqual(['obj1', 'obj3'], list(objs)) def test_get_filtered_objects(self): filter_objs_initial = ['initial', 'filter1', 'objects1'] filter_objs_second = ['second', 'filter2', 'objects2'] filter_objs_last = ['last', 'filter3', 'objects3'] spec_obj = objects.RequestSpec() def _fake_base_loader_init(*args, **kwargs): pass self.stub_out('nova.loadables.BaseLoader.__init__', _fake_base_loader_init) filt1_mock = mock.Mock(Filter1) filt1_mock.run_filter_for_index.return_value = True filt1_mock.filter_all.return_value = filter_objs_second filt2_mock = mock.Mock(Filter2) filt2_mock.run_filter_for_index.return_value = True filt2_mock.filter_all.return_value = filter_objs_last filter_handler = filters.BaseFilterHandler(filters.BaseFilter) filter_mocks = [filt1_mock, filt2_mock] result = filter_handler.get_filtered_objects(filter_mocks, filter_objs_initial, spec_obj) self.assertEqual(filter_objs_last, result) filt1_mock.filter_all.assert_called_once_with(filter_objs_initial, spec_obj) filt2_mock.filter_all.assert_called_once_with(filter_objs_second, spec_obj) def test_get_filtered_objects_for_index(self): """Test that we don't call a filter when its run_filter_for_index() method returns false """ filter_objs_initial = ['initial', 'filter1', 'objects1'] filter_objs_second = ['second', 'filter2', 'objects2'] spec_obj = objects.RequestSpec() def _fake_base_loader_init(*args, **kwargs): pass self.stub_out('nova.loadables.BaseLoader.__init__', _fake_base_loader_init) filt1_mock = mock.Mock(Filter1) filt1_mock.run_filter_for_index.return_value = True filt1_mock.filter_all.return_value = filter_objs_second filt2_mock = mock.Mock(Filter2) filt2_mock.run_filter_for_index.return_value = False filter_handler = filters.BaseFilterHandler(filters.BaseFilter) filter_mocks = [filt1_mock, filt2_mock] result = filter_handler.get_filtered_objects(filter_mocks, filter_objs_initial, spec_obj) self.assertEqual(filter_objs_second, result) filt1_mock.filter_all.assert_called_once_with(filter_objs_initial, spec_obj) filt2_mock.filter_all.assert_not_called() def test_get_filtered_objects_none_response(self): filter_objs_initial = ['initial', 'filter1', 'objects1'] spec_obj = objects.RequestSpec() def _fake_base_loader_init(*args, **kwargs): pass self.stub_out('nova.loadables.BaseLoader.__init__', _fake_base_loader_init) filt1_mock = mock.Mock(Filter1) filt1_mock.run_filter_for_index.return_value = True filt1_mock.filter_all.return_value = None filt2_mock = mock.Mock(Filter2) filter_handler = filters.BaseFilterHandler(filters.BaseFilter) filter_mocks = [filt1_mock, filt2_mock] result = filter_handler.get_filtered_objects(filter_mocks, filter_objs_initial, spec_obj) self.assertIsNone(result) filt1_mock.filter_all.assert_called_once_with(filter_objs_initial, spec_obj) filt2_mock.filter_all.assert_not_called() def test_get_filtered_objects_info_log_none_returned(self): LOG = filters.LOG class FilterA(filters.BaseFilter): def filter_all(self, list_objs, spec_obj): # return all but the first object return list_objs[1:] class FilterB(filters.BaseFilter): def filter_all(self, list_objs, spec_obj): # return an empty list return [] filter_a = FilterA() filter_b = FilterB() all_filters = [filter_a, filter_b] hosts = ["Host0", "Host1", "Host2"] fake_uuid = uuids.instance spec_obj = objects.RequestSpec(instance_uuid=fake_uuid) with mock.patch.object(LOG, "info") as mock_log: result = self.filter_handler.get_filtered_objects( all_filters, 
hosts, spec_obj) self.assertFalse(result) # FilterA should leave Host1 and Host2; FilterB should leave None. exp_output = ("['FilterA: (start: 3, end: 2)', " "'FilterB: (start: 2, end: 0)']") cargs = mock_log.call_args[0][0] self.assertIn("with instance ID '%s'" % fake_uuid, cargs) self.assertIn(exp_output, cargs) def test_get_filtered_objects_debug_log_none_returned(self): LOG = filters.LOG class FilterA(filters.BaseFilter): def filter_all(self, list_objs, spec_obj): # return all but the first object return list_objs[1:] class FilterB(filters.BaseFilter): def filter_all(self, list_objs, spec_obj): # return an empty list return [] filter_a = FilterA() filter_b = FilterB() all_filters = [filter_a, filter_b] hosts = ["Host0", "Host1", "Host2"] fake_uuid = uuids.instance spec_obj = objects.RequestSpec(instance_uuid=fake_uuid) with mock.patch.object(LOG, "debug") as mock_log: result = self.filter_handler.get_filtered_objects( all_filters, hosts, spec_obj) self.assertFalse(result) # FilterA should leave Host1 and Host2; FilterB should leave None. exp_output = ("[('FilterA', [('Host1', ''), ('Host2', '')]), " + "('FilterB', None)]") cargs = mock_log.call_args[0][0] self.assertIn("with instance ID '%s'" % fake_uuid, cargs) self.assertIn(exp_output, cargs) def test_get_filtered_objects_compatible_with_filt_props_dicts(self): LOG = filters.LOG class FilterA(filters.BaseFilter): def filter_all(self, list_objs, spec_obj): # return all but the first object return list_objs[1:] class FilterB(filters.BaseFilter): def filter_all(self, list_objs, spec_obj): # return an empty list return [] filter_a = FilterA() filter_b = FilterB() all_filters = [filter_a, filter_b] hosts = ["Host0", "Host1", "Host2"] fake_uuid = uuids.instance filt_props = {"request_spec": {"instance_properties": { "uuid": fake_uuid}}} with mock.patch.object(LOG, "info") as mock_log: result = self.filter_handler.get_filtered_objects( all_filters, hosts, filt_props) self.assertFalse(result) # FilterA should leave Host1 and Host2; FilterB should leave None. exp_output = ("['FilterA: (start: 3, end: 2)', " "'FilterB: (start: 2, end: 0)']") cargs = mock_log.call_args[0][0] self.assertIn("with instance ID '%s'" % fake_uuid, cargs) self.assertIn(exp_output, cargs) nova-17.0.1/nova/tests/unit/scheduler/filters/0000775000175000017500000000000013250073472021302 5ustar zuulzuul00000000000000nova-17.0.1/nova/tests/unit/scheduler/filters/test_extra_specs_ops.py0000666000175000017500000001566413250073126026126 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
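
# A quick map of the extra_specs operator grammar exercised below, as
# reconstructed from these cases: a bare value matches on string equality;
# '=' compares numerically as greater-than-or-equal; the 's'-prefixed
# operators (s==, s!=, s<, s<=, s>, s>=) compare lexically as strings;
# '<in>' is a substring test; '<or>' accepts any one of the alternatives;
# '<all-in>' requires every listed element to appear in the value; '<=' and
# '>=' compare numerically. Anything else, e.g. '> 2', never matches:
#
#     match('12311321', '<in> 11')                    -> True
#     match('12', '<or> 11 <or> 12')                  -> True
#     match(str(['aes', 'mmx']), '<all-in> aes mmx')  -> True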
from nova.scheduler.filters import extra_specs_ops
from nova import test


class ExtraSpecsOpsTestCase(test.NoDBTestCase):
    def _do_extra_specs_ops_test(self, value, req, matches):
        assertion = self.assertTrue if matches else self.assertFalse
        assertion(extra_specs_ops.match(value, req))

    def test_extra_specs_matches_simple(self):
        self._do_extra_specs_ops_test(
            value='1',
            req='1',
            matches=True)

    def test_extra_specs_fails_simple(self):
        self._do_extra_specs_ops_test(
            value='',
            req='1',
            matches=False)

    def test_extra_specs_fails_simple2(self):
        self._do_extra_specs_ops_test(
            value='3',
            req='1',
            matches=False)

    def test_extra_specs_fails_simple3(self):
        self._do_extra_specs_ops_test(
            value='222',
            req='2',
            matches=False)

    def test_extra_specs_fails_with_bogus_ops(self):
        self._do_extra_specs_ops_test(
            value='4',
            req='> 2',
            matches=False)

    def test_extra_specs_matches_with_op_eq(self):
        self._do_extra_specs_ops_test(
            value='123',
            req='= 123',
            matches=True)

    def test_extra_specs_matches_with_op_eq2(self):
        self._do_extra_specs_ops_test(
            value='124',
            req='= 123',
            matches=True)

    def test_extra_specs_fails_with_op_eq(self):
        self._do_extra_specs_ops_test(
            value='34',
            req='= 234',
            matches=False)

    def test_extra_specs_fails_with_op_eq3(self):
        self._do_extra_specs_ops_test(
            value='34',
            req='=',
            matches=False)

    def test_extra_specs_matches_with_op_seq(self):
        self._do_extra_specs_ops_test(
            value='123',
            req='s== 123',
            matches=True)

    def test_extra_specs_fails_with_op_seq(self):
        self._do_extra_specs_ops_test(
            value='1234',
            req='s== 123',
            matches=False)

    def test_extra_specs_matches_with_op_sneq(self):
        self._do_extra_specs_ops_test(
            value='1234',
            req='s!= 123',
            matches=True)

    def test_extra_specs_fails_with_op_sneq(self):
        self._do_extra_specs_ops_test(
            value='123',
            req='s!= 123',
            matches=False)

    def test_extra_specs_fails_with_op_sge(self):
        self._do_extra_specs_ops_test(
            value='1000',
            req='s>= 234',
            matches=False)

    def test_extra_specs_fails_with_op_sle(self):
        self._do_extra_specs_ops_test(
            value='1234',
            req='s<= 1000',
            matches=False)

    def test_extra_specs_fails_with_op_sl(self):
        self._do_extra_specs_ops_test(
            value='2',
            req='s< 12',
            matches=False)

    def test_extra_specs_fails_with_op_sg(self):
        self._do_extra_specs_ops_test(
            value='12',
            req='s> 2',
            matches=False)

    def test_extra_specs_matches_with_op_in(self):
        self._do_extra_specs_ops_test(
            value='12311321',
            req='<in> 11',
            matches=True)

    def test_extra_specs_matches_with_op_in2(self):
        self._do_extra_specs_ops_test(
            value='12311321',
            req='<in> 12311321',
            matches=True)

    def test_extra_specs_matches_with_op_in3(self):
        self._do_extra_specs_ops_test(
            value='12311321',
            req='<in> 12311321 <in>',
            matches=True)

    def test_extra_specs_fails_with_op_in(self):
        self._do_extra_specs_ops_test(
            value='12310321',
            req='<in> 11',
            matches=False)

    def test_extra_specs_fails_with_op_in2(self):
        self._do_extra_specs_ops_test(
            value='12310321',
            req='<in> 11 <in>',
            matches=False)

    def test_extra_specs_matches_with_op_or(self):
        self._do_extra_specs_ops_test(
            value='12',
            req='<or> 11 <or> 12',
            matches=True)

    def test_extra_specs_matches_with_op_or2(self):
        self._do_extra_specs_ops_test(
            value='12',
            req='<or> 11 <or> 12 <or>',
            matches=True)

    def test_extra_specs_fails_with_op_or(self):
        self._do_extra_specs_ops_test(
            value='13',
            req='<or> 11 <or> 12',
            matches=False)

    def test_extra_specs_fails_with_op_or2(self):
        self._do_extra_specs_ops_test(
            value='13',
            req='<or> 11 <or> 12 <or>',
            matches=False)

    def test_extra_specs_matches_with_op_le(self):
        self._do_extra_specs_ops_test(
            value='2',
            req='<= 10',
            matches=True)

    def test_extra_specs_fails_with_op_le(self):
        self._do_extra_specs_ops_test(
            value='3',
            req='<= 2',
            matches=False)

    def test_extra_specs_matches_with_op_ge(self):
        self._do_extra_specs_ops_test(
            value='3',
            req='>= 1',
            matches=True)

    def test_extra_specs_fails_with_op_ge(self):
        self._do_extra_specs_ops_test(
            value='2',
            req='>= 3',
            matches=False)

    def test_extra_specs_matches_all_with_op_allin(self):
        values = ['aes', 'mmx', 'aux']
        self._do_extra_specs_ops_test(
            value=str(values),
            req='<all-in> aes mmx',
            matches=True)

    def test_extra_specs_matches_one_with_op_allin(self):
        values = ['aes', 'mmx', 'aux']
        self._do_extra_specs_ops_test(
            value=str(values),
            req='<all-in> mmx',
            matches=True)

    def test_extra_specs_fails_with_op_allin(self):
        values = ['aes', 'mmx', 'aux']
        self._do_extra_specs_ops_test(
            value=str(values),
            req='<all-in> txt',
            matches=False)

    def test_extra_specs_fails_all_with_op_allin(self):
        values = ['aes', 'mmx', 'aux']
        self._do_extra_specs_ops_test(
            value=str(values),
            req='<all-in> txt 3dnow',
            matches=False)

    def test_extra_specs_fails_match_one_with_op_allin(self):
        values = ['aes', 'mmx', 'aux']
        self._do_extra_specs_ops_test(
            value=str(values),
            req='<all-in> txt aes',
            matches=False)
nova-17.0.1/nova/tests/unit/scheduler/filters/test_availability_zone_filters.py0000666000175000017500000000410013250073126030141 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock

from nova import objects
from nova.scheduler.filters import availability_zone_filter
from nova import test
from nova.tests.unit.scheduler import fakes


@mock.patch('nova.scheduler.filters.utils.aggregate_metadata_get_by_host')
class TestAvailabilityZoneFilter(test.NoDBTestCase):

    def setUp(self):
        super(TestAvailabilityZoneFilter, self).setUp()
        self.filt_cls = availability_zone_filter.AvailabilityZoneFilter()

    @staticmethod
    def _make_zone_request(zone):
        return objects.RequestSpec(
            context=mock.sentinel.ctx,
            availability_zone=zone)

    def test_availability_zone_filter_same(self, agg_mock):
        agg_mock.return_value = {'availability_zone': set(['nova'])}
        request = self._make_zone_request('nova')
        host = fakes.FakeHostState('host1', 'node1', {})
        self.assertTrue(self.filt_cls.host_passes(host, request))

    def test_availability_zone_filter_same_comma(self, agg_mock):
        agg_mock.return_value = {'availability_zone': set(['nova', 'nova2'])}
        request = self._make_zone_request('nova')
        host = fakes.FakeHostState('host1', 'node1', {})
        self.assertTrue(self.filt_cls.host_passes(host, request))

    def test_availability_zone_filter_different(self, agg_mock):
        agg_mock.return_value = {'availability_zone': set(['nova'])}
        request = self._make_zone_request('bad')
        host = fakes.FakeHostState('host1', 'node1', {})
        self.assertFalse(self.filt_cls.host_passes(host, request))
nova-17.0.1/nova/tests/unit/scheduler/filters/test_isolated_hosts_filter.py0000666000175000017500000001101613250073126027301 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License.

nova-17.0.1/nova/tests/unit/scheduler/filters/test_availability_zone_filters.py

# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock

from nova import objects
from nova.scheduler.filters import availability_zone_filter
from nova import test
from nova.tests.unit.scheduler import fakes


@mock.patch('nova.scheduler.filters.utils.aggregate_metadata_get_by_host')
class TestAvailabilityZoneFilter(test.NoDBTestCase):

    def setUp(self):
        super(TestAvailabilityZoneFilter, self).setUp()
        self.filt_cls = availability_zone_filter.AvailabilityZoneFilter()

    @staticmethod
    def _make_zone_request(zone):
        return objects.RequestSpec(
            context=mock.sentinel.ctx,
            availability_zone=zone)

    def test_availability_zone_filter_same(self, agg_mock):
        agg_mock.return_value = {'availability_zone': set(['nova'])}
        request = self._make_zone_request('nova')
        host = fakes.FakeHostState('host1', 'node1', {})
        self.assertTrue(self.filt_cls.host_passes(host, request))

    def test_availability_zone_filter_same_comma(self, agg_mock):
        agg_mock.return_value = {'availability_zone': set(['nova', 'nova2'])}
        request = self._make_zone_request('nova')
        host = fakes.FakeHostState('host1', 'node1', {})
        self.assertTrue(self.filt_cls.host_passes(host, request))

    def test_availability_zone_filter_different(self, agg_mock):
        agg_mock.return_value = {'availability_zone': set(['nova'])}
        request = self._make_zone_request('bad')
        host = fakes.FakeHostState('host1', 'node1', {})
        self.assertFalse(self.filt_cls.host_passes(host, request))

nova-17.0.1/nova/tests/unit/scheduler/filters/test_isolated_hosts_filter.py

from nova import objects
from nova.scheduler.filters import isolated_hosts_filter
from nova import test
from nova.tests.unit.scheduler import fakes
from nova.tests import uuidsentinel as uuids


class TestIsolatedHostsFilter(test.NoDBTestCase):

    def setUp(self):
        super(TestIsolatedHostsFilter, self).setUp()
        self.filt_cls = isolated_hosts_filter.IsolatedHostsFilter()

    def _do_test_isolated_hosts(self, host_in_list, image_in_list,
            set_flags=True,
            restrict_isolated_hosts_to_isolated_images=True):
        if set_flags:
            self.flags(isolated_images=[uuids.image_ref],
                       isolated_hosts=['isolated_host'],
                       restrict_isolated_hosts_to_isolated_images=
                       restrict_isolated_hosts_to_isolated_images,
                       group='filter_scheduler')
        host_name = 'isolated_host' if host_in_list else 'free_host'
        image_ref = uuids.image_ref if image_in_list else uuids.fake_image_ref
        spec_obj = objects.RequestSpec(image=objects.ImageMeta(id=image_ref))
        host = fakes.FakeHostState(host_name, 'node', {})
        return self.filt_cls.host_passes(host, spec_obj)

    def test_isolated_hosts_fails_isolated_on_non_isolated(self):
        self.assertFalse(self._do_test_isolated_hosts(False, True))

    def test_isolated_hosts_fails_non_isolated_on_isolated(self):
        self.assertFalse(self._do_test_isolated_hosts(True, False))

    def test_isolated_hosts_passes_isolated_on_isolated(self):
        self.assertTrue(self._do_test_isolated_hosts(True, True))

    def test_isolated_hosts_passes_non_isolated_on_non_isolated(self):
        self.assertTrue(self._do_test_isolated_hosts(False, False))

    def test_isolated_hosts_no_config(self):
        # If there are no hosts nor isolated images in the config, it should
        # not filter at all. This is the default config.
        self.assertTrue(self._do_test_isolated_hosts(False, True, False))
        self.assertTrue(self._do_test_isolated_hosts(True, False, False))
        self.assertTrue(self._do_test_isolated_hosts(True, True, False))
        self.assertTrue(self._do_test_isolated_hosts(False, False, False))

    def test_isolated_hosts_no_hosts_config(self):
        self.flags(isolated_images=[uuids.image_ref],
                   group='filter_scheduler')
        # If there are no hosts in the config, it should only filter out
        # images that are listed
        self.assertFalse(self._do_test_isolated_hosts(False, True, False))
        self.assertTrue(self._do_test_isolated_hosts(True, False, False))
        self.assertFalse(self._do_test_isolated_hosts(True, True, False))
        self.assertTrue(self._do_test_isolated_hosts(False, False, False))

    def test_isolated_hosts_no_images_config(self):
        self.flags(isolated_hosts=['isolated_host'],
                   group='filter_scheduler')
        # If there are no images in the config, it should only filter out
        # isolated_hosts
        self.assertTrue(self._do_test_isolated_hosts(False, True, False))
        self.assertFalse(self._do_test_isolated_hosts(True, False, False))
        self.assertFalse(self._do_test_isolated_hosts(True, True, False))
        self.assertTrue(self._do_test_isolated_hosts(False, False, False))

    def test_isolated_hosts_less_restrictive(self):
        # If there are isolated hosts and non isolated images
        self.assertTrue(self._do_test_isolated_hosts(True, False, True,
                                                     False))
        # If there are isolated hosts and isolated images
        self.assertTrue(self._do_test_isolated_hosts(True, True, True,
                                                     False))
        # If there are non isolated hosts and non isolated images
        self.assertTrue(self._do_test_isolated_hosts(False, False, True,
                                                     False))
        # If there are non isolated hosts and isolated images
        self.assertFalse(self._do_test_isolated_hosts(False, True, True,
                                                      False))
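
The four single-assertion tests at the top of the class pin down the pass/fail matrix when both options are configured and restrict_isolated_hosts_to_isolated_images is True; summarized here for reference (derived from those tests, not from the nova tree):

    # host in isolated_hosts? | image in isolated_images? | host_passes?
    # ------------------------+---------------------------+-------------
    # no                      | yes                       | False
    # yes                     | no                        | False
    # yes                     | yes                       | True
    # no                      | no                        | True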

nova-17.0.1/nova/tests/unit/scheduler/filters/test_exact_ram_filter.py

from nova import objects
from nova.scheduler.filters import exact_ram_filter
from nova import test
from nova.tests.unit.scheduler import fakes


class TestRamFilter(test.NoDBTestCase):

    def setUp(self):
        super(TestRamFilter, self).setUp()
        self.filt_cls = exact_ram_filter.ExactRamFilter()

    def test_exact_ram_filter_passes(self):
        spec_obj = objects.RequestSpec(
            flavor=objects.Flavor(memory_mb=1024))
        ram_mb = 1024
        host = self._get_host({'free_ram_mb': ram_mb,
                               'total_usable_ram_mb': ram_mb})
        self.assertTrue(self.filt_cls.host_passes(host, spec_obj))
        self.assertEqual(host.limits.get('memory_mb'), ram_mb)

    def test_exact_ram_filter_fails(self):
        spec_obj = objects.RequestSpec(
            flavor=objects.Flavor(memory_mb=512))
        host = self._get_host({'free_ram_mb': 1024})
        self.assertFalse(self.filt_cls.host_passes(host, spec_obj))
        self.assertNotIn('memory_mb', host.limits)

    def _get_host(self, host_attributes):
        return fakes.FakeHostState('host1', 'node1', host_attributes)

nova-17.0.1/nova/tests/unit/scheduler/filters/test_core_filters.py

import mock

from nova import objects
from nova.scheduler.filters import core_filter
from nova import test
from nova.tests.unit.scheduler import fakes


class TestCoreFilter(test.NoDBTestCase):

    def test_core_filter_passes(self):
        self.filt_cls = core_filter.CoreFilter()
        spec_obj = objects.RequestSpec(flavor=objects.Flavor(vcpus=1))
        host = fakes.FakeHostState('host1', 'node1',
                                   {'vcpus_total': 4, 'vcpus_used': 7,
                                    'cpu_allocation_ratio': 2})
        self.assertTrue(self.filt_cls.host_passes(host, spec_obj))

    def test_core_filter_fails_safe(self):
        self.filt_cls = core_filter.CoreFilter()
        spec_obj = objects.RequestSpec(flavor=objects.Flavor(vcpus=1))
        host = fakes.FakeHostState('host1', 'node1', {})
        self.assertTrue(self.filt_cls.host_passes(host, spec_obj))

    def test_core_filter_fails(self):
        self.filt_cls = core_filter.CoreFilter()
        spec_obj = objects.RequestSpec(flavor=objects.Flavor(vcpus=1))
        host = fakes.FakeHostState('host1', 'node1',
                                   {'vcpus_total': 4, 'vcpus_used': 8,
                                    'cpu_allocation_ratio': 2})
        self.assertFalse(self.filt_cls.host_passes(host, spec_obj))

    def test_core_filter_single_instance_overcommit_fails(self):
        self.filt_cls = core_filter.CoreFilter()
        spec_obj = objects.RequestSpec(flavor=objects.Flavor(vcpus=2))
        host = fakes.FakeHostState('host1', 'node1',
                                   {'vcpus_total': 1, 'vcpus_used': 0,
                                    'cpu_allocation_ratio': 2})
        self.assertFalse(self.filt_cls.host_passes(host, spec_obj))

    @mock.patch('nova.scheduler.filters.utils.aggregate_values_from_key')
    def test_aggregate_core_filter_value_error(self, agg_mock):
        self.filt_cls = core_filter.AggregateCoreFilter()
        spec_obj = objects.RequestSpec(
            context=mock.sentinel.ctx,
            flavor=objects.Flavor(vcpus=1))
        host = fakes.FakeHostState('host1', 'node1',
                                   {'vcpus_total': 4, 'vcpus_used': 7,
                                    'cpu_allocation_ratio': 2})
        agg_mock.return_value = set(['XXX'])
        self.assertTrue(self.filt_cls.host_passes(host, spec_obj))
        agg_mock.assert_called_once_with(host, 'cpu_allocation_ratio')
        self.assertEqual(4 * 2, host.limits['vcpu'])

    @mock.patch('nova.scheduler.filters.utils.aggregate_values_from_key')
    def test_aggregate_core_filter_default_value(self, agg_mock):
        self.filt_cls = core_filter.AggregateCoreFilter()
        spec_obj = objects.RequestSpec(
            context=mock.sentinel.ctx,
            flavor=objects.Flavor(vcpus=1))
        host = fakes.FakeHostState('host1', 'node1',
                                   {'vcpus_total': 4, 'vcpus_used': 8,
                                    'cpu_allocation_ratio': 2})
        agg_mock.return_value = set([])
        # False: fallback to default flag w/o aggregates
        self.assertFalse(self.filt_cls.host_passes(host, spec_obj))
        agg_mock.assert_called_once_with(host, 'cpu_allocation_ratio')
        # True: use ratio from aggregates
        agg_mock.return_value = set(['3'])
        self.assertTrue(self.filt_cls.host_passes(host, spec_obj))
        self.assertEqual(4 * 3, host.limits['vcpu'])

    @mock.patch('nova.scheduler.filters.utils.aggregate_values_from_key')
    def test_aggregate_core_filter_conflict_values(self, agg_mock):
        self.filt_cls = core_filter.AggregateCoreFilter()
        spec_obj = objects.RequestSpec(
            context=mock.sentinel.ctx,
            flavor=objects.Flavor(vcpus=1))
        host = fakes.FakeHostState('host1', 'node1',
                                   {'vcpus_total': 4, 'vcpus_used': 8,
                                    'cpu_allocation_ratio': 1})
        agg_mock.return_value = set(['2', '3'])
        # use the minimum ratio from aggregates
        self.assertFalse(self.filt_cls.host_passes(host, spec_obj))
        self.assertEqual(4 * 2, host.limits['vcpu'])
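
Worked numbers for test_aggregate_core_filter_conflict_values above, spelled out (taken from the test itself, not from the nova tree):

    # vcpus_total = 4, vcpus_used = 8, aggregate ratios = {'2', '3'}
    # the filter takes the minimum ratio, so the limit is 4 * 2 = 8 vcpus;
    # all 8 are already used, so a 1-vcpu request fails, and
    # host.limits['vcpu'] is recorded as 8.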

nova-17.0.1/nova/tests/unit/scheduler/filters/test_num_instances_filters.py

import mock

from nova import objects
from nova.scheduler.filters import num_instances_filter
from nova import test
from nova.tests.unit.scheduler import fakes


class TestNumInstancesFilter(test.NoDBTestCase):

    def test_filter_num_instances_passes(self):
        self.flags(max_instances_per_host=5, group='filter_scheduler')
        self.filt_cls = num_instances_filter.NumInstancesFilter()
        host = fakes.FakeHostState('host1', 'node1', {'num_instances': 4})
        spec_obj = objects.RequestSpec()
        self.assertTrue(self.filt_cls.host_passes(host, spec_obj))

    def test_filter_num_instances_fails(self):
        self.flags(max_instances_per_host=5, group='filter_scheduler')
        self.filt_cls = num_instances_filter.NumInstancesFilter()
        host = fakes.FakeHostState('host1', 'node1', {'num_instances': 5})
        spec_obj = objects.RequestSpec()
        self.assertFalse(self.filt_cls.host_passes(host, spec_obj))

    @mock.patch('nova.scheduler.filters.utils.aggregate_values_from_key')
    def test_filter_aggregate_num_instances_value(self, agg_mock):
        self.flags(max_instances_per_host=4, group='filter_scheduler')
        self.filt_cls = num_instances_filter.AggregateNumInstancesFilter()
        host = fakes.FakeHostState('host1', 'node1', {'num_instances': 5})
        spec_obj = objects.RequestSpec(context=mock.sentinel.ctx)
        agg_mock.return_value = set([])
        # No aggregate defined for that host.
        self.assertFalse(self.filt_cls.host_passes(host, spec_obj))
        agg_mock.assert_called_once_with(host, 'max_instances_per_host')
        agg_mock.return_value = set(['6'])
        # Aggregate defined for that host.
        self.assertTrue(self.filt_cls.host_passes(host, spec_obj))

    @mock.patch('nova.scheduler.filters.utils.aggregate_values_from_key')
    def test_filter_aggregate_num_instances_value_error(self, agg_mock):
        self.flags(max_instances_per_host=6, group='filter_scheduler')
        self.filt_cls = num_instances_filter.AggregateNumInstancesFilter()
        host = fakes.FakeHostState('host1', 'node1', {})
        spec_obj = objects.RequestSpec(context=mock.sentinel.ctx)
        agg_mock.return_value = set(['XXX'])
        self.assertTrue(self.filt_cls.host_passes(host, spec_obj))
        agg_mock.assert_called_once_with(host, 'max_instances_per_host')

nova-17.0.1/nova/tests/unit/scheduler/filters/test_exact_disk_filter.py

from nova import objects
from nova.scheduler.filters import exact_disk_filter
from nova import test
from nova.tests.unit.scheduler import fakes


class TestExactDiskFilter(test.NoDBTestCase):

    def setUp(self):
        super(TestExactDiskFilter, self).setUp()
        self.filt_cls = exact_disk_filter.ExactDiskFilter()

    def test_exact_disk_filter_passes(self):
        spec_obj = objects.RequestSpec(
            flavor=objects.Flavor(root_gb=1, ephemeral_gb=1, swap=1024))
        disk_gb = 3
        host = self._get_host({'free_disk_mb': disk_gb * 1024,
                               'total_usable_disk_gb': disk_gb})
        self.assertTrue(self.filt_cls.host_passes(host, spec_obj))
        self.assertEqual(host.limits.get('disk_gb'), disk_gb)

    def test_exact_disk_filter_fails(self):
        spec_obj = objects.RequestSpec(
            flavor=objects.Flavor(root_gb=1, ephemeral_gb=1, swap=1024))
        host = self._get_host({'free_disk_mb': 2 * 1024})
        self.assertFalse(self.filt_cls.host_passes(host, spec_obj))
        self.assertNotIn('disk_gb', host.limits)

    def _get_host(self, host_attributes):
        return fakes.FakeHostState('host1', 'node1', host_attributes)

nova-17.0.1/nova/tests/unit/scheduler/filters/test_metrics_filters.py

import datetime

from nova import objects
from nova.scheduler.filters import metrics_filter
from nova import test
from nova.tests.unit.scheduler import fakes


class TestMetricsFilter(test.NoDBTestCase):

    def test_metrics_filter_pass(self):
        _ts_now = datetime.datetime(2015, 11, 11, 11, 0, 0)
        obj1 = objects.MonitorMetric(name='cpu.frequency',
                                     value=1000,
                                     timestamp=_ts_now,
                                     source='nova.virt.libvirt.driver')
        obj2 = objects.MonitorMetric(name='numa.membw.current',
                                     numa_membw_values={"0": 10, "1": 43},
                                     timestamp=_ts_now,
                                     source='nova.virt.libvirt.driver')
        metrics_list = objects.MonitorMetricList(objects=[obj1, obj2])
        self.flags(weight_setting=['cpu.frequency=1',
                                   'numa.membw.current=2'],
                   group='metrics')
        filt_cls = metrics_filter.MetricsFilter()
        host = fakes.FakeHostState('host1', 'node1',
                                   attribute_dict={'metrics': metrics_list})
        self.assertTrue(filt_cls.host_passes(host, None))

    def test_metrics_filter_missing_metrics(self):
        _ts_now = datetime.datetime(2015, 11, 11, 11, 0, 0)
        obj1 = objects.MonitorMetric(name='cpu.frequency',
                                     value=1000,
                                     timestamp=_ts_now,
                                     source='nova.virt.libvirt.driver')
        metrics_list = objects.MonitorMetricList(objects=[obj1])
        self.flags(weight_setting=['foo=1', 'bar=2'], group='metrics')
        filt_cls = metrics_filter.MetricsFilter()
        host = fakes.FakeHostState('host1', 'node1',
                                   attribute_dict={'metrics': metrics_list})
        self.assertFalse(filt_cls.host_passes(host, None))

nova-17.0.1/nova/tests/unit/scheduler/filters/test_disk_filters.py

import mock

from nova import objects
from nova.scheduler.filters import disk_filter
from nova import test
from nova.tests.unit.scheduler import fakes


class TestDiskFilter(test.NoDBTestCase):

    def test_disk_filter_passes(self):
        filt_cls = disk_filter.DiskFilter()
        spec_obj = objects.RequestSpec(
            flavor=objects.Flavor(root_gb=1, ephemeral_gb=1, swap=512))
        host = fakes.FakeHostState('host1', 'node1',
                                   {'free_disk_mb': 11 * 1024,
                                    'total_usable_disk_gb': 13,
                                    'disk_allocation_ratio': 1.0})
        self.assertTrue(filt_cls.host_passes(host, spec_obj))

    def test_disk_filter_fails(self):
        filt_cls = disk_filter.DiskFilter()
        spec_obj = objects.RequestSpec(
            flavor=objects.Flavor(root_gb=10, ephemeral_gb=1, swap=1024))
        host = fakes.FakeHostState('host1', 'node1',
                                   {'free_disk_mb': 11 * 1024,
                                    'total_usable_disk_gb': 13,
                                    'disk_allocation_ratio': 1.0})
        self.assertFalse(filt_cls.host_passes(host, spec_obj))

    def test_disk_filter_oversubscribe(self):
        filt_cls = disk_filter.DiskFilter()
        spec_obj = objects.RequestSpec(
            flavor=objects.Flavor(root_gb=3, ephemeral_gb=3, swap=1024))
        # Only 1Gb left, but with 10x overprovision a 7Gb instance should
        # still fit. Schedule will succeed.
        host = fakes.FakeHostState('host1', 'node1',
                                   {'free_disk_mb': 1 * 1024,
                                    'total_usable_disk_gb': 12,
                                    'disk_allocation_ratio': 10.0})
        self.assertTrue(filt_cls.host_passes(host, spec_obj))
        self.assertEqual(12 * 10.0, host.limits['disk_gb'])

    def test_disk_filter_oversubscribe_single_instance_fails(self):
        filt_cls = disk_filter.DiskFilter()
        spec_obj = objects.RequestSpec(
            flavor=objects.Flavor(root_gb=10, ephemeral_gb=2, swap=1024))
        # According to the allocation ratio, this host has 119 Gb left,
        # but it doesn't matter because the requested instance is
        # bigger than the whole drive. Schedule will fail.
        host = fakes.FakeHostState('host1', 'node1',
                                   {'free_disk_mb': 11 * 1024,
                                    'total_usable_disk_gb': 12,
                                    'disk_allocation_ratio': 10.0})
        self.assertFalse(filt_cls.host_passes(host, spec_obj))

    def test_disk_filter_oversubscribe_fail(self):
        filt_cls = disk_filter.DiskFilter()
        spec_obj = objects.RequestSpec(
            flavor=objects.Flavor(root_gb=100, ephemeral_gb=19, swap=1024))
        # 1GB used... so 119GB allowed...
        host = fakes.FakeHostState('host1', 'node1',
                                   {'free_disk_mb': 11 * 1024,
                                    'total_usable_disk_gb': 12,
                                    'disk_allocation_ratio': 10.0})
        self.assertFalse(filt_cls.host_passes(host, spec_obj))

    @mock.patch('nova.scheduler.filters.utils.aggregate_values_from_key')
    def test_aggregate_disk_filter_value_error(self, agg_mock):
        filt_cls = disk_filter.AggregateDiskFilter()
        spec_obj = objects.RequestSpec(
            context=mock.sentinel.ctx,
            flavor=objects.Flavor(root_gb=1, ephemeral_gb=1, swap=1024))
        host = fakes.FakeHostState('host1', 'node1',
                                   {'free_disk_mb': 3 * 1024,
                                    'total_usable_disk_gb': 4,
                                    'disk_allocation_ratio': 1.0})
        agg_mock.return_value = set(['XXX'])
        self.assertTrue(filt_cls.host_passes(host, spec_obj))
        agg_mock.assert_called_once_with(host, 'disk_allocation_ratio')

    @mock.patch('nova.scheduler.filters.utils.aggregate_values_from_key')
    def test_aggregate_disk_filter_default_value(self, agg_mock):
        filt_cls = disk_filter.AggregateDiskFilter()
        spec_obj = objects.RequestSpec(
            context=mock.sentinel.ctx,
            flavor=objects.Flavor(root_gb=2, ephemeral_gb=1, swap=1024))
        host = fakes.FakeHostState('host1', 'node1',
                                   {'free_disk_mb': 3 * 1024,
                                    'total_usable_disk_gb': 4,
                                    'disk_allocation_ratio': 1.0})
        # Uses global conf.
        agg_mock.return_value = set([])
        self.assertFalse(filt_cls.host_passes(host, spec_obj))
        agg_mock.assert_called_once_with(host, 'disk_allocation_ratio')
        agg_mock.return_value = set(['2'])
        self.assertTrue(filt_cls.host_passes(host, spec_obj))
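
The oversubscription arithmetic in test_disk_filter_oversubscribe above, spelled out (numbers taken from the test, not from the nova tree):

    # free_disk_mb = 1024                -> 1 GB actually free
    # total_usable_disk_gb = 12, disk_allocation_ratio = 10.0
    # limit     = 12 * 10.0 = 120 GB     (recorded in host.limits['disk_gb'])
    # used      = 12 - 1 = 11 GB
    # requested = 3 root + 3 ephemeral + 1 swap = 7 GB
    # 11 + 7 = 18 GB <= 120 GB, so the host passes despite only 1 GB free.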

nova-17.0.1/nova/tests/unit/scheduler/filters/test_retry_filters.py

from nova import objects
from nova.scheduler.filters import retry_filter
from nova import test
from nova.tests.unit.scheduler import fakes


class TestRetryFilter(test.NoDBTestCase):

    def setUp(self):
        super(TestRetryFilter, self).setUp()
        self.filt_cls = retry_filter.RetryFilter()

    def test_retry_filter_disabled(self):
        # Test case where retry/re-scheduling is disabled.
        host = fakes.FakeHostState('host1', 'node1', {})
        spec_obj = objects.RequestSpec(retry=None)
        self.assertTrue(self.filt_cls.host_passes(host, spec_obj))

    def test_retry_filter_pass(self):
        # Node not previously tried.
        host = fakes.FakeHostState('host1', 'nodeX', {})
        retry = objects.SchedulerRetries(
            num_attempts=2,
            hosts=objects.ComputeNodeList(objects=[
                # same host, different node
                objects.ComputeNode(host='host1',
                                    hypervisor_hostname='node1'),
                # different host and node
                objects.ComputeNode(host='host2',
                                    hypervisor_hostname='node2'),
            ]))
        spec_obj = objects.RequestSpec(retry=retry)
        self.assertTrue(self.filt_cls.host_passes(host, spec_obj))

    def test_retry_filter_fail(self):
        # Node was already tried.
        host = fakes.FakeHostState('host1', 'node1', {})
        retry = objects.SchedulerRetries(
            num_attempts=1,
            hosts=objects.ComputeNodeList(objects=[
                objects.ComputeNode(host='host1',
                                    hypervisor_hostname='node1')
            ]))
        spec_obj = objects.RequestSpec(retry=retry)
        self.assertFalse(self.filt_cls.host_passes(host, spec_obj))

nova-17.0.1/nova/tests/unit/scheduler/filters/test_numa_topology_filters.py

import itertools

import mock

from nova import objects
from nova.objects import fields
from nova.scheduler.filters import numa_topology_filter
from nova import test
from nova.tests.unit.scheduler import fakes
from nova.tests import uuidsentinel as uuids


class TestNUMATopologyFilter(test.NoDBTestCase):

    def setUp(self):
        super(TestNUMATopologyFilter, self).setUp()
        self.filt_cls = numa_topology_filter.NUMATopologyFilter()

    def _get_spec_obj(self, numa_topology):
        image_meta = objects.ImageMeta(properties=objects.ImageMetaProps())
        spec_obj = objects.RequestSpec(numa_topology=numa_topology,
                                       pci_requests=None,
                                       instance_uuid=uuids.fake,
                                       flavor=objects.Flavor(extra_specs={}),
                                       image=image_meta)
        return spec_obj

    def test_numa_topology_filter_pass(self):
        instance_topology = objects.InstanceNUMATopology(
            cells=[objects.InstanceNUMACell(id=0, cpuset=set([1]),
                                            memory=512),
                   objects.InstanceNUMACell(id=1, cpuset=set([3]),
                                            memory=512)])
        spec_obj = self._get_spec_obj(numa_topology=instance_topology)
        host = fakes.FakeHostState('host1', 'node1',
                                   {'numa_topology': fakes.NUMA_TOPOLOGY,
                                    'pci_stats': None,
                                    'cpu_allocation_ratio': 16.0,
                                    'ram_allocation_ratio': 1.5})
        self.assertTrue(self.filt_cls.host_passes(host, spec_obj))

    def test_numa_topology_filter_numa_instance_no_numa_host_fail(self):
        instance_topology = objects.InstanceNUMATopology(
            cells=[objects.InstanceNUMACell(id=0, cpuset=set([1]),
                                            memory=512),
                   objects.InstanceNUMACell(id=1, cpuset=set([3]),
                                            memory=512)])
        spec_obj = self._get_spec_obj(numa_topology=instance_topology)
        host = fakes.FakeHostState('host1', 'node1', {'pci_stats': None})
        self.assertFalse(self.filt_cls.host_passes(host, spec_obj))

    def test_numa_topology_filter_numa_host_no_numa_instance_pass(self):
        spec_obj = self._get_spec_obj(numa_topology=None)
        host = fakes.FakeHostState('host1', 'node1',
                                   {'numa_topology': fakes.NUMA_TOPOLOGY})
        self.assertTrue(self.filt_cls.host_passes(host, spec_obj))

    def test_numa_topology_filter_fail_fit(self):
        instance_topology = objects.InstanceNUMATopology(
            cells=[objects.InstanceNUMACell(id=0, cpuset=set([1]),
                                            memory=512),
                   objects.InstanceNUMACell(id=1, cpuset=set([2]),
                                            memory=512),
                   objects.InstanceNUMACell(id=2, cpuset=set([3]),
                                            memory=512)])
        spec_obj = self._get_spec_obj(numa_topology=instance_topology)
        host = fakes.FakeHostState('host1', 'node1',
                                   {'numa_topology': fakes.NUMA_TOPOLOGY,
                                    'pci_stats': None,
                                    'cpu_allocation_ratio': 16.0,
                                    'ram_allocation_ratio': 1.5})
        self.assertFalse(self.filt_cls.host_passes(host, spec_obj))

    def test_numa_topology_filter_fail_memory(self):
        instance_topology = objects.InstanceNUMATopology(
            cells=[objects.InstanceNUMACell(id=0, cpuset=set([1]),
                                            memory=1024),
                   objects.InstanceNUMACell(id=1, cpuset=set([3]),
                                            memory=512)])
        spec_obj = self._get_spec_obj(numa_topology=instance_topology)
        host = fakes.FakeHostState('host1', 'node1',
                                   {'numa_topology': fakes.NUMA_TOPOLOGY,
                                    'pci_stats': None,
                                    'cpu_allocation_ratio': 16.0,
                                    'ram_allocation_ratio': 1})
        self.assertFalse(self.filt_cls.host_passes(host, spec_obj))

    def test_numa_topology_filter_fail_cpu(self):
        instance_topology = objects.InstanceNUMATopology(
            cells=[objects.InstanceNUMACell(id=0, cpuset=set([1]),
                                            memory=512),
                   objects.InstanceNUMACell(id=1, cpuset=set([3, 4, 5]),
                                            memory=512)])
        spec_obj = self._get_spec_obj(numa_topology=instance_topology)
        host = fakes.FakeHostState('host1', 'node1',
                                   {'numa_topology': fakes.NUMA_TOPOLOGY,
                                    'pci_stats': None,
                                    'cpu_allocation_ratio': 1,
                                    'ram_allocation_ratio': 1.5})
        self.assertFalse(self.filt_cls.host_passes(host, spec_obj))

    def test_numa_topology_filter_pass_set_limit(self):
        instance_topology = objects.InstanceNUMATopology(
            cells=[objects.InstanceNUMACell(id=0, cpuset=set([1]),
                                            memory=512),
                   objects.InstanceNUMACell(id=1, cpuset=set([3]),
                                            memory=512)])
        spec_obj = self._get_spec_obj(numa_topology=instance_topology)
        host = fakes.FakeHostState('host1', 'node1',
                                   {'numa_topology': fakes.NUMA_TOPOLOGY,
                                    'pci_stats': None,
                                    'cpu_allocation_ratio': 21,
                                    'ram_allocation_ratio': 1.3})
        self.assertTrue(self.filt_cls.host_passes(host, spec_obj))
        limits = host.limits['numa_topology']
        self.assertEqual(limits.cpu_allocation_ratio, 21)
        self.assertEqual(limits.ram_allocation_ratio, 1.3)

    @mock.patch('nova.objects.instance_numa_topology.InstanceNUMACell'
                '.cpu_pinning_requested',
                return_value=True)
    def _do_test_numa_topology_filter_cpu_policy(
            self, numa_topology, cpu_policy, cpu_thread_policy, passes,
            mock_pinning_requested):
        instance_topology = objects.InstanceNUMATopology(
            cells=[objects.InstanceNUMACell(id=0, cpuset=set([1]),
                                            memory=512),
                   objects.InstanceNUMACell(id=1, cpuset=set([3]),
                                            memory=512)])
        spec_obj = objects.RequestSpec(numa_topology=instance_topology,
                                       pci_requests=None,
                                       instance_uuid=uuids.fake)
        extra_specs = [
            {},
            {
                'hw:cpu_policy': cpu_policy,
                'hw:cpu_thread_policy': cpu_thread_policy,
            }
        ]
        image_props = [
            {},
            {
                'hw_cpu_policy': cpu_policy,
                'hw_cpu_thread_policy': cpu_thread_policy,
            }
        ]
        host = fakes.FakeHostState('host1', 'node1', {
            'numa_topology': numa_topology,
            'pci_stats': None,
            'cpu_allocation_ratio': 1,
            'ram_allocation_ratio': 1.5})
        assertion = self.assertTrue if passes else self.assertFalse

        # test combinations of image properties and extra specs
        for specs, props in itertools.product(extra_specs, image_props):
            # ...except for the one where no policy is specified
            if specs == props == {}:
                continue

            fake_flavor = objects.Flavor(memory_mb=1024, extra_specs=specs)
            fake_image_props = objects.ImageMetaProps(**props)
            fake_image = objects.ImageMeta(properties=fake_image_props)

            spec_obj.image = fake_image
            spec_obj.flavor = fake_flavor

            assertion(self.filt_cls.host_passes(host, spec_obj))
            self.assertIsNone(spec_obj.numa_topology.cells[0].cpu_pinning)

    def test_numa_topology_filter_fail_cpu_thread_policy_require(self):
        cpu_policy = fields.CPUAllocationPolicy.DEDICATED
        cpu_thread_policy = fields.CPUThreadAllocationPolicy.REQUIRE
        numa_topology = fakes.NUMA_TOPOLOGY

        self._do_test_numa_topology_filter_cpu_policy(
            numa_topology, cpu_policy, cpu_thread_policy, False)

    def test_numa_topology_filter_pass_cpu_thread_policy_require(self):
        cpu_policy = fields.CPUAllocationPolicy.DEDICATED
        cpu_thread_policy = fields.CPUThreadAllocationPolicy.REQUIRE

        for numa_topology in fakes.NUMA_TOPOLOGIES_W_HT:
            self._do_test_numa_topology_filter_cpu_policy(
                numa_topology, cpu_policy, cpu_thread_policy, True)

    def test_numa_topology_filter_pass_cpu_thread_policy_others(self):
        cpu_policy = fields.CPUAllocationPolicy.DEDICATED
        cpu_thread_policy = fields.CPUThreadAllocationPolicy.PREFER
        numa_topology = fakes.NUMA_TOPOLOGY

        for cpu_thread_policy in [
                fields.CPUThreadAllocationPolicy.PREFER,
                fields.CPUThreadAllocationPolicy.ISOLATE]:
            self._do_test_numa_topology_filter_cpu_policy(
                numa_topology, cpu_policy, cpu_thread_policy, True)

    def test_numa_topology_filter_pass_mempages(self):
        instance_topology = objects.InstanceNUMATopology(
            cells=[objects.InstanceNUMACell(id=0, cpuset=set([3]),
                                            memory=128, pagesize=4),
                   objects.InstanceNUMACell(id=1, cpuset=set([1]),
                                            memory=128, pagesize=16)])
        spec_obj = self._get_spec_obj(numa_topology=instance_topology)
        host = fakes.FakeHostState('host1', 'node1',
                                   {'numa_topology': fakes.NUMA_TOPOLOGY,
                                    'pci_stats': None,
                                    'cpu_allocation_ratio': 16.0,
                                    'ram_allocation_ratio': 1.5})
        self.assertTrue(self.filt_cls.host_passes(host, spec_obj))

    def test_numa_topology_filter_fail_mempages(self):
        instance_topology = objects.InstanceNUMATopology(
            cells=[objects.InstanceNUMACell(id=0, cpuset=set([3]),
                                            memory=128, pagesize=8),
                   objects.InstanceNUMACell(id=1, cpuset=set([1]),
                                            memory=128, pagesize=16)])
        spec_obj = self._get_spec_obj(numa_topology=instance_topology)
        host = fakes.FakeHostState('host1', 'node1',
                                   {'numa_topology': fakes.NUMA_TOPOLOGY,
                                    'pci_stats': None,
                                    'cpu_allocation_ratio': 16.0,
                                    'ram_allocation_ratio': 1.5})
        self.assertFalse(self.filt_cls.host_passes(host, spec_obj))

nova-17.0.1/nova/tests/unit/scheduler/filters/test_type_filters.py

import mock

from nova import objects
from nova.scheduler.filters import type_filter
from nova import test
from nova.tests.unit.scheduler import fakes


class TestTypeFilter(test.NoDBTestCase):

    @mock.patch('nova.scheduler.filters.utils.aggregate_values_from_key')
    def test_aggregate_type_filter_no_metadata(self, agg_mock):
        self.filt_cls = type_filter.AggregateTypeAffinityFilter()
        spec_obj = objects.RequestSpec(
            context=mock.sentinel.ctx,
            flavor=objects.Flavor(name='fake1'))
        host = fakes.FakeHostState('fake_host', 'fake_node', {})
        # tests when no instance_type is defined for aggregate
        agg_mock.return_value = set([])
        # True as no instance_type set for aggregate
        self.assertTrue(self.filt_cls.host_passes(host, spec_obj))
        agg_mock.assert_called_once_with(host, 'instance_type')

    @mock.patch('nova.scheduler.filters.utils.aggregate_values_from_key')
    def test_aggregate_type_filter_single_instance_type(self, agg_mock):
        self.filt_cls = type_filter.AggregateTypeAffinityFilter()
        spec_obj = objects.RequestSpec(
            context=mock.sentinel.ctx,
            flavor=objects.Flavor(name='fake1'))
        spec_obj2 = objects.RequestSpec(
            context=mock.sentinel.ctx,
            flavor=objects.Flavor(name='fake2'))
        host = fakes.FakeHostState('fake_host', 'fake_node', {})
        # tests when a single instance_type is defined for an aggregate
        # using legacy single value syntax
        agg_mock.return_value = set(['fake1'])
        # True as instance_type is allowed for aggregate
        self.assertTrue(self.filt_cls.host_passes(host, spec_obj))
        # False as instance_type is not allowed for aggregate
        self.assertFalse(self.filt_cls.host_passes(host, spec_obj2))

    @mock.patch('nova.scheduler.filters.utils.aggregate_values_from_key')
    def test_aggregate_type_filter_multi_aggregate(self, agg_mock):
        self.filt_cls = type_filter.AggregateTypeAffinityFilter()
        spec_obj = objects.RequestSpec(
            context=mock.sentinel.ctx,
            flavor=objects.Flavor(name='fake1'))
        spec_obj2 = objects.RequestSpec(
            context=mock.sentinel.ctx,
            flavor=objects.Flavor(name='fake2'))
        spec_obj3 = objects.RequestSpec(
            context=mock.sentinel.ctx,
            flavor=objects.Flavor(name='fake3'))
        host = fakes.FakeHostState('fake_host', 'fake_node', {})
        # tests when a single instance_type is defined for multiple aggregates
        # using legacy single value syntax
        agg_mock.return_value = set(['fake1', 'fake2'])
        # True as instance_type is allowed for first aggregate
        self.assertTrue(self.filt_cls.host_passes(host, spec_obj))
        # True as instance_type is allowed for second aggregate
        self.assertTrue(self.filt_cls.host_passes(host, spec_obj2))
        # False as instance_type is not allowed for aggregates
        self.assertFalse(self.filt_cls.host_passes(host, spec_obj3))

    @mock.patch('nova.scheduler.filters.utils.aggregate_values_from_key')
    def test_aggregate_type_filter_multi_instance_type(self, agg_mock):
        self.filt_cls = type_filter.AggregateTypeAffinityFilter()
        spec_obj = objects.RequestSpec(
            context=mock.sentinel.ctx,
            flavor=objects.Flavor(name='fake1'))
        spec_obj2 = objects.RequestSpec(
            context=mock.sentinel.ctx,
            flavor=objects.Flavor(name='fake2'))
        spec_obj3 = objects.RequestSpec(
            context=mock.sentinel.ctx,
            flavor=objects.Flavor(name='fake3'))
        host = fakes.FakeHostState('fake_host', 'fake_node', {})
        # tests when multiple instance_types are defined for aggregate
        agg_mock.return_value = set(['fake1,fake2'])
        # True as instance_type is allowed for aggregate
        self.assertTrue(self.filt_cls.host_passes(host, spec_obj))
        # True as instance_type is allowed for aggregate
        self.assertTrue(self.filt_cls.host_passes(host, spec_obj2))
        # False as instance_type is not allowed for aggregate
        self.assertFalse(self.filt_cls.host_passes(host, spec_obj3))

nova-17.0.1/nova/tests/unit/scheduler/filters/test_aggregate_image_properties_isolation_filters.py

import mock

from nova import objects
from nova.scheduler.filters import aggregate_image_properties_isolation as aipi
from nova import test
from nova.tests.unit.scheduler import fakes


@mock.patch('nova.scheduler.filters.utils.aggregate_metadata_get_by_host')
class TestAggImagePropsIsolationFilter(test.NoDBTestCase):

    def setUp(self):
        super(TestAggImagePropsIsolationFilter, self).setUp()
        self.filt_cls = aipi.AggregateImagePropertiesIsolation()

    def test_aggregate_image_properties_isolation_passes(self, agg_mock):
        agg_mock.return_value = {'hw_vm_mode': set(['hvm'])}
        spec_obj = objects.RequestSpec(
            context=mock.sentinel.ctx,
            image=objects.ImageMeta(properties=objects.ImageMetaProps(
                hw_vm_mode='hvm')))
        host = fakes.FakeHostState('host1', 'compute', {})
        self.assertTrue(self.filt_cls.host_passes(host, spec_obj))

    def test_aggregate_image_properties_isolation_passes_comma(self,
                                                               agg_mock):
        agg_mock.return_value = {'hw_vm_mode': set(['hvm', 'xen'])}
        spec_obj = objects.RequestSpec(
            context=mock.sentinel.ctx,
            image=objects.ImageMeta(properties=objects.ImageMetaProps(
                hw_vm_mode='hvm')))
        host = fakes.FakeHostState('host1', 'compute', {})
        self.assertTrue(self.filt_cls.host_passes(host, spec_obj))

    def test_aggregate_image_properties_isolation_props_bad_comma(self,
                                                                  agg_mock):
        agg_mock.return_value = {'os_distro': set(['windows', 'linux'])}
        spec_obj = objects.RequestSpec(
            context=mock.sentinel.ctx,
            image=objects.ImageMeta(properties=objects.ImageMetaProps(
                os_distro='windows,')))
        host = fakes.FakeHostState('host1', 'compute', {})
        self.assertFalse(self.filt_cls.host_passes(host, spec_obj))

    def test_aggregate_image_properties_isolation_multi_props_passes(self,
            agg_mock):
        agg_mock.return_value = {'hw_vm_mode': set(['hvm']),
                                 'hw_cpu_cores': set(['2'])}
        spec_obj = objects.RequestSpec(
            context=mock.sentinel.ctx,
            image=objects.ImageMeta(properties=objects.ImageMetaProps(
                hw_vm_mode='hvm', hw_cpu_cores=2)))
        host = fakes.FakeHostState('host1', 'compute', {})
        self.assertTrue(self.filt_cls.host_passes(host, spec_obj))

    def test_aggregate_image_properties_isolation_props_with_meta_passes(self,
            agg_mock):
        agg_mock.return_value = {'hw_vm_mode': set(['hvm'])}
        spec_obj = objects.RequestSpec(
            context=mock.sentinel.ctx,
            image=objects.ImageMeta(properties=objects.ImageMetaProps()))
        host = fakes.FakeHostState('host1', 'compute', {})
        self.assertTrue(self.filt_cls.host_passes(host, spec_obj))

    def test_aggregate_image_properties_isolation_props_imgprops_passes(self,
            agg_mock):
        agg_mock.return_value = {}
        spec_obj = objects.RequestSpec(
            context=mock.sentinel.ctx,
            image=objects.ImageMeta(properties=objects.ImageMetaProps(
                hw_vm_mode='hvm')))
        host = fakes.FakeHostState('host1', 'compute', {})
        self.assertTrue(self.filt_cls.host_passes(host, spec_obj))

    def test_aggregate_image_properties_isolation_props_not_match_fails(self,
            agg_mock):
        agg_mock.return_value = {'hw_vm_mode': set(['hvm'])}
        spec_obj = objects.RequestSpec(
            context=mock.sentinel.ctx,
            image=objects.ImageMeta(properties=objects.ImageMetaProps(
                hw_vm_mode='xen')))
        host = fakes.FakeHostState('host1', 'compute', {})
        self.assertFalse(self.filt_cls.host_passes(host, spec_obj))

    def test_aggregate_image_properties_isolation_props_not_match2_fails(self,
            agg_mock):
        agg_mock.return_value = {'hw_vm_mode': set(['hvm']),
                                 'hw_cpu_cores': set(['1'])}
        spec_obj = objects.RequestSpec(
            context=mock.sentinel.ctx,
            image=objects.ImageMeta(properties=objects.ImageMetaProps(
                hw_vm_mode='hvm', hw_cpu_cores=2)))
        host = fakes.FakeHostState('host1', 'compute', {})
        self.assertFalse(self.filt_cls.host_passes(host, spec_obj))

    def test_aggregate_image_properties_isolation_props_namespace(self,
            agg_mock):
        self.flags(aggregate_image_properties_isolation_namespace='hw',
                   group='filter_scheduler')
        self.flags(aggregate_image_properties_isolation_separator='_',
                   group='filter_scheduler')
        agg_mock.return_value = {'hw_vm_mode': set(['hvm']),
                                 'img_owner_id': set(['foo'])}
        spec_obj = objects.RequestSpec(
            context=mock.sentinel.ctx,
            image=objects.ImageMeta(properties=objects.ImageMetaProps(
                hw_vm_mode='hvm', img_owner_id='wrong')))
        host = fakes.FakeHostState('host1', 'compute', {})
        self.assertTrue(self.filt_cls.host_passes(host, spec_obj))

    def test_aggregate_image_properties_iso_props_with_custom_meta(self,
            agg_mock):
        agg_mock.return_value = {'os': set(['linux'])}
        spec_obj = objects.RequestSpec(
            context=mock.sentinel.ctx,
            image=objects.ImageMeta(properties=objects.ImageMetaProps(
                os_type='linux')))
        host = fakes.FakeHostState('host1', 'compute', {})
        self.assertTrue(self.filt_cls.host_passes(host, spec_obj))

    def test_aggregate_image_properties_iso_props_with_matching_meta_pass(
            self, agg_mock):
        agg_mock.return_value = {'os_type': set(['linux'])}
        spec_obj = objects.RequestSpec(
            context=mock.sentinel.ctx,
            image=objects.ImageMeta(properties=objects.ImageMetaProps(
                os_type='linux')))
        host = fakes.FakeHostState('host1', 'compute', {})
        self.assertTrue(self.filt_cls.host_passes(host, spec_obj))

    def test_aggregate_image_properties_iso_props_with_matching_meta_fail(
            self, agg_mock):
        agg_mock.return_value = {'os_type': set(['windows'])}
        spec_obj = objects.RequestSpec(
            context=mock.sentinel.ctx,
            image=objects.ImageMeta(properties=objects.ImageMetaProps(
                os_type='linux')))
        host = fakes.FakeHostState('host1', 'compute', {})
        self.assertFalse(self.filt_cls.host_passes(host, spec_obj))

nova-17.0.1/nova/tests/unit/scheduler/filters/__init__.py

nova-17.0.1/nova/tests/unit/scheduler/filters/test_io_ops_filters.py

import mock

from nova import objects
from nova.scheduler.filters import io_ops_filter
from nova import test
from nova.tests.unit.scheduler import fakes


class TestIoOpsFilter(test.NoDBTestCase):

    def test_filter_num_iops_passes(self):
        self.flags(max_io_ops_per_host=8, group='filter_scheduler')
        self.filt_cls = io_ops_filter.IoOpsFilter()
        host = fakes.FakeHostState('host1', 'node1', {'num_io_ops': 7})
        spec_obj = objects.RequestSpec()
        self.assertTrue(self.filt_cls.host_passes(host, spec_obj))

    def test_filter_num_iops_fails(self):
        self.flags(max_io_ops_per_host=8, group='filter_scheduler')
        self.filt_cls = io_ops_filter.IoOpsFilter()
        host = fakes.FakeHostState('host1', 'node1', {'num_io_ops': 8})
        spec_obj = objects.RequestSpec()
        self.assertFalse(self.filt_cls.host_passes(host, spec_obj))

    @mock.patch('nova.scheduler.filters.utils.aggregate_values_from_key')
    def test_aggregate_filter_num_iops_value(self, agg_mock):
        self.flags(max_io_ops_per_host=7, group='filter_scheduler')
        self.filt_cls = io_ops_filter.AggregateIoOpsFilter()
        host = fakes.FakeHostState('host1', 'node1', {'num_io_ops': 7})
        spec_obj = objects.RequestSpec(context=mock.sentinel.ctx)
        agg_mock.return_value = set([])
        self.assertFalse(self.filt_cls.host_passes(host, spec_obj))
        agg_mock.assert_called_once_with(host, 'max_io_ops_per_host')
        agg_mock.return_value = set(['8'])
        self.assertTrue(self.filt_cls.host_passes(host, spec_obj))

    @mock.patch('nova.scheduler.filters.utils.aggregate_values_from_key')
    def test_aggregate_filter_num_iops_value_error(self, agg_mock):
        self.flags(max_io_ops_per_host=8, group='filter_scheduler')
        self.filt_cls = io_ops_filter.AggregateIoOpsFilter()
        host = fakes.FakeHostState('host1', 'node1', {'num_io_ops': 7})
        agg_mock.return_value = set(['XXX'])
        spec_obj = objects.RequestSpec(context=mock.sentinel.ctx)
        self.assertTrue(self.filt_cls.host_passes(host, spec_obj))
        agg_mock.assert_called_once_with(host, 'max_io_ops_per_host')
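
Both aggregate tests above encode the same fallback rule: when no aggregate value is set, or the value cannot be parsed as a number, the filter falls back to the max_io_ops_per_host config option (a summary derived from the tests themselves, not from the nova tree):

    # test_aggregate_filter_num_iops_value: max_io_ops_per_host=7, num_io_ops=7
    #   aggregate unset (set([])) -> config 7 applies, 7 < 7 is false -> fails
    #   aggregate '8'             -> 7 < 8                            -> passes
    # test_aggregate_filter_num_iops_value_error: aggregate 'XXX' cannot be
    #   parsed, so the config default (8) applies and the host passes.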

nova-17.0.1/nova/tests/unit/scheduler/filters/test_compute_filters.py

import mock

from nova import objects
from nova.scheduler.filters import compute_filter
from nova import test
from nova.tests.unit.scheduler import fakes


@mock.patch('nova.servicegroup.API.service_is_up')
class TestComputeFilter(test.NoDBTestCase):

    def test_compute_filter_manual_disable(self, service_up_mock):
        filt_cls = compute_filter.ComputeFilter()
        spec_obj = objects.RequestSpec(
            flavor=objects.Flavor(memory_mb=1024))
        service = {'disabled': True}
        host = fakes.FakeHostState('host1', 'node1',
                                   {'free_ram_mb': 1024, 'service': service})
        self.assertFalse(filt_cls.host_passes(host, spec_obj))
        self.assertFalse(service_up_mock.called)

    def test_compute_filter_sgapi_passes(self, service_up_mock):
        filt_cls = compute_filter.ComputeFilter()
        spec_obj = objects.RequestSpec(
            flavor=objects.Flavor(memory_mb=1024))
        service = {'disabled': False}
        host = fakes.FakeHostState('host1', 'node1',
                                   {'free_ram_mb': 1024, 'service': service})
        service_up_mock.return_value = True
        self.assertTrue(filt_cls.host_passes(host, spec_obj))
        service_up_mock.assert_called_once_with(service)

    def test_compute_filter_sgapi_fails(self, service_up_mock):
        filt_cls = compute_filter.ComputeFilter()
        spec_obj = objects.RequestSpec(
            flavor=objects.Flavor(memory_mb=1024))
        service = {'disabled': False, 'updated_at': 'now'}
        host = fakes.FakeHostState('host1', 'node1',
                                   {'free_ram_mb': 1024, 'service': service})
        service_up_mock.return_value = False
        self.assertFalse(filt_cls.host_passes(host, spec_obj))
        service_up_mock.assert_called_once_with(service)

nova-17.0.1/nova/tests/unit/scheduler/filters/test_aggregate_multitenancy_isolation_filters.py

import mock

from nova import objects
from nova.scheduler.filters import aggregate_multitenancy_isolation as ami
from nova import test
from nova.tests.unit.scheduler import fakes


@mock.patch('nova.scheduler.filters.utils.aggregate_metadata_get_by_host')
class TestAggregateMultitenancyIsolationFilter(test.NoDBTestCase):

    def setUp(self):
        super(TestAggregateMultitenancyIsolationFilter, self).setUp()
        self.filt_cls = ami.AggregateMultiTenancyIsolation()

    def test_aggregate_multi_tenancy_isolation_with_meta_passes(self,
                                                                agg_mock):
        agg_mock.return_value = {'filter_tenant_id': set(['my_tenantid'])}
        spec_obj = objects.RequestSpec(
            context=mock.sentinel.ctx,
            project_id='my_tenantid')
        host = fakes.FakeHostState('host1', 'compute', {})
        self.assertTrue(self.filt_cls.host_passes(host, spec_obj))

    def test_aggregate_multi_tenancy_isolation_with_meta_passes_comma(self,
            agg_mock):
        agg_mock.return_value = {'filter_tenant_id': set(['my_tenantid',
                                                          'mytenantid2'])}
        spec_obj = objects.RequestSpec(
            context=mock.sentinel.ctx,
            project_id='my_tenantid')
        host = fakes.FakeHostState('host1', 'compute', {})
        self.assertTrue(self.filt_cls.host_passes(host, spec_obj))

    def test_aggregate_multi_tenancy_isolation_fails(self, agg_mock):
        agg_mock.return_value = {'filter_tenant_id': set(['other_tenantid'])}
        spec_obj = objects.RequestSpec(
            context=mock.sentinel.ctx,
            project_id='my_tenantid')
        host = fakes.FakeHostState('host1', 'compute', {})
        self.assertFalse(self.filt_cls.host_passes(host, spec_obj))

    def test_aggregate_multi_tenancy_isolation_fails_comma(self, agg_mock):
        agg_mock.return_value = {'filter_tenant_id': set(['other_tenantid',
                                                          'other_tenantid2'])}
        spec_obj = objects.RequestSpec(
            context=mock.sentinel.ctx,
            project_id='my_tenantid')
        host = fakes.FakeHostState('host1', 'compute', {})
        self.assertFalse(self.filt_cls.host_passes(host, spec_obj))

    def test_aggregate_multi_tenancy_isolation_no_meta_passes(self, agg_mock):
        agg_mock.return_value = {}
        spec_obj = objects.RequestSpec(
            context=mock.sentinel.ctx,
            project_id='my_tenantid')
        host = fakes.FakeHostState('host1', 'compute', {})
        self.assertTrue(self.filt_cls.host_passes(host, spec_obj))

nova-17.0.1/nova/tests/unit/scheduler/filters/test_affinity_filters.py

import mock

from nova import objects
from nova.scheduler.filters import affinity_filter
from nova import test
from nova.tests.unit.scheduler import fakes
from nova.tests import uuidsentinel as uuids


class TestDifferentHostFilter(test.NoDBTestCase):

    def setUp(self):
        super(TestDifferentHostFilter, self).setUp()
        self.filt_cls = affinity_filter.DifferentHostFilter()

    def test_affinity_different_filter_passes(self):
        host = fakes.FakeHostState('host1', 'node1', {})
        inst1 = objects.Instance(uuid=uuids.instance)
        host.instances = {inst1.uuid: inst1}
        spec_obj = objects.RequestSpec(
            context=mock.sentinel.ctx,
            scheduler_hints=dict(different_host=['same']))
        self.assertTrue(self.filt_cls.host_passes(host, spec_obj))

    def test_affinity_different_filter_fails(self):
        inst1 = objects.Instance(uuid=uuids.instance)
        host = fakes.FakeHostState('host1', 'node1', {})
        host.instances = {inst1.uuid: inst1}
        spec_obj = objects.RequestSpec(
            context=mock.sentinel.ctx,
            scheduler_hints=dict(different_host=[uuids.instance]))
        self.assertFalse(self.filt_cls.host_passes(host, spec_obj))

    def test_affinity_different_filter_handles_none(self):
        inst1 = objects.Instance(uuid=uuids.instance)
        host = fakes.FakeHostState('host1', 'node1', {})
        host.instances = {inst1.uuid: inst1}
        spec_obj = objects.RequestSpec(
            context=mock.sentinel.ctx,
            scheduler_hints=None)
        self.assertTrue(self.filt_cls.host_passes(host, spec_obj))


class TestSameHostFilter(test.NoDBTestCase):

    def setUp(self):
        super(TestSameHostFilter, self).setUp()
        self.filt_cls = affinity_filter.SameHostFilter()

    def test_affinity_same_filter_passes(self):
        inst1 = objects.Instance(uuid=uuids.instance)
        host = fakes.FakeHostState('host1', 'node1', {})
        host.instances = {inst1.uuid: inst1}
        spec_obj = objects.RequestSpec(
            context=mock.sentinel.ctx,
            scheduler_hints=dict(same_host=[uuids.instance]))
        self.assertTrue(self.filt_cls.host_passes(host, spec_obj))

    def test_affinity_same_filter_no_list_passes(self):
        host = fakes.FakeHostState('host1', 'node1', {})
        host.instances = {}
        spec_obj = objects.RequestSpec(
            context=mock.sentinel.ctx,
            scheduler_hints=dict(same_host=['same']))
        self.assertFalse(self.filt_cls.host_passes(host, spec_obj))

    def test_affinity_same_filter_fails(self):
        inst1 = objects.Instance(uuid=uuids.instance)
        host = fakes.FakeHostState('host1', 'node1', {})
        host.instances = {inst1.uuid: inst1}
        spec_obj = objects.RequestSpec(
            context=mock.sentinel.ctx,
            scheduler_hints=dict(same_host=['same']))
        self.assertFalse(self.filt_cls.host_passes(host, spec_obj))

    def test_affinity_same_filter_handles_none(self):
        inst1 = objects.Instance(uuid=uuids.instance)
        host = fakes.FakeHostState('host1', 'node1', {})
        host.instances = {inst1.uuid: inst1}
        spec_obj = objects.RequestSpec(
            context=mock.sentinel.ctx,
            scheduler_hints=None)
        self.assertTrue(self.filt_cls.host_passes(host, spec_obj))


class TestSimpleCIDRAffinityFilter(test.NoDBTestCase):

    def setUp(self):
        super(TestSimpleCIDRAffinityFilter, self).setUp()
        self.filt_cls = affinity_filter.SimpleCIDRAffinityFilter()

    def test_affinity_simple_cidr_filter_passes(self):
        host = fakes.FakeHostState('host1', 'node1', {})
        host.host_ip = '10.8.1.1'
        affinity_ip = "10.8.1.100"
        spec_obj = objects.RequestSpec(
            context=mock.sentinel.ctx,
            scheduler_hints=dict(
                cidr=['/24'],
                build_near_host_ip=[affinity_ip]))
        self.assertTrue(self.filt_cls.host_passes(host, spec_obj))

    def test_affinity_simple_cidr_filter_fails(self):
        host = fakes.FakeHostState('host1', 'node1', {})
        host.host_ip = '10.8.1.1'
        affinity_ip = "10.8.1.100"
        spec_obj = objects.RequestSpec(
            context=mock.sentinel.ctx,
            scheduler_hints=dict(
                cidr=['/32'],
                build_near_host_ip=[affinity_ip]))
        self.assertFalse(self.filt_cls.host_passes(host, spec_obj))

    def test_affinity_simple_cidr_filter_handles_none(self):
        host = fakes.FakeHostState('host1', 'node1', {})
        spec_obj = objects.RequestSpec(
            context=mock.sentinel.ctx,
            scheduler_hints=None)
        self.assertTrue(self.filt_cls.host_passes(host, spec_obj))


class TestGroupAffinityFilter(test.NoDBTestCase):

    def _test_group_anti_affinity_filter_passes(self, filt_cls, policy):
        host = fakes.FakeHostState('host1', 'node1', {})
        spec_obj = objects.RequestSpec(instance_group=None)
        self.assertTrue(filt_cls.host_passes(host, spec_obj))
        spec_obj = objects.RequestSpec(instance_group=objects.InstanceGroup(
            policies=['affinity']))
        self.assertTrue(filt_cls.host_passes(host, spec_obj))
        spec_obj = objects.RequestSpec(instance_group=objects.InstanceGroup(
            policies=[policy]), instance_uuid=uuids.fake)
        spec_obj.instance_group.hosts = []
        self.assertTrue(filt_cls.host_passes(host, spec_obj))
        spec_obj.instance_group.hosts = ['host2']
        self.assertTrue(filt_cls.host_passes(host, spec_obj))

    def test_group_anti_affinity_filter_passes(self):
        self._test_group_anti_affinity_filter_passes(
            affinity_filter.ServerGroupAntiAffinityFilter(), 'anti-affinity')

    def _test_group_anti_affinity_filter_fails(self, filt_cls, policy):
        host = fakes.FakeHostState('host1', 'node1', {})
        spec_obj = objects.RequestSpec(
            instance_group=objects.InstanceGroup(policies=[policy],
                                                 hosts=['host1']),
            instance_uuid=uuids.fake)
        self.assertFalse(filt_cls.host_passes(host, spec_obj))

    def test_group_anti_affinity_filter_fails(self):
        self._test_group_anti_affinity_filter_fails(
            affinity_filter.ServerGroupAntiAffinityFilter(), 'anti-affinity')

    def test_group_anti_affinity_filter_allows_instance_to_same_host(self):
        fake_uuid = uuids.fake
        mock_instance = objects.Instance(uuid=fake_uuid)
        host_state = fakes.FakeHostState('host1', 'node1', {},
                                         instances=[mock_instance])
        spec_obj = objects.RequestSpec(instance_group=objects.InstanceGroup(
            policies=['anti-affinity'], hosts=['host1', 'host2']),
            instance_uuid=mock_instance.uuid)
        self.assertTrue(affinity_filter.ServerGroupAntiAffinityFilter().
                        host_passes(host_state, spec_obj))

    def _test_group_affinity_filter_passes(self, filt_cls, policy):
        host = fakes.FakeHostState('host1', 'node1', {})
        spec_obj = objects.RequestSpec(instance_group=None)
        self.assertTrue(filt_cls.host_passes(host, spec_obj))
        spec_obj = objects.RequestSpec(instance_group=objects.InstanceGroup(
            policies=['anti-affinity']))
        self.assertTrue(filt_cls.host_passes(host, spec_obj))
        spec_obj = objects.RequestSpec(instance_group=objects.InstanceGroup(
            policies=['affinity'], hosts=['host1']))
        self.assertTrue(filt_cls.host_passes(host, spec_obj))

    def test_group_affinity_filter_passes(self):
        self._test_group_affinity_filter_passes(
            affinity_filter.ServerGroupAffinityFilter(), 'affinity')

    def _test_group_affinity_filter_fails(self, filt_cls, policy):
        host = fakes.FakeHostState('host1', 'node1', {})
        spec_obj = objects.RequestSpec(instance_group=objects.InstanceGroup(
            policies=[policy], hosts=['host2']))
        self.assertFalse(filt_cls.host_passes(host, spec_obj))

    def test_group_affinity_filter_fails(self):
        self._test_group_affinity_filter_fails(
            affinity_filter.ServerGroupAffinityFilter(), 'affinity')

nova-17.0.1/nova/tests/unit/scheduler/filters/test_exact_core_filter.py

from nova import objects
from nova.scheduler.filters import exact_core_filter
from nova import test
from nova.tests.unit.scheduler import fakes


class TestExactCoreFilter(test.NoDBTestCase):

    def setUp(self):
        super(TestExactCoreFilter, self).setUp()
        self.filt_cls = exact_core_filter.ExactCoreFilter()

    def test_exact_core_filter_passes(self):
        spec_obj = objects.RequestSpec(flavor=objects.Flavor(vcpus=1))
        vcpus = 3
        host = self._get_host({'vcpus_total': vcpus,
                               'vcpus_used': vcpus - 1})
        self.assertTrue(self.filt_cls.host_passes(host, spec_obj))
        self.assertEqual(host.limits.get('vcpu'), vcpus)

    def test_exact_core_filter_fails(self):
        spec_obj = objects.RequestSpec(flavor=objects.Flavor(vcpus=2))
        host = self._get_host({'vcpus_total': 3, 'vcpus_used': 2})
        self.assertFalse(self.filt_cls.host_passes(host, spec_obj))
        self.assertNotIn('vcpu', host.limits)

    def test_exact_core_filter_fails_host_vcpus_not_set(self):
        spec_obj = objects.RequestSpec(flavor=objects.Flavor(vcpus=1))
        host = self._get_host({})
        self.assertFalse(self.filt_cls.host_passes(host, spec_obj))
        self.assertNotIn('vcpu', host.limits)

    def _get_host(self, host_attributes):
        return fakes.FakeHostState('host1', 'node1', host_attributes)
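
As with the exact RAM and disk variants earlier in this directory, the tests above encode an exact-capacity fit with no overcommit: the filter passes only when the free vcpus cover the request, records host.limits['vcpu'] on success, and leaves limits untouched on failure (a summary of the three tests, not text from the nova tree):

    # passes: vcpus_total=3, vcpus_used=2 -> 1 free >= 1 requested
    # fails:  vcpus_total=3, vcpus_used=2 -> 1 free <  2 requested
    # fails:  host that does not report vcpus_total at all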
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from nova import objects from nova.pci import stats from nova.scheduler.filters import pci_passthrough_filter from nova import test from nova.tests.unit.scheduler import fakes class TestPCIPassthroughFilter(test.NoDBTestCase): def setUp(self): super(TestPCIPassthroughFilter, self).setUp() self.filt_cls = pci_passthrough_filter.PciPassthroughFilter() def test_pci_passthrough_pass(self): pci_stats_mock = mock.MagicMock() pci_stats_mock.support_requests.return_value = True request = objects.InstancePCIRequest(count=1, spec=[{'vendor_id': '8086'}]) requests = objects.InstancePCIRequests(requests=[request]) spec_obj = objects.RequestSpec(pci_requests=requests) host = fakes.FakeHostState( 'host1', 'node1', attribute_dict={'pci_stats': pci_stats_mock}) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) pci_stats_mock.support_requests.assert_called_once_with( requests.requests) def test_pci_passthrough_fail(self): pci_stats_mock = mock.MagicMock() pci_stats_mock.support_requests.return_value = False request = objects.InstancePCIRequest(count=1, spec=[{'vendor_id': '8086'}]) requests = objects.InstancePCIRequests(requests=[request]) spec_obj = objects.RequestSpec(pci_requests=requests) host = fakes.FakeHostState( 'host1', 'node1', attribute_dict={'pci_stats': pci_stats_mock}) self.assertFalse(self.filt_cls.host_passes(host, spec_obj)) pci_stats_mock.support_requests.assert_called_once_with( requests.requests) def test_pci_passthrough_no_pci_request(self): spec_obj = objects.RequestSpec(pci_requests=None) host = fakes.FakeHostState('h1', 'n1', {}) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) def test_pci_passthrough_empty_pci_request_obj(self): requests = objects.InstancePCIRequests(requests=[]) spec_obj = objects.RequestSpec(pci_requests=requests) host = fakes.FakeHostState('h1', 'n1', {}) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) def test_pci_passthrough_no_pci_stats(self): request = objects.InstancePCIRequest(count=1, spec=[{'vendor_id': '8086'}]) requests = objects.InstancePCIRequests(requests=[request]) spec_obj = objects.RequestSpec(pci_requests=requests) host = fakes.FakeHostState('host1', 'node1', attribute_dict={'pci_stats': stats.PciDeviceStats()}) self.assertFalse(self.filt_cls.host_passes(host, spec_obj)) def test_pci_passthrough_with_pci_stats_none(self): request = objects.InstancePCIRequest(count=1, spec=[{'vendor_id': '8086'}]) requests = objects.InstancePCIRequests(requests=[request]) spec_obj = objects.RequestSpec(pci_requests=requests) host = fakes.FakeHostState('host1', 'node1', attribute_dict={'pci_stats': None}) self.assertFalse(self.filt_cls.host_passes(host, spec_obj)) nova-17.0.1/nova/tests/unit/scheduler/filters/test_ram_filters.py0000666000175000017500000001072613250073126025226 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock

from nova import objects
from nova.scheduler.filters import ram_filter
from nova import test
from nova.tests.unit.scheduler import fakes


class TestRamFilter(test.NoDBTestCase):

    def setUp(self):
        super(TestRamFilter, self).setUp()
        self.filt_cls = ram_filter.RamFilter()

    def test_ram_filter_fails_on_memory(self):
        spec_obj = objects.RequestSpec(
            flavor=objects.Flavor(memory_mb=1024))
        host = fakes.FakeHostState('host1', 'node1',
                {'free_ram_mb': 1023,
                 'total_usable_ram_mb': 1024,
                 'ram_allocation_ratio': 1.0})
        self.assertFalse(self.filt_cls.host_passes(host, spec_obj))

    def test_ram_filter_passes(self):
        spec_obj = objects.RequestSpec(
            flavor=objects.Flavor(memory_mb=1024))
        host = fakes.FakeHostState('host1', 'node1',
                {'free_ram_mb': 1024,
                 'total_usable_ram_mb': 1024,
                 'ram_allocation_ratio': 1.0})
        self.assertTrue(self.filt_cls.host_passes(host, spec_obj))

    def test_ram_filter_oversubscribe(self):
        spec_obj = objects.RequestSpec(
            flavor=objects.Flavor(memory_mb=1024))
        host = fakes.FakeHostState('host1', 'node1',
                {'free_ram_mb': -1024,
                 'total_usable_ram_mb': 2048,
                 'ram_allocation_ratio': 2.0})
        self.assertTrue(self.filt_cls.host_passes(host, spec_obj))
        self.assertEqual(2048 * 2.0, host.limits['memory_mb'])

    def test_ram_filter_oversubscribe_single_instance_fails(self):
        spec_obj = objects.RequestSpec(
            flavor=objects.Flavor(memory_mb=1024))
        host = fakes.FakeHostState('host1', 'node1',
                {'free_ram_mb': 512,
                 'total_usable_ram_mb': 512,
                 'ram_allocation_ratio': 2.0})
        self.assertFalse(self.filt_cls.host_passes(host, spec_obj))


@mock.patch('nova.scheduler.filters.utils.aggregate_values_from_key')
class TestAggregateRamFilter(test.NoDBTestCase):

    def setUp(self):
        super(TestAggregateRamFilter, self).setUp()
        self.filt_cls = ram_filter.AggregateRamFilter()

    def test_aggregate_ram_filter_value_error(self, agg_mock):
        spec_obj = objects.RequestSpec(
            context=mock.sentinel.ctx,
            flavor=objects.Flavor(memory_mb=1024))
        host = fakes.FakeHostState('host1', 'node1',
                {'free_ram_mb': 1024,
                 'total_usable_ram_mb': 1024,
                 'ram_allocation_ratio': 1.0})
        agg_mock.return_value = set(['XXX'])
        self.assertTrue(self.filt_cls.host_passes(host, spec_obj))
        self.assertEqual(1024 * 1.0, host.limits['memory_mb'])

    def test_aggregate_ram_filter_default_value(self, agg_mock):
        spec_obj = objects.RequestSpec(
            context=mock.sentinel.ctx,
            flavor=objects.Flavor(memory_mb=1024))
        host = fakes.FakeHostState('host1', 'node1',
                {'free_ram_mb': 1023,
                 'total_usable_ram_mb': 1024,
                 'ram_allocation_ratio': 1.0})
        # False: fallback to default flag w/o aggregates
        agg_mock.return_value = set()
        self.assertFalse(self.filt_cls.host_passes(host, spec_obj))
        agg_mock.return_value = set(['2.0'])
        # True: use ratio from aggregates
        self.assertTrue(self.filt_cls.host_passes(host, spec_obj))
        self.assertEqual(1024 * 2.0, host.limits['memory_mb'])

    def test_aggregate_ram_filter_conflict_values(self, agg_mock):
        spec_obj = objects.RequestSpec(
            context=mock.sentinel.ctx,
            flavor=objects.Flavor(memory_mb=1024))
        host = fakes.FakeHostState('host1', 'node1',
                {'free_ram_mb': 1023,
                 'total_usable_ram_mb': 1024,
                 'ram_allocation_ratio': 1.0})
        agg_mock.return_value = set(['1.5', '2.0'])
        # use
the minimum ratio from aggregates self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) self.assertEqual(1024 * 1.5, host.limits['memory_mb']) nova-17.0.1/nova/tests/unit/scheduler/filters/test_aggregate_instance_extra_specs_filters.py0000666000175000017500000000756713250073126032672 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from nova import objects from nova.scheduler.filters import aggregate_instance_extra_specs as agg_specs from nova import test from nova.tests.unit.scheduler import fakes @mock.patch('nova.scheduler.filters.utils.aggregate_metadata_get_by_host') class TestAggregateInstanceExtraSpecsFilter(test.NoDBTestCase): def setUp(self): super(TestAggregateInstanceExtraSpecsFilter, self).setUp() self.filt_cls = agg_specs.AggregateInstanceExtraSpecsFilter() def test_aggregate_filter_passes_no_extra_specs(self, agg_mock): capabilities = {'opt1': 1, 'opt2': 2} spec_obj = objects.RequestSpec( context=mock.sentinel.ctx, flavor=objects.Flavor(memory_mb=1024)) host = fakes.FakeHostState('host1', 'node1', capabilities) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) self.assertFalse(agg_mock.called) def test_aggregate_filter_passes_empty_extra_specs(self, agg_mock): capabilities = {'opt1': 1, 'opt2': 2} spec_obj = objects.RequestSpec( context=mock.sentinel.ctx, flavor=objects.Flavor(memory_mb=1024, extra_specs={})) host = fakes.FakeHostState('host1', 'node1', capabilities) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) self.assertFalse(agg_mock.called) def _do_test_aggregate_filter_extra_specs(self, especs, passes): spec_obj = objects.RequestSpec( context=mock.sentinel.ctx, flavor=objects.Flavor(memory_mb=1024, extra_specs=especs)) host = fakes.FakeHostState('host1', 'node1', {'free_ram_mb': 1024}) assertion = self.assertTrue if passes else self.assertFalse assertion(self.filt_cls.host_passes(host, spec_obj)) def test_aggregate_filter_passes_extra_specs_simple(self, agg_mock): agg_mock.return_value = {'opt1': set(['1']), 'opt2': set(['2'])} especs = { # Un-scoped extra spec 'opt1': '1', # Scoped extra spec that applies to this filter 'aggregate_instance_extra_specs:opt2': '2', } self._do_test_aggregate_filter_extra_specs(especs, passes=True) def test_aggregate_filter_passes_extra_specs_simple_comma(self, agg_mock): agg_mock.return_value = {'opt1': set(['1', '3']), 'opt2': set(['2'])} especs = { # Un-scoped extra spec 'opt1': '1', # Scoped extra spec that applies to this filter 'aggregate_instance_extra_specs:opt1': '3', } self._do_test_aggregate_filter_extra_specs(especs, passes=True) def test_aggregate_filter_passes_with_key_same_as_scope(self, agg_mock): agg_mock.return_value = {'aggregate_instance_extra_specs': set(['1'])} especs = { # Un-scoped extra spec, make sure we don't blow up if it # happens to match our scope. 
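            # Editor's note: the filter only treats a key as scoped when a
            # ':' separator is present, so a bare key that merely equals the
            # scope name must be matched as an ordinary un-scoped spec
            # rather than as an empty scoped one.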
'aggregate_instance_extra_specs': '1', } self._do_test_aggregate_filter_extra_specs(especs, passes=True) def test_aggregate_filter_fails_extra_specs_simple(self, agg_mock): agg_mock.return_value = {'opt1': set(['1']), 'opt2': set(['2'])} especs = { 'opt1': '1', 'opt2': '222' } self._do_test_aggregate_filter_extra_specs(especs, passes=False) nova-17.0.1/nova/tests/unit/scheduler/filters/test_utils.py0000666000175000017500000000753013250073126024056 0ustar zuulzuul00000000000000# Copyright 2015 IBM Corp. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova import objects from nova.scheduler.filters import utils from nova import test from nova.tests.unit.scheduler import fakes from nova.tests import uuidsentinel as uuids _AGGREGATE_FIXTURES = [ objects.Aggregate( id=1, name='foo', hosts=['fake-host'], metadata={'k1': '1', 'k2': '2'}, ), objects.Aggregate( id=2, name='bar', hosts=['fake-host'], metadata={'k1': '3', 'k2': '4'}, ), objects.Aggregate( id=3, name='bar', hosts=['fake-host'], metadata={'k1': '6,7', 'k2': '8, 9'}, ), ] class TestUtils(test.NoDBTestCase): def test_aggregate_values_from_key(self): host_state = fakes.FakeHostState( 'fake', 'node', {'aggregates': _AGGREGATE_FIXTURES}) values = utils.aggregate_values_from_key(host_state, key_name='k1') self.assertEqual(set(['1', '3', '6,7']), values) def test_aggregate_values_from_key_with_wrong_key(self): host_state = fakes.FakeHostState( 'fake', 'node', {'aggregates': _AGGREGATE_FIXTURES}) values = utils.aggregate_values_from_key(host_state, key_name='k3') self.assertEqual(set(), values) def test_aggregate_metadata_get_by_host_no_key(self): host_state = fakes.FakeHostState( 'fake', 'node', {'aggregates': _AGGREGATE_FIXTURES}) metadata = utils.aggregate_metadata_get_by_host(host_state) self.assertIn('k1', metadata) self.assertEqual(set(['1', '3', '7', '6']), metadata['k1']) self.assertIn('k2', metadata) self.assertEqual(set(['9', '8', '2', '4']), metadata['k2']) def test_aggregate_metadata_get_by_host_with_key(self): host_state = fakes.FakeHostState( 'fake', 'node', {'aggregates': _AGGREGATE_FIXTURES}) metadata = utils.aggregate_metadata_get_by_host(host_state, 'k1') self.assertIn('k1', metadata) self.assertEqual(set(['1', '3', '7', '6']), metadata['k1']) def test_aggregate_metadata_get_by_host_empty_result(self): host_state = fakes.FakeHostState( 'fake', 'node', {'aggregates': []}) metadata = utils.aggregate_metadata_get_by_host(host_state, 'k3') self.assertEqual({}, metadata) def test_validate_num_values(self): f = utils.validate_num_values self.assertEqual("x", f(set(), default="x")) self.assertEqual(1, f(set(["1"]), cast_to=int)) self.assertEqual(1.0, f(set(["1"]), cast_to=float)) self.assertEqual(1, f(set([1, 2]), based_on=min)) self.assertEqual(2, f(set([1, 2]), based_on=max)) self.assertEqual(9, f(set(['10', '9']), based_on=min)) def test_instance_uuids_overlap(self): inst1 = objects.Instance(uuid=uuids.instance_1) inst2 = objects.Instance(uuid=uuids.instance_2) instances = [inst1, inst2] host_state = 
fakes.FakeHostState('host1', 'node1', {})
        host_state.instances = {instance.uuid: instance
                                for instance in instances}
        self.assertTrue(utils.instance_uuids_overlap(host_state,
                                                     [uuids.instance_1]))
        self.assertFalse(utils.instance_uuids_overlap(host_state, ['zz']))
nova-17.0.1/nova/tests/unit/scheduler/filters/test_compute_capabilities_filters.py
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import six

from nova import objects
from nova.scheduler.filters import compute_capabilities_filter
from nova import test
from nova.tests.unit.scheduler import fakes


class TestComputeCapabilitiesFilter(test.NoDBTestCase):

    def setUp(self):
        super(TestComputeCapabilitiesFilter, self).setUp()
        self.filt_cls = compute_capabilities_filter.ComputeCapabilitiesFilter()

    def _do_test_compute_filter_extra_specs(self, ecaps, especs, passes):
        # In a real OpenStack runtime environment, compute capability
        # values may be numbers, so use numbers in this unit test.
        capabilities = {}
        capabilities.update(ecaps)
        spec_obj = objects.RequestSpec(
            flavor=objects.Flavor(memory_mb=1024, extra_specs=especs))
        host_state = {'free_ram_mb': 1024}
        host_state.update(capabilities)
        host = fakes.FakeHostState('host1', 'node1', host_state)
        assertion = self.assertTrue if passes else self.assertFalse
        assertion(self.filt_cls.host_passes(host, spec_obj))

    def test_compute_filter_passes_without_extra_specs(self):
        spec_obj = objects.RequestSpec(
            flavor=objects.Flavor(memory_mb=1024))
        host_state = {'free_ram_mb': 1024}
        host = fakes.FakeHostState('host1', 'node1', host_state)
        self.assertTrue(self.filt_cls.host_passes(host, spec_obj))

    def test_compute_filter_fails_without_host_state(self):
        especs = {'capabilities:opts': '1'}
        spec_obj = objects.RequestSpec(
            flavor=objects.Flavor(memory_mb=1024, extra_specs=especs))
        self.assertFalse(self.filt_cls.host_passes(None, spec_obj))

    def test_compute_filter_fails_without_capabilities(self):
        cpu_info = """ { } """

        cpu_info = six.text_type(cpu_info)

        self._do_test_compute_filter_extra_specs(
            ecaps={'cpu_info': cpu_info},
            especs={'capabilities:cpu_info:vendor': 'Intel'},
            passes=False)

    def test_compute_filter_pass_cpu_info_as_text_type(self):
        cpu_info = """ { "vendor": "Intel", "model": "core2duo",
        "arch": "i686","features": ["lahf_lm", "rdtscp"], "topology":
        {"cores": 1, "threads":1, "sockets": 1}} """

        cpu_info = six.text_type(cpu_info)

        self._do_test_compute_filter_extra_specs(
            ecaps={'cpu_info': cpu_info},
            especs={'capabilities:cpu_info:vendor': 'Intel'},
            passes=True)

    def test_compute_filter_pass_cpu_info_with_backward_compatibility(self):
        cpu_info = """ { "vendor": "Intel", "model": "core2duo",
        "arch": "i686","features": ["lahf_lm", "rdtscp"], "topology":
        {"cores": 1, "threads":1, "sockets": 1}} """

        cpu_info = six.text_type(cpu_info)

        self._do_test_compute_filter_extra_specs(
            ecaps={'cpu_info': cpu_info},
            especs={'cpu_info': cpu_info},
            passes=True)

    def test_compute_filter_fail_cpu_info_with_backward_compatibility(self):
        cpu_info = """ { "vendor": "Intel",
"model": "core2duo", "arch": "i686","features": ["lahf_lm", "rdtscp"], "topology": {"cores": 1, "threads":1, "sockets": 1}} """ cpu_info = six.text_type(cpu_info) self._do_test_compute_filter_extra_specs( ecaps={'cpu_info': cpu_info}, especs={'cpu_info': ''}, passes=False) def test_compute_filter_fail_cpu_info_as_text_type_not_valid(self): cpu_info = "cpu_info" cpu_info = six.text_type(cpu_info) self._do_test_compute_filter_extra_specs( ecaps={'cpu_info': cpu_info}, especs={'capabilities:cpu_info:vendor': 'Intel'}, passes=False) def test_compute_filter_passes_extra_specs_simple(self): self._do_test_compute_filter_extra_specs( ecaps={'stats': {'opt1': 1, 'opt2': 2}}, especs={'opt1': '1', 'opt2': '2'}, passes=True) def test_compute_filter_fails_extra_specs_simple(self): self._do_test_compute_filter_extra_specs( ecaps={'stats': {'opt1': 1, 'opt2': 2}}, especs={'opt1': '1', 'opt2': '222'}, passes=False) def test_compute_filter_pass_extra_specs_simple_with_scope(self): self._do_test_compute_filter_extra_specs( ecaps={'stats': {'opt1': 1, 'opt2': 2}}, especs={'capabilities:opt1': '1'}, passes=True) def test_compute_filter_pass_extra_specs_same_as_scope(self): # Make sure this still works even if the key is the same as the scope self._do_test_compute_filter_extra_specs( ecaps={'capabilities': 1}, especs={'capabilities': '1'}, passes=True) def test_compute_filter_pass_self_defined_specs(self): # Make sure this will not reject user's self-defined,irrelevant specs self._do_test_compute_filter_extra_specs( ecaps={'opt1': 1, 'opt2': 2}, especs={'XXYY': '1'}, passes=True) def test_compute_filter_extra_specs_simple_with_wrong_scope(self): self._do_test_compute_filter_extra_specs( ecaps={'opt1': 1, 'opt2': 2}, especs={'wrong_scope:opt1': '1'}, passes=True) def test_compute_filter_extra_specs_pass_multi_level_with_scope(self): self._do_test_compute_filter_extra_specs( ecaps={'stats': {'opt1': {'a': 1, 'b': {'aa': 2}}, 'opt2': 2}}, especs={'opt1:a': '1', 'capabilities:opt1:b:aa': '2'}, passes=True) def test_compute_filter_pass_ram_with_backward_compatibility(self): self._do_test_compute_filter_extra_specs( ecaps={}, especs={'free_ram_mb': '>= 300'}, passes=True) def test_compute_filter_fail_ram_with_backward_compatibility(self): self._do_test_compute_filter_extra_specs( ecaps={}, especs={'free_ram_mb': '<= 300'}, passes=False) def test_compute_filter_pass_cpu_with_backward_compatibility(self): self._do_test_compute_filter_extra_specs( ecaps={}, especs={'vcpus_used': '<= 20'}, passes=True) def test_compute_filter_fail_cpu_with_backward_compatibility(self): self._do_test_compute_filter_extra_specs( ecaps={}, especs={'vcpus_used': '>= 20'}, passes=False) def test_compute_filter_pass_disk_with_backward_compatibility(self): self._do_test_compute_filter_extra_specs( ecaps={}, especs={'free_disk_mb': 0}, passes=True) def test_compute_filter_fail_disk_with_backward_compatibility(self): self._do_test_compute_filter_extra_specs( ecaps={}, especs={'free_disk_mb': 1}, passes=False) nova-17.0.1/nova/tests/unit/scheduler/filters/test_image_props_filters.py0000666000175000017500000002727213250073126026760 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from oslo_utils import versionutils from nova import objects from nova.objects import fields as obj_fields from nova.scheduler.filters import image_props_filter from nova import test from nova.tests.unit.scheduler import fakes class TestImagePropsFilter(test.NoDBTestCase): def setUp(self): super(TestImagePropsFilter, self).setUp() self.filt_cls = image_props_filter.ImagePropertiesFilter() def test_image_properties_filter_passes_same_inst_props_and_version(self): img_props = objects.ImageMeta( properties=objects.ImageMetaProps( hw_architecture=obj_fields.Architecture.X86_64, img_hv_type=obj_fields.HVType.KVM, hw_vm_mode=obj_fields.VMMode.HVM, img_hv_requested_version='>=6.0,<6.2')) spec_obj = objects.RequestSpec(image=img_props) hypervisor_version = versionutils.convert_version_to_int('6.0.0') capabilities = { 'supported_instances': [( obj_fields.Architecture.X86_64, obj_fields.HVType.KVM, obj_fields.VMMode.HVM)], 'hypervisor_version': hypervisor_version} host = fakes.FakeHostState('host1', 'node1', capabilities) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) def test_image_properties_filter_fails_different_inst_props(self): img_props = objects.ImageMeta( properties=objects.ImageMetaProps( hw_architecture=obj_fields.Architecture.ARMV7, img_hv_type=obj_fields.HVType.QEMU, hw_vm_mode=obj_fields.VMMode.HVM)) hypervisor_version = versionutils.convert_version_to_int('6.0.0') spec_obj = objects.RequestSpec(image=img_props) capabilities = { 'supported_instances': [( obj_fields.Architecture.X86_64, obj_fields.HVType.KVM, obj_fields.VMMode.HVM)], 'hypervisor_version': hypervisor_version} host = fakes.FakeHostState('host1', 'node1', capabilities) self.assertFalse(self.filt_cls.host_passes(host, spec_obj)) def test_image_properties_filter_fails_different_hyper_version(self): img_props = objects.ImageMeta( properties=objects.ImageMetaProps( hw_architecture=obj_fields.Architecture.X86_64, img_hv_type=obj_fields.HVType.KVM, hw_vm_mode=obj_fields.VMMode.HVM, img_hv_requested_version='>=6.2')) hypervisor_version = versionutils.convert_version_to_int('6.0.0') spec_obj = objects.RequestSpec(image=img_props) capabilities = { 'enabled': True, 'supported_instances': [( obj_fields.Architecture.X86_64, obj_fields.HVType.KVM, obj_fields.VMMode.HVM)], 'hypervisor_version': hypervisor_version} host = fakes.FakeHostState('host1', 'node1', capabilities) self.assertFalse(self.filt_cls.host_passes(host, spec_obj)) def test_image_properties_filter_passes_partial_inst_props(self): img_props = objects.ImageMeta( properties=objects.ImageMetaProps( hw_architecture=obj_fields.Architecture.X86_64, hw_vm_mode=obj_fields.VMMode.HVM)) hypervisor_version = versionutils.convert_version_to_int('6.0.0') spec_obj = objects.RequestSpec(image=img_props) capabilities = { 'supported_instances': [( obj_fields.Architecture.X86_64, obj_fields.HVType.KVM, obj_fields.VMMode.HVM)], 'hypervisor_version': hypervisor_version} host = fakes.FakeHostState('host1', 'node1', capabilities) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) def test_image_properties_filter_fails_partial_inst_props(self): img_props = objects.ImageMeta( 
properties=objects.ImageMetaProps(
                hw_architecture=obj_fields.Architecture.X86_64,
                hw_vm_mode=obj_fields.VMMode.HVM))
        hypervisor_version = versionutils.convert_version_to_int('6.0.0')
        spec_obj = objects.RequestSpec(image=img_props)
        capabilities = {
            'supported_instances': [(
                obj_fields.Architecture.X86_64,
                obj_fields.HVType.XEN,
                obj_fields.VMMode.XEN)],
            'hypervisor_version': hypervisor_version}
        host = fakes.FakeHostState('host1', 'node1', capabilities)
        self.assertFalse(self.filt_cls.host_passes(host, spec_obj))

    def test_image_properties_filter_passes_without_inst_props(self):
        spec_obj = objects.RequestSpec(image=None)
        hypervisor_version = versionutils.convert_version_to_int('6.0.0')
        capabilities = {
            'supported_instances': [(
                obj_fields.Architecture.X86_64,
                obj_fields.HVType.KVM,
                obj_fields.VMMode.HVM)],
            'hypervisor_version': hypervisor_version}
        host = fakes.FakeHostState('host1', 'node1', capabilities)
        self.assertTrue(self.filt_cls.host_passes(host, spec_obj))

    def test_image_properties_filter_fails_without_host_props(self):
        img_props = objects.ImageMeta(
            properties=objects.ImageMetaProps(
                hw_architecture=obj_fields.Architecture.X86_64,
                img_hv_type=obj_fields.HVType.KVM,
                hw_vm_mode=obj_fields.VMMode.HVM))
        hypervisor_version = versionutils.convert_version_to_int('6.0.0')
        spec_obj = objects.RequestSpec(image=img_props)
        capabilities = {
            'enabled': True,
            'hypervisor_version': hypervisor_version}
        host = fakes.FakeHostState('host1', 'node1', capabilities)
        self.assertFalse(self.filt_cls.host_passes(host, spec_obj))

    def test_image_properties_filter_passes_without_hyper_version(self):
        img_props = objects.ImageMeta(
            properties=objects.ImageMetaProps(
                hw_architecture=obj_fields.Architecture.X86_64,
                img_hv_type=obj_fields.HVType.KVM,
                hw_vm_mode=obj_fields.VMMode.HVM,
                img_hv_requested_version='>=6.0'))
        spec_obj = objects.RequestSpec(image=img_props)
        capabilities = {
            'enabled': True,
            'supported_instances': [(
                obj_fields.Architecture.X86_64,
                obj_fields.HVType.KVM,
                obj_fields.VMMode.HVM)]}
        host = fakes.FakeHostState('host1', 'node1', capabilities)
        self.assertTrue(self.filt_cls.host_passes(host, spec_obj))

    def test_image_properties_filter_fails_with_unsupported_hyper_ver(self):
        img_props = objects.ImageMeta(
            properties=objects.ImageMetaProps(
                hw_architecture=obj_fields.Architecture.X86_64,
                img_hv_type=obj_fields.HVType.KVM,
                hw_vm_mode=obj_fields.VMMode.HVM,
                img_hv_requested_version='>=6.0'))
        spec_obj = objects.RequestSpec(image=img_props)
        capabilities = {
            'enabled': True,
            'supported_instances': [(
                obj_fields.Architecture.X86_64,
                obj_fields.HVType.KVM,
                obj_fields.VMMode.HVM)],
            'hypervisor_version': 5000}
        host = fakes.FakeHostState('host1', 'node1', capabilities)
        self.assertFalse(self.filt_cls.host_passes(host, spec_obj))

    def test_image_properties_filter_pv_mode_compat(self):
        # if an old image has 'pv' for a vm_mode it should be treated as xen
        img_props = objects.ImageMeta(
            properties=objects.ImageMetaProps(
                hw_vm_mode='pv'))
        hypervisor_version = versionutils.convert_version_to_int('6.0.0')
        spec_obj = objects.RequestSpec(image=img_props)
        capabilities = {
            'supported_instances': [(
                obj_fields.Architecture.X86_64,
                obj_fields.HVType.XEN,
                obj_fields.VMMode.XEN)],
            'hypervisor_version': hypervisor_version}
        host = fakes.FakeHostState('host1', 'node1', capabilities)
        self.assertTrue(self.filt_cls.host_passes(host, spec_obj))

    def test_image_properties_filter_hvm_mode_compat(self):
        # if an old image has 'hv' for a vm_mode it should be treated as hvm
        img_props = objects.ImageMeta(
            properties=objects.ImageMetaProps(
hw_vm_mode='hv')) hypervisor_version = versionutils.convert_version_to_int('6.0.0') spec_obj = objects.RequestSpec(image=img_props) capabilities = { 'supported_instances': [( obj_fields.Architecture.X86_64, obj_fields.HVType.KVM, obj_fields.VMMode.HVM)], 'hypervisor_version': hypervisor_version} host = fakes.FakeHostState('host1', 'node1', capabilities) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) def test_image_properties_filter_xen_arch_compat(self): # if an old image has 'x86_32' for arch it should be treated as i686 img_props = objects.ImageMeta( properties=objects.ImageMetaProps( hw_architecture='x86_32')) hypervisor_version = versionutils.convert_version_to_int('6.0.0') spec_obj = objects.RequestSpec(image=img_props) capabilities = { 'supported_instances': [( obj_fields.Architecture.I686, obj_fields.HVType.KVM, obj_fields.VMMode.HVM)], 'hypervisor_version': hypervisor_version} host = fakes.FakeHostState('host1', 'node1', capabilities) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) def test_image_properties_filter_xen_hv_type_compat(self): # if an old image has 'xapi' for hv_type it should be treated as xen img_props = objects.ImageMeta( properties=objects.ImageMetaProps( img_hv_type='xapi')) hypervisor_version = versionutils.convert_version_to_int('6.0.0') spec_obj = objects.RequestSpec(image=img_props) capabilities = { 'supported_instances': [( obj_fields.Architecture.I686, obj_fields.HVType.XEN, obj_fields.VMMode.HVM)], 'hypervisor_version': hypervisor_version} host = fakes.FakeHostState('host1', 'node1', capabilities) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) def test_image_properties_filter_baremetal_vmmode_compat(self): # if an old image has 'baremetal' for vmmode it should be # treated as hvm img_props = objects.ImageMeta( properties=objects.ImageMetaProps( hw_vm_mode='baremetal')) hypervisor_version = versionutils.convert_version_to_int('6.0.0') spec_obj = objects.RequestSpec(image=img_props) capabilities = { 'supported_instances': [( obj_fields.Architecture.I686, obj_fields.HVType.BAREMETAL, obj_fields.VMMode.HVM)], 'hypervisor_version': hypervisor_version} host = fakes.FakeHostState('host1', 'node1', capabilities) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) nova-17.0.1/nova/tests/unit/scheduler/filters/test_json_filters.py0000666000175000017500000002576213250073126025426 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
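# Editor's note: an illustrative sketch, not part of nova. The JsonFilter
# exercised below evaluates a JSON-encoded query, passed via the 'query'
# scheduler hint, against host state. '$free_ram_mb'-style variables are
# resolved from the host before comparison (elided here), an absent or
# empty query passes the host unconditionally, and the evaluation itself
# behaves roughly like this stand-in:
import operator as _operator


def _sketch_eval_query(node):
    """Evaluate a nested [op, arg, ...] query like the ones tested below."""
    if isinstance(node, (list, dict)) and not node:
        return True  # empty queries place no constraint
    if not isinstance(node, list):
        return node  # literals (bools, numbers, strings) pass through
    op, args = node[0], [_sketch_eval_query(n) for n in node[1:]]
    if op == 'and':
        return all(args)
    if op == 'or':
        return any(args)
    if op == 'not':
        # Element-wise, so an n-ary 'not' yields a list; at the top level
        # the host passes when any element of that list is True.
        return [not a for a in args]
    if len(args) < 2:
        return False  # malformed comparisons fail closed
    if op == 'in':
        return args[0] in args[1:]
    # An unknown operator raises KeyError, as asserted below.
    return {'=': _operator.eq, '<': _operator.lt, '>': _operator.gt,
            '<=': _operator.le, '>=': _operator.ge}[op](args[0], args[1])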
from oslo_serialization import jsonutils from nova import objects from nova.scheduler.filters import json_filter from nova import test from nova.tests.unit.scheduler import fakes class TestJsonFilter(test.NoDBTestCase): def setUp(self): super(TestJsonFilter, self).setUp() self.filt_cls = json_filter.JsonFilter() self.json_query = jsonutils.dumps( ['and', ['>=', '$free_ram_mb', 1024], ['>=', '$free_disk_mb', 200 * 1024]]) def test_json_filter_passes(self): spec_obj = objects.RequestSpec( flavor=objects.Flavor(memory_mb=1024, root_gb=200, ephemeral_gb=0), scheduler_hints=dict(query=[self.json_query])) host = fakes.FakeHostState('host1', 'node1', {'free_ram_mb': 1024, 'free_disk_mb': 200 * 1024}) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) def test_json_filter_passes_with_no_query(self): spec_obj = objects.RequestSpec( flavor=objects.Flavor(memory_mb=1024, root_gb=200, ephemeral_gb=0), scheduler_hints=None) host = fakes.FakeHostState('host1', 'node1', {'free_ram_mb': 0, 'free_disk_mb': 0}) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) def test_json_filter_fails_on_memory(self): spec_obj = objects.RequestSpec( flavor=objects.Flavor(memory_mb=1024, root_gb=200, ephemeral_gb=0), scheduler_hints=dict(query=[self.json_query])) host = fakes.FakeHostState('host1', 'node1', {'free_ram_mb': 1023, 'free_disk_mb': 200 * 1024}) self.assertFalse(self.filt_cls.host_passes(host, spec_obj)) def test_json_filter_fails_on_disk(self): spec_obj = objects.RequestSpec( flavor=objects.Flavor(memory_mb=1024, root_gb=200, ephemeral_gb=0), scheduler_hints=dict(query=[self.json_query])) host = fakes.FakeHostState('host1', 'node1', {'free_ram_mb': 1024, 'free_disk_mb': (200 * 1024) - 1}) self.assertFalse(self.filt_cls.host_passes(host, spec_obj)) def test_json_filter_fails_on_service_disabled(self): json_query = jsonutils.dumps( ['and', ['>=', '$free_ram_mb', 1024], ['>=', '$free_disk_mb', 200 * 1024], ['not', '$service.disabled']]) spec_obj = objects.RequestSpec( flavor=objects.Flavor(memory_mb=1024, local_gb=200), scheduler_hints=dict(query=[json_query])) host = fakes.FakeHostState('host1', 'node1', {'free_ram_mb': 1024, 'free_disk_mb': 200 * 1024}) self.assertFalse(self.filt_cls.host_passes(host, spec_obj)) def test_json_filter_happy_day(self): # Test json filter more thoroughly. 
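        # Editor's note: this query wants '$capabilities.enabled' truthy
        # (an absent variable is ignored -- see
        # test_json_filter_unknown_variable_ignored below), 'opt1' equal to
        # 'match', and free RAM/disk either both under (30, 300) or both
        # over it; the host permutations below walk each branch, including
        # the rejected boundary where free_ram_mb == 30, free_disk_mb == 300.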
raw = ['and', '$capabilities.enabled', ['=', '$capabilities.opt1', 'match'], ['or', ['and', ['<', '$free_ram_mb', 30], ['<', '$free_disk_mb', 300]], ['and', ['>', '$free_ram_mb', 30], ['>', '$free_disk_mb', 300]]]] spec_obj = objects.RequestSpec( scheduler_hints=dict(query=[jsonutils.dumps(raw)])) # Passes capabilities = {'opt1': 'match'} service = {'disabled': False} host = fakes.FakeHostState('host1', 'node1', {'free_ram_mb': 10, 'free_disk_mb': 200, 'capabilities': capabilities, 'service': service}) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) # Passes capabilities = {'opt1': 'match'} service = {'disabled': False} host = fakes.FakeHostState('host1', 'node1', {'free_ram_mb': 40, 'free_disk_mb': 400, 'capabilities': capabilities, 'service': service}) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) # Fails due to capabilities being disabled capabilities = {'enabled': False, 'opt1': 'match'} service = {'disabled': False} host = fakes.FakeHostState('host1', 'node1', {'free_ram_mb': 40, 'free_disk_mb': 400, 'capabilities': capabilities, 'service': service}) self.assertFalse(self.filt_cls.host_passes(host, spec_obj)) # Fails due to being exact memory/disk we don't want capabilities = {'enabled': True, 'opt1': 'match'} service = {'disabled': False} host = fakes.FakeHostState('host1', 'node1', {'free_ram_mb': 30, 'free_disk_mb': 300, 'capabilities': capabilities, 'service': service}) self.assertFalse(self.filt_cls.host_passes(host, spec_obj)) # Fails due to memory lower but disk higher capabilities = {'enabled': True, 'opt1': 'match'} service = {'disabled': False} host = fakes.FakeHostState('host1', 'node1', {'free_ram_mb': 20, 'free_disk_mb': 400, 'capabilities': capabilities, 'service': service}) self.assertFalse(self.filt_cls.host_passes(host, spec_obj)) # Fails due to capabilities 'opt1' not equal capabilities = {'enabled': True, 'opt1': 'no-match'} service = {'enabled': True} host = fakes.FakeHostState('host1', 'node1', {'free_ram_mb': 20, 'free_disk_mb': 400, 'capabilities': capabilities, 'service': service}) self.assertFalse(self.filt_cls.host_passes(host, spec_obj)) def test_json_filter_basic_operators(self): host = fakes.FakeHostState('host1', 'node1', {}) # (operator, arguments, expected_result) ops_to_test = [ ['=', [1, 1], True], ['=', [1, 2], False], ['<', [1, 2], True], ['<', [1, 1], False], ['<', [2, 1], False], ['>', [2, 1], True], ['>', [2, 2], False], ['>', [2, 3], False], ['<=', [1, 2], True], ['<=', [1, 1], True], ['<=', [2, 1], False], ['>=', [2, 1], True], ['>=', [2, 2], True], ['>=', [2, 3], False], ['in', [1, 1], True], ['in', [1, 1, 2, 3], True], ['in', [4, 1, 2, 3], False], ['not', [True], False], ['not', [False], True], ['or', [True, False], True], ['or', [False, False], False], ['and', [True, True], True], ['and', [False, False], False], ['and', [True, False], False], # Nested ((True or False) and (2 > 1)) == Passes ['and', [['or', True, False], ['>', 2, 1]], True]] for (op, args, expected) in ops_to_test: raw = [op] + args spec_obj = objects.RequestSpec( scheduler_hints=dict( query=[jsonutils.dumps(raw)])) self.assertEqual(expected, self.filt_cls.host_passes(host, spec_obj)) # This results in [False, True, False, True] and if any are True # then it passes... 
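        # ('not' negates each of its four operands in turn, and the host
        # passes because the resulting list contains at least one True.)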
raw = ['not', True, False, True, False] spec_obj = objects.RequestSpec( scheduler_hints=dict( query=[jsonutils.dumps(raw)])) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) # This results in [False, False, False] and if any are True # then it passes...which this doesn't raw = ['not', True, True, True] spec_obj = objects.RequestSpec( scheduler_hints=dict( query=[jsonutils.dumps(raw)])) self.assertFalse(self.filt_cls.host_passes(host, spec_obj)) def test_json_filter_unknown_operator_raises(self): raw = ['!=', 1, 2] spec_obj = objects.RequestSpec( scheduler_hints=dict( query=[jsonutils.dumps(raw)])) host = fakes.FakeHostState('host1', 'node1', {}) self.assertRaises(KeyError, self.filt_cls.host_passes, host, spec_obj) def test_json_filter_empty_filters_pass(self): host = fakes.FakeHostState('host1', 'node1', {}) raw = [] spec_obj = objects.RequestSpec( scheduler_hints=dict( query=[jsonutils.dumps(raw)])) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) raw = {} spec_obj = objects.RequestSpec( scheduler_hints=dict( query=[jsonutils.dumps(raw)])) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) def test_json_filter_invalid_num_arguments_fails(self): host = fakes.FakeHostState('host1', 'node1', {}) raw = ['>', ['and', ['or', ['not', ['<', ['>=', ['<=', ['in', ]]]]]]]] spec_obj = objects.RequestSpec( scheduler_hints=dict( query=[jsonutils.dumps(raw)])) self.assertFalse(self.filt_cls.host_passes(host, spec_obj)) raw = ['>', 1] spec_obj = objects.RequestSpec( scheduler_hints=dict( query=[jsonutils.dumps(raw)])) self.assertFalse(self.filt_cls.host_passes(host, spec_obj)) def test_json_filter_unknown_variable_ignored(self): host = fakes.FakeHostState('host1', 'node1', {}) raw = ['=', '$........', 1, 1] spec_obj = objects.RequestSpec( scheduler_hints=dict( query=[jsonutils.dumps(raw)])) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) raw = ['=', '$foo', 2, 2] spec_obj = objects.RequestSpec( scheduler_hints=dict( query=[jsonutils.dumps(raw)])) self.assertTrue(self.filt_cls.host_passes(host, spec_obj)) nova-17.0.1/nova/tests/unit/scheduler/fakes.py0000666000175000017500000001706713250073126021306 0ustar zuulzuul00000000000000# Copyright 2011 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Fakes For Scheduler tests. 
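The module-level fixtures below (NUMA_TOPOLOGY, NUMA_TOPOLOGIES_W_HT,
COMPUTE_NODES, ALLOC_REQS, RESOURCE_PROVIDERS and SERVICES) give the
scheduler unit tests canned host views; FakeHostState accepts an attribute
dict so a test can set exactly the host state a filter inspects, and
FakeScheduler is a do-nothing driver stub.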
""" import datetime from nova import objects from nova.scheduler import driver from nova.scheduler import host_manager from nova.tests import uuidsentinel NUMA_TOPOLOGY = objects.NUMATopology( cells=[ objects.NUMACell( id=0, cpuset=set([1, 2]), memory=512, cpu_usage=0, memory_usage=0, mempages=[ objects.NUMAPagesTopology(size_kb=16, total=387184, used=0), objects.NUMAPagesTopology(size_kb=2048, total=512, used=0)], siblings=[], pinned_cpus=set([])), objects.NUMACell( id=1, cpuset=set([3, 4]), memory=512, cpu_usage=0, memory_usage=0, mempages=[ objects.NUMAPagesTopology(size_kb=4, total=1548736, used=0), objects.NUMAPagesTopology(size_kb=2048, total=512, used=0)], siblings=[], pinned_cpus=set([]))]) NUMA_TOPOLOGIES_W_HT = [ objects.NUMATopology(cells=[ objects.NUMACell( id=0, cpuset=set([1, 2, 5, 6]), memory=512, cpu_usage=0, memory_usage=0, mempages=[], siblings=[set([1, 5]), set([2, 6])], pinned_cpus=set([])), objects.NUMACell( id=1, cpuset=set([3, 4, 7, 8]), memory=512, cpu_usage=0, memory_usage=0, mempages=[], siblings=[set([3, 4]), set([7, 8])], pinned_cpus=set([])) ]), objects.NUMATopology(cells=[ objects.NUMACell( id=0, cpuset=set([]), memory=512, cpu_usage=0, memory_usage=0, mempages=[], siblings=[], pinned_cpus=set([])), objects.NUMACell( id=1, cpuset=set([1, 2, 5, 6]), memory=512, cpu_usage=0, memory_usage=0, mempages=[], siblings=[set([1, 5]), set([2, 6])], pinned_cpus=set([])), objects.NUMACell( id=2, cpuset=set([3, 4, 7, 8]), memory=512, cpu_usage=0, memory_usage=0, mempages=[], siblings=[set([3, 4]), set([7, 8])], pinned_cpus=set([])), ]), ] COMPUTE_NODES = [ objects.ComputeNode( uuid=uuidsentinel.cn1, id=1, local_gb=1024, memory_mb=1024, vcpus=1, disk_available_least=None, free_ram_mb=512, vcpus_used=1, free_disk_gb=512, local_gb_used=0, updated_at=datetime.datetime(2015, 11, 11, 11, 0, 0), host='host1', hypervisor_hostname='node1', host_ip='127.0.0.1', hypervisor_version=0, numa_topology=None, hypervisor_type='foo', supported_hv_specs=[], pci_device_pools=None, cpu_info=None, stats=None, metrics=None, cpu_allocation_ratio=16.0, ram_allocation_ratio=1.5, disk_allocation_ratio=1.0), objects.ComputeNode( uuid=uuidsentinel.cn2, id=2, local_gb=2048, memory_mb=2048, vcpus=2, disk_available_least=1024, free_ram_mb=1024, vcpus_used=2, free_disk_gb=1024, local_gb_used=0, updated_at=datetime.datetime(2015, 11, 11, 11, 0, 0), host='host2', hypervisor_hostname='node2', host_ip='127.0.0.1', hypervisor_version=0, numa_topology=None, hypervisor_type='foo', supported_hv_specs=[], pci_device_pools=None, cpu_info=None, stats=None, metrics=None, cpu_allocation_ratio=16.0, ram_allocation_ratio=1.5, disk_allocation_ratio=1.0), objects.ComputeNode( uuid=uuidsentinel.cn3, id=3, local_gb=4096, memory_mb=4096, vcpus=4, disk_available_least=3333, free_ram_mb=3072, vcpus_used=1, free_disk_gb=3072, local_gb_used=0, updated_at=datetime.datetime(2015, 11, 11, 11, 0, 0), host='host3', hypervisor_hostname='node3', host_ip='127.0.0.1', hypervisor_version=0, numa_topology=NUMA_TOPOLOGY._to_json(), hypervisor_type='foo', supported_hv_specs=[], pci_device_pools=None, cpu_info=None, stats=None, metrics=None, cpu_allocation_ratio=16.0, ram_allocation_ratio=1.5, disk_allocation_ratio=1.0), objects.ComputeNode( uuid=uuidsentinel.cn4, id=4, local_gb=8192, memory_mb=8192, vcpus=8, disk_available_least=8192, free_ram_mb=8192, vcpus_used=0, free_disk_gb=8888, local_gb_used=0, updated_at=datetime.datetime(2015, 11, 11, 11, 0, 0), host='host4', hypervisor_hostname='node4', host_ip='127.0.0.1', hypervisor_version=0, 
numa_topology=None, hypervisor_type='foo', supported_hv_specs=[], pci_device_pools=None, cpu_info=None, stats=None, metrics=None, cpu_allocation_ratio=16.0, ram_allocation_ratio=1.5, disk_allocation_ratio=1.0), # Broken entry objects.ComputeNode( uuid=uuidsentinel.cn5, id=5, local_gb=1024, memory_mb=1024, vcpus=1, host='fake', hypervisor_hostname='fake-hyp'), ] ALLOC_REQS = [ { 'allocations': { cn.uuid: { 'resources': { 'VCPU': 1, 'MEMORY_MB': 512, 'DISK_GB': 512, }, } } } for cn in COMPUTE_NODES ] RESOURCE_PROVIDERS = [ dict( uuid=uuidsentinel.rp1, name='host1', generation=1), dict( uuid=uuidsentinel.rp2, name='host2', generation=1), dict( uuid=uuidsentinel.rp3, name='host3', generation=1), dict( uuid=uuidsentinel.rp4, name='host4', generation=1), ] SERVICES = [ objects.Service(host='host1', disabled=False), objects.Service(host='host2', disabled=True), objects.Service(host='host3', disabled=False), objects.Service(host='host4', disabled=False), ] def get_service_by_host(host): services = [service for service in SERVICES if service.host == host] return services[0] class FakeHostState(host_manager.HostState): def __init__(self, host, node, attribute_dict, instances=None): super(FakeHostState, self).__init__(host, node, None) if instances: self.instances = {inst.uuid: inst for inst in instances} else: self.instances = {} for (key, val) in attribute_dict.items(): setattr(self, key, val) class FakeScheduler(driver.Scheduler): def select_destinations(self, context, spec_obj, instance_uuids): return [] nova-17.0.1/nova/tests/unit/test_json_ref.py0000666000175000017500000001260113250073126021070 0ustar zuulzuul00000000000000# All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
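# Editor's note: an illustrative sketch, not part of nova. The module under
# test below, nova.tests.json_ref, inlines {'$ref': 'relative/path.json#'}
# nodes while building test fixtures, with sibling keys of '$ref' overriding
# the referenced document recursively. Assuming that '#'-terminated
# reference style, a minimal stand-in could look like:
from oslo_serialization import jsonutils as _jsonutils


def _sketch_resolve_refs(obj, base_path):
    """Recursively inline '$ref' documents; sibling keys win."""
    if not isinstance(obj, dict):
        return obj
    if '$ref' not in obj:
        return {k: _sketch_resolve_refs(v, base_path)
                for k, v in obj.items()}
    ref = obj['$ref']
    if not ref.endswith('#'):
        # JSON-path fragments like 'file.json#/key' are unsupported
        raise NotImplementedError()
    with open(base_path + ref[:-1], 'r+b') as f:
        resolved = _sketch_resolve_refs(_jsonutils.load(f), base_path)
    overrides = {k: v for k, v in obj.items() if k != '$ref'}
    _sketch_update_recursively(resolved, overrides)
    return resolved


def _sketch_update_recursively(d, update):
    """dict.update(), but merging dicts present on both sides key by key."""
    for k, v in update.items():
        if isinstance(d.get(k), dict) and isinstance(v, dict):
            _sketch_update_recursively(d[k], v)
        else:
            d[k] = v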
import copy import mock from nova import test from nova.tests import json_ref class TestJsonRef(test.NoDBTestCase): def test_update_dict_recursively(self): input = {'foo': 1, 'bar': 13, 'a list': [], 'nesting': { 'baz': 42, 'foobar': 121 }} d = copy.deepcopy(input) json_ref._update_dict_recursively(d, {}) self.assertDictEqual(input, d) d = copy.deepcopy(input) json_ref._update_dict_recursively(d, {'foo': 111, 'new_key': 1, 'nesting': { 'baz': 142, 'new_nested': 1 }}) expected = copy.deepcopy(input) expected['foo'] = 111 expected['new_key'] = 1 expected['nesting']['baz'] = 142 expected['nesting']['new_nested'] = 1 self.assertDictEqual(expected, d) d = copy.deepcopy(input) json_ref._update_dict_recursively(d, {'nesting': 1}) expected = copy.deepcopy(input) expected['nesting'] = 1 self.assertDictEqual(expected, d) d = copy.deepcopy(input) @mock.patch('oslo_serialization.jsonutils.load') @mock.patch('nova.tests.json_ref.open') def test_resolve_ref(self, mock_open, mock_json_load): mock_json_load.return_value = {'baz': 13} actual = json_ref.resolve_refs( {'foo': 1, 'bar': {'$ref': 'another.json#'}}, 'some/base/path/') self.assertDictEqual({'foo': 1, 'bar': {'baz': 13}}, actual) mock_open.assert_called_once_with('some/base/path/another.json', 'r+b') @mock.patch('oslo_serialization.jsonutils.load') @mock.patch('nova.tests.json_ref.open') def test_resolve_ref_recursively(self, mock_open, mock_json_load): mock_json_load.side_effect = [ # this is the content of direct_ref.json {'baz': 13, 'nesting': {'$ref': 'subdir/nested_ref.json#'}}, # this is the content of subdir/nested_ref.json {'a deep key': 'happiness'}] actual = json_ref.resolve_refs( {'foo': 1, 'bar': {'$ref': 'direct_ref.json#'}}, 'some/base/path/') self.assertDictEqual({'foo': 1, 'bar': {'baz': 13, 'nesting': {'a deep key': 'happiness'}}}, actual) mock_open.assert_any_call('some/base/path/direct_ref.json', 'r+b') mock_open.assert_any_call('some/base/path/subdir/nested_ref.json', 'r+b') @mock.patch('oslo_serialization.jsonutils.load') @mock.patch('nova.tests.json_ref.open') def test_resolve_ref_with_override(self, mock_open, mock_json_load): mock_json_load.return_value = {'baz': 13, 'boo': 42} actual = json_ref.resolve_refs( {'foo': 1, 'bar': {'$ref': 'another.json#', 'boo': 0}}, 'some/base/path/') self.assertDictEqual({'foo': 1, 'bar': {'baz': 13, 'boo': 0}}, actual) mock_open.assert_called_once_with('some/base/path/another.json', 'r+b') @mock.patch('oslo_serialization.jsonutils.load') @mock.patch('nova.tests.json_ref.open') def test_resolve_ref_with_nested_override(self, mock_open, mock_json_load): mock_json_load.return_value = {'baz': 13, 'boo': {'a': 1, 'b': 2}} actual = json_ref.resolve_refs( {'foo': 1, 'bar': {'$ref': 'another.json#', 'boo': {'b': 3, 'c': 13}}}, 'some/base/path/') self.assertDictEqual({'foo': 1, 'bar': {'baz': 13, 'boo': {'a': 1, 'b': 3, 'c': 13}}}, actual) mock_open.assert_called_once_with('some/base/path/another.json', 'r+b') def test_ref_with_json_path_not_supported(self): self.assertRaises( NotImplementedError, json_ref.resolve_refs, {'foo': 1, 'bar': {'$ref': 'another.json#/key-in-another', 'boo': {'b': 3, 'c': 13}}}, 'some/base/path/') nova-17.0.1/nova/tests/unit/conductor/0000775000175000017500000000000013250073472017654 5ustar zuulzuul00000000000000nova-17.0.1/nova/tests/unit/conductor/tasks/0000775000175000017500000000000013250073472021001 5ustar zuulzuul00000000000000nova-17.0.1/nova/tests/unit/conductor/tasks/test_live_migrate.py0000666000175000017500000007541513250073126025073 0ustar 
zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock import oslo_messaging as messaging import six from nova.compute import power_state from nova.compute import rpcapi as compute_rpcapi from nova.compute import vm_states from nova.conductor.tasks import live_migrate from nova import exception from nova import objects from nova.scheduler import client as scheduler_client from nova.scheduler import utils as scheduler_utils from nova import servicegroup from nova import test from nova.tests.unit import fake_instance from nova.tests import uuidsentinel as uuids from nova import utils fake_selection1 = objects.Selection(service_host="host1", nodename="node1", cell_uuid=uuids.cell) fake_selection2 = objects.Selection(service_host="host2", nodename="node2", cell_uuid=uuids.cell) class LiveMigrationTaskTestCase(test.NoDBTestCase): def setUp(self): super(LiveMigrationTaskTestCase, self).setUp() self.context = "context" self.instance_host = "host" self.instance_uuid = uuids.instance self.instance_image = "image_ref" db_instance = fake_instance.fake_db_instance( host=self.instance_host, uuid=self.instance_uuid, power_state=power_state.RUNNING, vm_state = vm_states.ACTIVE, memory_mb=512, image_ref=self.instance_image) self.instance = objects.Instance._from_db_object( self.context, objects.Instance(), db_instance) self.instance.system_metadata = {'image_hw_disk_bus': 'scsi'} self.destination = "destination" self.block_migration = "bm" self.disk_over_commit = "doc" self.migration = objects.Migration() self.fake_spec = objects.RequestSpec() self._generate_task() def _generate_task(self): self.task = live_migrate.LiveMigrationTask(self.context, self.instance, self.destination, self.block_migration, self.disk_over_commit, self.migration, compute_rpcapi.ComputeAPI(), servicegroup.API(), scheduler_client.SchedulerClient(), self.fake_spec) def test_execute_with_destination(self, new_mode=True): dest_node = objects.ComputeNode(hypervisor_hostname='dest_node') with test.nested( mock.patch.object(self.task, '_check_host_is_up'), mock.patch.object(self.task, '_check_requested_destination', return_value=(mock.sentinel.source_node, dest_node)), mock.patch.object(scheduler_utils, 'claim_resources_on_destination'), mock.patch.object(self.migration, 'save'), mock.patch.object(self.task.compute_rpcapi, 'live_migration'), mock.patch('nova.conductor.tasks.migrate.' 'replace_allocation_with_migration'), mock.patch('nova.conductor.tasks.live_migrate.' 
'should_do_migration_allocation') ) as (mock_check_up, mock_check_dest, mock_claim, mock_save, mock_mig, m_alloc, mock_sda): mock_mig.return_value = "bob" m_alloc.return_value = (mock.MagicMock(), mock.sentinel.allocs) mock_sda.return_value = new_mode self.assertEqual("bob", self.task.execute()) mock_check_up.assert_called_once_with(self.instance_host) mock_check_dest.assert_called_once_with() if new_mode: allocs = mock.sentinel.allocs else: allocs = None mock_claim.assert_called_once_with( self.context, self.task.scheduler_client.reportclient, self.instance, mock.sentinel.source_node, dest_node, source_node_allocations=allocs) mock_mig.assert_called_once_with( self.context, host=self.instance_host, instance=self.instance, dest=self.destination, block_migration=self.block_migration, migration=self.migration, migrate_data=None) self.assertTrue(mock_save.called) # make sure the source/dest fields were set on the migration object self.assertEqual(self.instance.node, self.migration.source_node) self.assertEqual(dest_node.hypervisor_hostname, self.migration.dest_node) self.assertEqual(self.task.destination, self.migration.dest_compute) if new_mode: m_alloc.assert_called_once_with(self.context, self.instance, self.migration) else: m_alloc.assert_not_called() def test_execute_with_destination_old_school(self): self.test_execute_with_destination(new_mode=False) def test_execute_without_destination(self): self.destination = None self._generate_task() self.assertIsNone(self.task.destination) with test.nested( mock.patch.object(self.task, '_check_host_is_up'), mock.patch.object(self.task, '_find_destination'), mock.patch.object(self.task.compute_rpcapi, 'live_migration'), mock.patch.object(self.migration, 'save'), mock.patch('nova.conductor.tasks.migrate.' 'replace_allocation_with_migration'), mock.patch('nova.conductor.tasks.live_migrate.' 
'should_do_migration_allocation'), ) as (mock_check, mock_find, mock_mig, mock_save, mock_alloc, mock_sda): mock_find.return_value = ("found_host", "found_node") mock_mig.return_value = "bob" mock_alloc.return_value = (mock.MagicMock(), mock.MagicMock()) mock_sda.return_value = True self.assertEqual("bob", self.task.execute()) mock_check.assert_called_once_with(self.instance_host) mock_find.assert_called_once_with() mock_mig.assert_called_once_with(self.context, host=self.instance_host, instance=self.instance, dest="found_host", block_migration=self.block_migration, migration=self.migration, migrate_data=None) self.assertTrue(mock_save.called) self.assertEqual('found_host', self.migration.dest_compute) self.assertEqual('found_node', self.migration.dest_node) self.assertEqual(self.instance.node, self.migration.source_node) self.assertTrue(mock_alloc.called) def test_check_instance_is_active_passes_when_paused(self): self.task.instance['power_state'] = power_state.PAUSED self.task._check_instance_is_active() def test_check_instance_is_active_fails_when_shutdown(self): self.task.instance['power_state'] = power_state.SHUTDOWN self.assertRaises(exception.InstanceInvalidState, self.task._check_instance_is_active) @mock.patch.object(objects.Service, 'get_by_compute_host') @mock.patch.object(servicegroup.API, 'service_is_up') def test_check_instance_host_is_up(self, mock_is_up, mock_get): mock_get.return_value = "service" mock_is_up.return_value = True self.task._check_host_is_up("host") mock_get.assert_called_once_with(self.context, "host") mock_is_up.assert_called_once_with("service") @mock.patch.object(objects.Service, 'get_by_compute_host') @mock.patch.object(servicegroup.API, 'service_is_up') def test_check_instance_host_is_up_fails_if_not_up(self, mock_is_up, mock_get): mock_get.return_value = "service" mock_is_up.return_value = False self.assertRaises(exception.ComputeServiceUnavailable, self.task._check_host_is_up, "host") mock_get.assert_called_once_with(self.context, "host") mock_is_up.assert_called_once_with("service") @mock.patch.object(objects.Service, 'get_by_compute_host', side_effect=exception.ComputeHostNotFound(host='host')) def test_check_instance_host_is_up_fails_if_not_found(self, mock): self.assertRaises(exception.ComputeHostNotFound, self.task._check_host_is_up, "host") @mock.patch.object(objects.Service, 'get_by_compute_host') @mock.patch.object(live_migrate.LiveMigrationTask, '_get_compute_info') @mock.patch.object(servicegroup.API, 'service_is_up') @mock.patch.object(compute_rpcapi.ComputeAPI, 'check_can_live_migrate_destination') def test_check_requested_destination(self, mock_check, mock_is_up, mock_get_info, mock_get_host): mock_get_host.return_value = "service" mock_is_up.return_value = True hypervisor_details = objects.ComputeNode( hypervisor_type="a", hypervisor_version=6.1, free_ram_mb=513, memory_mb=512, ram_allocation_ratio=1.0) mock_get_info.return_value = hypervisor_details mock_check.return_value = "migrate_data" self.assertEqual((hypervisor_details, hypervisor_details), self.task._check_requested_destination()) self.assertEqual("migrate_data", self.task.migrate_data) mock_get_host.assert_called_once_with(self.context, self.destination) mock_is_up.assert_called_once_with("service") self.assertEqual([mock.call(self.destination), mock.call(self.instance_host), mock.call(self.destination)], mock_get_info.call_args_list) mock_check.assert_called_once_with(self.context, self.instance, self.destination, self.block_migration, self.disk_over_commit) def 
test_check_requested_destination_fails_with_same_dest(self): self.task.destination = "same" self.task.source = "same" self.assertRaises(exception.UnableToMigrateToSelf, self.task._check_requested_destination) @mock.patch.object(objects.Service, 'get_by_compute_host', side_effect=exception.ComputeHostNotFound(host='host')) def test_check_requested_destination_fails_when_destination_is_up(self, mock): self.assertRaises(exception.ComputeHostNotFound, self.task._check_requested_destination) @mock.patch.object(live_migrate.LiveMigrationTask, '_check_host_is_up') @mock.patch.object(objects.ComputeNode, 'get_first_node_by_host_for_old_compat') def test_check_requested_destination_fails_with_not_enough_memory( self, mock_get_first, mock_is_up): mock_get_first.return_value = ( objects.ComputeNode(free_ram_mb=513, memory_mb=1024, ram_allocation_ratio=0.9,)) # free_ram is bigger than instance.ram (512) but the allocation # ratio reduces the total available RAM to 410MB # (1024 * 0.9 - (1024 - 513)) self.assertRaises(exception.MigrationPreCheckError, self.task._check_requested_destination) mock_is_up.assert_called_once_with(self.destination) mock_get_first.assert_called_once_with(self.context, self.destination) @mock.patch.object(live_migrate.LiveMigrationTask, '_check_host_is_up') @mock.patch.object(live_migrate.LiveMigrationTask, '_check_destination_has_enough_memory') @mock.patch.object(live_migrate.LiveMigrationTask, '_get_compute_info') def test_check_requested_destination_fails_with_hypervisor_diff( self, mock_get_info, mock_check, mock_is_up): mock_get_info.side_effect = [ objects.ComputeNode(hypervisor_type='b'), objects.ComputeNode(hypervisor_type='a')] self.assertRaises(exception.InvalidHypervisorType, self.task._check_requested_destination) mock_is_up.assert_called_once_with(self.destination) mock_check.assert_called_once_with() self.assertEqual([mock.call(self.instance_host), mock.call(self.destination)], mock_get_info.call_args_list) @mock.patch.object(live_migrate.LiveMigrationTask, '_check_host_is_up') @mock.patch.object(live_migrate.LiveMigrationTask, '_check_destination_has_enough_memory') @mock.patch.object(live_migrate.LiveMigrationTask, '_get_compute_info') def test_check_requested_destination_fails_with_hypervisor_too_old( self, mock_get_info, mock_check, mock_is_up): host1 = {'hypervisor_type': 'a', 'hypervisor_version': 7} host2 = {'hypervisor_type': 'a', 'hypervisor_version': 6} mock_get_info.side_effect = [objects.ComputeNode(**host1), objects.ComputeNode(**host2)] self.assertRaises(exception.DestinationHypervisorTooOld, self.task._check_requested_destination) mock_is_up.assert_called_once_with(self.destination) mock_check.assert_called_once_with() self.assertEqual([mock.call(self.instance_host), mock.call(self.destination)], mock_get_info.call_args_list) @mock.patch.object(objects.Service, 'get_by_compute_host') @mock.patch.object(live_migrate.LiveMigrationTask, '_get_compute_info') @mock.patch.object(servicegroup.API, 'service_is_up') @mock.patch.object(compute_rpcapi.ComputeAPI, 'check_can_live_migrate_destination') @mock.patch.object(objects.HostMapping, 'get_by_host', return_value=objects.HostMapping( cell_mapping=objects.CellMapping( uuid=uuids.different))) def test_check_requested_destination_fails_different_cells( self, mock_get_host_mapping, mock_check, mock_is_up, mock_get_info, mock_get_host): mock_get_host.return_value = "service" mock_is_up.return_value = True hypervisor_details = objects.ComputeNode( hypervisor_type="a", hypervisor_version=6.1, free_ram_mb=513, 
memory_mb=512, ram_allocation_ratio=1.0) mock_get_info.return_value = hypervisor_details mock_check.return_value = "migrate_data" ex = self.assertRaises(exception.MigrationPreCheckError, self.task._check_requested_destination) self.assertIn('across cells', six.text_type(ex)) def test_find_destination_works(self): self.mox.StubOutWithMock(utils, 'get_image_from_system_metadata') self.mox.StubOutWithMock(scheduler_utils, 'setup_instance_group') self.mox.StubOutWithMock(objects.RequestSpec, 'reset_forced_destinations') self.mox.StubOutWithMock(self.task.scheduler_client, 'select_destinations') self.mox.StubOutWithMock(self.task, '_check_compatible_with_source_hypervisor') self.mox.StubOutWithMock(self.task, '_call_livem_checks_on_host') utils.get_image_from_system_metadata( self.instance.system_metadata).AndReturn("image") scheduler_utils.setup_instance_group( self.context, self.fake_spec) self.fake_spec.reset_forced_destinations() self.task.scheduler_client.select_destinations( self.context, self.fake_spec, [self.instance.uuid], return_objects=True, return_alternates=False).AndReturn( [[fake_selection1]]) self.task._check_compatible_with_source_hypervisor("host1") self.task._call_livem_checks_on_host("host1") self.mox.ReplayAll() self.assertEqual(("host1", "node1"), self.task._find_destination()) # Make sure the request_spec was updated to include the cell # mapping. self.assertIsNotNone(self.fake_spec.requested_destination.cell) # Make sure the spec was updated to include the project_id. self.assertEqual(self.fake_spec.project_id, self.instance.project_id) def test_find_destination_works_with_no_request_spec(self): task = live_migrate.LiveMigrationTask( self.context, self.instance, self.destination, self.block_migration, self.disk_over_commit, self.migration, compute_rpcapi.ComputeAPI(), servicegroup.API(), scheduler_client.SchedulerClient(), request_spec=None) another_spec = objects.RequestSpec() self.instance.flavor = objects.Flavor() self.instance.numa_topology = None self.instance.pci_requests = None @mock.patch.object(task, '_call_livem_checks_on_host') @mock.patch.object(task, '_check_compatible_with_source_hypervisor') @mock.patch.object(task.scheduler_client, 'select_destinations') @mock.patch.object(objects.RequestSpec, 'from_components') @mock.patch.object(scheduler_utils, 'setup_instance_group') @mock.patch.object(utils, 'get_image_from_system_metadata') def do_test(get_image, setup_ig, from_components, select_dest, check_compat, call_livem_checks): get_image.return_value = "image" from_components.return_value = another_spec select_dest.return_value = [[fake_selection1]] self.assertEqual(("host1", "node1"), task._find_destination()) get_image.assert_called_once_with(self.instance.system_metadata) setup_ig.assert_called_once_with(self.context, another_spec) select_dest.assert_called_once_with(self.context, another_spec, [self.instance.uuid], return_objects=True, return_alternates=False) # Make sure the request_spec was updated to include the cell # mapping. 
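            # (Live migration may not cross cells -- see the 'across cells'
            # pre-check test above -- so the task pins
            # requested_destination.cell to the source instance's cell before
            # asking the scheduler for a host.)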
self.assertIsNotNone(another_spec.requested_destination.cell) check_compat.assert_called_once_with("host1") call_livem_checks.assert_called_once_with("host1") do_test() def test_find_destination_no_image_works(self): self.instance['image_ref'] = '' self.mox.StubOutWithMock(scheduler_utils, 'setup_instance_group') self.mox.StubOutWithMock(self.task.scheduler_client, 'select_destinations') self.mox.StubOutWithMock(self.task, '_check_compatible_with_source_hypervisor') self.mox.StubOutWithMock(self.task, '_call_livem_checks_on_host') scheduler_utils.setup_instance_group(self.context, self.fake_spec) self.task.scheduler_client.select_destinations(self.context, self.fake_spec, [self.instance.uuid], return_objects=True, return_alternates=False).AndReturn([[fake_selection1]]) self.task._check_compatible_with_source_hypervisor("host1") self.task._call_livem_checks_on_host("host1") self.mox.ReplayAll() self.assertEqual(("host1", "node1"), self.task._find_destination()) def _test_find_destination_retry_hypervisor_raises(self, error): self.mox.StubOutWithMock(utils, 'get_image_from_system_metadata') self.mox.StubOutWithMock(scheduler_utils, 'setup_instance_group') self.mox.StubOutWithMock(self.task.scheduler_client, 'select_destinations') self.mox.StubOutWithMock(self.task, '_check_compatible_with_source_hypervisor') self.mox.StubOutWithMock(self.task, '_call_livem_checks_on_host') utils.get_image_from_system_metadata( self.instance.system_metadata).AndReturn("image") scheduler_utils.setup_instance_group(self.context, self.fake_spec) self.task.scheduler_client.select_destinations(self.context, self.fake_spec, [self.instance.uuid], return_objects=True, return_alternates=False).AndReturn([[fake_selection1]]) self.task._check_compatible_with_source_hypervisor("host1")\ .AndRaise(error) self.task.scheduler_client.select_destinations(self.context, self.fake_spec, [self.instance.uuid], return_objects=True, return_alternates=False).AndReturn([[fake_selection2]]) self.task._check_compatible_with_source_hypervisor("host2") self.task._call_livem_checks_on_host("host2") self.mox.ReplayAll() with mock.patch.object(self.task, '_remove_host_allocations') as remove_allocs: self.assertEqual(("host2", "node2"), self.task._find_destination()) # Should have removed allocations for the first host. 
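            # (_remove_host_allocations drops the instance's placement
            # allocation against the rejected host, so the retried
            # select_destinations() call starts from a clean claim.)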
remove_allocs.assert_called_once_with('host1', 'node1') def test_find_destination_retry_with_old_hypervisor(self): self._test_find_destination_retry_hypervisor_raises( exception.DestinationHypervisorTooOld) def test_find_destination_retry_with_invalid_hypervisor_type(self): self._test_find_destination_retry_hypervisor_raises( exception.InvalidHypervisorType) def test_find_destination_retry_with_invalid_livem_checks(self): self.flags(migrate_max_retries=1) self.mox.StubOutWithMock(utils, 'get_image_from_system_metadata') self.mox.StubOutWithMock(scheduler_utils, 'setup_instance_group') self.mox.StubOutWithMock(self.task.scheduler_client, 'select_destinations') self.mox.StubOutWithMock(self.task, '_check_compatible_with_source_hypervisor') self.mox.StubOutWithMock(self.task, '_call_livem_checks_on_host') utils.get_image_from_system_metadata( self.instance.system_metadata).AndReturn("image") scheduler_utils.setup_instance_group(self.context, self.fake_spec) self.task.scheduler_client.select_destinations(self.context, self.fake_spec, [self.instance.uuid], return_objects=True, return_alternates=False).AndReturn([[fake_selection1]]) self.task._check_compatible_with_source_hypervisor("host1") self.task._call_livem_checks_on_host("host1")\ .AndRaise(exception.Invalid) self.task.scheduler_client.select_destinations(self.context, self.fake_spec, [self.instance.uuid], return_objects=True, return_alternates=False).AndReturn([[fake_selection2]]) self.task._check_compatible_with_source_hypervisor("host2") self.task._call_livem_checks_on_host("host2") self.mox.ReplayAll() with mock.patch.object(self.task, '_remove_host_allocations') as remove_allocs: self.assertEqual(("host2", "node2"), self.task._find_destination()) # Should have removed allocations for the first host. remove_allocs.assert_called_once_with('host1', 'node1') def test_find_destination_retry_with_failed_migration_pre_checks(self): self.flags(migrate_max_retries=1) self.mox.StubOutWithMock(utils, 'get_image_from_system_metadata') self.mox.StubOutWithMock(scheduler_utils, 'setup_instance_group') self.mox.StubOutWithMock(self.task.scheduler_client, 'select_destinations') self.mox.StubOutWithMock(self.task, '_check_compatible_with_source_hypervisor') self.mox.StubOutWithMock(self.task, '_call_livem_checks_on_host') utils.get_image_from_system_metadata( self.instance.system_metadata).AndReturn("image") scheduler_utils.setup_instance_group(self.context, self.fake_spec) self.task.scheduler_client.select_destinations(self.context, self.fake_spec, [self.instance.uuid], return_objects=True, return_alternates=False).AndReturn([[fake_selection1]]) self.task._check_compatible_with_source_hypervisor("host1") self.task._call_livem_checks_on_host("host1")\ .AndRaise(exception.MigrationPreCheckError("reason")) self.task.scheduler_client.select_destinations(self.context, self.fake_spec, [self.instance.uuid], return_objects=True, return_alternates=False).AndReturn([[fake_selection2]]) self.task._check_compatible_with_source_hypervisor("host2") self.task._call_livem_checks_on_host("host2") self.mox.ReplayAll() with mock.patch.object(self.task, '_remove_host_allocations') as remove_allocs: self.assertEqual(("host2", "node2"), self.task._find_destination()) # Should have removed allocations for the first host. 
remove_allocs.assert_called_once_with('host1', 'node1') def test_find_destination_retry_exceeds_max(self): self.flags(migrate_max_retries=0) self.mox.StubOutWithMock(utils, 'get_image_from_system_metadata') self.mox.StubOutWithMock(scheduler_utils, 'setup_instance_group') self.mox.StubOutWithMock(self.task.scheduler_client, 'select_destinations') self.mox.StubOutWithMock(self.task, '_check_compatible_with_source_hypervisor') utils.get_image_from_system_metadata( self.instance.system_metadata).AndReturn("image") scheduler_utils.setup_instance_group(self.context, self.fake_spec) self.task.scheduler_client.select_destinations(self.context, self.fake_spec, [self.instance.uuid], return_objects=True, return_alternates=False).AndReturn([[fake_selection1]]) self.task._check_compatible_with_source_hypervisor("host1")\ .AndRaise(exception.DestinationHypervisorTooOld) self.mox.ReplayAll() with test.nested( mock.patch.object(self.task.migration, 'save'), mock.patch.object(self.task, '_remove_host_allocations') ) as ( save_mock, remove_allocs ): self.assertRaises(exception.MaxRetriesExceeded, self.task._find_destination) self.assertEqual('failed', self.task.migration.status) save_mock.assert_called_once_with() # Should have removed allocations for the first host. remove_allocs.assert_called_once_with('host1', 'node1') def test_find_destination_when_runs_out_of_hosts(self): self.mox.StubOutWithMock(utils, 'get_image_from_system_metadata') self.mox.StubOutWithMock(scheduler_utils, 'setup_instance_group') self.mox.StubOutWithMock(self.task.scheduler_client, 'select_destinations') utils.get_image_from_system_metadata( self.instance.system_metadata).AndReturn("image") scheduler_utils.setup_instance_group(self.context, self.fake_spec) self.task.scheduler_client.select_destinations(self.context, self.fake_spec, [self.instance.uuid], return_objects=True, return_alternates=False).AndRaise( exception.NoValidHost(reason="")) self.mox.ReplayAll() self.assertRaises(exception.NoValidHost, self.task._find_destination) @mock.patch("nova.utils.get_image_from_system_metadata") @mock.patch("nova.scheduler.utils.build_request_spec") @mock.patch("nova.scheduler.utils.setup_instance_group") @mock.patch("nova.objects.RequestSpec.from_primitives") def test_find_destination_with_remoteError(self, m_from_primitives, m_setup_instance_group, m_build_request_spec, m_get_image_from_system_metadata): m_get_image_from_system_metadata.return_value = {'properties': {}} m_build_request_spec.return_value = {} fake_spec = objects.RequestSpec() m_from_primitives.return_value = fake_spec with mock.patch.object(self.task.scheduler_client, 'select_destinations') as m_select_destinations: error = messaging.RemoteError() m_select_destinations.side_effect = error self.assertRaises(exception.MigrationSchedulerRPCError, self.task._find_destination) def test_call_livem_checks_on_host(self): with mock.patch.object(self.task.compute_rpcapi, 'check_can_live_migrate_destination', side_effect=messaging.MessagingTimeout): self.assertRaises(exception.MigrationPreCheckError, self.task._call_livem_checks_on_host, {}) @mock.patch.object(objects.InstanceMapping, 'get_by_instance_uuid', side_effect=exception.InstanceMappingNotFound( uuid=uuids.instance)) def test_get_source_cell_mapping_not_found(self, mock_get): """Negative test where InstanceMappingNotFound is raised and converted to MigrationPreCheckError. 
""" self.assertRaises(exception.MigrationPreCheckError, self.task._get_source_cell_mapping) mock_get.assert_called_once_with( self.task.context, self.task.instance.uuid) @mock.patch.object(objects.HostMapping, 'get_by_host', side_effect=exception.HostMappingNotFound( name='destination')) def test_get_destination_cell_mapping_not_found(self, mock_get): """Negative test where HostMappingNotFound is raised and converted to MigrationPreCheckError. """ self.assertRaises(exception.MigrationPreCheckError, self.task._get_destination_cell_mapping) mock_get.assert_called_once_with( self.task.context, self.task.destination) @mock.patch.object(objects.ComputeNode, 'get_by_host_and_nodename', side_effect=exception.ComputeHostNotFound(host='host')) def test_remove_host_allocations_compute_host_not_found(self, get_cn): """Tests that failing to find a ComputeNode will not blow up the _remove_host_allocations method. """ with mock.patch.object( self.task.scheduler_client.reportclient, 'remove_provider_from_instance_allocation') as remove_provider: self.task._remove_host_allocations('host', 'node') remove_provider.assert_not_called() nova-17.0.1/nova/tests/unit/conductor/tasks/test_base.py0000666000175000017500000000310413250073126023320 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from nova.conductor.tasks import base from nova import test class FakeTask(base.TaskBase): def __init__(self, context, instance, fail=False): super(FakeTask, self).__init__(context, instance) self.fail = fail def _execute(self): if self.fail: raise Exception else: pass class TaskBaseTestCase(test.NoDBTestCase): def setUp(self): super(TaskBaseTestCase, self).setUp() self.task = FakeTask(mock.MagicMock(), mock.MagicMock()) @mock.patch.object(FakeTask, 'rollback') def test_wrapper_exception(self, fake_rollback): self.task.fail = True try: self.task.execute() except Exception: pass fake_rollback.assert_called_once_with() @mock.patch.object(FakeTask, 'rollback') def test_wrapper_no_exception(self, fake_rollback): try: self.task.execute() except Exception: pass self.assertFalse(fake_rollback.called) nova-17.0.1/nova/tests/unit/conductor/tasks/__init__.py0000666000175000017500000000000013250073126023076 0ustar zuulzuul00000000000000nova-17.0.1/nova/tests/unit/conductor/tasks/test_migrate.py0000666000175000017500000003173513250073126024051 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import mock from nova.compute import rpcapi as compute_rpcapi from nova.conductor.tasks import migrate from nova import context from nova import exception from nova import objects from nova.scheduler import client as scheduler_client from nova.scheduler import utils as scheduler_utils from nova import test from nova.tests.unit.conductor.test_conductor import FakeContext from nova.tests.unit import fake_flavor from nova.tests.unit import fake_instance from nova.tests import uuidsentinel as uuids class MigrationTaskTestCase(test.NoDBTestCase): def setUp(self): super(MigrationTaskTestCase, self).setUp() self.user_id = 'fake' self.project_id = 'fake' self.context = FakeContext(self.user_id, self.project_id) self.flavor = fake_flavor.fake_flavor_obj(self.context) self.flavor.extra_specs = {'extra_specs': 'fake'} inst = fake_instance.fake_db_instance(image_ref='image_ref', instance_type=self.flavor) inst_object = objects.Instance( flavor=self.flavor, numa_topology=None, pci_requests=None, system_metadata={'image_hw_disk_bus': 'scsi'}) self.instance = objects.Instance._from_db_object( self.context, inst_object, inst, []) self.request_spec = objects.RequestSpec(image=objects.ImageMeta()) self.host_lists = [[objects.Selection(service_host="host1", nodename="node1", cell_uuid=uuids.cell1)]] self.filter_properties = {'limits': {}, 'retry': {'num_attempts': 1, 'hosts': [['host1', 'node1']]}} self.reservations = [] self.clean_shutdown = True def _generate_task(self): return migrate.MigrationTask(self.context, self.instance, self.flavor, self.request_spec, self.clean_shutdown, compute_rpcapi.ComputeAPI(), scheduler_client.SchedulerClient(), host_list=None) @mock.patch('nova.objects.Service.get_minimum_version_multi') @mock.patch('nova.availability_zones.get_host_availability_zone') @mock.patch.object(scheduler_utils, 'setup_instance_group') @mock.patch.object(scheduler_client.SchedulerClient, 'select_destinations') @mock.patch.object(compute_rpcapi.ComputeAPI, 'prep_resize') def test_execute_legacy_no_pre_create_migration(self, prep_resize_mock, sel_dest_mock, sig_mock, az_mock, gmv_mock): sel_dest_mock.return_value = self.host_lists az_mock.return_value = 'myaz' task = self._generate_task() legacy_request_spec = self.request_spec.to_legacy_request_spec_dict() gmv_mock.return_value = 22 task.execute() sig_mock.assert_called_once_with(self.context, self.request_spec) task.scheduler_client.select_destinations.assert_called_once_with( self.context, self.request_spec, [self.instance.uuid], return_objects=True, return_alternates=True) selection = self.host_lists[0][0] prep_resize_mock.assert_called_once_with( self.context, self.instance, legacy_request_spec['image'], self.flavor, selection.service_host, None, request_spec=legacy_request_spec, filter_properties=self.filter_properties, node=selection.nodename, clean_shutdown=self.clean_shutdown, host_list=[]) az_mock.assert_called_once_with(self.context, 'host1') self.assertIsNone(task._migration) @mock.patch.object(objects.MigrationList, 'get_by_filters') @mock.patch('nova.scheduler.client.report.SchedulerReportClient') @mock.patch('nova.objects.ComputeNode.get_by_host_and_nodename') @mock.patch('nova.objects.Migration.save') @mock.patch('nova.objects.Migration.create') @mock.patch('nova.objects.Service.get_minimum_version_multi') @mock.patch('nova.availability_zones.get_host_availability_zone') @mock.patch.object(scheduler_utils, 'setup_instance_group') @mock.patch.object(scheduler_client.SchedulerClient, 'select_destinations') 
@mock.patch.object(compute_rpcapi.ComputeAPI, 'prep_resize') def _test_execute(self, prep_resize_mock, sel_dest_mock, sig_mock, az_mock, gmv_mock, cm_mock, sm_mock, cn_mock, rc_mock, gbf_mock, requested_destination=False): sel_dest_mock.return_value = self.host_lists az_mock.return_value = 'myaz' gbf_mock.return_value = objects.MigrationList() if requested_destination: self.request_spec.requested_destination = objects.Destination( host='target_host', node=None) self.request_spec.retry = objects.SchedulerRetries.from_dict( self.context, self.filter_properties['retry']) self.filter_properties.pop('retry') self.filter_properties['requested_destination'] = ( self.request_spec.requested_destination) task = self._generate_task() legacy_request_spec = self.request_spec.to_legacy_request_spec_dict() gmv_mock.return_value = 23 # We just need this hook point to set a uuid on the # migration before we use it for teardown def set_migration_uuid(*a, **k): task._migration.uuid = uuids.migration return mock.MagicMock() # NOTE(danms): It's odd to do this on cn_mock, but it's just because # of when we need to have it set in the flow and where we have an easy # place to find it via self.migration. cn_mock.side_effect = set_migration_uuid task.execute() sig_mock.assert_called_once_with(self.context, self.request_spec) task.scheduler_client.select_destinations.assert_called_once_with( self.context, self.request_spec, [self.instance.uuid], return_objects=True, return_alternates=True) selection = self.host_lists[0][0] prep_resize_mock.assert_called_once_with( self.context, self.instance, legacy_request_spec['image'], self.flavor, selection.service_host, task._migration, request_spec=legacy_request_spec, filter_properties=self.filter_properties, node=selection.nodename, clean_shutdown=self.clean_shutdown, host_list=[]) az_mock.assert_called_once_with(self.context, 'host1') self.assertIsNotNone(task._migration) old_flavor = self.instance.flavor new_flavor = self.flavor self.assertEqual(old_flavor.id, task._migration.old_instance_type_id) self.assertEqual(new_flavor.id, task._migration.new_instance_type_id) self.assertEqual('pre-migrating', task._migration.status) self.assertEqual(self.instance.uuid, task._migration.instance_uuid) self.assertEqual(self.instance.host, task._migration.source_compute) self.assertEqual(self.instance.node, task._migration.source_node) if old_flavor.id != new_flavor.id: self.assertEqual('resize', task._migration.migration_type) else: self.assertEqual('migration', task._migration.migration_type) task._migration.create.assert_called_once_with() if requested_destination: self.assertIsNone(self.request_spec.retry) self.assertIn('cell', self.request_spec.requested_destination) self.assertIsNotNone(self.request_spec.requested_destination.cell) def test_execute(self): self._test_execute() def test_execute_with_destination(self): self._test_execute(requested_destination=True) def test_execute_resize(self): self.flavor = self.flavor.obj_clone() self.flavor.id = 3 self._test_execute() @mock.patch.object(objects.MigrationList, 'get_by_filters') @mock.patch('nova.conductor.tasks.migrate.revert_allocation_for_migration') @mock.patch('nova.scheduler.client.report.SchedulerReportClient') @mock.patch('nova.objects.ComputeNode.get_by_host_and_nodename') @mock.patch('nova.objects.Migration.save') @mock.patch('nova.objects.Migration.create') @mock.patch('nova.objects.Service.get_minimum_version_multi') @mock.patch('nova.availability_zones.get_host_availability_zone') @mock.patch.object(scheduler_utils, 
'setup_instance_group') @mock.patch.object(scheduler_client.SchedulerClient, 'select_destinations') @mock.patch.object(compute_rpcapi.ComputeAPI, 'prep_resize') def test_execute_rollback(self, prep_resize_mock, sel_dest_mock, sig_mock, az_mock, gmv_mock, cm_mock, sm_mock, cn_mock, rc_mock, mock_ra, mock_gbf): sel_dest_mock.return_value = self.host_lists az_mock.return_value = 'myaz' task = self._generate_task() gmv_mock.return_value = 23 mock_gbf.return_value = objects.MigrationList() # We just need this hook point to set a uuid on the # migration before we use it for teardown def set_migration_uuid(*a, **k): task._migration.uuid = uuids.migration return mock.MagicMock() # NOTE(danms): It's odd to do this on cn_mock, but it's just because # of when we need to have it set in the flow and where we have an easy # place to find it via self.migration. cn_mock.side_effect = set_migration_uuid prep_resize_mock.side_effect = test.TestingException task._held_allocations = mock.sentinel.allocs self.assertRaises(test.TestingException, task.execute) self.assertIsNotNone(task._migration) task._migration.create.assert_called_once_with() task._migration.save.assert_called_once_with() self.assertEqual('error', task._migration.status) mock_ra.assert_called_once_with(task.context, task._source_cn, task.instance, task._migration, task._held_allocations) class MigrationTaskAllocationUtils(test.NoDBTestCase): @mock.patch('nova.objects.ComputeNode.get_by_host_and_nodename') def test_replace_allocation_with_migration_no_host(self, mock_cn): mock_cn.side_effect = exception.ComputeHostNotFound(host='host') migration = objects.Migration() instance = objects.Instance(host='host', node='node') self.assertRaises(exception.ComputeHostNotFound, migrate.replace_allocation_with_migration, mock.sentinel.context, instance, migration) mock_cn.assert_called_once_with(mock.sentinel.context, instance.host, instance.node) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'get_allocations_for_consumer_by_provider') @mock.patch('nova.objects.ComputeNode.get_by_host_and_nodename') def test_replace_allocation_with_migration_no_allocs(self, mock_cn, mock_ga): mock_ga.return_value = None migration = objects.Migration(uuid=uuids.migration) instance = objects.Instance(uuid=uuids.instance, host='host', node='node') result = migrate.replace_allocation_with_migration( mock.sentinel.context, instance, migration) self.assertEqual((None, None), result) @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'put_allocations') @mock.patch('nova.scheduler.client.report.SchedulerReportClient.' 'get_allocations_for_consumer_by_provider') @mock.patch('nova.objects.ComputeNode.get_by_host_and_nodename') def test_replace_allocation_with_migration_allocs_fail(self, mock_cn, mock_ga, mock_pa): ctxt = context.get_admin_context() migration = objects.Migration(uuid=uuids.migration) instance = objects.Instance(uuid=uuids.instance, user_id='fake', project_id='fake', host='host', node='node') mock_pa.return_value = False self.assertRaises(exception.NoValidHost, migrate.replace_allocation_with_migration, ctxt, instance, migration) nova-17.0.1/nova/tests/unit/conductor/test_conductor.py0000666000175000017500000043605013250073136023274 0ustar zuulzuul00000000000000# Copyright 2012 IBM Corp. # Copyright 2013 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Tests for the conductor service.""" import copy import mock from mox3 import mox import oslo_messaging as messaging from oslo_serialization import jsonutils from oslo_utils import timeutils from oslo_versionedobjects import exception as ovo_exc import six from nova import block_device from nova.compute import flavors from nova.compute import rpcapi as compute_rpcapi from nova.compute import task_states from nova.compute import vm_states from nova.conductor import api as conductor_api from nova.conductor import manager as conductor_manager from nova.conductor import rpcapi as conductor_rpcapi from nova.conductor.tasks import live_migrate from nova.conductor.tasks import migrate from nova import conf from nova import context from nova import db from nova.db.sqlalchemy import api as db_api from nova.db.sqlalchemy import api_models from nova import exception as exc from nova.image import api as image_api from nova import objects from nova.objects import base as obj_base from nova.objects import fields from nova import rpc from nova.scheduler import client as scheduler_client from nova.scheduler import utils as scheduler_utils from nova import test from nova.tests import fixtures from nova.tests.unit.api.openstack import fakes from nova.tests.unit import cast_as_call from nova.tests.unit.compute import test_compute from nova.tests.unit import fake_build_request from nova.tests.unit import fake_instance from nova.tests.unit import fake_notifier from nova.tests.unit import fake_request_spec from nova.tests.unit import fake_server_actions from nova.tests.unit import utils as test_utils from nova.tests import uuidsentinel as uuids from nova import utils CONF = conf.CONF fake_alloc1 = {"allocations": [ {"resource_provider": {"uuid": uuids.host1}, "resources": {"VCPU": 1, "MEMORY_MB": 1024, "DISK_GB": 100} }]} fake_alloc2 = {"allocations": [ {"resource_provider": {"uuid": uuids.host2}, "resources": {"VCPU": 1, "MEMORY_MB": 1024, "DISK_GB": 100} }]} fake_alloc3 = {"allocations": [ {"resource_provider": {"uuid": uuids.host3}, "resources": {"VCPU": 1, "MEMORY_MB": 1024, "DISK_GB": 100} }]} fake_alloc_json1 = jsonutils.dumps(fake_alloc1) fake_alloc_json2 = jsonutils.dumps(fake_alloc2) fake_alloc_json3 = jsonutils.dumps(fake_alloc3) fake_alloc_version = "1.23" fake_selection1 = objects.Selection(service_host="host1", nodename="node1", cell_uuid=uuids.cell, limits=None, allocation_request=fake_alloc_json1, allocation_request_version=fake_alloc_version) fake_selection2 = objects.Selection(service_host="host2", nodename="node2", cell_uuid=uuids.cell, limits=None, allocation_request=fake_alloc_json2, allocation_request_version=fake_alloc_version) fake_selection3 = objects.Selection(service_host="host3", nodename="node3", cell_uuid=uuids.cell, limits=None, allocation_request=fake_alloc_json3, allocation_request_version=fake_alloc_version) fake_host_lists1 = [[fake_selection1]] fake_host_lists2 = [[fake_selection1], [fake_selection2]] fake_host_lists_alt = [[fake_selection1, fake_selection2, fake_selection3]] class FakeContext(context.RequestContext): def elevated(self): """Return a consistent elevated 
context so we can detect it.""" if not hasattr(self, '_elevated'): self._elevated = super(FakeContext, self).elevated() return self._elevated class _BaseTestCase(object): def setUp(self): super(_BaseTestCase, self).setUp() self.user_id = fakes.FAKE_USER_ID self.project_id = fakes.FAKE_PROJECT_ID self.context = FakeContext(self.user_id, self.project_id) fake_notifier.stub_notifier(self) self.addCleanup(fake_notifier.reset) def fake_deserialize_context(serializer, ctxt_dict): self.assertEqual(self.context.user_id, ctxt_dict['user_id']) self.assertEqual(self.context.project_id, ctxt_dict['project_id']) return self.context self.stubs.Set(rpc.RequestContextSerializer, 'deserialize_context', fake_deserialize_context) self.useFixture(fixtures.SpawnIsSynchronousFixture()) class ConductorTestCase(_BaseTestCase, test.TestCase): """Conductor Manager Tests.""" def setUp(self): super(ConductorTestCase, self).setUp() self.conductor = conductor_manager.ConductorManager() self.conductor_manager = self.conductor def _test_object_action(self, is_classmethod, raise_exception): class TestObject(obj_base.NovaObject): def foo(self, raise_exception=False): if raise_exception: raise Exception('test') else: return 'test' @classmethod def bar(cls, context, raise_exception=False): if raise_exception: raise Exception('test') else: return 'test' obj_base.NovaObjectRegistry.register(TestObject) obj = TestObject() # NOTE(danms): After a trip over RPC, any tuple will be a list, # so use a list here to make sure we can handle it fake_args = [] if is_classmethod: versions = {'TestObject': '1.0'} result = self.conductor.object_class_action_versions( self.context, TestObject.obj_name(), 'bar', versions, fake_args, {'raise_exception': raise_exception}) else: updates, result = self.conductor.object_action( self.context, obj, 'foo', fake_args, {'raise_exception': raise_exception}) self.assertEqual('test', result) def test_object_action(self): self._test_object_action(False, False) def test_object_action_on_raise(self): self.assertRaises(messaging.ExpectedException, self._test_object_action, False, True) def test_object_class_action(self): self._test_object_action(True, False) def test_object_class_action_on_raise(self): self.assertRaises(messaging.ExpectedException, self._test_object_action, True, True) def test_object_action_copies_object(self): class TestObject(obj_base.NovaObject): fields = {'dict': fields.DictOfStringsField()} def touch_dict(self): self.dict['foo'] = 'bar' self.obj_reset_changes() obj_base.NovaObjectRegistry.register(TestObject) obj = TestObject() obj.dict = {} obj.obj_reset_changes() updates, result = self.conductor.object_action( self.context, obj, 'touch_dict', tuple(), {}) # NOTE(danms): If conductor did not properly copy the object, then # the new and reference copies of the nested dict object will be # the same, and thus 'dict' will not be reported as changed self.assertIn('dict', updates) self.assertEqual({'foo': 'bar'}, updates['dict']) def test_object_class_action_versions(self): @obj_base.NovaObjectRegistry.register class TestObject(obj_base.NovaObject): VERSION = '1.10' @classmethod def foo(cls, context): return cls() versions = { 'TestObject': '1.2', 'OtherObj': '1.0', } with mock.patch.object(self.conductor_manager, '_object_dispatch') as m: m.return_value = TestObject() m.return_value.obj_to_primitive = mock.MagicMock() self.conductor.object_class_action_versions( self.context, TestObject.obj_name(), 'foo', versions, tuple(), {}) m.return_value.obj_to_primitive.assert_called_once_with( 
target_version='1.2', version_manifest=versions) def test_object_class_action_versions_old_object(self): # Make sure we return older than requested objects unmodified, # see bug #1596119. @obj_base.NovaObjectRegistry.register class TestObject(obj_base.NovaObject): VERSION = '1.10' @classmethod def foo(cls, context): return cls() versions = { 'TestObject': '1.10', 'OtherObj': '1.0', } with mock.patch.object(self.conductor_manager, '_object_dispatch') as m: m.return_value = TestObject() m.return_value.VERSION = '1.9' m.return_value.obj_to_primitive = mock.MagicMock() obj = self.conductor.object_class_action_versions( self.context, TestObject.obj_name(), 'foo', versions, tuple(), {}) self.assertFalse(m.return_value.obj_to_primitive.called) self.assertEqual('1.9', obj.VERSION) def test_object_class_action_versions_major_version_diff(self): @obj_base.NovaObjectRegistry.register class TestObject(obj_base.NovaObject): VERSION = '2.10' @classmethod def foo(cls, context): return cls() versions = { 'TestObject': '2.10', 'OtherObj': '1.0', } with mock.patch.object(self.conductor_manager, '_object_dispatch') as m: m.return_value = TestObject() m.return_value.VERSION = '1.9' self.assertRaises( ovo_exc.InvalidTargetVersion, self.conductor.object_class_action_versions, self.context, TestObject.obj_name(), 'foo', versions, tuple(), {}) def test_reset(self): with mock.patch.object(objects.Service, 'clear_min_version_cache' ) as mock_clear_cache: self.conductor.reset() mock_clear_cache.assert_called_once_with() def test_provider_fw_rule_get_all(self): result = self.conductor.provider_fw_rule_get_all(self.context) self.assertEqual([], result) class ConductorRPCAPITestCase(_BaseTestCase, test.TestCase): """Conductor RPC API Tests.""" def setUp(self): super(ConductorRPCAPITestCase, self).setUp() self.conductor_service = self.start_service( 'conductor', manager='nova.conductor.manager.ConductorManager') self.conductor_manager = self.conductor_service.manager self.conductor = conductor_rpcapi.ConductorAPI() class ConductorAPITestCase(_BaseTestCase, test.TestCase): """Conductor API Tests.""" def setUp(self): super(ConductorAPITestCase, self).setUp() self.conductor_service = self.start_service( 'conductor', manager='nova.conductor.manager.ConductorManager') self.conductor = conductor_api.API() self.conductor_manager = self.conductor_service.manager def test_wait_until_ready(self): timeouts = [] calls = dict(count=0) def fake_ping(context, message, timeout): timeouts.append(timeout) calls['count'] += 1 if calls['count'] < 15: raise messaging.MessagingTimeout("fake") self.stubs.Set(self.conductor.base_rpcapi, 'ping', fake_ping) self.conductor.wait_until_ready(self.context) self.assertEqual(timeouts.count(10), 10) self.assertIn(None, timeouts) class _BaseTaskTestCase(object): def setUp(self): super(_BaseTaskTestCase, self).setUp() self.user_id = fakes.FAKE_USER_ID self.project_id = fakes.FAKE_PROJECT_ID self.context = FakeContext(self.user_id, self.project_id) fake_server_actions.stub_out_action_events(self) def fake_deserialize_context(serializer, ctxt_dict): self.assertEqual(self.context.user_id, ctxt_dict['user_id']) self.assertEqual(self.context.project_id, ctxt_dict['project_id']) return self.context self.stubs.Set(rpc.RequestContextSerializer, 'deserialize_context', fake_deserialize_context) self.useFixture(fixtures.SpawnIsSynchronousFixture()) def _prepare_rebuild_args(self, update_args=None): # Args that don't get passed in to the method but do get passed to RPC migration = update_args and 
update_args.pop('migration', None) node = update_args and update_args.pop('node', None) limits = update_args and update_args.pop('limits', None) rebuild_args = {'new_pass': 'admin_password', 'injected_files': 'files_to_inject', 'image_ref': 'image_ref', 'orig_image_ref': 'orig_image_ref', 'orig_sys_metadata': 'orig_sys_meta', 'bdms': {}, 'recreate': False, 'on_shared_storage': False, 'preserve_ephemeral': False, 'host': 'compute-host', 'request_spec': None} if update_args: rebuild_args.update(update_args) compute_rebuild_args = copy.deepcopy(rebuild_args) compute_rebuild_args['migration'] = migration compute_rebuild_args['node'] = node compute_rebuild_args['limits'] = limits return rebuild_args, compute_rebuild_args @mock.patch.object(objects.InstanceMapping, 'get_by_instance_uuid') @mock.patch.object(objects.RequestSpec, 'save') @mock.patch.object(migrate.MigrationTask, 'execute') @mock.patch.object(utils, 'get_image_from_system_metadata') @mock.patch.object(objects.RequestSpec, 'from_components') def _test_cold_migrate(self, spec_from_components, get_image_from_metadata, migration_task_execute, spec_save, get_im, clean_shutdown=True): get_im.return_value.cell_mapping = ( objects.CellMappingList.get_all(self.context)[0]) get_image_from_metadata.return_value = 'image' inst = fake_instance.fake_db_instance(image_ref='image_ref') inst_obj = objects.Instance._from_db_object( self.context, objects.Instance(), inst, []) inst_obj.system_metadata = {'image_hw_disk_bus': 'scsi'} flavor = flavors.get_default_flavor() flavor.extra_specs = {'extra_specs': 'fake'} inst_obj.flavor = flavor fake_spec = fake_request_spec.fake_spec_obj() spec_from_components.return_value = fake_spec scheduler_hint = {'filter_properties': {}} if isinstance(self.conductor, conductor_api.ComputeTaskAPI): # The API method is actually 'resize_instance'. It gets # converted into 'migrate_server' when doing RPC. self.conductor.resize_instance( self.context, inst_obj, {}, scheduler_hint, flavor, [], clean_shutdown, host_list=None) else: self.conductor.migrate_server( self.context, inst_obj, scheduler_hint, False, False, flavor, None, None, [], clean_shutdown) get_image_from_metadata.assert_called_once_with( inst_obj.system_metadata) migration_task_execute.assert_called_once_with() spec_save.assert_called_once_with() def test_cold_migrate(self): self._test_cold_migrate() def test_cold_migrate_forced_shutdown(self): self._test_cold_migrate(clean_shutdown=False) @mock.patch('nova.objects.BuildRequest.get_by_instance_uuid') @mock.patch('nova.availability_zones.get_host_availability_zone') @mock.patch('nova.objects.Instance.save') @mock.patch.object(objects.RequestSpec, 'from_primitives') def test_build_instances(self, mock_fp, mock_save, mock_getaz, mock_buildreq): """Tests creating two instances and the scheduler returns a unique host/node combo for each instance. 
""" fake_spec = objects.RequestSpec mock_fp.return_value = fake_spec instance_type = flavors.get_default_flavor() # NOTE(danms): Avoid datetime timezone issues with converted flavors instance_type.created_at = None instances = [objects.Instance(context=self.context, id=i, uuid=uuids.fake, flavor=instance_type) for i in range(2)] instance_type_p = obj_base.obj_to_primitive(instance_type) instance_properties = obj_base.obj_to_primitive(instances[0]) instance_properties['system_metadata'] = flavors.save_flavor_info( {}, instance_type) self.mox.StubOutWithMock(self.conductor_manager, '_schedule_instances') self.mox.StubOutWithMock(db, 'block_device_mapping_get_all_by_instance') self.mox.StubOutWithMock(self.conductor_manager.compute_rpcapi, 'build_and_run_instance') spec = {'image': {'fake_data': 'should_pass_silently'}, 'instance_properties': instance_properties, 'instance_type': instance_type_p, 'num_instances': 2} filter_properties = {'retry': {'num_attempts': 1, 'hosts': []}} sched_return = copy.deepcopy(fake_host_lists2) self.conductor_manager._schedule_instances(self.context, fake_spec, [uuids.fake, uuids.fake], return_alternates=True ).AndReturn(sched_return) db.block_device_mapping_get_all_by_instance(self.context, instances[0].uuid).AndReturn([]) filter_properties2 = {'retry': {'num_attempts': 1, 'hosts': [['host1', 'node1']]}, 'limits': {}} self.conductor_manager.compute_rpcapi.build_and_run_instance( self.context, instance=mox.IgnoreArg(), host='host1', image={'fake_data': 'should_pass_silently'}, request_spec=fake_spec, filter_properties=filter_properties2, admin_password='admin_password', injected_files='injected_files', requested_networks=None, security_groups='security_groups', block_device_mapping=mox.IgnoreArg(), node='node1', limits=None, host_list=sched_return[0]) db.block_device_mapping_get_all_by_instance(self.context, instances[1].uuid).AndReturn([]) filter_properties3 = {'limits': {}, 'retry': {'num_attempts': 1, 'hosts': [['host2', 'node2']]}} self.conductor_manager.compute_rpcapi.build_and_run_instance( self.context, instance=mox.IgnoreArg(), host='host2', image={'fake_data': 'should_pass_silently'}, request_spec=fake_spec, filter_properties=filter_properties3, admin_password='admin_password', injected_files='injected_files', requested_networks=None, security_groups='security_groups', block_device_mapping=mox.IgnoreArg(), node='node2', limits=None, host_list=sched_return[1]) self.mox.ReplayAll() # build_instances() is a cast, we need to wait for it to complete self.useFixture(cast_as_call.CastAsCall(self)) mock_getaz.return_value = 'myaz' self.conductor.build_instances(self.context, instances=instances, image={'fake_data': 'should_pass_silently'}, filter_properties={}, admin_password='admin_password', injected_files='injected_files', requested_networks=None, security_groups='security_groups', block_device_mapping='block_device_mapping', legacy_bdm=False, host_lists=None) mock_getaz.assert_has_calls([ mock.call(self.context, 'host1'), mock.call(self.context, 'host2')]) # A RequestSpec is built from primitives once before calling the # scheduler to get hosts and then once per instance we're building. 
mock_fp.assert_has_calls([ mock.call(self.context, spec, filter_properties), mock.call(self.context, spec, filter_properties2), mock.call(self.context, spec, filter_properties3)]) @mock.patch.object(scheduler_utils, 'build_request_spec') @mock.patch.object(scheduler_utils, 'setup_instance_group') @mock.patch.object(scheduler_utils, 'set_vm_state_and_notify') @mock.patch.object(scheduler_client.SchedulerClient, 'select_destinations') @mock.patch.object(conductor_manager.ComputeTaskManager, '_cleanup_allocated_networks') @mock.patch.object(conductor_manager.ComputeTaskManager, '_destroy_build_request') def test_build_instances_scheduler_failure( self, dest_build_req_mock, cleanup_mock, sd_mock, state_mock, sig_mock, bs_mock): instances = [fake_instance.fake_instance_obj(self.context) for i in range(2)] image = {'fake-data': 'should_pass_silently'} spec = {'fake': 'specs', 'instance_properties': instances[0]} exception = exc.NoValidHost(reason='fake-reason') dest_build_req_mock.side_effect = ( exc.BuildRequestNotFound(uuid='fake'), None) bs_mock.return_value = spec sd_mock.side_effect = exception updates = {'vm_state': vm_states.ERROR, 'task_state': None} # build_instances() is a cast, we need to wait for it to complete self.useFixture(cast_as_call.CastAsCall(self)) self.conductor.build_instances( self.context, instances=instances, image=image, filter_properties={}, admin_password='admin_password', injected_files='injected_files', requested_networks=None, security_groups='security_groups', block_device_mapping='block_device_mapping', legacy_bdm=False) set_state_calls = [] cleanup_network_calls = [] dest_build_req_calls = [] for instance in instances: set_state_calls.append(mock.call( self.context, instance.uuid, 'compute_task', 'build_instances', updates, exception, spec)) cleanup_network_calls.append(mock.call( self.context, mock.ANY, None)) dest_build_req_calls.append( mock.call(self.context, test.MatchType(type(instance)))) state_mock.assert_has_calls(set_state_calls) cleanup_mock.assert_has_calls(cleanup_network_calls) dest_build_req_mock.assert_has_calls(dest_build_req_calls) def test_build_instances_retry_exceeded(self): instances = [fake_instance.fake_instance_obj(self.context)] image = {'fake-data': 'should_pass_silently'} filter_properties = {'retry': {'num_attempts': 10, 'hosts': []}} updates = {'vm_state': vm_states.ERROR, 'task_state': None} @mock.patch.object(conductor_manager.ComputeTaskManager, '_cleanup_allocated_networks') @mock.patch.object(scheduler_utils, 'set_vm_state_and_notify') @mock.patch.object(scheduler_utils, 'build_request_spec') @mock.patch.object(scheduler_utils, 'populate_retry') def _test(populate_retry, build_spec, set_vm_state_and_notify, cleanup_mock): # build_instances() is a cast, we need to wait for it to # complete self.useFixture(cast_as_call.CastAsCall(self)) populate_retry.side_effect = exc.MaxRetriesExceeded( reason="Too many try") self.conductor.build_instances( self.context, instances=instances, image=image, filter_properties=filter_properties, admin_password='admin_password', injected_files='injected_files', requested_networks=None, security_groups='security_groups', block_device_mapping='block_device_mapping', legacy_bdm=False) populate_retry.assert_called_once_with( filter_properties, instances[0].uuid) set_vm_state_and_notify.assert_called_once_with( self.context, instances[0].uuid, 'compute_task', 'build_instances', updates, mock.ANY, build_spec.return_value) cleanup_mock.assert_called_once_with(self.context, mock.ANY, None) _test() 
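    # For reference, the retry bookkeeping the scheduler utilities keep in
    # filter_properties has this shape (illustrative, inferred from the
    # dicts asserted throughout these tests):
    #
    #     {'retry': {'num_attempts': 2,               # bumped per reschedule
    #                'hosts': [['host1', 'node1']]}}  # [host, node] tried
    #
    # populate_retry() increments num_attempts and raises MaxRetriesExceeded
    # once the configured maximum is passed, which is what the test above
    # simulates.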
@mock.patch.object(scheduler_utils, 'build_request_spec') @mock.patch.object(scheduler_utils, 'setup_instance_group') @mock.patch.object(conductor_manager.ComputeTaskManager, '_set_vm_state_and_notify') @mock.patch.object(conductor_manager.ComputeTaskManager, '_cleanup_allocated_networks') def test_build_instances_scheduler_group_failure( self, cleanup_mock, state_mock, sig_mock, bs_mock): instances = [fake_instance.fake_instance_obj(self.context) for i in range(2)] image = {'fake-data': 'should_pass_silently'} spec = {'fake': 'specs', 'instance_properties': instances[0]} bs_mock.return_value = spec exception = exc.UnsupportedPolicyException(reason='fake-reason') sig_mock.side_effect = exception updates = {'vm_state': vm_states.ERROR, 'task_state': None} # build_instances() is a cast, we need to wait for it to complete self.useFixture(cast_as_call.CastAsCall(self)) self.conductor.build_instances( context=self.context, instances=instances, image=image, filter_properties={}, admin_password='admin_password', injected_files='injected_files', requested_networks=None, security_groups='security_groups', block_device_mapping='block_device_mapping', legacy_bdm=False) set_state_calls = [] cleanup_network_calls = [] for instance in instances: set_state_calls.append(mock.call( self.context, instance.uuid, 'build_instances', updates, exception, spec)) cleanup_network_calls.append(mock.call( self.context, mock.ANY, None)) state_mock.assert_has_calls(set_state_calls) cleanup_mock.assert_has_calls(cleanup_network_calls) @mock.patch.object(objects.BuildRequest, 'get_by_instance_uuid') @mock.patch.object(objects.Instance, 'save') @mock.patch.object(objects.InstanceMapping, 'get_by_instance_uuid', side_effect=exc.InstanceMappingNotFound(uuid='fake')) @mock.patch.object(objects.HostMapping, 'get_by_host') @mock.patch.object(scheduler_client.SchedulerClient, 'select_destinations') @mock.patch.object(conductor_manager.ComputeTaskManager, '_set_vm_state_and_notify') def test_build_instances_no_instance_mapping(self, _mock_set_state, mock_select_dests, mock_get_by_host, mock_get_inst_map_by_uuid, _mock_save, _mock_buildreq): mock_select_dests.return_value = [[fake_selection1], [fake_selection2]] instances = [fake_instance.fake_instance_obj(self.context) for i in range(2)] image = {'fake-data': 'should_pass_silently'} # build_instances() is a cast, we need to wait for it to complete self.useFixture(cast_as_call.CastAsCall(self)) with mock.patch.object(self.conductor_manager.compute_rpcapi, 'build_and_run_instance'): self.conductor.build_instances( context=self.context, instances=instances, image=image, filter_properties={}, admin_password='admin_password', injected_files='injected_files', requested_networks=None, security_groups='security_groups', block_device_mapping='block_device_mapping', legacy_bdm=False) mock_get_inst_map_by_uuid.assert_has_calls([ mock.call(self.context, instances[0].uuid), mock.call(self.context, instances[1].uuid)]) self.assertFalse(mock_get_by_host.called) @mock.patch("nova.scheduler.utils.claim_resources", return_value=False) @mock.patch.object(objects.Instance, 'save') def test_build_instances_exhaust_host_list(self, _mock_save, mock_claim): # A list of three alternate hosts for one instance host_lists = copy.deepcopy(fake_host_lists_alt) instance = fake_instance.fake_instance_obj(self.context) image = {'fake-data': 'should_pass_silently'} expected_claim_count = len(host_lists[0]) # build_instances() is a cast, we need to wait for it to complete 
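        # (CastAsCall makes the RPC cast synchronous, so the
        # MaxRetriesExceeded raised after the alternates are exhausted
        # surfaces directly in this test thread.)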
self.useFixture(cast_as_call.CastAsCall(self)) # Since claim_resources() is mocked to always return False, we will run # out of alternate hosts, and MaxRetriesExceeded should be raised. self.assertRaises(exc.MaxRetriesExceeded, self.conductor.build_instances, context=self.context, instances=[instance], image=image, filter_properties={}, admin_password='admin_password', injected_files='injected_files', requested_networks=None, security_groups='security_groups', block_device_mapping=None, legacy_bdm=None, host_lists=host_lists) self.assertEqual(expected_claim_count, mock_claim.call_count) @mock.patch.object(conductor_manager.ComputeTaskManager, '_destroy_build_request') @mock.patch.object(conductor_manager.LOG, 'debug') @mock.patch("nova.scheduler.utils.claim_resources", return_value=True) @mock.patch.object(objects.Instance, 'save') def test_build_instances_logs_selected_and_alts(self, _mock_save, mock_claim, mock_debug, mock_destroy): # A list of three alternate hosts for one instance host_lists = copy.deepcopy(fake_host_lists_alt) expected_host = host_lists[0][0] expected_alts = host_lists[0][1:] instance = fake_instance.fake_instance_obj(self.context) image = {'fake-data': 'should_pass_silently'} # build_instances() is a cast, we need to wait for it to complete self.useFixture(cast_as_call.CastAsCall(self)) with mock.patch.object(self.conductor_manager.compute_rpcapi, 'build_and_run_instance'): self.conductor.build_instances(context=self.context, instances=[instance], image=image, filter_properties={}, admin_password='admin_password', injected_files='injected_files', requested_networks=None, security_groups='security_groups', block_device_mapping=None, legacy_bdm=None, host_lists=host_lists) # The last LOG.debug call should record the selected host name and the # list of alternates. 
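        # (Each entry of call_args_list is an (args, kwargs) pair, so
        # [-1][0] below is the positional-argument tuple of the final
        # LOG.debug call.)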
last_call = mock_debug.call_args_list[-1][0] self.assertIn(expected_host.service_host, last_call) expected_alt_hosts = [(alt.service_host, alt.nodename) for alt in expected_alts] self.assertIn(expected_alt_hosts, last_call) @mock.patch.object(objects.BuildRequest, 'get_by_instance_uuid') @mock.patch.object(objects.Instance, 'save') @mock.patch.object(objects.InstanceMapping, 'get_by_instance_uuid') @mock.patch.object(objects.HostMapping, 'get_by_host', side_effect=exc.HostMappingNotFound(name='fake')) @mock.patch.object(scheduler_client.SchedulerClient, 'select_destinations') @mock.patch.object(conductor_manager.ComputeTaskManager, '_set_vm_state_and_notify') def test_build_instances_no_host_mapping(self, _mock_set_state, mock_select_dests, mock_get_by_host, mock_get_inst_map_by_uuid, _mock_save, mock_buildreq): mock_select_dests.return_value = [[fake_selection1], [fake_selection2]] num_instances = 2 instances = [fake_instance.fake_instance_obj(self.context) for i in range(num_instances)] inst_mapping_mocks = [mock.Mock() for i in range(num_instances)] mock_get_inst_map_by_uuid.side_effect = inst_mapping_mocks image = {'fake-data': 'should_pass_silently'} # build_instances() is a cast, we need to wait for it to complete self.useFixture(cast_as_call.CastAsCall(self)) with mock.patch.object(self.conductor_manager.compute_rpcapi, 'build_and_run_instance'): self.conductor.build_instances( context=self.context, instances=instances, image=image, filter_properties={}, admin_password='admin_password', injected_files='injected_files', requested_networks=None, security_groups='security_groups', block_device_mapping='block_device_mapping', legacy_bdm=False) for instance in instances: mock_get_inst_map_by_uuid.assert_any_call(self.context, instance.uuid) for inst_mapping in inst_mapping_mocks: inst_mapping.destroy.assert_called_once_with() mock_get_by_host.assert_has_calls([mock.call(self.context, 'host1'), mock.call(self.context, 'host2')]) @mock.patch.object(objects.BuildRequest, 'get_by_instance_uuid') @mock.patch.object(objects.Instance, 'save') @mock.patch.object(objects.InstanceMapping, 'get_by_instance_uuid') @mock.patch.object(objects.HostMapping, 'get_by_host') @mock.patch.object(scheduler_client.SchedulerClient, 'select_destinations') @mock.patch.object(conductor_manager.ComputeTaskManager, '_set_vm_state_and_notify') def test_build_instances_update_instance_mapping(self, _mock_set_state, mock_select_dests, mock_get_by_host, mock_get_inst_map_by_uuid, _mock_save, _mock_buildreq): mock_select_dests.return_value = [[fake_selection1], [fake_selection2]] mock_get_by_host.side_effect = [ objects.HostMapping(cell_mapping=objects.CellMapping(id=1)), objects.HostMapping(cell_mapping=objects.CellMapping(id=2))] num_instances = 2 instances = [fake_instance.fake_instance_obj(self.context) for i in range(num_instances)] inst_mapping_mocks = [mock.Mock() for i in range(num_instances)] mock_get_inst_map_by_uuid.side_effect = inst_mapping_mocks image = {'fake-data': 'should_pass_silently'} # build_instances() is a cast, we need to wait for it to complete self.useFixture(cast_as_call.CastAsCall(self)) with mock.patch.object(self.conductor_manager.compute_rpcapi, 'build_and_run_instance'): self.conductor.build_instances( context=self.context, instances=instances, image=image, filter_properties={}, admin_password='admin_password', injected_files='injected_files', requested_networks=None, security_groups='security_groups', block_device_mapping='block_device_mapping', legacy_bdm=False) for instance in 
instances: mock_get_inst_map_by_uuid.assert_any_call(self.context, instance.uuid) for inst_mapping in inst_mapping_mocks: inst_mapping.save.assert_called_once_with() self.assertEqual(1, inst_mapping_mocks[0].cell_mapping.id) self.assertEqual(2, inst_mapping_mocks[1].cell_mapping.id) mock_get_by_host.assert_has_calls([mock.call(self.context, 'host1'), mock.call(self.context, 'host2')]) @mock.patch.object(objects.Instance, 'save', new=mock.MagicMock()) @mock.patch.object(objects.BuildRequest, 'get_by_instance_uuid') @mock.patch.object(scheduler_client.SchedulerClient, 'select_destinations') @mock.patch.object(conductor_manager.ComputeTaskManager, '_set_vm_state_and_notify', new=mock.MagicMock()) def test_build_instances_destroy_build_request(self, mock_select_dests, mock_build_req_get): mock_select_dests.return_value = [[fake_selection1], [fake_selection2]] num_instances = 2 instances = [fake_instance.fake_instance_obj(self.context) for i in range(num_instances)] build_req_mocks = [mock.Mock() for i in range(num_instances)] mock_build_req_get.side_effect = build_req_mocks image = {'fake-data': 'should_pass_silently'} # build_instances() is a cast, we need to wait for it to complete self.useFixture(cast_as_call.CastAsCall(self)) @mock.patch.object(self.conductor_manager.compute_rpcapi, 'build_and_run_instance', new=mock.MagicMock()) @mock.patch.object(self.conductor_manager, '_populate_instance_mapping', new=mock.MagicMock()) def do_test(): self.conductor.build_instances( context=self.context, instances=instances, image=image, filter_properties={}, admin_password='admin_password', injected_files='injected_files', requested_networks=None, security_groups='security_groups', block_device_mapping='block_device_mapping', legacy_bdm=False, host_lists=None) do_test() for build_req in build_req_mocks: build_req.destroy.assert_called_once_with() @mock.patch.object(objects.Instance, 'save', new=mock.MagicMock()) @mock.patch.object(scheduler_client.SchedulerClient, 'select_destinations') @mock.patch.object(conductor_manager.ComputeTaskManager, '_set_vm_state_and_notify', new=mock.MagicMock()) def test_build_instances_reschedule_ignores_build_request(self, mock_select_dests): # This test calls build_instances as if it was a reschedule. This means # that the exc.BuildRequestNotFound() exception raised by # conductor_manager._destroy_build_request() should not cause the # build to stop. 
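        # (A BuildRequest only lives until the instance is first created in
        # a cell, so on a reschedule it has normally been destroyed already
        # and its absence must be tolerated.)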
        mock_select_dests.return_value = [[fake_selection1]]

        instance = fake_instance.fake_instance_obj(self.context)
        image = {'fake-data': 'should_pass_silently'}

        # build_instances() is a cast, we need to wait for it to complete
        self.useFixture(cast_as_call.CastAsCall(self))

        @mock.patch.object(self.conductor_manager.compute_rpcapi,
                           'build_and_run_instance')
        @mock.patch.object(self.conductor_manager,
                           '_populate_instance_mapping')
        @mock.patch.object(self.conductor_manager, '_destroy_build_request',
                           side_effect=exc.BuildRequestNotFound(uuid='fake'))
        def do_test(mock_destroy_build_req, mock_pop_inst_map,
                    mock_build_and_run):
            self.conductor.build_instances(
                context=self.context,
                instances=[instance],
                image=image,
                filter_properties={'retry': {'num_attempts': 1,
                                             'hosts': []}},
                admin_password='admin_password',
                injected_files='injected_files',
                requested_networks=None,
                security_groups='security_groups',
                block_device_mapping='block_device_mapping',
                legacy_bdm=False,
                host_lists=None)
            expected_build_run_host_list = copy.copy(fake_host_lists1[0])
            if expected_build_run_host_list:
                expected_build_run_host_list.pop(0)
            mock_build_and_run.assert_called_once_with(
                self.context,
                instance=mock.ANY,
                host='host1',
                image=image,
                request_spec=mock.ANY,
                filter_properties={'retry': {'num_attempts': 2,
                                             'hosts': [['host1', 'node1']]},
                                   'limits': {}},
                admin_password='admin_password',
                injected_files='injected_files',
                requested_networks=None,
                security_groups='security_groups',
                block_device_mapping=test.MatchType(
                    objects.BlockDeviceMappingList),
                node='node1',
                limits=None,
                host_list=expected_build_run_host_list)
            mock_pop_inst_map.assert_not_called()
            mock_destroy_build_req.assert_not_called()

        do_test()

    def test_unshelve_instance_on_host(self):
        instance = self._create_fake_instance_obj()
        instance.vm_state = vm_states.SHELVED
        instance.task_state = task_states.UNSHELVING
        instance.save()
        system_metadata = instance.system_metadata

        self.mox.StubOutWithMock(self.conductor_manager.compute_rpcapi,
                                 'start_instance')
        self.mox.StubOutWithMock(self.conductor_manager.compute_rpcapi,
                                 'unshelve_instance')

        self.conductor_manager.compute_rpcapi.start_instance(self.context,
                                                             instance)
        self.mox.ReplayAll()

        system_metadata['shelved_at'] = timeutils.utcnow()
        system_metadata['shelved_image_id'] = 'fake_image_id'
        system_metadata['shelved_host'] = 'fake-mini'
        self.conductor_manager.unshelve_instance(self.context, instance)

    def test_unshelve_offload_instance_on_host_with_request_spec(self):
        instance = self._create_fake_instance_obj()
        instance.vm_state = vm_states.SHELVED_OFFLOADED
        instance.task_state = task_states.UNSHELVING
        instance.save()
        system_metadata = instance.system_metadata

        system_metadata['shelved_at'] = timeutils.utcnow()
        system_metadata['shelved_image_id'] = 'fake_image_id'
        system_metadata['shelved_host'] = 'fake-mini'

        fake_spec = fake_request_spec.fake_spec_obj()
        # FIXME(sbauza): Modify the fake RequestSpec object to either add a
        # non-empty SchedulerRetries object or nullify the field
        fake_spec.retry = None
        # FIXME(sbauza): Modify the fake RequestSpec object to either add a
        # non-empty SchedulerLimits object or nullify the field
        fake_spec.limits = None
        # FIXME(sbauza): Modify the fake RequestSpec object to either add a
        # non-empty InstanceGroup object or nullify the field
        fake_spec.instance_group = None

        filter_properties = fake_spec.to_legacy_filter_properties_dict()
        request_spec = fake_spec.to_legacy_request_spec_dict()

        host = {'host': 'host1', 'nodename': 'node1', 'limits': {}}

        # unshelve_instance() is a cast, we need to wait for it to complete
        self.useFixture(cast_as_call.CastAsCall(self))

        @mock.patch.object(objects.InstanceMapping, 'get_by_instance_uuid')
        @mock.patch.object(self.conductor_manager.compute_rpcapi,
                           'unshelve_instance')
        @mock.patch.object(scheduler_utils, 'populate_filter_properties')
        @mock.patch.object(scheduler_utils, 'populate_retry')
        @mock.patch.object(self.conductor_manager, '_schedule_instances')
        @mock.patch.object(objects.RequestSpec, 'from_primitives')
        @mock.patch.object(objects.RequestSpec, 'to_legacy_request_spec_dict')
        @mock.patch.object(objects.RequestSpec,
                           'to_legacy_filter_properties_dict')
        @mock.patch.object(objects.RequestSpec, 'reset_forced_destinations')
        def do_test(reset_forced_destinations,
                    to_filtprops, to_reqspec, from_primitives,
                    sched_instances, populate_retry,
                    populate_filter_properties, unshelve_instance,
                    get_by_instance_uuid):
            cell_mapping = objects.CellMapping.get_by_uuid(self.context,
                                                           uuids.cell1)
            get_by_instance_uuid.return_value = objects.InstanceMapping(
                cell_mapping=cell_mapping)
            to_filtprops.return_value = filter_properties
            to_reqspec.return_value = request_spec
            from_primitives.return_value = fake_spec
            sched_instances.return_value = [[fake_selection1]]
            self.conductor.unshelve_instance(self.context, instance,
                                             fake_spec)
            # The fake_spec already has a project_id set which doesn't match
            # the instance.project_id so the spec's project_id won't be
            # overridden using the instance.project_id.
            self.assertNotEqual(fake_spec.project_id, instance.project_id)
            reset_forced_destinations.assert_called_once_with()
            from_primitives.assert_called_once_with(self.context,
                                                    request_spec,
                                                    filter_properties)
            sched_instances.assert_called_once_with(self.context, fake_spec,
                                                    [instance.uuid],
                                                    return_alternates=False)
            self.assertEqual(cell_mapping,
                             fake_spec.requested_destination.cell)
            # NOTE(sbauza): Since the instance is dehydrated when passing
            # through the RPC API, we can only assert mock.ANY for it
            unshelve_instance.assert_called_once_with(
                self.context, mock.ANY, host['host'], image=mock.ANY,
                filter_properties=filter_properties, node=host['nodename'])

        do_test()

    def test_unshelve_offloaded_instance_glance_image_not_found(self):
        shelved_image_id = "image_not_found"

        instance = self._create_fake_instance_obj()
        instance.vm_state = vm_states.SHELVED_OFFLOADED
        instance.task_state = task_states.UNSHELVING
        instance.save()
        system_metadata = instance.system_metadata

        self.mox.StubOutWithMock(self.conductor_manager.image_api, 'get')

        e = exc.ImageNotFound(image_id=shelved_image_id)
        self.conductor_manager.image_api.get(
            self.context, shelved_image_id, show_deleted=False).AndRaise(e)
        self.mox.ReplayAll()

        system_metadata['shelved_at'] = timeutils.utcnow()
        system_metadata['shelved_host'] = 'fake-mini'
        system_metadata['shelved_image_id'] = shelved_image_id

        self.assertRaises(
            exc.UnshelveException,
            self.conductor_manager.unshelve_instance,
            self.context, instance)
        self.assertEqual(instance.vm_state, vm_states.ERROR)

    def test_unshelve_offloaded_instance_image_id_is_none(self):
        instance = self._create_fake_instance_obj()
        instance.vm_state = vm_states.SHELVED_OFFLOADED
        instance.task_state = task_states.UNSHELVING
        # 'shelved_image_id' is None for volume-backed instances
        instance.system_metadata['shelved_image_id'] = None

        with test.nested(
            mock.patch.object(self.conductor_manager,
                              '_schedule_instances'),
            mock.patch.object(self.conductor_manager.compute_rpcapi,
                              'unshelve_instance'),
            mock.patch.object(objects.InstanceMapping,
                              'get_by_instance_uuid'),
        ) as (schedule_mock, unshelve_mock, get_by_instance_uuid):
            schedule_mock.return_value = [[fake_selection1]]
            get_by_instance_uuid.return_value = objects.InstanceMapping(
                cell_mapping=objects.CellMapping.get_by_uuid(
                    self.context, uuids.cell1))

            self.conductor_manager.unshelve_instance(self.context, instance)
            self.assertEqual(1, unshelve_mock.call_count)

    @mock.patch.object(objects.InstanceMapping, 'get_by_instance_uuid')
    @mock.patch.object(objects.RequestSpec, 'from_primitives')
    def test_unshelve_instance_schedule_and_rebuild(self, fp, mock_im):
        fake_spec = objects.RequestSpec()
        # Set requested_destination to test setting cell_mapping in
        # existing object.
        fake_spec.requested_destination = objects.Destination(
            host="dummy", cell=None)
        fp.return_value = fake_spec
        cell_mapping = objects.CellMapping.get_by_uuid(self.context,
                                                       uuids.cell1)
        mock_im.return_value = objects.InstanceMapping(
            cell_mapping=cell_mapping)
        instance = self._create_fake_instance_obj()
        instance.vm_state = vm_states.SHELVED_OFFLOADED
        instance.save()
        system_metadata = instance.system_metadata

        self.mox.StubOutWithMock(self.conductor_manager.image_api, 'get')
        self.mox.StubOutWithMock(scheduler_utils, 'build_request_spec')
        self.mox.StubOutWithMock(self.conductor_manager,
                                 '_schedule_instances')
        self.mox.StubOutWithMock(self.conductor_manager.compute_rpcapi,
                                 'unshelve_instance')

        self.conductor_manager.image_api.get(self.context,
                'fake_image_id', show_deleted=False).AndReturn('fake_image')
        scheduler_utils.build_request_spec('fake_image',
                mox.IgnoreArg()).AndReturn('req_spec')
        fake_selection = objects.Selection(service_host="fake_host",
                nodename="fake_node", limits=None)
        self.conductor_manager._schedule_instances(self.context,
                fake_spec, [instance.uuid],
                return_alternates=False).AndReturn([[fake_selection]])
        self.conductor_manager.compute_rpcapi.unshelve_instance(self.context,
                instance, 'fake_host', image='fake_image',
                filter_properties={'limits': {},
                                   'retry': {'num_attempts': 1,
                                             'hosts': [['fake_host',
                                                        'fake_node']]}},
                node='fake_node')
        self.mox.ReplayAll()

        system_metadata['shelved_at'] = timeutils.utcnow()
        system_metadata['shelved_image_id'] = 'fake_image_id'
        system_metadata['shelved_host'] = 'fake-mini'
        self.conductor_manager.unshelve_instance(self.context, instance)
        fp.assert_called_once_with(self.context, 'req_spec', mock.ANY)
        self.assertEqual(cell_mapping, fake_spec.requested_destination.cell)

    def test_unshelve_instance_schedule_and_rebuild_novalid_host(self):
        instance = self._create_fake_instance_obj()
        instance.vm_state = vm_states.SHELVED_OFFLOADED
        instance.save()
        system_metadata = instance.system_metadata

        def fake_schedule_instances(context, request_spec, *instances,
                                    **kwargs):
            raise exc.NoValidHost(reason='')

        with test.nested(
            mock.patch.object(self.conductor_manager.image_api, 'get',
                              return_value='fake_image'),
            mock.patch.object(self.conductor_manager,
                              '_schedule_instances',
                              fake_schedule_instances),
            mock.patch.object(objects.InstanceMapping,
                              'get_by_instance_uuid'),
            mock.patch.object(objects.Instance, 'save')
        ) as (_get_image, _schedule_instances, get_by_instance_uuid, save):
            get_by_instance_uuid.return_value = objects.InstanceMapping(
                cell_mapping=objects.CellMapping.get_by_uuid(
                    self.context, uuids.cell1))
            system_metadata['shelved_at'] = timeutils.utcnow()
            system_metadata['shelved_image_id'] = 'fake_image_id'
            system_metadata['shelved_host'] = 'fake-mini'
            self.conductor_manager.unshelve_instance(self.context, instance)
            _get_image.assert_has_calls([mock.call(self.context,
                    system_metadata['shelved_image_id'],
                    show_deleted=False)])
            self.assertEqual(vm_states.SHELVED_OFFLOADED, instance.vm_state)
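
    # The next test verifies that a MessagingTimeout raised while asking the
    # scheduler for a destination leaves the instance SHELVED_OFFLOADED with
    # its task_state cleared.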
    @mock.patch.object(objects.InstanceMapping, 'get_by_instance_uuid')
    @mock.patch.object(conductor_manager.ComputeTaskManager,
                       '_schedule_instances',
                       side_effect=messaging.MessagingTimeout())
    @mock.patch.object(image_api.API, 'get', return_value='fake_image')
    @mock.patch.object(objects.Instance, 'save')
    def test_unshelve_instance_schedule_and_rebuild_messaging_exception(
            self, mock_save, mock_get_image, mock_schedule_instances,
            mock_im):
        mock_im.return_value = objects.InstanceMapping(
            cell_mapping=objects.CellMapping.get_by_uuid(self.context,
                                                         uuids.cell1))
        instance = self._create_fake_instance_obj()
        instance.vm_state = vm_states.SHELVED_OFFLOADED
        instance.task_state = task_states.UNSHELVING
        instance.save()
        system_metadata = instance.system_metadata

        system_metadata['shelved_at'] = timeutils.utcnow()
        system_metadata['shelved_image_id'] = 'fake_image_id'
        system_metadata['shelved_host'] = 'fake-mini'

        self.assertRaises(messaging.MessagingTimeout,
                          self.conductor_manager.unshelve_instance,
                          self.context, instance)
        mock_get_image.assert_has_calls([mock.call(self.context,
                system_metadata['shelved_image_id'], show_deleted=False)])
        self.assertEqual(vm_states.SHELVED_OFFLOADED, instance.vm_state)
        self.assertIsNone(instance.task_state)

    @mock.patch.object(objects.InstanceMapping, 'get_by_instance_uuid')
    @mock.patch.object(objects.RequestSpec, 'from_primitives')
    def test_unshelve_instance_schedule_and_rebuild_volume_backed(
            self, fp, mock_im):
        fake_spec = objects.RequestSpec()
        fp.return_value = fake_spec
        mock_im.return_value = objects.InstanceMapping(
            cell_mapping=objects.CellMapping.get_by_uuid(self.context,
                                                         uuids.cell1))
        instance = self._create_fake_instance_obj()
        instance.vm_state = vm_states.SHELVED_OFFLOADED
        instance.save()
        system_metadata = instance.system_metadata

        self.mox.StubOutWithMock(scheduler_utils, 'build_request_spec')
        self.mox.StubOutWithMock(self.conductor_manager,
                                 '_schedule_instances')
        self.mox.StubOutWithMock(self.conductor_manager.compute_rpcapi,
                                 'unshelve_instance')

        scheduler_utils.build_request_spec(None,
                mox.IgnoreArg()).AndReturn('req_spec')
        fake_selection = objects.Selection(service_host="fake_host",
                nodename="fake_node", limits=None)
        self.conductor_manager._schedule_instances(self.context,
                fake_spec, [instance.uuid],
                return_alternates=False).AndReturn([[fake_selection]])
        self.conductor_manager.compute_rpcapi.unshelve_instance(self.context,
                instance, 'fake_host', image=None,
                filter_properties={'limits': {},
                                   'retry': {'num_attempts': 1,
                                             'hosts': [['fake_host',
                                                        'fake_node']]}},
                node='fake_node')
        self.mox.ReplayAll()

        system_metadata['shelved_at'] = timeutils.utcnow()
        system_metadata['shelved_host'] = 'fake-mini'
        self.conductor_manager.unshelve_instance(self.context, instance)
        fp.assert_called_once_with(self.context, 'req_spec', mock.ANY)

    def test_rebuild_instance(self):
        inst_obj = self._create_fake_instance_obj()
        rebuild_args, compute_args = self._prepare_rebuild_args(
            {'host': inst_obj.host})

        with test.nested(
            mock.patch.object(self.conductor_manager.compute_rpcapi,
                              'rebuild_instance'),
            mock.patch.object(self.conductor_manager.scheduler_client,
                              'select_destinations')
        ) as (rebuild_mock, select_dest_mock):
            self.conductor_manager.rebuild_instance(context=self.context,
                                                    instance=inst_obj,
                                                    **rebuild_args)
            self.assertFalse(select_dest_mock.called)
            rebuild_mock.assert_called_once_with(self.context,
                                                 instance=inst_obj,
                                                 **compute_args)

    def test_rebuild_instance_with_scheduler(self):
        inst_obj = self._create_fake_instance_obj()
        inst_obj.host = 'noselect'
        expected_host = 'thebesthost'
        expected_node = 'thebestnode'
        expected_limits = None
        fake_selection = objects.Selection(service_host=expected_host,
                nodename=expected_node, limits=None)
        rebuild_args, compute_args = self._prepare_rebuild_args(
            {'host': None, 'node': expected_node,
             'limits': expected_limits})
        request_spec = {}
        filter_properties = {'ignore_hosts': [(inst_obj.host)]}
        fake_spec = objects.RequestSpec()
        inst_uuids = [inst_obj.uuid]
        with test.nested(
            mock.patch.object(self.conductor_manager.compute_rpcapi,
                              'rebuild_instance'),
            mock.patch.object(scheduler_utils, 'setup_instance_group',
                              return_value=False),
            mock.patch.object(objects.RequestSpec, 'from_primitives',
                              return_value=fake_spec),
            mock.patch.object(self.conductor_manager.scheduler_client,
                              'select_destinations',
                              return_value=[[fake_selection]]),
            mock.patch('nova.scheduler.utils.build_request_spec',
                       return_value=request_spec)
        ) as (rebuild_mock, sig_mock, fp_mock,
              select_dest_mock, bs_mock):
            self.conductor_manager.rebuild_instance(context=self.context,
                                                    instance=inst_obj,
                                                    **rebuild_args)
            bs_mock.assert_called_once_with(
                obj_base.obj_to_primitive(inst_obj.image_meta), [inst_obj])
            fp_mock.assert_called_once_with(self.context, request_spec,
                                            filter_properties)
            select_dest_mock.assert_called_once_with(self.context, fake_spec,
                    inst_uuids, return_objects=True, return_alternates=False)
            compute_args['host'] = expected_host
            compute_args['request_spec'] = fake_spec
            rebuild_mock.assert_called_once_with(self.context,
                                                 instance=inst_obj,
                                                 **compute_args)
        self.assertEqual(inst_obj.project_id, fake_spec.project_id)
        self.assertEqual('compute.instance.rebuild.scheduled',
                         fake_notifier.NOTIFICATIONS[0].event_type)

    def test_rebuild_instance_with_scheduler_no_host(self):
        inst_obj = self._create_fake_instance_obj()
        inst_obj.host = 'noselect'
        rebuild_args, _ = self._prepare_rebuild_args({'host': None})
        request_spec = {}
        filter_properties = {'ignore_hosts': [(inst_obj.host)]}
        fake_spec = objects.RequestSpec()

        with test.nested(
            mock.patch.object(self.conductor_manager.compute_rpcapi,
                              'rebuild_instance'),
            mock.patch.object(scheduler_utils, 'setup_instance_group',
                              return_value=False),
            mock.patch.object(objects.RequestSpec, 'from_primitives',
                              return_value=fake_spec),
            mock.patch.object(self.conductor_manager.scheduler_client,
                              'select_destinations',
                              side_effect=exc.NoValidHost(reason='')),
            mock.patch('nova.scheduler.utils.build_request_spec',
                       return_value=request_spec),
            mock.patch.object(scheduler_utils, 'set_vm_state_and_notify')
        ) as (rebuild_mock, sig_mock, fp_mock,
              select_dest_mock, bs_mock, set_vm_state_and_notify_mock):
            self.assertRaises(exc.NoValidHost,
                              self.conductor_manager.rebuild_instance,
                              context=self.context, instance=inst_obj,
                              **rebuild_args)
            fp_mock.assert_called_once_with(self.context, request_spec,
                                            filter_properties)
            select_dest_mock.assert_called_once_with(self.context, fake_spec,
                    [inst_obj.uuid], return_objects=True,
                    return_alternates=False)
            self.assertEqual(
                set_vm_state_and_notify_mock.call_args[0][4]['vm_state'],
                vm_states.ERROR)
            self.assertFalse(rebuild_mock.called)
            self.assertIn('No valid host', inst_obj.fault.message)

    @mock.patch.object(conductor_manager.compute_rpcapi.ComputeAPI,
                       'rebuild_instance')
    @mock.patch.object(scheduler_utils, 'setup_instance_group')
    @mock.patch.object(conductor_manager.scheduler_client.SchedulerClient,
                       'select_destinations')
    @mock.patch('nova.scheduler.utils.build_request_spec')
    @mock.patch.object(conductor_manager.ComputeTaskManager,
                       '_set_vm_state_and_notify')
    def test_rebuild_instance_with_scheduler_group_failure(self, state_mock,
            bs_mock, select_dest_mock, sig_mock, rebuild_mock):
        inst_obj = self._create_fake_instance_obj()
        rebuild_args, _ = self._prepare_rebuild_args({'host': None})
        request_spec = {}
        bs_mock.return_value = request_spec

        exception = exc.UnsupportedPolicyException(reason='')
        sig_mock.side_effect = exception

        # build_instances() is a cast, we need to wait for it to complete
        self.useFixture(cast_as_call.CastAsCall(self))

        # Create the migration record (normally created by the compute API).
        migration = objects.Migration(self.context,
                                      source_compute=inst_obj.host,
                                      source_node=inst_obj.node,
                                      instance_uuid=inst_obj.uuid,
                                      status='accepted',
                                      migration_type='evacuation')
        migration.create()

        self.assertRaises(exc.UnsupportedPolicyException,
                          self.conductor.rebuild_instance,
                          self.context, inst_obj, **rebuild_args)
        updates = {'vm_state': vm_states.ERROR, 'task_state': None}
        state_mock.assert_called_once_with(self.context, inst_obj.uuid,
                                           'rebuild_server', updates,
                                           exception, mock.ANY)
        self.assertFalse(select_dest_mock.called)
        self.assertFalse(rebuild_mock.called)
        self.assertIn('ServerGroup policy is not supported',
                      inst_obj.fault.message)
        # Assert the migration status was updated.
        migration = objects.Migration.get_by_id(self.context, migration.id)
        self.assertEqual('error', migration.status)

    def test_rebuild_instance_evacuate_migration_record(self):
        inst_obj = self._create_fake_instance_obj()
        migration = objects.Migration(context=self.context,
                                      source_compute=inst_obj.host,
                                      source_node=inst_obj.node,
                                      instance_uuid=inst_obj.uuid,
                                      status='accepted',
                                      migration_type='evacuation')
        rebuild_args, compute_args = self._prepare_rebuild_args(
            {'host': inst_obj.host, 'migration': migration})

        with test.nested(
            mock.patch.object(self.conductor_manager.compute_rpcapi,
                              'rebuild_instance'),
            mock.patch.object(self.conductor_manager.scheduler_client,
                              'select_destinations'),
            mock.patch.object(objects.Migration,
                              'get_by_instance_and_status',
                              return_value=migration)
        ) as (rebuild_mock, select_dest_mock, get_migration_mock):
            self.conductor_manager.rebuild_instance(context=self.context,
                                                    instance=inst_obj,
                                                    **rebuild_args)
            self.assertFalse(select_dest_mock.called)
            rebuild_mock.assert_called_once_with(self.context,
                                                 instance=inst_obj,
                                                 **compute_args)

    def test_rebuild_instance_with_request_spec(self):
        inst_obj = self._create_fake_instance_obj()
        inst_obj.host = 'noselect'
        expected_host = 'thebesthost'
        expected_node = 'thebestnode'
        expected_limits = None
        fake_selection = objects.Selection(service_host=expected_host,
                nodename=expected_node, limits=None)
        fake_spec = objects.RequestSpec(ignore_hosts=[])
        rebuild_args, compute_args = self._prepare_rebuild_args(
            {'host': None, 'node': expected_node,
             'limits': expected_limits, 'request_spec': fake_spec})
        with test.nested(
            mock.patch.object(self.conductor_manager.compute_rpcapi,
                              'rebuild_instance'),
            mock.patch.object(scheduler_utils, 'setup_instance_group',
                              return_value=False),
            mock.patch.object(self.conductor_manager.scheduler_client,
                              'select_destinations',
                              return_value=[[fake_selection]]),
            mock.patch.object(fake_spec, 'reset_forced_destinations'),
        ) as (rebuild_mock, sig_mock, select_dest_mock, reset_fd):
            self.conductor_manager.rebuild_instance(context=self.context,
                                                    instance=inst_obj,
                                                    **rebuild_args)
            if rebuild_args['recreate']:
                reset_fd.assert_called_once_with()
            else:
                reset_fd.assert_not_called()
            select_dest_mock.assert_called_once_with(self.context, fake_spec,
                    [inst_obj.uuid], return_objects=True,
                    return_alternates=False)
            compute_args['host'] = expected_host
            compute_args['request_spec'] = fake_spec
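            # The cast to compute should carry the host chosen by the
            # scheduler and the hydrated request spec, which is why
            # compute_args was updated above before comparing.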
            rebuild_mock.assert_called_once_with(self.context,
                                                 instance=inst_obj,
                                                 **compute_args)
        self.assertEqual('compute.instance.rebuild.scheduled',
                         fake_notifier.NOTIFICATIONS[0].event_type)


class ConductorTaskTestCase(_BaseTaskTestCase, test_compute.BaseTestCase):
    """ComputeTaskManager Tests."""

    NUMBER_OF_CELLS = 2

    def setUp(self):
        super(ConductorTaskTestCase, self).setUp()
        self.conductor = conductor_manager.ComputeTaskManager()
        self.conductor_manager = self.conductor

        params = {}
        self.ctxt = params['context'] = context.RequestContext(
            'fake-user', 'fake-project').elevated()
        build_request = fake_build_request.fake_req_obj(self.ctxt)
        del build_request.instance.id
        build_request.create()
        params['build_requests'] = objects.BuildRequestList(
            objects=[build_request])
        im = objects.InstanceMapping(
            self.ctxt, instance_uuid=build_request.instance.uuid,
            cell_mapping=None, project_id=self.ctxt.project_id)
        im.create()
        rs = fake_request_spec.fake_spec_obj(remove_id=True)
        rs._context = self.ctxt
        rs.instance_uuid = build_request.instance_uuid
        rs.instance_group = None
        rs.retry = None
        rs.limits = None
        rs.create()
        params['request_specs'] = [rs]
        params['image'] = {'fake_data': 'should_pass_silently'}
        params['admin_password'] = 'admin_password'
        params['injected_files'] = 'injected_files'
        params['requested_networks'] = None
        bdm = objects.BlockDeviceMapping(self.ctxt, **dict(
            source_type='blank', destination_type='local',
            guest_format='foo', device_type='disk', disk_bus='',
            boot_index=1, device_name='xvda', delete_on_termination=False,
            snapshot_id=None, volume_id=None, volume_size=1,
            image_id='bar', no_device=False, connection_info=None,
            tag=''))
        params['block_device_mapping'] = objects.BlockDeviceMappingList(
            objects=[bdm])
        tag = objects.Tag(self.ctxt, tag='tag1')
        params['tags'] = objects.TagList(objects=[tag])
        self.params = params

    @mock.patch('nova.availability_zones.get_host_availability_zone')
    @mock.patch('nova.compute.rpcapi.ComputeAPI.build_and_run_instance')
    @mock.patch('nova.scheduler.rpcapi.SchedulerAPI.select_destinations')
    def _do_schedule_and_build_instances_test(self, params,
                                              select_destinations,
                                              build_and_run_instance,
                                              get_az):
        select_destinations.return_value = [[fake_selection1]]
        get_az.return_value = 'myaz'
        details = {}

        def _build_and_run_instance(ctxt, *args, **kwargs):
            details['instance'] = kwargs['instance']
            self.assertTrue(kwargs['instance'].id)
            self.assertTrue(kwargs['filter_properties'].get('retry'))
            self.assertEqual(1, len(kwargs['block_device_mapping']))
            # FIXME(danms): How to validate the db connection here?

        self.start_service('compute', host='host1')
        build_and_run_instance.side_effect = _build_and_run_instance
        self.conductor.schedule_and_build_instances(**params)
        self.assertTrue(build_and_run_instance.called)
        get_az.assert_called_once_with(mock.ANY, 'host1')

        instance_uuid = details['instance'].uuid
        bdms = objects.BlockDeviceMappingList.get_by_instance_uuid(
            self.ctxt, instance_uuid)
        ephemeral = list(filter(block_device.new_format_is_ephemeral, bdms))
        self.assertEqual(1, len(ephemeral))
        swap = list(filter(block_device.new_format_is_swap, bdms))
        self.assertEqual(0, len(swap))
        self.assertEqual(1, ephemeral[0].volume_size)
        return instance_uuid

    @mock.patch('nova.notifications.send_update_with_states')
    def test_schedule_and_build_instances(self, mock_notify):
        # NOTE(melwitt): This won't work with call_args because the call
        # arguments are recorded as references and not as copies of objects.
        # So even though the notify method was called with Instance._context
        # targeted, by the time we assert with call_args, the target_cell
        # context manager has already exited and the referenced Instance
        # object's _context.db_connection has been restored to None.
        def fake_notify(ctxt, instance, *args, **kwargs):
            # Assert the instance object is targeted when going through the
            # notification code.
            self.assertIsNotNone(ctxt.db_connection)
            self.assertIsNotNone(instance._context.db_connection)

        mock_notify.side_effect = fake_notify

        instance_uuid = self._do_schedule_and_build_instances_test(
            self.params)
        cells = objects.CellMappingList.get_all(self.ctxt)

        # NOTE(danms): Assert that we created the InstanceAction in the
        # correct cell
        # NOTE(Kevin Zheng): Also assert tags in the correct cell
        for cell in cells:
            with context.target_cell(self.ctxt, cell) as cctxt:
                actions = objects.InstanceActionList.get_by_instance_uuid(
                    cctxt, instance_uuid)
                if cell.name == 'cell1':
                    self.assertEqual(1, len(actions))
                    tags = objects.TagList.get_by_resource_id(
                        cctxt, instance_uuid)
                    self.assertEqual(1, len(tags))
                else:
                    self.assertEqual(0, len(actions))

    def test_schedule_and_build_instances_no_tags_provided(self):
        params = copy.deepcopy(self.params)
        del params['tags']
        instance_uuid = self._do_schedule_and_build_instances_test(params)
        cells = objects.CellMappingList.get_all(self.ctxt)

        # NOTE(danms): Assert that we created the InstanceAction in the
        # correct cell
        # NOTE(Kevin Zheng): Also assert tags in the correct cell
        for cell in cells:
            with context.target_cell(self.ctxt, cell) as cctxt:
                actions = objects.InstanceActionList.get_by_instance_uuid(
                    cctxt, instance_uuid)
                if cell.name == 'cell1':
                    self.assertEqual(1, len(actions))
                    tags = objects.TagList.get_by_resource_id(
                        cctxt, instance_uuid)
                    self.assertEqual(0, len(tags))
                else:
                    self.assertEqual(0, len(actions))

    @mock.patch('nova.compute.rpcapi.ComputeAPI.build_and_run_instance')
    @mock.patch('nova.scheduler.rpcapi.SchedulerAPI.select_destinations')
    @mock.patch('nova.objects.HostMapping.get_by_host')
    def test_schedule_and_build_multiple_instances(self,
                                                   get_hostmapping,
                                                   select_destinations,
                                                   build_and_run_instance):
        # This list needs to match the number of build_requests and the
        # number of request_specs in params.
        select_destinations.return_value = [[fake_selection1],
                [fake_selection2], [fake_selection1], [fake_selection2]]

        params = self.params

        self.start_service('compute', host='host1')
        self.start_service('compute', host='host2')

        # Because of the cache, this should only be called twice,
        # once for the first and once for the third request.
        get_hostmapping.side_effect = self.host_mappings.values()

        # create three additional build requests for a total of four
        for x in range(3):
            build_request = fake_build_request.fake_req_obj(self.ctxt)
            del build_request.instance.id
            build_request.create()
            params['build_requests'].objects.append(build_request)
            im2 = objects.InstanceMapping(
                self.ctxt, instance_uuid=build_request.instance.uuid,
                cell_mapping=None, project_id=self.ctxt.project_id)
            im2.create()
            params['request_specs'].append(objects.RequestSpec(
                instance_uuid=build_request.instance_uuid,
                instance_group=None))

        # Now let's have some fun and delete the third build request before
        # passing the object on to schedule_and_build_instances so that the
        # instance will be created for that build request but when it calls
        # BuildRequest.destroy(), it will raise BuildRequestNotFound and
        # we'll clean up the instance instead of passing it to
        # build_and_run_instance, and we make sure that the fourth build
        # request still gets processed.
        deleted_build_request = params['build_requests'][2]
        deleted_build_request.destroy()

        def _build_and_run_instance(ctxt, *args, **kwargs):
            # Make sure the instance wasn't the one that was deleted.
            instance = kwargs['instance']
            self.assertNotEqual(deleted_build_request.instance_uuid,
                                instance.uuid)
            # This just makes sure that the instance was created in the DB.
            self.assertTrue(kwargs['instance'].id)
            self.assertEqual(1, len(kwargs['block_device_mapping']))
            # FIXME(danms): How to validate the db connection here?

        build_and_run_instance.side_effect = _build_and_run_instance
        self.conductor.schedule_and_build_instances(**params)
        self.assertEqual(3, build_and_run_instance.call_count)

    @mock.patch('nova.compute.rpcapi.ComputeAPI.build_and_run_instance')
    @mock.patch('nova.scheduler.rpcapi.SchedulerAPI.select_destinations')
    @mock.patch('nova.objects.HostMapping.get_by_host')
    def test_schedule_and_build_multiple_cells(
            self, get_hostmapping, select_destinations,
            build_and_run_instance):
        """Test that creates two instances in separate cells."""
        # This list needs to match the number of build_requests and the
        # number of request_specs in params.
        select_destinations.return_value = [[fake_selection1],
                                            [fake_selection2]]

        params = self.params

        # The cells are created in the base TestCase setup.
        self.start_service('compute', host='host1', cell='cell1')
        self.start_service('compute', host='host2', cell='cell2')
        get_hostmapping.side_effect = self.host_mappings.values()

        # create an additional build request and request spec
        build_request = fake_build_request.fake_req_obj(self.ctxt)
        del build_request.instance.id
        build_request.create()
        params['build_requests'].objects.append(build_request)
        im2 = objects.InstanceMapping(
            self.ctxt, instance_uuid=build_request.instance.uuid,
            cell_mapping=None, project_id=self.ctxt.project_id)
        im2.create()
        params['request_specs'].append(objects.RequestSpec(
            instance_uuid=build_request.instance_uuid,
            instance_group=None))

        instance_cells = set()

        def _build_and_run_instance(ctxt, *args, **kwargs):
            instance = kwargs['instance']
            # Keep track of the cells that the instances were created in.
            inst_mapping = objects.InstanceMapping.get_by_instance_uuid(
                ctxt, instance.uuid)
            instance_cells.add(inst_mapping.cell_mapping.uuid)

        build_and_run_instance.side_effect = _build_and_run_instance
        self.conductor.schedule_and_build_instances(**params)
        self.assertEqual(2, build_and_run_instance.call_count)
        self.assertEqual(2, len(instance_cells))

    @mock.patch('nova.scheduler.rpcapi.SchedulerAPI.select_destinations')
    def test_schedule_and_build_scheduler_failure(self, select_destinations):
        select_destinations.side_effect = Exception
        self.start_service('compute', host='fake-host')
        self.conductor.schedule_and_build_instances(**self.params)
        with conductor_manager.try_target_cell(self.ctxt,
                                               self.cell_mappings['cell0']):
            instance = objects.Instance.get_by_uuid(
                self.ctxt, self.params['build_requests'][0].instance_uuid)
        self.assertEqual('error', instance.vm_state)
        self.assertIsNone(instance.task_state)

    @mock.patch('nova.objects.TagList.destroy')
    @mock.patch('nova.objects.TagList.create')
    @mock.patch('nova.compute.utils.notify_about_instance_usage')
    @mock.patch('nova.compute.rpcapi.ComputeAPI.build_and_run_instance')
    @mock.patch('nova.scheduler.rpcapi.SchedulerAPI.select_destinations')
    @mock.patch('nova.objects.BuildRequest.destroy')
    @mock.patch('nova.conductor.manager.ComputeTaskManager._bury_in_cell0')
    def test_schedule_and_build_delete_during_scheduling(self, bury,
                                                         br_destroy,
                                                         select_destinations,
                                                         build_and_run,
                                                         notify,
                                                         taglist_create,
                                                         taglist_destroy):
        br_destroy.side_effect = exc.BuildRequestNotFound(uuid='foo')
        self.start_service('compute', host='host1')
        select_destinations.return_value = [[fake_selection1]]
        taglist_create.return_value = self.params['tags']
        self.conductor.schedule_and_build_instances(**self.params)
        self.assertFalse(build_and_run.called)
        self.assertFalse(bury.called)
        self.assertTrue(br_destroy.called)
        taglist_destroy.assert_called_once_with(
            test.MatchType(context.RequestContext),
            self.params['build_requests'][0].instance_uuid)
        # Make sure TagList.destroy was called with the targeted context.
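        # db_connection is only set while a context is targeted to a cell
        # database, so a non-None value here shows the destroy ran under
        # target_cell.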
        self.assertIsNotNone(taglist_destroy.call_args[0][0].db_connection)

        test_utils.assert_instance_delete_notification_by_uuid(
            notify, self.params['build_requests'][0].instance_uuid,
            self.conductor.notifier, test.MatchType(context.RequestContext),
            expect_targeted_context=True)

    @mock.patch('nova.objects.Instance.destroy')
    @mock.patch('nova.compute.utils.notify_about_instance_usage')
    @mock.patch('nova.compute.rpcapi.ComputeAPI.build_and_run_instance')
    @mock.patch('nova.scheduler.rpcapi.SchedulerAPI.select_destinations')
    @mock.patch('nova.objects.BuildRequest.destroy')
    @mock.patch('nova.conductor.manager.ComputeTaskManager._bury_in_cell0')
    def test_schedule_and_build_delete_during_scheduling_host_changed(
            self, bury, br_destroy, select_destinations, build_and_run,
            notify, instance_destroy):
        br_destroy.side_effect = exc.BuildRequestNotFound(uuid='foo')
        instance_destroy.side_effect = [
            exc.ObjectActionError(action='destroy',
                                  reason='host changed'),
            None,
        ]

        self.start_service('compute', host='host1')
        select_destinations.return_value = [[fake_selection1]]
        self.conductor.schedule_and_build_instances(**self.params)
        self.assertFalse(build_and_run.called)
        self.assertFalse(bury.called)
        self.assertTrue(br_destroy.called)
        self.assertEqual(2, instance_destroy.call_count)
        test_utils.assert_instance_delete_notification_by_uuid(
            notify, self.params['build_requests'][0].instance_uuid,
            self.conductor.notifier, test.MatchType(context.RequestContext),
            expect_targeted_context=True)

    @mock.patch('nova.objects.Instance.destroy')
    @mock.patch('nova.compute.utils.notify_about_instance_usage')
    @mock.patch('nova.compute.rpcapi.ComputeAPI.build_and_run_instance')
    @mock.patch('nova.scheduler.rpcapi.SchedulerAPI.select_destinations')
    @mock.patch('nova.objects.BuildRequest.destroy')
    @mock.patch('nova.conductor.manager.ComputeTaskManager._bury_in_cell0')
    def test_schedule_and_build_delete_during_scheduling_instance_not_found(
            self, bury, br_destroy, select_destinations, build_and_run,
            notify, instance_destroy):
        br_destroy.side_effect = exc.BuildRequestNotFound(uuid='foo')
        instance_destroy.side_effect = [
            exc.InstanceNotFound(instance_id='fake'),
            None,
        ]

        self.start_service('compute', host='host1')
        select_destinations.return_value = [[fake_selection1]]
        self.conductor.schedule_and_build_instances(**self.params)
        self.assertFalse(build_and_run.called)
        self.assertFalse(bury.called)
        self.assertTrue(br_destroy.called)
        self.assertEqual(1, instance_destroy.call_count)
        test_utils.assert_instance_delete_notification_by_uuid(
            notify, self.params['build_requests'][0].instance_uuid,
            self.conductor.notifier, test.MatchType(context.RequestContext),
            expect_targeted_context=True)

    @mock.patch('nova.compute.rpcapi.ComputeAPI.build_and_run_instance')
    @mock.patch('nova.scheduler.rpcapi.SchedulerAPI.select_destinations')
    @mock.patch('nova.objects.BuildRequest.get_by_instance_uuid')
    @mock.patch('nova.objects.BuildRequest.destroy')
    @mock.patch('nova.conductor.manager.ComputeTaskManager._bury_in_cell0')
    @mock.patch('nova.objects.Instance.create')
    def test_schedule_and_build_delete_before_scheduling(self, inst_create,
            bury, br_destroy, br_get_by_inst, select_destinations,
            build_and_run):
        """Tests the case that the build request is deleted before the
        instance is created, so we do not create the instance.
        """
        inst_uuid = self.params['build_requests'][0].instance.uuid
        br_get_by_inst.side_effect = exc.BuildRequestNotFound(uuid=inst_uuid)
        self.start_service('compute', host='host1')
        select_destinations.return_value = [[fake_selection1]]
        self.conductor.schedule_and_build_instances(**self.params)
        # we don't create the instance since the build request is gone
        self.assertFalse(inst_create.called)
        # we don't build the instance since we didn't create it
        self.assertFalse(build_and_run.called)
        # we don't bury the instance in cell0 since it's already deleted
        self.assertFalse(bury.called)
        # we don't destroy the build request since it's already gone
        self.assertFalse(br_destroy.called)

    @mock.patch('nova.compute.rpcapi.ComputeAPI.build_and_run_instance')
    @mock.patch('nova.scheduler.rpcapi.SchedulerAPI.select_destinations')
    @mock.patch('nova.objects.BuildRequest.destroy')
    @mock.patch('nova.conductor.manager.ComputeTaskManager._bury_in_cell0')
    def test_schedule_and_build_unmapped_host_ends_up_in_cell0(self,
            bury, br_destroy, select_dest, build_and_run):

        def _fake_bury(ctxt, request_spec, exc, build_requests=None,
                       instances=None, block_device_mapping=None):
            self.assertIn('not mapped to any cell', str(exc))
            self.assertEqual(1, len(build_requests))
            self.assertEqual(1, len(instances))
            self.assertEqual(build_requests[0].instance_uuid,
                             instances[0].uuid)
            self.assertEqual(self.params['block_device_mapping'],
                             block_device_mapping)

        bury.side_effect = _fake_bury
        select_dest.return_value = [[fake_selection1]]
        self.conductor.schedule_and_build_instances(**self.params)
        self.assertTrue(bury.called)
        self.assertFalse(build_and_run.called)

    @mock.patch('nova.objects.quotas.Quotas.check_deltas')
    @mock.patch('nova.scheduler.rpcapi.SchedulerAPI.select_destinations')
    def test_schedule_and_build_over_quota_during_recheck(self, mock_select,
                                                          mock_check):
        mock_select.return_value = [[fake_selection1]]
        # Simulate a race where the first check passes and the recheck
        # fails. First check occurs in compute/api.
        fake_quotas = {'instances': 5, 'cores': 10, 'ram': 4096}
        fake_headroom = {'instances': 5, 'cores': 10, 'ram': 4096}
        fake_usages = {'instances': 5, 'cores': 10, 'ram': 4096}
        e = exc.OverQuota(overs=['instances'], quotas=fake_quotas,
                          headroom=fake_headroom, usages=fake_usages)
        mock_check.side_effect = e

        original_save = objects.Instance.save

        def fake_save(inst, *args, **kwargs):
            # Make sure the context is targeted to the cell that the
            # instance was created in.
            self.assertIsNotNone(
                inst._context.db_connection, 'Context is not targeted')
            original_save(inst, *args, **kwargs)

        self.stub_out('nova.objects.Instance.save', fake_save)

        # This is needed to register the compute node in a cell.
        self.start_service('compute', host='host1')
        self.assertRaises(
            exc.TooManyInstances,
            self.conductor.schedule_and_build_instances, **self.params)

        project_id = self.params['context'].project_id
        mock_check.assert_called_once_with(
            self.params['context'], {'instances': 0, 'cores': 0, 'ram': 0},
            project_id, user_id=None, check_project_id=project_id,
            check_user_id=None)

        # Verify we set the instance to ERROR state and set the fault
        # message.
        instances = objects.InstanceList.get_all(self.ctxt)
        self.assertEqual(1, len(instances))
        instance = instances[0]
        self.assertEqual(vm_states.ERROR, instance.vm_state)
        self.assertIsNone(instance.task_state)
        self.assertIn('Quota exceeded', instance.fault.message)

        # Verify we removed the build objects.
        build_requests = objects.BuildRequestList.get_all(self.ctxt)
        # Verify that the instance is mapped to a cell
        inst_mapping = objects.InstanceMapping.get_by_instance_uuid(
            self.ctxt, instance.uuid)
        self.assertIsNotNone(inst_mapping.cell_mapping)

        self.assertEqual(0, len(build_requests))

        @db_api.api_context_manager.reader
        def request_spec_get_all(context):
            return context.session.query(api_models.RequestSpec).all()

        request_specs = request_spec_get_all(self.ctxt)
        self.assertEqual(0, len(request_specs))

    @mock.patch('nova.compute.rpcapi.ComputeAPI.build_and_run_instance')
    @mock.patch('nova.objects.quotas.Quotas.check_deltas')
    @mock.patch('nova.scheduler.rpcapi.SchedulerAPI.select_destinations')
    def test_schedule_and_build_no_quota_recheck(self, mock_select,
                                                 mock_check, mock_build):
        mock_select.return_value = [[fake_selection1]]
        # Disable recheck_quota.
        self.flags(recheck_quota=False, group='quota')
        # This is needed to register the compute node in a cell.
        self.start_service('compute', host='host1')
        self.conductor.schedule_and_build_instances(**self.params)

        # check_deltas should not have been called a second time. The first
        # check occurs in compute/api.
        mock_check.assert_not_called()

        self.assertTrue(mock_build.called)

    @mock.patch('nova.objects.CellMapping.get_by_uuid')
    def test_bury_in_cell0_no_cell0(self, mock_cm_get):
        mock_cm_get.side_effect = exc.CellMappingNotFound(uuid='0')
        # Without an iterable build_requests in the database, this
        # wouldn't work if it continued past the cell0 lookup.
        self.conductor._bury_in_cell0(self.ctxt, None, None,
                                      build_requests=1)
        self.assertTrue(mock_cm_get.called)

    def test_bury_in_cell0(self):
        bare_br = self.params['build_requests'][0]

        inst_br = fake_build_request.fake_req_obj(self.ctxt)
        del inst_br.instance.id
        inst_br.create()
        inst = inst_br.get_new_instance(self.ctxt)

        deleted_br = fake_build_request.fake_req_obj(self.ctxt)
        del deleted_br.instance.id
        deleted_br.create()
        deleted_inst = inst_br.get_new_instance(self.ctxt)
        deleted_br.destroy()

        fast_deleted_br = fake_build_request.fake_req_obj(self.ctxt)
        del fast_deleted_br.instance.id
        fast_deleted_br.create()
        fast_deleted_br.destroy()

        self.conductor._bury_in_cell0(self.ctxt,
                                      self.params['request_specs'][0],
                                      Exception('Foo'),
                                      build_requests=[bare_br, inst_br,
                                                      deleted_br,
                                                      fast_deleted_br],
                                      instances=[inst, deleted_inst])

        with conductor_manager.try_target_cell(self.ctxt,
                                               self.cell_mappings['cell0']):
            self.ctxt.read_deleted = 'yes'
            build_requests = objects.BuildRequestList.get_all(self.ctxt)
            instances = objects.InstanceList.get_all(self.ctxt)

        self.assertEqual(0, len(build_requests))
        self.assertEqual(4, len(instances))
        inst_states = {inst.uuid: (inst.deleted, inst.vm_state)
                       for inst in instances}
        expected = {
            bare_br.instance_uuid: (False, vm_states.ERROR),
            inst_br.instance_uuid: (False, vm_states.ERROR),
            deleted_br.instance_uuid: (True, vm_states.ERROR),
            fast_deleted_br.instance_uuid: (True, vm_states.ERROR),
        }
        self.assertEqual(expected, inst_states)

    @mock.patch.object(objects.CellMapping, 'get_by_uuid')
    @mock.patch.object(conductor_manager.ComputeTaskManager,
                       '_create_block_device_mapping')
    def test_bury_in_cell0_with_block_device_mapping(self, mock_create_bdm,
                                                     mock_get_cell):
        mock_get_cell.return_value = self.cell_mappings['cell0']

        inst_br = fake_build_request.fake_req_obj(self.ctxt)
        del inst_br.instance.id
        inst_br.create()
        inst = inst_br.get_new_instance(self.ctxt)

        self.conductor._bury_in_cell0(
            self.ctxt, self.params['request_specs'][0], Exception('Foo'),
            build_requests=[inst_br], instances=[inst],
            block_device_mapping=self.params['block_device_mapping'])

        mock_create_bdm.assert_called_once_with(
            self.cell_mappings['cell0'], inst.flavor, inst.uuid,
            self.params['block_device_mapping'])

    def test_reset(self):
        with mock.patch('nova.compute.rpcapi.ComputeAPI') as mock_rpc:
            old_rpcapi = self.conductor_manager.compute_rpcapi
            self.conductor_manager.reset()
            mock_rpc.assert_called_once_with()
            self.assertNotEqual(old_rpcapi,
                                self.conductor_manager.compute_rpcapi)

    @mock.patch.object(objects.InstanceMapping, 'get_by_instance_uuid')
    def test_migrate_server_fails_with_rebuild(self, get_im):
        get_im.return_value.cell_mapping = (
            objects.CellMappingList.get_all(self.context)[0])
        instance = fake_instance.fake_instance_obj(self.context,
                                                   vm_state=vm_states.ACTIVE)
        self.assertRaises(NotImplementedError, self.conductor.migrate_server,
                          self.context, instance, None, True, True, None,
                          None, None)

    @mock.patch.object(objects.InstanceMapping, 'get_by_instance_uuid')
    def test_migrate_server_fails_with_flavor(self, get_im):
        get_im.return_value.cell_mapping = (
            objects.CellMappingList.get_all(self.context)[0])
        flavor = flavors.get_flavor_by_name('m1.tiny')
        instance = fake_instance.fake_instance_obj(self.context,
                                                   vm_state=vm_states.ACTIVE,
                                                   flavor=flavor)
        self.assertRaises(NotImplementedError, self.conductor.migrate_server,
                          self.context, instance, None, True, False, flavor,
                          None, None)

    def _build_request_spec(self, instance):
        return {
            'instance_properties': {
                'uuid': instance['uuid'],
            },
        }

    @mock.patch.object(objects.InstanceMapping, 'get_by_instance_uuid')
    @mock.patch.object(scheduler_utils, 'set_vm_state_and_notify')
    @mock.patch.object(live_migrate.LiveMigrationTask, 'execute')
    def _test_migrate_server_deals_with_expected_exceptions(self, ex,
            mock_execute, mock_set, get_im):
        get_im.return_value.cell_mapping = (
            objects.CellMappingList.get_all(self.context)[0])
        instance = fake_instance.fake_db_instance(uuid=uuids.instance,
                                                  vm_state=vm_states.ACTIVE)
        inst_obj = objects.Instance._from_db_object(
            self.context, objects.Instance(), instance, [])
        mock_execute.side_effect = ex
        self.conductor = utils.ExceptionHelper(self.conductor)

        self.assertRaises(type(ex),
            self.conductor.migrate_server, self.context, inst_obj,
            {'host': 'destination'}, True, False, None, 'block_migration',
            'disk_over_commit')

        mock_set.assert_called_once_with(self.context,
                inst_obj.uuid,
                'compute_task', 'migrate_server',
                {'vm_state': vm_states.ACTIVE,
                 'task_state': None,
                 'expected_task_state': task_states.MIGRATING},
                ex, self._build_request_spec(inst_obj))

    @mock.patch.object(objects.InstanceMapping, 'get_by_instance_uuid')
    def test_migrate_server_deals_with_invalidcpuinfo_exception(self,
                                                                get_im):
        get_im.return_value.cell_mapping = (
            objects.CellMappingList.get_all(self.context)[0])
        instance = fake_instance.fake_db_instance(uuid=uuids.instance,
                                                  vm_state=vm_states.ACTIVE)
        inst_obj = objects.Instance._from_db_object(
            self.context, objects.Instance(), instance, [])
        self.mox.StubOutWithMock(live_migrate.LiveMigrationTask, 'execute')
        self.mox.StubOutWithMock(scheduler_utils, 'set_vm_state_and_notify')

        ex = exc.InvalidCPUInfo(reason="invalid cpu info.")

        task = self.conductor._build_live_migrate_task(
            self.context, inst_obj, 'destination', 'block_migration',
            'disk_over_commit', mox.IsA(objects.Migration))
        task.execute().AndRaise(ex)

        scheduler_utils.set_vm_state_and_notify(self.context,
                inst_obj.uuid,
                'compute_task', 'migrate_server',
                {'vm_state': vm_states.ACTIVE,
                 'task_state': None,
                 'expected_task_state': task_states.MIGRATING},
                ex,
                self._build_request_spec(inst_obj))
        self.mox.ReplayAll()

        self.conductor = utils.ExceptionHelper(self.conductor)

        self.assertRaises(exc.InvalidCPUInfo,
            self.conductor.migrate_server, self.context, inst_obj,
            {'host': 'destination'}, True, False, None, 'block_migration',
            'disk_over_commit')

    def test_migrate_server_deals_with_expected_exception(self):
        exs = [exc.InstanceInvalidState(instance_uuid="fake", attr='',
                                        state='', method=''),
               exc.DestinationHypervisorTooOld(),
               exc.HypervisorUnavailable(host='dummy'),
               exc.LiveMigrationWithOldNovaNotSupported(),
               exc.MigrationPreCheckError(reason='dummy'),
               exc.MigrationPreCheckClientException(reason='dummy'),
               exc.InvalidSharedStorage(path='dummy', reason='dummy'),
               exc.NoValidHost(reason='dummy'),
               exc.ComputeServiceUnavailable(host='dummy'),
               exc.InvalidHypervisorType(),
               exc.InvalidCPUInfo(reason='dummy'),
               exc.UnableToMigrateToSelf(instance_id='dummy', host='dummy'),
               exc.InvalidLocalStorage(path='dummy', reason='dummy'),
               exc.MigrationSchedulerRPCError(reason='dummy'),
               exc.ComputeHostNotFound(host='dummy')]
        for ex in exs:
            self._test_migrate_server_deals_with_expected_exceptions(ex)

    @mock.patch.object(objects.InstanceMapping, 'get_by_instance_uuid')
    @mock.patch.object(scheduler_utils, 'set_vm_state_and_notify')
    @mock.patch.object(live_migrate.LiveMigrationTask, 'execute')
    def test_migrate_server_deals_with_unexpected_exceptions(self,
            mock_live_migrate, mock_set_state, get_im):
        get_im.return_value.cell_mapping = (
            objects.CellMappingList.get_all(self.context)[0])
        expected_ex = IOError('fake error')
        mock_live_migrate.side_effect = expected_ex
        instance = fake_instance.fake_db_instance()
        inst_obj = objects.Instance._from_db_object(
            self.context, objects.Instance(), instance, [])
        ex = self.assertRaises(exc.MigrationError,
            self.conductor.migrate_server, self.context, inst_obj,
            {'host': 'destination'}, True, False, None,
            'block_migration', 'disk_over_commit')
        request_spec = {'instance_properties': {
            'uuid': instance['uuid'],
        }}
        mock_set_state.assert_called_once_with(self.context,
                instance['uuid'],
                'compute_task', 'migrate_server',
                dict(vm_state=vm_states.ERROR,
                     task_state=None,
                     expected_task_state=task_states.MIGRATING),
                expected_ex, request_spec)
        self.assertEqual(ex.kwargs['reason'], six.text_type(expected_ex))

    def test_set_vm_state_and_notify(self):
        self.mox.StubOutWithMock(scheduler_utils, 'set_vm_state_and_notify')
        scheduler_utils.set_vm_state_and_notify(
                self.context, 1, 'compute_task', 'method', 'updates',
                'ex', 'request_spec')

        self.mox.ReplayAll()

        self.conductor._set_vm_state_and_notify(
                self.context, 1, 'method', 'updates', 'ex', 'request_spec')

    @mock.patch.object(objects.InstanceMapping, 'get_by_instance_uuid')
    @mock.patch.object(objects.RequestSpec, 'from_components')
    @mock.patch.object(scheduler_utils, 'setup_instance_group')
    @mock.patch.object(utils, 'get_image_from_system_metadata')
    @mock.patch.object(scheduler_client.SchedulerClient,
                       'select_destinations')
    @mock.patch.object(conductor_manager.ComputeTaskManager,
                       '_set_vm_state_and_notify')
    @mock.patch.object(migrate.MigrationTask, 'rollback')
    def test_cold_migrate_no_valid_host_back_in_active_state(
            self, rollback_mock, notify_mock, select_dest_mock,
            metadata_mock, sig_mock, spec_fc_mock, im_mock):
        flavor = flavors.get_flavor_by_name('m1.tiny')
        inst_obj = objects.Instance(
            image_ref='fake-image_ref',
            instance_type_id=flavor['id'],
            vm_state=vm_states.ACTIVE,
            system_metadata={},
            uuid=uuids.instance,
            user_id=fakes.FAKE_USER_ID,
            flavor=flavor,
            availability_zone=None,
            pci_requests=None,
            numa_topology=None,
            project_id=self.context.project_id)
        image = 'fake-image'
        fake_spec = objects.RequestSpec(image=objects.ImageMeta())
        spec_fc_mock.return_value = fake_spec
        metadata_mock.return_value = image
        exc_info = exc.NoValidHost(reason="")
        select_dest_mock.side_effect = exc_info
        updates = {'vm_state': vm_states.ACTIVE,
                   'task_state': None}
        im_mock.return_value = objects.InstanceMapping(
            cell_mapping=objects.CellMapping.get_by_uuid(self.context,
                                                         uuids.cell1))
        self.assertRaises(exc.NoValidHost,
                          self.conductor._cold_migrate,
                          self.context, inst_obj,
                          flavor, {}, True, None, None)
        metadata_mock.assert_called_with({})
        sig_mock.assert_called_once_with(self.context, fake_spec)
        self.assertEqual(inst_obj.project_id, fake_spec.project_id)
        notify_mock.assert_called_once_with(self.context, inst_obj.uuid,
                                            'migrate_server', updates,
                                            exc_info, fake_spec)
        rollback_mock.assert_called_once_with()

    @mock.patch.object(objects.InstanceMapping, 'get_by_instance_uuid')
    @mock.patch.object(scheduler_utils, 'setup_instance_group')
    @mock.patch.object(objects.RequestSpec, 'from_components')
    @mock.patch.object(utils, 'get_image_from_system_metadata')
    @mock.patch.object(scheduler_client.SchedulerClient,
                       'select_destinations')
    @mock.patch.object(conductor_manager.ComputeTaskManager,
                       '_set_vm_state_and_notify')
    @mock.patch.object(migrate.MigrationTask, 'rollback')
    def test_cold_migrate_no_valid_host_back_in_stopped_state(
            self, rollback_mock, notify_mock, select_dest_mock,
            metadata_mock, spec_fc_mock, sig_mock, im_mock):
        flavor = flavors.get_flavor_by_name('m1.tiny')
        inst_obj = objects.Instance(
            image_ref='fake-image_ref',
            vm_state=vm_states.STOPPED,
            instance_type_id=flavor['id'],
            system_metadata={},
            uuid=uuids.instance,
            user_id=fakes.FAKE_USER_ID,
            flavor=flavor,
            numa_topology=None,
            pci_requests=None,
            availability_zone=None,
            project_id=self.context.project_id)
        image = 'fake-image'
        fake_spec = objects.RequestSpec(image=objects.ImageMeta())
        spec_fc_mock.return_value = fake_spec
        im_mock.return_value = objects.InstanceMapping(
            cell_mapping=objects.CellMapping.get_by_uuid(self.context,
                                                         uuids.cell1))
        metadata_mock.return_value = image
        exc_info = exc.NoValidHost(reason="")
        select_dest_mock.side_effect = exc_info
        updates = {'vm_state': vm_states.STOPPED,
                   'task_state': None}
        self.assertRaises(exc.NoValidHost,
                          self.conductor._cold_migrate,
                          self.context, inst_obj,
                          flavor, {}, True, None, None)
        metadata_mock.assert_called_with({})
        sig_mock.assert_called_once_with(self.context, fake_spec)
        self.assertEqual(inst_obj.project_id, fake_spec.project_id)
        notify_mock.assert_called_once_with(self.context, inst_obj.uuid,
                                            'migrate_server', updates,
                                            exc_info, fake_spec)
        rollback_mock.assert_called_once_with()

    def test_cold_migrate_no_valid_host_error_msg(self):
        flavor = flavors.get_flavor_by_name('m1.tiny')
        inst_obj = objects.Instance(
            image_ref='fake-image_ref',
            vm_state=vm_states.STOPPED,
            instance_type_id=flavor['id'],
            system_metadata={},
            uuid=uuids.instance,
            user_id=fakes.FAKE_USER_ID)
        fake_spec = fake_request_spec.fake_spec_obj()
        image = 'fake-image'

        with test.nested(
            mock.patch.object(utils, 'get_image_from_system_metadata',
                              return_value=image),
            mock.patch.object(self.conductor, '_set_vm_state_and_notify'),
            mock.patch.object(migrate.MigrationTask,
                              'execute',
                              side_effect=exc.NoValidHost(reason="")),
            mock.patch.object(migrate.MigrationTask, 'rollback')
        ) as (image_mock, set_vm_mock, task_execute_mock,
              task_rollback_mock):
            nvh = self.assertRaises(exc.NoValidHost,
                                    self.conductor._cold_migrate,
                                    self.context, inst_obj,
                                    flavor, {}, True,
fake_spec, None) self.assertIn('cold migrate', nvh.message) @mock.patch.object(utils, 'get_image_from_system_metadata') @mock.patch.object(migrate.MigrationTask, 'execute') @mock.patch.object(migrate.MigrationTask, 'rollback') @mock.patch.object(conductor_manager.ComputeTaskManager, '_set_vm_state_and_notify') @mock.patch.object(objects.RequestSpec, 'from_components') def test_cold_migrate_no_valid_host_in_group(self, spec_fc_mock, set_vm_mock, task_rollback_mock, task_exec_mock, image_mock): flavor = flavors.get_flavor_by_name('m1.tiny') inst_obj = objects.Instance( image_ref='fake-image_ref', vm_state=vm_states.STOPPED, instance_type_id=flavor['id'], system_metadata={}, uuid=uuids.instance, user_id=fakes.FAKE_USER_ID, flavor=flavor, numa_topology=None, pci_requests=None, availability_zone=None) image = 'fake-image' exception = exc.UnsupportedPolicyException(reason='') fake_spec = fake_request_spec.fake_spec_obj() spec_fc_mock.return_value = fake_spec image_mock.return_value = image task_exec_mock.side_effect = exception self.assertRaises(exc.UnsupportedPolicyException, self.conductor._cold_migrate, self.context, inst_obj, flavor, {}, True, None, None) updates = {'vm_state': vm_states.STOPPED, 'task_state': None} set_vm_mock.assert_called_once_with(self.context, inst_obj.uuid, 'migrate_server', updates, exception, fake_spec) @mock.patch.object(objects.InstanceMapping, 'get_by_instance_uuid') @mock.patch.object(scheduler_utils, 'setup_instance_group') @mock.patch.object(objects.RequestSpec, 'from_components') @mock.patch.object(utils, 'get_image_from_system_metadata') @mock.patch.object(scheduler_client.SchedulerClient, 'select_destinations') @mock.patch.object(conductor_manager.ComputeTaskManager, '_set_vm_state_and_notify') @mock.patch.object(migrate.MigrationTask, 'rollback') @mock.patch.object(compute_rpcapi.ComputeAPI, 'prep_resize') def test_cold_migrate_exception_host_in_error_state_and_raise( self, prep_resize_mock, rollback_mock, notify_mock, select_dest_mock, metadata_mock, spec_fc_mock, sig_mock, im_mock): flavor = flavors.get_flavor_by_name('m1.tiny') inst_obj = objects.Instance( image_ref='fake-image_ref', vm_state=vm_states.STOPPED, instance_type_id=flavor['id'], system_metadata={}, uuid=uuids.instance, user_id=fakes.FAKE_USER_ID, flavor=flavor, availability_zone=None, pci_requests=None, numa_topology=None, project_id=self.context.project_id) image = 'fake-image' fake_spec = objects.RequestSpec(image=objects.ImageMeta()) legacy_request_spec = fake_spec.to_legacy_request_spec_dict() spec_fc_mock.return_value = fake_spec im_mock.return_value = objects.InstanceMapping( cell_mapping=objects.CellMapping.get_by_uuid(self.context, uuids.cell1)) hosts = [dict(host='host1', nodename='node1', limits={})] metadata_mock.return_value = image exc_info = test.TestingException('something happened') select_dest_mock.return_value = [[fake_selection1]] updates = {'vm_state': vm_states.STOPPED, 'task_state': None} prep_resize_mock.side_effect = exc_info self.assertRaises(test.TestingException, self.conductor._cold_migrate, self.context, inst_obj, flavor, {}, True, None, None) # Filter properties are populated during code execution legacy_filter_props = {'retry': {'num_attempts': 1, 'hosts': [['host1', 'node1']]}, 'limits': {}} metadata_mock.assert_called_with({}) sig_mock.assert_called_once_with(self.context, fake_spec) self.assertEqual(inst_obj.project_id, fake_spec.project_id) select_dest_mock.assert_called_once_with(self.context, fake_spec, [inst_obj.uuid], return_objects=True, 
return_alternates=True) prep_resize_mock.assert_called_once_with( self.context, inst_obj, legacy_request_spec['image'], flavor, hosts[0]['host'], None, request_spec=legacy_request_spec, filter_properties=legacy_filter_props, node=hosts[0]['nodename'], clean_shutdown=True, host_list=[]) notify_mock.assert_called_once_with(self.context, inst_obj.uuid, 'migrate_server', updates, exc_info, fake_spec) rollback_mock.assert_called_once_with() @mock.patch.object(objects.RequestSpec, 'save') @mock.patch.object(migrate.MigrationTask, 'execute') @mock.patch.object(utils, 'get_image_from_system_metadata') def test_cold_migrate_updates_flavor_if_existing_reqspec(self, image_mock, task_exec_mock, spec_save_mock): flavor = flavors.get_flavor_by_name('m1.tiny') inst_obj = objects.Instance( image_ref='fake-image_ref', vm_state=vm_states.STOPPED, instance_type_id=flavor['id'], system_metadata={}, uuid=uuids.instance, user_id=fakes.FAKE_USER_ID, flavor=flavor, availability_zone=None, pci_requests=None, numa_topology=None) image = 'fake-image' fake_spec = fake_request_spec.fake_spec_obj() image_mock.return_value = image # Just make sure we have an original flavor which is different from # the new one self.assertNotEqual(flavor, fake_spec.flavor) self.conductor._cold_migrate(self.context, inst_obj, flavor, {}, True, fake_spec, None) # Now the RequestSpec should be updated... self.assertEqual(flavor, fake_spec.flavor) # ...and persisted spec_save_mock.assert_called_once_with() def test_resize_no_valid_host_error_msg(self): flavor = flavors.get_flavor_by_name('m1.tiny') flavor_new = flavors.get_flavor_by_name('m1.small') inst_obj = objects.Instance( image_ref='fake-image_ref', vm_state=vm_states.STOPPED, instance_type_id=flavor['id'], system_metadata={}, uuid=uuids.instance, user_id=fakes.FAKE_USER_ID) fake_spec = fake_request_spec.fake_spec_obj() image = 'fake-image' with test.nested( mock.patch.object(utils, 'get_image_from_system_metadata', return_value=image), mock.patch.object(scheduler_utils, 'build_request_spec', return_value=fake_spec), mock.patch.object(self.conductor, '_set_vm_state_and_notify'), mock.patch.object(migrate.MigrationTask, 'execute', side_effect=exc.NoValidHost(reason="")), mock.patch.object(migrate.MigrationTask, 'rollback') ) as (image_mock, brs_mock, vm_st_mock, task_execute_mock, task_rb_mock): nvh = self.assertRaises(exc.NoValidHost, self.conductor._cold_migrate, self.context, inst_obj, flavor_new, {}, True, fake_spec, None) self.assertIn('resize', nvh.message) @mock.patch('nova.objects.BuildRequest.get_by_instance_uuid') @mock.patch.object(objects.RequestSpec, 'from_primitives') def test_build_instances_instance_not_found(self, fp, _mock_buildreq): fake_spec = objects.RequestSpec() fp.return_value = fake_spec instances = [fake_instance.fake_instance_obj(self.context) for i in range(2)] self.mox.StubOutWithMock(instances[0], 'save') self.mox.StubOutWithMock(instances[1], 'save') image = {'fake-data': 'should_pass_silently'} spec = {'fake': 'specs', 'instance_properties': instances[0]} self.mox.StubOutWithMock(scheduler_utils, 'build_request_spec') self.mox.StubOutWithMock(self.conductor_manager, '_schedule_instances') self.mox.StubOutWithMock(self.conductor_manager.compute_rpcapi, 'build_and_run_instance') scheduler_utils.build_request_spec(image, mox.IgnoreArg()).AndReturn(spec) filter_properties = {'retry': {'num_attempts': 1, 'hosts': []}} inst_uuids = [inst.uuid for inst in instances] sched_return = copy.deepcopy(fake_host_lists2) 
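        # Note: everything from here down to self.mox.ReplayAll() is being
        # *recorded* as a mox expectation rather than executed; after
        # ReplayAll(), the calls made by the code under test are matched
        # against this script. A minimal sketch of the record/replay idiom
        # (object and method names hypothetical):
        #
        #     self.mox.StubOutWithMock(obj, 'method')
        #     obj.method('arg').AndReturn('result')   # record expectation
        #     self.mox.ReplayAll()
        #     obj.method('arg')                       # replay: returns 'result'
        #     self.mox.VerifyAll()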
        self.conductor_manager._schedule_instances(self.context,
                fake_spec, inst_uuids, return_alternates=True).AndReturn(
                        sched_return)
        instances[0].save().AndRaise(
                exc.InstanceNotFound(instance_id=instances[0].uuid))
        instances[1].save()
        filter_properties2 = {'limits': {},
                              'retry': {'num_attempts': 1,
                                        'hosts': [['host2',
                                                   'node2']]}}
        self.conductor_manager.compute_rpcapi.build_and_run_instance(
                self.context, instance=instances[1], host='host2',
                image={'fake-data': 'should_pass_silently'},
                request_spec=fake_spec,
                filter_properties=filter_properties2,
                admin_password='admin_password',
                injected_files='injected_files',
                requested_networks=None,
                security_groups='security_groups',
                block_device_mapping=mox.IsA(objects.BlockDeviceMappingList),
                node='node2', limits=None, host_list=[])
        self.mox.ReplayAll()

        # build_instances() is a cast, so we need to wait for it to complete
        self.useFixture(cast_as_call.CastAsCall(self))
        self.conductor.build_instances(self.context,
                instances=instances,
                image=image,
                filter_properties={},
                admin_password='admin_password',
                injected_files='injected_files',
                requested_networks=None,
                security_groups='security_groups',
                block_device_mapping='block_device_mapping',
                legacy_bdm=False, host_lists=None)
        # RequestSpec.from_primitives is called once before we call the
        # scheduler to select_destinations and then once per instance that
        # gets built in the compute.
        fp.assert_has_calls([
            mock.call(self.context, spec, filter_properties),
            mock.call(self.context, spec, filter_properties2)])

    @mock.patch.object(scheduler_utils, 'setup_instance_group')
    @mock.patch.object(scheduler_utils, 'build_request_spec')
    def test_build_instances_info_cache_not_found(self, build_request_spec,
                                                  setup_instance_group):
        instances = [fake_instance.fake_instance_obj(self.context)
                     for i in range(2)]
        image = {'fake-data': 'should_pass_silently'}
        destinations = [[fake_selection1], [fake_selection2]]
        spec = {'fake': 'specs',
                'instance_properties': instances[0]}
        build_request_spec.return_value = spec
        fake_spec = objects.RequestSpec()
        with test.nested(
                mock.patch.object(instances[0], 'save',
                    side_effect=exc.InstanceInfoCacheNotFound(
                        instance_uuid=instances[0].uuid)),
                mock.patch.object(instances[1], 'save'),
                mock.patch.object(objects.RequestSpec, 'from_primitives',
                                  return_value=fake_spec),
                mock.patch.object(self.conductor_manager.scheduler_client,
                                  'select_destinations',
                                  return_value=destinations),
                mock.patch.object(self.conductor_manager.compute_rpcapi,
                                  'build_and_run_instance'),
                mock.patch.object(objects.BuildRequest, 'get_by_instance_uuid')
                ) as (inst1_save, inst2_save, from_primitives,
                      select_destinations,
                      build_and_run_instance, get_buildreq):
            # build_instances() is a cast, so we need to wait for it to
            # complete
            self.useFixture(cast_as_call.CastAsCall(self))
            self.conductor.build_instances(self.context,
                    instances=instances,
                    image=image,
                    filter_properties={},
                    admin_password='admin_password',
                    injected_files='injected_files',
                    requested_networks=None,
                    security_groups='security_groups',
                    block_device_mapping='block_device_mapping',
                    legacy_bdm=False)
            setup_instance_group.assert_called_once_with(
                self.context, fake_spec)
            get_buildreq.return_value.destroy.assert_called_once_with()
            build_and_run_instance.assert_called_once_with(self.context,
                    instance=instances[1], host='host2',
                    image={'fake-data': 'should_pass_silently'},
                    request_spec=from_primitives.return_value,
                    filter_properties={'limits': {},
                                       'retry': {'num_attempts': 1,
                                                 'hosts': [['host2',
                                                            'node2']]}},
                    admin_password='admin_password',
                    injected_files='injected_files',
requested_networks=None, security_groups='security_groups', block_device_mapping=mock.ANY, node='node2', limits=None, host_list=[]) @mock.patch('nova.objects.Instance.save') def test_build_instances_max_retries_exceeded(self, mock_save): """Tests that when populate_retry raises MaxRetriesExceeded in build_instances, we don't attempt to cleanup the build request. """ instance = fake_instance.fake_instance_obj(self.context) image = {'id': uuids.image_id} filter_props = { 'retry': { 'num_attempts': CONF.scheduler.max_attempts } } requested_networks = objects.NetworkRequestList() with mock.patch.object(self.conductor, '_destroy_build_request', new_callable=mock.NonCallableMock): self.conductor.build_instances( self.context, [instance], image, filter_props, mock.sentinel.admin_pass, mock.sentinel.files, requested_networks, mock.sentinel.secgroups) mock_save.assert_called_once_with() @mock.patch('nova.objects.Instance.save') def test_build_instances_reschedule_no_valid_host(self, mock_save): """Tests that when select_destinations raises NoValidHost in build_instances, we don't attempt to cleanup the build request if we're rescheduling (num_attempts>1). """ instance = fake_instance.fake_instance_obj(self.context) image = {'id': uuids.image_id} filter_props = { 'retry': { 'num_attempts': 1 # populate_retry will increment this } } requested_networks = objects.NetworkRequestList() with mock.patch.object(self.conductor, '_destroy_build_request', new_callable=mock.NonCallableMock): with mock.patch.object( self.conductor.scheduler_client, 'select_destinations', side_effect=exc.NoValidHost(reason='oops')): self.conductor.build_instances( self.context, [instance], image, filter_props, mock.sentinel.admin_pass, mock.sentinel.files, requested_networks, mock.sentinel.secgroups) mock_save.assert_called_once_with() def test_cleanup_allocated_networks_none_requested(self): # Tests that we don't deallocate networks if 'none' were specifically # requested. fake_inst = fake_instance.fake_instance_obj( self.context, expected_attrs=['system_metadata']) requested_networks = objects.NetworkRequestList( objects=[objects.NetworkRequest(network_id='none')]) with mock.patch.object(self.conductor.network_api, 'deallocate_for_instance') as deallocate: with mock.patch.object(fake_inst, 'save') as mock_save: self.conductor._cleanup_allocated_networks( self.context, fake_inst, requested_networks) self.assertFalse(deallocate.called) self.assertEqual('False', fake_inst.system_metadata['network_allocated'], fake_inst.system_metadata) mock_save.assert_called_once_with() def test_cleanup_allocated_networks_auto_or_none_provided(self): # Tests that we deallocate networks if auto-allocating networks or # requested_networks=None. 
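        # Both of those cases are exercised by the loop over
        # (requested_networks, None) below: a NetworkRequestList holding the
        # sentinel 'auto' network and a plain None both mean "nova decides",
        # so deallocation must happen either way, unlike the explicit 'none'
        # case tested above.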
fake_inst = fake_instance.fake_instance_obj( self.context, expected_attrs=['system_metadata']) requested_networks = objects.NetworkRequestList( objects=[objects.NetworkRequest(network_id='auto')]) for req_net in (requested_networks, None): with mock.patch.object(self.conductor.network_api, 'deallocate_for_instance') as deallocate: with mock.patch.object(fake_inst, 'save') as mock_save: self.conductor._cleanup_allocated_networks( self.context, fake_inst, req_net) deallocate.assert_called_once_with( self.context, fake_inst, requested_networks=req_net) self.assertEqual('False', fake_inst.system_metadata['network_allocated'], fake_inst.system_metadata) mock_save.assert_called_once_with() @mock.patch.object(objects.ComputeNode, 'get_by_host_and_nodename', side_effect=exc.ComputeHostNotFound('source-host')) def test_allocate_for_evacuate_dest_host_source_node_not_found_no_reqspec( self, get_compute_node): """Tests that the source node for the instance isn't found. In this case there is no request spec provided. """ instance = self.params['build_requests'][0].instance instance.host = 'source-host' with mock.patch.object(self.conductor, '_set_vm_state_and_notify') as notify: ex = self.assertRaises( exc.ComputeHostNotFound, self.conductor._allocate_for_evacuate_dest_host, self.ctxt, instance, 'dest-host') get_compute_node.assert_called_once_with( self.ctxt, instance.host, instance.node) notify.assert_called_once_with( self.ctxt, instance.uuid, 'rebuild_server', {'vm_state': instance.vm_state, 'task_state': None}, ex, None) @mock.patch.object(objects.ComputeNode, 'get_by_host_and_nodename', return_value=objects.ComputeNode(host='source-host')) @mock.patch.object(objects.ComputeNode, 'get_first_node_by_host_for_old_compat', side_effect=exc.ComputeHostNotFound(host='dest-host')) def test_allocate_for_evacuate_dest_host_dest_node_not_found_reqspec( self, get_dest_node, get_source_node): """Tests that the destination node for the request isn't found. In this case there is a request spec provided. 
""" instance = self.params['build_requests'][0].instance instance.host = 'source-host' reqspec = self.params['request_specs'][0] with mock.patch.object(self.conductor, '_set_vm_state_and_notify') as notify: ex = self.assertRaises( exc.ComputeHostNotFound, self.conductor._allocate_for_evacuate_dest_host, self.ctxt, instance, 'dest-host', reqspec) get_source_node.assert_called_once_with( self.ctxt, instance.host, instance.node) get_dest_node.assert_called_once_with( self.ctxt, 'dest-host', use_slave=True) notify.assert_called_once_with( self.ctxt, instance.uuid, 'rebuild_server', {'vm_state': instance.vm_state, 'task_state': None}, ex, reqspec) @mock.patch.object(objects.ComputeNode, 'get_by_host_and_nodename', return_value=objects.ComputeNode(host='source-host')) @mock.patch.object(objects.ComputeNode, 'get_first_node_by_host_for_old_compat', return_value=objects.ComputeNode(host='dest-host')) def test_allocate_for_evacuate_dest_host_claim_fails( self, get_dest_node, get_source_node): """Tests that the allocation claim fails.""" instance = self.params['build_requests'][0].instance instance.host = 'source-host' reqspec = self.params['request_specs'][0] with test.nested( mock.patch.object(self.conductor, '_set_vm_state_and_notify'), mock.patch.object(scheduler_utils, 'claim_resources_on_destination', side_effect=exc.NoValidHost(reason='I am full')) ) as ( notify, claim ): ex = self.assertRaises( exc.NoValidHost, self.conductor._allocate_for_evacuate_dest_host, self.ctxt, instance, 'dest-host', reqspec) get_source_node.assert_called_once_with( self.ctxt, instance.host, instance.node) get_dest_node.assert_called_once_with( self.ctxt, 'dest-host', use_slave=True) claim.assert_called_once_with( self.ctxt, self.conductor.scheduler_client.reportclient, instance, get_source_node.return_value, get_dest_node.return_value) notify.assert_called_once_with( self.ctxt, instance.uuid, 'rebuild_server', {'vm_state': instance.vm_state, 'task_state': None}, ex, reqspec) @mock.patch('nova.conductor.tasks.live_migrate.LiveMigrationTask.execute') def test_live_migrate_instance(self, mock_execute): """Tests that asynchronous live migration targets the cell that the instance lives in. """ instance = self.params['build_requests'][0].instance scheduler_hint = {'host': None} reqspec = self.params['request_specs'][0] # setUp created the instance mapping but didn't target it to a cell, # to mock out the API doing that, but let's just update it to point # at cell1. im = objects.InstanceMapping.get_by_instance_uuid( self.ctxt, instance.uuid) im.cell_mapping = self.cell_mappings[test.CELL1_NAME] im.save() # Make sure the InstanceActionEvent is created in the cell. original_event_start = objects.InstanceActionEvent.event_start def fake_event_start(_cls, ctxt, *args, **kwargs): # Make sure the context is targeted to the cell that the instance # was created in. 
self.assertIsNotNone(ctxt.db_connection, 'Context is not targeted') original_event_start(ctxt, *args, **kwargs) self.stub_out( 'nova.objects.InstanceActionEvent.event_start', fake_event_start) self.conductor.live_migrate_instance( self.ctxt, instance, scheduler_hint, block_migration=None, disk_over_commit=None, request_spec=reqspec) mock_execute.assert_called_once_with() class ConductorTaskRPCAPITestCase(_BaseTaskTestCase, test_compute.BaseTestCase): """Conductor compute_task RPC namespace Tests.""" def setUp(self): super(ConductorTaskRPCAPITestCase, self).setUp() self.conductor_service = self.start_service( 'conductor', manager='nova.conductor.manager.ConductorManager') self.conductor = conductor_rpcapi.ComputeTaskAPI() service_manager = self.conductor_service.manager self.conductor_manager = service_manager.compute_task_mgr def test_live_migrate_instance(self): inst = fake_instance.fake_db_instance() inst_obj = objects.Instance._from_db_object( self.context, objects.Instance(), inst, []) version = '1.15' scheduler_hint = {'host': 'destination'} cctxt_mock = mock.MagicMock() @mock.patch.object(self.conductor.client, 'prepare', return_value=cctxt_mock) def _test(prepare_mock): self.conductor.live_migrate_instance( self.context, inst_obj, scheduler_hint, 'block_migration', 'disk_over_commit', request_spec=None) prepare_mock.assert_called_once_with(version=version) kw = {'instance': inst_obj, 'scheduler_hint': scheduler_hint, 'block_migration': 'block_migration', 'disk_over_commit': 'disk_over_commit', 'request_spec': None, } cctxt_mock.cast.assert_called_once_with( self.context, 'live_migrate_instance', **kw) _test() @mock.patch.object(objects.InstanceMapping, 'get_by_instance_uuid') def test_targets_cell_no_instance_mapping(self, mock_im): @conductor_manager.targets_cell def test(self, context, instance): return mock.sentinel.iransofaraway mock_im.side_effect = exc.InstanceMappingNotFound(uuid='something') ctxt = mock.MagicMock() inst = mock.MagicMock() self.assertEqual(mock.sentinel.iransofaraway, test(None, ctxt, inst)) mock_im.assert_called_once_with(ctxt, inst.uuid) def test_schedule_and_build_instances_with_tags(self): build_request = fake_build_request.fake_req_obj(self.context) request_spec = objects.RequestSpec( instance_uuid=build_request.instance_uuid) image = {'fake_data': 'should_pass_silently'} admin_password = 'fake_password' injected_file = 'fake' requested_network = None block_device_mapping = None tags = ['fake_tag'] version = '1.17' cctxt_mock = mock.MagicMock() @mock.patch.object(self.conductor.client, 'can_send_version', return_value=True) @mock.patch.object(self.conductor.client, 'prepare', return_value=cctxt_mock) def _test(prepare_mock, can_send_mock): self.conductor.schedule_and_build_instances( self.context, build_request, request_spec, image, admin_password, injected_file, requested_network, block_device_mapping, tags=tags) prepare_mock.assert_called_once_with(version=version) kw = {'build_requests': build_request, 'request_specs': request_spec, 'image': jsonutils.to_primitive(image), 'admin_password': admin_password, 'injected_files': injected_file, 'requested_networks': requested_network, 'block_device_mapping': block_device_mapping, 'tags': tags} cctxt_mock.cast.assert_called_once_with( self.context, 'schedule_and_build_instances', **kw) _test() def test_schedule_and_build_instances_with_tags_cannot_send(self): build_request = fake_build_request.fake_req_obj(self.context) request_spec = objects.RequestSpec( instance_uuid=build_request.instance_uuid) image = 
{'fake_data': 'should_pass_silently'} admin_password = 'fake_password' injected_file = 'fake' requested_network = None block_device_mapping = None tags = ['fake_tag'] cctxt_mock = mock.MagicMock() @mock.patch.object(self.conductor.client, 'can_send_version', return_value=False) @mock.patch.object(self.conductor.client, 'prepare', return_value=cctxt_mock) def _test(prepare_mock, can_send_mock): self.conductor.schedule_and_build_instances( self.context, build_request, request_spec, image, admin_password, injected_file, requested_network, block_device_mapping, tags=tags) prepare_mock.assert_called_once_with(version='1.16') kw = {'build_requests': build_request, 'request_specs': request_spec, 'image': jsonutils.to_primitive(image), 'admin_password': admin_password, 'injected_files': injected_file, 'requested_networks': requested_network, 'block_device_mapping': block_device_mapping} cctxt_mock.cast.assert_called_once_with( self.context, 'schedule_and_build_instances', **kw) _test() def test_build_instances_with_request_spec_ok(self): """Tests passing a request_spec to the build_instances RPC API method and having it passed through to the conductor task manager. """ image = {} cctxt_mock = mock.MagicMock() @mock.patch.object(self.conductor.client, 'can_send_version', side_effect=(False, True, True, True, True)) @mock.patch.object(self.conductor.client, 'prepare', return_value=cctxt_mock) def _test(prepare_mock, can_send_mock): self.conductor.build_instances( self.context, mock.sentinel.instances, image, mock.sentinel.filter_properties, mock.sentinel.admin_password, mock.sentinel.injected_files, mock.sentinel.requested_networks, mock.sentinel.security_groups, mock.sentinel.block_device_mapping, request_spec=mock.sentinel.request_spec) kw = {'instances': mock.sentinel.instances, 'image': image, 'filter_properties': mock.sentinel.filter_properties, 'admin_password': mock.sentinel.admin_password, 'injected_files': mock.sentinel.injected_files, 'requested_networks': mock.sentinel.requested_networks, 'security_groups': mock.sentinel.security_groups, 'request_spec': mock.sentinel.request_spec} cctxt_mock.cast.assert_called_once_with( self.context, 'build_instances', **kw) _test() def test_build_instances_with_request_spec_cannot_send(self): """Tests passing a request_spec to the build_instances RPC API method but not having it passed through to the conductor task manager because the version is too old to handle it. 
""" image = {} cctxt_mock = mock.MagicMock() @mock.patch.object(self.conductor.client, 'can_send_version', side_effect=(False, False, True, True, True)) @mock.patch.object(self.conductor.client, 'prepare', return_value=cctxt_mock) def _test(prepare_mock, can_send_mock): self.conductor.build_instances( self.context, mock.sentinel.instances, image, mock.sentinel.filter_properties, mock.sentinel.admin_password, mock.sentinel.injected_files, mock.sentinel.requested_networks, mock.sentinel.security_groups, mock.sentinel.block_device_mapping, request_spec=mock.sentinel.request_spec) kw = {'instances': mock.sentinel.instances, 'image': image, 'filter_properties': mock.sentinel.filter_properties, 'admin_password': mock.sentinel.admin_password, 'injected_files': mock.sentinel.injected_files, 'requested_networks': mock.sentinel.requested_networks, 'security_groups': mock.sentinel.security_groups} cctxt_mock.cast.assert_called_once_with( self.context, 'build_instances', **kw) _test() class ConductorTaskAPITestCase(_BaseTaskTestCase, test_compute.BaseTestCase): """Compute task API Tests.""" def setUp(self): super(ConductorTaskAPITestCase, self).setUp() self.conductor_service = self.start_service( 'conductor', manager='nova.conductor.manager.ConductorManager') self.conductor = conductor_api.ComputeTaskAPI() service_manager = self.conductor_service.manager self.conductor_manager = service_manager.compute_task_mgr def test_live_migrate(self): inst = fake_instance.fake_db_instance() inst_obj = objects.Instance._from_db_object( self.context, objects.Instance(), inst, []) with mock.patch.object(self.conductor.conductor_compute_rpcapi, 'migrate_server') as mock_migrate_server: self.conductor.live_migrate_instance(self.context, inst_obj, 'destination', 'block_migration', 'disk_over_commit') mock_migrate_server.assert_called_once_with( self.context, inst_obj, {'host': 'destination'}, True, False, None, 'block_migration', 'disk_over_commit', None, request_spec=None) nova-17.0.1/nova/tests/unit/conductor/__init__.py0000666000175000017500000000000013250073126021751 0ustar zuulzuul00000000000000nova-17.0.1/nova/tests/unit/virt/0000775000175000017500000000000013250073472016640 5ustar zuulzuul00000000000000nova-17.0.1/nova/tests/unit/virt/xenapi/0000775000175000017500000000000013250073472020124 5ustar zuulzuul00000000000000nova-17.0.1/nova/tests/unit/virt/xenapi/test_driver.py0000666000175000017500000003775713250073126023051 0ustar zuulzuul00000000000000# Copyright (c) 2013 Rackspace Hosting # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import math import mock from oslo_utils import units from nova import exception from nova.objects import fields as obj_fields from nova.tests.unit.virt.xenapi import stubs from nova.tests import uuidsentinel as uuids from nova.virt import driver from nova.virt import fake from nova.virt import xenapi from nova.virt.xenapi import driver as xenapi_driver from nova.virt.xenapi import host class XenAPIDriverTestCase(stubs.XenAPITestBaseNoDB): """Unit tests for Driver operations.""" def _get_driver(self): stubs.stubout_session(self.stubs, stubs.FakeSessionForVMTests) self.flags(connection_url='http://localhost', connection_password='test_pass', group='xenserver') return xenapi.XenAPIDriver(fake.FakeVirtAPI(), False) def host_stats(self, refresh=True): return {'host_memory_total': 3 * units.Mi, 'host_memory_free_computed': 2 * units.Mi, 'disk_total': 5 * units.Gi, 'disk_used': 2 * units.Gi, 'disk_allocated': 4 * units.Gi, 'host_hostname': 'somename', 'supported_instances': obj_fields.Architecture.X86_64, 'host_cpu_info': {'cpu_count': 50}, 'cpu_model': { 'vendor': 'GenuineIntel', 'model': 'Intel(R) Xeon(R) CPU X3430 @ 2.40GHz', 'topology': { 'sockets': 1, 'cores': 4, 'threads': 1, }, 'features': [ 'fpu', 'de', 'tsc', 'msr', 'pae', 'mce', 'cx8', 'apic', 'sep', 'mtrr', 'mca', 'cmov', 'pat', 'clflush', 'acpi', 'mmx', 'fxsr', 'sse', 'sse2', 'ss', 'ht', 'nx', 'constant_tsc', 'nonstop_tsc', 'aperfmperf', 'pni', 'vmx', 'est', 'ssse3', 'sse4_1', 'sse4_2', 'popcnt', 'hypervisor', 'ida', 'tpr_shadow', 'vnmi', 'flexpriority', 'ept', 'vpid', ], }, 'vcpus_used': 10, 'pci_passthrough_devices': '', 'host_other-config': {'iscsi_iqn': 'someiqn'}, 'vgpu_stats': { 'c8328467-badf-43d8-8e28-0e096b0f88b1': {'uuid': '6444c6ee-3a49-42f5-bebb-606b52175e67', 'type_name': 'Intel GVT-g', 'max_heads': 1, 'total': 7, 'remaining': 7, }, }} def test_available_resource(self): driver = self._get_driver() driver._session.product_version = (6, 8, 2) with mock.patch.object(driver.host_state, 'get_host_stats', side_effect=self.host_stats) as mock_get: resources = driver.get_available_resource(None) self.assertEqual(6008002, resources['hypervisor_version']) self.assertEqual(50, resources['vcpus']) self.assertEqual(3, resources['memory_mb']) self.assertEqual(5, resources['local_gb']) self.assertEqual(10, resources['vcpus_used']) self.assertEqual(3 - 2, resources['memory_mb_used']) self.assertEqual(2, resources['local_gb_used']) self.assertEqual('XenServer', resources['hypervisor_type']) self.assertEqual('somename', resources['hypervisor_hostname']) self.assertEqual(1, resources['disk_available_least']) mock_get.assert_called_once_with(refresh=True) def test_overhead(self): driver = self._get_driver() instance = {'memory_mb': 30720, 'vcpus': 4} # expected memory overhead per: # https://wiki.openstack.org/wiki/XenServer/Overhead expected = ((instance['memory_mb'] * xenapi_driver.OVERHEAD_PER_MB) + (instance['vcpus'] * xenapi_driver.OVERHEAD_PER_VCPU) + xenapi_driver.OVERHEAD_BASE) expected = math.ceil(expected) overhead = driver.estimate_instance_overhead(instance) self.assertEqual(expected, overhead['memory_mb']) def test_set_bootable(self): driver = self._get_driver() with mock.patch.object(driver._vmops, 'set_bootable') as mock_set_bootable: driver.set_bootable('inst', True) mock_set_bootable.assert_called_once_with('inst', True) def test_post_interrupted_snapshot_cleanup(self): driver = self._get_driver() fake_vmops_cleanup = mock.Mock() driver._vmops.post_interrupted_snapshot_cleanup = fake_vmops_cleanup 
driver.post_interrupted_snapshot_cleanup("context", "instance") fake_vmops_cleanup.assert_called_once_with("context", "instance") def test_public_api_signatures(self): inst = self._get_driver() self.assertPublicAPISignatures(driver.ComputeDriver(None), inst) def test_get_volume_connector(self): ip = '123.123.123.123' driver = self._get_driver() self.flags(connection_url='http://%s' % ip, connection_password='test_pass', group='xenserver') with mock.patch.object(driver.host_state, 'get_host_stats', side_effect=self.host_stats) as mock_get: connector = driver.get_volume_connector({'uuid': 'fake'}) self.assertIn('ip', connector) self.assertEqual(connector['ip'], ip) self.assertIn('initiator', connector) self.assertEqual(connector['initiator'], 'someiqn') mock_get.assert_called_once_with(refresh=True) def test_get_block_storage_ip(self): my_ip = '123.123.123.123' connection_ip = '124.124.124.124' driver = self._get_driver() self.flags(connection_url='http://%s' % connection_ip, group='xenserver') self.flags(my_ip=my_ip, my_block_storage_ip=my_ip) ip = driver._get_block_storage_ip() self.assertEqual(connection_ip, ip) def test_get_block_storage_ip_conf(self): driver = self._get_driver() my_ip = '123.123.123.123' my_block_storage_ip = '124.124.124.124' self.flags(my_ip=my_ip, my_block_storage_ip=my_block_storage_ip) ip = driver._get_block_storage_ip() self.assertEqual(my_block_storage_ip, ip) @mock.patch.object(xenapi_driver, 'invalid_option') @mock.patch.object(xenapi_driver.vm_utils, 'ensure_correct_host') def test_invalid_options(self, mock_ensure, mock_invalid): driver = self._get_driver() self.flags(independent_compute=True, group='xenserver') self.flags(check_host=True, group='xenserver') self.flags(flat_injected=True) self.flags(default_ephemeral_format='vfat') driver.init_host('host') expected_calls = [ mock.call('CONF.xenserver.check_host', False), mock.call('CONF.flat_injected', False), mock.call('CONF.default_ephemeral_format', 'ext3')] mock_invalid.assert_has_calls(expected_calls) @mock.patch.object(xenapi_driver.vm_utils, 'cleanup_attached_vdis') @mock.patch.object(xenapi_driver.vm_utils, 'ensure_correct_host') def test_independent_compute_no_vdi_cleanup(self, mock_ensure, mock_cleanup): driver = self._get_driver() self.flags(independent_compute=True, group='xenserver') self.flags(check_host=False, group='xenserver') self.flags(flat_injected=False) driver.init_host('host') self.assertFalse(mock_cleanup.called) self.assertFalse(mock_ensure.called) @mock.patch.object(xenapi_driver.vm_utils, 'cleanup_attached_vdis') @mock.patch.object(xenapi_driver.vm_utils, 'ensure_correct_host') def test_dependent_compute_vdi_cleanup(self, mock_ensure, mock_cleanup): driver = self._get_driver() self.assertFalse(mock_cleanup.called) self.flags(independent_compute=False, group='xenserver') self.flags(check_host=True, group='xenserver') driver.init_host('host') self.assertTrue(mock_cleanup.called) self.assertTrue(mock_ensure.called) @mock.patch.object(xenapi_driver.vmops.VMOps, 'attach_interface') def test_attach_interface(self, mock_attach_interface): driver = self._get_driver() driver.attach_interface('fake_context', 'fake_instance', 'fake_image_meta', 'fake_vif') mock_attach_interface.assert_called_once_with('fake_instance', 'fake_vif') @mock.patch.object(xenapi_driver.vmops.VMOps, 'detach_interface') def test_detach_interface(self, mock_detach_interface): driver = self._get_driver() driver.detach_interface('fake_context', 'fake_instance', 'fake_vif') 
mock_detach_interface.assert_called_once_with('fake_instance', 'fake_vif') @mock.patch.object(xenapi_driver.vmops.VMOps, 'post_live_migration_at_source') def test_post_live_migration_at_source(self, mock_post_live_migration): driver = self._get_driver() driver.post_live_migration_at_source('fake_context', 'fake_instance', 'fake_network_info') mock_post_live_migration.assert_called_once_with( 'fake_context', 'fake_instance', 'fake_network_info') @mock.patch.object(xenapi_driver.vmops.VMOps, 'rollback_live_migration_at_destination') def test_rollback_live_migration_at_destination(self, mock_rollback): driver = self._get_driver() driver.rollback_live_migration_at_destination( 'fake_context', 'fake_instance', 'fake_network_info', 'fake_block_device') mock_rollback.assert_called_once_with('fake_instance', 'fake_network_info', 'fake_block_device') @mock.patch.object(host.HostState, 'get_host_stats') def test_get_inventory(self, mock_get_stats): expected_inv = { obj_fields.ResourceClass.VCPU: { 'total': 50, 'min_unit': 1, 'max_unit': 50, 'step_size': 1, }, obj_fields.ResourceClass.MEMORY_MB: { 'total': 3, 'min_unit': 1, 'max_unit': 3, 'step_size': 1, }, obj_fields.ResourceClass.DISK_GB: { 'total': 5, 'min_unit': 1, 'max_unit': 5, 'step_size': 1, }, obj_fields.ResourceClass.VGPU: { 'total': 7, 'min_unit': 1, 'max_unit': 1, 'step_size': 1, }, } mock_get_stats.side_effect = self.host_stats drv = self._get_driver() inv = drv.get_inventory(mock.sentinel.nodename) mock_get_stats.assert_called_once_with(refresh=True) self.assertEqual(expected_inv, inv) @mock.patch.object(host.HostState, 'get_host_stats') def test_get_inventory_no_vgpu(self, mock_get_stats): # Test when there are no vGPU resources in the inventory. host_stats = self.host_stats() host_stats.update(vgpu_stats={}) mock_get_stats.return_value = host_stats drv = self._get_driver() inv = drv.get_inventory(mock.sentinel.nodename) # check if the inventory data does NOT contain VGPU. self.assertNotIn(obj_fields.ResourceClass.VGPU, inv) def test_get_vgpu_total_single_grp(self): # Test when only one group included in the host_stats. vgpu_stats = { 'grp_uuid_1': { 'total': 7 } } drv = self._get_driver() vgpu_total = drv._get_vgpu_total(vgpu_stats) self.assertEqual(7, vgpu_total) def test_get_vgpu_total_multiple_grps(self): # Test when multiple groups included in the host_stats. vgpu_stats = { 'grp_uuid_1': { 'total': 7 }, 'grp_uuid_2': { 'total': 4 } } drv = self._get_driver() vgpu_total = drv._get_vgpu_total(vgpu_stats) self.assertEqual(11, vgpu_total) def test_get_vgpu_info_no_vgpu_alloc(self): # no vgpu in allocation. alloc = { 'rp1': { 'resources': { 'VCPU': 1, 'MEMORY_MB': 512, 'DISK_GB': 1, } } } drv = self._get_driver() vgpu_info = drv._get_vgpu_info(alloc) self.assertIsNone(vgpu_info) @mock.patch.object(host.HostState, 'get_host_stats') def test_get_vgpu_info_has_vgpu_alloc(self, mock_get_stats): # Have vgpu in allocation. alloc = { 'rp1': { 'resources': { 'VCPU': 1, 'MEMORY_MB': 512, 'DISK_GB': 1, 'VGPU': 1, } } } # The following fake data assumes there are two GPU # groups both of which supply the same type of vGPUs. # If the 1st GPU group has no remaining available vGPUs; # the 2nd GPU group still has remaining available vGPUs. # it should return the uuid from the 2nd GPU group. 
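        # Each key in the vgpu_stats dicts used below is a GPU *group* uuid;
        # the nested dict mirrors what HostState reports per group (its
        # 'uuid' field is the vGPU *type* uuid, not the group's), so
        # _get_vgpu_info can pick any group whose 'remaining' count is still
        # non-zero.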
vgpu_stats = { uuids.gpu_group_1: { 'uuid': uuids.vgpu_type, 'type_name': 'GRID K180Q', 'max_heads': 4, 'total': 2, 'remaining': 0, }, uuids.gpu_group_2: { 'uuid': uuids.vgpu_type, 'type_name': 'GRID K180Q', 'max_heads': 4, 'total': 2, 'remaining': 2, }, } host_stats = self.host_stats() host_stats.update(vgpu_stats=vgpu_stats) mock_get_stats.return_value = host_stats drv = self._get_driver() vgpu_info = drv._get_vgpu_info(alloc) expected_info = {'gpu_grp_uuid': uuids.gpu_group_2, 'vgpu_type_uuid': uuids.vgpu_type} self.assertEqual(expected_info, vgpu_info) @mock.patch.object(host.HostState, 'get_host_stats') def test_get_vgpu_info_has_vgpu_alloc_except(self, mock_get_stats): # Allocated vGPU but got exception due to no remaining vGPU. alloc = { 'rp1': { 'resources': { 'VCPU': 1, 'MEMORY_MB': 512, 'DISK_GB': 1, 'VGPU': 1, } } } vgpu_stats = { uuids.gpu_group: { 'uuid': uuids.vgpu_type, 'type_name': 'Intel GVT-g', 'max_heads': 1, 'total': 7, 'remaining': 0, }, } host_stats = self.host_stats() host_stats.update(vgpu_stats=vgpu_stats) mock_get_stats.return_value = host_stats drv = self._get_driver() self.assertRaises(exception.ComputeResourcesUnavailable, drv._get_vgpu_info, alloc) nova-17.0.1/nova/tests/unit/virt/xenapi/test_vmops.py0000666000175000017500000030254013250073126022703 0ustar zuulzuul00000000000000# Copyright 2013 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
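# These are unit tests for nova.virt.xenapi.vmops.VMOps: the XenAPI session
# is faked via nova.virt.xenapi.fake, so VM lifecycle paths (spawn, migrate,
# console output, shutdown) are exercised without a hypervisor.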
import base64 import uuid import zlib try: import xmlrpclib except ImportError: import six.moves.xmlrpc_client as xmlrpclib from eventlet import greenthread import mock from os_xenapi.client import host_xenstore import six from nova.compute import power_state from nova.compute import task_states from nova import context from nova import exception from nova import objects from nova.objects import fields from nova.pci import manager as pci_manager from nova import test from nova.tests.unit import fake_flavor from nova.tests.unit import fake_instance from nova.tests.unit.virt.xenapi import stubs from nova.tests import uuidsentinel as uuids from nova import utils from nova.virt import fake from nova.virt.xenapi import agent as xenapi_agent from nova.virt.xenapi import fake as xenapi_fake from nova.virt.xenapi import vm_utils from nova.virt.xenapi import vmops from nova.virt.xenapi import volume_utils from nova.virt.xenapi import volumeops class VMOpsTestBase(stubs.XenAPITestBaseNoDB): def setUp(self): super(VMOpsTestBase, self).setUp() self._setup_mock_vmops() self.vms = [] def _setup_mock_vmops(self, product_brand=None, product_version=None): stubs.stubout_session(self.stubs, xenapi_fake.SessionBase) self._session = xenapi_fake.SessionBase( 'http://localhost', 'root', 'test_pass') self.vmops = vmops.VMOps(self._session, fake.FakeVirtAPI()) def create_vm(self, name, state="Running"): vm_ref = xenapi_fake.create_vm(name, state) self.vms.append(vm_ref) vm = xenapi_fake.get_record("VM", vm_ref) return vm, vm_ref def tearDown(self): super(VMOpsTestBase, self).tearDown() for vm in self.vms: xenapi_fake.destroy_vm(vm) class VMOpsTestCase(VMOpsTestBase): def setUp(self): super(VMOpsTestCase, self).setUp() self._setup_mock_vmops() self.context = context.RequestContext('user', 'project') self.instance = fake_instance.fake_instance_obj(self.context) def _setup_mock_vmops(self, product_brand=None, product_version=None): self._session = self._get_mock_session(product_brand, product_version) self._vmops = vmops.VMOps(self._session, fake.FakeVirtAPI()) def _get_mock_session(self, product_brand, product_version): class Mock(object): pass mock_session = Mock() mock_session.product_brand = product_brand mock_session.product_version = product_version return mock_session def _test_finish_revert_migration_after_crash(self, backup_made, new_made, vm_shutdown=True): instance = {'name': 'foo', 'task_state': task_states.RESIZE_MIGRATING} context = 'fake_context' lookup_returns = [backup_made and 'foo' or None, (not backup_made or new_made) and 'foo' or None] @mock.patch.object(vm_utils, 'lookup', side_effect=lookup_returns) @mock.patch.object(self._vmops, '_destroy') @mock.patch.object(vm_utils, 'set_vm_name_label') @mock.patch.object(self._vmops, '_attach_mapped_block_devices') @mock.patch.object(self._vmops, '_start') @mock.patch.object(vm_utils, 'is_vm_shutdown', return_value=vm_shutdown) def test(mock_is_vm, mock_start, mock_attach_bdm, mock_set_vm_name, mock_destroy, mock_lookup): self._vmops.finish_revert_migration(context, instance, []) mock_lookup.assert_has_calls([mock.call(self._session, 'foo-orig'), mock.call(self._session, 'foo')]) if backup_made: if new_made: mock_destroy.assert_called_once_with(instance, 'foo') mock_set_vm_name.assert_called_once_with(self._session, 'foo', 'foo') mock_attach_bdm.assert_called_once_with(instance, []) mock_is_vm.assert_called_once_with(self._session, 'foo') if vm_shutdown: mock_start.assert_called_once_with(instance, 'foo') test() def 
test_finish_revert_migration_after_crash(self): self._test_finish_revert_migration_after_crash(True, True) def test_finish_revert_migration_after_crash_before_new(self): self._test_finish_revert_migration_after_crash(True, False) def test_finish_revert_migration_after_crash_before_backup(self): self._test_finish_revert_migration_after_crash(False, False) @mock.patch.object(vm_utils, 'lookup', return_value=None) def test_get_vm_opaque_ref_raises_instance_not_found(self, mock_lookup): instance = {"name": "dummy"} self.assertRaises(exception.InstanceNotFound, self._vmops._get_vm_opaque_ref, instance) mock_lookup.assert_called_once_with(self._session, instance['name'], False) @mock.patch.object(vm_utils, 'destroy_vm') @mock.patch.object(vm_utils, 'clean_shutdown_vm') @mock.patch.object(vm_utils, 'hard_shutdown_vm') def test_clean_shutdown_no_bdm_on_destroy(self, hard_shutdown_vm, clean_shutdown_vm, destroy_vm): vm_ref = 'vm_ref' self._vmops._destroy(self.instance, vm_ref, destroy_disks=False) hard_shutdown_vm.assert_called_once_with(self._vmops._session, self.instance, vm_ref) self.assertEqual(0, clean_shutdown_vm.call_count) @mock.patch.object(vm_utils, 'destroy_vm') @mock.patch.object(vm_utils, 'clean_shutdown_vm') @mock.patch.object(vm_utils, 'hard_shutdown_vm') def test_clean_shutdown_with_bdm_on_destroy(self, hard_shutdown_vm, clean_shutdown_vm, destroy_vm): vm_ref = 'vm_ref' block_device_info = {'block_device_mapping': ['fake']} self._vmops._destroy(self.instance, vm_ref, destroy_disks=False, block_device_info=block_device_info) clean_shutdown_vm.assert_called_once_with(self._vmops._session, self.instance, vm_ref) self.assertEqual(0, hard_shutdown_vm.call_count) @mock.patch.object(vm_utils, 'destroy_vm') @mock.patch.object(vm_utils, 'clean_shutdown_vm', return_value=False) @mock.patch.object(vm_utils, 'hard_shutdown_vm') def test_clean_shutdown_with_bdm_failed_on_destroy(self, hard_shutdown_vm, clean_shutdown_vm, destroy_vm): vm_ref = 'vm_ref' block_device_info = {'block_device_mapping': ['fake']} self._vmops._destroy(self.instance, vm_ref, destroy_disks=False, block_device_info=block_device_info) clean_shutdown_vm.assert_called_once_with(self._vmops._session, self.instance, vm_ref) hard_shutdown_vm.assert_called_once_with(self._vmops._session, self.instance, vm_ref) @mock.patch.object(vm_utils, 'try_auto_configure_disk') @mock.patch.object(vm_utils, 'create_vbd', side_effect=test.TestingException) def test_attach_disks_rescue_auto_disk_config_false(self, create_vbd, try_auto_config): ctxt = context.RequestContext('user', 'project') instance = fake_instance.fake_instance_obj(ctxt) image_meta = objects.ImageMeta.from_dict( {'properties': {'auto_disk_config': 'false'}}) vdis = {'root': {'ref': 'fake-ref'}} self.assertRaises(test.TestingException, self._vmops._attach_disks, ctxt, instance, image_meta=image_meta, vm_ref=None, name_label=None, vdis=vdis, disk_image_type='fake', network_info=[], rescue=True) self.assertFalse(try_auto_config.called) @mock.patch.object(vm_utils, 'try_auto_configure_disk') @mock.patch.object(vm_utils, 'create_vbd', side_effect=test.TestingException) def test_attach_disks_rescue_auto_disk_config_true(self, create_vbd, try_auto_config): ctxt = context.RequestContext('user', 'project') instance = fake_instance.fake_instance_obj(ctxt) image_meta = objects.ImageMeta.from_dict( {'properties': {'auto_disk_config': 'true'}}) vdis = {'root': {'ref': 'fake-ref'}} self.assertRaises(test.TestingException, self._vmops._attach_disks, ctxt, instance, image_meta=image_meta, 
vm_ref=None, name_label=None, vdis=vdis, disk_image_type='fake', network_info=[], rescue=True) try_auto_config.assert_called_once_with(self._vmops._session, 'fake-ref', instance.flavor.root_gb) class InjectAutoDiskConfigTestCase(VMOpsTestBase): def test_inject_auto_disk_config_when_present(self): vm, vm_ref = self.create_vm("dummy") instance = {"name": "dummy", "uuid": "1234", "auto_disk_config": True} self.vmops._inject_auto_disk_config(instance, vm_ref) xenstore_data = vm['xenstore_data'] self.assertEqual(xenstore_data['vm-data/auto-disk-config'], 'True') def test_inject_auto_disk_config_none_as_false(self): vm, vm_ref = self.create_vm("dummy") instance = {"name": "dummy", "uuid": "1234", "auto_disk_config": None} self.vmops._inject_auto_disk_config(instance, vm_ref) xenstore_data = vm['xenstore_data'] self.assertEqual(xenstore_data['vm-data/auto-disk-config'], 'False') class GetConsoleOutputTestCase(VMOpsTestBase): def _mock_console_log(self, session, domid): if domid == 0: raise session.XenAPI.Failure('No console') return base64.b64encode(zlib.compress(six.b('dom_id: %s' % domid))) @mock.patch.object(vmops.vm_management, 'get_console_log') def test_get_console_output_works(self, mock_console_log): ctxt = context.RequestContext('user', 'project') mock_console_log.side_effect = self._mock_console_log instance = fake_instance.fake_instance_obj(ctxt) with mock.patch.object(self.vmops, '_get_last_dom_id', return_value=42) as mock_last_dom: self.assertEqual(b"dom_id: 42", self.vmops.get_console_output(instance)) mock_last_dom.assert_called_once_with(instance, check_rescue=True) @mock.patch.object(vmops.vm_management, 'get_console_log') def test_get_console_output_not_available(self, mock_console_log): mock_console_log.side_effect = self._mock_console_log ctxt = context.RequestContext('user', 'project') instance = fake_instance.fake_instance_obj(ctxt) # dom_id=0 used to trigger exception in fake XenAPI with mock.patch.object(self.vmops, '_get_last_dom_id', return_value=0) as mock_last_dom: self.assertRaises(exception.ConsoleNotAvailable, self.vmops.get_console_output, instance) mock_last_dom.assert_called_once_with(instance, check_rescue=True) def test_get_dom_id_works(self): instance = {"name": "dummy"} vm, vm_ref = self.create_vm("dummy") self.assertEqual(vm["domid"], self.vmops._get_dom_id(instance)) def test_get_dom_id_works_with_rescue_vm(self): instance = {"name": "dummy"} vm, vm_ref = self.create_vm("dummy-rescue") self.assertEqual(vm["domid"], self.vmops._get_dom_id(instance, check_rescue=True)) def test_get_dom_id_raises_not_found(self): instance = {"name": "dummy"} self.create_vm("not-dummy") self.assertRaises(exception.InstanceNotFound, self.vmops._get_dom_id, instance) def test_get_dom_id_works_with_vmref(self): vm, vm_ref = self.create_vm("dummy") instance = {'name': 'dummy'} self.assertEqual(vm["domid"], self.vmops._get_dom_id(instance, vm_ref=vm_ref)) def test_get_dom_id_fails_if_shutdown(self): vm, vm_ref = self.create_vm("dummy") instance = {'name': 'dummy'} self._session.VM.hard_shutdown(vm_ref) self.assertRaises(exception.InstanceNotFound, self.vmops._get_dom_id, instance, vm_ref=vm_ref) class SpawnTestCase(VMOpsTestBase): def _stub_out_common(self): self.mox.StubOutWithMock(self.vmops, '_ensure_instance_name_unique') self.mox.StubOutWithMock(self.vmops, '_ensure_enough_free_mem') self.mox.StubOutWithMock(self.vmops, '_update_instance_progress') self.mox.StubOutWithMock(vm_utils, 'determine_disk_image_type') self.mox.StubOutWithMock(self.vmops, '_get_vdis_for_instance') 
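        # Every collaborator that _test_spawn later scripts an expectation
        # for must be stubbed out in this list first; a StubOutWithMock call
        # missing from here would let the real method run during replay
        # instead of matching a recorded expectation.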
self.mox.StubOutWithMock(vm_utils, 'safe_destroy_vdis') self.mox.StubOutWithMock(self.vmops._volumeops, 'safe_cleanup_from_vdis') self.mox.StubOutWithMock(self.vmops, '_resize_up_vdis') self.mox.StubOutWithMock(vm_utils, 'create_kernel_and_ramdisk') self.mox.StubOutWithMock(vm_utils, 'destroy_kernel_ramdisk') self.mox.StubOutWithMock(self.vmops, '_create_vm_record') self.mox.StubOutWithMock(self.vmops, '_destroy') self.mox.StubOutWithMock(self.vmops, '_attach_disks') self.mox.StubOutWithMock(self.vmops, '_save_device_metadata') self.mox.StubOutWithMock(self.vmops, '_prepare_disk_metadata') self.mox.StubOutWithMock(pci_manager, 'get_instance_pci_devs') self.mox.StubOutWithMock(vm_utils, 'set_other_config_pci') self.mox.StubOutWithMock(self.vmops, '_attach_orig_disks') self.mox.StubOutWithMock(self.vmops, 'inject_network_info') self.mox.StubOutWithMock(self.vmops, '_inject_hostname') self.mox.StubOutWithMock(self.vmops, '_inject_instance_metadata') self.mox.StubOutWithMock(self.vmops, '_inject_auto_disk_config') self.mox.StubOutWithMock(self.vmops, '_file_inject_vm_settings') self.mox.StubOutWithMock(self.vmops, '_create_vifs') self.mox.StubOutWithMock(self.vmops.firewall_driver, 'setup_basic_filtering') self.mox.StubOutWithMock(self.vmops.firewall_driver, 'prepare_instance_filter') self.mox.StubOutWithMock(self.vmops, '_start') self.mox.StubOutWithMock(self.vmops, '_wait_for_instance_to_start') self.mox.StubOutWithMock(self.vmops, '_configure_new_instance_with_agent') self.mox.StubOutWithMock(self.vmops, '_remove_hostname') self.mox.StubOutWithMock(self.vmops.firewall_driver, 'apply_instance_filter') self.mox.StubOutWithMock(self.vmops, '_update_last_dom_id') self.mox.StubOutWithMock(self.vmops._session, 'call_xenapi') self.mox.StubOutWithMock(self.vmops, '_attach_vgpu') @staticmethod def _new_instance(obj): class _Instance(dict): __getattr__ = dict.__getitem__ __setattr__ = dict.__setitem__ return _Instance(**obj) def _test_spawn(self, name_label_param=None, block_device_info_param=None, rescue=False, include_root_vdi=True, throw_exception=None, attach_pci_dev=False, neutron_exception=False, network_info=None, vgpu_info=None): self._stub_out_common() instance = self._new_instance({"name": "dummy", "uuid": "fake_uuid", "device_metadata": None}) name_label = name_label_param if name_label is None: name_label = "dummy" image_meta = objects.ImageMeta.from_dict({"id": uuids.image_id}) context = "context" session = self.vmops._session injected_files = "fake_files" admin_password = "password" if network_info is None: network_info = [] steps = 10 if rescue: steps += 1 block_device_info = block_device_info_param if block_device_info and not block_device_info['root_device_name']: block_device_info = dict(block_device_info_param) block_device_info['root_device_name'] = \ self.vmops.default_root_dev di_type = "di_type" vm_utils.determine_disk_image_type(image_meta).AndReturn(di_type) step = 1 self.vmops._update_instance_progress(context, instance, step, steps) vdis = {"other": {"ref": "fake_ref_2", "osvol": True}} if include_root_vdi: vdis["root"] = {"ref": "fake_ref"} self.vmops._get_vdis_for_instance(context, instance, name_label, image_meta, di_type, block_device_info).AndReturn(vdis) self.vmops._resize_up_vdis(instance, vdis) step += 1 self.vmops._update_instance_progress(context, instance, step, steps) kernel_file = "kernel" ramdisk_file = "ramdisk" vm_utils.create_kernel_and_ramdisk(context, session, instance, name_label).AndReturn((kernel_file, ramdisk_file)) step += 1 
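        # The spawn path reports coarse progress in `steps` equal slices (10,
        # or 11 when a rescue disk is attached); each recorded
        # _update_instance_progress expectation pins both the step counter
        # and the total, so a reordered spawn phase surfaces as a mox
        # verification failure.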
self.vmops._update_instance_progress(context, instance, step, steps) vm_ref = "fake_vm_ref" self.vmops._ensure_instance_name_unique(name_label) self.vmops._ensure_enough_free_mem(instance) self.vmops._create_vm_record(context, instance, name_label, di_type, kernel_file, ramdisk_file, image_meta, rescue).AndReturn(vm_ref) step += 1 self.vmops._update_instance_progress(context, instance, step, steps) self.vmops._save_device_metadata(context, instance, block_device_info) self.vmops._attach_disks(context, instance, image_meta, vm_ref, name_label, vdis, di_type, network_info, rescue, admin_password, injected_files) if attach_pci_dev: fake_dev = { 'created_at': None, 'updated_at': None, 'deleted_at': None, 'deleted': None, 'id': 1, 'compute_node_id': 1, 'address': '00:00.0', 'vendor_id': '1234', 'product_id': 'abcd', 'dev_type': fields.PciDeviceType.STANDARD, 'status': 'available', 'dev_id': 'devid', 'label': 'label', 'instance_uuid': None, 'extra_info': '{}', } pci_manager.get_instance_pci_devs(instance).AndReturn([fake_dev]) vm_utils.set_other_config_pci(self.vmops._session, vm_ref, "0/0000:00:00.0") else: pci_manager.get_instance_pci_devs(instance).AndReturn([]) self.vmops._attach_vgpu(vm_ref, vgpu_info, instance) step += 1 self.vmops._update_instance_progress(context, instance, step, steps) self.vmops._inject_instance_metadata(instance, vm_ref) self.vmops._inject_auto_disk_config(instance, vm_ref) self.vmops._inject_hostname(instance, vm_ref, rescue) self.vmops._file_inject_vm_settings(instance, vm_ref, vdis, network_info) self.vmops.inject_network_info(instance, network_info, vm_ref) step += 1 self.vmops._update_instance_progress(context, instance, step, steps) if neutron_exception: events = [('network-vif-plugged', 1)] self.vmops._get_neutron_events(network_info, True, True, False).AndReturn(events) self.mox.StubOutWithMock(self.vmops, '_neutron_failed_callback') self.mox.StubOutWithMock(self.vmops._virtapi, 'wait_for_instance_event') self.vmops._virtapi.wait_for_instance_event(instance, events, deadline=300, error_callback=self.vmops._neutron_failed_callback).\ AndRaise(exception.VirtualInterfaceCreateException) else: self.vmops._create_vifs(instance, vm_ref, network_info) self.vmops.firewall_driver.setup_basic_filtering(instance, network_info).AndRaise(NotImplementedError) self.vmops.firewall_driver.prepare_instance_filter(instance, network_info) step += 1 self.vmops._update_instance_progress(context, instance, step, steps) if rescue: self.vmops._attach_orig_disks(instance, vm_ref) step += 1 self.vmops._update_instance_progress(context, instance, step, steps) start_pause = True self.vmops._start(instance, vm_ref, start_pause=start_pause) step += 1 self.vmops._update_instance_progress(context, instance, step, steps) self.vmops.firewall_driver.apply_instance_filter(instance, network_info) step += 1 self.vmops._update_instance_progress(context, instance, step, steps) self.vmops._session.call_xenapi('VM.unpause', vm_ref) self.vmops._wait_for_instance_to_start(instance, vm_ref) self.vmops._update_last_dom_id(vm_ref) self.vmops._configure_new_instance_with_agent(instance, vm_ref, injected_files, admin_password) self.vmops._remove_hostname(instance, vm_ref) step += 1 last_call = self.vmops._update_instance_progress(context, instance, step, steps) if throw_exception: last_call.AndRaise(throw_exception) if throw_exception or neutron_exception: self.vmops._destroy(instance, vm_ref, network_info=network_info) vm_utils.destroy_kernel_ramdisk(self.vmops._session, instance, kernel_file, ramdisk_file) 
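            # Rollback is expected to tear down in reverse dependency order:
            # the half-built VM first, then the kernel/ramdisk files, then
            # instance-owned VDIs, and finally volume-backed VDIs, which are
            # handed to volumeops for cleanup rather than destroyed directly.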
vm_utils.safe_destroy_vdis(self.vmops._session, ["fake_ref"]) self.vmops._volumeops.safe_cleanup_from_vdis(["fake_ref_2"]) self.mox.ReplayAll() self.vmops.spawn(context, instance, image_meta, injected_files, admin_password, network_info, block_device_info_param, vgpu_info, name_label_param, rescue) def test_spawn(self): self._test_spawn() def test_spawn_with_alternate_options(self): self._test_spawn(include_root_vdi=False, rescue=True, name_label_param="bob", block_device_info_param={"root_device_name": ""}) def test_spawn_with_pci_available_on_the_host(self): self._test_spawn(attach_pci_dev=True) def test_spawn_with_vgpu(self): vgpu_info = {'grp_uuid': uuids.gpu_group_1, 'vgpu_type_uuid': uuids.vgpu_type_1} self._test_spawn(vgpu_info=vgpu_info) def test_spawn_performs_rollback_and_throws_exception(self): self.assertRaises(test.TestingException, self._test_spawn, throw_exception=test.TestingException()) def test_spawn_with_neutron(self): self.flags(use_neutron=True) self.mox.StubOutWithMock(self.vmops, '_get_neutron_events') events = [('network-vif-plugged', 1)] network_info = [{'id': 1, 'active': True}] self.vmops._get_neutron_events(network_info, True, True, False).AndReturn(events) self.mox.StubOutWithMock(self.vmops, '_neutron_failed_callback') self._test_spawn(network_info=network_info) @staticmethod def _dev_mock(obj): dev = mock.MagicMock(**obj) dev.__contains__.side_effect = ( lambda attr: getattr(dev, attr, None) is not None) return dev @mock.patch.object(objects, 'XenDeviceBus') @mock.patch.object(objects, 'IDEDeviceBus') @mock.patch.object(objects, 'DiskMetadata') def test_prepare_disk_metadata(self, mock_DiskMetadata, mock_IDEDeviceBus, mock_XenDeviceBus): mock_IDEDeviceBus.side_effect = \ lambda **kw: \ self._dev_mock({"address": kw.get("address"), "bus": "ide"}) mock_XenDeviceBus.side_effect = \ lambda **kw: \ self._dev_mock({"address": kw.get("address"), "bus": "xen"}) mock_DiskMetadata.side_effect = \ lambda **kw: self._dev_mock(dict(**kw)) bdm = self._dev_mock({"device_name": "/dev/xvda", "tag": "disk_a"}) disk_metadata = self.vmops._prepare_disk_metadata(bdm) self.assertEqual(disk_metadata[0].tags, ["disk_a"]) self.assertEqual(disk_metadata[0].bus.bus, "ide") self.assertEqual(disk_metadata[0].bus.address, "0:0") self.assertEqual(disk_metadata[1].tags, ["disk_a"]) self.assertEqual(disk_metadata[1].bus.bus, "xen") self.assertEqual(disk_metadata[1].bus.address, "000000") self.assertEqual(disk_metadata[2].tags, ["disk_a"]) self.assertEqual(disk_metadata[2].bus.bus, "xen") self.assertEqual(disk_metadata[2].bus.address, "51712") self.assertEqual(disk_metadata[3].tags, ["disk_a"]) self.assertEqual(disk_metadata[3].bus.bus, "xen") self.assertEqual(disk_metadata[3].bus.address, "768") bdm = self._dev_mock({"device_name": "/dev/xvdc", "tag": "disk_c"}) disk_metadata = self.vmops._prepare_disk_metadata(bdm) self.assertEqual(disk_metadata[0].tags, ["disk_c"]) self.assertEqual(disk_metadata[0].bus.bus, "ide") self.assertEqual(disk_metadata[0].bus.address, "1:0") self.assertEqual(disk_metadata[1].tags, ["disk_c"]) self.assertEqual(disk_metadata[1].bus.bus, "xen") self.assertEqual(disk_metadata[1].bus.address, "000200") self.assertEqual(disk_metadata[2].tags, ["disk_c"]) self.assertEqual(disk_metadata[2].bus.bus, "xen") self.assertEqual(disk_metadata[2].bus.address, "51744") self.assertEqual(disk_metadata[3].tags, ["disk_c"]) self.assertEqual(disk_metadata[3].bus.bus, "xen") self.assertEqual(disk_metadata[3].bus.address, "5632") bdm = self._dev_mock({"device_name": "/dev/xvde", "tag": 
"disk_e"}) disk_metadata = self.vmops._prepare_disk_metadata(bdm) self.assertEqual(disk_metadata[0].tags, ["disk_e"]) self.assertEqual(disk_metadata[0].bus.bus, "xen") self.assertEqual(disk_metadata[0].bus.address, "000400") self.assertEqual(disk_metadata[1].tags, ["disk_e"]) self.assertEqual(disk_metadata[1].bus.bus, "xen") self.assertEqual(disk_metadata[1].bus.address, "51776") @mock.patch.object(objects.VirtualInterfaceList, 'get_by_instance_uuid') @mock.patch.object(objects.BlockDeviceMappingList, 'get_by_instance_uuid') @mock.patch.object(objects, 'NetworkInterfaceMetadata') @mock.patch.object(objects, 'InstanceDeviceMetadata') @mock.patch.object(objects, 'PCIDeviceBus') @mock.patch.object(vmops.VMOps, '_prepare_disk_metadata') def test_save_device_metadata(self, mock_prepare_disk_metadata, mock_PCIDeviceBus, mock_InstanceDeviceMetadata, mock_NetworkInterfaceMetadata, mock_get_bdms, mock_get_vifs): context = {} instance = {"uuid": "fake_uuid"} vif = self._dev_mock({"address": "fake_address", "tag": "vif_tag"}) bdm = self._dev_mock({"device_name": "/dev/xvdx", "tag": "bdm_tag"}) block_device_info = {'block_device_mapping': [bdm]} mock_get_vifs.return_value = [vif] mock_get_bdms.return_value = [bdm] mock_InstanceDeviceMetadata.side_effect = \ lambda **kw: {"devices": kw.get("devices")} mock_NetworkInterfaceMetadata.return_value = mock.sentinel.vif_metadata mock_prepare_disk_metadata.return_value = [mock.sentinel.bdm_metadata] dev_meta = self.vmops._save_device_metadata(context, instance, block_device_info) mock_get_vifs.assert_called_once_with(context, instance["uuid"]) mock_NetworkInterfaceMetadata.assert_called_once_with(mac=vif.address, bus=mock_PCIDeviceBus.return_value, tags=[vif.tag]) mock_prepare_disk_metadata.assert_called_once_with(bdm) self.assertEqual(dev_meta["devices"], [mock.sentinel.vif_metadata, mock.sentinel.bdm_metadata]) @mock.patch.object(objects.VirtualInterfaceList, 'get_by_instance_uuid') @mock.patch.object(objects.BlockDeviceMappingList, 'get_by_instance_uuid') @mock.patch.object(vmops.VMOps, '_prepare_disk_metadata') def test_save_device_metadata_no_vifs_no_bdms( self, mock_prepare_disk_metadata, mock_get_bdms, mock_get_vifs): """Tests that we don't save any device metadata when there are no VIFs or BDMs. 
""" ctxt = context.RequestContext('fake-user', 'fake-project') instance = objects.Instance(uuid=uuids.instance_uuid) block_device_info = {'block_device_mapping': []} mock_get_vifs.return_value = objects.VirtualInterfaceList() dev_meta = self.vmops._save_device_metadata( ctxt, instance, block_device_info) self.assertIsNone(dev_meta) mock_get_vifs.assert_called_once_with(ctxt, uuids.instance_uuid) mock_get_bdms.assert_not_called() mock_prepare_disk_metadata.assert_not_called() def test_spawn_with_neutron_exception(self): self.mox.StubOutWithMock(self.vmops, '_get_neutron_events') self.assertRaises(exception.VirtualInterfaceCreateException, self._test_spawn, neutron_exception=True) def _test_finish_migration(self, power_on=True, resize_instance=True, throw_exception=None, booted_from_volume=False, vgpu_info=None): self._stub_out_common() self.mox.StubOutWithMock(volumeops.VolumeOps, "connect_volume") self.mox.StubOutWithMock(vm_utils, "import_all_migrated_disks") self.mox.StubOutWithMock(self.vmops, "_attach_mapped_block_devices") context = "context" migration = {} name_label = "dummy" instance = self._new_instance({"name": name_label, "uuid": "fake_uuid", "root_device_name": "/dev/xvda", "device_metadata": None}) disk_info = "disk_info" network_info = "net_info" image_meta = objects.ImageMeta.from_dict({"id": uuids.image_id}) block_device_info = {} import_root = True if booted_from_volume: block_device_info = {'block_device_mapping': [ {'mount_device': '/dev/xvda', 'connection_info': {'data': 'fake-data'}}]} import_root = False volumeops.VolumeOps.connect_volume( {'data': 'fake-data'}).AndReturn(('sr', 'vol-vdi-uuid')) self.vmops._session.call_xenapi('VDI.get_by_uuid', 'vol-vdi-uuid').AndReturn('vol-vdi-ref') session = self.vmops._session self.vmops._ensure_instance_name_unique(name_label) self.vmops._ensure_enough_free_mem(instance) di_type = "di_type" vm_utils.determine_disk_image_type(image_meta).AndReturn(di_type) root_vdi = {"ref": "fake_ref"} ephemeral_vdi = {"ref": "fake_ref_e"} vdis = {"root": root_vdi, "ephemerals": {4: ephemeral_vdi}} vm_utils.import_all_migrated_disks(self.vmops._session, instance, import_root=import_root).AndReturn(vdis) kernel_file = "kernel" ramdisk_file = "ramdisk" vm_utils.create_kernel_and_ramdisk(context, session, instance, name_label).AndReturn((kernel_file, ramdisk_file)) vm_ref = "fake_vm_ref" rescue = False self.vmops._create_vm_record(context, instance, name_label, di_type, kernel_file, ramdisk_file, image_meta, rescue).AndReturn(vm_ref) if resize_instance: self.vmops._resize_up_vdis(instance, vdis) self.vmops._save_device_metadata(context, instance, block_device_info) self.vmops._attach_disks(context, instance, image_meta, vm_ref, name_label, vdis, di_type, network_info, False, None, None) self.vmops._attach_mapped_block_devices(instance, block_device_info) pci_manager.get_instance_pci_devs(instance).AndReturn([]) self.vmops._attach_vgpu(vm_ref, vgpu_info, instance) self.vmops._inject_instance_metadata(instance, vm_ref) self.vmops._inject_auto_disk_config(instance, vm_ref) self.vmops._file_inject_vm_settings(instance, vm_ref, vdis, network_info) self.vmops.inject_network_info(instance, network_info, vm_ref) self.vmops._create_vifs(instance, vm_ref, network_info) self.vmops.firewall_driver.setup_basic_filtering(instance, network_info).AndRaise(NotImplementedError) self.vmops.firewall_driver.prepare_instance_filter(instance, network_info) if power_on: self.vmops._start(instance, vm_ref, start_pause=True) 
        self.vmops.firewall_driver.apply_instance_filter(instance,
                                                         network_info)
        if power_on:
            self.vmops._session.call_xenapi('VM.unpause', vm_ref)
            self.vmops._wait_for_instance_to_start(instance, vm_ref)
            self.vmops._update_last_dom_id(vm_ref)

        last_call = self.vmops._update_instance_progress(
            context, instance, step=5, total_steps=5)
        if throw_exception:
            last_call.AndRaise(throw_exception)
            self.vmops._destroy(instance, vm_ref, network_info=network_info)
            vm_utils.destroy_kernel_ramdisk(self.vmops._session, instance,
                                            kernel_file, ramdisk_file)
            vm_utils.safe_destroy_vdis(self.vmops._session,
                                       ["fake_ref_e", "fake_ref"])

        self.mox.ReplayAll()
        self.vmops.finish_migration(context, migration, instance, disk_info,
                                    network_info, image_meta, resize_instance,
                                    block_device_info, power_on)

    def test_finish_migration(self):
        self._test_finish_migration()

    def test_finish_migration_no_power_on(self):
        self._test_finish_migration(power_on=False, resize_instance=False)

    def test_finish_migration_booted_from_volume(self):
        self._test_finish_migration(booted_from_volume=True)

    def test_finish_migrate_performs_rollback_on_error(self):
        self.assertRaises(test.TestingException,
                          self._test_finish_migration,
                          power_on=False, resize_instance=False,
                          throw_exception=test.TestingException())

    def test_remove_hostname(self):
        vm, vm_ref = self.create_vm("dummy")
        instance = {"name": "dummy", "uuid": "1234",
                    "auto_disk_config": None}
        self.mox.StubOutWithMock(self._session, 'call_xenapi')
        self._session.call_xenapi("VM.remove_from_xenstore_data", vm_ref,
                                  "vm-data/hostname")

        self.mox.ReplayAll()
        self.vmops._remove_hostname(instance, vm_ref)
        self.mox.VerifyAll()

    def test_reset_network(self):
        class mock_agent(object):
            def __init__(self):
                self.called = False

            def resetnetwork(self):
                self.called = True

        vm, vm_ref = self.create_vm("dummy")
        instance = {"name": "dummy", "uuid": "1234",
                    "auto_disk_config": None}
        agent = mock_agent()

        self.mox.StubOutWithMock(self.vmops, 'agent_enabled')
        self.mox.StubOutWithMock(self.vmops, '_get_agent')
        self.mox.StubOutWithMock(self.vmops, '_inject_hostname')
        self.mox.StubOutWithMock(self.vmops, '_remove_hostname')

        self.vmops.agent_enabled(instance).AndReturn(True)
        self.vmops._get_agent(instance, vm_ref).AndReturn(agent)
        self.vmops._inject_hostname(instance, vm_ref, False)
        self.vmops._remove_hostname(instance, vm_ref)
        self.mox.ReplayAll()

        self.vmops.reset_network(instance)
        self.assertTrue(agent.called)
        self.mox.VerifyAll()

    def test_inject_hostname(self):
        instance = {"hostname": "dummy", "os_type": "fake", "uuid": "uuid"}
        vm_ref = "vm_ref"

        self.mox.StubOutWithMock(self.vmops, '_add_to_param_xenstore')
        self.vmops._add_to_param_xenstore(vm_ref, 'vm-data/hostname',
                                          'dummy')
        self.mox.ReplayAll()

        self.vmops._inject_hostname(instance, vm_ref, rescue=False)

    def test_inject_hostname_with_rescue_prefix(self):
        instance = {"hostname": "dummy", "os_type": "fake", "uuid": "uuid"}
        vm_ref = "vm_ref"

        self.mox.StubOutWithMock(self.vmops, '_add_to_param_xenstore')
        self.vmops._add_to_param_xenstore(vm_ref, 'vm-data/hostname',
                                          'RESCUE-dummy')
        self.mox.ReplayAll()

        self.vmops._inject_hostname(instance, vm_ref, rescue=True)

    def test_inject_hostname_with_windows_name_truncation(self):
        instance = {"hostname": "dummydummydummydummydummy",
                    "os_type": "windows", "uuid": "uuid"}
        vm_ref = "vm_ref"

        self.mox.StubOutWithMock(self.vmops, '_add_to_param_xenstore')
        self.vmops._add_to_param_xenstore(vm_ref, 'vm-data/hostname',
                                          'RESCUE-dummydum')
        self.mox.ReplayAll()

        self.vmops._inject_hostname(instance, vm_ref, rescue=True)

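    # The next test records two power-state reads: SHUTDOWN first, then
    # RUNNING after a short greenthread sleep, which is how the polling
    # loop in _wait_for_instance_to_start is exercised here.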
    def test_wait_for_instance_to_start(self):
        instance = {"uuid": "uuid"}
        vm_ref = "vm_ref"

        self.mox.StubOutWithMock(vm_utils, 'get_power_state')
        self.mox.StubOutWithMock(greenthread, 'sleep')
        vm_utils.get_power_state(self._session, vm_ref).AndReturn(
                power_state.SHUTDOWN)
        greenthread.sleep(0.5)
        vm_utils.get_power_state(self._session, vm_ref).AndReturn(
                power_state.RUNNING)
        self.mox.ReplayAll()

        self.vmops._wait_for_instance_to_start(instance, vm_ref)

    @mock.patch.object(vm_utils, 'lookup', return_value='ref')
    @mock.patch.object(vm_utils, 'create_vbd')
    def test_attach_orig_disks(self, mock_create_vbd, mock_lookup):
        instance = {"name": "dummy"}
        vm_ref = "vm_ref"
        vbd_refs = {vmops.DEVICE_ROOT: "vdi_ref"}

        with mock.patch.object(self.vmops, '_find_vdi_refs',
                               return_value=vbd_refs) as mock_find_vdi:
            self.vmops._attach_orig_disks(instance, vm_ref)
            mock_lookup.assert_called_once_with(self.vmops._session, 'dummy')
            mock_find_vdi.assert_called_once_with('ref', exclude_volumes=True)
            mock_create_vbd.assert_called_once_with(
                self.vmops._session, vm_ref, 'vdi_ref', vmops.DEVICE_RESCUE,
                bootable=False)

    def test_agent_update_setup(self):
        # agent updates need to occur after networking is configured
        instance = {'name': 'betelgeuse',
                    'uuid': '1-2-3-4-5-6'}
        vm_ref = 'vm_ref'
        agent = xenapi_agent.XenAPIBasedAgent(self.vmops._session,
                self.vmops._virtapi, instance, vm_ref)

        self.mox.StubOutWithMock(xenapi_agent, 'should_use_agent')
        self.mox.StubOutWithMock(self.vmops, '_get_agent')
        self.mox.StubOutWithMock(agent, 'get_version')
        self.mox.StubOutWithMock(agent, 'resetnetwork')
        self.mox.StubOutWithMock(agent, 'update_if_needed')

        xenapi_agent.should_use_agent(instance).AndReturn(True)
        self.vmops._get_agent(instance, vm_ref).AndReturn(agent)
        agent.get_version().AndReturn('1.2.3')
        agent.resetnetwork()
        agent.update_if_needed('1.2.3')

        self.mox.ReplayAll()
        self.vmops._configure_new_instance_with_agent(instance, vm_ref,
                None, None)

    @mock.patch.object(utils, 'is_neutron', return_value=True)
    def test_get_neutron_event(self, mock_is_neutron):
        network_info = [{"active": False, "id": 1},
                        {"active": True, "id": 2},
                        {"active": False, "id": 3},
                        {"id": 4}]
        power_on = True
        first_boot = True
        rescue = False
        events = self.vmops._get_neutron_events(network_info, power_on,
                                                first_boot, rescue)
        self.assertEqual("network-vif-plugged", events[0][0])
        self.assertEqual(1, events[0][1])
        self.assertEqual("network-vif-plugged", events[1][0])
        self.assertEqual(3, events[1][1])

    @mock.patch.object(utils, 'is_neutron', return_value=False)
    def test_get_neutron_event_not_neutron_network(self, mock_is_neutron):
        network_info = [{"active": False, "id": 1},
                        {"active": True, "id": 2},
                        {"active": False, "id": 3},
                        {"id": 4}]
        power_on = True
        first_boot = True
        rescue = False
        events = self.vmops._get_neutron_events(network_info, power_on,
                                                first_boot, rescue)
        self.assertEqual([], events)

    @mock.patch.object(utils, 'is_neutron', return_value=True)
    def test_get_neutron_event_power_off(self, mock_is_neutron):
        network_info = [{"active": False, "id": 1},
                        {"active": True, "id": 2},
                        {"active": False, "id": 3},
                        {"id": 4}]
        power_on = False
        first_boot = True
        rescue = False
        events = self.vmops._get_neutron_events(network_info, power_on,
                                                first_boot, rescue)
        self.assertEqual([], events)

    @mock.patch.object(utils, 'is_neutron', return_value=True)
    def test_get_neutron_event_not_first_boot(self, mock_is_neutron):
        network_info = [{"active": False, "id": 1},
                        {"active": True, "id": 2},
                        {"active": False, "id": 3},
                        {"id": 4}]
        power_on = True
        first_boot = False
        rescue = False
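        # No vif-plugged events are expected when this is not the first boot.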
        events = self.vmops._get_neutron_events(network_info, power_on,
                                                first_boot, rescue)
        self.assertEqual([], events)

    @mock.patch.object(utils, 'is_neutron', return_value=True)
    def test_get_neutron_event_rescue(self, mock_is_neutron):
        network_info = [{"active": False, "id": 1},
                        {"active": True, "id": 2},
                        {"active": False, "id": 3},
                        {"id": 4}]
        power_on = True
        first_boot = True
        rescue = True
        events = self.vmops._get_neutron_events(network_info, power_on,
                                                first_boot, rescue)
        self.assertEqual([], events)


class DestroyTestCase(VMOpsTestBase):
    def setUp(self):
        super(DestroyTestCase, self).setUp()
        self.context = context.RequestContext(user_id=None, project_id=None)
        self.instance = fake_instance.fake_instance_obj(self.context)

    @mock.patch.object(vm_utils, 'lookup', side_effect=[None, None])
    @mock.patch.object(vm_utils, 'hard_shutdown_vm')
    @mock.patch.object(volume_utils, 'find_sr_by_uuid')
    @mock.patch.object(volume_utils, 'forget_sr')
    def test_no_vm_no_bdm(self, forget_sr, find_sr_by_uuid, hard_shutdown_vm,
                          lookup):
        self.vmops.destroy(self.instance, 'network_info',
                           {'block_device_mapping': []})
        self.assertEqual(0, find_sr_by_uuid.call_count)
        self.assertEqual(0, forget_sr.call_count)
        self.assertEqual(0, hard_shutdown_vm.call_count)

    @mock.patch.object(vm_utils, 'lookup', side_effect=[None, None])
    @mock.patch.object(vm_utils, 'hard_shutdown_vm')
    @mock.patch.object(volume_utils, 'find_sr_by_uuid', return_value=None)
    @mock.patch.object(volume_utils, 'forget_sr')
    def test_no_vm_orphaned_volume_no_sr(self, forget_sr, find_sr_by_uuid,
                                         hard_shutdown_vm, lookup):
        self.vmops.destroy(self.instance, 'network_info',
                           {'block_device_mapping': [{'connection_info':
                            {'data': {'volume_id': 'fake-uuid'}}}]})
        find_sr_by_uuid.assert_called_once_with(self.vmops._session,
                                                'FA15E-D15C-fake-uuid')
        self.assertEqual(0, forget_sr.call_count)
        self.assertEqual(0, hard_shutdown_vm.call_count)

    @mock.patch.object(vm_utils, 'lookup', side_effect=[None, None])
    @mock.patch.object(vm_utils, 'hard_shutdown_vm')
    @mock.patch.object(volume_utils, 'find_sr_by_uuid',
                       return_value='sr_ref')
    @mock.patch.object(volume_utils, 'forget_sr')
    def test_no_vm_orphaned_volume_old_sr(self, forget_sr, find_sr_by_uuid,
                                          hard_shutdown_vm, lookup):
        self.vmops.destroy(self.instance, 'network_info',
                           {'block_device_mapping': [{'connection_info':
                            {'data': {'volume_id': 'fake-uuid'}}}]})
        find_sr_by_uuid.assert_called_once_with(self.vmops._session,
                                                'FA15E-D15C-fake-uuid')
        forget_sr.assert_called_once_with(self.vmops._session, 'sr_ref')
        self.assertEqual(0, hard_shutdown_vm.call_count)

    @mock.patch.object(vm_utils, 'lookup', side_effect=[None, None])
    @mock.patch.object(vm_utils, 'hard_shutdown_vm')
    @mock.patch.object(volume_utils, 'find_sr_by_uuid',
                       side_effect=[None, 'sr_ref'])
    @mock.patch.object(volume_utils, 'forget_sr')
    @mock.patch.object(uuid, 'uuid5', return_value='fake-uuid')
    def test_no_vm_orphaned_volume(self, uuid5, forget_sr, find_sr_by_uuid,
                                   hard_shutdown_vm, lookup):
        fake_data = {'volume_id': 'fake-uuid',
                     'target_portal': 'host:port',
                     'target_iqn': 'iqn'}
        self.vmops.destroy(self.instance, 'network_info',
                           {'block_device_mapping': [{'connection_info':
                            {'data': fake_data}}]})
        call1 = mock.call(self.vmops._session, 'FA15E-D15C-fake-uuid')
        call2 = mock.call(self.vmops._session, 'fake-uuid')
        uuid5.assert_called_once_with(volume_utils.SR_NAMESPACE,
                                      'host/port/iqn')
        find_sr_by_uuid.assert_has_calls([call1, call2])
        forget_sr.assert_called_once_with(self.vmops._session, 'sr_ref')
        self.assertEqual(0, hard_shutdown_vm.call_count)


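# The class-level mock.patch decorators below apply to every test method;
# the generated mocks are passed to each test in bottom-up decorator order.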
@mock.patch.object(vmops.VMOps, '_update_instance_progress')
@mock.patch.object(vmops.VMOps, '_get_vm_opaque_ref')
@mock.patch.object(vm_utils, 'get_sr_path')
@mock.patch.object(vmops.VMOps, '_detach_block_devices_from_orig_vm')
@mock.patch.object(vmops.VMOps, '_migrate_disk_resizing_down')
@mock.patch.object(vmops.VMOps, '_migrate_disk_resizing_up')
class MigrateDiskAndPowerOffTestCase(VMOpsTestBase):
    def setUp(self):
        super(MigrateDiskAndPowerOffTestCase, self).setUp()
        self.context = context.RequestContext('user', 'project')

    def test_migrate_disk_and_power_off_works_down(self,
                migrate_up, migrate_down, *mocks):
        instance = objects.Instance(
            flavor=objects.Flavor(root_gb=2, ephemeral_gb=0),
            uuid=uuids.instance)
        flavor = fake_flavor.fake_flavor_obj(self.context, root_gb=1,
                                             ephemeral_gb=0)

        self.vmops.migrate_disk_and_power_off(None, instance, None,
                                              flavor, None)

        self.assertFalse(migrate_up.called)
        self.assertTrue(migrate_down.called)

    def test_migrate_disk_and_power_off_works_up(self,
                migrate_up, migrate_down, *mocks):
        instance = objects.Instance(
            flavor=objects.Flavor(root_gb=1, ephemeral_gb=1),
            uuid=uuids.instance)
        flavor = fake_flavor.fake_flavor_obj(self.context, root_gb=2,
                                             ephemeral_gb=2)

        self.vmops.migrate_disk_and_power_off(None, instance, None,
                                              flavor, None)

        self.assertFalse(migrate_down.called)
        self.assertTrue(migrate_up.called)

    def test_migrate_disk_and_power_off_resize_down_ephemeral_fails(self,
                migrate_up, migrate_down, *mocks):
        instance = objects.Instance(flavor=objects.Flavor(ephemeral_gb=2))
        flavor = fake_flavor.fake_flavor_obj(self.context, ephemeral_gb=1)

        self.assertRaises(exception.ResizeError,
                          self.vmops.migrate_disk_and_power_off,
                          None, instance, None, flavor, None)


@mock.patch.object(vm_utils, 'get_vdi_for_vm_safely')
@mock.patch.object(vm_utils, 'migrate_vhd')
@mock.patch.object(vmops.VMOps, '_resize_ensure_vm_is_shutdown')
@mock.patch.object(vm_utils, 'get_all_vdi_uuids_for_vm')
@mock.patch.object(vmops.VMOps, '_update_instance_progress')
@mock.patch.object(vmops.VMOps, '_apply_orig_vm_name_label')
class MigrateDiskResizingUpTestCase(VMOpsTestBase):
    def _fake_snapshot_attached_here(self, session, instance, vm_ref, label,
                                     userdevice, post_snapshot_callback):
        self.assertIsInstance(instance, dict)
        if userdevice == '0':
            self.assertEqual("vm_ref", vm_ref)
            self.assertEqual("fake-snapshot", label)
            yield ["leaf", "parent", "grandp"]
        else:
            leaf = userdevice + "-leaf"
            parent = userdevice + "-parent"
            yield [leaf, parent]

    @mock.patch.object(volume_utils, 'is_booted_from_volume',
                       return_value=False)
    def test_migrate_disk_resizing_up_works_no_ephemeral(
            self, mock_is_booted_from_volume, mock_apply_orig,
            mock_update_progress, mock_get_all_vdi_uuids, mock_shutdown,
            mock_migrate_vhd, mock_get_vdi_for_vm):
        context = "ctxt"
        instance = {"name": "fake", "uuid": "uuid"}
        dest = "dest"
        vm_ref = "vm_ref"
        sr_path = "sr_path"

        mock_get_all_vdi_uuids.return_value = None
        mock_get_vdi_for_vm.return_value = ({}, {"uuid": "root"})

        with mock.patch.object(vm_utils, '_snapshot_attached_here_impl',
                               self._fake_snapshot_attached_here):
            self.vmops._migrate_disk_resizing_up(context, instance, dest,
                                                 vm_ref, sr_path)

        mock_get_all_vdi_uuids.assert_called_once_with(self.vmops._session,
                                                       vm_ref,
                                                       min_userdevice=4)
        mock_apply_orig.assert_called_once_with(instance, vm_ref)
        mock_shutdown.assert_called_once_with(instance, vm_ref)

        m_vhd_expected = [mock.call(self.vmops._session, instance,
                                    "parent", dest, sr_path, 1),
                          mock.call(self.vmops._session, instance,
                                    "grandp", dest, sr_path, 2),
                          mock.call(self.vmops._session, instance,
                                    "root", dest, sr_path, 0)]
        self.assertEqual(m_vhd_expected, mock_migrate_vhd.call_args_list)

        prog_expected = [
            mock.call(context, instance, 1, 5),
            mock.call(context, instance, 2, 5),
            mock.call(context, instance, 3, 5),
            mock.call(context, instance, 4, 5)
            # 5/5: step to be executed by finish migration.
        ]
        self.assertEqual(prog_expected, mock_update_progress.call_args_list)

    @mock.patch.object(volume_utils, 'is_booted_from_volume',
                       return_value=False)
    def test_migrate_disk_resizing_up_works_with_two_ephemeral(
            self, mock_is_booted_from_volume, mock_apply_orig,
            mock_update_progress, mock_get_all_vdi_uuids, mock_shutdown,
            mock_migrate_vhd, mock_get_vdi_for_vm):
        context = "ctxt"
        instance = {"name": "fake", "uuid": "uuid"}
        dest = "dest"
        vm_ref = "vm_ref"
        sr_path = "sr_path"

        mock_get_all_vdi_uuids.return_value = ["vdi-eph1", "vdi-eph2"]
        mock_get_vdi_for_vm.side_effect = [({}, {"uuid": "root"}),
                                           ({}, {"uuid": "4-root"}),
                                           ({}, {"uuid": "5-root"})]

        with mock.patch.object(vm_utils, '_snapshot_attached_here_impl',
                               self._fake_snapshot_attached_here):
            self.vmops._migrate_disk_resizing_up(context, instance, dest,
                                                 vm_ref, sr_path)

        mock_get_all_vdi_uuids.assert_called_once_with(self.vmops._session,
                                                       vm_ref,
                                                       min_userdevice=4)
        mock_apply_orig.assert_called_once_with(instance, vm_ref)
        mock_shutdown.assert_called_once_with(instance, vm_ref)

        m_vhd_expected = [mock.call(self.vmops._session, instance,
                                    "parent", dest, sr_path, 1),
                          mock.call(self.vmops._session, instance,
                                    "grandp", dest, sr_path, 2),
                          mock.call(self.vmops._session, instance,
                                    "4-parent", dest, sr_path, 1, 1),
                          mock.call(self.vmops._session, instance,
                                    "5-parent", dest, sr_path, 1, 2),
                          mock.call(self.vmops._session, instance,
                                    "root", dest, sr_path, 0),
                          mock.call(self.vmops._session, instance,
                                    "4-root", dest, sr_path, 0, 1),
                          mock.call(self.vmops._session, instance,
                                    "5-root", dest, sr_path, 0, 2)]
        self.assertEqual(m_vhd_expected, mock_migrate_vhd.call_args_list)

        prog_expected = [
            mock.call(context, instance, 1, 5),
            mock.call(context, instance, 2, 5),
            mock.call(context, instance, 3, 5),
            mock.call(context, instance, 4, 5)
            # 5/5: step to be executed by finish migration.
        ]
        self.assertEqual(prog_expected, mock_update_progress.call_args_list)

    @mock.patch.object(volume_utils, 'is_booted_from_volume',
                       return_value=True)
    def test_migrate_disk_resizing_up_booted_from_volume(
            self, mock_is_booted_from_volume, mock_apply_orig,
            mock_update_progress, mock_get_all_vdi_uuids, mock_shutdown,
            mock_migrate_vhd, mock_get_vdi_for_vm):
        context = "ctxt"
        instance = {"name": "fake", "uuid": "uuid"}
        dest = "dest"
        vm_ref = "vm_ref"
        sr_path = "sr_path"

        mock_get_all_vdi_uuids.return_value = ["vdi-eph1", "vdi-eph2"]
        mock_get_vdi_for_vm.side_effect = [({}, {"uuid": "4-root"}),
                                           ({}, {"uuid": "5-root"})]

        with mock.patch.object(vm_utils, '_snapshot_attached_here_impl',
                               self._fake_snapshot_attached_here):
            self.vmops._migrate_disk_resizing_up(context, instance, dest,
                                                 vm_ref, sr_path)

        mock_get_all_vdi_uuids.assert_called_once_with(self.vmops._session,
                                                       vm_ref,
                                                       min_userdevice=4)
        mock_apply_orig.assert_called_once_with(instance, vm_ref)
        mock_shutdown.assert_called_once_with(instance, vm_ref)

        m_vhd_expected = [mock.call(self.vmops._session, instance,
                                    "4-parent", dest, sr_path, 1, 1),
                          mock.call(self.vmops._session, instance,
                                    "5-parent", dest, sr_path, 1, 2),
                          mock.call(self.vmops._session, instance,
                                    "4-root", dest, sr_path, 0, 1),
                          mock.call(self.vmops._session, instance,
                                    "5-root", dest, sr_path, 0, 2)]
        self.assertEqual(m_vhd_expected, mock_migrate_vhd.call_args_list)

        prog_expected = [
            mock.call(context, instance, 1, 5),
            mock.call(context, instance, 2, 5),
            mock.call(context, instance, 3, 5),
            mock.call(context, instance, 4, 5)
            # 5/5: step to be executed by finish migration.
        ]
        self.assertEqual(prog_expected, mock_update_progress.call_args_list)

    @mock.patch.object(vmops.VMOps, '_restore_orig_vm_and_cleanup_orphan')
    @mock.patch.object(volume_utils, 'is_booted_from_volume',
                       return_value=False)
    def test_migrate_disk_resizing_up_rollback(
            self, mock_is_booted_from_volume, mock_restore, mock_apply_orig,
            mock_update_progress, mock_get_all_vdi_uuids, mock_shutdown,
            mock_migrate_vhd, mock_get_vdi_for_vm):
        context = "ctxt"
        instance = {"name": "fake", "uuid": "fake"}
        dest = "dest"
        vm_ref = "vm_ref"
        sr_path = "sr_path"

        mock_migrate_vhd.side_effect = test.TestingException
        mock_restore.side_effect = test.TestingException

        with mock.patch.object(vm_utils, '_snapshot_attached_here_impl',
                               self._fake_snapshot_attached_here):
            self.assertRaises(exception.InstanceFaultRollback,
                              self.vmops._migrate_disk_resizing_up,
                              context, instance, dest, vm_ref, sr_path)

        mock_apply_orig.assert_called_once_with(instance, vm_ref)
        mock_restore.assert_called_once_with(instance)
        mock_migrate_vhd.assert_called_once_with(self.vmops._session,
                instance, "parent", dest, sr_path, 1)


class CreateVMRecordTestCase(VMOpsTestBase):
    @mock.patch.object(vm_utils, 'determine_vm_mode')
    @mock.patch.object(vm_utils, 'get_vm_device_id')
    @mock.patch.object(vm_utils, 'create_vm')
    def test_create_vm_record_with_vm_device_id(self, mock_create_vm,
            mock_get_vm_device_id, mock_determine_vm_mode):
        context = "context"
        instance = objects.Instance(vm_mode="vm_mode", uuid=uuids.instance)
        name_label = "dummy"
        disk_image_type = "vhd"
        kernel_file = "kernel"
        ramdisk_file = "ram"
        device_id = "0002"
        image_properties = {"xenapi_device_id": device_id}
        image_meta = objects.ImageMeta.from_dict(
            {"properties": image_properties})
        rescue = False
        session = "session"
        self.vmops._session = session
        mock_get_vm_device_id.return_value = device_id
        mock_determine_vm_mode.return_value = "vm_mode"

        self.vmops._create_vm_record(context, instance, name_label,
                                     disk_image_type, kernel_file,
                                     ramdisk_file, image_meta, rescue)

        mock_get_vm_device_id.assert_called_with(session, image_meta)
        mock_create_vm.assert_called_with(session, instance, name_label,
                                          kernel_file, ramdisk_file,
                                          False, device_id)


class BootableTestCase(VMOpsTestBase):
    def setUp(self):
        super(BootableTestCase, self).setUp()
        self.instance = {"name": "test", "uuid": "fake"}
        vm_rec, self.vm_ref = self.create_vm('test')
        # sanity check bootlock is initially disabled:
        self.assertEqual({}, vm_rec['blocked_operations'])

    def _get_blocked(self):
        vm_rec = self._session.call_xenapi("VM.get_record", self.vm_ref)
        return vm_rec['blocked_operations']

    def test_acquire_bootlock(self):
        self.vmops._acquire_bootlock(self.vm_ref)
        blocked = self._get_blocked()
        self.assertIn('start', blocked)

    def test_release_bootlock(self):
        self.vmops._acquire_bootlock(self.vm_ref)
        self.vmops._release_bootlock(self.vm_ref)
        blocked = self._get_blocked()
        self.assertNotIn('start', blocked)

    def test_set_bootable(self):
        self.vmops.set_bootable(self.instance, True)
        blocked = self._get_blocked()
        self.assertNotIn('start', blocked)

    def test_set_not_bootable(self):
        self.vmops.set_bootable(self.instance, False)
        blocked = self._get_blocked()
        self.assertIn('start', blocked)


@mock.patch.object(vm_utils, 'update_vdi_virtual_size', autospec=True)
class ResizeVdisTestCase(VMOpsTestBase):
    def test_dont_resize_root_volumes_osvol_false(self, mock_resize):
        instance = fake_instance.fake_instance_obj(
            None, flavor=objects.Flavor(root_gb=20))
        vdis = {'root': {'osvol': False, 'ref': 'vdi_ref'}}

        self.vmops._resize_up_vdis(instance, vdis)

        self.assertTrue(mock_resize.called)

    def test_dont_resize_root_volumes_osvol_true(self, mock_resize):
        instance = fake_instance.fake_instance_obj(
            None, flavor=objects.Flavor(root_gb=20))
        vdis = {'root': {'osvol': True}}

        self.vmops._resize_up_vdis(instance, vdis)

        self.assertFalse(mock_resize.called)

    def test_dont_resize_root_volumes_no_osvol(self, mock_resize):
        instance = fake_instance.fake_instance_obj(
            None, flavor=objects.Flavor(root_gb=20))
        vdis = {'root': {}}

        self.vmops._resize_up_vdis(instance, vdis)

        self.assertFalse(mock_resize.called)

    @mock.patch.object(vm_utils, 'get_ephemeral_disk_sizes')
    def test_ensure_ephemeral_resize_with_root_volume(self, mock_sizes,
                                                      mock_resize):
        mock_sizes.return_value = [2000, 1000]
        instance = fake_instance.fake_instance_obj(
            None, flavor=objects.Flavor(root_gb=20, ephemeral_gb=20))
        ephemerals = {"4": {"ref": 4}, "5": {"ref": 5}}
        vdis = {'root': {'osvol': True, 'ref': 'vdi_ref'},
                'ephemerals': ephemerals}
        with mock.patch.object(vm_utils, 'generate_single_ephemeral',
                               autospec=True) as g:
            self.vmops._resize_up_vdis(instance, vdis)
            self.assertEqual([mock.call(self.vmops._session, instance,
                                        4, 2000),
                              mock.call(self.vmops._session, instance,
                                        5, 1000)],
                             mock_resize.call_args_list)
            self.assertFalse(g.called)

    def test_resize_up_vdis_root(self, mock_resize):
        instance = objects.Instance(flavor=objects.Flavor(root_gb=20,
                                                          ephemeral_gb=0))
        self.vmops._resize_up_vdis(instance, {"root": {"ref": "vdi_ref"}})
        mock_resize.assert_called_once_with(self.vmops._session, instance,
                                            "vdi_ref", 20)

    def test_resize_up_vdis_zero_disks(self, mock_resize):
        instance = objects.Instance(flavor=objects.Flavor(root_gb=0,
                                                          ephemeral_gb=0))
        self.vmops._resize_up_vdis(instance, {"root": {}})
        self.assertFalse(mock_resize.called)

    def test_resize_up_vdis_no_vdis_like_initial_spawn(self, mock_resize):
        instance = objects.Instance(flavor=objects.Flavor(root_gb=0,
                                                          ephemeral_gb=3000))
        vdis = {}

        self.vmops._resize_up_vdis(instance, vdis)
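        # With no VDIs at all (as on an initial spawn), nothing is resized.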
        self.assertFalse(mock_resize.called)

    @mock.patch.object(vm_utils, 'get_ephemeral_disk_sizes')
    def test_resize_up_vdis_ephemeral(self, mock_sizes, mock_resize):
        mock_sizes.return_value = [2000, 1000]
        instance = objects.Instance(flavor=objects.Flavor(root_gb=0,
                                                          ephemeral_gb=3000))
        ephemerals = {"4": {"ref": 4}, "5": {"ref": 5}}
        vdis = {"ephemerals": ephemerals}

        self.vmops._resize_up_vdis(instance, vdis)

        mock_sizes.assert_called_once_with(3000)
        expected = [mock.call(self.vmops._session, instance, 4, 2000),
                    mock.call(self.vmops._session, instance, 5, 1000)]
        self.assertEqual(expected, mock_resize.call_args_list)

    @mock.patch.object(vm_utils, 'generate_single_ephemeral')
    @mock.patch.object(vm_utils, 'get_ephemeral_disk_sizes')
    def test_resize_up_vdis_ephemeral_with_generate(self, mock_sizes,
                                                    mock_generate,
                                                    mock_resize):
        mock_sizes.return_value = [2000, 1000]
        instance = objects.Instance(uuid=uuids.instance,
                                    flavor=objects.Flavor(root_gb=0,
                                                          ephemeral_gb=3000))
        ephemerals = {"4": {"ref": 4}}
        vdis = {"ephemerals": ephemerals}

        self.vmops._resize_up_vdis(instance, vdis)

        mock_sizes.assert_called_once_with(3000)
        mock_resize.assert_called_once_with(self.vmops._session, instance,
                                            4, 2000)
        mock_generate.assert_called_once_with(self.vmops._session, instance,
                                              None, 5, 1000)


@mock.patch.object(vm_utils, 'remove_old_snapshots')
class CleanupFailedSnapshotTestCase(VMOpsTestBase):
    def test_post_interrupted_snapshot_cleanup(self, mock_remove):
        self.vmops._get_vm_opaque_ref = mock.Mock()
        self.vmops._get_vm_opaque_ref.return_value = "vm_ref"

        self.vmops.post_interrupted_snapshot_cleanup("context", "instance")

        mock_remove.assert_called_once_with(self.vmops._session,
                "instance", "vm_ref")


class XenstoreCallsTestCase(VMOpsTestBase):
    """Test cases for Read/Write/Delete/Update xenstore calls from vmops.
""" @mock.patch.object(vmops.VMOps, '_get_dom_id') @mock.patch.object(host_xenstore, 'read_record') def test_read_from_xenstore(self, mock_read_record, mock_dom_id): mock_read_record.return_value = "fake_xapi_return" mock_dom_id.return_value = "fake_dom_id" fake_instance = {"name": "fake_instance"} path = "attr/PVAddons/MajorVersion" self.assertEqual("fake_xapi_return", self.vmops._read_from_xenstore(fake_instance, path, vm_ref="vm_ref")) mock_dom_id.assert_called_once_with(fake_instance, "vm_ref") @mock.patch.object(vmops.VMOps, '_get_dom_id') @mock.patch.object(host_xenstore, 'read_record') def test_read_from_xenstore_ignore_missing_path(self, mock_read_record, mock_dom_id): mock_read_record.return_value = "fake_xapi_return" mock_dom_id.return_value = "fake_dom_id" fake_instance = {"name": "fake_instance"} path = "attr/PVAddons/MajorVersion" self.vmops._read_from_xenstore(fake_instance, path, vm_ref="vm_ref") mock_read_record.assert_called_once_with( self._session, "fake_dom_id", path, ignore_missing_path=True) @mock.patch.object(vmops.VMOps, '_get_dom_id') @mock.patch.object(host_xenstore, 'read_record') def test_read_from_xenstore_missing_path(self, mock_read_record, mock_dom_id): mock_read_record.return_value = "fake_xapi_return" mock_dom_id.return_value = "fake_dom_id" fake_instance = {"name": "fake_instance"} path = "attr/PVAddons/MajorVersion" self.vmops._read_from_xenstore(fake_instance, path, vm_ref="vm_ref", ignore_missing_path=False) mock_read_record.assert_called_once_with(self._session, "fake_dom_id", path, ignore_missing_path=False) class LiveMigrateTestCase(VMOpsTestBase): @mock.patch.object(vmops.VMOps, '_get_network_ref') @mock.patch.object(vmops.VMOps, '_ensure_host_in_aggregate') def _test_check_can_live_migrate_destination_shared_storage( self, shared, mock_ensure_host, mock_net_ref): fake_instance = {"name": "fake_instance", "host": "fake_host"} block_migration = None disk_over_commit = False ctxt = 'ctxt' mock_net_ref.return_value = 'fake_net_ref' with mock.patch.object(self._session, 'get_rec') as fake_sr_rec: fake_sr_rec.return_value = {'shared': shared} migrate_data_ret = self.vmops.check_can_live_migrate_destination( ctxt, fake_instance, block_migration, disk_over_commit) if shared: self.assertFalse(migrate_data_ret.block_migration) else: self.assertTrue(migrate_data_ret.block_migration) self.assertEqual({'': 'fake_net_ref'}, migrate_data_ret.vif_uuid_map) def test_check_can_live_migrate_destination_shared_storage(self): self._test_check_can_live_migrate_destination_shared_storage(True) def test_check_can_live_migrate_destination_shared_storage_false(self): self._test_check_can_live_migrate_destination_shared_storage(False) @mock.patch.object(vmops.VMOps, '_get_network_ref') @mock.patch.object(vmops.VMOps, '_ensure_host_in_aggregate', side_effect=exception.MigrationPreCheckError(reason="")) def test_check_can_live_migrate_destination_block_migration( self, mock_ensure_host, mock_net_ref): fake_instance = {"name": "fake_instance", "host": "fake_host"} block_migration = None disk_over_commit = False ctxt = 'ctxt' mock_net_ref.return_value = 'fake_net_ref' migrate_data_ret = self.vmops.check_can_live_migrate_destination( ctxt, fake_instance, block_migration, disk_over_commit) self.assertTrue(migrate_data_ret.block_migration) self.assertEqual(vm_utils.safe_find_sr(self._session), migrate_data_ret.destination_sr_ref) self.assertEqual({'value': 'fake_migrate_data'}, migrate_data_ret.migrate_send_data) self.assertEqual({'': 'fake_net_ref'}, 
                         migrate_data_ret.vif_uuid_map)

    @mock.patch.object(vmops.objects.AggregateList, 'get_by_host')
    def test_get_host_uuid_from_aggregate_no_aggr(self, mock_get_by_host):
        mock_get_by_host.return_value = objects.AggregateList(objects=[])
        context = "ctx"
        hostname = "other_host"
        self.assertRaises(exception.MigrationPreCheckError,
                          self.vmops._get_host_uuid_from_aggregate,
                          context, hostname)

    @mock.patch.object(vmops.objects.AggregateList, 'get_by_host')
    def test_get_host_uuid_from_aggregate_bad_aggr(self, mock_get_by_host):
        context = "ctx"
        hostname = "other_host"
        fake_aggregate_obj = objects.Aggregate(hosts=['fake'],
                                               metadata={'this': 'that'})
        fake_aggr_list = objects.AggregateList(objects=[fake_aggregate_obj])
        mock_get_by_host.return_value = fake_aggr_list

        self.assertRaises(exception.MigrationPreCheckError,
                          self.vmops._get_host_uuid_from_aggregate,
                          context, hostname)

    @mock.patch.object(vmops.VMOps, 'create_interim_networks')
    @mock.patch.object(vmops.VMOps, 'connect_block_device_volumes')
    def test_pre_live_migration(self, mock_connect, mock_create):
        migrate_data = objects.XenapiLiveMigrateData()
        migrate_data.block_migration = True
        sr_uuid_map = {"sr_uuid": "sr_ref"}
        vif_uuid_map = {"neutron_vif_uuid": "dest_network_ref"}
        mock_connect.return_value = {"sr_uuid": "sr_ref"}
        mock_create.return_value = {"neutron_vif_uuid": "dest_network_ref"}

        result = self.vmops.pre_live_migration(
            None, None, "bdi", "fake_network_info", None, migrate_data)

        self.assertTrue(result.block_migration)
        self.assertEqual(result.sr_uuid_map, sr_uuid_map)
        self.assertEqual(result.vif_uuid_map, vif_uuid_map)
        mock_connect.assert_called_once_with("bdi")
        mock_create.assert_called_once_with("fake_network_info")

    @mock.patch.object(vmops.VMOps, '_delete_networks_and_bridges')
    def test_post_live_migration_at_source(self, mock_delete):
        self.vmops.post_live_migration_at_source('fake_context',
                                                 'fake_instance',
                                                 'fake_network_info')
        mock_delete.assert_called_once_with('fake_instance',
                                            'fake_network_info')


class LiveMigrateFakeVersionTestCase(VMOpsTestBase):
    @mock.patch.object(vmops.VMOps, '_pv_device_reported')
    @mock.patch.object(vmops.VMOps, '_pv_driver_version_reported')
    @mock.patch.object(vmops.VMOps, '_write_fake_pv_version')
    def test_ensure_pv_driver_info_for_live_migration(
            self, mock_write_fake_pv_version,
            mock_pv_driver_version_reported, mock_pv_device_reported):
        mock_pv_device_reported.return_value = True
        mock_pv_driver_version_reported.return_value = False
        fake_instance = {"name": "fake_instance"}
        self.vmops._ensure_pv_driver_info_for_live_migration(fake_instance,
                                                             "vm_rec")

        mock_write_fake_pv_version.assert_called_once_with(fake_instance,
                                                           "vm_rec")

    @mock.patch.object(vmops.VMOps, '_read_from_xenstore')
    def test_pv_driver_version_reported_None(self, fake_read_from_xenstore):
        fake_read_from_xenstore.return_value = '"None"'
        fake_instance = {"name": "fake_instance"}
        self.assertFalse(self.vmops._pv_driver_version_reported(
            fake_instance, "vm_ref"))

    @mock.patch.object(vmops.VMOps, '_read_from_xenstore')
    def test_pv_driver_version_reported(self, fake_read_from_xenstore):
        fake_read_from_xenstore.return_value = '6.2.0'
        fake_instance = {"name": "fake_instance"}
        self.assertTrue(self.vmops._pv_driver_version_reported(
            fake_instance, "vm_ref"))

    @mock.patch.object(vmops.VMOps, '_read_from_xenstore')
    def test_pv_device_reported(self, fake_read_from_xenstore):
        with mock.patch.object(self._session.VM, 'get_record') as fake_vm_rec:
            fake_vm_rec.return_value = {'VIFs': 'fake-vif-object'}
            with mock.patch.object(self._session, 'call_xenapi') as fake_call:
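                # A positive device count read from the xenstore means the
                # PV device is reported; '0' or '"None"' (see the tests
                # below) does not.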
                fake_call.return_value = {'device': '0'}
                fake_read_from_xenstore.return_value = '4'
                fake_instance = {"name": "fake_instance"}
                self.assertTrue(self.vmops._pv_device_reported(
                    fake_instance, "vm_ref"))

    @mock.patch.object(vmops.VMOps, '_read_from_xenstore')
    def test_pv_device_not_reported(self, fake_read_from_xenstore):
        with mock.patch.object(self._session.VM, 'get_record') as fake_vm_rec:
            fake_vm_rec.return_value = {'VIFs': 'fake-vif-object'}
            with mock.patch.object(self._session, 'call_xenapi') as fake_call:
                fake_call.return_value = {'device': '0'}
                fake_read_from_xenstore.return_value = '0'
                fake_instance = {"name": "fake_instance"}
                self.assertFalse(self.vmops._pv_device_reported(
                    fake_instance, "vm_ref"))

    @mock.patch.object(vmops.VMOps, '_read_from_xenstore')
    def test_pv_device_None_reported(self, fake_read_from_xenstore):
        with mock.patch.object(self._session.VM, 'get_record') as fake_vm_rec:
            fake_vm_rec.return_value = {'VIFs': 'fake-vif-object'}
            with mock.patch.object(self._session, 'call_xenapi') as fake_call:
                fake_call.return_value = {'device': '0'}
                fake_read_from_xenstore.return_value = '"None"'
                fake_instance = {"name": "fake_instance"}
                self.assertFalse(self.vmops._pv_device_reported(
                    fake_instance, "vm_ref"))

    @mock.patch.object(vmops.VMOps, '_write_to_xenstore')
    def test_write_fake_pv_version(self, fake_write_to_xenstore):
        fake_write_to_xenstore.return_value = 'fake_return'
        fake_instance = {"name": "fake_instance"}
        with mock.patch.object(self._session, 'product_version') as version:
            version.return_value = ('6', '2', '0')
            self.assertIsNone(self.vmops._write_fake_pv_version(
                fake_instance, "vm_ref"))


class LiveMigrateHelperTestCase(VMOpsTestBase):
    def test_connect_block_device_volumes_none(self):
        self.assertEqual({}, self.vmops.connect_block_device_volumes(None))

    @mock.patch.object(volumeops.VolumeOps, "connect_volume")
    def test_connect_block_device_volumes_calls_connect(self, mock_connect):
        with mock.patch.object(self.vmops._session,
                               "call_xenapi") as mock_session:
            mock_connect.return_value = ("sr_uuid", None)
            mock_session.return_value = "sr_ref"
            bdm = {"connection_info": "c_info"}
            bdi = {"block_device_mapping": [bdm]}
            result = self.vmops.connect_block_device_volumes(bdi)

            self.assertEqual({'sr_uuid': 'sr_ref'}, result)

            mock_connect.assert_called_once_with("c_info")
            mock_session.assert_called_once_with("SR.get_by_uuid", "sr_uuid")

    @mock.patch.object(volumeops.VolumeOps, "connect_volume")
    @mock.patch.object(volume_utils, 'forget_sr')
    def test_connect_block_device_volumes_calls_forget_sr(self, mock_forget,
                                                          mock_connect):
        bdms = [{'connection_info': 'info1'},
                {'connection_info': 'info2'}]

        def fake_connect(connection_info):
            expected = bdms[mock_connect.call_count - 1]['connection_info']
            self.assertEqual(expected, connection_info)

            if mock_connect.call_count == 2:
                raise exception.VolumeDriverNotFound(driver_type='123')

            return ('sr_uuid_1', None)

        def fake_call_xenapi(method, uuid):
            self.assertEqual('sr_uuid_1', uuid)
            return 'sr_ref_1'

        mock_connect.side_effect = fake_connect

        with mock.patch.object(self.vmops._session, "call_xenapi",
                               side_effect=fake_call_xenapi):
            self.assertRaises(exception.VolumeDriverNotFound,
                              self.vmops.connect_block_device_volumes,
                              {'block_device_mapping': bdms})
            mock_forget.assert_called_once_with(self.vmops._session,
                                                'sr_ref_1')

    def _call_live_migrate_command_with_migrate_send_data(self,
                                                          migrate_data):
        command_name = 'test_command'
        vm_ref = "vm_ref"

        def side_effect(method, *args):
            if method == "SR.get_by_uuid":
                return "sr_ref_new"
            xmlrpclib.dumps(args, method, allow_none=1)
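        # Drive _call_live_migrate_command with the given migrate_data and
        # check the vdi/vif maps that end up in the final XenAPI call.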
        with mock.patch.object(self.vmops,
                               "_generate_vdi_map") as mock_gen_vdi_map, \
                mock.patch.object(self.vmops._session,
                                  'call_xenapi') as mock_call_xenapi, \
                mock.patch.object(self.vmops,
                                  "_generate_vif_network_map") as \
                mock_vif_map:
            mock_call_xenapi.side_effect = side_effect
            mock_gen_vdi_map.side_effect = [
                {"vdi": "sr_ref"}, {"vdi": "sr_ref_2"}]
            mock_vif_map.return_value = {"vif_ref1": "dest_net_ref"}

            self.vmops._call_live_migrate_command(command_name, vm_ref,
                                                  migrate_data)

            expect_vif_map = {}
            if 'vif_uuid_map' in migrate_data:
                expect_vif_map.update({"vif_ref1": "dest_net_ref"})
            expected_vdi_map = {'vdi': 'sr_ref'}
            if 'sr_uuid_map' in migrate_data:
                expected_vdi_map = {'vdi': 'sr_ref_2'}
            self.assertEqual(mock_call_xenapi.call_args_list[-1],
                mock.call(command_name, vm_ref,
                    migrate_data.migrate_send_data, True,
                    expected_vdi_map, expect_vif_map, {}))

            self.assertEqual(mock_gen_vdi_map.call_args_list[0],
                mock.call(migrate_data.destination_sr_ref, vm_ref))
            if 'sr_uuid_map' in migrate_data:
                self.assertEqual(mock_gen_vdi_map.call_args_list[1],
                    mock.call(migrate_data.sr_uuid_map["sr_uuid2"], vm_ref,
                              "sr_ref_new"))

    def test_call_live_migrate_command_with_full_data(self):
        migrate_data = objects.XenapiLiveMigrateData()
        migrate_data.migrate_send_data = {"foo": "bar"}
        migrate_data.destination_sr_ref = "sr_ref"
        migrate_data.sr_uuid_map = {"sr_uuid2": "sr_ref_3"}
        migrate_data.vif_uuid_map = {"vif_id": "dest_net_ref"}
        self._call_live_migrate_command_with_migrate_send_data(migrate_data)

    def test_call_live_migrate_command_with_no_sr_uuid_map(self):
        migrate_data = objects.XenapiLiveMigrateData()
        migrate_data.migrate_send_data = {"foo": "baz"}
        migrate_data.destination_sr_ref = "sr_ref"
        self._call_live_migrate_command_with_migrate_send_data(migrate_data)

    def test_call_live_migrate_command_with_no_migrate_send_data(self):
        migrate_data = objects.XenapiLiveMigrateData()
        self.assertRaises(exception.InvalidParameterValue,
            self._call_live_migrate_command_with_migrate_send_data,
            migrate_data)

    def test_generate_vif_network_map(self):
        with mock.patch.object(self._session.VIF,
                               'get_other_config') as mock_other_config, \
                mock.patch.object(self._session.VM,
                                  'get_VIFs') as mock_get_vif:
            mock_other_config.side_effect = [{'neutron-port-id': 'vif_id_a'},
                                             {'neutron-port-id': 'vif_id_b'}]
            mock_get_vif.return_value = ['vif_ref1', 'vif_ref2']
            vif_uuid_map = {'vif_id_b': 'dest_net_ref2',
                            'vif_id_a': 'dest_net_ref1'}

            vif_map = self.vmops._generate_vif_network_map('vm_ref',
                                                           vif_uuid_map)

            expected = {'vif_ref1': 'dest_net_ref1',
                        'vif_ref2': 'dest_net_ref2'}
            self.assertEqual(vif_map, expected)

    def test_generate_vif_network_map_default_net(self):
        with mock.patch.object(self._session.VIF,
                               'get_other_config') as mock_other_config, \
                mock.patch.object(self._session.VM,
                                  'get_VIFs') as mock_get_vif:
            mock_other_config.side_effect = [{'neutron-port-id': 'vif_id_a'},
                                             {'neutron-port-id': 'vif_id_b'}]
            mock_get_vif.return_value = ['vif_ref1']
            vif_uuid_map = {'': 'default_net_ref'}

            vif_map = self.vmops._generate_vif_network_map('vm_ref',
                                                           vif_uuid_map)

            expected = {'vif_ref1': 'default_net_ref'}
            self.assertEqual(vif_map, expected)

    def test_generate_vif_network_map_exception(self):
        with mock.patch.object(self._session.VIF,
                               'get_other_config') as mock_other_config, \
                mock.patch.object(self._session.VM,
                                  'get_VIFs') as mock_get_vif:
            mock_other_config.side_effect = [{'neutron-port-id': 'vif_id_a'},
                                             {'neutron-port-id': 'vif_id_b'}]
            mock_get_vif.return_value = ['vif_ref1', 'vif_ref2']
            vif_uuid_map = {'vif_id_c': 'dest_net_ref2',
                            'vif_id_d': 'dest_net_ref1'}
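            # Neither vif_id_a nor vif_id_b appears in vif_uuid_map, so the
            # lookup must fail with MigrationError.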
            self.assertRaises(exception.MigrationError,
                              self.vmops._generate_vif_network_map,
                              'vm_ref', vif_uuid_map)

    def test_generate_vif_network_map_exception_no_iface(self):
        with mock.patch.object(self._session.VIF,
                               'get_other_config') as mock_other_config, \
                mock.patch.object(self._session.VM,
                                  'get_VIFs') as mock_get_vif:
            mock_other_config.return_value = {}
            mock_get_vif.return_value = ['vif_ref1']
            vif_uuid_map = {}
            self.assertRaises(exception.MigrationError,
                              self.vmops._generate_vif_network_map,
                              'vm_ref', vif_uuid_map)

    def test_delete_networks_and_bridges(self):
        self.vmops.vif_driver = mock.Mock()
        network_info = ['fake_vif']
        self.vmops._delete_networks_and_bridges('fake_instance',
                                                network_info)
        self.vmops.vif_driver.delete_network_and_bridge.\
            assert_called_once_with('fake_instance', 'fake_vif')

    def test_create_interim_networks(self):
        class FakeVifDriver(object):
            def create_vif_interim_network(self, vif):
                if vif['id'] == "vif_1":
                    return "network_ref_1"
                if vif['id'] == "vif_2":
                    return "network_ref_2"

        network_info = [{'id': "vif_1"}, {'id': 'vif_2'}]
        self.vmops.vif_driver = FakeVifDriver()
        vif_map = self.vmops.create_interim_networks(network_info)
        self.assertEqual(vif_map, {'vif_1': 'network_ref_1',
                                   'vif_2': 'network_ref_2'})


class RollbackLiveMigrateDestinationTestCase(VMOpsTestBase):
    @mock.patch.object(vmops.VMOps, '_delete_networks_and_bridges')
    @mock.patch.object(volume_utils, 'find_sr_by_uuid',
                       return_value='sr_ref')
    @mock.patch.object(volume_utils, 'forget_sr')
    def test_rollback_dest_calls_sr_forget(self, forget_sr, sr_ref,
                                           delete_networks_bridges):
        block_device_info = {'block_device_mapping': [{'connection_info':
                             {'data': {'volume_id': 'fake-uuid',
                                       'target_iqn': 'fake-iqn',
                                       'target_portal': 'fake-portal'}}}]}
        network_info = [{'id': 'vif1'}]

        self.vmops.rollback_live_migration_at_destination(
            'instance', network_info, block_device_info)

        forget_sr.assert_called_once_with(self.vmops._session, 'sr_ref')
        delete_networks_bridges.assert_called_once_with(
            'instance', [{'id': 'vif1'}])

    @mock.patch.object(vmops.VMOps, '_delete_networks_and_bridges')
    @mock.patch.object(volume_utils, 'forget_sr')
    @mock.patch.object(volume_utils, 'find_sr_by_uuid',
                       side_effect=test.TestingException)
    def test_rollback_dest_handles_exception(self, find_sr_ref, forget_sr,
                                             delete_networks_bridges):
        block_device_info = {'block_device_mapping': [{'connection_info':
                             {'data': {'volume_id': 'fake-uuid',
                                       'target_iqn': 'fake-iqn',
                                       'target_portal': 'fake-portal'}}}]}
        network_info = [{'id': 'vif1'}]

        self.vmops.rollback_live_migration_at_destination(
            'instance', network_info, block_device_info)

        self.assertFalse(forget_sr.called)
        delete_networks_bridges.assert_called_once_with(
            'instance', [{'id': 'vif1'}])


@mock.patch.object(vmops.VMOps, '_resize_ensure_vm_is_shutdown')
@mock.patch.object(vmops.VMOps, '_apply_orig_vm_name_label')
@mock.patch.object(vmops.VMOps, '_update_instance_progress')
@mock.patch.object(vm_utils, 'get_vdi_for_vm_safely')
@mock.patch.object(vm_utils, 'resize_disk')
@mock.patch.object(vm_utils, 'migrate_vhd')
@mock.patch.object(vm_utils, 'destroy_vdi')
class MigrateDiskResizingDownTestCase(VMOpsTestBase):
    def test_migrate_disk_resizing_down_works_no_ephemeral(
            self, mock_destroy_vdi, mock_migrate_vhd, mock_resize_disk,
            mock_get_vdi_for_vm_safely, mock_update_instance_progress,
            mock_apply_orig_vm_name_label,
            mock_resize_ensure_vm_is_shutdown):
        context = "ctx"
        instance = {"name": "fake", "uuid": "uuid"}
        dest = "dest"
        vm_ref = "vm_ref"
        sr_path = "sr_path"
        instance_type = dict(root_gb=1)
        old_vdi_ref = "old_ref"
        new_vdi_ref = "new_ref"
        new_vdi_uuid = "new_uuid"

        mock_get_vdi_for_vm_safely.return_value = (old_vdi_ref, None)
        mock_resize_disk.return_value = (new_vdi_ref, new_vdi_uuid)

        self.vmops._migrate_disk_resizing_down(context, instance, dest,
                                               instance_type, vm_ref,
                                               sr_path)

        mock_get_vdi_for_vm_safely.assert_called_once_with(
            self.vmops._session, vm_ref)
        mock_resize_ensure_vm_is_shutdown.assert_called_once_with(
            instance, vm_ref)
        mock_apply_orig_vm_name_label.assert_called_once_with(
            instance, vm_ref)
        mock_resize_disk.assert_called_once_with(
            self.vmops._session, instance, old_vdi_ref, instance_type)
        mock_migrate_vhd.assert_called_once_with(
            self.vmops._session, instance, new_vdi_uuid, dest, sr_path, 0)
        mock_destroy_vdi.assert_called_once_with(
            self.vmops._session, new_vdi_ref)

        prog_expected = [
            mock.call(context, instance, 1, 5),
            mock.call(context, instance, 2, 5),
            mock.call(context, instance, 3, 5),
            mock.call(context, instance, 4, 5)
            # 5/5: step to be executed by finish migration.
        ]
        self.assertEqual(prog_expected,
                         mock_update_instance_progress.call_args_list)


class GetVdisForInstanceTestCase(VMOpsTestBase):
    """Tests get_vdis_for_instance utility method."""
    def setUp(self):
        super(GetVdisForInstanceTestCase, self).setUp()
        self.context = context.get_admin_context()
        self.context.auth_token = 'auth_token'
        self.session = mock.Mock()
        self.vmops._session = self.session
        self.instance = fake_instance.fake_instance_obj(self.context)
        self.name_label = 'name'
        self.image = 'fake_image_id'

    @mock.patch.object(volumeops.VolumeOps, "connect_volume",
                       return_value=("sr", "vdi_uuid"))
    def test_vdis_for_instance_bdi_password_scrubbed(self, get_uuid_mock):
        # setup fake data
        data = {'name_label': self.name_label,
                'sr_uuid': 'fake',
                'auth_password': 'scrubme'}
        bdm = [{'mount_device': '/dev/vda',
                'connection_info': {'data': data}}]
        bdi = {'root_device_name': 'vda',
               'block_device_mapping': bdm}

        # Tests that the connection data logged by _get_vdis_for_instance
        # is sanitized for passwords.
        def fake_debug(*args, **kwargs):
            if 'auth_password' in args[0]:
                self.assertNotIn('scrubme', args[0])
                fake_debug.matched = True

        fake_debug.matched = False

        with mock.patch.object(vmops.LOG, 'debug',
                               side_effect=fake_debug) as debug_mock:
            vdis = self.vmops._get_vdis_for_instance(
                self.context, self.instance, self.name_label, self.image,
                image_type=4, block_device_info=bdi)
            self.assertEqual(1, len(vdis))
            get_uuid_mock.assert_called_once_with({"data": data})
            # we don't care what the log message is, we just want to make
            # sure our stub method is called which asserts the password
            # is scrubbed
            self.assertTrue(debug_mock.called)
            self.assertTrue(fake_debug.matched)


class AttachInterfaceTestCase(VMOpsTestBase):
    """Test VIF hot plug/unplug"""
    def setUp(self):
        super(AttachInterfaceTestCase, self).setUp()
        self.vmops.vif_driver = mock.Mock()
        self.fake_vif = {'id': '12345'}
        self.fake_instance = mock.Mock()
        self.fake_instance.uuid = '6478'

    @mock.patch.object(vmops.VMOps, '_get_vm_opaque_ref')
    def test_attach_interface(self, mock_get_vm_opaque_ref):
        mock_get_vm_opaque_ref.return_value = 'fake_vm_ref'
        with mock.patch.object(self._session.VM,
                               'get_allowed_VIF_devices') as fake_devices:
            fake_devices.return_value = [2, 3, 4]
            self.vmops.attach_interface(self.fake_instance, self.fake_vif)
            fake_devices.assert_called_once_with('fake_vm_ref')
            mock_get_vm_opaque_ref.assert_called_once_with(
                self.fake_instance)
        self.vmops.vif_driver.plug.assert_called_once_with(
            self.fake_instance, self.fake_vif, vm_ref='fake_vm_ref',
            device=2)

    @mock.patch.object(vmops.VMOps, '_get_vm_opaque_ref')
    def test_attach_interface_no_devices(self, mock_get_vm_opaque_ref):
        mock_get_vm_opaque_ref.return_value = 'fake_vm_ref'
        with mock.patch.object(self._session.VM,
                               'get_allowed_VIF_devices') as fake_devices:
            fake_devices.return_value = []
            self.assertRaises(exception.InterfaceAttachFailed,
                              self.vmops.attach_interface,
                              self.fake_instance, self.fake_vif)

    @mock.patch.object(vmops.VMOps, '_get_vm_opaque_ref')
    def test_attach_interface_plug_failed(self, mock_get_vm_opaque_ref):
        mock_get_vm_opaque_ref.return_value = 'fake_vm_ref'
        with mock.patch.object(self._session.VM,
                               'get_allowed_VIF_devices') as fake_devices:
            fake_devices.return_value = [2, 3, 4]
            self.vmops.vif_driver.plug.side_effect = \
                exception.VirtualInterfacePlugException('Failed to plug VIF')
            self.assertRaises(exception.VirtualInterfacePlugException,
                              self.vmops.attach_interface,
                              self.fake_instance, self.fake_vif)
            self.vmops.vif_driver.plug.assert_called_once_with(
                self.fake_instance, self.fake_vif, vm_ref='fake_vm_ref',
                device=2)
            self.vmops.vif_driver.unplug.assert_called_once_with(
                self.fake_instance, self.fake_vif, 'fake_vm_ref')

    @mock.patch.object(vmops.VMOps, '_get_vm_opaque_ref')
    def test_attach_interface_reraise_exception(self,
                                                mock_get_vm_opaque_ref):
        mock_get_vm_opaque_ref.return_value = 'fake_vm_ref'
        with mock.patch.object(self._session.VM,
                               'get_allowed_VIF_devices') as fake_devices:
            fake_devices.return_value = [2, 3, 4]
            self.vmops.vif_driver.plug.side_effect = \
                exception.VirtualInterfacePlugException('Failed to plug VIF')
            self.vmops.vif_driver.unplug.side_effect = \
                exception.VirtualInterfaceUnplugException(
                    'Failed to unplug VIF')
            ex = self.assertRaises(exception.VirtualInterfacePlugException,
                                   self.vmops.attach_interface,
                                   self.fake_instance, self.fake_vif)
            self.assertEqual('Failed to plug VIF', six.text_type(ex))
            self.vmops.vif_driver.plug.assert_called_once_with(
                self.fake_instance, self.fake_vif, vm_ref='fake_vm_ref',
                device=2)
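            # Even though unplug itself raised, it must still have been
            # attempted as part of the plug-failure cleanup.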
            self.vmops.vif_driver.unplug.assert_called_once_with(
                self.fake_instance, self.fake_vif, 'fake_vm_ref')

    @mock.patch.object(vmops.VMOps, '_get_vm_opaque_ref')
    def test_detach_interface(self, mock_get_vm_opaque_ref):
        mock_get_vm_opaque_ref.return_value = 'fake_vm_ref'
        self.vmops.detach_interface(self.fake_instance, self.fake_vif)
        mock_get_vm_opaque_ref.assert_called_once_with(self.fake_instance)
        self.vmops.vif_driver.unplug.assert_called_once_with(
            self.fake_instance, self.fake_vif, 'fake_vm_ref')

    @mock.patch.object(vmops.VMOps, '_get_vm_opaque_ref')
    def test_detach_interface_exception(self, mock_get_vm_opaque_ref):
        mock_get_vm_opaque_ref.return_value = 'fake_vm_ref'
        self.vmops.vif_driver.unplug.side_effect = \
            exception.VirtualInterfaceUnplugException('Failed to unplug VIF')

        self.assertRaises(exception.VirtualInterfaceUnplugException,
                          self.vmops.detach_interface,
                          self.fake_instance, self.fake_vif)
nova-17.0.1/nova/tests/unit/virt/xenapi/test_vm_utils.py0000666000175000017500000031071713250073136023407 0ustar zuulzuul00000000000000
# Copyright 2013 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import contextlib
import time

from eventlet import greenthread
import mock
from mox3 import mox
import os_xenapi
from oslo_concurrency import lockutils
from oslo_concurrency import processutils
from oslo_config import fixture as config_fixture
from oslo_utils import fixture as utils_fixture
from oslo_utils import timeutils
from oslo_utils import units
from oslo_utils import uuidutils
import six

from nova.compute import flavors
from nova.compute import power_state
import nova.conf
from nova import context
from nova import exception
from nova import objects
from nova.objects import fields as obj_fields
from nova import test
from nova.tests.unit import fake_flavor
from nova.tests.unit import fake_instance
from nova.tests.unit.objects import test_flavor
from nova.tests.unit.virt.xenapi import stubs
from nova.tests import uuidsentinel as uuids
from nova import utils
from nova.virt import hardware
from nova.virt.xenapi import driver as xenapi_conn
from nova.virt.xenapi import fake
from nova.virt.xenapi import vm_utils

CONF = nova.conf.CONF

XENSM_TYPE = 'xensm'
ISCSI_TYPE = 'iscsi'


def get_fake_connection_data(sr_type):
    fakes = {XENSM_TYPE: {'sr_uuid': 'falseSR',
                          'name_label': 'fake_storage',
                          'name_description': 'test purposes',
                          'server': 'myserver',
                          'serverpath': '/local/scratch/myname',
                          'sr_type': 'nfs',
                          'introduce_sr_keys': ['server',
                                                'serverpath',
                                                'sr_type'],
                          'vdi_uuid': 'falseVDI'},
             ISCSI_TYPE: {'volume_id': 'fake_volume_id',
                          'target_lun': 1,
                          'target_iqn': 'fake_iqn:volume-fake_volume_id',
                          'target_portal': u'localhost:3260',
                          'target_discovered': False},
             }
    return fakes[sr_type]


@contextlib.contextmanager
def contextified(result):
    yield result


def _fake_noop(*args, **kwargs):
    return


class VMUtilsTestBase(stubs.XenAPITestBaseNoDB):
    pass


class LookupTestCase(VMUtilsTestBase):
    def setUp(self):
        super(LookupTestCase, self).setUp()
Session') self.name_label = 'my_vm' def _do_mock(self, result): self.session.call_xenapi( "VM.get_by_name_label", self.name_label).AndReturn(result) self.mox.ReplayAll() def test_normal(self): self._do_mock(['x']) result = vm_utils.lookup(self.session, self.name_label) self.assertEqual('x', result) def test_no_result(self): self._do_mock([]) result = vm_utils.lookup(self.session, self.name_label) self.assertIsNone(result) def test_too_many(self): self._do_mock(['a', 'b']) self.assertRaises(exception.InstanceExists, vm_utils.lookup, self.session, self.name_label) def test_rescue_none(self): self.session.call_xenapi( "VM.get_by_name_label", self.name_label + '-rescue').AndReturn([]) self._do_mock(['x']) result = vm_utils.lookup(self.session, self.name_label, check_rescue=True) self.assertEqual('x', result) def test_rescue_found(self): self.session.call_xenapi( "VM.get_by_name_label", self.name_label + '-rescue').AndReturn(['y']) self.mox.ReplayAll() result = vm_utils.lookup(self.session, self.name_label, check_rescue=True) self.assertEqual('y', result) def test_rescue_too_many(self): self.session.call_xenapi( "VM.get_by_name_label", self.name_label + '-rescue').AndReturn(['a', 'b', 'c']) self.mox.ReplayAll() self.assertRaises(exception.InstanceExists, vm_utils.lookup, self.session, self.name_label, check_rescue=True) class GenerateConfigDriveTestCase(VMUtilsTestBase): @mock.patch.object(vm_utils, 'safe_find_sr') @mock.patch.object(vm_utils, "create_vdi", return_value='vdi_ref') @mock.patch.object(vm_utils.instance_metadata, "InstanceMetadata") @mock.patch.object(vm_utils.configdrive, 'ConfigDriveBuilder') @mock.patch.object(vm_utils.utils, 'execute') @mock.patch.object(vm_utils.volume_utils, 'stream_to_vdi') @mock.patch.object(vm_utils.os.path, 'getsize', return_value=100) @mock.patch.object(vm_utils, 'create_vbd', return_value='vbd_ref') @mock.patch.object(vm_utils.utils, 'tempdir') def test_no_admin_pass(self, mock_tmpdir, mock_create_vbd, mock_size, mock_stream, mock_execute, mock_builder, mock_instance_metadata, mock_create_vdi, mock_find_sr): mock_tmpdir.return_value.__enter__.return_value = '/mock' with mock.patch.object(six.moves.builtins, 'open') as mock_open: mock_open.return_value.__enter__.return_value = 'open_fd' vm_utils.generate_configdrive('session', 'context', 'instance', 'vm_ref', 'userdevice', 'network_info') mock_size.assert_called_with('/mock/configdrive.vhd') mock_open.assert_called_with('/mock/configdrive.vhd') mock_execute.assert_called_with('qemu-img', 'convert', '-Ovpc', '/mock/configdrive', '/mock/configdrive.vhd') mock_instance_metadata.assert_called_with( 'instance', content=None, extra_md={}, network_info='network_info', request_context='context') mock_stream.assert_called_with('session', 'instance', 'vhd', 'open_fd', 100, 'vdi_ref') @mock.patch.object(vm_utils, "destroy_vdi") @mock.patch.object(vm_utils, 'safe_find_sr') @mock.patch.object(vm_utils, "create_vdi", return_value='vdi_ref') @mock.patch.object(vm_utils.instance_metadata, "InstanceMetadata", side_effect=test.TestingException) def test_vdi_cleaned_up(self, mock_instance_metadata, mock_create, mock_find_sr, mock_destroy): self.assertRaises(test.TestingException, vm_utils.generate_configdrive, 'session', None, None, None, None, None) mock_destroy.assert_called_once_with('session', 'vdi_ref') class XenAPIGetUUID(VMUtilsTestBase): def test_get_this_vm_uuid_new_kernel(self): self.mox.StubOutWithMock(vm_utils, '_get_sys_hypervisor_uuid') vm_utils._get_sys_hypervisor_uuid().AndReturn( 
'2f46f0f5-f14c-ef1b-1fac-9eeca0888a3f') self.mox.ReplayAll() self.assertEqual('2f46f0f5-f14c-ef1b-1fac-9eeca0888a3f', vm_utils.get_this_vm_uuid(None)) self.mox.VerifyAll() def test_get_this_vm_uuid_old_kernel_reboot(self): self.mox.StubOutWithMock(vm_utils, '_get_sys_hypervisor_uuid') self.mox.StubOutWithMock(utils, 'execute') vm_utils._get_sys_hypervisor_uuid().AndRaise( IOError(13, 'Permission denied')) utils.execute('xenstore-read', 'domid', run_as_root=True).AndReturn( ('27', '')) utils.execute('xenstore-read', '/local/domain/27/vm', run_as_root=True).AndReturn( ('/vm/2f46f0f5-f14c-ef1b-1fac-9eeca0888a3f', '')) self.mox.ReplayAll() self.assertEqual('2f46f0f5-f14c-ef1b-1fac-9eeca0888a3f', vm_utils.get_this_vm_uuid(None)) self.mox.VerifyAll() class FakeSession(object): def call_xenapi(self, *args): pass def call_plugin(self, *args): pass def call_plugin_serialized(self, plugin, fn, *args, **kwargs): pass def call_plugin_serialized_with_retry(self, plugin, fn, num_retries, callback, *args, **kwargs): pass class FetchVhdImageTestCase(VMUtilsTestBase): def setUp(self): super(FetchVhdImageTestCase, self).setUp() self.context = context.get_admin_context() self.context.auth_token = 'auth_token' self.session = FakeSession() self.instance = {"uuid": "uuid"} self.flags(group='glance', api_servers=['http://localhost:9292']) self.mox.StubOutWithMock(vm_utils, '_make_uuid_stack') vm_utils._make_uuid_stack().AndReturn(["uuid_stack"]) self.mox.StubOutWithMock(vm_utils, 'get_sr_path') vm_utils.get_sr_path(self.session).AndReturn('sr_path') def _stub_glance_download_vhd(self, raise_exc=None): self.mox.StubOutWithMock( self.session, 'call_plugin_serialized_with_retry') func = self.session.call_plugin_serialized_with_retry( 'glance.py', 'download_vhd2', 0, mox.IgnoreArg(), mox.IgnoreArg(), extra_headers={'X-Auth-Token': 'auth_token', 'X-Roles': '', 'X-Tenant-Id': None, 'X-User-Id': None, 'X-Identity-Status': 'Confirmed'}, image_id='image_id', uuid_stack=["uuid_stack"], sr_path='sr_path') if raise_exc: func.AndRaise(raise_exc) else: func.AndReturn({'root': {'uuid': 'vdi'}}) def test_fetch_vhd_image_works_with_glance(self): self._stub_glance_download_vhd() self.mox.StubOutWithMock(vm_utils, 'safe_find_sr') vm_utils.safe_find_sr(self.session).AndReturn("sr") self.mox.StubOutWithMock(vm_utils, '_scan_sr') vm_utils._scan_sr(self.session, "sr") self.mox.StubOutWithMock(vm_utils, '_check_vdi_size') vm_utils._check_vdi_size( self.context, self.session, self.instance, "vdi") self.mox.ReplayAll() self.assertEqual("vdi", vm_utils._fetch_vhd_image(self.context, self.session, self.instance, 'image_id')['root']['uuid']) self.mox.VerifyAll() def test_fetch_vhd_image_cleans_up_vdi_on_fail(self): self._stub_glance_download_vhd() self.mox.StubOutWithMock(vm_utils, 'safe_find_sr') vm_utils.safe_find_sr(self.session).AndReturn("sr") self.mox.StubOutWithMock(vm_utils, '_scan_sr') vm_utils._scan_sr(self.session, "sr") self.mox.StubOutWithMock(vm_utils, '_check_vdi_size') vm_utils._check_vdi_size(self.context, self.session, self.instance, "vdi").AndRaise(exception.FlavorDiskSmallerThanImage( flavor_size=0, image_size=1)) self.mox.StubOutWithMock(self.session, 'call_xenapi') self.session.call_xenapi("VDI.get_by_uuid", "vdi").AndReturn("ref") self.mox.StubOutWithMock(vm_utils, 'destroy_vdi') vm_utils.destroy_vdi(self.session, "ref").AndRaise(exception.StorageError(reason="")) self.mox.ReplayAll() self.assertRaises(exception.FlavorDiskSmallerThanImage, vm_utils._fetch_vhd_image, self.context, self.session, self.instance, 
'image_id') self.mox.VerifyAll() def test_fetch_vhd_image_download_exception(self): self._stub_glance_download_vhd(raise_exc=RuntimeError) self.mox.ReplayAll() self.assertRaises(RuntimeError, vm_utils._fetch_vhd_image, self.context, self.session, self.instance, 'image_id') self.mox.VerifyAll() class TestImageCompression(VMUtilsTestBase): def test_image_compression(self): # Testing for nova.conf, too low, negative, and a correct value. self.assertIsNone(vm_utils.get_compression_level()) self.flags(image_compression_level=6, group='xenserver') self.assertEqual(vm_utils.get_compression_level(), 6) class ResizeHelpersTestCase(VMUtilsTestBase): def setUp(self): super(ResizeHelpersTestCase, self).setUp() self.context = context.RequestContext('user', 'project') def test_repair_filesystem(self): self.mox.StubOutWithMock(utils, 'execute') utils.execute('e2fsck', '-f', "-y", "fakepath", run_as_root=True, check_exit_code=[0, 1, 2]).AndReturn( ("size is: 42", "")) self.mox.ReplayAll() vm_utils._repair_filesystem("fakepath") def _call_tune2fs_remove_journal(self, path): utils.execute("tune2fs", "-O ^has_journal", path, run_as_root=True) def _call_tune2fs_add_journal(self, path): utils.execute("tune2fs", "-j", path, run_as_root=True) def _call_parted_mkpart(self, path, start, end): utils.execute('parted', '--script', path, 'rm', '1', run_as_root=True) utils.execute('parted', '--script', path, 'mkpart', 'primary', '%ds' % start, '%ds' % end, run_as_root=True) def _call_parted_boot_flag(self, path): utils.execute('parted', '--script', path, 'set', '1', 'boot', 'on', run_as_root=True) @mock.patch('nova.privsep.fs.resize_partition') @mock.patch.object(vm_utils, '_repair_filesystem') @mock.patch.object(utils, 'execute') def test_resize_part_and_fs_down_succeeds(self, mock_execute, mock_repair, mock_resize): dev_path = '/dev/fake' partition_path = '%s1' % dev_path vm_utils._resize_part_and_fs('fake', 0, 20, 10, 'boot') mock_execute.assert_has_calls([ mock.call('tune2fs', '-O ^has_journal', partition_path, run_as_root=True), mock.call('resize2fs', partition_path, '10s', run_as_root=True), mock.call('tune2fs', '-j', partition_path, run_as_root=True)]) mock_resize.assert_has_calls([ mock.call(dev_path, 0, 9, True)]) def test_log_progress_if_required(self): self.mox.StubOutWithMock(vm_utils.LOG, "debug") vm_utils.LOG.debug("Sparse copy in progress, " "%(complete_pct).2f%% complete. 
" "%(left)s bytes left to copy", {"complete_pct": 50.0, "left": 1}) current = timeutils.utcnow() time_fixture = self.useFixture(utils_fixture.TimeFixture(current)) time_fixture.advance_time_seconds( vm_utils.PROGRESS_INTERVAL_SECONDS + 1) self.mox.ReplayAll() vm_utils._log_progress_if_required(1, current, 2) def test_log_progress_if_not_required(self): self.mox.StubOutWithMock(vm_utils.LOG, "debug") current = timeutils.utcnow() time_fixture = self.useFixture(utils_fixture.TimeFixture(current)) time_fixture.advance_time_seconds( vm_utils.PROGRESS_INTERVAL_SECONDS - 1) self.mox.ReplayAll() vm_utils._log_progress_if_required(1, current, 2) def test_resize_part_and_fs_down_fails_disk_too_big(self): self.mox.StubOutWithMock(vm_utils, "_repair_filesystem") self.mox.StubOutWithMock(utils, 'execute') dev_path = "/dev/fake" partition_path = "%s1" % dev_path new_sectors = 10 vm_utils._repair_filesystem(partition_path) self._call_tune2fs_remove_journal(partition_path) mobj = utils.execute("resize2fs", partition_path, "%ss" % new_sectors, run_as_root=True) mobj.AndRaise(processutils.ProcessExecutionError) self.mox.ReplayAll() self.assertRaises(exception.ResizeError, vm_utils._resize_part_and_fs, "fake", 0, 20, 10, "boot") @mock.patch('nova.privsep.fs.resize_partition') @mock.patch.object(vm_utils, '_repair_filesystem') @mock.patch.object(utils, 'execute') def test_resize_part_and_fs_up_succeeds(self, mock_execute, mock_repair, mock_resize): dev_path = '/dev/fake' partition_path = '%s1' % dev_path vm_utils._resize_part_and_fs('fake', 0, 20, 30, '') mock_execute.assert_has_calls([ mock.call('tune2fs', '-O ^has_journal', partition_path, run_as_root=True), mock.call('resize2fs', partition_path, run_as_root=True), mock.call('tune2fs', '-j', partition_path, run_as_root=True)]) mock_resize.assert_has_calls([ mock.call(dev_path, 0, 29, False)]) def test_resize_disk_throws_on_zero_size(self): flavor = fake_flavor.fake_flavor_obj(self.context, root_gb=0) self.assertRaises(exception.ResizeError, vm_utils.resize_disk, "session", "instance", "vdi_ref", flavor) def test_auto_config_disk_returns_early_on_zero_size(self): vm_utils.try_auto_configure_disk("bad_session", "bad_vdi_ref", 0) class CheckVDISizeTestCase(VMUtilsTestBase): def setUp(self): super(CheckVDISizeTestCase, self).setUp() self.context = 'fakecontext' self.session = 'fakesession' self.instance = objects.Instance(uuid=uuids.fake) self.flavor = objects.Flavor() self.vdi_uuid = 'fakeuuid' def test_not_too_large(self): self.mox.StubOutWithMock(vm_utils, '_get_vdi_chain_size') vm_utils._get_vdi_chain_size(self.session, self.vdi_uuid).AndReturn(1073741824) self.mox.ReplayAll() with mock.patch.object(self.instance, 'get_flavor') as get: self.flavor.root_gb = 1 get.return_value = self.flavor vm_utils._check_vdi_size(self.context, self.session, self.instance, self.vdi_uuid) def test_too_large(self): self.mox.StubOutWithMock(vm_utils, '_get_vdi_chain_size') vm_utils._get_vdi_chain_size(self.session, self.vdi_uuid).AndReturn(11811160065) # 10GB overhead allowed self.mox.ReplayAll() with mock.patch.object(self.instance, 'get_flavor') as get: self.flavor.root_gb = 1 get.return_value = self.flavor self.assertRaises(exception.FlavorDiskSmallerThanImage, vm_utils._check_vdi_size, self.context, self.session, self.instance, self.vdi_uuid) def test_zero_root_gb_disables_check(self): with mock.patch.object(self.instance, 'get_flavor') as get: self.flavor.root_gb = 0 get.return_value = self.flavor vm_utils._check_vdi_size(self.context, self.session, self.instance, 
self.vdi_uuid) class GetInstanceForVdisForSrTestCase(VMUtilsTestBase): def setUp(self): super(GetInstanceForVdisForSrTestCase, self).setUp() self.fixture = self.useFixture(config_fixture.Config(lockutils.CONF)) self.fixture.config(disable_process_locking=True, group='oslo_concurrency') self.flags(instance_name_template='%d', firewall_driver='nova.virt.xenapi.firewall.' 'Dom0IptablesFirewallDriver') self.flags(connection_url='http://localhost', connection_password='test_pass', group='xenserver') def test_get_instance_vdis_for_sr(self): vm_ref = fake.create_vm("foo", "Running") sr_ref = fake.create_sr() vdi_1 = fake.create_vdi('vdiname1', sr_ref) vdi_2 = fake.create_vdi('vdiname2', sr_ref) for vdi_ref in [vdi_1, vdi_2]: fake.create_vbd(vm_ref, vdi_ref) stubs.stubout_session(self.stubs, fake.SessionBase) driver = xenapi_conn.XenAPIDriver(False) result = list(vm_utils.get_instance_vdis_for_sr( driver._session, vm_ref, sr_ref)) self.assertEqual([vdi_1, vdi_2], result) def test_get_instance_vdis_for_sr_no_vbd(self): vm_ref = fake.create_vm("foo", "Running") sr_ref = fake.create_sr() stubs.stubout_session(self.stubs, fake.SessionBase) driver = xenapi_conn.XenAPIDriver(False) result = list(vm_utils.get_instance_vdis_for_sr( driver._session, vm_ref, sr_ref)) self.assertEqual([], result) class VMRefOrRaiseVMFoundTestCase(VMUtilsTestBase): @mock.patch.object(vm_utils, 'lookup', return_value='ignored') def test_lookup_call(self, mock_lookup): vm_utils.vm_ref_or_raise('session', 'somename') mock_lookup.assert_called_once_with('session', 'somename') @mock.patch.object(vm_utils, 'lookup', return_value='vmref') def test_return_value(self, mock_lookup): self.assertEqual( 'vmref', vm_utils.vm_ref_or_raise('session', 'somename')) mock_lookup.assert_called_once_with('session', 'somename') class VMRefOrRaiseVMNotFoundTestCase(VMUtilsTestBase): @mock.patch.object(vm_utils, 'lookup', return_value=None) def test_exception_raised(self, mock_lookup): self.assertRaises( exception.InstanceNotFound, lambda: vm_utils.vm_ref_or_raise('session', 'somename') ) mock_lookup.assert_called_once_with('session', 'somename') @mock.patch.object(vm_utils, 'lookup', return_value=None) def test_exception_msg_contains_vm_name(self, mock_lookup): try: vm_utils.vm_ref_or_raise('session', 'somename') except exception.InstanceNotFound as e: self.assertIn('somename', six.text_type(e)) mock_lookup.assert_called_once_with('session', 'somename') @mock.patch.object(vm_utils, 'safe_find_sr', return_value='safe_find_sr') class CreateCachedImageTestCase(VMUtilsTestBase): def setUp(self): super(CreateCachedImageTestCase, self).setUp() self.session = stubs.get_fake_session() @mock.patch.object(vm_utils, '_clone_vdi', return_value='new_vdi_ref') def test_cached(self, mock_clone_vdi, mock_safe_find_sr): self.session.call_xenapi.side_effect = ['ext', {'vdi_ref': 2}, None, None, None, 'vdi_uuid'] self.assertEqual((False, {'root': {'uuid': 'vdi_uuid', 'file': None}}), vm_utils._create_cached_image('context', self.session, 'instance', 'name', 'uuid', vm_utils.ImageType.DISK_VHD)) @mock.patch.object(vm_utils, '_safe_copy_vdi', return_value='new_vdi_ref') def test_no_cow(self, mock_safe_copy_vdi, mock_safe_find_sr): self.flags(use_cow_images=False) self.session.call_xenapi.side_effect = ['ext', {'vdi_ref': 2}, None, None, None, 'vdi_uuid'] self.assertEqual((False, {'root': {'uuid': 'vdi_uuid', 'file': None}}), vm_utils._create_cached_image('context', self.session, 'instance', 'name', 'uuid', vm_utils.ImageType.DISK_VHD)) def test_no_cow_no_ext(self, 
mock_safe_find_sr): self.flags(use_cow_images=False) self.session.call_xenapi.side_effect = ['non-ext', {'vdi_ref': 2}, 'vdi_ref', None, None, None, 'vdi_uuid'] self.assertEqual((False, {'root': {'uuid': 'vdi_uuid', 'file': None}}), vm_utils._create_cached_image('context', self.session, 'instance', 'name', 'uuid', vm_utils.ImageType.DISK_VHD)) @mock.patch.object(vm_utils, '_clone_vdi', return_value='new_vdi_ref') @mock.patch.object(vm_utils, '_fetch_image', return_value={'root': {'uuid': 'vdi_uuid', 'file': None}}) def test_noncached(self, mock_fetch_image, mock_clone_vdi, mock_safe_find_sr): self.session.call_xenapi.side_effect = ['ext', {}, 'cache_vdi_ref', None, None, None, None, None, None, None, 'vdi_uuid'] self.assertEqual((True, {'root': {'uuid': 'vdi_uuid', 'file': None}}), vm_utils._create_cached_image('context', self.session, 'instance', 'name', 'uuid', vm_utils.ImageType.DISK_VHD)) class DestroyCachedImageTestCase(VMUtilsTestBase): def setUp(self): super(DestroyCachedImageTestCase, self).setUp() self.session = stubs.get_fake_session() @mock.patch.object(vm_utils, '_find_cached_images') @mock.patch.object(vm_utils, 'destroy_vdi') @mock.patch.object(vm_utils, '_walk_vdi_chain') @mock.patch.object(time, 'time') def test_destroy_cached_image_out_of_keep_days(self, mock_time, mock_walk_vdi_chain, mock_destroy_vdi, mock_find_cached_images): fake_cached_time = '0' mock_find_cached_images.return_value = {'fake_image_id': { 'vdi_ref': 'fake_vdi_ref', 'cached_time': fake_cached_time}} self.session.call_xenapi.return_value = 'fake_uuid' mock_walk_vdi_chain.return_value = ('just_one',) mock_time.return_value = 2 * 3600 * 24 fake_keep_days = 1 expected_return = set() expected_return.add('fake_uuid') uuid_return = vm_utils.destroy_cached_images(self.session, 'fake_sr_ref', False, False, fake_keep_days) mock_find_cached_images.assert_called_once() mock_walk_vdi_chain.assert_called_once() mock_time.assert_called() mock_destroy_vdi.assert_called_once() self.assertEqual(expected_return, uuid_return) @mock.patch.object(vm_utils, '_find_cached_images') @mock.patch.object(vm_utils, 'destroy_vdi') @mock.patch.object(vm_utils, '_walk_vdi_chain') @mock.patch.object(time, 'time') def test_destroy_cached_image(self, mock_time, mock_walk_vdi_chain, mock_destroy_vdi, mock_find_cached_images): fake_cached_time = '0' mock_find_cached_images.return_value = {'fake_image_id': { 'vdi_ref': 'fake_vdi_ref', 'cached_time': fake_cached_time}} self.session.call_xenapi.return_value = 'fake_uuid' mock_walk_vdi_chain.return_value = ('just_one',) mock_time.return_value = 2 * 3600 * 24 fake_keep_days = 1 expected_return = set() expected_return.add('fake_uuid') uuid_return = vm_utils.destroy_cached_images(self.session, 'fake_sr_ref', False, False, fake_keep_days) mock_find_cached_images.assert_called_once() mock_walk_vdi_chain.assert_called_once() mock_destroy_vdi.assert_called_once() self.assertEqual(expected_return, uuid_return) @mock.patch.object(vm_utils, '_find_cached_images') @mock.patch.object(vm_utils, 'destroy_vdi') @mock.patch.object(vm_utils, '_walk_vdi_chain') @mock.patch.object(time, 'time') def test_destroy_cached_image_cached_time_not_exceed( self, mock_time, mock_walk_vdi_chain, mock_destroy_vdi, mock_find_cached_images): fake_cached_time = '0' mock_find_cached_images.return_value = {'fake_image_id': { 'vdi_ref': 'fake_vdi_ref', 'cached_time': fake_cached_time}} self.session.call_xenapi.return_value = 'fake_uuid' mock_walk_vdi_chain.return_value = ('just_one',) mock_time.return_value = 1 * 3600 * 24 
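# Worked arithmetic for the keep-days check these tests exercise (my
# reading of destroy_cached_images inferred from the fixtures, not a
# verbatim quote of it): the image was cached at t=0 and "now" is mocked
# to 1 day (1 * 3600 * 24 seconds), so the cache entry is 1 day old; with
# fake_keep_days=2 below it is inside the window and must not be
# destroyed. The two tests above mock "now" to 2 days with keep_days=1,
# so there the 2-day-old entry is stale and is destroyed. Roughly:
#     age_days = (time.time() - float(cached_time)) / (24 * 3600)
#     if age_days > keep_days:
#         destroy_vdi(session, vdi_ref)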
fake_keep_days = 2 expected_return = set() uuid_return = vm_utils.destroy_cached_images(self.session, 'fake_sr_ref', False, False, fake_keep_days) mock_find_cached_images.assert_called_once() mock_walk_vdi_chain.assert_called_once() mock_destroy_vdi.assert_not_called() self.assertEqual(expected_return, uuid_return) @mock.patch.object(vm_utils, '_find_cached_images') @mock.patch.object(vm_utils, 'destroy_vdi') @mock.patch.object(vm_utils, '_walk_vdi_chain') @mock.patch.object(time, 'time') def test_destroy_cached_image_no_cached_time( self, mock_time, mock_walk_vdi_chain, mock_destroy_vdi, mock_find_cached_images): mock_find_cached_images.return_value = {'fake_image_id': { 'vdi_ref': 'fake_vdi_ref', 'cached_time': None}} self.session.call_xenapi.return_value = 'fake_uuid' mock_walk_vdi_chain.return_value = ('just_one',) fake_keep_days = 2 expected_return = set() uuid_return = vm_utils.destroy_cached_images(self.session, 'fake_sr_ref', False, False, fake_keep_days) mock_find_cached_images.assert_called_once() mock_walk_vdi_chain.assert_called_once() mock_destroy_vdi.assert_not_called() self.assertEqual(expected_return, uuid_return) class ShutdownTestCase(VMUtilsTestBase): def test_hardshutdown_should_return_true_when_vm_is_shutdown(self): self.mock = mox.Mox() session = FakeSession() instance = "instance" vm_ref = "vm-ref" self.mock.StubOutWithMock(vm_utils, 'is_vm_shutdown') vm_utils.is_vm_shutdown(session, vm_ref).AndReturn(True) self.mock.StubOutWithMock(vm_utils, 'LOG') self.assertTrue(vm_utils.hard_shutdown_vm( session, instance, vm_ref)) def test_cleanshutdown_should_return_true_when_vm_is_shutdown(self): self.mock = mox.Mox() session = FakeSession() instance = "instance" vm_ref = "vm-ref" self.mock.StubOutWithMock(vm_utils, 'is_vm_shutdown') vm_utils.is_vm_shutdown(session, vm_ref).AndReturn(True) self.mock.StubOutWithMock(vm_utils, 'LOG') self.assertTrue(vm_utils.clean_shutdown_vm( session, instance, vm_ref)) class CreateVBDTestCase(VMUtilsTestBase): def setUp(self): super(CreateVBDTestCase, self).setUp() self.session = FakeSession() self.mock = mox.Mox() self.mock.StubOutWithMock(self.session, 'call_xenapi') self.vbd_rec = self._generate_vbd_rec() def _generate_vbd_rec(self): vbd_rec = {} vbd_rec['VM'] = 'vm_ref' vbd_rec['VDI'] = 'vdi_ref' vbd_rec['userdevice'] = '0' vbd_rec['bootable'] = False vbd_rec['mode'] = 'RW' vbd_rec['type'] = 'disk' vbd_rec['unpluggable'] = True vbd_rec['empty'] = False vbd_rec['other_config'] = {} vbd_rec['qos_algorithm_type'] = '' vbd_rec['qos_algorithm_params'] = {} vbd_rec['qos_supported_algorithms'] = [] return vbd_rec def test_create_vbd_default_args(self): self.session.call_xenapi('VBD.create', self.vbd_rec).AndReturn("vbd_ref") self.mock.ReplayAll() result = vm_utils.create_vbd(self.session, "vm_ref", "vdi_ref", 0) self.assertEqual(result, "vbd_ref") self.mock.VerifyAll() def test_create_vbd_osvol(self): self.session.call_xenapi('VBD.create', self.vbd_rec).AndReturn("vbd_ref") self.session.call_xenapi('VBD.add_to_other_config', "vbd_ref", "osvol", "True") self.mock.ReplayAll() result = vm_utils.create_vbd(self.session, "vm_ref", "vdi_ref", 0, osvol=True) self.assertEqual(result, "vbd_ref") self.mock.VerifyAll() def test_create_vbd_extra_args(self): self.vbd_rec['VDI'] = 'OpaqueRef:NULL' self.vbd_rec['type'] = 'a' self.vbd_rec['mode'] = 'RO' self.vbd_rec['bootable'] = True self.vbd_rec['empty'] = True self.vbd_rec['unpluggable'] = False self.session.call_xenapi('VBD.create', self.vbd_rec).AndReturn("vbd_ref") self.mock.ReplayAll() result = 
vm_utils.create_vbd(self.session, "vm_ref", None, 0, vbd_type="a", read_only=True, bootable=True, empty=True, unpluggable=False) self.assertEqual(result, "vbd_ref") self.mock.VerifyAll() def test_attach_cd(self): self.mock.StubOutWithMock(vm_utils, 'create_vbd') vm_utils.create_vbd(self.session, "vm_ref", None, 1, vbd_type='cd', read_only=True, bootable=True, empty=True, unpluggable=False).AndReturn("vbd_ref") self.session.call_xenapi('VBD.insert', "vbd_ref", "vdi_ref") self.mock.ReplayAll() result = vm_utils.attach_cd(self.session, "vm_ref", "vdi_ref", 1) self.assertEqual(result, "vbd_ref") self.mock.VerifyAll() class UnplugVbdTestCase(VMUtilsTestBase): @mock.patch.object(greenthread, 'sleep') def test_unplug_vbd_works(self, mock_sleep): session = stubs.get_fake_session() vbd_ref = "vbd_ref" vm_ref = 'vm_ref' vm_utils.unplug_vbd(session, vbd_ref, vm_ref) session.call_xenapi.assert_called_once_with('VBD.unplug', vbd_ref) self.assertEqual(0, mock_sleep.call_count) def test_unplug_vbd_raises_unexpected_error(self): session = stubs.get_fake_session() session.XenAPI.Failure = fake.Failure vbd_ref = "vbd_ref" vm_ref = 'vm_ref' session.call_xenapi.side_effect = test.TestingException() self.assertRaises(test.TestingException, vm_utils.unplug_vbd, session, vbd_ref, vm_ref) self.assertEqual(1, session.call_xenapi.call_count) def test_unplug_vbd_already_detached_works(self): error = "DEVICE_ALREADY_DETACHED" session = stubs.get_fake_session(error) vbd_ref = "vbd_ref" vm_ref = 'vm_ref' vm_utils.unplug_vbd(session, vbd_ref, vm_ref) self.assertEqual(1, session.call_xenapi.call_count) def test_unplug_vbd_already_raises_unexpected_xenapi_error(self): session = stubs.get_fake_session("") vbd_ref = "vbd_ref" vm_ref = 'vm_ref' self.assertRaises(exception.StorageError, vm_utils.unplug_vbd, session, vbd_ref, vm_ref) self.assertEqual(1, session.call_xenapi.call_count) def _test_unplug_vbd_retries(self, mock_sleep, error): session = stubs.get_fake_session(error) vbd_ref = "vbd_ref" vm_ref = 'vm_ref' self.assertRaises(exception.StorageError, vm_utils.unplug_vbd, session, vbd_ref, vm_ref) self.assertEqual(11, session.call_xenapi.call_count) self.assertEqual(10, mock_sleep.call_count) def _test_unplug_vbd_retries_with_neg_val(self): session = stubs.get_fake_session() self.flags(num_vbd_unplug_retries=-1, group='xenserver') vbd_ref = "vbd_ref" vm_ref = 'vm_ref' vm_utils.unplug_vbd(session, vbd_ref, vm_ref) self.assertEqual(1, session.call_xenapi.call_count) @mock.patch.object(greenthread, 'sleep') def test_unplug_vbd_retries_on_rejected(self, mock_sleep): self._test_unplug_vbd_retries(mock_sleep, "DEVICE_DETACH_REJECTED") @mock.patch.object(greenthread, 'sleep') def test_unplug_vbd_retries_on_internal_error(self, mock_sleep): self._test_unplug_vbd_retries(mock_sleep, "INTERNAL_ERROR") @mock.patch.object(greenthread, 'sleep') def test_unplug_vbd_retries_on_missing_pv_drivers_error(self, mock_sleep): self._test_unplug_vbd_retries(mock_sleep, "VM_MISSING_PV_DRIVERS") class VDIOtherConfigTestCase(VMUtilsTestBase): """Tests to ensure that the code is populating VDI's `other_config` attribute with the correct metadata. 
""" def setUp(self): super(VDIOtherConfigTestCase, self).setUp() class _FakeSession(object): def call_xenapi(self, operation, *args, **kwargs): # VDI.add_to_other_config -> VDI_add_to_other_config method = getattr(self, operation.replace('.', '_'), None) if method: return method(*args, **kwargs) self.operation = operation self.args = args self.kwargs = kwargs self.session = _FakeSession() self.context = context.get_admin_context() self.fake_instance = {'uuid': 'aaaa-bbbb-cccc-dddd', 'name': 'myinstance'} def test_create_vdi(self): # Some images are registered with XenServer explicitly by calling # `create_vdi` vm_utils.create_vdi(self.session, 'sr_ref', self.fake_instance, 'myvdi', 'root', 1024, read_only=True) expected = {'nova_disk_type': 'root', 'nova_instance_uuid': 'aaaa-bbbb-cccc-dddd'} self.assertEqual(expected, self.session.args[0]['other_config']) @mock.patch.object(vm_utils, '_fetch_image', return_value={'root': {'uuid': 'fake-uuid'}}) def test_create_image(self, mock_vm_utils): # Other images are registered implicitly when they are dropped into # the SR by a dom0 plugin or some other process self.flags(cache_images='none', group='xenserver') other_config = {} def VDI_add_to_other_config(ref, key, value): other_config[key] = value # Stubbing on the session object and not class so we don't pollute # other tests self.session.VDI_add_to_other_config = VDI_add_to_other_config self.session.VDI_get_other_config = lambda vdi: {} vm_utils.create_image(self.context, self.session, self.fake_instance, 'myvdi', 'image1', vm_utils.ImageType.DISK_VHD) expected = {'nova_disk_type': 'root', 'nova_instance_uuid': 'aaaa-bbbb-cccc-dddd'} self.assertEqual(expected, other_config) @mock.patch.object(os_xenapi.client.vm_management, 'receive_vhd') @mock.patch.object(vm_utils, 'scan_default_sr') @mock.patch.object(vm_utils, 'get_sr_path') def test_import_migrated_vhds(self, mock_sr_path, mock_scan_sr, mock_recv_vhd): # Migrated images should preserve the `other_config` other_config = {} def VDI_add_to_other_config(ref, key, value): other_config[key] = value # Stubbing on the session object and not class so we don't pollute # other tests self.session.VDI_add_to_other_config = VDI_add_to_other_config self.session.VDI_get_other_config = lambda vdi: {} mock_sr_path.return_value = {'root': {'uuid': 'aaaa-bbbb-cccc-dddd'}} vm_utils._import_migrated_vhds(self.session, self.fake_instance, "disk_label", "root", "vdi_label") expected = {'nova_disk_type': 'root', 'nova_instance_uuid': 'aaaa-bbbb-cccc-dddd'} self.assertEqual(expected, other_config) mock_scan_sr.assert_called_once_with(self.session) mock_recv_vhd.assert_called_with( self.session, "disk_label", {'root': {'uuid': 'aaaa-bbbb-cccc-dddd'}}, mock.ANY) mock_sr_path.assert_called_once_with(self.session) class GenerateDiskTestCase(VMUtilsTestBase): @mock.patch.object(vm_utils, 'vdi_attached') @mock.patch.object(vm_utils.utils, 'mkfs', side_effect = test.TestingException()) @mock.patch.object(vm_utils, '_get_dom0_ref', return_value='dom0_ref') @mock.patch.object(vm_utils, 'safe_find_sr', return_value='sr_ref') @mock.patch.object(vm_utils, 'create_vdi', return_value='vdi_ref') @mock.patch.object(vm_utils, 'create_vbd') def test_generate_disk_with_no_fs_given(self, mock_create_vbd, mock_create_vdi, mock_findsr, mock_dom0ref, mock_mkfs, mock_attached_here): session = stubs.get_fake_session() vdi_ref = mock.MagicMock() mock_attached_here.return_value = vdi_ref instance = {'uuid': 'fake_uuid'} vm_utils._generate_disk(session, instance, 'vm_ref', '2', 'name', 
'user', 10, None, None) mock_attached_here.assert_called_once_with(session, 'vdi_ref', read_only=False, dom0=True) mock_create_vbd.assert_called_with(session, 'vm_ref', 'vdi_ref', '2', bootable=False) @mock.patch.object(vm_utils, 'vdi_attached') @mock.patch.object(vm_utils.utils, 'mkfs') @mock.patch.object(vm_utils, '_get_dom0_ref', return_value='dom0_ref') @mock.patch.object(vm_utils, 'safe_find_sr', return_value='sr_ref') @mock.patch.object(vm_utils, 'create_vdi', return_value='vdi_ref') @mock.patch.object(vm_utils.utils, 'make_dev_path', return_value='/dev/fake_devp1') @mock.patch.object(vm_utils, 'create_vbd') def test_generate_disk_swap(self, mock_create_vbd, mock_make_path, mock_create_vdi, mock_findsr, mock_dom0ref, mock_mkfs, mock_attached_here): session = stubs.get_fake_session() vdi_dev = mock.MagicMock() mock_attached_here.return_value = vdi_dev vdi_dev.__enter__.return_value = 'fakedev' instance = {'uuid': 'fake_uuid'} vm_utils._generate_disk(session, instance, 'vm_ref', '2', 'name', 'user', 10, 'swap', 'swap-1') mock_attached_here.assert_any_call(session, 'vdi_ref', read_only=False, dom0=True) # As swap is supported in dom0, mkfs will run there session.call_plugin_serialized.assert_any_call( 'partition_utils.py', 'mkfs', 'fakedev', '1', 'swap', 'swap-1') mock_create_vbd.assert_called_with(session, 'vm_ref', 'vdi_ref', '2', bootable=False) @mock.patch.object(vm_utils, 'vdi_attached') @mock.patch.object(vm_utils.utils, 'mkfs') @mock.patch.object(vm_utils, '_get_dom0_ref', return_value='dom0_ref') @mock.patch.object(vm_utils, 'safe_find_sr', return_value='sr_ref') @mock.patch.object(vm_utils, 'create_vdi', return_value='vdi_ref') @mock.patch.object(vm_utils.utils, 'make_dev_path', return_value='/dev/fake_devp1') @mock.patch.object(vm_utils, 'create_vbd') def test_generate_disk_ephemeral(self, mock_create_vbd, mock_make_path, mock_create_vdi, mock_findsr, mock_dom0ref, mock_mkfs, mock_attached_here): session = stubs.get_fake_session() vdi_ref = mock.MagicMock() mock_attached_here.return_value = vdi_ref instance = {'uuid': 'fake_uuid'} vm_utils._generate_disk(session, instance, 'vm_ref', '2', 'name', 'ephemeral', 10, 'ext4', 'ephemeral-1') mock_attached_here.assert_any_call(session, 'vdi_ref', read_only=False, dom0=True) # As ext4 is not supported in dom0, mkfs will run in domU mock_attached_here.assert_any_call(session, 'vdi_ref', read_only=False) mock_mkfs.assert_called_with('ext4', '/dev/fake_devp1', 'ephemeral-1', run_as_root=True) mock_create_vbd.assert_called_with(session, 'vm_ref', 'vdi_ref', '2', bootable=False) @mock.patch.object(vm_utils, 'safe_find_sr', return_value='sr_ref') @mock.patch.object(vm_utils, 'create_vdi', return_value='vdi_ref') @mock.patch.object(vm_utils, '_get_dom0_ref', side_effect = test.TestingException()) @mock.patch.object(vm_utils, 'safe_destroy_vdis') def test_generate_disk_ensure_cleanup_called(self, mock_destroy_vdis, mock_dom0ref, mock_create_vdi, mock_findsr): session = stubs.get_fake_session() instance = {'uuid': 'fake_uuid'} self.assertRaises(test.TestingException, vm_utils._generate_disk, session, instance, None, '2', 'name', 'user', 10, None, None) mock_destroy_vdis.assert_called_once_with(session, ['vdi_ref']) @mock.patch.object(vm_utils, 'safe_find_sr', return_value='sr_ref') @mock.patch.object(vm_utils, 'create_vdi', return_value='vdi_ref') @mock.patch.object(vm_utils, 'vdi_attached') @mock.patch.object(vm_utils, '_get_dom0_ref', return_value='dom0_ref') @mock.patch.object(vm_utils, 'create_vbd') def 
test_generate_disk_ephemeral_no_vmref(self, mock_create_vbd, mock_dom0_ref, mock_attached_here, mock_create_vdi, mock_findsr): session = stubs.get_fake_session() vdi_ref = mock.MagicMock() mock_attached_here.return_value = vdi_ref instance = {'uuid': 'fake_uuid'} vdi_ref = vm_utils._generate_disk( session, instance, None, None, 'name', 'user', 10, None, None) mock_attached_here.assert_called_once_with(session, 'vdi_ref', read_only=False, dom0=True) self.assertFalse(mock_create_vbd.called) class GenerateEphemeralTestCase(VMUtilsTestBase): def setUp(self): super(GenerateEphemeralTestCase, self).setUp() self.session = "session" self.instance = "instance" self.vm_ref = "vm_ref" self.name_label = "name" self.ephemeral_name_label = "name ephemeral" self.userdevice = 4 self.fs_label = "ephemeral" self.mox.StubOutWithMock(vm_utils, "_generate_disk") self.mox.StubOutWithMock(vm_utils, "safe_destroy_vdis") def test_get_ephemeral_disk_sizes_simple(self): result = vm_utils.get_ephemeral_disk_sizes(20) expected = [20] self.assertEqual(expected, list(result)) def test_get_ephemeral_disk_sizes_three_disks_2000(self): result = vm_utils.get_ephemeral_disk_sizes(4030) expected = [2000, 2000, 30] self.assertEqual(expected, list(result)) def test_get_ephemeral_disk_sizes_two_disks_1024(self): result = vm_utils.get_ephemeral_disk_sizes(2048) expected = [1024, 1024] self.assertEqual(expected, list(result)) def _expect_generate_disk(self, size, device, name_label, fs_label): vm_utils._generate_disk( self.session, self.instance, self.vm_ref, str(device), name_label, 'ephemeral', size * 1024, None, fs_label).AndReturn(device) def test_generate_ephemeral_adds_one_disk(self): self._expect_generate_disk( 20, self.userdevice, self.ephemeral_name_label, self.fs_label) self.mox.ReplayAll() vm_utils.generate_ephemeral( self.session, self.instance, self.vm_ref, str(self.userdevice), self.name_label, 20) def test_generate_ephemeral_adds_multiple_disks(self): self._expect_generate_disk( 2000, self.userdevice, self.ephemeral_name_label, self.fs_label) self._expect_generate_disk( 2000, self.userdevice + 1, self.ephemeral_name_label + " (1)", self.fs_label + "1") self._expect_generate_disk( 30, self.userdevice + 2, self.ephemeral_name_label + " (2)", self.fs_label + "2") self.mox.ReplayAll() vm_utils.generate_ephemeral( self.session, self.instance, self.vm_ref, str(self.userdevice), self.name_label, 4030) def test_generate_ephemeral_cleans_up_on_error(self): self._expect_generate_disk( 1024, self.userdevice, self.ephemeral_name_label, self.fs_label) self._expect_generate_disk( 1024, self.userdevice + 1, self.ephemeral_name_label + " (1)", self.fs_label + "1") vm_utils._generate_disk( self.session, self.instance, self.vm_ref, str(self.userdevice + 2), "name ephemeral (2)", 'ephemeral', units.Mi, None, 'ephemeral2').AndRaise(exception.NovaException) vm_utils.safe_destroy_vdis(self.session, [4, 5]) self.mox.ReplayAll() self.assertRaises( exception.NovaException, vm_utils.generate_ephemeral, self.session, self.instance, self.vm_ref, str(self.userdevice), self.name_label, 4096) class FakeFile(object): def __init__(self): self._file_operations = [] def seek(self, offset): self._file_operations.append((self.seek, offset)) class StreamDiskTestCase(VMUtilsTestBase): def setUp(self): super(StreamDiskTestCase, self).setUp() self.mox.StubOutWithMock(vm_utils.utils, 'make_dev_path') self.mox.StubOutWithMock(vm_utils.utils, 'temporary_chown') self.mox.StubOutWithMock(vm_utils, '_write_partition') # NOTE(matelakat): This might hide the fail 
reason, as test runners # are unhappy with a mocked out open. self.mox.StubOutWithMock(six.moves.builtins, 'open') self.image_service_func = self.mox.CreateMockAnything() def test_non_ami(self): fake_file = FakeFile() vm_utils.utils.make_dev_path('dev').AndReturn('some_path') vm_utils.utils.temporary_chown( 'some_path').AndReturn(contextified(None)) open('some_path', 'wb').AndReturn(contextified(fake_file)) self.image_service_func(fake_file) self.mox.ReplayAll() vm_utils._stream_disk("session", self.image_service_func, vm_utils.ImageType.KERNEL, None, 'dev') self.assertEqual([(fake_file.seek, 0)], fake_file._file_operations) def test_ami_disk(self): fake_file = FakeFile() vm_utils._write_partition("session", 100, 'dev') vm_utils.utils.make_dev_path('dev').AndReturn('some_path') vm_utils.utils.temporary_chown( 'some_path').AndReturn(contextified(None)) open('some_path', 'wb').AndReturn(contextified(fake_file)) self.image_service_func(fake_file) self.mox.ReplayAll() vm_utils._stream_disk("session", self.image_service_func, vm_utils.ImageType.DISK, 100, 'dev') self.assertEqual( [(fake_file.seek, vm_utils.MBR_SIZE_BYTES)], fake_file._file_operations) class VMUtilsSRPath(VMUtilsTestBase): def setUp(self): super(VMUtilsSRPath, self).setUp() self.fixture = self.useFixture(config_fixture.Config(lockutils.CONF)) self.fixture.config(disable_process_locking=True, group='oslo_concurrency') self.flags(instance_name_template='%d', firewall_driver='nova.virt.xenapi.firewall.' 'Dom0IptablesFirewallDriver') self.flags(connection_url='http://localhost', connection_password='test_pass', group='xenserver') stubs.stubout_session(self.stubs, fake.SessionBase) driver = xenapi_conn.XenAPIDriver(False) self.session = driver._session self.session.is_local_connection = False def test_defined(self): self.mox.StubOutWithMock(vm_utils, "safe_find_sr") self.mox.StubOutWithMock(self.session, "call_xenapi") vm_utils.safe_find_sr(self.session).AndReturn("sr_ref") self.session.host_ref = "host_ref" self.session.call_xenapi('PBD.get_all_records_where', 'field "host"="host_ref" and field "SR"="sr_ref"').AndReturn( {'pbd_ref': {'device_config': {'path': 'sr_path'}}}) self.mox.ReplayAll() self.assertEqual(vm_utils.get_sr_path(self.session), "sr_path") def test_default(self): self.mox.StubOutWithMock(vm_utils, "safe_find_sr") self.mox.StubOutWithMock(self.session, "call_xenapi") vm_utils.safe_find_sr(self.session).AndReturn("sr_ref") self.session.host_ref = "host_ref" self.session.call_xenapi('PBD.get_all_records_where', 'field "host"="host_ref" and field "SR"="sr_ref"').AndReturn( {'pbd_ref': {'device_config': {}}}) self.session.call_xenapi("SR.get_record", "sr_ref").AndReturn( {'uuid': 'sr_uuid', 'type': 'ext'}) self.mox.ReplayAll() self.assertEqual(vm_utils.get_sr_path(self.session), "/var/run/sr-mount/sr_uuid") class CreateKernelRamdiskTestCase(VMUtilsTestBase): def setUp(self): super(CreateKernelRamdiskTestCase, self).setUp() self.context = "context" self.session = FakeSession() self.instance = {"kernel_id": None, "ramdisk_id": None} self.name_label = "name" self.mox.StubOutWithMock(self.session, "call_plugin") self.mox.StubOutWithMock(uuidutils, "generate_uuid") self.mox.StubOutWithMock(vm_utils, "_fetch_disk_image") def test_create_kernel_and_ramdisk_no_create(self): self.mox.ReplayAll() result = vm_utils.create_kernel_and_ramdisk(self.context, self.session, self.instance, self.name_label) self.assertEqual((None, None), result) @mock.patch.object(os_xenapi.client.disk_management, 'create_kernel_ramdisk') def 
test_create_kernel_and_ramdisk_create_both_cached(self, mock_ramdisk): kernel_id = "kernel" ramdisk_id = "ramdisk" self.instance["kernel_id"] = kernel_id self.instance["ramdisk_id"] = ramdisk_id args_kernel = {} args_kernel['cached-image'] = kernel_id args_kernel['new-image-uuid'] = "fake_uuid1" uuidutils.generate_uuid().AndReturn("fake_uuid1") mock_ramdisk.side_effect = ["k", "r"] args_ramdisk = {} args_ramdisk['cached-image'] = ramdisk_id args_ramdisk['new-image-uuid'] = "fake_uuid2" uuidutils.generate_uuid().AndReturn("fake_uuid2") self.mox.ReplayAll() result = vm_utils.create_kernel_and_ramdisk(self.context, self.session, self.instance, self.name_label) self.assertEqual(("k", "r"), result) @mock.patch.object(os_xenapi.client.disk_management, 'create_kernel_ramdisk') def test_create_kernel_and_ramdisk_create_kernel_not_cached(self, mock_ramdisk): kernel_id = "kernel" self.instance["kernel_id"] = kernel_id args_kernel = {} args_kernel['cached-image'] = kernel_id args_kernel['new-image-uuid'] = "fake_uuid1" uuidutils.generate_uuid().AndReturn("fake_uuid1") mock_ramdisk.return_value = "" kernel = {"kernel": {"file": "k"}} vm_utils._fetch_disk_image(self.context, self.session, self.instance, self.name_label, kernel_id, 0).AndReturn(kernel) self.mox.ReplayAll() result = vm_utils.create_kernel_and_ramdisk(self.context, self.session, self.instance, self.name_label) self.assertEqual(("k", None), result) def _test_create_kernel_image(self, cache_images): kernel_id = "kernel" self.instance["kernel_id"] = kernel_id args_kernel = {} args_kernel['cached-image'] = kernel_id args_kernel['new-image-uuid'] = "fake_uuid1" self.flags(cache_images=cache_images, group='xenserver') if cache_images == 'all': uuidutils.generate_uuid().AndReturn("fake_uuid1") else: kernel = {"kernel": {"file": "new_image", "uuid": None}} vm_utils._fetch_disk_image(self.context, self.session, self.instance, self.name_label, kernel_id, 0).AndReturn(kernel) self.mox.ReplayAll() result = vm_utils._create_kernel_image(self.context, self.session, self.instance, self.name_label, kernel_id, 0) if cache_images == 'all': self.assertEqual(result, {"kernel": {"file": "cached_image", "uuid": None}}) else: self.assertEqual(result, {"kernel": {"file": "new_image", "uuid": None}}) @mock.patch.object(os_xenapi.client.disk_management, 'create_kernel_ramdisk') def test_create_kernel_image_cached_config(self, mock_ramdisk): mock_ramdisk.return_value = "cached_image" self._test_create_kernel_image('all') mock_ramdisk.assert_called_once_with(self.session, "kernel", "fake_uuid1") def test_create_kernel_image_uncached_config(self): self._test_create_kernel_image('none') class ScanSrTestCase(VMUtilsTestBase): @mock.patch.object(vm_utils, "_scan_sr") @mock.patch.object(vm_utils, "safe_find_sr") def test_scan_default_sr(self, mock_safe_find_sr, mock_scan_sr): mock_safe_find_sr.return_value = "sr_ref" self.assertEqual("sr_ref", vm_utils.scan_default_sr("fake_session")) mock_scan_sr.assert_called_once_with("fake_session", "sr_ref") def test_scan_sr_works(self): session = mock.Mock() vm_utils._scan_sr(session, "sr_ref") session.call_xenapi.assert_called_once_with('SR.scan', "sr_ref") def test_scan_sr_unknown_error_fails_once(self): session = mock.Mock() session.XenAPI.Failure = fake.Failure session.call_xenapi.side_effect = test.TestingException self.assertRaises(test.TestingException, vm_utils._scan_sr, session, "sr_ref") session.call_xenapi.assert_called_once_with('SR.scan', "sr_ref") @mock.patch.object(greenthread, 'sleep') def 
test_scan_sr_known_error_retries_then_throws(self, mock_sleep): session = mock.Mock() class FakeException(Exception): details = ['SR_BACKEND_FAILURE_40', "", "", ""] session.XenAPI.Failure = FakeException session.call_xenapi.side_effect = FakeException self.assertRaises(FakeException, vm_utils._scan_sr, session, "sr_ref") session.call_xenapi.assert_called_with('SR.scan', "sr_ref") self.assertEqual(4, session.call_xenapi.call_count) mock_sleep.assert_has_calls([mock.call(2), mock.call(4), mock.call(8)]) @mock.patch.object(greenthread, 'sleep') def test_scan_sr_known_error_retries_then_succeeds(self, mock_sleep): session = mock.Mock() class FakeException(Exception): details = ['SR_BACKEND_FAILURE_40', "", "", ""] session.XenAPI.Failure = FakeException def fake_call_xenapi(*args): fake_call_xenapi.count += 1 if fake_call_xenapi.count != 2: raise FakeException() fake_call_xenapi.count = 0 session.call_xenapi.side_effect = fake_call_xenapi vm_utils._scan_sr(session, "sr_ref") session.call_xenapi.assert_called_with('SR.scan', "sr_ref") self.assertEqual(2, session.call_xenapi.call_count) mock_sleep.assert_called_once_with(2) @mock.patch.object(flavors, 'extract_flavor', return_value={ 'memory_mb': 1024, 'vcpus': 1, 'vcpu_weight': 1.0, }) class CreateVmTestCase(VMUtilsTestBase): def test_vss_provider(self, mock_extract): self.flags(vcpu_pin_set="2,3") session = stubs.get_fake_session() instance = objects.Instance(uuid=uuids.nova_uuid, os_type="windows", system_metadata={}) with mock.patch.object(instance, 'get_flavor') as get: get.return_value = objects.Flavor._from_db_object( None, objects.Flavor(), test_flavor.fake_flavor) vm_utils.create_vm(session, instance, "label", "kernel", "ramdisk") vm_rec = { 'VCPUs_params': {'cap': '0', 'mask': '2,3', 'weight': '1'}, 'PV_args': '', 'memory_static_min': '0', 'ha_restart_priority': '', 'HVM_boot_policy': 'BIOS order', 'PV_bootloader': '', 'tags': [], 'VCPUs_max': '4', 'memory_static_max': '1073741824', 'actions_after_shutdown': 'destroy', 'memory_dynamic_max': '1073741824', 'user_version': '0', 'xenstore_data': {'vm-data/allowvssprovider': 'false'}, 'blocked_operations': {}, 'is_a_template': False, 'name_description': '', 'memory_dynamic_min': '1073741824', 'actions_after_crash': 'destroy', 'memory_target': '1073741824', 'PV_ramdisk': '', 'PV_bootloader_args': '', 'PCI_bus': '', 'other_config': {'nova_uuid': uuids.nova_uuid}, 'name_label': 'label', 'actions_after_reboot': 'restart', 'VCPUs_at_startup': '4', 'HVM_boot_params': {'order': 'dc'}, 'platform': {'nx': 'true', 'pae': 'true', 'apic': 'true', 'timeoffset': '0', 'viridian': 'true', 'acpi': 'true'}, 'PV_legacy_args': '', 'PV_kernel': '', 'affinity': '', 'recommendations': '', 'ha_always_run': False } session.call_xenapi.assert_called_once_with("VM.create", vm_rec) def test_invalid_cpu_mask_raises(self, mock_extract): self.flags(vcpu_pin_set="asdf") session = mock.Mock() instance = objects.Instance(uuid=uuids.fake, system_metadata={}) with mock.patch.object(instance, 'get_flavor') as get: get.return_value = objects.Flavor._from_db_object( None, objects.Flavor(), test_flavor.fake_flavor) self.assertRaises(exception.Invalid, vm_utils.create_vm, session, instance, "label", "kernel", "ramdisk") def test_destroy_vm(self, mock_extract): session = mock.Mock() instance = objects.Instance(uuid=uuids.fake) vm_utils.destroy_vm(session, instance, "vm_ref") session.VM.destroy.assert_called_once_with("vm_ref") def test_destroy_vm_silently_fails(self, mock_extract): session = mock.Mock() exc = 
test.TestingException() session.XenAPI.Failure = test.TestingException session.VM.destroy.side_effect = exc instance = objects.Instance(uuid=uuids.fake) vm_utils.destroy_vm(session, instance, "vm_ref") session.VM.destroy.assert_called_once_with("vm_ref") class DetermineVmModeTestCase(VMUtilsTestBase): def _fake_object(self, updates): return fake_instance.fake_instance_obj(None, **updates) def test_determine_vm_mode_returns_xen_mode(self): instance = self._fake_object({"vm_mode": "xen"}) self.assertEqual(obj_fields.VMMode.XEN, vm_utils.determine_vm_mode(instance, None)) def test_determine_vm_mode_returns_hvm_mode(self): instance = self._fake_object({"vm_mode": "hvm"}) self.assertEqual(obj_fields.VMMode.HVM, vm_utils.determine_vm_mode(instance, None)) def test_determine_vm_mode_returns_xen_for_linux(self): instance = self._fake_object({"vm_mode": None, "os_type": "linux"}) self.assertEqual(obj_fields.VMMode.XEN, vm_utils.determine_vm_mode(instance, None)) def test_determine_vm_mode_returns_hvm_for_windows(self): instance = self._fake_object({"vm_mode": None, "os_type": "windows"}) self.assertEqual(obj_fields.VMMode.HVM, vm_utils.determine_vm_mode(instance, None)) def test_determine_vm_mode_returns_hvm_by_default(self): instance = self._fake_object({"vm_mode": None, "os_type": None}) self.assertEqual(obj_fields.VMMode.HVM, vm_utils.determine_vm_mode(instance, None)) def test_determine_vm_mode_returns_xen_for_VHD(self): instance = self._fake_object({"vm_mode": None, "os_type": None}) self.assertEqual(obj_fields.VMMode.XEN, vm_utils.determine_vm_mode(instance, vm_utils.ImageType.DISK_VHD)) def test_determine_vm_mode_returns_xen_for_DISK(self): instance = self._fake_object({"vm_mode": None, "os_type": None}) self.assertEqual(obj_fields.VMMode.XEN, vm_utils.determine_vm_mode(instance, vm_utils.ImageType.DISK)) class CallXenAPIHelpersTestCase(VMUtilsTestBase): def test_vm_get_vbd_refs(self): session = mock.Mock() session.call_xenapi.return_value = "foo" self.assertEqual("foo", vm_utils._vm_get_vbd_refs(session, "vm_ref")) session.call_xenapi.assert_called_once_with("VM.get_VBDs", "vm_ref") def test_vbd_get_rec(self): session = mock.Mock() session.call_xenapi.return_value = "foo" self.assertEqual("foo", vm_utils._vbd_get_rec(session, "vbd_ref")) session.call_xenapi.assert_called_once_with("VBD.get_record", "vbd_ref") def test_vdi_get_rec(self): session = mock.Mock() session.call_xenapi.return_value = "foo" self.assertEqual("foo", vm_utils._vdi_get_rec(session, "vdi_ref")) session.call_xenapi.assert_called_once_with("VDI.get_record", "vdi_ref") def test_vdi_snapshot(self): session = mock.Mock() session.call_xenapi.return_value = "foo" self.assertEqual("foo", vm_utils._vdi_snapshot(session, "vdi_ref")) session.call_xenapi.assert_called_once_with("VDI.snapshot", "vdi_ref", {}) def test_vdi_get_virtual_size(self): session = mock.Mock() session.call_xenapi.return_value = "123" self.assertEqual(123, vm_utils._vdi_get_virtual_size(session, "ref")) session.call_xenapi.assert_called_once_with("VDI.get_virtual_size", "ref") @mock.patch.object(vm_utils, '_get_resize_func_name') def test_vdi_resize(self, mock_get_resize_func_name): session = mock.Mock() mock_get_resize_func_name.return_value = "VDI.fake" vm_utils._vdi_resize(session, "ref", 123) session.call_xenapi.assert_called_once_with("VDI.fake", "ref", "123") @mock.patch.object(vm_utils, '_vdi_resize') @mock.patch.object(vm_utils, '_vdi_get_virtual_size') def test_update_vdi_virtual_size_works(self, mock_get_size, mock_resize): mock_get_size.return_value 
= (1024 ** 3) - 1 instance = {"uuid": "a"} vm_utils.update_vdi_virtual_size("s", instance, "ref", 1) mock_get_size.assert_called_once_with("s", "ref") mock_resize.assert_called_once_with("s", "ref", 1024 ** 3) @mock.patch.object(vm_utils, '_vdi_resize') @mock.patch.object(vm_utils, '_vdi_get_virtual_size') def test_update_vdi_virtual_size_skips_resize_down(self, mock_get_size, mock_resize): mock_get_size.return_value = 1024 ** 3 instance = {"uuid": "a"} vm_utils.update_vdi_virtual_size("s", instance, "ref", 1) mock_get_size.assert_called_once_with("s", "ref") self.assertFalse(mock_resize.called) @mock.patch.object(vm_utils, '_vdi_resize') @mock.patch.object(vm_utils, '_vdi_get_virtual_size') def test_update_vdi_virtual_size_raise_if_disk_big(self, mock_get_size, mock_resize): mock_get_size.return_value = 1024 ** 3 + 1 instance = {"uuid": "a"} self.assertRaises(exception.ResizeError, vm_utils.update_vdi_virtual_size, "s", instance, "ref", 1) mock_get_size.assert_called_once_with("s", "ref") self.assertFalse(mock_resize.called) @mock.patch.object(vm_utils, '_vdi_get_rec') @mock.patch.object(vm_utils, '_vbd_get_rec') @mock.patch.object(vm_utils, '_vm_get_vbd_refs') class GetVdiForVMTestCase(VMUtilsTestBase): def test_get_vdi_for_vm_safely(self, vm_get_vbd_refs, vbd_get_rec, vdi_get_rec): session = "session" vm_get_vbd_refs.return_value = ["a", "b"] vbd_get_rec.return_value = {'userdevice': '0', 'VDI': 'vdi_ref'} vdi_get_rec.return_value = {} result = vm_utils.get_vdi_for_vm_safely(session, "vm_ref") self.assertEqual(('vdi_ref', {}), result) vm_get_vbd_refs.assert_called_once_with(session, "vm_ref") vbd_get_rec.assert_called_once_with(session, "a") vdi_get_rec.assert_called_once_with(session, "vdi_ref") def test_get_vdi_for_vm_safely_fails(self, vm_get_vbd_refs, vbd_get_rec, vdi_get_rec): session = "session" vm_get_vbd_refs.return_value = ["a", "b"] vbd_get_rec.return_value = {'userdevice': '0', 'VDI': 'vdi_ref'} self.assertRaises(exception.NovaException, vm_utils.get_vdi_for_vm_safely, session, "vm_ref", userdevice='1') self.assertEqual([], vdi_get_rec.call_args_list) self.assertEqual(2, len(vbd_get_rec.call_args_list)) @mock.patch.object(vm_utils, '_vdi_get_uuid') @mock.patch.object(vm_utils, '_vbd_get_rec') @mock.patch.object(vm_utils, '_vm_get_vbd_refs') class GetAllVdiForVMTestCase(VMUtilsTestBase): def _setup_get_all_vdi_uuids_for_vm(self, vm_get_vbd_refs, vbd_get_rec, vdi_get_uuid): def fake_vbd_get_rec(session, vbd_ref): return {'userdevice': vbd_ref, 'VDI': "vdi_ref_%s" % vbd_ref} def fake_vdi_get_uuid(session, vdi_ref): return vdi_ref vm_get_vbd_refs.return_value = ["0", "2"] vbd_get_rec.side_effect = fake_vbd_get_rec vdi_get_uuid.side_effect = fake_vdi_get_uuid def test_get_all_vdi_uuids_for_vm_works(self, vm_get_vbd_refs, vbd_get_rec, vdi_get_uuid): self._setup_get_all_vdi_uuids_for_vm(vm_get_vbd_refs, vbd_get_rec, vdi_get_uuid) result = vm_utils.get_all_vdi_uuids_for_vm('session', "vm_ref") expected = ['vdi_ref_0', 'vdi_ref_2'] self.assertEqual(expected, list(result)) def test_get_all_vdi_uuids_for_vm_finds_none(self, vm_get_vbd_refs, vbd_get_rec, vdi_get_uuid): self._setup_get_all_vdi_uuids_for_vm(vm_get_vbd_refs, vbd_get_rec, vdi_get_uuid) result = vm_utils.get_all_vdi_uuids_for_vm('session', "vm_ref", min_userdevice=1) expected = ["vdi_ref_2"] self.assertEqual(expected, list(result)) class GetAllVdisTestCase(VMUtilsTestBase): def test_get_all_vdis_in_sr(self): def fake_get_rec(record_type, ref): if ref == "2": return "vdi_rec_2" session = mock.Mock() 
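# As the stubs and the assertion below suggest, the helper under test is a
# generator: it lists the SR's VDIs once via SR.get_VDIs and yields
# (vdi_ref, vdi_rec) pairs, skipping any ref whose record lookup returns
# nothing (e.g. a VDI deleted between the listing and the record fetch).
# A minimal sketch under that assumption (hypothetical name, not the real
# nova.virt.xenapi.vm_utils implementation):
def _get_all_vdis_in_sr_sketch(session, sr_ref):
    for vdi_ref in session.call_xenapi("SR.get_VDIs", sr_ref):
        vdi_rec = session.get_rec("VDI", vdi_ref)
        if vdi_rec:
            # Only yield VDIs that still resolve to a record.
            yield vdi_ref, vdi_rec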
session.call_xenapi.return_value = ["1", "2"] session.get_rec.side_effect = fake_get_rec sr_ref = "sr_ref" actual = list(vm_utils._get_all_vdis_in_sr(session, sr_ref)) self.assertEqual(actual, [('2', 'vdi_rec_2')]) session.call_xenapi.assert_called_once_with("SR.get_VDIs", sr_ref) class SnapshotAttachedHereTestCase(VMUtilsTestBase): @mock.patch.object(vm_utils, '_snapshot_attached_here_impl') def test_snapshot_attached_here(self, mock_impl): def fake_impl(session, instance, vm_ref, label, userdevice, post_snapshot_callback): self.assertEqual("session", session) self.assertEqual("instance", instance) self.assertEqual("vm_ref", vm_ref) self.assertEqual("label", label) self.assertEqual('0', userdevice) self.assertIsNone(post_snapshot_callback) yield "fake" mock_impl.side_effect = fake_impl with vm_utils.snapshot_attached_here("session", "instance", "vm_ref", "label") as result: self.assertEqual("fake", result) mock_impl.assert_called_once_with("session", "instance", "vm_ref", "label", '0', None) @mock.patch.object(vm_utils, '_delete_snapshots_in_vdi_chain') @mock.patch.object(vm_utils, 'safe_destroy_vdis') @mock.patch.object(vm_utils, '_walk_vdi_chain') @mock.patch.object(vm_utils, '_wait_for_vhd_coalesce') @mock.patch.object(vm_utils, '_vdi_get_uuid') @mock.patch.object(vm_utils, '_vdi_snapshot') @mock.patch.object(vm_utils, 'get_vdi_for_vm_safely') def test_snapshot_attached_here_impl(self, mock_get_vdi_for_vm_safely, mock_vdi_snapshot, mock_vdi_get_uuid, mock_wait_for_vhd_coalesce, mock_walk_vdi_chain, mock_safe_destroy_vdis, mock_delete_snapshots_in_vdi_chain): session = "session" instance = {"uuid": "uuid"} mock_callback = mock.Mock() mock_get_vdi_for_vm_safely.return_value = ("vdi_ref", {"SR": "sr_ref", "uuid": "vdi_uuid"}) mock_vdi_snapshot.return_value = "snap_ref" mock_vdi_get_uuid.return_value = "snap_uuid" mock_walk_vdi_chain.return_value = [{"uuid": "a"}, {"uuid": "b"}] try: with vm_utils.snapshot_attached_here(session, instance, "vm_ref", "label", '2', mock_callback) as result: self.assertEqual(["a", "b"], result) raise test.TestingException() self.assertTrue(False) except test.TestingException: pass mock_get_vdi_for_vm_safely.assert_called_once_with(session, "vm_ref", '2') mock_vdi_snapshot.assert_called_once_with(session, "vdi_ref") mock_wait_for_vhd_coalesce.assert_called_once_with(session, instance, "sr_ref", "vdi_ref", ['a', 'b']) mock_vdi_get_uuid.assert_called_once_with(session, "snap_ref") mock_walk_vdi_chain.assert_has_calls([mock.call(session, "vdi_uuid"), mock.call(session, "snap_uuid")]) mock_callback.assert_called_once_with( task_state="image_pending_upload") mock_safe_destroy_vdis.assert_called_once_with(session, ["snap_ref"]) mock_delete_snapshots_in_vdi_chain.assert_called_once_with(session, instance, ['a', 'b'], "sr_ref") @mock.patch.object(greenthread, 'sleep') def test_wait_for_vhd_coalesce_leaf_node(self, mock_sleep): instance = {"uuid": "fake"} vm_utils._wait_for_vhd_coalesce("session", instance, "sr_ref", "vdi_ref", ["uuid"]) self.assertFalse(mock_sleep.called) @mock.patch.object(vm_utils, '_count_children') @mock.patch.object(greenthread, 'sleep') def test_wait_for_vhd_coalesce_parent_snapshot(self, mock_sleep, mock_count): mock_count.return_value = 2 instance = {"uuid": "fake"} vm_utils._wait_for_vhd_coalesce("session", instance, "sr_ref", "vdi_ref", ["uuid1", "uuid2"]) self.assertFalse(mock_sleep.called) self.assertTrue(mock_count.called) @mock.patch.object(greenthread, 'sleep') @mock.patch.object(vm_utils, '_get_vhd_parent_uuid') 
@mock.patch.object(vm_utils, '_count_children') @mock.patch.object(vm_utils, '_scan_sr') def test_wait_for_vhd_coalesce_raises(self, mock_scan_sr, mock_count, mock_get_vhd_parent_uuid, mock_sleep): mock_count.return_value = 1 instance = {"uuid": "fake"} self.assertRaises(exception.NovaException, vm_utils._wait_for_vhd_coalesce, "session", instance, "sr_ref", "vdi_ref", ["uuid1", "uuid2"]) self.assertTrue(mock_count.called) self.assertEqual(20, mock_sleep.call_count) self.assertEqual(20, mock_scan_sr.call_count) @mock.patch.object(greenthread, 'sleep') @mock.patch.object(vm_utils, '_get_vhd_parent_uuid') @mock.patch.object(vm_utils, '_count_children') @mock.patch.object(vm_utils, '_scan_sr') def test_wait_for_vhd_coalesce_success(self, mock_scan_sr, mock_count, mock_get_vhd_parent_uuid, mock_sleep): mock_count.return_value = 1 instance = {"uuid": "fake"} mock_get_vhd_parent_uuid.side_effect = ["bad", "uuid2"] vm_utils._wait_for_vhd_coalesce("session", instance, "sr_ref", "vdi_ref", ["uuid1", "uuid2"]) self.assertEqual(1, mock_sleep.call_count) self.assertEqual(2, mock_scan_sr.call_count) @mock.patch.object(vm_utils, '_get_all_vdis_in_sr') def test_count_children(self, mock_get_all_vdis_in_sr): vdis = [('child1', {'sm_config': {'vhd-parent': 'parent1'}}), ('child2', {'sm_config': {'vhd-parent': 'parent2'}}), ('child3', {'sm_config': {'vhd-parent': 'parent1'}})] mock_get_all_vdis_in_sr.return_value = vdis self.assertEqual(2, vm_utils._count_children('session', 'parent1', 'sr')) class ImportMigratedDisksTestCase(VMUtilsTestBase): @mock.patch.object(vm_utils, '_import_migrate_ephemeral_disks') @mock.patch.object(vm_utils, '_import_migrated_root_disk') def test_import_all_migrated_disks(self, mock_root, mock_ephemeral): session = "session" instance = "instance" mock_root.return_value = "root_vdi" mock_ephemeral.return_value = ["a", "b"] result = vm_utils.import_all_migrated_disks(session, instance) expected = {'root': 'root_vdi', 'ephemerals': ["a", "b"]} self.assertEqual(expected, result) mock_root.assert_called_once_with(session, instance) mock_ephemeral.assert_called_once_with(session, instance) @mock.patch.object(vm_utils, '_import_migrate_ephemeral_disks') @mock.patch.object(vm_utils, '_import_migrated_root_disk') def test_import_all_migrated_disks_import_root_false(self, mock_root, mock_ephemeral): session = "session" instance = "instance" mock_root.return_value = "root_vdi" mock_ephemeral.return_value = ["a", "b"] result = vm_utils.import_all_migrated_disks(session, instance, import_root=False) expected = {'root': None, 'ephemerals': ["a", "b"]} self.assertEqual(expected, result) self.assertEqual(0, mock_root.call_count) mock_ephemeral.assert_called_once_with(session, instance) @mock.patch.object(vm_utils, '_import_migrated_vhds') def test_import_migrated_root_disk(self, mock_migrate): mock_migrate.return_value = "foo" instance = {"uuid": "uuid", "name": "name"} result = vm_utils._import_migrated_root_disk("s", instance) self.assertEqual("foo", result) mock_migrate.assert_called_once_with("s", instance, "uuid", "root", "name") @mock.patch.object(vm_utils, '_import_migrated_vhds') def test_import_migrate_ephemeral_disks(self, mock_migrate): mock_migrate.return_value = "foo" instance = objects.Instance(id=1, uuid=uuids.fake) instance.old_flavor = objects.Flavor(ephemeral_gb=4000) result = vm_utils._import_migrate_ephemeral_disks("s", instance) self.assertEqual({'4': 'foo', '5': 'foo'}, result) inst_uuid = instance.uuid inst_name = instance.name expected_calls = [mock.call("s", instance, 
"%s_ephemeral_1" % inst_uuid, "ephemeral", "%s ephemeral (1)" % inst_name), mock.call("s", instance, "%s_ephemeral_2" % inst_uuid, "ephemeral", "%s ephemeral (2)" % inst_name)] self.assertEqual(expected_calls, mock_migrate.call_args_list) @mock.patch.object(vm_utils, 'get_ephemeral_disk_sizes') def test_import_migrate_ephemeral_disks_use_old_flavor(self, mock_get_sizes): mock_get_sizes.return_value = [] instance = objects.Instance(id=1, uuid=uuids.fake, ephemeral_gb=2000) instance.old_flavor = objects.Flavor(ephemeral_gb=4000) vm_utils._import_migrate_ephemeral_disks("s", instance) mock_get_sizes.assert_called_once_with(4000) @mock.patch.object(os_xenapi.client.vm_management, 'receive_vhd') @mock.patch.object(vm_utils, '_set_vdi_info') @mock.patch.object(vm_utils, 'scan_default_sr') @mock.patch.object(vm_utils, 'get_sr_path') def test_import_migrated_vhds(self, mock_get_sr_path, mock_scan_sr, mock_set_info, mock_recv_vhd): session = mock.Mock() instance = {"uuid": "uuid"} mock_recv_vhd.return_value = {"root": {"uuid": "a"}} session.call_xenapi.return_value = "vdi_ref" mock_get_sr_path.return_value = "sr_path" result = vm_utils._import_migrated_vhds(session, instance, 'chain_label', 'disk_type', 'vdi_label') expected = {'uuid': "a", 'ref': "vdi_ref"} self.assertEqual(expected, result) mock_get_sr_path.assert_called_once_with(session) mock_recv_vhd.assert_called_once_with(session, 'chain_label', 'sr_path', mock.ANY) mock_scan_sr.assert_called_once_with(session) session.call_xenapi.assert_called_once_with('VDI.get_by_uuid', 'a') mock_set_info.assert_called_once_with(session, 'vdi_ref', 'disk_type', 'vdi_label', 'disk_type', instance) def test_get_vhd_parent_uuid_rec_provided(self): session = mock.Mock() vdi_ref = 'vdi_ref' vdi_rec = {'sm_config': {}} self.assertIsNone(vm_utils._get_vhd_parent_uuid(session, vdi_ref, vdi_rec)) self.assertFalse(session.call_xenapi.called) class MigrateVHDTestCase(VMUtilsTestBase): def _assert_transfer_called(self, session, label): session.call_plugin_serialized.assert_called_once_with( 'migration.py', 'transfer_vhd', instance_uuid=label, host="dest", vdi_uuid="vdi_uuid", sr_path="sr_path", seq_num=2) @mock.patch.object(os_xenapi.client.vm_management, 'transfer_vhd') def test_migrate_vhd_root(self, mock_trans_vhd): session = mock.Mock() instance = {"uuid": "a"} vm_utils.migrate_vhd(session, instance, "vdi_uuid", "dest", "sr_path", 2) mock_trans_vhd.assert_called_once_with(session, "a", "dest", "vdi_uuid", "sr_path", 2) @mock.patch.object(os_xenapi.client.vm_management, 'transfer_vhd') def test_migrate_vhd_ephemeral(self, mock_trans_vhd): session = mock.Mock() instance = {"uuid": "a"} vm_utils.migrate_vhd(session, instance, "vdi_uuid", "dest", "sr_path", 2, 2) mock_trans_vhd.assert_called_once_with(session, "a_ephemeral_2", "dest", "vdi_uuid", "sr_path", 2) @mock.patch.object(os_xenapi.client.vm_management, 'transfer_vhd') def test_migrate_vhd_converts_exceptions(self, mock_trans_vhd): session = mock.Mock() session.XenAPI.Failure = test.TestingException mock_trans_vhd.side_effect = test.TestingException() instance = {"uuid": "a"} self.assertRaises(exception.MigrationError, vm_utils.migrate_vhd, session, instance, "vdi_uuid", "dest", "sr_path", 2) mock_trans_vhd.assert_called_once_with(session, "a", "dest", "vdi_uuid", "sr_path", 2) class StripBaseMirrorTestCase(VMUtilsTestBase): def test_strip_base_mirror_from_vdi_works(self): session = mock.Mock() vm_utils._try_strip_base_mirror_from_vdi(session, "vdi_ref") session.call_xenapi.assert_called_once_with( 
"VDI.remove_from_sm_config", "vdi_ref", "base_mirror") def test_strip_base_mirror_from_vdi_hides_error(self): session = mock.Mock() session.XenAPI.Failure = test.TestingException session.call_xenapi.side_effect = test.TestingException() vm_utils._try_strip_base_mirror_from_vdi(session, "vdi_ref") session.call_xenapi.assert_called_once_with( "VDI.remove_from_sm_config", "vdi_ref", "base_mirror") @mock.patch.object(vm_utils, '_try_strip_base_mirror_from_vdi') def test_strip_base_mirror_from_vdis(self, mock_strip): def call_xenapi(method, arg): if method == "VM.get_VBDs": return ['VBD_ref_1', 'VBD_ref_2'] if method == "VBD.get_VDI": return 'VDI' + arg[3:] return "Unexpected call_xenapi: %s.%s" % (method, arg) session = mock.Mock() session.call_xenapi.side_effect = call_xenapi vm_utils.strip_base_mirror_from_vdis(session, "vm_ref") expected = [mock.call('VM.get_VBDs', "vm_ref"), mock.call('VBD.get_VDI', "VBD_ref_1"), mock.call('VBD.get_VDI', "VBD_ref_2")] self.assertEqual(expected, session.call_xenapi.call_args_list) expected = [mock.call(session, "VDI_ref_1"), mock.call(session, "VDI_ref_2")] self.assertEqual(expected, mock_strip.call_args_list) class DeviceIdTestCase(VMUtilsTestBase): def test_device_id_is_none_if_not_specified_in_meta_data(self): image_meta = objects.ImageMeta.from_dict({}) session = mock.Mock() session.product_version = (6, 1, 0) self.assertIsNone(vm_utils.get_vm_device_id(session, image_meta)) def test_get_device_id_if_hypervisor_version_is_greater_than_6_1(self): image_meta = objects.ImageMeta.from_dict( {'properties': {'xenapi_device_id': '0002'}}) session = mock.Mock() session.product_version = (6, 2, 0) self.assertEqual(2, vm_utils.get_vm_device_id(session, image_meta)) session.product_version = (6, 3, 1) self.assertEqual(2, vm_utils.get_vm_device_id(session, image_meta)) def test_raise_exception_if_device_id_not_supported_by_hyp_version(self): image_meta = objects.ImageMeta.from_dict( {'properties': {'xenapi_device_id': '0002'}}) session = mock.Mock() session.product_version = (6, 0) exc = self.assertRaises(exception.NovaException, vm_utils.get_vm_device_id, session, image_meta) self.assertEqual("Device id 2 specified is not supported by " "hypervisor version (6, 0)", exc.message) session.product_version = ('6a') exc = self.assertRaises(exception.NovaException, vm_utils.get_vm_device_id, session, image_meta) self.assertEqual("Device id 2 specified is not supported by " "hypervisor version 6a", exc.message) class CreateVmRecordTestCase(VMUtilsTestBase): @mock.patch.object(flavors, 'extract_flavor') def test_create_vm_record_linux(self, mock_extract_flavor): instance = objects.Instance(uuid=uuids.nova_uuid, os_type="linux") self._test_create_vm_record(mock_extract_flavor, instance, False) @mock.patch.object(flavors, 'extract_flavor') def test_create_vm_record_windows(self, mock_extract_flavor): instance = objects.Instance(uuid=uuids.nova_uuid, os_type="windows") with mock.patch.object(instance, 'get_flavor') as get: get.return_value = objects.Flavor._from_db_object( None, objects.Flavor(), test_flavor.fake_flavor) self._test_create_vm_record(mock_extract_flavor, instance, True) def _test_create_vm_record(self, mock_extract_flavor, instance, is_viridian): session = stubs.get_fake_session() flavor = {"memory_mb": 1024, "vcpus": 1, "vcpu_weight": 2} mock_extract_flavor.return_value = flavor with mock.patch.object(instance, 'get_flavor') as get: get.return_value = objects.Flavor(memory_mb=1024, vcpus=1, vcpu_weight=2) vm_utils.create_vm(session, instance, "name", "kernel", 
"ramdisk", device_id=2) is_viridian_str = str(is_viridian).lower() expected_vm_rec = { 'VCPUs_params': {'cap': '0', 'weight': '2'}, 'PV_args': '', 'memory_static_min': '0', 'ha_restart_priority': '', 'HVM_boot_policy': 'BIOS order', 'PV_bootloader': '', 'tags': [], 'VCPUs_max': '1', 'memory_static_max': '1073741824', 'actions_after_shutdown': 'destroy', 'memory_dynamic_max': '1073741824', 'user_version': '0', 'xenstore_data': {'vm-data/allowvssprovider': 'false'}, 'blocked_operations': {}, 'is_a_template': False, 'name_description': '', 'memory_dynamic_min': '1073741824', 'actions_after_crash': 'destroy', 'memory_target': '1073741824', 'PV_ramdisk': '', 'PV_bootloader_args': '', 'PCI_bus': '', 'other_config': {'nova_uuid': uuids.nova_uuid}, 'name_label': 'name', 'actions_after_reboot': 'restart', 'VCPUs_at_startup': '1', 'HVM_boot_params': {'order': 'dc'}, 'platform': {'nx': 'true', 'pae': 'true', 'apic': 'true', 'timeoffset': '0', 'viridian': is_viridian_str, 'acpi': 'true', 'device_id': '0002'}, 'PV_legacy_args': '', 'PV_kernel': '', 'affinity': '', 'recommendations': '', 'ha_always_run': False} session.call_xenapi.assert_called_with('VM.create', expected_vm_rec) def test_list_vms(self): self.fixture = self.useFixture(config_fixture.Config(lockutils.CONF)) self.fixture.config(disable_process_locking=True, group='oslo_concurrency') self.flags(instance_name_template='%d', firewall_driver='nova.virt.xenapi.firewall.' 'Dom0IptablesFirewallDriver') self.flags(connection_url='http://localhost', connection_password='test_pass', group='xenserver') fake.create_vm("foo1", "Halted") vm_ref = fake.create_vm("foo2", "Running") stubs.stubout_session(self.stubs, fake.SessionBase) driver = xenapi_conn.XenAPIDriver(False) result = list(vm_utils.list_vms(driver._session)) # Will have 3 VMs - but one is Dom0 and one is not running on the host self.assertEqual(len(driver._session.call_xenapi('VM.get_all')), 3) self.assertEqual(len(result), 1) result_keys = [key for (key, value) in result] self.assertIn(vm_ref, result_keys) class ChildVHDsTestCase(test.NoDBTestCase): all_vdis = [ ("my-vdi-ref", {"uuid": "my-uuid", "sm_config": {}, "is_a_snapshot": False, "other_config": {}}), ("non-parent", {"uuid": "uuid-1", "sm_config": {}, "is_a_snapshot": False, "other_config": {}}), ("diff-parent", {"uuid": "uuid-1", "sm_config": {"vhd-parent": "other-uuid"}, "is_a_snapshot": False, "other_config": {}}), ("child", {"uuid": "uuid-child", "sm_config": {"vhd-parent": "my-uuid"}, "is_a_snapshot": False, "other_config": {}}), ("child-snap", {"uuid": "uuid-child-snap", "sm_config": {"vhd-parent": "my-uuid"}, "is_a_snapshot": True, "other_config": {}}), ] @mock.patch.object(vm_utils, '_get_all_vdis_in_sr') def test_child_vhds_defaults(self, mock_get_all): mock_get_all.return_value = self.all_vdis result = vm_utils._child_vhds("session", "sr_ref", ["my-uuid"]) self.assertJsonEqual(['uuid-child', 'uuid-child-snap'], result) @mock.patch.object(vm_utils, '_get_all_vdis_in_sr') def test_child_vhds_only_snapshots(self, mock_get_all): mock_get_all.return_value = self.all_vdis result = vm_utils._child_vhds("session", "sr_ref", ["my-uuid"], old_snapshots_only=True) self.assertEqual(['uuid-child-snap'], result) @mock.patch.object(vm_utils, '_get_all_vdis_in_sr') def test_child_vhds_chain(self, mock_get_all): mock_get_all.return_value = self.all_vdis result = vm_utils._child_vhds("session", "sr_ref", ["my-uuid", "other-uuid"], old_snapshots_only=True) self.assertEqual(['uuid-child-snap'], result) def test_is_vdi_a_snapshot_works(self): 
vdi_rec = {"is_a_snapshot": True, "other_config": {}} self.assertTrue(vm_utils._is_vdi_a_snapshot(vdi_rec)) def test_is_vdi_a_snapshot_base_images_false(self): vdi_rec = {"is_a_snapshot": True, "other_config": {"image-id": "fake"}} self.assertFalse(vm_utils._is_vdi_a_snapshot(vdi_rec)) def test_is_vdi_a_snapshot_false_for_non_snapshot(self): vdi_rec = {"is_a_snapshot": False, "other_config": {}} self.assertFalse(vm_utils._is_vdi_a_snapshot(vdi_rec)) class RemoveOldSnapshotsTestCase(test.NoDBTestCase): @mock.patch.object(vm_utils, 'get_vdi_for_vm_safely') @mock.patch.object(vm_utils, '_walk_vdi_chain') @mock.patch.object(vm_utils, '_delete_snapshots_in_vdi_chain') def test_remove_old_snapshots(self, mock_delete, mock_walk, mock_get): instance = {"uuid": "fake"} mock_get.return_value = ("ref", {"uuid": "vdi", "SR": "sr_ref"}) mock_walk.return_value = [{"uuid": "uuid1"}, {"uuid": "uuid2"}] vm_utils.remove_old_snapshots("session", instance, "vm_ref") mock_delete.assert_called_once_with("session", instance, ["uuid1", "uuid2"], "sr_ref") mock_get.assert_called_once_with("session", "vm_ref") mock_walk.assert_called_once_with("session", "vdi") @mock.patch.object(vm_utils, '_child_vhds') def test_delete_snapshots_in_vdi_chain_no_chain(self, mock_child): instance = {"uuid": "fake"} vm_utils._delete_snapshots_in_vdi_chain("session", instance, ["uuid"], "sr") self.assertFalse(mock_child.called) @mock.patch.object(vm_utils, '_child_vhds') def test_delete_snapshots_in_vdi_chain_no_snapshots(self, mock_child): instance = {"uuid": "fake"} mock_child.return_value = [] vm_utils._delete_snapshots_in_vdi_chain("session", instance, ["uuid1", "uuid2"], "sr") mock_child.assert_called_once_with("session", "sr", ["uuid2"], old_snapshots_only=True) @mock.patch.object(vm_utils, '_scan_sr') @mock.patch.object(vm_utils, 'safe_destroy_vdis') @mock.patch.object(vm_utils, '_child_vhds') def test_delete_snapshots_in_vdi_chain_calls_destroy(self, mock_child, mock_destroy, mock_scan): instance = {"uuid": "fake"} mock_child.return_value = ["suuid1", "suuid2"] session = mock.Mock() session.VDI.get_by_uuid.side_effect = ["ref1", "ref2"] vm_utils._delete_snapshots_in_vdi_chain(session, instance, ["uuid1", "uuid2"], "sr") mock_child.assert_called_once_with(session, "sr", ["uuid2"], old_snapshots_only=True) session.VDI.get_by_uuid.assert_has_calls([ mock.call("suuid1"), mock.call("suuid2")]) mock_destroy.assert_called_once_with(session, ["ref1", "ref2"]) mock_scan.assert_called_once_with(session, "sr") class ResizeFunctionTestCase(test.NoDBTestCase): def _call_get_resize_func_name(self, brand, version): session = mock.Mock() session.product_brand = brand session.product_version = version return vm_utils._get_resize_func_name(session) def _test_is_resize(self, brand, version): result = self._call_get_resize_func_name(brand, version) self.assertEqual("VDI.resize", result) def _test_is_resize_online(self, brand, version): result = self._call_get_resize_func_name(brand, version) self.assertEqual("VDI.resize_online", result) def test_xenserver_5_5(self): self._test_is_resize_online("XenServer", (5, 5, 0)) def test_xenserver_6_0(self): self._test_is_resize("XenServer", (6, 0, 0)) def test_xcp_1_1(self): self._test_is_resize_online("XCP", (1, 1, 0)) def test_xcp_1_2(self): self._test_is_resize("XCP", (1, 2, 0)) def test_xcp_2_0(self): self._test_is_resize("XCP", (2, 0, 0)) def test_random_brand(self): self._test_is_resize("asfd", (1, 1, 0)) def test_default(self): self._test_is_resize(None, None) def test_empty(self): 
self._test_is_resize("", "") class VMInfoTests(VMUtilsTestBase): def setUp(self): super(VMInfoTests, self).setUp() self.session = mock.Mock() def test_get_power_state_valid(self): # Save on test setup calls by having these simple tests in one method self.session.call_xenapi.return_value = "Running" self.assertEqual(vm_utils.get_power_state(self.session, "ref"), power_state.RUNNING) self.session.call_xenapi.return_value = "Halted" self.assertEqual(vm_utils.get_power_state(self.session, "ref"), power_state.SHUTDOWN) self.session.call_xenapi.return_value = "Paused" self.assertEqual(vm_utils.get_power_state(self.session, "ref"), power_state.PAUSED) self.session.call_xenapi.return_value = "Suspended" self.assertEqual(vm_utils.get_power_state(self.session, "ref"), power_state.SUSPENDED) self.session.call_xenapi.return_value = "Crashed" self.assertEqual(vm_utils.get_power_state(self.session, "ref"), power_state.CRASHED) def test_get_power_state_invalid(self): self.session.call_xenapi.return_value = "Invalid" self.assertRaises(KeyError, vm_utils.get_power_state, self.session, "ref") _XAPI_record = {'power_state': 'Running', 'memory_static_max': str(10 << 10), 'memory_dynamic_max': str(9 << 10), 'VCPUs_max': '5'} def test_compile_info(self): def call_xenapi(method, *args): if method.startswith('VM.get_') and args[0] == 'dummy': return self._XAPI_record[method[7:]] self.session.call_xenapi.side_effect = call_xenapi info = vm_utils.compile_info(self.session, "dummy") self.assertEqual(hardware.InstanceInfo(state=power_state.RUNNING), info) nova-17.0.1/nova/tests/unit/virt/xenapi/test_network_utils.py0000666000175000017500000000544213250073126024451 0ustar zuulzuul00000000000000 # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import mock from nova import exception from nova.tests.unit.virt.xenapi import stubs from nova.virt.xenapi import network_utils class NetworkUtilsTestCase(stubs.XenAPITestBaseNoDB): def test_find_network_with_name_label_works(self): session = mock.Mock() session.network.get_by_name_label.return_value = ["net"] result = network_utils.find_network_with_name_label(session, "label") self.assertEqual("net", result) session.network.get_by_name_label.assert_called_once_with("label") def test_find_network_with_name_returns_none(self): session = mock.Mock() session.network.get_by_name_label.return_value = [] result = network_utils.find_network_with_name_label(session, "label") self.assertIsNone(result) def test_find_network_with_name_label_raises(self): session = mock.Mock() session.network.get_by_name_label.return_value = ["net", "net2"] self.assertRaises(exception.NovaException, network_utils.find_network_with_name_label, session, "label") def test_find_network_with_bridge_works(self): session = mock.Mock() session.network.get_all_records_where.return_value = {"net": "asdf"} result = network_utils.find_network_with_bridge(session, "bridge") self.assertEqual(result, "net") expr = 'field "name__label" = "bridge" or field "bridge" = "bridge"' session.network.get_all_records_where.assert_called_once_with(expr) def test_find_network_with_bridge_raises_too_many(self): session = mock.Mock() session.network.get_all_records_where.return_value = { "net": "asdf", "net2": "asdf2" } self.assertRaises(exception.NovaException, network_utils.find_network_with_bridge, session, "bridge") def test_find_network_with_bridge_raises_no_networks(self): session = mock.Mock() session.network.get_all_records_where.return_value = {} self.assertRaises(exception.NovaException, network_utils.find_network_with_bridge, session, "bridge") nova-17.0.1/nova/tests/unit/virt/xenapi/vm_rrd.xml0000666000175000017500000006544413250073126022152 0ustar zuulzuul00000000000000 0003 5 1328795567 cpu0 DERIVE 300.0000 0.0 1.0000 5102.8417 0.0110 0 memory GAUGE 300.0000 0.0 Infinity 4294967296 10961792000.0000 0 memory_target GAUGE 300.0000 0.0 Infinity 4294967296 10961792000.0000 0 vif_0_tx DERIVE 300.0000 -Infinity Infinity 1079132206 752.4007 0 vif_0_rx DERIVE 300.0000 -Infinity Infinity 1093250983 4837.8805 0 vbd_xvda_write DERIVE 300.0000 -Infinity Infinity 4552440832 0.0 0 vbd_xvda_read DERIVE 300.0000 -Infinity Infinity 1371223040 0.0 0 memory_internal_free GAUGE 300.0000 -Infinity Infinity 1415564 3612860.6020 0 vbd_xvdb_write DERIVE 300.0000 -Infinity Infinity 0.0 0.0 2 vbd_xvdb_read DERIVE 300.0000 -Infinity Infinity 0.0 0.0 2 vif_2_tx DERIVE 300.0000 -Infinity Infinity 0.0 0.0 2 vif_2_rx DERIVE 300.0000 -Infinity Infinity 0.0 0.0 2 AVERAGE 1 0.5000 0.0 0.0 0.0 0 0.0 0.0 0.0 0 0.0 0.0 0.0 0 0.0 0.0 0.0 0 0.0 0.0 0.0 0 0.0 0.0 0.0 0 0.0 0.0 0.0 0 0.0 0.0 0.0 0 0.0 0.0 0.0 0 0.0 0.0 0.0 0 0.0 0.0 0.0 0 0.0 0.0 0.0 0 0.0259 4294967296.0000 4294967296.0000 270.6642 1968.1381 0.0 0.0 1433552.0000 0.0 0.0 0.0 0.0 0.0042 4294967296.0000 4294967296.0000 258.6530 1890.5522 565.3453 0.0 1433552.0000 0.0 0.0 0.0 0.0 0.0043 4294967296.0000 4294967296.0000 249.1120 1778.2501 817.5985 0.0 1433552.0000 0.0 0.0 0.0 0.0 0.0039 4294967296.0000 4294967296.0000 270.5131 1806.3336 9811.4443 0.0 1433552.0000 0.0 0.0 0.0 0.0 0.0041 4294967296.0000 4294967296.0000 264.3683 1952.4054 4370.4121 0.0 1433552.0000 0.0 0.0 0.0 0.0 0.0034 4294967296.0000 4294967296.0000 251.6331 1958.8002 0.0 0.0 1433552.0000 0.0 0.0 0.0 0.0 0.0042 4294967296.0000 4294967296.0000 
<!-- vm_rrd.xml (continued): the remaining rows of this RRD fixture are elided; they are numeric samples for the AVERAGE archives (steps 1, 12, 720 and 17280) of the cpu0, memory, memory_target, vif_0/2_tx/rx, vbd_xvda/xvdb_read/write and memory_internal_free datasources listed above. -->
nova-17.0.1/nova/tests/unit/virt/xenapi/test_volumeops.py0000666000175000017500000006015313250073126023571 0ustar zuulzuul00000000000000# Copyright (c) 2012 Citrix Systems, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License.
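# Tests for nova.virt.xenapi.volumeops: attaching/detaching volumes via VBDs, SR lookup and cleanup, and scrubbing of credentials from debug logs.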
import mock from nova import exception from nova import test from nova.tests.unit.virt.xenapi import stubs from nova.virt.xenapi import vm_utils from nova.virt.xenapi import volume_utils from nova.virt.xenapi import volumeops class VolumeOpsTestBase(stubs.XenAPITestBaseNoDB): def setUp(self): super(VolumeOpsTestBase, self).setUp() self._setup_mock_volumeops() def _setup_mock_volumeops(self): self.session = stubs.FakeSessionForVolumeTests('fake_uri') self.ops = volumeops.VolumeOps(self.session) class VolumeDetachTestCase(VolumeOpsTestBase): @mock.patch.object(volumeops.vm_utils, 'lookup', return_value='vmref') @mock.patch.object(volumeops.volume_utils, 'find_vbd_by_number', return_value='vbdref') @mock.patch.object(volumeops.vm_utils, 'is_vm_shutdown', return_value=False) @mock.patch.object(volumeops.vm_utils, 'unplug_vbd') @mock.patch.object(volumeops.vm_utils, 'destroy_vbd') @mock.patch.object(volumeops.volume_utils, 'get_device_number', return_value='devnumber') @mock.patch.object(volumeops.volume_utils, 'find_sr_from_vbd', return_value='srref') @mock.patch.object(volumeops.volume_utils, 'purge_sr') def test_detach_volume_call(self, mock_purge, mock_find_sr, mock_get_device_num, mock_destroy_vbd, mock_unplug_vbd, mock_is_vm, mock_find_vbd, mock_lookup): ops = volumeops.VolumeOps('session') ops.detach_volume( dict(driver_volume_type='iscsi', data='conn_data'), 'instance_1', 'mountpoint') mock_lookup.assert_called_once_with('session', 'instance_1') mock_get_device_num.assert_called_once_with('mountpoint') mock_find_vbd.assert_called_once_with('session', 'vmref', 'devnumber') mock_is_vm.assert_called_once_with('session', 'vmref') mock_unplug_vbd.assert_called_once_with('session', 'vbdref', 'vmref') mock_destroy_vbd.assert_called_once_with('session', 'vbdref') mock_find_sr.assert_called_once_with('session', 'vbdref') mock_purge.assert_called_once_with('session', 'srref') @mock.patch.object(volumeops.VolumeOps, "_detach_vbds_and_srs") @mock.patch.object(volume_utils, "find_vbd_by_number") @mock.patch.object(vm_utils, "vm_ref_or_raise") def test_detach_volume(self, mock_vm, mock_vbd, mock_detach): mock_vm.return_value = "vm_ref" mock_vbd.return_value = "vbd_ref" self.ops.detach_volume({}, "name", "/dev/xvdd") mock_vm.assert_called_once_with(self.session, "name") mock_vbd.assert_called_once_with(self.session, "vm_ref", 3) mock_detach.assert_called_once_with("vm_ref", ["vbd_ref"]) @mock.patch.object(volumeops.VolumeOps, "_detach_vbds_and_srs") @mock.patch.object(volume_utils, "find_vbd_by_number") @mock.patch.object(vm_utils, "vm_ref_or_raise") def test_detach_volume_skips_error_skip_attach(self, mock_vm, mock_vbd, mock_detach): mock_vm.return_value = "vm_ref" mock_vbd.return_value = None self.ops.detach_volume({}, "name", "/dev/xvdd") self.assertFalse(mock_detach.called) @mock.patch.object(volumeops.VolumeOps, "_detach_vbds_and_srs") @mock.patch.object(volume_utils, "find_vbd_by_number") @mock.patch.object(vm_utils, "vm_ref_or_raise") def test_detach_volume_raises(self, mock_vm, mock_vbd, mock_detach): mock_vm.return_value = "vm_ref" mock_vbd.side_effect = test.TestingException self.assertRaises(test.TestingException, self.ops.detach_volume, {}, "name", "/dev/xvdd") self.assertFalse(mock_detach.called) @mock.patch.object(volume_utils, "purge_sr") @mock.patch.object(vm_utils, "destroy_vbd") @mock.patch.object(volume_utils, "find_sr_from_vbd") @mock.patch.object(vm_utils, "unplug_vbd") @mock.patch.object(vm_utils, "is_vm_shutdown") def test_detach_vbds_and_srs_not_shutdown(self, mock_shutdown, 
mock_unplug, mock_find_sr, mock_destroy, mock_purge): mock_shutdown.return_value = False mock_find_sr.return_value = "sr_ref" self.ops._detach_vbds_and_srs("vm_ref", ["vbd_ref"]) mock_shutdown.assert_called_once_with(self.session, "vm_ref") mock_find_sr.assert_called_once_with(self.session, "vbd_ref") mock_unplug.assert_called_once_with(self.session, "vbd_ref", "vm_ref") mock_destroy.assert_called_once_with(self.session, "vbd_ref") mock_purge.assert_called_once_with(self.session, "sr_ref") @mock.patch.object(volume_utils, "purge_sr") @mock.patch.object(vm_utils, "destroy_vbd") @mock.patch.object(volume_utils, "find_sr_from_vbd") @mock.patch.object(vm_utils, "unplug_vbd") @mock.patch.object(vm_utils, "is_vm_shutdown") def test_detach_vbds_and_srs_is_shutdown(self, mock_shutdown, mock_unplug, mock_find_sr, mock_destroy, mock_purge): mock_shutdown.return_value = True mock_find_sr.return_value = "sr_ref" self.ops._detach_vbds_and_srs("vm_ref", ["vbd_ref_1", "vbd_ref_2"]) expected = [mock.call(self.session, "vbd_ref_1"), mock.call(self.session, "vbd_ref_2")] self.assertEqual(expected, mock_destroy.call_args_list) mock_purge.assert_called_with(self.session, "sr_ref") self.assertFalse(mock_unplug.called) @mock.patch.object(volumeops.VolumeOps, "_detach_vbds_and_srs") @mock.patch.object(volumeops.VolumeOps, "_get_all_volume_vbd_refs") def test_detach_all_no_volumes(self, mock_get_all, mock_detach): mock_get_all.return_value = [] self.ops.detach_all("vm_ref") mock_get_all.assert_called_once_with("vm_ref") self.assertFalse(mock_detach.called) @mock.patch.object(volumeops.VolumeOps, "_detach_vbds_and_srs") @mock.patch.object(volumeops.VolumeOps, "_get_all_volume_vbd_refs") def test_detach_all_volumes(self, mock_get_all, mock_detach): mock_get_all.return_value = ["1"] self.ops.detach_all("vm_ref") mock_get_all.assert_called_once_with("vm_ref") mock_detach.assert_called_once_with("vm_ref", ["1"]) def test_get_all_volume_vbd_refs_no_vbds(self): with mock.patch.object(self.session.VM, "get_VBDs") as mock_get: with mock.patch.object(self.session.VBD, "get_other_config") as mock_conf: mock_get.return_value = [] result = self.ops._get_all_volume_vbd_refs("vm_ref") self.assertEqual([], list(result)) mock_get.assert_called_once_with("vm_ref") self.assertFalse(mock_conf.called) def test_get_all_volume_vbd_refs_no_volumes(self): with mock.patch.object(self.session.VM, "get_VBDs") as mock_get: with mock.patch.object(self.session.VBD, "get_other_config") as mock_conf: mock_get.return_value = ["1"] mock_conf.return_value = {} result = self.ops._get_all_volume_vbd_refs("vm_ref") self.assertEqual([], list(result)) mock_get.assert_called_once_with("vm_ref") mock_conf.assert_called_once_with("1") def test_get_all_volume_vbd_refs_with_volumes(self): with mock.patch.object(self.session.VM, "get_VBDs") as mock_get: with mock.patch.object(self.session.VBD, "get_other_config") as mock_conf: mock_get.return_value = ["1", "2"] mock_conf.return_value = {"osvol": True} result = self.ops._get_all_volume_vbd_refs("vm_ref") self.assertEqual(["1", "2"], list(result)) mock_get.assert_called_once_with("vm_ref") class AttachVolumeTestCase(VolumeOpsTestBase): @mock.patch.object(volumeops.VolumeOps, "_attach_volume") @mock.patch.object(vm_utils, "vm_ref_or_raise") def test_attach_volume_default_hotplug(self, mock_get_vm, mock_attach): mock_get_vm.return_value = "vm_ref" self.ops.attach_volume({}, "instance_name", "/dev/xvda") mock_attach.assert_called_once_with({}, "vm_ref", "instance_name", '/dev/xvda', True) 
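# An explicit hotplug=False must be forwarded unchanged so that _attach_volume_to_vm can skip VBD.plug (verified further down).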
@mock.patch.object(volumeops.VolumeOps, "_attach_volume") @mock.patch.object(vm_utils, "vm_ref_or_raise") def test_attach_volume_hotplug(self, mock_get_vm, mock_attach): mock_get_vm.return_value = "vm_ref" self.ops.attach_volume({}, "instance_name", "/dev/xvda", False) mock_attach.assert_called_once_with({}, "vm_ref", "instance_name", '/dev/xvda', False) @mock.patch.object(volumeops.VolumeOps, "_attach_volume") def test_attach_volume_default_hotplug_connect_volume(self, mock_attach): self.ops.connect_volume({}) mock_attach.assert_called_once_with({}) @mock.patch.object(volumeops.VolumeOps, "_check_is_supported_driver_type") @mock.patch.object(volumeops.VolumeOps, "_connect_to_volume_provider") @mock.patch.object(volumeops.VolumeOps, "_connect_hypervisor_to_volume") @mock.patch.object(volumeops.VolumeOps, "_attach_volume_to_vm") def test_attach_volume_with_defaults(self, mock_attach, mock_hypervisor, mock_provider, mock_driver): connection_info = {"data": {}} with mock.patch.object(self.session.VDI, "get_uuid") as mock_vdi: mock_provider.return_value = ("sr_ref", "sr_uuid") mock_vdi.return_value = "vdi_uuid" result = self.ops._attach_volume(connection_info) self.assertEqual(result, ("sr_uuid", "vdi_uuid")) mock_driver.assert_called_once_with(connection_info) mock_provider.assert_called_once_with({}, None) mock_hypervisor.assert_called_once_with("sr_ref", {}) self.assertFalse(mock_attach.called) @mock.patch.object(volumeops.VolumeOps, "_check_is_supported_driver_type") @mock.patch.object(volumeops.VolumeOps, "_connect_to_volume_provider") @mock.patch.object(volumeops.VolumeOps, "_connect_hypervisor_to_volume") @mock.patch.object(volumeops.VolumeOps, "_attach_volume_to_vm") def test_attach_volume_with_hot_attach(self, mock_attach, mock_hypervisor, mock_provider, mock_driver): connection_info = {"data": {}} with mock.patch.object(self.session.VDI, "get_uuid") as mock_vdi: mock_provider.return_value = ("sr_ref", "sr_uuid") mock_hypervisor.return_value = "vdi_ref" mock_vdi.return_value = "vdi_uuid" result = self.ops._attach_volume(connection_info, "vm_ref", "name", 2, True) self.assertEqual(result, ("sr_uuid", "vdi_uuid")) mock_driver.assert_called_once_with(connection_info) mock_provider.assert_called_once_with({}, "name") mock_hypervisor.assert_called_once_with("sr_ref", {}) mock_attach.assert_called_once_with("vdi_ref", "vm_ref", "name", 2, True) @mock.patch.object(volume_utils, "forget_sr") @mock.patch.object(volumeops.VolumeOps, "_check_is_supported_driver_type") @mock.patch.object(volumeops.VolumeOps, "_connect_to_volume_provider") @mock.patch.object(volumeops.VolumeOps, "_connect_hypervisor_to_volume") @mock.patch.object(volumeops.VolumeOps, "_attach_volume_to_vm") def test_attach_volume_cleanup(self, mock_attach, mock_hypervisor, mock_provider, mock_driver, mock_forget): connection_info = {"data": {}} mock_provider.return_value = ("sr_ref", "sr_uuid") mock_hypervisor.side_effect = test.TestingException self.assertRaises(test.TestingException, self.ops._attach_volume, connection_info) mock_driver.assert_called_once_with(connection_info) mock_provider.assert_called_once_with({}, None) mock_hypervisor.assert_called_once_with("sr_ref", {}) mock_forget.assert_called_once_with(self.session, "sr_ref") self.assertFalse(mock_attach.called) def test_check_is_supported_driver_type_pass_iscsi(self): conn_info = {"driver_volume_type": "iscsi"} self.ops._check_is_supported_driver_type(conn_info) def test_check_is_supported_driver_type_pass_xensm(self): conn_info = {"driver_volume_type": "xensm"} 
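# 'iscsi' and 'xensm' are the supported driver_volume_type values; any other value should raise VolumeDriverNotFound, as the next test asserts.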
self.ops._check_is_supported_driver_type(conn_info) def test_check_is_supported_driver_type_pass_bad(self): conn_info = {"driver_volume_type": "bad"} self.assertRaises(exception.VolumeDriverNotFound, self.ops._check_is_supported_driver_type, conn_info) @mock.patch.object(volume_utils, "introduce_sr") @mock.patch.object(volume_utils, "find_sr_by_uuid") @mock.patch.object(volume_utils, "parse_sr_info") def test_connect_to_volume_provider_new_sr(self, mock_parse, mock_find_sr, mock_introduce_sr): mock_parse.return_value = ("uuid", "label", "params") mock_find_sr.return_value = None mock_introduce_sr.return_value = "sr_ref" ref, uuid = self.ops._connect_to_volume_provider({}, "name") self.assertEqual("sr_ref", ref) self.assertEqual("uuid", uuid) mock_parse.assert_called_once_with({}, "Disk-for:name") mock_find_sr.assert_called_once_with(self.session, "uuid") mock_introduce_sr.assert_called_once_with(self.session, "uuid", "label", "params") @mock.patch.object(volume_utils, "introduce_sr") @mock.patch.object(volume_utils, "find_sr_by_uuid") @mock.patch.object(volume_utils, "parse_sr_info") def test_connect_to_volume_provider_old_sr(self, mock_parse, mock_find_sr, mock_introduce_sr): mock_parse.return_value = ("uuid", "label", "params") mock_find_sr.return_value = "sr_ref" ref, uuid = self.ops._connect_to_volume_provider({}, "name") self.assertEqual("sr_ref", ref) self.assertEqual("uuid", uuid) mock_parse.assert_called_once_with({}, "Disk-for:name") mock_find_sr.assert_called_once_with(self.session, "uuid") self.assertFalse(mock_introduce_sr.called) @mock.patch.object(volume_utils, "introduce_vdi") def test_connect_hypervisor_to_volume_regular(self, mock_intro): mock_intro.return_value = "vdi" result = self.ops._connect_hypervisor_to_volume("sr", {}) self.assertEqual("vdi", result) mock_intro.assert_called_once_with(self.session, "sr") @mock.patch.object(volume_utils, "introduce_vdi") def test_connect_hypervisor_to_volume_vdi(self, mock_intro): mock_intro.return_value = "vdi" conn = {"vdi_uuid": "id"} result = self.ops._connect_hypervisor_to_volume("sr", conn) self.assertEqual("vdi", result) mock_intro.assert_called_once_with(self.session, "sr", vdi_uuid="id") @mock.patch.object(volume_utils, "introduce_vdi") def test_connect_hypervisor_to_volume_lun(self, mock_intro): mock_intro.return_value = "vdi" conn = {"target_lun": "lun"} result = self.ops._connect_hypervisor_to_volume("sr", conn) self.assertEqual("vdi", result) mock_intro.assert_called_once_with(self.session, "sr", target_lun="lun") @mock.patch.object(volume_utils, "introduce_vdi") @mock.patch.object(volumeops.LOG, 'debug') def test_connect_hypervisor_to_volume_mask_password(self, mock_debug, mock_intro): # Tests that the connection_data is scrubbed before logging. 
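# 'verybadpass' is a dummy secret: the assertions below only check that the plaintext value never appears in any LOG.debug positional argument.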
data = {'auth_password': 'verybadpass'} self.ops._connect_hypervisor_to_volume("sr", data) self.assertTrue(mock_debug.called, 'LOG.debug was not called') password_logged = False for call in mock_debug.call_args_list: # The call object is a tuple of (args, kwargs) if 'verybadpass' in call[0]: password_logged = True break self.assertFalse(password_logged, 'connection_data was not scrubbed') @mock.patch.object(vm_utils, "is_vm_shutdown") @mock.patch.object(vm_utils, "create_vbd") def test_attach_volume_to_vm_plug(self, mock_vbd, mock_shutdown): mock_vbd.return_value = "vbd" mock_shutdown.return_value = False with mock.patch.object(self.session.VBD, "plug") as mock_plug: self.ops._attach_volume_to_vm("vdi", "vm", "name", '/dev/2', True) mock_plug.assert_called_once_with("vbd", "vm") mock_vbd.assert_called_once_with(self.session, "vm", "vdi", 2, bootable=False, osvol=True) mock_shutdown.assert_called_once_with(self.session, "vm") @mock.patch.object(vm_utils, "is_vm_shutdown") @mock.patch.object(vm_utils, "create_vbd") def test_attach_volume_to_vm_no_plug(self, mock_vbd, mock_shutdown): mock_vbd.return_value = "vbd" mock_shutdown.return_value = True with mock.patch.object(self.session.VBD, "plug") as mock_plug: self.ops._attach_volume_to_vm("vdi", "vm", "name", '/dev/2', True) self.assertFalse(mock_plug.called) mock_vbd.assert_called_once_with(self.session, "vm", "vdi", 2, bootable=False, osvol=True) mock_shutdown.assert_called_once_with(self.session, "vm") @mock.patch.object(vm_utils, "is_vm_shutdown") @mock.patch.object(vm_utils, "create_vbd") def test_attach_volume_to_vm_no_hotplug(self, mock_vbd, mock_shutdown): mock_vbd.return_value = "vbd" with mock.patch.object(self.session.VBD, "plug") as mock_plug: self.ops._attach_volume_to_vm("vdi", "vm", "name", '/dev/2', False) self.assertFalse(mock_plug.called) mock_vbd.assert_called_once_with(self.session, "vm", "vdi", 2, bootable=False, osvol=True) self.assertFalse(mock_shutdown.called) class FindBadVolumeTestCase(VolumeOpsTestBase): @mock.patch.object(volumeops.VolumeOps, "_get_all_volume_vbd_refs") def test_find_bad_volumes_no_vbds(self, mock_get_all): mock_get_all.return_value = [] result = self.ops.find_bad_volumes("vm_ref") mock_get_all.assert_called_once_with("vm_ref") self.assertEqual([], result) @mock.patch.object(volume_utils, "find_sr_from_vbd") @mock.patch.object(volumeops.VolumeOps, "_get_all_volume_vbd_refs") def test_find_bad_volumes_no_bad_vbds(self, mock_get_all, mock_find_sr): mock_get_all.return_value = ["1", "2"] mock_find_sr.return_value = "sr_ref" with mock.patch.object(self.session.SR, "scan") as mock_scan: result = self.ops.find_bad_volumes("vm_ref") mock_get_all.assert_called_once_with("vm_ref") expected_find = [mock.call(self.session, "1"), mock.call(self.session, "2")] self.assertEqual(expected_find, mock_find_sr.call_args_list) expected_scan = [mock.call("sr_ref"), mock.call("sr_ref")] self.assertEqual(expected_scan, mock_scan.call_args_list) self.assertEqual([], result) @mock.patch.object(volume_utils, "find_sr_from_vbd") @mock.patch.object(volumeops.VolumeOps, "_get_all_volume_vbd_refs") def test_find_bad_volumes_bad_vbds(self, mock_get_all, mock_find_sr): mock_get_all.return_value = ["vbd_ref"] mock_find_sr.return_value = "sr_ref" class FakeException(Exception): details = ['SR_BACKEND_FAILURE_40', "", "", ""] session = mock.Mock() session.XenAPI.Failure = FakeException self.ops._session = session with mock.patch.object(session.SR, "scan") as mock_scan: with mock.patch.object(session.VBD, "get_device") as mock_get: 
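# An SR.scan failure whose details start with SR_BACKEND_FAILURE_40 marks the volume as bad, so find_bad_volumes should return the VBD's device path instead of re-raising.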
mock_scan.side_effect = FakeException mock_get.return_value = "xvdb" result = self.ops.find_bad_volumes("vm_ref") mock_get_all.assert_called_once_with("vm_ref") mock_scan.assert_called_once_with("sr_ref") mock_get.assert_called_once_with("vbd_ref") self.assertEqual(["/dev/xvdb"], result) @mock.patch.object(volume_utils, "find_sr_from_vbd") @mock.patch.object(volumeops.VolumeOps, "_get_all_volume_vbd_refs") def test_find_bad_volumes_raises(self, mock_get_all, mock_find_sr): mock_get_all.return_value = ["vbd_ref"] mock_find_sr.return_value = "sr_ref" class FakeException(Exception): details = ['foo', "", "", ""] session = mock.Mock() session.XenAPI.Failure = FakeException self.ops._session = session with mock.patch.object(session.SR, "scan") as mock_scan: with mock.patch.object(session.VBD, "get_device") as mock_get: mock_scan.side_effect = FakeException mock_get.return_value = "xvdb" self.assertRaises(FakeException, self.ops.find_bad_volumes, "vm_ref") mock_scan.assert_called_once_with("sr_ref") class CleanupFromVDIsTestCase(VolumeOpsTestBase): def _check_find_purge_calls(self, find_sr_from_vdi, purge_sr, vdi_refs, sr_refs): find_sr_calls = [mock.call(self.ops._session, vdi_ref) for vdi_ref in vdi_refs] find_sr_from_vdi.assert_has_calls(find_sr_calls) purge_sr_calls = [mock.call(self.ops._session, sr_ref) for sr_ref in sr_refs] purge_sr.assert_has_calls(purge_sr_calls) @mock.patch.object(volume_utils, 'find_sr_from_vdi') @mock.patch.object(volume_utils, 'purge_sr') def test_safe_cleanup_from_vdis(self, purge_sr, find_sr_from_vdi): vdi_refs = ['vdi_ref1', 'vdi_ref2'] sr_refs = ['sr_ref1', 'sr_ref2'] find_sr_from_vdi.side_effect = sr_refs self.ops.safe_cleanup_from_vdis(vdi_refs) self._check_find_purge_calls(find_sr_from_vdi, purge_sr, vdi_refs, sr_refs) @mock.patch.object(volume_utils, 'find_sr_from_vdi', side_effect=[exception.StorageError(reason=''), 'sr_ref2']) @mock.patch.object(volume_utils, 'purge_sr') def test_safe_cleanup_from_vdis_handles_find_sr_exception(self, purge_sr, find_sr_from_vdi): vdi_refs = ['vdi_ref1', 'vdi_ref2'] sr_refs = ['sr_ref2'] find_sr_from_vdi.side_effect = [exception.StorageError(reason=''), sr_refs[0]] self.ops.safe_cleanup_from_vdis(vdi_refs) self._check_find_purge_calls(find_sr_from_vdi, purge_sr, vdi_refs, sr_refs) @mock.patch.object(volume_utils, 'find_sr_from_vdi') @mock.patch.object(volume_utils, 'purge_sr') def test_safe_cleanup_from_vdis_handles_purge_sr_exception(self, purge_sr, find_sr_from_vdi): vdi_refs = ['vdi_ref1', 'vdi_ref2'] sr_refs = ['sr_ref1', 'sr_ref2'] find_sr_from_vdi.side_effect = sr_refs purge_sr.side_effect = [test.TestingException, None] self.ops.safe_cleanup_from_vdis(vdi_refs) self._check_find_purge_calls(find_sr_from_vdi, purge_sr, vdi_refs, sr_refs) nova-17.0.1/nova/tests/unit/virt/xenapi/__init__.py0000666000175000017500000000000013250073126022221 0ustar zuulzuul00000000000000nova-17.0.1/nova/tests/unit/virt/xenapi/test_volume_utils.py0000666000175000017500000003706613250073126024276 0ustar zuulzuul00000000000000# Copyright 2013 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. from eventlet import greenthread import mock import six from nova import exception from nova import test from nova.tests.unit.virt.xenapi import stubs from nova.virt.xenapi import volume_utils class SROps(stubs.XenAPITestBaseNoDB): def test_find_sr_valid_uuid(self): self.session = mock.Mock() self.session.call_xenapi.return_value = 'sr_ref' self.assertEqual(volume_utils.find_sr_by_uuid(self.session, 'sr_uuid'), 'sr_ref') def test_find_sr_invalid_uuid(self): class UUIDException(Exception): details = ["UUID_INVALID", "", "", ""] self.session = mock.Mock() self.session.XenAPI.Failure = UUIDException self.session.call_xenapi.side_effect = UUIDException self.assertIsNone( volume_utils.find_sr_by_uuid(self.session, 'sr_uuid')) def test_find_sr_from_vdi(self): vdi_ref = 'fake-ref' def fake_call_xenapi(method, *args): self.assertEqual(method, 'VDI.get_SR') self.assertEqual(args[0], vdi_ref) return args[0] session = mock.Mock() session.call_xenapi.side_effect = fake_call_xenapi self.assertEqual(volume_utils.find_sr_from_vdi(session, vdi_ref), vdi_ref) def test_find_sr_from_vdi_exception(self): vdi_ref = 'fake-ref' class FakeException(Exception): pass session = mock.Mock() session.XenAPI.Failure = FakeException session.call_xenapi.side_effect = FakeException self.assertRaises(exception.StorageError, volume_utils.find_sr_from_vdi, session, vdi_ref) class ISCSIParametersTestCase(stubs.XenAPITestBaseNoDB): def test_target_host(self): self.assertEqual(volume_utils._get_target_host('host:port'), 'host') self.assertEqual(volume_utils._get_target_host('host'), 'host') # There is no default value self.assertIsNone(volume_utils._get_target_host(':port')) self.assertIsNone(volume_utils._get_target_host(None)) def test_target_port(self): self.assertEqual(volume_utils._get_target_port('host:port'), 'port') self.assertEqual(volume_utils._get_target_port('host'), 3260) class IntroduceTestCase(stubs.XenAPITestBaseNoDB): @mock.patch.object(volume_utils, '_get_vdi_ref') @mock.patch.object(greenthread, 'sleep') def test_introduce_vdi_retry(self, mock_sleep, mock_get_vdi_ref): def fake_get_vdi_ref(session, sr_ref, vdi_uuid, target_lun): fake_get_vdi_ref.call_count += 1 if fake_get_vdi_ref.call_count == 2: return 'vdi_ref' def fake_call_xenapi(method, *args): if method == 'SR.scan': return elif method == 'VDI.get_record': return {'managed': 'true'} session = mock.Mock() session.call_xenapi.side_effect = fake_call_xenapi mock_get_vdi_ref.side_effect = fake_get_vdi_ref fake_get_vdi_ref.call_count = 0 self.assertEqual(volume_utils.introduce_vdi(session, 'sr_ref'), 'vdi_ref') mock_sleep.assert_called_once_with(20) @mock.patch.object(volume_utils, '_get_vdi_ref') @mock.patch.object(greenthread, 'sleep') def test_introduce_vdi_exception(self, mock_sleep, mock_get_vdi_ref): def fake_call_xenapi(method, *args): if method == 'SR.scan': return elif method == 'VDI.get_record': return {'managed': 'true'} session = mock.Mock() session.call_xenapi.side_effect = fake_call_xenapi mock_get_vdi_ref.return_value = None self.assertRaises(exception.StorageError, volume_utils.introduce_vdi, session, 'sr_ref') mock_sleep.assert_called_once_with(20) class ParseVolumeInfoTestCase(stubs.XenAPITestBaseNoDB): def test_mountpoint_to_number(self): cases = { 'sda': 0, 'sdp': 15, 'hda': 0, 'hdp': 15, 'vda': 0, 'xvda': 0, '0': 0, '10': 10, 'vdq': -1, 'sdq': -1, 'hdq': -1, 'xvdq': -1, } for (input, expected) in cases.items(): actual = 
volume_utils._mountpoint_to_number(input) self.assertEqual(actual, expected, '%s yielded %s, not %s' % (input, actual, expected)) @classmethod def _make_connection_info(cls): target_iqn = 'iqn.2010-10.org.openstack:volume-00000001' return {'driver_volume_type': 'iscsi', 'data': {'volume_id': 1, 'target_iqn': target_iqn, 'target_portal': '127.0.0.1:3260,fake', 'target_lun': None, 'auth_method': 'CHAP', 'auth_username': 'username', 'auth_password': 'verybadpass'}} def test_parse_volume_info_parsing_auth_details(self): conn_info = self._make_connection_info() result = volume_utils._parse_volume_info(conn_info['data']) self.assertEqual('username', result['chapuser']) self.assertEqual('verybadpass', result['chappassword']) def test_parse_volume_info_missing_details(self): # Tests that a StorageError is raised if volume_id, target_host, or # target_ign is missing from connection_data. Also ensures that the # auth_password value is not present in the StorageError message. for data_key_to_null in ('volume_id', 'target_portal', 'target_iqn'): conn_info = self._make_connection_info() conn_info['data'][data_key_to_null] = None ex = self.assertRaises(exception.StorageError, volume_utils._parse_volume_info, conn_info['data']) self.assertNotIn('verybadpass', six.text_type(ex)) def test_get_device_number_raise_exception_on_wrong_mountpoint(self): self.assertRaises( exception.StorageError, volume_utils.get_device_number, 'dev/sd') class FindVBDTestCase(stubs.XenAPITestBaseNoDB): def test_find_vbd_by_number_works(self): session = mock.Mock() session.VM.get_VBDs.return_value = ["a", "b"] session.VBD.get_userdevice.return_value = "1" result = volume_utils.find_vbd_by_number(session, "vm_ref", 1) self.assertEqual("a", result) session.VM.get_VBDs.assert_called_once_with("vm_ref") session.VBD.get_userdevice.assert_called_once_with("a") def test_find_vbd_by_number_no_matches(self): session = mock.Mock() session.VM.get_VBDs.return_value = ["a", "b"] session.VBD.get_userdevice.return_value = "3" result = volume_utils.find_vbd_by_number(session, "vm_ref", 1) self.assertIsNone(result) session.VM.get_VBDs.assert_called_once_with("vm_ref") expected = [mock.call("a"), mock.call("b")] self.assertEqual(expected, session.VBD.get_userdevice.call_args_list) def test_find_vbd_by_number_no_vbds(self): session = mock.Mock() session.VM.get_VBDs.return_value = [] result = volume_utils.find_vbd_by_number(session, "vm_ref", 1) self.assertIsNone(result) session.VM.get_VBDs.assert_called_once_with("vm_ref") self.assertFalse(session.VBD.get_userdevice.called) def test_find_vbd_by_number_ignores_exception(self): session = mock.Mock() session.XenAPI.Failure = test.TestingException session.VM.get_VBDs.return_value = ["a"] session.VBD.get_userdevice.side_effect = test.TestingException result = volume_utils.find_vbd_by_number(session, "vm_ref", 1) self.assertIsNone(result) session.VM.get_VBDs.assert_called_once_with("vm_ref") session.VBD.get_userdevice.assert_called_once_with("a") class IntroduceSRTestCase(stubs.XenAPITestBaseNoDB): @mock.patch.object(volume_utils, '_create_pbd') def test_backend_kind(self, create_pbd): session = mock.Mock() session.product_version = (6, 5, 0) session.call_xenapi.return_value = 'sr_ref' params = {'sr_type': 'iscsi'} sr_uuid = 'sr_uuid' label = 'label' expected_params = {'backend-kind': 'vbd'} volume_utils.introduce_sr(session, sr_uuid, label, params) session.call_xenapi.assert_any_call('SR.introduce', sr_uuid, label, '', 'iscsi', '', False, expected_params) @mock.patch.object(volume_utils, '_create_pbd') 
def test_backend_kind_upstream_fix(self, create_pbd): session = mock.Mock() session.product_version = (7, 0, 0) session.call_xenapi.return_value = 'sr_ref' params = {'sr_type': 'iscsi'} sr_uuid = 'sr_uuid' label = 'label' expected_params = {} volume_utils.introduce_sr(session, sr_uuid, label, params) session.call_xenapi.assert_any_call('SR.introduce', sr_uuid, label, '', 'iscsi', '', False, expected_params) class BootedFromVolumeTestCase(stubs.XenAPITestBaseNoDB): def test_booted_from_volume(self): session = mock.Mock() session.VM.get_VBDs.return_value = ['vbd_ref'] session.VBD.get_userdevice.return_value = '0' session.VBD.get_other_config.return_value = {'osvol': True} booted_from_volume = volume_utils.is_booted_from_volume(session, 'vm_ref') self.assertTrue(booted_from_volume) def test_not_booted_from_volume(self): session = mock.Mock() session.VM.get_VBDs.return_value = ['vbd_ref'] session.VBD.get_userdevice.return_value = '0' session.VBD.get_other_config.return_value = {} booted_from_volume = volume_utils.is_booted_from_volume(session, 'vm_ref') self.assertFalse(booted_from_volume) class MultipleVolumesTestCase(stubs.XenAPITestBaseNoDB): def test_sr_info_two_luns(self): data1 = {'target_portal': 'host:port', 'target_iqn': 'iqn', 'volume_id': 'vol_id_1', 'target_lun': 1} data2 = {'target_portal': 'host:port', 'target_iqn': 'iqn', 'volume_id': 'vol_id_2', 'target_lun': 2} (sr_uuid1, label1, params1) = volume_utils.parse_sr_info(data1) (sr_uuid2, label2, params2) = volume_utils.parse_sr_info(data2) self.assertEqual(sr_uuid1, sr_uuid2) self.assertEqual(label1, label2) @mock.patch.object(volume_utils, 'forget_sr') def test_purge_sr_no_VBDs(self, mock_forget): def _call_xenapi(func, *args): if func == 'SR.get_VDIs': return ['VDI1', 'VDI2'] if func == 'VDI.get_VBDs': return [] self.session = mock.Mock() self.session.call_xenapi = _call_xenapi volume_utils.purge_sr(self.session, 'SR') mock_forget.assert_called_once_with(self.session, 'SR') @mock.patch.object(volume_utils, 'forget_sr') def test_purge_sr_in_use(self, mock_forget): def _call_xenapi(func, *args): if func == 'SR.get_VDIs': return ['VDI1', 'VDI2'] if func == 'VDI.get_VBDs': if args[0] == 'VDI1': return ['VBD1'] if args[0] == 'VDI2': return ['VBD2'] self.session = mock.Mock() self.session.call_xenapi = _call_xenapi volume_utils.purge_sr(self.session, 'SR') self.assertEqual([], mock_forget.mock_calls) class TestStreamToVDI(stubs.XenAPITestBaseNoDB): @mock.patch.object(volume_utils, '_stream_to_vdi') @mock.patch.object(volume_utils, '_get_vdi_import_path', return_value='vdi_import_path') def test_creates_task_conn(self, mock_import_path, mock_stream): session = stubs.get_fake_session() session.custom_task = mock.MagicMock() session.custom_task.return_value.__enter__.return_value = 'task' session.http_connection = mock.MagicMock() session.http_connection.return_value.__enter__.return_value = 'conn' instance = {'name': 'instance-name'} volume_utils.stream_to_vdi(session, instance, 'vhd', 'file_obj', 100, 'vdi_ref') session.custom_task.assert_called_with('VDI_IMPORT_for_instance-name') mock_stream.assert_called_with('conn', 'vdi_import_path', 100, 'file_obj') self.assertTrue(session.http_connection.return_value.__exit__.called) self.assertTrue(session.custom_task.return_value.__exit__.called) def test_stream_to_vdi_tiny(self): mock_file = mock.Mock() mock_file.read.side_effect = ['a'] mock_conn = mock.Mock() resp = mock.Mock() resp.status = '200' resp.reason = 'OK' mock_conn.getresponse.return_value = resp 
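# For a 1-byte upload, _stream_to_vdi should set Content-Length to '1' and issue a single read/send; the tests below cover the 16 KiB chunking path.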
volume_utils._stream_to_vdi(mock_conn, '/path', 1, mock_file) args, kwargs = mock_conn.request.call_args self.assertEqual(kwargs['headers']['Content-Length'], '1') mock_file.read.assert_called_once_with(1) mock_conn.send.assert_called_once_with('a') def test_stream_to_vdi_chunk_multiple(self): mock_file = mock.Mock() mock_file.read.side_effect = ['aaaaa', 'bbbbb'] mock_conn = mock.Mock() resp = mock.Mock() resp.status = '200' resp.reason = 'OK' mock_conn.getresponse.return_value = resp tot_size = 2 * 16 * 1024 volume_utils._stream_to_vdi(mock_conn, '/path', tot_size, mock_file) args, kwargs = mock_conn.request.call_args self.assertEqual(kwargs['headers']['Content-Length'], str(tot_size)) mock_file.read.assert_has_calls([mock.call(16 * 1024), mock.call(16 * 1024)]) mock_conn.send.assert_has_calls([mock.call('aaaaa'), mock.call('bbbbb')]) def test_stream_to_vdi_chunk_remaining(self): mock_file = mock.Mock() mock_file.read.side_effect = ['aaaaa', 'bb'] mock_conn = mock.Mock() resp = mock.Mock() resp.status = '200' resp.reason = 'OK' mock_conn.getresponse.return_value = resp tot_size = 16 * 1024 + 1024 volume_utils._stream_to_vdi(mock_conn, '/path', tot_size, mock_file) args, kwargs = mock_conn.request.call_args self.assertEqual(kwargs['headers']['Content-Length'], str(tot_size)) mock_file.read.assert_has_calls([mock.call(16 * 1024), mock.call(1024)]) mock_conn.send.assert_has_calls([mock.call('aaaaa'), mock.call('bb')]) nova-17.0.1/nova/tests/unit/virt/xenapi/test_vif.py0000666000175000017500000004557213250073126022334 0ustar zuulzuul00000000000000# Copyright 2013 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
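# ---------------------------------------------------------------------------
# Editor's note (a sketch, not part of the original module): the tests in
# this file fake the XenAPI session by installing a dispatching function as a
# Mock side_effect, so one fake routes every
# call_xenapi("<class>.<method>", ...) invocation.  A minimal, self-contained
# illustration of that idiom -- the names below are illustrative only:
#
#     import mock
#
#     def _dispatch(method, *args):
#         if method == "VM.get_VIFs":
#             return ["vif_ref"]
#         raise NotImplementedError(method)
#
#     session = mock.Mock()
#     session.call_xenapi.side_effect = _dispatch
#     assert session.call_xenapi("VM.get_VIFs", "vm_ref") == ["vif_ref"]
# ---------------------------------------------------------------------------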
import mock

from nova.compute import power_state
from nova import exception
from nova.network import model
from nova import test
from nova.tests.unit.virt.xenapi import stubs
from nova.virt.xenapi import network_utils
from nova.virt.xenapi import vif
from nova.virt.xenapi import vm_utils

import os_xenapi

fake_vif = {
    'created_at': None,
    'updated_at': None,
    'deleted_at': None,
    'deleted': 0,
    'id': '123456789123',
    'address': '00:00:00:00:00:00',
    'network_id': 123,
    'instance_uuid': 'fake-uuid',
    'uuid': 'fake-uuid-2',
}


def fake_call_xenapi(method, *args):
    if method == "VM.get_VIFs":
        return ["fake_vif_ref", "fake_vif_ref_A2"]
    if method == "VIF.get_record":
        if args[0] == "fake_vif_ref":
            return {'uuid': fake_vif['uuid'],
                    'MAC': fake_vif['address'],
                    'network': 'fake_network',
                    'other_config': {'neutron-port-id': fake_vif['id']}
                    }
        else:
            raise test.TestingException("Failed get vif record")
    if method == "VIF.unplug":
        return
    if method == "VIF.destroy":
        if args[0] == "fake_vif_ref":
            return
        else:
            raise test.TestingException("unplug vif failed")
    if method == "VIF.create":
        if args[0] == "fake_vif_rec":
            return "fake_vif_ref"
        else:
            raise test.TestingException("VIF existed")
    return "Unexpected call_xenapi: %s.%s" % (method, args)


class XenVIFDriverTestBase(stubs.XenAPITestBaseNoDB):
    def setUp(self):
        super(XenVIFDriverTestBase, self).setUp()
        self._session = mock.Mock()
        self._session.call_xenapi.side_effect = fake_call_xenapi

    def mock_patch_object(self, target, attribute, return_val=None,
                          side_effect=None):
        """Utility function to mock an object's attribute at runtime.

        Some attributes are dynamic, so standard mocking does not work and
        we need to mock them at runtime, e.g. self._session.VIF.get_record,
        which is created dynamically via the session's overridden
        __getattr__.
        """
        patcher = mock.patch.object(target, attribute,
                                    return_value=return_val,
                                    side_effect=side_effect)
        mock_one = patcher.start()
        self.addCleanup(patcher.stop)
        return mock_one


class XenVIFDriverTestCase(XenVIFDriverTestBase):
    def setUp(self):
        super(XenVIFDriverTestCase, self).setUp()
        self.base_driver = vif.XenVIFDriver(self._session)

    def test_get_vif_ref(self):
        vm_ref = "fake_vm_ref"
        vif_ref = 'fake_vif_ref'
        ret_vif_ref = self.base_driver._get_vif_ref(fake_vif, vm_ref)
        self.assertEqual(vif_ref, ret_vif_ref)

        expected = [mock.call('VM.get_VIFs', vm_ref),
                    mock.call('VIF.get_record', vif_ref)]
        self.assertEqual(expected, self._session.call_xenapi.call_args_list)

    def test_get_vif_ref_none_and_exception(self):
        vm_ref = "fake_vm_ref"
        vif = {'address': "no_match_vif_address"}
        ret_vif_ref = self.base_driver._get_vif_ref(vif, vm_ref)
        self.assertIsNone(ret_vif_ref)

        expected = [mock.call('VM.get_VIFs', vm_ref),
                    mock.call('VIF.get_record', 'fake_vif_ref'),
                    mock.call('VIF.get_record', 'fake_vif_ref_A2')]
        self.assertEqual(expected, self._session.call_xenapi.call_args_list)

    def test_create_vif(self):
        vif_rec = "fake_vif_rec"
        vm_ref = "fake_vm_ref"
        ret_vif_ref = self.base_driver._create_vif(fake_vif, vif_rec, vm_ref)
        self.assertEqual("fake_vif_ref", ret_vif_ref)

        expected = [mock.call('VIF.create', vif_rec)]
        self.assertEqual(expected, self._session.call_xenapi.call_args_list)

    def test_create_vif_exception(self):
        self.assertRaises(exception.NovaException,
                          self.base_driver._create_vif,
                          "fake_vif", "missing_vif_rec", "fake_vm_ref")

    @mock.patch.object(vif.XenVIFDriver, 'hot_unplug')
    @mock.patch.object(vif.XenVIFDriver, '_get_vif_ref',
                       return_value='fake_vif_ref')
    def test_unplug(self, mock_get_vif_ref, mock_hot_unplug):
        instance = {'name': "fake_instance"}
        vm_ref = "fake_vm_ref"
self.base_driver.unplug(instance, fake_vif, vm_ref) expected = [mock.call('VIF.destroy', 'fake_vif_ref')] self.assertEqual(expected, self._session.call_xenapi.call_args_list) mock_hot_unplug.assert_called_once_with( fake_vif, instance, 'fake_vm_ref', 'fake_vif_ref') @mock.patch.object(vif.XenVIFDriver, '_get_vif_ref', return_value='missing_vif_ref') def test_unplug_exception(self, mock_get_vif_ref): instance = "fake_instance" vm_ref = "fake_vm_ref" self.assertRaises(exception.NovaException, self.base_driver.unplug, instance, fake_vif, vm_ref) class XenAPIBridgeDriverTestCase(XenVIFDriverTestBase, object): def setUp(self): super(XenAPIBridgeDriverTestCase, self).setUp() self.bridge_driver = vif.XenAPIBridgeDriver(self._session) @mock.patch.object(vif.XenAPIBridgeDriver, '_ensure_vlan_bridge', return_value='fake_network_ref') @mock.patch.object(vif.XenVIFDriver, '_create_vif', return_value='fake_vif_ref') def test_plug_create_vlan(self, mock_create_vif, mock_ensure_vlan_bridge): instance = {'name': "fake_instance_name"} network = model.Network() network._set_meta({'should_create_vlan': True}) vif = model.VIF() vif._set_meta({'rxtx_cap': 1}) vif['network'] = network vif['address'] = "fake_address" vm_ref = "fake_vm_ref" device = 1 ret_vif_ref = self.bridge_driver.plug(instance, vif, vm_ref, device) self.assertEqual('fake_vif_ref', ret_vif_ref) @mock.patch.object(vif.vm_utils, 'lookup', return_value=None) def test_plug_exception(self, mock_lookup): instance = {'name': "fake_instance_name"} self.assertRaises(exception.VirtualInterfacePlugException, self.bridge_driver.plug, instance, fake_vif, vm_ref=None, device=1) mock_lookup.assert_called_once_with(self._session, instance['name']) class XenAPIOpenVswitchDriverTestCase(XenVIFDriverTestBase): def setUp(self): super(XenAPIOpenVswitchDriverTestCase, self).setUp() self.ovs_driver = vif.XenAPIOpenVswitchDriver(self._session) @mock.patch.object(vif.XenAPIOpenVswitchDriver, 'hot_plug') @mock.patch.object(vif.XenVIFDriver, '_create_vif', return_value='fake_vif_ref') @mock.patch.object(vif.XenAPIOpenVswitchDriver, 'create_vif_interim_network') @mock.patch.object(vif.XenVIFDriver, '_get_vif_ref', return_value=None) @mock.patch.object(vif.vm_utils, 'lookup', return_value='fake_vm_ref') def test_plug(self, mock_lookup, mock_get_vif_ref, mock_create_vif_interim_network, mock_create_vif, mock_hot_plug): instance = {'name': "fake_instance_name"} ret_vif_ref = self.ovs_driver.plug( instance, fake_vif, vm_ref=None, device=1) self.assertTrue(mock_lookup.called) self.assertTrue(mock_get_vif_ref.called) self.assertTrue(mock_create_vif_interim_network.called) self.assertTrue(mock_create_vif.called) self.assertEqual('fake_vif_ref', ret_vif_ref) mock_hot_plug.assert_called_once_with(fake_vif, instance, 'fake_vm_ref', 'fake_vif_ref') @mock.patch.object(vif.vm_utils, 'lookup', return_value=None) def test_plug_exception(self, mock_lookup): instance = {'name': "fake_instance_name"} self.assertRaises(exception.VirtualInterfacePlugException, self.ovs_driver.plug, instance, fake_vif, vm_ref=None, device=1) mock_lookup.assert_called_once_with(self._session, instance['name']) @mock.patch.object(vif.XenAPIOpenVswitchDriver, 'delete_network_and_bridge') @mock.patch.object(network_utils, 'find_network_with_name_label', return_value='fake_network') @mock.patch.object(vif.XenVIFDriver, 'unplug') def test_unplug(self, mock_super_unplug, mock_find_network_with_name_label, mock_delete_network_bridge): instance = {'name': "fake_instance"} vm_ref = "fake_vm_ref" mock_network_get_VIFs 
= self.mock_patch_object( self._session.network, 'get_VIFs', return_val=None) self.ovs_driver.unplug(instance, fake_vif, vm_ref) self.assertTrue(mock_super_unplug.called) self.assertTrue(mock_find_network_with_name_label.called) self.assertTrue(mock_network_get_VIFs.called) self.assertTrue(mock_delete_network_bridge.called) @mock.patch.object(vif.XenAPIOpenVswitchDriver, '_delete_linux_bridge') @mock.patch.object(vif.XenAPIOpenVswitchDriver, '_delete_linux_port') @mock.patch.object(os_xenapi.client.host_network, 'ovs_del_br') @mock.patch.object(os_xenapi.client.host_network, 'ovs_del_port') @mock.patch.object(network_utils, 'find_network_with_name_label') def test_delete_network_and_bridge(self, mock_find_network, mock_ovs_del_port, mock_ovs_del_br, mock_delete_linux_port, mock_delete_linux_bridge): mock_find_network.return_value = 'fake_network' instance = {'name': 'fake_instance'} vif = {'id': 'fake_vif'} self._session.network = mock.Mock() self.ovs_driver.delete_network_and_bridge(instance, vif) self._session.network.get_bridge.assert_called_once_with( 'fake_network') self._session.network.destroy.assert_called_once_with('fake_network') self.assertTrue(mock_find_network.called) self.assertEqual(mock_ovs_del_port.call_count, 2) self.assertEqual(mock_delete_linux_port.call_count, 2) self.assertTrue(mock_delete_linux_bridge.called) self.assertTrue(mock_ovs_del_br.called) @mock.patch.object(os_xenapi.client.host_network, 'ovs_del_port') @mock.patch.object(network_utils, 'find_network_with_name_label', return_value='fake_network') def test_delete_network_and_bridge_destroy_exception(self, mock_find_network, mock_ovs_del_port): instance = {'name': "fake_instance"} self.mock_patch_object( self._session.network, 'get_VIFs', return_val=None) self.mock_patch_object( self._session.network, 'get_bridge', return_val='fake_bridge') self.mock_patch_object( self._session.network, 'destroy', side_effect=test.TestingException) self.assertRaises(exception.VirtualInterfaceUnplugException, self.ovs_driver.delete_network_and_bridge, instance, fake_vif) self.assertTrue(mock_find_network.called) self.assertTrue(mock_ovs_del_port.called) @mock.patch.object(vif.XenAPIOpenVswitchDriver, '_device_exists') @mock.patch.object(os_xenapi.client.host_network, 'brctl_add_if') @mock.patch.object(vif.XenAPIOpenVswitchDriver, '_create_linux_bridge') @mock.patch.object(os_xenapi.client.host_network, 'ovs_add_port') def test_post_start_actions(self, mock_ovs_add_port, mock_create_linux_bridge, mock_brctl_add_if, mock_device_exists): vif_ref = "fake_vif_ref" instance = {'name': 'fake_instance_name'} fake_vif_rec = {'uuid': fake_vif['uuid'], 'MAC': fake_vif['address'], 'network': 'fake_network', 'other_config': { 'neutron-port-id': 'fake-neutron-port-id'} } mock_VIF_get_record = self.mock_patch_object( self._session.VIF, 'get_record', return_val=fake_vif_rec) mock_network_get_bridge = self.mock_patch_object( self._session.network, 'get_bridge', return_val='fake_bridge_name') mock_network_get_uuid = self.mock_patch_object( self._session.network, 'get_uuid', return_val='fake_network_uuid') mock_device_exists.return_value = False self.ovs_driver.post_start_actions(instance, vif_ref) self.assertTrue(mock_VIF_get_record.called) self.assertTrue(mock_network_get_bridge.called) self.assertTrue(mock_network_get_uuid.called) self.assertEqual(mock_ovs_add_port.call_count, 1) self.assertTrue(mock_brctl_add_if.called) @mock.patch.object(vif.XenAPIOpenVswitchDriver, '_device_exists') @mock.patch.object(os_xenapi.client.host_network, 
'brctl_add_if') @mock.patch.object(vif.XenAPIOpenVswitchDriver, '_create_linux_bridge') @mock.patch.object(os_xenapi.client.host_network, 'ovs_add_port') def test_post_start_actions_tap_exist(self, mock_ovs_add_port, mock_create_linux_bridge, mock_brctl_add_if, mock_device_exists): vif_ref = "fake_vif_ref" instance = {'name': 'fake_instance_name'} fake_vif_rec = {'uuid': fake_vif['uuid'], 'MAC': fake_vif['address'], 'network': 'fake_network', 'other_config': { 'neutron-port-id': 'fake-neutron-port-id'} } mock_VIF_get_record = self.mock_patch_object( self._session.VIF, 'get_record', return_val=fake_vif_rec) mock_network_get_bridge = self.mock_patch_object( self._session.network, 'get_bridge', return_val='fake_bridge_name') mock_network_get_uuid = self.mock_patch_object( self._session.network, 'get_uuid', return_val='fake_network_uuid') mock_device_exists.return_value = True self.ovs_driver.post_start_actions(instance, vif_ref) self.assertTrue(mock_VIF_get_record.called) self.assertTrue(mock_network_get_bridge.called) self.assertTrue(mock_network_get_uuid.called) self.assertTrue(mock_create_linux_bridge.called) self.assertFalse(mock_brctl_add_if.called) self.assertFalse(mock_ovs_add_port.called) @mock.patch.object(network_utils, 'find_network_with_name_label', return_value="exist_network_ref") def test_create_vif_interim_network_exist(self, mock_find_network_with_name_label): mock_network_create = self.mock_patch_object( self._session.network, 'create', return_val='new_network_ref') network_ref = self.ovs_driver.create_vif_interim_network(fake_vif) self.assertFalse(mock_network_create.called) self.assertEqual(network_ref, 'exist_network_ref') @mock.patch.object(network_utils, 'find_network_with_name_label', return_value=None) def test_create_vif_interim_network_new(self, mock_find_network_with_name_label): mock_network_create = self.mock_patch_object( self._session.network, 'create', return_val='new_network_ref') network_ref = self.ovs_driver.create_vif_interim_network(fake_vif) self.assertTrue(mock_network_create.called) self.assertEqual(network_ref, 'new_network_ref') @mock.patch.object(vif.XenAPIOpenVswitchDriver, 'post_start_actions') @mock.patch.object(vm_utils, 'get_power_state') def test_hot_plug_power_on(self, mock_get_power_state, mock_post_start_actions): vif_ref = "fake_vif_ref" vif = "fake_vif" instance = "fake_instance" vm_ref = "fake_vm_ref" mock_get_power_state.return_value = power_state.RUNNING mock_VIF_plug = self.mock_patch_object( self._session.VIF, 'plug', return_val=None) self.ovs_driver.hot_plug(vif, instance, vm_ref, vif_ref) mock_VIF_plug.assert_called_once_with(vif_ref) mock_post_start_actions.assert_called_once_with(instance, vif_ref) mock_get_power_state.assert_called_once_with(self._session, vm_ref) @mock.patch.object(vm_utils, 'get_power_state') def test_hot_plug_power_off(self, mock_get_power_state): vif_ref = "fake_vif_ref" vif = "fake_vif" instance = "fake_instance" vm_ref = "fake_vm_ref" mock_get_power_state.return_value = power_state.SHUTDOWN mock_VIF_plug = self.mock_patch_object( self._session.VIF, 'plug', return_val=None) self.ovs_driver.hot_plug(vif, instance, vm_ref, vif_ref) mock_VIF_plug.assert_not_called() mock_get_power_state.assert_called_once_with(self._session, vm_ref) @mock.patch.object(vm_utils, 'get_power_state') def test_hot_unplug_power_on(self, mock_get_power_state): vm_ref = 'fake_vm_ref' vif_ref = 'fake_vif_ref' instance = 'fake_instance' mock_get_power_state.return_value = power_state.RUNNING mock_VIF_unplug = self.mock_patch_object( 
self._session.VIF, 'unplug', return_val=None) self.ovs_driver.hot_unplug(fake_vif, instance, vm_ref, vif_ref) mock_VIF_unplug.assert_called_once_with(vif_ref) mock_get_power_state.assert_called_once_with(self._session, vm_ref) @mock.patch.object(vm_utils, 'get_power_state') def test_hot_unplug_power_off(self, mock_get_power_state): vm_ref = 'fake_vm_ref' vif_ref = 'fake_vif_ref' instance = 'fake_instance' mock_get_power_state.return_value = power_state.SHUTDOWN mock_VIF_unplug = self.mock_patch_object( self._session.VIF, 'unplug', return_val=None) self.ovs_driver.hot_unplug(fake_vif, instance, vm_ref, vif_ref) mock_VIF_unplug.assert_not_called() mock_get_power_state.assert_called_once_with(self._session, vm_ref) nova-17.0.1/nova/tests/unit/virt/xenapi/image/0000775000175000017500000000000013250073472021206 5ustar zuulzuul00000000000000nova-17.0.1/nova/tests/unit/virt/xenapi/image/__init__.py0000666000175000017500000000000013250073126023303 0ustar zuulzuul00000000000000nova-17.0.1/nova/tests/unit/virt/xenapi/image/test_utils.py0000666000175000017500000002223613250073126023762 0ustar zuulzuul00000000000000# Copyright 2013 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import tarfile import mock from nova import test from nova.virt.xenapi.image import utils @mock.patch.object(utils, 'IMAGE_API') class GlanceImageTestCase(test.NoDBTestCase): def _get_image(self): return utils.GlanceImage(mock.sentinel.context, mock.sentinel.image_ref) def test_meta(self, mocked): mocked.get.return_value = mock.sentinel.meta image = self._get_image() self.assertEqual(mock.sentinel.meta, image.meta) mocked.get.assert_called_once_with(mock.sentinel.context, mock.sentinel.image_ref) def test_download_to(self, mocked): mocked.download.return_value = None image = self._get_image() result = image.download_to(mock.sentinel.fobj) self.assertIsNone(result) mocked.download.assert_called_once_with(mock.sentinel.context, mock.sentinel.image_ref, mock.sentinel.fobj) def test_is_raw_tgz_empty_meta(self, mocked): mocked.get.return_value = {} image = self._get_image() self.assertFalse(image.is_raw_tgz()) def test_is_raw_tgz_for_raw_tgz(self, mocked): mocked.get.return_value = { 'disk_format': 'raw', 'container_format': 'tgz' } image = self._get_image() self.assertTrue(image.is_raw_tgz()) def test_data(self, mocked): mocked.download.return_value = mock.sentinel.image image = self._get_image() self.assertEqual(mock.sentinel.image, image.data()) class RawImageTestCase(test.NoDBTestCase): @mock.patch.object(utils, 'GlanceImage', spec_set=True, autospec=True) def test_get_size(self, mock_glance_image): mock_glance_image.meta = {'size': '123'} raw_image = utils.RawImage(mock_glance_image) self.assertEqual(123, raw_image.get_size()) @mock.patch.object(utils, 'GlanceImage', spec_set=True, autospec=True) def test_stream_to(self, mock_glance_image): mock_glance_image.download_to.return_value = 'result' raw_image = utils.RawImage(mock_glance_image) self.assertEqual('result', raw_image.stream_to('file')) 
mock_glance_image.download_to.assert_called_once_with('file') class TestIterableBasedFile(test.NoDBTestCase): def test_constructor(self): class FakeIterable(object): def __iter__(_self): return 'iterator' the_file = utils.IterableToFileAdapter(FakeIterable()) self.assertEqual('iterator', the_file.iterator) def test_read_one_character(self): the_file = utils.IterableToFileAdapter([ 'chunk1', 'chunk2' ]) self.assertEqual('c', the_file.read(1)) def test_read_stores_remaining_characters(self): the_file = utils.IterableToFileAdapter([ 'chunk1', 'chunk2' ]) the_file.read(1) self.assertEqual('hunk1', the_file.remaining_data) def test_read_remaining_characters(self): the_file = utils.IterableToFileAdapter([ 'chunk1', 'chunk2' ]) self.assertEqual('c', the_file.read(1)) self.assertEqual('h', the_file.read(1)) def test_read_reached_end_of_file(self): the_file = utils.IterableToFileAdapter([ 'chunk1', 'chunk2' ]) self.assertEqual('chunk1', the_file.read(100)) self.assertEqual('chunk2', the_file.read(100)) self.assertEqual('', the_file.read(100)) def test_empty_chunks(self): the_file = utils.IterableToFileAdapter([ '', '', 'chunk2' ]) self.assertEqual('chunk2', the_file.read(100)) class RawTGZTestCase(test.NoDBTestCase): @mock.patch.object(utils.RawTGZImage, '_as_file', return_value='the_file') @mock.patch.object(utils.tarfile, 'open', return_value='tf') def test_as_tarfile(self, mock_open, mock_as_file): image = utils.RawTGZImage(None) result = image._as_tarfile() self.assertEqual('tf', result) mock_as_file.assert_called_once_with() mock_open.assert_called_once_with(mode='r|gz', fileobj='the_file') @mock.patch.object(utils, 'GlanceImage', spec_set=True, autospec=True) @mock.patch.object(utils, 'IterableToFileAdapter', return_value='data-as-file') def test_as_file(self, mock_adapter, mock_glance_image): mock_glance_image.data.return_value = 'iterable-data' image = utils.RawTGZImage(mock_glance_image) result = image._as_file() self.assertEqual('data-as-file', result) mock_glance_image.data.assert_called_once_with() mock_adapter.assert_called_once_with('iterable-data') @mock.patch.object(tarfile, 'TarFile', spec_set=True, autospec=True) @mock.patch.object(tarfile, 'TarInfo', autospec=True) @mock.patch.object(utils.RawTGZImage, '_as_tarfile') def test_get_size(self, mock_as_tar, mock_tar_info, mock_tar_file): mock_tar_file.next.return_value = mock_tar_info mock_tar_info.size = 124 mock_as_tar.return_value = mock_tar_file image = utils.RawTGZImage(None) result = image.get_size() self.assertEqual(124, result) self.assertEqual(image._tar_info, mock_tar_info) self.assertEqual(image._tar_file, mock_tar_file) mock_as_tar.assert_called_once_with() mock_tar_file.next.assert_called_once_with() @mock.patch.object(tarfile, 'TarFile', spec_set=True, autospec=True) @mock.patch.object(tarfile, 'TarInfo', autospec=True) @mock.patch.object(utils.RawTGZImage, '_as_tarfile') def test_get_size_called_twice(self, mock_as_tar, mock_tar_info, mock_tar_file): mock_tar_file.next.return_value = mock_tar_info mock_tar_info.size = 124 mock_as_tar.return_value = mock_tar_file image = utils.RawTGZImage(None) image.get_size() result = image.get_size() self.assertEqual(124, result) self.assertEqual(image._tar_info, mock_tar_info) self.assertEqual(image._tar_file, mock_tar_file) mock_as_tar.assert_called_once_with() mock_tar_file.next.assert_called_once_with() @mock.patch.object(tarfile, 'TarFile', spec_set=True, autospec=True) @mock.patch.object(tarfile, 'TarInfo', spec_set=True, autospec=True) @mock.patch.object(utils.RawTGZImage, 
'_as_tarfile') @mock.patch.object(utils.shutil, 'copyfileobj') def test_stream_to_without_size_retrieved(self, mock_copyfile, mock_as_tar, mock_tar_info, mock_tar_file): target_file = mock.create_autospec(open) source_file = mock.create_autospec(open) mock_tar_file.next.return_value = mock_tar_info mock_tar_file.extractfile.return_value = source_file mock_as_tar.return_value = mock_tar_file image = utils.RawTGZImage(None) image._image_service_and_image_id = ('service', 'id') image.stream_to(target_file) mock_as_tar.assert_called_once_with() mock_tar_file.next.assert_called_once_with() mock_tar_file.extractfile.assert_called_once_with(mock_tar_info) mock_copyfile.assert_called_once_with( source_file, target_file) mock_tar_file.close.assert_called_once_with() @mock.patch.object(tarfile, 'TarFile', spec_set=True, autospec=True) @mock.patch.object(tarfile, 'TarInfo', autospec=True) @mock.patch.object(utils.RawTGZImage, '_as_tarfile') @mock.patch.object(utils.shutil, 'copyfileobj') def test_stream_to_with_size_retrieved(self, mock_copyfile, mock_as_tar, mock_tar_info, mock_tar_file): target_file = mock.create_autospec(open) source_file = mock.create_autospec(open) mock_tar_info.size = 124 mock_tar_file.next.return_value = mock_tar_info mock_tar_file.extractfile.return_value = source_file mock_as_tar.return_value = mock_tar_file image = utils.RawTGZImage(None) image._image_service_and_image_id = ('service', 'id') image.get_size() image.stream_to(target_file) mock_as_tar.assert_called_once_with() mock_tar_file.next.assert_called_once_with() mock_tar_file.extractfile.assert_called_once_with(mock_tar_info) mock_copyfile.assert_called_once_with( source_file, target_file) mock_tar_file.close.assert_called_once_with() nova-17.0.1/nova/tests/unit/virt/xenapi/image/test_vdi_through_dev.py0000666000175000017500000001540713250073126026004 0ustar zuulzuul00000000000000# Copyright 2013 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
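# Editor's note (a sketch, not part of the original module): these tests
# stand in for real context managers (e.g. vm_utils.vdi_attached) with the
# trivial fake_context() generator defined below, so the code under test
# receives a canned value from its "with" block.  Standalone illustration:
#
#     import contextlib
#
#     @contextlib.contextmanager
#     def fake_context(result=None):
#         yield result
#
#     with fake_context('dev') as dev:
#         assert dev == 'dev'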
import contextlib import tarfile import eventlet from os_xenapi.client import session as xenapi_session import six from nova.image import glance from nova import test from nova.virt.xenapi.image import vdi_through_dev @contextlib.contextmanager def fake_context(result=None): yield result class TestDelegatingToCommand(test.NoDBTestCase): def test_upload_image_is_delegated_to_command(self): command = self.mox.CreateMock(vdi_through_dev.UploadToGlanceAsRawTgz) self.mox.StubOutWithMock(vdi_through_dev, 'UploadToGlanceAsRawTgz') vdi_through_dev.UploadToGlanceAsRawTgz( 'ctx', 'session', 'instance', 'image_id', 'vdis').AndReturn( command) command.upload_image().AndReturn('result') self.mox.ReplayAll() store = vdi_through_dev.VdiThroughDevStore() result = store.upload_image( 'ctx', 'session', 'instance', 'image_id', 'vdis') self.assertEqual('result', result) class TestUploadToGlanceAsRawTgz(test.NoDBTestCase): def test_upload_image(self): store = vdi_through_dev.UploadToGlanceAsRawTgz( 'context', 'session', 'instance', 'id', ['vdi0', 'vdi1']) self.mox.StubOutWithMock(store, '_perform_upload') self.mox.StubOutWithMock(store, '_get_vdi_ref') self.mox.StubOutWithMock(vdi_through_dev, 'glance') self.mox.StubOutWithMock(vdi_through_dev, 'vm_utils') self.mox.StubOutWithMock(vdi_through_dev, 'utils') store._get_vdi_ref().AndReturn('vdi_ref') vdi_through_dev.vm_utils.vdi_attached( 'session', 'vdi_ref', read_only=True).AndReturn( fake_context('dev')) vdi_through_dev.utils.make_dev_path('dev').AndReturn('devpath') vdi_through_dev.utils.temporary_chown('devpath').AndReturn( fake_context()) store._perform_upload('devpath') self.mox.ReplayAll() store.upload_image() def test__perform_upload(self): producer = self.mox.CreateMock(vdi_through_dev.TarGzProducer) consumer = self.mox.CreateMock(glance.UpdateGlanceImage) pool = self.mox.CreateMock(eventlet.GreenPool) store = vdi_through_dev.UploadToGlanceAsRawTgz( 'context', 'session', 'instance', 'id', ['vdi0', 'vdi1']) self.mox.StubOutWithMock(store, '_create_pipe') self.mox.StubOutWithMock(store, '_get_virtual_size') self.mox.StubOutWithMock(producer, 'get_metadata') self.mox.StubOutWithMock(vdi_through_dev, 'TarGzProducer') self.mox.StubOutWithMock(glance, 'UpdateGlanceImage') self.mox.StubOutWithMock(vdi_through_dev, 'eventlet') producer.get_metadata().AndReturn('metadata') store._get_virtual_size().AndReturn('324') store._create_pipe().AndReturn(('readfile', 'writefile')) vdi_through_dev.TarGzProducer( 'devpath', 'writefile', '324', 'disk.raw').AndReturn( producer) glance.UpdateGlanceImage('context', 'id', 'metadata', 'readfile').AndReturn(consumer) vdi_through_dev.eventlet.GreenPool().AndReturn(pool) pool.spawn(producer.start) pool.spawn(consumer.start) pool.waitall() self.mox.ReplayAll() store._perform_upload('devpath') def test__get_vdi_ref(self): session = self.mox.CreateMock(xenapi_session.XenAPISession) store = vdi_through_dev.UploadToGlanceAsRawTgz( 'context', session, 'instance', 'id', ['vdi0', 'vdi1']) session.call_xenapi('VDI.get_by_uuid', 'vdi0').AndReturn('vdi_ref') self.mox.ReplayAll() self.assertEqual('vdi_ref', store._get_vdi_ref()) def test__get_virtual_size(self): session = self.mox.CreateMock(xenapi_session.XenAPISession) store = vdi_through_dev.UploadToGlanceAsRawTgz( 'context', session, 'instance', 'id', ['vdi0', 'vdi1']) self.mox.StubOutWithMock(store, '_get_vdi_ref') store._get_vdi_ref().AndReturn('vdi_ref') session.call_xenapi('VDI.get_virtual_size', 'vdi_ref') self.mox.ReplayAll() store._get_virtual_size() def test__create_pipe(self): 
        store = vdi_through_dev.UploadToGlanceAsRawTgz(
            'context', 'session', 'instance', 'id', ['vdi0', 'vdi1'])
        self.mox.StubOutWithMock(vdi_through_dev, 'os')
        self.mox.StubOutWithMock(vdi_through_dev, 'greenio')
        vdi_through_dev.os.pipe().AndReturn(('rpipe', 'wpipe'))
        vdi_through_dev.greenio.GreenPipe('rpipe', 'rb', 0).AndReturn('rfile')
        vdi_through_dev.greenio.GreenPipe('wpipe', 'wb', 0).AndReturn('wfile')
        self.mox.ReplayAll()

        result = store._create_pipe()

        self.assertEqual(('rfile', 'wfile'), result)


class TestTarGzProducer(test.NoDBTestCase):
    def test_constructor(self):
        producer = vdi_through_dev.TarGzProducer('devpath', 'writefile',
                                                 '100', 'fname')

        self.assertEqual('devpath', producer.fpath)
        self.assertEqual('writefile', producer.output)
        self.assertEqual('100', producer.size)
        self.assertEqual('fname', producer.fname)

    def test_start(self):
        outf = six.StringIO()
        producer = vdi_through_dev.TarGzProducer('fpath', outf,
                                                 '100', 'fname')

        tfile = self.mox.CreateMock(tarfile.TarFile)
        tinfo = self.mox.CreateMock(tarfile.TarInfo)

        inf = self.mox.CreateMock(open)

        self.mox.StubOutWithMock(vdi_through_dev, 'tarfile')
        self.mox.StubOutWithMock(producer, '_open_file')

        vdi_through_dev.tarfile.TarInfo(name='fname').AndReturn(tinfo)
        vdi_through_dev.tarfile.open(fileobj=outf, mode='w|gz').AndReturn(
            fake_context(tfile))
        producer._open_file('fpath', 'rb').AndReturn(fake_context(inf))
        tfile.addfile(tinfo, fileobj=inf)
        outf.close()

        self.mox.ReplayAll()

        producer.start()

        self.assertEqual(100, tinfo.size)

    def test_get_metadata(self):
        producer = vdi_through_dev.TarGzProducer('devpath', 'writefile',
                                                 '100', 'fname')

        self.assertEqual({
            'disk_format': 'raw',
            'container_format': 'tgz'},
            producer.get_metadata())
nova-17.0.1/nova/tests/unit/virt/xenapi/image/test_glance.py0000666000175000017500000003732113250073126024054 0ustar zuulzuul00000000000000# Copyright 2013 OpenStack Foundation
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
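# Editor's note (illustrative sketch, not part of the original module): the
# retry tests below feed call_plugin_serialized a side_effect *list*, which
# mock consumes one element per call -- an exception instance is raised,
# anything else is returned.  Minimal standalone example:
#
#     import mock
#
#     call = mock.Mock(side_effect=[RuntimeError('retryable'), 'success'])
#     try:
#         call()                    # first call raises
#     except RuntimeError:
#         pass
#     assert call() == 'success'    # second call succeeds
#     assert call.call_count == 2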
import random import time import mock from os_xenapi.client import exception as xenapi_exception from os_xenapi.client import host_glance from os_xenapi.client import XenAPI from nova.compute import utils as compute_utils from nova import context from nova import exception from nova.image import glance as common_glance from nova.tests.unit.virt.xenapi import stubs from nova import utils from nova.virt.xenapi import driver as xenapi_conn from nova.virt.xenapi import fake from nova.virt.xenapi.image import glance from nova.virt.xenapi import vm_utils class TestGlanceStore(stubs.XenAPITestBaseNoDB): def setUp(self): super(TestGlanceStore, self).setUp() self.store = glance.GlanceStore() self.flags(api_servers=['http://localhost:9292'], group='glance') self.flags(connection_url='http://localhost', connection_password='test_pass', group='xenserver') self.context = context.RequestContext( 'user', 'project', auth_token='foobar') fake.reset() stubs.stubout_session(self.stubs, fake.SessionBase) driver = xenapi_conn.XenAPIDriver(False) self.session = driver._session self.stub_out('nova.virt.xenapi.vm_utils.get_sr_path', lambda *a, **kw: '/fake/sr/path') self.instance = {'uuid': 'blah', 'system_metadata': [], 'auto_disk_config': True, 'os_type': 'default', 'xenapi_use_agent': 'true'} def _get_params(self): return {'image_id': 'fake_image_uuid', 'endpoint': 'http://localhost:9292', 'sr_path': '/fake/sr/path', 'api_version': 2, 'extra_headers': {'X-Auth-Token': 'foobar', 'X-Roles': '', 'X-Tenant-Id': 'project', 'X-User-Id': 'user', 'X-Identity-Status': 'Confirmed'}} def _get_download_params(self): params = self._get_params() params['uuid_stack'] = ['uuid1'] return params @mock.patch.object(vm_utils, '_make_uuid_stack', return_value=['uuid1']) def test_download_image(self, mock_make_uuid_stack): params = self._get_download_params() with mock.patch.object(self.session, 'call_plugin_serialized' ) as mock_call_plugin: self.store.download_image(self.context, self.session, self.instance, 'fake_image_uuid') mock_call_plugin.assert_called_once_with('glance.py', 'download_vhd2', **params) mock_make_uuid_stack.assert_called_once_with() @mock.patch.object(vm_utils, '_make_uuid_stack', return_value=['uuid1']) @mock.patch.object(random, 'shuffle') @mock.patch.object(time, 'sleep') @mock.patch.object(compute_utils, 'add_instance_fault_from_exc') def test_download_image_retry(self, mock_fault, mock_sleep, mock_shuffle, mock_make_uuid_stack): params = self._get_download_params() self.flags(num_retries=2, group='glance') params.pop("endpoint") calls = [mock.call('glance.py', 'download_vhd2', endpoint='http://10.0.1.1:9292', **params), mock.call('glance.py', 'download_vhd2', endpoint='http://10.0.0.1:9293', **params)] glance_api_servers = ['http://10.0.1.1:9292', 'http://10.0.0.1:9293'] self.flags(api_servers=glance_api_servers, group='glance') with (mock.patch.object(self.session, 'call_plugin_serialized') ) as mock_call_plugin_serialized: error_details = ["", "", "RetryableError", ""] error = self.session.XenAPI.Failure(details=error_details) mock_call_plugin_serialized.side_effect = [error, "success"] self.store.download_image(self.context, self.session, self.instance, 'fake_image_uuid') mock_call_plugin_serialized.assert_has_calls(calls) self.assertEqual(1, mock_fault.call_count) def _get_upload_params(self, auto_disk_config=True, expected_os_type='default'): params = {} params['vdi_uuids'] = ['fake_vdi_uuid'] params['properties'] = {'auto_disk_config': auto_disk_config, 'os_type': expected_os_type} return params 
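    # Editor's note (sketch of the fixture above, not original code): with
    # the defaults, _get_upload_params() builds the exact **kwargs that the
    # tests below expect to be forwarded to host_glance.upload_vhd, i.e.:
    #
    #     {'vdi_uuids': ['fake_vdi_uuid'],
    #      'properties': {'auto_disk_config': True, 'os_type': 'default'}}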
@mock.patch.object(utils, 'get_auto_disk_config_from_instance') @mock.patch.object(common_glance, 'generate_identity_headers') @mock.patch.object(vm_utils, 'get_sr_path') @mock.patch.object(host_glance, 'upload_vhd') def test_upload_image(self, mock_upload, mock_sr_path, mock_extra_header, mock_disk_config): params = self._get_upload_params() mock_upload.return_value = 'fake_upload' mock_sr_path.return_value = 'fake_sr_path' mock_extra_header.return_value = 'fake_extra_header' mock_disk_config.return_value = 'true' self.store.upload_image(self.context, self.session, self.instance, 'fake_image_uuid', ['fake_vdi_uuid']) mock_sr_path.assert_called_once_with(self.session) mock_extra_header.assert_called_once_with(self.context) mock_upload.assert_called_once_with( self.session, 0, mock.ANY, mock.ANY, 'fake_image_uuid', 'fake_sr_path', 'fake_extra_header', **params) @mock.patch.object(utils, 'get_auto_disk_config_from_instance') @mock.patch.object(common_glance, 'generate_identity_headers') @mock.patch.object(vm_utils, 'get_sr_path') @mock.patch.object(host_glance, 'upload_vhd') def test_upload_image_None_os_type(self, mock_upload, mock_sr_path, mock_extra_header, mock_disk_config): self.instance['os_type'] = None mock_sr_path.return_value = 'fake_sr_path' mock_extra_header.return_value = 'fake_extra_header' mock_upload.return_value = 'fake_upload' mock_disk_config.return_value = 'true' params = self._get_upload_params(True, 'linux') self.store.upload_image(self.context, self.session, self.instance, 'fake_image_uuid', ['fake_vdi_uuid']) mock_sr_path.assert_called_once_with(self.session) mock_extra_header.assert_called_once_with(self.context) mock_upload.assert_called_once_with( self.session, 0, mock.ANY, mock.ANY, 'fake_image_uuid', 'fake_sr_path', 'fake_extra_header', **params) mock_disk_config.assert_called_once_with(self.instance) @mock.patch.object(utils, 'get_auto_disk_config_from_instance') @mock.patch.object(common_glance, 'generate_identity_headers') @mock.patch.object(vm_utils, 'get_sr_path') @mock.patch.object(host_glance, 'upload_vhd') def test_upload_image_no_os_type(self, mock_upload, mock_sr_path, mock_extra_header, mock_disk_config): mock_sr_path.return_value = 'fake_sr_path' mock_extra_header.return_value = 'fake_extra_header' mock_upload.return_value = 'fake_upload' del self.instance['os_type'] params = self._get_upload_params(True, 'linux') self.store.upload_image(self.context, self.session, self.instance, 'fake_image_uuid', ['fake_vdi_uuid']) mock_sr_path.assert_called_once_with(self.session) mock_extra_header.assert_called_once_with(self.context) mock_upload.assert_called_once_with( self.session, 0, mock.ANY, mock.ANY, 'fake_image_uuid', 'fake_sr_path', 'fake_extra_header', **params) mock_disk_config.assert_called_once_with(self.instance) @mock.patch.object(common_glance, 'generate_identity_headers') @mock.patch.object(vm_utils, 'get_sr_path') @mock.patch.object(host_glance, 'upload_vhd') def test_upload_image_auto_config_disk_disabled( self, mock_upload, mock_sr_path, mock_extra_header): mock_sr_path.return_value = 'fake_sr_path' mock_extra_header.return_value = 'fake_extra_header' mock_upload.return_value = 'fake_upload' sys_meta = [{"key": "image_auto_disk_config", "value": "Disabled"}] self.instance["system_metadata"] = sys_meta params = self._get_upload_params("disabled") self.store.upload_image(self.context, self.session, self.instance, 'fake_image_uuid', ['fake_vdi_uuid']) mock_sr_path.assert_called_once_with(self.session) 
mock_extra_header.assert_called_once_with(self.context) mock_upload.assert_called_once_with( self.session, 0, mock.ANY, mock.ANY, 'fake_image_uuid', 'fake_sr_path', 'fake_extra_header', **params) @mock.patch.object(common_glance, 'generate_identity_headers') @mock.patch.object(vm_utils, 'get_sr_path') @mock.patch.object(host_glance, 'upload_vhd') def test_upload_image_raises_exception(self, mock_upload, mock_sr_path, mock_extra_header): mock_sr_path.return_value = 'fake_sr_path' mock_extra_header.return_value = 'fake_extra_header' mock_upload.side_effect = RuntimeError params = self._get_upload_params() self.assertRaises(RuntimeError, self.store.upload_image, self.context, self.session, self.instance, 'fake_image_uuid', ['fake_vdi_uuid']) mock_sr_path.assert_called_once_with(self.session) mock_extra_header.assert_called_once_with(self.context) mock_upload.assert_called_once_with( self.session, 0, mock.ANY, mock.ANY, 'fake_image_uuid', 'fake_sr_path', 'fake_extra_header', **params) @mock.patch.object(time, 'sleep') @mock.patch.object(compute_utils, 'add_instance_fault_from_exc') def test_upload_image_retries_then_raises_exception(self, mock_add_inst, mock_time_sleep): self.flags(num_retries=2, group='glance') params = self._get_params() params.update(self._get_upload_params()) error_details = ["", "", "RetryableError", ""] error = XenAPI.Failure(details=error_details) with mock.patch.object(self.session, 'call_plugin_serialized', side_effect=error) as mock_call_plugin: self.assertRaises(exception.CouldNotUploadImage, self.store.upload_image, self.context, self.session, self.instance, 'fake_image_uuid', ['fake_vdi_uuid']) time_sleep_args = [mock.call(0.5), mock.call(1)] call_plugin_args = [ mock.call('glance.py', 'upload_vhd2', **params), mock.call('glance.py', 'upload_vhd2', **params), mock.call('glance.py', 'upload_vhd2', **params)] add_inst_args = [ mock.call(self.context, self.instance, error, (XenAPI.Failure, error, mock.ANY)), mock.call(self.context, self.instance, error, (XenAPI.Failure, error, mock.ANY)), mock.call(self.context, self.instance, error, (XenAPI.Failure, error, mock.ANY))] mock_time_sleep.assert_has_calls(time_sleep_args) mock_call_plugin.assert_has_calls(call_plugin_args) mock_add_inst.assert_has_calls(add_inst_args) @mock.patch.object(time, 'sleep') @mock.patch.object(compute_utils, 'add_instance_fault_from_exc') def test_upload_image_retries_on_signal_exception(self, mock_add_inst, mock_time_sleep): self.flags(num_retries=2, group='glance') params = self._get_params() params.update(self._get_upload_params()) error_details = ["", "task signaled", "", ""] error = XenAPI.Failure(details=error_details) # Note(johngarbutt) XenServer 6.1 and later has this error error_details_v61 = ["", "signal: SIGTERM", "", ""] error_v61 = self.session.XenAPI.Failure(details=error_details_v61) with mock.patch.object(self.session, 'call_plugin_serialized', side_effect=[error, error_v61, None] ) as mock_call_plugin: self.store.upload_image(self.context, self.session, self.instance, 'fake_image_uuid', ['fake_vdi_uuid']) time_sleep_args = [mock.call(0.5), mock.call(1)] call_plugin_args = [ mock.call('glance.py', 'upload_vhd2', **params), mock.call('glance.py', 'upload_vhd2', **params), mock.call('glance.py', 'upload_vhd2', **params)] add_inst_args = [ mock.call(self.context, self.instance, error, (XenAPI.Failure, error, mock.ANY)), mock.call(self.context, self.instance, error_v61, (XenAPI.Failure, error_v61, mock.ANY))] mock_time_sleep.assert_has_calls(time_sleep_args) 
mock_call_plugin.assert_has_calls(call_plugin_args) mock_add_inst.assert_has_calls(add_inst_args) @mock.patch.object(utils, 'get_auto_disk_config_from_instance') @mock.patch.object(common_glance, 'generate_identity_headers') @mock.patch.object(vm_utils, 'get_sr_path') @mock.patch.object(host_glance, 'upload_vhd') def test_upload_image_raises_exception_image_not_found(self, mock_upload, mock_sr_path, mock_extra_header, mock_disk_config): params = self._get_upload_params() mock_upload.return_value = 'fake_upload' mock_sr_path.return_value = 'fake_sr_path' mock_extra_header.return_value = 'fake_extra_header' mock_disk_config.return_value = 'true' image_id = 'fake_image_id' mock_upload.side_effect = xenapi_exception.PluginImageNotFound( image_id=image_id ) self.assertRaises(exception.ImageNotFound, self.store.upload_image, self.context, self.session, self.instance, 'fake_image_uuid', ['fake_vdi_uuid']) mock_sr_path.assert_called_once_with(self.session) mock_extra_header.assert_called_once_with(self.context) mock_upload.assert_called_once_with( self.session, 0, mock.ANY, mock.ANY, 'fake_image_uuid', 'fake_sr_path', 'fake_extra_header', **params) nova-17.0.1/nova/tests/unit/virt/xenapi/stubs.py0000666000175000017500000003454113250073126021643 0ustar zuulzuul00000000000000# Copyright (c) 2010 Citrix Systems, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
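# Editor's note (a sketch, not part of the original module): the helpers in
# this module monkeypatch attributes with stubs.Set(target, name,
# replacement), which the test base classes undo at cleanup.  The same
# effect with plain mock, shown on an attribute this module also stubs:
#
#     import mock
#
#     patcher = mock.patch.object(vm_utils, '_stream_disk',
#                                 lambda *args, **kwargs: None)
#     patcher.start()
#     try:
#         pass  # run code that would otherwise touch the hypervisor
#     finally:
#         patcher.stop()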
"""Stubouts, mocks and fixtures for the test suite.""" import pickle import random import sys import fixtures import mock from os_xenapi.client import session from os_xenapi.client import XenAPI from oslo_serialization import jsonutils from nova import test import nova.tests.unit.image.fake from nova.virt.xenapi import fake from nova.virt.xenapi import vm_utils from nova.virt.xenapi import vmops def stubout_firewall_driver(stubs, conn): def fake_none(self, *args): return _vmops = conn._vmops stubs.Set(_vmops.firewall_driver, 'prepare_instance_filter', fake_none) stubs.Set(_vmops.firewall_driver, 'instance_filter_exists', fake_none) def stubout_instance_snapshot(stubs): def fake_fetch_image(context, session, instance, name_label, image, type): return {'root': dict(uuid=_make_fake_vdi(), file=None), 'kernel': dict(uuid=_make_fake_vdi(), file=None), 'ramdisk': dict(uuid=_make_fake_vdi(), file=None)} stubs.Set(vm_utils, '_fetch_image', fake_fetch_image) def fake_wait_for_vhd_coalesce(*args): # TODO(sirp): Should we actually fake out the data here return "fakeparent", "fakebase" stubs.Set(vm_utils, '_wait_for_vhd_coalesce', fake_wait_for_vhd_coalesce) def stubout_session(stubs, cls, product_version=(5, 6, 2), product_brand='XenServer', platform_version=(1, 9, 0), **opt_args): """Stubs out methods from XenAPISession.""" stubs.Set(session.XenAPISession, '_create_session', lambda s, url: cls(url, **opt_args)) stubs.Set(session.XenAPISession, '_get_product_version_and_brand', lambda s: (product_version, product_brand)) stubs.Set(session.XenAPISession, '_get_platform_version', lambda s: platform_version) def stubout_get_this_vm_uuid(stubs): def f(session): vms = [rec['uuid'] for rec in fake.get_all_records('VM').values() if rec['is_control_domain']] return vms[0] stubs.Set(vm_utils, 'get_this_vm_uuid', f) def stubout_image_service_download(stubs): def fake_download(*args, **kwargs): pass stubs.Set(nova.tests.unit.image.fake._FakeImageService, 'download', fake_download) def stubout_stream_disk(stubs): def fake_stream_disk(*args, **kwargs): pass stubs.Set(vm_utils, '_stream_disk', fake_stream_disk) def stubout_determine_is_pv_objectstore(stubs): """Assumes VMs stu have PV kernels.""" def f(*args): return False stubs.Set(vm_utils, '_determine_is_pv_objectstore', f) def stubout_is_snapshot(stubs): """Always returns true xenapi fake driver does not create vmrefs for snapshots. 
""" def f(*args): return True stubs.Set(vm_utils, 'is_snapshot', f) def stubout_lookup_image(stubs): """Simulates a failure in lookup image.""" def f(_1, _2, _3, _4): raise Exception("Test Exception raised by fake lookup_image") stubs.Set(vm_utils, 'lookup_image', f) def stubout_fetch_disk_image(stubs, raise_failure=False): """Simulates a failure in fetch image_glance_disk.""" def _fake_fetch_disk_image(context, session, instance, name_label, image, image_type): if raise_failure: raise XenAPI.Failure("Test Exception raised by " "fake fetch_image_glance_disk") elif image_type == vm_utils.ImageType.KERNEL: filename = "kernel" elif image_type == vm_utils.ImageType.RAMDISK: filename = "ramdisk" else: filename = "unknown" vdi_type = vm_utils.ImageType.to_string(image_type) return {vdi_type: dict(uuid=None, file=filename)} stubs.Set(vm_utils, '_fetch_disk_image', _fake_fetch_disk_image) def stubout_create_vm(stubs): """Simulates a failure in create_vm.""" def f(*args): raise XenAPI.Failure("Test Exception raised by fake create_vm") stubs.Set(vm_utils, 'create_vm', f) def stubout_attach_disks(stubs): """Simulates a failure in _attach_disks.""" def f(*args): raise XenAPI.Failure("Test Exception raised by fake _attach_disks") stubs.Set(vmops.VMOps, '_attach_disks', f) def _make_fake_vdi(): sr_ref = fake.get_all('SR')[0] vdi_ref = fake.create_vdi('', sr_ref) vdi_rec = fake.get_record('VDI', vdi_ref) return vdi_rec['uuid'] class FakeSessionForVMTests(fake.SessionBase): """Stubs out a XenAPISession for VM tests.""" _fake_iptables_save_output = ("# Generated by iptables-save v1.4.10 on " "Sun Nov 6 22:49:02 2011\n" "*filter\n" ":INPUT ACCEPT [0:0]\n" ":FORWARD ACCEPT [0:0]\n" ":OUTPUT ACCEPT [0:0]\n" "COMMIT\n" "# Completed on Sun Nov 6 22:49:02 2011\n") def host_call_plugin(self, _1, _2, plugin, method, _5): plugin = plugin.rstrip('.py') if plugin == 'glance' and method in ('download_vhd2'): root_uuid = _make_fake_vdi() return pickle.dumps(dict(root=dict(uuid=root_uuid))) elif (plugin, method) == ('xenhost', 'iptables_config'): return fake.as_json(out=self._fake_iptables_save_output, err='') else: return (super(FakeSessionForVMTests, self). 
host_call_plugin(_1, _2, plugin, method, _5)) def VM_start(self, _1, ref, _2, _3): vm = fake.get_record('VM', ref) if vm['power_state'] != 'Halted': raise XenAPI.Failure(['VM_BAD_POWER_STATE', ref, 'Halted', vm['power_state']]) vm['power_state'] = 'Running' vm['is_a_template'] = False vm['is_control_domain'] = False vm['domid'] = random.randrange(1, 1 << 16) return vm def VM_start_on(self, _1, vm_ref, host_ref, _2, _3): vm_rec = self.VM_start(_1, vm_ref, _2, _3) vm_rec['resident_on'] = host_ref def VDI_snapshot(self, session_ref, vm_ref, _1): sr_ref = "fakesr" return fake.create_vdi('fakelabel', sr_ref, read_only=True) def SR_scan(self, session_ref, sr_ref): pass class FakeSessionForFirewallTests(FakeSessionForVMTests): """Stubs out a XenApi Session for doing IPTable Firewall tests.""" def __init__(self, uri, test_case=None): super(FakeSessionForFirewallTests, self).__init__(uri) if hasattr(test_case, '_in_rules'): self._in_rules = test_case._in_rules if hasattr(test_case, '_in6_filter_rules'): self._in6_filter_rules = test_case._in6_filter_rules self._test_case = test_case def host_call_plugin(self, _1, _2, plugin, method, args): """Mock method for host_call_plugin to be used in unit tests for the dom0 iptables Firewall drivers for XenAPI """ plugin = plugin.rstrip('.py') if plugin == 'xenhost' and method == 'iptables_config': # The command to execute is a json-encoded list cmd_args = args.get('cmd_args', None) cmd = jsonutils.loads(cmd_args) if not cmd: ret_str = '' else: output = '' process_input = args.get('process_input', None) if cmd == ['ip6tables-save', '-c']: output = '\n'.join(self._in6_filter_rules) if cmd == ['iptables-save', '-c']: output = '\n'.join(self._in_rules) if cmd == ['iptables-restore', '-c', ]: lines = process_input.split('\n') if '*filter' in lines: if self._test_case is not None: self._test_case._out_rules = lines output = '\n'.join(lines) if cmd == ['ip6tables-restore', '-c', ]: lines = process_input.split('\n') if '*filter' in lines: output = '\n'.join(lines) ret_str = fake.as_json(out=output, err='') return ret_str else: return (super(FakeSessionForVMTests, self). 
host_call_plugin(_1, _2, plugin, method, args)) def stub_out_vm_methods(stubs): def fake_acquire_bootlock(self, vm): pass def fake_release_bootlock(self, vm): pass def fake_generate_ephemeral(*args): pass def fake_wait_for_device(session, dev, dom0, max_seconds): pass stubs.Set(vmops.VMOps, "_acquire_bootlock", fake_acquire_bootlock) stubs.Set(vmops.VMOps, "_release_bootlock", fake_release_bootlock) stubs.Set(vm_utils, 'generate_ephemeral', fake_generate_ephemeral) stubs.Set(vm_utils, '_wait_for_device', fake_wait_for_device) class ReplaceModule(fixtures.Fixture): """Replace a module with a fake module.""" def __init__(self, name, new_value): self.name = name self.new_value = new_value def _restore(self, old_value): sys.modules[self.name] = old_value def setUp(self): super(ReplaceModule, self).setUp() old_value = sys.modules.get(self.name) sys.modules[self.name] = self.new_value self.addCleanup(self._restore, old_value) class FakeSessionForVolumeTests(fake.SessionBase): """Stubs out a XenAPISession for Volume tests.""" def VDI_introduce(self, _1, uuid, _2, _3, _4, _5, _6, _7, _8, _9, _10, _11): valid_vdi = False refs = fake.get_all('VDI') for ref in refs: rec = fake.get_record('VDI', ref) if rec['uuid'] == uuid: valid_vdi = True if not valid_vdi: raise XenAPI.Failure([['INVALID_VDI', 'session', self._session]]) class FakeSessionForVolumeFailedTests(FakeSessionForVolumeTests): """Stubs out a XenAPISession for Volume tests: it injects failures.""" def VDI_introduce(self, _1, uuid, _2, _3, _4, _5, _6, _7, _8, _9, _10, _11): # This is for testing failure raise XenAPI.Failure([['INVALID_VDI', 'session', self._session]]) def PBD_unplug(self, _1, ref): rec = fake.get_record('PBD', ref) rec['currently-attached'] = False def SR_forget(self, _1, ref): pass def stub_out_migration_methods(stubs): fakesr = fake.create_sr() def fake_import_all_migrated_disks(session, instance, import_root=True): vdi_ref = fake.create_vdi(instance['name'], fakesr) vdi_rec = fake.get_record('VDI', vdi_ref) vdi_rec['other_config']['nova_disk_type'] = 'root' return {"root": {'uuid': vdi_rec['uuid'], 'ref': vdi_ref}, "ephemerals": {}} def fake_wait_for_instance_to_start(self, *args): pass def fake_get_vdi(session, vm_ref, userdevice='0'): vdi_ref_parent = fake.create_vdi('derp-parent', fakesr) vdi_rec_parent = fake.get_record('VDI', vdi_ref_parent) vdi_ref = fake.create_vdi('derp', fakesr, sm_config={'vhd-parent': vdi_rec_parent['uuid']}) vdi_rec = session.call_xenapi("VDI.get_record", vdi_ref) return vdi_ref, vdi_rec def fake_sr(session, *args): return fakesr def fake_get_sr_path(*args): return "fake" def fake_destroy(*args, **kwargs): pass def fake_generate_ephemeral(*args): pass stubs.Set(vmops.VMOps, '_destroy', fake_destroy) stubs.Set(vmops.VMOps, '_wait_for_instance_to_start', fake_wait_for_instance_to_start) stubs.Set(vm_utils, 'import_all_migrated_disks', fake_import_all_migrated_disks) stubs.Set(vm_utils, 'scan_default_sr', fake_sr) stubs.Set(vm_utils, 'get_vdi_for_vm_safely', fake_get_vdi) stubs.Set(vm_utils, 'get_sr_path', fake_get_sr_path) stubs.Set(vm_utils, 'generate_ephemeral', fake_generate_ephemeral) class FakeSessionForFailedMigrateTests(FakeSessionForVMTests): def VM_assert_can_migrate(self, session, vmref, migrate_data, live, vdi_map, vif_map, options): raise XenAPI.Failure("XenAPI VM.assert_can_migrate failed") def host_migrate_receive(self, session, hostref, networkref, options): raise XenAPI.Failure("XenAPI host.migrate_receive failed") def VM_migrate_send(self, session, vmref, migrate_data, islive, 
vdi_map, vif_map, options): raise XenAPI.Failure("XenAPI VM.migrate_send failed") def get_fake_session(error=None): fake_session = mock.MagicMock() session.apply_session_helpers(fake_session) if error is not None: class FakeException(Exception): details = [error, "a", "b", "c"] fake_session.XenAPI.Failure = FakeException fake_session.call_xenapi.side_effect = FakeException return fake_session # FIXME(sirp): XenAPITestBase is deprecated, all tests should be converted # over to use XenAPITestBaseNoDB class XenAPITestBase(test.TestCase): def setUp(self): super(XenAPITestBase, self).setUp() # TODO(mriedem): The tests need to be fixed to work with the # XenAPIOpenVswitchDriver vif driver. self.flags(vif_driver='nova.virt.xenapi.vif.XenAPIBridgeDriver', group='xenserver') self.useFixture(ReplaceModule('XenAPI', fake)) fake.reset() class XenAPITestBaseNoDB(test.NoDBTestCase): def setUp(self): super(XenAPITestBaseNoDB, self).setUp() # TODO(mriedem): The tests need to be fixed to work with the # XenAPIOpenVswitchDriver vif driver. self.flags(vif_driver='nova.virt.xenapi.vif.XenAPIBridgeDriver', group='xenserver') self.useFixture(ReplaceModule('XenAPI', fake)) fake.reset() nova-17.0.1/nova/tests/unit/virt/xenapi/test_xenapi.py0000666000175000017500000053477713250073136023047 0ustar zuulzuul00000000000000# Copyright (c) 2010 Citrix Systems, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
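# Editor's note (illustrative, not part of the original module): this suite
# seeds the fake image service from the IMAGE_FIXTURES mapping defined
# below; each entry's 'image_meta' is registered under its uuid sentinel so
# tests can boot from a known image id.  The registration amounts to
# (mirroring set_image_fixtures() further down):
#
#     for image_id, fixture in IMAGE_FIXTURES.items():
#         meta = dict(fixture['image_meta'], id=image_id)
#         image_service.create(None, meta)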
"""Test suite for XenAPI.""" import ast import base64 import contextlib import copy import functools import os import re import mock from mox3 import mox from os_xenapi.client import host_management from os_xenapi.client import XenAPI from oslo_concurrency import lockutils from oslo_config import fixture as config_fixture from oslo_log import log as logging from oslo_serialization import jsonutils from oslo_utils import importutils from oslo_utils import uuidutils import testtools from nova.compute import api as compute_api from nova.compute import manager from nova.compute import power_state from nova.compute import task_states from nova.compute import utils as compute_utils from nova.compute import vm_states import nova.conf from nova import context from nova import crypto from nova import db from nova import exception from nova import objects from nova.objects import base from nova.objects import fields as obj_fields from nova import test from nova.tests import fixtures from nova.tests.unit.api.openstack import fakes from nova.tests.unit.db import fakes as db_fakes from nova.tests.unit import fake_diagnostics from nova.tests.unit import fake_flavor from nova.tests.unit import fake_instance from nova.tests.unit import fake_network from nova.tests.unit import fake_processutils import nova.tests.unit.image.fake as fake_image from nova.tests.unit import matchers from nova.tests.unit.objects import test_aggregate from nova.tests.unit.objects import test_diagnostics from nova.tests.unit import utils as test_utils from nova.tests.unit.virt.xenapi import stubs from nova.tests import uuidsentinel as uuids from nova.virt import fake from nova.virt.xenapi import agent from nova.virt.xenapi import driver as xenapi_conn from nova.virt.xenapi import fake as xenapi_fake from nova.virt.xenapi import host from nova.virt.xenapi.image import glance from nova.virt.xenapi import pool from nova.virt.xenapi import pool_states from nova.virt.xenapi import vm_utils from nova.virt.xenapi import vmops from nova.virt.xenapi import volume_utils LOG = logging.getLogger(__name__) CONF = nova.conf.CONF IMAGE_MACHINE = uuids.image_ref IMAGE_KERNEL = uuids.image_kernel_id IMAGE_RAMDISK = uuids.image_ramdisk_id IMAGE_RAW = uuids.image_raw IMAGE_VHD = uuids.image_vhd IMAGE_ISO = uuids.image_iso IMAGE_IPXE_ISO = uuids.image_ipxe_iso IMAGE_FROM_VOLUME = uuids.image_from_volume IMAGE_FIXTURES = { IMAGE_MACHINE: { 'image_meta': {'name': 'fakemachine', 'size': 0, 'disk_format': 'ami', 'container_format': 'ami', 'id': 'fake-image'}, }, IMAGE_KERNEL: { 'image_meta': {'name': 'fakekernel', 'size': 0, 'disk_format': 'aki', 'container_format': 'aki', 'id': 'fake-kernel'}, }, IMAGE_RAMDISK: { 'image_meta': {'name': 'fakeramdisk', 'size': 0, 'disk_format': 'ari', 'container_format': 'ari', 'id': 'fake-ramdisk'}, }, IMAGE_RAW: { 'image_meta': {'name': 'fakeraw', 'size': 0, 'disk_format': 'raw', 'container_format': 'bare', 'id': 'fake-image-raw'}, }, IMAGE_VHD: { 'image_meta': {'name': 'fakevhd', 'size': 0, 'disk_format': 'vhd', 'container_format': 'ovf', 'id': 'fake-image-vhd'}, }, IMAGE_ISO: { 'image_meta': {'name': 'fakeiso', 'size': 0, 'disk_format': 'iso', 'container_format': 'bare', 'id': 'fake-image-iso'}, }, IMAGE_IPXE_ISO: { 'image_meta': {'name': 'fake_ipxe_iso', 'size': 0, 'disk_format': 'iso', 'container_format': 'bare', 'id': 'fake-image-pxe', 'properties': {'ipxe_boot': 'true'}}, }, IMAGE_FROM_VOLUME: { 'image_meta': {'name': 'fake_ipxe_iso', 'id': 'fake-image-volume', 'properties': {'foo': 'bar'}}, }, } def 
get_session(): return xenapi_fake.SessionBase('http://localhost', 'root', 'test_pass') def set_image_fixtures(): image_service = fake_image.FakeImageService() image_service.images.clear() for image_id, image_meta in IMAGE_FIXTURES.items(): image_meta = image_meta['image_meta'] image_meta['id'] = image_id image_service.create(None, image_meta) def get_fake_device_info(): # FIXME: 'sr_uuid', 'introduce_sr_keys', sr_type and vdi_uuid # can be removed from the dict when LP bug #1087308 is fixed fake_vdi_ref = xenapi_fake.create_vdi('fake-vdi', None) fake_vdi_uuid = xenapi_fake.get_record('VDI', fake_vdi_ref)['uuid'] fake = {'block_device_mapping': [{'connection_info': {'driver_volume_type': 'iscsi', 'data': {'sr_uuid': 'falseSR', 'introduce_sr_keys': ['sr_type'], 'sr_type': 'iscsi', 'vdi_uuid': fake_vdi_uuid, 'target_discovered': False, 'target_iqn': 'foo_iqn:foo_volid', 'target_portal': 'localhost:3260', 'volume_id': 'foo_volid', 'target_lun': 1, 'auth_password': 'my-p@55w0rd', 'auth_username': 'johndoe', 'auth_method': u'CHAP'}, }, 'mount_device': 'vda', 'delete_on_termination': False}, ], 'root_device_name': '/dev/sda', 'ephemerals': [], 'swap': None, } return fake def stub_vm_utils_with_vdi_attached(function): """vm_utils.with_vdi_attached needs to be stubbed out because it calls down to the filesystem to attach a vdi. This provides a decorator to handle that. """ @functools.wraps(function) def decorated_function(self, *args, **kwargs): @contextlib.contextmanager def fake_vdi_attached(*args, **kwargs): fake_dev = 'fakedev' yield fake_dev def fake_image_download(*args, **kwargs): pass orig_vdi_attached = vm_utils.vdi_attached orig_image_download = fake_image._FakeImageService.download try: vm_utils.vdi_attached = fake_vdi_attached fake_image._FakeImageService.download = fake_image_download return function(self, *args, **kwargs) finally: fake_image._FakeImageService.download = orig_image_download vm_utils.vdi_attached = orig_vdi_attached return decorated_function def create_instance_with_system_metadata(context, instance_values): inst = objects.Instance(context=context, system_metadata={}) for k, v in instance_values.items(): setattr(inst, k, v) inst.flavor = objects.Flavor.get_by_id(context, instance_values['instance_type_id']) inst.old_flavor = None inst.new_flavor = None inst.create() inst.pci_devices = objects.PciDeviceList(objects=[]) return inst class XenAPIVolumeTestCase(stubs.XenAPITestBaseNoDB): """Unit tests for Volume operations.""" def setUp(self): super(XenAPIVolumeTestCase, self).setUp() self.fixture = self.useFixture(config_fixture.Config(lockutils.CONF)) self.fixture.config(disable_process_locking=True, group='oslo_concurrency') self.flags(firewall_driver='nova.virt.xenapi.firewall.' 'Dom0IptablesFirewallDriver') self.flags(connection_url='http://localhost', connection_password='test_pass', group='xenserver') self.instance = fake_instance.fake_db_instance(name='foo') @classmethod def _make_connection_info(cls): target_iqn = 'iqn.2010-10.org.openstack:volume-00000001' return {'driver_volume_type': 'iscsi', 'data': {'volume_id': 1, 'target_iqn': target_iqn, 'target_portal': '127.0.0.1:3260,fake', 'target_lun': None, 'auth_method': 'CHAP', 'auth_username': 'username', 'auth_password': 'password'}} def test_attach_volume(self): # This shows how to test Ops classes' methods. 
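        # A note on the pattern used throughout this suite: the fake XenAPI
        # layer (nova.virt.xenapi.fake) keeps an in-memory record database,
        # so the real Ops code runs unmodified against it and the test then
        # asserts on the records the fake accumulated via
        # xenapi_fake.get_all() / xenapi_fake.get_record().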
        stubs.stubout_session(self.stubs, stubs.FakeSessionForVolumeTests)
        conn = xenapi_conn.XenAPIDriver(fake.FakeVirtAPI(), False)
        vm = xenapi_fake.create_vm(self.instance['name'], 'Running')
        conn_info = self._make_connection_info()
        self.assertIsNone(
            conn.attach_volume(None, conn_info, self.instance, '/dev/sdc'))

        # check that the VM has a VBD attached to it
        # Get XenAPI record for VBD
        vbds = xenapi_fake.get_all('VBD')
        vbd = xenapi_fake.get_record('VBD', vbds[0])
        vm_ref = vbd['VM']
        self.assertEqual(vm_ref, vm)

    def test_attach_volume_raise_exception(self):
        # This shows how to test when exceptions are raised.
        stubs.stubout_session(self.stubs,
                              stubs.FakeSessionForVolumeFailedTests)
        conn = xenapi_conn.XenAPIDriver(fake.FakeVirtAPI(), False)
        xenapi_fake.create_vm(self.instance['name'], 'Running')
        self.assertRaises(exception.VolumeDriverNotFound,
                          conn.attach_volume,
                          None, {'driver_volume_type': 'nonexist'},
                          self.instance, '/dev/sdc')


# FIXME(sirp): convert this to use XenAPITestBaseNoDB
class XenAPIVMTestCase(stubs.XenAPITestBase,
                       test_diagnostics.DiagnosticsComparisonMixin):
    """Unit tests for VM operations."""
    def setUp(self):
        super(XenAPIVMTestCase, self).setUp()
        self.useFixture(test.SampleNetworks())
        self.network = importutils.import_object(CONF.network_manager)
        self.fixture = self.useFixture(config_fixture.Config(lockutils.CONF))
        self.fixture.config(disable_process_locking=True,
                            group='oslo_concurrency')
        self.flags(instance_name_template='%d',
                   firewall_driver='nova.virt.xenapi.firewall.'
                                   'Dom0IptablesFirewallDriver')
        self.flags(connection_url='http://localhost',
                   connection_password='test_pass',
                   group='xenserver')
        db_fakes.stub_out_db_instance_api(self)
        xenapi_fake.create_network('fake', 'fake_br1')
        stubs.stubout_session(self.stubs, stubs.FakeSessionForVMTests)
        stubs.stubout_get_this_vm_uuid(self.stubs)
        stubs.stub_out_vm_methods(self.stubs)
        fake_processutils.stub_out_processutils_execute(self)
        self.user_id = 'fake'
        self.project_id = fakes.FAKE_PROJECT_ID
        self.context = context.RequestContext(self.user_id, self.project_id)
        self.conn = xenapi_conn.XenAPIDriver(fake.FakeVirtAPI(), False)
        self.conn._session.is_local_connection = False

        fake_image.stub_out_image_service(self)
        set_image_fixtures()
        stubs.stubout_image_service_download(self.stubs)
        stubs.stubout_stream_disk(self.stubs)

        def fake_inject_instance_metadata(self, instance, vm):
            pass
        self.stubs.Set(vmops.VMOps, '_inject_instance_metadata',
                       fake_inject_instance_metadata)

        def fake_safe_copy_vdi(session, sr_ref, instance, vdi_to_copy_ref):
            name_label = "fakenamelabel"
            disk_type = "fakedisktype"
            virtual_size = 777
            return vm_utils.create_vdi(
                    session, sr_ref, instance, name_label, disk_type,
                    virtual_size)
        self.stubs.Set(vm_utils, '_safe_copy_vdi', fake_safe_copy_vdi)

        def fake_unpause_and_wait(self, vm_ref, instance, power_on):
            self._update_last_dom_id(vm_ref)
        self.stubs.Set(vmops.VMOps, '_unpause_and_wait',
                       fake_unpause_and_wait)

    def tearDown(self):
        fake_image.FakeImageService_reset()
        super(XenAPIVMTestCase, self).tearDown()

    def test_init_host(self):
        session = get_session()
        vm = vm_utils._get_this_vm_ref(session)
        # Local root disk
        vdi0 = xenapi_fake.create_vdi('compute', None)
        vbd0 = xenapi_fake.create_vbd(vm, vdi0)
        # Instance VDI
        vdi1 = xenapi_fake.create_vdi('instance-aaaa', None,
                other_config={'nova_instance_uuid': 'aaaa'})
        xenapi_fake.create_vbd(vm, vdi1)
        # Only looks like instance VDI
        vdi2 = xenapi_fake.create_vdi('instance-bbbb', None)
        vbd2 = xenapi_fake.create_vbd(vm, vdi2)

        self.conn.init_host(None)
self.assertEqual(set(xenapi_fake.get_all('VBD')), set([vbd0, vbd2])) @mock.patch.object(vm_utils, 'lookup', return_value=True) def test_instance_exists(self, mock_lookup): self.stubs.Set(objects.Instance, 'name', 'foo') instance = objects.Instance(uuid=uuids.instance) self.assertTrue(self.conn.instance_exists(instance)) mock_lookup.assert_called_once_with(mock.ANY, 'foo') @mock.patch.object(vm_utils, 'lookup', return_value=None) def test_instance_not_exists(self, mock_lookup): self.stubs.Set(objects.Instance, 'name', 'bar') instance = objects.Instance(uuid=uuids.instance) self.assertFalse(self.conn.instance_exists(instance)) mock_lookup.assert_called_once_with(mock.ANY, 'bar') def test_list_instances_0(self): instances = self.conn.list_instances() self.assertEqual(instances, []) def test_list_instance_uuids_0(self): instance_uuids = self.conn.list_instance_uuids() self.assertEqual(instance_uuids, []) def test_list_instance_uuids(self): uuids = [] for x in range(1, 4): instance = self._create_instance() uuids.append(instance['uuid']) instance_uuids = self.conn.list_instance_uuids() self.assertEqual(len(uuids), len(instance_uuids)) self.assertEqual(set(uuids), set(instance_uuids)) def test_get_rrd_server(self): self.flags(connection_url='myscheme://myaddress/', group='xenserver') server_info = vm_utils._get_rrd_server() self.assertEqual(server_info[0], 'myscheme') self.assertEqual(server_info[1], 'myaddress') expected_raw_diagnostics = { 'vbd_xvdb_write': '0.0', 'memory_target': '4294967296.0000', 'memory_internal_free': '1415564.0000', 'memory': '4294967296.0000', 'vbd_xvda_write': '0.0', 'cpu0': '0.0042', 'vif_0_tx': '287.4134', 'vbd_xvda_read': '0.0', 'vif_0_rx': '1816.0144', 'vif_2_rx': '0.0', 'vif_2_tx': '0.0', 'vbd_xvdb_read': '0.0', 'last_update': '1328795567', } def test_get_diagnostics(self): def fake_get_rrd(host, vm_uuid): path = os.path.dirname(os.path.realpath(__file__)) with open(os.path.join(path, 'vm_rrd.xml')) as f: return re.sub(r'\s', '', f.read()) self.stubs.Set(vm_utils, '_get_rrd', fake_get_rrd) expected = self.expected_raw_diagnostics instance = self._create_instance() actual = self.conn.get_diagnostics(instance) self.assertThat(actual, matchers.DictMatches(expected)) def test_get_instance_diagnostics(self): expected = fake_diagnostics.fake_diagnostics_obj( config_drive=False, state='running', driver='xenapi', cpu_details=[{'id': 0, 'utilisation': 11}, {'id': 1, 'utilisation': 22}, {'id': 2, 'utilisation': 33}, {'id': 3, 'utilisation': 44}], nic_details=[{'mac_address': 'DE:AD:BE:EF:00:01', 'rx_rate': 50, 'tx_rate': 100}], disk_details=[{'read_bytes': 50, 'write_bytes': 100}], memory_details={'maximum': 8192, 'used': 3072}) instance = self._create_instance(obj=True) actual = self.conn.get_instance_diagnostics(instance) self.assertDiagnosticsEqual(expected, actual) def _test_get_instance_diagnostics_failure(self, **kwargs): instance = self._create_instance(obj=True) with mock.patch.object(xenapi_fake.SessionBase, 'VM_query_data_source', **kwargs): actual = self.conn.get_instance_diagnostics(instance) expected = fake_diagnostics.fake_diagnostics_obj( config_drive=False, state='running', driver='xenapi', cpu_details=[{'id': 0}, {'id': 1}, {'id': 2}, {'id': 3}], nic_details=[{'mac_address': 'DE:AD:BE:EF:00:01'}], disk_details=[{}], memory_details={'maximum': None, 'used': None}) self.assertDiagnosticsEqual(expected, actual) def test_get_instance_diagnostics_xenapi_exception(self): self._test_get_instance_diagnostics_failure( side_effect=XenAPI.Failure('')) def 
test_get_instance_diagnostics_nan_value(self): self._test_get_instance_diagnostics_failure( return_value=float('NaN')) def test_get_vnc_console(self): instance = self._create_instance(obj=True) session = get_session() conn = xenapi_conn.XenAPIDriver(fake.FakeVirtAPI(), False) vm_ref = vm_utils.lookup(session, instance['name']) console = conn.get_vnc_console(self.context, instance) # Note(sulo): We don't care about session id in test # they will always differ so strip that out actual_path = console.internal_access_path.split('&')[0] expected_path = "/console?ref=%s" % str(vm_ref) self.assertEqual(expected_path, actual_path) def test_get_vnc_console_for_rescue(self): instance = self._create_instance(obj=True) conn = xenapi_conn.XenAPIDriver(fake.FakeVirtAPI(), False) rescue_vm = xenapi_fake.create_vm(instance['name'] + '-rescue', 'Running') # Set instance state to rescued instance['vm_state'] = 'rescued' console = conn.get_vnc_console(self.context, instance) # Note(sulo): We don't care about session id in test # they will always differ so strip that out actual_path = console.internal_access_path.split('&')[0] expected_path = "/console?ref=%s" % str(rescue_vm) self.assertEqual(expected_path, actual_path) def test_get_vnc_console_instance_not_ready(self): instance = self._create_instance(obj=True, spawn=False) instance.vm_state = 'building' conn = xenapi_conn.XenAPIDriver(fake.FakeVirtAPI(), False) self.assertRaises(exception.InstanceNotFound, conn.get_vnc_console, self.context, instance) def test_get_vnc_console_rescue_not_ready(self): instance = self._create_instance(obj=True, spawn=False) instance.vm_state = 'rescued' conn = xenapi_conn.XenAPIDriver(fake.FakeVirtAPI(), False) self.assertRaises(exception.InstanceNotReady, conn.get_vnc_console, self.context, instance) def test_instance_snapshot_fails_with_no_primary_vdi(self): def create_bad_vbd(session, vm_ref, vdi_ref, userdevice, vbd_type='disk', read_only=False, bootable=False, osvol=False): vbd_rec = {'VM': vm_ref, 'VDI': vdi_ref, 'userdevice': 'fake', 'currently_attached': False} vbd_ref = xenapi_fake._create_object('VBD', vbd_rec) xenapi_fake.after_VBD_create(vbd_ref, vbd_rec) return vbd_ref self.stubs.Set(vm_utils, 'create_vbd', create_bad_vbd) stubs.stubout_instance_snapshot(self.stubs) # Stubbing out firewall driver as previous stub sets alters # xml rpc result parsing stubs.stubout_firewall_driver(self.stubs, self.conn) instance = self._create_instance() image_id = "my_snapshot_id" self.assertRaises(exception.NovaException, self.conn.snapshot, self.context, instance, image_id, lambda *args, **kwargs: None) def test_instance_snapshot(self): expected_calls = [ {'args': (), 'kwargs': {'task_state': task_states.IMAGE_PENDING_UPLOAD}}, {'args': (), 'kwargs': {'task_state': task_states.IMAGE_UPLOADING, 'expected_state': task_states.IMAGE_PENDING_UPLOAD}}] func_call_matcher = matchers.FunctionCallMatcher(expected_calls) image_id = "my_snapshot_id" stubs.stubout_instance_snapshot(self.stubs) stubs.stubout_is_snapshot(self.stubs) # Stubbing out firewall driver as previous stub sets alters # xml rpc result parsing stubs.stubout_firewall_driver(self.stubs, self.conn) instance = self._create_instance() self.fake_upload_called = False def fake_image_upload(_self, ctx, session, inst, img_id, vdi_uuids): self.fake_upload_called = True self.assertEqual(ctx, self.context) self.assertEqual(inst, instance) self.assertIsInstance(vdi_uuids, list) self.assertEqual(img_id, image_id) self.stubs.Set(glance.GlanceStore, 'upload_image', fake_image_upload) 
self.conn.snapshot(self.context, instance, image_id, func_call_matcher.call) # Ensure VM was torn down vm_labels = [] for vm_ref in xenapi_fake.get_all('VM'): vm_rec = xenapi_fake.get_record('VM', vm_ref) if not vm_rec["is_control_domain"]: vm_labels.append(vm_rec["name_label"]) self.assertEqual(vm_labels, [instance['name']]) # Ensure VBDs were torn down vbd_labels = [] for vbd_ref in xenapi_fake.get_all('VBD'): vbd_rec = xenapi_fake.get_record('VBD', vbd_ref) vbd_labels.append(vbd_rec["vm_name_label"]) self.assertEqual(vbd_labels, [instance['name']]) # Ensure task states changed in correct order self.assertIsNone(func_call_matcher.match()) # Ensure VDIs were torn down for vdi_ref in xenapi_fake.get_all('VDI'): vdi_rec = xenapi_fake.get_record('VDI', vdi_ref) name_label = vdi_rec["name_label"] self.assertFalse(name_label.endswith('snapshot')) self.assertTrue(self.fake_upload_called) def create_vm_record(self, conn, os_type, name): instances = conn.list_instances() self.assertEqual(instances, [name]) # Get Nova record for VM vm_info = conn.get_info({'name': name}) # Get XenAPI record for VM vms = [rec for rec in xenapi_fake.get_all_records('VM').values() if not rec['is_control_domain']] vm = vms[0] self.vm_info = vm_info self.vm = vm def check_vm_record(self, conn, instance_type_id, check_injection): flavor = objects.Flavor.get_by_id(self.context, instance_type_id) mem_kib = int(flavor['memory_mb']) << 10 mem_bytes = str(mem_kib << 10) vcpus = flavor['vcpus'] vcpu_weight = flavor['vcpu_weight'] self.assertEqual(self.vm['memory_static_max'], mem_bytes) self.assertEqual(self.vm['memory_dynamic_max'], mem_bytes) self.assertEqual(self.vm['memory_dynamic_min'], mem_bytes) self.assertEqual(self.vm['VCPUs_max'], str(vcpus)) self.assertEqual(self.vm['VCPUs_at_startup'], str(vcpus)) if vcpu_weight is None: self.assertEqual(self.vm['VCPUs_params'], {}) else: self.assertEqual(self.vm['VCPUs_params'], {'weight': str(vcpu_weight), 'cap': '0'}) # Check that the VM is running according to Nova self.assertEqual(self.vm_info.state, power_state.RUNNING) # Check that the VM is running according to XenAPI. 
self.assertEqual(self.vm['power_state'], 'Running') if check_injection: xenstore_data = self.vm['xenstore_data'] self.assertNotIn('vm-data/hostname', xenstore_data) key = 'vm-data/networking/DEADBEEF0001' xenstore_value = xenstore_data[key] tcpip_data = ast.literal_eval(xenstore_value) self.assertJsonEqual({'broadcast': '192.168.1.255', 'dns': ['192.168.1.4', '192.168.1.3'], 'gateway': '192.168.1.1', 'gateway_v6': '2001:db8:0:1::1', 'ip6s': [{'enabled': '1', 'ip': '2001:db8:0:1:dcad:beff:feef:1', 'netmask': 64, 'gateway': '2001:db8:0:1::1'}], 'ips': [{'enabled': '1', 'ip': '192.168.1.100', 'netmask': '255.255.255.0', 'gateway': '192.168.1.1'}, {'enabled': '1', 'ip': '192.168.1.101', 'netmask': '255.255.255.0', 'gateway': '192.168.1.1'}], 'label': 'test1', 'mac': 'DE:AD:BE:EF:00:01'}, tcpip_data) def check_vm_params_for_windows(self): self.assertEqual(self.vm['platform']['nx'], 'true') self.assertEqual(self.vm['HVM_boot_params'], {'order': 'dc'}) self.assertEqual(self.vm['HVM_boot_policy'], 'BIOS order') # check that these are not set self.assertEqual(self.vm['PV_args'], '') self.assertEqual(self.vm['PV_bootloader'], '') self.assertEqual(self.vm['PV_kernel'], '') self.assertEqual(self.vm['PV_ramdisk'], '') def check_vm_params_for_linux(self): self.assertEqual(self.vm['platform']['nx'], 'false') self.assertEqual(self.vm['PV_args'], '') self.assertEqual(self.vm['PV_bootloader'], 'pygrub') # check that these are not set self.assertEqual(self.vm['PV_kernel'], '') self.assertEqual(self.vm['PV_ramdisk'], '') self.assertEqual(self.vm['HVM_boot_params'], {}) self.assertEqual(self.vm['HVM_boot_policy'], '') def check_vm_params_for_linux_with_external_kernel(self): self.assertEqual(self.vm['platform']['nx'], 'false') self.assertEqual(self.vm['PV_args'], 'root=/dev/xvda1') self.assertNotEqual(self.vm['PV_kernel'], '') self.assertNotEqual(self.vm['PV_ramdisk'], '') # check that these are not set self.assertEqual(self.vm['HVM_boot_params'], {}) self.assertEqual(self.vm['HVM_boot_policy'], '') def _list_vdis(self): session = get_session() return session.call_xenapi('VDI.get_all') def _list_vms(self): session = get_session() return session.call_xenapi('VM.get_all') def _check_vdis(self, start_list, end_list): for vdi_ref in end_list: if vdi_ref not in start_list: vdi_rec = xenapi_fake.get_record('VDI', vdi_ref) # If the cache is turned on then the base disk will be # there even after the cleanup if 'other_config' in vdi_rec: if 'image-id' not in vdi_rec['other_config']: self.fail('Found unexpected VDI:%s' % vdi_ref) else: self.fail('Found unexpected VDI:%s' % vdi_ref) def _test_spawn(self, image_ref, kernel_id, ramdisk_id, instance_type_id="3", os_type="linux", hostname="test", architecture="x86-64", instance_id=1, injected_files=None, check_injection=False, create_record=True, empty_dns=False, block_device_info=None, key_data=None): if injected_files is None: injected_files = [] # Fake out inject_instance_metadata def fake_inject_instance_metadata(self, instance, vm): pass self.stubs.Set(vmops.VMOps, '_inject_instance_metadata', fake_inject_instance_metadata) if create_record: flavor = objects.Flavor.get_by_id(self.context, instance_type_id) instance = objects.Instance(context=self.context) instance.project_id = self.project_id instance.user_id = self.user_id instance.image_ref = image_ref instance.kernel_id = kernel_id instance.ramdisk_id = ramdisk_id instance.root_gb = flavor.root_gb instance.ephemeral_gb = flavor.ephemeral_gb instance.instance_type_id = instance_type_id instance.os_type = os_type 
instance.hostname = hostname instance.key_data = key_data instance.architecture = architecture instance.system_metadata = {} instance.flavor = flavor instance.create() else: instance = objects.Instance.get_by_id(self.context, instance_id, expected_attrs=['flavor']) network_info = fake_network.fake_get_instance_nw_info(self) if empty_dns: # NOTE(tr3buchet): this is a terrible way to do this... network_info[0]['network']['subnets'][0]['dns'] = [] image_meta = objects.ImageMeta.from_dict( IMAGE_FIXTURES[image_ref]["image_meta"]) self.conn.spawn(self.context, instance, image_meta, injected_files, 'herp', {}, network_info, block_device_info) self.create_vm_record(self.conn, os_type, instance['name']) self.check_vm_record(self.conn, instance_type_id, check_injection) self.assertEqual(instance['os_type'], os_type) self.assertEqual(instance['architecture'], architecture) def test_spawn_ipxe_iso_success(self): self.mox.StubOutWithMock(vm_utils, 'get_sr_path') vm_utils.get_sr_path(mox.IgnoreArg()).AndReturn('/sr/path') self.flags(ipxe_network_name='test1', ipxe_boot_menu_url='http://boot.example.com', ipxe_mkisofs_cmd='/root/mkisofs', group='xenserver') self.mox.StubOutWithMock(self.conn._session, 'call_plugin_serialized') self.conn._session.call_plugin_serialized( 'ipxe.py', 'inject', '/sr/path', mox.IgnoreArg(), 'http://boot.example.com', '192.168.1.100', '255.255.255.0', '192.168.1.1', '192.168.1.3', '/root/mkisofs') self.conn._session.call_plugin_serialized('partition_utils.py', 'make_partition', 'fakedev', '2048', '-') self.mox.ReplayAll() self._test_spawn(IMAGE_IPXE_ISO, None, None) def test_spawn_ipxe_iso_no_network_name(self): self.flags(ipxe_network_name=None, ipxe_boot_menu_url='http://boot.example.com', group='xenserver') # ipxe inject shouldn't be called self.mox.StubOutWithMock(self.conn._session, 'call_plugin_serialized') self.conn._session.call_plugin_serialized('partition_utils.py', 'make_partition', 'fakedev', '2048', '-') self.mox.ReplayAll() self._test_spawn(IMAGE_IPXE_ISO, None, None) def test_spawn_ipxe_iso_no_boot_menu_url(self): self.flags(ipxe_network_name='test1', ipxe_boot_menu_url=None, group='xenserver') # ipxe inject shouldn't be called self.mox.StubOutWithMock(self.conn._session, 'call_plugin_serialized') self.conn._session.call_plugin_serialized('partition_utils.py', 'make_partition', 'fakedev', '2048', '-') self.mox.ReplayAll() self._test_spawn(IMAGE_IPXE_ISO, None, None) def test_spawn_ipxe_iso_unknown_network_name(self): self.flags(ipxe_network_name='test2', ipxe_boot_menu_url='http://boot.example.com', group='xenserver') # ipxe inject shouldn't be called self.mox.StubOutWithMock(self.conn._session, 'call_plugin_serialized') self.conn._session.call_plugin_serialized('partition_utils.py', 'make_partition', 'fakedev', '2048', '-') self.mox.ReplayAll() self._test_spawn(IMAGE_IPXE_ISO, None, None) def test_spawn_empty_dns(self): # Test spawning with an empty dns list. self._test_spawn(IMAGE_VHD, None, None, os_type="linux", architecture="x86-64", empty_dns=True) self.check_vm_params_for_linux() def test_spawn_not_enough_memory(self): self.assertRaises(exception.InsufficientFreeMemory, self._test_spawn, IMAGE_MACHINE, IMAGE_KERNEL, IMAGE_RAMDISK, "4") # m1.xlarge def test_spawn_fail_cleanup_1(self): """Simulates an error while downloading an image. Verifies that the VM and VDIs created are properly cleaned up. 
""" vdi_recs_start = self._list_vdis() start_vms = self._list_vms() stubs.stubout_fetch_disk_image(self.stubs, raise_failure=True) self.assertRaises(XenAPI.Failure, self._test_spawn, IMAGE_MACHINE, IMAGE_KERNEL, IMAGE_RAMDISK) # No additional VDI should be found. vdi_recs_end = self._list_vdis() end_vms = self._list_vms() self._check_vdis(vdi_recs_start, vdi_recs_end) # No additional VMs should be found. self.assertEqual(start_vms, end_vms) def test_spawn_fail_cleanup_2(self): """Simulates an error while creating VM record. Verifies that the VM and VDIs created are properly cleaned up. """ vdi_recs_start = self._list_vdis() start_vms = self._list_vms() stubs.stubout_create_vm(self.stubs) self.assertRaises(XenAPI.Failure, self._test_spawn, IMAGE_MACHINE, IMAGE_KERNEL, IMAGE_RAMDISK) # No additional VDI should be found. vdi_recs_end = self._list_vdis() end_vms = self._list_vms() self._check_vdis(vdi_recs_start, vdi_recs_end) # No additional VMs should be found. self.assertEqual(start_vms, end_vms) def test_spawn_fail_cleanup_3(self): """Simulates an error while attaching disks. Verifies that the VM and VDIs created are properly cleaned up. """ stubs.stubout_attach_disks(self.stubs) vdi_recs_start = self._list_vdis() start_vms = self._list_vms() self.assertRaises(XenAPI.Failure, self._test_spawn, IMAGE_MACHINE, IMAGE_KERNEL, IMAGE_RAMDISK) # No additional VDI should be found. vdi_recs_end = self._list_vdis() end_vms = self._list_vms() self._check_vdis(vdi_recs_start, vdi_recs_end) # No additional VMs should be found. self.assertEqual(start_vms, end_vms) def test_spawn_raw_glance(self): self._test_spawn(IMAGE_RAW, None, None, os_type=None) self.check_vm_params_for_windows() def test_spawn_vhd_glance_linux(self): self._test_spawn(IMAGE_VHD, None, None, os_type="linux", architecture="x86-64") self.check_vm_params_for_linux() def test_spawn_vhd_glance_windows(self): self._test_spawn(IMAGE_VHD, None, None, os_type="windows", architecture="i386", instance_type_id=5) self.check_vm_params_for_windows() def test_spawn_iso_glance(self): self._test_spawn(IMAGE_ISO, None, None, os_type="windows", architecture="i386") self.check_vm_params_for_windows() def test_spawn_glance(self): def fake_fetch_disk_image(context, session, instance, name_label, image_id, image_type): sr_ref = vm_utils.safe_find_sr(session) image_type_str = vm_utils.ImageType.to_string(image_type) vdi_ref = vm_utils.create_vdi(session, sr_ref, instance, name_label, image_type_str, "20") vdi_role = vm_utils.ImageType.get_role(image_type) vdi_uuid = session.call_xenapi("VDI.get_uuid", vdi_ref) return {vdi_role: dict(uuid=vdi_uuid, file=None)} self.stubs.Set(vm_utils, '_fetch_disk_image', fake_fetch_disk_image) self._test_spawn(IMAGE_MACHINE, IMAGE_KERNEL, IMAGE_RAMDISK) self.check_vm_params_for_linux_with_external_kernel() def test_spawn_boot_from_volume_no_glance_image_meta(self): dev_info = get_fake_device_info() self._test_spawn(IMAGE_FROM_VOLUME, None, None, block_device_info=dev_info) def test_spawn_boot_from_volume_with_image_meta(self): dev_info = get_fake_device_info() self._test_spawn(IMAGE_VHD, None, None, block_device_info=dev_info) @testtools.skipIf(test_utils.is_osx(), 'IPv6 pretty-printing broken on OSX, see bug 1409135') @mock.patch.object(nova.privsep.path, 'readlink') @mock.patch.object(nova.privsep.path, 'writefile') @mock.patch.object(nova.privsep.path, 'makedirs') @mock.patch.object(nova.privsep.path, 'chown') @mock.patch.object(nova.privsep.path, 'chmod') @mock.patch.object(nova.privsep.fs, 'mount', return_value=(None, 
None)) @mock.patch.object(nova.privsep.fs, 'umount') def test_spawn_netinject_file(self, umount, mount, chmod, chown, mkdir, write_file, read_link): self.flags(flat_injected=True) db_fakes.stub_out_db_instance_api(self, injected=True) self._test_spawn(IMAGE_MACHINE, IMAGE_KERNEL, IMAGE_RAMDISK, check_injection=True) read_link.assert_called() mkdir.assert_called() chown.assert_called() chmod.assert_called() write_file.assert_called() @testtools.skipIf(test_utils.is_osx(), 'IPv6 pretty-printing broken on OSX, see bug 1409135') def test_spawn_netinject_xenstore(self): db_fakes.stub_out_db_instance_api(self, injected=True) self._tee_executed = False def _mount_handler(cmd, *ignore_args, **ignore_kwargs): # When mounting, create real files under the mountpoint to simulate # files in the mounted filesystem # mount point will be the last item of the command list self._tmpdir = cmd[len(cmd) - 1] LOG.debug('Creating files in %s to simulate guest agent', self._tmpdir) os.makedirs(os.path.join(self._tmpdir, 'usr', 'sbin')) # Touch the file using open open(os.path.join(self._tmpdir, 'usr', 'sbin', 'xe-update-networking'), 'w').close() return '', '' def _umount_handler(cmd, *ignore_args, **ignore_kwargs): # Umount would normally make files in the mounted filesystem # disappear, so do that here LOG.debug('Removing simulated guest agent files in %s', self._tmpdir) os.remove(os.path.join(self._tmpdir, 'usr', 'sbin', 'xe-update-networking')) os.rmdir(os.path.join(self._tmpdir, 'usr', 'sbin')) os.rmdir(os.path.join(self._tmpdir, 'usr')) return '', '' def _tee_handler(cmd, *ignore_args, **ignore_kwargs): self._tee_executed = True return '', '' fake_processutils.fake_execute_set_repliers([ (r'mount', _mount_handler), (r'umount', _umount_handler), (r'tee.*interfaces', _tee_handler)]) self._test_spawn(IMAGE_MACHINE, IMAGE_KERNEL, IMAGE_RAMDISK, check_injection=True) # tee must not run in this case, where an injection-capable # guest agent is detected self.assertFalse(self._tee_executed) def test_spawn_injects_auto_disk_config_to_xenstore(self): instance = self._create_instance(spawn=False, obj=True) self.mox.StubOutWithMock(self.conn._vmops, '_inject_auto_disk_config') self.conn._vmops._inject_auto_disk_config(instance, mox.IgnoreArg()) self.mox.ReplayAll() image_meta = objects.ImageMeta.from_dict( IMAGE_FIXTURES[IMAGE_MACHINE]["image_meta"]) self.conn.spawn(self.context, instance, image_meta, [], 'herp', {}, '') def test_spawn_vlanmanager(self): self.flags(network_manager='nova.network.manager.VlanManager', vlan_interface='fake0') def dummy(*args, **kwargs): pass self.stubs.Set(vmops.VMOps, '_create_vifs', dummy) # Reset network table xenapi_fake.reset_table('network') # Instance 2 will use vlan network (see db/fakes.py) ctxt = self.context.elevated() inst2 = self._create_instance(False, obj=True) networks = self.network.db.network_get_all(ctxt) with mock.patch('nova.objects.network.Network._from_db_object'): for network in networks: self.network.set_network_host(ctxt, network) self.network.allocate_for_instance(ctxt, instance_id=inst2.id, instance_uuid=inst2.uuid, host=CONF.host, vpn=None, rxtx_factor=3, project_id=self.project_id, macs=None) self._test_spawn(IMAGE_MACHINE, IMAGE_KERNEL, IMAGE_RAMDISK, instance_id=inst2.id, create_record=False) # TODO(salvatore-orlando): a complete test here would require # a check for making sure the bridge for the VM's VIF is # consistent with bridge specified in nova db def test_spawn_with_network_qos(self): self._create_instance() for vif_ref in 
xenapi_fake.get_all('VIF'): vif_rec = xenapi_fake.get_record('VIF', vif_ref) self.assertEqual(vif_rec['qos_algorithm_type'], 'ratelimit') self.assertEqual(vif_rec['qos_algorithm_params']['kbps'], str(3 * 10 * 1024)) def test_spawn_ssh_key_injection(self): # Test spawning with key_data on an instance. Should use # agent file injection. self.flags(use_agent_default=True, group='xenserver') actual_injected_files = [] def fake_inject_file(self, method, args): path = base64.b64decode(args['b64_path']) contents = base64.b64decode(args['b64_contents']) actual_injected_files.append((path, contents)) return jsonutils.dumps({'returncode': '0', 'message': 'success'}) self.stubs.Set(stubs.FakeSessionForVMTests, '_plugin_agent_inject_file', fake_inject_file) def fake_encrypt_text(sshkey, new_pass): self.assertEqual("ssh-rsa fake_keydata", sshkey) return "fake" self.stubs.Set(crypto, 'ssh_encrypt_text', fake_encrypt_text) expected_data = (b'\n# The following ssh key was injected by ' b'Nova\nssh-rsa fake_keydata\n') injected_files = [(b'/root/.ssh/authorized_keys', expected_data)] self._test_spawn(IMAGE_VHD, None, None, os_type="linux", architecture="x86-64", key_data='ssh-rsa fake_keydata') self.assertEqual(actual_injected_files, injected_files) def test_spawn_ssh_key_injection_non_rsa(self): # Test spawning with key_data on an instance. Should use # agent file injection. self.flags(use_agent_default=True, group='xenserver') actual_injected_files = [] def fake_inject_file(self, method, args): path = base64.b64decode(args['b64_path']) contents = base64.b64decode(args['b64_contents']) actual_injected_files.append((path, contents)) return jsonutils.dumps({'returncode': '0', 'message': 'success'}) self.stubs.Set(stubs.FakeSessionForVMTests, '_plugin_agent_inject_file', fake_inject_file) def fake_encrypt_text(sshkey, new_pass): raise NotImplementedError("Should not be called") self.stubs.Set(crypto, 'ssh_encrypt_text', fake_encrypt_text) expected_data = (b'\n# The following ssh key was injected by ' b'Nova\nssh-dsa fake_keydata\n') injected_files = [(b'/root/.ssh/authorized_keys', expected_data)] self._test_spawn(IMAGE_VHD, None, None, os_type="linux", architecture="x86-64", key_data='ssh-dsa fake_keydata') self.assertEqual(actual_injected_files, injected_files) def test_spawn_injected_files(self): # Test spawning with injected_files. 
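        # The agent plugin protocol passes its arguments as a map of
        # strings, so injected file paths and contents travel
        # base64-encoded ('b64_path' / 'b64_contents'); the fake_inject_file
        # stub below just decodes them to record what would have been
        # written into the guest.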
self.flags(use_agent_default=True, group='xenserver') actual_injected_files = [] def fake_inject_file(self, method, args): path = base64.b64decode(args['b64_path']) contents = base64.b64decode(args['b64_contents']) actual_injected_files.append((path, contents)) return jsonutils.dumps({'returncode': '0', 'message': 'success'}) self.stubs.Set(stubs.FakeSessionForVMTests, '_plugin_agent_inject_file', fake_inject_file) injected_files = [(b'/tmp/foo', b'foobar')] self._test_spawn(IMAGE_VHD, None, None, os_type="linux", architecture="x86-64", injected_files=injected_files) self.check_vm_params_for_linux() self.assertEqual(actual_injected_files, injected_files) @mock.patch('nova.db.agent_build_get_by_triple') def test_spawn_agent_upgrade(self, mock_get): self.flags(use_agent_default=True, group='xenserver') mock_get.return_value = {"version": "1.1.0", "architecture": "x86-64", "hypervisor": "xen", "os": "windows", "url": "url", "md5hash": "asdf", 'created_at': None, 'updated_at': None, 'deleted_at': None, 'deleted': False, 'id': 1} self._test_spawn(IMAGE_VHD, None, None, os_type="linux", architecture="x86-64") @mock.patch('nova.db.agent_build_get_by_triple') def test_spawn_agent_upgrade_fails_silently(self, mock_get): mock_get.return_value = {"version": "1.1.0", "architecture": "x86-64", "hypervisor": "xen", "os": "windows", "url": "url", "md5hash": "asdf", 'created_at': None, 'updated_at': None, 'deleted_at': None, 'deleted': False, 'id': 1} self._test_spawn_fails_silently_with(exception.AgentError, method="_plugin_agent_agentupdate", failure="fake_error") def test_spawn_with_resetnetwork_alternative_returncode(self): self.flags(use_agent_default=True, group='xenserver') def fake_resetnetwork(self, method, args): fake_resetnetwork.called = True # NOTE(johngarbutt): as returned by FreeBSD and Gentoo return jsonutils.dumps({'returncode': '500', 'message': 'success'}) self.stubs.Set(stubs.FakeSessionForVMTests, '_plugin_agent_resetnetwork', fake_resetnetwork) fake_resetnetwork.called = False self._test_spawn(IMAGE_VHD, None, None, os_type="linux", architecture="x86-64") self.assertTrue(fake_resetnetwork.called) def _test_spawn_fails_silently_with(self, expected_exception_cls, method="_plugin_agent_version", failure=None, value=None): self.flags(use_agent_default=True, agent_version_timeout=0, group='xenserver') def fake_agent_call(self, method, args): if failure: raise XenAPI.Failure([failure]) else: return value self.stubs.Set(stubs.FakeSessionForVMTests, method, fake_agent_call) called = {} def fake_add_instance_fault(*args, **kwargs): called["fake_add_instance_fault"] = args[2] self.stubs.Set(compute_utils, 'add_instance_fault_from_exc', fake_add_instance_fault) self._test_spawn(IMAGE_VHD, None, None, os_type="linux", architecture="x86-64") actual_exception = called["fake_add_instance_fault"] self.assertIsInstance(actual_exception, expected_exception_cls) def test_spawn_fails_silently_with_agent_timeout(self): self._test_spawn_fails_silently_with(exception.AgentTimeout, failure="TIMEOUT:fake") def test_spawn_fails_silently_with_agent_not_implemented(self): self._test_spawn_fails_silently_with(exception.AgentNotImplemented, failure="NOT IMPLEMENTED:fake") def test_spawn_fails_silently_with_agent_error(self): self._test_spawn_fails_silently_with(exception.AgentError, failure="fake_error") def test_spawn_fails_silently_with_agent_bad_return(self): error = jsonutils.dumps({'returncode': -1, 'message': 'fake'}) self._test_spawn_fails_silently_with(exception.AgentError, value=error) def 
test_spawn_sets_last_dom_id(self): self._test_spawn(IMAGE_VHD, None, None, os_type="linux", architecture="x86-64") self.assertEqual(self.vm['domid'], self.vm['other_config']['last_dom_id']) def test_rescue(self): instance = self._create_instance(spawn=False, obj=True) xenapi_fake.create_vm(instance['name'], 'Running') session = get_session() vm_ref = vm_utils.lookup(session, instance['name']) swap_vdi_ref = xenapi_fake.create_vdi('swap', None) root_vdi_ref = xenapi_fake.create_vdi('root', None) eph1_vdi_ref = xenapi_fake.create_vdi('eph', None) eph2_vdi_ref = xenapi_fake.create_vdi('eph', None) vol_vdi_ref = xenapi_fake.create_vdi('volume', None) xenapi_fake.create_vbd(vm_ref, swap_vdi_ref, userdevice=2) xenapi_fake.create_vbd(vm_ref, root_vdi_ref, userdevice=0) xenapi_fake.create_vbd(vm_ref, eph1_vdi_ref, userdevice=4) xenapi_fake.create_vbd(vm_ref, eph2_vdi_ref, userdevice=5) xenapi_fake.create_vbd(vm_ref, vol_vdi_ref, userdevice=6, other_config={'osvol': True}) conn = xenapi_conn.XenAPIDriver(fake.FakeVirtAPI(), False) image_meta = objects.ImageMeta.from_dict( {'id': IMAGE_VHD, 'disk_format': 'vhd', 'properties': {'vm_mode': 'xen'}}) conn.rescue(self.context, instance, [], image_meta, '') vm = xenapi_fake.get_record('VM', vm_ref) rescue_name = "%s-rescue" % vm["name_label"] rescue_ref = vm_utils.lookup(session, rescue_name) rescue_vm = xenapi_fake.get_record('VM', rescue_ref) vdi_refs = {} for vbd_ref in rescue_vm['VBDs']: vbd = xenapi_fake.get_record('VBD', vbd_ref) vdi_refs[vbd['VDI']] = vbd['userdevice'] self.assertEqual('1', vdi_refs[root_vdi_ref]) self.assertEqual('2', vdi_refs[swap_vdi_ref]) self.assertEqual('4', vdi_refs[eph1_vdi_ref]) self.assertEqual('5', vdi_refs[eph2_vdi_ref]) self.assertNotIn(vol_vdi_ref, vdi_refs) def test_rescue_preserve_disk_on_failure(self): # test that the original disk is preserved if rescue setup fails # bug #1227898 instance = self._create_instance(obj=True) session = get_session() image_meta = objects.ImageMeta.from_dict( {'id': IMAGE_VHD, 'disk_format': 'vhd', 'properties': {'vm_mode': 'xen'}}) vm_ref = vm_utils.lookup(session, instance['name']) vdi_ref, vdi_rec = vm_utils.get_vdi_for_vm_safely(session, vm_ref) # raise an error in the spawn setup process and trigger the # undo manager logic: def fake_start(*args, **kwargs): raise test.TestingException('Start Error') self.stubs.Set(self.conn._vmops, '_start', fake_start) self.assertRaises(test.TestingException, self.conn.rescue, self.context, instance, [], image_meta, '') # confirm original disk still exists: vdi_ref2, vdi_rec2 = vm_utils.get_vdi_for_vm_safely(session, vm_ref) self.assertEqual(vdi_ref, vdi_ref2) self.assertEqual(vdi_rec['uuid'], vdi_rec2['uuid']) def test_unrescue(self): instance = self._create_instance(obj=True) conn = xenapi_conn.XenAPIDriver(fake.FakeVirtAPI(), False) # Unrescue expects the original instance to be powered off conn.power_off(instance) xenapi_fake.create_vm(instance['name'] + '-rescue', 'Running') conn.unrescue(instance, None) def test_unrescue_not_in_rescue(self): instance = self._create_instance(obj=True) conn = xenapi_conn.XenAPIDriver(fake.FakeVirtAPI(), False) # Ensure that it will not unrescue a non-rescued instance. 
self.assertRaises(exception.InstanceNotInRescueMode, conn.unrescue, instance, None) def test_finish_revert_migration(self): instance = self._create_instance() class VMOpsMock(object): def __init__(self): self.finish_revert_migration_called = False def finish_revert_migration(self, context, instance, block_info, power_on): self.finish_revert_migration_called = True conn = xenapi_conn.XenAPIDriver(fake.FakeVirtAPI(), False) conn._vmops = VMOpsMock() conn.finish_revert_migration(self.context, instance, None) self.assertTrue(conn._vmops.finish_revert_migration_called) def test_reboot_hard(self): instance = self._create_instance() conn = xenapi_conn.XenAPIDriver(fake.FakeVirtAPI(), False) conn.reboot(self.context, instance, None, "HARD") def test_poll_rebooting_instances(self): self.mox.StubOutWithMock(compute_api.API, 'reboot') compute_api.API.reboot(mox.IgnoreArg(), mox.IgnoreArg(), mox.IgnoreArg()) self.mox.ReplayAll() instance = self._create_instance() instances = [instance] conn = xenapi_conn.XenAPIDriver(fake.FakeVirtAPI(), False) conn.poll_rebooting_instances(60, instances) def test_reboot_soft(self): instance = self._create_instance() conn = xenapi_conn.XenAPIDriver(fake.FakeVirtAPI(), False) conn.reboot(self.context, instance, None, "SOFT") def test_reboot_halted(self): session = get_session() instance = self._create_instance(spawn=False) conn = xenapi_conn.XenAPIDriver(fake.FakeVirtAPI(), False) xenapi_fake.create_vm(instance['name'], 'Halted') conn.reboot(self.context, instance, None, "SOFT") vm_ref = vm_utils.lookup(session, instance['name']) vm = xenapi_fake.get_record('VM', vm_ref) self.assertEqual(vm['power_state'], 'Running') def test_reboot_unknown_state(self): instance = self._create_instance(spawn=False) conn = xenapi_conn.XenAPIDriver(fake.FakeVirtAPI(), False) xenapi_fake.create_vm(instance['name'], 'Unknown') self.assertRaises(XenAPI.Failure, conn.reboot, self.context, instance, None, "SOFT") def test_reboot_rescued(self): instance = self._create_instance() instance['vm_state'] = vm_states.RESCUED conn = xenapi_conn.XenAPIDriver(fake.FakeVirtAPI(), False) real_result = vm_utils.lookup(conn._session, instance['name']) with mock.patch.object(vm_utils, 'lookup', return_value=real_result) as mock_lookup: conn.reboot(self.context, instance, None, "SOFT") mock_lookup.assert_called_once_with(conn._session, instance['name'], True) def test_get_console_output_succeeds(self): def fake_get_console_output(instance): self.assertEqual("instance", instance) return "console_log" self.stubs.Set(self.conn._vmops, 'get_console_output', fake_get_console_output) self.assertEqual(self.conn.get_console_output('context', "instance"), "console_log") def _test_maintenance_mode(self, find_host, find_aggregate): real_call_xenapi = self.conn._session.call_xenapi instance = self._create_instance(spawn=True) api_calls = {} # Record all the xenapi calls, and return a fake list of hosts # for the host.get_all call def fake_call_xenapi(method, *args): api_calls[method] = args if method == 'host.get_all': return ['foo', 'bar', 'baz'] return real_call_xenapi(method, *args) self.stubs.Set(self.conn._session, 'call_xenapi', fake_call_xenapi) def fake_aggregate_get(context, host, key): if find_aggregate: return [test_aggregate.fake_aggregate] else: return [] self.stub_out('nova.db.aggregate_get_by_host', fake_aggregate_get) def fake_host_find(context, session, src, dst): if find_host: return 'bar' else: raise exception.NoValidHost("I saw this one coming...") self.stubs.Set(host, '_host_find', fake_host_find) 
result = self.conn.host_maintenance_mode('bar', 'on_maintenance') self.assertEqual(result, 'on_maintenance') # We expect the VM.pool_migrate call to have been called to # migrate our instance to the 'bar' host vm_ref = vm_utils.lookup(self.conn._session, instance['name']) host_ref = "foo" expected = (vm_ref, host_ref, {"live": "true"}) self.assertEqual(api_calls.get('VM.pool_migrate'), expected) instance = db.instance_get_by_uuid(self.context, instance['uuid']) self.assertEqual(instance['vm_state'], vm_states.ACTIVE) self.assertEqual(instance['task_state'], task_states.MIGRATING) def test_maintenance_mode(self): self._test_maintenance_mode(True, True) def test_maintenance_mode_no_host(self): self.assertRaises(exception.NoValidHost, self._test_maintenance_mode, False, True) def test_maintenance_mode_no_aggregate(self): self.assertRaises(exception.NotFound, self._test_maintenance_mode, True, False) def test_uuid_find(self): self.mox.StubOutWithMock(db, 'instance_get_all_by_host') fake_inst = fake_instance.fake_db_instance(id=123) fake_inst2 = fake_instance.fake_db_instance(id=456) db.instance_get_all_by_host(self.context, fake_inst['host'], columns_to_join=None ).AndReturn([fake_inst, fake_inst2]) self.mox.ReplayAll() expected_name = CONF.instance_name_template % fake_inst['id'] inst_uuid = host._uuid_find(self.context, fake_inst['host'], expected_name) self.assertEqual(inst_uuid, fake_inst['uuid']) def test_per_instance_usage_running(self): instance = self._create_instance(spawn=True) flavor = objects.Flavor.get_by_id(self.context, 3) expected = {instance['uuid']: {'memory_mb': flavor['memory_mb'], 'uuid': instance['uuid']}} actual = self.conn.get_per_instance_usage() self.assertEqual(expected, actual) # Paused instances still consume resources: self.conn.pause(instance) actual = self.conn.get_per_instance_usage() self.assertEqual(expected, actual) def test_per_instance_usage_suspended(self): # Suspended instances do not consume memory: instance = self._create_instance(spawn=True) self.conn.suspend(self.context, instance) actual = self.conn.get_per_instance_usage() self.assertEqual({}, actual) def test_per_instance_usage_halted(self): instance = self._create_instance(spawn=True, obj=True) self.conn.power_off(instance) actual = self.conn.get_per_instance_usage() self.assertEqual({}, actual) def _create_instance(self, spawn=True, obj=False, **attrs): """Creates and spawns a test instance.""" instance_values = { 'uuid': uuidutils.generate_uuid(), 'display_name': 'host-', 'project_id': self.project_id, 'user_id': self.user_id, 'image_ref': IMAGE_MACHINE, 'kernel_id': IMAGE_KERNEL, 'ramdisk_id': IMAGE_RAMDISK, 'root_gb': 80, 'ephemeral_gb': 0, 'instance_type_id': '3', # m1.large 'os_type': 'linux', 'vm_mode': 'hvm', 'architecture': 'x86-64'} instance_values.update(attrs) instance = create_instance_with_system_metadata(self.context, instance_values) network_info = fake_network.fake_get_instance_nw_info(self) image_meta = objects.ImageMeta.from_dict( {'id': uuids.image_id, 'disk_format': 'vhd'}) if spawn: self.conn.spawn(self.context, instance, image_meta, [], 'herp', {}, network_info) if obj: return instance return base.obj_to_primitive(instance) def test_destroy_clean_up_kernel_and_ramdisk(self): def fake_lookup_kernel_ramdisk(session, vm_ref): return "kernel", "ramdisk" self.stubs.Set(vm_utils, "lookup_kernel_ramdisk", fake_lookup_kernel_ramdisk) def fake_destroy_kernel_ramdisk(session, instance, kernel, ramdisk): fake_destroy_kernel_ramdisk.called = True self.assertEqual("kernel", kernel) 
self.assertEqual("ramdisk", ramdisk) fake_destroy_kernel_ramdisk.called = False self.stubs.Set(vm_utils, "destroy_kernel_ramdisk", fake_destroy_kernel_ramdisk) instance = self._create_instance(spawn=True, obj=True) network_info = fake_network.fake_get_instance_nw_info(self) self.conn.destroy(self.context, instance, network_info) vm_ref = vm_utils.lookup(self.conn._session, instance['name']) self.assertIsNone(vm_ref) self.assertTrue(fake_destroy_kernel_ramdisk.called) class XenAPIDiffieHellmanTestCase(test.NoDBTestCase): """Unit tests for Diffie-Hellman code.""" def setUp(self): super(XenAPIDiffieHellmanTestCase, self).setUp() self.alice = agent.SimpleDH() self.bob = agent.SimpleDH() def test_shared(self): alice_pub = self.alice.get_public() bob_pub = self.bob.get_public() alice_shared = self.alice.compute_shared(bob_pub) bob_shared = self.bob.compute_shared(alice_pub) self.assertEqual(alice_shared, bob_shared) def _test_encryption(self, message): enc = self.alice.encrypt(message) self.assertFalse(enc.endswith('\n')) dec = self.bob.decrypt(enc) self.assertEqual(dec, message) def test_encrypt_simple_message(self): self._test_encryption('This is a simple message.') def test_encrypt_message_with_newlines_at_end(self): self._test_encryption('This message has a newline at the end.\n') def test_encrypt_many_newlines_at_end(self): self._test_encryption('Message with lotsa newlines.\n\n\n') def test_encrypt_newlines_inside_message(self): self._test_encryption('Message\nwith\ninterior\nnewlines.') def test_encrypt_with_leading_newlines(self): self._test_encryption('\n\nMessage with leading newlines.') def test_encrypt_really_long_message(self): self._test_encryption(''.join(['abcd' for i in range(1024)])) # FIXME(sirp): convert this to use XenAPITestBaseNoDB class XenAPIMigrateInstance(stubs.XenAPITestBase): """Unit test for verifying migration-related actions.""" REQUIRES_LOCKING = True def setUp(self): super(XenAPIMigrateInstance, self).setUp() self.flags(connection_url='http://localhost', connection_password='test_pass', group='xenserver') self.flags(firewall_driver='nova.virt.xenapi.firewall.' 
'Dom0IptablesFirewallDriver') stubs.stubout_session(self.stubs, stubs.FakeSessionForVMTests) db_fakes.stub_out_db_instance_api(self) xenapi_fake.create_network('fake', 'fake_br1') self.user_id = 'fake' self.project_id = 'fake' self.context = context.RequestContext(self.user_id, self.project_id) self.instance_values = { 'project_id': self.project_id, 'user_id': self.user_id, 'image_ref': IMAGE_MACHINE, 'kernel_id': None, 'ramdisk_id': None, 'root_gb': 80, 'ephemeral_gb': 0, 'instance_type_id': '3', # m1.large 'os_type': 'linux', 'architecture': 'x86-64'} migration_values = { 'source_compute': 'nova-compute', 'dest_compute': 'nova-compute', 'dest_host': '10.127.5.114', 'status': 'post-migrating', 'instance_uuid': '15f23e6a-cc6e-4d22-b651-d9bdaac316f7', 'old_instance_type_id': 5, 'new_instance_type_id': 1 } self.migration = db.migration_create( context.get_admin_context(), migration_values) fake_processutils.stub_out_processutils_execute(self) stubs.stub_out_migration_methods(self.stubs) stubs.stubout_get_this_vm_uuid(self.stubs) def fake_inject_instance_metadata(self, instance, vm): pass self.stub_out('nova.virt.xenapi.vmops.VMOps._inject_instance_metadata', fake_inject_instance_metadata) def fake_unpause_and_wait(self, vm_ref, instance, power_on): pass self.stub_out('nova.virt.xenapi.vmops.VMOps._unpause_and_wait', fake_unpause_and_wait) def _create_instance(self, **kw): values = self.instance_values.copy() values.update(kw) instance = objects.Instance(context=self.context, **values) instance.flavor = objects.Flavor(root_gb=80, ephemeral_gb=0) instance.create() return instance @mock.patch.object(vmops.VMOps, '_migrate_disk_resizing_up') @mock.patch.object(vm_utils, 'get_sr_path') @mock.patch.object(vm_utils, 'lookup') @mock.patch.object(volume_utils, 'is_booted_from_volume') def test_migrate_disk_and_power_off(self, mock_boot_from_volume, mock_lookup, mock_sr_path, mock_migrate): instance = self._create_instance() xenapi_fake.create_vm(instance['name'], 'Running') flavor = fake_flavor.fake_flavor_obj(self.context, root_gb=80, ephemeral_gb=0) conn = xenapi_conn.XenAPIDriver(fake.FakeVirtAPI(), False) mock_boot_from_volume.return_value = True mock_lookup.return_value = 'fake_vm_ref' mock_sr_path.return_value = 'fake_sr_path' conn.migrate_disk_and_power_off(self.context, instance, '127.0.0.1', flavor, None) mock_lookup.assert_called_once_with(conn._session, instance['name'], False) mock_sr_path.assert_called_once_with(conn._session) mock_migrate.assert_called_once_with(self.context, instance, '127.0.0.1', 'fake_vm_ref', 'fake_sr_path') def test_migrate_disk_and_power_off_passes_exceptions(self): instance = self._create_instance() xenapi_fake.create_vm(instance['name'], 'Running') flavor = fake_flavor.fake_flavor_obj(self.context, root_gb=80, ephemeral_gb=0) def fake_raise(*args, **kwargs): raise exception.MigrationError(reason='test failure') self.stub_out( 'nova.virt.xenapi.vmops.VMOps._migrate_disk_resizing_up', fake_raise) conn = xenapi_conn.XenAPIDriver(fake.FakeVirtAPI(), False) self.assertRaises(exception.MigrationError, conn.migrate_disk_and_power_off, self.context, instance, '127.0.0.1', flavor, None) def test_migrate_disk_and_power_off_throws_on_zero_gb_resize_down(self): instance = self._create_instance() flavor = fake_flavor.fake_flavor_obj(self.context, root_gb=0, ephemeral_gb=0) conn = xenapi_conn.XenAPIDriver(fake.FakeVirtAPI(), False) self.assertRaises(exception.ResizeError, conn.migrate_disk_and_power_off, self.context, instance, 'fake_dest', flavor, None) 
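    # NOTE: the mock.patch.object stacks used below apply bottom-up, so the
    # mock for the *lowest* decorator arrives as the *first* test argument.
    # A minimal sketch of that wiring (hypothetical test, names illustrative):
    #
    #     @mock.patch.object(vm_utils, 'get_sr_path')  # -> second argument
    #     @mock.patch.object(vm_utils, 'lookup')       # -> first argument
    #     def test_example(self, mock_lookup, mock_sr_path):
    #         ...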
@mock.patch.object(vmops.VMOps, '_migrate_disk_resizing_up') @mock.patch.object(vm_utils, 'get_sr_path') @mock.patch.object(vm_utils, 'lookup') @mock.patch.object(volume_utils, 'is_booted_from_volume') def test_migrate_disk_and_power_off_with_zero_gb_old_and_new_works( self, mock_boot_from_volume, mock_lookup, mock_sr_path, mock_migrate): flavor = fake_flavor.fake_flavor_obj(self.context, root_gb=0, ephemeral_gb=0) instance = self._create_instance(root_gb=0, ephemeral_gb=0) instance.flavor.root_gb = 0 instance.flavor.ephemeral_gb = 0 xenapi_fake.create_vm(instance['name'], 'Running') mock_boot_from_volume.return_value = True mock_lookup.return_value = 'fake_vm_ref' mock_sr_path.return_value = 'fake_sr_path' conn = xenapi_conn.XenAPIDriver(fake.FakeVirtAPI(), False) conn.migrate_disk_and_power_off(self.context, instance, '127.0.0.1', flavor, None) mock_lookup.assert_called_once_with(conn._session, instance['name'], False) mock_sr_path.assert_called_once_with(conn._session) mock_migrate.assert_called_once_with(self.context, instance, '127.0.0.1', 'fake_vm_ref', 'fake_sr_path') def _test_revert_migrate(self, power_on): instance = create_instance_with_system_metadata(self.context, self.instance_values) self.called = False self.fake_vm_start_called = False self.fake_finish_revert_migration_called = False context = 'fake_context' def fake_vm_start(*args, **kwargs): self.fake_vm_start_called = True def fake_vdi_resize(*args, **kwargs): self.called = True def fake_finish_revert_migration(*args, **kwargs): self.fake_finish_revert_migration_called = True self.stub_out( 'nova.tests.unit.virt.xenapi.stubs.FakeSessionForVMTests' '.VDI_resize_online', fake_vdi_resize) self.stub_out('nova.virt.xenapi.vmops.VMOps._start', fake_vm_start) self.stub_out('nova.virt.xenapi.vmops.VMOps.finish_revert_migration', fake_finish_revert_migration) stubs.stubout_session(self.stubs, stubs.FakeSessionForVMTests, product_version=(4, 0, 0), product_brand='XenServer') conn = xenapi_conn.XenAPIDriver(fake.FakeVirtAPI(), False) network_info = fake_network.fake_get_instance_nw_info(self) image_meta = objects.ImageMeta.from_dict( {'id': instance['image_ref'], 'disk_format': 'vhd'}) base = xenapi_fake.create_vdi('hurr', 'fake') base_uuid = xenapi_fake.get_record('VDI', base)['uuid'] cow = xenapi_fake.create_vdi('durr', 'fake') cow_uuid = xenapi_fake.get_record('VDI', cow)['uuid'] conn.finish_migration(self.context, self.migration, instance, dict(base_copy=base_uuid, cow=cow_uuid), network_info, image_meta, resize_instance=True, block_device_info=None, power_on=power_on) self.assertTrue(self.called) self.assertEqual(self.fake_vm_start_called, power_on) conn.finish_revert_migration(context, instance, network_info) self.assertTrue(self.fake_finish_revert_migration_called) def test_revert_migrate_power_on(self): self._test_revert_migrate(True) def test_revert_migrate_power_off(self): self._test_revert_migrate(False) def _test_finish_migrate(self, power_on): instance = create_instance_with_system_metadata(self.context, self.instance_values) self.called = False self.fake_vm_start_called = False def fake_vm_start(*args, **kwargs): self.fake_vm_start_called = True def fake_vdi_resize(*args, **kwargs): self.called = True self.stub_out('nova.virt.xenapi.vmops.VMOps._start', fake_vm_start) self.stub_out('nova.tests.unit.virt.xenapi.stubs' '.FakeSessionForVMTests.VDI_resize_online', fake_vdi_resize) stubs.stubout_session(self.stubs, stubs.FakeSessionForVMTests, product_version=(4, 0, 0), product_brand='XenServer') conn = 
xenapi_conn.XenAPIDriver(fake.FakeVirtAPI(), False) network_info = fake_network.fake_get_instance_nw_info(self) image_meta = objects.ImageMeta.from_dict( {'id': instance['image_ref'], 'disk_format': 'vhd'}) conn.finish_migration(self.context, self.migration, instance, dict(base_copy='hurr', cow='durr'), network_info, image_meta, resize_instance=True, block_device_info=None, power_on=power_on) self.assertTrue(self.called) self.assertEqual(self.fake_vm_start_called, power_on) def test_finish_migrate_power_on(self): self._test_finish_migrate(True) def test_finish_migrate_power_off(self): self._test_finish_migrate(False) def test_finish_migrate_no_local_storage(self): values = copy.copy(self.instance_values) values["root_gb"] = 0 values["ephemeral_gb"] = 0 instance = create_instance_with_system_metadata(self.context, values) instance.flavor.root_gb = 0 instance.flavor.ephemeral_gb = 0 def fake_vdi_resize(*args, **kwargs): raise Exception("This shouldn't be called") self.stub_out('nova.tests.unit.virt.xenapi.stubs' '.FakeSessionForVMTests.VDI_resize_online', fake_vdi_resize) conn = xenapi_conn.XenAPIDriver(fake.FakeVirtAPI(), False) network_info = fake_network.fake_get_instance_nw_info(self) image_meta = objects.ImageMeta.from_dict( {'id': instance['image_ref'], 'disk_format': 'vhd'}) conn.finish_migration(self.context, self.migration, instance, dict(base_copy='hurr', cow='durr'), network_info, image_meta, resize_instance=True) def test_finish_migrate_no_resize_vdi(self): instance = create_instance_with_system_metadata(self.context, self.instance_values) def fake_vdi_resize(*args, **kwargs): raise Exception("This shouldn't be called") self.stub_out('nova.tests.unit.virt.xenapi.stubs' '.FakeSessionForVMTests.VDI_resize_online', fake_vdi_resize) conn = xenapi_conn.XenAPIDriver(fake.FakeVirtAPI(), False) network_info = fake_network.fake_get_instance_nw_info(self) # Resize instance would be determined by the compute call image_meta = objects.ImageMeta.from_dict( {'id': instance['image_ref'], 'disk_format': 'vhd'}) conn.finish_migration(self.context, self.migration, instance, dict(base_copy='hurr', cow='durr'), network_info, image_meta, resize_instance=False) @stub_vm_utils_with_vdi_attached def test_migrate_too_many_partitions_no_resize_down(self): instance = self._create_instance() xenapi_fake.create_vm(instance['name'], 'Running') flavor = objects.Flavor.get_by_name(self.context, 'm1.small') conn = xenapi_conn.XenAPIDriver(fake.FakeVirtAPI(), False) def fake_get_partitions(partition): return [(1, 2, 3, 4, "", ""), (1, 2, 3, 4, "", "")] self.stub_out('nova.virt.xenapi.vm_utils._get_partitions', fake_get_partitions) self.assertRaises(exception.InstanceFaultRollback, conn.migrate_disk_and_power_off, self.context, instance, '127.0.0.1', flavor, None) @stub_vm_utils_with_vdi_attached def test_migrate_bad_fs_type_no_resize_down(self): instance = self._create_instance() xenapi_fake.create_vm(instance['name'], 'Running') flavor = objects.Flavor.get_by_name(self.context, 'm1.small') conn = xenapi_conn.XenAPIDriver(fake.FakeVirtAPI(), False) def fake_get_partitions(partition): return [(1, 2, 3, "ext2", "", "boot")] self.stub_out('nova.virt.xenapi.vm_utils._get_partitions', fake_get_partitions) self.assertRaises(exception.InstanceFaultRollback, conn.migrate_disk_and_power_off, self.context, instance, '127.0.0.1', flavor, None) @mock.patch.object(vmops.VMOps, '_resize_ensure_vm_is_shutdown') @mock.patch.object(vmops.VMOps, '_apply_orig_vm_name_label') @mock.patch.object(vm_utils, 'resize_disk') 
    @mock.patch.object(vmops.VMOps, '_resize_ensure_vm_is_shutdown')
    @mock.patch.object(vmops.VMOps, '_apply_orig_vm_name_label')
    @mock.patch.object(vm_utils, 'resize_disk')
    @mock.patch.object(vm_utils, 'migrate_vhd')
    @mock.patch.object(vm_utils, 'destroy_vdi')
    @mock.patch.object(vm_utils, 'get_vdi_for_vm_safely')
    @mock.patch.object(vmops.VMOps, '_restore_orig_vm_and_cleanup_orphan')
    def test_migrate_rollback_when_resize_down_fs_fails(self, mock_restore,
                                                        mock_get_vdi,
                                                        mock_destroy,
                                                        mock_migrate,
                                                        mock_disk,
                                                        mock_label,
                                                        mock_resize):
        conn = xenapi_conn.XenAPIDriver(fake.FakeVirtAPI(), False)
        vmops = conn._vmops
        instance = objects.Instance(context=self.context,
                                    auto_disk_config=True,
                                    uuid=uuids.instance)
        instance.obj_reset_changes()
        vm_ref = "vm_ref"
        dest = "dest"
        flavor = "type"
        sr_path = "sr_path"

        vmops._resize_ensure_vm_is_shutdown(instance, vm_ref)
        vmops._apply_orig_vm_name_label(instance, vm_ref)
        old_vdi_ref = "old_ref"
        mock_get_vdi.return_value = (old_vdi_ref, None)
        new_vdi_ref = "new_ref"
        new_vdi_uuid = "new_uuid"
        mock_disk.return_value = (new_vdi_ref, new_vdi_uuid)
        mock_migrate.side_effect = exception.ResizeError(reason="asdf")
        vm_utils.destroy_vdi(vmops._session, new_vdi_ref)
        vmops._restore_orig_vm_and_cleanup_orphan(instance)

        with mock.patch.object(instance, 'save') as mock_save:
            self.assertRaises(exception.InstanceFaultRollback,
                              vmops._migrate_disk_resizing_down, self.context,
                              instance, dest, flavor, vm_ref, sr_path)
            self.assertEqual(3, mock_save.call_count)
            self.assertEqual(60.0, instance.progress)

        mock_resize.assert_any_call(instance, vm_ref)
        mock_label.assert_any_call(instance, vm_ref)
        mock_get_vdi.assert_called_once_with(vmops._session, vm_ref)
        mock_disk.assert_called_once_with(vmops._session, instance,
                                          old_vdi_ref, flavor)
        mock_migrate.assert_called_once_with(vmops._session, instance,
                                             new_vdi_uuid, dest, sr_path, 0)
        mock_destroy.assert_any_call(vmops._session, new_vdi_ref)
        mock_restore.assert_any_call(instance)

    @mock.patch.object(vm_utils, 'is_vm_shutdown')
    @mock.patch.object(vm_utils, 'clean_shutdown_vm')
    @mock.patch.object(vm_utils, 'hard_shutdown_vm')
    def test_resize_ensure_vm_is_shutdown_cleanly(self, mock_hard,
                                                  mock_clean, mock_shutdown):
        conn = xenapi_conn.XenAPIDriver(fake.FakeVirtAPI(), False)
        vmops = conn._vmops
        fake_instance = {'uuid': 'uuid'}

        mock_shutdown.return_value = False
        # The clean shutdown succeeds, so no hard shutdown is attempted.
        mock_clean.return_value = True

        vmops._resize_ensure_vm_is_shutdown(fake_instance, "ref")
        mock_shutdown.assert_called_once_with(vmops._session, "ref")
        mock_clean.assert_called_once_with(vmops._session, fake_instance,
                                           "ref")

    @mock.patch.object(vm_utils, 'is_vm_shutdown')
    @mock.patch.object(vm_utils, 'clean_shutdown_vm')
    @mock.patch.object(vm_utils, 'hard_shutdown_vm')
    def test_resize_ensure_vm_is_shutdown_forced(self, mock_hard,
                                                 mock_clean, mock_shutdown):
        conn = xenapi_conn.XenAPIDriver(fake.FakeVirtAPI(), False)
        vmops = conn._vmops
        fake_instance = {'uuid': 'uuid'}

        mock_shutdown.return_value = False
        mock_clean.return_value = False
        mock_hard.return_value = True

        vmops._resize_ensure_vm_is_shutdown(fake_instance, "ref")
        mock_shutdown.assert_called_once_with(vmops._session, "ref")
        mock_clean.assert_called_once_with(vmops._session, fake_instance,
                                           "ref")
        mock_hard.assert_called_once_with(vmops._session, fake_instance,
                                          "ref")

    @mock.patch.object(vm_utils, 'is_vm_shutdown')
    @mock.patch.object(vm_utils, 'clean_shutdown_vm')
    @mock.patch.object(vm_utils, 'hard_shutdown_vm')
    def test_resize_ensure_vm_is_shutdown_fails(self, mock_hard,
                                                mock_clean, mock_shutdown):
        conn = xenapi_conn.XenAPIDriver(fake.FakeVirtAPI(), False)
        vmops = conn._vmops
        fake_instance = {'uuid': 'uuid'}

        mock_shutdown.return_value = False
        mock_clean.return_value = False
        mock_hard.return_value = False
        self.assertRaises(exception.ResizeError,
                          vmops._resize_ensure_vm_is_shutdown,
                          fake_instance, "ref")
        mock_shutdown.assert_called_once_with(vmops._session, "ref")
        mock_clean.assert_called_once_with(vmops._session, fake_instance,
                                           "ref")
        mock_hard.assert_called_once_with(vmops._session, fake_instance,
                                          "ref")

    @mock.patch.object(vm_utils, 'is_vm_shutdown')
    @mock.patch.object(vm_utils, 'clean_shutdown_vm')
    @mock.patch.object(vm_utils, 'hard_shutdown_vm')
    def test_resize_ensure_vm_is_shutdown_already_shutdown(self, mock_hard,
                                                           mock_clean,
                                                           mock_shutdown):
        conn = xenapi_conn.XenAPIDriver(fake.FakeVirtAPI(), False)
        vmops = conn._vmops
        fake_instance = {'uuid': 'uuid'}

        mock_shutdown.return_value = True

        vmops._resize_ensure_vm_is_shutdown(fake_instance, "ref")
        mock_shutdown.assert_called_once_with(vmops._session, "ref")
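
# NOTE(editor): illustrative sketch, not part of the original suite.
# Taken together, the four shutdown tests above pin down an escalation
# ladder in VMOps._resize_ensure_vm_is_shutdown, roughly:
#
#     if vm_utils.is_vm_shutdown(session, vm_ref):
#         return                                # already halted
#     if vm_utils.clean_shutdown_vm(session, instance, vm_ref):
#         return                                # guest obliged
#     if not vm_utils.hard_shutdown_vm(session, instance, vm_ref):
#         raise exception.ResizeError(reason=...)
#
# i.e. ask the guest to shut down first, force a power-off as the
# fallback, and raise ResizeError only when even that fails.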
""" def setUp(self): super(XenAPIHostTestCase, self).setUp() self.flags(connection_url='http://localhost', connection_password='test_pass', group='xenserver') stubs.stubout_session(self.stubs, stubs.FakeSessionForVMTests) self.context = context.get_admin_context() self.conn = xenapi_conn.XenAPIDriver(fake.FakeVirtAPI(), False) self.instance = fake_instance.fake_db_instance(name='foo') self.useFixture(fixtures.SingleCellSimple()) def test_host_state(self): stats = self.conn.host_state.get_host_stats(False) # Values from fake.create_local_srs (ext SR) self.assertEqual(stats['disk_total'], 40000) self.assertEqual(stats['disk_used'], 0) # Values from fake._plugin_xenhost_host_data self.assertEqual(stats['host_memory_total'], 10) self.assertEqual(stats['host_memory_overhead'], 20) self.assertEqual(stats['host_memory_free'], 30) self.assertEqual(stats['host_memory_free_computed'], 40) self.assertEqual(stats['hypervisor_hostname'], 'fake-xenhost') self.assertEqual(stats['host_cpu_info']['cpu_count'], 4) self.assertThat({ 'vendor': 'GenuineIntel', 'model': 'Intel(R) Xeon(R) CPU X3430 @ 2.40GHz', 'topology': { 'sockets': 1, 'cores': 4, 'threads': 1, }, 'features': [ 'fpu', 'de', 'tsc', 'msr', 'pae', 'mce', 'cx8', 'apic', 'sep', 'mtrr', 'mca', 'cmov', 'pat', 'clflush', 'acpi', 'mmx', 'fxsr', 'sse', 'sse2', 'ss', 'ht', 'nx', 'constant_tsc', 'nonstop_tsc', 'aperfmperf', 'pni', 'vmx', 'est', 'ssse3', 'sse4_1', 'sse4_2', 'popcnt', 'hypervisor', 'ida', 'tpr_shadow', 'vnmi', 'flexpriority', 'ept', 'vpid', ]}, matchers.DictMatches(stats['cpu_model'])) # No VMs running self.assertEqual(stats['vcpus_used'], 0) def test_host_state_vcpus_used(self): stats = self.conn.host_state.get_host_stats(True) self.assertEqual(stats['vcpus_used'], 0) xenapi_fake.create_vm(self.instance['name'], 'Running') stats = self.conn.host_state.get_host_stats(True) self.assertEqual(stats['vcpus_used'], 4) def test_pci_passthrough_devices(self): stats = self.conn.host_state.get_host_stats(False) self.assertEqual(len(stats['pci_passthrough_devices']), 2) def test_host_state_missing_sr(self): # Must trigger construction of 'host_state' property # before introducing the stub which raises the error hs = self.conn.host_state def fake_safe_find_sr(session): raise exception.StorageRepositoryNotFound('not there') self.stubs.Set(vm_utils, 'safe_find_sr', fake_safe_find_sr) self.assertRaises(exception.StorageRepositoryNotFound, hs.get_host_stats, refresh=True) def _test_host_action(self, method, action, expected=None): result = method('host', action) if not expected: expected = action self.assertEqual(result, expected) def _test_host_action_no_param(self, method, action, expected=None): result = method(action) if not expected: expected = action self.assertEqual(result, expected) def test_host_reboot(self): self._test_host_action_no_param(self.conn.host_power_action, 'reboot') def test_host_shutdown(self): self._test_host_action_no_param(self.conn.host_power_action, 'shutdown') def test_host_startup(self): self.assertRaises(NotImplementedError, self.conn.host_power_action, 'startup') def test_host_maintenance_on(self): self._test_host_action(self.conn.host_maintenance_mode, True, 'on_maintenance') def test_host_maintenance_off(self): self._test_host_action(self.conn.host_maintenance_mode, False, 'off_maintenance') def test_set_enable_host_enable(self): _create_service_entries(self.context, values={'nova': ['fake-mini']}) self._test_host_action_no_param(self.conn.set_host_enabled, True, 'enabled') service = 

# FIXME(sirp): convert this to use XenAPITestBaseNoDB
class XenAPIHostTestCase(stubs.XenAPITestBase):
    """Tests HostState, which holds metrics from XenServer that get
    reported back to the Schedulers.
    """

    def setUp(self):
        super(XenAPIHostTestCase, self).setUp()
        self.flags(connection_url='http://localhost',
                   connection_password='test_pass', group='xenserver')
        stubs.stubout_session(self.stubs, stubs.FakeSessionForVMTests)
        self.context = context.get_admin_context()
        self.conn = xenapi_conn.XenAPIDriver(fake.FakeVirtAPI(), False)
        self.instance = fake_instance.fake_db_instance(name='foo')
        self.useFixture(fixtures.SingleCellSimple())

    def test_host_state(self):
        stats = self.conn.host_state.get_host_stats(False)
        # Values from fake.create_local_srs (ext SR)
        self.assertEqual(stats['disk_total'], 40000)
        self.assertEqual(stats['disk_used'], 0)
        # Values from fake._plugin_xenhost_host_data
        self.assertEqual(stats['host_memory_total'], 10)
        self.assertEqual(stats['host_memory_overhead'], 20)
        self.assertEqual(stats['host_memory_free'], 30)
        self.assertEqual(stats['host_memory_free_computed'], 40)
        self.assertEqual(stats['hypervisor_hostname'], 'fake-xenhost')
        self.assertEqual(stats['host_cpu_info']['cpu_count'], 4)
        self.assertThat({
            'vendor': 'GenuineIntel',
            'model': 'Intel(R) Xeon(R) CPU X3430 @ 2.40GHz',
            'topology': {
                'sockets': 1,
                'cores': 4,
                'threads': 1,
            },
            'features': [
                'fpu', 'de', 'tsc', 'msr', 'pae', 'mce',
                'cx8', 'apic', 'sep', 'mtrr', 'mca', 'cmov',
                'pat', 'clflush', 'acpi', 'mmx', 'fxsr',
                'sse', 'sse2', 'ss', 'ht', 'nx', 'constant_tsc',
                'nonstop_tsc', 'aperfmperf', 'pni', 'vmx', 'est',
                'ssse3', 'sse4_1', 'sse4_2', 'popcnt', 'hypervisor',
                'ida', 'tpr_shadow', 'vnmi', 'flexpriority', 'ept',
                'vpid',
            ]},
            matchers.DictMatches(stats['cpu_model']))
        # No VMs running
        self.assertEqual(stats['vcpus_used'], 0)

    def test_host_state_vcpus_used(self):
        stats = self.conn.host_state.get_host_stats(True)
        self.assertEqual(stats['vcpus_used'], 0)
        xenapi_fake.create_vm(self.instance['name'], 'Running')
        stats = self.conn.host_state.get_host_stats(True)
        self.assertEqual(stats['vcpus_used'], 4)

    def test_pci_passthrough_devices(self):
        stats = self.conn.host_state.get_host_stats(False)
        self.assertEqual(len(stats['pci_passthrough_devices']), 2)

    def test_host_state_missing_sr(self):
        # Must trigger construction of 'host_state' property
        # before introducing the stub which raises the error
        hs = self.conn.host_state

        def fake_safe_find_sr(session):
            raise exception.StorageRepositoryNotFound('not there')

        self.stubs.Set(vm_utils, 'safe_find_sr', fake_safe_find_sr)
        self.assertRaises(exception.StorageRepositoryNotFound,
                          hs.get_host_stats,
                          refresh=True)

    def _test_host_action(self, method, action, expected=None):
        result = method('host', action)
        if not expected:
            expected = action
        self.assertEqual(result, expected)

    def _test_host_action_no_param(self, method, action, expected=None):
        result = method(action)
        if not expected:
            expected = action
        self.assertEqual(result, expected)

    def test_host_reboot(self):
        self._test_host_action_no_param(self.conn.host_power_action, 'reboot')

    def test_host_shutdown(self):
        self._test_host_action_no_param(self.conn.host_power_action,
                                        'shutdown')

    def test_host_startup(self):
        self.assertRaises(NotImplementedError,
                          self.conn.host_power_action, 'startup')

    def test_host_maintenance_on(self):
        self._test_host_action(self.conn.host_maintenance_mode,
                               True, 'on_maintenance')

    def test_host_maintenance_off(self):
        self._test_host_action(self.conn.host_maintenance_mode,
                               False, 'off_maintenance')

    def test_set_enable_host_enable(self):
        _create_service_entries(self.context, values={'nova': ['fake-mini']})
        self._test_host_action_no_param(self.conn.set_host_enabled,
                                        True, 'enabled')
        service = db.service_get_by_host_and_binary(self.context, 'fake-mini',
                                                    'nova-compute')
        self.assertFalse(service.disabled)

    def test_set_enable_host_disable(self):
        _create_service_entries(self.context, values={'nova': ['fake-mini']})
        self._test_host_action_no_param(self.conn.set_host_enabled,
                                        False, 'disabled')
        service = db.service_get_by_host_and_binary(self.context, 'fake-mini',
                                                    'nova-compute')
        self.assertTrue(service.disabled)

    def test_get_host_uptime(self):
        result = self.conn.get_host_uptime()
        self.assertEqual(result, 'fake uptime')

    def test_supported_instances_is_included_in_host_state(self):
        stats = self.conn.host_state.get_host_stats(False)
        self.assertIn('supported_instances', stats)

    def test_supported_instances_is_calculated_by_to_supported_instances(
            self):

        def to_supported_instances(somedata):
            return "SOMERETURNVALUE"

        self.stubs.Set(host, 'to_supported_instances', to_supported_instances)

        stats = self.conn.host_state.get_host_stats(False)
        self.assertEqual("SOMERETURNVALUE", stats['supported_instances'])

    @mock.patch.object(host.HostState, 'get_disk_used')
    @mock.patch.object(host.HostState, '_get_passthrough_devices')
    @mock.patch.object(host.HostState, '_get_vgpu_stats')
    @mock.patch.object(jsonutils, 'loads')
    @mock.patch.object(vm_utils, 'list_vms')
    @mock.patch.object(vm_utils, 'scan_default_sr')
    @mock.patch.object(host_management, 'get_host_data')
    def test_update_stats_caches_hostname(self, mock_host_data, mock_scan_sr,
                                          mock_list_vms, mock_loads,
                                          mock_vgpus_stats, mock_devices,
                                          mock_disk_used):
        data = {'disk_total': 0,
                'disk_used': 0,
                'disk_available': 0,
                'supported_instances': 0,
                'host_capabilities': [],
                'host_hostname': 'foo',
                'vcpus_used': 0,
                }
        sr_rec = {
            'physical_size': 0,
            'physical_utilisation': 0,
            'virtual_allocation': 0,
            }
        mock_loads.return_value = data
        mock_host_data.return_value = data
        mock_scan_sr.return_value = 'ref'
        mock_list_vms.return_value = []
        mock_devices.return_value = "dev1"
        mock_disk_used.return_value = (0, 0)
        self.conn._session = mock.Mock()
        with mock.patch.object(self.conn._session.SR, 'get_record') \
                as mock_record:
            mock_record.return_value = sr_rec
            stats = self.conn.host_state.get_host_stats(refresh=True)
            self.assertEqual('foo', stats['hypervisor_hostname'])
            self.assertEqual(2, mock_loads.call_count)
            self.assertEqual(2, mock_host_data.call_count)
            self.assertEqual(2, mock_scan_sr.call_count)
            self.assertEqual(2, mock_devices.call_count)
            self.assertEqual(2, mock_vgpus_stats.call_count)
            mock_loads.assert_called_with(data)
            mock_host_data.assert_called_with(self.conn._session)
            mock_scan_sr.assert_called_with(self.conn._session)
            mock_devices.assert_called_with()
            mock_vgpus_stats.assert_called_with()


@mock.patch.object(host.HostState, 'update_status')
class XenAPIHostStateTestCase(stubs.XenAPITestBaseNoDB):

    def _test_get_disk_used(self, vdis, attached_vbds):
        session = mock.MagicMock()
        host_state = host.HostState(session)

        sr_ref = 'sr_ref'
        session.SR.get_VDIs.return_value = vdis.keys()
        session.VDI.get_virtual_size.side_effect = \
            lambda vdi_ref: vdis[vdi_ref]['virtual_size']
        session.VDI.get_physical_utilisation.side_effect = \
            lambda vdi_ref: vdis[vdi_ref]['physical_utilisation']
        session.VDI.get_VBDs.side_effect = \
            lambda vdi_ref: vdis[vdi_ref]['VBDs']
        session.VBD.get_currently_attached.side_effect = \
            lambda vbd_ref: vbd_ref in attached_vbds

        disk_used = host_state.get_disk_used(sr_ref)
        session.SR.get_VDIs.assert_called_once_with(sr_ref)
        return disk_used

    def test_get_disk_used_virtual(self, mock_update_status):
        # Both VDIs are attached
        attached_vbds = ['vbd_1', 'vbd_2']
        vdis = {
            'vdi_1': {'physical_utilisation': 1,
                      'virtual_size': 100,
                      'VBDs': ['vbd_1']},
            'vdi_2': {'physical_utilisation': 1,
                      'virtual_size': 100,
                      'VBDs': ['vbd_2']}
        }
        disk_used = self._test_get_disk_used(vdis, attached_vbds)
        self.assertEqual((200, 2), disk_used)

    def test_get_disk_used_physical(self, mock_update_status):
        # Neither VDI is attached
        attached_vbds = []
        vdis = {
            'vdi_1': {'physical_utilisation': 1,
                      'virtual_size': 100,
                      'VBDs': ['vbd_1']},
            'vdi_2': {'physical_utilisation': 1,
                      'virtual_size': 100,
                      'VBDs': ['vbd_2']}
        }
        disk_used = self._test_get_disk_used(vdis, attached_vbds)
        self.assertEqual((2, 2), disk_used)

    def test_get_disk_used_both(self, mock_update_status):
        # One VDI is attached
        attached_vbds = ['vbd_1']
        vdis = {
            'vdi_1': {'physical_utilisation': 1,
                      'virtual_size': 100,
                      'VBDs': ['vbd_1']},
            'vdi_2': {'physical_utilisation': 1,
                      'virtual_size': 100,
                      'VBDs': ['vbd_2']}
        }
        disk_used = self._test_get_disk_used(vdis, attached_vbds)
        self.assertEqual((101, 2), disk_used)
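
# NOTE(editor): illustrative sketch, not part of the original suite. The
# three tests above fix the accounting rule in HostState.get_disk_used:
# a VDI with an attached VBD is charged at its virtual_size (the space
# it may still grow into), a detached one at its physical_utilisation.
# Roughly:
#
#     used = allocated = 0
#     for vdi_ref in session.SR.get_VDIs(sr_ref):
#         physical = session.VDI.get_physical_utilisation(vdi_ref)
#         allocated += physical
#         if any(session.VBD.get_currently_attached(vbd)
#                for vbd in session.VDI.get_VBDs(vdi_ref)):
#             used += session.VDI.get_virtual_size(vdi_ref)
#         else:
#             used += physical
#     return (used, allocated)
#
# With two VDIs of virtual_size 100 and physical_utilisation 1 each,
# that yields (200, 2), (2, 2) and (101, 2) for the fixtures above.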

class ToSupportedInstancesTestCase(test.NoDBTestCase):

    def test_default_return_value(self):
        self.assertEqual([],
            host.to_supported_instances(None))

    def test_return_value(self):
        self.assertEqual(
            [(obj_fields.Architecture.X86_64, obj_fields.HVType.XEN, 'xen')],
            host.to_supported_instances([u'xen-3.0-x86_64']))

    def test_invalid_values_do_not_break(self):
        self.assertEqual(
            [(obj_fields.Architecture.X86_64, obj_fields.HVType.XEN, 'xen')],
            host.to_supported_instances([u'xen-3.0-x86_64', 'spam']))

    def test_multiple_values(self):
        self.assertEqual(
            [
                (obj_fields.Architecture.X86_64,
                 obj_fields.HVType.XEN, 'xen'),
                (obj_fields.Architecture.I686,
                 obj_fields.HVType.XEN, 'hvm')
            ],
            host.to_supported_instances([u'xen-3.0-x86_64',
                                         'hvm-3.0-x86_32'])
        )
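
# NOTE(editor): illustrative sketch, not part of the original suite.
# host.to_supported_instances parses XenServer capability strings such
# as 'xen-3.0-x86_64' into (architecture, hv_type, vm_mode) triples and
# silently skips anything unparsable ('spam' above). A condensed,
# hypothetical equivalent:
#
#     def to_supported_instances(host_capabilities):
#         result = []
#         for capability in host_capabilities or []:
#             try:
#                 ostype, _version, arch = str(capability).split('-')
#             except ValueError:
#                 continue    # malformed entries are ignored, not fatal
#             result.append((obj_fields.Architecture.canonicalize(arch),
#                            obj_fields.HVType.XEN,
#                            ostype))
#         return result
#
# Canonicalisation is what turns 'x86_32' into I686 in the last test.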

# FIXME(sirp): convert this to use XenAPITestBaseNoDB
class XenAPIAutoDiskConfigTestCase(stubs.XenAPITestBase):
    def setUp(self):
        super(XenAPIAutoDiskConfigTestCase, self).setUp()
        self.flags(connection_url='http://localhost',
                   connection_password='test_pass', group='xenserver')
        self.flags(firewall_driver='nova.virt.xenapi.firewall.'
                                   'Dom0IptablesFirewallDriver')
        stubs.stubout_session(self.stubs, stubs.FakeSessionForVMTests)
        self.conn = xenapi_conn.XenAPIDriver(fake.FakeVirtAPI(), False)

        self.user_id = 'fake'
        self.project_id = 'fake'

        self.instance_values = {
            'project_id': self.project_id,
            'user_id': self.user_id,
            'image_ref': IMAGE_MACHINE,
            'kernel_id': IMAGE_KERNEL,
            'ramdisk_id': IMAGE_RAMDISK,
            'root_gb': 80,
            'ephemeral_gb': 0,
            'instance_type_id': '3',  # m1.large
            'os_type': 'linux',
            'architecture': 'x86-64'}

        self.context = context.RequestContext(self.user_id, self.project_id)

        def fake_create_vbd(session, vm_ref, vdi_ref, userdevice,
                            vbd_type='disk', read_only=False, bootable=True,
                            osvol=False):
            pass

        self.stubs.Set(vm_utils, 'create_vbd', fake_create_vbd)

    def assertIsPartitionCalled(self, called):
        marker = {"partition_called": False}

        def fake_resize_part_and_fs(dev, start, old_sectors, new_sectors,
                                    flags):
            marker["partition_called"] = True
        self.stubs.Set(vm_utils, "_resize_part_and_fs",
                       fake_resize_part_and_fs)

        context.RequestContext(self.user_id, self.project_id)
        session = get_session()

        disk_image_type = vm_utils.ImageType.DISK_VHD
        instance = create_instance_with_system_metadata(self.context,
                                                        self.instance_values)
        vm_ref = xenapi_fake.create_vm(instance['name'], 'Halted')
        vdi_ref = xenapi_fake.create_vdi(instance['name'], 'fake')

        vdi_uuid = session.call_xenapi('VDI.get_record', vdi_ref)['uuid']
        vdis = {'root': {'uuid': vdi_uuid, 'ref': vdi_ref}}
        image_meta = objects.ImageMeta.from_dict(
            {'id': uuids.image_id,
             'disk_format': 'vhd',
             'properties': {'vm_mode': 'xen'}})

        self.mox.ReplayAll()
        self.conn._vmops._attach_disks(self.context, instance, image_meta,
                vm_ref, instance['name'], vdis, disk_image_type,
                "fake_nw_inf")

        self.assertEqual(marker["partition_called"], called)

    def test_instance_not_auto_disk_config(self):
        """Should not partition unless instance is marked as
        auto_disk_config.
        """
        self.instance_values['auto_disk_config'] = False
        self.assertIsPartitionCalled(False)

    @stub_vm_utils_with_vdi_attached
    def test_instance_auto_disk_config_fails_safe_two_partitions(self):
        # Should not partition unless fail safes pass.
        self.instance_values['auto_disk_config'] = True

        def fake_get_partitions(dev):
            return [(1, 0, 100, 'ext4', "", ""),
                    (2, 100, 200, 'ext4', "", "")]
        self.stubs.Set(vm_utils, "_get_partitions", fake_get_partitions)

        self.assertIsPartitionCalled(False)

    @stub_vm_utils_with_vdi_attached
    def test_instance_auto_disk_config_fails_safe_badly_numbered(self):
        # Should not partition unless fail safes pass.
        self.instance_values['auto_disk_config'] = True

        def fake_get_partitions(dev):
            return [(2, 100, 200, 'ext4', "", "")]
        self.stubs.Set(vm_utils, "_get_partitions", fake_get_partitions)

        self.assertIsPartitionCalled(False)

    @stub_vm_utils_with_vdi_attached
    def test_instance_auto_disk_config_fails_safe_bad_fstype(self):
        # Should not partition unless fail safes pass.
        self.instance_values['auto_disk_config'] = True

        def fake_get_partitions(dev):
            return [(1, 100, 200, 'asdf', "", "")]
        self.stubs.Set(vm_utils, "_get_partitions", fake_get_partitions)

        self.assertIsPartitionCalled(False)
""" self.instance_values['auto_disk_config'] = True def fake_get_partitions(dev): return [(1, 0, 100, 'ext4', "", "boot")] self.stubs.Set(vm_utils, "_get_partitions", fake_get_partitions) self.assertIsPartitionCalled(True) # FIXME(sirp): convert this to use XenAPITestBaseNoDB class XenAPIGenerateLocal(stubs.XenAPITestBase): """Test generating of local disks, like swap and ephemeral.""" def setUp(self): super(XenAPIGenerateLocal, self).setUp() self.flags(connection_url='http://localhost', connection_password='test_pass', group='xenserver') self.flags(firewall_driver='nova.virt.xenapi.firewall.' 'Dom0IptablesFirewallDriver') stubs.stubout_session(self.stubs, stubs.FakeSessionForVMTests) db_fakes.stub_out_db_instance_api(self) self.conn = xenapi_conn.XenAPIDriver(fake.FakeVirtAPI(), False) self.user_id = 'fake' self.project_id = 'fake' self.instance_values = { 'project_id': self.project_id, 'user_id': self.user_id, 'image_ref': IMAGE_MACHINE, 'kernel_id': IMAGE_KERNEL, 'ramdisk_id': IMAGE_RAMDISK, 'root_gb': 80, 'ephemeral_gb': 0, 'instance_type_id': '3', # m1.large 'os_type': 'linux', 'architecture': 'x86-64'} self.context = context.RequestContext(self.user_id, self.project_id) def fake_create_vbd(session, vm_ref, vdi_ref, userdevice, vbd_type='disk', read_only=False, bootable=True, osvol=False, empty=False, unpluggable=True): return session.call_xenapi('VBD.create', {'VM': vm_ref, 'VDI': vdi_ref}) self.stubs.Set(vm_utils, 'create_vbd', fake_create_vbd) def assertCalled(self, instance, disk_image_type=vm_utils.ImageType.DISK_VHD): context.RequestContext(self.user_id, self.project_id) session = get_session() vm_ref = xenapi_fake.create_vm(instance['name'], 'Halted') vdi_ref = xenapi_fake.create_vdi(instance['name'], 'fake') vdi_uuid = session.call_xenapi('VDI.get_record', vdi_ref)['uuid'] vdi_key = 'root' if disk_image_type == vm_utils.ImageType.DISK_ISO: vdi_key = 'iso' vdis = {vdi_key: {'uuid': vdi_uuid, 'ref': vdi_ref}} self.called = False image_meta = objects.ImageMeta.from_dict( {'id': uuids.image_id, 'disk_format': 'vhd', 'properties': {'vm_mode': 'xen'}}) self.conn._vmops._attach_disks(self.context, instance, image_meta, vm_ref, instance['name'], vdis, disk_image_type, "fake_nw_inf") self.assertTrue(self.called) def test_generate_swap(self): # Test swap disk generation. instance_values = dict(self.instance_values, instance_type_id=5) instance = create_instance_with_system_metadata(self.context, instance_values) def fake_generate_swap(*args, **kwargs): self.called = True self.stubs.Set(vm_utils, 'generate_swap', fake_generate_swap) self.assertCalled(instance) def test_generate_ephemeral(self): # Test ephemeral disk generation. 
    def test_generate_ephemeral(self):
        # Test ephemeral disk generation.
        instance_values = dict(self.instance_values, instance_type_id=4)
        instance = create_instance_with_system_metadata(self.context,
                                                        instance_values)

        def fake_generate_ephemeral(*args):
            self.called = True
        self.stubs.Set(vm_utils, 'generate_ephemeral',
                       fake_generate_ephemeral)

        self.assertCalled(instance)

    def test_generate_iso_blank_root_disk(self):
        instance_values = dict(self.instance_values, instance_type_id=4)
        instance_values.pop('kernel_id')
        instance_values.pop('ramdisk_id')
        instance = create_instance_with_system_metadata(self.context,
                                                        instance_values)

        def fake_generate_ephemeral(*args):
            pass
        self.stubs.Set(vm_utils, 'generate_ephemeral',
                       fake_generate_ephemeral)

        def fake_generate_iso(*args):
            self.called = True
        self.stubs.Set(vm_utils, 'generate_iso_blank_root_disk',
                       fake_generate_iso)

        self.assertCalled(instance, vm_utils.ImageType.DISK_ISO)


class XenAPIBWCountersTestCase(stubs.XenAPITestBaseNoDB):
    FAKE_VMS = {'test1:ref': dict(name_label='test1',
                                  other_config=dict(nova_uuid='hash'),
                                  domid='12',
                                  _vifmap={'0': "a:b:c:d...",
                                           '1': "e:f:12:q..."}),
                'test2:ref': dict(name_label='test2',
                                  other_config=dict(nova_uuid='hash'),
                                  domid='42',
                                  _vifmap={'0': "a:3:c:d...",
                                           '1': "e:f:42:q..."}),
                }

    def setUp(self):
        super(XenAPIBWCountersTestCase, self).setUp()
        self.stubs.Set(vm_utils, 'list_vms',
                       XenAPIBWCountersTestCase._fake_list_vms)
        self.flags(connection_url='http://localhost',
                   connection_password='test_pass', group='xenserver')
        self.flags(firewall_driver='nova.virt.xenapi.firewall.'
                                   'Dom0IptablesFirewallDriver')
        stubs.stubout_session(self.stubs, stubs.FakeSessionForVMTests)
        self.conn = xenapi_conn.XenAPIDriver(fake.FakeVirtAPI(), False)

        def _fake_get_vif_device_map(vm_rec):
            return vm_rec['_vifmap']

        self.stubs.Set(self.conn._vmops, "_get_vif_device_map",
                       _fake_get_vif_device_map)

    @classmethod
    def _fake_list_vms(cls, session):
        return cls.FAKE_VMS.items()

    @staticmethod
    def _fake_fetch_bandwidth_mt(session):
        return {}

    @staticmethod
    def _fake_fetch_bandwidth(session):
        return {'42':
                    {'0': {'bw_in': 21024, 'bw_out': 22048},
                     '1': {'bw_in': 231337, 'bw_out': 221212121}},
                '12':
                    {'0': {'bw_in': 1024, 'bw_out': 2048},
                     '1': {'bw_in': 31337, 'bw_out': 21212121}},
                }

    def test_get_all_bw_counters(self):
        instances = [dict(name='test1', uuid='1-2-3'),
                     dict(name='test2', uuid='4-5-6')]
        self.stubs.Set(vm_utils, 'fetch_bandwidth',
                       self._fake_fetch_bandwidth)
        result = self.conn.get_all_bw_counters(instances)
        self.assertEqual(len(result), 4)
        self.assertIn(dict(uuid='1-2-3',
                           mac_address="a:b:c:d...",
                           bw_in=1024,
                           bw_out=2048), result)
        self.assertIn(dict(uuid='1-2-3',
                           mac_address="e:f:12:q...",
                           bw_in=31337,
                           bw_out=21212121), result)
        self.assertIn(dict(uuid='4-5-6',
                           mac_address="a:3:c:d...",
                           bw_in=21024,
                           bw_out=22048), result)
        self.assertIn(dict(uuid='4-5-6',
                           mac_address="e:f:42:q...",
                           bw_in=231337,
                           bw_out=221212121), result)

    def test_get_all_bw_counters_in_failure_case(self):
        """Test that get_all_bw_counters returns an empty list when
        no data is returned from XenServer.  c.f. bug #910045.
        """
        instances = [dict(name='instance-0001', uuid='1-2-3-4-5')]
        self.stubs.Set(vm_utils, 'fetch_bandwidth',
                       self._fake_fetch_bandwidth_mt)
        result = self.conn.get_all_bw_counters(instances)
        self.assertEqual(result, [])
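
# NOTE(editor): illustrative sketch, not part of the original suite. The
# bandwidth tests above flatten a nested dict keyed by domid and VIF
# device number into one counter row per (instance uuid, MAC address),
# along these lines:
#
#     counters = []
#     for vm_rec in vms:                        # fake VMs carry a domid
#         vif_map = get_vif_device_map(vm_rec)  # device number -> MAC
#         for device, mac in vif_map.items():
#             sample = bw.get(vm_rec['domid'], {}).get(device)
#             if sample:
#                 counters.append(dict(uuid=..., mac_address=mac,
#                                      bw_in=sample['bw_in'],
#                                      bw_out=sample['bw_out']))
#
# An empty dict from fetch_bandwidth therefore yields an empty list,
# which is exactly the bug #910045 regression pinned down above.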

# TODO(salvatore-orlando): this class and
# nova.tests.unit.virt.test_libvirt.IPTablesFirewallDriverTestCase
# share a lot of code.  Consider abstracting common code in a base
# class for firewall driver testing.
#
# FIXME(sirp): convert this to use XenAPITestBaseNoDB
class XenAPIDom0IptablesFirewallTestCase(stubs.XenAPITestBase):

    REQUIRES_LOCKING = True

    _in_rules = [
      '# Generated by iptables-save v1.4.10 on Sat Feb 19 00:03:19 2011',
      '*nat',
      ':PREROUTING ACCEPT [1170:189210]',
      ':INPUT ACCEPT [844:71028]',
      ':OUTPUT ACCEPT [5149:405186]',
      ':POSTROUTING ACCEPT [5063:386098]',
      '# Completed on Mon Dec 6 11:54:13 2010',
      '# Generated by iptables-save v1.4.4 on Mon Dec 6 11:54:13 2010',
      '*mangle',
      ':INPUT ACCEPT [969615:281627771]',
      ':FORWARD ACCEPT [0:0]',
      ':OUTPUT ACCEPT [915599:63811649]',
      ':nova-block-ipv4 - [0:0]',
      '[0:0] -A INPUT -i virbr0 -p tcp -m tcp --dport 67 -j ACCEPT ',
      '[0:0] -A FORWARD -d 192.168.122.0/24 -o virbr0 -m state --state RELATED'
      ',ESTABLISHED -j ACCEPT ',
      '[0:0] -A FORWARD -s 192.168.122.0/24 -i virbr0 -j ACCEPT ',
      '[0:0] -A FORWARD -i virbr0 -o virbr0 -j ACCEPT ',
      '[0:0] -A FORWARD -o virbr0 -j REJECT '
      '--reject-with icmp-port-unreachable ',
      '[0:0] -A FORWARD -i virbr0 -j REJECT '
      '--reject-with icmp-port-unreachable ',
      'COMMIT',
      '# Completed on Mon Dec 6 11:54:13 2010',
      '# Generated by iptables-save v1.4.4 on Mon Dec 6 11:54:13 2010',
      '*filter',
      ':INPUT ACCEPT [969615:281627771]',
      ':FORWARD ACCEPT [0:0]',
      ':OUTPUT ACCEPT [915599:63811649]',
      ':nova-block-ipv4 - [0:0]',
      '[0:0] -A INPUT -i virbr0 -p tcp -m tcp --dport 67 -j ACCEPT ',
      '[0:0] -A FORWARD -d 192.168.122.0/24 -o virbr0 -m state --state RELATED'
      ',ESTABLISHED -j ACCEPT ',
      '[0:0] -A FORWARD -s 192.168.122.0/24 -i virbr0 -j ACCEPT ',
      '[0:0] -A FORWARD -i virbr0 -o virbr0 -j ACCEPT ',
      '[0:0] -A FORWARD -o virbr0 -j REJECT '
      '--reject-with icmp-port-unreachable ',
      '[0:0] -A FORWARD -i virbr0 -j REJECT '
      '--reject-with icmp-port-unreachable ',
      'COMMIT',
      '# Completed on Mon Dec 6 11:54:13 2010',
    ]

    _in6_filter_rules = [
      '# Generated by ip6tables-save v1.4.4 on Tue Jan 18 23:47:56 2011',
      '*filter',
      ':INPUT ACCEPT [349155:75810423]',
      ':FORWARD ACCEPT [0:0]',
      ':OUTPUT ACCEPT [349256:75777230]',
      'COMMIT',
      '# Completed on Tue Jan 18 23:47:56 2011',
    ]

    def setUp(self):
        super(XenAPIDom0IptablesFirewallTestCase, self).setUp()
        self.flags(connection_url='http://localhost',
                   connection_password='test_pass', group='xenserver')
        self.flags(instance_name_template='%d',
                   firewall_driver='nova.virt.xenapi.firewall.'
                                   'Dom0IptablesFirewallDriver')
        self.user_id = 'mappin'
        self.project_id = 'fake'
        stubs.stubout_session(self.stubs, stubs.FakeSessionForFirewallTests,
                              test_case=self)
        self.context = context.RequestContext(self.user_id, self.project_id)
        self.network = importutils.import_object(CONF.network_manager)
        self.conn = xenapi_conn.XenAPIDriver(fake.FakeVirtAPI(), False)
        self.fw = self.conn._vmops.firewall_driver

    def _create_instance_ref(self):
        return db.instance_create(self.context,
                                  {'user_id': self.user_id,
                                   'project_id': self.project_id,
                                   'instance_type_id': 1})

    def _create_test_security_group(self):
        admin_ctxt = context.get_admin_context()
        secgroup = db.security_group_create(admin_ctxt,
                                            {'user_id': self.user_id,
                                             'project_id': self.project_id,
                                             'name': 'testgroup',
                                             'description': 'test group'})
        db.security_group_rule_create(admin_ctxt,
                                      {'parent_group_id': secgroup['id'],
                                       'protocol': 'icmp',
                                       'from_port': -1,
                                       'to_port': -1,
                                       'cidr': '192.168.11.0/24'})
        db.security_group_rule_create(admin_ctxt,
                                      {'parent_group_id': secgroup['id'],
                                       'protocol': 'icmp',
                                       'from_port': 8,
                                       'to_port': -1,
                                       'cidr': '192.168.11.0/24'})
        db.security_group_rule_create(admin_ctxt,
                                      {'parent_group_id': secgroup['id'],
                                       'protocol': 'tcp',
                                       'from_port': 80,
                                       'to_port': 81,
                                       'cidr': '192.168.10.0/24'})
        return secgroup

    def _validate_security_group(self):
        in_rules = [l for l in self._in_rules if not l.startswith('#')]
        for rule in in_rules:
            if 'nova' not in rule:
                self.assertIn(rule, self._out_rules,
                              'Rule went missing: %s' % rule)

        instance_chain = None
        for rule in self._out_rules:
            # This is pretty crude, but it'll do for now
            # last two octets change
            if re.search('-d 192.168.[0-9]{1,3}.[0-9]{1,3} -j', rule):
                instance_chain = rule.split(' ')[-1]
                break
        self.assertTrue(instance_chain, "The instance chain wasn't added")
        security_group_chain = None
        for rule in self._out_rules:
            # This is pretty crude, but it'll do for now
            if '-A %s -j' % instance_chain in rule:
                security_group_chain = rule.split(' ')[-1]
                break
        self.assertTrue(security_group_chain,
                        "The security group chain wasn't added")

        regex = re.compile('\[0\:0\] -A .* -j ACCEPT -p icmp'
                           ' -s 192.168.11.0/24')
        match_rules = [rule for rule in self._out_rules if regex.match(rule)]
        self.assertGreater(len(match_rules), 0,
                           "ICMP acceptance rule wasn't added")

        regex = re.compile('\[0\:0\] -A .* -j ACCEPT -p icmp -m icmp'
                           ' --icmp-type 8 -s 192.168.11.0/24')
        match_rules = [rule for rule in self._out_rules if regex.match(rule)]
        self.assertGreater(len(match_rules), 0,
                           "ICMP Echo Request acceptance rule wasn't added")

        regex = re.compile('\[0\:0\] -A .* -j ACCEPT -p tcp --dport 80:81'
                           ' -s 192.168.10.0/24')
        match_rules = [rule for rule in self._out_rules if regex.match(rule)]
        self.assertGreater(len(match_rules), 0,
                           "TCP port 80/81 acceptance rule wasn't added")

    def test_static_filters(self):
        instance_ref = self._create_instance_ref()
        src_instance_ref = self._create_instance_ref()
        admin_ctxt = context.get_admin_context()
        secgroup = self._create_test_security_group()

        src_secgroup = db.security_group_create(
            admin_ctxt,
            {'user_id': self.user_id,
             'project_id': self.project_id,
             'name': 'testsourcegroup',
             'description': 'src group'})
        db.security_group_rule_create(admin_ctxt,
                                      {'parent_group_id': secgroup['id'],
                                       'protocol': 'tcp',
                                       'from_port': 80,
                                       'to_port': 81,
                                       'group_id': src_secgroup['id']})

        db.instance_add_security_group(admin_ctxt, instance_ref['uuid'],
                                       secgroup['id'])
        db.instance_add_security_group(admin_ctxt, src_instance_ref['uuid'],
                                       src_secgroup['id'])
        instance_ref = db.instance_get(admin_ctxt, instance_ref['id'])
        src_instance_ref = db.instance_get(admin_ctxt,
                                           src_instance_ref['id'])

        network_model = fake_network.fake_get_instance_nw_info(self, 1)

        self.stubs.Set(objects.Instance, 'get_network_info',
                       lambda instance: network_model)

        self.fw.prepare_instance_filter(instance_ref, network_model)
        self.fw.apply_instance_filter(instance_ref, network_model)

        self._validate_security_group()
        # Extra test for TCP acceptance rules
        for ip in network_model.fixed_ips():
            if ip['version'] != 4:
                continue
            regex = re.compile('\[0\:0\] -A .* -j ACCEPT -p tcp'
                               ' --dport 80:81 -s %s' % ip['address'])
            match_rules = [rule for rule in self._out_rules
                           if regex.match(rule)]
            self.assertGreater(len(match_rules), 0,
                               "TCP port 80/81 acceptance rule wasn't added")

        db.instance_destroy(admin_ctxt, instance_ref['uuid'])

    def test_filters_for_instance_with_ip_v6(self):
        self.flags(use_ipv6=True)
        network_info = fake_network.fake_get_instance_nw_info(self, 1)
        rulesv4, rulesv6 = self.fw._filters_for_instance("fake", network_info)
        self.assertEqual(len(rulesv4), 2)
        self.assertEqual(len(rulesv6), 1)

    def test_filters_for_instance_without_ip_v6(self):
        self.flags(use_ipv6=False)
        network_info = fake_network.fake_get_instance_nw_info(self, 1)
        rulesv4, rulesv6 = self.fw._filters_for_instance("fake", network_info)
        self.assertEqual(len(rulesv4), 2)
        self.assertEqual(len(rulesv6), 0)

    def test_multinic_iptables(self):
        ipv4_rules_per_addr = 1
        ipv4_addr_per_network = 2
        ipv6_rules_per_addr = 1
        ipv6_addr_per_network = 1
        networks_count = 5
        instance_ref = self._create_instance_ref()
        _get_instance_nw_info = fake_network.fake_get_instance_nw_info
        network_info = _get_instance_nw_info(self,
                                             networks_count,
                                             ipv4_addr_per_network)
        network_info[0]['network']['subnets'][0]['meta']['dhcp_server'] = \
            '1.1.1.1'
        ipv4_len = len(self.fw.iptables.ipv4['filter'].rules)
        ipv6_len = len(self.fw.iptables.ipv6['filter'].rules)
        inst_ipv4, inst_ipv6 = self.fw.instance_rules(instance_ref,
                                                      network_info)
        self.fw.prepare_instance_filter(instance_ref, network_info)
        ipv4 = self.fw.iptables.ipv4['filter'].rules
        ipv6 = self.fw.iptables.ipv6['filter'].rules
        ipv4_network_rules = len(ipv4) - len(inst_ipv4) - ipv4_len
        ipv6_network_rules = len(ipv6) - len(inst_ipv6) - ipv6_len
        # Extra rules are for the DHCP request
        rules = (ipv4_rules_per_addr * ipv4_addr_per_network *
                 networks_count) + 2
        self.assertEqual(ipv4_network_rules, rules)
        self.assertEqual(ipv6_network_rules,
                         ipv6_rules_per_addr * ipv6_addr_per_network *
                         networks_count)

    def test_do_refresh_security_group_rules(self):
        admin_ctxt = context.get_admin_context()
        instance_ref = self._create_instance_ref()
        network_info = fake_network.fake_get_instance_nw_info(self, 1, 1)
        secgroup = self._create_test_security_group()
        db.instance_add_security_group(admin_ctxt, instance_ref['uuid'],
                                       secgroup['id'])
        self.fw.prepare_instance_filter(instance_ref, network_info)
        self.fw.instance_info[instance_ref['id']] = (instance_ref,
                                                     network_info)
        self._validate_security_group()
        # add a rule to the security group
        db.security_group_rule_create(admin_ctxt,
                                      {'parent_group_id': secgroup['id'],
                                       'protocol': 'udp',
                                       'from_port': 200,
                                       'to_port': 299,
                                       'cidr': '192.168.99.0/24'})
        # validate the extra rule
        self.fw.refresh_security_group_rules(secgroup)
        regex = re.compile('\[0\:0\] -A .* -j ACCEPT -p udp --dport 200:299'
                           ' -s 192.168.99.0/24')
        match_rules = [rule for rule in self._out_rules
                       if regex.match(rule)]
" "The rule for UDP acceptance is missing") class XenAPISRSelectionTestCase(stubs.XenAPITestBaseNoDB): """Unit tests for testing we find the right SR.""" def test_safe_find_sr_raise_exception(self): # Ensure StorageRepositoryNotFound is raise when wrong filter. self.flags(sr_matching_filter='yadayadayada', group='xenserver') stubs.stubout_session(self.stubs, stubs.FakeSessionForVMTests) session = get_session() self.assertRaises(exception.StorageRepositoryNotFound, vm_utils.safe_find_sr, session) def test_safe_find_sr_local_storage(self): # Ensure the default local-storage is found. self.flags(sr_matching_filter='other-config:i18n-key=local-storage', group='xenserver') stubs.stubout_session(self.stubs, stubs.FakeSessionForVMTests) session = get_session() # This test is only guaranteed if there is one host in the pool self.assertEqual(len(xenapi_fake.get_all('host')), 1) host_ref = xenapi_fake.get_all('host')[0] pbd_refs = xenapi_fake.get_all('PBD') for pbd_ref in pbd_refs: pbd_rec = xenapi_fake.get_record('PBD', pbd_ref) if pbd_rec['host'] != host_ref: continue sr_rec = xenapi_fake.get_record('SR', pbd_rec['SR']) if sr_rec['other_config']['i18n-key'] == 'local-storage': local_sr = pbd_rec['SR'] expected = vm_utils.safe_find_sr(session) self.assertEqual(local_sr, expected) def test_safe_find_sr_by_other_criteria(self): # Ensure the SR is found when using a different filter. self.flags(sr_matching_filter='other-config:my_fake_sr=true', group='xenserver') stubs.stubout_session(self.stubs, stubs.FakeSessionForVMTests) session = get_session() host_ref = xenapi_fake.get_all('host')[0] local_sr = xenapi_fake.create_sr(name_label='Fake Storage', type='lvm', other_config={'my_fake_sr': 'true'}, host_ref=host_ref) expected = vm_utils.safe_find_sr(session) self.assertEqual(local_sr, expected) def test_safe_find_sr_default(self): # Ensure the default SR is found regardless of other-config. self.flags(sr_matching_filter='default-sr:true', group='xenserver') stubs.stubout_session(self.stubs, stubs.FakeSessionForVMTests) session = get_session() pool_ref = session.call_xenapi('pool.get_all')[0] expected = vm_utils.safe_find_sr(session) self.assertEqual(session.call_xenapi('pool.get_default_SR', pool_ref), expected) def _create_service_entries(context, values={'avail_zone1': ['fake_host1', 'fake_host2'], 'avail_zone2': ['fake_host3'], }): for hosts in values.values(): for service_host in hosts: db.service_create(context, {'host': service_host, 'binary': 'nova-compute', 'topic': 'compute', 'report_count': 0}) return values # FIXME(sirp): convert this to use XenAPITestBaseNoDB class XenAPIAggregateTestCase(stubs.XenAPITestBase): """Unit tests for aggregate operations.""" def setUp(self): super(XenAPIAggregateTestCase, self).setUp() self.flags(connection_url='http://localhost', connection_username='test_user', connection_password='test_pass', group='xenserver') self.flags(instance_name_template='%d', firewall_driver='nova.virt.xenapi.firewall.' 
        self.flags(instance_name_template='%d',
                   firewall_driver='nova.virt.xenapi.firewall.'
                                   'Dom0IptablesFirewallDriver',
                   host='host',
                   compute_driver='xenapi.XenAPIDriver',
                   default_availability_zone='avail_zone1')
        host_ref = xenapi_fake.get_all('host')[0]
        stubs.stubout_session(self.stubs, stubs.FakeSessionForVMTests)
        self.context = context.get_admin_context()
        self.conn = xenapi_conn.XenAPIDriver(fake.FakeVirtAPI(), False)
        self.compute = manager.ComputeManager()
        self.api = compute_api.AggregateAPI()
        values = {'name': 'test_aggr',
                  'metadata': {'availability_zone': 'test_zone',
                               pool_states.POOL_FLAG: 'XenAPI'}}
        self.aggr = objects.Aggregate(context=self.context, id=1, **values)
        self.fake_metadata = {pool_states.POOL_FLAG: 'XenAPI',
                              'master_compute': 'host',
                              'availability_zone': 'fake_zone',
                              pool_states.KEY: pool_states.ACTIVE,
                              'host': xenapi_fake.get_record(
                                  'host', host_ref)['uuid']}
        self.useFixture(fixtures.SingleCellSimple())

    def test_pool_add_to_aggregate_called_by_driver(self):
        calls = []

        def pool_add_to_aggregate(context, aggregate, host, slave_info=None):
            self.assertEqual("CONTEXT", context)
            self.assertEqual("AGGREGATE", aggregate)
            self.assertEqual("HOST", host)
            self.assertEqual("SLAVEINFO", slave_info)
            calls.append(pool_add_to_aggregate)
        self.stubs.Set(self.conn._pool,
                       "add_to_aggregate",
                       pool_add_to_aggregate)

        self.conn.add_to_aggregate("CONTEXT", "AGGREGATE", "HOST",
                                   slave_info="SLAVEINFO")

        self.assertIn(pool_add_to_aggregate, calls)

    def test_pool_remove_from_aggregate_called_by_driver(self):
        calls = []

        def pool_remove_from_aggregate(context, aggregate, host,
                                       slave_info=None):
            self.assertEqual("CONTEXT", context)
            self.assertEqual("AGGREGATE", aggregate)
            self.assertEqual("HOST", host)
            self.assertEqual("SLAVEINFO", slave_info)
            calls.append(pool_remove_from_aggregate)
        self.stubs.Set(self.conn._pool,
                       "remove_from_aggregate",
                       pool_remove_from_aggregate)
        self.conn.remove_from_aggregate("CONTEXT", "AGGREGATE", "HOST",
                                        slave_info="SLAVEINFO")

        self.assertIn(pool_remove_from_aggregate, calls)

    def test_add_to_aggregate_for_first_host_sets_metadata(self):
        def fake_init_pool(id, name):
            fake_init_pool.called = True
        self.stubs.Set(self.conn._pool, "_init_pool", fake_init_pool)

        aggregate = self._aggregate_setup()
        self.conn._pool.add_to_aggregate(self.context, aggregate, "host")
        result = objects.Aggregate.get_by_id(self.context, aggregate.id)
        self.assertTrue(fake_init_pool.called)
        self.assertThat(self.fake_metadata,
                        matchers.DictMatches(result.metadata))

    def test_join_slave(self):
        # Ensure join_slave gets called when the request gets to master.
        def fake_join_slave(id, compute_uuid, host, url, user, password):
            fake_join_slave.called = True
        self.stubs.Set(self.conn._pool, "_join_slave", fake_join_slave)

        aggregate = self._aggregate_setup(hosts=['host', 'host2'],
                                          metadata=self.fake_metadata)
        self.conn._pool.add_to_aggregate(self.context, aggregate, "host2",
                                         dict(compute_uuid='fake_uuid',
                                              url='fake_url',
                                              user='fake_user',
                                              passwd='fake_pass',
                                              xenhost_uuid='fake_uuid'))
        self.assertTrue(fake_join_slave.called)

    def test_add_to_aggregate_first_host(self):
        def fake_pool_set_name_label(self, session, pool_ref, name):
            fake_pool_set_name_label.called = True
        self.stubs.Set(xenapi_fake.SessionBase, "pool_set_name_label",
                       fake_pool_set_name_label)
        self.conn._session.call_xenapi("pool.create", {"name": "asdf"})

        metadata = {'availability_zone': 'fake_zone',
                    pool_states.POOL_FLAG: "XenAPI",
                    pool_states.KEY: pool_states.CREATED}

        aggregate = objects.Aggregate(context=self.context)
        aggregate.name = 'fake_aggregate'
        aggregate.metadata = dict(metadata)
        aggregate.create()
        aggregate.add_host('host')
        self.assertEqual(["host"], aggregate.hosts)
        self.assertEqual(metadata, aggregate.metadata)

        self.conn._pool.add_to_aggregate(self.context, aggregate, "host")
        self.assertTrue(fake_pool_set_name_label.called)

    def test_remove_from_aggregate_called(self):
        def fake_remove_from_aggregate(context, aggregate, host):
            fake_remove_from_aggregate.called = True
        self.stubs.Set(self.conn._pool,
                       "remove_from_aggregate",
                       fake_remove_from_aggregate)

        self.conn.remove_from_aggregate(None, None, None)
        self.assertTrue(fake_remove_from_aggregate.called)

    def test_remove_from_empty_aggregate(self):
        result = self._aggregate_setup()
        self.assertRaises(exception.InvalidAggregateActionDelete,
                          self.conn._pool.remove_from_aggregate,
                          self.context, result, "test_host")

    def test_remove_slave(self):
        # Ensure eject slave gets called.
        def fake_eject_slave(id, compute_uuid, host_uuid):
            fake_eject_slave.called = True
        self.stubs.Set(self.conn._pool, "_eject_slave", fake_eject_slave)

        self.fake_metadata['host2'] = 'fake_host2_uuid'
        aggregate = self._aggregate_setup(hosts=['host', 'host2'],
                                          metadata=self.fake_metadata,
                                          aggr_state=pool_states.ACTIVE)
        self.conn._pool.remove_from_aggregate(self.context, aggregate,
                                              "host2")
        self.assertTrue(fake_eject_slave.called)

    def test_remove_master_solo(self):
        # Ensure metadata are cleared after removal.
        def fake_clear_pool(id):
            fake_clear_pool.called = True
        self.stubs.Set(self.conn._pool, "_clear_pool", fake_clear_pool)

        aggregate = self._aggregate_setup(metadata=self.fake_metadata)
        self.conn._pool.remove_from_aggregate(self.context, aggregate, "host")
        result = objects.Aggregate.get_by_id(self.context, aggregate.id)
        self.assertTrue(fake_clear_pool.called)
        self.assertThat({'availability_zone': 'fake_zone',
                         pool_states.POOL_FLAG: 'XenAPI',
                         pool_states.KEY: pool_states.ACTIVE},
                        matchers.DictMatches(result.metadata))

    def test_remote_master_non_empty_pool(self):
        # Ensure InvalidAggregateActionDelete is raised when removing
        # the master from a pool that still has other hosts.
        aggregate = self._aggregate_setup(hosts=['host', 'host2'],
                                          metadata=self.fake_metadata)

        self.assertRaises(exception.InvalidAggregateActionDelete,
                          self.conn._pool.remove_from_aggregate,
                          self.context, aggregate, "host")

    def _aggregate_setup(self, aggr_name='fake_aggregate',
                         aggr_zone='fake_zone',
                         aggr_state=pool_states.CREATED,
                         hosts=['host'], metadata=None):
        aggregate = objects.Aggregate(context=self.context)
        aggregate.name = aggr_name
        aggregate.metadata = {'availability_zone': aggr_zone,
                              pool_states.POOL_FLAG: 'XenAPI',
                              pool_states.KEY: aggr_state,
                              }
        if metadata:
            aggregate.metadata.update(metadata)
        aggregate.create()
        for aggregate_host in hosts:
            aggregate.add_host(aggregate_host)
        return aggregate

    def test_add_host_to_aggregate_invalid_changing_status(self):
        """Ensure InvalidAggregateActionAdd is raised when adding host while
        aggregate is not ready.
        """
        aggregate = self._aggregate_setup(aggr_state=pool_states.CHANGING)
        ex = self.assertRaises(exception.InvalidAggregateActionAdd,
                               self.conn.add_to_aggregate, self.context,
                               aggregate, 'host')
        self.assertIn('setup in progress', str(ex))

    def test_add_host_to_aggregate_invalid_dismissed_status(self):
        """Ensure InvalidAggregateActionAdd is raised when aggregate is
        deleted.
        """
        aggregate = self._aggregate_setup(aggr_state=pool_states.DISMISSED)
        ex = self.assertRaises(exception.InvalidAggregateActionAdd,
                               self.conn.add_to_aggregate, self.context,
                               aggregate, 'fake_host')
        self.assertIn('aggregate deleted', str(ex))

    def test_add_host_to_aggregate_invalid_error_status(self):
        """Ensure InvalidAggregateActionAdd is raised when aggregate is
        in error.
        """
        aggregate = self._aggregate_setup(aggr_state=pool_states.ERROR)
        ex = self.assertRaises(exception.InvalidAggregateActionAdd,
                               self.conn.add_to_aggregate, self.context,
                               aggregate, 'fake_host')
        self.assertIn('aggregate in error', str(ex))

    def test_remove_host_from_aggregate_error(self):
        # Ensure we can remove a host from an aggregate even if in error.
        values = _create_service_entries(self.context)
        fake_zone = list(values.keys())[0]
        aggr = self.api.create_aggregate(self.context,
                                         'fake_aggregate', fake_zone)
        # let's mock the fact that the aggregate is ready!
        metadata = {pool_states.POOL_FLAG: "XenAPI",
                    pool_states.KEY: pool_states.ACTIVE}
        self.api.update_aggregate_metadata(self.context, aggr.id, metadata)
        for aggregate_host in values[fake_zone]:
            aggr = self.api.add_host_to_aggregate(self.context,
                                                  aggr.id, aggregate_host)
        # let's mock the fact that the aggregate is in error!
        expected = self.api.remove_host_from_aggregate(self.context,
                                                       aggr.id,
                                                       values[fake_zone][0])
        self.assertEqual(len(aggr.hosts) - 1, len(expected.hosts))
        self.assertEqual(expected.metadata[pool_states.KEY],
                         pool_states.ACTIVE)

    def test_remove_host_from_aggregate_invalid_dismissed_status(self):
        """Ensure InvalidAggregateActionDelete is raised when aggregate is
        deleted.
        """
        aggregate = self._aggregate_setup(aggr_state=pool_states.DISMISSED)
        self.assertRaises(exception.InvalidAggregateActionDelete,
                          self.conn.remove_from_aggregate, self.context,
                          aggregate, 'fake_host')

    def test_remove_host_from_aggregate_invalid_changing_status(self):
        """Ensure InvalidAggregateActionDelete is raised when aggregate is
        changing.
        """
        aggregate = self._aggregate_setup(aggr_state=pool_states.CHANGING)
        self.assertRaises(exception.InvalidAggregateActionDelete,
                          self.conn.remove_from_aggregate, self.context,
                          aggregate, 'fake_host')
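
    # NOTE(editor): illustrative sketch, not part of the original suite.
    # The invalid-status tests above all reduce to one guard on the
    # aggregate's pool_states.KEY metadata before membership changes,
    # roughly:
    #
    #     status = aggregate.metadata.get(pool_states.KEY)
    #     if status in (pool_states.CHANGING, pool_states.DISMISSED,
    #                   pool_states.ERROR):
    #         raise exception.InvalidAggregateActionAdd(  # or ...Delete
    #             aggregate_id=aggregate.id, reason=...)
    #
    # Only CREATED/ACTIVE pools accept changes; the other states map to
    # the 'setup in progress', 'aggregate deleted' and 'aggregate in
    # error' messages asserted above.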
    def test_add_aggregate_host_raise_err(self):
        # Ensure the undo operation works correctly on add.
        def fake_driver_add_to_aggregate(context, aggregate, host,
                                         **_ignore):
            raise exception.AggregateError(
                    aggregate_id='', action='', reason='')
        self.stubs.Set(self.compute.driver, "add_to_aggregate",
                       fake_driver_add_to_aggregate)
        metadata = {pool_states.POOL_FLAG: "XenAPI",
                    pool_states.KEY: pool_states.ACTIVE}
        self.aggr.metadata = metadata
        self.aggr.hosts = ['fake_host']

        self.assertRaises(exception.AggregateError,
                          self.compute.add_aggregate_host,
                          self.context, host="fake_host",
                          aggregate=self.aggr,
                          slave_info=None)
        self.assertEqual(self.aggr.metadata[pool_states.KEY],
                         pool_states.ERROR)
        self.assertEqual(self.aggr.hosts, ['fake_host'])


class MockComputeAPI(object):
    def __init__(self):
        self._mock_calls = []

    def add_aggregate_host(self, ctxt, aggregate,
                           host_param, host, slave_info):
        self._mock_calls.append((
            self.add_aggregate_host, ctxt, aggregate,
            host_param, host, slave_info))

    def remove_aggregate_host(self, ctxt, host, aggregate_id,
                              host_param, slave_info):
        self._mock_calls.append((
            self.remove_aggregate_host, ctxt, host, aggregate_id,
            host_param, slave_info))


class StubDependencies(object):
    """Stub dependencies for ResourcePool."""

    def __init__(self):
        self.compute_rpcapi = MockComputeAPI()

    def _is_hv_pool(self, *_ignore):
        return True

    def _get_metadata(self, *_ignore):
        return {
            pool_states.KEY: {},
            'master_compute': 'master'
        }

    def _create_slave_info(self, *ignore):
        return "SLAVE_INFO"


class ResourcePoolWithStubs(StubDependencies, pool.ResourcePool):
    """A ResourcePool, use stub dependencies."""


class HypervisorPoolTestCase(test.NoDBTestCase):

    fake_aggregate = {
        'id': 98,
        'hosts': [],
        'metadata': {
            'master_compute': 'master',
            pool_states.POOL_FLAG: '',
            pool_states.KEY: ''
            }
        }
    fake_aggregate = objects.Aggregate(**fake_aggregate)

    def test_slave_asks_master_to_add_slave_to_pool(self):
        slave = ResourcePoolWithStubs()

        slave.add_to_aggregate("CONTEXT", self.fake_aggregate, "slave")

        self.assertIn(
            (slave.compute_rpcapi.add_aggregate_host,
             "CONTEXT", "slave", self.fake_aggregate,
             "master", "SLAVE_INFO"),
            slave.compute_rpcapi._mock_calls)

    def test_slave_asks_master_to_remove_slave_from_pool(self):
        slave = ResourcePoolWithStubs()

        slave.remove_from_aggregate("CONTEXT", self.fake_aggregate, "slave")

        self.assertIn(
            (slave.compute_rpcapi.remove_aggregate_host,
             "CONTEXT", "slave", 98, "master", "SLAVE_INFO"),
            slave.compute_rpcapi._mock_calls)


class SwapXapiHostTestCase(test.NoDBTestCase):

    def test_swapping(self):
        self.assertEqual(
            "http://otherserver:8765/somepath",
            pool.swap_xapi_host(
                "http://someserver:8765/somepath", 'otherserver'))

    def test_no_port(self):
        self.assertEqual(
            "http://otherserver/somepath",
            pool.swap_xapi_host(
                "http://someserver/somepath", 'otherserver'))

    def test_no_path(self):
        self.assertEqual(
            "http://otherserver",
            pool.swap_xapi_host(
                "http://someserver", 'otherserver'))
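
# NOTE(editor): illustrative sketch, not part of the original suite.
# pool.swap_xapi_host, as pinned down by SwapXapiHostTestCase, replaces
# only the host part of a XenAPI URL while leaving scheme, port and path
# untouched. A rough stdlib-based equivalent:
#
#     from six.moves.urllib import parse
#
#     def swap_xapi_host(url, host_addr):
#         temp_url = parse.urlparse(url)
#         return url.replace(temp_url.hostname, '%s' % host_addr)
#
# A plain string replace on the parsed hostname is enough to satisfy all
# three cases above, including the no-port and no-path URLs.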

class XenAPILiveMigrateTestCase(stubs.XenAPITestBaseNoDB):
    """Unit tests for live_migration."""
    def setUp(self):
        super(XenAPILiveMigrateTestCase, self).setUp()
        self.flags(connection_url='http://localhost',
                   connection_password='test_pass',
                   group='xenserver')
        self.flags(firewall_driver='nova.virt.xenapi.firewall.'
                                   'Dom0IptablesFirewallDriver',
                   host='host')
        db_fakes.stub_out_db_instance_api(self)
        self.context = context.get_admin_context()

    def test_live_migration_calls_vmops(self):
        stubs.stubout_session(self.stubs, stubs.FakeSessionForVMTests)
        self.conn = xenapi_conn.XenAPIDriver(fake.FakeVirtAPI(), False)

        def fake_live_migrate(context, instance_ref, dest, post_method,
                              recover_method, block_migration, migrate_data):
            fake_live_migrate.called = True

        self.stubs.Set(self.conn._vmops, "live_migrate", fake_live_migrate)

        self.conn.live_migration(None, None, None, None, None)
        self.assertTrue(fake_live_migrate.called)

    def test_pre_live_migration(self):
        stubs.stubout_session(self.stubs, stubs.FakeSessionForVMTests)
        self.conn = xenapi_conn.XenAPIDriver(fake.FakeVirtAPI(), False)

        with mock.patch.object(self.conn._vmops,
                               "pre_live_migration") as pre:
            pre.return_value = True

            result = self.conn.pre_live_migration(
                "ctx", "inst", "bdi", "nw", "di", "data")

            self.assertTrue(result)
            pre.assert_called_with("ctx", "inst", "bdi", "nw", "di", "data")

    @mock.patch.object(vmops.VMOps, '_post_start_actions')
    def test_post_live_migration_at_destination(self, mock_post_action):
        # ensure method is present
        stubs.stubout_session(self.stubs, stubs.FakeSessionForVMTests)
        self.conn = xenapi_conn.XenAPIDriver(fake.FakeVirtAPI(), False)

        fake_instance = {"name": "name"}
        fake_network_info = "network_info"

        def fake_fw(instance, network_info):
            self.assertEqual(instance, fake_instance)
            self.assertEqual(network_info, fake_network_info)
            fake_fw.call_count += 1

        def fake_create_kernel_and_ramdisk(context, session, instance,
                                           name_label):
            return "fake-kernel-file", "fake-ramdisk-file"

        fake_fw.call_count = 0
        _vmops = self.conn._vmops
        self.stubs.Set(_vmops.firewall_driver,
                       'setup_basic_filtering', fake_fw)
        self.stubs.Set(_vmops.firewall_driver,
                       'prepare_instance_filter', fake_fw)
        self.stubs.Set(_vmops.firewall_driver,
                       'apply_instance_filter', fake_fw)
        self.stubs.Set(vm_utils, "create_kernel_and_ramdisk",
                       fake_create_kernel_and_ramdisk)

        def fake_get_vm_opaque_ref(instance):
            fake_get_vm_opaque_ref.called = True
        self.stubs.Set(_vmops, "_get_vm_opaque_ref", fake_get_vm_opaque_ref)
        fake_get_vm_opaque_ref.called = False

        def fake_strip_base_mirror_from_vdis(session, vm_ref):
            fake_strip_base_mirror_from_vdis.called = True
        self.stubs.Set(vm_utils, "strip_base_mirror_from_vdis",
                       fake_strip_base_mirror_from_vdis)
        fake_strip_base_mirror_from_vdis.called = False

        self.conn.post_live_migration_at_destination(None, fake_instance,
                                                     fake_network_info, None)
        self.assertEqual(fake_fw.call_count, 3)
        self.assertTrue(fake_get_vm_opaque_ref.called)
        self.assertTrue(fake_strip_base_mirror_from_vdis.called)
        mock_post_action.assert_called_once_with(fake_instance)

    def test_check_can_live_migrate_destination_with_block_migration(self):
        stubs.stubout_session(self.stubs, stubs.FakeSessionForVMTests)
        self.conn = xenapi_conn.XenAPIDriver(fake.FakeVirtAPI(), False)

        self.stubs.Set(vm_utils, "safe_find_sr", lambda _x: "asdf")

        expected = {'block_migration': True,
                    'is_volume_backed': False,
                    'migrate_data': {
                        'migrate_send_data': {'value': 'fake_migrate_data'},
                        'destination_sr_ref': 'asdf'
                        }
                    }
        result = self.conn.check_can_live_migrate_destination(
            self.context, {'host': 'host'}, {}, {}, True, False)
        result.is_volume_backed = False
        self.assertEqual(expected, result.to_legacy_dict())

    def test_check_live_migrate_destination_verifies_ip(self):
        stubs.stubout_session(self.stubs, stubs.FakeSessionForVMTests)
        self.conn = xenapi_conn.XenAPIDriver(fake.FakeVirtAPI(), False)

        for pif_ref in xenapi_fake.get_all('PIF'):
            pif_rec = xenapi_fake.get_record('PIF', pif_ref)
            pif_rec['IP'] = ''
            pif_rec['IPv6'] = ''

        self.stubs.Set(vm_utils, "safe_find_sr", lambda _x: "asdf")

        self.assertRaises(exception.MigrationError,
                          self.conn.check_can_live_migrate_destination,
                          self.context, {'host': 'host'},
                          {}, {},
                          True, False)

    def test_check_can_live_migrate_destination_block_migration_fails(self):
        stubs.stubout_session(self.stubs,
                              stubs.FakeSessionForFailedMigrateTests)
        self.conn = xenapi_conn.XenAPIDriver(fake.FakeVirtAPI(), False)
        self.assertRaises(exception.MigrationError,
                          self.conn.check_can_live_migrate_destination,
                          self.context, {'host': 'host'},
                          {}, {},
                          True, False)

    def _add_default_live_migrate_stubs(self, conn):
        @classmethod
        def fake_generate_vdi_map(cls, destination_sr_ref, _vm_ref):
            pass

        @classmethod
        def fake_get_iscsi_srs(cls, destination_sr_ref, _vm_ref):
            return []

        @classmethod
        def fake_get_vm_opaque_ref(cls, instance):
            return "fake_vm"

        def fake_lookup_kernel_ramdisk(session, vm):
            return ("fake_PV_kernel", "fake_PV_ramdisk")

        @classmethod
        def fake_generate_vif_map(cls, vif_uuid_map):
            return {'vif_ref1': 'dest_net_ref'}

        self.stub_out('nova.virt.xenapi.vmops.VMOps._generate_vdi_map',
                      fake_generate_vdi_map)
        self.stub_out('nova.virt.xenapi.vmops.VMOps._get_iscsi_srs',
                      fake_get_iscsi_srs)
        self.stub_out('nova.virt.xenapi.vmops.VMOps._get_vm_opaque_ref',
                      fake_get_vm_opaque_ref)
        self.stub_out('nova.virt.xenapi.vm_utils.lookup_kernel_ramdisk',
                      fake_lookup_kernel_ramdisk)
        self.stub_out(
            'nova.virt.xenapi.vmops.VMOps._generate_vif_network_map',
            fake_generate_vif_map)

    def test_check_can_live_migrate_source_with_block_migrate(self):
        stubs.stubout_session(self.stubs, stubs.FakeSessionForVMTests)
        self.conn = xenapi_conn.XenAPIDriver(fake.FakeVirtAPI(), False)

        self._add_default_live_migrate_stubs(self.conn)

        dest_check_data = objects.XenapiLiveMigrateData(
            block_migration=True,
            is_volume_backed=False,
            destination_sr_ref=None,
            migrate_send_data={'key': 'value'})
        result = self.conn.check_can_live_migrate_source(self.context,
                                                         {'host': 'host'},
                                                         dest_check_data)
        self.assertEqual(dest_check_data, result)

    def test_check_can_live_migrate_source_with_block_migrate_iscsi(self):
        stubs.stubout_session(self.stubs, stubs.FakeSessionForVMTests)
        self.conn = xenapi_conn.XenAPIDriver(fake.FakeVirtAPI(), False)

        self._add_default_live_migrate_stubs(self.conn)

        def fake_get_iscsi_srs(destination_sr_ref, _vm_ref):
            return ['sr_ref']
        self.stubs.Set(self.conn._vmops, "_get_iscsi_srs",
                       fake_get_iscsi_srs)

        def fake_is_xsm_sr_check_relaxed():
            return True
        self.stubs.Set(self.conn._vmops._session,
                       'is_xsm_sr_check_relaxed',
                       fake_is_xsm_sr_check_relaxed)

        dest_check_data = objects.XenapiLiveMigrateData(
            block_migration=True,
            is_volume_backed=True,
            destination_sr_ref=None,
            migrate_send_data={'key': 'value'})
        result = self.conn.check_can_live_migrate_source(self.context,
                                                         {'host': 'host'},
                                                         dest_check_data)
        self.assertEqual(dest_check_data.to_legacy_dict(),
                         result.to_legacy_dict())

    def test_check_can_live_migrate_source_with_block_iscsi_fails(self):
        stubs.stubout_session(self.stubs, stubs.FakeSessionForVMTests)
        self.conn = xenapi_conn.XenAPIDriver(fake.FakeVirtAPI(), False)

        self._add_default_live_migrate_stubs(self.conn)

        def fake_get_iscsi_srs(destination_sr_ref, _vm_ref):
            return ['sr_ref']
        self.stubs.Set(self.conn._vmops, "_get_iscsi_srs",
                       fake_get_iscsi_srs)

        def fake_is_xsm_sr_check_relaxed():
            return False
        self.stubs.Set(self.conn._vmops._session,
                       'is_xsm_sr_check_relaxed',
                       fake_is_xsm_sr_check_relaxed)
        self.assertRaises(exception.MigrationError,
                          self.conn.check_can_live_migrate_source,
                          self.context, {'host': 'host'}, {})

    def test_check_can_live_migrate_source_with_block_migrate_fails(self):
        stubs.stubout_session(self.stubs,
                              stubs.FakeSessionForFailedMigrateTests)
        self.conn = xenapi_conn.XenAPIDriver(fake.FakeVirtAPI(), False)

        self._add_default_live_migrate_stubs(self.conn)

        dest_check_data = objects.XenapiLiveMigrateData(
            block_migration=True,
            is_volume_backed=True,
            migrate_send_data={'key': 'value'},
            destination_sr_ref=None)
        self.assertRaises(exception.MigrationError,
                          self.conn.check_can_live_migrate_source,
                          self.context,
                          {'host': 'host'},
                          dest_check_data)

    @mock.patch.object(objects.AggregateList, 'get_by_host')
    def test_check_can_live_migrate_works(self, mock_get_by_host):
        stubs.stubout_session(self.stubs, stubs.FakeSessionForVMTests)
        self.conn = xenapi_conn.XenAPIDriver(fake.FakeVirtAPI(), False)
        metadata = {'host': 'test_host_uuid'}
        aggregate = objects.Aggregate(metadata=metadata)
        aggregate_list = objects.AggregateList(objects=[aggregate])
        mock_get_by_host.return_value = aggregate_list

        instance = objects.Instance(host='host')
        self.conn.check_can_live_migrate_destination(
            self.context, instance, None, None)
        mock_get_by_host.assert_called_once_with(
            self.context, CONF.host, key='hypervisor_pool')

    @mock.patch.object(objects.AggregateList, 'get_by_host')
    def test_check_can_live_migrate_fails(self, mock_get_by_host):
        stubs.stubout_session(self.stubs, stubs.FakeSessionForVMTests)
        self.conn = xenapi_conn.XenAPIDriver(fake.FakeVirtAPI(), False)
        metadata = {'dest_other': 'test_host_uuid'}
        aggregate = objects.Aggregate(metadata=metadata)
        aggregate_list = objects.AggregateList(objects=[aggregate])
        mock_get_by_host.return_value = aggregate_list

        instance = objects.Instance(host='host')
        self.assertRaises(exception.MigrationError,
                          self.conn.check_can_live_migrate_destination,
                          self.context, instance, None, None)
        mock_get_by_host.assert_called_once_with(
            self.context, CONF.host, key='hypervisor_pool')

    def test_live_migration(self):
        stubs.stubout_session(self.stubs, stubs.FakeSessionForVMTests)
        self.conn = xenapi_conn.XenAPIDriver(fake.FakeVirtAPI(), False)

        def fake_lookup_kernel_ramdisk(session, vm_ref):
            return "kernel", "ramdisk"
        self.stubs.Set(vm_utils, "lookup_kernel_ramdisk",
                       fake_lookup_kernel_ramdisk)

        def fake_get_vm_opaque_ref(instance):
            return "fake_vm"
        self.stubs.Set(self.conn._vmops, "_get_vm_opaque_ref",
                       fake_get_vm_opaque_ref)

        def fake_get_host_opaque_ref(context, destination_hostname):
            return "fake_host"
        self.stubs.Set(self.conn._vmops, "_get_host_opaque_ref",
                       fake_get_host_opaque_ref)

        def post_method(context, instance, destination_hostname,
                        block_migration, migrate_data):
            post_method.called = True

        migrate_data = objects.XenapiLiveMigrateData(
            destination_sr_ref="foo",
            migrate_send_data={"bar": "baz"},
            block_migration=False)
        self.conn.live_migration(self.conn, None, None, post_method, None,
                                 None, migrate_data)

        self.assertTrue(post_method.called, "post_method.called")
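
    # NOTE(editor): illustrative sketch, not part of the original suite.
    # test_live_migration above and test_live_migration_on_failure below
    # pin down the driver's callback contract: exactly one of post_method
    # and recover_method runs, and recover_method must see the original
    # migrate_data. Schematically:
    #
    #     try:
    #         # VM.migrate_send / VM.pool_migrate happens here
    #         ...
    #     except Exception:
    #         recover_method(context, instance, destination_hostname,
    #                        migrate_data=migrate_data)
    #         raise
    #     post_method(context, instance, destination_hostname,
    #                 block_migration, migrate_data)
    #
    # which is why the failure test asserts both that the injected
    # NotImplementedError propagates and that recover_method received a
    # non-None migrate_data.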
self.stubs.Set(self.conn._vmops._session, "call_xenapi", fake_call_xenapi) def recover_method(context, instance, destination_hostname, migrate_data=None): self.assertIsNotNone(migrate_data, 'migrate_data should be passed') recover_method.called = True migrate_data = objects.XenapiLiveMigrateData( destination_sr_ref="foo", migrate_send_data={"bar": "baz"}, block_migration=False) self.assertRaises(NotImplementedError, self.conn.live_migration, self.conn, None, None, None, recover_method, None, migrate_data) self.assertTrue(recover_method.called, "recover_method.called") def test_live_migration_calls_post_migration(self): stubs.stubout_session(self.stubs, stubs.FakeSessionForVMTests) self.conn = xenapi_conn.XenAPIDriver(fake.FakeVirtAPI(), False) self._add_default_live_migrate_stubs(self.conn) def post_method(context, instance, destination_hostname, block_migration, migrate_data): post_method.called = True # pass block_migration = True and migrate data migrate_data = objects.XenapiLiveMigrateData( destination_sr_ref="foo", migrate_send_data={"bar": "baz"}, block_migration=True) self.conn.live_migration(self.conn, None, None, post_method, None, True, migrate_data) self.assertTrue(post_method.called, "post_method.called") def test_live_migration_block_cleans_srs(self): stubs.stubout_session(self.stubs, stubs.FakeSessionForVMTests) self.conn = xenapi_conn.XenAPIDriver(fake.FakeVirtAPI(), False) self._add_default_live_migrate_stubs(self.conn) def fake_get_iscsi_srs(context, instance): return ['sr_ref'] self.stubs.Set(self.conn._vmops, "_get_iscsi_srs", fake_get_iscsi_srs) def fake_forget_sr(context, instance): fake_forget_sr.called = True self.stubs.Set(volume_utils, "forget_sr", fake_forget_sr) def post_method(context, instance, destination_hostname, block_migration, migrate_data): post_method.called = True migrate_data = objects.XenapiLiveMigrateData( destination_sr_ref="foo", migrate_send_data={"bar": "baz"}, block_migration=True) self.conn.live_migration(self.conn, None, None, post_method, None, True, migrate_data) self.assertTrue(post_method.called, "post_method.called") self.assertTrue(fake_forget_sr.called, "forget_sr.called") def test_live_migration_with_block_migration_fails_migrate_send(self): stubs.stubout_session(self.stubs, stubs.FakeSessionForFailedMigrateTests) self.conn = xenapi_conn.XenAPIDriver(fake.FakeVirtAPI(), False) self._add_default_live_migrate_stubs(self.conn) def recover_method(context, instance, destination_hostname, migrate_data=None): self.assertIsNotNone(migrate_data, 'migrate_data should be passed') recover_method.called = True # pass block_migration = True and migrate data migrate_data = objects.XenapiLiveMigrateData( destination_sr_ref='foo', migrate_send_data={'bar': 'baz'}, block_migration=True) self.assertRaises(exception.MigrationError, self.conn.live_migration, self.conn, None, None, None, recover_method, True, migrate_data) self.assertTrue(recover_method.called, "recover_method.called") def test_live_migrate_block_migration_xapi_call_parameters(self): fake_vdi_map = object() class Session(xenapi_fake.SessionBase): def VM_migrate_send(self_, session, vmref, migrate_data, islive, vdi_map, vif_map, options): self.assertEqual({'SOMEDATA': 'SOMEVAL'}, migrate_data) self.assertEqual(fake_vdi_map, vdi_map) stubs.stubout_session(self.stubs, Session) conn = xenapi_conn.XenAPIDriver(fake.FakeVirtAPI(), False) self._add_default_live_migrate_stubs(conn) def fake_generate_vdi_map(destination_sr_ref, _vm_ref): return fake_vdi_map self.stubs.Set(conn._vmops, 
"_generate_vdi_map", fake_generate_vdi_map) def dummy_callback(*args, **kwargs): pass migrate_data = objects.XenapiLiveMigrateData( migrate_send_data={'SOMEDATA': 'SOMEVAL'}, destination_sr_ref='TARGET_SR_OPAQUE_REF', block_migration=True) conn.live_migration( self.context, instance=dict(name='ignore'), dest=None, post_method=dummy_callback, recover_method=dummy_callback, block_migration="SOMEDATA", migrate_data=migrate_data) def test_live_migrate_pool_migration_xapi_call_parameters(self): class Session(xenapi_fake.SessionBase): def VM_pool_migrate(self_, session, vm_ref, host_ref, options): self.assertEqual("fake_ref", host_ref) self.assertEqual({"live": "true"}, options) raise IOError() stubs.stubout_session(self.stubs, Session) conn = xenapi_conn.XenAPIDriver(fake.FakeVirtAPI(), False) self._add_default_live_migrate_stubs(conn) def fake_get_host_opaque_ref(context, destination): return "fake_ref" self.stubs.Set(conn._vmops, "_get_host_opaque_ref", fake_get_host_opaque_ref) def dummy_callback(*args, **kwargs): pass migrate_data = objects.XenapiLiveMigrateData( migrate_send_data={'foo': 'bar'}, destination_sr_ref='foo', block_migration=False) self.assertRaises(IOError, conn.live_migration, self.context, instance=dict(name='ignore'), dest=None, post_method=dummy_callback, recover_method=dummy_callback, block_migration=False, migrate_data=migrate_data) def test_generate_vdi_map(self): stubs.stubout_session(self.stubs, xenapi_fake.SessionBase) conn = xenapi_conn.XenAPIDriver(fake.FakeVirtAPI(), False) vm_ref = "fake_vm_ref" def fake_find_sr(_session): self.assertEqual(conn._session, _session) return "source_sr_ref" self.stubs.Set(vm_utils, "safe_find_sr", fake_find_sr) def fake_get_instance_vdis_for_sr(_session, _vm_ref, _sr_ref): self.assertEqual(conn._session, _session) self.assertEqual(vm_ref, _vm_ref) self.assertEqual("source_sr_ref", _sr_ref) return ["vdi0", "vdi1"] self.stubs.Set(vm_utils, "get_instance_vdis_for_sr", fake_get_instance_vdis_for_sr) result = conn._vmops._generate_vdi_map("dest_sr_ref", vm_ref) self.assertEqual({"vdi0": "dest_sr_ref", "vdi1": "dest_sr_ref"}, result) @mock.patch.object(vmops.VMOps, "_delete_networks_and_bridges") def test_rollback_live_migration_at_destination(self, mock_delete_network): stubs.stubout_session(self.stubs, xenapi_fake.SessionBase) conn = xenapi_conn.XenAPIDriver(fake.FakeVirtAPI(), False) network_info = ["fake_vif1"] with mock.patch.object(conn, "destroy") as mock_destroy: conn.rollback_live_migration_at_destination("context", "instance", network_info, {'block_device_mapping': []}) self.assertFalse(mock_destroy.called) self.assertTrue(mock_delete_network.called) class XenAPIInjectMetadataTestCase(stubs.XenAPITestBaseNoDB): def setUp(self): super(XenAPIInjectMetadataTestCase, self).setUp() self.flags(connection_url='http://localhost', connection_password='test_pass', group='xenserver') self.flags(firewall_driver='nova.virt.xenapi.firewall.' 
'Dom0IptablesFirewallDriver') stubs.stubout_session(self.stubs, stubs.FakeSessionForVMTests) self.conn = xenapi_conn.XenAPIDriver(fake.FakeVirtAPI(), False) self.xenstore = dict(persist={}, ephem={}) self.called_fake_get_vm_opaque_ref = False def fake_get_vm_opaque_ref(inst, instance): self.called_fake_get_vm_opaque_ref = True if instance["uuid"] == "not_found": raise exception.NotFound self.assertEqual(instance, {'uuid': 'fake'}) return 'vm_ref' def fake_add_to_param_xenstore(inst, vm_ref, key, val): self.assertEqual(vm_ref, 'vm_ref') self.xenstore['persist'][key] = val def fake_remove_from_param_xenstore(inst, vm_ref, key): self.assertEqual(vm_ref, 'vm_ref') if key in self.xenstore['persist']: del self.xenstore['persist'][key] def fake_write_to_xenstore(inst, instance, path, value, vm_ref=None): self.assertEqual(instance, {'uuid': 'fake'}) self.assertEqual(vm_ref, 'vm_ref') self.xenstore['ephem'][path] = jsonutils.dumps(value) def fake_delete_from_xenstore(inst, instance, path, vm_ref=None): self.assertEqual(instance, {'uuid': 'fake'}) self.assertEqual(vm_ref, 'vm_ref') if path in self.xenstore['ephem']: del self.xenstore['ephem'][path] self.stubs.Set(vmops.VMOps, '_get_vm_opaque_ref', fake_get_vm_opaque_ref) self.stubs.Set(vmops.VMOps, '_add_to_param_xenstore', fake_add_to_param_xenstore) self.stubs.Set(vmops.VMOps, '_remove_from_param_xenstore', fake_remove_from_param_xenstore) self.stubs.Set(vmops.VMOps, '_write_to_xenstore', fake_write_to_xenstore) self.stubs.Set(vmops.VMOps, '_delete_from_xenstore', fake_delete_from_xenstore) def test_inject_instance_metadata(self): # Add some system_metadata to ensure it doesn't get added # to xenstore instance = dict(metadata=[{'key': 'a', 'value': 1}, {'key': 'b', 'value': 2}, {'key': 'c', 'value': 3}, # Check xenstore key sanitizing {'key': 'hi.there', 'value': 4}, {'key': 'hi!t.e/e', 'value': 5}], # Check xenstore key sanitizing system_metadata=[{'key': 'sys_a', 'value': 1}, {'key': 'sys_b', 'value': 2}, {'key': 'sys_c', 'value': 3}], uuid='fake') self.conn._vmops._inject_instance_metadata(instance, 'vm_ref') self.assertEqual(self.xenstore, { 'persist': { 'vm-data/user-metadata/a': '1', 'vm-data/user-metadata/b': '2', 'vm-data/user-metadata/c': '3', 'vm-data/user-metadata/hi_there': '4', 'vm-data/user-metadata/hi_t_e_e': '5', }, 'ephem': {}, }) def test_change_instance_metadata_add(self): # Test XenStore key sanitizing here, too. 
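        # Editor's note: a minimal sketch of the key sanitising these
        # metadata tests imply (nova's real helper may differ in detail):
        # characters that are unsafe in a xenstore path component are
        # replaced with '_' before the key is written under
        # vm-data/user-metadata/, e.g. 'hi.there' -> 'hi_there',
        # 'hi!t.e/e' -> 'hi_t_e_e', 'test.key' -> 'test_key'.
        #
        #     import re
        #
        #     def _sanitize_xenstore_key(key):
        #         # Anything outside [A-Za-z0-9_-] becomes an underscore.
        #         return re.sub(r'[^A-Za-z0-9_-]', '_', key)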
diff = {'test.key': ['+', 4]} instance = {'uuid': 'fake'} self.xenstore = { 'persist': { 'vm-data/user-metadata/a': '1', 'vm-data/user-metadata/b': '2', 'vm-data/user-metadata/c': '3', }, 'ephem': { 'vm-data/user-metadata/a': '1', 'vm-data/user-metadata/b': '2', 'vm-data/user-metadata/c': '3', }, } self.conn._vmops.change_instance_metadata(instance, diff) self.assertEqual(self.xenstore, { 'persist': { 'vm-data/user-metadata/a': '1', 'vm-data/user-metadata/b': '2', 'vm-data/user-metadata/c': '3', 'vm-data/user-metadata/test_key': '4', }, 'ephem': { 'vm-data/user-metadata/a': '1', 'vm-data/user-metadata/b': '2', 'vm-data/user-metadata/c': '3', 'vm-data/user-metadata/test_key': '4', }, }) def test_change_instance_metadata_update(self): diff = dict(b=['+', 4]) instance = {'uuid': 'fake'} self.xenstore = { 'persist': { 'vm-data/user-metadata/a': '1', 'vm-data/user-metadata/b': '2', 'vm-data/user-metadata/c': '3', }, 'ephem': { 'vm-data/user-metadata/a': '1', 'vm-data/user-metadata/b': '2', 'vm-data/user-metadata/c': '3', }, } self.conn._vmops.change_instance_metadata(instance, diff) self.assertEqual(self.xenstore, { 'persist': { 'vm-data/user-metadata/a': '1', 'vm-data/user-metadata/b': '4', 'vm-data/user-metadata/c': '3', }, 'ephem': { 'vm-data/user-metadata/a': '1', 'vm-data/user-metadata/b': '4', 'vm-data/user-metadata/c': '3', }, }) def test_change_instance_metadata_delete(self): diff = dict(b=['-']) instance = {'uuid': 'fake'} self.xenstore = { 'persist': { 'vm-data/user-metadata/a': '1', 'vm-data/user-metadata/b': '2', 'vm-data/user-metadata/c': '3', }, 'ephem': { 'vm-data/user-metadata/a': '1', 'vm-data/user-metadata/b': '2', 'vm-data/user-metadata/c': '3', }, } self.conn._vmops.change_instance_metadata(instance, diff) self.assertEqual(self.xenstore, { 'persist': { 'vm-data/user-metadata/a': '1', 'vm-data/user-metadata/c': '3', }, 'ephem': { 'vm-data/user-metadata/a': '1', 'vm-data/user-metadata/c': '3', }, }) def test_change_instance_metadata_not_found(self): instance = {'uuid': 'not_found'} self.conn._vmops.change_instance_metadata(instance, "fake_diff") self.assertTrue(self.called_fake_get_vm_opaque_ref) class XenAPIFakeTestCase(test.NoDBTestCase): def test_query_matches(self): record = {'a': '1', 'b': '2', 'c_d': '3'} tests = {'field "a"="1"': True, 'field "b"="2"': True, 'field "b"="4"': False, 'not field "b"="4"': True, 'field "a"="1" and field "b"="4"': False, 'field "a"="1" or field "b"="4"': True, 'field "c__d"="3"': True, 'field \'b\'=\'2\'': True, } for query in tests.keys(): expected = tests[query] fail_msg = "for test '%s'" % query self.assertEqual(xenapi_fake._query_matches(record, query), expected, fail_msg) def test_query_bad_format(self): record = {'a': '1', 'b': '2', 'c': '3'} tests = ['"a"="1" or "b"="4"', 'a=1', ] for query in tests: fail_msg = "for test '%s'" % query self.assertFalse(xenapi_fake._query_matches(record, query), fail_msg) nova-17.0.1/nova/tests/unit/virt/xenapi/test_agent.py0000666000175000017500000004404013250073126022633 0ustar zuulzuul00000000000000# Copyright 2013 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the
# License for the specific language governing permissions and limitations
# under the License.

import base64
import time

import mock
from os_xenapi.client import host_agent
from os_xenapi.client import XenAPI
from oslo_utils import uuidutils

from nova import exception
from nova import test
from nova.virt.xenapi import agent


def _get_fake_instance(**kwargs):
    system_metadata = []
    for k, v in kwargs.items():
        system_metadata.append({
            "key": k,
            "value": v
        })

    return {
        "system_metadata": system_metadata,
        "uuid": "uuid",
        "key_data": "ssh-rsa asdf",
        "os_type": "asdf",
    }


class AgentTestCaseBase(test.NoDBTestCase):
    def _create_agent(self, instance, session="session"):
        self.session = session
        self.virtapi = "virtapi"
        self.vm_ref = "vm_ref"
        return agent.XenAPIBasedAgent(self.session, self.virtapi,
                                      instance, self.vm_ref)


class AgentImageFlagsTestCase(AgentTestCaseBase):
    def test_agent_is_present(self):
        self.flags(use_agent_default=False, group='xenserver')
        instance = {"system_metadata":
            [{"key": "image_xenapi_use_agent", "value": "true"}]}
        self.assertTrue(agent.should_use_agent(instance))

    def test_agent_is_disabled(self):
        self.flags(use_agent_default=True, group='xenserver')
        instance = {"system_metadata":
            [{"key": "image_xenapi_use_agent", "value": "false"}]}
        self.assertFalse(agent.should_use_agent(instance))

    def test_agent_uses_default_when_prop_invalid(self):
        self.flags(use_agent_default=True, group='xenserver')
        instance = {"system_metadata":
            [{"key": "image_xenapi_use_agent", "value": "bob"}],
            "uuid": "uuid"}
        self.assertTrue(agent.should_use_agent(instance))

    def test_agent_default_not_present(self):
        self.flags(use_agent_default=False, group='xenserver')
        instance = {"system_metadata": []}
        self.assertFalse(agent.should_use_agent(instance))

    def test_agent_default_present(self):
        self.flags(use_agent_default=True, group='xenserver')
        instance = {"system_metadata": []}
        self.assertTrue(agent.should_use_agent(instance))


class SysMetaKeyTestBase(object):
    key = None

    def _create_agent_with_value(self, value):
        kwargs = {self.key: value}
        instance = _get_fake_instance(**kwargs)
        return self._create_agent(instance)

    def test_get_sys_meta_key_true(self):
        agent = self._create_agent_with_value("true")
        self.assertTrue(agent._get_sys_meta_key(self.key))

    def test_get_sys_meta_key_false(self):
        agent = self._create_agent_with_value("False")
        self.assertFalse(agent._get_sys_meta_key(self.key))

    def test_get_sys_meta_key_invalid_is_false(self):
        agent = self._create_agent_with_value("invalid")
        self.assertFalse(agent._get_sys_meta_key(self.key))

    def test_get_sys_meta_key_missing_is_false(self):
        instance = _get_fake_instance()
        agent = self._create_agent(instance)
        self.assertFalse(agent._get_sys_meta_key(self.key))


class SkipSshFlagTestCase(SysMetaKeyTestBase, AgentTestCaseBase):
    key = "image_xenapi_skip_agent_inject_ssh"

    def test_skip_ssh_key_inject(self):
        agent = self._create_agent_with_value("True")
        self.assertTrue(agent._skip_ssh_key_inject())


class SkipFileInjectAtBootFlagTestCase(SysMetaKeyTestBase,
                                       AgentTestCaseBase):
    key = "image_xenapi_skip_agent_inject_files_at_boot"

    def test_skip_inject_files_at_boot(self):
        agent = self._create_agent_with_value("True")
        self.assertTrue(agent._skip_inject_files_at_boot())


class InjectSshTestCase(AgentTestCaseBase):
    @mock.patch.object(agent.XenAPIBasedAgent, 'inject_file')
    def test_inject_ssh_key_succeeds(self, mock_inject_file):
        instance = _get_fake_instance()
        agent = self._create_agent(instance)
        agent.inject_ssh_key()
mock_inject_file.assert_called_once_with("/root/.ssh/authorized_keys", "\n# The following ssh key " "was injected by Nova" "\nssh-rsa asdf\n") @mock.patch.object(agent.XenAPIBasedAgent, 'inject_file') def _test_inject_ssh_key_skipped(self, instance, mock_inject_file): agent = self._create_agent(instance) # make sure its not called agent.inject_ssh_key() mock_inject_file.assert_not_called() def test_inject_ssh_key_skipped_no_key_data(self): instance = _get_fake_instance() instance["key_data"] = None self._test_inject_ssh_key_skipped(instance) def test_inject_ssh_key_skipped_windows(self): instance = _get_fake_instance() instance["os_type"] = "windows" self._test_inject_ssh_key_skipped(instance) def test_inject_ssh_key_skipped_cloud_init_present(self): instance = _get_fake_instance( image_xenapi_skip_agent_inject_ssh="True") self._test_inject_ssh_key_skipped(instance) class FileInjectionTestCase(AgentTestCaseBase): @mock.patch.object(agent.XenAPIBasedAgent, '_call_agent') def test_inject_file(self, mock_call_agent): instance = _get_fake_instance() agent = self._create_agent(instance) b64_path = base64.b64encode(b'path') b64_contents = base64.b64encode(b'contents') agent.inject_file("path", "contents") mock_call_agent.assert_called_once_with(host_agent.inject_file, {'b64_contents': b64_contents, 'b64_path': b64_path}) @mock.patch.object(agent.XenAPIBasedAgent, 'inject_file') def test_inject_files(self, mock_inject_file): instance = _get_fake_instance() agent = self._create_agent(instance) files = [("path1", "content1"), ("path2", "content2")] agent.inject_files(files) mock_inject_file.assert_has_calls( [mock.call("path1", "content1"), mock.call("path2", "content2")]) @mock.patch.object(agent.XenAPIBasedAgent, 'inject_file') def test_inject_files_skipped_when_cloud_init_installed(self, mock_inject_file): instance = _get_fake_instance( image_xenapi_skip_agent_inject_files_at_boot="True") agent = self._create_agent(instance) files = [("path1", "content1"), ("path2", "content2")] agent.inject_files(files) mock_inject_file.assert_not_called() class RebootRetryTestCase(AgentTestCaseBase): @mock.patch.object(agent, '_wait_for_new_dom_id') def test_retry_on_reboot(self, mock_wait): mock_session = mock.Mock() mock_session.VM.get_domid.return_value = "fake_dom_id" agent = self._create_agent(None, mock_session) mock_method = mock.Mock().method() mock_method.side_effect = [XenAPI.Failure(["REBOOT: fake"]), {"returncode": '0', "message": "done"}] result = agent._call_agent(mock_method) self.assertEqual("done", result) self.assertTrue(mock_session.VM.get_domid.called) self.assertEqual(2, mock_method.call_count) mock_wait.assert_called_once_with(mock_session, self.vm_ref, "fake_dom_id", mock_method) @mock.patch.object(time, 'sleep') @mock.patch.object(time, 'time') def test_wait_for_new_dom_id_found(self, mock_time, mock_sleep): mock_session = mock.Mock() mock_session.VM.get_domid.return_value = "new" agent._wait_for_new_dom_id(mock_session, "vm_ref", "old", "method") mock_session.VM.get_domid.assert_called_once_with("vm_ref") self.assertFalse(mock_sleep.called) @mock.patch.object(time, 'sleep') @mock.patch.object(time, 'time') def test_wait_for_new_dom_id_after_retry(self, mock_time, mock_sleep): self.flags(agent_timeout=3, group="xenserver") mock_time.return_value = 0 mock_session = mock.Mock() old = "40" new = "42" mock_session.VM.get_domid.side_effect = [old, "-1", new] agent._wait_for_new_dom_id(mock_session, "vm_ref", old, "method") mock_session.VM.get_domid.assert_called_with("vm_ref") 
self.assertEqual(3, mock_session.VM.get_domid.call_count) self.assertEqual(2, mock_sleep.call_count) @mock.patch.object(time, 'sleep') @mock.patch.object(time, 'time') def test_wait_for_new_dom_id_timeout(self, mock_time, mock_sleep): self.flags(agent_timeout=3, group="xenserver") def fake_time(): fake_time.time = fake_time.time + 1 return fake_time.time fake_time.time = 0 mock_time.side_effect = fake_time mock_session = mock.Mock() mock_session.VM.get_domid.return_value = "old" mock_method = mock.Mock().method() mock_method.__name__ = "mock_method" self.assertRaises(exception.AgentTimeout, agent._wait_for_new_dom_id, mock_session, "vm_ref", "old", mock_method) self.assertEqual(4, mock_session.VM.get_domid.call_count) class SetAdminPasswordTestCase(AgentTestCaseBase): @mock.patch.object(agent.XenAPIBasedAgent, '_call_agent') @mock.patch("nova.virt.xenapi.agent.SimpleDH") def test_exchange_key_with_agent(self, mock_simple_dh, mock_call_agent): agent = self._create_agent(None) instance_mock = mock_simple_dh() instance_mock.get_public.return_value = 4321 mock_call_agent.return_value = "1234" result = agent._exchange_key_with_agent() mock_call_agent.assert_called_once_with(host_agent.key_init, {"pub": "4321"}, success_codes=['D0'], ignore_errors=False) result.compute_shared.assert_called_once_with(1234) @mock.patch.object(agent.XenAPIBasedAgent, '_call_agent') @mock.patch.object(agent.XenAPIBasedAgent, '_save_instance_password_if_sshkey_present') @mock.patch.object(agent.XenAPIBasedAgent, '_exchange_key_with_agent') def test_set_admin_password_works(self, mock_exchange, mock_save, mock_call_agent): mock_dh = mock.Mock(spec_set=agent.SimpleDH) mock_dh.encrypt.return_value = "enc_pass" mock_exchange.return_value = mock_dh agent_inst = self._create_agent(None) agent_inst.set_admin_password("new_pass") mock_dh.encrypt.assert_called_once_with("new_pass\n") mock_call_agent.assert_called_once_with(host_agent.password, {'enc_pass': 'enc_pass'}) mock_save.assert_called_once_with("new_pass") @mock.patch.object(agent.XenAPIBasedAgent, '_add_instance_fault') @mock.patch.object(agent.XenAPIBasedAgent, '_exchange_key_with_agent') def test_set_admin_password_silently_fails(self, mock_exchange, mock_add_fault): error = exception.AgentTimeout(method="fake") mock_exchange.side_effect = error agent_inst = self._create_agent(None) agent_inst.set_admin_password("new_pass") mock_add_fault.assert_called_once_with(error, mock.ANY) class UpgradeRequiredTestCase(test.NoDBTestCase): def test_less_than(self): self.assertTrue(agent.is_upgrade_required('1.2.3.4', '1.2.3.5')) def test_greater_than(self): self.assertFalse(agent.is_upgrade_required('1.2.3.5', '1.2.3.4')) def test_equal(self): self.assertFalse(agent.is_upgrade_required('1.2.3.4', '1.2.3.4')) def test_non_lexical(self): self.assertFalse(agent.is_upgrade_required('1.2.3.10', '1.2.3.4')) def test_length(self): self.assertTrue(agent.is_upgrade_required('1.2.3', '1.2.3.4')) @mock.patch.object(uuidutils, 'generate_uuid') class CallAgentTestCase(AgentTestCaseBase): def test_call_agent_success(self, mock_uuid): session = mock.Mock() instance = {"uuid": "fake"} addl_args = {"foo": "bar"} session.VM.get_domid.return_value = '42' mock_uuid.return_value = 1 mock_method = mock.Mock().method() mock_method.return_value = {'returncode': '4', 'message': "asdf\\r\\n"} mock_method.__name__ = "mock_method" self.assertEqual("asdf", agent._call_agent(session, instance, "vm_ref", mock_method, addl_args, timeout=300, success_codes=['0', '4'])) expected_args = {} 
expected_args.update(addl_args) mock_method.assert_called_once_with(session, 1, '42', 300, **expected_args) session.VM.get_domid.assert_called_once_with("vm_ref") def _call_agent_setup(self, session, mock_uuid, mock_method, returncode='0', success_codes=None, exception=None): session.XenAPI.Failure = XenAPI.Failure instance = {"uuid": "fake"} addl_args = {"foo": "bar"} session.VM.get_domid.return_value = "42" mock_uuid.return_value = 1 if exception: mock_method.side_effect = exception else: mock_method.return_value = {'returncode': returncode, 'message': "asdf\\r\\n"} return agent._call_agent(session, instance, "vm_ref", mock_method, addl_args, success_codes=success_codes) def _assert_agent_called(self, session, mock_uuid, mock_method): expected_args = {"foo": "bar"} mock_uuid.assert_called_once_with() mock_method.assert_called_once_with(session, 1, '42', 30, **expected_args) session.VM.get_domid.assert_called_once_with("vm_ref") def test_call_agent_works_with_defaults(self, mock_uuid): session = mock.Mock() mock_method = mock.Mock().method() mock_method.__name__ = "mock_method" self._call_agent_setup(session, mock_uuid, mock_method) self._assert_agent_called(session, mock_uuid, mock_method) def test_call_agent_fails_with_timeout(self, mock_uuid): session = mock.Mock() mock_method = mock.Mock().method() mock_method.__name__ = "mock_method" self.assertRaises(exception.AgentTimeout, self._call_agent_setup, session, mock_uuid, mock_method, exception=XenAPI.Failure(["TIMEOUT:fake"])) self._assert_agent_called(session, mock_uuid, mock_method) def test_call_agent_fails_with_not_implemented(self, mock_uuid): session = mock.Mock() mock_method = mock.Mock().method() mock_method.__name__ = "mock_method" self.assertRaises(exception.AgentNotImplemented, self._call_agent_setup, session, mock_uuid, mock_method, exception=XenAPI.Failure(["NOT IMPLEMENTED:"])) self._assert_agent_called(session, mock_uuid, mock_method) def test_call_agent_fails_with_other_error(self, mock_uuid): session = mock.Mock() mock_method = mock.Mock().method() mock_method.__name__ = "mock_method" self.assertRaises(exception.AgentError, self._call_agent_setup, session, mock_uuid, mock_method, exception=XenAPI.Failure(["asdf"])) self._assert_agent_called(session, mock_uuid, mock_method) def test_call_agent_fails_with_returned_error(self, mock_uuid): session = mock.Mock() mock_method = mock.Mock().method() mock_method.__name__ = "mock_method" self.assertRaises(exception.AgentError, self._call_agent_setup, session, mock_uuid, mock_method, returncode='42') self._assert_agent_called(session, mock_uuid, mock_method) class XenAPIBasedAgent(AgentTestCaseBase): @mock.patch.object(agent.XenAPIBasedAgent, "_add_instance_fault") @mock.patch.object(agent, "_call_agent") def test_call_agent_swallows_error(self, mock_call_agent, mock_add_instance_fault): fake_error = exception.AgentError(method="bob") mock_call_agent.side_effect = fake_error instance = _get_fake_instance() agent = self._create_agent(instance) agent._call_agent("bob") mock_call_agent.assert_called_once_with(agent.session, agent.instance, agent.vm_ref, "bob", None, None, None) mock_add_instance_fault.assert_called_once_with(fake_error, mock.ANY) @mock.patch.object(agent.XenAPIBasedAgent, "_add_instance_fault") @mock.patch.object(agent, "_call_agent") def test_call_agent_throws_error(self, mock_call_agent, mock_add_instance_fault): fake_error = exception.AgentError(method="bob") mock_call_agent.side_effect = fake_error instance = _get_fake_instance() agent = 
        self._create_agent(instance)
        self.assertRaises(exception.AgentError, agent._call_agent,
                          "bob", ignore_errors=False)
        mock_call_agent.assert_called_once_with(agent.session, agent.instance,
                                                agent.vm_ref, "bob", None,
                                                None, None)
        self.assertFalse(mock_add_instance_fault.called)
nova-17.0.1/nova/tests/unit/virt/xenapi/test_vgpu.py0000666000175000017500000002137113250073126022520 0ustar zuulzuul00000000000000
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock

from nova import test
from nova.virt.xenapi import host


class VGPUTestCase(test.NoDBTestCase):
    """Unit tests for Driver operations."""

    @mock.patch.object(host.HostState, 'update_status',
                       return_value='fake_stats_1')
    @mock.patch.object(host.HostState, '_get_vgpu_stats_in_group')
    def test_get_vgpu_stats_empty_cfg(self, mock_get, mock_update):
        # no vGPU type configured.
        self.flags(enabled_vgpu_types=[], group='devices')
        session = mock.Mock()
        host_obj = host.HostState(session)
        stats = host_obj._get_vgpu_stats()
        session.call_xenapi.assert_not_called()
        self.assertEqual(stats, {})

    @mock.patch.object(host.HostState, 'update_status',
                       return_value='fake_stats_1')
    @mock.patch.object(host.HostState, '_get_vgpu_stats_in_group')
    def test_get_vgpu_stats_single_type(self, mock_get, mock_update):
        # a single vGPU type is configured
        self.flags(enabled_vgpu_types=['type_name_1'], group='devices')
        session = mock.Mock()
        # multiple GPU groups
        session.call_xenapi.side_effect = [
            ['grp_ref1', 'grp_ref2'],  # GPU_group.get_all
            'uuid_1',  # GPU_group.get_uuid
            'uuid_2',  # GPU_group.get_uuid
        ]
        # Let it return None for the 2nd GPU group for the case
        # that it doesn't have the specified vGPU type enabled.
        mock_get.side_effect = ['fake_stats_1', None]
        host_obj = host.HostState(session)
        stats = host_obj._get_vgpu_stats()
        self.assertEqual(session.call_xenapi.call_count, 3)
        self.assertEqual(mock_update.call_count, 1)
        self.assertEqual(mock_get.call_count, 2)
        self.assertEqual(stats, {'uuid_1': 'fake_stats_1'})

    @mock.patch.object(host.HostState, 'update_status',
                       return_value='fake_stats_1')
    @mock.patch.object(host.HostState, '_get_vgpu_stats_in_group')
    def test_get_vgpu_stats_multi_types(self, mock_get, mock_update):
        # When multiple vGPU types are configured, it uses the first one.
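        # Editor's note: illustrative nova.conf form of the option this test
        # sets via self.flags (the values are the test's own fakes, not a
        # recommendation):
        #
        #     [devices]
        #     enabled_vgpu_types = type_name_1, type_name_2
        #
        # As asserted below, only the first listed type is honoured when the
        # driver builds its vGPU stats.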
        self.flags(enabled_vgpu_types=['type_name_1', 'type_name_2'],
                   group='devices')
        session = mock.Mock()
        session.call_xenapi.side_effect = [
            ['grp_ref1'],  # GPU_group.get_all
            'uuid_1',  # GPU_group.get_uuid
        ]
        mock_get.side_effect = ['fake_stats_1']
        host_obj = host.HostState(session)
        stats = host_obj._get_vgpu_stats()
        self.assertEqual(session.call_xenapi.call_count, 2)
        self.assertEqual(mock_update.call_count, 1)
        self.assertEqual(stats, {'uuid_1': 'fake_stats_1'})
        # called with the first vGPU type: 'type_name_1'
        mock_get.assert_called_with('grp_ref1', ['type_name_1'])

    @mock.patch.object(host.HostState, 'update_status',
                       return_value='fake_stats_1')
    @mock.patch.object(host.HostState, '_get_total_vgpu_in_grp',
                       return_value=7)
    def test_get_vgpu_stats_in_group(self, mock_get, mock_update):
        # Test it will return vGPU stats for the enabled vGPU type.
        enabled_vgpu_types = ['type_name_2']
        session = mock.Mock()
        session.call_xenapi.side_effect = [
            ['type_ref_1', 'type_ref_2'],  # GPU_group.get_enabled_VGPU_types
            'type_name_1',  # VGPU_type.get_model_name
            'type_name_2',  # VGPU_type.get_model_name
            'type_uuid_2',  # VGPU_type.get_uuid
            '4',  # VGPU_type.get_max_heads
            '6',  # GPU_group.get_remaining_capacity
        ]
        host_obj = host.HostState(session)
        stats = host_obj._get_vgpu_stats_in_group('grp_ref',
                                                  enabled_vgpu_types)
        expect_stats = {'uuid': 'type_uuid_2',
                        'type_name': 'type_name_2',
                        'max_heads': 4,
                        'total': 7,
                        'remaining': 6,
                        }
        self.assertEqual(session.call_xenapi.call_count, 6)
        # It should call get_uuid for the vGPU type passed via
        # *enabled_vgpu_types* (the arg for get_uuid should be 'type_ref_2').
        get_uuid_call = [mock.call('VGPU_type.get_uuid', 'type_ref_2')]
        session.call_xenapi.assert_has_calls(get_uuid_call)
        mock_get.assert_called_once()
        self.assertEqual(expect_stats, stats)

    @mock.patch.object(host.HostState, 'update_status')
    @mock.patch.object(host.HostState, '_get_total_vgpu_in_grp',
                       return_value=7)
    def test_get_vgpu_stats_in_group_multiple(self, mock_get, mock_update):
        # Test when multiple vGPU types are enabled in the same group;
        # it should only return the first vGPU type's stats.
        enabled_vgpu_types = ['type_name_1', 'type_name_2']
        session = mock.Mock()
        session.call_xenapi.side_effect = [
            ['type_ref_1', 'type_ref_2'],  # GPU_group.get_enabled_VGPU_types
            'type_name_1',  # VGPU_type.get_model_name
            'type_name_2',  # VGPU_type.get_model_name
            'type_uuid_1',  # VGPU_type.get_uuid
            '4',  # VGPU_type.get_max_heads
            '6',  # GPU_group.get_remaining_capacity
        ]
        host_obj = host.HostState(session)
        stats = host_obj._get_vgpu_stats_in_group('grp_ref',
                                                  enabled_vgpu_types)
        expect_stats = {
            'uuid': 'type_uuid_1',
            'type_name': 'type_name_1',
            'max_heads': 4,
            'total': 7,
            'remaining': 6,
        }
        self.assertEqual(session.call_xenapi.call_count, 6)
        # It should call get_uuid for the first vGPU type (the arg for
        # get_uuid should be 'type_ref_1').
        get_uuid_call = [mock.call('VGPU_type.get_uuid', 'type_ref_1')]
        session.call_xenapi.assert_has_calls(get_uuid_call)
        mock_get.assert_called_once()
        self.assertEqual(expect_stats, stats)

    @mock.patch.object(host.HostState, 'update_status')
    @mock.patch.object(host.HostState, '_get_total_vgpu_in_grp',
                       return_value=7)
    def test_get_vgpu_stats_in_group_cfg_not_in_grp(self, mock_get,
                                                    mock_update):
        # Test when the enabled_vgpu_types entry is not a valid type
        # belonging to the GPU group. It will return None.
        enabled_vgpu_types = ['bad_type_name']
        session = mock.Mock()
        session.call_xenapi.side_effect = [
            ['type_ref_1', 'type_ref_2'],  # GPU_group.get_enabled_VGPU_types
            'type_name_1',  # VGPU_type.get_model_name
            'type_name_2',  # VGPU_type.get_model_name
        ]
        host_obj = host.HostState(session)
        stats = host_obj._get_vgpu_stats_in_group('grp_ref',
                                                  enabled_vgpu_types)
        expect_stats = None
        self.assertEqual(session.call_xenapi.call_count, 3)
        mock_get.assert_not_called()
        self.assertEqual(expect_stats, stats)

    @mock.patch.object(host.HostState, 'update_status')
    def test_get_total_vgpu_in_grp(self, mock_update):
        session = mock.Mock()
        # The fake PGPU records returned from the call_xenapi
        # "PGPU.get_all_records_where" call.
        pgpu_records = {
            'pgpu_ref1': {
                'enabled_VGPU_types': ['type_ref1', 'type_ref2'],
                'supported_VGPU_max_capacities': {
                    'type_ref1': '1',
                    'type_ref2': '3',
                }
            },
            'pgpu_ref2': {
                'enabled_VGPU_types': ['type_ref1', 'type_ref2'],
                'supported_VGPU_max_capacities': {
                    'type_ref1': '1',
                    'type_ref2': '3',
                }
            }
        }
        session.call_xenapi.return_value = pgpu_records
        host_obj = host.HostState(session)
        total = host_obj._get_total_vgpu_in_grp('grp_ref', 'type_ref1')
        session.call_xenapi.assert_called_with(
            'PGPU.get_all_records_where', 'field "GPU_group" = "grp_ref"')
        # The total number of vGPUs is the sum of the available vGPUs of
        # 'type_ref1' across all PGPUs.
        self.assertEqual(total, 2)
nova-17.0.1/nova/tests/unit/virt/disk/0000775000175000017500000000000013250073472017572 5ustar zuulzuul00000000000000
nova-17.0.1/nova/tests/unit/virt/disk/test_inject.py0000666000175000017500000002723313250073126022464 0ustar zuulzuul00000000000000
# Copyright (C) 2012 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
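# Editor's note on the test_vgpu.py case that ends above: a minimal,
# self-contained sketch of the capacity arithmetic it pins down. The helper
# name here is illustrative (an assumption), not nova's code; it relies only
# on the record shape shown in the fake pgpu_records.
def _total_vgpu_for_type(pgpu_records, type_ref):
    # Sum each PGPU's supported max capacity for the requested vGPU type;
    # capacities arrive from XenAPI as strings, hence the int() conversion.
    return sum(int(rec['supported_VGPU_max_capacities'].get(type_ref, '0'))
               for rec in pgpu_records.values())

# With the two fake PGPUs above, 'type_ref1' yields 1 + 1 == 2, matching
# assertEqual(total, 2).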
from collections import OrderedDict import os import fixtures from nova import exception from nova import test from nova.tests.unit.virt.disk.vfs import fakeguestfs from nova.virt.disk import api as diskapi from nova.virt.disk.vfs import guestfs as vfsguestfs from nova.virt.image import model as imgmodel class VirtDiskTest(test.NoDBTestCase): def setUp(self): super(VirtDiskTest, self).setUp() self.useFixture( fixtures.MonkeyPatch('nova.virt.disk.vfs.guestfs.guestfs', fakeguestfs)) self.file = imgmodel.LocalFileImage("/some/file", imgmodel.FORMAT_QCOW2) def test_inject_data(self): self.assertTrue(diskapi.inject_data( imgmodel.LocalFileImage("/some/file", imgmodel.FORMAT_QCOW2))) self.assertTrue(diskapi.inject_data( imgmodel.LocalFileImage("/some/file", imgmodel.FORMAT_RAW), mandatory=('files',))) self.assertTrue(diskapi.inject_data( imgmodel.LocalFileImage("/some/file", imgmodel.FORMAT_RAW), key="mysshkey", mandatory=('key',))) os_name = os.name os.name = 'nt' # Cause password injection to fail self.assertRaises(exception.NovaException, diskapi.inject_data, imgmodel.LocalFileImage("/some/file", imgmodel.FORMAT_RAW), admin_password="p", mandatory=('admin_password',)) self.assertFalse(diskapi.inject_data( imgmodel.LocalFileImage("/some/file", imgmodel.FORMAT_RAW), admin_password="p")) os.name = os_name self.assertFalse(diskapi.inject_data( imgmodel.LocalFileImage("/some/fail/file", imgmodel.FORMAT_RAW), key="mysshkey")) def test_inject_data_key(self): vfs = vfsguestfs.VFSGuestFS(self.file) vfs.setup() diskapi._inject_key_into_fs("mysshkey", vfs) self.assertIn("/root/.ssh", vfs.handle.files) self.assertEqual(vfs.handle.files["/root/.ssh"], {'isdir': True, 'gid': 0, 'uid': 0, 'mode': 0o700}) self.assertIn("/root/.ssh/authorized_keys", vfs.handle.files) self.assertEqual(vfs.handle.files["/root/.ssh/authorized_keys"], {'isdir': False, 'content': "Hello World\n# The following ssh " + "key was injected by Nova\nmysshkey\n", 'gid': 100, 'uid': 100, 'mode': 0o600}) vfs.teardown() def test_inject_data_key_with_selinux(self): vfs = vfsguestfs.VFSGuestFS(self.file) vfs.setup() vfs.make_path("etc/selinux") vfs.make_path("etc/rc.d") diskapi._inject_key_into_fs("mysshkey", vfs) self.assertIn("/etc/rc.d/rc.local", vfs.handle.files) self.assertEqual(vfs.handle.files["/etc/rc.d/rc.local"], {'isdir': False, 'content': "Hello World#!/bin/sh\n# Added by " + "Nova to ensure injected ssh keys " + "have the right context\nrestorecon " + "-RF root/.ssh 2>/dev/null || :\n", 'gid': 100, 'uid': 100, 'mode': 0o700}) self.assertIn("/root/.ssh", vfs.handle.files) self.assertEqual(vfs.handle.files["/root/.ssh"], {'isdir': True, 'gid': 0, 'uid': 0, 'mode': 0o700}) self.assertIn("/root/.ssh/authorized_keys", vfs.handle.files) self.assertEqual(vfs.handle.files["/root/.ssh/authorized_keys"], {'isdir': False, 'content': "Hello World\n# The following ssh " + "key was injected by Nova\nmysshkey\n", 'gid': 100, 'uid': 100, 'mode': 0o600}) vfs.teardown() def test_inject_data_key_with_selinux_append_with_newline(self): vfs = vfsguestfs.VFSGuestFS(self.file) vfs.setup() vfs.replace_file("/etc/rc.d/rc.local", "#!/bin/sh\necho done") vfs.make_path("etc/selinux") vfs.make_path("etc/rc.d") diskapi._inject_key_into_fs("mysshkey", vfs) self.assertIn("/etc/rc.d/rc.local", vfs.handle.files) self.assertEqual(vfs.handle.files["/etc/rc.d/rc.local"], {'isdir': False, 'content': "#!/bin/sh\necho done\n# Added " "by Nova to ensure injected ssh keys have " "the right context\nrestorecon -RF " "root/.ssh 2>/dev/null || :\n", 'gid': 100, 'uid': 100, 
'mode': 0o700}) vfs.teardown() def test_inject_net(self): vfs = vfsguestfs.VFSGuestFS(self.file) vfs.setup() diskapi._inject_net_into_fs("mynetconfig", vfs) self.assertIn("/etc/network/interfaces", vfs.handle.files) self.assertEqual(vfs.handle.files["/etc/network/interfaces"], {'content': 'mynetconfig', 'gid': 100, 'isdir': False, 'mode': 0o700, 'uid': 100}) vfs.teardown() def test_inject_metadata(self): vfs = vfsguestfs.VFSGuestFS(self.file) vfs.setup() metadata = {"foo": "bar", "eek": "wizz"} metadata = OrderedDict(sorted(metadata.items())) diskapi._inject_metadata_into_fs(metadata, vfs) self.assertIn("/meta.js", vfs.handle.files) self.assertEqual({'content': '{"eek": "wizz", ' + '"foo": "bar"}', 'gid': 100, 'isdir': False, 'mode': 0o700, 'uid': 100}, vfs.handle.files["/meta.js"]) vfs.teardown() def test_inject_admin_password(self): vfs = vfsguestfs.VFSGuestFS(self.file) vfs.setup() def fake_salt(): return "1234567890abcdef" self.stub_out('nova.virt.disk.api._generate_salt', fake_salt) vfs.handle.write("/etc/shadow", "root:$1$12345678$xxxxx:14917:0:99999:7:::\n" + "bin:*:14495:0:99999:7:::\n" + "daemon:*:14495:0:99999:7:::\n") vfs.handle.write("/etc/passwd", "root:x:0:0:root:/root:/bin/bash\n" + "bin:x:1:1:bin:/bin:/sbin/nologin\n" + "daemon:x:2:2:daemon:/sbin:/sbin/nologin\n") diskapi._inject_admin_password_into_fs("123456", vfs) self.assertEqual(vfs.handle.files["/etc/passwd"], {'content': "root:x:0:0:root:/root:/bin/bash\n" + "bin:x:1:1:bin:/bin:/sbin/nologin\n" + "daemon:x:2:2:daemon:/sbin:" + "/sbin/nologin\n", 'gid': 100, 'isdir': False, 'mode': 0o700, 'uid': 100}) shadow = vfs.handle.files["/etc/shadow"] # if the encrypted password is only 13 characters long, then # nova.virt.disk.api:_set_password fell back to DES. if len(shadow['content']) == 91: self.assertEqual(shadow, {'content': "root:12tir.zIbWQ3c" + ":14917:0:99999:7:::\n" + "bin:*:14495:0:99999:7:::\n" + "daemon:*:14495:0:99999:7:::\n", 'gid': 100, 'isdir': False, 'mode': 0o700, 'uid': 100}) else: self.assertEqual(shadow, {'content': "root:$1$12345678$a4ge4d5iJ5vw" + "vbFS88TEN0:14917:0:99999:7:::\n" + "bin:*:14495:0:99999:7:::\n" + "daemon:*:14495:0:99999:7:::\n", 'gid': 100, 'isdir': False, 'mode': 0o700, 'uid': 100}) vfs.teardown() def test_inject_files_into_fs(self): vfs = vfsguestfs.VFSGuestFS(self.file) vfs.setup() diskapi._inject_files_into_fs([("/path/to/not/exists/file", "inject-file-contents")], vfs) self.assertIn("/path/to/not/exists", vfs.handle.files) shadow_dir = vfs.handle.files["/path/to/not/exists"] self.assertEqual(shadow_dir, {"isdir": True, "gid": 0, "uid": 0, "mode": 0o744}) shadow_file = vfs.handle.files["/path/to/not/exists/file"] self.assertEqual(shadow_file, {"isdir": False, "content": "inject-file-contents", "gid": 100, "uid": 100, "mode": 0o700}) vfs.teardown() def test_inject_files_into_fs_dir_exists(self): vfs = vfsguestfs.VFSGuestFS(self.file) vfs.setup() called = {'make_path': False} def fake_has_file(*args, **kwargs): return True def fake_make_path(*args, **kwargs): called['make_path'] = True self.stub_out('nova.virt.disk.vfs.guestfs.VFSGuestFS.has_file', fake_has_file) self.stub_out('nova.virt.disk.vfs.guestfs.VFSGuestFS.make_path', fake_make_path) # test for already exists dir diskapi._inject_files_into_fs([("/path/to/exists/file", "inject-file-contents")], vfs) self.assertIn("/path/to/exists/file", vfs.handle.files) self.assertFalse(called['make_path']) # test for root dir diskapi._inject_files_into_fs([("/inject-file", "inject-file-contents")], vfs) self.assertIn("/inject-file", 
vfs.handle.files) self.assertFalse(called['make_path']) # test for null dir vfs.handle.files.pop("/inject-file") diskapi._inject_files_into_fs([("inject-file", "inject-file-contents")], vfs) self.assertIn("/inject-file", vfs.handle.files) self.assertFalse(called['make_path']) vfs.teardown() nova-17.0.1/nova/tests/unit/virt/disk/test_api.py0000666000175000017500000002277313250073126021765 0ustar zuulzuul00000000000000# Copyright 2012 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import tempfile import mock from oslo_concurrency import processutils from oslo_utils import units from nova import test from nova import utils from nova.virt.disk import api from nova.virt.disk.mount import api as mount from nova.virt.disk.vfs import localfs from nova.virt.image import model as imgmodel class FakeMount(object): device = None @staticmethod def instance_for_format(image, mountdir, partition): return FakeMount() def get_dev(self): pass def unget_dev(self): pass class APITestCase(test.NoDBTestCase): @mock.patch.object(localfs.VFSLocalFS, 'get_image_fs', autospec=True, return_value='') def test_can_resize_need_fs_type_specified(self, mock_image_fs): imgfile = tempfile.NamedTemporaryFile() self.addCleanup(imgfile.close) image = imgmodel.LocalFileImage(imgfile.name, imgmodel.FORMAT_QCOW2) self.assertFalse(api.is_image_extendable(image)) self.assertTrue(mock_image_fs.called) @mock.patch.object(utils, 'execute', autospec=True) def test_is_image_extendable_raw(self, mock_exec): imgfile = tempfile.NamedTemporaryFile() image = imgmodel.LocalFileImage(imgfile, imgmodel.FORMAT_RAW) self.addCleanup(imgfile.close) self.assertTrue(api.is_image_extendable(image)) mock_exec.assert_called_once_with('e2label', imgfile) @mock.patch('oslo_concurrency.processutils.execute', autospec=True) def test_resize2fs_success(self, mock_exec): imgfile = tempfile.NamedTemporaryFile() self.addCleanup(imgfile.close) api.resize2fs(imgfile) mock_exec.assert_has_calls( [mock.call('e2fsck', '-fp', imgfile, check_exit_code=[0, 1, 2]), mock.call('resize2fs', imgfile, check_exit_code=False)]) @mock.patch('oslo_concurrency.processutils.execute') @mock.patch('nova.privsep.fs.resize2fs') def test_resize2fs_success_as_root(self, mock_resize, mock_exec): imgfile = tempfile.NamedTemporaryFile() self.addCleanup(imgfile.close) api.resize2fs(imgfile, run_as_root=True) mock_exec.assert_not_called() mock_resize.assert_called() @mock.patch('oslo_concurrency.processutils.execute', autospec=True, side_effect=processutils.ProcessExecutionError("fs error")) def test_resize2fs_e2fsck_fails(self, mock_exec): imgfile = tempfile.NamedTemporaryFile() self.addCleanup(imgfile.close) api.resize2fs(imgfile) mock_exec.assert_called_once_with('e2fsck', '-fp', imgfile, check_exit_code=[0, 1, 2]) @mock.patch.object(api, 'can_resize_image', autospec=True, return_value=True) @mock.patch.object(api, 'is_image_extendable', autospec=True, return_value=True) @mock.patch.object(api, 'resize2fs', autospec=True) @mock.patch.object(mount.Mount, 
'instance_for_format') @mock.patch.object(utils, 'execute', autospec=True) def test_extend_qcow_success(self, mock_exec, mock_inst, mock_resize, mock_extendable, mock_can_resize): imgfile = tempfile.NamedTemporaryFile() self.addCleanup(imgfile.close) imgsize = 10 device = "/dev/sdh" image = imgmodel.LocalFileImage(imgfile, imgmodel.FORMAT_QCOW2) self.flags(resize_fs_using_block_device=True) mounter = FakeMount.instance_for_format( image, None, None) mounter.device = device mock_inst.return_value = mounter with test.nested( mock.patch.object(mounter, 'get_dev', autospec=True, return_value=True), mock.patch.object(mounter, 'unget_dev', autospec=True), ) as (mock_get_dev, mock_unget_dev): api.extend(image, imgsize) mock_can_resize.assert_called_once_with(imgfile, imgsize) mock_exec.assert_called_once_with('qemu-img', 'resize', imgfile, imgsize) mock_extendable.assert_called_once_with(image) mock_inst.assert_called_once_with(image, None, None) mock_resize.assert_called_once_with(mounter.device, run_as_root=True, check_exit_code=[0]) mock_get_dev.assert_called_once_with() mock_unget_dev.assert_called_once_with() @mock.patch.object(api, 'can_resize_image', autospec=True, return_value=True) @mock.patch.object(api, 'is_image_extendable', autospec=True) @mock.patch.object(utils, 'execute', autospec=True) def test_extend_qcow_no_resize(self, mock_execute, mock_extendable, mock_can_resize_image): imgfile = tempfile.NamedTemporaryFile() self.addCleanup(imgfile.close) imgsize = 10 image = imgmodel.LocalFileImage(imgfile, imgmodel.FORMAT_QCOW2) self.flags(resize_fs_using_block_device=False) api.extend(image, imgsize) mock_can_resize_image.assert_called_once_with(imgfile, imgsize) mock_execute.assert_called_once_with('qemu-img', 'resize', imgfile, imgsize) self.assertFalse(mock_extendable.called) @mock.patch.object(api, 'can_resize_image', autospec=True, return_value=True) @mock.patch('nova.privsep.libvirt.ploop_resize') def test_extend_ploop(self, mock_ploop_resize, mock_can_resize_image): imgfile = tempfile.NamedTemporaryFile() self.addCleanup(imgfile.close) imgsize = 10 * units.Gi image = imgmodel.LocalFileImage(imgfile, imgmodel.FORMAT_PLOOP) api.extend(image, imgsize) mock_can_resize_image.assert_called_once_with(image.path, imgsize) mock_ploop_resize.assert_called_once_with(imgfile, imgsize) @mock.patch.object(api, 'can_resize_image', autospec=True, return_value=True) @mock.patch.object(api, 'resize2fs', autospec=True) @mock.patch.object(utils, 'execute', autospec=True) def test_extend_raw_success(self, mock_exec, mock_resize, mock_can_resize): imgfile = tempfile.NamedTemporaryFile() self.addCleanup(imgfile.close) imgsize = 10 image = imgmodel.LocalFileImage(imgfile, imgmodel.FORMAT_RAW) api.extend(image, imgsize) mock_exec.assert_has_calls( [mock.call('qemu-img', 'resize', imgfile, imgsize), mock.call('e2label', image.path)]) mock_resize.assert_called_once_with(imgfile, run_as_root=False, check_exit_code=[0]) mock_can_resize.assert_called_once_with(imgfile, imgsize) HASH_VFAT = utils.get_hash_str(api.FS_FORMAT_VFAT)[:7] HASH_EXT4 = utils.get_hash_str(api.FS_FORMAT_EXT4)[:7] HASH_NTFS = utils.get_hash_str(api.FS_FORMAT_NTFS)[:7] def test_get_file_extension_for_os_type(self): self.assertEqual(self.HASH_VFAT, api.get_file_extension_for_os_type(None, None)) self.assertEqual(self.HASH_EXT4, api.get_file_extension_for_os_type('linux', None)) self.assertEqual(self.HASH_NTFS, api.get_file_extension_for_os_type( 'windows', None)) def test_get_file_extension_for_os_type_with_overrides(self): with 
mock.patch('nova.virt.disk.api._DEFAULT_MKFS_COMMAND', 'custom mkfs command'): self.assertEqual("a74d253", api.get_file_extension_for_os_type( 'linux', None)) self.assertEqual("a74d253", api.get_file_extension_for_os_type( 'windows', None)) self.assertEqual("a74d253", api.get_file_extension_for_os_type('osx', None)) with mock.patch.dict(api._MKFS_COMMAND, {'osx': 'custom mkfs command'}, clear=True): self.assertEqual(self.HASH_VFAT, api.get_file_extension_for_os_type(None, None)) self.assertEqual(self.HASH_EXT4, api.get_file_extension_for_os_type('linux', None)) self.assertEqual(self.HASH_NTFS, api.get_file_extension_for_os_type( 'windows', None)) self.assertEqual("a74d253", api.get_file_extension_for_os_type( 'osx', None)) nova-17.0.1/nova/tests/unit/virt/disk/vfs/0000775000175000017500000000000013250073472020370 5ustar zuulzuul00000000000000nova-17.0.1/nova/tests/unit/virt/disk/vfs/fakeguestfs.py0000666000175000017500000001361613250073126023256 0ustar zuulzuul00000000000000# Copyright 2012 Red Hat, Inc # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. EVENT_APPLIANCE = 0x1 EVENT_LIBRARY = 0x2 EVENT_WARNING = 0x3 EVENT_TRACE = 0x4 class GuestFS(object): SUPPORT_CLOSE_ON_EXIT = True SUPPORT_RETURN_DICT = True CAN_SET_OWNERSHIP = True def __init__(self, **kwargs): if not self.SUPPORT_CLOSE_ON_EXIT and 'close_on_exit' in kwargs: raise TypeError('close_on_exit') if not self.SUPPORT_RETURN_DICT and 'python_return_dict' in kwargs: raise TypeError('python_return_dict') self._python_return_dict = kwargs.get('python_return_dict', False) self.kwargs = kwargs self.drives = [] self.running = False self.closed = False self.mounts = [] self.files = {} self.auginit = False self.root_mounted = False self.backend_settings = None self.trace_enabled = False self.verbose_enabled = False self.event_callback = None def launch(self): self.running = True def shutdown(self): self.running = False self.mounts = [] self.drives = [] def set_backend_settings(self, settings): self.backend_settings = settings def close(self): self.closed = True def add_drive_opts(self, file, *args, **kwargs): if file == "/some/fail/file": raise RuntimeError("%s: No such file or directory", file) self.drives.append((file, kwargs)) def add_drive(self, file, format=None, *args, **kwargs): self.add_drive_opts(file, format=None, *args, **kwargs) def inspect_os(self): return ["/dev/guestvgf/lv_root"] def inspect_get_mountpoints(self, dev): mountpoints = [("/home", "/dev/mapper/guestvgf-lv_home"), ("/", "/dev/mapper/guestvgf-lv_root"), ("/boot", "/dev/vda1")] if self.SUPPORT_RETURN_DICT and self._python_return_dict: return dict(mountpoints) else: return mountpoints def mount_options(self, options, device, mntpoint): if mntpoint == "/": self.root_mounted = True else: if not self.root_mounted: raise RuntimeError( "mount: %s: No such file or directory" % mntpoint) self.mounts.append((options, device, mntpoint)) def mkdir_p(self, path): if path not in self.files: self.files[path] = { "isdir": True, "gid": 100, "uid": 100, "mode": 0o700 } def read_file(self, path): 
        if path not in self.files:
            self.files[path] = {
                "isdir": False,
                "content": "Hello World",
                "gid": 100,
                "uid": 100,
                "mode": 0o700
            }
        return self.files[path]["content"]

    def write(self, path, content):
        if path not in self.files:
            self.files[path] = {
                "isdir": False,
                "content": "Hello World",
                "gid": 100,
                "uid": 100,
                "mode": 0o700
            }
        self.files[path]["content"] = content

    def write_append(self, path, content):
        if path not in self.files:
            self.files[path] = {
                "isdir": False,
                "content": "Hello World",
                "gid": 100,
                "uid": 100,
                "mode": 0o700
            }
        self.files[path]["content"] = self.files[path]["content"] + content

    def stat(self, path):
        if path not in self.files:
            raise RuntimeError("No such file: " + path)
        return self.files[path]["mode"]

    def chown(self, uid, gid, path):
        if path not in self.files:
            raise RuntimeError("No such file: " + path)
        if uid != -1:
            self.files[path]["uid"] = uid
        if gid != -1:
            self.files[path]["gid"] = gid

    def chmod(self, mode, path):
        if path not in self.files:
            raise RuntimeError("No such file: " + path)
        self.files[path]["mode"] = mode

    def aug_init(self, root, flags):
        self.auginit = True

    def aug_close(self):
        self.auginit = False

    def aug_get(self, cfgpath):
        if not self.auginit:
            raise RuntimeError("Augeas not initialized")
        if ((cfgpath.startswith("/files/etc/passwd") or
             cfgpath.startswith("/files/etc/group")) and
                not self.CAN_SET_OWNERSHIP):
            raise RuntimeError("Node not found %s", cfgpath)
        if cfgpath == "/files/etc/passwd/root/uid":
            return 0
        elif cfgpath == "/files/etc/passwd/fred/uid":
            return 105
        elif cfgpath == "/files/etc/passwd/joe/uid":
            return 110
        elif cfgpath == "/files/etc/group/root/gid":
            return 0
        elif cfgpath == "/files/etc/group/users/gid":
            return 500
        elif cfgpath == "/files/etc/group/admins/gid":
            return 600
        raise RuntimeError("Unknown path %s", cfgpath)

    def set_trace(self, enabled):
        self.trace_enabled = enabled

    def set_verbose(self, enabled):
        self.verbose_enabled = enabled

    def set_event_callback(self, func, events):
        self.event_callback = (func, events)

    def vfs_type(self, dev):
        return 'ext3'
nova-17.0.1/nova/tests/unit/virt/disk/vfs/test_guestfs.py0000666000175000017500000003340413250073126023463 0ustar zuulzuul00000000000000
# Copyright (C) 2012 Red Hat, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
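# Editor's note: the fake GuestFS above returns inspect_get_mountpoints()
# either as a dict or as a list of (path, device) tuples, depending on
# python_return_dict, mirroring newer and older libguestfs bindings. A
# minimal sketch (an assumed caller-side helper, not nova's code) of
# normalising the two shapes:
def normalize_mountpoints(result):
    # dict() accepts both a mapping and an iterable of pairs, so either
    # return style collapses to a {mountpoint: device} mapping.
    return dict(result)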
import fixtures import mock from nova import exception from nova import test from nova.tests.unit.virt.disk.vfs import fakeguestfs from nova.virt.disk.vfs import guestfs as vfsimpl from nova.virt.image import model as imgmodel class VirtDiskVFSGuestFSTest(test.NoDBTestCase): def setUp(self): super(VirtDiskVFSGuestFSTest, self).setUp() self.useFixture( fixtures.MonkeyPatch('nova.virt.disk.vfs.guestfs.guestfs', fakeguestfs)) self.qcowfile = imgmodel.LocalFileImage("/dummy.qcow2", imgmodel.FORMAT_QCOW2) self.rawfile = imgmodel.LocalFileImage("/dummy.img", imgmodel.FORMAT_RAW) self.lvmfile = imgmodel.LocalBlockImage("/dev/volgroup/myvol") self.rbdfile = imgmodel.RBDImage("myvol", "mypool", "cthulu", "arrrrrgh", ["server1:123", "server2:123"]) def _do_test_appliance_setup_inspect(self, image, drives, forcetcg): if forcetcg: vfsimpl.force_tcg() else: vfsimpl.force_tcg(False) vfs = vfsimpl.VFSGuestFS( image, partition=-1) vfs.setup() if forcetcg: self.assertEqual("force_tcg", vfs.handle.backend_settings) vfsimpl.force_tcg(False) else: self.assertIsNone(vfs.handle.backend_settings) self.assertTrue(vfs.handle.running) self.assertEqual(drives, vfs.handle.drives) self.assertEqual(3, len(vfs.handle.mounts)) self.assertEqual("/dev/mapper/guestvgf-lv_root", vfs.handle.mounts[0][1]) self.assertEqual("/dev/vda1", vfs.handle.mounts[1][1]) self.assertEqual("/dev/mapper/guestvgf-lv_home", vfs.handle.mounts[2][1]) self.assertEqual("/", vfs.handle.mounts[0][2]) self.assertEqual("/boot", vfs.handle.mounts[1][2]) self.assertEqual("/home", vfs.handle.mounts[2][2]) handle = vfs.handle vfs.teardown() self.assertIsNone(vfs.handle) self.assertFalse(handle.running) self.assertTrue(handle.closed) self.assertEqual(0, len(handle.mounts)) def test_appliance_setup_inspect_auto(self): drives = [("/dummy.qcow2", {"format": "qcow2"})] self._do_test_appliance_setup_inspect(self.qcowfile, drives, False) def test_appliance_setup_inspect_tcg(self): drives = [("/dummy.qcow2", {"format": "qcow2"})] self._do_test_appliance_setup_inspect(self.qcowfile, drives, True) def test_appliance_setup_inspect_raw(self): drives = [("/dummy.img", {"format": "raw"})] self._do_test_appliance_setup_inspect(self.rawfile, drives, True) def test_appliance_setup_inspect_lvm(self): drives = [("/dev/volgroup/myvol", {"format": "raw"})] self._do_test_appliance_setup_inspect(self.lvmfile, drives, True) def test_appliance_setup_inspect_rbd(self): drives = [("mypool/myvol", {"format": "raw", "protocol": "rbd", "username": "cthulu", "secret": "arrrrrgh", "server": ["server1:123", "server2:123"]})] self._do_test_appliance_setup_inspect(self.rbdfile, drives, True) def test_appliance_setup_inspect_no_root_raises(self): vfs = vfsimpl.VFSGuestFS(self.qcowfile, partition=-1) # call setup to init the handle so we can stub it vfs.setup() self.assertIsNone(vfs.handle.backend_settings) with mock.patch.object( vfs.handle, 'inspect_os', return_value=[]) as mock_inspect_os: self.assertRaises(exception.NovaException, vfs.setup_os_inspect) mock_inspect_os.assert_called_once_with() def test_appliance_setup_inspect_multi_boots_raises(self): vfs = vfsimpl.VFSGuestFS(self.qcowfile, partition=-1) # call setup to init the handle so we can stub it vfs.setup() self.assertIsNone(vfs.handle.backend_settings) with mock.patch.object( vfs.handle, 'inspect_os', return_value=['fake1', 'fake2']) as mock_inspect_os: self.assertRaises(exception.NovaException, vfs.setup_os_inspect) mock_inspect_os.assert_called_once_with() def test_appliance_setup_static_nopart(self): vfs = 
vfsimpl.VFSGuestFS(self.qcowfile, partition=None) vfs.setup() self.assertIsNone(vfs.handle.backend_settings) self.assertTrue(vfs.handle.running) self.assertEqual(1, len(vfs.handle.mounts)) self.assertEqual("/dev/sda", vfs.handle.mounts[0][1]) self.assertEqual("/", vfs.handle.mounts[0][2]) handle = vfs.handle vfs.teardown() self.assertIsNone(vfs.handle) self.assertFalse(handle.running) self.assertTrue(handle.closed) self.assertEqual(0, len(handle.mounts)) def test_appliance_setup_static_part(self): vfs = vfsimpl.VFSGuestFS(self.qcowfile, partition=2) vfs.setup() self.assertIsNone(vfs.handle.backend_settings) self.assertTrue(vfs.handle.running) self.assertEqual(1, len(vfs.handle.mounts)) self.assertEqual("/dev/sda2", vfs.handle.mounts[0][1]) self.assertEqual("/", vfs.handle.mounts[0][2]) handle = vfs.handle vfs.teardown() self.assertIsNone(vfs.handle) self.assertFalse(handle.running) self.assertTrue(handle.closed) self.assertEqual(0, len(handle.mounts)) def test_makepath(self): vfs = vfsimpl.VFSGuestFS(self.qcowfile) vfs.setup() vfs.make_path("/some/dir") vfs.make_path("/other/dir") self.assertIn("/some/dir", vfs.handle.files) self.assertIn("/other/dir", vfs.handle.files) self.assertTrue(vfs.handle.files["/some/dir"]["isdir"]) self.assertTrue(vfs.handle.files["/other/dir"]["isdir"]) vfs.teardown() def test_append_file(self): vfs = vfsimpl.VFSGuestFS(self.qcowfile) vfs.setup() vfs.append_file("/some/file", " Goodbye") self.assertIn("/some/file", vfs.handle.files) self.assertEqual("Hello World Goodbye", vfs.handle.files["/some/file"]["content"]) vfs.teardown() def test_replace_file(self): vfs = vfsimpl.VFSGuestFS(self.qcowfile) vfs.setup() vfs.replace_file("/some/file", "Goodbye") self.assertIn("/some/file", vfs.handle.files) self.assertEqual("Goodbye", vfs.handle.files["/some/file"]["content"]) vfs.teardown() def test_read_file(self): vfs = vfsimpl.VFSGuestFS(self.qcowfile) vfs.setup() self.assertEqual("Hello World", vfs.read_file("/some/file")) vfs.teardown() def test_has_file(self): vfs = vfsimpl.VFSGuestFS(self.qcowfile) vfs.setup() vfs.read_file("/some/file") self.assertTrue(vfs.has_file("/some/file")) self.assertFalse(vfs.has_file("/other/file")) vfs.teardown() def test_set_permissions(self): vfs = vfsimpl.VFSGuestFS(self.qcowfile) vfs.setup() vfs.read_file("/some/file") self.assertEqual(0o700, vfs.handle.files["/some/file"]["mode"]) vfs.set_permissions("/some/file", 0o7777) self.assertEqual(0o7777, vfs.handle.files["/some/file"]["mode"]) vfs.teardown() def test_set_ownership(self): vfs = vfsimpl.VFSGuestFS(self.qcowfile) vfs.setup() vfs.read_file("/some/file") self.assertEqual(100, vfs.handle.files["/some/file"]["uid"]) self.assertEqual(100, vfs.handle.files["/some/file"]["gid"]) vfs.set_ownership("/some/file", "fred", None) self.assertEqual(105, vfs.handle.files["/some/file"]["uid"]) self.assertEqual(100, vfs.handle.files["/some/file"]["gid"]) vfs.set_ownership("/some/file", None, "users") self.assertEqual(105, vfs.handle.files["/some/file"]["uid"]) self.assertEqual(500, vfs.handle.files["/some/file"]["gid"]) vfs.set_ownership("/some/file", "joe", "admins") self.assertEqual(110, vfs.handle.files["/some/file"]["uid"]) self.assertEqual(600, vfs.handle.files["/some/file"]["gid"]) vfs.teardown() def test_set_ownership_not_supported(self): # NOTE(andreaf) Setting ownership relies on /etc/passwd and/or # /etc/group being available in the image, which is not always the # case - e.g. CirrOS image before boot. 
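# The fake GuestFS models that situation with its CAN_SET_OWNERSHIP
# flag: once the flag is stubbed to False, aug_get() raises for any
# /files/etc/passwd or /files/etc/group lookup, which VFSGuestFS
# surfaces as a NovaException.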
        vfs = vfsimpl.VFSGuestFS(self.qcowfile)
        vfs.setup()
        self.stub_out('nova.tests.unit.virt.disk.vfs.fakeguestfs.GuestFS.'
                      'CAN_SET_OWNERSHIP', False)
        self.assertRaises(exception.NovaException, vfs.set_ownership,
                          "/some/file", "fred", None)
        self.assertRaises(exception.NovaException, vfs.set_ownership,
                          "/some/file", None, "users")

    def test_close_on_error(self):
        vfs = vfsimpl.VFSGuestFS(self.qcowfile)
        vfs.setup()
        self.assertFalse(vfs.handle.kwargs['close_on_exit'])
        vfs.teardown()
        self.stub_out('nova.tests.unit.virt.disk.vfs.fakeguestfs.GuestFS.'
                      'SUPPORT_CLOSE_ON_EXIT', False)
        vfs = vfsimpl.VFSGuestFS(self.qcowfile)
        vfs.setup()
        self.assertNotIn('close_on_exit', vfs.handle.kwargs)
        vfs.teardown()

    def test_python_return_dict(self):
        vfs = vfsimpl.VFSGuestFS(self.qcowfile)
        vfs.setup()
        self.assertFalse(vfs.handle.kwargs['python_return_dict'])
        vfs.teardown()
        self.stub_out('nova.tests.unit.virt.disk.vfs.fakeguestfs.GuestFS.'
                      'SUPPORT_RETURN_DICT', False)
        vfs = vfsimpl.VFSGuestFS(self.qcowfile)
        vfs.setup()
        self.assertNotIn('python_return_dict', vfs.handle.kwargs)
        vfs.teardown()

    def test_setup_debug_disable(self):
        vfs = vfsimpl.VFSGuestFS(self.qcowfile)
        vfs.setup()
        self.assertFalse(vfs.handle.trace_enabled)
        self.assertFalse(vfs.handle.verbose_enabled)
        self.assertIsNone(vfs.handle.event_callback)

    def test_setup_debug_enabled(self):
        self.flags(debug=True, group='guestfs')
        vfs = vfsimpl.VFSGuestFS(self.qcowfile)
        vfs.setup()
        self.assertTrue(vfs.handle.trace_enabled)
        self.assertTrue(vfs.handle.verbose_enabled)
        self.assertIsNotNone(vfs.handle.event_callback)

    def test_get_format_fs(self):
        vfs = vfsimpl.VFSGuestFS(self.rawfile)
        vfs.setup()
        self.assertIsNotNone(vfs.handle)
        self.assertEqual('ext3', vfs.get_image_fs())
        vfs.teardown()

    @mock.patch.object(vfsimpl.VFSGuestFS, 'setup_os')
    def test_setup_mount(self, setup_os):
        vfs = vfsimpl.VFSGuestFS(self.qcowfile)
        vfs.setup()
        self.assertTrue(setup_os.called)

    @mock.patch.object(vfsimpl.VFSGuestFS, 'setup_os')
    def test_setup_mount_false(self, setup_os):
        vfs = vfsimpl.VFSGuestFS(self.qcowfile)
        vfs.setup(mount=False)
        self.assertFalse(setup_os.called)

    @mock.patch('os.access')
    @mock.patch('os.uname', return_value=('Linux', '', 'kernel_name'))
    def test_appliance_setup_inspect_capabilities_fail_with_ubuntu(
            self, mock_uname, mock_access):
        # On Ubuntu the host kernel image defaults to 0600 permissions,
        # so an unprivileged read of /boot/vmlinuz-* fails.
        m = mock.MagicMock()
        m.launch.side_effect = Exception
        vfs = vfsimpl.VFSGuestFS(self.qcowfile)
        mock_access.return_value = False
        self.flags(debug=False, group='guestfs')
        with mock.patch('eventlet.tpool.Proxy',
                        return_value=m) as tpool_mock:
            self.assertRaises(exception.LibguestfsCannotReadKernel,
                              vfs.inspect_capabilities)
            m.add_drive.assert_called_once_with('/dev/null')
            m.launch.assert_called_once_with()
            mock_access.assert_called_once_with('/boot/vmlinuz-kernel_name',
                                                mock.ANY)
            mock_uname.assert_called_once_with()
            self.assertEqual(1, tpool_mock.call_count)

    def test_appliance_setup_inspect_capabilities_debug_mode(self):
        """Asserts that we do not use an eventlet thread pool when guestfs
        debug logging is enabled.
        """
        # We can't actually mock guestfs.GuestFS because it's an optional
        # native package import. All we really care about here is that
        # eventlet isn't used.
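        # Patching eventlet.tpool.Proxy with a NonCallableMock means any
        # attempt to construct the proxy raises TypeError, so simply
        # reaching the end of inspect_capabilities() proves the tpool
        # path was skipped.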
self.flags(debug=True, group='guestfs') vfs = vfsimpl.VFSGuestFS(self.qcowfile) with mock.patch('eventlet.tpool.Proxy', new_callable=mock.NonCallableMock): vfs.inspect_capabilities() nova-17.0.1/nova/tests/unit/virt/disk/vfs/__init__.py0000666000175000017500000000000013250073126022465 0ustar zuulzuul00000000000000nova-17.0.1/nova/tests/unit/virt/disk/vfs/test_localfs.py0000666000175000017500000001761513250073126023434 0ustar zuulzuul00000000000000# Copyright (C) 2012 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import grp import pwd import tempfile from collections import namedtuple import mock from nova import exception from nova import test import nova.utils from nova.virt.disk.mount import nbd from nova.virt.disk.vfs import localfs as vfsimpl from nova.virt.image import model as imgmodel class VirtDiskVFSLocalFSTestPaths(test.NoDBTestCase): def setUp(self): super(VirtDiskVFSLocalFSTestPaths, self).setUp() self.rawfile = imgmodel.LocalFileImage('/dummy.img', imgmodel.FORMAT_RAW) # NOTE(mikal): mocking a decorator is non-trivial, so this is the # best we can do. @mock.patch.object(nova.privsep.path, 'readlink') def test_check_safe_path(self, read_link): vfs = vfsimpl.VFSLocalFS(self.rawfile) vfs.imgdir = '/foo' read_link.return_value = '/foo/etc/something.conf' ret = vfs._canonical_path('etc/something.conf') self.assertEqual(ret, '/foo/etc/something.conf') @mock.patch.object(nova.privsep.path, 'readlink') def test_check_unsafe_path(self, read_link): vfs = vfsimpl.VFSLocalFS(self.rawfile) vfs.imgdir = '/foo' read_link.return_value = '/etc/something.conf' self.assertRaises(exception.Invalid, vfs._canonical_path, 'etc/../../../something.conf') class VirtDiskVFSLocalFSTest(test.NoDBTestCase): def setUp(self): super(VirtDiskVFSLocalFSTest, self).setUp() self.qcowfile = imgmodel.LocalFileImage('/dummy.qcow2', imgmodel.FORMAT_QCOW2) self.rawfile = imgmodel.LocalFileImage('/dummy.img', imgmodel.FORMAT_RAW) @mock.patch.object(nova.privsep.path, 'readlink') @mock.patch.object(nova.privsep.path, 'makedirs') def test_makepath(self, mkdir, read_link): vfs = vfsimpl.VFSLocalFS(self.qcowfile) vfs.imgdir = '/scratch/dir' vfs.make_path('/some/dir') read_link.assert_called() mkdir.assert_called_with(read_link.return_value) read_link.reset_mock() mkdir.reset_mock() vfs.make_path('/other/dir') read_link.assert_called() mkdir.assert_called_with(read_link.return_value) @mock.patch.object(nova.privsep.path, 'readlink') @mock.patch.object(nova.privsep.path, 'writefile') def test_append_file(self, write_file, read_link): vfs = vfsimpl.VFSLocalFS(self.qcowfile) vfs.imgdir = '/scratch/dir' vfs.append_file('/some/file', ' Goodbye') read_link.assert_called() write_file.assert_called_with(read_link.return_value, 'a', ' Goodbye') @mock.patch.object(nova.privsep.path, 'readlink') @mock.patch.object(nova.privsep.path, 'writefile') def test_replace_file(self, write_file, read_link): vfs = vfsimpl.VFSLocalFS(self.qcowfile) vfs.imgdir = '/scratch/dir' vfs.replace_file('/some/file', 'Goodbye') read_link.assert_called() 
        write_file.assert_called_with(read_link.return_value, 'w', 'Goodbye')

    @mock.patch.object(nova.privsep.path, 'readlink')
    @mock.patch.object(nova.privsep.path, 'readfile')
    def test_read_file(self, read_file, read_link):
        vfs = vfsimpl.VFSLocalFS(self.qcowfile)
        vfs.imgdir = '/scratch/dir'
        self.assertEqual(read_file.return_value, vfs.read_file('/some/file'))
        read_link.assert_called()
        read_file.assert_called()

    @mock.patch.object(nova.privsep.path.path, 'exists')
    def test_has_file(self, exists):
        vfs = vfsimpl.VFSLocalFS(self.qcowfile)
        vfs.imgdir = '/scratch/dir'
        has = vfs.has_file('/some/file')
        self.assertEqual(exists.return_value, has)

    @mock.patch.object(nova.privsep.path, 'readlink')
    @mock.patch.object(nova.privsep.path, 'chmod')
    def test_set_permissions(self, chmod, read_link):
        vfs = vfsimpl.VFSLocalFS(self.qcowfile)
        vfs.imgdir = '/scratch/dir'
        vfs.set_permissions('/some/file', 0o777)
        read_link.assert_called()
        chmod.assert_called_with(read_link.return_value, 0o777)

    @mock.patch.object(nova.privsep.path, 'readlink')
    @mock.patch.object(nova.privsep.path, 'chown')
    @mock.patch.object(pwd, 'getpwnam')
    @mock.patch.object(grp, 'getgrnam')
    def test_set_ownership(self, getgrnam, getpwnam, chown, read_link):
        vfs = vfsimpl.VFSLocalFS(self.qcowfile)
        vfs.imgdir = '/scratch/dir'
        fake_passwd = namedtuple('fake_passwd', 'pw_uid')
        getpwnam.return_value = fake_passwd(pw_uid=100)
        fake_group = namedtuple('fake_group', 'gr_gid')
        getgrnam.return_value = fake_group(gr_gid=101)

        vfs.set_ownership('/some/file', 'fred', None)
        read_link.assert_called()
        chown.assert_called_with(read_link.return_value,
                                 uid=getpwnam.return_value.pw_uid)

        read_link.reset_mock()
        chown.reset_mock()
        vfs.set_ownership('/some/file', None, 'users')
        read_link.assert_called()
        chown.assert_called_with(read_link.return_value,
                                 gid=getgrnam.return_value.gr_gid)

        read_link.reset_mock()
        chown.reset_mock()
        vfs.set_ownership('/some/file', 'joe', 'admins')
        read_link.assert_called()
        chown.assert_called_with(read_link.return_value,
                                 uid=getpwnam.return_value.pw_uid,
                                 gid=getgrnam.return_value.gr_gid)

    @mock.patch('nova.privsep.fs.get_filesystem_type',
                return_value=('ext3\n', ''))
    def test_get_format_fs(self, mock_type):
        vfs = vfsimpl.VFSLocalFS(self.rawfile)
        vfs.setup = mock.MagicMock()
        vfs.teardown = mock.MagicMock()

        def fake_setup():
            vfs.mount = mock.MagicMock()
            vfs.mount.device = None
            vfs.mount.get_dev.side_effect = fake_get_dev

        def fake_teardown():
            vfs.mount.device = None

        def fake_get_dev():
            vfs.mount.device = '/dev/xyz'
            return True

        vfs.setup.side_effect = fake_setup
        vfs.teardown.side_effect = fake_teardown

        vfs.setup()
        self.assertEqual('ext3', vfs.get_image_fs())
        vfs.teardown()
        vfs.mount.get_dev.assert_called_once_with()
        mock_type.assert_called_once_with('/dev/xyz')

    @mock.patch.object(tempfile, 'mkdtemp')
    @mock.patch.object(nbd, 'NbdMount')
    def test_setup_mount(self, NbdMount, mkdtemp):
        vfs = vfsimpl.VFSLocalFS(self.qcowfile)

        mounter = mock.MagicMock()
        mkdtemp.return_value = 'tmp/'
        NbdMount.return_value = mounter

        vfs.setup()
        self.assertTrue(mkdtemp.called)
        NbdMount.assert_called_once_with(self.qcowfile, 'tmp/', None)
        mounter.do_mount.assert_called_once_with()

    @mock.patch.object(tempfile, 'mkdtemp')
    @mock.patch.object(nbd, 'NbdMount')
    def test_setup_mount_false(self, NbdMount, mkdtemp):
        vfs = vfsimpl.VFSLocalFS(self.qcowfile)

        mounter = mock.MagicMock()
        mkdtemp.return_value = 'tmp/'
        NbdMount.return_value = mounter

        vfs.setup(mount=False)
        self.assertTrue(mkdtemp.called)
        NbdMount.assert_called_once_with(self.qcowfile, 'tmp/', None)
self.assertFalse(mounter.do_mount.called) nova-17.0.1/nova/tests/unit/virt/disk/__init__.py0000666000175000017500000000000013250073126021667 0ustar zuulzuul00000000000000nova-17.0.1/nova/tests/unit/virt/disk/mount/0000775000175000017500000000000013250073472020734 5ustar zuulzuul00000000000000nova-17.0.1/nova/tests/unit/virt/disk/mount/test_nbd.py0000666000175000017500000002763313250073126023121 0ustar zuulzuul00000000000000# Copyright 2012 Michael Still and Canonical Inc # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock import os import tempfile import time import eventlet import fixtures import six from nova import test from nova.virt.disk.mount import nbd from nova.virt.image import model as imgmodel ORIG_EXISTS = os.path.exists ORIG_LISTDIR = os.listdir def _fake_exists_no_users(path): if path.startswith('/sys/block/nbd'): if path.endswith('pid'): return False return True return ORIG_EXISTS(path) def _fake_listdir_nbd_devices(path): if isinstance(path, six.string_types) and path.startswith('/sys/block'): return ['nbd0', 'nbd1'] return ORIG_LISTDIR(path) def _fake_exists_all_used(path): if path.startswith('/sys/block/nbd'): return True return ORIG_EXISTS(path) def _fake_detect_nbd_devices_none(self): return [] def _fake_detect_nbd_devices(self): return ['nbd0', 'nbd1'] def _fake_noop(*args, **kwargs): return class NbdTestCase(test.NoDBTestCase): def setUp(self): super(NbdTestCase, self).setUp() self.stub_out('nova.virt.disk.mount.nbd.NbdMount._detect_nbd_devices', _fake_detect_nbd_devices) self.useFixture(fixtures.MonkeyPatch('os.listdir', _fake_listdir_nbd_devices)) self.file = imgmodel.LocalFileImage("/some/file.qcow2", imgmodel.FORMAT_QCOW2) def test_nbd_no_devices(self): tempdir = self.useFixture(fixtures.TempDir()).path self.stub_out('nova.virt.disk.mount.nbd.NbdMount._detect_nbd_devices', _fake_detect_nbd_devices_none) n = nbd.NbdMount(self.file, tempdir) self.assertIsNone(n._allocate_nbd()) def test_nbd_no_free_devices(self): tempdir = self.useFixture(fixtures.TempDir()).path n = nbd.NbdMount(self.file, tempdir) self.useFixture(fixtures.MonkeyPatch('os.path.exists', _fake_exists_all_used)) self.assertIsNone(n._allocate_nbd()) def test_nbd_not_loaded(self): tempdir = self.useFixture(fixtures.TempDir()).path n = nbd.NbdMount(self.file, tempdir) # Fake out os.path.exists def fake_exists(path): if path.startswith('/sys/block/nbd'): return False return ORIG_EXISTS(path) self.useFixture(fixtures.MonkeyPatch('os.path.exists', fake_exists)) # This should fail, as we don't have the module "loaded" # TODO(mikal): work out how to force english as the gettext language # so that the error check always passes self.assertIsNone(n._allocate_nbd()) self.assertEqual('nbd unavailable: module not loaded', n.error) def test_nbd_allocation(self): tempdir = self.useFixture(fixtures.TempDir()).path n = nbd.NbdMount(self.file, tempdir) self.useFixture(fixtures.MonkeyPatch('os.path.exists', _fake_exists_no_users)) self.useFixture(fixtures.MonkeyPatch('random.shuffle', _fake_noop)) # 
Allocate an nbd device
        self.assertEqual('/dev/nbd0', n._allocate_nbd())

    def test_nbd_allocation_one_in_use(self):
        tempdir = self.useFixture(fixtures.TempDir()).path
        n = nbd.NbdMount(self.file, tempdir)
        self.useFixture(fixtures.MonkeyPatch('random.shuffle', _fake_noop))

        # Fake out os.path.exists
        def fake_exists(path):
            if path.startswith('/sys/block/nbd'):
                if path == '/sys/block/nbd0/pid':
                    return True
                if path.endswith('pid'):
                    return False
                return True
            return ORIG_EXISTS(path)
        self.useFixture(fixtures.MonkeyPatch('os.path.exists', fake_exists))

        # Allocate an nbd device, should not be the in-use one
        # TODO(mikal): Note that there is a leak here, as the in use nbd
        # device is removed from the list, but not returned so it will
        # never be re-added. I will fix this in a later patch.
        self.assertEqual('/dev/nbd1', n._allocate_nbd())

    def test_inner_get_dev_no_devices(self):
        tempdir = self.useFixture(fixtures.TempDir()).path
        self.stub_out('nova.virt.disk.mount.nbd.NbdMount._detect_nbd_devices',
                      _fake_detect_nbd_devices_none)
        n = nbd.NbdMount(self.file, tempdir)
        self.assertFalse(n._inner_get_dev())

    @mock.patch('nova.privsep.fs.nbd_connect', return_value=('', 'broken'))
    def test_inner_get_dev_qemu_fails(self, mock_nbd_connect):
        tempdir = self.useFixture(fixtures.TempDir()).path
        n = nbd.NbdMount(self.file, tempdir)
        self.useFixture(fixtures.MonkeyPatch('os.path.exists',
                                             _fake_exists_no_users))

        # Error logged, no device consumed
        self.assertFalse(n._inner_get_dev())
        self.assertTrue(n.error.startswith('qemu-nbd error'))

    @mock.patch('random.shuffle')
    @mock.patch('os.path.exists', side_effect=[True, False, False, False,
                                               False, False, False, False])
    @mock.patch('os.listdir', return_value=['nbd0', 'nbd1', 'loop0'])
    @mock.patch('nova.privsep.fs.nbd_connect', return_value=('', ''))
    @mock.patch('nova.privsep.fs.nbd_disconnect', return_value=('', ''))
    @mock.patch('time.sleep')
    def test_inner_get_dev_qemu_timeout(self, mock_sleep,
                                        mock_nbd_disconnect,
                                        mock_nbd_connect, mock_listdir,
                                        mock_exists, mock_shuffle):
        self.flags(timeout_nbd=3)
        tempdir = self.useFixture(fixtures.TempDir()).path
        n = nbd.NbdMount(self.file, tempdir)

        # Error logged, no device consumed
        self.assertFalse(n._inner_get_dev())
        self.assertTrue(n.error.endswith('did not show up'))

    @mock.patch('random.shuffle')
    @mock.patch('os.path.exists', side_effect=[True, False, False, False,
                                               False, True])
    @mock.patch('os.listdir', return_value=['nbd0', 'nbd1', 'loop0'])
    @mock.patch('nova.privsep.fs.nbd_connect', return_value=('', ''))
    @mock.patch('nova.privsep.fs.nbd_disconnect')
    def test_inner_get_dev_works(self, mock_nbd_disconnect, mock_nbd_connect,
                                 mock_listdir, mock_exists, mock_shuffle):
        tempdir = self.useFixture(fixtures.TempDir()).path
        n = nbd.NbdMount(self.file, tempdir)

        # No error logged, device consumed
        self.assertTrue(n._inner_get_dev())
        self.assertTrue(n.linked)
        self.assertEqual('', n.error)
        self.assertEqual('/dev/nbd0', n.device)

        # Free
        n.unget_dev()
        self.assertFalse(n.linked)
        self.assertEqual('', n.error)
        self.assertIsNone(n.device)

    def test_unget_dev_simple(self):
        # This test is just checking we don't get an exception when we unget
        # something we don't have
        tempdir = self.useFixture(fixtures.TempDir()).path
        n = nbd.NbdMount(self.file, tempdir)
        self.useFixture(fixtures.MonkeyPatch('nova.utils.execute',
                                             _fake_noop))
        n.unget_dev()

    @mock.patch('random.shuffle')
    @mock.patch('os.path.exists', side_effect=[True, False, False, False,
                                               False, True])
    @mock.patch('os.listdir', return_value=['nbd0', 'nbd1', 'loop0'])
    @mock.patch('nova.privsep.fs.nbd_connect',
return_value=('', '')) @mock.patch('nova.privsep.fs.nbd_disconnect') def test_get_dev(self, mock_nbd_disconnect, mock_nbd_connect, mock_exists, mock_listdir, mock_shuffle): tempdir = self.useFixture(fixtures.TempDir()).path n = nbd.NbdMount(self.file, tempdir) # No error logged, device consumed self.assertTrue(n.get_dev()) self.assertTrue(n.linked) self.assertEqual('', n.error) self.assertEqual('/dev/nbd0', n.device) # Free n.unget_dev() self.assertFalse(n.linked) self.assertEqual('', n.error) self.assertIsNone(n.device) @mock.patch('random.shuffle') @mock.patch('time.sleep') @mock.patch('nova.privsep.fs.nbd_connect') @mock.patch('nova.privsep.fs.nbd_disconnect') @mock.patch('os.path.exists', return_value=True) @mock.patch('nova.virt.disk.mount.nbd.NbdMount._inner_get_dev', return_value=False) def test_get_dev_timeout(self, mock_get_dev, mock_exists, mock_nbd_disconnect, mock_nbd_connect, mock_sleep, mock_shuffle): tempdir = self.useFixture(fixtures.TempDir()).path n = nbd.NbdMount(self.file, tempdir) self.useFixture(fixtures.MonkeyPatch(('nova.virt.disk.mount.api.' 'MAX_DEVICE_WAIT'), -10)) # No error logged, device consumed self.assertFalse(n.get_dev()) @mock.patch('nova.privsep.fs.mount', return_value=('', 'broken')) def test_do_mount_need_to_specify_fs_type(self, mock_mount): # NOTE(mikal): Bug 1094373 saw a regression where we failed to # communicate a failed mount properly. imgfile = tempfile.NamedTemporaryFile() self.addCleanup(imgfile.close) tempdir = self.useFixture(fixtures.TempDir()).path mount = nbd.NbdMount(imgfile.name, tempdir) def fake_returns_true(*args, **kwargs): return True mount.get_dev = fake_returns_true mount.map_dev = fake_returns_true self.assertFalse(mount.do_mount()) @mock.patch('nova.privsep.fs.nbd_connect') @mock.patch('nova.privsep.fs.nbd_disconnect') @mock.patch('os.path.exists') def test_device_creation_race(self, mock_exists, mock_nbd_disconnect, mock_nbd_connect): # Make sure that even if two threads create instances at the same time # they cannot choose the same nbd number (see bug 1207422) tempdir = self.useFixture(fixtures.TempDir()).path free_devices = _fake_detect_nbd_devices(None)[:] chosen_devices = [] def fake_find_unused(self): return os.path.join('/dev', free_devices[-1]) def delay_and_remove_device(*args, **kwargs): # Ensure that context switch happens before the device is marked # as used. This will cause a failure without nbd-allocation-lock # in place. time.sleep(0.1) # We always choose the top device in find_unused - remove it now. free_devices.pop() return '', '' def pid_exists(pidfile): return pidfile not in [os.path.join('/sys/block', dev, 'pid') for dev in free_devices] self.stub_out('nova.virt.disk.mount.nbd.NbdMount._allocate_nbd', fake_find_unused) mock_nbd_connect.side_effect = delay_and_remove_device mock_exists.side_effect = pid_exists def get_a_device(): n = nbd.NbdMount(self.file, tempdir) n.get_dev() chosen_devices.append(n.device) thread1 = eventlet.spawn(get_a_device) thread2 = eventlet.spawn(get_a_device) thread1.wait() thread2.wait() self.assertEqual(2, len(chosen_devices)) self.assertNotEqual(chosen_devices[0], chosen_devices[1]) nova-17.0.1/nova/tests/unit/virt/disk/mount/test_api.py0000666000175000017500000002004213250073126023112 0ustar zuulzuul00000000000000# Copyright 2015 Hewlett-Packard Development Company, L.P. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from nova import test from nova.virt.disk.mount import api from nova.virt.disk.mount import block from nova.virt.disk.mount import loop from nova.virt.disk.mount import nbd from nova.virt.image import model as imgmodel PARTITION = 77 ORIG_DEVICE = "/dev/null" AUTOMAP_PARTITION = "/dev/nullp77" MAP_PARTITION = "/dev/mapper/nullp77" class MountTestCase(test.NoDBTestCase): def _test_map_dev(self, partition): mount = api.Mount(mock.sentinel.image, mock.sentinel.mount_dir) mount.device = ORIG_DEVICE mount.partition = partition mount.map_dev() return mount def _exists_effect(self, data): def exists_effect(filename): try: v = data[filename] if isinstance(v, list): if len(v) > 0: return v.pop(0) self.fail("Out of items for: %s" % filename) return v except KeyError: self.fail("Unexpected call with: %s" % filename) return exists_effect def _check_calls(self, exists, filenames, trailing=0): self.assertEqual([mock.call(x) for x in filenames], exists.call_args_list[:len(filenames)]) self.assertEqual([mock.call(MAP_PARTITION)] * trailing, exists.call_args_list[len(filenames):]) @mock.patch('os.path.exists') def test_map_dev_partition_search(self, exists): exists.side_effect = self._exists_effect({ ORIG_DEVICE: True}) mount = self._test_map_dev(-1) self._check_calls(exists, [ORIG_DEVICE]) self.assertNotEqual("", mount.error) self.assertFalse(mount.mapped) @mock.patch('os.path.exists') @mock.patch('nova.privsep.fs.create_device_maps', return_value=(None, None)) def test_map_dev_good(self, mock_create_maps, mock_exists): mock_exists.side_effect = self._exists_effect({ ORIG_DEVICE: True, AUTOMAP_PARTITION: False, MAP_PARTITION: [False, True]}) mount = self._test_map_dev(PARTITION) self._check_calls(mock_exists, [ORIG_DEVICE, AUTOMAP_PARTITION], 2) self.assertEqual("", mount.error) self.assertTrue(mount.mapped) @mock.patch('os.path.exists') @mock.patch('nova.privsep.fs.create_device_maps', return_value=(None, None)) def test_map_dev_error(self, mock_create_maps, mock_exists): mock_exists.side_effect = self._exists_effect({ ORIG_DEVICE: True, AUTOMAP_PARTITION: False, MAP_PARTITION: False}) mount = self._test_map_dev(PARTITION) self._check_calls(mock_exists, [ORIG_DEVICE, AUTOMAP_PARTITION], api.MAX_FILE_CHECKS + 1) self.assertNotEqual("", mount.error) self.assertFalse(mount.mapped) @mock.patch('os.path.exists') @mock.patch('nova.privsep.fs.create_device_maps', return_value=(None, None)) def test_map_dev_error_then_pass(self, mock_create_maps, mock_exists): mock_exists.side_effect = self._exists_effect({ ORIG_DEVICE: True, AUTOMAP_PARTITION: False, MAP_PARTITION: [False, False, True]}) mount = self._test_map_dev(PARTITION) self._check_calls(mock_exists, [ORIG_DEVICE, AUTOMAP_PARTITION], 3) self.assertEqual("", mount.error) self.assertTrue(mount.mapped) @mock.patch('os.path.exists') def test_map_dev_automap(self, exists): exists.side_effect = self._exists_effect({ ORIG_DEVICE: True, AUTOMAP_PARTITION: True}) mount = self._test_map_dev(PARTITION) self._check_calls(exists, [ORIG_DEVICE, AUTOMAP_PARTITION, AUTOMAP_PARTITION]) self.assertEqual(AUTOMAP_PARTITION, mount.mapped_device) 
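# When the kernel has already exposed the partition device (the fake
# AUTOMAP_PARTITION, /dev/nullp77), map_dev() is expected to use it
# directly and flag the mount as automapped instead of calling
# nova.privsep.fs.create_device_maps.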
        self.assertTrue(mount.automapped)
        self.assertTrue(mount.mapped)

    @mock.patch('os.path.exists')
    def test_map_dev_else(self, exists):
        exists.side_effect = self._exists_effect({
            ORIG_DEVICE: True,
            AUTOMAP_PARTITION: True})
        mount = self._test_map_dev(None)
        self._check_calls(exists, [ORIG_DEVICE])
        self.assertEqual(ORIG_DEVICE, mount.mapped_device)
        self.assertFalse(mount.automapped)
        self.assertTrue(mount.mapped)

    def test_instance_for_format_raw(self):
        image = imgmodel.LocalFileImage("/some/file.raw",
                                        imgmodel.FORMAT_RAW)
        mount_dir = '/mount/dir'
        partition = -1
        inst = api.Mount.instance_for_format(image, mount_dir, partition)
        self.assertIsInstance(inst, loop.LoopMount)

    def test_instance_for_format_qcow2(self):
        image = imgmodel.LocalFileImage("/some/file.qcow2",
                                        imgmodel.FORMAT_QCOW2)
        mount_dir = '/mount/dir'
        partition = -1
        inst = api.Mount.instance_for_format(image, mount_dir, partition)
        self.assertIsInstance(inst, nbd.NbdMount)

    def test_instance_for_format_block(self):
        image = imgmodel.LocalBlockImage(
            "/dev/mapper/instances--instance-0000001_disk")
        mount_dir = '/mount/dir'
        partition = -1
        inst = api.Mount.instance_for_format(image, mount_dir, partition)
        self.assertIsInstance(inst, block.BlockMount)

    def test_instance_for_device_loop(self):
        image = mock.MagicMock()
        mount_dir = '/mount/dir'
        partition = -1
        device = '/dev/loop0'
        inst = api.Mount.instance_for_device(image, mount_dir, partition,
                                             device)
        self.assertIsInstance(inst, loop.LoopMount)

    def test_instance_for_device_loop_partition(self):
        image = mock.MagicMock()
        mount_dir = '/mount/dir'
        partition = 1
        device = '/dev/mapper/loop0p1'
        inst = api.Mount.instance_for_device(image, mount_dir, partition,
                                             device)
        self.assertIsInstance(inst, loop.LoopMount)

    def test_instance_for_device_nbd(self):
        image = mock.MagicMock()
        mount_dir = '/mount/dir'
        partition = -1
        device = '/dev/nbd0'
        inst = api.Mount.instance_for_device(image, mount_dir, partition,
                                             device)
        self.assertIsInstance(inst, nbd.NbdMount)

    def test_instance_for_device_nbd_partition(self):
        image = mock.MagicMock()
        mount_dir = '/mount/dir'
        partition = 1
        device = '/dev/mapper/nbd0p1'
        inst = api.Mount.instance_for_device(image, mount_dir, partition,
                                             device)
        self.assertIsInstance(inst, nbd.NbdMount)

    def test_instance_for_device_block(self):
        image = mock.MagicMock()
        mount_dir = '/mount/dir'
        partition = -1
        device = '/dev/mapper/instances--instance-0000001_disk'
        inst = api.Mount.instance_for_device(image, mount_dir, partition,
                                             device)
        self.assertIsInstance(inst, block.BlockMount)

    def test_instance_for_device_block_partition(self):
        image = mock.MagicMock()
        mount_dir = '/mount/dir'
        partition = 1
        device = '/dev/mapper/instances--instance-0000001_diskp1'
        inst = api.Mount.instance_for_device(image, mount_dir, partition,
                                             device)
        self.assertIsInstance(inst, block.BlockMount)
nova-17.0.1/nova/tests/unit/virt/disk/mount/__init__.py0000666000175000017500000000000013250073126023031 0ustar zuulzuul00000000000000nova-17.0.1/nova/tests/unit/virt/disk/mount/test_loop.py0000666000175000017500000000642113250073126023317 0ustar zuulzuul00000000000000# Copyright 2012 Michael Still
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import fixtures
import mock

from nova import test
from nova.virt.disk.mount import loop
from nova.virt.image import model as imgmodel


def _fake_noop(*args, **kwargs):
    return


class LoopTestCase(test.NoDBTestCase):
    def setUp(self):
        super(LoopTestCase, self).setUp()
        self.file = imgmodel.LocalFileImage("/some/file.qcow2",
                                            imgmodel.FORMAT_QCOW2)

    @mock.patch('nova.privsep.fs.loopsetup', return_value=('/dev/loop0', ''))
    @mock.patch('nova.privsep.fs.loopremove')
    def test_get_dev(self, mock_loopremove, mock_loopsetup):
        tempdir = self.useFixture(fixtures.TempDir()).path
        l = loop.LoopMount(self.file, tempdir)
        self.useFixture(fixtures.MonkeyPatch('nova.utils.execute',
                                             _fake_noop))

        # No error logged, device consumed
        self.assertTrue(l.get_dev())
        self.assertTrue(l.linked)
        self.assertEqual('', l.error)
        self.assertEqual('/dev/loop0', l.device)

        # Free
        l.unget_dev()
        self.assertFalse(l.linked)
        self.assertEqual('', l.error)
        self.assertIsNone(l.device)

    @mock.patch('nova.privsep.fs.loopsetup', return_value=('', 'doh'))
    def test_inner_get_dev_fails(self, mock_loopsetup):
        tempdir = self.useFixture(fixtures.TempDir()).path
        l = loop.LoopMount(self.file, tempdir)

        # Error logged, no device consumed
        self.assertFalse(l._inner_get_dev())
        self.assertFalse(l.linked)
        self.assertNotEqual('', l.error)
        self.assertIsNone(l.device)

        # Free
        l.unget_dev()
        self.assertFalse(l.linked)
        self.assertIsNone(l.device)

    @mock.patch('nova.privsep.fs.loopsetup', return_value=('', 'doh'))
    def test_get_dev_timeout(self, mock_loopsetup):
        tempdir = self.useFixture(fixtures.TempDir()).path
        l = loop.LoopMount(self.file, tempdir)
        self.useFixture(fixtures.MonkeyPatch('time.sleep', _fake_noop))
        self.useFixture(fixtures.MonkeyPatch(('nova.virt.disk.mount.api.'
                                              'MAX_DEVICE_WAIT'), -10))

        # Always fail to get a device
        def fake_get_dev_fails():
            return False
        l._inner_get_dev = fake_get_dev_fails

        # Fail to get a device
        self.assertFalse(l.get_dev())

    @mock.patch('nova.privsep.fs.loopremove')
    def test_unget_dev(self, mock_loopremove):
        tempdir = self.useFixture(fixtures.TempDir()).path
        l = loop.LoopMount(self.file, tempdir)

        # This just checks that a free of something we don't have doesn't
        # throw an exception
        l.unget_dev()
nova-17.0.1/nova/tests/unit/virt/disk/mount/test_block.py0000666000175000017500000000272613250073126023444 0ustar zuulzuul00000000000000# Copyright 2015 Rackspace Hosting, Inc.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import fixtures from nova import test from nova.virt.disk.mount import block from nova.virt.image import model as imgmodel class LoopTestCase(test.NoDBTestCase): def setUp(self): super(LoopTestCase, self).setUp() device_path = '/dev/mapper/instances--instance-0000001_disk' self.image = imgmodel.LocalBlockImage(device_path) def test_get_dev(self): tempdir = self.useFixture(fixtures.TempDir()).path b = block.BlockMount(self.image, tempdir) self.assertTrue(b.get_dev()) self.assertTrue(b.linked) self.assertEqual(self.image.path, b.device) def test_unget_dev(self): tempdir = self.useFixture(fixtures.TempDir()).path b = block.BlockMount(self.image, tempdir) b.unget_dev() self.assertIsNone(b.device) self.assertFalse(b.linked) nova-17.0.1/nova/tests/unit/virt/vmwareapi/0000775000175000017500000000000013250073472020633 5ustar zuulzuul00000000000000nova-17.0.1/nova/tests/unit/virt/vmwareapi/fake.py0000666000175000017500000016423013250073126022117 0ustar zuulzuul00000000000000# Copyright (c) 2013 Hewlett-Packard Development Company, L.P. # Copyright (c) 2012 VMware, Inc. # Copyright (c) 2011 Citrix Systems, Inc. # Copyright 2011 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ A fake VMware VI API implementation. """ import collections import sys from oslo_log import log as logging from oslo_serialization import jsonutils from oslo_utils import units from oslo_utils import uuidutils from oslo_vmware import exceptions as vexc from oslo_vmware.objects import datastore as ds_obj from nova import exception from nova.virt.vmwareapi import constants _CLASSES = ['Datacenter', 'Datastore', 'ResourcePool', 'VirtualMachine', 'Network', 'HostSystem', 'HostNetworkSystem', 'Task', 'session', 'files', 'ClusterComputeResource', 'HostStorageSystem', 'Folder'] _FAKE_FILE_SIZE = 1024 _FAKE_VCENTER_UUID = '497c514c-ef5e-4e7f-8d93-ec921993b93a' _db_content = {} _array_types = {} _vim_map = {} LOG = logging.getLogger(__name__) def reset(): """Resets the db contents.""" cleanup() create_network() create_folder() create_host_network_system() create_host_storage_system() ds_ref1 = create_datastore('ds1', 1024, 500) create_host(ds_ref=ds_ref1) ds_ref2 = create_datastore('ds2', 1024, 500) create_host(ds_ref=ds_ref2) create_datacenter('dc1', ds_ref1) create_datacenter('dc2', ds_ref2) create_res_pool() create_cluster('test_cluster', ds_ref1) create_cluster('test_cluster2', ds_ref2) def cleanup(): """Clear the db contents.""" for c in _CLASSES: # We fake the datastore by keeping the file references as a list of # names in the db if c == 'files': _db_content[c] = [] else: _db_content[c] = {} def _create_object(table, table_obj): """Create an object in the db.""" _db_content.setdefault(table, {}) _db_content[table][table_obj.obj] = table_obj def _get_object(obj_ref): """Get object for the give reference.""" return _db_content[obj_ref.type][obj_ref] def _get_objects(obj_type): """Get objects of the type.""" lst_objs = FakeRetrieveResult() for key in _db_content[obj_type]: lst_objs.add_object(_db_content[obj_type][key]) 
return lst_objs def _convert_to_array_of_mor(mors): """Wraps the given array into a DataObject.""" array_of_mors = DataObject() array_of_mors.ManagedObjectReference = mors return array_of_mors def _convert_to_array_of_opt_val(optvals): """Wraps the given array into a DataObject.""" array_of_optv = DataObject() array_of_optv.OptionValue = optvals return array_of_optv def _create_array_of_type(t): """Returns an array to contain objects of type t.""" if t in _array_types: return _array_types[t]() array_type_name = 'ArrayOf%s' % t array_type = type(array_type_name, (DataObject,), {}) def __init__(self): super(array_type, self).__init__(array_type_name) setattr(self, t, []) setattr(array_type, '__init__', __init__) _array_types[t] = array_type return array_type() class FakeRetrieveResult(object): """Object to retrieve a ObjectContent list.""" def __init__(self, token=None): self.objects = [] if token is not None: self.token = token def add_object(self, object): self.objects.append(object) def _get_object_refs(obj_type): """Get object References of the type.""" lst_objs = [] for key in _db_content[obj_type]: lst_objs.append(key) return lst_objs def _update_object(table, table_obj): """Update objects of the type.""" _db_content[table][table_obj.obj] = table_obj class Prop(object): """Property Object base class.""" def __init__(self, name=None, val=None): self.name = name self.val = val class ManagedObjectReference(object): """A managed object reference is a remote identifier.""" def __init__(self, name="ManagedObject", value=None): super(ManagedObjectReference, self) # Managed Object Reference value attributes # typically have values like vm-123 or # host-232 and not UUID. self.value = value # Managed Object Reference type # attributes hold the name of the type # of the vCenter object the value # attribute is the identifier for self.type = name self._type = name class ObjectContent(object): """ObjectContent array holds dynamic properties.""" # This class is a *fake* of a class sent back to us by # SOAP. It has its own names. These names are decided # for us by the API we are *faking* here. def __init__(self, obj_ref, prop_list=None, missing_list=None): self.obj = obj_ref if not isinstance(prop_list, collections.Iterable): prop_list = [] if not isinstance(missing_list, collections.Iterable): missing_list = [] # propSet is the name your Python code will need to # use since this is the name that the API will use if prop_list: self.propSet = prop_list # missingSet is the name your python code will # need to use since this is the name that the # API we are talking to will use. if missing_list: self.missingSet = missing_list class ManagedObject(object): """Managed Object base class.""" _counter = 0 def __init__(self, mo_id_prefix="obj"): """Sets the obj property which acts as a reference to the object.""" object.__setattr__(self, 'mo_id', self._generate_moid(mo_id_prefix)) object.__setattr__(self, 'propSet', []) object.__setattr__(self, 'obj', ManagedObjectReference(self.__class__.__name__, self.mo_id)) def set(self, attr, val): """Sets an attribute value. Not using the __setattr__ directly for we want to set attributes of the type 'a.b.c' and using this function class we set the same. """ self.__setattr__(attr, val) def get(self, attr): """Gets an attribute. Used as an intermediary to get nested property like 'a.b.c' value. 
""" return self.__getattr__(attr) def delete(self, attr): """Deletes an attribute.""" self.propSet = [elem for elem in self.propSet if elem.name != attr] def __setattr__(self, attr, val): # TODO(hartsocks): this is adds unnecessary complexity to the class for prop in self.propSet: if prop.name == attr: prop.val = val return elem = Prop() elem.name = attr elem.val = val self.propSet.append(elem) def __getattr__(self, attr): # TODO(hartsocks): remove this # in a real ManagedObject you have to iterate the propSet # in a real ManagedObject, the propSet is a *set* not a list for elem in self.propSet: if elem.name == attr: return elem.val msg = "Property %(attr)s not set for the managed object %(name)s" raise exception.NovaException(msg % {'attr': attr, 'name': self.__class__.__name__}) def _generate_moid(self, prefix): """Generates a new Managed Object ID.""" self.__class__._counter += 1 return prefix + "-" + str(self.__class__._counter) def __repr__(self): return jsonutils.dumps({elem.name: elem.val for elem in self.propSet}) class DataObject(object): """Data object base class.""" def __init__(self, obj_name=None): if obj_name is None: obj_name = 'ns0:' + self.__class__.__name__ self.obj_name = obj_name def __repr__(self): return str(self.__dict__) def __eq__(self, other): return self.__dict__ == other.__dict__ class HostInternetScsiHba(DataObject): """iSCSI Host Bus Adapter.""" def __init__(self, iscsi_name=None): super(HostInternetScsiHba, self).__init__() self.device = 'vmhba33' self.key = 'key-vmhba33' self.iScsiName = iscsi_name class FileAlreadyExists(DataObject): """File already exists class.""" def __init__(self): super(FileAlreadyExists, self).__init__() self.__name__ = vexc.FILE_ALREADY_EXISTS class FileNotFound(DataObject): """File not found class.""" def __init__(self): super(FileNotFound, self).__init__() self.__name__ = vexc.FILE_NOT_FOUND class FileFault(DataObject): """File fault.""" def __init__(self): super(FileFault, self).__init__() self.__name__ = vexc.FILE_FAULT class CannotDeleteFile(DataObject): """Cannot delete file.""" def __init__(self): super(CannotDeleteFile, self).__init__() self.__name__ = vexc.CANNOT_DELETE_FILE class FileLocked(DataObject): """File locked.""" def __init__(self): super(FileLocked, self).__init__() self.__name__ = vexc.FILE_LOCKED class VirtualDisk(DataObject): """Virtual Disk class.""" def __init__(self, controllerKey=0, unitNumber=0): super(VirtualDisk, self).__init__() self.key = 0 self.controllerKey = controllerKey self.unitNumber = unitNumber class VirtualDiskFlatVer2BackingInfo(DataObject): """VirtualDiskFlatVer2BackingInfo class.""" def __init__(self): super(VirtualDiskFlatVer2BackingInfo, self).__init__() self.thinProvisioned = False self.eagerlyScrub = False class VirtualDiskRawDiskMappingVer1BackingInfo(DataObject): """VirtualDiskRawDiskMappingVer1BackingInfo class.""" def __init__(self): super(VirtualDiskRawDiskMappingVer1BackingInfo, self).__init__() self.lunUuid = "" class VirtualIDEController(DataObject): def __init__(self, key=0): self.key = key class VirtualLsiLogicController(DataObject): """VirtualLsiLogicController class.""" def __init__(self, key=0, scsiCtlrUnitNumber=0, busNumber=0): self.key = key self.busNumber = busNumber self.scsiCtlrUnitNumber = scsiCtlrUnitNumber self.device = [] class VirtualLsiLogicSASController(DataObject): """VirtualLsiLogicSASController class.""" pass class VirtualPCNet32(DataObject): """VirtualPCNet32 class.""" def __init__(self): super(VirtualPCNet32, self).__init__() self.key = 4000 class 
OptionValue(DataObject): """OptionValue class.""" def __init__(self, key=None, value=None): super(OptionValue, self).__init__() self.key = key self.value = value class VirtualMachine(ManagedObject): """Virtual Machine class.""" def __init__(self, **kwargs): super(VirtualMachine, self).__init__("vm") self.set("name", kwargs.get("name", 'test-vm')) self.set("runtime.connectionState", kwargs.get("conn_state", "connected")) self.set("summary.config.guestId", kwargs.get("guest", constants.DEFAULT_OS_TYPE)) ds_do = kwargs.get("ds", None) self.set("datastore", _convert_to_array_of_mor(ds_do)) self.set("summary.guest.toolsStatus", kwargs.get("toolsstatus", "toolsOk")) self.set("summary.guest.toolsRunningStatus", kwargs.get( "toolsrunningstate", "guestToolsRunning")) self.set("runtime.powerState", kwargs.get("powerstate", "poweredOn")) self.set("config.files.vmPathName", kwargs.get("vmPathName")) self.set("summary.config.numCpu", kwargs.get("numCpu", 1)) self.set("summary.config.memorySizeMB", kwargs.get("mem", 1)) self.set("summary.config.instanceUuid", kwargs.get("instanceUuid")) self.set("version", kwargs.get("version")) devices = _create_array_of_type('VirtualDevice') devices.VirtualDevice = kwargs.get("virtual_device", []) self.set("config.hardware.device", devices) exconfig_do = kwargs.get("extra_config", None) self.set("config.extraConfig", _convert_to_array_of_opt_val(exconfig_do)) if exconfig_do: for optval in exconfig_do: self.set('config.extraConfig["%s"]' % optval.key, optval) self.set('runtime.host', kwargs.get("runtime_host", None)) self.device = kwargs.get("virtual_device", []) # Sample of diagnostics data is below. config = [ ('template', False), ('vmPathName', 'fake_path'), ('memorySizeMB', 512), ('cpuReservation', 0), ('memoryReservation', 0), ('numCpu', 1), ('numEthernetCards', 1), ('numVirtualDisks', 1)] self.set("summary.config", config) quickStats = [ ('overallCpuUsage', 0), ('overallCpuDemand', 0), ('guestMemoryUsage', 0), ('hostMemoryUsage', 141), ('balloonedMemory', 0), ('consumedOverheadMemory', 20)] self.set("summary.quickStats", quickStats) key1 = {'key': 'cpuid.AES'} key2 = {'key': 'cpuid.AVX'} runtime = [ ('connectionState', 'connected'), ('powerState', 'poweredOn'), ('toolsInstallerMounted', False), ('suspendInterval', 0), ('memoryOverhead', 21417984), ('maxCpuUsage', 2000), ('featureRequirement', [key1, key2])] self.set("summary.runtime", runtime) def _update_extra_config(self, extra): extra_config = self.get("config.extraConfig") values = extra_config.OptionValue for value in values: if value.key == extra.key: value.value = extra.value return kv = DataObject() kv.key = extra.key kv.value = extra.value extra_config.OptionValue.append(kv) self.set("config.extraConfig", extra_config) extra_config = self.get("config.extraConfig") def reconfig(self, factory, val): """Called to reconfigure the VM. Actually customizes the property setting of the Virtual Machine object. 
""" if hasattr(val, 'name') and val.name: self.set("name", val.name) if hasattr(val, 'extraConfig'): extraConfigs = _merge_extraconfig( self.get("config.extraConfig").OptionValue, val.extraConfig) self.get("config.extraConfig").OptionValue = extraConfigs if hasattr(val, 'instanceUuid') and val.instanceUuid is not None: if val.instanceUuid == "": val.instanceUuid = uuidutils.generate_uuid() self.set("summary.config.instanceUuid", val.instanceUuid) try: if not hasattr(val, 'deviceChange'): return if hasattr(val, 'extraConfig'): # there are 2 cases - new entry or update an existing one for extra in val.extraConfig: self._update_extra_config(extra) if len(val.deviceChange) < 2: return # Case of Reconfig of VM to attach disk controller_key = val.deviceChange[0].device.controllerKey filename = val.deviceChange[0].device.backing.fileName disk = VirtualDisk() disk.controllerKey = controller_key disk_backing = VirtualDiskFlatVer2BackingInfo() disk_backing.fileName = filename disk_backing.key = -101 disk.backing = disk_backing disk.capacityInBytes = 1024 disk.capacityInKB = 1 controller = VirtualLsiLogicController() controller.key = controller_key devices = _create_array_of_type('VirtualDevice') devices.VirtualDevice = [disk, controller, self.device[0]] self.set("config.hardware.device", devices) except AttributeError: pass class Folder(ManagedObject): """Folder class.""" def __init__(self): super(Folder, self).__init__("Folder") self.set("childEntity", []) class Network(ManagedObject): """Network class.""" def __init__(self): super(Network, self).__init__("network") self.set("summary.name", "vmnet0") class ResourcePool(ManagedObject): """Resource Pool class.""" def __init__(self, name="test_ResPool", value="resgroup-test"): super(ResourcePool, self).__init__("rp") self.set("name", name) summary = DataObject() runtime = DataObject() config = DataObject() memory = DataObject() cpu = DataObject() memoryAllocation = DataObject() cpuAllocation = DataObject() vm_list = DataObject() memory.maxUsage = 1000 * units.Mi memory.overallUsage = 500 * units.Mi cpu.maxUsage = 10000 cpu.overallUsage = 1000 runtime.cpu = cpu runtime.memory = memory summary.runtime = runtime cpuAllocation.limit = 10000 memoryAllocation.limit = 1024 memoryAllocation.reservation = 1024 config.memoryAllocation = memoryAllocation config.cpuAllocation = cpuAllocation vm_list.ManagedObjectReference = [] self.set("summary", summary) self.set("summary.runtime.memory", memory) self.set("config", config) self.set("vm", vm_list) parent = ManagedObjectReference(value=value, name=name) owner = ManagedObjectReference(value=value, name=name) self.set("parent", parent) self.set("owner", owner) class DatastoreHostMount(DataObject): def __init__(self, value='host-100'): super(DatastoreHostMount, self).__init__() host_ref = (_db_content["HostSystem"] [list(_db_content["HostSystem"].keys())[0]].obj) host_system = DataObject() host_system.ManagedObjectReference = [host_ref] host_system.value = value self.key = host_system class ClusterComputeResource(ManagedObject): """Cluster class.""" def __init__(self, name="test_cluster"): super(ClusterComputeResource, self).__init__("domain") self.set("name", name) self.set("host", None) self.set("datastore", None) self.set("resourcePool", None) summary = DataObject() summary.numHosts = 0 summary.numCpuCores = 0 summary.numCpuThreads = 0 summary.numEffectiveHosts = 0 summary.totalMemory = 0 summary.effectiveMemory = 0 summary.effectiveCpu = 10000 self.set("summary", summary) def _add_root_resource_pool(self, 
r_pool): if r_pool: self.set("resourcePool", r_pool) def _add_host(self, host_sys): if host_sys: hosts = self.get("host") if hosts is None: hosts = DataObject() hosts.ManagedObjectReference = [] self.set("host", hosts) hosts.ManagedObjectReference.append(host_sys) # Update summary every time a new host is added self._update_summary() def _add_datastore(self, datastore): if datastore: datastores = self.get("datastore") if datastores is None: datastores = DataObject() datastores.ManagedObjectReference = [] self.set("datastore", datastores) datastores.ManagedObjectReference.append(datastore) # Method to update summary of a cluster upon host addition def _update_summary(self): summary = self.get("summary") summary.numHosts = 0 summary.numCpuCores = 0 summary.numCpuThreads = 0 summary.numEffectiveHosts = 0 summary.totalMemory = 0 summary.effectiveMemory = 0 hosts = self.get("host") # Compute the aggregate stats summary.numHosts = len(hosts.ManagedObjectReference) for host_ref in hosts.ManagedObjectReference: host_sys = _get_object(host_ref) connected = host_sys.get("connected") host_summary = host_sys.get("summary") summary.numCpuCores += host_summary.hardware.numCpuCores summary.numCpuThreads += host_summary.hardware.numCpuThreads summary.totalMemory += host_summary.hardware.memorySize free_memory = (host_summary.hardware.memorySize / units.Mi - host_summary.quickStats.overallMemoryUsage) summary.effectiveMemory += free_memory if connected else 0 summary.numEffectiveHosts += 1 if connected else 0 self.set("summary", summary) class Datastore(ManagedObject): """Datastore class.""" def __init__(self, name="fake-ds", capacity=1024, free=500, accessible=True, maintenance_mode="normal"): super(Datastore, self).__init__("ds") self.set("summary.type", "VMFS") self.set("summary.name", name) self.set("summary.capacity", capacity * units.Gi) self.set("summary.freeSpace", free * units.Gi) self.set("summary.accessible", accessible) self.set("summary.maintenanceMode", maintenance_mode) self.set("browser", "") class HostNetworkSystem(ManagedObject): """HostNetworkSystem class.""" def __init__(self, name="networkSystem"): super(HostNetworkSystem, self).__init__("ns") self.set("name", name) pnic_do = DataObject() pnic_do.device = "vmnic0" net_info_pnic = DataObject() net_info_pnic.PhysicalNic = [pnic_do] self.set("networkInfo.pnic", net_info_pnic) class HostStorageSystem(ManagedObject): """HostStorageSystem class.""" def __init__(self): super(HostStorageSystem, self).__init__("storageSystem") class HostSystem(ManagedObject): """Host System class.""" def __init__(self, name="ha-host", connected=True, ds_ref=None, maintenance_mode=False): super(HostSystem, self).__init__("host") self.set("name", name) if _db_content.get("HostNetworkSystem", None) is None: create_host_network_system() if not _get_object_refs('HostStorageSystem'): create_host_storage_system() host_net_key = list(_db_content["HostNetworkSystem"].keys())[0] host_net_sys = _db_content["HostNetworkSystem"][host_net_key].obj self.set("configManager.networkSystem", host_net_sys) host_storage_sys_key = _get_object_refs('HostStorageSystem')[0] self.set("configManager.storageSystem", host_storage_sys_key) if not ds_ref: ds_ref = create_datastore('local-host-%s' % name, 500, 500) datastores = DataObject() datastores.ManagedObjectReference = [ds_ref] self.set("datastore", datastores) summary = DataObject() hardware = DataObject() hardware.numCpuCores = 8 hardware.numCpuPkgs = 2 hardware.numCpuThreads = 16 hardware.vendor = "Intel" hardware.cpuModel = 
"Intel(R) Xeon(R)" hardware.uuid = "host-uuid" hardware.memorySize = units.Gi summary.hardware = hardware runtime = DataObject() if connected: runtime.connectionState = "connected" else: runtime.connectionState = "disconnected" runtime.inMaintenanceMode = maintenance_mode summary.runtime = runtime quickstats = DataObject() quickstats.overallMemoryUsage = 500 summary.quickStats = quickstats product = DataObject() product.name = "VMware ESXi" product.version = constants.MIN_VC_VERSION config = DataObject() config.product = product summary.config = config pnic_do = DataObject() pnic_do.device = "vmnic0" net_info_pnic = DataObject() net_info_pnic.PhysicalNic = [pnic_do] self.set("summary", summary) self.set("capability.maxHostSupportedVcpus", 600) self.set("summary.hardware", hardware) self.set("summary.runtime", runtime) self.set("summary.quickStats", quickstats) self.set("config.network.pnic", net_info_pnic) self.set("connected", connected) if _db_content.get("Network", None) is None: create_network() net_ref = _db_content["Network"][ list(_db_content["Network"].keys())[0]].obj network_do = DataObject() network_do.ManagedObjectReference = [net_ref] self.set("network", network_do) vswitch_do = DataObject() vswitch_do.pnic = ["vmnic0"] vswitch_do.name = "vSwitch0" vswitch_do.portgroup = ["PortGroup-vmnet0"] net_swicth = DataObject() net_swicth.HostVirtualSwitch = [vswitch_do] self.set("config.network.vswitch", net_swicth) host_pg_do = DataObject() host_pg_do.key = "PortGroup-vmnet0" pg_spec = DataObject() pg_spec.vlanId = 0 pg_spec.name = "vmnet0" host_pg_do.spec = pg_spec host_pg = DataObject() host_pg.HostPortGroup = [host_pg_do] self.set("config.network.portgroup", host_pg) config = DataObject() storageDevice = DataObject() iscsi_hba = HostInternetScsiHba() iscsi_hba.iScsiName = "iscsi-name" host_bus_adapter_array = DataObject() host_bus_adapter_array.HostHostBusAdapter = [iscsi_hba] storageDevice.hostBusAdapter = host_bus_adapter_array config.storageDevice = storageDevice self.set("config.storageDevice.hostBusAdapter", host_bus_adapter_array) # Set the same on the storage system managed object host_storage_sys = _get_object(host_storage_sys_key) host_storage_sys.set('storageDeviceInfo.hostBusAdapter', host_bus_adapter_array) def _add_iscsi_target(self, data): default_lun = DataObject() default_lun.scsiLun = 'key-vim.host.ScsiDisk-010' default_lun.key = 'key-vim.host.ScsiDisk-010' default_lun.deviceName = 'fake-device' default_lun.uuid = 'fake-uuid' scsi_lun_array = DataObject() scsi_lun_array.ScsiLun = [default_lun] self.set("config.storageDevice.scsiLun", scsi_lun_array) transport = DataObject() transport.address = [data['target_portal']] transport.iScsiName = data['target_iqn'] default_target = DataObject() default_target.lun = [default_lun] default_target.transport = transport iscsi_adapter = DataObject() iscsi_adapter.adapter = 'key-vmhba33' iscsi_adapter.transport = transport iscsi_adapter.target = [default_target] iscsi_topology = DataObject() iscsi_topology.adapter = [iscsi_adapter] self.set("config.storageDevice.scsiTopology", iscsi_topology) def _add_port_group(self, spec): """Adds a port group to the host system object in the db.""" pg_name = spec.name vswitch_name = spec.vswitchName vlanid = spec.vlanId vswitch_do = DataObject() vswitch_do.pnic = ["vmnic0"] vswitch_do.name = vswitch_name vswitch_do.portgroup = ["PortGroup-%s" % pg_name] vswitches = self.get("config.network.vswitch").HostVirtualSwitch vswitches.append(vswitch_do) host_pg_do = DataObject() host_pg_do.key = 
"PortGroup-%s" % pg_name pg_spec = DataObject() pg_spec.vlanId = vlanid pg_spec.name = pg_name host_pg_do.spec = pg_spec host_pgrps = self.get("config.network.portgroup").HostPortGroup host_pgrps.append(host_pg_do) class Datacenter(ManagedObject): """Datacenter class.""" def __init__(self, name="ha-datacenter", ds_ref=None): super(Datacenter, self).__init__("dc") self.set("name", name) if _db_content.get("Folder", None) is None: create_folder() folder_ref = _db_content["Folder"][ list(_db_content["Folder"].keys())[0]].obj folder_do = DataObject() folder_do.ManagedObjectReference = [folder_ref] self.set("vmFolder", folder_ref) if _db_content.get("Network", None) is None: create_network() net_ref = _db_content["Network"][ list(_db_content["Network"].keys())[0]].obj network_do = DataObject() network_do.ManagedObjectReference = [net_ref] self.set("network", network_do) if ds_ref: datastore = DataObject() datastore.ManagedObjectReference = [ds_ref] else: datastore = None self.set("datastore", datastore) class Task(ManagedObject): """Task class.""" def __init__(self, task_name, state="running", result=None, error_fault=None): super(Task, self).__init__("Task") info = DataObject() info.name = task_name info.state = state if state == 'error': error = DataObject() error.localizedMessage = "Error message" if not error_fault: error.fault = DataObject() else: error.fault = error_fault info.error = error info.result = result self.set("info", info) def create_host_network_system(): host_net_system = HostNetworkSystem() _create_object("HostNetworkSystem", host_net_system) def create_host_storage_system(): host_storage_system = HostStorageSystem() _create_object("HostStorageSystem", host_storage_system) def create_host(ds_ref=None): host_system = HostSystem(ds_ref=ds_ref) _create_object('HostSystem', host_system) def create_datacenter(name, ds_ref=None): data_center = Datacenter(name, ds_ref) _create_object('Datacenter', data_center) def create_datastore(name, capacity, free): data_store = Datastore(name, capacity, free) _create_object('Datastore', data_store) return data_store.obj def create_res_pool(): res_pool = ResourcePool() _create_object('ResourcePool', res_pool) return res_pool.obj def create_folder(): folder = Folder() _create_object('Folder', folder) return folder.obj def create_network(): network = Network() _create_object('Network', network) def create_cluster(name, ds_ref): cluster = ClusterComputeResource(name=name) cluster._add_host(_get_object_refs("HostSystem")[0]) cluster._add_host(_get_object_refs("HostSystem")[1]) cluster._add_datastore(ds_ref) cluster._add_root_resource_pool(create_res_pool()) _create_object('ClusterComputeResource', cluster) return cluster def create_vm(uuid=None, name=None, cpus=1, memory=128, devices=None, vmPathName=None, extraConfig=None, res_pool_ref=None, host_ref=None, version=None): if uuid is None: uuid = uuidutils.generate_uuid() if name is None: name = uuid if devices is None: devices = [] if vmPathName is None: vm_path = ds_obj.DatastorePath( list(_db_content['Datastore'].values())[0]) else: vm_path = ds_obj.DatastorePath.parse(vmPathName) if res_pool_ref is None: res_pool_ref = list(_db_content['ResourcePool'].keys())[0] if host_ref is None: host_ref = list(_db_content["HostSystem"].keys())[0] # Fill in the default path to the vmx file if we were only given a # datastore. Note that if you create a VM with vmPathName '[foo]', when you # retrieve vmPathName it will be '[foo] uuid/uuid.vmx'. Hence we use # vm_path below for the stored value of vmPathName. 
if vm_path.rel_path == '': vm_path = vm_path.join(name, name + '.vmx') for key, value in _db_content["Datastore"].items(): if value.get('summary.name') == vm_path.datastore: ds = key break else: ds = create_datastore(vm_path.datastore, 1024, 500) vm_dict = {"name": name, "ds": [ds], "runtime_host": host_ref, "powerstate": "poweredOff", "vmPathName": str(vm_path), "numCpu": cpus, "mem": memory, "extra_config": extraConfig, "virtual_device": devices, "instanceUuid": uuid, "version": version} vm = VirtualMachine(**vm_dict) _create_object("VirtualMachine", vm) res_pool = _get_object(res_pool_ref) res_pool.vm.ManagedObjectReference.append(vm.obj) return vm.obj def create_task(task_name, state="running", result=None, error_fault=None): task = Task(task_name, state, result, error_fault) _create_object("Task", task) return task def _add_file(file_path): """Adds a file reference to the db.""" _db_content["files"].append(file_path) def _remove_file(file_path): """Removes a file reference from the db.""" # Check if the remove is for a single file object or for a folder if file_path.find(".vmdk") != -1: if file_path not in _db_content.get("files"): raise vexc.FileNotFoundException(file_path) _db_content.get("files").remove(file_path) else: # Removes the files in the folder and the folder too from the db to_delete = set() for file in _db_content.get("files"): if file.find(file_path) != -1: to_delete.add(file) for file in to_delete: _db_content.get("files").remove(file) def fake_plug_vifs(*args, **kwargs): """Fakes plugging vifs.""" pass def fake_get_network(*args, **kwargs): """Fake get network.""" return {'type': 'fake'} def assertPathExists(test, path): test.assertIn(path, _db_content.get('files')) def assertPathNotExists(test, path): test.assertNotIn(path, _db_content.get('files')) def get_file(file_path): """Check if file exists in the db.""" return file_path in _db_content.get("files") def fake_upload_image(context, image, instance, **kwargs): """Fakes the upload of an image.""" pass def fake_fetch_image(context, instance, host, port, dc_name, ds_name, file_path, cookies=None): """Fakes the fetch of an image.""" ds_file_path = "[" + ds_name + "] " + file_path _add_file(ds_file_path) def _get_vm_mdo(vm_ref): """Gets the Virtual Machine with the ref from the db.""" if _db_content.get("VirtualMachine", None) is None: raise exception.NotFound("There is no VM registered") if vm_ref not in _db_content.get("VirtualMachine"): raise exception.NotFound("Virtual Machine with ref %s is not " "there" % vm_ref) return _db_content.get("VirtualMachine")[vm_ref] def _merge_extraconfig(existing, changes): """Imposes the changes in extraConfig over the existing extraConfig.""" existing = existing or [] if (changes): for c in changes: if len([x for x in existing if x.key == c.key]) > 0: extraConf = [x for x in existing if x.key == c.key][0] extraConf.value = c.value else: existing.append(c) return existing class FakeFactory(object): """Fake factory class for the suds client.""" def create(self, obj_name): """Creates a namespace object.""" klass = obj_name[4:] # skip 'ns0:' module = sys.modules[__name__] fake_klass = getattr(module, klass, None) if fake_klass is None: return DataObject(obj_name) else: return fake_klass() class SharesInfo(DataObject): def __init__(self): super(SharesInfo, self).__init__() self.level = None self.shares = None class VirtualEthernetCardResourceAllocation(DataObject): def __init__(self): super(VirtualEthernetCardResourceAllocation, self).__init__() self.share = SharesInfo() class 
VirtualE1000(DataObject): def __init__(self): super(VirtualE1000, self).__init__() self.resourceAllocation = VirtualEthernetCardResourceAllocation() class FakeService(DataObject): """Fake service class.""" def Logout(self, session_manager): pass def FindExtension(self, extension_manager, key): return [] class FakeClient(DataObject): """Fake client class.""" def __init__(self): """Creates a namespace object.""" self.service = FakeService() class FakeSession(object): """Fake Session Class.""" def __init__(self): self.vim = FakeVim() def _call_method(self, module, method, *args, **kwargs): raise NotImplementedError() def _wait_for_task(self, task_ref): raise NotImplementedError() class FakeObjectRetrievalSession(FakeSession): """A session for faking object retrieval tasks. _call_method() returns a given set of objects sequentially, regardless of the method called. """ def __init__(self, *ret): super(FakeObjectRetrievalSession, self).__init__() self.ret = ret self.ind = 0 def _call_method(self, module, method, *args, **kwargs): if (method == 'continue_retrieval' or method == 'cancel_retrieval'): return # return fake objects in a circular manner self.ind = (self.ind + 1) % len(self.ret) return self.ret[self.ind - 1] def get_fake_vim_object(vmware_api_session): key = vmware_api_session.__repr__() if key not in _vim_map: _vim_map[key] = FakeVim() return _vim_map[key] class FakeVim(object): """Fake VIM Class.""" def __init__(self, protocol="https", host="localhost", trace=None): """Initializes the suds client object, sets the service content contents and the cookies for the session. """ self._session = None self.client = FakeClient() self.client.factory = FakeFactory() transport = DataObject() transport.cookiejar = "Fake-CookieJar" options = DataObject() options.transport = transport self.client.options = options service_content = self.client.factory.create('ns0:ServiceContent') service_content.propertyCollector = "PropCollector" service_content.virtualDiskManager = "VirtualDiskManager" service_content.fileManager = "FileManager" service_content.rootFolder = "RootFolder" service_content.sessionManager = "SessionManager" service_content.extensionManager = "ExtensionManager" service_content.searchIndex = "SearchIndex" about_info = DataObject() about_info.name = "VMware vCenter Server" about_info.version = constants.MIN_VC_VERSION about_info.instanceUuid = _FAKE_VCENTER_UUID service_content.about = about_info self._service_content = service_content @property def service_content(self): return self._service_content def __repr__(self): return "Fake VIM Object" def __str__(self): return "Fake VIM Object" def _login(self): """Logs in and sets the session object in the db.""" self._session = uuidutils.generate_uuid() session = DataObject() session.key = self._session session.userName = 'sessionUserName' _db_content['session'][self._session] = session return session def _terminate_session(self, *args, **kwargs): """Terminates a session.""" s = kwargs.get("sessionId")[0] if s not in _db_content['session']: return del _db_content['session'][s] def _check_session(self): """Checks if the session is active.""" if (self._session is None or self._session not in _db_content['session']): LOG.debug("Session is faulty") raise vexc.VimFaultException([vexc.NOT_AUTHENTICATED], "Session Invalid") def _session_is_active(self, *args, **kwargs): try: self._check_session() return True except Exception: return False def _create_vm(self, method, *args, **kwargs): """Creates and registers a VM object with the Host System.""" 
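        # Note on the flow below: an unrecognised guestId fails the task with
        # "A specified parameter was not correct." (mirroring the vCenter
        # fault) rather than raising, and only deviceChange entries whose
        # operation is 'add' are attached to the new VM.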
config_spec = kwargs.get("config") if config_spec.guestId not in constants.VALID_OS_TYPES: ex = vexc.VMwareDriverException('A specified parameter was ' 'not correct.') return create_task(method, "error", error_fault=ex).obj pool = kwargs.get('pool') version = getattr(config_spec, 'version', None) devices = [] for device_change in config_spec.deviceChange: if device_change.operation == 'add': devices.append(device_change.device) vm_ref = create_vm(config_spec.instanceUuid, config_spec.name, config_spec.numCPUs, config_spec.memoryMB, devices, config_spec.files.vmPathName, config_spec.extraConfig, pool, version=version) task_mdo = create_task(method, "success", result=vm_ref) return task_mdo.obj def _create_folder(self, method, *args, **kwargs): return create_folder() def _reconfig_vm(self, method, *args, **kwargs): """Reconfigures a VM and sets the properties supplied.""" vm_ref = args[0] vm_mdo = _get_vm_mdo(vm_ref) vm_mdo.reconfig(self.client.factory, kwargs.get("spec")) task_mdo = create_task(method, "success") return task_mdo.obj def _rename(self, method, *args, **kwargs): vm_ref = args[0] vm_mdo = _get_vm_mdo(vm_ref) vm_mdo.set('name', kwargs['newName']) task_mdo = create_task(method, "success") return task_mdo.obj def _create_copy_disk(self, method, vmdk_file_path): """Creates/copies a vmdk file object in the datastore.""" # We need to add/create both .vmdk and .-flat.vmdk files flat_vmdk_file_path = vmdk_file_path.replace(".vmdk", "-flat.vmdk") _add_file(vmdk_file_path) _add_file(flat_vmdk_file_path) task_mdo = create_task(method, "success") return task_mdo.obj def _extend_disk(self, method, size): """Extend disk size when create an instance.""" task_mdo = create_task(method, "success") return task_mdo.obj def _snapshot_vm(self, method): """Snapshots a VM. Here we do nothing for faking sake.""" task_mdo = create_task(method, "success") return task_mdo.obj def _find_all_by_uuid(self, *args, **kwargs): uuid = kwargs.get('uuid') vm_refs = [] for vm_ref in _db_content.get("VirtualMachine"): vm = _get_object(vm_ref) vm_uuid = vm.get("summary.config.instanceUuid") if vm_uuid == uuid: vm_refs.append(vm_ref) return vm_refs def _delete_snapshot(self, method, *args, **kwargs): """Deletes a VM snapshot. 
Here we do nothing for faking sake.""" task_mdo = create_task(method, "success") return task_mdo.obj def _delete_file(self, method, *args, **kwargs): """Deletes a file from the datastore.""" _remove_file(kwargs.get("name")) task_mdo = create_task(method, "success") return task_mdo.obj def _just_return(self): """Fakes a return.""" return def _just_return_task(self, method): """Fakes a task return.""" task_mdo = create_task(method, "success") return task_mdo.obj def _clone_vm(self, method, *args, **kwargs): """Fakes a VM clone.""" """Creates and registers a VM object with the Host System.""" source_vmref = args[0] source_vm_mdo = _get_vm_mdo(source_vmref) clone_spec = kwargs.get("spec") vm_dict = { "name": kwargs.get("name"), "ds": source_vm_mdo.get("datastore"), "runtime_host": source_vm_mdo.get("runtime.host"), "powerstate": source_vm_mdo.get("runtime.powerState"), "vmPathName": source_vm_mdo.get("config.files.vmPathName"), "numCpu": source_vm_mdo.get("summary.config.numCpu"), "mem": source_vm_mdo.get("summary.config.memorySizeMB"), "extra_config": source_vm_mdo.get("config.extraConfig").OptionValue, "virtual_device": source_vm_mdo.get("config.hardware.device").VirtualDevice, "instanceUuid": source_vm_mdo.get("summary.config.instanceUuid")} if hasattr(clone_spec, 'config'): # Impose the config changes specified in the config property if (hasattr(clone_spec.config, 'instanceUuid') and clone_spec.config.instanceUuid is not None): vm_dict["instanceUuid"] = clone_spec.config.instanceUuid if hasattr(clone_spec.config, 'extraConfig'): extraConfigs = _merge_extraconfig(vm_dict["extra_config"], clone_spec.config.extraConfig) vm_dict["extra_config"] = extraConfigs virtual_machine = VirtualMachine(**vm_dict) _create_object("VirtualMachine", virtual_machine) task_mdo = create_task(method, "success") return task_mdo.obj def _unregister_vm(self, method, *args, **kwargs): """Unregisters a VM from the Host System.""" vm_ref = args[0] _get_vm_mdo(vm_ref) del _db_content["VirtualMachine"][vm_ref] task_mdo = create_task(method, "success") return task_mdo.obj def _search_ds(self, method, *args, **kwargs): """Searches the datastore for a file.""" # TODO(garyk): add support for spec parameter ds_path = kwargs.get("datastorePath") matched_files = set() # Check if we are searching for a file or a directory directory = False dname = '%s/' % ds_path for file in _db_content.get("files"): if file == dname: directory = True break # A directory search implies that we must return all # subdirectories if directory: for file in _db_content.get("files"): if file.find(ds_path) != -1: if not file.endswith(ds_path): path = file.replace(dname, '', 1).split('/') if path: matched_files.add(path[0]) if not matched_files: matched_files.add('/') else: for file in _db_content.get("files"): if file.find(ds_path) != -1: matched_files.add(ds_path) if matched_files: result = DataObject() result.path = ds_path result.file = [] for file in matched_files: matched = DataObject() matched.path = file matched.fileSize = 1024 result.file.append(matched) task_mdo = create_task(method, "success", result=result) else: task_mdo = create_task(method, "error", error_fault=FileNotFound()) return task_mdo.obj def _move_file(self, method, *args, **kwargs): source = kwargs.get('sourceName') destination = kwargs.get('destinationName') new_files = [] if source != destination: for file in _db_content.get("files"): if source in file: new_file = file.replace(source, destination) new_files.append(new_file) # if source is not a file then the children will also 
# be deleted _remove_file(source) for file in new_files: _add_file(file) task_mdo = create_task(method, "success") return task_mdo.obj def _make_dir(self, method, *args, **kwargs): """Creates a directory in the datastore.""" ds_path = kwargs.get("name") if get_file(ds_path): raise vexc.FileAlreadyExistsException() _db_content["files"].append('%s/' % ds_path) def _set_power_state(self, method, vm_ref, pwr_state="poweredOn"): """Sets power state for the VM.""" if _db_content.get("VirtualMachine", None) is None: raise exception.NotFound("No Virtual Machine has been " "registered yet") if vm_ref not in _db_content.get("VirtualMachine"): raise exception.NotFound("Virtual Machine with ref %s is not " "there" % vm_ref) vm_mdo = _db_content.get("VirtualMachine").get(vm_ref) vm_mdo.set("runtime.powerState", pwr_state) task_mdo = create_task(method, "success") return task_mdo.obj def _retrieve_properties_continue(self, method, *args, **kwargs): """Continues the retrieve.""" return FakeRetrieveResult() def _retrieve_properties_cancel(self, method, *args, **kwargs): """Cancels the retrieve.""" return None def _retrieve_properties(self, method, *args, **kwargs): """Retrieves properties based on the type.""" spec_set = kwargs.get("specSet")[0] spec_type = spec_set.propSet[0].type properties = spec_set.propSet[0].pathSet if not isinstance(properties, list): properties = properties.split() objs = spec_set.objectSet lst_ret_objs = FakeRetrieveResult() for obj in objs: try: obj_ref = obj.obj if obj_ref == "RootFolder": # This means that we are retrieving props for all managed # data objects of the specified 'type' in the entire # inventory. This gets invoked by vim_util.get_objects. mdo_refs = _db_content[spec_type] elif obj_ref.type != spec_type: # This means that we are retrieving props for the managed # data objects in the parent object's 'path' property. # This gets invoked by vim_util.get_inner_objects # eg. obj_ref = # type = 'DataStore' # path = 'datastore' # the above will retrieve all datastores in the given # cluster. parent_mdo = _db_content[obj_ref.type][obj_ref] path = obj.selectSet[0].path mdo_refs = parent_mdo.get(path).ManagedObjectReference else: # This means that we are retrieving props of the given # managed data object. This gets invoked by # vim_util.get_properties_for_a_collection_of_objects. 
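                    # Worked example of the three branches above: for
                    # spec_type='Datastore', obj_ref='RootFolder' yields every
                    # fake Datastore in the inventory; a cluster obj_ref with
                    # selectSet path 'datastore' yields only that cluster's
                    # datastores; and a Datastore obj_ref yields just itself.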
mdo_refs = [obj_ref] for mdo_ref in mdo_refs: mdo = _db_content[spec_type][mdo_ref] prop_list = [] for prop_name in properties: prop = Prop(prop_name, mdo.get(prop_name)) prop_list.append(prop) obj_content = ObjectContent(mdo.obj, prop_list) lst_ret_objs.add_object(obj_content) except Exception: LOG.exception("_retrieve_properties error") continue return lst_ret_objs def _add_port_group(self, method, *args, **kwargs): """Adds a port group to the host system.""" _host_sk = list(_db_content["HostSystem"].keys())[0] host_mdo = _db_content["HostSystem"][_host_sk] host_mdo._add_port_group(kwargs.get("portgrp")) def _add_iscsi_send_tgt(self, method, *args, **kwargs): """Adds a iscsi send target to the hba.""" send_targets = kwargs.get('targets') host_storage_sys = _get_objects('HostStorageSystem').objects[0] iscsi_hba_array = host_storage_sys.get('storageDeviceInfo' '.hostBusAdapter') iscsi_hba = iscsi_hba_array.HostHostBusAdapter[0] if hasattr(iscsi_hba, 'configuredSendTarget'): iscsi_hba.configuredSendTarget.extend(send_targets) else: iscsi_hba.configuredSendTarget = send_targets def __getattr__(self, attr_name): if attr_name != "Login": self._check_session() if attr_name == "Login": return lambda *args, **kwargs: self._login() elif attr_name == "SessionIsActive": return lambda *args, **kwargs: self._session_is_active( *args, **kwargs) elif attr_name == "TerminateSession": return lambda *args, **kwargs: self._terminate_session( *args, **kwargs) elif attr_name == "CreateVM_Task": return lambda *args, **kwargs: self._create_vm(attr_name, *args, **kwargs) elif attr_name == "CreateFolder": return lambda *args, **kwargs: self._create_folder(attr_name, *args, **kwargs) elif attr_name == "ReconfigVM_Task": return lambda *args, **kwargs: self._reconfig_vm(attr_name, *args, **kwargs) elif attr_name == "Rename_Task": return lambda *args, **kwargs: self._rename(attr_name, *args, **kwargs) elif attr_name == "CreateVirtualDisk_Task": return lambda *args, **kwargs: self._create_copy_disk(attr_name, kwargs.get("name")) elif attr_name == "DeleteDatastoreFile_Task": return lambda *args, **kwargs: self._delete_file(attr_name, *args, **kwargs) elif attr_name == "PowerOnVM_Task": return lambda *args, **kwargs: self._set_power_state(attr_name, args[0], "poweredOn") elif attr_name == "PowerOffVM_Task": return lambda *args, **kwargs: self._set_power_state(attr_name, args[0], "poweredOff") elif attr_name == "RebootGuest": return lambda *args, **kwargs: self._just_return() elif attr_name == "ResetVM_Task": return lambda *args, **kwargs: self._set_power_state(attr_name, args[0], "poweredOn") elif attr_name == "SuspendVM_Task": return lambda *args, **kwargs: self._set_power_state(attr_name, args[0], "suspended") elif attr_name == "CreateSnapshot_Task": return lambda *args, **kwargs: self._snapshot_vm(attr_name) elif attr_name == "RemoveSnapshot_Task": return lambda *args, **kwargs: self._delete_snapshot(attr_name, *args, **kwargs) elif attr_name == "CopyVirtualDisk_Task": return lambda *args, **kwargs: self._create_copy_disk(attr_name, kwargs.get("destName")) elif attr_name == "ExtendVirtualDisk_Task": return lambda *args, **kwargs: self._extend_disk(attr_name, kwargs.get("size")) elif attr_name == "Destroy_Task": return lambda *args, **kwargs: self._unregister_vm(attr_name, *args, **kwargs) elif attr_name == "UnregisterVM": return lambda *args, **kwargs: self._unregister_vm(attr_name, *args, **kwargs) elif attr_name == "CloneVM_Task": return lambda *args, **kwargs: self._clone_vm(attr_name, *args, **kwargs) elif 
attr_name == "FindAllByUuid": return lambda *args, **kwargs: self._find_all_by_uuid(attr_name, *args, **kwargs) elif attr_name == "SearchDatastore_Task": return lambda *args, **kwargs: self._search_ds(attr_name, *args, **kwargs) elif attr_name == "MoveDatastoreFile_Task": return lambda *args, **kwargs: self._move_file(attr_name, *args, **kwargs) elif attr_name == "MakeDirectory": return lambda *args, **kwargs: self._make_dir(attr_name, *args, **kwargs) elif attr_name == "RetrievePropertiesEx": return lambda *args, **kwargs: self._retrieve_properties( attr_name, *args, **kwargs) elif attr_name == "ContinueRetrievePropertiesEx": return lambda *args, **kwargs: self._retrieve_properties_continue( attr_name, *args, **kwargs) elif attr_name == "CancelRetrievePropertiesEx": return lambda *args, **kwargs: self._retrieve_properties_cancel( attr_name, *args, **kwargs) elif attr_name == "AddPortGroup": return lambda *args, **kwargs: self._add_port_group(attr_name, *args, **kwargs) elif attr_name in ("RebootHost_Task", "ShutdownHost_Task", "PowerUpHostFromStandBy_Task", "EnterMaintenanceMode_Task", "ExitMaintenanceMode_Task", "RescanHba"): return lambda *args, **kwargs: self._just_return_task(attr_name) elif attr_name == "AddInternetScsiSendTargets": return lambda *args, **kwargs: self._add_iscsi_send_tgt(attr_name, *args, **kwargs) nova-17.0.1/nova/tests/unit/virt/vmwareapi/test_vm_util.py0000666000175000017500000025470613250073126023737 0ustar zuulzuul00000000000000# Copyright (c) 2013 Hewlett-Packard Development Company, L.P. # Copyright 2013 Canonical Corp. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import collections import mock from oslo_utils import units from oslo_utils import uuidutils from oslo_vmware import exceptions as vexc from oslo_vmware.objects import datastore as ds_obj from oslo_vmware import pbm from oslo_vmware import vim_util as vutil from nova import exception from nova.network import model as network_model from nova import test from nova.tests.unit import fake_instance from nova.tests.unit.virt.vmwareapi import fake from nova.tests.unit.virt.vmwareapi import stubs from nova.virt.vmwareapi import constants from nova.virt.vmwareapi import driver from nova.virt.vmwareapi import vm_util class partialObject(object): def __init__(self, path='fake-path'): self.path = path self.fault = fake.DataObject() class VMwareVMUtilTestCase(test.NoDBTestCase): def setUp(self): super(VMwareVMUtilTestCase, self).setUp() fake.reset() stubs.set_stubs(self) vm_util.vm_refs_cache_reset() self._instance = fake_instance.fake_instance_obj( None, **{'id': 7, 'name': 'fake!', 'display_name': 'fake-display-name', 'uuid': uuidutils.generate_uuid(), 'vcpus': 2, 'memory_mb': 2048}) def _test_get_stats_from_cluster(self, connection_state="connected", maintenance_mode=False): ManagedObjectRefs = [fake.ManagedObjectReference("host1", "HostSystem"), fake.ManagedObjectReference("host2", "HostSystem")] hosts = fake._convert_to_array_of_mor(ManagedObjectRefs) respool = fake.ManagedObjectReference("resgroup-11", "ResourcePool") prop_dict = {'host': hosts, 'resourcePool': respool} hardware = fake.DataObject() hardware.numCpuCores = 8 hardware.numCpuThreads = 16 hardware.vendor = "Intel" hardware.cpuModel = "Intel(R) Xeon(R)" hardware.memorySize = 4 * units.Gi runtime_host_1 = fake.DataObject() runtime_host_1.connectionState = "connected" runtime_host_1.inMaintenanceMode = False quickstats_1 = fake.DataObject() quickstats_1.overallMemoryUsage = 512 quickstats_2 = fake.DataObject() quickstats_2.overallMemoryUsage = 512 runtime_host_2 = fake.DataObject() runtime_host_2.connectionState = connection_state runtime_host_2.inMaintenanceMode = maintenance_mode prop_list_host_1 = [fake.Prop(name="summary.hardware", val=hardware), fake.Prop(name="summary.runtime", val=runtime_host_1), fake.Prop(name="summary.quickStats", val=quickstats_1)] prop_list_host_2 = [fake.Prop(name="summary.hardware", val=hardware), fake.Prop(name="summary.runtime", val=runtime_host_2), fake.Prop(name="summary.quickStats", val=quickstats_2)] fake_objects = fake.FakeRetrieveResult() fake_objects.add_object(fake.ObjectContent("prop_list_host1", prop_list_host_1)) fake_objects.add_object(fake.ObjectContent("prop_list_host1", prop_list_host_2)) def fake_call_method(*args): if "get_object_properties_dict" in args: return prop_dict elif "get_properties_for_a_collection_of_objects" in args: return fake_objects else: raise Exception('unexpected method call') session = fake.FakeSession() with mock.patch.object(session, '_call_method', fake_call_method): result = vm_util.get_stats_from_cluster(session, "cluster1") if connection_state == "connected" and not maintenance_mode: num_hosts = 2 else: num_hosts = 1 expected_stats = {'cpu': {'vcpus': num_hosts * 16, 'max_vcpus_per_host': 16}, 'mem': {'total': num_hosts * 4096, 'free': num_hosts * 4096 - num_hosts * 512, 'max_mem_mb_per_host': 4096}} self.assertEqual(expected_stats, result) def test_get_stats_from_cluster_hosts_connected_and_active(self): self._test_get_stats_from_cluster() def test_get_stats_from_cluster_hosts_disconnected_and_active(self): 
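        # A disconnected host is excluded from the aggregate, so the shared
        # helper asserts single-host totals here (num_hosts == 1).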
self._test_get_stats_from_cluster(connection_state="disconnected") def test_get_stats_from_cluster_hosts_connected_and_maintenance(self): self._test_get_stats_from_cluster(maintenance_mode=True) def test_get_host_ref_no_hosts_in_cluster(self): self.assertRaises(exception.NoValidHost, vm_util.get_host_ref, fake.FakeObjectRetrievalSession(""), 'fake_cluster') def test_get_resize_spec(self): vcpus = 2 memory_mb = 2048 extra_specs = vm_util.ExtraSpecs() fake_factory = fake.FakeFactory() result = vm_util.get_vm_resize_spec(fake_factory, vcpus, memory_mb, extra_specs) expected = fake_factory.create('ns0:VirtualMachineConfigSpec') expected.memoryMB = memory_mb expected.numCPUs = vcpus cpuAllocation = fake_factory.create('ns0:ResourceAllocationInfo') cpuAllocation.reservation = 0 cpuAllocation.limit = -1 cpuAllocation.shares = fake_factory.create('ns0:SharesInfo') cpuAllocation.shares.level = 'normal' cpuAllocation.shares.shares = 0 expected.cpuAllocation = cpuAllocation self.assertEqual(expected, result) def test_get_resize_spec_with_limits(self): vcpus = 2 memory_mb = 2048 cpu_limits = vm_util.Limits(limit=7, reservation=6) extra_specs = vm_util.ExtraSpecs(cpu_limits=cpu_limits) fake_factory = fake.FakeFactory() result = vm_util.get_vm_resize_spec(fake_factory, vcpus, memory_mb, extra_specs) expected = fake_factory.create('ns0:VirtualMachineConfigSpec') expected.memoryMB = memory_mb expected.numCPUs = vcpus cpuAllocation = fake_factory.create('ns0:ResourceAllocationInfo') cpuAllocation.reservation = 6 cpuAllocation.limit = 7 cpuAllocation.shares = fake_factory.create('ns0:SharesInfo') cpuAllocation.shares.level = 'normal' cpuAllocation.shares.shares = 0 expected.cpuAllocation = cpuAllocation self.assertEqual(expected, result) def test_get_cdrom_attach_config_spec(self): fake_factory = fake.FakeFactory() datastore = fake.Datastore() result = vm_util.get_cdrom_attach_config_spec(fake_factory, datastore, "/tmp/foo.iso", 200, 0) expected = fake_factory.create('ns0:VirtualMachineConfigSpec') expected.deviceChange = [] device_change = fake_factory.create('ns0:VirtualDeviceConfigSpec') device_change.operation = 'add' device_change.device = fake_factory.create('ns0:VirtualCdrom') device_change.device.controllerKey = 200 device_change.device.unitNumber = 0 device_change.device.key = -1 connectable = fake_factory.create('ns0:VirtualDeviceConnectInfo') connectable.allowGuestControl = False connectable.startConnected = True connectable.connected = True device_change.device.connectable = connectable backing = fake_factory.create('ns0:VirtualCdromIsoBackingInfo') backing.fileName = '/tmp/foo.iso' backing.datastore = datastore device_change.device.backing = backing expected.deviceChange.append(device_change) self.assertEqual(expected, result) def test_lsilogic_controller_spec(self): # Test controller spec returned for lsiLogic sas adapter type config_spec = vm_util.create_controller_spec(fake.FakeFactory(), -101, adapter_type=constants.ADAPTER_TYPE_LSILOGICSAS) self.assertEqual("ns0:VirtualLsiLogicSASController", config_spec.device.obj_name) def test_paravirtual_controller_spec(self): # Test controller spec returned for paraVirtual adapter type config_spec = vm_util.create_controller_spec(fake.FakeFactory(), -101, adapter_type=constants.ADAPTER_TYPE_PARAVIRTUAL) self.assertEqual("ns0:ParaVirtualSCSIController", config_spec.device.obj_name) def test_create_controller_spec_with_specific_bus_number(self): # Test controller spec with specific bus number rather default 0 config_spec = 
vm_util.create_controller_spec(fake.FakeFactory(), -101, adapter_type=constants.ADAPTER_TYPE_LSILOGICSAS, bus_number=1) self.assertEqual(1, config_spec.device.busNumber) def _vmdk_path_and_adapter_type_devices(self, filename, parent=None): # Test the adapter_type returned for a lsiLogic sas controller controller_key = 1000 disk = fake.VirtualDisk() disk.controllerKey = controller_key disk_backing = fake.VirtualDiskFlatVer2BackingInfo() disk_backing.fileName = filename disk.capacityInBytes = 1024 if parent: disk_backing.parent = parent disk.backing = disk_backing # Ephemeral disk e_disk = fake.VirtualDisk() e_disk.controllerKey = controller_key disk_backing = fake.VirtualDiskFlatVer2BackingInfo() disk_backing.fileName = '[test_datastore] uuid/ephemeral_0.vmdk' e_disk.capacityInBytes = 512 e_disk.backing = disk_backing controller = fake.VirtualLsiLogicSASController() controller.key = controller_key devices = [disk, e_disk, controller] return devices def test_get_vmdk_path_and_adapter_type(self): filename = '[test_datastore] uuid/uuid.vmdk' devices = self._vmdk_path_and_adapter_type_devices(filename) session = fake.FakeSession() with mock.patch.object(session, '_call_method', return_value=devices): vmdk = vm_util.get_vmdk_info(session, None) self.assertEqual(constants.ADAPTER_TYPE_LSILOGICSAS, vmdk.adapter_type) self.assertEqual('[test_datastore] uuid/ephemeral_0.vmdk', vmdk.path) self.assertEqual(512, vmdk.capacity_in_bytes) self.assertEqual(devices[1], vmdk.device) def test_get_vmdk_path_and_adapter_type_with_match(self): n_filename = '[test_datastore] uuid/uuid.vmdk' devices = self._vmdk_path_and_adapter_type_devices(n_filename) session = fake.FakeSession() with mock.patch.object(session, '_call_method', return_value=devices): vmdk = vm_util.get_vmdk_info(session, None, uuid='uuid') self.assertEqual(constants.ADAPTER_TYPE_LSILOGICSAS, vmdk.adapter_type) self.assertEqual(n_filename, vmdk.path) self.assertEqual(1024, vmdk.capacity_in_bytes) self.assertEqual(devices[0], vmdk.device) def test_get_vmdk_path_and_adapter_type_with_nomatch(self): n_filename = '[test_datastore] diuu/diuu.vmdk' session = fake.FakeSession() devices = self._vmdk_path_and_adapter_type_devices(n_filename) with mock.patch.object(session, '_call_method', return_value=devices): vmdk = vm_util.get_vmdk_info(session, None, uuid='uuid') self.assertIsNone(vmdk.adapter_type) self.assertIsNone(vmdk.path) self.assertEqual(0, vmdk.capacity_in_bytes) self.assertIsNone(vmdk.device) def test_get_vmdk_adapter_type(self): # Test for the adapter_type to be used in vmdk descriptor # Adapter type in vmdk descriptor is same for LSI-SAS, LSILogic # and ParaVirtual vmdk_adapter_type = vm_util.get_vmdk_adapter_type( constants.DEFAULT_ADAPTER_TYPE) self.assertEqual(constants.DEFAULT_ADAPTER_TYPE, vmdk_adapter_type) vmdk_adapter_type = vm_util.get_vmdk_adapter_type( constants.ADAPTER_TYPE_LSILOGICSAS) self.assertEqual(constants.DEFAULT_ADAPTER_TYPE, vmdk_adapter_type) vmdk_adapter_type = vm_util.get_vmdk_adapter_type( constants.ADAPTER_TYPE_PARAVIRTUAL) self.assertEqual(constants.DEFAULT_ADAPTER_TYPE, vmdk_adapter_type) vmdk_adapter_type = vm_util.get_vmdk_adapter_type("dummyAdapter") self.assertEqual("dummyAdapter", vmdk_adapter_type) def test_get_scsi_adapter_type(self): vm = fake.VirtualMachine() devices = vm.get("config.hardware.device").VirtualDevice scsi_controller = fake.VirtualLsiLogicController() ide_controller = fake.VirtualIDEController() devices.append(scsi_controller) devices.append(ide_controller) 
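        # Presumably re-registers the mutated VM in the fake inventory so the
        # device lookup below sees the appended controllers.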
fake._update_object("VirtualMachine", vm) # return the scsi type, not ide hardware_device = vm.get("config.hardware.device") self.assertEqual(constants.DEFAULT_ADAPTER_TYPE, vm_util.get_scsi_adapter_type(hardware_device)) def test_get_scsi_adapter_type_with_error(self): vm = fake.VirtualMachine() devices = vm.get("config.hardware.device").VirtualDevice scsi_controller = fake.VirtualLsiLogicController() ide_controller = fake.VirtualIDEController() devices.append(scsi_controller) devices.append(ide_controller) fake._update_object("VirtualMachine", vm) # the controller is not suitable since the device under this controller # has exceeded SCSI_MAX_CONNECT_NUMBER for i in range(0, constants.SCSI_MAX_CONNECT_NUMBER): scsi_controller.device.append('device' + str(i)) hardware_device = vm.get("config.hardware.device") self.assertRaises(exception.StorageError, vm_util.get_scsi_adapter_type, hardware_device) def test_find_allocated_slots(self): disk1 = fake.VirtualDisk(200, 0) disk2 = fake.VirtualDisk(200, 1) disk3 = fake.VirtualDisk(201, 1) ide0 = fake.VirtualIDEController(200) ide1 = fake.VirtualIDEController(201) scsi0 = fake.VirtualLsiLogicController(key=1000, scsiCtlrUnitNumber=7) devices = [disk1, disk2, disk3, ide0, ide1, scsi0] taken = vm_util._find_allocated_slots(devices) self.assertEqual([0, 1], sorted(taken[200])) self.assertEqual([1], taken[201]) self.assertEqual([7], taken[1000]) def test_get_bus_number_for_scsi_controller(self): devices = [fake.VirtualLsiLogicController(1000, scsiCtlrUnitNumber=7, busNumber=0), fake.VirtualLsiLogicController(1002, scsiCtlrUnitNumber=7, busNumber=2)] bus_number = vm_util._get_bus_number_for_scsi_controller(devices) self.assertEqual(1, bus_number) def test_get_bus_number_for_scsi_controller_buses_used_up(self): devices = [fake.VirtualLsiLogicController(1000, scsiCtlrUnitNumber=7, busNumber=0), fake.VirtualLsiLogicController(1001, scsiCtlrUnitNumber=7, busNumber=1), fake.VirtualLsiLogicController(1002, scsiCtlrUnitNumber=7, busNumber=2), fake.VirtualLsiLogicController(1003, scsiCtlrUnitNumber=7, busNumber=3)] self.assertRaises(vexc.VMwareDriverException, vm_util._get_bus_number_for_scsi_controller, devices) def test_allocate_controller_key_and_unit_number_ide_default(self): # Test that default IDE controllers are used when there is a free slot # on them disk1 = fake.VirtualDisk(200, 0) disk2 = fake.VirtualDisk(200, 1) ide0 = fake.VirtualIDEController(200) ide1 = fake.VirtualIDEController(201) devices = [disk1, disk2, ide0, ide1] (controller_key, unit_number, controller_spec) = vm_util.allocate_controller_key_and_unit_number( None, devices, 'ide') self.assertEqual(201, controller_key) self.assertEqual(0, unit_number) self.assertIsNone(controller_spec) def test_allocate_controller_key_and_unit_number_ide(self): # Test that a new controller is created when there is no free slot on # the default IDE controllers ide0 = fake.VirtualIDEController(200) ide1 = fake.VirtualIDEController(201) devices = [ide0, ide1] for controller_key in [200, 201]: for unit_number in [0, 1]: disk = fake.VirtualDisk(controller_key, unit_number) devices.append(disk) factory = fake.FakeFactory() (controller_key, unit_number, controller_spec) = vm_util.allocate_controller_key_and_unit_number( factory, devices, 'ide') self.assertEqual(-101, controller_key) self.assertEqual(0, unit_number) self.assertIsNotNone(controller_spec) def test_allocate_controller_key_and_unit_number_scsi(self): # Test that we allocate on existing SCSI controller if there is a free # slot on it devices = 
[fake.VirtualLsiLogicController(1000, scsiCtlrUnitNumber=7)] for unit_number in range(7): disk = fake.VirtualDisk(1000, unit_number) devices.append(disk) factory = fake.FakeFactory() (controller_key, unit_number, controller_spec) = vm_util.allocate_controller_key_and_unit_number( factory, devices, constants.DEFAULT_ADAPTER_TYPE) self.assertEqual(1000, controller_key) self.assertEqual(8, unit_number) self.assertIsNone(controller_spec) def test_allocate_controller_key_and_unit_number_scsi_new_controller(self): # Test that a new SCSI controller is created when the existing one has # no free slot left devices = [fake.VirtualLsiLogicController(1000, scsiCtlrUnitNumber=15)] for unit_number in range(15): disk = fake.VirtualDisk(1000, unit_number) devices.append(disk) factory = fake.FakeFactory() (controller_key, unit_number, controller_spec) = vm_util.allocate_controller_key_and_unit_number( factory, devices, constants.DEFAULT_ADAPTER_TYPE) self.assertEqual(-101, controller_key) self.assertEqual(0, unit_number) self.assertEqual(1, controller_spec.device.busNumber) def test_get_vnc_config_spec(self): fake_factory = fake.FakeFactory() result = vm_util.get_vnc_config_spec(fake_factory, 7) expected = fake_factory.create('ns0:VirtualMachineConfigSpec') expected.extraConfig = [] remote_display_vnc_enabled = fake_factory.create('ns0:OptionValue') remote_display_vnc_enabled.value = 'true' remote_display_vnc_enabled.key = 'RemoteDisplay.vnc.enabled' expected.extraConfig.append(remote_display_vnc_enabled) remote_display_vnc_port = fake_factory.create('ns0:OptionValue') remote_display_vnc_port.value = 7 remote_display_vnc_port.key = 'RemoteDisplay.vnc.port' expected.extraConfig.append(remote_display_vnc_port) remote_display_vnc_keymap = fake_factory.create('ns0:OptionValue') remote_display_vnc_keymap.value = 'en-us' remote_display_vnc_keymap.key = 'RemoteDisplay.vnc.keyMap' expected.extraConfig.append(remote_display_vnc_keymap) self.assertEqual(expected, result) def _create_fake_vms(self): fake_vms = fake.FakeRetrieveResult() OptionValue = collections.namedtuple('OptionValue', ['key', 'value']) for i in range(10): vm = fake.ManagedObject() opt_val = OptionValue(key='', value=5900 + i) vm.set(vm_util.VNC_CONFIG_KEY, opt_val) fake_vms.add_object(vm) return fake_vms def test_get_vnc_port(self): fake_vms = self._create_fake_vms() self.flags(vnc_port=5900, group='vmware') self.flags(vnc_port_total=10000, group='vmware') actual = vm_util.get_vnc_port( fake.FakeObjectRetrievalSession(fake_vms)) self.assertEqual(5910, actual) def test_get_vnc_port_exhausted(self): fake_vms = self._create_fake_vms() self.flags(vnc_port=5900, group='vmware') self.flags(vnc_port_total=10, group='vmware') self.assertRaises(exception.ConsolePortRangeExhausted, vm_util.get_vnc_port, fake.FakeObjectRetrievalSession(fake_vms)) def test_get_cluster_ref_by_name_none(self): fake_objects = fake.FakeRetrieveResult() ref = vm_util.get_cluster_ref_by_name( fake.FakeObjectRetrievalSession(fake_objects), 'fake_cluster') self.assertIsNone(ref) def test_get_cluster_ref_by_name_exists(self): fake_objects = fake.FakeRetrieveResult() cluster = fake.ClusterComputeResource(name='cluster') fake_objects.add_object(cluster) ref = vm_util.get_cluster_ref_by_name( fake.FakeObjectRetrievalSession(fake_objects), 'cluster') self.assertIs(cluster.obj, ref) def test_get_cluster_ref_by_name_missing(self): fake_objects = fake.FakeRetrieveResult() fake_objects.add_object(partialObject(path='cluster')) ref = vm_util.get_cluster_ref_by_name( 
fake.FakeObjectRetrievalSession(fake_objects), 'cluster') self.assertIsNone(ref) def test_propset_dict_simple(self): ObjectContent = collections.namedtuple('ObjectContent', ['propSet']) DynamicProperty = collections.namedtuple('Property', ['name', 'val']) object = ObjectContent(propSet=[ DynamicProperty(name='foo', val="bar")]) propdict = vm_util.propset_dict(object.propSet) self.assertEqual("bar", propdict['foo']) def test_propset_dict_complex(self): ObjectContent = collections.namedtuple('ObjectContent', ['propSet']) DynamicProperty = collections.namedtuple('Property', ['name', 'val']) MoRef = collections.namedtuple('Val', ['value']) object = ObjectContent(propSet=[ DynamicProperty(name='foo', val="bar"), DynamicProperty(name='some.thing', val=MoRef(value='else')), DynamicProperty(name='another.thing', val='value')]) propdict = vm_util.propset_dict(object.propSet) self.assertEqual("bar", propdict['foo']) self.assertTrue(hasattr(propdict['some.thing'], 'value')) self.assertEqual("else", propdict['some.thing'].value) self.assertEqual("value", propdict['another.thing']) def _test_detach_virtual_disk_spec(self, destroy_disk=False): virtual_device_config = vm_util.detach_virtual_disk_spec( fake.FakeFactory(), 'fake_device', destroy_disk) self.assertEqual('remove', virtual_device_config.operation) self.assertEqual('fake_device', virtual_device_config.device) self.assertEqual('ns0:VirtualDeviceConfigSpec', virtual_device_config.obj_name) if destroy_disk: self.assertEqual('destroy', virtual_device_config.fileOperation) else: self.assertFalse(hasattr(virtual_device_config, 'fileOperation')) def test_detach_virtual_disk_spec(self): self._test_detach_virtual_disk_spec(destroy_disk=False) def test_detach_virtual_disk_destroy_spec(self): self._test_detach_virtual_disk_spec(destroy_disk=True) def _create_vm_config_spec(self): fake_factory = fake.FakeFactory() spec = fake_factory.create('ns0:VirtualMachineConfigSpec') spec.name = self._instance.uuid spec.instanceUuid = self._instance.uuid spec.deviceChange = [] spec.numCPUs = 2 spec.version = None spec.memoryMB = 2048 spec.guestId = 'otherGuest' spec.extraConfig = [] extra_config = fake_factory.create("ns0:OptionValue") extra_config.value = self._instance.uuid extra_config.key = 'nvp.vm-uuid' spec.extraConfig.append(extra_config) extra_config = fake_factory.create("ns0:OptionValue") extra_config.value = True extra_config.key = 'disk.EnableUUID' spec.extraConfig.append(extra_config) spec.files = fake_factory.create('ns0:VirtualMachineFileInfo') spec.files.vmPathName = '[fake-datastore]' spec.managedBy = fake_factory.create('ns0:ManagedByInfo') spec.managedBy.extensionKey = 'org.openstack.compute' spec.managedBy.type = 'instance' spec.tools = fake_factory.create('ns0:ToolsConfigInfo') spec.tools.afterPowerOn = True spec.tools.afterResume = True spec.tools.beforeGuestReboot = True spec.tools.beforeGuestShutdown = True spec.tools.beforeGuestStandby = True return spec def test_get_vm_extra_config_spec(self): fake_factory = fake.FakeFactory() extra_opts = {mock.sentinel.key: mock.sentinel.value} res = vm_util.get_vm_extra_config_spec(fake_factory, extra_opts) self.assertEqual(1, len(res.extraConfig)) self.assertEqual(mock.sentinel.key, res.extraConfig[0].key) self.assertEqual(mock.sentinel.value, res.extraConfig[0].value) def test_get_vm_create_spec(self): extra_specs = vm_util.ExtraSpecs() fake_factory = fake.FakeFactory() result = vm_util.get_vm_create_spec(fake_factory, self._instance, 'fake-datastore', [], extra_specs) expected = 
self._create_vm_config_spec() self.assertEqual(expected, result) def test_get_vm_create_spec_with_serial_port(self): extra_specs = vm_util.ExtraSpecs() fake_factory = fake.FakeFactory() self.flags(serial_port_service_uri='foobar', group='vmware') self.flags(serial_port_proxy_uri='telnet://example.com:31337', group='vmware') result = vm_util.get_vm_create_spec(fake_factory, self._instance, 'fake-datastore', [], extra_specs) serial_port_spec = vm_util.create_serial_port_spec(fake_factory) expected = self._create_vm_config_spec() expected.deviceChange = [serial_port_spec] self.assertEqual(expected, result) def test_get_vm_create_spec_with_allocations(self): cpu_limits = vm_util.Limits(limit=7, reservation=6) extra_specs = vm_util.ExtraSpecs(cpu_limits=cpu_limits) fake_factory = fake.FakeFactory() result = vm_util.get_vm_create_spec(fake_factory, self._instance, 'fake-datastore', [], extra_specs) expected = fake_factory.create('ns0:VirtualMachineConfigSpec') expected.deviceChange = [] expected.guestId = constants.DEFAULT_OS_TYPE expected.instanceUuid = self._instance.uuid expected.memoryMB = self._instance.memory_mb expected.name = self._instance.uuid expected.numCPUs = self._instance.vcpus expected.version = None expected.files = fake_factory.create('ns0:VirtualMachineFileInfo') expected.files.vmPathName = '[fake-datastore]' expected.tools = fake_factory.create('ns0:ToolsConfigInfo') expected.tools.afterPowerOn = True expected.tools.afterResume = True expected.tools.beforeGuestReboot = True expected.tools.beforeGuestShutdown = True expected.tools.beforeGuestStandby = True expected.managedBy = fake_factory.create('ns0:ManagedByInfo') expected.managedBy.extensionKey = 'org.openstack.compute' expected.managedBy.type = 'instance' cpu_allocation = fake_factory.create('ns0:ResourceAllocationInfo') cpu_allocation.limit = 7 cpu_allocation.reservation = 6 cpu_allocation.shares = fake_factory.create('ns0:SharesInfo') cpu_allocation.shares.level = 'normal' cpu_allocation.shares.shares = 0 expected.cpuAllocation = cpu_allocation expected.extraConfig = [] extra_config = fake_factory.create('ns0:OptionValue') extra_config.key = 'nvp.vm-uuid' extra_config.value = self._instance.uuid expected.extraConfig.append(extra_config) extra_config = fake_factory.create("ns0:OptionValue") extra_config.value = True extra_config.key = 'disk.EnableUUID' expected.extraConfig.append(extra_config) self.assertEqual(expected, result) def test_get_vm_create_spec_with_limit(self): cpu_limits = vm_util.Limits(limit=7) extra_specs = vm_util.ExtraSpecs(cpu_limits=cpu_limits) fake_factory = fake.FakeFactory() result = vm_util.get_vm_create_spec(fake_factory, self._instance, 'fake-datastore', [], extra_specs) expected = fake_factory.create('ns0:VirtualMachineConfigSpec') expected.files = fake_factory.create('ns0:VirtualMachineFileInfo') expected.files.vmPathName = '[fake-datastore]' expected.instanceUuid = self._instance.uuid expected.name = self._instance.uuid expected.deviceChange = [] expected.extraConfig = [] extra_config = fake_factory.create("ns0:OptionValue") extra_config.value = self._instance.uuid extra_config.key = 'nvp.vm-uuid' expected.extraConfig.append(extra_config) extra_config = fake_factory.create("ns0:OptionValue") extra_config.value = True extra_config.key = 'disk.EnableUUID' expected.extraConfig.append(extra_config) expected.memoryMB = 2048 expected.managedBy = fake_factory.create('ns0:ManagedByInfo') expected.managedBy.extensionKey = 'org.openstack.compute' expected.managedBy.type = 'instance' expected.version = 
None expected.guestId = constants.DEFAULT_OS_TYPE expected.tools = fake_factory.create('ns0:ToolsConfigInfo') expected.tools.afterPowerOn = True expected.tools.afterResume = True expected.tools.beforeGuestReboot = True expected.tools.beforeGuestShutdown = True expected.tools.beforeGuestStandby = True cpu_allocation = fake_factory.create('ns0:ResourceAllocationInfo') cpu_allocation.limit = 7 cpu_allocation.reservation = 0 cpu_allocation.shares = fake_factory.create('ns0:SharesInfo') cpu_allocation.shares.level = 'normal' cpu_allocation.shares.shares = 0 expected.cpuAllocation = cpu_allocation expected.numCPUs = 2 self.assertEqual(expected, result) def test_get_vm_create_spec_with_share(self): cpu_limits = vm_util.Limits(shares_level='high') extra_specs = vm_util.ExtraSpecs(cpu_limits=cpu_limits) fake_factory = fake.FakeFactory() result = vm_util.get_vm_create_spec(fake_factory, self._instance, 'fake-datastore', [], extra_specs) expected = fake_factory.create('ns0:VirtualMachineConfigSpec') expected.files = fake_factory.create('ns0:VirtualMachineFileInfo') expected.files.vmPathName = '[fake-datastore]' expected.instanceUuid = self._instance.uuid expected.name = self._instance.uuid expected.deviceChange = [] expected.extraConfig = [] extra_config = fake_factory.create('ns0:OptionValue') extra_config.value = self._instance.uuid extra_config.key = 'nvp.vm-uuid' expected.extraConfig.append(extra_config) extra_config = fake_factory.create("ns0:OptionValue") extra_config.value = True extra_config.key = 'disk.EnableUUID' expected.extraConfig.append(extra_config) expected.memoryMB = 2048 expected.managedBy = fake_factory.create('ns0:ManagedByInfo') expected.managedBy.type = 'instance' expected.managedBy.extensionKey = 'org.openstack.compute' expected.version = None expected.guestId = constants.DEFAULT_OS_TYPE expected.tools = fake_factory.create('ns0:ToolsConfigInfo') expected.tools.beforeGuestStandby = True expected.tools.beforeGuestReboot = True expected.tools.beforeGuestShutdown = True expected.tools.afterResume = True expected.tools.afterPowerOn = True cpu_allocation = fake_factory.create('ns0:ResourceAllocationInfo') cpu_allocation.reservation = 0 cpu_allocation.limit = -1 cpu_allocation.shares = fake_factory.create('ns0:SharesInfo') cpu_allocation.shares.level = 'high' cpu_allocation.shares.shares = 0 expected.cpuAllocation = cpu_allocation expected.numCPUs = 2 self.assertEqual(expected, result) def test_get_vm_create_spec_with_share_custom(self): cpu_limits = vm_util.Limits(shares_level='custom', shares_share=1948) extra_specs = vm_util.ExtraSpecs(cpu_limits=cpu_limits) fake_factory = fake.FakeFactory() result = vm_util.get_vm_create_spec(fake_factory, self._instance, 'fake-datastore', [], extra_specs) expected = fake_factory.create('ns0:VirtualMachineConfigSpec') expected.files = fake_factory.create('ns0:VirtualMachineFileInfo') expected.files.vmPathName = '[fake-datastore]' expected.instanceUuid = self._instance.uuid expected.name = self._instance.uuid expected.deviceChange = [] expected.extraConfig = [] extra_config = fake_factory.create('ns0:OptionValue') extra_config.key = 'nvp.vm-uuid' extra_config.value = self._instance.uuid expected.extraConfig.append(extra_config) extra_config = fake_factory.create("ns0:OptionValue") extra_config.value = True extra_config.key = 'disk.EnableUUID' expected.extraConfig.append(extra_config) expected.memoryMB = 2048 expected.managedBy = fake_factory.create('ns0:ManagedByInfo') expected.managedBy.extensionKey = 'org.openstack.compute' 
expected.managedBy.type = 'instance' expected.version = None expected.guestId = constants.DEFAULT_OS_TYPE expected.tools = fake_factory.create('ns0:ToolsConfigInfo') expected.tools.beforeGuestStandby = True expected.tools.beforeGuestReboot = True expected.tools.beforeGuestShutdown = True expected.tools.afterResume = True expected.tools.afterPowerOn = True cpu_allocation = fake_factory.create('ns0:ResourceAllocationInfo') cpu_allocation.reservation = 0 cpu_allocation.limit = -1 cpu_allocation.shares = fake_factory.create('ns0:SharesInfo') cpu_allocation.shares.level = 'custom' cpu_allocation.shares.shares = 1948 expected.cpuAllocation = cpu_allocation expected.numCPUs = 2 self.assertEqual(expected, result) def test_get_vm_create_spec_with_metadata(self): extra_specs = vm_util.ExtraSpecs() fake_factory = fake.FakeFactory() result = vm_util.get_vm_create_spec(fake_factory, self._instance, 'fake-datastore', [], extra_specs, metadata='fake-metadata') expected = fake_factory.create('ns0:VirtualMachineConfigSpec') expected.name = self._instance.uuid expected.instanceUuid = self._instance.uuid expected.deviceChange = [] expected.numCPUs = 2 expected.version = None expected.memoryMB = 2048 expected.guestId = 'otherGuest' expected.annotation = 'fake-metadata' expected.extraConfig = [] extra_config = fake_factory.create("ns0:OptionValue") extra_config.value = self._instance.uuid extra_config.key = 'nvp.vm-uuid' expected.extraConfig.append(extra_config) extra_config = fake_factory.create("ns0:OptionValue") extra_config.value = True extra_config.key = 'disk.EnableUUID' expected.extraConfig.append(extra_config) expected.files = fake_factory.create('ns0:VirtualMachineFileInfo') expected.files.vmPathName = '[fake-datastore]' expected.managedBy = fake_factory.create('ns0:ManagedByInfo') expected.managedBy.extensionKey = 'org.openstack.compute' expected.managedBy.type = 'instance' expected.tools = fake_factory.create('ns0:ToolsConfigInfo') expected.tools.afterPowerOn = True expected.tools.afterResume = True expected.tools.beforeGuestReboot = True expected.tools.beforeGuestShutdown = True expected.tools.beforeGuestStandby = True self.assertEqual(expected, result) def test_get_vm_create_spec_with_firmware(self): extra_specs = vm_util.ExtraSpecs(firmware='efi') fake_factory = fake.FakeFactory() result = vm_util.get_vm_create_spec(fake_factory, self._instance, 'fake-datastore', [], extra_specs) expected = fake_factory.create('ns0:VirtualMachineConfigSpec') expected.name = self._instance.uuid expected.instanceUuid = self._instance.uuid expected.deviceChange = [] expected.numCPUs = 2 expected.version = None expected.memoryMB = 2048 expected.guestId = 'otherGuest' expected.firmware = 'efi' expected.extraConfig = [] extra_config = fake_factory.create("ns0:OptionValue") extra_config.value = self._instance.uuid extra_config.key = 'nvp.vm-uuid' expected.extraConfig.append(extra_config) extra_config = fake_factory.create("ns0:OptionValue") extra_config.value = True extra_config.key = 'disk.EnableUUID' expected.extraConfig.append(extra_config) expected.files = fake_factory.create('ns0:VirtualMachineFileInfo') expected.files.vmPathName = '[fake-datastore]' expected.managedBy = fake_factory.create('ns0:ManagedByInfo') expected.managedBy.extensionKey = 'org.openstack.compute' expected.managedBy.type = 'instance' expected.tools = fake_factory.create('ns0:ToolsConfigInfo') expected.tools.afterPowerOn = True expected.tools.afterResume = True expected.tools.beforeGuestReboot = True expected.tools.beforeGuestShutdown = True 
expected.tools.beforeGuestStandby = True self.assertEqual(expected, result) def test_create_vm(self): def fake_call_method(module, method, *args, **kwargs): if (method == 'CreateVM_Task'): return 'fake_create_vm_task' else: self.fail('Should not get here....') def fake_wait_for_task(self, *args): task_info = mock.Mock(state="success", result="fake_vm_ref") return task_info session = fake.FakeSession() fake_call_mock = mock.Mock(side_effect=fake_call_method) fake_wait_mock = mock.Mock(side_effect=fake_wait_for_task) with test.nested( mock.patch.object(session, '_wait_for_task', fake_wait_mock), mock.patch.object(session, '_call_method', fake_call_mock) ) as (wait_for_task, call_method): vm_ref = vm_util.create_vm( session, self._instance, 'fake_vm_folder', 'fake_config_spec', 'fake_res_pool_ref') self.assertEqual('fake_vm_ref', vm_ref) call_method.assert_called_once_with(mock.ANY, 'CreateVM_Task', 'fake_vm_folder', config='fake_config_spec', pool='fake_res_pool_ref') wait_for_task.assert_called_once_with('fake_create_vm_task') @mock.patch.object(vm_util.LOG, 'warning') def test_create_vm_invalid_guestid(self, mock_log_warn): """Ensure we warn when create_vm() fails after we passed an unrecognised guestId """ found = [False] def fake_log_warn(msg, values): if not isinstance(values, dict): return if values.get('ostype') == 'invalid_os_type': found[0] = True mock_log_warn.side_effect = fake_log_warn session = driver.VMwareAPISession() config_spec = vm_util.get_vm_create_spec( session.vim.client.factory, self._instance, 'fake-datastore', [], vm_util.ExtraSpecs(), os_type='invalid_os_type') self.assertRaises(vexc.VMwareDriverException, vm_util.create_vm, session, self._instance, 'folder', config_spec, 'res-pool') self.assertTrue(found[0]) def test_convert_vif_model(self): expected = "VirtualE1000" result = vm_util.convert_vif_model(network_model.VIF_MODEL_E1000) self.assertEqual(expected, result) expected = "VirtualE1000e" result = vm_util.convert_vif_model(network_model.VIF_MODEL_E1000E) self.assertEqual(expected, result) types = ["VirtualE1000", "VirtualE1000e", "VirtualPCNet32", "VirtualVmxnet", "VirtualVmxnet3"] for type in types: self.assertEqual(type, vm_util.convert_vif_model(type)) self.assertRaises(exception.Invalid, vm_util.convert_vif_model, "InvalidVifModel") def test_power_on_instance_with_vm_ref(self): session = fake.FakeSession() with test.nested( mock.patch.object(session, "_call_method", return_value='fake-task'), mock.patch.object(session, "_wait_for_task"), ) as (fake_call_method, fake_wait_for_task): vm_util.power_on_instance(session, self._instance, vm_ref='fake-vm-ref') fake_call_method.assert_called_once_with(session.vim, "PowerOnVM_Task", 'fake-vm-ref') fake_wait_for_task.assert_called_once_with('fake-task') def test_power_on_instance_without_vm_ref(self): session = fake.FakeSession() with test.nested( mock.patch.object(vm_util, "get_vm_ref", return_value='fake-vm-ref'), mock.patch.object(session, "_call_method", return_value='fake-task'), mock.patch.object(session, "_wait_for_task"), ) as (fake_get_vm_ref, fake_call_method, fake_wait_for_task): vm_util.power_on_instance(session, self._instance) fake_get_vm_ref.assert_called_once_with(session, self._instance) fake_call_method.assert_called_once_with(session.vim, "PowerOnVM_Task", 'fake-vm-ref') fake_wait_for_task.assert_called_once_with('fake-task') def test_power_on_instance_with_exception(self): session = fake.FakeSession() with test.nested( mock.patch.object(session, "_call_method", return_value='fake-task'), 
mock.patch.object(session, "_wait_for_task", side_effect=exception.NovaException('fake')), ) as (fake_call_method, fake_wait_for_task): self.assertRaises(exception.NovaException, vm_util.power_on_instance, session, self._instance, vm_ref='fake-vm-ref') fake_call_method.assert_called_once_with(session.vim, "PowerOnVM_Task", 'fake-vm-ref') fake_wait_for_task.assert_called_once_with('fake-task') def test_power_on_instance_with_power_state_exception(self): session = fake.FakeSession() with test.nested( mock.patch.object(session, "_call_method", return_value='fake-task'), mock.patch.object( session, "_wait_for_task", side_effect=vexc.InvalidPowerStateException), ) as (fake_call_method, fake_wait_for_task): vm_util.power_on_instance(session, self._instance, vm_ref='fake-vm-ref') fake_call_method.assert_called_once_with(session.vim, "PowerOnVM_Task", 'fake-vm-ref') fake_wait_for_task.assert_called_once_with('fake-task') def test_create_virtual_disk(self): session = fake.FakeSession() dm = session.vim.service_content.virtualDiskManager with test.nested( mock.patch.object(vm_util, "get_vmdk_create_spec", return_value='fake-spec'), mock.patch.object(session, "_call_method", return_value='fake-task'), mock.patch.object(session, "_wait_for_task"), ) as (fake_get_spec, fake_call_method, fake_wait_for_task): vm_util.create_virtual_disk(session, 'fake-dc-ref', 'fake-adapter-type', 'fake-disk-type', 'fake-path', 7) fake_get_spec.assert_called_once_with( session.vim.client.factory, 7, 'fake-adapter-type', 'fake-disk-type') fake_call_method.assert_called_once_with( session.vim, "CreateVirtualDisk_Task", dm, name='fake-path', datacenter='fake-dc-ref', spec='fake-spec') fake_wait_for_task.assert_called_once_with('fake-task') def test_copy_virtual_disk(self): session = fake.FakeSession() dm = session.vim.service_content.virtualDiskManager with test.nested( mock.patch.object(session, "_call_method", return_value='fake-task'), mock.patch.object(session, "_wait_for_task"), ) as (fake_call_method, fake_wait_for_task): vm_util.copy_virtual_disk(session, 'fake-dc-ref', 'fake-source', 'fake-dest') fake_call_method.assert_called_once_with( session.vim, "CopyVirtualDisk_Task", dm, sourceName='fake-source', sourceDatacenter='fake-dc-ref', destName='fake-dest') fake_wait_for_task.assert_called_once_with('fake-task') def _create_fake_vm_objects(self): fake_objects = fake.FakeRetrieveResult() fake_objects.add_object(fake.VirtualMachine()) return fake_objects def test_reconfigure_vm(self): session = fake.FakeSession() with test.nested( mock.patch.object(session, '_call_method', return_value='fake_reconfigure_task'), mock.patch.object(session, '_wait_for_task') ) as (_call_method, _wait_for_task): vm_util.reconfigure_vm(session, 'fake-ref', 'fake-spec') _call_method.assert_called_once_with(mock.ANY, 'ReconfigVM_Task', 'fake-ref', spec='fake-spec') _wait_for_task.assert_called_once_with( 'fake_reconfigure_task') def _get_network_attach_config_spec_opaque(self, network_ref, vc6_onwards=False): vif_info = {'network_name': 'fake-name', 'mac_address': '00:00:00:ca:fe:01', 'network_ref': network_ref, 'iface_id': 7, 'vif_model': 'VirtualE1000'} fake_factory = fake.FakeFactory() result = vm_util.get_network_attach_config_spec( fake_factory, vif_info, 1) card = 'ns0:VirtualEthernetCardOpaqueNetworkBackingInfo' expected = fake_factory.create('ns0:VirtualMachineConfigSpec') expected.extraConfig = [] extra_config = fake_factory.create('ns0:OptionValue') extra_config.value = vif_info['iface_id'] extra_config.key = 'nvp.iface-id.1' 
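# Illustrative sketch of the error-handling contract pinned down by the
# power-state tests above: a generic task failure propagates (the tests
# expect NovaException to be raised), while InvalidPowerStateException --
# raised when the VM is already in the requested state -- is swallowed
# and treated as success. Method names mirror the mocked session API;
# this is not the production implementation.
def _power_on(session, vm_ref):
    task = session._call_method(session.vim, "PowerOnVM_Task", vm_ref)
    try:
        session._wait_for_task(task)
    except vexc.InvalidPowerStateException:
        pass  # already powered on: the tests expect no exception here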
expected.extraConfig.append(extra_config) expected.deviceChange = [] device_change = fake_factory.create('ns0:VirtualDeviceConfigSpec') device_change.operation = 'add' device = fake_factory.create('ns0:VirtualE1000') device.macAddress = vif_info['mac_address'] if network_ref['use-external-id']: if vc6_onwards: device.externalId = vif_info['iface_id'] else: dp = fake_factory.create('ns0:DynamicProperty') dp.name = '__externalId__' dp.val = vif_info['iface_id'] device.dynamicProperty = [dp] device.addressType = 'manual' connectable = fake_factory.create('ns0:VirtualDeviceConnectInfo') connectable.allowGuestControl = True connectable.startConnected = True connectable.connected = True device.connectable = connectable backing = fake_factory.create(card) backing.opaqueNetworkType = vif_info['network_ref']['network-type'] backing.opaqueNetworkId = vif_info['network_ref']['network-id'] device.backing = backing device.key = -47 device.wakeOnLanEnabled = True device_change.device = device expected.deviceChange.append(device_change) self.assertEqual(expected, result) def test_get_network_attach_config_spec_opaque_integration_bridge(self): network_ref = {'type': 'OpaqueNetwork', 'network-id': 'fake-network-id', 'network-type': 'opaque', 'use-external-id': False} self._get_network_attach_config_spec_opaque(network_ref) def test_get_network_attach_config_spec_opaque(self): network_ref = {'type': 'OpaqueNetwork', 'network-id': 'fake-network-id', 'network-type': 'nsx.LogicalSwitch', 'use-external-id': True} self._get_network_attach_config_spec_opaque(network_ref) @mock.patch.object(fake, 'DataObject') def test_get_network_attach_config_spec_opaque_vc6_onwards(self, mock_object): # Add new attribute externalId supported from VC6 class FakeVirtualE1000(fake.DataObject): def __init__(self): super(FakeVirtualE1000, self).__init__() self.externalId = None mock_object.return_value = FakeVirtualE1000 network_ref = {'type': 'OpaqueNetwork', 'network-id': 'fake-network-id', 'network-type': 'nsx.LogicalSwitch', 'use-external-id': True} self._get_network_attach_config_spec_opaque(network_ref, vc6_onwards=True) def test_get_network_attach_config_spec_dvs(self): vif_info = {'network_name': 'br100', 'mac_address': '00:00:00:ca:fe:01', 'network_ref': {'type': 'DistributedVirtualPortgroup', 'dvsw': 'fake-network-id', 'dvpg': 'fake-group'}, 'iface_id': 7, 'vif_model': 'VirtualE1000'} fake_factory = fake.FakeFactory() result = vm_util.get_network_attach_config_spec( fake_factory, vif_info, 1) port = 'ns0:DistributedVirtualSwitchPortConnection' backing = 'ns0:VirtualEthernetCardDistributedVirtualPortBackingInfo' expected = fake_factory.create('ns0:VirtualMachineConfigSpec') expected.extraConfig = [] extra_config = fake_factory.create('ns0:OptionValue') extra_config.value = vif_info['iface_id'] extra_config.key = 'nvp.iface-id.1' expected.extraConfig.append(extra_config) expected.deviceChange = [] device_change = fake_factory.create('ns0:VirtualDeviceConfigSpec') device_change.operation = 'add' device = fake_factory.create('ns0:VirtualE1000') device.macAddress = vif_info['mac_address'] device.key = -47 device.addressType = 'manual' device.wakeOnLanEnabled = True device.backing = fake_factory.create(backing) device.backing.port = fake_factory.create(port) device.backing.port.portgroupKey = vif_info['network_ref']['dvpg'] device.backing.port.switchUuid = vif_info['network_ref']['dvsw'] connectable = fake_factory.create('ns0:VirtualDeviceConnectInfo') connectable.allowGuestControl = True connectable.connected = True 
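# Sketch of the version split the two opaque-network tests above encode:
# from vSphere 6.0 the ethernet card object exposes an externalId
# attribute, while older versions carry the same interface id as a
# DynamicProperty named '__externalId__'. The hasattr-based detection
# below is an assumption, matching how the vc6 test fakes the device
# class to grow the new attribute.
def _set_external_id(factory, device, iface_id):
    if hasattr(device, 'externalId'):  # VC 6.0 onwards
        device.externalId = iface_id
    else:  # pre-6.0 fallback
        dp = factory.create('ns0:DynamicProperty')
        dp.name = '__externalId__'
        dp.val = iface_id
        device.dynamicProperty = [dp]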
connectable.startConnected = True device.connectable = connectable device_change.device = device expected.deviceChange.append(device_change) self.assertEqual(expected, result) def test_get_network_attach_config_spec_dvs_with_limits(self): vif_info = {'network_name': 'br100', 'mac_address': '00:00:00:ca:fe:01', 'network_ref': {'type': 'DistributedVirtualPortgroup', 'dvsw': 'fake-network-id', 'dvpg': 'fake-group'}, 'iface_id': 7, 'vif_model': 'VirtualE1000'} fake_factory = fake.FakeFactory() limits = vm_util.Limits() limits.limit = 10 limits.reservation = 20 limits.shares_level = 'custom' limits.shares_share = 40 result = vm_util.get_network_attach_config_spec( fake_factory, vif_info, 1, limits) port = 'ns0:DistributedVirtualSwitchPortConnection' backing = 'ns0:VirtualEthernetCardDistributedVirtualPortBackingInfo' expected = fake_factory.create('ns0:VirtualMachineConfigSpec') expected.extraConfig = [] extra_config = fake_factory.create('ns0:OptionValue') extra_config.value = vif_info['iface_id'] extra_config.key = 'nvp.iface-id.1' expected.extraConfig.append(extra_config) expected.deviceChange = [] device_change = fake_factory.create('ns0:VirtualDeviceConfigSpec') device_change.operation = 'add' device = fake_factory.create('ns0:VirtualE1000') device.macAddress = vif_info['mac_address'] device.key = -47 device.addressType = 'manual' device.wakeOnLanEnabled = True device.backing = fake_factory.create(backing) device.backing.port = fake_factory.create(port) device.backing.port.portgroupKey = vif_info['network_ref']['dvpg'] device.backing.port.switchUuid = vif_info['network_ref']['dvsw'] device.resourceAllocation = fake_factory.create( 'ns0:VirtualEthernetCardResourceAllocation') device.resourceAllocation.limit = 10 device.resourceAllocation.reservation = 20 device.resourceAllocation.share = fake_factory.create( 'ns0:SharesInfo') device.resourceAllocation.share.level = 'custom' device.resourceAllocation.share.shares = 40 connectable = fake_factory.create('ns0:VirtualDeviceConnectInfo') connectable.allowGuestControl = True connectable.connected = True connectable.startConnected = True device.connectable = connectable device_change.device = device expected.deviceChange.append(device_change) self.assertEqual(expected, result) def _get_create_vif_spec(self, fake_factory, vif_info): limits = vm_util.Limits() limits.limit = 10 limits.reservation = 20 limits.shares_level = 'custom' limits.shares_share = 40 return vm_util._create_vif_spec(fake_factory, vif_info, limits) def _construct_vif_spec(self, fake_factory, vif_info): port = 'ns0:DistributedVirtualSwitchPortConnection' backing = 'ns0:VirtualEthernetCardDistributedVirtualPortBackingInfo' device_change = fake_factory.create('ns0:VirtualDeviceConfigSpec') device_change.operation = 'add' device = fake_factory.create('ns0:VirtualE1000') device.macAddress = vif_info['mac_address'] device.key = -47 device.addressType = 'manual' device.wakeOnLanEnabled = True device.backing = fake_factory.create(backing) device.backing.port = fake_factory.create(port) device.backing.port.portgroupKey = vif_info['network_ref']['dvpg'] device.backing.port.switchUuid = vif_info['network_ref']['dvsw'] if vif_info['network_ref'].get('dvs_port_key'): device.backing.port.portKey = ( vif_info['network_ref']['dvs_port_key']) device.resourceAllocation = fake_factory.create( 'ns0:VirtualEthernetCardResourceAllocation') device.resourceAllocation.limit = 10 device.resourceAllocation.reservation = 20 device.resourceAllocation.share = fake_factory.create( 'ns0:SharesInfo') 
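# Sketch of the Limits -> resourceAllocation mapping the DVS-with-limits
# test above asserts: limit and reservation copy across one-to-one, and
# the shares pair lands on a nested ns0:SharesInfo held in the singular
# 'share' attribute. Illustrative only; the factory is the fake used
# throughout these tests.
def _vif_limits_to_allocation(factory, limits):
    alloc = factory.create('ns0:VirtualEthernetCardResourceAllocation')
    alloc.limit = limits.limit              # 10 in the test
    alloc.reservation = limits.reservation  # 20 in the test
    alloc.share = factory.create('ns0:SharesInfo')
    alloc.share.level = limits.shares_level   # 'custom'
    alloc.share.shares = limits.shares_share  # 40
    return alloc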
device.resourceAllocation.share.level = 'custom' device.resourceAllocation.share.shares = 40 connectable = fake_factory.create('ns0:VirtualDeviceConnectInfo') connectable.allowGuestControl = True connectable.connected = True connectable.startConnected = True device.connectable = connectable device_change.device = device return device_change def test_get_create_vif_spec(self): vif_info = {'network_name': 'br100', 'mac_address': '00:00:00:ca:fe:01', 'network_ref': {'type': 'DistributedVirtualPortgroup', 'dvsw': 'fake-network-id', 'dvpg': 'fake-group'}, 'iface_id': 7, 'vif_model': 'VirtualE1000'} fake_factory = fake.FakeFactory() result = self._get_create_vif_spec(fake_factory, vif_info) device_change = self._construct_vif_spec(fake_factory, vif_info) self.assertEqual(device_change, result) def test_get_create_vif_spec_dvs_port_key(self): vif_info = {'network_name': 'br100', 'mac_address': '00:00:00:ca:fe:01', 'network_ref': {'type': 'DistributedVirtualPortgroup', 'dvsw': 'fake-network-id', 'dvpg': 'fake-group', 'dvs_port_key': 'fake-key'}, 'iface_id': 7, 'vif_model': 'VirtualE1000'} fake_factory = fake.FakeFactory() result = self._get_create_vif_spec(fake_factory, vif_info) device_change = self._construct_vif_spec(fake_factory, vif_info) self.assertEqual(device_change, result) def test_get_network_detach_config_spec(self): fake_factory = fake.FakeFactory() result = vm_util.get_network_detach_config_spec( fake_factory, 'fake-device', 2) expected = fake_factory.create('ns0:VirtualMachineConfigSpec') expected.extraConfig = [] extra_config = fake_factory.create('ns0:OptionValue') extra_config.value = 'free' extra_config.key = 'nvp.iface-id.2' expected.extraConfig.append(extra_config) expected.deviceChange = [] device_change = fake_factory.create('ns0:VirtualDeviceConfigSpec') device_change.device = 'fake-device' device_change.operation = 'remove' expected.deviceChange.append(device_change) self.assertEqual(expected, result) @mock.patch.object(vm_util, "get_vm_ref") def test_power_off_instance(self, fake_get_ref): session = fake.FakeSession() with test.nested( mock.patch.object(session, '_call_method', return_value='fake-task'), mock.patch.object(session, '_wait_for_task') ) as (fake_call_method, fake_wait_for_task): vm_util.power_off_instance(session, self._instance, 'fake-vm-ref') fake_call_method.assert_called_once_with(session.vim, "PowerOffVM_Task", 'fake-vm-ref') fake_wait_for_task.assert_called_once_with('fake-task') self.assertFalse(fake_get_ref.called) @mock.patch.object(vm_util, "get_vm_ref", return_value="fake-vm-ref") def test_power_off_instance_no_vm_ref(self, fake_get_ref): session = fake.FakeSession() with test.nested( mock.patch.object(session, '_call_method', return_value='fake-task'), mock.patch.object(session, '_wait_for_task') ) as (fake_call_method, fake_wait_for_task): vm_util.power_off_instance(session, self._instance) fake_get_ref.assert_called_once_with(session, self._instance) fake_call_method.assert_called_once_with(session.vim, "PowerOffVM_Task", 'fake-vm-ref') fake_wait_for_task.assert_called_once_with('fake-task') @mock.patch.object(vm_util, "get_vm_ref") def test_power_off_instance_with_exception(self, fake_get_ref): session = fake.FakeSession() with test.nested( mock.patch.object(session, '_call_method', return_value='fake-task'), mock.patch.object(session, '_wait_for_task', side_effect=exception.NovaException('fake')) ) as (fake_call_method, fake_wait_for_task): self.assertRaises(exception.NovaException, vm_util.power_off_instance, session, self._instance, 
'fake-vm-ref') fake_call_method.assert_called_once_with(session.vim, "PowerOffVM_Task", 'fake-vm-ref') fake_wait_for_task.assert_called_once_with('fake-task') self.assertFalse(fake_get_ref.called) @mock.patch.object(vm_util, "get_vm_ref") def test_power_off_instance_power_state_exception(self, fake_get_ref): session = fake.FakeSession() with test.nested( mock.patch.object(session, '_call_method', return_value='fake-task'), mock.patch.object( session, '_wait_for_task', side_effect=vexc.InvalidPowerStateException) ) as (fake_call_method, fake_wait_for_task): vm_util.power_off_instance(session, self._instance, 'fake-vm-ref') fake_call_method.assert_called_once_with(session.vim, "PowerOffVM_Task", 'fake-vm-ref') fake_wait_for_task.assert_called_once_with('fake-task') self.assertFalse(fake_get_ref.called) def test_get_vm_create_spec_updated_hw_version(self): extra_specs = vm_util.ExtraSpecs(hw_version='vmx-08') result = vm_util.get_vm_create_spec(fake.FakeFactory(), self._instance, 'fake-datastore', [], extra_specs=extra_specs) self.assertEqual('vmx-08', result.version) def test_vm_create_spec_with_profile_spec(self): datastore = ds_obj.Datastore('fake-ds-ref', 'fake-ds-name') extra_specs = vm_util.ExtraSpecs() create_spec = vm_util.get_vm_create_spec(fake.FakeFactory(), self._instance, datastore.name, [], extra_specs, profile_spec='fake_profile_spec') self.assertEqual(['fake_profile_spec'], create_spec.vmProfile) @mock.patch.object(pbm, 'get_profile_id_by_name') def test_get_storage_profile_spec(self, mock_retrieve_profile_id): fake_profile_id = fake.DataObject() fake_profile_id.uniqueId = 'fake_unique_id' mock_retrieve_profile_id.return_value = fake_profile_id profile_spec = vm_util.get_storage_profile_spec(fake.FakeSession(), 'fake_policy') self.assertEqual('ns0:VirtualMachineDefinedProfileSpec', profile_spec.obj_name) self.assertEqual(fake_profile_id.uniqueId, profile_spec.profileId) @mock.patch.object(pbm, 'get_profile_id_by_name') def test_storage_spec_empty_profile(self, mock_retrieve_profile_id): mock_retrieve_profile_id.return_value = None profile_spec = vm_util.get_storage_profile_spec(fake.FakeSession(), 'fake_policy') self.assertIsNone(profile_spec) def test_get_ephemeral_name(self): filename = vm_util.get_ephemeral_name(0) self.assertEqual('ephemeral_0.vmdk', filename) def test_detach_and_delete_devices_config_spec(self): fake_devices = ['device1', 'device2'] fake_factory = fake.FakeFactory() result = vm_util._detach_and_delete_devices_config_spec(fake_factory, fake_devices) expected = fake_factory.create('ns0:VirtualMachineConfigSpec') expected.deviceChange = [] device1 = fake_factory.create('ns0:VirtualDeviceConfigSpec') device1.device = 'device1' device1.operation = 'remove' device1.fileOperation = 'destroy' expected.deviceChange.append(device1) device2 = fake_factory.create('ns0:VirtualDeviceConfigSpec') device2.device = 'device2' device2.operation = 'remove' device2.fileOperation = 'destroy' expected.deviceChange.append(device2) self.assertEqual(expected, result) @mock.patch.object(vm_util, 'reconfigure_vm') def test_detach_devices_from_vm(self, mock_reconfigure): fake_devices = ['device1', 'device2'] session = fake.FakeSession() vm_util.detach_devices_from_vm(session, 'fake-ref', fake_devices) mock_reconfigure.assert_called_once_with(session, 'fake-ref', mock.ANY) def test_get_vm_boot_spec(self): disk = fake.VirtualDisk() disk.key = 7 fake_factory = fake.FakeFactory() result = vm_util.get_vm_boot_spec(fake_factory, disk) expected = 
fake_factory.create('ns0:VirtualMachineConfigSpec') boot_disk = fake_factory.create( 'ns0:VirtualMachineBootOptionsBootableDiskDevice') boot_disk.deviceKey = disk.key boot_options = fake_factory.create('ns0:VirtualMachineBootOptions') boot_options.bootOrder = [boot_disk] expected.bootOptions = boot_options self.assertEqual(expected, result) def _get_devices(self, filename): devices = fake._create_array_of_type('VirtualDevice') devices.VirtualDevice = self._vmdk_path_and_adapter_type_devices( filename) return devices def test_find_rescue_device(self): filename = '[test_datastore] uuid/uuid-rescue.vmdk' devices = self._get_devices(filename) device = vm_util.find_rescue_device(devices, self._instance) self.assertEqual(filename, device.backing.fileName) def test_find_rescue_device_not_found(self): filename = '[test_datastore] uuid/uuid.vmdk' devices = self._get_devices(filename) self.assertRaises(exception.NotFound, vm_util.find_rescue_device, devices, self._instance) def test_validate_limits(self): limits = vm_util.Limits(shares_level='high', shares_share=1948) self.assertRaises(exception.InvalidInput, limits.validate) limits = vm_util.Limits(shares_level='fira') self.assertRaises(exception.InvalidInput, limits.validate) def test_get_vm_create_spec_with_console_delay(self): extra_specs = vm_util.ExtraSpecs() self.flags(console_delay_seconds=2, group='vmware') fake_factory = fake.FakeFactory() result = vm_util.get_vm_create_spec(fake_factory, self._instance, 'fake-datastore', [], extra_specs) expected = fake_factory.create('ns0:VirtualMachineConfigSpec') expected.name = self._instance.uuid expected.instanceUuid = self._instance.uuid expected.deviceChange = [] expected.numCPUs = 2 expected.version = None expected.memoryMB = 2048 expected.guestId = constants.DEFAULT_OS_TYPE expected.extraConfig = [] extra_config = fake_factory.create("ns0:OptionValue") extra_config.value = self._instance.uuid extra_config.key = 'nvp.vm-uuid' expected.extraConfig.append(extra_config) extra_config = fake_factory.create("ns0:OptionValue") extra_config.value = True extra_config.key = 'disk.EnableUUID' expected.extraConfig.append(extra_config) extra_config = fake_factory.create("ns0:OptionValue") extra_config.value = 2000000 extra_config.key = 'keyboard.typematicMinDelay' expected.extraConfig.append(extra_config) expected.files = fake_factory.create('ns0:VirtualMachineFileInfo') expected.files.vmPathName = '[fake-datastore]' expected.managedBy = fake_factory.create('ns0:ManagedByInfo') expected.managedBy.extensionKey = 'org.openstack.compute' expected.managedBy.type = 'instance' expected.tools = fake_factory.create('ns0:ToolsConfigInfo') expected.tools.afterPowerOn = True expected.tools.afterResume = True expected.tools.beforeGuestReboot = True expected.tools.beforeGuestShutdown = True expected.tools.beforeGuestStandby = True self.assertEqual(expected, result) def test_get_vm_create_spec_with_cores_per_socket(self): extra_specs = vm_util.ExtraSpecs(cores_per_socket=4) fake_factory = fake.FakeFactory() result = vm_util.get_vm_create_spec(fake_factory, self._instance, 'fake-datastore', [], extra_specs) expected = fake_factory.create('ns0:VirtualMachineConfigSpec') expected.deviceChange = [] expected.guestId = 'otherGuest' expected.instanceUuid = self._instance.uuid expected.memoryMB = self._instance.memory_mb expected.name = self._instance.uuid expected.numCPUs = self._instance.vcpus expected.numCoresPerSocket = 4 expected.version = None expected.files = fake_factory.create('ns0:VirtualMachineFileInfo') 
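# The console-delay test above expects keyboard.typematicMinDelay to be
# 2000000 when console_delay_seconds=2, i.e. the configured delay
# converted from seconds to the microseconds the extraConfig option
# takes. A one-line sketch of that conversion:
def _typematic_min_delay(console_delay_seconds):
    return console_delay_seconds * 1000000  # seconds -> microseconds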
expected.files.vmPathName = '[fake-datastore]' expected.tools = fake_factory.create('ns0:ToolsConfigInfo') expected.tools.afterPowerOn = True expected.tools.afterResume = True expected.tools.beforeGuestReboot = True expected.tools.beforeGuestShutdown = True expected.tools.beforeGuestStandby = True expected.managedBy = fake_factory.create('ns0:ManagedByInfo') expected.managedBy.extensionKey = 'org.openstack.compute' expected.managedBy.type = 'instance' expected.extraConfig = [] extra_config = fake_factory.create('ns0:OptionValue') extra_config.key = 'nvp.vm-uuid' extra_config.value = self._instance.uuid expected.extraConfig.append(extra_config) extra_config = fake_factory.create("ns0:OptionValue") extra_config.value = True extra_config.key = 'disk.EnableUUID' expected.extraConfig.append(extra_config) self.assertEqual(expected, result) def test_get_vm_create_spec_with_memory_allocations(self): memory_limits = vm_util.Limits(limit=7, reservation=6) extra_specs = vm_util.ExtraSpecs(memory_limits=memory_limits) fake_factory = fake.FakeFactory() result = vm_util.get_vm_create_spec(fake_factory, self._instance, 'fake-datastore', [], extra_specs) expected = fake_factory.create('ns0:VirtualMachineConfigSpec') expected.deviceChange = [] expected.guestId = 'otherGuest' expected.instanceUuid = self._instance.uuid expected.memoryMB = self._instance.memory_mb expected.name = self._instance.uuid expected.numCPUs = self._instance.vcpus expected.version = None expected.files = fake_factory.create('ns0:VirtualMachineFileInfo') expected.files.vmPathName = '[fake-datastore]' expected.tools = fake_factory.create('ns0:ToolsConfigInfo') expected.tools.afterPowerOn = True expected.tools.afterResume = True expected.tools.beforeGuestReboot = True expected.tools.beforeGuestShutdown = True expected.tools.beforeGuestStandby = True expected.managedBy = fake_factory.create('ns0:ManagedByInfo') expected.managedBy.extensionKey = 'org.openstack.compute' expected.managedBy.type = 'instance' memory_allocation = fake_factory.create('ns0:ResourceAllocationInfo') memory_allocation.limit = 7 memory_allocation.reservation = 6 memory_allocation.shares = fake_factory.create('ns0:SharesInfo') memory_allocation.shares.level = 'normal' memory_allocation.shares.shares = 0 expected.memoryAllocation = memory_allocation expected.extraConfig = [] extra_config = fake_factory.create('ns0:OptionValue') extra_config.key = 'nvp.vm-uuid' extra_config.value = self._instance.uuid expected.extraConfig.append(extra_config) extra_config = fake_factory.create("ns0:OptionValue") extra_config.value = True extra_config.key = 'disk.EnableUUID' expected.extraConfig.append(extra_config) self.assertEqual(expected, result) def test_get_swap(self): vm_ref = 'fake-vm-ref' # Root disk controller_key = 1000 root_disk = fake.VirtualDisk() root_disk.controllerKey = controller_key disk_backing = fake.VirtualDiskFlatVer2BackingInfo() disk_backing.fileName = '[test_datastore] uuid/uuid.vmdk' root_disk.capacityInBytes = 1048576 root_disk.backing = disk_backing # Swap disk swap_disk = fake.VirtualDisk() swap_disk.controllerKey = controller_key disk_backing = fake.VirtualDiskFlatVer2BackingInfo() disk_backing.fileName = "swap" swap_disk.capacityInBytes = 1024 swap_disk.backing = disk_backing devices = [root_disk, swap_disk] session = fake.FakeSession() with mock.patch.object(session, '_call_method', return_value=devices) as mock_call: device = vm_util.get_swap(session, vm_ref) mock_call.assert_called_once_with(mock.ANY, "get_object_property", vm_ref, 
"config.hardware.device") self.assertEqual(swap_disk, device) def test_create_folder(self): """Test create_folder when the folder doesn't exist""" child_folder = mock.sentinel.child_folder session = fake.FakeSession() with mock.patch.object(session, '_call_method', side_effect=[child_folder]): parent_folder = mock.sentinel.parent_folder parent_folder.value = 'parent-ref' child_name = 'child_folder' ret = vm_util.create_folder(session, parent_folder, child_name) self.assertEqual(child_folder, ret) session._call_method.assert_called_once_with(session.vim, 'CreateFolder', parent_folder, name=child_name) def test_create_folder_duplicate_name(self): """Test create_folder when the folder already exists""" session = fake.FakeSession() details = {'object': 'folder-1'} duplicate_exception = vexc.DuplicateName(details=details) with mock.patch.object(session, '_call_method', side_effect=[duplicate_exception]): parent_folder = mock.sentinel.parent_folder parent_folder.value = 'parent-ref' child_name = 'child_folder' ret = vm_util.create_folder(session, parent_folder, child_name) self.assertEqual('Folder', ret._type) self.assertEqual('folder-1', ret.value) session._call_method.assert_called_once_with(session.vim, 'CreateFolder', parent_folder, name=child_name) def test_get_folder_does_not_exist(self): session = fake.FakeSession() with mock.patch.object(session, '_call_method', return_value=None): ret = vm_util._get_folder(session, 'fake-parent', 'fake-name') self.assertIsNone(ret) expected_invoke_api = [mock.call(vutil, 'get_object_property', 'fake-parent', 'childEntity')] self.assertEqual(expected_invoke_api, session._call_method.mock_calls) def test_get_folder_child_entry_not_folder(self): child_entity = mock.Mock() child_entity._type = 'NotFolder' prop_val = mock.Mock() prop_val.ManagedObjectReference = [child_entity] session = fake.FakeSession() with mock.patch.object(session, '_call_method', return_value=prop_val): ret = vm_util._get_folder(session, 'fake-parent', 'fake-name') self.assertIsNone(ret) expected_invoke_api = [mock.call(vutil, 'get_object_property', 'fake-parent', 'childEntity')] self.assertEqual(expected_invoke_api, session._call_method.mock_calls) def test_get_folder_child_entry_not_matched(self): child_entity = mock.Mock() child_entity._type = 'Folder' prop_val = mock.Mock() prop_val.ManagedObjectReference = [child_entity] session = fake.FakeSession() with mock.patch.object(session, '_call_method', side_effect=[prop_val, 'fake-1-name']): ret = vm_util._get_folder(session, 'fake-parent', 'fake-name') self.assertIsNone(ret) expected_invoke_api = [mock.call(vutil, 'get_object_property', 'fake-parent', 'childEntity'), mock.call(vutil, 'get_object_property', child_entity, 'name')] self.assertEqual(expected_invoke_api, session._call_method.mock_calls) def test_get_folder_child_entry_matched(self): child_entity = mock.Mock() child_entity._type = 'Folder' prop_val = mock.Mock() prop_val.ManagedObjectReference = [child_entity] session = fake.FakeSession() with mock.patch.object(session, '_call_method', side_effect=[prop_val, 'fake-name']): ret = vm_util._get_folder(session, 'fake-parent', 'fake-name') self.assertEqual(ret, child_entity) expected_invoke_api = [mock.call(vutil, 'get_object_property', 'fake-parent', 'childEntity'), mock.call(vutil, 'get_object_property', child_entity, 'name')] self.assertEqual(expected_invoke_api, session._call_method.mock_calls) def test_folder_path_ref_cache(self): path = 'OpenStack/Project (e2b86092bf064181ade43deb3188f8e4)' 
self.assertIsNone(vm_util.folder_ref_cache_get(path)) vm_util.folder_ref_cache_update(path, 'fake-ref') self.assertEqual('fake-ref', vm_util.folder_ref_cache_get(path)) def test_get_vm_name(self): uuid = uuidutils.generate_uuid() expected = uuid name = vm_util._get_vm_name(None, uuid) self.assertEqual(expected, name) display_name = 'fira' expected = 'fira (%s)' % uuid name = vm_util._get_vm_name(display_name, uuid) self.assertEqual(expected, name) display_name = 'X' * 255 expected = '%s (%s)' % ('X' * 41, uuid) name = vm_util._get_vm_name(display_name, uuid) self.assertEqual(expected, name) self.assertEqual(len(name), 80) @mock.patch.object(vm_util, '_get_vm_name', return_value='fake-name') def test_rename_vm(self, mock_get_name): session = fake.FakeSession() with test.nested( mock.patch.object(session, '_call_method', return_value='fake_rename_task'), mock.patch.object(session, '_wait_for_task') ) as (_call_method, _wait_for_task): vm_util.rename_vm(session, 'fake-ref', self._instance) _call_method.assert_called_once_with(mock.ANY, 'Rename_Task', 'fake-ref', newName='fake-name') _wait_for_task.assert_called_once_with( 'fake_rename_task') mock_get_name.assert_called_once_with(self._instance.display_name, self._instance.uuid) @mock.patch.object(driver.VMwareAPISession, 'vim', stubs.fake_vim_prop) class VMwareVMUtilGetHostRefTestCase(test.NoDBTestCase): # N.B. Mocking on the class only mocks test_*(), but we need # VMwareAPISession.vim to be mocked in both setUp and tests. Not mocking in # setUp causes object initialisation to fail. Not mocking in tests results # in vim calls not using FakeVim. @mock.patch.object(driver.VMwareAPISession, 'vim', stubs.fake_vim_prop) def setUp(self): super(VMwareVMUtilGetHostRefTestCase, self).setUp() fake.reset() vm_util.vm_refs_cache_reset() self.session = driver.VMwareAPISession() # Create a fake VirtualMachine running on a known host self.host_ref = list(fake._db_content['HostSystem'].keys())[0] self.vm_ref = fake.create_vm(host_ref=self.host_ref) @mock.patch.object(vm_util, 'get_vm_ref') def test_get_host_ref_for_vm(self, mock_get_vm_ref): mock_get_vm_ref.return_value = self.vm_ref ret = vm_util.get_host_ref_for_vm(self.session, 'fake-instance') mock_get_vm_ref.assert_called_once_with(self.session, 'fake-instance') self.assertEqual(self.host_ref, ret) @mock.patch.object(vm_util, 'get_vm_ref') def test_get_host_name_for_vm(self, mock_get_vm_ref): mock_get_vm_ref.return_value = self.vm_ref host = fake._get_object(self.host_ref) ret = vm_util.get_host_name_for_vm(self.session, 'fake-instance') mock_get_vm_ref.assert_called_once_with(self.session, 'fake-instance') self.assertEqual(host.name, ret) nova-17.0.1/nova/tests/unit/virt/vmwareapi/test_imagecache.py0000666000175000017500000002750113250073126024315 0ustar zuulzuul00000000000000# Copyright (c) 2014 VMware, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
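# The tests below pin down the image-cache ageing convention: each cached
# image directory gets a marker file named with a 'ts-' prefix plus the
# current time rendered as '%Y-%m-%d-%H-%M-%S'. A self-contained sketch,
# assuming those prefix/format values (they match the fixture
# 'ts-2012-11-22-12-00-00' used throughout this module):
def _timestamp_filename(now):
    # e.g. datetime(2012, 11, 22, 12, 0, 0) -> 'ts-2012-11-22-12-00-00'
    return 'ts-' + now.strftime('%Y-%m-%d-%H-%M-%S')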
import datetime import mock from oslo_config import cfg from oslo_utils import fixture as utils_fixture from oslo_vmware.objects import datastore as ds_obj from oslo_vmware import vim_util as vutil from nova import objects from nova import test from nova.tests.unit import fake_instance from nova.tests.unit.virt.vmwareapi import fake from nova.tests import uuidsentinel from nova.virt.vmwareapi import ds_util from nova.virt.vmwareapi import imagecache CONF = cfg.CONF class ImageCacheManagerTestCase(test.NoDBTestCase): REQUIRES_LOCKING = True def setUp(self): super(ImageCacheManagerTestCase, self).setUp() self._session = mock.Mock(name='session') self._imagecache = imagecache.ImageCacheManager(self._session, 'fake-base-folder') self._time = datetime.datetime(2012, 11, 22, 12, 00, 00) self._file_name = 'ts-2012-11-22-12-00-00' fake.reset() def tearDown(self): super(ImageCacheManagerTestCase, self).tearDown() fake.reset() def test_timestamp_cleanup(self): def fake_get_timestamp(ds_browser, ds_path): self.assertEqual('fake-ds-browser', ds_browser) self.assertEqual('[fake-ds] fake-path', str(ds_path)) if not self.exists: return ts = '%s%s' % (imagecache.TIMESTAMP_PREFIX, self._time.strftime(imagecache.TIMESTAMP_FORMAT)) return ts with test.nested( mock.patch.object(self._imagecache, '_get_timestamp', fake_get_timestamp), mock.patch.object(ds_util, 'file_delete') ) as (_get_timestamp, _file_delete): self.exists = False self._imagecache.timestamp_cleanup( 'fake-dc-ref', 'fake-ds-browser', ds_obj.DatastorePath('fake-ds', 'fake-path')) self.assertEqual(0, _file_delete.call_count) self.exists = True self._imagecache.timestamp_cleanup( 'fake-dc-ref', 'fake-ds-browser', ds_obj.DatastorePath('fake-ds', 'fake-path')) expected_ds_path = ds_obj.DatastorePath( 'fake-ds', 'fake-path', self._file_name) _file_delete.assert_called_once_with(self._session, expected_ds_path, 'fake-dc-ref') def test_get_timestamp(self): def fake_get_sub_folders(session, ds_browser, ds_path): self.assertEqual('fake-ds-browser', ds_browser) self.assertEqual('[fake-ds] fake-path', str(ds_path)) if self.exists: files = set() files.add(self._file_name) return files with mock.patch.object(ds_util, 'get_sub_folders', fake_get_sub_folders): self.exists = True ts = self._imagecache._get_timestamp( 'fake-ds-browser', ds_obj.DatastorePath('fake-ds', 'fake-path')) self.assertEqual(self._file_name, ts) self.exists = False ts = self._imagecache._get_timestamp( 'fake-ds-browser', ds_obj.DatastorePath('fake-ds', 'fake-path')) self.assertIsNone(ts) def test_get_timestamp_filename(self): self.useFixture(utils_fixture.TimeFixture(self._time)) fn = self._imagecache._get_timestamp_filename() self.assertEqual(self._file_name, fn) def test_get_datetime_from_filename(self): t = self._imagecache._get_datetime_from_filename(self._file_name) self.assertEqual(self._time, t) def test_get_ds_browser(self): cache = self._imagecache._ds_browser ds_browser = mock.Mock() moref = fake.ManagedObjectReference('datastore-100') self.assertIsNone(cache.get(moref.value)) mock_get_method = mock.Mock(return_value=ds_browser) with mock.patch.object(vutil, 'get_object_property', mock_get_method): ret = self._imagecache._get_ds_browser(moref) mock_get_method.assert_called_once_with(mock.ANY, moref, 'browser') self.assertIs(ds_browser, ret) self.assertIs(ds_browser, cache.get(moref.value)) def test_list_datastore_images(self): def fake_get_object_property(vim, mobj, property_name): return 'fake-ds-browser' def fake_get_sub_folders(session, ds_browser, ds_path): files = set() 
files.add('image-ref-uuid') return files with test.nested( mock.patch.object(vutil, 'get_object_property', fake_get_object_property), mock.patch.object(ds_util, 'get_sub_folders', fake_get_sub_folders) ) as (_get_dynamic, _get_sub_folders): fake_ds_ref = fake.ManagedObjectReference('fake-ds-ref') datastore = ds_obj.Datastore(name='ds', ref=fake_ds_ref) ds_path = datastore.build_path('base_folder') images = self._imagecache._list_datastore_images( ds_path, datastore) originals = set() originals.add('image-ref-uuid') self.assertEqual({'originals': originals, 'unexplained_images': []}, images) @mock.patch.object(imagecache.ImageCacheManager, 'timestamp_folder_get') @mock.patch.object(imagecache.ImageCacheManager, 'timestamp_cleanup') @mock.patch.object(imagecache.ImageCacheManager, '_get_ds_browser') def test_enlist_image(self, mock_get_ds_browser, mock_timestamp_cleanup, mock_timestamp_folder_get): image_id = "fake_image_id" dc_ref = "fake_dc_ref" fake_ds_ref = mock.Mock() ds = ds_obj.Datastore( ref=fake_ds_ref, name='fake_ds', capacity=1, freespace=1) ds_browser = mock.Mock() mock_get_ds_browser.return_value = ds_browser timestamp_folder_path = mock.Mock() mock_timestamp_folder_get.return_value = timestamp_folder_path self._imagecache.enlist_image(image_id, ds, dc_ref) cache_root_folder = ds.build_path("fake-base-folder") mock_get_ds_browser.assert_called_once_with( ds.ref) mock_timestamp_folder_get.assert_called_once_with( cache_root_folder, "fake_image_id") mock_timestamp_cleanup.assert_called_once_with( dc_ref, ds_browser, timestamp_folder_path) def test_age_cached_images(self): def fake_get_ds_browser(ds_ref): return 'fake-ds-browser' def fake_get_timestamp(ds_browser, ds_path): self._get_timestamp_called += 1 path = str(ds_path) if path == '[fake-ds] fake-path/fake-image-1': # No time stamp exists return if path == '[fake-ds] fake-path/fake-image-2': # Timestamp that will be valid => no deletion return 'ts-2012-11-22-10-00-00' if path == '[fake-ds] fake-path/fake-image-3': # Timestamp that will be invalid => deletion return 'ts-2012-11-20-12-00-00' self.fail() def fake_mkdir(session, ts_path, dc_ref): self.assertEqual( '[fake-ds] fake-path/fake-image-1/ts-2012-11-22-12-00-00', str(ts_path)) def fake_file_delete(session, ds_path, dc_ref): self.assertEqual('[fake-ds] fake-path/fake-image-3', str(ds_path)) def fake_timestamp_cleanup(dc_ref, ds_browser, ds_path): self.assertEqual('[fake-ds] fake-path/fake-image-4', str(ds_path)) with test.nested( mock.patch.object(self._imagecache, '_get_ds_browser', fake_get_ds_browser), mock.patch.object(self._imagecache, '_get_timestamp', fake_get_timestamp), mock.patch.object(ds_util, 'mkdir', fake_mkdir), mock.patch.object(ds_util, 'file_delete', fake_file_delete), mock.patch.object(self._imagecache, 'timestamp_cleanup', fake_timestamp_cleanup), ) as (_get_ds_browser, _get_timestamp, _mkdir, _file_delete, _timestamp_cleanup): self.useFixture(utils_fixture.TimeFixture(self._time)) datastore = ds_obj.Datastore(name='ds', ref='fake-ds-ref') dc_info = ds_util.DcInfo(ref='dc_ref', name='name', vmFolder='vmFolder') self._get_timestamp_called = 0 self._imagecache.originals = set(['fake-image-1', 'fake-image-2', 'fake-image-3', 'fake-image-4']) self._imagecache.used_images = set(['fake-image-4']) self._imagecache._age_cached_images( 'fake-context', datastore, dc_info, ds_obj.DatastorePath('fake-ds', 'fake-path')) self.assertEqual(3, self._get_timestamp_called) @mock.patch.object(objects.block_device.BlockDeviceMappingList, 'bdms_by_instance_uuid', 
return_value={}) def test_update(self, mock_bdms_by_inst): def fake_list_datastore_images(ds_path, datastore): return {'unexplained_images': [], 'originals': self.images} def fake_age_cached_images(context, datastore, dc_info, ds_path): self.assertEqual('[ds] fake-base-folder', str(ds_path)) self.assertEqual(self.images, self._imagecache.used_images) self.assertEqual(self.images, self._imagecache.originals) with test.nested( mock.patch.object(self._imagecache, '_list_datastore_images', fake_list_datastore_images), mock.patch.object(self._imagecache, '_age_cached_images', fake_age_cached_images) ) as (_list_base, _age_and_verify): instances = [{'image_ref': '1', 'host': CONF.host, 'name': 'inst-1', 'uuid': uuidsentinel.foo, 'vm_state': '', 'task_state': ''}, {'image_ref': '2', 'host': CONF.host, 'name': 'inst-2', 'uuid': uuidsentinel.bar, 'vm_state': '', 'task_state': ''}] all_instances = [fake_instance.fake_instance_obj(None, **instance) for instance in instances] self.images = set(['1', '2']) datastore = ds_obj.Datastore(name='ds', ref='fake-ds-ref') dc_info = ds_util.DcInfo(ref='dc_ref', name='name', vmFolder='vmFolder') datastores_info = [(datastore, dc_info)] self._imagecache.update('context', all_instances, datastores_info) nova-17.0.1/nova/tests/unit/virt/vmwareapi/test_vmops.py0000666000175000017500000040000113250073126023401 0ustar zuulzuul00000000000000# Copyright 2013 OpenStack Foundation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
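# test_get_machine_id_str below fixes the guest 'machine id' wire format:
# one 'mac;ip;netmask;gateway;broadcast;dns#' record per interface, with
# fields left empty when the network carries no IPv4 subnet (the
# pure-IPv6 case reduces to 'DE:AD:BE:EF:00:00;;;;;#'). A sketch for a
# single interface, assuming plain-string inputs:
def _machine_id_record(mac, ip='', netmask='', gateway='', broadcast='',
                       dns=''):
    return '%s;%s;%s;%s;%s;%s#' % (mac, ip, netmask, gateway, broadcast,
                                   dns)
# _machine_id_record('DE:AD:BE:EF:00:00', '192.168.0.100',
#                    '255.255.255.0', '192.168.0.1', '192.168.0.255',
#                    '192.168.0.1') matches the IPv4 expectation below.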
import time import mock from oslo_serialization import jsonutils from oslo_utils import units from oslo_utils import uuidutils from oslo_vmware import exceptions as vexc from oslo_vmware.objects import datastore as ds_obj from oslo_vmware import vim_util as vutil import six from nova.compute import power_state from nova import context from nova import exception from nova.network import model as network_model from nova import objects from nova import test from nova.tests.unit import fake_flavor from nova.tests.unit import fake_instance import nova.tests.unit.image.fake from nova.tests.unit.virt.vmwareapi import fake as vmwareapi_fake from nova.tests.unit.virt.vmwareapi import stubs from nova.tests import uuidsentinel from nova import utils from nova import version from nova.virt import hardware from nova.virt.vmwareapi import constants from nova.virt.vmwareapi import driver from nova.virt.vmwareapi import ds_util from nova.virt.vmwareapi import images from nova.virt.vmwareapi import vif from nova.virt.vmwareapi import vim_util from nova.virt.vmwareapi import vm_util from nova.virt.vmwareapi import vmops class DsPathMatcher(object): def __init__(self, expected_ds_path_str): self.expected_ds_path_str = expected_ds_path_str def __eq__(self, ds_path_param): return str(ds_path_param) == self.expected_ds_path_str class VMwareVMOpsTestCase(test.NoDBTestCase): def setUp(self): super(VMwareVMOpsTestCase, self).setUp() ds_util.dc_cache_reset() vmwareapi_fake.reset() stubs.set_stubs(self) self.flags(enabled=True, group='vnc') self.flags(image_cache_subdirectory_name='vmware_base', my_ip='', flat_injected=True) self._context = context.RequestContext('fake_user', 'fake_project') self._session = driver.VMwareAPISession() self._virtapi = mock.Mock() self._image_id = nova.tests.unit.image.fake.get_valid_image_id() fake_ds_ref = vmwareapi_fake.ManagedObjectReference('fake-ds') self._ds = ds_obj.Datastore( ref=fake_ds_ref, name='fake_ds', capacity=10 * units.Gi, freespace=10 * units.Gi) self._dc_info = ds_util.DcInfo( ref='fake_dc_ref', name='fake_dc', vmFolder='fake_vm_folder') cluster = vmwareapi_fake.create_cluster('fake_cluster', fake_ds_ref) self._uuid = uuidsentinel.foo self._instance_values = { 'name': 'fake_name', 'display_name': 'fake_display_name', 'uuid': self._uuid, 'vcpus': 1, 'memory_mb': 512, 'image_ref': self._image_id, 'root_gb': 10, 'node': '%s(%s)' % (cluster.mo_id, cluster.name), 'expected_attrs': ['system_metadata'], } self._instance = fake_instance.fake_instance_obj( self._context, **self._instance_values) self._flavor = objects.Flavor(name='m1.small', memory_mb=512, vcpus=1, root_gb=10, ephemeral_gb=0, swap=0, extra_specs={}) self._instance.flavor = self._flavor self._vmops = vmops.VMwareVMOps(self._session, self._virtapi, None, cluster=cluster.obj) self._cluster = cluster self._image_meta = objects.ImageMeta.from_dict({'id': self._image_id}) subnet_4 = network_model.Subnet(cidr='192.168.0.1/24', dns=[network_model.IP('192.168.0.1')], gateway= network_model.IP('192.168.0.1'), ips=[ network_model.IP('192.168.0.100')], routes=None) subnet_6 = network_model.Subnet(cidr='dead:beef::1/64', dns=None, gateway= network_model.IP('dead:beef::1'), ips=[network_model.IP( 'dead:beef::dcad:beff:feef:0')], routes=None) network = network_model.Network(id=0, bridge='fa0', label='fake', subnets=[subnet_4, subnet_6], vlan=None, bridge_interface=None, injected=True) self._network_values = { 'id': None, 'address': 'DE:AD:BE:EF:00:00', 'network': network, 'type': network_model.VIF_TYPE_OVS, 'devname': None, 
'ovs_interfaceid': None, 'rxtx_cap': 3 } self.network_info = network_model.NetworkInfo([ network_model.VIF(**self._network_values) ]) pure_IPv6_network = network_model.Network(id=0, bridge='fa0', label='fake', subnets=[subnet_6], vlan=None, bridge_interface=None, injected=True) self.pure_IPv6_network_info = network_model.NetworkInfo([ network_model.VIF(id=None, address='DE:AD:BE:EF:00:00', network=pure_IPv6_network, type=None, devname=None, ovs_interfaceid=None, rxtx_cap=3) ]) self._metadata = ( "name:fake_display_name\n" "userid:fake_user\n" "username:None\n" "projectid:fake_project\n" "projectname:None\n" "flavor:name:m1.micro\n" "flavor:memory_mb:6\n" "flavor:vcpus:28\n" "flavor:ephemeral_gb:8128\n" "flavor:root_gb:496\n" "flavor:swap:33550336\n" "imageid:70a599e0-31e7-49b7-b260-868f441e862b\n" "package:%s\n" % version.version_string_with_package()) def test_get_machine_id_str(self): result = vmops.VMwareVMOps._get_machine_id_str(self.network_info) self.assertEqual('DE:AD:BE:EF:00:00;192.168.0.100;255.255.255.0;' '192.168.0.1;192.168.0.255;192.168.0.1#', result) result = vmops.VMwareVMOps._get_machine_id_str( self.pure_IPv6_network_info) self.assertEqual('DE:AD:BE:EF:00:00;;;;;#', result) def _setup_create_folder_mocks(self): ops = vmops.VMwareVMOps(mock.Mock(), mock.Mock(), mock.Mock()) base_name = 'folder' ds_name = "datastore" ds_ref = mock.Mock() ds_ref.value = 1 dc_ref = mock.Mock() ds_util._DS_DC_MAPPING[ds_ref.value] = ds_util.DcInfo( ref=dc_ref, name='fake-name', vmFolder='fake-folder') path = ds_obj.DatastorePath(ds_name, base_name) return ds_name, ds_ref, ops, path, dc_ref @mock.patch.object(ds_util, 'mkdir') def test_create_folder_if_missing(self, mock_mkdir): ds_name, ds_ref, ops, path, dc = self._setup_create_folder_mocks() ops._create_folder_if_missing(ds_name, ds_ref, 'folder') mock_mkdir.assert_called_with(ops._session, path, dc) @mock.patch.object(ds_util, 'mkdir') def test_create_folder_if_missing_exception(self, mock_mkdir): ds_name, ds_ref, ops, path, dc = self._setup_create_folder_mocks() ds_util.mkdir.side_effect = vexc.FileAlreadyExistsException() ops._create_folder_if_missing(ds_name, ds_ref, 'folder') mock_mkdir.assert_called_with(ops._session, path, dc) def test_get_valid_vms_from_retrieve_result(self): ops = vmops.VMwareVMOps(self._session, mock.Mock(), mock.Mock()) fake_objects = vmwareapi_fake.FakeRetrieveResult() for x in range(0, 3): vm = vmwareapi_fake.VirtualMachine() vm.set('config.extraConfig["nvp.vm-uuid"]', vmwareapi_fake.OptionValue( value=uuidutils.generate_uuid())) fake_objects.add_object(vm) vms = ops._get_valid_vms_from_retrieve_result(fake_objects) self.assertEqual(3, len(vms)) def test_get_valid_vms_from_retrieve_result_with_invalid(self): ops = vmops.VMwareVMOps(self._session, mock.Mock(), mock.Mock()) fake_objects = vmwareapi_fake.FakeRetrieveResult() valid_vm = vmwareapi_fake.VirtualMachine() valid_vm.set('config.extraConfig["nvp.vm-uuid"]', vmwareapi_fake.OptionValue( value=uuidutils.generate_uuid())) fake_objects.add_object(valid_vm) invalid_vm1 = vmwareapi_fake.VirtualMachine() invalid_vm1.set('runtime.connectionState', 'orphaned') invalid_vm1.set('config.extraConfig["nvp.vm-uuid"]', vmwareapi_fake.OptionValue( value=uuidutils.generate_uuid())) invalid_vm2 = vmwareapi_fake.VirtualMachine() invalid_vm2.set('runtime.connectionState', 'inaccessible') invalid_vm2.set('config.extraConfig["nvp.vm-uuid"]', vmwareapi_fake.OptionValue( value=uuidutils.generate_uuid())) fake_objects.add_object(invalid_vm1) fake_objects.add_object(invalid_vm2) vms 
= ops._get_valid_vms_from_retrieve_result(fake_objects) self.assertEqual(1, len(vms)) def test_delete_vm_snapshot(self): def fake_call_method(module, method, *args, **kwargs): self.assertEqual('RemoveSnapshot_Task', method) self.assertEqual('fake_vm_snapshot', args[0]) self.assertFalse(kwargs['removeChildren']) self.assertTrue(kwargs['consolidate']) return 'fake_remove_snapshot_task' with test.nested( mock.patch.object(self._session, '_wait_for_task'), mock.patch.object(self._session, '_call_method', fake_call_method) ) as (_wait_for_task, _call_method): self._vmops._delete_vm_snapshot(self._instance, "fake_vm_ref", "fake_vm_snapshot") _wait_for_task.assert_has_calls([ mock.call('fake_remove_snapshot_task')]) def test_create_vm_snapshot(self): method_list = ['CreateSnapshot_Task', 'get_object_property'] def fake_call_method(module, method, *args, **kwargs): expected_method = method_list.pop(0) self.assertEqual(expected_method, method) if (expected_method == 'CreateSnapshot_Task'): self.assertEqual('fake_vm_ref', args[0]) self.assertFalse(kwargs['memory']) self.assertTrue(kwargs['quiesce']) return 'fake_snapshot_task' elif (expected_method == 'get_object_property'): task_info = mock.Mock() task_info.result = "fake_snapshot_ref" self.assertEqual(('fake_snapshot_task', 'info'), args) return task_info with test.nested( mock.patch.object(self._session, '_wait_for_task'), mock.patch.object(self._session, '_call_method', fake_call_method) ) as (_wait_for_task, _call_method): snap = self._vmops._create_vm_snapshot(self._instance, "fake_vm_ref") self.assertEqual("fake_snapshot_ref", snap) _wait_for_task.assert_has_calls([ mock.call('fake_snapshot_task')]) def test_update_instance_progress(self): with mock.patch.object(self._instance, 'save') as mock_save: self._vmops._update_instance_progress(self._instance._context, self._instance, 5, 10) mock_save.assert_called_once_with() self.assertEqual(50, self._instance.progress) @mock.patch.object(vm_util, 'get_vm_ref', return_value='fake_ref') def test_get_info(self, mock_get_vm_ref): result = { 'summary.config.numCpu': 4, 'summary.config.memorySizeMB': 128, 'runtime.powerState': 'poweredOn' } with mock.patch.object(self._session, '_call_method', return_value=result): info = self._vmops.get_info(self._instance) mock_get_vm_ref.assert_called_once_with(self._session, self._instance) expected = hardware.InstanceInfo(state=power_state.RUNNING) self.assertEqual(expected, info) @mock.patch.object(vm_util, 'get_vm_ref', return_value='fake_ref') def test_get_info_when_ds_unavailable(self, mock_get_vm_ref): result = { 'runtime.powerState': 'poweredOff' } with mock.patch.object(self._session, '_call_method', return_value=result): info = self._vmops.get_info(self._instance) mock_get_vm_ref.assert_called_once_with(self._session, self._instance) self.assertEqual(hardware.InstanceInfo(state=power_state.SHUTDOWN), info) @mock.patch.object(vm_util, 'get_vm_ref', return_value='fake_ref') def test_get_info_instance_deleted(self, mock_get_vm_ref): props = ['summary.config.numCpu', 'summary.config.memorySizeMB', 'runtime.powerState'] prop_cpu = vmwareapi_fake.Prop(props[0], 4) prop_mem = vmwareapi_fake.Prop(props[1], 128) prop_state = vmwareapi_fake.Prop(props[2], 'poweredOn') prop_list = [prop_state, prop_mem, prop_cpu] obj_content = vmwareapi_fake.ObjectContent(None, prop_list=prop_list) result = vmwareapi_fake.FakeRetrieveResult() result.add_object(obj_content) def mock_call_method(module, method, *args, **kwargs): raise vexc.ManagedObjectNotFoundException() with 
mock.patch.object(self._session, '_call_method', mock_call_method): self.assertRaises(exception.InstanceNotFound, self._vmops.get_info, self._instance) mock_get_vm_ref.assert_called_once_with(self._session, self._instance) def _test_get_datacenter_ref_and_name(self, ds_ref_exists=False): instance_ds_ref = mock.Mock() instance_ds_ref.value = "ds-1" _vcvmops = vmops.VMwareVMOps(self._session, None, None) result = vmwareapi_fake.FakeRetrieveResult() if ds_ref_exists: ds_ref = mock.Mock() ds_ref.value = "ds-1" result.add_object(vmwareapi_fake.Datacenter(ds_ref=ds_ref)) else: result.add_object(vmwareapi_fake.Datacenter(ds_ref=None)) result.add_object(vmwareapi_fake.Datacenter()) with mock.patch.object(self._session, '_call_method', return_value=result) as fake_call: dc_info = _vcvmops.get_datacenter_ref_and_name(instance_ds_ref) fake_call.assert_called_once_with( vim_util, "get_objects", "Datacenter", ["name", "datastore", "vmFolder"]) if ds_ref_exists: self.assertEqual(1, len(ds_util._DS_DC_MAPPING)) self.assertEqual("ha-datacenter", dc_info.name) else: self.assertIsNone(dc_info) def test_get_datacenter_ref_and_name(self): self._test_get_datacenter_ref_and_name(ds_ref_exists=True) def test_get_datacenter_ref_and_name_with_no_datastore(self): self._test_get_datacenter_ref_and_name() @mock.patch('nova.image.api.API.get') @mock.patch.object(vm_util, 'power_off_instance') @mock.patch.object(ds_util, 'disk_copy') @mock.patch.object(vm_util, 'get_vm_ref', return_value='fake-ref') @mock.patch.object(vm_util, 'find_rescue_device') @mock.patch.object(vm_util, 'get_vm_boot_spec') @mock.patch.object(vm_util, 'reconfigure_vm') @mock.patch.object(vm_util, 'power_on_instance') @mock.patch.object(ds_obj, 'get_datastore_by_ref') def test_rescue(self, mock_get_ds_by_ref, mock_power_on, mock_reconfigure, mock_get_boot_spec, mock_find_rescue, mock_get_vm_ref, mock_disk_copy, mock_power_off, mock_glance): _volumeops = mock.Mock() self._vmops._volumeops = _volumeops ds_ref = vmwareapi_fake.ManagedObjectReference(value='fake-ref') ds = ds_obj.Datastore(ds_ref, 'ds1') mock_get_ds_by_ref.return_value = ds mock_find_rescue.return_value = 'fake-rescue-device' mock_get_boot_spec.return_value = 'fake-boot-spec' vm_ref = vmwareapi_fake.ManagedObjectReference() mock_get_vm_ref.return_value = vm_ref device = vmwareapi_fake.DataObject() backing = vmwareapi_fake.DataObject() backing.datastore = ds.ref device.backing = backing vmdk = vm_util.VmdkInfo('[fake] uuid/root.vmdk', 'fake-adapter', 'fake-disk', 'fake-capacity', device) with test.nested( mock.patch.object(self._vmops, 'get_datacenter_ref_and_name'), mock.patch.object(vm_util, 'get_vmdk_info', return_value=vmdk) ) as (_get_dc_ref_and_name, fake_vmdk_info): dc_info = mock.Mock() _get_dc_ref_and_name.return_value = dc_info self._vmops.rescue( self._context, self._instance, None, self._image_meta) mock_power_off.assert_called_once_with(self._session, self._instance, vm_ref) uuid = self._instance.image_ref cache_path = ds.build_path('vmware_base', uuid, uuid + '.vmdk') rescue_path = ds.build_path(self._uuid, uuid + '-rescue.vmdk') mock_disk_copy.assert_called_once_with(self._session, dc_info.ref, cache_path, rescue_path) _volumeops.attach_disk_to_vm.assert_called_once_with(vm_ref, self._instance, mock.ANY, mock.ANY, rescue_path) mock_get_boot_spec.assert_called_once_with(mock.ANY, 'fake-rescue-device') mock_reconfigure.assert_called_once_with(self._session, vm_ref, 'fake-boot-spec') mock_power_on.assert_called_once_with(self._session, self._instance, vm_ref=vm_ref) def 
test_unrescue_power_on(self): self._test_unrescue(True) def test_unrescue_power_off(self): self._test_unrescue(False) def _test_unrescue(self, power_on): _volumeops = mock.Mock() self._vmops._volumeops = _volumeops vm_ref = mock.Mock() def fake_call_method(module, method, *args, **kwargs): expected_args = (vm_ref, 'config.hardware.device') self.assertEqual('get_object_property', method) self.assertEqual(expected_args, args) with test.nested( mock.patch.object(vm_util, 'power_on_instance'), mock.patch.object(vm_util, 'find_rescue_device'), mock.patch.object(vm_util, 'get_vm_ref', return_value=vm_ref), mock.patch.object(self._session, '_call_method', fake_call_method), mock.patch.object(vm_util, 'power_off_instance') ) as (_power_on_instance, _find_rescue, _get_vm_ref, _call_method, _power_off): self._vmops.unrescue(self._instance, power_on=power_on) if power_on: _power_on_instance.assert_called_once_with(self._session, self._instance, vm_ref=vm_ref) else: self.assertFalse(_power_on_instance.called) _get_vm_ref.assert_called_once_with(self._session, self._instance) _power_off.assert_called_once_with(self._session, self._instance, vm_ref) _volumeops.detach_disk_from_vm.assert_called_once_with( vm_ref, self._instance, mock.ANY, destroy_disk=True) @mock.patch.object(time, 'sleep') def _test_clean_shutdown(self, mock_sleep, timeout, retry_interval, returns_on, returns_off, vmware_tools_status, succeeds): """Test the _clean_shutdown method :param timeout: timeout before soft shutdown is considered a fail :param retry_interval: time between rechecking instance power state :param returns_on: how often the instance is reported as poweredOn :param returns_off: how often the instance is reported as poweredOff :param vmware_tools_status: Status of vmware tools :param succeeds: the expected result """ instance = self._instance vm_ref = mock.Mock() return_props = [] expected_methods = ['get_object_properties_dict'] props_on = {'runtime.powerState': 'poweredOn', 'summary.guest.toolsStatus': vmware_tools_status, 'summary.guest.toolsRunningStatus': 'guestToolsRunning'} props_off = {'runtime.powerState': 'poweredOff', 'summary.guest.toolsStatus': vmware_tools_status, 'summary.guest.toolsRunningStatus': 'guestToolsRunning'} # initialize expected instance methods and returned properties if vmware_tools_status == "toolsOk": if returns_on > 0: expected_methods.append('ShutdownGuest') for x in range(returns_on + 1): return_props.append(props_on) for x in range(returns_on): expected_methods.append('get_object_properties_dict') for x in range(returns_off): return_props.append(props_off) if returns_on > 0: expected_methods.append('get_object_properties_dict') else: return_props.append(props_off) def fake_call_method(module, method, *args, **kwargs): expected_method = expected_methods.pop(0) self.assertEqual(expected_method, method) if expected_method == 'get_object_properties_dict': props = return_props.pop(0) return props elif expected_method == 'ShutdownGuest': return with test.nested( mock.patch.object(vm_util, 'get_vm_ref', return_value=vm_ref), mock.patch.object(self._session, '_call_method', side_effect=fake_call_method) ) as (mock_get_vm_ref, mock_call_method): result = self._vmops._clean_shutdown(instance, timeout, retry_interval) self.assertEqual(succeeds, result) mock_get_vm_ref.assert_called_once_with(self._session, self._instance) def test_clean_shutdown_first_time(self): self._test_clean_shutdown(timeout=10, retry_interval=3, returns_on=1, returns_off=1, vmware_tools_status="toolsOk", succeeds=True) def 
    def test_clean_shutdown_second_time(self):
        self._test_clean_shutdown(timeout=10,
                                  retry_interval=3,
                                  returns_on=2,
                                  returns_off=1,
                                  vmware_tools_status="toolsOk",
                                  succeeds=True)

    def test_clean_shutdown_timeout(self):
        self._test_clean_shutdown(timeout=10,
                                  retry_interval=3,
                                  returns_on=4,
                                  returns_off=0,
                                  vmware_tools_status="toolsOk",
                                  succeeds=False)

    def test_clean_shutdown_already_off(self):
        self._test_clean_shutdown(timeout=10,
                                  retry_interval=3,
                                  returns_on=0,
                                  returns_off=1,
                                  vmware_tools_status="toolsOk",
                                  succeeds=False)

    def test_clean_shutdown_no_vmwaretools(self):
        self._test_clean_shutdown(timeout=10,
                                  retry_interval=3,
                                  returns_on=1,
                                  returns_off=0,
                                  vmware_tools_status="toolsNotOk",
                                  succeeds=False)

    def _test_finish_migration(self, power_on=True, resize_instance=False):
        with test.nested(
                mock.patch.object(self._vmops,
                                  '_resize_create_ephemerals_and_swap'),
                mock.patch.object(self._vmops, "_update_instance_progress"),
                mock.patch.object(vm_util, "power_on_instance"),
                mock.patch.object(vm_util, "get_vm_ref",
                                  return_value='fake-ref')
        ) as (fake_resize_create_ephemerals_and_swap,
              fake_update_instance_progress, fake_power_on, fake_get_vm_ref):
            self._vmops.finish_migration(context=self._context,
                                         migration=None,
                                         instance=self._instance,
                                         disk_info=None,
                                         network_info=None,
                                         block_device_info=None,
                                         resize_instance=resize_instance,
                                         image_meta=None,
                                         power_on=power_on)
            fake_resize_create_ephemerals_and_swap.assert_called_once_with(
                'fake-ref', self._instance, None)
            if power_on:
                fake_power_on.assert_called_once_with(self._session,
                                                      self._instance,
                                                      vm_ref='fake-ref')
            else:
                self.assertFalse(fake_power_on.called)
            calls = [
                mock.call(self._context, self._instance, step=5,
                          total_steps=vmops.RESIZE_TOTAL_STEPS),
                mock.call(self._context, self._instance, step=6,
                          total_steps=vmops.RESIZE_TOTAL_STEPS)]
            fake_update_instance_progress.assert_has_calls(calls)

    def test_finish_migration_power_on(self):
        self._test_finish_migration(power_on=True, resize_instance=False)

    def test_finish_migration_power_off(self):
        self._test_finish_migration(power_on=False, resize_instance=False)

    def test_finish_migration_power_on_resize(self):
        self._test_finish_migration(power_on=True, resize_instance=True)

    @mock.patch.object(vmops.VMwareVMOps, '_create_swap')
    @mock.patch.object(vmops.VMwareVMOps, '_create_ephemeral')
    @mock.patch.object(ds_obj, 'get_datastore_by_ref',
                       return_value='fake-ds-ref')
    @mock.patch.object(vm_util, 'get_vmdk_info')
    def _test_resize_create_ephemerals(self, vmdk, datastore,
                                       mock_get_vmdk_info,
                                       mock_get_datastore_by_ref,
                                       mock_create_ephemeral,
                                       mock_create_swap):
        mock_get_vmdk_info.return_value = vmdk
        dc_info = ds_util.DcInfo(ref='fake_ref', name='fake',
                                 vmFolder='fake_folder')
        with mock.patch.object(
                self._vmops, 'get_datacenter_ref_and_name',
                return_value=dc_info) as mock_get_dc_ref_and_name:
            self._vmops._resize_create_ephemerals_and_swap(
                'vm-ref', self._instance, 'block-devices')
            mock_get_vmdk_info.assert_called_once_with(
                self._session, 'vm-ref', uuid=self._instance.uuid)
            if vmdk.device:
                mock_get_datastore_by_ref.assert_called_once_with(
                    self._session, datastore.ref)
                mock_get_dc_ref_and_name.assert_called_once_with(
                    datastore.ref)
                mock_create_ephemeral.assert_called_once_with(
                    'block-devices', self._instance, 'vm-ref',
                    dc_info, 'fake-ds-ref', 'uuid', 'fake-adapter')
                mock_create_swap.assert_called_once_with(
                    'block-devices', self._instance, 'vm-ref',
                    dc_info, 'fake-ds-ref', 'uuid', 'fake-adapter')
            else:
                self.assertFalse(mock_create_ephemeral.called)
                self.assertFalse(mock_get_dc_ref_and_name.called)
                self.assertFalse(mock_get_datastore_by_ref.called)
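    # NOTE: the two cases below exercise _resize_create_ephemerals_and_swap
    # with and without a root VMDK; when no root disk is found, no datastore
    # or datacenter lookups are expected to happen at all.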
    def test_resize_create_ephemerals(self):
        datastore = ds_obj.Datastore(ref='fake-ref', name='fake')
        device = vmwareapi_fake.DataObject()
        backing = vmwareapi_fake.DataObject()
        backing.datastore = datastore.ref
        device.backing = backing
        vmdk = vm_util.VmdkInfo('[fake] uuid/root.vmdk',
                                'fake-adapter',
                                'fake-disk',
                                'fake-capacity',
                                device)
        self._test_resize_create_ephemerals(vmdk, datastore)

    def test_resize_create_ephemerals_no_root(self):
        vmdk = vm_util.VmdkInfo(None, None, None, 0, None)
        self._test_resize_create_ephemerals(vmdk, None)

    @mock.patch.object(vmops.VMwareVMOps, '_get_extra_specs')
    @mock.patch.object(vmops.VMwareVMOps,
                       '_resize_create_ephemerals_and_swap')
    @mock.patch.object(vmops.VMwareVMOps, '_remove_ephemerals_and_swap')
    @mock.patch.object(ds_util, 'disk_delete')
    @mock.patch.object(ds_util, 'disk_move')
    @mock.patch.object(ds_util, 'file_exists', return_value=True)
    @mock.patch.object(vmops.VMwareVMOps, '_get_ds_browser',
                       return_value='fake-browser')
    @mock.patch.object(vm_util, 'reconfigure_vm')
    @mock.patch.object(vm_util, 'get_vm_resize_spec',
                       return_value='fake-spec')
    @mock.patch.object(vm_util, 'power_off_instance')
    @mock.patch.object(vm_util, 'get_vm_ref', return_value='fake-ref')
    @mock.patch.object(vm_util, 'power_on_instance')
    def _test_finish_revert_migration(self, fake_power_on,
                                      fake_get_vm_ref, fake_power_off,
                                      fake_resize_spec, fake_reconfigure_vm,
                                      fake_get_browser,
                                      fake_original_exists, fake_disk_move,
                                      fake_disk_delete,
                                      fake_remove_ephemerals_and_swap,
                                      fake_resize_create_ephemerals_and_swap,
                                      fake_get_extra_specs, power_on):
        """Tests the finish_revert_migration method on vmops."""
        datastore = ds_obj.Datastore(ref='fake-ref', name='fake')
        device = vmwareapi_fake.DataObject()
        backing = vmwareapi_fake.DataObject()
        backing.datastore = datastore.ref
        device.backing = backing
        vmdk = vm_util.VmdkInfo('[fake] uuid/root.vmdk',
                                'fake-adapter',
                                'fake-disk',
                                'fake-capacity',
                                device)
        dc_info = ds_util.DcInfo(ref='fake_ref', name='fake',
                                 vmFolder='fake_folder')
        extra_specs = vm_util.ExtraSpecs()
        fake_get_extra_specs.return_value = extra_specs
        with test.nested(
                mock.patch.object(self._vmops, 'get_datacenter_ref_and_name',
                                  return_value=dc_info),
                mock.patch.object(vm_util, 'get_vmdk_info',
                                  return_value=vmdk)
        ) as (fake_get_dc_ref_and_name, fake_get_vmdk_info):
            self._vmops._volumeops = mock.Mock()
            mock_attach_disk = self._vmops._volumeops.attach_disk_to_vm
            mock_detach_disk = self._vmops._volumeops.detach_disk_from_vm

            self._vmops.finish_revert_migration(self._context,
                                                instance=self._instance,
                                                network_info=None,
                                                block_device_info=None,
                                                power_on=power_on)
            fake_get_vm_ref.assert_called_once_with(self._session,
                                                    self._instance)
            fake_power_off.assert_called_once_with(self._session,
                                                   self._instance,
                                                   'fake-ref')
            # Validate VM reconfiguration
            metadata = ('name:fake_display_name\n'
                        'userid:fake_user\n'
                        'username:None\n'
                        'projectid:fake_project\n'
                        'projectname:None\n'
                        'flavor:name:m1.small\n'
                        'flavor:memory_mb:512\n'
                        'flavor:vcpus:1\n'
                        'flavor:ephemeral_gb:0\n'
                        'flavor:root_gb:10\n'
                        'flavor:swap:0\n'
                        'imageid:70a599e0-31e7-49b7-b260-868f441e862b\n'
                        'package:%s\n' %
                        version.version_string_with_package())
            fake_resize_spec.assert_called_once_with(
                self._session.vim.client.factory,
                int(self._instance.vcpus),
                int(self._instance.memory_mb),
                extra_specs,
                metadata=metadata)
            fake_reconfigure_vm.assert_called_once_with(self._session,
                                                        'fake-ref',
                                                        'fake-spec')
            # Validate disk configuration
            fake_get_vmdk_info.assert_called_once_with(
                self._session, 'fake-ref', uuid=self._instance.uuid)
            fake_get_browser.assert_called_once_with('fake-ref')
            fake_original_exists.assert_called_once_with(
                self._session, 'fake-browser',
                ds_obj.DatastorePath(datastore.name, 'uuid'),
                'original.vmdk')

            mock_detach_disk.assert_called_once_with('fake-ref',
                                                     self._instance,
                                                     device)
            fake_disk_delete.assert_called_once_with(
                self._session, dc_info.ref, '[fake] uuid/root.vmdk')
            fake_disk_move.assert_called_once_with(
                self._session, dc_info.ref,
                '[fake] uuid/original.vmdk',
                '[fake] uuid/root.vmdk')
            mock_attach_disk.assert_called_once_with(
                'fake-ref', self._instance, 'fake-adapter', 'fake-disk',
                '[fake] uuid/root.vmdk')
            fake_remove_ephemerals_and_swap.assert_called_once_with(
                'fake-ref')
            fake_resize_create_ephemerals_and_swap.assert_called_once_with(
                'fake-ref', self._instance, None)
            if power_on:
                fake_power_on.assert_called_once_with(self._session,
                                                      self._instance)
            else:
                self.assertFalse(fake_power_on.called)

    def test_finish_revert_migration_power_on(self):
        self._test_finish_revert_migration(power_on=True)

    def test_finish_revert_migration_power_off(self):
        self._test_finish_revert_migration(power_on=False)

    @mock.patch.object(vmops.VMwareVMOps, '_get_instance_metadata')
    @mock.patch.object(vmops.VMwareVMOps, '_get_extra_specs')
    @mock.patch.object(vm_util, 'reconfigure_vm')
    @mock.patch.object(vm_util, 'get_vm_resize_spec',
                       return_value='fake-spec')
    def test_resize_vm(self, fake_resize_spec, fake_reconfigure,
                       fake_get_extra_specs, fake_get_metadata):
        extra_specs = vm_util.ExtraSpecs()
        fake_get_extra_specs.return_value = extra_specs
        fake_get_metadata.return_value = self._metadata
        flavor = objects.Flavor(name='m1.small',
                                memory_mb=1024,
                                vcpus=2,
                                extra_specs={})
        self._vmops._resize_vm(self._context, self._instance, 'vm-ref',
                               flavor, None)
        fake_resize_spec.assert_called_once_with(
            self._session.vim.client.factory, 2, 1024, extra_specs,
            metadata=self._metadata)
        fake_reconfigure.assert_called_once_with(self._session,
                                                 'vm-ref', 'fake-spec')

    @mock.patch.object(vmops.VMwareVMOps, '_extend_virtual_disk')
    @mock.patch.object(ds_util, 'disk_move')
    @mock.patch.object(ds_util, 'disk_copy')
    def test_resize_disk(self, fake_disk_copy, fake_disk_move, fake_extend):
        datastore = ds_obj.Datastore(ref='fake-ref', name='fake')
        device = vmwareapi_fake.DataObject()
        backing = vmwareapi_fake.DataObject()
        backing.datastore = datastore.ref
        device.backing = backing
        vmdk = vm_util.VmdkInfo('[fake] uuid/root.vmdk',
                                'fake-adapter',
                                'fake-disk',
                                self._instance.flavor.root_gb * units.Gi,
                                device)
        dc_info = ds_util.DcInfo(ref='fake_ref', name='fake',
                                 vmFolder='fake_folder')
        with mock.patch.object(
                self._vmops, 'get_datacenter_ref_and_name',
                return_value=dc_info) as fake_get_dc_ref_and_name:
            self._vmops._volumeops = mock.Mock()
            mock_attach_disk = self._vmops._volumeops.attach_disk_to_vm
            mock_detach_disk = self._vmops._volumeops.detach_disk_from_vm

            flavor = fake_flavor.fake_flavor_obj(
                self._context, root_gb=self._instance.flavor.root_gb + 1)
            self._vmops._resize_disk(self._instance, 'fake-ref', vmdk,
                                     flavor)
            fake_get_dc_ref_and_name.assert_called_once_with(datastore.ref)
            fake_disk_copy.assert_called_once_with(
                self._session, dc_info.ref, '[fake] uuid/root.vmdk',
                '[fake] uuid/resized.vmdk')
            mock_detach_disk.assert_called_once_with('fake-ref',
                                                     self._instance,
                                                     device)
            fake_extend.assert_called_once_with(
                self._instance, flavor['root_gb'] * units.Mi,
                '[fake] uuid/resized.vmdk', dc_info.ref)
            calls = [
                mock.call(self._session, dc_info.ref,
                          '[fake] uuid/root.vmdk',
                          '[fake] uuid/original.vmdk'),
                mock.call(self._session, dc_info.ref,
                          '[fake] uuid/resized.vmdk',
                          '[fake] uuid/root.vmdk')]
            fake_disk_move.assert_has_calls(calls)
            mock_attach_disk.assert_called_once_with(
                'fake-ref', self._instance, 'fake-adapter', 'fake-disk',
                '[fake] uuid/root.vmdk')

    @mock.patch.object(vm_util, 'detach_devices_from_vm')
    @mock.patch.object(vm_util, 'get_swap')
    @mock.patch.object(vm_util, 'get_ephemerals')
    def test_remove_ephemerals_and_swap(self, get_ephemerals, get_swap,
                                        detach_devices):
        get_ephemerals.return_value = [mock.sentinel.ephemeral0,
                                       mock.sentinel.ephemeral1]
        get_swap.return_value = mock.sentinel.swap
        devices = [mock.sentinel.ephemeral0, mock.sentinel.ephemeral1,
                   mock.sentinel.swap]
        self._vmops._remove_ephemerals_and_swap(mock.sentinel.vm_ref)
        detach_devices.assert_called_once_with(self._vmops._session,
                                               mock.sentinel.vm_ref, devices)

    @mock.patch.object(ds_util, 'disk_delete')
    @mock.patch.object(ds_util, 'file_exists', return_value=True)
    @mock.patch.object(vmops.VMwareVMOps, '_get_ds_browser',
                       return_value='fake-browser')
    @mock.patch.object(vm_util, 'get_vm_ref', return_value='fake-ref')
    def test_confirm_migration(self, fake_get_vm_ref, fake_get_browser,
                               fake_original_exists, fake_disk_delete):
        """Tests the confirm_migration method on vmops."""
        datastore = ds_obj.Datastore(ref='fake-ref', name='fake')
        device = vmwareapi_fake.DataObject()
        backing = vmwareapi_fake.DataObject()
        backing.datastore = datastore.ref
        device.backing = backing
        vmdk = vm_util.VmdkInfo('[fake] uuid/root.vmdk',
                                'fake-adapter',
                                'fake-disk',
                                'fake-capacity',
                                device)
        dc_info = ds_util.DcInfo(ref='fake_ref', name='fake',
                                 vmFolder='fake_folder')
        with test.nested(
                mock.patch.object(self._vmops, 'get_datacenter_ref_and_name',
                                  return_value=dc_info),
                mock.patch.object(vm_util, 'get_vmdk_info',
                                  return_value=vmdk)
        ) as (fake_get_dc_ref_and_name, fake_get_vmdk_info):
            self._vmops.confirm_migration(None, self._instance, None)
            fake_get_vm_ref.assert_called_once_with(self._session,
                                                    self._instance)
            fake_get_vmdk_info.assert_called_once_with(
                self._session, 'fake-ref', uuid=self._instance.uuid)
            fake_get_browser.assert_called_once_with('fake-ref')
            fake_original_exists.assert_called_once_with(
                self._session, 'fake-browser',
                ds_obj.DatastorePath(datastore.name, 'uuid'),
                'original.vmdk')
            fake_disk_delete.assert_called_once_with(
                self._session, dc_info.ref, '[fake] uuid/original.vmdk')

    def test_migrate_disk_and_power_off(self):
        self._test_migrate_disk_and_power_off(
            flavor_root_gb=self._instance.flavor.root_gb + 1)

    def test_migrate_disk_and_power_off_zero_disk_flavor(self):
        self._instance.flavor.root_gb = 0
        self._test_migrate_disk_and_power_off(flavor_root_gb=0)

    def test_migrate_disk_and_power_off_disk_shrink(self):
        self.assertRaises(exception.InstanceFaultRollback,
                          self._test_migrate_disk_and_power_off,
                          flavor_root_gb=self._instance.flavor.root_gb - 1)
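    # NOTE: migrate_disk_and_power_off powers the VM off first; asking for a
    # smaller root disk than the instance currently has is rejected with
    # InstanceFaultRollback before any disk operation happens, per the shrink
    # case above.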
    @mock.patch.object(vmops.VMwareVMOps, "_remove_ephemerals_and_swap")
    @mock.patch.object(vm_util, 'get_vmdk_info')
    @mock.patch.object(vmops.VMwareVMOps, "_resize_disk")
    @mock.patch.object(vmops.VMwareVMOps, "_resize_vm")
    @mock.patch.object(vm_util, 'power_off_instance')
    @mock.patch.object(vmops.VMwareVMOps, "_update_instance_progress")
    @mock.patch.object(vm_util, 'get_vm_ref', return_value='fake-ref')
    def _test_migrate_disk_and_power_off(self, fake_get_vm_ref,
                                         fake_progress, fake_power_off,
                                         fake_resize_vm, fake_resize_disk,
                                         fake_get_vmdk_info,
                                         fake_remove_ephemerals_and_swap,
                                         flavor_root_gb):
        vmdk = vm_util.VmdkInfo('[fake] uuid/root.vmdk',
                                'fake-adapter',
                                'fake-disk',
                                self._instance.flavor.root_gb * units.Gi,
                                'fake-device')
        fake_get_vmdk_info.return_value = vmdk
        flavor = fake_flavor.fake_flavor_obj(self._context,
                                             root_gb=flavor_root_gb)
        self._vmops.migrate_disk_and_power_off(self._context,
                                               self._instance,
                                               None,
                                               flavor)

        fake_get_vm_ref.assert_called_once_with(self._session,
                                                self._instance)
        fake_power_off.assert_called_once_with(self._session,
                                               self._instance,
                                               'fake-ref')
        fake_resize_vm.assert_called_once_with(self._context,
                                               self._instance,
                                               'fake-ref',
                                               flavor,
                                               mock.ANY)
        fake_resize_disk.assert_called_once_with(self._instance, 'fake-ref',
                                                 vmdk, flavor)
        calls = [mock.call(self._context, self._instance, step=i,
                           total_steps=vmops.RESIZE_TOTAL_STEPS)
                 for i in range(4)]
        fake_progress.assert_has_calls(calls)

    @mock.patch.object(vutil, 'get_inventory_path', return_value='fake_path')
    @mock.patch.object(vmops.VMwareVMOps, '_attach_cdrom_to_vm')
    @mock.patch.object(vmops.VMwareVMOps, '_create_config_drive')
    def test_configure_config_drive(self,
                                    mock_create_config_drive,
                                    mock_attach_cdrom_to_vm,
                                    mock_get_inventory_path):
        injected_files = mock.Mock()
        admin_password = mock.Mock()
        network_info = mock.Mock()
        vm_ref = mock.Mock()
        mock_create_config_drive.return_value = "fake_iso_path"
        self._vmops._configure_config_drive(
            self._context, self._instance, vm_ref, self._dc_info, self._ds,
            injected_files, admin_password, network_info)

        upload_iso_path = self._ds.build_path("fake_iso_path")
        mock_get_inventory_path.assert_called_once_with(self._session.vim,
                                                        self._dc_info.ref)
        mock_create_config_drive.assert_called_once_with(
            self._context, self._instance, injected_files, admin_password,
            network_info, self._ds.name, 'fake_path', self._instance.uuid,
            "Fake-CookieJar")
        mock_attach_cdrom_to_vm.assert_called_once_with(
            vm_ref, self._instance, self._ds.ref, str(upload_iso_path))

    @mock.patch('nova.image.api.API.get')
    @mock.patch.object(vmops.LOG, 'debug')
    @mock.patch.object(vmops.VMwareVMOps, '_fetch_image_if_missing')
    @mock.patch.object(vmops.VMwareVMOps, '_get_vm_config_info')
    @mock.patch.object(vmops.VMwareVMOps, 'build_virtual_machine')
    @mock.patch.object(vmops.lockutils, 'lock')
    def test_spawn_mask_block_device_info_password(
            self, mock_lock, mock_build_virtual_machine,
            mock_get_vm_config_info, mock_fetch_image_if_missing,
            mock_debug, mock_glance):
        # Very simple test that just ensures block_device_info auth_password
        # is masked when logged; the rest of the test just fails out early.
        data = {'auth_password': 'scrubme'}
        bdm = [{'boot_index': 0, 'disk_bus': constants.DEFAULT_ADAPTER_TYPE,
                'connection_info': {'data': data}}]
        bdi = {'block_device_mapping': bdm}

        self.password_logged = False

        # Tests that the parameters to the to_xml method are sanitized for
        # passwords when logged.
        def fake_debug(*args, **kwargs):
            if 'auth_password' in args[0]:
                self.password_logged = True
                self.assertNotIn('scrubme', args[0])

        mock_debug.side_effect = fake_debug
        self.flags(flat_injected=False)
        self.flags(enabled=False, group='vnc')

        mock_vi = mock.Mock()
        mock_vi.root_gb = 1
        mock_vi.ii.file_size = 2 * units.Gi
        mock_vi.instance.flavor.root_gb = 1
        mock_get_vm_config_info.return_value = mock_vi
        # Call spawn(). We don't care what it does as long as it generates
        # the log message, which we check below.
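        # Forcing attach_root_volume to fail makes spawn() bail out right
        # after the offending data has been logged, which is all this test
        # needs.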
        with mock.patch.object(self._vmops, '_volumeops') as mock_vo:
            mock_vo.attach_root_volume.side_effect = test.TestingException
            try:
                self._vmops.spawn(
                    self._context, self._instance, self._image_meta,
                    injected_files=None, admin_password=None,
                    network_info=[], block_device_info=bdi
                )
            except test.TestingException:
                pass

        # Check that the relevant log message was generated, and therefore
        # that we checked it was scrubbed
        self.assertTrue(self.password_logged)

    def _get_metadata(self, is_image_used=True):
        if is_image_used:
            image_id = '70a599e0-31e7-49b7-b260-868f441e862b'
        else:
            image_id = None
        return ("name:fake_display_name\n"
                "userid:fake_user\n"
                "username:None\n"
                "projectid:fake_project\n"
                "projectname:None\n"
                "flavor:name:m1.small\n"
                "flavor:memory_mb:512\n"
                "flavor:vcpus:1\n"
                "flavor:ephemeral_gb:0\n"
                "flavor:root_gb:10\n"
                "flavor:swap:0\n"
                "imageid:%(image_id)s\n"
                "package:%(version)s\n" % {
                    'image_id': image_id,
                    'version': version.version_string_with_package()})

    @mock.patch.object(vm_util, 'rename_vm')
    @mock.patch.object(vmops.VMwareVMOps, '_create_folders',
                       return_value='fake_vm_folder')
    @mock.patch('nova.virt.vmwareapi.vm_util.power_on_instance')
    @mock.patch.object(vmops.VMwareVMOps, '_use_disk_image_as_linked_clone')
    @mock.patch.object(vmops.VMwareVMOps, '_fetch_image_if_missing')
    @mock.patch(
        'nova.virt.vmwareapi.imagecache.ImageCacheManager.enlist_image')
    @mock.patch.object(vmops.VMwareVMOps, 'build_virtual_machine')
    @mock.patch.object(vmops.VMwareVMOps, '_get_vm_config_info')
    @mock.patch.object(vmops.VMwareVMOps, '_get_extra_specs')
    @mock.patch.object(nova.virt.vmwareapi.images.VMwareImage, 'from_image')
    def test_spawn_non_root_block_device(self, from_image,
                                         get_extra_specs,
                                         get_vm_config_info,
                                         build_virtual_machine,
                                         enlist_image, fetch_image,
                                         use_disk_image,
                                         power_on_instance,
                                         create_folders,
                                         rename_vm):
        self._instance.flavor = self._flavor
        extra_specs = get_extra_specs.return_value
        connection_info1 = {'data': 'fake-data1', 'serial': 'volume-fake-id1'}
        connection_info2 = {'data': 'fake-data2', 'serial': 'volume-fake-id2'}
        bdm = [{'connection_info': connection_info1,
                'disk_bus': constants.ADAPTER_TYPE_IDE,
                'mount_device': '/dev/sdb'},
               {'connection_info': connection_info2,
                'disk_bus': constants.DEFAULT_ADAPTER_TYPE,
                'mount_device': '/dev/sdc'}]
        bdi = {'block_device_mapping': bdm, 'root_device_name': '/dev/sda'}
        self.flags(flat_injected=False)
        self.flags(enabled=False, group='vnc')

        image_size = (self._instance.flavor.root_gb) * units.Gi / 2
        image_info = images.VMwareImage(
            image_id=self._image_id,
            file_size=image_size)
        vi = get_vm_config_info.return_value
        from_image.return_value = image_info
        build_virtual_machine.return_value = 'fake-vm-ref'

        with mock.patch.object(self._vmops, '_volumeops') as volumeops:
            self._vmops.spawn(self._context, self._instance,
                              self._image_meta,
                              injected_files=None, admin_password=None,
                              network_info=[], block_device_info=bdi)

            from_image.assert_called_once_with(self._context,
                                               self._instance.image_ref,
                                               self._image_meta)
            get_vm_config_info.assert_called_once_with(self._instance,
                                                       image_info,
                                                       extra_specs)
            build_virtual_machine.assert_called_once_with(
                self._instance, image_info, vi.dc_info, vi.datastore, [],
                extra_specs, self._get_metadata())
            enlist_image.assert_called_once_with(image_info.image_id,
                                                 vi.datastore,
                                                 vi.dc_info.ref)
            fetch_image.assert_called_once_with(self._context, vi)
            use_disk_image.assert_called_once_with('fake-vm-ref', vi)
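            # Non-root volumes must be attached with the adapter type asked
            # for in each BDM entry rather than the image default.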
            volumeops.attach_volume.assert_any_call(
                connection_info1, self._instance,
                constants.ADAPTER_TYPE_IDE)
            volumeops.attach_volume.assert_any_call(
                connection_info2, self._instance,
                constants.DEFAULT_ADAPTER_TYPE)

    @mock.patch.object(vm_util, 'rename_vm')
    @mock.patch.object(vmops.VMwareVMOps, '_create_folders',
                       return_value='fake_vm_folder')
    @mock.patch('nova.virt.vmwareapi.vm_util.power_on_instance')
    @mock.patch.object(vmops.VMwareVMOps, 'build_virtual_machine')
    @mock.patch.object(vmops.VMwareVMOps, '_get_vm_config_info')
    @mock.patch.object(vmops.VMwareVMOps, '_get_extra_specs')
    @mock.patch.object(nova.virt.vmwareapi.images.VMwareImage, 'from_image')
    def test_spawn_with_no_image_and_block_devices(self, from_image,
                                                   get_extra_specs,
                                                   get_vm_config_info,
                                                   build_virtual_machine,
                                                   power_on_instance,
                                                   create_folders,
                                                   rename_vm):
        self._instance.image_ref = None
        self._instance.flavor = self._flavor
        extra_specs = get_extra_specs.return_value

        connection_info1 = {'data': 'fake-data1', 'serial': 'volume-fake-id1'}
        connection_info2 = {'data': 'fake-data2', 'serial': 'volume-fake-id2'}
        connection_info3 = {'data': 'fake-data3', 'serial': 'volume-fake-id3'}
        bdm = [{'boot_index': 0,
                'connection_info': connection_info1,
                'disk_bus': constants.ADAPTER_TYPE_IDE},
               {'boot_index': 1,
                'connection_info': connection_info2,
                'disk_bus': constants.DEFAULT_ADAPTER_TYPE},
               {'boot_index': 2,
                'connection_info': connection_info3,
                'disk_bus': constants.ADAPTER_TYPE_LSILOGICSAS}]
        bdi = {'block_device_mapping': bdm}
        self.flags(flat_injected=False)
        self.flags(enabled=False, group='vnc')

        image_info = mock.sentinel.image_info
        vi = get_vm_config_info.return_value
        from_image.return_value = image_info
        build_virtual_machine.return_value = 'fake-vm-ref'

        with mock.patch.object(self._vmops, '_volumeops') as volumeops:
            self._vmops.spawn(self._context, self._instance,
                              self._image_meta,
                              injected_files=None, admin_password=None,
                              network_info=[], block_device_info=bdi)

            from_image.assert_called_once_with(self._context,
                                               self._instance.image_ref,
                                               self._image_meta)
            get_vm_config_info.assert_called_once_with(self._instance,
                                                       image_info,
                                                       extra_specs)
            build_virtual_machine.assert_called_once_with(
                self._instance, image_info, vi.dc_info, vi.datastore, [],
                extra_specs, self._get_metadata(is_image_used=False))
            volumeops.attach_root_volume.assert_called_once_with(
                connection_info1, self._instance, vi.datastore.ref,
                constants.ADAPTER_TYPE_IDE)
            volumeops.attach_volume.assert_any_call(
                connection_info2, self._instance,
                constants.DEFAULT_ADAPTER_TYPE)
            volumeops.attach_volume.assert_any_call(
                connection_info3, self._instance,
                constants.ADAPTER_TYPE_LSILOGICSAS)

    @mock.patch.object(vmops.VMwareVMOps, '_create_folders',
                       return_value='fake_vm_folder')
    @mock.patch('nova.virt.vmwareapi.vm_util.power_on_instance')
    @mock.patch.object(vmops.VMwareVMOps, 'build_virtual_machine')
    @mock.patch.object(vmops.VMwareVMOps, '_get_vm_config_info')
    @mock.patch.object(vmops.VMwareVMOps, '_get_extra_specs')
    @mock.patch.object(nova.virt.vmwareapi.images.VMwareImage, 'from_image')
    def test_spawn_unsupported_hardware(self, from_image,
                                        get_extra_specs,
                                        get_vm_config_info,
                                        build_virtual_machine,
                                        power_on_instance,
                                        create_folders):
        self._instance.image_ref = None
        self._instance.flavor = self._flavor
        extra_specs = get_extra_specs.return_value
        connection_info = {'data': 'fake-data', 'serial': 'volume-fake-id'}
        bdm = [{'boot_index': 0,
                'connection_info': connection_info,
                'disk_bus': 'invalid_adapter_type'}]
        bdi = {'block_device_mapping': bdm}
        self.flags(flat_injected=False)
        self.flags(enabled=False, group='vnc')

        image_info = mock.sentinel.image_info
        vi = get_vm_config_info.return_value
        from_image.return_value = image_info
        build_virtual_machine.return_value = 'fake-vm-ref'
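        # An unrecognized disk_bus in the BDM must abort spawn() before any
        # volume is attached.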
        self.assertRaises(exception.UnsupportedHardware, self._vmops.spawn,
                          self._context, self._instance, self._image_meta,
                          injected_files=None,
                          admin_password=None, network_info=[],
                          block_device_info=bdi)

        from_image.assert_called_once_with(self._context,
                                           self._instance.image_ref,
                                           self._image_meta)
        get_vm_config_info.assert_called_once_with(
            self._instance, image_info, extra_specs)
        build_virtual_machine.assert_called_once_with(
            self._instance, image_info, vi.dc_info, vi.datastore, [],
            extra_specs, self._get_metadata(is_image_used=False))

    def test_get_ds_browser(self):
        cache = self._vmops._datastore_browser_mapping
        ds_browser = mock.Mock()
        moref = vmwareapi_fake.ManagedObjectReference('datastore-100')
        self.assertIsNone(cache.get(moref.value))
        mock_call_method = mock.Mock(return_value=ds_browser)
        with mock.patch.object(self._session, '_call_method',
                               mock_call_method):
            ret = self._vmops._get_ds_browser(moref)
            mock_call_method.assert_called_once_with(
                vutil, 'get_object_property', moref, 'browser')
            self.assertIs(ds_browser, ret)
            self.assertIs(ds_browser, cache.get(moref.value))

    @mock.patch.object(
        vmops.VMwareVMOps, '_sized_image_exists', return_value=False)
    @mock.patch.object(vmops.VMwareVMOps, '_extend_virtual_disk')
    @mock.patch.object(vm_util, 'copy_virtual_disk')
    def _test_use_disk_image_as_linked_clone(self,
                                             mock_copy_virtual_disk,
                                             mock_extend_virtual_disk,
                                             mock_sized_image_exists,
                                             flavor_fits_image=False):
        extra_specs = vm_util.ExtraSpecs()
        file_size = 10 * units.Gi if flavor_fits_image else 5 * units.Gi
        image_info = images.VMwareImage(
            image_id=self._image_id,
            file_size=file_size,
            linked_clone=False)

        cache_root_folder = self._ds.build_path("vmware_base",
                                                self._image_id)
        mock_imagecache = mock.Mock()
        mock_imagecache.get_image_cache_folder.return_value = (
            cache_root_folder)
        vi = vmops.VirtualMachineInstanceConfigInfo(
            self._instance, image_info,
            self._ds, self._dc_info, mock_imagecache, extra_specs)

        sized_cached_image_ds_loc = cache_root_folder.join(
            "%s.%s.vmdk" % (self._image_id, vi.root_gb))

        self._vmops._volumeops = mock.Mock()
        mock_attach_disk_to_vm = self._vmops._volumeops.attach_disk_to_vm

        self._vmops._use_disk_image_as_linked_clone("fake_vm_ref", vi)

        mock_copy_virtual_disk.assert_called_once_with(
            self._session, self._dc_info.ref,
            str(vi.cache_image_path),
            str(sized_cached_image_ds_loc))

        if not flavor_fits_image:
            mock_extend_virtual_disk.assert_called_once_with(
                self._instance, vi.root_gb * units.Mi,
                str(sized_cached_image_ds_loc),
                self._dc_info.ref)

        mock_attach_disk_to_vm.assert_called_once_with(
            "fake_vm_ref", self._instance, vi.ii.adapter_type,
            vi.ii.disk_type,
            str(sized_cached_image_ds_loc),
            vi.root_gb * units.Mi, False,
            disk_io_limits=vi._extra_specs.disk_io_limits)

    def test_use_disk_image_as_linked_clone(self):
        self._test_use_disk_image_as_linked_clone()

    def test_use_disk_image_as_linked_clone_flavor_fits_image(self):
        self._test_use_disk_image_as_linked_clone(flavor_fits_image=True)

    @mock.patch.object(vmops.VMwareVMOps, '_extend_virtual_disk')
    @mock.patch.object(vm_util, 'copy_virtual_disk')
    def _test_use_disk_image_as_full_clone(self,
                                           mock_copy_virtual_disk,
                                           mock_extend_virtual_disk,
                                           flavor_fits_image=False):
        extra_specs = vm_util.ExtraSpecs()
        file_size = 10 * units.Gi if flavor_fits_image else 5 * units.Gi
        image_info = images.VMwareImage(
            image_id=self._image_id,
            file_size=file_size,
            linked_clone=False)

        cache_root_folder = self._ds.build_path("vmware_base",
                                                self._image_id)
        mock_imagecache = mock.Mock()
        mock_imagecache.get_image_cache_folder.return_value = (
            cache_root_folder)
        vi = vmops.VirtualMachineInstanceConfigInfo(
            self._instance, image_info,
            self._ds, self._dc_info, mock_imagecache, extra_specs)

        self._vmops._volumeops = mock.Mock()
        mock_attach_disk_to_vm = self._vmops._volumeops.attach_disk_to_vm

        self._vmops._use_disk_image_as_full_clone("fake_vm_ref", vi)

        fake_path = '[fake_ds] %(uuid)s/%(uuid)s.vmdk' % {'uuid': self._uuid}
        mock_copy_virtual_disk.assert_called_once_with(
            self._session, self._dc_info.ref,
            str(vi.cache_image_path), fake_path)

        if not flavor_fits_image:
            mock_extend_virtual_disk.assert_called_once_with(
                self._instance, vi.root_gb * units.Mi, fake_path,
                self._dc_info.ref)

        mock_attach_disk_to_vm.assert_called_once_with(
            "fake_vm_ref", self._instance, vi.ii.adapter_type,
            vi.ii.disk_type, fake_path, vi.root_gb * units.Mi,
            False, disk_io_limits=vi._extra_specs.disk_io_limits)

    def test_use_disk_image_as_full_clone(self):
        self._test_use_disk_image_as_full_clone()

    def test_use_disk_image_as_full_clone_image_too_big(self):
        self._test_use_disk_image_as_full_clone(flavor_fits_image=True)

    @mock.patch.object(vmops.VMwareVMOps, '_attach_cdrom_to_vm')
    @mock.patch.object(vm_util, 'create_virtual_disk')
    def _test_use_iso_image(self,
                            mock_create_virtual_disk,
                            mock_attach_cdrom,
                            with_root_disk):
        extra_specs = vm_util.ExtraSpecs()
        image_info = images.VMwareImage(
            image_id=self._image_id,
            file_size=10 * units.Mi,
            linked_clone=True)

        cache_root_folder = self._ds.build_path("vmware_base",
                                                self._image_id)
        mock_imagecache = mock.Mock()
        mock_imagecache.get_image_cache_folder.return_value = (
            cache_root_folder)
        vi = vmops.VirtualMachineInstanceConfigInfo(
            self._instance, image_info,
            self._ds, self._dc_info, mock_imagecache, extra_specs)

        self._vmops._volumeops = mock.Mock()
        mock_attach_disk_to_vm = self._vmops._volumeops.attach_disk_to_vm

        self._vmops._use_iso_image("fake_vm_ref", vi)

        mock_attach_cdrom.assert_called_once_with(
            "fake_vm_ref", self._instance, self._ds.ref,
            str(vi.cache_image_path))

        fake_path = '[fake_ds] %(uuid)s/%(uuid)s.vmdk' % {'uuid': self._uuid}
        if with_root_disk:
            mock_create_virtual_disk.assert_called_once_with(
                self._session, self._dc_info.ref,
                vi.ii.adapter_type, vi.ii.disk_type,
                fake_path, vi.root_gb * units.Mi)
            linked_clone = False
            mock_attach_disk_to_vm.assert_called_once_with(
                "fake_vm_ref", self._instance,
                vi.ii.adapter_type, vi.ii.disk_type,
                fake_path, vi.root_gb * units.Mi, linked_clone,
                disk_io_limits=vi._extra_specs.disk_io_limits)

    def test_use_iso_image_with_root_disk(self):
        self._test_use_iso_image(with_root_disk=True)

    def test_use_iso_image_without_root_disk(self):
        self._test_use_iso_image(with_root_disk=False)

    def _verify_spawn_method_calls(self, mock_call_method, extras=None):
        # TODO(vui): More explicit assertions of spawn() behavior
        # are waiting on additional refactoring pertaining to image
        # handling/manipulation. Till then, we continue to assert on the
        # sequence of VIM operations invoked.
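        # The sequence below corresponds to a flat-image spawn: probe the
        # cache, create and shuffle the disk files, extend the disk, and
        # finally rename the VM.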
        expected_methods = ['get_object_property',
                            'SearchDatastore_Task',
                            'CreateVirtualDisk_Task',
                            'DeleteDatastoreFile_Task',
                            'MoveDatastoreFile_Task',
                            'DeleteDatastoreFile_Task',
                            'SearchDatastore_Task',
                            'ExtendVirtualDisk_Task',
                            ]
        if extras:
            expected_methods.extend(extras)
        # Last call should be renaming the instance
        expected_methods.append('Rename_Task')
        recorded_methods = [c[1][1] for c in mock_call_method.mock_calls]
        self.assertEqual(expected_methods, recorded_methods)

    @mock.patch.object(vmops.VMwareVMOps, '_create_folders',
                       return_value='fake_vm_folder')
    @mock.patch(
        'nova.virt.vmwareapi.vmops.VMwareVMOps._update_vnic_index')
    @mock.patch(
        'nova.virt.vmwareapi.vmops.VMwareVMOps._configure_config_drive')
    @mock.patch('nova.virt.vmwareapi.ds_util.get_datastore')
    @mock.patch(
        'nova.virt.vmwareapi.vmops.VMwareVMOps.get_datacenter_ref_and_name')
    @mock.patch('nova.virt.vmwareapi.vif.get_vif_info',
                return_value=[])
    @mock.patch('nova.utils.is_neutron',
                return_value=False)
    @mock.patch('nova.virt.vmwareapi.vm_util.get_vm_create_spec',
                return_value='fake_create_spec')
    @mock.patch('nova.virt.vmwareapi.vm_util.create_vm',
                return_value='fake_vm_ref')
    @mock.patch('nova.virt.vmwareapi.ds_util.mkdir')
    @mock.patch('nova.virt.vmwareapi.vmops.VMwareVMOps._set_machine_id')
    @mock.patch(
        'nova.virt.vmwareapi.imagecache.ImageCacheManager.enlist_image')
    @mock.patch.object(vmops.VMwareVMOps, '_get_and_set_vnc_config')
    @mock.patch('nova.virt.vmwareapi.vm_util.power_on_instance')
    @mock.patch('nova.virt.vmwareapi.vm_util.copy_virtual_disk')
    # TODO(dims): Need to add tests for create_virtual_disk after the
    #             disk/image code in spawn gets refactored
    def _test_spawn(self,
                    mock_copy_virtual_disk,
                    mock_power_on_instance,
                    mock_get_and_set_vnc_config,
                    mock_enlist_image,
                    mock_set_machine_id,
                    mock_mkdir,
                    mock_create_vm,
                    mock_get_create_spec,
                    mock_is_neutron,
                    mock_get_vif_info,
                    mock_get_datacenter_ref_and_name,
                    mock_get_datastore,
                    mock_configure_config_drive,
                    mock_update_vnic_index,
                    mock_create_folders,
                    block_device_info=None,
                    extra_specs=None,
                    config_drive=False):

        if extra_specs is None:
            extra_specs = vm_util.ExtraSpecs()

        image_size = (self._instance.flavor.root_gb) * units.Gi / 2
        image = {
            'id': self._image_id,
            'disk_format': 'vmdk',
            'size': image_size,
        }
        image = objects.ImageMeta.from_dict(image)
        image_info = images.VMwareImage(
            image_id=self._image_id,
            file_size=image_size)
        vi = self._vmops._get_vm_config_info(
            self._instance, image_info, extra_specs)

        self._vmops._volumeops = mock.Mock()
        network_info = mock.Mock()
        mock_get_datastore.return_value = self._ds
        mock_get_datacenter_ref_and_name.return_value = self._dc_info
        mock_call_method = mock.Mock(return_value='fake_task')

        if extra_specs is None:
            extra_specs = vm_util.ExtraSpecs()

        with test.nested(
                mock.patch.object(self._session, '_wait_for_task'),
                mock.patch.object(self._session, '_call_method',
                                  mock_call_method),
                mock.patch.object(uuidutils, 'generate_uuid',
                                  return_value='tmp-uuid'),
                mock.patch.object(images, 'fetch_image'),
                mock.patch('nova.image.api.API.get'),
                mock.patch.object(vutil, 'get_inventory_path',
                                  return_value=self._dc_info.name),
                mock.patch.object(self._vmops, '_get_extra_specs',
                                  return_value=extra_specs),
                mock.patch.object(self._vmops, '_get_instance_metadata',
                                  return_value='fake-metadata')
        ) as (_wait_for_task, _call_method, _generate_uuid, _fetch_image,
              _get_img_svc, _get_inventory_path, _get_extra_specs,
              _get_instance_metadata):
            self._vmops.spawn(self._context, self._instance, image,
                              injected_files='fake_files',
                              admin_password='password',
                              network_info=network_info,
                              block_device_info=block_device_info)

            mock_is_neutron.assert_called_once_with()

            self.assertEqual(2, mock_mkdir.call_count)

            mock_get_vif_info.assert_called_once_with(
                self._session, self._cluster.obj, False,
                constants.DEFAULT_VIF_MODEL, network_info)
            mock_get_create_spec.assert_called_once_with(
                self._session.vim.client.factory,
                self._instance,
                'fake_ds',
                [],
                extra_specs,
                constants.DEFAULT_OS_TYPE,
                profile_spec=None,
                metadata='fake-metadata')
            mock_create_vm.assert_called_once_with(
                self._session,
                self._instance,
                'fake_vm_folder',
                'fake_create_spec',
                self._cluster.resourcePool)
            mock_get_and_set_vnc_config.assert_called_once_with(
                self._session.vim.client.factory,
                self._instance,
                'fake_vm_ref')
            mock_set_machine_id.assert_called_once_with(
                self._session.vim.client.factory,
                self._instance,
                network_info,
                vm_ref='fake_vm_ref')
            mock_power_on_instance.assert_called_once_with(
                self._session, self._instance, vm_ref='fake_vm_ref')

            if (block_device_info and
                    'block_device_mapping' in block_device_info):
                bdms = block_device_info['block_device_mapping']
                for bdm in bdms:
                    mock_attach_root = (
                        self._vmops._volumeops.attach_root_volume)
                    mock_attach = self._vmops._volumeops.attach_volume
                    adapter_type = bdm.get('disk_bus') or vi.ii.adapter_type
                    if bdm.get('boot_index') == 0:
                        mock_attach_root.assert_any_call(
                            bdm['connection_info'], self._instance,
                            self._ds.ref, adapter_type)
                    else:
                        mock_attach.assert_any_call(
                            bdm['connection_info'], self._instance,
                            self._ds.ref, adapter_type)

            mock_enlist_image.assert_called_once_with(
                self._image_id, self._ds, self._dc_info.ref)

            upload_file_name = 'vmware_temp/tmp-uuid/%s/%s-flat.vmdk' % (
                self._image_id, self._image_id)
            _fetch_image.assert_called_once_with(
                self._context,
                self._instance,
                self._session._host,
                self._session._port,
                self._dc_info.name,
                self._ds.name,
                upload_file_name,
                cookies='Fake-CookieJar')
            self.assertGreater(len(_wait_for_task.mock_calls), 0)
            self.assertEqual(1, _get_inventory_path.call_count)
            extras = None
            if block_device_info and ('ephemerals' in block_device_info or
                                      'swap' in block_device_info):
                extras = ['CreateVirtualDisk_Task']
            self._verify_spawn_method_calls(_call_method, extras)

            dc_ref = 'fake_dc_ref'
            source_file = six.text_type('[fake_ds] vmware_base/%s/%s.vmdk' %
                                        (self._image_id, self._image_id))
            dest_file = six.text_type(
                '[fake_ds] vmware_base/%s/%s.%d.vmdk' %
                (self._image_id, self._image_id,
                 self._instance['root_gb']))
            # TODO(dims): add more tests for copy_virtual_disk after
            #             the disk/image code in spawn gets refactored
            mock_copy_virtual_disk.assert_called_with(self._session,
                                                      dc_ref,
                                                      source_file,
                                                      dest_file)

            if config_drive:
                mock_configure_config_drive.assert_called_once_with(
                    self._context, self._instance, 'fake_vm_ref',
                    self._dc_info, self._ds, 'fake_files', 'password',
                    network_info)
            mock_update_vnic_index.assert_called_once_with(
                self._context, self._instance, network_info)

    @mock.patch.object(ds_util, 'get_datastore')
    @mock.patch.object(vmops.VMwareVMOps, 'get_datacenter_ref_and_name')
    def _test_get_spawn_vm_config_info(self,
                                       mock_get_datacenter_ref_and_name,
                                       mock_get_datastore,
                                       image_size_bytes=0):
        image_info = images.VMwareImage(
            image_id=self._image_id,
            file_size=image_size_bytes,
            linked_clone=True)

        mock_get_datastore.return_value = self._ds
        mock_get_datacenter_ref_and_name.return_value = self._dc_info

        extra_specs = vm_util.ExtraSpecs()

        vi = self._vmops._get_vm_config_info(self._instance, image_info,
                                             extra_specs)
        self.assertEqual(image_info, vi.ii)
        self.assertEqual(self._ds, vi.datastore)
        self.assertEqual(self._instance.flavor.root_gb, vi.root_gb)
        self.assertEqual(self._instance, vi.instance)
        self.assertEqual(self._instance.uuid, vi.instance.uuid)
        self.assertEqual(extra_specs, vi._extra_specs)
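        # The cache locations are fully determined by the datastore name and
        # the image id: [<ds>] vmware_base/<image_id>/<image_id>.vmdk.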
        cache_image_path = '[%s] vmware_base/%s/%s.vmdk' % (
            self._ds.name, self._image_id, self._image_id)
        self.assertEqual(cache_image_path, str(vi.cache_image_path))

        cache_image_folder = '[%s] vmware_base/%s' % (
            self._ds.name, self._image_id)
        self.assertEqual(cache_image_folder, str(vi.cache_image_folder))

    def test_get_spawn_vm_config_info(self):
        image_size = (self._instance.flavor.root_gb) * units.Gi / 2
        self._test_get_spawn_vm_config_info(image_size_bytes=image_size)

    def test_get_spawn_vm_config_info_image_too_big(self):
        image_size = (self._instance.flavor.root_gb + 1) * units.Gi
        self.assertRaises(exception.InstanceUnacceptable,
                          self._test_get_spawn_vm_config_info,
                          image_size_bytes=image_size)

    def test_spawn(self):
        self._test_spawn()

    def test_spawn_config_drive_enabled(self):
        self.flags(force_config_drive=True)
        self._test_spawn(config_drive=True)

    def test_spawn_with_block_device_info(self):
        block_device_info = {
            'block_device_mapping': [{'boot_index': 0,
                                      'connection_info': 'fake',
                                      'mount_device': '/dev/vda'}]
        }
        self._test_spawn(block_device_info=block_device_info)

    def test_spawn_with_block_device_info_with_config_drive(self):
        self.flags(force_config_drive=True)
        block_device_info = {
            'block_device_mapping': [{'boot_index': 0,
                                      'connection_info': 'fake',
                                      'mount_device': '/dev/vda'}]
        }
        self._test_spawn(block_device_info=block_device_info,
                         config_drive=True)

    def _spawn_with_block_device_info_ephemerals(self, ephemerals):
        block_device_info = {'ephemerals': ephemerals}
        self._test_spawn(block_device_info=block_device_info)

    def test_spawn_with_block_device_info_ephemerals(self):
        ephemerals = [{'device_type': 'disk',
                       'disk_bus': 'virtio',
                       'device_name': '/dev/vdb',
                       'size': 1}]
        self._spawn_with_block_device_info_ephemerals(ephemerals)

    def test_spawn_with_block_device_info_ephemerals_no_disk_bus(self):
        ephemerals = [{'device_type': 'disk',
                       'disk_bus': None,
                       'device_name': '/dev/vdb',
                       'size': 1}]
        self._spawn_with_block_device_info_ephemerals(ephemerals)

    def test_spawn_with_block_device_info_swap(self):
        block_device_info = {'swap': {'disk_bus': None,
                                      'swap_size': 512,
                                      'device_name': '/dev/sdb'}}
        self._test_spawn(block_device_info=block_device_info)
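    # NOTE: ephemeral and swap disks are created as thin disks next to the
    # VM; sizes are normalized to KiB before being handed to
    # _create_and_attach_thin_disk (GiB * units.Mi for ephemerals,
    # MiB * units.Ki for swap), as the next test asserts.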
    @mock.patch.object(vm_util, 'rename_vm')
    @mock.patch('nova.virt.vmwareapi.vm_util.power_on_instance')
    @mock.patch.object(vmops.VMwareVMOps, '_create_and_attach_thin_disk')
    @mock.patch.object(vmops.VMwareVMOps, '_use_disk_image_as_linked_clone')
    @mock.patch.object(vmops.VMwareVMOps, '_fetch_image_if_missing')
    @mock.patch(
        'nova.virt.vmwareapi.imagecache.ImageCacheManager.enlist_image')
    @mock.patch.object(vmops.VMwareVMOps, 'build_virtual_machine')
    @mock.patch.object(vmops.VMwareVMOps, '_get_vm_config_info')
    @mock.patch.object(vmops.VMwareVMOps, '_get_extra_specs')
    @mock.patch.object(nova.virt.vmwareapi.images.VMwareImage, 'from_image')
    def test_spawn_with_ephemerals_and_swap(self, from_image,
                                            get_extra_specs,
                                            get_vm_config_info,
                                            build_virtual_machine,
                                            enlist_image, fetch_image,
                                            use_disk_image,
                                            create_and_attach_thin_disk,
                                            power_on_instance,
                                            rename_vm):
        self._instance.flavor = objects.Flavor(vcpus=1, memory_mb=512,
                                               name="m1.tiny", root_gb=1,
                                               ephemeral_gb=1, swap=512,
                                               extra_specs={})
        extra_specs = self._vmops._get_extra_specs(self._instance.flavor)
        ephemerals = [{'device_type': 'disk',
                       'disk_bus': None,
                       'device_name': '/dev/vdb',
                       'size': 1},
                      {'device_type': 'disk',
                       'disk_bus': None,
                       'device_name': '/dev/vdc',
                       'size': 1}]
        swap = {'disk_bus': None, 'swap_size': 512,
                'device_name': '/dev/vdd'}
        bdi = {'block_device_mapping': [], 'root_device_name': '/dev/sda',
               'ephemerals': ephemerals, 'swap': swap}
        metadata = self._vmops._get_instance_metadata(self._context,
                                                      self._instance)
        self.flags(enabled=False, group='vnc')
        self.flags(flat_injected=False)

        image_size = (self._instance.flavor.root_gb) * units.Gi / 2
        image_info = images.VMwareImage(
            image_id=self._image_id,
            file_size=image_size)
        vi = get_vm_config_info.return_value
        from_image.return_value = image_info
        build_virtual_machine.return_value = 'fake-vm-ref'

        self._vmops.spawn(self._context, self._instance, {},
                          injected_files=None, admin_password=None,
                          network_info=[], block_device_info=bdi)

        from_image.assert_called_once_with(
            self._context, self._instance.image_ref, {})
        get_vm_config_info.assert_called_once_with(self._instance,
                                                   image_info, extra_specs)
        build_virtual_machine.assert_called_once_with(
            self._instance, image_info, vi.dc_info, vi.datastore, [],
            extra_specs, metadata)
        enlist_image.assert_called_once_with(image_info.image_id,
                                             vi.datastore, vi.dc_info.ref)
        fetch_image.assert_called_once_with(self._context, vi)
        use_disk_image.assert_called_once_with('fake-vm-ref', vi)

        # _create_and_attach_thin_disk should be called for each ephemeral
        # and swap disk
        eph0_path = str(ds_obj.DatastorePath(vi.datastore.name, self._uuid,
                                             'ephemeral_0.vmdk'))
        eph1_path = str(ds_obj.DatastorePath(vi.datastore.name, self._uuid,
                                             'ephemeral_1.vmdk'))
        swap_path = str(ds_obj.DatastorePath(vi.datastore.name, self._uuid,
                                             'swap.vmdk'))
        create_and_attach_thin_disk.assert_has_calls([
            mock.call(self._instance, 'fake-vm-ref', vi.dc_info,
                      ephemerals[0]['size'] * units.Mi, vi.ii.adapter_type,
                      eph0_path),
            mock.call(self._instance, 'fake-vm-ref', vi.dc_info,
                      ephemerals[1]['size'] * units.Mi, vi.ii.adapter_type,
                      eph1_path),
            mock.call(self._instance, 'fake-vm-ref', vi.dc_info,
                      swap['swap_size'] * units.Ki, vi.ii.adapter_type,
                      swap_path)
        ])

        power_on_instance.assert_called_once_with(self._session,
                                                  self._instance,
                                                  vm_ref='fake-vm-ref')

    def _get_fake_vi(self):
        image_info = images.VMwareImage(
            image_id=self._image_id,
            file_size=7,
            linked_clone=False)
        vi = vmops.VirtualMachineInstanceConfigInfo(
            self._instance, image_info,
            self._ds, self._dc_info, mock.Mock())
        return vi

    @mock.patch.object(vm_util, 'create_virtual_disk')
    def test_create_and_attach_thin_disk(self, mock_create):
        vi = self._get_fake_vi()
        self._vmops._volumeops = mock.Mock()
        mock_attach_disk_to_vm = self._vmops._volumeops.attach_disk_to_vm

        path = str(ds_obj.DatastorePath(vi.datastore.name, self._uuid,
                                        'fake-filename'))
        self._vmops._create_and_attach_thin_disk(self._instance,
                                                 'fake-vm-ref',
                                                 vi.dc_info, 1,
                                                 'fake-adapter-type', path)
        mock_create.assert_called_once_with(
            self._session, self._dc_info.ref, 'fake-adapter-type',
            'thin', path, 1)
        mock_attach_disk_to_vm.assert_called_once_with(
            'fake-vm-ref', self._instance, 'fake-adapter-type',
            'thin', path, 1, False)

    def test_create_ephemeral_with_bdi(self):
        ephemerals = [{'device_type': 'disk',
                       'disk_bus': 'virtio',
                       'device_name': '/dev/vdb',
                       'size': 1}]
        block_device_info = {'ephemerals': ephemerals}
        vi = self._get_fake_vi()
        with mock.patch.object(
                self._vmops, '_create_and_attach_thin_disk') as mock_caa:
            self._vmops._create_ephemeral(block_device_info,
                                          self._instance,
                                          'fake-vm-ref',
                                          vi.dc_info, vi.datastore,
                                          self._uuid,
                                          vi.ii.adapter_type)
            mock_caa.assert_called_once_with(
                self._instance, 'fake-vm-ref',
                vi.dc_info, 1 * units.Mi, 'virtio',
                '[fake_ds] %s/ephemeral_0.vmdk' % self._uuid)
    def _test_create_ephemeral_from_instance(self, bdi):
        vi = self._get_fake_vi()
        with mock.patch.object(
                self._vmops, '_create_and_attach_thin_disk') as mock_caa:
            self._vmops._create_ephemeral(bdi,
                                          self._instance,
                                          'fake-vm-ref',
                                          vi.dc_info, vi.datastore,
                                          self._uuid,
                                          vi.ii.adapter_type)
            mock_caa.assert_called_once_with(
                self._instance, 'fake-vm-ref',
                vi.dc_info, 1 * units.Mi, constants.DEFAULT_ADAPTER_TYPE,
                '[fake_ds] %s/ephemeral_0.vmdk' % self._uuid)

    def test_create_ephemeral_with_bdi_but_no_ephemerals(self):
        block_device_info = {'ephemerals': []}
        self._instance.flavor.ephemeral_gb = 1
        self._test_create_ephemeral_from_instance(block_device_info)

    def test_create_ephemeral_with_no_bdi(self):
        self._instance.flavor.ephemeral_gb = 1
        self._test_create_ephemeral_from_instance(None)

    def _test_create_swap_from_instance(self, bdi):
        vi = self._get_fake_vi()
        flavor = objects.Flavor(vcpus=1, memory_mb=1024, ephemeral_gb=1,
                                swap=1024, extra_specs={})
        self._instance.flavor = flavor
        with mock.patch.object(
                self._vmops, '_create_and_attach_thin_disk'
        ) as create_and_attach:
            self._vmops._create_swap(bdi, self._instance, 'fake-vm-ref',
                                     vi.dc_info, vi.datastore, self._uuid,
                                     'lsiLogic')
            size = flavor.swap * units.Ki
            if bdi is not None:
                swap = bdi.get('swap', {})
                size = swap.get('swap_size', 0) * units.Ki
            path = str(ds_obj.DatastorePath(vi.datastore.name, self._uuid,
                                            'swap.vmdk'))
            create_and_attach.assert_called_once_with(
                self._instance, 'fake-vm-ref', vi.dc_info, size,
                'lsiLogic', path)

    def test_create_swap_with_bdi(self):
        block_device_info = {'swap': {'disk_bus': None,
                                      'swap_size': 512,
                                      'device_name': '/dev/sdb'}}
        self._test_create_swap_from_instance(block_device_info)

    def test_create_swap_with_no_bdi(self):
        self._test_create_swap_from_instance(None)

    @mock.patch.object(vmops.VMwareVMOps, '_create_folders',
                       return_value='fake_vm_folder')
    def test_build_virtual_machine(self, mock_create_folder):
        image_id = nova.tests.unit.image.fake.get_valid_image_id()
        image = images.VMwareImage(image_id=image_id)

        extra_specs = vm_util.ExtraSpecs()

        vm_ref = self._vmops.build_virtual_machine(self._instance,
                                                   image, self._dc_info,
                                                   self._ds,
                                                   self.network_info,
                                                   extra_specs,
                                                   self._metadata)

        vm = vmwareapi_fake._get_object(vm_ref)

        # Test basic VM parameters
        self.assertEqual(self._instance.uuid, vm.name)
        self.assertEqual(self._instance.uuid,
                         vm.get('summary.config.instanceUuid'))
        self.assertEqual(self._instance_values['vcpus'],
                         vm.get('summary.config.numCpu'))
        self.assertEqual(self._instance_values['memory_mb'],
                         vm.get('summary.config.memorySizeMB'))

        # Test NSX config
        for optval in vm.get('config.extraConfig').OptionValue:
            if optval.key == 'nvp.vm-uuid':
                self.assertEqual(self._instance_values['uuid'], optval.value)
                break
        else:
            self.fail('nvp.vm-uuid not found in extraConfig')

        # Test that the VM is associated with the specified datastore
        datastores = vm.datastore.ManagedObjectReference
        self.assertEqual(1, len(datastores))

        datastore = vmwareapi_fake._get_object(datastores[0])
        self.assertEqual(self._ds.name, datastore.get('summary.name'))

        # Test that the VM's network is configured as specified
        devices = vm.get('config.hardware.device').VirtualDevice
        for device in devices:
            if device.obj_name != 'ns0:VirtualE1000':
                continue
            self.assertEqual(self._network_values['address'],
                             device.macAddress)
            break
        else:
            self.fail('NIC not configured')
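    # The spawn variants below differ only in the Limits object passed in
    # through ExtraSpecs; _test_spawn checks that the extra specs reach
    # get_vm_create_spec unchanged.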
    def test_spawn_cpu_limit(self):
        cpu_limits = vm_util.Limits(limit=7)
        extra_specs = vm_util.ExtraSpecs(cpu_limits=cpu_limits)
        self._test_spawn(extra_specs=extra_specs)

    def test_spawn_cpu_reservation(self):
        cpu_limits = vm_util.Limits(reservation=7)
        extra_specs = vm_util.ExtraSpecs(cpu_limits=cpu_limits)
        self._test_spawn(extra_specs=extra_specs)

    def test_spawn_cpu_allocations(self):
        cpu_limits = vm_util.Limits(limit=7,
                                    reservation=6)
        extra_specs = vm_util.ExtraSpecs(cpu_limits=cpu_limits)
        self._test_spawn(extra_specs=extra_specs)

    def test_spawn_cpu_shares_level(self):
        cpu_limits = vm_util.Limits(shares_level='high')
        extra_specs = vm_util.ExtraSpecs(cpu_limits=cpu_limits)
        self._test_spawn(extra_specs=extra_specs)

    def test_spawn_cpu_shares_custom(self):
        cpu_limits = vm_util.Limits(shares_level='custom',
                                    shares_share=1948)
        extra_specs = vm_util.ExtraSpecs(cpu_limits=cpu_limits)
        self._test_spawn(extra_specs=extra_specs)

    def test_spawn_memory_limit(self):
        memory_limits = vm_util.Limits(limit=7)
        extra_specs = vm_util.ExtraSpecs(memory_limits=memory_limits)
        self._test_spawn(extra_specs=extra_specs)

    def test_spawn_memory_reservation(self):
        memory_limits = vm_util.Limits(reservation=7)
        extra_specs = vm_util.ExtraSpecs(memory_limits=memory_limits)
        self._test_spawn(extra_specs=extra_specs)

    def test_spawn_memory_allocations(self):
        memory_limits = vm_util.Limits(limit=7,
                                       reservation=6)
        extra_specs = vm_util.ExtraSpecs(memory_limits=memory_limits)
        self._test_spawn(extra_specs=extra_specs)

    def test_spawn_memory_shares_level(self):
        memory_limits = vm_util.Limits(shares_level='high')
        extra_specs = vm_util.ExtraSpecs(memory_limits=memory_limits)
        self._test_spawn(extra_specs=extra_specs)

    def test_spawn_memory_shares_custom(self):
        memory_limits = vm_util.Limits(shares_level='custom',
                                       shares_share=1948)
        extra_specs = vm_util.ExtraSpecs(memory_limits=memory_limits)
        self._test_spawn(extra_specs=extra_specs)

    def test_spawn_vif_limit(self):
        vif_limits = vm_util.Limits(limit=7)
        extra_specs = vm_util.ExtraSpecs(vif_limits=vif_limits)
        self._test_spawn(extra_specs=extra_specs)

    def test_spawn_vif_reservation(self):
        vif_limits = vm_util.Limits(reservation=7)
        extra_specs = vm_util.ExtraSpecs(vif_limits=vif_limits)
        self._test_spawn(extra_specs=extra_specs)

    def test_spawn_vif_shares_level(self):
        vif_limits = vm_util.Limits(shares_level='high')
        extra_specs = vm_util.ExtraSpecs(vif_limits=vif_limits)
        self._test_spawn(extra_specs=extra_specs)

    def test_spawn_vif_shares_custom(self):
        vif_limits = vm_util.Limits(shares_level='custom',
                                    shares_share=1948)
        extra_specs = vm_util.ExtraSpecs(vif_limits=vif_limits)
        self._test_spawn(extra_specs=extra_specs)

    def _validate_extra_specs(self, expected, actual):
        self.assertEqual(expected.cpu_limits.limit,
                         actual.cpu_limits.limit)
        self.assertEqual(expected.cpu_limits.reservation,
                         actual.cpu_limits.reservation)
        self.assertEqual(expected.cpu_limits.shares_level,
                         actual.cpu_limits.shares_level)
        self.assertEqual(expected.cpu_limits.shares_share,
                         actual.cpu_limits.shares_share)

    def _validate_flavor_extra_specs(self, flavor_extra_specs, expected):
        # Validate that the extra specs are parsed correctly
        flavor = objects.Flavor(name='my-flavor',
                                memory_mb=6,
                                vcpus=28,
                                root_gb=496,
                                ephemeral_gb=8128,
                                swap=33550336,
                                extra_specs=flavor_extra_specs)
        flavor_extra_specs = self._vmops._get_extra_specs(flavor, None)
        self._validate_extra_specs(expected, flavor_extra_specs)

    def test_extra_specs_cpu_limit(self):
        flavor_extra_specs = {'quota:cpu_limit': 7}
        cpu_limits = vm_util.Limits(limit=7)
        extra_specs = vm_util.ExtraSpecs(cpu_limits=cpu_limits)
        self._validate_flavor_extra_specs(flavor_extra_specs, extra_specs)
    def test_extra_specs_cpu_reservations(self):
        flavor_extra_specs = {'quota:cpu_reservation': 7}
        cpu_limits = vm_util.Limits(reservation=7)
        extra_specs = vm_util.ExtraSpecs(cpu_limits=cpu_limits)
        self._validate_flavor_extra_specs(flavor_extra_specs, extra_specs)

    def test_extra_specs_cpu_allocations(self):
        flavor_extra_specs = {'quota:cpu_limit': 7,
                              'quota:cpu_reservation': 6}
        cpu_limits = vm_util.Limits(limit=7,
                                    reservation=6)
        extra_specs = vm_util.ExtraSpecs(cpu_limits=cpu_limits)
        self._validate_flavor_extra_specs(flavor_extra_specs, extra_specs)

    def test_extra_specs_cpu_shares_level(self):
        flavor_extra_specs = {'quota:cpu_shares_level': 'high'}
        cpu_limits = vm_util.Limits(shares_level='high')
        extra_specs = vm_util.ExtraSpecs(cpu_limits=cpu_limits)
        self._validate_flavor_extra_specs(flavor_extra_specs, extra_specs)

    def test_extra_specs_cpu_shares_custom(self):
        flavor_extra_specs = {'quota:cpu_shares_level': 'custom',
                              'quota:cpu_shares_share': 1948}
        cpu_limits = vm_util.Limits(shares_level='custom',
                                    shares_share=1948)
        extra_specs = vm_util.ExtraSpecs(cpu_limits=cpu_limits)
        self._validate_flavor_extra_specs(flavor_extra_specs, extra_specs)

    def test_extra_specs_vif_shares_custom_pos01(self):
        flavor_extra_specs = {'quota:vif_shares_level': 'custom',
                              'quota:vif_shares_share': 40}
        vif_limits = vm_util.Limits(shares_level='custom',
                                    shares_share=40)
        extra_specs = vm_util.ExtraSpecs(vif_limits=vif_limits)
        self._validate_flavor_extra_specs(flavor_extra_specs, extra_specs)

    def test_extra_specs_vif_shares_with_invalid_level(self):
        flavor_extra_specs = {'quota:vif_shares_level': 'high',
                              'quota:vif_shares_share': 40}
        vif_limits = vm_util.Limits(shares_level='custom',
                                    shares_share=40)
        extra_specs = vm_util.ExtraSpecs(vif_limits=vif_limits)
        self.assertRaises(exception.InvalidInput,
                          self._validate_flavor_extra_specs,
                          flavor_extra_specs, extra_specs)

    def _make_vm_config_info(self, is_iso=False, is_sparse_disk=False,
                             vsphere_location=None):
        disk_type = (constants.DISK_TYPE_SPARSE if is_sparse_disk
                     else constants.DEFAULT_DISK_TYPE)
        file_type = (constants.DISK_FORMAT_ISO if is_iso
                     else constants.DEFAULT_DISK_FORMAT)

        image_info = images.VMwareImage(
            image_id=self._image_id,
            file_size=10 * units.Mi,
            file_type=file_type,
            disk_type=disk_type,
            linked_clone=True,
            vsphere_location=vsphere_location)
        cache_root_folder = self._ds.build_path("vmware_base",
                                                self._image_id)
        mock_imagecache = mock.Mock()
        mock_imagecache.get_image_cache_folder.return_value = (
            cache_root_folder)
        vi = vmops.VirtualMachineInstanceConfigInfo(
            self._instance, image_info,
            self._ds, self._dc_info, mock_imagecache)
        return vi
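    # _fetch_image_if_missing dispatches on the image type: ISO, sparse and
    # flat images each get their own prepare/cache helper pair, and only the
    # sparse path updates the recorded image size afterwards.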
    @mock.patch.object(vmops.VMwareVMOps, 'check_cache_folder')
    @mock.patch.object(vmops.VMwareVMOps, '_fetch_image_as_file')
    @mock.patch.object(vmops.VMwareVMOps, '_prepare_iso_image')
    @mock.patch.object(vmops.VMwareVMOps, '_prepare_sparse_image')
    @mock.patch.object(vmops.VMwareVMOps, '_prepare_flat_image')
    @mock.patch.object(vmops.VMwareVMOps, '_cache_iso_image')
    @mock.patch.object(vmops.VMwareVMOps, '_cache_sparse_image')
    @mock.patch.object(vmops.VMwareVMOps, '_cache_flat_image')
    @mock.patch.object(vmops.VMwareVMOps, '_delete_datastore_file')
    @mock.patch.object(vmops.VMwareVMOps, '_update_image_size')
    def _test_fetch_image_if_missing(self,
                                     mock_update_image_size,
                                     mock_delete_datastore_file,
                                     mock_cache_flat_image,
                                     mock_cache_sparse_image,
                                     mock_cache_iso_image,
                                     mock_prepare_flat_image,
                                     mock_prepare_sparse_image,
                                     mock_prepare_iso_image,
                                     mock_fetch_image_as_file,
                                     mock_check_cache_folder,
                                     is_iso=False,
                                     is_sparse_disk=False):
        tmp_dir_path = mock.Mock()
        tmp_image_path = mock.Mock()
        if is_iso:
            mock_prepare = mock_prepare_iso_image
            mock_cache = mock_cache_iso_image
        elif is_sparse_disk:
            mock_prepare = mock_prepare_sparse_image
            mock_cache = mock_cache_sparse_image
        else:
            mock_prepare = mock_prepare_flat_image
            mock_cache = mock_cache_flat_image
        mock_prepare.return_value = tmp_dir_path, tmp_image_path

        vi = self._make_vm_config_info(is_iso, is_sparse_disk)
        self._vmops._fetch_image_if_missing(self._context, vi)

        mock_check_cache_folder.assert_called_once_with(
            self._ds.name, self._ds.ref)
        mock_prepare.assert_called_once_with(vi)
        mock_fetch_image_as_file.assert_called_once_with(
            self._context, vi, tmp_image_path)
        mock_cache.assert_called_once_with(vi, tmp_image_path)
        mock_delete_datastore_file.assert_called_once_with(
            str(tmp_dir_path), self._dc_info.ref)
        if is_sparse_disk:
            mock_update_image_size.assert_called_once_with(vi)

    def test_fetch_image_if_missing(self):
        self._test_fetch_image_if_missing()

    def test_fetch_image_if_missing_with_sparse(self):
        self._test_fetch_image_if_missing(
            is_sparse_disk=True)

    def test_fetch_image_if_missing_with_iso(self):
        self._test_fetch_image_if_missing(
            is_iso=True)

    def test_get_esx_host_and_cookies(self):
        datastore = mock.Mock()
        datastore.get_connected_hosts.return_value = ['fira-host']
        file_path = mock.Mock()

        def fake_invoke(module, method, *args, **kwargs):
            if method == 'AcquireGenericServiceTicket':
                ticket = mock.Mock()
                ticket.id = 'fira-ticket'
                return ticket
            elif method == 'get_object_property':
                return 'fira-host'
        with mock.patch.object(self._session, 'invoke_api', fake_invoke):
            result = self._vmops._get_esx_host_and_cookies(datastore,
                                                           'ha-datacenter',
                                                           file_path)
            self.assertEqual('fira-host', result[0])
            cookies = result[1]
            self.assertEqual(1, len(cookies))
            self.assertEqual('vmware_cgi_ticket', cookies[0].name)
            self.assertEqual('"fira-ticket"', cookies[0].value)

    def test_fetch_vsphere_image(self):
        vsphere_location = 'vsphere://my?dcPath=mycenter&dsName=mystore'
        vi = self._make_vm_config_info(vsphere_location=vsphere_location)
        image_ds_loc = mock.Mock()
        datacenter_moref = mock.Mock()
        fake_copy_task = mock.Mock()

        with test.nested(
                mock.patch.object(
                    self._session, 'invoke_api',
                    side_effect=[datacenter_moref, fake_copy_task]),
                mock.patch.object(self._session, '_wait_for_task')) as (
                invoke_api, wait_for_task):
            self._vmops._fetch_vsphere_image(self._context, vi,
                                             image_ds_loc)

            expected_calls = [
                mock.call(
                    self._session.vim, 'FindByInventoryPath',
                    self._session.vim.service_content.searchIndex,
                    inventoryPath='mycenter'),
                mock.call(
                    self._session.vim, 'CopyDatastoreFile_Task',
                    self._session.vim.service_content.fileManager,
                    destinationDatacenter=self._dc_info.ref,
                    destinationName=str(image_ds_loc),
                    sourceDatacenter=datacenter_moref,
                    sourceName='[mystore]')]
            invoke_api.assert_has_calls(expected_calls)
            wait_for_task.assert_called_once_with(fake_copy_task)

    @mock.patch.object(images, 'fetch_image')
    @mock.patch.object(vmops.VMwareVMOps, '_get_esx_host_and_cookies')
    def test_fetch_image_as_file(self,
                                 mock_get_esx_host_and_cookies,
                                 mock_fetch_image):
        vi = self._make_vm_config_info()
        image_ds_loc = mock.Mock()
        host = mock.Mock()
        dc_name = 'ha-datacenter'
        cookies = mock.Mock()
        mock_get_esx_host_and_cookies.return_value = host, cookies
        self._vmops._fetch_image_as_file(self._context, vi, image_ds_loc)
        mock_get_esx_host_and_cookies.assert_called_once_with(
            vi.datastore,
            dc_name,
            image_ds_loc.rel_path)
        mock_fetch_image.assert_called_once_with(
            self._context,
            vi.instance,
            host,
            self._session._port,
            dc_name,
            self._ds.name,
            image_ds_loc.rel_path,
            cookies=cookies)
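    # When no ESX host ticket can be obtained, the fetch is expected to fall
    # back to streaming through vCenter with the session's cookie jar, as
    # the next test asserts.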
image_ds_loc.rel_path, cookies=cookies) @mock.patch.object(vutil, 'get_inventory_path') @mock.patch.object(images, 'fetch_image') @mock.patch.object(vmops.VMwareVMOps, '_get_esx_host_and_cookies') def test_fetch_image_as_file_exception(self, mock_get_esx_host_and_cookies, mock_fetch_image, mock_get_inventory_path): vi = self._make_vm_config_info() image_ds_loc = mock.Mock() dc_name = 'ha-datacenter' mock_get_esx_host_and_cookies.side_effect = \ exception.HostNotFound(host='') mock_get_inventory_path.return_value = self._dc_info.name self._vmops._fetch_image_as_file(self._context, vi, image_ds_loc) mock_get_esx_host_and_cookies.assert_called_once_with( vi.datastore, dc_name, image_ds_loc.rel_path) mock_fetch_image.assert_called_once_with( self._context, vi.instance, self._session._host, self._session._port, self._dc_info.name, self._ds.name, image_ds_loc.rel_path, cookies='Fake-CookieJar') @mock.patch.object(images, 'fetch_image_stream_optimized', return_value=123) def test_fetch_image_as_vapp(self, mock_fetch_image): vi = self._make_vm_config_info() image_ds_loc = mock.Mock() image_ds_loc.parent.basename = 'fake-name' self._vmops._fetch_image_as_vapp(self._context, vi, image_ds_loc) mock_fetch_image.assert_called_once_with( self._context, vi.instance, self._session, 'fake-name', self._ds.name, vi.dc_info.vmFolder, self._vmops._root_resource_pool) self.assertEqual(vi.ii.file_size, 123) @mock.patch.object(images, 'fetch_image_ova', return_value=123) def test_fetch_image_as_ova(self, mock_fetch_image): vi = self._make_vm_config_info() image_ds_loc = mock.Mock() image_ds_loc.parent.basename = 'fake-name' self._vmops._fetch_image_as_ova(self._context, vi, image_ds_loc) mock_fetch_image.assert_called_once_with( self._context, vi.instance, self._session, 'fake-name', self._ds.name, vi.dc_info.vmFolder, self._vmops._root_resource_pool) self.assertEqual(vi.ii.file_size, 123) @mock.patch.object(uuidutils, 'generate_uuid', return_value='tmp-uuid') def test_prepare_iso_image(self, mock_generate_uuid): vi = self._make_vm_config_info(is_iso=True) tmp_dir_loc, tmp_image_ds_loc = self._vmops._prepare_iso_image(vi) expected_tmp_dir_path = '[%s] vmware_temp/tmp-uuid' % (self._ds.name) expected_image_path = '[%s] vmware_temp/tmp-uuid/%s/%s.iso' % ( self._ds.name, self._image_id, self._image_id) self.assertEqual(str(tmp_dir_loc), expected_tmp_dir_path) self.assertEqual(str(tmp_image_ds_loc), expected_image_path) @mock.patch.object(uuidutils, 'generate_uuid', return_value='tmp-uuid') @mock.patch.object(ds_util, 'mkdir') def test_prepare_sparse_image(self, mock_mkdir, mock_generate_uuid): vi = self._make_vm_config_info(is_sparse_disk=True) tmp_dir_loc, tmp_image_ds_loc = self._vmops._prepare_sparse_image(vi) expected_tmp_dir_path = '[%s] vmware_temp/tmp-uuid' % (self._ds.name) expected_image_path = '[%s] vmware_temp/tmp-uuid/%s/%s' % ( self._ds.name, self._image_id, "tmp-sparse.vmdk") self.assertEqual(str(tmp_dir_loc), expected_tmp_dir_path) self.assertEqual(str(tmp_image_ds_loc), expected_image_path) mock_mkdir.assert_called_once_with(self._session, tmp_image_ds_loc.parent, vi.dc_info.ref) @mock.patch.object(ds_util, 'mkdir') @mock.patch.object(vm_util, 'create_virtual_disk') @mock.patch.object(vmops.VMwareVMOps, '_delete_datastore_file') @mock.patch.object(uuidutils, 'generate_uuid', return_value='tmp-uuid') def test_prepare_flat_image(self, mock_generate_uuid, mock_delete_datastore_file, mock_create_virtual_disk, mock_mkdir): vi = self._make_vm_config_info() tmp_dir_loc, tmp_image_ds_loc = 
self._vmops._prepare_flat_image(vi) expected_tmp_dir_path = '[%s] vmware_temp/tmp-uuid' % (self._ds.name) expected_image_path = '[%s] vmware_temp/tmp-uuid/%s/%s-flat.vmdk' % ( self._ds.name, self._image_id, self._image_id) expected_image_path_parent = '[%s] vmware_temp/tmp-uuid/%s' % ( self._ds.name, self._image_id) expected_path_to_create = '[%s] vmware_temp/tmp-uuid/%s/%s.vmdk' % ( self._ds.name, self._image_id, self._image_id) mock_mkdir.assert_called_once_with( self._session, DsPathMatcher(expected_image_path_parent), self._dc_info.ref) self.assertEqual(str(tmp_dir_loc), expected_tmp_dir_path) self.assertEqual(str(tmp_image_ds_loc), expected_image_path) image_info = vi.ii mock_create_virtual_disk.assert_called_once_with( self._session, self._dc_info.ref, image_info.adapter_type, image_info.disk_type, DsPathMatcher(expected_path_to_create), image_info.file_size_in_kb) mock_delete_datastore_file.assert_called_once_with( DsPathMatcher(expected_image_path), self._dc_info.ref) @mock.patch.object(ds_util, 'file_move') def test_cache_iso_image(self, mock_file_move): vi = self._make_vm_config_info(is_iso=True) tmp_image_ds_loc = mock.Mock() self._vmops._cache_iso_image(vi, tmp_image_ds_loc) mock_file_move.assert_called_once_with( self._session, self._dc_info.ref, tmp_image_ds_loc.parent, DsPathMatcher('[fake_ds] vmware_base/%s' % self._image_id)) @mock.patch.object(ds_util, 'file_move') def test_cache_flat_image(self, mock_file_move): vi = self._make_vm_config_info() tmp_image_ds_loc = mock.Mock() self._vmops._cache_flat_image(vi, tmp_image_ds_loc) mock_file_move.assert_called_once_with( self._session, self._dc_info.ref, tmp_image_ds_loc.parent, DsPathMatcher('[fake_ds] vmware_base/%s' % self._image_id)) @mock.patch.object(ds_util, 'disk_move') @mock.patch.object(ds_util, 'mkdir') def test_cache_stream_optimized_image(self, mock_mkdir, mock_disk_move): vi = self._make_vm_config_info() self._vmops._cache_stream_optimized_image(vi, mock.sentinel.tmp_image) mock_mkdir.assert_called_once_with( self._session, DsPathMatcher('[fake_ds] vmware_base/%s' % self._image_id), self._dc_info.ref) mock_disk_move.assert_called_once_with( self._session, self._dc_info.ref, mock.sentinel.tmp_image, DsPathMatcher('[fake_ds] vmware_base/%s/%s.vmdk' % (self._image_id, self._image_id))) @mock.patch.object(ds_util, 'file_move') @mock.patch.object(vm_util, 'copy_virtual_disk') @mock.patch.object(vmops.VMwareVMOps, '_delete_datastore_file') def test_cache_sparse_image(self, mock_delete_datastore_file, mock_copy_virtual_disk, mock_file_move): vi = self._make_vm_config_info(is_sparse_disk=True) sparse_disk_path = "[%s] vmware_temp/tmp-uuid/%s/tmp-sparse.vmdk" % ( self._ds.name, self._image_id) tmp_image_ds_loc = ds_obj.DatastorePath.parse(sparse_disk_path) self._vmops._cache_sparse_image(vi, tmp_image_ds_loc) target_disk_path = "[%s] vmware_temp/tmp-uuid/%s/%s.vmdk" % ( self._ds.name, self._image_id, self._image_id) mock_copy_virtual_disk.assert_called_once_with( self._session, self._dc_info.ref, sparse_disk_path, DsPathMatcher(target_disk_path)) def test_get_storage_policy_none(self): flavor = objects.Flavor(name='m1.small', memory_mb=6, vcpus=28, root_gb=496, ephemeral_gb=8128, swap=33550336, extra_specs={}) self.flags(pbm_enabled=True, pbm_default_policy='fake-policy', group='vmware') extra_specs = self._vmops._get_extra_specs(flavor, None) self.assertEqual('fake-policy', extra_specs.storage_policy) def test_get_storage_policy_extra_specs(self): extra_specs = {'vmware:storage_policy': 'flavor-policy'} flavor = 
objects.Flavor(name='m1.small', memory_mb=6, vcpus=28, root_gb=496, ephemeral_gb=8128, swap=33550336, extra_specs=extra_specs) self.flags(pbm_enabled=True, pbm_default_policy='default-policy', group='vmware') extra_specs = self._vmops._get_extra_specs(flavor, None) self.assertEqual('flavor-policy', extra_specs.storage_policy) def test_get_base_folder_not_set(self): self.flags(image_cache_subdirectory_name='vmware_base') base_folder = self._vmops._get_base_folder() self.assertEqual('vmware_base', base_folder) def test_get_base_folder_host_ip(self): self.flags(my_ip='7.7.7.7', image_cache_subdirectory_name='_base') base_folder = self._vmops._get_base_folder() self.assertEqual('7.7.7.7_base', base_folder) def test_get_base_folder_cache_prefix(self): self.flags(cache_prefix='my_prefix', group='vmware') self.flags(image_cache_subdirectory_name='_base') base_folder = self._vmops._get_base_folder() self.assertEqual('my_prefix_base', base_folder) def _test_reboot_vm(self, reboot_type="SOFT", tool_status=True): expected_methods = ['get_object_properties_dict'] if reboot_type == "SOFT": expected_methods.append('RebootGuest') else: expected_methods.append('ResetVM_Task') def fake_call_method(module, method, *args, **kwargs): expected_method = expected_methods.pop(0) self.assertEqual(expected_method, method) if expected_method == 'get_object_properties_dict' and tool_status: return { "runtime.powerState": "poweredOn", "summary.guest.toolsStatus": "toolsOk", "summary.guest.toolsRunningStatus": "guestToolsRunning"} elif expected_method == 'get_object_properties_dict': return {"runtime.powerState": "poweredOn"} elif expected_method == 'ResetVM_Task': return 'fake-task' with test.nested( mock.patch.object(vm_util, "get_vm_ref", return_value='fake-vm-ref'), mock.patch.object(self._session, "_call_method", fake_call_method), mock.patch.object(self._session, "_wait_for_task") ) as (_get_vm_ref, fake_call_method, _wait_for_task): self._vmops.reboot(self._instance, self.network_info, reboot_type) _get_vm_ref.assert_called_once_with(self._session, self._instance) if reboot_type == "HARD": _wait_for_task.assert_has_calls([ mock.call('fake-task')]) def test_reboot_vm_soft(self): self._test_reboot_vm() def test_reboot_vm_hard_toolstatus(self): self._test_reboot_vm(reboot_type="HARD", tool_status=False) def test_reboot_vm_hard(self): self._test_reboot_vm(reboot_type="HARD") def test_get_instance_metadata(self): flavor = objects.Flavor(id=7, name='m1.small', memory_mb=6, vcpus=28, root_gb=496, ephemeral_gb=8128, swap=33550336, extra_specs={}) self._instance.flavor = flavor metadata = self._vmops._get_instance_metadata( self._context, self._instance) expected = ("name:fake_display_name\n" "userid:fake_user\n" "username:None\n" "projectid:fake_project\n" "projectname:None\n" "flavor:name:m1.small\n" "flavor:memory_mb:6\n" "flavor:vcpus:28\n" "flavor:ephemeral_gb:8128\n" "flavor:root_gb:496\n" "flavor:swap:33550336\n" "imageid:70a599e0-31e7-49b7-b260-868f441e862b\n" "package:%s\n" % version.version_string_with_package()) self.assertEqual(expected, metadata) @mock.patch.object(vmops.VMwareVMOps, '_get_extra_specs') @mock.patch.object(vm_util, 'reconfigure_vm') @mock.patch.object(vm_util, 'get_network_attach_config_spec', return_value='fake-attach-spec') @mock.patch.object(vm_util, 'get_attach_port_index', return_value=1) @mock.patch.object(vm_util, 'get_vm_ref', return_value='fake-ref') def test_attach_interface(self, mock_get_vm_ref, mock_get_attach_port_index, mock_get_network_attach_config_spec, mock_reconfigure_vm, 
                              mock_extra_specs):
        _network_api = mock.Mock()
        self._vmops._network_api = _network_api

        vif_info = vif.get_vif_dict(self._session, self._cluster,
                                    'VirtualE1000',
                                    utils.is_neutron(),
                                    self._network_values)
        extra_specs = vm_util.ExtraSpecs()
        mock_extra_specs.return_value = extra_specs
        self._vmops.attach_interface(self._context, self._instance,
                                     self._image_meta, self._network_values)
        mock_get_vm_ref.assert_called_once_with(self._session, self._instance)
        mock_get_attach_port_index.assert_called_once_with(self._session,
                                                           'fake-ref')
        mock_get_network_attach_config_spec.assert_called_once_with(
            self._session.vim.client.factory, vif_info, 1,
            extra_specs.vif_limits)
        mock_reconfigure_vm.assert_called_once_with(self._session,
                                                    'fake-ref',
                                                    'fake-attach-spec')
        _network_api.update_instance_vnic_index.assert_called_once_with(
            mock.ANY, self._instance, self._network_values, 1)

    @mock.patch.object(vif, 'get_network_device', return_value='device')
    @mock.patch.object(vm_util, 'reconfigure_vm')
    @mock.patch.object(vm_util, 'get_network_detach_config_spec',
                       return_value='fake-detach-spec')
    @mock.patch.object(vm_util, 'get_vm_detach_port_index', return_value=1)
    @mock.patch.object(vm_util, 'get_vm_ref', return_value='fake-ref')
    def test_detach_interface(self, mock_get_vm_ref,
                              mock_get_detach_port_index,
                              mock_get_network_detach_config_spec,
                              mock_reconfigure_vm,
                              mock_get_network_device):
        _network_api = mock.Mock()
        self._vmops._network_api = _network_api

        with mock.patch.object(self._session, '_call_method',
                               return_value='hardware-devices'):
            self._vmops.detach_interface(self._context, self._instance,
                                         self._network_values)
        mock_get_vm_ref.assert_called_once_with(self._session, self._instance)
        mock_get_detach_port_index.assert_called_once_with(self._session,
                                                           'fake-ref',
                                                           mock.ANY)
        mock_get_network_detach_config_spec.assert_called_once_with(
            self._session.vim.client.factory, 'device', 1)
        mock_reconfigure_vm.assert_called_once_with(self._session,
                                                    'fake-ref',
                                                    'fake-detach-spec')
        _network_api.update_instance_vnic_index.assert_called_once_with(
            mock.ANY, self._instance, self._network_values, None)

    @mock.patch.object(vm_util, 'get_vm_ref', return_value='fake-ref')
    def test_get_mks_console(self, mock_get_vm_ref):
        ticket = mock.MagicMock()
        ticket.host = 'esx1'
        ticket.port = 902
        ticket.ticket = 'fira'
        ticket.sslThumbprint = 'aa:bb:cc:dd:ee:ff'
        ticket.cfgFile = '[ds1] fira/foo.vmx'
        with mock.patch.object(self._session, '_call_method',
                               return_value=ticket):
            console = self._vmops.get_mks_console(self._instance)
            self.assertEqual('esx1', console.host)
            self.assertEqual(902, console.port)
            path = jsonutils.loads(console.internal_access_path)
            self.assertEqual('fira', path['ticket'])
            self.assertEqual('aabbccddeeff', path['thumbprint'])
            self.assertEqual('[ds1] fira/foo.vmx', path['cfgFile'])

    def test_get_cores_per_socket(self):
        extra_specs = {'hw:cpu_sockets': 7}
        flavor = objects.Flavor(name='m1.small', memory_mb=6, vcpus=28,
                                root_gb=496, ephemeral_gb=8128,
                                swap=33550336, extra_specs=extra_specs)
        extra_specs = self._vmops._get_extra_specs(flavor, None)
        self.assertEqual(4, int(extra_specs.cores_per_socket))

    def test_get_folder_name(self):
        uuid = uuidutils.generate_uuid()
        name = 'fira'
        expected = 'fira (%s)' % uuid
        folder_name = self._vmops._get_folder_name(name, uuid)
        self.assertEqual(expected, folder_name)

        name = 'X' * 255
        expected = '%s (%s)' % ('X' * 40, uuid)
        folder_name = self._vmops._get_folder_name(name, uuid)
        self.assertEqual(expected, folder_name)
        self.assertEqual(79, len(folder_name))

    @mock.patch.object(vmops.VMwareVMOps, '_get_extra_specs')
    @mock.patch.object(vm_util, 'reconfigure_vm')
    @mock.patch.object(vm_util, 'get_network_attach_config_spec',
                       return_value='fake-attach-spec')
    @mock.patch.object(vm_util, 'get_attach_port_index', return_value=1)
    @mock.patch.object(vm_util, 'get_vm_ref', return_value='fake-ref')
    def test_attach_interface_with_limits(self, mock_get_vm_ref,
                                          mock_get_attach_port_index,
                                          mock_get_network_attach_config_spec,
                                          mock_reconfigure_vm,
                                          mock_extra_specs):
        _network_api = mock.Mock()
        self._vmops._network_api = _network_api

        vif_info = vif.get_vif_dict(self._session, self._cluster,
                                    'VirtualE1000',
                                    utils.is_neutron(),
                                    self._network_values)
        vif_limits = vm_util.Limits(shares_level='custom', shares_share=40)
        extra_specs = vm_util.ExtraSpecs(vif_limits=vif_limits)
        mock_extra_specs.return_value = extra_specs
        self._vmops.attach_interface(self._context, self._instance,
                                     self._image_meta, self._network_values)
        mock_get_vm_ref.assert_called_once_with(self._session, self._instance)
        mock_get_attach_port_index.assert_called_once_with(self._session,
                                                           'fake-ref')
        mock_get_network_attach_config_spec.assert_called_once_with(
            self._session.vim.client.factory, vif_info, 1,
            extra_specs.vif_limits)
        mock_reconfigure_vm.assert_called_once_with(self._session,
                                                    'fake-ref',
                                                    'fake-attach-spec')
        _network_api.update_instance_vnic_index.assert_called_once_with(
            mock.ANY, self._instance, self._network_values, 1)
nova-17.0.1/nova/tests/unit/virt/vmwareapi/test_ds_util.py0000666000175000017500000005271613250073126023710 0ustar zuulzuul00000000000000# Copyright (c) 2014 VMware, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
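# The path-string assertions in the tests below rely on oslo.vmware's
# DatastorePath helper. A brief illustrative sketch of its behaviour, as
# exercised by these tests (this is standard oslo.vmware API, not something
# defined in this module):
#
#     from oslo_vmware.objects import datastore as ds_obj
#
#     path = ds_obj.DatastorePath('ds1', 'vmware_base', 'image-id.vmdk')
#     str(path)       # -> '[ds1] vmware_base/image-id.vmdk'
#     path.rel_path   # -> 'vmware_base/image-id.vmdk'
#     ds_obj.DatastorePath.parse('[ds1] a/b.vmdk')  # round-trips the string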
import re import mock from oslo_utils import units from oslo_vmware import exceptions as vexc from oslo_vmware.objects import datastore as ds_obj from nova import exception from nova import test from nova.tests.unit.virt.vmwareapi import fake from nova.virt.vmwareapi import ds_util class DsUtilTestCase(test.NoDBTestCase): def setUp(self): super(DsUtilTestCase, self).setUp() self.session = fake.FakeSession() self.flags(api_retry_count=1, group='vmware') fake.reset() def tearDown(self): super(DsUtilTestCase, self).tearDown() fake.reset() def test_get_datacenter_ref(self): with mock.patch.object(self.session, '_call_method') as call_method: ds_util.get_datacenter_ref(self.session, "datacenter") call_method.assert_called_once_with( self.session.vim, "FindByInventoryPath", self.session.vim.service_content.searchIndex, inventoryPath="datacenter") def test_file_delete(self): def fake_call_method(module, method, *args, **kwargs): self.assertEqual('DeleteDatastoreFile_Task', method) name = kwargs.get('name') self.assertEqual('[ds] fake/path', name) datacenter = kwargs.get('datacenter') self.assertEqual('fake-dc-ref', datacenter) return 'fake_delete_task' with test.nested( mock.patch.object(self.session, '_wait_for_task'), mock.patch.object(self.session, '_call_method', fake_call_method) ) as (_wait_for_task, _call_method): ds_path = ds_obj.DatastorePath('ds', 'fake/path') ds_util.file_delete(self.session, ds_path, 'fake-dc-ref') _wait_for_task.assert_has_calls([ mock.call('fake_delete_task')]) def test_file_copy(self): def fake_call_method(module, method, *args, **kwargs): self.assertEqual('CopyDatastoreFile_Task', method) src_name = kwargs.get('sourceName') self.assertEqual('[ds] fake/path/src_file', src_name) src_dc_ref = kwargs.get('sourceDatacenter') self.assertEqual('fake-src-dc-ref', src_dc_ref) dst_name = kwargs.get('destinationName') self.assertEqual('[ds] fake/path/dst_file', dst_name) dst_dc_ref = kwargs.get('destinationDatacenter') self.assertEqual('fake-dst-dc-ref', dst_dc_ref) return 'fake_copy_task' with test.nested( mock.patch.object(self.session, '_wait_for_task'), mock.patch.object(self.session, '_call_method', fake_call_method) ) as (_wait_for_task, _call_method): src_ds_path = ds_obj.DatastorePath('ds', 'fake/path', 'src_file') dst_ds_path = ds_obj.DatastorePath('ds', 'fake/path', 'dst_file') ds_util.file_copy(self.session, str(src_ds_path), 'fake-src-dc-ref', str(dst_ds_path), 'fake-dst-dc-ref') _wait_for_task.assert_has_calls([ mock.call('fake_copy_task')]) def test_file_move(self): def fake_call_method(module, method, *args, **kwargs): self.assertEqual('MoveDatastoreFile_Task', method) sourceName = kwargs.get('sourceName') self.assertEqual('[ds] tmp/src', sourceName) destinationName = kwargs.get('destinationName') self.assertEqual('[ds] base/dst', destinationName) sourceDatacenter = kwargs.get('sourceDatacenter') self.assertEqual('fake-dc-ref', sourceDatacenter) destinationDatacenter = kwargs.get('destinationDatacenter') self.assertEqual('fake-dc-ref', destinationDatacenter) return 'fake_move_task' with test.nested( mock.patch.object(self.session, '_wait_for_task'), mock.patch.object(self.session, '_call_method', fake_call_method) ) as (_wait_for_task, _call_method): src_ds_path = ds_obj.DatastorePath('ds', 'tmp/src') dst_ds_path = ds_obj.DatastorePath('ds', 'base/dst') ds_util.file_move(self.session, 'fake-dc-ref', src_ds_path, dst_ds_path) _wait_for_task.assert_has_calls([ mock.call('fake_move_task')]) def test_disk_move(self): def fake_call_method(module, method, *args, 
**kwargs): self.assertEqual('MoveVirtualDisk_Task', method) src_name = kwargs.get('sourceName') self.assertEqual('[ds] tmp/src', src_name) dest_name = kwargs.get('destName') self.assertEqual('[ds] base/dst', dest_name) src_datacenter = kwargs.get('sourceDatacenter') self.assertEqual('fake-dc-ref', src_datacenter) dest_datacenter = kwargs.get('destDatacenter') self.assertEqual('fake-dc-ref', dest_datacenter) return 'fake_move_task' with test.nested( mock.patch.object(self.session, '_wait_for_task'), mock.patch.object(self.session, '_call_method', fake_call_method) ) as (_wait_for_task, _call_method): ds_util.disk_move(self.session, 'fake-dc-ref', '[ds] tmp/src', '[ds] base/dst') _wait_for_task.assert_has_calls([ mock.call('fake_move_task')]) def test_disk_copy(self): with test.nested( mock.patch.object(self.session, '_wait_for_task'), mock.patch.object(self.session, '_call_method', return_value=mock.sentinel.cm) ) as (_wait_for_task, _call_method): ds_util.disk_copy(self.session, mock.sentinel.dc_ref, mock.sentinel.source_ds, mock.sentinel.dest_ds) _wait_for_task.assert_called_once_with(mock.sentinel.cm) _call_method.assert_called_once_with( mock.ANY, 'CopyVirtualDisk_Task', 'VirtualDiskManager', sourceName='sentinel.source_ds', destDatacenter=mock.sentinel.dc_ref, sourceDatacenter=mock.sentinel.dc_ref, force=False, destName='sentinel.dest_ds') def test_disk_delete(self): with test.nested( mock.patch.object(self.session, '_wait_for_task'), mock.patch.object(self.session, '_call_method', return_value=mock.sentinel.cm) ) as (_wait_for_task, _call_method): ds_util.disk_delete(self.session, 'fake-dc-ref', '[ds] tmp/disk.vmdk') _wait_for_task.assert_called_once_with(mock.sentinel.cm) _call_method.assert_called_once_with( mock.ANY, 'DeleteVirtualDisk_Task', 'VirtualDiskManager', datacenter='fake-dc-ref', name='[ds] tmp/disk.vmdk') def test_mkdir(self): def fake_call_method(module, method, *args, **kwargs): self.assertEqual('MakeDirectory', method) name = kwargs.get('name') self.assertEqual('[ds] fake/path', name) datacenter = kwargs.get('datacenter') self.assertEqual('fake-dc-ref', datacenter) createParentDirectories = kwargs.get('createParentDirectories') self.assertTrue(createParentDirectories) with mock.patch.object(self.session, '_call_method', fake_call_method): ds_path = ds_obj.DatastorePath('ds', 'fake/path') ds_util.mkdir(self.session, ds_path, 'fake-dc-ref') def test_file_exists(self): def fake_call_method(module, method, *args, **kwargs): if method == 'SearchDatastore_Task': ds_browser = args[0] self.assertEqual('fake-browser', ds_browser) datastorePath = kwargs.get('datastorePath') self.assertEqual('[ds] fake/path', datastorePath) return 'fake_exists_task' # Should never get here self.fail() def fake_wait_for_task(task_ref): if task_ref == 'fake_exists_task': result_file = fake.DataObject() result_file.path = 'fake-file' result = fake.DataObject() result.file = [result_file] result.path = '[ds] fake/path' task_info = fake.DataObject() task_info.result = result return task_info # Should never get here self.fail() with test.nested( mock.patch.object(self.session, '_call_method', fake_call_method), mock.patch.object(self.session, '_wait_for_task', fake_wait_for_task)): ds_path = ds_obj.DatastorePath('ds', 'fake/path') file_exists = ds_util.file_exists(self.session, 'fake-browser', ds_path, 'fake-file') self.assertTrue(file_exists) def test_file_exists_fails(self): def fake_call_method(module, method, *args, **kwargs): if method == 'SearchDatastore_Task': return 'fake_exists_task' # Should 
never get here self.fail() def fake_wait_for_task(task_ref): if task_ref == 'fake_exists_task': raise vexc.FileNotFoundException() # Should never get here self.fail() with test.nested( mock.patch.object(self.session, '_call_method', fake_call_method), mock.patch.object(self.session, '_wait_for_task', fake_wait_for_task)): ds_path = ds_obj.DatastorePath('ds', 'fake/path') file_exists = ds_util.file_exists(self.session, 'fake-browser', ds_path, 'fake-file') self.assertFalse(file_exists) def _mock_get_datastore_calls(self, *datastores): """Mock vim_util calls made by get_datastore.""" datastores_i = [None] # For the moment, at least, this list of datastores is simply passed to # get_properties_for_a_collection_of_objects, which we mock below. We # don't need to over-complicate the fake function by worrying about its # contents. fake_ds_list = ['fake-ds'] def fake_call_method(module, method, *args, **kwargs): # Mock the call which returns a list of datastores for the cluster if (module == ds_util.vutil and method == 'get_object_property' and args == ('fake-cluster', 'datastore')): fake_ds_mor = fake.DataObject() fake_ds_mor.ManagedObjectReference = fake_ds_list return fake_ds_mor # Return the datastore result sets we were passed in, in the order # given if (module == ds_util.vim_util and method == 'get_properties_for_a_collection_of_objects' and args[0] == 'Datastore' and args[1] == fake_ds_list): # Start a new iterator over given datastores datastores_i[0] = iter(datastores) return next(datastores_i[0]) # Continue returning results from the current iterator. if (module == ds_util.vutil and method == 'continue_retrieval'): try: return next(datastores_i[0]) except StopIteration: return None if (method == 'continue_retrieval' or method == 'cancel_retrieval'): return # Sentinel that get_datastore's use of vim has changed self.fail('Unexpected vim call in get_datastore: %s' % method) return mock.patch.object(self.session, '_call_method', side_effect=fake_call_method) def test_get_datastore(self): fake_objects = fake.FakeRetrieveResult() fake_objects.add_object(fake.Datastore()) fake_objects.add_object(fake.Datastore("fake-ds-2", 2048, 1000, False, "normal")) fake_objects.add_object(fake.Datastore("fake-ds-3", 4096, 2000, True, "inMaintenance")) with self._mock_get_datastore_calls(fake_objects): result = ds_util.get_datastore(self.session, 'fake-cluster') self.assertEqual("fake-ds", result.name) self.assertEqual(units.Ti, result.capacity) self.assertEqual(500 * units.Gi, result.freespace) def test_get_datastore_with_regex(self): # Test with a regex that matches with a datastore datastore_valid_regex = re.compile("^openstack.*\d$") fake_objects = fake.FakeRetrieveResult() fake_objects.add_object(fake.Datastore("openstack-ds0")) fake_objects.add_object(fake.Datastore("fake-ds0")) fake_objects.add_object(fake.Datastore("fake-ds1")) with self._mock_get_datastore_calls(fake_objects): result = ds_util.get_datastore(self.session, 'fake-cluster', datastore_valid_regex) self.assertEqual("openstack-ds0", result.name) def test_get_datastore_with_token(self): regex = re.compile("^ds.*\d$") fake0 = fake.FakeRetrieveResult() fake0.add_object(fake.Datastore("ds0", 10 * units.Gi, 5 * units.Gi)) fake0.add_object(fake.Datastore("foo", 10 * units.Gi, 9 * units.Gi)) setattr(fake0, 'token', 'token-0') fake1 = fake.FakeRetrieveResult() fake1.add_object(fake.Datastore("ds2", 10 * units.Gi, 8 * units.Gi)) fake1.add_object(fake.Datastore("ds3", 10 * units.Gi, 1 * units.Gi)) with self._mock_get_datastore_calls(fake0, 
fake1): result = ds_util.get_datastore(self.session, 'fake-cluster', regex) self.assertEqual("ds2", result.name) def test_get_datastore_with_list(self): # Test with a regex containing whitelist of datastores datastore_valid_regex = re.compile("(openstack-ds0|openstack-ds2)") fake_objects = fake.FakeRetrieveResult() fake_objects.add_object(fake.Datastore("openstack-ds0")) fake_objects.add_object(fake.Datastore("openstack-ds1")) fake_objects.add_object(fake.Datastore("openstack-ds2")) with self._mock_get_datastore_calls(fake_objects): result = ds_util.get_datastore(self.session, 'fake-cluster', datastore_valid_regex) self.assertNotEqual("openstack-ds1", result.name) def test_get_datastore_with_regex_error(self): # Test with a regex that has no match # Checks if code raises DatastoreNotFound with a specific message datastore_invalid_regex = re.compile("unknown-ds") exp_message = ("Datastore regex %s did not match any datastores" % datastore_invalid_regex.pattern) fake_objects = fake.FakeRetrieveResult() fake_objects.add_object(fake.Datastore("fake-ds0")) fake_objects.add_object(fake.Datastore("fake-ds1")) # assertRaisesRegExp would have been a good choice instead of # try/catch block, but it's available only from Py 2.7. try: with self._mock_get_datastore_calls(fake_objects): ds_util.get_datastore(self.session, 'fake-cluster', datastore_invalid_regex) except exception.DatastoreNotFound as e: self.assertEqual(exp_message, e.args[0]) else: self.fail("DatastoreNotFound Exception was not raised with " "message: %s" % exp_message) def test_get_datastore_without_datastore(self): self.assertRaises(exception.DatastoreNotFound, ds_util.get_datastore, fake.FakeObjectRetrievalSession(None), cluster="fake-cluster") def test_get_datastore_inaccessible_ds(self): data_store = fake.Datastore() data_store.set("summary.accessible", False) fake_objects = fake.FakeRetrieveResult() fake_objects.add_object(data_store) with self._mock_get_datastore_calls(fake_objects): self.assertRaises(exception.DatastoreNotFound, ds_util.get_datastore, self.session, 'fake-cluster') def test_get_datastore_ds_in_maintenance(self): data_store = fake.Datastore() data_store.set("summary.maintenanceMode", "inMaintenance") fake_objects = fake.FakeRetrieveResult() fake_objects.add_object(data_store) with self._mock_get_datastore_calls(fake_objects): self.assertRaises(exception.DatastoreNotFound, ds_util.get_datastore, self.session, 'fake-cluster') def test_get_datastore_no_host_in_cluster(self): def fake_call_method(module, method, *args, **kwargs): return '' with mock.patch.object(self.session, '_call_method', fake_call_method): self.assertRaises(exception.DatastoreNotFound, ds_util.get_datastore, self.session, 'fake-cluster') def _test_is_datastore_valid(self, accessible=True, maintenance_mode="normal", type="VMFS", datastore_regex=None, ds_types=ds_util.ALL_SUPPORTED_DS_TYPES): propdict = {} propdict["summary.accessible"] = accessible propdict["summary.maintenanceMode"] = maintenance_mode propdict["summary.type"] = type propdict["summary.name"] = "ds-1" return ds_util._is_datastore_valid(propdict, datastore_regex, ds_types) def test_is_datastore_valid(self): for ds_type in ds_util.ALL_SUPPORTED_DS_TYPES: self.assertTrue(self._test_is_datastore_valid(True, "normal", ds_type)) def test_is_datastore_valid_inaccessible_ds(self): self.assertFalse(self._test_is_datastore_valid(False, "normal", "VMFS")) def test_is_datastore_valid_ds_in_maintenance(self): self.assertFalse(self._test_is_datastore_valid(True, "inMaintenance", "VMFS")) def 
test_is_datastore_valid_ds_type_invalid(self):
        self.assertFalse(self._test_is_datastore_valid(True,
                                                       "normal",
                                                       "vfat"))

    def test_is_datastore_valid_not_matching_regex(self):
        datastore_regex = re.compile("ds-2")
        self.assertFalse(self._test_is_datastore_valid(True,
                                                       "normal",
                                                       "VMFS",
                                                       datastore_regex))

    def test_is_datastore_valid_matching_regex(self):
        datastore_regex = re.compile("ds-1")
        self.assertTrue(self._test_is_datastore_valid(True,
                                                      "normal",
                                                      "VMFS",
                                                      datastore_regex))

    def test_get_connected_hosts_none(self):
        with mock.patch.object(self.session,
                               '_call_method') as _call_method:
            hosts = ds_util.get_connected_hosts(self.session,
                                                'fake_datastore')
            self.assertEqual([], hosts)
            _call_method.assert_called_once_with(
                mock.ANY, 'get_object_property',
                'fake_datastore', 'host')

    def test_get_connected_hosts(self):
        host = mock.Mock(spec=object)
        host.value = 'fake-host'
        host_mount = mock.Mock(spec=object)
        host_mount.key = host
        host_mounts = mock.Mock(spec=object)
        host_mounts.DatastoreHostMount = [host_mount]
        with mock.patch.object(self.session, '_call_method',
                               return_value=host_mounts) as _call_method:
            hosts = ds_util.get_connected_hosts(self.session,
                                                'fake_datastore')
            self.assertEqual(['fake-host'], hosts)
            _call_method.assert_called_once_with(
                mock.ANY, 'get_object_property',
                'fake_datastore', 'host')
nova-17.0.1/nova/tests/unit/virt/vmwareapi/test_configdrive.py0000666000175000017500000001633013250073126024544 0ustar zuulzuul00000000000000# Copyright 2013 IBM Corp.
# Copyright 2011 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
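# The setUp below swaps real collaborators for fakes via fixtures and
# stub_out rather than mock.patch decorators. A minimal sketch of the same
# pattern (the FakeMetadata name here is illustrative only):
#
#     import fixtures
#
#     class FakeMetadata(object):
#         def metadata_for_config_drive(self):
#             return []
#
#     self.useFixture(fixtures.MonkeyPatch(
#         'nova.api.metadata.base.InstanceMetadata', FakeMetadata))
#
# fixtures.MonkeyPatch replaces the named attribute for the duration of the
# test and restores the original automatically on cleanup.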
import fixtures import mock from nova import context from nova.image import glance from nova import objects from nova import test from nova.tests.unit import fake_instance import nova.tests.unit.image.fake from nova.tests.unit import utils from nova.tests.unit.virt.vmwareapi import fake as vmwareapi_fake from nova.tests.unit.virt.vmwareapi import stubs from nova.tests import uuidsentinel from nova.virt import fake from nova.virt.vmwareapi import driver from nova.virt.vmwareapi import vm_util from nova.virt.vmwareapi import vmops class ConfigDriveTestCase(test.NoDBTestCase): REQUIRES_LOCKING = True @mock.patch.object(objects.Service, 'get_by_compute_host') @mock.patch.object(driver.VMwareVCDriver, '_register_openstack_extension') def setUp(self, mock_register, mock_service): super(ConfigDriveTestCase, self).setUp() vm_util.vm_refs_cache_reset() self.context = context.RequestContext('fake', 'fake', is_admin=False) self.flags(cluster_name='test_cluster', host_ip='testhostname', host_username='test_username', host_password='test_pass', use_linked_clone=False, group='vmware') self.flags(enabled=False, group='vnc') vmwareapi_fake.reset() stubs.set_stubs(self) nova.tests.unit.image.fake.stub_out_image_service(self) self.conn = driver.VMwareVCDriver(fake.FakeVirtAPI) self.network_info = utils.get_test_network_info() self.node_name = self.conn._nodename image_ref = nova.tests.unit.image.fake.get_valid_image_id() instance_values = { 'vm_state': 'building', 'project_id': 'fake', 'user_id': 'fake', 'name': '1', 'kernel_id': '1', 'ramdisk_id': '1', 'mac_addresses': [{'address': 'de:ad:be:ef:be:ef'}], 'memory_mb': 8192, 'flavor': objects.Flavor(vcpus=4, extra_specs={}), 'instance_type_id': 0, 'vcpus': 4, 'root_gb': 80, 'image_ref': image_ref, 'host': 'fake_host', 'task_state': 'scheduling', 'reservation_id': 'r-3t8muvr0', 'id': 1, 'uuid': uuidsentinel.foo, 'node': self.node_name, 'metadata': [], 'expected_attrs': ['system_metadata'], } self.test_instance = fake_instance.fake_instance_obj(self.context, **instance_values) self.test_instance.flavor = objects.Flavor(vcpus=4, memory_mb=8192, root_gb=80, ephemeral_gb=0, swap=0, extra_specs={}) (image_service, image_id) = glance.get_remote_image_service(context, image_ref) metadata = image_service.show(context, image_id) self.image = objects.ImageMeta.from_dict({ 'id': image_ref, 'disk_format': 'vmdk', 'size': int(metadata['size']), }) class FakeInstanceMetadata(object): def __init__(self, instance, content=None, extra_md=None, network_info=None, request_context=None): pass def metadata_for_config_drive(self): return [] self.useFixture(fixtures.MonkeyPatch( 'nova.api.metadata.base.InstanceMetadata', FakeInstanceMetadata)) def fake_make_drive(_self, _path): pass # We can't actually make a config drive v2 because ensure_tree has # been faked out self.stub_out('nova.virt.configdrive.ConfigDriveBuilder.make_drive', fake_make_drive) def fake_upload_iso_to_datastore(iso_path, instance, **kwargs): pass self.stub_out('nova.virt.vmwareapi.images.upload_iso_to_datastore', fake_upload_iso_to_datastore) def tearDown(self): super(ConfigDriveTestCase, self).tearDown() vmwareapi_fake.cleanup() nova.tests.unit.image.fake.FakeImageService_reset() @mock.patch.object(vmops.VMwareVMOps, '_get_instance_metadata', return_value='fake_metadata') def _spawn_vm(self, fake_get_instance_meta, injected_files=None, admin_password=None, block_device_info=None): injected_files = injected_files or [] self.conn.spawn(self.context, self.test_instance, self.image, injected_files=injected_files, 
                        admin_password=admin_password,
                        allocations={},
                        network_info=self.network_info,
                        block_device_info=block_device_info)

    @mock.patch.object(vmops.VMwareVMOps, '_create_config_drive',
                       return_value='[ds1] fake.iso')
    @mock.patch.object(vmops.VMwareVMOps, '_attach_cdrom_to_vm')
    def test_create_vm_with_config_drive_verify_method_invocation(
            self, mock_attach_cdrom, mock_create_config_drive):
        self.test_instance.config_drive = 'True'
        self._spawn_vm()
        mock_create_config_drive.assert_called_once_with(
            mock.ANY, self.test_instance, mock.ANY, mock.ANY, mock.ANY,
            mock.ANY, mock.ANY, mock.ANY, mock.ANY)
        mock_attach_cdrom.assert_called_once_with(
            mock.ANY, mock.ANY, mock.ANY, mock.ANY)

    @mock.patch.object(vmops.VMwareVMOps, '_create_config_drive',
                       return_value='[ds1] fake.iso')
    @mock.patch.object(vmops.VMwareVMOps, '_attach_cdrom_to_vm')
    def test_create_vm_without_config_drive(self, mock_attach_cdrom,
                                            mock_create_config_drive):
        self.test_instance.config_drive = None
        self._spawn_vm()
        mock_create_config_drive.assert_not_called()
        mock_attach_cdrom.assert_not_called()

    def test_create_vm_with_config_drive(self):
        self.test_instance.config_drive = 'True'
        self._spawn_vm()
nova-17.0.1/nova/tests/unit/virt/vmwareapi/test_volumeops.py0000666000175000017500000010065613250073126024303 0ustar zuulzuul00000000000000# Copyright 2013 IBM Corp.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
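# These tests lean on mock.sentinel to stand in for opaque vSphere managed
# object references. A short sketch of the idiom (standard unittest.mock
# behaviour, nothing specific to this module):
#
#     import mock
#
#     ref = mock.sentinel.vm_ref          # a unique named singleton
#     assert ref is mock.sentinel.vm_ref  # same name -> same object
#     assert ref is not mock.sentinel.volume_ref
#
# Identity comparison makes it easy to assert that the exact object handed to
# one collaborator was passed through unchanged to another.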
import mock from oslo_vmware import exceptions as oslo_vmw_exceptions from oslo_vmware import vim_util as vutil from nova.compute import power_state from nova.compute import vm_states from nova import context from nova import exception from nova import test from nova.tests.unit import fake_instance from nova.tests.unit.image import fake as image_fake from nova.tests.unit.virt.vmwareapi import fake as vmwareapi_fake from nova.tests.unit.virt.vmwareapi import stubs from nova.tests import uuidsentinel from nova.virt.vmwareapi import constants from nova.virt.vmwareapi import driver from nova.virt.vmwareapi import vm_util from nova.virt.vmwareapi import volumeops class VMwareVolumeOpsTestCase(test.NoDBTestCase): def setUp(self): super(VMwareVolumeOpsTestCase, self).setUp() vmwareapi_fake.reset() stubs.set_stubs(self) self._session = driver.VMwareAPISession() self._context = context.RequestContext('fake_user', 'fake_project') self._volumeops = volumeops.VMwareVolumeOps(self._session) self._image_id = image_fake.get_valid_image_id() self._instance_values = { 'name': 'fake_name', 'uuid': uuidsentinel.foo, 'vcpus': 1, 'memory_mb': 512, 'image_ref': self._image_id, 'root_gb': 10, 'node': 'respool-1001(MyResPoolName)', 'expected_attrs': ['system_metadata'], } self._instance = fake_instance.fake_instance_obj(self._context, **self._instance_values) def _test_detach_disk_from_vm(self, destroy_disk=False): def fake_call_method(module, method, *args, **kwargs): vmdk_detach_config_spec = kwargs.get('spec') virtual_device_config = vmdk_detach_config_spec.deviceChange[0] self.assertEqual('remove', virtual_device_config.operation) self.assertEqual('ns0:VirtualDeviceConfigSpec', virtual_device_config.obj_name) if destroy_disk: self.assertEqual('destroy', virtual_device_config.fileOperation) else: self.assertFalse(hasattr(virtual_device_config, 'fileOperation')) return 'fake_configure_task' with test.nested( mock.patch.object(self._session, '_wait_for_task'), mock.patch.object(self._session, '_call_method', fake_call_method) ) as (_wait_for_task, _call_method): fake_device = vmwareapi_fake.DataObject() fake_device.backing = vmwareapi_fake.DataObject() fake_device.backing.fileName = 'fake_path' fake_device.key = 'fake_key' self._volumeops.detach_disk_from_vm('fake_vm_ref', self._instance, fake_device, destroy_disk) _wait_for_task.assert_has_calls([ mock.call('fake_configure_task')]) def test_detach_with_destroy_disk_from_vm(self): self._test_detach_disk_from_vm(destroy_disk=True) def test_detach_without_destroy_disk_from_vm(self): self._test_detach_disk_from_vm(destroy_disk=False) def _fake_call_get_object_property(self, uuid, result): def fake_call_method(vim, method, vm_ref, prop): expected_prop = 'config.extraConfig["volume-%s"]' % uuid self.assertEqual('VirtualMachine', vm_ref._type) self.assertEqual(expected_prop, prop) return result return fake_call_method def test_get_volume_uuid(self): vm_ref = vmwareapi_fake.ManagedObjectReference('VirtualMachine', 'vm-134') uuid = '1234' opt_val = vmwareapi_fake.OptionValue('volume-%s' % uuid, 'volume-val') fake_call = self._fake_call_get_object_property(uuid, opt_val) with mock.patch.object(self._session, "_call_method", fake_call): val = self._volumeops._get_volume_uuid(vm_ref, uuid) self.assertEqual('volume-val', val) def test_get_volume_uuid_not_found(self): vm_ref = vmwareapi_fake.ManagedObjectReference('VirtualMachine', 'vm-134') uuid = '1234' fake_call = self._fake_call_get_object_property(uuid, None) with mock.patch.object(self._session, "_call_method", 
fake_call): val = self._volumeops._get_volume_uuid(vm_ref, uuid) self.assertIsNone(val) def test_attach_volume_vmdk_invalid(self): connection_info = {'driver_volume_type': 'vmdk', 'serial': 'volume-fake-id', 'data': {'volume': 'vm-10', 'volume_id': 'volume-fake-id'}} instance = mock.MagicMock(name='fake-name', vm_state=vm_states.ACTIVE) vmdk_info = vm_util.VmdkInfo('fake-path', constants.ADAPTER_TYPE_IDE, constants.DISK_TYPE_PREALLOCATED, 1024, 'fake-device') with test.nested( mock.patch.object(vm_util, 'get_vm_ref'), mock.patch.object(self._volumeops, '_get_volume_ref'), mock.patch.object(vm_util, 'get_vmdk_info', return_value=vmdk_info), mock.patch.object(vm_util, 'get_vm_state', return_value=power_state.RUNNING) ) as (get_vm_ref, get_volume_ref, get_vmdk_info, get_vm_state): self.assertRaises(exception.Invalid, self._volumeops._attach_volume_vmdk, connection_info, instance) get_vm_ref.assert_called_once_with(self._volumeops._session, instance) get_volume_ref.assert_called_once_with( connection_info['data']['volume']) self.assertTrue(get_vmdk_info.called) get_vm_state.assert_called_once_with(self._volumeops._session, instance) @mock.patch.object(vm_util, 'get_vm_extra_config_spec', return_value=mock.sentinel.extra_config) @mock.patch.object(vm_util, 'reconfigure_vm') def test_update_volume_details(self, reconfigure_vm, get_vm_extra_config_spec): volume_uuid = '26f5948e-52a3-4ee6-8d48-0a379afd0828' device_uuid = '0d86246a-2adb-470d-a9f7-bce09930c5d' self._volumeops._update_volume_details( mock.sentinel.vm_ref, volume_uuid, device_uuid) get_vm_extra_config_spec.assert_called_once_with( self._volumeops._session.vim.client.factory, {'volume-%s' % volume_uuid: device_uuid}) reconfigure_vm.assert_called_once_with(self._volumeops._session, mock.sentinel.vm_ref, mock.sentinel.extra_config) def _fake_connection_info(self): return {'driver_volume_type': 'vmdk', 'serial': 'volume-fake-id', 'data': {'volume': 'vm-10', 'volume_id': 'volume-fake-id'}} @mock.patch.object(volumeops.VMwareVolumeOps, '_get_volume_uuid') @mock.patch.object(vm_util, 'get_vmdk_backed_disk_device') def test_get_vmdk_backed_disk_device(self, get_vmdk_backed_disk_device, get_volume_uuid): session = mock.Mock() self._volumeops._session = session hardware_devices = mock.sentinel.hardware_devices session._call_method.return_value = hardware_devices disk_uuid = mock.sentinel.disk_uuid get_volume_uuid.return_value = disk_uuid device = mock.sentinel.device get_vmdk_backed_disk_device.return_value = device vm_ref = mock.sentinel.vm_ref connection_info = self._fake_connection_info() ret = self._volumeops._get_vmdk_backed_disk_device( vm_ref, connection_info['data']) self.assertEqual(device, ret) session._call_method.assert_called_once_with( vutil, "get_object_property", vm_ref, "config.hardware.device") get_volume_uuid.assert_called_once_with( vm_ref, connection_info['data']['volume_id']) get_vmdk_backed_disk_device.assert_called_once_with(hardware_devices, disk_uuid) @mock.patch.object(volumeops.VMwareVolumeOps, '_get_volume_uuid') @mock.patch.object(vm_util, 'get_vmdk_backed_disk_device') def test_get_vmdk_backed_disk_device_with_missing_disk_device( self, get_vmdk_backed_disk_device, get_volume_uuid): session = mock.Mock() self._volumeops._session = session hardware_devices = mock.sentinel.hardware_devices session._call_method.return_value = hardware_devices disk_uuid = mock.sentinel.disk_uuid get_volume_uuid.return_value = disk_uuid get_vmdk_backed_disk_device.return_value = None vm_ref = mock.sentinel.vm_ref connection_info = 
self._fake_connection_info() self.assertRaises(exception.DiskNotFound, self._volumeops._get_vmdk_backed_disk_device, vm_ref, connection_info['data']) session._call_method.assert_called_once_with( vutil, "get_object_property", vm_ref, "config.hardware.device") get_volume_uuid.assert_called_once_with( vm_ref, connection_info['data']['volume_id']) get_vmdk_backed_disk_device.assert_called_once_with(hardware_devices, disk_uuid) def test_detach_volume_vmdk(self): vmdk_info = vm_util.VmdkInfo('fake-path', 'lsiLogic', 'thin', 1024, 'fake-device') with test.nested( mock.patch.object(vm_util, 'get_vm_ref', return_value=mock.sentinel.vm_ref), mock.patch.object(self._volumeops, '_get_volume_ref', return_value=mock.sentinel.volume_ref), mock.patch.object(self._volumeops, '_get_vmdk_backed_disk_device', return_value=mock.sentinel.device), mock.patch.object(vm_util, 'get_vmdk_info', return_value=vmdk_info), mock.patch.object(self._volumeops, '_consolidate_vmdk_volume'), mock.patch.object(self._volumeops, 'detach_disk_from_vm'), mock.patch.object(self._volumeops, '_update_volume_details'), ) as (get_vm_ref, get_volume_ref, get_vmdk_backed_disk_device, get_vmdk_info, consolidate_vmdk_volume, detach_disk_from_vm, update_volume_details): connection_info = {'driver_volume_type': 'vmdk', 'serial': 'volume-fake-id', 'data': {'volume': 'vm-10', 'volume_id': 'd11a82de-ddaa-448d-b50a-a255a7e61a1e' }} instance = mock.MagicMock(name='fake-name', vm_state=vm_states.ACTIVE) self._volumeops._detach_volume_vmdk(connection_info, instance) get_vm_ref.assert_called_once_with(self._volumeops._session, instance) get_volume_ref.assert_called_once_with( connection_info['data']['volume']) get_vmdk_backed_disk_device.assert_called_once_with( mock.sentinel.vm_ref, connection_info['data']) get_vmdk_info.assert_called_once_with(self._volumeops._session, mock.sentinel.volume_ref) consolidate_vmdk_volume.assert_called_once_with( instance, mock.sentinel.vm_ref, mock.sentinel.device, mock.sentinel.volume_ref, adapter_type=vmdk_info.adapter_type, disk_type=vmdk_info.disk_type) detach_disk_from_vm.assert_called_once_with(mock.sentinel.vm_ref, instance, mock.sentinel.device) update_volume_details.assert_called_once_with( mock.sentinel.vm_ref, connection_info['data']['volume_id'], "") def test_detach_volume_vmdk_invalid(self): connection_info = {'driver_volume_type': 'vmdk', 'serial': 'volume-fake-id', 'data': {'volume': 'vm-10', 'volume_id': 'volume-fake-id'}} instance = mock.MagicMock(name='fake-name', vm_state=vm_states.ACTIVE) vmdk_info = vm_util.VmdkInfo('fake-path', constants.ADAPTER_TYPE_IDE, constants.DISK_TYPE_PREALLOCATED, 1024, 'fake-device') with test.nested( mock.patch.object(vm_util, 'get_vm_ref', return_value=mock.sentinel.vm_ref), mock.patch.object(self._volumeops, '_get_volume_ref'), mock.patch.object(self._volumeops, '_get_vmdk_backed_disk_device'), mock.patch.object(vm_util, 'get_vmdk_info', return_value=vmdk_info), mock.patch.object(vm_util, 'get_vm_state', return_value=power_state.RUNNING) ) as (get_vm_ref, get_volume_ref, get_vmdk_backed_disk_device, get_vmdk_info, get_vm_state): self.assertRaises(exception.Invalid, self._volumeops._detach_volume_vmdk, connection_info, instance) get_vm_ref.assert_called_once_with(self._volumeops._session, instance) get_volume_ref.assert_called_once_with( connection_info['data']['volume']) get_vmdk_backed_disk_device.assert_called_once_with( mock.sentinel.vm_ref, connection_info['data']) self.assertTrue(get_vmdk_info.called) 
get_vm_state.assert_called_once_with(self._volumeops._session, instance) @mock.patch.object(vm_util, 'get_vm_ref') @mock.patch.object(vm_util, 'get_rdm_disk') @mock.patch.object(volumeops.VMwareVolumeOps, '_iscsi_get_target') @mock.patch.object(volumeops.VMwareVolumeOps, 'detach_disk_from_vm') def test_detach_volume_iscsi(self, detach_disk_from_vm, iscsi_get_target, get_rdm_disk, get_vm_ref): vm_ref = mock.sentinel.vm_ref get_vm_ref.return_value = vm_ref device_name = mock.sentinel.device_name disk_uuid = mock.sentinel.disk_uuid iscsi_get_target.return_value = (device_name, disk_uuid) session = mock.Mock() self._volumeops._session = session hardware_devices = mock.sentinel.hardware_devices session._call_method.return_value = hardware_devices device = mock.sentinel.device get_rdm_disk.return_value = device connection_info = self._fake_connection_info() instance = mock.sentinel.instance self._volumeops._detach_volume_iscsi(connection_info, instance) get_vm_ref.assert_called_once_with(session, instance) iscsi_get_target.assert_called_once_with(connection_info['data']) session._call_method.assert_called_once_with( vutil, "get_object_property", vm_ref, "config.hardware.device") get_rdm_disk.assert_called_once_with(hardware_devices, disk_uuid) detach_disk_from_vm.assert_called_once_with( vm_ref, instance, device, destroy_disk=True) @mock.patch.object(vm_util, 'get_vm_ref') @mock.patch.object(volumeops.VMwareVolumeOps, '_iscsi_get_target') def test_detach_volume_iscsi_with_missing_iscsi_target( self, iscsi_get_target, get_vm_ref): vm_ref = mock.sentinel.vm_ref get_vm_ref.return_value = vm_ref iscsi_get_target.return_value = (None, None) connection_info = self._fake_connection_info() instance = mock.sentinel.instance self.assertRaises( exception.StorageError, self._volumeops._detach_volume_iscsi, connection_info, instance) get_vm_ref.assert_called_once_with(self._volumeops._session, instance) iscsi_get_target.assert_called_once_with(connection_info['data']) @mock.patch.object(vm_util, 'get_vm_ref') @mock.patch.object(vm_util, 'get_rdm_disk') @mock.patch.object(volumeops.VMwareVolumeOps, '_iscsi_get_target') @mock.patch.object(volumeops.VMwareVolumeOps, 'detach_disk_from_vm') def test_detach_volume_iscsi_with_missing_disk_device( self, detach_disk_from_vm, iscsi_get_target, get_rdm_disk, get_vm_ref): vm_ref = mock.sentinel.vm_ref get_vm_ref.return_value = vm_ref device_name = mock.sentinel.device_name disk_uuid = mock.sentinel.disk_uuid iscsi_get_target.return_value = (device_name, disk_uuid) session = mock.Mock() self._volumeops._session = session hardware_devices = mock.sentinel.hardware_devices session._call_method.return_value = hardware_devices get_rdm_disk.return_value = None connection_info = self._fake_connection_info() instance = mock.sentinel.instance self.assertRaises( exception.DiskNotFound, self._volumeops._detach_volume_iscsi, connection_info, instance) get_vm_ref.assert_called_once_with(session, instance) iscsi_get_target.assert_called_once_with(connection_info['data']) session._call_method.assert_called_once_with( vutil, "get_object_property", vm_ref, "config.hardware.device") get_rdm_disk.assert_called_once_with(hardware_devices, disk_uuid) self.assertFalse(detach_disk_from_vm.called) def _test_attach_volume_vmdk(self, adapter_type=None): connection_info = {'driver_volume_type': constants.DISK_FORMAT_VMDK, 'serial': 'volume-fake-id', 'data': {'volume': 'vm-10', 'volume_id': 'volume-fake-id'}} vm_ref = 'fake-vm-ref' volume_device = mock.MagicMock() volume_device.backing.fileName = 
'fake-path' default_adapter_type = constants.DEFAULT_ADAPTER_TYPE disk_type = constants.DEFAULT_DISK_TYPE disk_uuid = 'e97f357b-331e-4ad1-b726-89be048fb811' backing = mock.Mock(uuid=disk_uuid) device = mock.Mock(backing=backing) vmdk_info = vm_util.VmdkInfo('fake-path', default_adapter_type, disk_type, 1024, device) adapter_type = adapter_type or default_adapter_type if adapter_type == constants.ADAPTER_TYPE_IDE: vm_state = power_state.SHUTDOWN else: vm_state = power_state.RUNNING with test.nested( mock.patch.object(vm_util, 'get_vm_ref', return_value=vm_ref), mock.patch.object(self._volumeops, '_get_volume_ref'), mock.patch.object(vm_util, 'get_vmdk_info', return_value=vmdk_info), mock.patch.object(self._volumeops, 'attach_disk_to_vm'), mock.patch.object(self._volumeops, '_update_volume_details'), mock.patch.object(vm_util, 'get_vm_state', return_value=vm_state) ) as (get_vm_ref, get_volume_ref, get_vmdk_info, attach_disk_to_vm, update_volume_details, get_vm_state): self._volumeops.attach_volume(connection_info, self._instance, adapter_type) get_vm_ref.assert_called_once_with(self._volumeops._session, self._instance) get_volume_ref.assert_called_once_with( connection_info['data']['volume']) self.assertTrue(get_vmdk_info.called) attach_disk_to_vm.assert_called_once_with( vm_ref, self._instance, adapter_type, constants.DISK_TYPE_PREALLOCATED, vmdk_path='fake-path') update_volume_details.assert_called_once_with( vm_ref, connection_info['data']['volume_id'], disk_uuid) if adapter_type == constants.ADAPTER_TYPE_IDE: get_vm_state.assert_called_once_with(self._volumeops._session, self._instance) else: self.assertFalse(get_vm_state.called) def _test_attach_volume_iscsi(self, adapter_type=None): connection_info = {'driver_volume_type': constants.DISK_FORMAT_ISCSI, 'serial': 'volume-fake-id', 'data': {'volume': 'vm-10', 'volume_id': 'volume-fake-id'}} vm_ref = 'fake-vm-ref' default_adapter_type = constants.DEFAULT_ADAPTER_TYPE adapter_type = adapter_type or default_adapter_type with test.nested( mock.patch.object(vm_util, 'get_vm_ref', return_value=vm_ref), mock.patch.object(self._volumeops, '_iscsi_discover_target', return_value=(mock.sentinel.device_name, mock.sentinel.uuid)), mock.patch.object(vm_util, 'get_scsi_adapter_type', return_value=adapter_type), mock.patch.object(self._volumeops, 'attach_disk_to_vm') ) as (get_vm_ref, iscsi_discover_target, get_scsi_adapter_type, attach_disk_to_vm): self._volumeops.attach_volume(connection_info, self._instance, adapter_type) get_vm_ref.assert_called_once_with(self._volumeops._session, self._instance) iscsi_discover_target.assert_called_once_with( connection_info['data']) if adapter_type is None: self.assertTrue(get_scsi_adapter_type.called) attach_disk_to_vm.assert_called_once_with(vm_ref, self._instance, adapter_type, 'rdmp', device_name=mock.sentinel.device_name) def test_attach_volume_vmdk(self): for adapter_type in (None, constants.DEFAULT_ADAPTER_TYPE, constants.ADAPTER_TYPE_BUSLOGIC, constants.ADAPTER_TYPE_IDE, constants.ADAPTER_TYPE_LSILOGICSAS, constants.ADAPTER_TYPE_PARAVIRTUAL): self._test_attach_volume_vmdk(adapter_type) def test_attach_volume_iscsi(self): for adapter_type in (None, constants.DEFAULT_ADAPTER_TYPE, constants.ADAPTER_TYPE_BUSLOGIC, constants.ADAPTER_TYPE_LSILOGICSAS, constants.ADAPTER_TYPE_PARAVIRTUAL): self._test_attach_volume_iscsi(adapter_type) @mock.patch.object(volumeops.VMwareVolumeOps, '_get_vmdk_base_volume_device') @mock.patch.object(vm_util, 'relocate_vm') def test_consolidate_vmdk_volume_with_no_relocate( self, 
relocate_vm, get_vmdk_base_volume_device): file_name = mock.sentinel.file_name backing = mock.Mock(fileName=file_name) original_device = mock.Mock(backing=backing) get_vmdk_base_volume_device.return_value = original_device device = mock.Mock(backing=backing) volume_ref = mock.sentinel.volume_ref vm_ref = mock.sentinel.vm_ref self._volumeops._consolidate_vmdk_volume(self._instance, vm_ref, device, volume_ref) get_vmdk_base_volume_device.assert_called_once_with(volume_ref) self.assertFalse(relocate_vm.called) @mock.patch.object(volumeops.VMwareVolumeOps, '_get_vmdk_base_volume_device') @mock.patch.object(vm_util, 'relocate_vm') @mock.patch.object(volumeops.VMwareVolumeOps, '_get_host_of_vm') @mock.patch.object(volumeops.VMwareVolumeOps, '_get_res_pool_of_host') @mock.patch.object(volumeops.VMwareVolumeOps, 'detach_disk_from_vm') @mock.patch.object(volumeops.VMwareVolumeOps, 'attach_disk_to_vm') def test_consolidate_vmdk_volume_with_relocate( self, attach_disk_to_vm, detach_disk_from_vm, get_res_pool_of_host, get_host_of_vm, relocate_vm, get_vmdk_base_volume_device): file_name = mock.sentinel.file_name backing = mock.Mock(fileName=file_name) original_device = mock.Mock(backing=backing) get_vmdk_base_volume_device.return_value = original_device new_file_name = mock.sentinel.new_file_name datastore = mock.sentinel.datastore new_backing = mock.Mock(fileName=new_file_name, datastore=datastore) device = mock.Mock(backing=new_backing) host = mock.sentinel.host get_host_of_vm.return_value = host rp = mock.sentinel.rp get_res_pool_of_host.return_value = rp detach_disk_from_vm.side_effect = [ oslo_vmw_exceptions.FileNotFoundException] instance = self._instance volume_ref = mock.sentinel.volume_ref vm_ref = mock.sentinel.vm_ref adapter_type = constants.ADAPTER_TYPE_BUSLOGIC disk_type = constants.DISK_TYPE_EAGER_ZEROED_THICK self._volumeops._consolidate_vmdk_volume(instance, vm_ref, device, volume_ref, adapter_type, disk_type) get_vmdk_base_volume_device.assert_called_once_with(volume_ref) relocate_vm.assert_called_once_with(self._session, volume_ref, rp, datastore, host) detach_disk_from_vm.assert_called_once_with( volume_ref, instance, original_device, destroy_disk=True) attach_disk_to_vm.assert_called_once_with( volume_ref, instance, adapter_type, disk_type, vmdk_path=new_file_name) @mock.patch.object(volumeops.VMwareVolumeOps, '_get_vmdk_base_volume_device') @mock.patch.object(vm_util, 'relocate_vm') @mock.patch.object(volumeops.VMwareVolumeOps, '_get_host_of_vm') @mock.patch.object(volumeops.VMwareVolumeOps, '_get_res_pool_of_host') @mock.patch.object(volumeops.VMwareVolumeOps, 'detach_disk_from_vm') @mock.patch.object(volumeops.VMwareVolumeOps, 'attach_disk_to_vm') def test_consolidate_vmdk_volume_with_missing_vmdk( self, attach_disk_to_vm, detach_disk_from_vm, get_res_pool_of_host, get_host_of_vm, relocate_vm, get_vmdk_base_volume_device): file_name = mock.sentinel.file_name backing = mock.Mock(fileName=file_name) original_device = mock.Mock(backing=backing) get_vmdk_base_volume_device.return_value = original_device new_file_name = mock.sentinel.new_file_name datastore = mock.sentinel.datastore new_backing = mock.Mock(fileName=new_file_name, datastore=datastore) device = mock.Mock(backing=new_backing) host = mock.sentinel.host get_host_of_vm.return_value = host rp = mock.sentinel.rp get_res_pool_of_host.return_value = rp relocate_vm.side_effect = [ oslo_vmw_exceptions.FileNotFoundException, None] instance = mock.sentinel.instance volume_ref = mock.sentinel.volume_ref vm_ref = mock.sentinel.vm_ref 
    @mock.patch.object(volumeops.VMwareVolumeOps,
                       '_get_vmdk_base_volume_device')
    @mock.patch.object(vm_util, 'relocate_vm')
    @mock.patch.object(volumeops.VMwareVolumeOps, '_get_host_of_vm')
    @mock.patch.object(volumeops.VMwareVolumeOps, '_get_res_pool_of_host')
    @mock.patch.object(volumeops.VMwareVolumeOps, 'detach_disk_from_vm')
    @mock.patch.object(volumeops.VMwareVolumeOps, 'attach_disk_to_vm')
    def test_consolidate_vmdk_volume_with_missing_vmdk(
            self, attach_disk_to_vm, detach_disk_from_vm,
            get_res_pool_of_host, get_host_of_vm, relocate_vm,
            get_vmdk_base_volume_device):
        file_name = mock.sentinel.file_name
        backing = mock.Mock(fileName=file_name)
        original_device = mock.Mock(backing=backing)
        get_vmdk_base_volume_device.return_value = original_device

        new_file_name = mock.sentinel.new_file_name
        datastore = mock.sentinel.datastore
        new_backing = mock.Mock(fileName=new_file_name, datastore=datastore)
        device = mock.Mock(backing=new_backing)

        host = mock.sentinel.host
        get_host_of_vm.return_value = host
        rp = mock.sentinel.rp
        get_res_pool_of_host.return_value = rp

        relocate_vm.side_effect = [
            oslo_vmw_exceptions.FileNotFoundException, None]

        instance = mock.sentinel.instance
        volume_ref = mock.sentinel.volume_ref
        vm_ref = mock.sentinel.vm_ref
        adapter_type = constants.ADAPTER_TYPE_BUSLOGIC
        disk_type = constants.DISK_TYPE_EAGER_ZEROED_THICK

        self._volumeops._consolidate_vmdk_volume(instance, vm_ref, device,
                                                 volume_ref, adapter_type,
                                                 disk_type)

        get_vmdk_base_volume_device.assert_called_once_with(volume_ref)
        relocate_calls = [mock.call(self._session, volume_ref, rp, datastore,
                                    host),
                          mock.call(self._session, volume_ref, rp, datastore,
                                    host)]
        self.assertEqual(relocate_calls, relocate_vm.call_args_list)
        detach_disk_from_vm.assert_called_once_with(
            volume_ref, instance, original_device)
        attach_disk_to_vm.assert_called_once_with(
            volume_ref, instance, adapter_type, disk_type,
            vmdk_path=new_file_name)

    def test_iscsi_get_host_iqn(self):
        host_mor = mock.Mock()
        iqn = 'iscsi-name'
        hba = vmwareapi_fake.HostInternetScsiHba(iqn)
        hbas = mock.MagicMock(HostHostBusAdapter=[hba])

        with test.nested(
            mock.patch.object(vm_util, 'get_host_ref_for_vm',
                              return_value=host_mor),
            mock.patch.object(self._volumeops._session, '_call_method',
                              return_value=hbas)
        ) as (fake_get_host_ref_for_vm, fake_call_method):
            result = self._volumeops._iscsi_get_host_iqn(self._instance)

            fake_get_host_ref_for_vm.assert_called_once_with(
                self._volumeops._session, self._instance)
            fake_call_method.assert_called_once_with(
                vutil, "get_object_property", host_mor,
                "config.storageDevice.hostBusAdapter")
            self.assertEqual(iqn, result)

    def test_iscsi_get_host_iqn_instance_not_found(self):
        host_mor = mock.Mock()
        iqn = 'iscsi-name'
        hba = vmwareapi_fake.HostInternetScsiHba(iqn)
        hbas = mock.MagicMock(HostHostBusAdapter=[hba])

        with test.nested(
            mock.patch.object(vm_util, 'get_host_ref_for_vm',
                              side_effect=exception.InstanceNotFound('fake')),
            mock.patch.object(vm_util, 'get_host_ref',
                              return_value=host_mor),
            mock.patch.object(self._volumeops._session, '_call_method',
                              return_value=hbas)
        ) as (fake_get_host_ref_for_vm, fake_get_host_ref, fake_call_method):
            result = self._volumeops._iscsi_get_host_iqn(self._instance)

            fake_get_host_ref_for_vm.assert_called_once_with(
                self._volumeops._session, self._instance)
            fake_get_host_ref.assert_called_once_with(
                self._volumeops._session, self._volumeops._cluster)
            fake_call_method.assert_called_once_with(
                vutil, "get_object_property", host_mor,
                "config.storageDevice.hostBusAdapter")
            self.assertEqual(iqn, result)

    def test_get_volume_connector(self):
        vm_id = 'fake-vm'
        vm_ref = mock.MagicMock(value=vm_id)
        iqn = 'iscsi-name'
        host_ip = 'testhostname'
        self.flags(host_ip=host_ip, group='vmware')

        with test.nested(
            mock.patch.object(vm_util, 'get_vm_ref', return_value=vm_ref),
            mock.patch.object(self._volumeops, '_iscsi_get_host_iqn',
                              return_value=iqn)
        ) as (fake_get_vm_ref, fake_iscsi_get_host_iqn):
            connector = self._volumeops.get_volume_connector(self._instance)

            fake_get_vm_ref.assert_called_once_with(self._volumeops._session,
                                                    self._instance)
            fake_iscsi_get_host_iqn.assert_called_once_with(self._instance)
            self.assertEqual(host_ip, connector['ip'])
            self.assertEqual(host_ip, connector['host'])
            self.assertEqual(iqn, connector['initiator'])
            self.assertEqual(vm_id, connector['instance'])
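# Illustrative sketch (not part of the original module): the volume
# connector returned by get_volume_connector above is a plain dict that
# Cinder consumes when building an export. The field names below are taken
# from the assertions in test_get_volume_connector; the helper itself is
# hypothetical and only documents the expected shape.
def _sketch_build_volume_connector(host_ip, iqn, vm_moref_value):
    return {'ip': host_ip,          # management IP of the nova host
            'host': host_ip,        # hostname reported to Cinder
            'initiator': iqn,       # iSCSI initiator IQN of the ESX host
            'instance': vm_moref_value}  # vCenter moref of the instance VM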
nova-17.0.1/nova/tests/unit/virt/vmwareapi/__init__.py0000666000175000017500000000000013250073126022730 0ustar zuulzuul00000000000000nova-17.0.1/nova/tests/unit/virt/vmwareapi/ovf.xml0000666000175000017500000000704413250073126022152 0ustar zuulzuul00000000000000
[ovf.xml: OVF descriptor whose XML markup was lost in extraction. The recoverable text describes a "Damn Small Linux" appliance: guest OS "Other Linux (32-bit)", virtual hardware family vmx-07, 1 virtual CPU, 256MB of memory, a "Network adapter 1" (PCNet32, described as "Spinderman network") on the "VM Network" logical network, a "Hard disk 1" backed by ovf:/disk/vmdisk1, and the annotation "Paiadzhina fostata boklici".]
nova-17.0.1/nova/tests/unit/virt/vmwareapi/test_ds_util_datastore_selection.py0000666000175000017500000001630713250073126030027 0ustar zuulzuul00000000000000#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import collections
import re

from oslo_utils import units
from oslo_vmware.objects import datastore as ds_obj

from nova import test
from nova.virt.vmwareapi import ds_util

ResultSet = collections.namedtuple('ResultSet', ['objects'])
ObjectContent = collections.namedtuple('ObjectContent', ['obj', 'propSet'])
DynamicProperty = collections.namedtuple('Property', ['name', 'val'])
MoRef = collections.namedtuple('ManagedObjectReference', ['value'])


class VMwareDSUtilDatastoreSelectionTestCase(test.NoDBTestCase):

    def setUp(self):
        super(VMwareDSUtilDatastoreSelectionTestCase, self).setUp()
        self.data = [
            ['VMFS', 'os-some-name', True, 'normal', 987654321, 12346789],
            ['NFS', 'another-name', True, 'normal', 9876543210, 123467890],
            ['BAD', 'some-name-bad', True, 'normal', 98765432100,
             1234678900],
            ['VMFS', 'some-name-good', False, 'normal', 987654321, 12346789],
            ['VMFS', 'new-name', True, 'inMaintenance', 987654321, 12346789]
        ]

    def build_result_set(self, mock_data, name_list=None):
        # datastores will have a moref_id of ds-000 and
        # so on based on their index in the mock_data list
        if name_list is None:
            name_list = self.propset_name_list

        objects = []
        for id, row in enumerate(mock_data):
            obj = ObjectContent(
                obj=MoRef(value="ds-%03d" % id),
                propSet=[])
            for index, value in enumerate(row):
                obj.propSet.append(
                    DynamicProperty(name=name_list[index], val=row[index]))
            objects.append(obj)
        return ResultSet(objects=objects)

    @property
    def propset_name_list(self):
        return ['summary.type', 'summary.name', 'summary.accessible',
                'summary.maintenanceMode', 'summary.capacity',
                'summary.freeSpace']

    def test_filter_datastores_simple(self):
        datastores = self.build_result_set(self.data)
        best_match = ds_obj.Datastore(ref='fake_ref', name='ds',
                                      capacity=0, freespace=0)
        rec = ds_util._select_datastore(None, datastores, best_match)

        self.assertIsNotNone(rec.ref, "could not find datastore!")
        self.assertEqual('ds-001', rec.ref.value,
                         "didn't find the right datastore!")
        self.assertEqual(123467890, rec.freespace,
                         "did not obtain correct freespace!")

    def test_filter_datastores_empty(self):
        data = []
        datastores = self.build_result_set(data)

        best_match = ds_obj.Datastore(ref='fake_ref', name='ds',
                                      capacity=0, freespace=0)
        rec = ds_util._select_datastore(None, datastores, best_match)

        self.assertEqual(best_match, rec)
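    # Illustrative sketch (not part of the original suite) of the fake
    # property-collector result built by build_result_set above: each
    # ObjectContent pairs a managed object reference with a flat list of
    # (name, val) properties, mirroring what a real vSphere query returns.
    def _sketch_result_set_shape(self):
        row = ['VMFS', 'ds-example', True, 'normal', 100, 50]
        result = self.build_result_set([row])
        obj = result.objects[0]
        # moref ids are assigned from the row's index in the input list
        self.assertEqual('ds-000', obj.obj.value)
        self.assertEqual('summary.name', obj.propSet[1].name)
        self.assertEqual('ds-example', obj.propSet[1].val)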
    def test_filter_datastores_no_match(self):
        datastores = self.build_result_set(self.data)
        datastore_regex = re.compile('no_match.*')

        best_match = ds_obj.Datastore(ref='fake_ref', name='ds',
                                      capacity=0, freespace=0)
        rec = ds_util._select_datastore(None, datastores, best_match,
                                        datastore_regex)

        self.assertEqual(best_match, rec, "did not match datastore properly")

    def test_filter_datastores_specific_match(self):
        data = [
            ['VMFS', 'os-some-name', True, 'normal', 987654321, 1234678],
            ['NFS', 'another-name', True, 'normal', 9876543210, 123467890],
            ['BAD', 'some-name-bad', True, 'normal', 98765432100,
             1234678900],
            ['VMFS', 'some-name-good', True, 'normal', 987654321, 12346789],
            ['VMFS', 'some-other-good', False, 'normal', 987654321000,
             12346789000],
            ['VMFS', 'new-name', True, 'inMaintenance', 987654321000,
             12346789000]
        ]
        # only the DS some-name-good is accessible and matches the regex
        datastores = self.build_result_set(data)
        datastore_regex = re.compile('.*-good$')

        best_match = ds_obj.Datastore(ref='fake_ref', name='ds',
                                      capacity=0, freespace=0)
        rec = ds_util._select_datastore(None, datastores, best_match,
                                        datastore_regex)

        self.assertIsNotNone(rec, "could not find datastore!")
        self.assertEqual('ds-003', rec.ref.value,
                         "didn't find the right datastore!")
        self.assertNotEqual('ds-004', rec.ref.value,
                            "accepted an unreachable datastore!")
        self.assertEqual('some-name-good', rec.name)
        self.assertEqual(12346789, rec.freespace,
                         "did not obtain correct freespace!")
        self.assertEqual(987654321, rec.capacity,
                         "did not obtain correct capacity!")

    def test_filter_datastores_missing_props(self):
        data = [
            ['VMFS', 'os-some-name', 987654321, 1234678],
            ['NFS', 'another-name', 9876543210, 123467890],
        ]
        # no matches are expected when 'summary.accessible' is missing
        prop_names = ['summary.type', 'summary.name',
                      'summary.capacity', 'summary.freeSpace']
        datastores = self.build_result_set(data, prop_names)
        best_match = ds_obj.Datastore(ref='fake_ref', name='ds',
                                      capacity=0, freespace=0)

        rec = ds_util._select_datastore(None, datastores, best_match)
        self.assertEqual(best_match, rec, "no matches were expected")

    def test_filter_datastores_best_match(self):
        data = [
            ['VMFS', 'spam-good', True, 20 * units.Gi, 10 * units.Gi],
            ['NFS', 'eggs-good', True, 40 * units.Gi, 15 * units.Gi],
            ['NFS41', 'nfs41-is-good', True, 35 * units.Gi, 12 * units.Gi],
            ['BAD', 'some-name-bad', True, 30 * units.Gi, 20 * units.Gi],
            ['VMFS', 'some-name-good', True, 50 * units.Gi, 5 * units.Gi],
            ['VMFS', 'some-other-good', True, 10 * units.Gi, 10 * units.Gi],
        ]

        datastores = self.build_result_set(data)
        datastore_regex = re.compile('.*-good$')

        # the current best match is better than all candidates
        best_match = ds_obj.Datastore(ref='ds-100', name='best-ds-good',
                                      capacity=20 * units.Gi,
                                      freespace=19 * units.Gi)
        rec = ds_util._select_datastore(None, datastores, best_match,
                                        datastore_regex)
        self.assertEqual(best_match, rec, "did not match datastore properly")
nova-17.0.1/nova/tests/unit/virt/vmwareapi/test_driver_api.py0000666000175000017500000032515113250073126024375 0ustar zuulzuul00000000000000# Copyright (c) 2013 Hewlett-Packard Development Company, L.P.
# Copyright (c) 2012 VMware, Inc.
# Copyright (c) 2011 Citrix Systems, Inc.
# Copyright 2011 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
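# Illustrative note (not part of the original file): the suite below never
# talks to a real vCenter. It swaps VMwareAPISession._create_session for the
# _fake_create_session helper defined further down, so every test runs
# against the in-memory fakes in nova.tests.unit.virt.vmwareapi.fake. A
# minimal sketch of the pattern, assuming only `mock` and the names used
# later in this module:
#
#     with mock.patch.object(driver.VMwareAPISession, '_create_session',
#                            _fake_create_session):
#         session = driver.VMwareAPISession()   # no network I/O happens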
""" Test suite for VMwareAPI. """ import collections import datetime from eventlet import greenthread import mock from oslo_utils import fixture as utils_fixture from oslo_utils import units from oslo_utils import uuidutils from oslo_vmware import exceptions as vexc from oslo_vmware.objects import datastore as ds_obj from oslo_vmware import pbm from oslo_vmware import vim_util as oslo_vim_util from nova.compute import api as compute_api from nova.compute import power_state from nova.compute import task_states from nova.compute import vm_states import nova.conf from nova import context from nova import exception from nova.image import glance from nova.network import model as network_model from nova import objects from nova.objects import fields from nova import test from nova.tests.unit import fake_diagnostics from nova.tests.unit import fake_instance import nova.tests.unit.image.fake from nova.tests.unit import matchers from nova.tests.unit.objects import test_diagnostics from nova.tests.unit import utils from nova.tests.unit.virt.vmwareapi import fake as vmwareapi_fake from nova.tests.unit.virt.vmwareapi import stubs from nova.tests import uuidsentinel from nova.virt import driver as v_driver from nova.virt.vmwareapi import constants from nova.virt.vmwareapi import driver from nova.virt.vmwareapi import ds_util from nova.virt.vmwareapi import error_util from nova.virt.vmwareapi import imagecache from nova.virt.vmwareapi import images from nova.virt.vmwareapi import vif from nova.virt.vmwareapi import vim_util from nova.virt.vmwareapi import vm_util from nova.virt.vmwareapi import vmops from nova.virt.vmwareapi import volumeops CONF = nova.conf.CONF HOST = 'testhostname' DEFAULT_FLAVORS = [ {'memory_mb': 512, 'root_gb': 1, 'deleted_at': None, 'name': 'm1.tiny', 'deleted': 0, 'created_at': None, 'ephemeral_gb': 0, 'updated_at': None, 'disabled': False, 'vcpus': 1, 'extra_specs': {}, 'swap': 0, 'rxtx_factor': 1.0, 'is_public': True, 'flavorid': '1', 'vcpu_weight': None, 'id': 2}, {'memory_mb': 2048, 'root_gb': 20, 'deleted_at': None, 'name': 'm1.small', 'deleted': 0, 'created_at': None, 'ephemeral_gb': 0, 'updated_at': None, 'disabled': False, 'vcpus': 1, 'extra_specs': {}, 'swap': 0, 'rxtx_factor': 1.0, 'is_public': True, 'flavorid': '2', 'vcpu_weight': None, 'id': 5}, {'memory_mb': 4096, 'root_gb': 40, 'deleted_at': None, 'name': 'm1.medium', 'deleted': 0, 'created_at': None, 'ephemeral_gb': 0, 'updated_at': None, 'disabled': False, 'vcpus': 2, 'extra_specs': {}, 'swap': 0, 'rxtx_factor': 1.0, 'is_public': True, 'flavorid': '3', 'vcpu_weight': None, 'id': 1}, {'memory_mb': 8192, 'root_gb': 80, 'deleted_at': None, 'name': 'm1.large', 'deleted': 0, 'created_at': None, 'ephemeral_gb': 0, 'updated_at': None, 'disabled': False, 'vcpus': 4, 'extra_specs': {}, 'swap': 0, 'rxtx_factor': 1.0, 'is_public': True, 'flavorid': '4', 'vcpu_weight': None, 'id': 3}, {'memory_mb': 16384, 'root_gb': 160, 'deleted_at': None, 'name': 'm1.xlarge', 'deleted': 0, 'created_at': None, 'ephemeral_gb': 0, 'updated_at': None, 'disabled': False, 'vcpus': 8, 'extra_specs': {}, 'swap': 0, 'rxtx_factor': 1.0, 'is_public': True, 'flavorid': '5', 'vcpu_weight': None, 'id': 4} ] CONTEXT = context.RequestContext('fake', 'fake', is_admin=False) DEFAULT_FLAVOR_OBJS = [ objects.Flavor._obj_from_primitive(CONTEXT, objects.Flavor.VERSION, {'nova_object.data': flavor}) for flavor in DEFAULT_FLAVORS ] def _fake_create_session(inst): session = vmwareapi_fake.DataObject() session.key = 'fake_key' session.userName = 'fake_username' 
session._pbm_wsdl_loc = None session._pbm = None inst._session = session class VMwareDriverStartupTestCase(test.NoDBTestCase): def _start_driver_with_flags(self, expected_exception_type, startup_flags): self.flags(**startup_flags) with mock.patch( 'nova.virt.vmwareapi.driver.VMwareAPISession.__init__'): e = self.assertRaises( Exception, driver.VMwareVCDriver, None) # noqa self.assertIs(type(e), expected_exception_type) def test_start_driver_no_user(self): self._start_driver_with_flags( Exception, dict(host_ip='ip', host_password='password', group='vmware')) def test_start_driver_no_host(self): self._start_driver_with_flags( Exception, dict(host_username='username', host_password='password', group='vmware')) def test_start_driver_no_password(self): self._start_driver_with_flags( Exception, dict(host_ip='ip', host_username='username', group='vmware')) def test_start_driver_with_user_host_password(self): # Getting the InvalidInput exception signifies that no exception # is raised regarding missing user/password/host self._start_driver_with_flags( nova.exception.InvalidInput, dict(host_ip='ip', host_password='password', host_username="user", datastore_regex="bad(regex", group='vmware')) class VMwareSessionTestCase(test.NoDBTestCase): @mock.patch.object(driver.VMwareAPISession, '_is_vim_object', return_value=False) def test_call_method(self, mock_is_vim): with test.nested( mock.patch.object(driver.VMwareAPISession, '_create_session', _fake_create_session), mock.patch.object(driver.VMwareAPISession, 'invoke_api'), ) as (fake_create, fake_invoke): session = driver.VMwareAPISession() session._vim = mock.Mock() module = mock.Mock() session._call_method(module, 'fira') fake_invoke.assert_called_once_with(module, 'fira', session._vim) @mock.patch.object(driver.VMwareAPISession, '_is_vim_object', return_value=True) def test_call_method_vim(self, mock_is_vim): with test.nested( mock.patch.object(driver.VMwareAPISession, '_create_session', _fake_create_session), mock.patch.object(driver.VMwareAPISession, 'invoke_api'), ) as (fake_create, fake_invoke): session = driver.VMwareAPISession() module = mock.Mock() session._call_method(module, 'fira') fake_invoke.assert_called_once_with(module, 'fira') class VMwareAPIVMTestCase(test.NoDBTestCase, test_diagnostics.DiagnosticsComparisonMixin): """Unit tests for Vmware API connection calls.""" REQUIRES_LOCKING = True def _create_service(self, **kwargs): service_ref = {'host': kwargs.get('host', 'dummy'), 'disabled': kwargs.get('disabled', False), 'binary': 'nova-compute', 'topic': 'compute', 'report_count': 0, 'forced_down': kwargs.get('forced_down', False)} return objects.Service(**service_ref) @mock.patch.object(driver.VMwareVCDriver, '_register_openstack_extension') def setUp(self, mock_register): super(VMwareAPIVMTestCase, self).setUp() ds_util.dc_cache_reset() vm_util.vm_refs_cache_reset() self.context = context.RequestContext('fake', 'fake', is_admin=False) self.flags(cluster_name='test_cluster', host_ip=HOST, host_username='test_username', host_password='test_pass', api_retry_count=1, use_linked_clone=False, group='vmware') self.flags(enabled=False, group='vnc') self.flags(image_cache_subdirectory_name='vmware_base', my_ip='') self.user_id = 'fake' self.project_id = 'fake' self.context = context.RequestContext(self.user_id, self.project_id) stubs.set_stubs(self) vmwareapi_fake.reset() nova.tests.unit.image.fake.stub_out_image_service(self) service = self._create_service(host=HOST) self.conn = driver.VMwareVCDriver(None, False) 
self.assertFalse(service.disabled) self._set_exception_vars() self.node_name = self.conn._nodename self.ds = 'ds1' self._display_name = 'fake-display-name' self.vim = vmwareapi_fake.FakeVim() # NOTE(vish): none of the network plugging code is actually # being tested self.network_info = utils.get_test_network_info() image_ref = nova.tests.unit.image.fake.get_valid_image_id() (image_service, image_id) = glance.get_remote_image_service( self.context, image_ref) metadata = image_service.show(self.context, image_id) self.image = objects.ImageMeta.from_dict({ 'id': image_ref, 'disk_format': 'vmdk', 'size': int(metadata['size']), }) self.fake_image_uuid = self.image.id nova.tests.unit.image.fake.stub_out_image_service(self) self.vnc_host = 'ha-host' def tearDown(self): super(VMwareAPIVMTestCase, self).tearDown() vmwareapi_fake.cleanup() nova.tests.unit.image.fake.FakeImageService_reset() def test_legacy_block_device_info(self): self.assertFalse(self.conn.need_legacy_block_device_info) def test_get_host_ip_addr(self): self.assertEqual(HOST, self.conn.get_host_ip_addr()) def test_init_host_with_no_session(self): self.conn._session = mock.Mock() self.conn._session.vim = None self.conn.init_host('fake_host') self.conn._session._create_session.assert_called_once_with() def test_init_host(self): try: self.conn.init_host("fake_host") except Exception as ex: self.fail("init_host raised: %s" % ex) def _set_exception_vars(self): self.wait_task = self.conn._session._wait_for_task self.call_method = self.conn._session._call_method self.task_ref = None self.exception = False def test_cleanup_host(self): self.conn.init_host("fake_host") try: self.conn.cleanup_host("fake_host") except Exception as ex: self.fail("cleanup_host raised: %s" % ex) def test_driver_capabilities(self): self.assertTrue(self.conn.capabilities['has_imagecache']) self.assertFalse(self.conn.capabilities['supports_recreate']) self.assertTrue( self.conn.capabilities['supports_migrate_to_same_host']) @mock.patch.object(pbm, 'get_profile_id_by_name') def test_configuration_pbm(self, get_profile_mock): get_profile_mock.return_value = 'fake-profile' self.flags(pbm_enabled=True, pbm_default_policy='fake-policy', pbm_wsdl_location='fake-location', group='vmware') self.conn._validate_configuration() @mock.patch.object(pbm, 'get_profile_id_by_name') def test_configuration_pbm_bad_default(self, get_profile_mock): get_profile_mock.return_value = None self.flags(pbm_enabled=True, pbm_wsdl_location='fake-location', pbm_default_policy='fake-policy', group='vmware') self.assertRaises(error_util.PbmDefaultPolicyDoesNotExist, self.conn._validate_configuration) def test_login_retries(self): self.attempts = 0 self.login_session = vmwareapi_fake.FakeVim()._login() def _fake_login(_self): self.attempts += 1 if self.attempts == 1: raise vexc.VimConnectionException('Here is my fake exception') return self.login_session def _fake_check_session(_self): return True self.stub_out('nova.tests.unit.virt.vmwareapi.fake.FakeVim._login', _fake_login) self.stub_out('nova.tests.unit.virt.vmwareapi.' 
'fake.FakeVim._check_session', _fake_check_session) with mock.patch.object(greenthread, 'sleep'): self.conn = driver.VMwareAPISession() self.assertEqual(2, self.attempts) def _get_instance_type_by_name(self, type): for instance_type in DEFAULT_FLAVOR_OBJS: if instance_type.name == type: return instance_type if type == 'm1.micro': return {'memory_mb': 128, 'root_gb': 0, 'deleted_at': None, 'name': 'm1.micro', 'deleted': 0, 'created_at': None, 'ephemeral_gb': 0, 'updated_at': None, 'disabled': False, 'vcpus': 1, 'extra_specs': {}, 'swap': 0, 'rxtx_factor': 1.0, 'is_public': True, 'flavorid': '1', 'vcpu_weight': None, 'id': 2} def _create_instance(self, node=None, set_image_ref=True, uuid=None, instance_type='m1.large', ephemeral=None, instance_type_updates=None): if not node: node = self.node_name if not uuid: uuid = uuidutils.generate_uuid() self.type_data = dict(self._get_instance_type_by_name(instance_type)) if instance_type_updates: self.type_data.update(instance_type_updates) if ephemeral is not None: self.type_data['ephemeral_gb'] = ephemeral values = {'name': 'fake_name', 'display_name': self._display_name, 'id': 1, 'uuid': uuid, 'project_id': self.project_id, 'user_id': self.user_id, 'kernel_id': "fake_kernel_uuid", 'ramdisk_id': "fake_ramdisk_uuid", 'mac_address': "de:ad:be:ef:be:ef", 'flavor': objects.Flavor(**self.type_data), 'node': node, 'memory_mb': self.type_data['memory_mb'], 'root_gb': self.type_data['root_gb'], 'ephemeral_gb': self.type_data['ephemeral_gb'], 'vcpus': self.type_data['vcpus'], 'swap': self.type_data['swap'], 'expected_attrs': ['system_metadata'], } if set_image_ref: values['image_ref'] = self.fake_image_uuid self.instance_node = node self.uuid = uuid self.instance = fake_instance.fake_instance_obj( self.context, **values) def _create_vm(self, node=None, num_instances=1, uuid=None, instance_type='m1.large', powered_on=True, ephemeral=None, bdi=None, instance_type_updates=None): """Create and spawn the VM.""" if not node: node = self.node_name self._create_instance(node=node, uuid=uuid, instance_type=instance_type, ephemeral=ephemeral, instance_type_updates=instance_type_updates) self.assertIsNone(vm_util.vm_ref_cache_get(self.uuid)) self.conn.spawn(self.context, self.instance, self.image, injected_files=[], admin_password=None, allocations={}, network_info=self.network_info, block_device_info=bdi) self._check_vm_record(num_instances=num_instances, powered_on=powered_on, uuid=uuid) self.assertIsNotNone(vm_util.vm_ref_cache_get(self.uuid)) def _get_vm_record(self): # Get record for VM vms = vmwareapi_fake._get_objects("VirtualMachine") for vm in vms.objects: if vm.get('name') == vm_util._get_vm_name(self._display_name, self.uuid): return vm self.fail('Unable to find VM backing!') def _get_info(self, uuid=None, node=None, name=None): uuid = uuid if uuid else self.uuid node = node if node else self.instance_node name = name if node else '1' return self.conn.get_info(fake_instance.fake_instance_obj( None, **{'uuid': uuid, 'name': name, 'node': node})) def _check_vm_record(self, num_instances=1, powered_on=True, uuid=None): """Check if the spawned VM's properties correspond to the instance in the db. """ instances = self.conn.list_instances() if uuidutils.is_uuid_like(uuid): self.assertEqual(num_instances, len(instances)) # Get Nova record for VM vm_info = self._get_info() vm = self._get_vm_record() # Check that m1.large above turned into the right thing. 
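# Illustrative aside (not part of the original test): the assertions below
# read properties back from the fake vSphere VM and compare them with the
# flavor used at spawn time. In sketch form (attribute names are the
# flavor fields, not new API):
#     assert vm.get("summary.config.numCpu") == flavor.vcpus
#     assert vm.get("summary.config.memorySizeMB") == flavor.memory_mb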
vcpus = self.type_data['vcpus'] self.assertEqual(vm.get("summary.config.instanceUuid"), self.uuid) self.assertEqual(vm.get("summary.config.numCpu"), vcpus) self.assertEqual(vm.get("summary.config.memorySizeMB"), self.type_data['memory_mb']) self.assertEqual("ns0:VirtualE1000", vm.get("config.hardware.device").VirtualDevice[2].obj_name) if powered_on: # Check that the VM is running according to Nova self.assertEqual(power_state.RUNNING, vm_info.state) # Check that the VM is running according to vSphere API. self.assertEqual('poweredOn', vm.get("runtime.powerState")) else: # Check that the VM is not running according to Nova self.assertEqual(power_state.SHUTDOWN, vm_info.state) # Check that the VM is not running according to vSphere API. self.assertEqual('poweredOff', vm.get("runtime.powerState")) found_vm_uuid = False found_iface_id = False extras = vm.get("config.extraConfig") for c in extras.OptionValue: if (c.key == "nvp.vm-uuid" and c.value == self.instance['uuid']): found_vm_uuid = True if (c.key == "nvp.iface-id.0" and c.value == utils.FAKE_VIF_UUID): found_iface_id = True self.assertTrue(found_vm_uuid) self.assertTrue(found_iface_id) def _check_vm_info(self, info, pwr_state=power_state.RUNNING): """Check if the get_info returned values correspond to the instance object in the db. """ self.assertEqual(info.state, pwr_state) def test_instance_exists(self): self._create_vm() self.assertTrue(self.conn.instance_exists(self.instance)) invalid_instance = fake_instance.fake_instance_obj( None, uuid=uuidsentinel.foo, name='bar', node=self.node_name) self.assertFalse(self.conn.instance_exists(invalid_instance)) def test_list_instances_1(self): self._create_vm() instances = self.conn.list_instances() self.assertEqual(1, len(instances)) def test_list_instance_uuids(self): self._create_vm() uuids = self.conn.list_instance_uuids() self.assertEqual(1, len(uuids)) def _cached_files_exist(self, exists=True): cache = ds_obj.DatastorePath(self.ds, 'vmware_base', self.fake_image_uuid, '%s.vmdk' % self.fake_image_uuid) if exists: vmwareapi_fake.assertPathExists(self, str(cache)) else: vmwareapi_fake.assertPathNotExists(self, str(cache)) @mock.patch.object(nova.virt.vmwareapi.images.VMwareImage, 'from_image') def test_instance_dir_disk_created(self, mock_from_image): """Test image file is cached when even when use_linked_clone is False """ img_props = images.VMwareImage( image_id=self.fake_image_uuid, linked_clone=False) mock_from_image.return_value = img_props self._create_vm() path = ds_obj.DatastorePath(self.ds, self.uuid, '%s.vmdk' % self.uuid) vmwareapi_fake.assertPathExists(self, str(path)) self._cached_files_exist() @mock.patch.object(nova.virt.vmwareapi.images.VMwareImage, 'from_image') def test_cache_dir_disk_created(self, mock_from_image): """Test image disk is cached when use_linked_clone is True.""" self.flags(use_linked_clone=True, group='vmware') img_props = images.VMwareImage( image_id=self.fake_image_uuid, file_size=1 * units.Ki, disk_type=constants.DISK_TYPE_SPARSE) mock_from_image.return_value = img_props self._create_vm() path = ds_obj.DatastorePath(self.ds, 'vmware_base', self.fake_image_uuid, '%s.vmdk' % self.fake_image_uuid) root = ds_obj.DatastorePath(self.ds, 'vmware_base', self.fake_image_uuid, '%s.80.vmdk' % self.fake_image_uuid) vmwareapi_fake.assertPathExists(self, str(path)) vmwareapi_fake.assertPathExists(self, str(root)) def _iso_disk_type_created(self, instance_type='m1.large'): self.image.disk_format = 'iso' self._create_vm(instance_type=instance_type) path = 
ds_obj.DatastorePath(self.ds, 'vmware_base', self.fake_image_uuid, '%s.iso' % self.fake_image_uuid) vmwareapi_fake.assertPathExists(self, str(path)) def test_iso_disk_type_created(self): self._iso_disk_type_created() path = ds_obj.DatastorePath(self.ds, self.uuid, '%s.vmdk' % self.uuid) vmwareapi_fake.assertPathExists(self, str(path)) def test_iso_disk_type_created_with_root_gb_0(self): self._iso_disk_type_created(instance_type='m1.micro') path = ds_obj.DatastorePath(self.ds, self.uuid, '%s.vmdk' % self.uuid) vmwareapi_fake.assertPathNotExists(self, str(path)) def test_iso_disk_cdrom_attach(self): iso_path = ds_obj.DatastorePath(self.ds, 'vmware_base', self.fake_image_uuid, '%s.iso' % self.fake_image_uuid) def fake_attach_cdrom(vm_ref, instance, data_store_ref, iso_uploaded_path): self.assertEqual(iso_uploaded_path, str(iso_path)) self.stub_out('nova.virt.vmwareapi.vmops._attach_cdrom_to_vm', fake_attach_cdrom) self.image.disk_format = 'iso' self._create_vm() @mock.patch.object(nova.virt.vmwareapi.images.VMwareImage, 'from_image') def test_iso_disk_cdrom_attach_with_config_drive(self, mock_from_image): img_props = images.VMwareImage( image_id=self.fake_image_uuid, file_size=80 * units.Gi, file_type='iso', linked_clone=False) mock_from_image.return_value = img_props self.flags(force_config_drive=True) iso_path = [ ds_obj.DatastorePath(self.ds, 'vmware_base', self.fake_image_uuid, '%s.iso' % self.fake_image_uuid), ds_obj.DatastorePath(self.ds, 'fake-config-drive')] self.iso_index = 0 def fake_attach_cdrom(vm_ref, instance, data_store_ref, iso_uploaded_path): self.assertEqual(iso_uploaded_path, str(iso_path[self.iso_index])) self.iso_index += 1 with test.nested( mock.patch.object(self.conn._vmops, '_attach_cdrom_to_vm', side_effect=fake_attach_cdrom), mock.patch.object(self.conn._vmops, '_create_config_drive', return_value='fake-config-drive'), ) as (fake_attach_cdrom_to_vm, fake_create_config_drive): self.image.disk_format = 'iso' self._create_vm() self.assertEqual(2, self.iso_index) self.assertEqual(fake_attach_cdrom_to_vm.call_count, 2) self.assertEqual(fake_create_config_drive.call_count, 1) def test_ephemeral_disk_attach(self): self._create_vm(ephemeral=50) path = ds_obj.DatastorePath(self.ds, self.uuid, 'ephemeral_0.vmdk') vmwareapi_fake.assertPathExists(self, str(path)) def test_ephemeral_disk_attach_from_bdi(self): ephemerals = [{'device_type': 'disk', 'disk_bus': constants.DEFAULT_ADAPTER_TYPE, 'size': 25}, {'device_type': 'disk', 'disk_bus': constants.DEFAULT_ADAPTER_TYPE, 'size': 25}] bdi = {'ephemerals': ephemerals} self._create_vm(bdi=bdi, ephemeral=50) path = ds_obj.DatastorePath(self.ds, self.uuid, 'ephemeral_0.vmdk') vmwareapi_fake.assertPathExists(self, str(path)) path = ds_obj.DatastorePath(self.ds, self.uuid, 'ephemeral_1.vmdk') vmwareapi_fake.assertPathExists(self, str(path)) def test_ephemeral_disk_attach_from_bdii_with_no_ephs(self): bdi = {'ephemerals': []} self._create_vm(bdi=bdi, ephemeral=50) path = ds_obj.DatastorePath(self.ds, self.uuid, 'ephemeral_0.vmdk') vmwareapi_fake.assertPathExists(self, str(path)) def test_cdrom_attach_with_config_drive(self): self.flags(force_config_drive=True) iso_path = ds_obj.DatastorePath(self.ds, 'fake-config-drive') self.cd_attach_called = False def fake_attach_cdrom(vm_ref, instance, data_store_ref, iso_uploaded_path): self.assertEqual(iso_uploaded_path, str(iso_path)) self.cd_attach_called = True with test.nested( mock.patch.object(self.conn._vmops, '_attach_cdrom_to_vm', side_effect=fake_attach_cdrom), 
mock.patch.object(self.conn._vmops, '_create_config_drive', return_value='fake-config-drive'), ) as (fake_attach_cdrom_to_vm, fake_create_config_drive): self._create_vm() self.assertTrue(self.cd_attach_called) @mock.patch.object(vmops.VMwareVMOps, 'power_off') @mock.patch.object(driver.VMwareVCDriver, 'detach_volume') @mock.patch.object(vmops.VMwareVMOps, 'destroy') def test_destroy_with_attached_volumes(self, mock_destroy, mock_detach_volume, mock_power_off): self._create_vm() connection_info = {'data': 'fake-data', 'serial': 'volume-fake-id'} bdm = [{'connection_info': connection_info, 'disk_bus': 'fake-bus', 'device_name': 'fake-name', 'mount_device': '/dev/sdb'}] bdi = {'block_device_mapping': bdm, 'root_device_name': '/dev/sda'} self.assertNotEqual(vm_states.STOPPED, self.instance.vm_state) self.conn.destroy(self.context, self.instance, self.network_info, block_device_info=bdi) mock_power_off.assert_called_once_with(self.instance) mock_detach_volume.assert_called_once_with( connection_info, self.instance, 'fake-name') mock_destroy.assert_called_once_with(self.instance, True) @mock.patch.object(vmops.VMwareVMOps, 'power_off', side_effect=vexc.ManagedObjectNotFoundException()) @mock.patch.object(vmops.VMwareVMOps, 'destroy') def test_destroy_with_attached_volumes_missing(self, mock_destroy, mock_power_off): self._create_vm() connection_info = {'data': 'fake-data', 'serial': 'volume-fake-id'} bdm = [{'connection_info': connection_info, 'disk_bus': 'fake-bus', 'device_name': 'fake-name', 'mount_device': '/dev/sdb'}] bdi = {'block_device_mapping': bdm, 'root_device_name': '/dev/sda'} self.assertNotEqual(vm_states.STOPPED, self.instance.vm_state) self.conn.destroy(self.context, self.instance, self.network_info, block_device_info=bdi) mock_power_off.assert_called_once_with(self.instance) mock_destroy.assert_called_once_with(self.instance, True) @mock.patch.object(driver.VMwareVCDriver, 'detach_volume', side_effect=exception.NovaException()) @mock.patch.object(vmops.VMwareVMOps, 'destroy') def test_destroy_with_attached_volumes_with_exception( self, mock_destroy, mock_detach_volume): self._create_vm() connection_info = {'data': 'fake-data', 'serial': 'volume-fake-id'} bdm = [{'connection_info': connection_info, 'disk_bus': 'fake-bus', 'device_name': 'fake-name', 'mount_device': '/dev/sdb'}] bdi = {'block_device_mapping': bdm, 'root_device_name': '/dev/sda'} self.assertRaises(exception.NovaException, self.conn.destroy, self.context, self.instance, self.network_info, block_device_info=bdi) mock_detach_volume.assert_called_once_with( connection_info, self.instance, 'fake-name') self.assertFalse(mock_destroy.called) @mock.patch.object(driver.VMwareVCDriver, 'detach_volume', side_effect=exception.DiskNotFound(message='oh man')) @mock.patch.object(vmops.VMwareVMOps, 'destroy') def test_destroy_with_attached_volumes_with_disk_not_found( self, mock_destroy, mock_detach_volume): self._create_vm() connection_info = {'data': 'fake-data', 'serial': 'volume-fake-id'} bdm = [{'connection_info': connection_info, 'disk_bus': 'fake-bus', 'device_name': 'fake-name', 'mount_device': '/dev/sdb'}] bdi = {'block_device_mapping': bdm, 'root_device_name': '/dev/sda'} self.conn.destroy(self.context, self.instance, self.network_info, block_device_info=bdi) mock_detach_volume.assert_called_once_with( connection_info, self.instance, 'fake-name') self.assertTrue(mock_destroy.called) mock_destroy.assert_called_once_with(self.instance, True) def test_spawn(self): self._create_vm() info = self._get_info() 
self._check_vm_info(info, power_state.RUNNING) def test_spawn_vm_ref_cached(self): uuid = uuidutils.generate_uuid() self.assertIsNone(vm_util.vm_ref_cache_get(uuid)) self._create_vm(uuid=uuid) self.assertIsNotNone(vm_util.vm_ref_cache_get(uuid)) def test_spawn_power_on(self): self._create_vm() info = self._get_info() self._check_vm_info(info, power_state.RUNNING) def test_spawn_root_size_0(self): self._create_vm(instance_type='m1.micro') info = self._get_info() self._check_vm_info(info, power_state.RUNNING) cache = ('[%s] vmware_base/%s/%s.vmdk' % (self.ds, self.fake_image_uuid, self.fake_image_uuid)) gb_cache = ('[%s] vmware_base/%s/%s.0.vmdk' % (self.ds, self.fake_image_uuid, self.fake_image_uuid)) vmwareapi_fake.assertPathExists(self, cache) vmwareapi_fake.assertPathNotExists(self, gb_cache) def _spawn_with_delete_exception(self, fault=None): def fake_call_method(module, method, *args, **kwargs): task_ref = self.call_method(module, method, *args, **kwargs) if method == "DeleteDatastoreFile_Task": self.exception = True task_mdo = vmwareapi_fake.create_task(method, "error", error_fault=fault) return task_mdo.obj return task_ref with ( mock.patch.object(self.conn._session, '_call_method', fake_call_method) ): if fault: self._create_vm() info = self._get_info() self._check_vm_info(info, power_state.RUNNING) else: self.assertRaises(vexc.VMwareDriverException, self._create_vm) self.assertTrue(self.exception) def test_spawn_with_delete_exception_not_found(self): self._spawn_with_delete_exception(vmwareapi_fake.FileNotFound()) def test_spawn_with_delete_exception_file_fault(self): self._spawn_with_delete_exception(vmwareapi_fake.FileFault()) def test_spawn_with_delete_exception_cannot_delete_file(self): self._spawn_with_delete_exception(vmwareapi_fake.CannotDeleteFile()) def test_spawn_with_delete_exception_file_locked(self): self._spawn_with_delete_exception(vmwareapi_fake.FileLocked()) def test_spawn_with_delete_exception_general(self): self._spawn_with_delete_exception() @mock.patch.object(vmops.VMwareVMOps, '_extend_virtual_disk') def test_spawn_disk_extend(self, mock_extend): requested_size = 80 * units.Mi self._create_vm() info = self._get_info() self._check_vm_info(info, power_state.RUNNING) mock_extend.assert_called_once_with(mock.ANY, requested_size, mock.ANY, mock.ANY) def test_spawn_disk_extend_exists(self): root = ds_obj.DatastorePath(self.ds, 'vmware_base', self.fake_image_uuid, '%s.80.vmdk' % self.fake_image_uuid) def _fake_extend(instance, requested_size, name, dc_ref): vmwareapi_fake._add_file(str(root)) with test.nested( mock.patch.object(self.conn._vmops, '_extend_virtual_disk', side_effect=_fake_extend) ) as (fake_extend_virtual_disk): self._create_vm() info = self._get_info() self._check_vm_info(info, power_state.RUNNING) vmwareapi_fake.assertPathExists(self, str(root)) self.assertEqual(1, fake_extend_virtual_disk[0].call_count) @mock.patch.object(nova.virt.vmwareapi.images.VMwareImage, 'from_image') def test_spawn_disk_extend_sparse(self, mock_from_image): img_props = images.VMwareImage( image_id=self.fake_image_uuid, file_size=units.Ki, disk_type=constants.DISK_TYPE_SPARSE, linked_clone=True) mock_from_image.return_value = img_props with test.nested( mock.patch.object(self.conn._vmops, '_extend_virtual_disk'), mock.patch.object(self.conn._vmops, 'get_datacenter_ref_and_name'), ) as (mock_extend, mock_get_dc): dc_val = mock.Mock() dc_val.ref = "fake_dc_ref" dc_val.name = "dc1" mock_get_dc.return_value = dc_val self._create_vm() iid = img_props.image_id cached_image = 
ds_obj.DatastorePath(self.ds, 'vmware_base', iid, '%s.80.vmdk' % iid) mock_extend.assert_called_once_with( self.instance, self.instance.flavor.root_gb * units.Mi, str(cached_image), "fake_dc_ref") def test_spawn_disk_extend_failed_copy(self): # Spawn instance # copy for extend fails without creating a file # # Expect the copy error to be raised self.flags(use_linked_clone=True, group='vmware') CopyError = vexc.FileFaultException def fake_wait_for_task(task_ref): if task_ref == 'fake-copy-task': raise CopyError('Copy failed!') return self.wait_task(task_ref) def fake_call_method(module, method, *args, **kwargs): if method == "CopyVirtualDisk_Task": return 'fake-copy-task' return self.call_method(module, method, *args, **kwargs) with test.nested( mock.patch.object(self.conn._session, '_call_method', new=fake_call_method), mock.patch.object(self.conn._session, '_wait_for_task', new=fake_wait_for_task)): self.assertRaises(CopyError, self._create_vm) def test_spawn_disk_extend_failed_partial_copy(self): # Spawn instance # Copy for extend fails, leaving a file behind # # Expect the file to be cleaned up # Expect the copy error to be raised self.flags(use_linked_clone=True, group='vmware') self.task_ref = None uuid = self.fake_image_uuid cached_image = '[%s] vmware_base/%s/%s.80.vmdk' % (self.ds, uuid, uuid) CopyError = vexc.FileFaultException def fake_wait_for_task(task_ref): if task_ref == self.task_ref: self.task_ref = None vmwareapi_fake.assertPathExists(self, cached_image) # N.B. We don't test for -flat here because real # CopyVirtualDisk_Task doesn't actually create it raise CopyError('Copy failed!') return self.wait_task(task_ref) def fake_call_method(module, method, *args, **kwargs): task_ref = self.call_method(module, method, *args, **kwargs) if method == "CopyVirtualDisk_Task": self.task_ref = task_ref return task_ref with test.nested( mock.patch.object(self.conn._session, '_call_method', new=fake_call_method), mock.patch.object(self.conn._session, '_wait_for_task', new=fake_wait_for_task)): self.assertRaises(CopyError, self._create_vm) vmwareapi_fake.assertPathNotExists(self, cached_image) def test_spawn_disk_extend_failed_partial_copy_failed_cleanup(self): # Spawn instance # Copy for extend fails, leaves file behind # File cleanup fails # # Expect file to be left behind # Expect file cleanup error to be raised self.flags(use_linked_clone=True, group='vmware') self.task_ref = None uuid = self.fake_image_uuid cached_image = '[%s] vmware_base/%s/%s.80.vmdk' % (self.ds, uuid, uuid) CopyError = vexc.FileFaultException DeleteError = vexc.CannotDeleteFileException def fake_wait_for_task(task_ref): if task_ref == self.task_ref: self.task_ref = None vmwareapi_fake.assertPathExists(self, cached_image) # N.B. 
We don't test for -flat here because real # CopyVirtualDisk_Task doesn't actually create it raise CopyError('Copy failed!') elif task_ref == 'fake-delete-task': raise DeleteError('Delete failed!') return self.wait_task(task_ref) def fake_call_method(module, method, *args, **kwargs): if method == "DeleteDatastoreFile_Task": return 'fake-delete-task' task_ref = self.call_method(module, method, *args, **kwargs) if method == "CopyVirtualDisk_Task": self.task_ref = task_ref return task_ref with test.nested( mock.patch.object(self.conn._session, '_wait_for_task', new=fake_wait_for_task), mock.patch.object(self.conn._session, '_call_method', new=fake_call_method)): self.assertRaises(DeleteError, self._create_vm) vmwareapi_fake.assertPathExists(self, cached_image) @mock.patch.object(nova.virt.vmwareapi.images.VMwareImage, 'from_image') def test_spawn_disk_invalid_disk_size(self, mock_from_image): img_props = images.VMwareImage( image_id=self.fake_image_uuid, file_size=82 * units.Gi, disk_type=constants.DISK_TYPE_SPARSE, linked_clone=True) mock_from_image.return_value = img_props self.assertRaises(exception.InstanceUnacceptable, self._create_vm) @mock.patch.object(nova.virt.vmwareapi.images.VMwareImage, 'from_image') def test_spawn_disk_extend_insufficient_disk_space(self, mock_from_image): img_props = images.VMwareImage( image_id=self.fake_image_uuid, file_size=1024, disk_type=constants.DISK_TYPE_SPARSE, linked_clone=True) mock_from_image.return_value = img_props cached_image = ds_obj.DatastorePath(self.ds, 'vmware_base', self.fake_image_uuid, '%s.80.vmdk' % self.fake_image_uuid) tmp_file = ds_obj.DatastorePath(self.ds, 'vmware_base', self.fake_image_uuid, '%s.80-flat.vmdk' % self.fake_image_uuid) NoDiskSpace = vexc.get_fault_class('NoDiskSpace') def fake_wait_for_task(task_ref): if task_ref == self.task_ref: self.task_ref = None raise NoDiskSpace() return self.wait_task(task_ref) def fake_call_method(module, method, *args, **kwargs): task_ref = self.call_method(module, method, *args, **kwargs) if method == 'ExtendVirtualDisk_Task': self.task_ref = task_ref return task_ref with test.nested( mock.patch.object(self.conn._session, '_wait_for_task', fake_wait_for_task), mock.patch.object(self.conn._session, '_call_method', fake_call_method) ) as (mock_wait_for_task, mock_call_method): self.assertRaises(NoDiskSpace, self._create_vm) vmwareapi_fake.assertPathNotExists(self, str(cached_image)) vmwareapi_fake.assertPathNotExists(self, str(tmp_file)) def test_spawn_with_move_file_exists_exception(self): # The test will validate that the spawn completes # successfully. The "MoveDatastoreFile_Task" will # raise an file exists exception. The flag # self.exception will be checked to see that # the exception has indeed been raised. 
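# Illustrative note (not part of the original test): several tests in this
# file share the same fault-injection shape -- wrap the real session calls,
# remember the task ref returned by the vSphere method under test, then
# raise from the matching _wait_for_task. A condensed sketch of the
# pattern (the method and fault names here are placeholders):
#
#     def fake_call_method(module, method, *args, **kwargs):
#         task_ref = self.call_method(module, method, *args, **kwargs)
#         if method == "SomeVSphere_Task":
#             self.task_ref = task_ref       # remember it for the waiter
#         return task_ref
#
#     def fake_wait_for_task(task_ref):
#         if task_ref == self.task_ref:
#             raise SomeFault()              # inject the failure here
#         return self.wait_task(task_ref)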
def fake_wait_for_task(task_ref): if task_ref == self.task_ref: self.task_ref = None self.exception = True raise vexc.FileAlreadyExistsException() return self.wait_task(task_ref) def fake_call_method(module, method, *args, **kwargs): task_ref = self.call_method(module, method, *args, **kwargs) if method == "MoveDatastoreFile_Task": self.task_ref = task_ref return task_ref with test.nested( mock.patch.object(self.conn._session, '_wait_for_task', fake_wait_for_task), mock.patch.object(self.conn._session, '_call_method', fake_call_method) ) as (_wait_for_task, _call_method): self._create_vm() info = self._get_info() self._check_vm_info(info, power_state.RUNNING) self.assertTrue(self.exception) def test_spawn_with_move_general_exception(self): # The test will validate that the spawn completes # successfully. The "MoveDatastoreFile_Task" will # raise a general exception. The flag self.exception # will be checked to see that the exception has # indeed been raised. def fake_wait_for_task(task_ref): if task_ref == self.task_ref: self.task_ref = None self.exception = True raise vexc.VMwareDriverException('Exception!') return self.wait_task(task_ref) def fake_call_method(module, method, *args, **kwargs): task_ref = self.call_method(module, method, *args, **kwargs) if method == "MoveDatastoreFile_Task": self.task_ref = task_ref return task_ref with test.nested( mock.patch.object(self.conn._session, '_wait_for_task', fake_wait_for_task), mock.patch.object(self.conn._session, '_call_method', fake_call_method) ) as (_wait_for_task, _call_method): self.assertRaises(vexc.VMwareDriverException, self._create_vm) self.assertTrue(self.exception) def test_spawn_with_move_poll_exception(self): self.call_method = self.conn._session._call_method def fake_call_method(module, method, *args, **kwargs): task_ref = self.call_method(module, method, *args, **kwargs) if method == "MoveDatastoreFile_Task": task_mdo = vmwareapi_fake.create_task(method, "error") return task_mdo.obj return task_ref with ( mock.patch.object(self.conn._session, '_call_method', fake_call_method) ): self.assertRaises(vexc.VMwareDriverException, self._create_vm) def test_spawn_with_move_file_exists_poll_exception(self): # The test will validate that the spawn completes # successfully. The "MoveDatastoreFile_Task" will # raise a file exists exception. The flag self.exception # will be checked to see that the exception has # indeed been raised. def fake_call_method(module, method, *args, **kwargs): task_ref = self.call_method(module, method, *args, **kwargs) if method == "MoveDatastoreFile_Task": self.exception = True task_mdo = vmwareapi_fake.create_task(method, "error", error_fault=vmwareapi_fake.FileAlreadyExists()) return task_mdo.obj return task_ref with ( mock.patch.object(self.conn._session, '_call_method', fake_call_method) ): self._create_vm() info = self._get_info() self._check_vm_info(info, power_state.RUNNING) self.assertTrue(self.exception) @mock.patch.object(vm_util, 'relocate_vm') @mock.patch('nova.virt.vmwareapi.volumeops.VMwareVolumeOps.' 'attach_volume') @mock.patch('nova.virt.vmwareapi.volumeops.VMwareVolumeOps.' 
'_get_res_pool_of_vm') @mock.patch('nova.block_device.volume_in_mapping') @mock.patch('nova.virt.driver.block_device_info_get_mapping') def _spawn_attach_volume_vmdk(self, mock_info_get_mapping, mock_volume_in_mapping, mock_get_res_pool_of_vm, mock_attach_volume, mock_relocate_vm, set_image_ref=True): self._create_instance(set_image_ref=set_image_ref) connection_info = self._test_vmdk_connection_info('vmdk') root_disk = [{'connection_info': connection_info, 'boot_index': 0}] mock_info_get_mapping.return_value = root_disk mock_get_res_pool_of_vm.return_value = 'fake_res_pool' block_device_info = {'block_device_mapping': root_disk} self.conn.spawn(self.context, self.instance, self.image, injected_files=[], admin_password=None, allocations={}, network_info=self.network_info, block_device_info=block_device_info) mock_info_get_mapping.assert_called_once_with(mock.ANY) mock_get_res_pool_of_vm.assert_called_once_with(mock.ANY) mock_relocate_vm.assert_called_once_with(mock.ANY, mock.ANY, 'fake_res_pool', mock.ANY) mock_attach_volume.assert_called_once_with(connection_info, self.instance, constants.DEFAULT_ADAPTER_TYPE) @mock.patch('nova.virt.vmwareapi.volumeops.VMwareVolumeOps.' 'attach_volume') @mock.patch('nova.block_device.volume_in_mapping') @mock.patch('nova.virt.driver.block_device_info_get_mapping') def test_spawn_attach_volume_iscsi(self, mock_info_get_mapping, mock_block_volume_in_mapping, mock_attach_volume): self._create_instance() connection_info = self._test_vmdk_connection_info('iscsi') root_disk = [{'connection_info': connection_info, 'boot_index': 0}] mock_info_get_mapping.return_value = root_disk block_device_info = {'mount_device': 'vda'} self.conn.spawn(self.context, self.instance, self.image, injected_files=[], admin_password=None, allocations={}, network_info=self.network_info, block_device_info=block_device_info) mock_info_get_mapping.assert_called_once_with(mock.ANY) mock_attach_volume.assert_called_once_with(connection_info, self.instance, constants.DEFAULT_ADAPTER_TYPE) def test_spawn_hw_versions(self): updates = {'extra_specs': {'vmware:hw_version': 'vmx-08'}} self._create_vm(instance_type_updates=updates) vm = self._get_vm_record() version = vm.get("version") self.assertEqual('vmx-08', version) def mock_upload_image(self, context, image, instance, session, **kwargs): self.assertEqual('Test-Snapshot', image) self.assertEqual(self.instance, instance) self.assertEqual(1024, kwargs['vmdk_size']) def test_get_vm_ref_using_extra_config(self): self._create_vm() vm_ref = vm_util._get_vm_ref_from_extraconfig(self.conn._session, self.instance['uuid']) self.assertIsNotNone(vm_ref, 'VM Reference cannot be none') # Disrupt the fake Virtual Machine object so that extraConfig # cannot be matched. fake_vm = self._get_vm_record() fake_vm.get('config.extraConfig["nvp.vm-uuid"]').value = "" # We should not get a Virtual Machine through extraConfig. vm_ref = vm_util._get_vm_ref_from_extraconfig(self.conn._session, self.instance['uuid']) self.assertIsNone(vm_ref, 'VM Reference should be none') # Check if we can find the Virtual Machine using the name. 
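# Illustrative note (not part of the original test): as exercised above,
# looking up a VM appears to fall back through several strategies -- first
# the 'nvp.vm-uuid' extraConfig value, and, once that entry is blanked out,
# the name-based lookup that the call below is expected to take.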
vm_ref = vm_util.get_vm_ref(self.conn._session, self.instance) self.assertIsNotNone(vm_ref, 'VM Reference cannot be none') def test_search_vm_ref_by_identifier(self): self._create_vm() vm_ref = vm_util.search_vm_ref_by_identifier(self.conn._session, self.instance['uuid']) self.assertIsNotNone(vm_ref, 'VM Reference cannot be none') fake_vm = self._get_vm_record() fake_vm.set("summary.config.instanceUuid", "foo") fake_vm.set("name", "foo") fake_vm.get('config.extraConfig["nvp.vm-uuid"]').value = "foo" self.assertIsNone(vm_util.search_vm_ref_by_identifier( self.conn._session, self.instance['uuid']), "VM Reference should be none") self.assertIsNotNone( vm_util.search_vm_ref_by_identifier(self.conn._session, "foo"), "VM Reference should not be none") def test_get_object_for_optionvalue(self): self._create_vm() vms = self.conn._session._call_method(vim_util, "get_objects", "VirtualMachine", ['config.extraConfig["nvp.vm-uuid"]']) vm_ref = vm_util._get_object_for_optionvalue(vms, self.instance["uuid"]) self.assertIsNotNone(vm_ref, 'VM Reference cannot be none') def _test_snapshot(self): expected_calls = [ {'args': (), 'kwargs': {'task_state': task_states.IMAGE_PENDING_UPLOAD}}, {'args': (), 'kwargs': {'task_state': task_states.IMAGE_UPLOADING, 'expected_state': task_states.IMAGE_PENDING_UPLOAD}}] func_call_matcher = matchers.FunctionCallMatcher(expected_calls) info = self._get_info() self._check_vm_info(info, power_state.RUNNING) with mock.patch.object(images, 'upload_image_stream_optimized', self.mock_upload_image): self.conn.snapshot(self.context, self.instance, "Test-Snapshot", func_call_matcher.call) info = self._get_info() self._check_vm_info(info, power_state.RUNNING) self.assertIsNone(func_call_matcher.match()) def test_snapshot(self): self._create_vm() self._test_snapshot() def test_snapshot_no_root_disk(self): self._iso_disk_type_created(instance_type='m1.micro') self.assertRaises(error_util.NoRootDiskDefined, self.conn.snapshot, self.context, self.instance, "Test-Snapshot", lambda *args, **kwargs: None) def test_snapshot_non_existent(self): self._create_instance() self.assertRaises(exception.InstanceNotFound, self.conn.snapshot, self.context, self.instance, "Test-Snapshot", lambda *args, **kwargs: None) @mock.patch('nova.virt.vmwareapi.vmops.VMwareVMOps._delete_vm_snapshot') @mock.patch('nova.virt.vmwareapi.vmops.VMwareVMOps._create_vm_snapshot') @mock.patch('nova.virt.vmwareapi.volumeops.VMwareVolumeOps.' 
'attach_volume') def test_snapshot_delete_vm_snapshot(self, mock_attach_volume, mock_create_vm_snapshot, mock_delete_vm_snapshot): self._create_vm() fake_vm = self._get_vm_record() snapshot_ref = vmwareapi_fake.ManagedObjectReference( value="Snapshot-123", name="VirtualMachineSnapshot") mock_create_vm_snapshot.return_value = snapshot_ref mock_delete_vm_snapshot.return_value = None self._test_snapshot() mock_create_vm_snapshot.assert_called_once_with(self.instance, fake_vm.obj) mock_delete_vm_snapshot.assert_called_once_with(self.instance, fake_vm.obj, snapshot_ref) def _snapshot_delete_vm_snapshot_exception(self, exception, call_count=1): self._create_vm() fake_vm = vmwareapi_fake._get_objects("VirtualMachine").objects[0].obj snapshot_ref = vmwareapi_fake.ManagedObjectReference( value="Snapshot-123", name="VirtualMachineSnapshot") with test.nested( mock.patch.object(self.conn._session, '_wait_for_task', side_effect=exception), mock.patch.object(vmops, '_time_sleep_wrapper') ) as (_fake_wait, _fake_sleep): if exception != vexc.TaskInProgress: self.assertRaises(exception, self.conn._vmops._delete_vm_snapshot, self.instance, fake_vm, snapshot_ref) self.assertEqual(0, _fake_sleep.call_count) else: self.conn._vmops._delete_vm_snapshot(self.instance, fake_vm, snapshot_ref) self.assertEqual(call_count - 1, _fake_sleep.call_count) self.assertEqual(call_count, _fake_wait.call_count) def test_snapshot_delete_vm_snapshot_exception(self): self._snapshot_delete_vm_snapshot_exception(exception.NovaException) def test_snapshot_delete_vm_snapshot_exception_retry(self): self.flags(api_retry_count=5, group='vmware') self._snapshot_delete_vm_snapshot_exception(vexc.TaskInProgress, 5) def test_reboot(self): self._create_vm() info = self._get_info() self._check_vm_info(info, power_state.RUNNING) reboot_type = "SOFT" self.conn.reboot(self.context, self.instance, self.network_info, reboot_type) info = self._get_info() self._check_vm_info(info, power_state.RUNNING) def test_reboot_hard(self): self._create_vm() info = self._get_info() self._check_vm_info(info, power_state.RUNNING) reboot_type = "HARD" self.conn.reboot(self.context, self.instance, self.network_info, reboot_type) info = self._get_info() self._check_vm_info(info, power_state.RUNNING) def test_reboot_with_uuid(self): """Test fall back to use name when can't find by uuid.""" self._create_vm() info = self._get_info() self._check_vm_info(info, power_state.RUNNING) reboot_type = "SOFT" self.conn.reboot(self.context, self.instance, self.network_info, reboot_type) info = self._get_info() self._check_vm_info(info, power_state.RUNNING) def test_reboot_non_existent(self): self._create_instance() self.assertRaises(exception.InstanceNotFound, self.conn.reboot, self.context, self.instance, self.network_info, 'SOFT') @mock.patch.object(compute_api.API, 'reboot') def test_poll_rebooting_instances(self, mock_reboot): self._create_vm() instances = [self.instance] self.conn.poll_rebooting_instances(60, instances) mock_reboot.assert_called_once_with(mock.ANY, mock.ANY, mock.ANY) def test_reboot_not_poweredon(self): self._create_vm() info = self._get_info() self._check_vm_info(info, power_state.RUNNING) self.conn.suspend(self.context, self.instance) info = self._get_info() self._check_vm_info(info, power_state.SUSPENDED) self.assertRaises(exception.InstanceRebootFailure, self.conn.reboot, self.context, self.instance, self.network_info, 'SOFT') def test_suspend(self): self._create_vm() info = self._get_info() self._check_vm_info(info, power_state.RUNNING) 
self.conn.suspend(self.context, self.instance) info = self._get_info() self._check_vm_info(info, power_state.SUSPENDED) def test_suspend_non_existent(self): self._create_instance() self.assertRaises(exception.InstanceNotFound, self.conn.suspend, self.context, self.instance) def test_resume(self): self._create_vm() info = self._get_info() self._check_vm_info(info, power_state.RUNNING) self.conn.suspend(self.context, self.instance) info = self._get_info() self._check_vm_info(info, power_state.SUSPENDED) self.conn.resume(self.context, self.instance, self.network_info) info = self._get_info() self._check_vm_info(info, power_state.RUNNING) def test_resume_non_existent(self): self._create_instance() self.assertRaises(exception.InstanceNotFound, self.conn.resume, self.context, self.instance, self.network_info) def test_resume_not_suspended(self): self._create_vm() info = self._get_info() self._check_vm_info(info, power_state.RUNNING) self.assertRaises(exception.InstanceResumeFailure, self.conn.resume, self.context, self.instance, self.network_info) def test_power_on(self): self._create_vm() info = self._get_info() self._check_vm_info(info, power_state.RUNNING) self.conn.power_off(self.instance) info = self._get_info() self._check_vm_info(info, power_state.SHUTDOWN) self.conn.power_on(self.context, self.instance, self.network_info) info = self._get_info() self._check_vm_info(info, power_state.RUNNING) def test_power_on_non_existent(self): self._create_instance() self.assertRaises(exception.InstanceNotFound, self.conn.power_on, self.context, self.instance, self.network_info) def test_power_off(self): self._create_vm() info = self._get_info() self._check_vm_info(info, power_state.RUNNING) self.conn.power_off(self.instance) info = self._get_info() self._check_vm_info(info, power_state.SHUTDOWN) def test_power_off_non_existent(self): self._create_instance() self.assertRaises(exception.InstanceNotFound, self.conn.power_off, self.instance) @mock.patch.object(driver.VMwareVCDriver, 'reboot') @mock.patch.object(vm_util, 'get_vm_state', return_value=power_state.SHUTDOWN) def test_resume_state_on_host_boot(self, mock_get_vm_state, mock_reboot): self._create_instance() self.conn.resume_state_on_host_boot(self.context, self.instance, 'network_info') mock_get_vm_state.assert_called_once_with(self.conn._session, self.instance) mock_reboot.assert_called_once_with(self.context, self.instance, 'network_info', 'hard', None) def test_resume_state_on_host_boot_no_reboot(self): self._create_instance() for state in [power_state.RUNNING, power_state.SUSPENDED]: with test.nested( mock.patch.object(driver.VMwareVCDriver, 'reboot'), mock.patch.object(vm_util, 'get_vm_state', return_value=state) ) as (mock_reboot, mock_get_vm_state): self.conn.resume_state_on_host_boot(self.context, self.instance, 'network_info') mock_get_vm_state.assert_called_once_with(self.conn._session, self.instance) self.assertFalse(mock_reboot.called) @mock.patch('nova.virt.driver.block_device_info_get_mapping') @mock.patch('nova.virt.vmwareapi.driver.VMwareVCDriver.detach_volume') def test_detach_instance_volumes( self, detach_volume, block_device_info_get_mapping): self._create_vm() def _mock_bdm(connection_info, device_name): return {'connection_info': connection_info, 'device_name': device_name} disk_1 = _mock_bdm(mock.sentinel.connection_info_1, 'dev1') disk_2 = _mock_bdm(mock.sentinel.connection_info_2, 'dev2') block_device_info_get_mapping.return_value = [disk_1, disk_2] detach_volume.side_effect = [None, exception.DiskNotFound("Error")] with 
mock.patch.object(self.conn, '_vmops') as vmops: block_device_info = mock.sentinel.block_device_info self.conn._detach_instance_volumes(self.instance, block_device_info) block_device_info_get_mapping.assert_called_once_with( block_device_info) vmops.power_off.assert_called_once_with(self.instance) exp_detach_calls = [mock.call(mock.sentinel.connection_info_1, self.instance, 'dev1'), mock.call(mock.sentinel.connection_info_2, self.instance, 'dev2')] self.assertEqual(exp_detach_calls, detach_volume.call_args_list) def test_destroy(self): self._create_vm() info = self._get_info() self._check_vm_info(info, power_state.RUNNING) instances = self.conn.list_instances() self.assertEqual(1, len(instances)) self.conn.destroy(self.context, self.instance, self.network_info) instances = self.conn.list_instances() self.assertEqual(0, len(instances)) self.assertIsNone(vm_util.vm_ref_cache_get(self.uuid)) def test_destroy_no_datastore(self): self._create_vm() info = self._get_info() self._check_vm_info(info, power_state.RUNNING) instances = self.conn.list_instances() self.assertEqual(1, len(instances)) # Delete the vmPathName vm = self._get_vm_record() vm.delete('config.files.vmPathName') self.conn.destroy(self.context, self.instance, self.network_info) instances = self.conn.list_instances() self.assertEqual(0, len(instances)) def test_destroy_non_existent(self): self.destroy_disks = True with mock.patch.object(self.conn._vmops, "destroy") as mock_destroy: self._create_instance() self.conn.destroy(self.context, self.instance, self.network_info, None, self.destroy_disks) mock_destroy.assert_called_once_with(self.instance, self.destroy_disks) def test_destroy_instance_without_compute(self): instance = fake_instance.fake_instance_obj(None) self.destroy_disks = True with mock.patch.object(self.conn._vmops, "destroy") as mock_destroy: self.conn.destroy(self.context, instance, self.network_info, None, self.destroy_disks) self.assertFalse(mock_destroy.called) def _destroy_instance_without_vm_ref(self, task_state=None): def fake_vm_ref_from_name(session, vm_name): return 'fake-ref' self._create_instance() with test.nested( mock.patch.object(vm_util, 'get_vm_ref_from_name', fake_vm_ref_from_name), mock.patch.object(self.conn._session, '_call_method'), mock.patch.object(self.conn._vmops, '_destroy_instance') ) as (mock_get, mock_call, mock_destroy): self.instance.task_state = task_state self.conn.destroy(self.context, self.instance, self.network_info, None, True) if task_state == task_states.RESIZE_REVERTING: expected = 0 else: expected = 1 self.assertEqual(expected, mock_destroy.call_count) self.assertFalse(mock_call.called) def test_destroy_instance_without_vm_ref(self): self._destroy_instance_without_vm_ref() def test_destroy_instance_without_vm_ref_with_resize_revert(self): self._destroy_instance_without_vm_ref( task_state=task_states.RESIZE_REVERTING) def _rescue(self, config_drive=False): # validate that the power on is only called once self._power_on = vm_util.power_on_instance self._power_on_called = 0 def fake_attach_disk_to_vm(vm_ref, instance, adapter_type, disk_type, vmdk_path=None, disk_size=None, linked_clone=False, controller_key=None, unit_number=None, device_name=None): info = self.conn.get_info(instance) self._check_vm_info(info, power_state.SHUTDOWN) if config_drive: def fake_create_config_drive(instance, injected_files, password, network_info, data_store_name, folder, instance_uuid, cookies): self.assertTrue(uuidutils.is_uuid_like(instance['uuid'])) return 
str(ds_obj.DatastorePath(data_store_name, instance_uuid, 'fake.iso')) self.stub_out('nova.virt.vmwareapi.vmops._create_config_drive', fake_create_config_drive) self._create_vm() def fake_power_on_instance(session, instance, vm_ref=None): self._power_on_called += 1 return self._power_on(session, instance, vm_ref=vm_ref) info = self._get_info() self._check_vm_info(info, power_state.RUNNING) self.stub_out('nova.virt.vmwareapi.vm_util.power_on_instance', fake_power_on_instance) self.stub_out('nova.virt.vmwareapi.volumeops.' 'VMwareVolumeOps.attach_disk_to_vm', fake_attach_disk_to_vm) self.conn.rescue(self.context, self.instance, self.network_info, self.image, 'fake-password') info = self.conn.get_info({'name': '1', 'uuid': self.uuid, 'node': self.instance_node}) self._check_vm_info(info, power_state.RUNNING) info = self.conn.get_info({'name': '1-orig', 'uuid': '%s-orig' % self.uuid, 'node': self.instance_node}) self._check_vm_info(info, power_state.SHUTDOWN) self.assertIsNotNone(vm_util.vm_ref_cache_get(self.uuid)) self.assertEqual(1, self._power_on_called) def test_get_diagnostics(self): self._create_vm() expected = {'memoryReservation': 0, 'suspendInterval': 0, 'maxCpuUsage': 2000, 'toolsInstallerMounted': False, 'consumedOverheadMemory': 20, 'numEthernetCards': 1, 'numCpu': 1, 'featureRequirement': [{'key': 'cpuid.AES'}], 'memoryOverhead': 21417984, 'guestMemoryUsage': 0, 'connectionState': 'connected', 'memorySizeMB': 512, 'balloonedMemory': 0, 'vmPathName': 'fake_path', 'template': False, 'overallCpuUsage': 0, 'powerState': 'poweredOn', 'cpuReservation': 0, 'overallCpuDemand': 0, 'numVirtualDisks': 1, 'hostMemoryUsage': 141} expected = {'vmware:' + k: v for k, v in expected.items()} instance = fake_instance.fake_instance_obj(None, name=1, uuid=self.uuid, node=self.instance_node) self.assertThat( self.conn.get_diagnostics(instance), matchers.DictMatches(expected)) def test_get_instance_diagnostics(self): self._create_vm() expected = fake_diagnostics.fake_diagnostics_obj( uptime=0, memory_details={'used': 0, 'maximum': 512}, nic_details=[], driver='vmwareapi', state='running', cpu_details=[], disk_details=[], hypervisor_os='esxi', config_drive=True) instance = objects.Instance(uuid=self.uuid, config_drive=False, system_metadata={}, node=self.instance_node) actual = self.conn.get_instance_diagnostics(instance) self.assertDiagnosticsEqual(expected, actual) def test_get_vnc_console_non_existent(self): self._create_instance() self.assertRaises(exception.InstanceNotFound, self.conn.get_vnc_console, self.context, self.instance) def _test_get_vnc_console(self): self._create_vm() fake_vm = self._get_vm_record() OptionValue = collections.namedtuple('OptionValue', ['key', 'value']) opt_val = OptionValue(key='', value=5906) fake_vm.set(vm_util.VNC_CONFIG_KEY, opt_val) vnc_console = self.conn.get_vnc_console(self.context, self.instance) self.assertEqual(self.vnc_host, vnc_console.host) self.assertEqual(5906, vnc_console.port) def test_get_vnc_console(self): self._test_get_vnc_console() def test_get_vnc_console_noport(self): self._create_vm() self.assertRaises(exception.ConsoleTypeUnavailable, self.conn.get_vnc_console, self.context, self.instance) def test_get_console_output(self): self.flags(serial_log_dir='/opt/vspc', group='vmware') self._create_instance() with test.nested( mock.patch('os.path.exists', return_value=True), mock.patch('{}.open'.format(driver.__name__), create=True), mock.patch('nova.privsep.path.last_bytes') ) as (fake_exists, fake_open, fake_last_bytes): fake_open.return_value = 
mock.MagicMock() fake_fd = fake_open.return_value.__enter__.return_value fake_last_bytes.return_value = b'fira', 0 output = self.conn.get_console_output(self.context, self.instance) fname = self.instance.uuid.replace('-', '') fake_exists.assert_called_once_with('/opt/vspc/{}'.format(fname)) fake_last_bytes.assert_called_once_with(fake_fd, driver.MAX_CONSOLE_BYTES) self.assertEqual(b'fira', output) def test_get_volume_connector(self): self._create_vm() connector_dict = self.conn.get_volume_connector(self.instance) fake_vm = self._get_vm_record() fake_vm_id = fake_vm.obj.value self.assertEqual(HOST, connector_dict['ip']) self.assertEqual('iscsi-name', connector_dict['initiator']) self.assertEqual(HOST, connector_dict['host']) self.assertEqual(fake_vm_id, connector_dict['instance']) def _test_vmdk_connection_info(self, type): return {'driver_volume_type': type, 'serial': 'volume-fake-id', 'data': {'volume': 'vm-10', 'volume_id': 'volume-fake-id'}} @mock.patch('nova.virt.vmwareapi.volumeops.VMwareVolumeOps.' '_attach_volume_vmdk') def test_volume_attach_vmdk(self, mock_attach_volume_vmdk): self._create_vm() connection_info = self._test_vmdk_connection_info('vmdk') mount_point = '/dev/vdc' self.conn.attach_volume(None, connection_info, self.instance, mount_point) mock_attach_volume_vmdk.assert_called_once_with(connection_info, self.instance, None) @mock.patch('nova.virt.vmwareapi.volumeops.VMwareVolumeOps.' '_detach_volume_vmdk') def test_volume_detach_vmdk(self, mock_detach_volume_vmdk): self._create_vm() connection_info = self._test_vmdk_connection_info('vmdk') mount_point = '/dev/vdc' self.conn.detach_volume(mock.sentinel.context, connection_info, self.instance, mount_point, encryption=None) mock_detach_volume_vmdk.assert_called_once_with(connection_info, self.instance) def test_attach_vmdk_disk_to_vm(self): self._create_vm() connection_info = self._test_vmdk_connection_info('vmdk') adapter_type = constants.DEFAULT_ADAPTER_TYPE disk_type = constants.DEFAULT_DISK_TYPE disk_uuid = 'e97f357b-331e-4ad1-b726-89be048fb811' backing = mock.Mock(uuid=disk_uuid) device = mock.Mock(backing=backing) vmdk_info = vm_util.VmdkInfo('fake-path', adapter_type, disk_type, 64, device) with test.nested( mock.patch.object(vm_util, 'get_vm_ref', return_value=mock.sentinel.vm_ref), mock.patch.object(volumeops.VMwareVolumeOps, '_get_volume_ref'), mock.patch.object(vm_util, 'get_vmdk_info', return_value=vmdk_info), mock.patch.object(volumeops.VMwareVolumeOps, 'attach_disk_to_vm'), mock.patch.object(volumeops.VMwareVolumeOps, '_update_volume_details') ) as (get_vm_ref, get_volume_ref, get_vmdk_info, attach_disk_to_vm, update_volume_details): self.conn.attach_volume(None, connection_info, self.instance, '/dev/vdc') get_vm_ref.assert_called_once_with(self.conn._session, self.instance) get_volume_ref.assert_called_once_with( connection_info['data']['volume']) self.assertTrue(get_vmdk_info.called) attach_disk_to_vm.assert_called_once_with(mock.sentinel.vm_ref, self.instance, adapter_type, disk_type, vmdk_path='fake-path') update_volume_details.assert_called_once_with( mock.sentinel.vm_ref, connection_info['data']['volume_id'], disk_uuid) def test_detach_vmdk_disk_from_vm(self): self._create_vm() connection_info = self._test_vmdk_connection_info('vmdk') with mock.patch.object(volumeops.VMwareVolumeOps, 'detach_volume') as detach_volume: self.conn.detach_volume(mock.sentinel.context, connection_info, self.instance, '/dev/vdc', encryption=None) detach_volume.assert_called_once_with(connection_info, self.instance) 
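    # --- Illustrative sketch, not part of the upstream test suite ---
    # Every attach/detach test in this class builds a connection_info dict
    # whose 'driver_volume_type' ('vmdk' above, 'iscsi' below) selects the
    # volumeops code path that gets mocked. A minimal, hypothetical rendering
    # of that dispatch is sketched here; the _attach_volume_* names mirror
    # the mocks used in these tests, the rest is assumed for illustration:
    #
    #     def attach_volume(self, connection_info, instance,
    #                       adapter_type=None):
    #         """Route a Cinder connection to its type-specific handler."""
    #         driver_type = connection_info['driver_volume_type']
    #         if driver_type == 'vmdk':
    #             self._attach_volume_vmdk(connection_info, instance,
    #                                      adapter_type)
    #         elif driver_type == 'iscsi':
    #             self._attach_volume_iscsi(connection_info, instance,
    #                                       adapter_type)
    #         else:
    #             raise exception.VolumeDriverNotFound(
    #                 driver_type=driver_type)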
@mock.patch('nova.virt.vmwareapi.volumeops.VMwareVolumeOps.' '_attach_volume_iscsi') def test_volume_attach_iscsi(self, mock_attach_volume_iscsi): self._create_vm() connection_info = self._test_vmdk_connection_info('iscsi') mount_point = '/dev/vdc' self.conn.attach_volume(None, connection_info, self.instance, mount_point) mock_attach_volume_iscsi.assert_called_once_with(connection_info, self.instance, None) @mock.patch('nova.virt.vmwareapi.volumeops.VMwareVolumeOps.' '_detach_volume_iscsi') def test_volume_detach_iscsi(self, mock_detach_volume_iscsi): self._create_vm() connection_info = self._test_vmdk_connection_info('iscsi') mount_point = '/dev/vdc' self.conn.detach_volume(mock.sentinel.context, connection_info, self.instance, mount_point, encryption=None) mock_detach_volume_iscsi.assert_called_once_with(connection_info, self.instance) def test_attach_iscsi_disk_to_vm(self): self._create_vm() with mock.patch('nova.virt.vmwareapi.volumeops.VMwareVolumeOps.' '_iscsi_get_target') as mock_iscsi_get_target, \ mock.patch('nova.virt.vmwareapi.volumeops.VMwareVolumeOps.' '_iscsi_add_send_target_host'), \ mock.patch('nova.virt.vmwareapi.volumeops.VMwareVolumeOps.' '_iscsi_rescan_hba'), \ mock.patch('nova.virt.vmwareapi.volumeops.VMwareVolumeOps.' 'attach_disk_to_vm') as mock_attach_disk: connection_info = self._test_vmdk_connection_info('iscsi') connection_info['data']['target_portal'] = 'fake_target_host:port' connection_info['data']['target_iqn'] = 'fake_target_iqn' mount_point = '/dev/vdc' discover = ('fake_name', 'fake_uuid') # simulate target not found mock_iscsi_get_target.return_value = (None, None) volumeops.VMwareVolumeOps._iscsi_rescan_hba( connection_info['data']['target_portal']) # simulate target found mock_iscsi_get_target.return_value = discover self.conn.attach_volume(None, connection_info, self.instance, mount_point) mock_attach_disk.assert_called_once_with(mock.ANY, self.instance, mock.ANY, 'rdmp', device_name=mock.ANY) mock_iscsi_get_target.assert_called_once_with( connection_info['data']) def test_iscsi_rescan_hba(self): fake_target_portal = 'fake_target_host:port' host_storage_sys = vmwareapi_fake._get_objects( "HostStorageSystem").objects[0] iscsi_hba_array = host_storage_sys.get('storageDeviceInfo' '.hostBusAdapter') iscsi_hba = iscsi_hba_array.HostHostBusAdapter[0] # Check the host system does not have the send target self.assertRaises(AttributeError, getattr, iscsi_hba, 'configuredSendTarget') # Rescan HBA with the target portal vops = volumeops.VMwareVolumeOps(self.conn._session) vops._iscsi_rescan_hba(fake_target_portal) # Check if HBA has the target portal configured self.assertEqual('fake_target_host', iscsi_hba.configuredSendTarget[0].address) # Rescan HBA with same portal vops._iscsi_rescan_hba(fake_target_portal) self.assertEqual(1, len(iscsi_hba.configuredSendTarget)) def test_iscsi_get_target(self): data = {'target_portal': 'fake_target_host:port', 'target_iqn': 'fake_target_iqn'} host = vmwareapi_fake._get_objects('HostSystem').objects[0] host._add_iscsi_target(data) vops = volumeops.VMwareVolumeOps(self.conn._session) result = vops._iscsi_get_target(data) self.assertEqual(('fake-device', 'fake-uuid'), result) @mock.patch('nova.virt.vmwareapi.volumeops.VMwareVolumeOps.' 'detach_disk_from_vm') @mock.patch('nova.virt.vmwareapi.vm_util.get_rdm_disk') @mock.patch('nova.virt.vmwareapi.volumeops.VMwareVolumeOps.' 
'_iscsi_get_target') def test_detach_iscsi_disk_from_vm(self, mock_iscsi_get_target, mock_get_rdm_disk, mock_detach_disk_from_vm): self._create_vm() connection_info = self._test_vmdk_connection_info('iscsi') connection_info['data']['target_portal'] = 'fake_target_portal' connection_info['data']['target_iqn'] = 'fake_target_iqn' mount_point = '/dev/vdc' find = ('fake_name', 'fake_uuid') mock_iscsi_get_target.return_value = find device = 'fake_device' mock_get_rdm_disk.return_value = device self.conn.detach_volume(mock.sentinel.context, connection_info, self.instance, mount_point, encryption=None) mock_iscsi_get_target.assert_called_once_with(connection_info['data']) mock_get_rdm_disk.assert_called_once() mock_detach_disk_from_vm.assert_called_once_with(mock.ANY, self.instance, device, destroy_disk=True) def test_connection_info_get(self): self._create_vm() connector = self.conn.get_volume_connector(self.instance) self.assertEqual(HOST, connector['ip']) self.assertEqual(HOST, connector['host']) self.assertEqual('iscsi-name', connector['initiator']) self.assertIn('instance', connector) def test_connection_info_get_after_destroy(self): self._create_vm() self.conn.destroy(self.context, self.instance, self.network_info) connector = self.conn.get_volume_connector(self.instance) self.assertEqual(HOST, connector['ip']) self.assertEqual(HOST, connector['host']) self.assertEqual('iscsi-name', connector['initiator']) self.assertNotIn('instance', connector) def test_refresh_instance_security_rules(self): self.assertRaises(NotImplementedError, self.conn.refresh_instance_security_rules, instance=None) @mock.patch.object(objects.block_device.BlockDeviceMappingList, 'get_by_instance_uuids') def test_image_aging_image_used(self, mock_get_by_inst): self._create_vm() all_instances = [self.instance] self.conn.manage_image_cache(self.context, all_instances) self._cached_files_exist() def _get_timestamp_filename(self): return '%s%s' % (imagecache.TIMESTAMP_PREFIX, self.old_time.strftime(imagecache.TIMESTAMP_FORMAT)) def _override_time(self): self.old_time = datetime.datetime(2012, 11, 22, 12, 00, 00) def _fake_get_timestamp_filename(fake): return self._get_timestamp_filename() self.stub_out('nova.virt.vmwareapi.imagecache.' 
'ImageCacheManager._get_timestamp_filename', _fake_get_timestamp_filename) def _timestamp_file_exists(self, exists=True): timestamp = ds_obj.DatastorePath(self.ds, 'vmware_base', self.fake_image_uuid, self._get_timestamp_filename() + '/') if exists: vmwareapi_fake.assertPathExists(self, str(timestamp)) else: vmwareapi_fake.assertPathNotExists(self, str(timestamp)) def _image_aging_image_marked_for_deletion(self): self._create_vm(uuid=uuidutils.generate_uuid()) self._cached_files_exist() all_instances = [] self.conn.manage_image_cache(self.context, all_instances) self._cached_files_exist() self._timestamp_file_exists() @mock.patch.object(objects.block_device.BlockDeviceMappingList, 'get_by_instance_uuids') def test_image_aging_image_marked_for_deletion(self, mock_get_by_inst): self._override_time() self._image_aging_image_marked_for_deletion() def _timestamp_file_removed(self): self._override_time() self._image_aging_image_marked_for_deletion() self._create_vm(num_instances=2, uuid=uuidutils.generate_uuid()) self._timestamp_file_exists(exists=False) @mock.patch.object(objects.block_device.BlockDeviceMappingList, 'get_by_instance_uuids') def test_timestamp_file_removed_spawn(self, mock_get_by_inst): self._timestamp_file_removed() @mock.patch.object(objects.block_device.BlockDeviceMappingList, 'get_by_instance_uuids') def test_timestamp_file_removed_aging(self, mock_get_by_inst): self._timestamp_file_removed() ts = self._get_timestamp_filename() ts_path = ds_obj.DatastorePath(self.ds, 'vmware_base', self.fake_image_uuid, ts + '/') vmwareapi_fake._add_file(str(ts_path)) self._timestamp_file_exists() all_instances = [self.instance] self.conn.manage_image_cache(self.context, all_instances) self._timestamp_file_exists(exists=False) def test_image_aging_disabled(self): self._override_time() self.flags(remove_unused_base_images=False) self._create_vm() self._cached_files_exist() all_instances = [] self.conn.manage_image_cache(self.context, all_instances) self._cached_files_exist(exists=True) self._timestamp_file_exists(exists=False) def _image_aging_aged(self, aging_time=100): self._override_time() cur_time = datetime.datetime(2012, 11, 22, 12, 00, 10) self.flags(remove_unused_original_minimum_age_seconds=aging_time) self._image_aging_image_marked_for_deletion() all_instances = [] self.useFixture(utils_fixture.TimeFixture(cur_time)) self.conn.manage_image_cache(self.context, all_instances) @mock.patch.object(objects.block_device.BlockDeviceMappingList, 'get_by_instance_uuids') def test_image_aging_aged(self, mock_get_by_inst): self._image_aging_aged(aging_time=8) self._cached_files_exist(exists=False) @mock.patch.object(objects.block_device.BlockDeviceMappingList, 'get_by_instance_uuids') def test_image_aging_not_aged(self, mock_get_by_inst): self._image_aging_aged() self._cached_files_exist() def test_public_api_signatures(self): self.assertPublicAPISignatures(v_driver.ComputeDriver(None), self.conn) def test_register_extension(self): with mock.patch.object(self.conn._session, '_call_method', return_value=None) as mock_call_method: self.conn._register_openstack_extension() mock_call_method.assert_has_calls( [mock.call(oslo_vim_util, 'find_extension', constants.EXTENSION_KEY), mock.call(oslo_vim_util, 'register_extension', constants.EXTENSION_KEY, constants.EXTENSION_TYPE_INSTANCE)]) def test_register_extension_already_exists(self): with mock.patch.object(self.conn._session, '_call_method', return_value='fake-extension') as mock_find_ext: self.conn._register_openstack_extension() 
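            # find_extension returned an existing registration
            # ('fake-extension'), so the driver should only look the
            # extension up and must not try to register it a second time.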
mock_find_ext.assert_called_once_with(oslo_vim_util, 'find_extension', constants.EXTENSION_KEY) def test_register_extension_concurrent(self): def fake_call_method(module, method, *args, **kwargs): if method == "find_extension": return None elif method == "register_extension": raise vexc.VimFaultException(['InvalidArgument'], 'error') else: raise Exception() with (mock.patch.object( self.conn._session, '_call_method', fake_call_method)): self.conn._register_openstack_extension() def test_list_instances(self): instances = self.conn.list_instances() self.assertEqual(0, len(instances)) def _setup_mocks_for_session(self, mock_init): mock_init.return_value = None vcdriver = driver.VMwareVCDriver(None, False) vcdriver._session = mock.Mock() vcdriver._session.vim = None def side_effect(): vcdriver._session.vim = mock.Mock() vcdriver._session._create_session.side_effect = side_effect return vcdriver def test_host_power_action(self): self.assertRaises(NotImplementedError, self.conn.host_power_action, 'action') def test_host_maintenance_mode(self): self.assertRaises(NotImplementedError, self.conn.host_maintenance_mode, 'host', 'mode') def test_set_host_enabled(self): self.assertRaises(NotImplementedError, self.conn.set_host_enabled, 'state') def test_datastore_regex_configured(self): self.assertEqual(self.conn._datastore_regex, self.conn._vmops._datastore_regex) self.assertEqual(self.conn._datastore_regex, self.conn._vc_state._datastore_regex) @mock.patch('nova.virt.vmwareapi.ds_util.get_datastore') def test_datastore_regex_configured_vcstate(self, mock_get_ds_ref): vcstate = self.conn._vc_state self.conn.get_available_resource(self.node_name) mock_get_ds_ref.assert_called_with( vcstate._session, vcstate._cluster, vcstate._datastore_regex) def test_get_available_resource(self): stats = self.conn.get_available_resource(self.node_name) self.assertEqual(32, stats['vcpus']) self.assertEqual(1024, stats['local_gb']) self.assertEqual(1024 - 500, stats['local_gb_used']) self.assertEqual(2048, stats['memory_mb']) self.assertEqual(1000, stats['memory_mb_used']) self.assertEqual('VMware vCenter Server', stats['hypervisor_type']) self.assertEqual(5001000, stats['hypervisor_version']) self.assertEqual(self.node_name, stats['hypervisor_hostname']) self.assertIsNone(stats['cpu_info']) self.assertEqual( [("i686", "vmware", "hvm"), ("x86_64", "vmware", "hvm")], stats['supported_instances']) @mock.patch('nova.virt.vmwareapi.ds_util.get_available_datastores') def test_get_inventory(self, mock_get_avail_ds): ds1 = ds_obj.Datastore(ref='fake-ref', name='datastore1', capacity=10 * units.Gi, freespace=3 * units.Gi) ds2 = ds_obj.Datastore(ref='fake-ref', name='datastore2', capacity=35 * units.Gi, freespace=25 * units.Gi) ds3 = ds_obj.Datastore(ref='fake-ref', name='datastore3', capacity=50 * units.Gi, freespace=15 * units.Gi) mock_get_avail_ds.return_value = [ds1, ds2, ds3] inv = self.conn.get_inventory(self.node_name) expected = { fields.ResourceClass.VCPU: { 'total': 32, 'reserved': 0, 'min_unit': 1, 'max_unit': 16, 'step_size': 1, }, fields.ResourceClass.MEMORY_MB: { 'total': 2048, 'reserved': 512, 'min_unit': 1, 'max_unit': 1024, 'step_size': 1, }, fields.ResourceClass.DISK_GB: { 'total': 95, 'reserved': 0, 'min_unit': 1, 'max_unit': 25, 'step_size': 1, }, } self.assertEqual(expected, inv) def test_invalid_datastore_regex(self): # Tests if we raise an exception for Invalid Regular Expression in # vmware_datastore_regex self.flags(cluster_name='test_cluster', datastore_regex='fake-ds(01', group='vmware') 
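        # 'fake-ds(01' is not a valid regular expression (unbalanced
        # parenthesis), so constructing the driver should fail with
        # InvalidInput.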
self.assertRaises(exception.InvalidInput, driver.VMwareVCDriver, None) def test_get_available_nodes(self): nodelist = self.conn.get_available_nodes() self.assertEqual(1, len(nodelist)) self.assertIn(self.node_name, nodelist) @mock.patch.object(nova.virt.vmwareapi.images.VMwareImage, 'from_image') def test_spawn_with_sparse_image(self, mock_from_image): img_info = images.VMwareImage( image_id=self.fake_image_uuid, file_size=1024, disk_type=constants.DISK_TYPE_SPARSE, linked_clone=False) mock_from_image.return_value = img_info self._create_vm() info = self._get_info() self._check_vm_info(info, power_state.RUNNING) def test_plug_vifs(self): # Check to make sure the method raises NotImplementedError. self._create_instance() self.assertRaises(NotImplementedError, self.conn.plug_vifs, instance=self.instance, network_info=None) def test_unplug_vifs(self): # Check to make sure the method raises NotImplementedError. self._create_instance() self.assertRaises(NotImplementedError, self.conn.unplug_vifs, instance=self.instance, network_info=None) def _create_vif(self): gw_4 = network_model.IP(address='101.168.1.1', type='gateway') dns_4 = network_model.IP(address='8.8.8.8', type=None) subnet_4 = network_model.Subnet(cidr='101.168.1.0/24', dns=[dns_4], gateway=gw_4, routes=None, dhcp_server='191.168.1.1') gw_6 = network_model.IP(address='101:1db9::1', type='gateway') subnet_6 = network_model.Subnet(cidr='101:1db9::/64', dns=None, gateway=gw_6, ips=None, routes=None) network_neutron = network_model.Network(id='network-id-xxx-yyy-zzz', bridge=None, label=None, subnets=[subnet_4, subnet_6], bridge_interface='eth0', vlan=99) vif_bridge_neutron = network_model.VIF(id='new-vif-xxx-yyy-zzz', address='ca:fe:de:ad:be:ef', network=network_neutron, type=network_model.VIF_TYPE_OVS, devname='tap-xxx-yyy-zzz', ovs_interfaceid='aaa-bbb-ccc') return vif_bridge_neutron def _validate_interfaces(self, id, index, num_iface_ids): vm = self._get_vm_record() found_iface_id = False extras = vm.get("config.extraConfig") key = "nvp.iface-id.%s" % index num_found = 0 for c in extras.OptionValue: if c.key.startswith("nvp.iface-id."): num_found += 1 if c.key == key and c.value == id: found_iface_id = True self.assertTrue(found_iface_id) self.assertEqual(num_iface_ids, num_found) def _attach_interface(self, vif): self.conn.attach_interface(self.context, self.instance, self.image, vif) self._validate_interfaces(vif['id'], 1, 2) def test_attach_interface(self): self._create_vm() vif = self._create_vif() self._attach_interface(vif) def test_attach_interface_with_exception(self): self._create_vm() vif = self._create_vif() with mock.patch.object(self.conn._session, '_wait_for_task', side_effect=Exception): self.assertRaises(exception.InterfaceAttachFailed, self.conn.attach_interface, self.context, self.instance, self.image, vif) @mock.patch.object(vif, 'get_network_device', return_value='fake_device') def _detach_interface(self, vif, mock_get_device): self._create_vm() self._attach_interface(vif) self.conn.detach_interface(self.context, self.instance, vif) self._validate_interfaces('free', 1, 2) def test_detach_interface(self): vif = self._create_vif() self._detach_interface(vif) def test_detach_interface_and_attach(self): vif = self._create_vif() self._detach_interface(vif) self.conn.attach_interface(self.context, self.instance, self.image, vif) self._validate_interfaces(vif['id'], 1, 2) def test_detach_interface_no_device(self): self._create_vm() vif = self._create_vif() self._attach_interface(vif) self.assertRaises(exception.NotFound, 
self.conn.detach_interface, self.context, self.instance, vif) def test_detach_interface_no_vif_match(self): self._create_vm() vif = self._create_vif() self._attach_interface(vif) vif['id'] = 'bad-id' self.assertRaises(exception.NotFound, self.conn.detach_interface, self.context, self.instance, vif) @mock.patch.object(vif, 'get_network_device', return_value='fake_device') def test_detach_interface_with_exception(self, mock_get_device): self._create_vm() vif = self._create_vif() self._attach_interface(vif) with mock.patch.object(self.conn._session, '_wait_for_task', side_effect=Exception): self.assertRaises(exception.InterfaceDetachFailed, self.conn.detach_interface, self.context, self.instance, vif) def test_resize_to_smaller_disk(self): self._create_vm(instance_type='m1.large') flavor = self._get_instance_type_by_name('m1.small') self.assertRaises(exception.InstanceFaultRollback, self.conn.migrate_disk_and_power_off, self.context, self.instance, 'fake_dest', flavor, None) def test_spawn_attach_volume_vmdk(self): self._spawn_attach_volume_vmdk() def test_spawn_attach_volume_vmdk_no_image_ref(self): self._spawn_attach_volume_vmdk(set_image_ref=False) def test_pause(self): # Tests that the VMwareVCDriver does not implement the pause method. self._create_instance() self.assertRaises(NotImplementedError, self.conn.pause, self.instance) def test_unpause(self): # Tests that the VMwareVCDriver does not implement the unpause method. self._create_instance() self.assertRaises(NotImplementedError, self.conn.unpause, self.instance) def test_datastore_dc_map(self): self.assertEqual({}, ds_util._DS_DC_MAPPING) self._create_vm() # currently there are 2 data stores self.assertEqual(2, len(ds_util._DS_DC_MAPPING)) def test_pre_live_migration(self): migrate_data = objects.migrate_data.LiveMigrateData() self.assertRaises(NotImplementedError, self.conn.pre_live_migration, self.context, 'fake_instance', 'fake_block_device_info', 'fake_network_info', 'fake_disk_info', migrate_data) def test_live_migration(self): self.assertRaises(NotImplementedError, self.conn.live_migration, self.context, 'fake_instance', 'fake_dest', 'fake_post_method', 'fake_recover_method') def test_rollback_live_migration_at_destination(self): self.assertRaises(NotImplementedError, self.conn.rollback_live_migration_at_destination, self.context, 'fake_instance', 'fake_network_info', 'fake_block_device_info') def test_post_live_migration(self): self.assertIsNone(self.conn.post_live_migration(self.context, 'fake_instance', 'fake_block_device_info')) def test_get_instance_disk_info_is_implemented(self): # Ensure that the method has been implemented in the driver instance = objects.Instance() try: disk_info = self.conn.get_instance_disk_info(instance) self.assertIsNone(disk_info) except NotImplementedError: self.fail("test_get_instance_disk_info() should not raise " "NotImplementedError") def test_get_host_uptime(self): self.assertRaises(NotImplementedError, self.conn.get_host_uptime) def test_pbm_wsdl_location(self): self.flags(pbm_enabled=True, pbm_wsdl_location='fira', group='vmware') self.conn._update_pbm_location() self.assertEqual('fira', self.conn._session._pbm_wsdl_loc) self.assertIsNone(self.conn._session._pbm) def test_nodename(self): test_mor = "domain-26" self.assertEqual("%s.%s" % (test_mor, vmwareapi_fake._FAKE_VCENTER_UUID), self.conn._create_nodename(test_mor), "VC driver failed to create the proper node name") @mock.patch.object(oslo_vim_util, 'get_vc_version', return_value='5.0.0') def test_invalid_min_version(self, 
mock_version): self.assertRaises(exception.NovaException, self.conn._check_min_version) @mock.patch.object(driver.LOG, 'warning') @mock.patch.object(oslo_vim_util, 'get_vc_version', return_value='5.1.0') def test_warning_deprecated_version(self, mock_version, mock_warning): self.conn._check_min_version() # assert that the next min version is in the warning message expected_arg = {'version': constants.NEXT_MIN_VC_VERSION} version_arg_found = False for call in mock_warning.call_args_list: if call[0][1] == expected_arg: version_arg_found = True break self.assertTrue(version_arg_found) @mock.patch.object(objects.Service, 'get_by_compute_host') def test_host_state_service_disabled(self, mock_service): service = self._create_service(disabled=False, host='fake-mini') mock_service.return_value = service fake_stats = {'cpu': {'vcpus': 4}, 'mem': {'total': '8194', 'free': '2048'}} with test.nested( mock.patch.object(vm_util, 'get_stats_from_cluster', side_effect=[vexc.VimConnectionException('fake'), fake_stats, fake_stats]), mock.patch.object(service, 'save')) as (mock_stats, mock_save): self.conn._vc_state.update_status() self.assertEqual(1, mock_save.call_count) self.assertTrue(service.disabled) self.assertTrue(self.conn._vc_state._auto_service_disabled) # ensure the service is enabled again when there is no connection # exception self.conn._vc_state.update_status() self.assertEqual(2, mock_save.call_count) self.assertFalse(service.disabled) self.assertFalse(self.conn._vc_state._auto_service_disabled) # ensure objects.Service.save method is not called more than once # after the service is enabled self.conn._vc_state.update_status() self.assertEqual(2, mock_save.call_count) self.assertFalse(service.disabled) self.assertFalse(self.conn._vc_state._auto_service_disabled) nova-17.0.1/nova/tests/unit/virt/vmwareapi/test_network_util.py0000666000175000017500000002051613250073126024774 0ustar zuulzuul00000000000000# Copyright (c) 2014 VMware, Inc. # # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import collections import mock from oslo_vmware import vim_util from nova import exception from nova import test from nova.tests.unit.virt.vmwareapi import fake from nova.tests.unit.virt.vmwareapi import stubs from nova.virt.vmwareapi import driver from nova.virt.vmwareapi import network_util from nova.virt.vmwareapi import vm_util ResultSet = collections.namedtuple('ResultSet', ['objects']) ObjectContent = collections.namedtuple('ObjectContent', ['obj', 'propSet']) DynamicProperty = collections.namedtuple('DynamicProperty', ['name', 'val']) class GetNetworkWithTheNameTestCase(test.NoDBTestCase): def setUp(self): super(GetNetworkWithTheNameTestCase, self).setUp() fake.reset() self.stub_out('nova.virt.vmwareapi.driver.VMwareAPISession.vim', stubs.fake_vim_prop) self.stub_out('nova.virt.vmwareapi.driver.' 'VMwareAPISession.is_vim_object', stubs.fake_is_vim_object) self._session = driver.VMwareAPISession() def _build_cluster_networks(self, networks): """Returns a set of results for a cluster network lookup. 

        This is an example:
           (ObjectContent){
              obj =
                 (obj){
                    value = "domain-c7"
                    _type = "ClusterComputeResource"
                 }
              propSet[] =
                 (DynamicProperty){
                    name = "network"
                    val =
                       (ArrayOfManagedObjectReference){
                          ManagedObjectReference[] =
                             (ManagedObjectReference){
                                value = "network-54"
                                _type = "Network"
                             },
                             (ManagedObjectReference){
                                value = "dvportgroup-14"
                                _type = "DistributedVirtualPortgroup"
                             },
                       }
                 },
           }]
        """
        objects = []
        obj = ObjectContent(obj=vim_util.get_moref("domain-c7",
                                                   "ClusterComputeResource"),
                            propSet=[])
        value = fake.DataObject()
        value.ManagedObjectReference = []
        for network in networks:
            value.ManagedObjectReference.append(network)

        obj.propSet.append(
            DynamicProperty(name='network', val=value))
        objects.append(obj)
        return ResultSet(objects=objects)

    def test_get_network_no_match(self):
        net_morefs = [vim_util.get_moref("dvportgroup-135",
                                         "DistributedVirtualPortgroup"),
                      vim_util.get_moref("dvportgroup-136",
                                         "DistributedVirtualPortgroup")]
        networks = self._build_cluster_networks(net_morefs)

        def mock_call_method(module, method, *args, **kwargs):
            if method == 'get_object_properties':
                return networks
            if method == 'get_object_property':
                result = fake.DataObject()
                result.name = 'no-match'
                return result

        with mock.patch.object(self._session, '_call_method',
                               mock_call_method):
            res = network_util.get_network_with_the_name(self._session,
                                                         'fake_net',
                                                         'fake_cluster')
            self.assertIsNone(res)

    def _get_network_dvs_match(self, name):
        net_morefs = [vim_util.get_moref("dvportgroup-135",
                                         "DistributedVirtualPortgroup")]
        networks = self._build_cluster_networks(net_morefs)

        def mock_call_method(module, method, *args, **kwargs):
            if method == 'get_object_properties':
                return networks
            if method == 'get_object_property':
                result = fake.DataObject()
                result.name = name
                result.key = 'fake_key'
                result.distributedVirtualSwitch = 'fake_dvs'
                return result

        with mock.patch.object(self._session, '_call_method',
                               mock_call_method):
            res = network_util.get_network_with_the_name(self._session,
                                                         'fake_net',
                                                         'fake_cluster')
            self.assertIsNotNone(res)

    def test_get_network_dvs_exact_match(self):
        self._get_network_dvs_match('fake_net')

    def test_get_network_dvs_match(self):
        self._get_network_dvs_match('dvs_7-virtualwire-7-fake_net')

    def test_get_network_network_match(self):
        net_morefs = [vim_util.get_moref("network-54", "Network")]
        networks = self._build_cluster_networks(net_morefs)

        def mock_call_method(module, method, *args, **kwargs):
            if method == 'get_object_properties':
                return networks
            if method == 'get_object_property':
                return 'fake_net'

        with mock.patch.object(self._session, '_call_method',
                               mock_call_method):
            res = network_util.get_network_with_the_name(self._session,
                                                         'fake_net',
                                                         'fake_cluster')
            self.assertIsNotNone(res)


class GetVlanIdAndVswitchForPortgroupTestCase(test.NoDBTestCase):

    @mock.patch.object(vm_util, 'get_host_ref')
    def test_no_port_groups(self, mock_get_host_ref):
        session = mock.Mock()
        session._call_method.return_value = None
        self.assertRaises(
            exception.NovaException,
            network_util.get_vlanid_and_vswitch_for_portgroup,
            session,
            'port_group_name',
            'fake_cluster'
        )

    @mock.patch.object(vm_util, 'get_host_ref')
    def test_valid_port_group(self, mock_get_host_ref):
        session = mock.Mock()
        session._call_method.return_value = self._fake_port_groups()
        vlanid, vswitch = network_util.get_vlanid_and_vswitch_for_portgroup(
            session,
            'port_group_name',
            'fake_cluster'
        )
        self.assertEqual(vlanid, 100)
        self.assertEqual(vswitch, 'vswitch_name')

    @mock.patch.object(vm_util, 'get_host_ref')
    def test_unknown_port_group(self, mock_get_host_ref):
        session = mock.Mock()
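        # An unknown port group name is not an error: the lookup below is
        # expected to return (None, None) for both the vlan id and vswitch.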
        session._call_method.return_value = self._fake_port_groups()
        vlanid, vswitch = network_util.get_vlanid_and_vswitch_for_portgroup(
            session,
            'unknown_port_group',
            'fake_cluster'
        )
        self.assertIsNone(vlanid)
        self.assertIsNone(vswitch)

    def _fake_port_groups(self):
        port_group_spec = fake.DataObject()
        port_group_spec.name = 'port_group_name'
        port_group_spec.vlanId = 100
        port_group_spec.vswitchName = 'vswitch_name'

        port_group = fake.DataObject()
        port_group.vswitch = 'vswitch_name'
        port_group.spec = port_group_spec

        response = fake.DataObject()
        response.HostPortGroup = [port_group]
        return response


class GetDVSNetworkNameTestCase(test.NoDBTestCase):

    def test__get_name_from_dvs_name(self):
        vxw = 'vxw-dvs-22-virtualwire-89-sid-5008-'
        uuid = '2425c130-fd39-45a6-91d8-bf7ebe66b77c'
        cases = [('name', 'name'),
                 ('%sname' % vxw, 'name'),
                 ('%s%s' % (vxw, uuid), uuid)]
        for (dvsname, expected) in cases:
            self.assertEqual(expected,
                             network_util._get_name_from_dvs_name(dvsname))
nova-17.0.1/nova/tests/unit/virt/vmwareapi/test_vif.py0000666000175000017500000004636013250073126023037 0ustar zuulzuul00000000000000# Copyright 2013 Canonical Corp.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock
from oslo_vmware import exceptions as vexc
from oslo_vmware import vim_util

from nova import exception
from nova.network import model as network_model
from nova import test
from nova.tests.unit import matchers
from nova.tests.unit import utils
from nova.tests.unit.virt.vmwareapi import fake
from nova.virt.vmwareapi import constants
from nova.virt.vmwareapi import network_util
from nova.virt.vmwareapi import vif
from nova.virt.vmwareapi import vm_util


class VMwareVifTestCase(test.NoDBTestCase):
    def setUp(self):
        super(VMwareVifTestCase, self).setUp()
        self.flags(vlan_interface='vmnet0', group='vmware')
        network = network_model.Network(id=0,
                                        bridge='fa0',
                                        label='fake',
                                        vlan=3,
                                        bridge_interface='eth0',
                                        injected=True)
        self._network = network
        self.vif = network_model.NetworkInfo([
            network_model.VIF(id=None,
                              address='DE:AD:BE:EF:00:00',
                              network=network,
                              type=None,
                              devname=None,
                              ovs_interfaceid=None,
                              rxtx_cap=3)
        ])[0]
        self.session = fake.FakeSession()
        self.cluster = None

    @mock.patch.object(network_util, 'get_network_with_the_name',
                       return_value=None)
    @mock.patch.object(network_util, 'get_vswitch_for_vlan_interface',
                       return_value='vmnet0')
    @mock.patch.object(network_util, 'check_if_vlan_interface_exists',
                       return_value=True)
    @mock.patch.object(network_util, 'create_port_group')
    def test_ensure_vlan_bridge(self, mock_create_port_group,
                                mock_check_if_vlan_exists,
                                mock_get_vswitch_for_vlan,
                                mock_get_network_with_name):
        vif.ensure_vlan_bridge(self.session, self.vif, create_vlan=True)

        expected_calls = [mock.call(self.session, 'fa0', self.cluster),
                          mock.call(self.session, 'fa0', None)]
        mock_get_network_with_name.assert_has_calls(expected_calls)
        self.assertEqual(2, mock_get_network_with_name.call_count)
        mock_get_vswitch_for_vlan.assert_called_once_with(
            self.session, 'vmnet0', self.cluster)
        mock_check_if_vlan_exists.assert_called_once_with(
            self.session, 'vmnet0', self.cluster)
        mock_create_port_group.assert_called_once_with(
            self.session, 'fa0', 'vmnet0', 3, self.cluster)

    # FlatDHCP network mode without vlan - the network does not exist on
    # the host
    @mock.patch.object(network_util, 'get_network_with_the_name',
                       return_value=None)
    @mock.patch.object(network_util, 'get_vswitch_for_vlan_interface',
                       return_value='vmnet0')
    @mock.patch.object(network_util, 'check_if_vlan_interface_exists',
                       return_value=True)
    @mock.patch.object(network_util, 'create_port_group')
    def test_ensure_vlan_bridge_without_vlan(self, mock_create_port_group,
                                             mock_check_if_vlan_exists,
                                             mock_get_vswitch_for_vlan,
                                             mock_get_network_with_name):
        vif.ensure_vlan_bridge(self.session, self.vif, create_vlan=False)

        expected_calls = [mock.call(self.session, 'fa0', self.cluster),
                          mock.call(self.session, 'fa0', None)]
        mock_get_network_with_name.assert_has_calls(expected_calls)
        self.assertEqual(2, mock_get_network_with_name.call_count)
        mock_get_vswitch_for_vlan.assert_called_once_with(
            self.session, 'vmnet0', self.cluster)
        mock_check_if_vlan_exists.assert_called_once_with(
            self.session, 'vmnet0', self.cluster)
        mock_create_port_group.assert_called_once_with(
            self.session, 'fa0', 'vmnet0', 0, self.cluster)

    # FlatDHCP network mode without vlan - the network exists on the host;
    # the vswitch lookup and the vlan interface check should not be called
    @mock.patch.object(network_util, 'get_network_with_the_name')
    @mock.patch.object(network_util, 'get_vswitch_for_vlan_interface')
    @mock.patch.object(network_util, 'check_if_vlan_interface_exists')
    @mock.patch.object(network_util, 'create_port_group')
    def test_ensure_vlan_bridge_with_network(self, mock_create_port_group,
                                             mock_check_if_vlan_exists,
                                             mock_get_vswitch_for_vlan,
                                             mock_get_network_with_name):
        vm_network = {'name': 'VM Network', 'type': 'Network'}
        mock_get_network_with_name.return_value = vm_network
        vif.ensure_vlan_bridge(self.session, self.vif, create_vlan=False)

        mock_get_network_with_name.assert_called_once_with(self.session,
                                                           'fa0',
                                                           self.cluster)
        mock_check_if_vlan_exists.assert_not_called()
        mock_get_vswitch_for_vlan.assert_not_called()
        mock_create_port_group.assert_not_called()

    # Flat network mode with DVS
    @mock.patch.object(network_util, 'get_network_with_the_name')
    @mock.patch.object(network_util, 'get_vswitch_for_vlan_interface')
    @mock.patch.object(network_util, 'check_if_vlan_interface_exists')
    @mock.patch.object(network_util, 'create_port_group')
    def test_ensure_vlan_bridge_with_existing_dvs(self,
                                                  mock_create_port_group,
                                                  mock_check_if_vlan_exists,
                                                  mock_get_vswitch_for_vlan,
                                                  mock_get_network_with_name):
        network_ref = {'dvpg': 'dvportgroup-2062',
                       'type': 'DistributedVirtualPortgroup'}
        mock_get_network_with_name.return_value = network_ref
        ref = vif.ensure_vlan_bridge(self.session, self.vif,
                                     create_vlan=False)

        self.assertThat(ref, matchers.DictMatches(network_ref))
        mock_get_network_with_name.assert_called_once_with(self.session,
                                                           'fa0',
                                                           self.cluster)
        mock_check_if_vlan_exists.assert_not_called()
        mock_get_vswitch_for_vlan.assert_not_called()
        mock_create_port_group.assert_not_called()

    @mock.patch.object(vif, 'ensure_vlan_bridge')
    def test_get_network_ref_flat_dhcp(self, mock_ensure_vlan_bridge):
        vif.get_network_ref(self.session, self.cluster, self.vif, False)
        mock_ensure_vlan_bridge.assert_called_once_with(
            self.session, self.vif, cluster=self.cluster, create_vlan=False)

    @mock.patch.object(vif, 'ensure_vlan_bridge')
    def test_get_network_ref_bridge(self, mock_ensure_vlan_bridge):
        network = 
network_model.Network(id=0, bridge='fa0', label='fake', vlan=3, bridge_interface='eth0', injected=True, should_create_vlan=True) self.vif = network_model.NetworkInfo([ network_model.VIF(id=None, address='DE:AD:BE:EF:00:00', network=network, type=None, devname=None, ovs_interfaceid=None, rxtx_cap=3) ])[0] vif.get_network_ref(self.session, self.cluster, self.vif, False) mock_ensure_vlan_bridge.assert_called_once_with( self.session, self.vif, cluster=self.cluster, create_vlan=True) def test_create_port_group_already_exists(self): def fake_call_method(module, method, *args, **kwargs): if method == 'AddPortGroup': raise vexc.AlreadyExistsException() with test.nested( mock.patch.object(vm_util, 'get_add_vswitch_port_group_spec'), mock.patch.object(vm_util, 'get_host_ref'), mock.patch.object(self.session, '_call_method', fake_call_method) ) as (_add_vswitch, _get_host, _call_method): network_util.create_port_group(self.session, 'pg_name', 'vswitch_name', vlan_id=0, cluster=None) def test_create_port_group_exception(self): def fake_call_method(module, method, *args, **kwargs): if method == 'AddPortGroup': raise vexc.VMwareDriverException() with test.nested( mock.patch.object(vm_util, 'get_add_vswitch_port_group_spec'), mock.patch.object(vm_util, 'get_host_ref'), mock.patch.object(self.session, '_call_method', fake_call_method) ) as (_add_vswitch, _get_host, _call_method): self.assertRaises(vexc.VMwareDriverException, network_util.create_port_group, self.session, 'pg_name', 'vswitch_name', vlan_id=0, cluster=None) def test_get_vif_info_none(self): vif_info = vif.get_vif_info('fake_session', 'fake_cluster', 'is_neutron', 'fake_model', None) self.assertEqual([], vif_info) def test_get_vif_info_empty_list(self): vif_info = vif.get_vif_info('fake_session', 'fake_cluster', 'is_neutron', 'fake_model', []) self.assertEqual([], vif_info) @mock.patch.object(vif, 'get_network_ref', return_value='fake_ref') def test_get_vif_info(self, mock_get_network_ref): network_info = utils.get_test_network_info() vif_info = vif.get_vif_info('fake_session', 'fake_cluster', 'is_neutron', 'fake_model', network_info) expected = [{'iface_id': utils.FAKE_VIF_UUID, 'mac_address': utils.FAKE_VIF_MAC, 'network_name': utils.FAKE_NETWORK_BRIDGE, 'network_ref': 'fake_ref', 'vif_model': 'fake_model'}] self.assertEqual(expected, vif_info) @mock.patch.object(vif, '_check_ovs_supported_version') def test_get_neutron_network_ovs_integration_bridge(self, mock_check): self.flags(integration_bridge='fake-bridge-id', group='vmware') vif_info = network_model.NetworkInfo([ network_model.VIF(type=network_model.VIF_TYPE_OVS, address='DE:AD:BE:EF:00:00', network=self._network)] )[0] network_ref = vif._get_neutron_network('fake-session', 'fake-cluster', vif_info) expected_ref = {'type': 'OpaqueNetwork', 'network-id': 'fake-bridge-id', 'network-type': 'opaque', 'use-external-id': False} self.assertEqual(expected_ref, network_ref) mock_check.assert_called_once_with('fake-session') @mock.patch.object(vif, '_check_ovs_supported_version') def test_get_neutron_network_ovs(self, mock_check): vif_info = network_model.NetworkInfo([ network_model.VIF(type=network_model.VIF_TYPE_OVS, address='DE:AD:BE:EF:00:00', network=self._network)] )[0] network_ref = vif._get_neutron_network('fake-session', 'fake-cluster', vif_info) expected_ref = {'type': 'OpaqueNetwork', 'network-id': 0, 'network-type': 'nsx.LogicalSwitch', 'use-external-id': True} self.assertEqual(expected_ref, network_ref) mock_check.assert_called_once_with('fake-session') @mock.patch.object(vif, 
'_check_ovs_supported_version') def test_get_neutron_network_ovs_logical_switch_id(self, mock_check): vif_info = network_model.NetworkInfo([ network_model.VIF(type=network_model.VIF_TYPE_OVS, address='DE:AD:BE:EF:00:00', network=self._network, details={'nsx-logical-switch-id': 'fake-nsx-id'})] )[0] network_ref = vif._get_neutron_network('fake-session', 'fake-cluster', vif_info) expected_ref = {'type': 'OpaqueNetwork', 'network-id': 'fake-nsx-id', 'network-type': 'nsx.LogicalSwitch', 'use-external-id': True} self.assertEqual(expected_ref, network_ref) mock_check.assert_called_once_with('fake-session') @mock.patch.object(network_util, 'get_network_with_the_name') def test_get_neutron_network_dvs(self, mock_network_name): fake_network_obj = {'type': 'DistributedVirtualPortgroup', 'dvpg': 'fake-key', 'dvsw': 'fake-props'} mock_network_name.return_value = fake_network_obj vif_info = network_model.NetworkInfo([ network_model.VIF(type=network_model.VIF_TYPE_DVS, address='DE:AD:BE:EF:00:00', network=self._network)] )[0] network_ref = vif._get_neutron_network('fake-session', 'fake-cluster', vif_info) mock_network_name.assert_called_once_with('fake-session', 'fa0', 'fake-cluster') self.assertEqual(fake_network_obj, network_ref) @mock.patch.object(network_util, 'get_network_with_the_name') def test_get_neutron_network_dvs_vif_details(self, mock_network_name): fake_network_obj = {'type': 'DistributedVirtualPortgroup', 'dvpg': 'pg1', 'dvsw': 'fake-props'} mock_network_name.return_value = fake_network_obj vif_info = network_model.NetworkInfo([ network_model.VIF(type=network_model.VIF_TYPE_DVS, details={'dvs_port_key': 'key1', 'dvs_port_group_name': 'pg1'}, address='DE:AD:BE:EF:00:00', network=self._network)])[0] network_ref = vif._get_neutron_network('fake-session', 'fake-cluster', vif_info) mock_network_name.assert_called_once_with('fake-session', 'pg1', 'fake-cluster') self.assertEqual(fake_network_obj, network_ref) @mock.patch.object(network_util, 'get_network_with_the_name', return_value=None) def test_get_neutron_network_dvs_no_match(self, mock_network_name): vif_info = network_model.NetworkInfo([ network_model.VIF(type=network_model.VIF_TYPE_DVS, address='DE:AD:BE:EF:00:00', network=self._network)] )[0] self.assertRaises(exception.NetworkNotFoundForBridge, vif._get_neutron_network, 'fake-session', 'fake-cluster', vif_info) def test_get_neutron_network_invalid_type(self): vif_info = network_model.NetworkInfo([ network_model.VIF(address='DE:AD:BE:EF:00:00', network=self._network)] )[0] self.assertRaises(exception.InvalidInput, vif._get_neutron_network, 'fake-session', 'fake-cluster', vif_info) @mock.patch.object(vif.LOG, 'warning') @mock.patch.object(vim_util, 'get_vc_version', return_value='5.0.0') def test_check_invalid_ovs_version(self, mock_version, mock_warning): vif._check_ovs_supported_version('fake_session') # assert that the min version is in a warning message expected_arg = {'version': constants.MIN_VC_OVS_VERSION} version_arg_found = False for call in mock_warning.call_args_list: if call[0][1] == expected_arg: version_arg_found = True break self.assertTrue(version_arg_found) @mock.patch.object(network_util, 'get_network_with_the_name') def test_get_neutron_network_dvs_provider(self, mock_network_name): fake_network_obj = {'type': 'DistributedVirtualPortgroup', 'dvpg': 'fake-key', 'dvsw': 'fake-props'} mock_network_name.side_effect = [None, fake_network_obj] vif_info = network_model.NetworkInfo([ network_model.VIF(type=network_model.VIF_TYPE_DVS, address='DE:AD:BE:EF:00:00', 
network=self._network)] )[0] network_ref = vif._get_neutron_network('fake-session', 'fake-cluster', vif_info) calls = [mock.call('fake-session', 'fa0', 'fake-cluster'), mock.call('fake-session', 'fake', 'fake-cluster')] mock_network_name.assert_has_calls(calls) self.assertEqual(fake_network_obj, network_ref) nova-17.0.1/nova/tests/unit/virt/vmwareapi/stubs.py0000666000175000017500000000533613250073126022352 0ustar zuulzuul00000000000000# Copyright (c) 2011 Citrix Systems, Inc. # Copyright 2011 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """ Stubouts for the test suite """ from oslo_vmware import exceptions as vexc import nova.conf from nova.tests.unit.virt.vmwareapi import fake CONF = nova.conf.CONF def fake_get_vim_object(arg): """Stubs out the VMwareAPISession's get_vim_object method.""" return fake.FakeVim() @property def fake_vim_prop(arg): """Stubs out the VMwareAPISession's vim property access method.""" return fake.get_fake_vim_object(arg) def fake_is_vim_object(arg, module): """Stubs out the VMwareAPISession's is_vim_object method.""" return isinstance(module, fake.FakeVim) def fake_temp_method_exception(): raise vexc.VimFaultException( [vexc.NOT_AUTHENTICATED], "Session Empty/Not Authenticated") def fake_temp_session_exception(): raise vexc.VimConnectionException("it's a fake!", "Session Exception") def fake_session_file_exception(): fault_list = [vexc.FILE_ALREADY_EXISTS] raise vexc.VimFaultException(fault_list, Exception('fake')) def fake_session_permission_exception(): fault_list = [vexc.NO_PERMISSION] fault_string = 'Permission to perform this operation was denied.' details = {'privilegeId': 'Resource.AssignVMToPool', 'object': 'domain-c7'} raise vexc.VimFaultException(fault_list, fault_string, details=details) def set_stubs(test): """Set the stubs.""" test.stub_out('nova.virt.vmwareapi.network_util.get_network_with_the_name', fake.fake_get_network) test.stub_out('nova.virt.vmwareapi.images.upload_image_stream_optimized', fake.fake_upload_image) test.stub_out('nova.virt.vmwareapi.images.fetch_image', fake.fake_fetch_image) test.stub_out('nova.virt.vmwareapi.driver.VMwareAPISession.vim', fake_vim_prop) test.stub_out('nova.virt.vmwareapi.driver.VMwareAPISession._is_vim_object', fake_is_vim_object) if CONF.use_neutron: test.stub_out( 'nova.network.neutronv2.api.API.update_instance_vnic_index', lambda *args, **kwargs: None) nova-17.0.1/nova/tests/unit/virt/vmwareapi/test_images.py0000666000175000017500000004030413250073126023510 0ustar zuulzuul00000000000000# Copyright (c) 2014 VMware, Inc. # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. """ Test suite for images. """ import os import tarfile import mock from oslo_utils import units from oslo_vmware import rw_handles from nova import exception from nova import objects from nova import test import nova.tests.unit.image.fake from nova.tests import uuidsentinel from nova.virt.vmwareapi import constants from nova.virt.vmwareapi import images from nova.virt.vmwareapi import vm_util class VMwareImagesTestCase(test.NoDBTestCase): """Unit tests for Vmware API connection calls.""" def test_fetch_image(self): """Test fetching images.""" dc_name = 'fake-dc' file_path = 'fake_file' ds_name = 'ds1' host = mock.MagicMock() port = 7443 context = mock.MagicMock() image_data = { 'id': nova.tests.unit.image.fake.get_valid_image_id(), 'disk_format': 'vmdk', 'size': 512, } read_file_handle = mock.MagicMock() write_file_handle = mock.MagicMock() read_iter = mock.MagicMock() instance = objects.Instance(id=1, uuid=uuidsentinel.foo, image_ref=image_data['id']) def fake_read_handle(read_iter): return read_file_handle def fake_write_handle(host, port, dc_name, ds_name, cookies, file_path, file_size): return write_file_handle with test.nested( mock.patch.object(rw_handles, 'ImageReadHandle', side_effect=fake_read_handle), mock.patch.object(rw_handles, 'FileWriteHandle', side_effect=fake_write_handle), mock.patch.object(images, 'image_transfer'), mock.patch.object(images.IMAGE_API, 'get', return_value=image_data), mock.patch.object(images.IMAGE_API, 'download', return_value=read_iter), ) as (glance_read, http_write, image_transfer, image_show, image_download): images.fetch_image(context, instance, host, port, dc_name, ds_name, file_path) glance_read.assert_called_once_with(read_iter) http_write.assert_called_once_with(host, port, dc_name, ds_name, None, file_path, image_data['size']) image_transfer.assert_called_once_with(read_file_handle, write_file_handle) image_download.assert_called_once_with(context, instance['image_ref']) image_show.assert_called_once_with(context, instance['image_ref']) def _setup_mock_get_remote_image_service(self, mock_get_remote_image_service, metadata): mock_image_service = mock.MagicMock() mock_image_service.show.return_value = metadata mock_get_remote_image_service.return_value = [mock_image_service, 'i'] def test_get_vmdk_name_from_ovf(self): ovf_path = os.path.join(os.path.dirname(__file__), 'ovf.xml') with open(ovf_path) as f: ovf_descriptor = f.read() vmdk_name = images.get_vmdk_name_from_ovf(ovf_descriptor) self.assertEqual("Damn_Small_Linux-disk1.vmdk", vmdk_name) @mock.patch('oslo_vmware.rw_handles.ImageReadHandle') @mock.patch('oslo_vmware.rw_handles.VmdkWriteHandle') @mock.patch.object(tarfile, 'open') def test_fetch_image_ova(self, mock_tar_open, mock_write_class, mock_read_class): session = mock.MagicMock() ovf_descriptor = None ovf_path = os.path.join(os.path.dirname(__file__), 'ovf.xml') with open(ovf_path) as f: ovf_descriptor = f.read() with test.nested( mock.patch.object(images.IMAGE_API, 'get'), mock.patch.object(images.IMAGE_API, 'download'), mock.patch.object(images, 'image_transfer'), mock.patch.object(images, '_build_shadow_vm_config_spec'), mock.patch.object(session, '_call_method'), mock.patch.object(vm_util, 'get_vmdk_info') ) as (mock_image_api_get, mock_image_api_download, mock_image_transfer, mock_build_shadow_vm_config_spec, mock_call_method, mock_get_vmdk_info): image_data = {'id': 'fake-id', 'disk_format': 'vmdk', 'size': 512} instance = 
mock.MagicMock() instance.image_ref = image_data['id'] mock_image_api_get.return_value = image_data vm_folder_ref = mock.MagicMock() res_pool_ref = mock.MagicMock() context = mock.MagicMock() mock_read_handle = mock.MagicMock() mock_read_class.return_value = mock_read_handle mock_write_handle = mock.MagicMock() mock_write_class.return_value = mock_write_handle mock_write_handle.get_imported_vm.return_value = \ mock.sentinel.vm_ref mock_ovf = mock.MagicMock() mock_ovf.name = 'dsl.ovf' mock_vmdk = mock.MagicMock() mock_vmdk.name = "Damn_Small_Linux-disk1.vmdk" def fake_extract(name): if name == mock_ovf: m = mock.MagicMock() m.read.return_value = ovf_descriptor return m elif name == mock_vmdk: return mock_read_handle mock_tar = mock.MagicMock() mock_tar.__iter__ = mock.Mock(return_value=iter([mock_ovf, mock_vmdk])) mock_tar.extractfile = fake_extract mock_tar_open.return_value.__enter__.return_value = mock_tar images.fetch_image_ova( context, instance, session, 'fake-vm', 'fake-datastore', vm_folder_ref, res_pool_ref) mock_tar_open.assert_called_once_with(mode='r|', fileobj=mock_read_handle) mock_image_transfer.assert_called_once_with(mock_read_handle, mock_write_handle) mock_get_vmdk_info.assert_called_once_with( session, mock.sentinel.vm_ref, 'fake-vm') mock_call_method.assert_called_once_with( session.vim, "UnregisterVM", mock.sentinel.vm_ref) @mock.patch('oslo_vmware.rw_handles.ImageReadHandle') @mock.patch('oslo_vmware.rw_handles.VmdkWriteHandle') def test_fetch_image_stream_optimized(self, mock_write_class, mock_read_class): """Test fetching streamOptimized disk image.""" session = mock.MagicMock() with test.nested( mock.patch.object(images.IMAGE_API, 'get'), mock.patch.object(images.IMAGE_API, 'download'), mock.patch.object(images, 'image_transfer'), mock.patch.object(images, '_build_shadow_vm_config_spec'), mock.patch.object(session, '_call_method'), mock.patch.object(vm_util, 'get_vmdk_info') ) as (mock_image_api_get, mock_image_api_download, mock_image_transfer, mock_build_shadow_vm_config_spec, mock_call_method, mock_get_vmdk_info): image_data = {'id': 'fake-id', 'disk_format': 'vmdk', 'size': 512} instance = mock.MagicMock() instance.image_ref = image_data['id'] mock_image_api_get.return_value = image_data vm_folder_ref = mock.MagicMock() res_pool_ref = mock.MagicMock() context = mock.MagicMock() mock_read_handle = mock.MagicMock() mock_read_class.return_value = mock_read_handle mock_write_handle = mock.MagicMock() mock_write_class.return_value = mock_write_handle mock_write_handle.get_imported_vm.return_value = \ mock.sentinel.vm_ref images.fetch_image_stream_optimized( context, instance, session, 'fake-vm', 'fake-datastore', vm_folder_ref, res_pool_ref) mock_image_transfer.assert_called_once_with(mock_read_handle, mock_write_handle) mock_call_method.assert_called_once_with( session.vim, "UnregisterVM", mock.sentinel.vm_ref) mock_get_vmdk_info.assert_called_once_with( session, mock.sentinel.vm_ref, 'fake-vm') def test_from_image_with_image_ref(self): raw_disk_size_in_gb = 83 raw_disk_size_in_bytes = raw_disk_size_in_gb * units.Gi image_id = nova.tests.unit.image.fake.get_valid_image_id() mdata = {'size': raw_disk_size_in_bytes, 'disk_format': 'vmdk', 'properties': { "vmware_ostype": constants.DEFAULT_OS_TYPE, "vmware_adaptertype": constants.DEFAULT_ADAPTER_TYPE, "vmware_disktype": constants.DEFAULT_DISK_TYPE, "hw_vif_model": constants.DEFAULT_VIF_MODEL, "vmware_linked_clone": True}} mdata = objects.ImageMeta.from_dict(mdata) with mock.patch.object(images, 
'get_vsphere_location', return_value=None): img_props = images.VMwareImage.from_image(None, image_id, mdata) image_size_in_kb = raw_disk_size_in_bytes / units.Ki # assert that defaults are set and no value returned is left empty self.assertEqual(constants.DEFAULT_OS_TYPE, img_props.os_type) self.assertEqual(constants.DEFAULT_ADAPTER_TYPE, img_props.adapter_type) self.assertEqual(constants.DEFAULT_DISK_TYPE, img_props.disk_type) self.assertEqual(constants.DEFAULT_VIF_MODEL, img_props.vif_model) self.assertTrue(img_props.linked_clone) self.assertEqual(image_size_in_kb, img_props.file_size_in_kb) def _image_build(self, image_lc_setting, global_lc_setting, disk_format=constants.DEFAULT_DISK_FORMAT, os_type=constants.DEFAULT_OS_TYPE, adapter_type=constants.DEFAULT_ADAPTER_TYPE, disk_type=constants.DEFAULT_DISK_TYPE, vif_model=constants.DEFAULT_VIF_MODEL, vsphere_location=None): self.flags(use_linked_clone=global_lc_setting, group='vmware') raw_disk_size_in_gb = 93 raw_disk_size_in_bytes = raw_disk_size_in_gb * units.Gi image_id = nova.tests.unit.image.fake.get_valid_image_id() mdata = {'size': raw_disk_size_in_bytes, 'disk_format': disk_format, 'properties': { "vmware_ostype": os_type, "vmware_adaptertype": adapter_type, "vmware_disktype": disk_type, "hw_vif_model": vif_model}} if image_lc_setting is not None: mdata['properties']["vmware_linked_clone"] = image_lc_setting context = mock.Mock() mdata = objects.ImageMeta.from_dict(mdata) with mock.patch.object( images, 'get_vsphere_location', return_value=vsphere_location): return images.VMwareImage.from_image(context, image_id, mdata) def test_use_linked_clone_override_nf(self): image_props = self._image_build(None, False) self.assertFalse(image_props.linked_clone, "No overrides present but still overridden!") def test_use_linked_clone_override_nt(self): image_props = self._image_build(None, True) self.assertTrue(image_props.linked_clone, "No overrides present but still overridden!") def test_use_linked_clone_override_ny(self): image_props = self._image_build(None, "yes") self.assertTrue(image_props.linked_clone, "No overrides present but still overridden!") def test_use_linked_clone_override_ft(self): image_props = self._image_build(False, True) self.assertFalse(image_props.linked_clone, "image level metadata failed to override global") def test_use_linked_clone_override_string_nt(self): image_props = self._image_build("no", True) self.assertFalse(image_props.linked_clone, "image level metadata failed to override global") def test_use_linked_clone_override_string_yf(self): image_props = self._image_build("yes", False) self.assertTrue(image_props.linked_clone, "image level metadata failed to override global") def test_use_disk_format_iso(self): image = self._image_build(None, True, disk_format='iso') self.assertEqual('iso', image.file_type) self.assertTrue(image.is_iso) def test_use_bad_disk_format(self): self.assertRaises(exception.InvalidDiskFormat, self._image_build, None, True, disk_format='bad_disk_format') def test_image_no_defaults(self): image = self._image_build(False, False, disk_format='iso', os_type='otherGuest', adapter_type='lsiLogic', disk_type='preallocated', vif_model='e1000e') self.assertEqual('iso', image.file_type) self.assertEqual('otherGuest', image.os_type) self.assertEqual('lsiLogic', image.adapter_type) self.assertEqual('preallocated', image.disk_type) self.assertEqual('e1000e', image.vif_model) self.assertFalse(image.linked_clone) def test_image_defaults(self): image = images.VMwareImage(image_id='fake-image-id') # N.B. 
We intentionally don't use the defined constants here. Amongst # other potential failures, we're interested in changes to their # values, which would not otherwise be picked up. self.assertEqual('otherGuest', image.os_type) self.assertEqual('lsiLogic', image.adapter_type) self.assertEqual('preallocated', image.disk_type) self.assertEqual('e1000', image.vif_model) def test_use_vsphere_location(self): image = self._image_build(None, True, vsphere_location='vsphere://ok') self.assertEqual('vsphere://ok', image.vsphere_location) def test_get_vsphere_location(self): expected = 'vsphere://ok' metadata = {'locations': [{}, {'url': 'http://ko'}, {'url': expected}]} with mock.patch.object(images.IMAGE_API, 'get', return_value=metadata): context = mock.Mock() observed = images.get_vsphere_location(context, 'image_id') self.assertEqual(expected, observed) def test_get_no_vsphere_location(self): metadata = {'locations': [{}, {'url': 'http://ko'}]} with mock.patch.object(images.IMAGE_API, 'get', return_value=metadata): context = mock.Mock() observed = images.get_vsphere_location(context, 'image_id') self.assertIsNone(observed) def test_get_vsphere_location_no_image(self): context = mock.Mock() observed = images.get_vsphere_location(context, None) self.assertIsNone(observed) nova-17.0.1/nova/tests/unit/virt/vmwareapi/test_vim_util.py0000666000175000017500000000300313250073126024076 0ustar zuulzuul00000000000000# Copyright (c) 2013 VMware, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova import test from nova.tests.unit.virt.vmwareapi import fake from nova.virt.vmwareapi import vim_util class VMwareVIMUtilTestCase(test.NoDBTestCase): def setUp(self): super(VMwareVIMUtilTestCase, self).setUp() fake.reset() self.vim = fake.FakeVim() self.vim._login() def test_get_inner_objects(self): property = ['summary.name'] # Get the fake datastores directly from the cluster cluster_refs = fake._get_object_refs('ClusterComputeResource') cluster = fake._get_object(cluster_refs[0]) expected_ds = cluster.datastore.ManagedObjectReference # Get the fake datastores using inner objects utility method result = vim_util.get_inner_objects( self.vim, cluster_refs[0], 'datastore', 'Datastore', property) datastores = [oc.obj for oc in result.objects] self.assertEqual(expected_ds, datastores) nova-17.0.1/nova/tests/unit/virt/hyperv/0000775000175000017500000000000013250073472020155 5ustar zuulzuul00000000000000nova-17.0.1/nova/tests/unit/virt/hyperv/test_driver.py0000666000175000017500000005133413250073126023065 0ustar zuulzuul00000000000000# Copyright 2015 Cloudbase Solutions SRL # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the # License for the specific language governing permissions and limitations # under the License. """ Unit tests for the Hyper-V Driver. """ import platform import sys import mock from os_win import exceptions as os_win_exc from nova import exception from nova import safe_utils from nova.tests.unit import fake_instance from nova.tests.unit.virt.hyperv import test_base from nova.virt import driver as base_driver from nova.virt.hyperv import driver class HyperVDriverTestCase(test_base.HyperVBaseTestCase): FAKE_WIN_2008R2_VERSION = '6.0.0' @mock.patch.object(driver.HyperVDriver, '_check_minimum_windows_version') def setUp(self, mock_check_minimum_windows_version): super(HyperVDriverTestCase, self).setUp() self.context = 'context' self.driver = driver.HyperVDriver(mock.sentinel.virtapi) self.driver._hostops = mock.MagicMock() self.driver._volumeops = mock.MagicMock() self.driver._vmops = mock.MagicMock() self.driver._snapshotops = mock.MagicMock() self.driver._livemigrationops = mock.MagicMock() self.driver._migrationops = mock.MagicMock() self.driver._rdpconsoleops = mock.MagicMock() self.driver._serialconsoleops = mock.MagicMock() self.driver._imagecache = mock.MagicMock() @mock.patch.object(driver.LOG, 'warning') @mock.patch.object(driver.utilsfactory, 'get_hostutils') def test_check_minimum_windows_version(self, mock_get_hostutils, mock_warning): mock_hostutils = mock_get_hostutils.return_value mock_hostutils.check_min_windows_version.return_value = False self.assertRaises(exception.HypervisorTooOld, self.driver._check_minimum_windows_version) mock_hostutils.check_min_windows_version.side_effect = [True, False] self.driver._check_minimum_windows_version() self.assertTrue(mock_warning.called) def test_public_api_signatures(self): # NOTE(claudiub): wrapped functions do not keep the same signature in # Python 2.7, which causes this test to fail. Instead, we should # compare the public API signatures of the unwrapped methods. for attr in driver.HyperVDriver.__dict__: class_member = getattr(driver.HyperVDriver, attr) if callable(class_member): mocked_method = mock.patch.object( driver.HyperVDriver, attr, safe_utils.get_wrapped_function(class_member)) mocked_method.start() self.addCleanup(mocked_method.stop) self.assertPublicAPISignatures(base_driver.ComputeDriver, driver.HyperVDriver) def test_converted_exception(self): self.driver._vmops.get_info.side_effect = ( os_win_exc.OSWinException) self.assertRaises(exception.NovaException, self.driver.get_info, mock.sentinel.instance) self.driver._vmops.get_info.side_effect = os_win_exc.HyperVException self.assertRaises(exception.NovaException, self.driver.get_info, mock.sentinel.instance) self.driver._vmops.get_info.side_effect = ( os_win_exc.HyperVVMNotFoundException(vm_name='foofoo')) self.assertRaises(exception.InstanceNotFound, self.driver.get_info, mock.sentinel.instance) def test_assert_original_traceback_maintained(self): def bar(self): foo = "foofoo" raise os_win_exc.HyperVVMNotFoundException(vm_name=foo) self.driver._vmops.get_info.side_effect = bar try: self.driver.get_info(mock.sentinel.instance) self.fail("Test expected exception, but it was not raised.") except exception.InstanceNotFound: # exception has been raised as expected. _, _, trace = sys.exc_info() while trace.tb_next: # iterate until the original exception source, bar. trace = trace.tb_next # original frame will contain the 'foo' variable. 
self.assertEqual('foofoo', trace.tb_frame.f_locals['foo']) @mock.patch.object(driver.eventhandler, 'InstanceEventHandler') def test_init_host(self, mock_InstanceEventHandler): self.driver.init_host(mock.sentinel.host) mock_start_console_handlers = ( self.driver._serialconsoleops.start_console_handlers) mock_start_console_handlers.assert_called_once_with() mock_InstanceEventHandler.assert_called_once_with( state_change_callback=self.driver.emit_event) fake_event_handler = mock_InstanceEventHandler.return_value fake_event_handler.start_listener.assert_called_once_with() def test_list_instance_uuids(self): self.driver.list_instance_uuids() self.driver._vmops.list_instance_uuids.assert_called_once_with() def test_list_instances(self): self.driver.list_instances() self.driver._vmops.list_instances.assert_called_once_with() def test_estimate_instance_overhead(self): self.driver.estimate_instance_overhead(mock.sentinel.instance) self.driver._vmops.estimate_instance_overhead.assert_called_once_with( mock.sentinel.instance) def test_spawn(self): self.driver.spawn( mock.sentinel.context, mock.sentinel.instance, mock.sentinel.image_meta, mock.sentinel.injected_files, mock.sentinel.admin_password, mock.sentinel.allocations, mock.sentinel.network_info, mock.sentinel.block_device_info) self.driver._vmops.spawn.assert_called_once_with( mock.sentinel.context, mock.sentinel.instance, mock.sentinel.image_meta, mock.sentinel.injected_files, mock.sentinel.admin_password, mock.sentinel.network_info, mock.sentinel.block_device_info) def test_reboot(self): self.driver.reboot( mock.sentinel.context, mock.sentinel.instance, mock.sentinel.network_info, mock.sentinel.reboot_type, mock.sentinel.block_device_info, mock.sentinel.bad_vol_callback) self.driver._vmops.reboot.assert_called_once_with( mock.sentinel.instance, mock.sentinel.network_info, mock.sentinel.reboot_type) def test_destroy(self): self.driver.destroy( mock.sentinel.context, mock.sentinel.instance, mock.sentinel.network_info, mock.sentinel.block_device_info, mock.sentinel.destroy_disks) self.driver._vmops.destroy.assert_called_once_with( mock.sentinel.instance, mock.sentinel.network_info, mock.sentinel.block_device_info, mock.sentinel.destroy_disks) def test_cleanup(self): self.driver.cleanup( mock.sentinel.context, mock.sentinel.instance, mock.sentinel.network_info, mock.sentinel.block_device_info, mock.sentinel.destroy_disks, mock.sentinel.migrate_data, mock.sentinel.destroy_vifs) self.driver._vmops.unplug_vifs.assert_called_once_with( mock.sentinel.instance, mock.sentinel.network_info) def test_get_info(self): self.driver.get_info(mock.sentinel.instance) self.driver._vmops.get_info.assert_called_once_with( mock.sentinel.instance) def test_attach_volume(self): mock_instance = fake_instance.fake_instance_obj(self.context) self.driver.attach_volume( mock.sentinel.context, mock.sentinel.connection_info, mock_instance, mock.sentinel.mountpoint, mock.sentinel.disk_bus, mock.sentinel.device_type, mock.sentinel.encryption) self.driver._volumeops.attach_volume.assert_called_once_with( mock.sentinel.connection_info, mock_instance.name) def test_detach_volume(self): mock_instance = fake_instance.fake_instance_obj(self.context) self.driver.detach_volume( mock.sentinel.context, mock.sentinel.connection_info, mock_instance, mock.sentinel.mountpoint, mock.sentinel.encryption) self.driver._volumeops.detach_volume.assert_called_once_with( mock.sentinel.connection_info, mock_instance.name) def test_get_volume_connector(self): 
self.driver.get_volume_connector(mock.sentinel.instance) self.driver._volumeops.get_volume_connector.assert_called_once_with() def test_get_available_resource(self): self.driver.get_available_resource(mock.sentinel.nodename) self.driver._hostops.get_available_resource.assert_called_once_with() def test_get_available_nodes(self): response = self.driver.get_available_nodes(mock.sentinel.refresh) self.assertEqual([platform.node()], response) def test_host_power_action(self): self.driver.host_power_action(mock.sentinel.action) self.driver._hostops.host_power_action.assert_called_once_with( mock.sentinel.action) def test_snapshot(self): self.driver.snapshot( mock.sentinel.context, mock.sentinel.instance, mock.sentinel.image_id, mock.sentinel.update_task_state) self.driver._snapshotops.snapshot.assert_called_once_with( mock.sentinel.context, mock.sentinel.instance, mock.sentinel.image_id, mock.sentinel.update_task_state) def test_pause(self): self.driver.pause(mock.sentinel.instance) self.driver._vmops.pause.assert_called_once_with( mock.sentinel.instance) def test_unpause(self): self.driver.unpause(mock.sentinel.instance) self.driver._vmops.unpause.assert_called_once_with( mock.sentinel.instance) def test_suspend(self): self.driver.suspend(mock.sentinel.context, mock.sentinel.instance) self.driver._vmops.suspend.assert_called_once_with( mock.sentinel.instance) def test_resume(self): self.driver.resume( mock.sentinel.context, mock.sentinel.instance, mock.sentinel.network_info, mock.sentinel.block_device_info) self.driver._vmops.resume.assert_called_once_with( mock.sentinel.instance) def test_power_off(self): self.driver.power_off( mock.sentinel.instance, mock.sentinel.timeout, mock.sentinel.retry_interval) self.driver._vmops.power_off.assert_called_once_with( mock.sentinel.instance, mock.sentinel.timeout, mock.sentinel.retry_interval) def test_power_on(self): self.driver.power_on( mock.sentinel.context, mock.sentinel.instance, mock.sentinel.network_info, mock.sentinel.block_device_info) self.driver._vmops.power_on.assert_called_once_with( mock.sentinel.instance, mock.sentinel.block_device_info, mock.sentinel.network_info) def test_resume_state_on_host_boot(self): self.driver.resume_state_on_host_boot( mock.sentinel.context, mock.sentinel.instance, mock.sentinel.network_info, mock.sentinel.block_device_info) self.driver._vmops.resume_state_on_host_boot.assert_called_once_with( mock.sentinel.context, mock.sentinel.instance, mock.sentinel.network_info, mock.sentinel.block_device_info) def test_live_migration(self): self.driver.live_migration( mock.sentinel.context, mock.sentinel.instance, mock.sentinel.dest, mock.sentinel.post_method, mock.sentinel.recover_method, mock.sentinel.block_migration, mock.sentinel.migrate_data) self.driver._livemigrationops.live_migration.assert_called_once_with( mock.sentinel.context, mock.sentinel.instance, mock.sentinel.dest, mock.sentinel.post_method, mock.sentinel.recover_method, mock.sentinel.block_migration, mock.sentinel.migrate_data) @mock.patch.object(driver.HyperVDriver, 'destroy') def test_rollback_live_migration_at_destination(self, mock_destroy): self.driver.rollback_live_migration_at_destination( mock.sentinel.context, mock.sentinel.instance, mock.sentinel.network_info, mock.sentinel.block_device_info, mock.sentinel.destroy_disks, mock.sentinel.migrate_data) mock_destroy.assert_called_once_with( mock.sentinel.context, mock.sentinel.instance, mock.sentinel.network_info, mock.sentinel.block_device_info, destroy_disks=mock.sentinel.destroy_disks) def 
test_pre_live_migration(self): migrate_data = self.driver.pre_live_migration( mock.sentinel.context, mock.sentinel.instance, mock.sentinel.block_device_info, mock.sentinel.network_info, mock.sentinel.disk_info, mock.sentinel.migrate_data) self.assertEqual(mock.sentinel.migrate_data, migrate_data) pre_live_migration = self.driver._livemigrationops.pre_live_migration pre_live_migration.assert_called_once_with( mock.sentinel.context, mock.sentinel.instance, mock.sentinel.block_device_info, mock.sentinel.network_info) def test_post_live_migration(self): self.driver.post_live_migration( mock.sentinel.context, mock.sentinel.instance, mock.sentinel.block_device_info, mock.sentinel.migrate_data) post_live_migration = self.driver._livemigrationops.post_live_migration post_live_migration.assert_called_once_with( mock.sentinel.context, mock.sentinel.instance, mock.sentinel.block_device_info, mock.sentinel.migrate_data) def test_post_live_migration_at_destination(self): self.driver.post_live_migration_at_destination( mock.sentinel.context, mock.sentinel.instance, mock.sentinel.network_info, mock.sentinel.block_migration, mock.sentinel.block_device_info) mtd = self.driver._livemigrationops.post_live_migration_at_destination mtd.assert_called_once_with( mock.sentinel.context, mock.sentinel.instance, mock.sentinel.network_info, mock.sentinel.block_migration) def test_check_can_live_migrate_destination(self): self.driver.check_can_live_migrate_destination( mock.sentinel.context, mock.sentinel.instance, mock.sentinel.src_compute_info, mock.sentinel.dst_compute_info, mock.sentinel.block_migration, mock.sentinel.disk_over_commit) mtd = self.driver._livemigrationops.check_can_live_migrate_destination mtd.assert_called_once_with( mock.sentinel.context, mock.sentinel.instance, mock.sentinel.src_compute_info, mock.sentinel.dst_compute_info, mock.sentinel.block_migration, mock.sentinel.disk_over_commit) def test_cleanup_live_migration_destination_check(self): self.driver.cleanup_live_migration_destination_check( mock.sentinel.context, mock.sentinel.dest_check_data) _livemigrops = self.driver._livemigrationops method = _livemigrops.cleanup_live_migration_destination_check method.assert_called_once_with( mock.sentinel.context, mock.sentinel.dest_check_data) def test_check_can_live_migrate_source(self): self.driver.check_can_live_migrate_source( mock.sentinel.context, mock.sentinel.instance, mock.sentinel.dest_check_data, mock.sentinel.block_device_info) method = self.driver._livemigrationops.check_can_live_migrate_source method.assert_called_once_with( mock.sentinel.context, mock.sentinel.instance, mock.sentinel.dest_check_data) def test_plug_vifs(self): self.driver.plug_vifs( mock.sentinel.instance, mock.sentinel.network_info) self.driver._vmops.plug_vifs.assert_called_once_with( mock.sentinel.instance, mock.sentinel.network_info) def test_unplug_vifs(self): self.driver.unplug_vifs( mock.sentinel.instance, mock.sentinel.network_info) self.driver._vmops.unplug_vifs.assert_called_once_with( mock.sentinel.instance, mock.sentinel.network_info) def test_refresh_instance_security_rules(self): self.assertRaises(NotImplementedError, self.driver.refresh_instance_security_rules, instance=mock.sentinel.instance) def test_migrate_disk_and_power_off(self): self.driver.migrate_disk_and_power_off( mock.sentinel.context, mock.sentinel.instance, mock.sentinel.dest, mock.sentinel.flavor, mock.sentinel.network_info, mock.sentinel.block_device_info, mock.sentinel.timeout, mock.sentinel.retry_interval) migr_power_off = 
self.driver._migrationops.migrate_disk_and_power_off migr_power_off.assert_called_once_with( mock.sentinel.context, mock.sentinel.instance, mock.sentinel.dest, mock.sentinel.flavor, mock.sentinel.network_info, mock.sentinel.block_device_info, mock.sentinel.timeout, mock.sentinel.retry_interval) def test_confirm_migration(self): self.driver.confirm_migration( mock.sentinel.context, mock.sentinel.migration, mock.sentinel.instance, mock.sentinel.network_info) self.driver._migrationops.confirm_migration.assert_called_once_with( mock.sentinel.context, mock.sentinel.migration, mock.sentinel.instance, mock.sentinel.network_info) def test_finish_revert_migration(self): self.driver.finish_revert_migration( mock.sentinel.context, mock.sentinel.instance, mock.sentinel.network_info, mock.sentinel.block_device_info, mock.sentinel.power_on) finish_revert_migr = self.driver._migrationops.finish_revert_migration finish_revert_migr.assert_called_once_with( mock.sentinel.context, mock.sentinel.instance, mock.sentinel.network_info, mock.sentinel.block_device_info, mock.sentinel.power_on) def test_finish_migration(self): self.driver.finish_migration( mock.sentinel.context, mock.sentinel.migration, mock.sentinel.instance, mock.sentinel.disk_info, mock.sentinel.network_info, mock.sentinel.image_meta, mock.sentinel.resize_instance, mock.sentinel.block_device_info, mock.sentinel.power_on) self.driver._migrationops.finish_migration.assert_called_once_with( mock.sentinel.context, mock.sentinel.migration, mock.sentinel.instance, mock.sentinel.disk_info, mock.sentinel.network_info, mock.sentinel.image_meta, mock.sentinel.resize_instance, mock.sentinel.block_device_info, mock.sentinel.power_on) def test_get_host_ip_addr(self): self.driver.get_host_ip_addr() self.driver._hostops.get_host_ip_addr.assert_called_once_with() def test_get_host_uptime(self): self.driver.get_host_uptime() self.driver._hostops.get_host_uptime.assert_called_once_with() def test_get_rdp_console(self): self.driver.get_rdp_console( mock.sentinel.context, mock.sentinel.instance) self.driver._rdpconsoleops.get_rdp_console.assert_called_once_with( mock.sentinel.instance) def test_get_console_output(self): mock_instance = fake_instance.fake_instance_obj(self.context) self.driver.get_console_output(self.context, mock_instance) mock_get_console_output = ( self.driver._serialconsoleops.get_console_output) mock_get_console_output.assert_called_once_with( mock_instance.name) def test_get_serial_console(self): mock_instance = fake_instance.fake_instance_obj(self.context) self.driver.get_serial_console(self.context, mock_instance) mock_get_serial_console = ( self.driver._serialconsoleops.get_serial_console) mock_get_serial_console.assert_called_once_with( mock_instance.name) def test_manage_image_cache(self): self.driver.manage_image_cache(mock.sentinel.context, mock.sentinel.all_instances) self.driver._imagecache.update.assert_called_once_with( mock.sentinel.context, mock.sentinel.all_instances) nova-17.0.1/nova/tests/unit/virt/hyperv/test_block_device_manager.py0000666000175000017500000004765013250073126025673 0ustar zuulzuul00000000000000# Copyright (c) 2016 Cloudbase Solutions Srl # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from os_win import constants as os_win_const from nova import exception from nova.tests.unit.virt.hyperv import test_base from nova.virt.hyperv import block_device_manager from nova.virt.hyperv import constants class BlockDeviceManagerTestCase(test_base.HyperVBaseTestCase): """Unit tests for the Hyper-V BlockDeviceInfoManager class.""" def setUp(self): super(BlockDeviceManagerTestCase, self).setUp() self._bdman = block_device_manager.BlockDeviceInfoManager() def test_get_device_bus_scsi(self): bdm = {'disk_bus': constants.CTRL_TYPE_SCSI, 'drive_addr': 0, 'ctrl_disk_addr': 2} bus = self._bdman._get_device_bus(bdm) self.assertEqual('0:0:0:2', bus.address) def test_get_device_bus_ide(self): bdm = {'disk_bus': constants.CTRL_TYPE_IDE, 'drive_addr': 0, 'ctrl_disk_addr': 1} bus = self._bdman._get_device_bus(bdm) self.assertEqual('0:1', bus.address) @staticmethod def _bdm_mock(**kwargs): bdm = mock.MagicMock(**kwargs) bdm.__contains__.side_effect = ( lambda attr: getattr(bdm, attr, None) is not None) return bdm @mock.patch.object(block_device_manager.objects, 'DiskMetadata') @mock.patch.object(block_device_manager.BlockDeviceInfoManager, '_get_device_bus') @mock.patch.object(block_device_manager.objects.BlockDeviceMappingList, 'get_by_instance_uuid') def test_get_bdm_metadata(self, mock_get_by_inst_uuid, mock_get_device_bus, mock_DiskMetadata): mock_instance = mock.MagicMock() root_disk = {'mount_device': mock.sentinel.dev0} ephemeral = {'device_name': mock.sentinel.dev1} block_device_info = { 'root_disk': root_disk, 'block_device_mapping': [ {'mount_device': mock.sentinel.dev2}, {'mount_device': mock.sentinel.dev3}, ], 'ephemerals': [ephemeral], } bdm = self._bdm_mock(device_name=mock.sentinel.dev0, tag='taggy', volume_id=mock.sentinel.uuid1) eph = self._bdm_mock(device_name=mock.sentinel.dev1, tag='ephy', volume_id=mock.sentinel.uuid2) mock_get_by_inst_uuid.return_value = [ bdm, eph, self._bdm_mock(device_name=mock.sentinel.dev2, tag=None), ] bdm_metadata = self._bdman.get_bdm_metadata(mock.sentinel.context, mock_instance, block_device_info) mock_get_by_inst_uuid.assert_called_once_with(mock.sentinel.context, mock_instance.uuid) mock_get_device_bus.assert_has_calls( [mock.call(root_disk), mock.call(ephemeral)], any_order=True) mock_DiskMetadata.assert_has_calls( [mock.call(bus=mock_get_device_bus.return_value, serial=bdm.volume_id, tags=[bdm.tag]), mock.call(bus=mock_get_device_bus.return_value, serial=eph.volume_id, tags=[eph.tag])], any_order=True) self.assertEqual([mock_DiskMetadata.return_value] * 2, bdm_metadata) @mock.patch('nova.virt.configdrive.required_by') def test_init_controller_slot_counter_gen1_no_configdrive( self, mock_cfg_drive_req): mock_cfg_drive_req.return_value = False slot_map = self._bdman._initialize_controller_slot_counter( mock.sentinel.FAKE_INSTANCE, constants.VM_GEN_1) self.assertEqual(slot_map[constants.CTRL_TYPE_IDE][0], os_win_const.IDE_CONTROLLER_SLOTS_NUMBER) self.assertEqual(slot_map[constants.CTRL_TYPE_IDE][1], os_win_const.IDE_CONTROLLER_SLOTS_NUMBER) self.assertEqual(slot_map[constants.CTRL_TYPE_SCSI][0], os_win_const.SCSI_CONTROLLER_SLOTS_NUMBER) 
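    # For orientation: the slot map asserted on above is a dict keyed by
    # controller type, holding one free-slot counter per controller
    # instance. A rough sketch of the Gen1, no-config-drive shape (the
    # literal values come from the os_win constants the test compares
    # against, so treat them as illustrative rather than authoritative):
    #
    #     slot_map = {
    #         constants.CTRL_TYPE_IDE: [2, 2],    # two IDE controllers
    #         constants.CTRL_TYPE_SCSI: [64],     # one SCSI controller
    #     }
    #
    # Attaching a config drive consumes one IDE slot, which is what the
    # gen1 test below verifies via IDE_CONTROLLER_SLOTS_NUMBER - 1.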
@mock.patch('nova.virt.configdrive.required_by') def test_init_controller_slot_counter_gen1(self, mock_cfg_drive_req): slot_map = self._bdman._initialize_controller_slot_counter( mock.sentinel.FAKE_INSTANCE, constants.VM_GEN_1) self.assertEqual(slot_map[constants.CTRL_TYPE_IDE][1], os_win_const.IDE_CONTROLLER_SLOTS_NUMBER - 1) @mock.patch.object(block_device_manager.configdrive, 'required_by') @mock.patch.object(block_device_manager.BlockDeviceInfoManager, '_initialize_controller_slot_counter') @mock.patch.object(block_device_manager.BlockDeviceInfoManager, '_check_and_update_root_device') @mock.patch.object(block_device_manager.BlockDeviceInfoManager, '_check_and_update_ephemerals') @mock.patch.object(block_device_manager.BlockDeviceInfoManager, '_check_and_update_volumes') def _check_validate_and_update_bdi(self, mock_check_and_update_vol, mock_check_and_update_eph, mock_check_and_update_root, mock_init_ctrl_cntr, mock_required_by, available_slots=1): mock_required_by.return_value = True slot_map = {constants.CTRL_TYPE_SCSI: [available_slots]} mock_init_ctrl_cntr.return_value = slot_map if available_slots: self._bdman.validate_and_update_bdi(mock.sentinel.FAKE_INSTANCE, mock.sentinel.IMAGE_META, constants.VM_GEN_2, mock.sentinel.BLOCK_DEV_INFO) else: self.assertRaises(exception.InvalidBDMFormat, self._bdman.validate_and_update_bdi, mock.sentinel.FAKE_INSTANCE, mock.sentinel.IMAGE_META, constants.VM_GEN_2, mock.sentinel.BLOCK_DEV_INFO) mock_init_ctrl_cntr.assert_called_once_with( mock.sentinel.FAKE_INSTANCE, constants.VM_GEN_2) mock_check_and_update_root.assert_called_once_with( constants.VM_GEN_2, mock.sentinel.IMAGE_META, mock.sentinel.BLOCK_DEV_INFO, slot_map) mock_check_and_update_eph.assert_called_once_with( constants.VM_GEN_2, mock.sentinel.BLOCK_DEV_INFO, slot_map) mock_check_and_update_vol.assert_called_once_with( constants.VM_GEN_2, mock.sentinel.BLOCK_DEV_INFO, slot_map) mock_required_by.assert_called_once_with(mock.sentinel.FAKE_INSTANCE) def test_validate_and_update_bdi(self): self._check_validate_and_update_bdi() def test_validate_and_update_bdi_insufficient_slots(self): self._check_validate_and_update_bdi(available_slots=0) @mock.patch.object(block_device_manager.BlockDeviceInfoManager, '_get_available_controller_slot') @mock.patch.object(block_device_manager.BlockDeviceInfoManager, 'is_boot_from_volume') def _test_check_and_update_root_device(self, mock_is_boot_from_vol, mock_get_avail_ctrl_slot, disk_format, vm_gen=constants.VM_GEN_1, boot_from_volume=False): image_meta = mock.MagicMock(disk_format=disk_format) bdi = {'root_device': '/dev/sda', 'block_device_mapping': [ {'mount_device': '/dev/sda', 'connection_info': mock.sentinel.FAKE_CONN_INFO}]} mock_is_boot_from_vol.return_value = boot_from_volume mock_get_avail_ctrl_slot.return_value = (0, 0) self._bdman._check_and_update_root_device(vm_gen, image_meta, bdi, mock.sentinel.SLOT_MAP) root_disk = bdi['root_disk'] if boot_from_volume: self.assertEqual(root_disk['type'], constants.VOLUME) self.assertIsNone(root_disk['path']) self.assertEqual(root_disk['connection_info'], mock.sentinel.FAKE_CONN_INFO) else: image_type = self._bdman._TYPE_FOR_DISK_FORMAT.get( image_meta.disk_format) self.assertEqual(root_disk['type'], image_type) self.assertIsNone(root_disk['path']) self.assertIsNone(root_disk['connection_info']) disk_bus = (constants.CTRL_TYPE_IDE if vm_gen == constants.VM_GEN_1 else constants.CTRL_TYPE_SCSI) self.assertEqual(root_disk['disk_bus'], disk_bus) self.assertEqual(root_disk['drive_addr'], 0) 
self.assertEqual(root_disk['ctrl_disk_addr'], 0) self.assertEqual(root_disk['boot_index'], 0) self.assertEqual(root_disk['mount_device'], bdi['root_device']) mock_get_avail_ctrl_slot.assert_called_once_with( root_disk['disk_bus'], mock.sentinel.SLOT_MAP) @mock.patch.object(block_device_manager.BlockDeviceInfoManager, 'is_boot_from_volume', return_value=False) def test_check_and_update_root_device_exception(self, mock_is_boot_vol): bdi = {} image_meta = mock.MagicMock(disk_format=mock.sentinel.fake_format) self.assertRaises(exception.InvalidImageFormat, self._bdman._check_and_update_root_device, constants.VM_GEN_1, image_meta, bdi, mock.sentinel.SLOT_MAP) def test_check_and_update_root_device_gen1(self): self._test_check_and_update_root_device(disk_format='vhd') def test_check_and_update_root_device_gen1_vhdx(self): self._test_check_and_update_root_device(disk_format='vhdx') def test_check_and_update_root_device_gen1_iso(self): self._test_check_and_update_root_device(disk_format='iso') def test_check_and_update_root_device_gen2(self): self._test_check_and_update_root_device(disk_format='vhd', vm_gen=constants.VM_GEN_2) def test_check_and_update_root_device_boot_from_vol_gen1(self): self._test_check_and_update_root_device(disk_format='vhd', boot_from_volume=True) def test_check_and_update_root_device_boot_from_vol_gen2(self): self._test_check_and_update_root_device(disk_format='vhd', vm_gen=constants.VM_GEN_2, boot_from_volume=True) @mock.patch('nova.virt.configdrive.required_by', return_value=True) def _test_get_available_controller_slot(self, mock_config_drive_req, bus=constants.CTRL_TYPE_IDE, fail=False): slot_map = self._bdman._initialize_controller_slot_counter( mock.sentinel.FAKE_VM, constants.VM_GEN_1) if fail: slot_map[constants.CTRL_TYPE_IDE][0] = 0 slot_map[constants.CTRL_TYPE_IDE][1] = 0 self.assertRaises(exception.InvalidBDMFormat, self._bdman._get_available_controller_slot, constants.CTRL_TYPE_IDE, slot_map) else: (disk_addr, ctrl_disk_addr) = self._bdman._get_available_controller_slot( bus, slot_map) self.assertEqual(0, disk_addr) self.assertEqual(0, ctrl_disk_addr) def test_get_available_controller_slot(self): self._test_get_available_controller_slot() def test_get_available_controller_slot_scsi_ctrl(self): self._test_get_available_controller_slot(bus=constants.CTRL_TYPE_SCSI) def test_get_available_controller_slot_exception(self): self._test_get_available_controller_slot(fail=True) def test_is_boot_from_volume_true(self): vol = {'mount_device': self._bdman._DEFAULT_ROOT_DEVICE} block_device_info = {'block_device_mapping': [vol]} ret = self._bdman.is_boot_from_volume(block_device_info) self.assertTrue(ret) def test_is_boot_from_volume_false(self): block_device_info = {'block_device_mapping': []} ret = self._bdman.is_boot_from_volume(block_device_info) self.assertFalse(ret) def test_get_root_device_bdm(self): mount_device = '/dev/sda' bdm1 = {'mount_device': None} bdm2 = {'mount_device': mount_device} bdi = {'block_device_mapping': [bdm1, bdm2]} ret = self._bdman._get_root_device_bdm(bdi, mount_device) self.assertEqual(bdm2, ret) @mock.patch.object(block_device_manager.BlockDeviceInfoManager, '_check_and_update_bdm') def test_check_and_update_ephemerals(self, mock_check_and_update_bdm): fake_ephemerals = [mock.sentinel.eph1, mock.sentinel.eph2, mock.sentinel.eph3] fake_bdi = {'ephemerals': fake_ephemerals} expected_calls = [] for eph in fake_ephemerals: expected_calls.append(mock.call(mock.sentinel.fake_slot_map, mock.sentinel.fake_vm_gen, eph)) 
self._bdman._check_and_update_ephemerals(mock.sentinel.fake_vm_gen, fake_bdi, mock.sentinel.fake_slot_map) mock_check_and_update_bdm.assert_has_calls(expected_calls) @mock.patch.object(block_device_manager.BlockDeviceInfoManager, '_check_and_update_bdm') @mock.patch.object(block_device_manager.BlockDeviceInfoManager, '_get_root_device_bdm') def test_check_and_update_volumes(self, mock_get_root_dev_bdm, mock_check_and_update_bdm): fake_vol1 = {'mount_device': '/dev/sda'} fake_vol2 = {'mount_device': '/dev/sdb'} fake_volumes = [fake_vol1, fake_vol2] fake_bdi = {'block_device_mapping': fake_volumes, 'root_disk': {'mount_device': '/dev/sda'}} mock_get_root_dev_bdm.return_value = fake_vol1 self._bdman._check_and_update_volumes(mock.sentinel.fake_vm_gen, fake_bdi, mock.sentinel.fake_slot_map) mock_get_root_dev_bdm.assert_called_once_with(fake_bdi, '/dev/sda') mock_check_and_update_bdm.assert_called_once_with( mock.sentinel.fake_slot_map, mock.sentinel.fake_vm_gen, fake_vol2) self.assertNotIn(fake_vol1, fake_bdi) @mock.patch.object(block_device_manager.BlockDeviceInfoManager, '_get_available_controller_slot') def test_check_and_update_bdm_with_defaults(self, mock_get_ctrl_slot): mock_get_ctrl_slot.return_value = ((mock.sentinel.DRIVE_ADDR, mock.sentinel.CTRL_DISK_ADDR)) bdm = {'device_type': None, 'disk_bus': None, 'boot_index': None} self._bdman._check_and_update_bdm(mock.sentinel.FAKE_SLOT_MAP, constants.VM_GEN_1, bdm) mock_get_ctrl_slot.assert_called_once_with( bdm['disk_bus'], mock.sentinel.FAKE_SLOT_MAP) self.assertEqual(mock.sentinel.DRIVE_ADDR, bdm['drive_addr']) self.assertEqual(mock.sentinel.CTRL_DISK_ADDR, bdm['ctrl_disk_addr']) self.assertEqual('disk', bdm['device_type']) self.assertEqual(self._bdman._DEFAULT_BUS, bdm['disk_bus']) self.assertIsNone(bdm['boot_index']) def test_check_and_update_bdm_exception_device_type(self): bdm = {'device_type': 'cdrom', 'disk_bus': 'IDE'} self.assertRaises(exception.InvalidDiskInfo, self._bdman._check_and_update_bdm, mock.sentinel.FAKE_SLOT_MAP, constants.VM_GEN_1, bdm) def test_check_and_update_bdm_exception_disk_bus(self): bdm = {'device_type': 'disk', 'disk_bus': 'fake_bus'} self.assertRaises(exception.InvalidDiskInfo, self._bdman._check_and_update_bdm, mock.sentinel.FAKE_SLOT_MAP, constants.VM_GEN_1, bdm) def test_sort_by_boot_order(self): original = [{'boot_index': 2}, {'boot_index': None}, {'boot_index': 1}] expected = [original[2], original[0], original[1]] self._bdman._sort_by_boot_order(original) self.assertEqual(expected, original) @mock.patch.object(block_device_manager.BlockDeviceInfoManager, '_get_boot_order_gen1') def test_get_boot_order_gen1_vm(self, mock_get_boot_order): self._bdman.get_boot_order(constants.VM_GEN_1, mock.sentinel.BLOCK_DEV_INFO) mock_get_boot_order.assert_called_once_with( mock.sentinel.BLOCK_DEV_INFO) @mock.patch.object(block_device_manager.BlockDeviceInfoManager, '_get_boot_order_gen2') def test_get_boot_order_gen2_vm(self, mock_get_boot_order): self._bdman.get_boot_order(constants.VM_GEN_2, mock.sentinel.BLOCK_DEV_INFO) mock_get_boot_order.assert_called_once_with( mock.sentinel.BLOCK_DEV_INFO) def test_get_boot_order_gen1_iso(self): fake_bdi = {'root_disk': {'type': 'iso'}} expected = [os_win_const.BOOT_DEVICE_CDROM, os_win_const.BOOT_DEVICE_HARDDISK, os_win_const.BOOT_DEVICE_NETWORK, os_win_const.BOOT_DEVICE_FLOPPY] res = self._bdman._get_boot_order_gen1(fake_bdi) self.assertEqual(expected, res) def test_get_boot_order_gen1_vhd(self): fake_bdi = {'root_disk': {'type': 'vhd'}} expected = 
[os_win_const.BOOT_DEVICE_HARDDISK, os_win_const.BOOT_DEVICE_CDROM, os_win_const.BOOT_DEVICE_NETWORK, os_win_const.BOOT_DEVICE_FLOPPY] res = self._bdman._get_boot_order_gen1(fake_bdi) self.assertEqual(expected, res) @mock.patch('nova.virt.hyperv.volumeops.VolumeOps.get_disk_resource_path') def test_get_boot_order_gen2(self, mock_get_disk_path): fake_root_disk = {'boot_index': 0, 'path': mock.sentinel.FAKE_ROOT_PATH} fake_eph1 = {'boot_index': 2, 'path': mock.sentinel.FAKE_EPH_PATH1} fake_eph2 = {'boot_index': 3, 'path': mock.sentinel.FAKE_EPH_PATH2} fake_bdm = {'boot_index': 1, 'connection_info': mock.sentinel.FAKE_CONN_INFO} fake_bdi = {'root_disk': fake_root_disk, 'ephemerals': [fake_eph1, fake_eph2], 'block_device_mapping': [fake_bdm]} mock_get_disk_path.return_value = fake_bdm['connection_info'] expected_res = [mock.sentinel.FAKE_ROOT_PATH, mock.sentinel.FAKE_CONN_INFO, mock.sentinel.FAKE_EPH_PATH1, mock.sentinel.FAKE_EPH_PATH2] res = self._bdman._get_boot_order_gen2(fake_bdi) self.assertEqual(expected_res, res) nova-17.0.1/nova/tests/unit/virt/hyperv/test_migrationops.py0000666000175000017500000006256313250073126024303 0ustar zuulzuul00000000000000# Copyright 2014 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os import mock from os_win import exceptions as os_win_exc from oslo_utils import units from nova import exception from nova import objects from nova import test from nova.tests.unit import fake_instance from nova.tests.unit.virt.hyperv import test_base from nova.virt.hyperv import constants from nova.virt.hyperv import migrationops class MigrationOpsTestCase(test_base.HyperVBaseTestCase): """Unit tests for the Hyper-V MigrationOps class.""" _FAKE_DISK = 'fake_disk' _FAKE_TIMEOUT = 10 _FAKE_RETRY_INTERVAL = 5 def setUp(self): super(MigrationOpsTestCase, self).setUp() self.context = 'fake-context' self._migrationops = migrationops.MigrationOps() self._migrationops._hostutils = mock.MagicMock() self._migrationops._vmops = mock.MagicMock() self._migrationops._vmutils = mock.MagicMock() self._migrationops._vhdutils = mock.MagicMock() self._migrationops._pathutils = mock.MagicMock() self._migrationops._volumeops = mock.MagicMock() self._migrationops._imagecache = mock.MagicMock() self._migrationops._block_dev_man = mock.MagicMock() def _check_migrate_disk_files(self, shared_storage=False): instance_path = 'fake/instance/path' dest_instance_path = 'remote/instance/path' self._migrationops._pathutils.get_instance_dir.side_effect = ( instance_path, dest_instance_path) get_revert_dir = ( self._migrationops._pathutils.get_instance_migr_revert_dir) check_shared_storage = ( self._migrationops._pathutils.check_dirs_shared_storage) check_shared_storage.return_value = shared_storage self._migrationops._pathutils.exists.return_value = True fake_disk_files = [os.path.join(instance_path, disk_name) for disk_name in ['root.vhd', 'configdrive.vhd', 'configdrive.iso', 'eph0.vhd', 'eph1.vhdx']] expected_get_dir = [mock.call(mock.sentinel.instance_name), 
mock.call(mock.sentinel.instance_name, mock.sentinel.dest_path)] expected_move_calls = [mock.call(instance_path, get_revert_dir.return_value)] self._migrationops._migrate_disk_files( instance_name=mock.sentinel.instance_name, disk_files=fake_disk_files, dest=mock.sentinel.dest_path) self._migrationops._pathutils.exists.assert_called_once_with( dest_instance_path) check_shared_storage.assert_called_once_with( instance_path, dest_instance_path) get_revert_dir.assert_called_with(mock.sentinel.instance_name, remove_dir=True, create_dir=True) if shared_storage: fake_dest_path = '%s_tmp' % instance_path expected_move_calls.append(mock.call(fake_dest_path, instance_path)) self._migrationops._pathutils.rmtree.assert_called_once_with( fake_dest_path) else: fake_dest_path = dest_instance_path self._migrationops._pathutils.makedirs.assert_called_once_with( fake_dest_path) check_remove_dir = self._migrationops._pathutils.check_remove_dir check_remove_dir.assert_called_once_with(fake_dest_path) self._migrationops._pathutils.get_instance_dir.assert_has_calls( expected_get_dir) self._migrationops._pathutils.copy.assert_has_calls( mock.call(fake_disk_file, fake_dest_path) for fake_disk_file in fake_disk_files) self.assertEqual(len(fake_disk_files), self._migrationops._pathutils.copy.call_count) self._migrationops._pathutils.move_folder_files.assert_has_calls( expected_move_calls) def test_migrate_disk_files(self): self._check_migrate_disk_files() def test_migrate_disk_files_same_host(self): self._check_migrate_disk_files(shared_storage=True) @mock.patch.object(migrationops.MigrationOps, '_cleanup_failed_disk_migration') def test_migrate_disk_files_exception(self, mock_cleanup): instance_path = 'fake/instance/path' fake_dest_path = '%s_tmp' % instance_path self._migrationops._pathutils.get_instance_dir.return_value = ( instance_path) get_revert_dir = ( self._migrationops._pathutils.get_instance_migr_revert_dir) self._migrationops._hostutils.get_local_ips.return_value = [ mock.sentinel.dest_path] self._migrationops._pathutils.copy.side_effect = IOError( "Expected exception.") self.assertRaises(IOError, self._migrationops._migrate_disk_files, instance_name=mock.sentinel.instance_name, disk_files=[self._FAKE_DISK], dest=mock.sentinel.dest_path) mock_cleanup.assert_called_once_with(instance_path, get_revert_dir.return_value, fake_dest_path) def test_cleanup_failed_disk_migration(self): self._migrationops._pathutils.exists.return_value = True self._migrationops._cleanup_failed_disk_migration( instance_path=mock.sentinel.instance_path, revert_path=mock.sentinel.revert_path, dest_path=mock.sentinel.dest_path) expected = [mock.call(mock.sentinel.dest_path), mock.call(mock.sentinel.revert_path)] self._migrationops._pathutils.exists.assert_has_calls(expected) move_folder_files = self._migrationops._pathutils.move_folder_files move_folder_files.assert_called_once_with( mock.sentinel.revert_path, mock.sentinel.instance_path) self._migrationops._pathutils.rmtree.assert_has_calls([ mock.call(mock.sentinel.dest_path), mock.call(mock.sentinel.revert_path)]) def test_check_target_flavor(self): mock_instance = fake_instance.fake_instance_obj(self.context) mock_instance.flavor.root_gb = 1 mock_flavor = mock.MagicMock(root_gb=0) self.assertRaises(exception.InstanceFaultRollback, self._migrationops._check_target_flavor, mock_instance, mock_flavor) def test_check_and_attach_config_drive(self): mock_instance = fake_instance.fake_instance_obj( self.context, expected_attrs=['system_metadata']) mock_instance.config_drive = 'True' 
self._migrationops._check_and_attach_config_drive( mock_instance, mock.sentinel.vm_gen) self._migrationops._vmops.attach_config_drive.assert_called_once_with( mock_instance, self._migrationops._pathutils.lookup_configdrive_path.return_value, mock.sentinel.vm_gen) def test_check_and_attach_config_drive_unknown_path(self): instance = fake_instance.fake_instance_obj( self.context, expected_attrs=['system_metadata']) instance.config_drive = 'True' self._migrationops._pathutils.lookup_configdrive_path.return_value = ( None) self.assertRaises(exception.ConfigDriveNotFound, self._migrationops._check_and_attach_config_drive, instance, mock.sentinel.FAKE_VM_GEN) @mock.patch.object(migrationops.MigrationOps, '_migrate_disk_files') @mock.patch.object(migrationops.MigrationOps, '_check_target_flavor') def test_migrate_disk_and_power_off(self, mock_check_flavor, mock_migrate_disk_files): instance = fake_instance.fake_instance_obj(self.context) flavor = mock.MagicMock() network_info = mock.MagicMock() disk_files = [mock.MagicMock()] volume_drives = [mock.MagicMock()] mock_get_vm_st_path = self._migrationops._vmutils.get_vm_storage_paths mock_get_vm_st_path.return_value = (disk_files, volume_drives) self._migrationops.migrate_disk_and_power_off( self.context, instance, mock.sentinel.FAKE_DEST, flavor, network_info, mock.sentinel.bdi, self._FAKE_TIMEOUT, self._FAKE_RETRY_INTERVAL) mock_check_flavor.assert_called_once_with(instance, flavor) self._migrationops._vmops.power_off.assert_called_once_with( instance, self._FAKE_TIMEOUT, self._FAKE_RETRY_INTERVAL) mock_get_vm_st_path.assert_called_once_with(instance.name) mock_migrate_disk_files.assert_called_once_with( instance.name, disk_files, mock.sentinel.FAKE_DEST) self._migrationops._vmops.destroy.assert_called_once_with( instance, network_info, mock.sentinel.bdi, destroy_disks=False) def test_confirm_migration(self): mock_instance = fake_instance.fake_instance_obj(self.context) self._migrationops.confirm_migration( context=self.context, migration=mock.sentinel.migration, instance=mock_instance, network_info=mock.sentinel.network_info) get_instance_migr_revert_dir = ( self._migrationops._pathutils.get_instance_migr_revert_dir) get_instance_migr_revert_dir.assert_called_with(mock_instance.name, remove_dir=True) def test_revert_migration_files(self): instance_path = ( self._migrationops._pathutils.get_instance_dir.return_value) get_revert_dir = ( self._migrationops._pathutils.get_instance_migr_revert_dir) self._migrationops._revert_migration_files( instance_name=mock.sentinel.instance_name) self._migrationops._pathutils.get_instance_dir.assert_called_once_with( mock.sentinel.instance_name, create_dir=False, remove_dir=True) get_revert_dir.assert_called_with(mock.sentinel.instance_name) self._migrationops._pathutils.rename.assert_called_once_with( get_revert_dir.return_value, instance_path) @mock.patch.object(migrationops.MigrationOps, '_check_and_attach_config_drive') @mock.patch.object(migrationops.MigrationOps, '_revert_migration_files') @mock.patch.object(migrationops.MigrationOps, '_check_ephemeral_disks') @mock.patch.object(objects.ImageMeta, "from_instance") def _check_finish_revert_migration(self, mock_image, mock_check_eph_disks, mock_revert_migration_files, mock_check_attach_config_drive, disk_type=constants.DISK): mock_image.return_value = objects.ImageMeta.from_dict({}) mock_instance = fake_instance.fake_instance_obj(self.context) root_device = {'type': disk_type} block_device_info = {'root_disk': root_device, 'ephemerals': []} 
self._migrationops.finish_revert_migration( context=self.context, instance=mock_instance, network_info=mock.sentinel.network_info, block_device_info=block_device_info, power_on=True) mock_revert_migration_files.assert_called_once_with( mock_instance.name) if root_device['type'] == constants.DISK: lookup_root_vhd = ( self._migrationops._pathutils.lookup_root_vhd_path) lookup_root_vhd.assert_called_once_with(mock_instance.name) self.assertEqual(lookup_root_vhd.return_value, root_device['path']) get_image_vm_gen = self._migrationops._vmops.get_image_vm_generation get_image_vm_gen.assert_called_once_with( mock_instance.uuid, test.MatchType(objects.ImageMeta)) self._migrationops._vmops.create_instance.assert_called_once_with( mock_instance, mock.sentinel.network_info, root_device, block_device_info, get_image_vm_gen.return_value, mock_image.return_value) mock_check_attach_config_drive.assert_called_once_with( mock_instance, get_image_vm_gen.return_value) self._migrationops._vmops.set_boot_order.assert_called_once_with( mock_instance.name, get_image_vm_gen.return_value, block_device_info) self._migrationops._vmops.power_on.assert_called_once_with( mock_instance, network_info=mock.sentinel.network_info) def test_finish_revert_migration_boot_from_volume(self): self._check_finish_revert_migration(disk_type=constants.VOLUME) def test_finish_revert_migration_boot_from_disk(self): self._check_finish_revert_migration(disk_type=constants.DISK) @mock.patch.object(objects.ImageMeta, "from_instance") def test_finish_revert_migration_no_root_vhd(self, mock_image): mock_instance = fake_instance.fake_instance_obj(self.context) self._migrationops._pathutils.lookup_root_vhd_path.return_value = None bdi = {'root_disk': {'type': constants.DISK}, 'ephemerals': []} self.assertRaises( exception.DiskNotFound, self._migrationops.finish_revert_migration, self.context, mock_instance, mock.sentinel.network_info, bdi, True) def test_merge_base_vhd(self): fake_diff_vhd_path = 'fake/diff/path' fake_base_vhd_path = 'fake/base/path' base_vhd_copy_path = os.path.join( os.path.dirname(fake_diff_vhd_path), os.path.basename(fake_base_vhd_path)) self._migrationops._merge_base_vhd(diff_vhd_path=fake_diff_vhd_path, base_vhd_path=fake_base_vhd_path) self._migrationops._pathutils.copyfile.assert_called_once_with( fake_base_vhd_path, base_vhd_copy_path) recon_parent_vhd = self._migrationops._vhdutils.reconnect_parent_vhd recon_parent_vhd.assert_called_once_with(fake_diff_vhd_path, base_vhd_copy_path) self._migrationops._vhdutils.merge_vhd.assert_called_once_with( fake_diff_vhd_path) self._migrationops._pathutils.rename.assert_called_once_with( base_vhd_copy_path, fake_diff_vhd_path) def test_merge_base_vhd_exception(self): fake_diff_vhd_path = 'fake/diff/path' fake_base_vhd_path = 'fake/base/path' base_vhd_copy_path = os.path.join( os.path.dirname(fake_diff_vhd_path), os.path.basename(fake_base_vhd_path)) self._migrationops._vhdutils.reconnect_parent_vhd.side_effect = ( os_win_exc.HyperVException) self._migrationops._pathutils.exists.return_value = True self.assertRaises(os_win_exc.HyperVException, self._migrationops._merge_base_vhd, fake_diff_vhd_path, fake_base_vhd_path) self._migrationops._pathutils.exists.assert_called_once_with( base_vhd_copy_path) self._migrationops._pathutils.remove.assert_called_once_with( base_vhd_copy_path) @mock.patch.object(migrationops.MigrationOps, '_resize_vhd') def test_check_resize_vhd(self, mock_resize_vhd): self._migrationops._check_resize_vhd( vhd_path=mock.sentinel.vhd_path, 
vhd_info={'VirtualSize': 1}, new_size=2) mock_resize_vhd.assert_called_once_with(mock.sentinel.vhd_path, 2) def test_check_resize_vhd_exception(self): self.assertRaises(exception.CannotResizeDisk, self._migrationops._check_resize_vhd, mock.sentinel.vhd_path, {'VirtualSize': 1}, 0) @mock.patch.object(migrationops.MigrationOps, '_merge_base_vhd') def test_resize_vhd(self, mock_merge_base_vhd): fake_vhd_path = 'fake/path.vhd' new_vhd_size = 2 self._migrationops._resize_vhd(vhd_path=fake_vhd_path, new_size=new_vhd_size) get_vhd_parent_path = self._migrationops._vhdutils.get_vhd_parent_path get_vhd_parent_path.assert_called_once_with(fake_vhd_path) mock_merge_base_vhd.assert_called_once_with( fake_vhd_path, self._migrationops._vhdutils.get_vhd_parent_path.return_value) self._migrationops._vhdutils.resize_vhd.assert_called_once_with( fake_vhd_path, new_vhd_size) def test_check_base_disk(self): mock_instance = fake_instance.fake_instance_obj(self.context) fake_src_vhd_path = 'fake/src/path' fake_base_vhd = 'fake/vhd' get_cached_image = self._migrationops._imagecache.get_cached_image get_cached_image.return_value = fake_base_vhd self._migrationops._check_base_disk( context=self.context, instance=mock_instance, diff_vhd_path=mock.sentinel.diff_vhd_path, src_base_disk_path=fake_src_vhd_path) get_cached_image.assert_called_once_with(self.context, mock_instance) recon_parent_vhd = self._migrationops._vhdutils.reconnect_parent_vhd recon_parent_vhd.assert_called_once_with( mock.sentinel.diff_vhd_path, fake_base_vhd) @mock.patch.object(migrationops.MigrationOps, '_check_and_attach_config_drive') @mock.patch.object(migrationops.MigrationOps, '_check_base_disk') @mock.patch.object(migrationops.MigrationOps, '_check_resize_vhd') @mock.patch.object(migrationops.MigrationOps, '_check_ephemeral_disks') def _check_finish_migration(self, mock_check_eph_disks, mock_check_resize_vhd, mock_check_base_disk, mock_check_attach_config_drive, disk_type=constants.DISK): mock_instance = fake_instance.fake_instance_obj(self.context) mock_instance.flavor.ephemeral_gb = 1 root_device = {'type': disk_type} block_device_info = {'root_disk': root_device, 'ephemerals': []} lookup_root_vhd = self._migrationops._pathutils.lookup_root_vhd_path get_vhd_info = self._migrationops._vhdutils.get_vhd_info mock_vhd_info = get_vhd_info.return_value expected_check_resize = [] expected_get_info = [] self._migrationops.finish_migration( context=self.context, migration=mock.sentinel.migration, instance=mock_instance, disk_info=mock.sentinel.disk_info, network_info=mock.sentinel.network_info, image_meta=mock.sentinel.image_meta, resize_instance=True, block_device_info=block_device_info) if root_device['type'] == constants.DISK: root_device_path = lookup_root_vhd.return_value lookup_root_vhd.assert_called_with(mock_instance.name) expected_get_info = [mock.call(root_device_path)] mock_vhd_info.get.assert_called_once_with("ParentPath") mock_check_base_disk.assert_called_once_with( self.context, mock_instance, root_device_path, mock_vhd_info.get.return_value) expected_check_resize.append( mock.call(root_device_path, mock_vhd_info, mock_instance.flavor.root_gb * units.Gi)) ephemerals = block_device_info['ephemerals'] mock_check_eph_disks.assert_called_once_with( mock_instance, ephemerals, True) mock_check_resize_vhd.assert_has_calls(expected_check_resize) self._migrationops._vhdutils.get_vhd_info.assert_has_calls( expected_get_info) get_image_vm_gen = self._migrationops._vmops.get_image_vm_generation 
get_image_vm_gen.assert_called_once_with(mock_instance.uuid, mock.sentinel.image_meta) self._migrationops._vmops.create_instance.assert_called_once_with( mock_instance, mock.sentinel.network_info, root_device, block_device_info, get_image_vm_gen.return_value, mock.sentinel.image_meta) mock_check_attach_config_drive.assert_called_once_with( mock_instance, get_image_vm_gen.return_value) self._migrationops._vmops.set_boot_order.assert_called_once_with( mock_instance.name, get_image_vm_gen.return_value, block_device_info) self._migrationops._vmops.power_on.assert_called_once_with( mock_instance, network_info=mock.sentinel.network_info) def test_finish_migration(self): self._check_finish_migration(disk_type=constants.DISK) def test_finish_migration_boot_from_volume(self): self._check_finish_migration(disk_type=constants.VOLUME) def test_finish_migration_no_root(self): mock_instance = fake_instance.fake_instance_obj(self.context) self._migrationops._pathutils.lookup_root_vhd_path.return_value = None bdi = {'root_disk': {'type': constants.DISK}, 'ephemerals': []} self.assertRaises(exception.DiskNotFound, self._migrationops.finish_migration, self.context, mock.sentinel.migration, mock_instance, mock.sentinel.disk_info, mock.sentinel.network_info, mock.sentinel.image_meta, True, bdi, True) @mock.patch.object(migrationops.MigrationOps, '_check_resize_vhd') @mock.patch.object(migrationops.LOG, 'warning') def test_check_ephemeral_disks_multiple_eph_warn(self, mock_warn, mock_check_resize_vhd): mock_instance = fake_instance.fake_instance_obj(self.context) mock_instance.ephemeral_gb = 3 mock_ephemerals = [{'size': 1}, {'size': 1}] self._migrationops._check_ephemeral_disks(mock_instance, mock_ephemerals, True) mock_warn.assert_called_once_with( "Cannot resize multiple ephemeral disks for instance.", instance=mock_instance) def test_check_ephemeral_disks_exception(self): mock_instance = fake_instance.fake_instance_obj(self.context) mock_ephemerals = [dict()] lookup_eph_path = ( self._migrationops._pathutils.lookup_ephemeral_vhd_path) lookup_eph_path.return_value = None self.assertRaises(exception.DiskNotFound, self._migrationops._check_ephemeral_disks, mock_instance, mock_ephemerals) @mock.patch.object(migrationops.MigrationOps, '_check_resize_vhd') def _test_check_ephemeral_disks(self, mock_check_resize_vhd, existing_eph_path=None, new_eph_size=42): mock_instance = fake_instance.fake_instance_obj(self.context) mock_instance.ephemeral_gb = new_eph_size eph = {} mock_ephemerals = [eph] mock_pathutils = self._migrationops._pathutils lookup_eph_path = mock_pathutils.lookup_ephemeral_vhd_path lookup_eph_path.return_value = existing_eph_path mock_get_eph_vhd_path = mock_pathutils.get_ephemeral_vhd_path mock_get_eph_vhd_path.return_value = mock.sentinel.get_path mock_vhdutils = self._migrationops._vhdutils mock_get_vhd_format = mock_vhdutils.get_best_supported_vhd_format mock_get_vhd_format.return_value = mock.sentinel.vhd_format self._migrationops._check_ephemeral_disks(mock_instance, mock_ephemerals, True) self.assertEqual(mock_instance.ephemeral_gb, eph['size']) if not existing_eph_path: mock_vmops = self._migrationops._vmops mock_vmops.create_ephemeral_disk.assert_called_once_with( mock_instance.name, eph) self.assertEqual(mock.sentinel.vhd_format, eph['format']) self.assertEqual(mock.sentinel.get_path, eph['path']) elif new_eph_size: mock_check_resize_vhd.assert_called_once_with( existing_eph_path, self._migrationops._vhdutils.get_vhd_info.return_value, mock_instance.ephemeral_gb * units.Gi) 
self.assertEqual(existing_eph_path, eph['path']) else: self._migrationops._pathutils.remove.assert_called_once_with( existing_eph_path) def test_check_ephemeral_disks_create(self): self._test_check_ephemeral_disks() def test_check_ephemeral_disks_resize(self): self._test_check_ephemeral_disks(existing_eph_path=mock.sentinel.path) def test_check_ephemeral_disks_remove(self): self._test_check_ephemeral_disks(existing_eph_path=mock.sentinel.path, new_eph_size=0) nova-17.0.1/nova/tests/unit/virt/hyperv/test_base.py0000666000175000017500000000260013250073126022474 0ustar zuulzuul00000000000000# Copyright 2014 Cloudbase Solutions Srl # # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from os_win import utilsfactory from six.moves import builtins from nova import test class HyperVBaseTestCase(test.NoDBTestCase): def setUp(self): super(HyperVBaseTestCase, self).setUp() self._mock_wmi = mock.MagicMock() wmi_patcher = mock.patch.object(builtins, 'wmi', create=True, new=self._mock_wmi) platform_patcher = mock.patch('sys.platform', 'win32') utilsfactory_patcher = mock.patch.object(utilsfactory, '_get_class') platform_patcher.start() wmi_patcher.start() utilsfactory_patcher.start() self.addCleanup(wmi_patcher.stop) self.addCleanup(platform_patcher.stop) self.addCleanup(utilsfactory_patcher.stop) nova-17.0.1/nova/tests/unit/virt/hyperv/test_livemigrationops.py0000666000175000017500000002602213250073126025161 0ustar zuulzuul00000000000000# Copyright 2014 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
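# Unit tests for the Hyper-V LiveMigrationOps class. The os-win backed
# dependencies (_livemigrutils, _pathutils, _block_dev_man) are replaced
# with MagicMocks in setUp, so only the orchestration logic is exercised.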
import mock from os_win import exceptions as os_win_exc from oslo_config import cfg from nova import exception from nova.objects import migrate_data as migrate_data_obj from nova.tests.unit import fake_instance from nova.tests.unit.virt.hyperv import test_base from nova.virt.hyperv import livemigrationops from nova.virt.hyperv import serialconsoleops CONF = cfg.CONF class LiveMigrationOpsTestCase(test_base.HyperVBaseTestCase): """Unit tests for the Hyper-V LiveMigrationOps class.""" def setUp(self): super(LiveMigrationOpsTestCase, self).setUp() self.context = 'fake_context' self._livemigrops = livemigrationops.LiveMigrationOps() self._livemigrops._livemigrutils = mock.MagicMock() self._livemigrops._pathutils = mock.MagicMock() self._livemigrops._block_dev_man = mock.MagicMock() self._pathutils = self._livemigrops._pathutils @mock.patch.object(serialconsoleops.SerialConsoleOps, 'stop_console_handler') @mock.patch('nova.virt.hyperv.vmops.VMOps.copy_vm_dvd_disks') def _test_live_migration(self, mock_copy_dvd_disk, mock_stop_console_handler, side_effect=None, shared_storage=False, migrate_data_received=True, migrate_data_version='1.1'): mock_instance = fake_instance.fake_instance_obj(self.context) mock_post = mock.MagicMock() mock_recover = mock.MagicMock() mock_copy_logs = self._livemigrops._pathutils.copy_vm_console_logs fake_dest = mock.sentinel.DESTINATION mock_check_shared_inst_dir = ( self._pathutils.check_remote_instances_dir_shared) mock_check_shared_inst_dir.return_value = shared_storage self._livemigrops._livemigrutils.live_migrate_vm.side_effect = [ side_effect] if migrate_data_received: migrate_data = migrate_data_obj.HyperVLiveMigrateData() if migrate_data_version != '1.0': migrate_data.is_shared_instance_path = shared_storage else: migrate_data = None if side_effect is os_win_exc.HyperVException: self.assertRaises(os_win_exc.HyperVException, self._livemigrops.live_migration, self.context, mock_instance, fake_dest, mock_post, mock_recover, mock.sentinel.block_migr, migrate_data) mock_recover.assert_called_once_with(self.context, mock_instance, fake_dest, migrate_data) else: self._livemigrops.live_migration(context=self.context, instance_ref=mock_instance, dest=fake_dest, post_method=mock_post, recover_method=mock_recover, block_migration=( mock.sentinel.block_migr), migrate_data=migrate_data) post_call_args = mock_post.call_args_list self.assertEqual(1, len(post_call_args)) post_call_args_list = post_call_args[0][0] self.assertEqual((self.context, mock_instance, fake_dest, mock.sentinel.block_migr), post_call_args_list[:-1]) # The last argument, the migrate_data object, should be created # by the callee if not received. 
migrate_data_arg = post_call_args_list[-1] self.assertIsInstance( migrate_data_arg, migrate_data_obj.HyperVLiveMigrateData) self.assertEqual(shared_storage, migrate_data_arg.is_shared_instance_path) if not migrate_data_received or migrate_data_version == '1.0': mock_check_shared_inst_dir.assert_called_once_with(fake_dest) else: self.assertFalse(mock_check_shared_inst_dir.called) mock_stop_console_handler.assert_called_once_with(mock_instance.name) if not shared_storage: mock_copy_logs.assert_called_once_with(mock_instance.name, fake_dest) mock_copy_dvd_disk.assert_called_once_with(mock_instance.name, fake_dest) else: self.assertFalse(mock_copy_logs.called) self.assertFalse(mock_copy_dvd_disk.called) mock_live_migr = self._livemigrops._livemigrutils.live_migrate_vm mock_live_migr.assert_called_once_with( mock_instance.name, fake_dest, migrate_disks=not shared_storage) def test_live_migration(self): self._test_live_migration(migrate_data_received=False) def test_live_migration_old_migrate_data_version(self): self._test_live_migration(migrate_data_version='1.0') def test_live_migration_exception(self): self._test_live_migration(side_effect=os_win_exc.HyperVException) def test_live_migration_shared_storage(self): self._test_live_migration(shared_storage=True) @mock.patch('nova.virt.hyperv.volumeops.VolumeOps.get_disk_path_mapping') @mock.patch('nova.virt.hyperv.imagecache.ImageCache.get_cached_image') @mock.patch('nova.virt.hyperv.volumeops.VolumeOps.connect_volumes') def _test_pre_live_migration(self, mock_initialize_connection, mock_get_cached_image, mock_get_disk_path_mapping, phys_disks_attached=True): mock_instance = fake_instance.fake_instance_obj(self.context) mock_instance.image_ref = "fake_image_ref" mock_get_disk_path_mapping.return_value = ( mock.sentinel.disk_path_mapping if phys_disks_attached else None) bdman = self._livemigrops._block_dev_man mock_is_boot_from_vol = bdman.is_boot_from_volume mock_is_boot_from_vol.return_value = None CONF.set_override('use_cow_images', True) self._livemigrops.pre_live_migration( self.context, mock_instance, block_device_info=mock.sentinel.BLOCK_INFO, network_info=mock.sentinel.NET_INFO) check_config = ( self._livemigrops._livemigrutils.check_live_migration_config) check_config.assert_called_once_with() mock_is_boot_from_vol.assert_called_once_with( mock.sentinel.BLOCK_INFO) mock_get_cached_image.assert_called_once_with(self.context, mock_instance) mock_initialize_connection.assert_called_once_with( mock.sentinel.BLOCK_INFO) mock_get_disk_path_mapping.assert_called_once_with( mock.sentinel.BLOCK_INFO) if phys_disks_attached: livemigrutils = self._livemigrops._livemigrutils livemigrutils.create_planned_vm.assert_called_once_with( mock_instance.name, mock_instance.host, mock.sentinel.disk_path_mapping) def test_pre_live_migration(self): self._test_pre_live_migration() def test_pre_live_migration_invalid_disk_mapping(self): self._test_pre_live_migration(phys_disks_attached=False) @mock.patch('nova.virt.hyperv.volumeops.VolumeOps.disconnect_volumes') def _test_post_live_migration(self, mock_disconnect_volumes, shared_storage=False): migrate_data = migrate_data_obj.HyperVLiveMigrateData( is_shared_instance_path=shared_storage) self._livemigrops.post_live_migration( self.context, mock.sentinel.instance, mock.sentinel.block_device_info, migrate_data) mock_disconnect_volumes.assert_called_once_with( mock.sentinel.block_device_info) mock_get_inst_dir = self._pathutils.get_instance_dir if not shared_storage: mock_get_inst_dir.assert_called_once_with( 
mock.sentinel.instance.name, create_dir=False, remove_dir=True) else: self.assertFalse(mock_get_inst_dir.called) def test_post_block_migration(self): self._test_post_live_migration() def test_post_live_migration_shared_storage(self): self._test_post_live_migration(shared_storage=True) @mock.patch.object(migrate_data_obj, 'HyperVLiveMigrateData') def test_check_can_live_migrate_destination(self, mock_migr_data_cls): mock_instance = fake_instance.fake_instance_obj(self.context) migr_data = self._livemigrops.check_can_live_migrate_destination( mock.sentinel.context, mock_instance, mock.sentinel.src_comp_info, mock.sentinel.dest_comp_info) mock_check_shared_inst_dir = ( self._pathutils.check_remote_instances_dir_shared) mock_check_shared_inst_dir.assert_called_once_with(mock_instance.host) self.assertEqual(mock_migr_data_cls.return_value, migr_data) self.assertEqual(mock_check_shared_inst_dir.return_value, migr_data.is_shared_instance_path) @mock.patch('nova.virt.hyperv.vmops.VMOps.plug_vifs') def test_post_live_migration_at_destination(self, mock_plug_vifs): self._livemigrops.post_live_migration_at_destination( self.context, mock.sentinel.instance, network_info=mock.sentinel.NET_INFO, block_migration=mock.sentinel.BLOCK_INFO) mock_plug_vifs.assert_called_once_with(mock.sentinel.instance, mock.sentinel.NET_INFO) def test_check_can_live_migrate_destination_exception(self): mock_instance = fake_instance.fake_instance_obj(self.context) mock_check = self._pathutils.check_remote_instances_dir_shared mock_check.side_effect = exception.FileNotFound(file_path='C:\\baddir') self.assertRaises( exception.MigrationPreCheckError, self._livemigrops.check_can_live_migrate_destination, mock.sentinel.context, mock_instance, mock.sentinel.src_comp_info, mock.sentinel.dest_comp_info) nova-17.0.1/nova/tests/unit/virt/hyperv/test_eventhandler.py0000666000175000017500000001345013250073126024246 0ustar zuulzuul00000000000000# Copyright 2015 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
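# Unit tests for the Hyper-V InstanceEventHandler, which maps VM power
# state change events to nova lifecycle events and starts or stops the
# serial console workers accordingly.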
import mock from os_win import constants from os_win import exceptions as os_win_exc from os_win import utilsfactory from nova.tests.unit.virt.hyperv import test_base from nova import utils from nova.virt.hyperv import eventhandler class EventHandlerTestCase(test_base.HyperVBaseTestCase): _FAKE_POLLING_INTERVAL = 3 _FAKE_EVENT_CHECK_TIMEFRAME = 15 @mock.patch.object(utilsfactory, 'get_vmutils') def setUp(self, mock_get_vmutils): super(EventHandlerTestCase, self).setUp() self._state_change_callback = mock.Mock() self.flags( power_state_check_timeframe=self._FAKE_EVENT_CHECK_TIMEFRAME, group='hyperv') self.flags( power_state_event_polling_interval=self._FAKE_POLLING_INTERVAL, group='hyperv') self._event_handler = eventhandler.InstanceEventHandler( self._state_change_callback) self._event_handler._serial_console_ops = mock.Mock() @mock.patch.object(eventhandler.InstanceEventHandler, '_get_instance_uuid') @mock.patch.object(eventhandler.InstanceEventHandler, '_emit_event') def _test_event_callback(self, mock_emit_event, mock_get_uuid, missing_uuid=False): mock_get_uuid.return_value = ( mock.sentinel.instance_uuid if not missing_uuid else None) self._event_handler._vmutils.get_vm_power_state.return_value = ( mock.sentinel.power_state) self._event_handler._event_callback(mock.sentinel.instance_name, mock.sentinel.power_state) if not missing_uuid: mock_emit_event.assert_called_once_with( mock.sentinel.instance_name, mock.sentinel.instance_uuid, mock.sentinel.power_state) else: self.assertFalse(mock_emit_event.called) def test_event_callback_uuid_present(self): self._test_event_callback() def test_event_callback_missing_uuid(self): self._test_event_callback(missing_uuid=True) @mock.patch.object(eventhandler.InstanceEventHandler, '_get_virt_event') @mock.patch.object(utils, 'spawn_n') def test_emit_event(self, mock_spawn, mock_get_event): self._event_handler._emit_event(mock.sentinel.instance_name, mock.sentinel.instance_uuid, mock.sentinel.instance_state) virt_event = mock_get_event.return_value mock_spawn.assert_has_calls( [mock.call(self._state_change_callback, virt_event), mock.call(self._event_handler._handle_serial_console_workers, mock.sentinel.instance_name, mock.sentinel.instance_state)]) def test_handle_serial_console_instance_running(self): self._event_handler._handle_serial_console_workers( mock.sentinel.instance_name, constants.HYPERV_VM_STATE_ENABLED) serialops = self._event_handler._serial_console_ops serialops.start_console_handler.assert_called_once_with( mock.sentinel.instance_name) def test_handle_serial_console_instance_stopped(self): self._event_handler._handle_serial_console_workers( mock.sentinel.instance_name, constants.HYPERV_VM_STATE_DISABLED) serialops = self._event_handler._serial_console_ops serialops.stop_console_handler.assert_called_once_with( mock.sentinel.instance_name) def _test_get_instance_uuid(self, instance_found=True, missing_uuid=False): if instance_found: side_effect = (mock.sentinel.instance_uuid if not missing_uuid else None, ) else: side_effect = os_win_exc.HyperVVMNotFoundException( vm_name=mock.sentinel.instance_name) mock_get_uuid = self._event_handler._vmutils.get_instance_uuid mock_get_uuid.side_effect = side_effect instance_uuid = self._event_handler._get_instance_uuid( mock.sentinel.instance_name) expected_uuid = (mock.sentinel.instance_uuid if instance_found and not missing_uuid else None) self.assertEqual(expected_uuid, instance_uuid) def test_get_nova_created_instance_uuid(self): self._test_get_instance_uuid() def 
test_get_deleted_instance_uuid(self): self._test_get_instance_uuid(instance_found=False) def test_get_instance_uuid_missing_notes(self): self._test_get_instance_uuid(missing_uuid=True) @mock.patch('nova.virt.event.LifecycleEvent') def test_get_virt_event(self, mock_lifecycle_event): instance_state = constants.HYPERV_VM_STATE_ENABLED expected_transition = self._event_handler._TRANSITION_MAP[ instance_state] virt_event = self._event_handler._get_virt_event( mock.sentinel.instance_uuid, instance_state) self.assertEqual(mock_lifecycle_event.return_value, virt_event) mock_lifecycle_event.assert_called_once_with( uuid=mock.sentinel.instance_uuid, transition=expected_transition) nova-17.0.1/nova/tests/unit/virt/hyperv/test_imagecache.py0000666000175000017500000003140313250073126023633 0ustar zuulzuul00000000000000# Copyright 2014 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os import fixtures import mock from oslo_config import cfg from oslo_utils import units from nova import exception from nova import objects from nova.tests.unit import fake_instance from nova.tests.unit.objects import test_flavor from nova.tests.unit.virt.hyperv import test_base from nova.tests import uuidsentinel as uuids from nova.virt.hyperv import constants from nova.virt.hyperv import imagecache CONF = cfg.CONF class ImageCacheTestCase(test_base.HyperVBaseTestCase): """Unit tests for the Hyper-V ImageCache class.""" FAKE_FORMAT = 'fake_format' FAKE_IMAGE_REF = 'fake_image_ref' FAKE_VHD_SIZE_GB = 1 def setUp(self): super(ImageCacheTestCase, self).setUp() self.context = 'fake-context' self.instance = fake_instance.fake_instance_obj(self.context) # utilsfactory will check the host OS version via get_hostutils, # in order to return the proper Utils Class, so it must be mocked. 
patched_get_hostutils = mock.patch.object(imagecache.utilsfactory, "get_hostutils") patched_get_vhdutils = mock.patch.object(imagecache.utilsfactory, "get_vhdutils") patched_get_hostutils.start() patched_get_vhdutils.start() self.addCleanup(patched_get_hostutils.stop) self.addCleanup(patched_get_vhdutils.stop) self.imagecache = imagecache.ImageCache() self.imagecache._pathutils = mock.MagicMock() self.imagecache._vhdutils = mock.MagicMock() self.tmpdir = self.useFixture(fixtures.TempDir()).path def _test_get_root_vhd_size_gb(self, old_flavor=True): if old_flavor: mock_flavor = objects.Flavor(**test_flavor.fake_flavor) self.instance.old_flavor = mock_flavor else: self.instance.old_flavor = None return self.imagecache._get_root_vhd_size_gb(self.instance) def test_get_root_vhd_size_gb_old_flavor(self): ret_val = self._test_get_root_vhd_size_gb() self.assertEqual(test_flavor.fake_flavor['root_gb'], ret_val) def test_get_root_vhd_size_gb(self): ret_val = self._test_get_root_vhd_size_gb(old_flavor=False) self.assertEqual(self.instance.flavor.root_gb, ret_val) @mock.patch.object(imagecache.ImageCache, '_get_root_vhd_size_gb') def test_resize_and_cache_vhd_smaller(self, mock_get_vhd_size_gb): self.imagecache._vhdutils.get_vhd_size.return_value = { 'VirtualSize': (self.FAKE_VHD_SIZE_GB + 1) * units.Gi } mock_get_vhd_size_gb.return_value = self.FAKE_VHD_SIZE_GB mock_internal_vhd_size = ( self.imagecache._vhdutils.get_internal_vhd_size_by_file_size) mock_internal_vhd_size.return_value = self.FAKE_VHD_SIZE_GB * units.Gi self.assertRaises(exception.FlavorDiskSmallerThanImage, self.imagecache._resize_and_cache_vhd, mock.sentinel.instance, mock.sentinel.vhd_path) self.imagecache._vhdutils.get_vhd_size.assert_called_once_with( mock.sentinel.vhd_path) mock_get_vhd_size_gb.assert_called_once_with(mock.sentinel.instance) mock_internal_vhd_size.assert_called_once_with( mock.sentinel.vhd_path, self.FAKE_VHD_SIZE_GB * units.Gi) def _prepare_get_cached_image(self, path_exists=False, use_cow=False, rescue_image_id=None): self.instance.image_ref = self.FAKE_IMAGE_REF self.imagecache._pathutils.get_base_vhd_dir.return_value = ( self.tmpdir) self.imagecache._pathutils.exists.return_value = path_exists self.imagecache._vhdutils.get_vhd_format.return_value = ( constants.DISK_FORMAT_VHD) CONF.set_override('use_cow_images', use_cow) image_file_name = rescue_image_id or self.FAKE_IMAGE_REF expected_path = os.path.join(self.tmpdir, image_file_name) expected_vhd_path = "%s.%s" % (expected_path, constants.DISK_FORMAT_VHD.lower()) return (expected_path, expected_vhd_path) @mock.patch.object(imagecache.images, 'fetch') def test_get_cached_image_with_fetch(self, mock_fetch): (expected_path, expected_vhd_path) = self._prepare_get_cached_image(False, False) result = self.imagecache.get_cached_image(self.context, self.instance) self.assertEqual(expected_vhd_path, result) mock_fetch.assert_called_once_with(self.context, self.FAKE_IMAGE_REF, expected_path) self.imagecache._vhdutils.get_vhd_format.assert_called_once_with( expected_path) self.imagecache._pathutils.rename.assert_called_once_with( expected_path, expected_vhd_path) @mock.patch.object(imagecache.images, 'fetch') def test_get_cached_image_with_fetch_exception(self, mock_fetch): (expected_path, expected_vhd_path) = self._prepare_get_cached_image(False, False) # path doesn't exist until fetched. 
self.imagecache._pathutils.exists.side_effect = [False, False, True] mock_fetch.side_effect = exception.InvalidImageRef( image_href=self.FAKE_IMAGE_REF) self.assertRaises(exception.InvalidImageRef, self.imagecache.get_cached_image, self.context, self.instance) self.imagecache._pathutils.remove.assert_called_once_with( expected_path) @mock.patch.object(imagecache.ImageCache, '_resize_and_cache_vhd') def test_get_cached_image_use_cow(self, mock_resize): (expected_path, expected_vhd_path) = self._prepare_get_cached_image(True, True) expected_resized_vhd_path = expected_vhd_path + 'x' mock_resize.return_value = expected_resized_vhd_path result = self.imagecache.get_cached_image(self.context, self.instance) self.assertEqual(expected_resized_vhd_path, result) mock_resize.assert_called_once_with(self.instance, expected_vhd_path) @mock.patch.object(imagecache.images, 'fetch') def test_cache_rescue_image_bigger_than_flavor(self, mock_fetch): fake_rescue_image_id = 'fake_rescue_image_id' self.imagecache._vhdutils.get_vhd_info.return_value = { 'VirtualSize': (self.instance.flavor.root_gb + 1) * units.Gi} (expected_path, expected_vhd_path) = self._prepare_get_cached_image( rescue_image_id=fake_rescue_image_id) self.assertRaises(exception.ImageUnacceptable, self.imagecache.get_cached_image, self.context, self.instance, fake_rescue_image_id) mock_fetch.assert_called_once_with(self.context, fake_rescue_image_id, expected_path) self.imagecache._vhdutils.get_vhd_info.assert_called_once_with( expected_vhd_path) def test_age_and_verify_cached_images(self): fake_images = [mock.sentinel.FAKE_IMG1, mock.sentinel.FAKE_IMG2] fake_used_images = [mock.sentinel.FAKE_IMG1] self.imagecache.originals = fake_images self.imagecache.used_images = fake_used_images self.imagecache._update_image_timestamp = mock.Mock() self.imagecache._remove_if_old_image = mock.Mock() self.imagecache._age_and_verify_cached_images( mock.sentinel.FAKE_CONTEXT, mock.sentinel.all_instances, mock.sentinel.tmpdir) self.imagecache._update_image_timestamp.assert_called_once_with( mock.sentinel.FAKE_IMG1) self.imagecache._remove_if_old_image.assert_called_once_with( mock.sentinel.FAKE_IMG2) @mock.patch.object(imagecache.os, 'utime') @mock.patch.object(imagecache.ImageCache, '_get_image_backing_files') def test_update_image_timestamp(self, mock_get_backing_files, mock_utime): mock_get_backing_files.return_value = [mock.sentinel.backing_file, mock.sentinel.resized_file] self.imagecache._update_image_timestamp(mock.sentinel.image) mock_get_backing_files.assert_called_once_with(mock.sentinel.image) mock_utime.assert_has_calls([ mock.call(mock.sentinel.backing_file, None), mock.call(mock.sentinel.resized_file, None)]) def test_get_image_backing_files(self): image = 'fake-img' self.imagecache.unexplained_images = ['%s_42' % image, 'unexplained-img'] self.imagecache._pathutils.get_image_path.side_effect = [ mock.sentinel.base_file, mock.sentinel.resized_file] backing_files = self.imagecache._get_image_backing_files(image) self.assertEqual([mock.sentinel.base_file, mock.sentinel.resized_file], backing_files) self.imagecache._pathutils.get_image_path.assert_has_calls( [mock.call(image), mock.call('%s_42' % image)]) @mock.patch.object(imagecache.ImageCache, '_get_image_backing_files') def test_remove_if_old_image(self, mock_get_backing_files): mock_get_backing_files.return_value = [mock.sentinel.backing_file, mock.sentinel.resized_file] self.imagecache._pathutils.get_age_of_file.return_value = 3600 self.imagecache._remove_if_old_image(mock.sentinel.image) 
calls = [mock.call(mock.sentinel.backing_file), mock.call(mock.sentinel.resized_file)] self.imagecache._pathutils.get_age_of_file.assert_has_calls(calls) mock_get_backing_files.assert_called_once_with(mock.sentinel.image) def test_remove_old_image(self): fake_img_path = os.path.join(self.tmpdir, self.FAKE_IMAGE_REF) self.imagecache._remove_old_image(fake_img_path) self.imagecache._pathutils.remove.assert_called_once_with( fake_img_path) @mock.patch.object(imagecache.ImageCache, '_age_and_verify_cached_images') @mock.patch.object(imagecache.ImageCache, '_list_base_images') @mock.patch.object(imagecache.ImageCache, '_list_running_instances') def test_update(self, mock_list_instances, mock_list_images, mock_age_cached_images): base_vhd_dir = self.imagecache._pathutils.get_base_vhd_dir.return_value mock_list_instances.return_value = { 'used_images': {mock.sentinel.image: mock.sentinel.instances}} mock_list_images.return_value = { 'originals': [mock.sentinel.original_image], 'unexplained_images': [mock.sentinel.unexplained_image]} self.imagecache.update(mock.sentinel.context, mock.sentinel.all_instances) self.assertEqual([mock.sentinel.image], list(self.imagecache.used_images)) self.assertEqual([mock.sentinel.original_image], self.imagecache.originals) self.assertEqual([mock.sentinel.unexplained_image], self.imagecache.unexplained_images) mock_list_instances.assert_called_once_with( mock.sentinel.context, mock.sentinel.all_instances) mock_list_images.assert_called_once_with(base_vhd_dir) mock_age_cached_images.assert_called_once_with( mock.sentinel.context, mock.sentinel.all_instances, base_vhd_dir) @mock.patch.object(imagecache.os, 'listdir') def test_list_base_images(self, mock_listdir): original_image = uuids.fake unexplained_image = 'just-an-image' ignored_file = 'foo.bar' mock_listdir.return_value = ['%s.VHD' % original_image, '%s.vhdx' % unexplained_image, ignored_file] images = self.imagecache._list_base_images(mock.sentinel.base_dir) self.assertEqual([original_image], images['originals']) self.assertEqual([unexplained_image], images['unexplained_images']) mock_listdir.assert_called_once_with(mock.sentinel.base_dir) nova-17.0.1/nova/tests/unit/virt/hyperv/test_vmops.py0000666000175000017500000025107513250073136022743 0ustar zuulzuul00000000000000# Copyright 2014 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
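# Unit tests for the Hyper-V VMOps class, covering spawn/create_instance
# orchestration, root and ephemeral disk creation, vNUMA configuration,
# secure boot checks and config drive handling.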
import os import ddt from eventlet import timeout as etimeout import mock from os_win import constants as os_win_const from os_win import exceptions as os_win_exc from oslo_concurrency import processutils from oslo_config import cfg from oslo_utils import fileutils from oslo_utils import units from nova.compute import vm_states from nova import exception from nova import objects from nova.objects import fields from nova.objects import flavor as flavor_obj from nova.tests.unit import fake_instance from nova.tests.unit.objects import test_flavor from nova.tests.unit.objects import test_virtual_interface from nova.tests.unit.virt.hyperv import test_base from nova.virt import hardware from nova.virt.hyperv import constants from nova.virt.hyperv import vmops from nova.virt.hyperv import volumeops CONF = cfg.CONF @ddt.ddt class VMOpsTestCase(test_base.HyperVBaseTestCase): """Unit tests for the Hyper-V VMOps class.""" _FAKE_TIMEOUT = 2 FAKE_SIZE = 10 FAKE_DIR = 'fake_dir' FAKE_ROOT_PATH = 'C:\\path\\to\\fake.%s' FAKE_CONFIG_DRIVE_ISO = 'configdrive.iso' FAKE_CONFIG_DRIVE_VHD = 'configdrive.vhd' FAKE_UUID = '4f54fb69-d3a2-45b7-bb9b-b6e6b3d893b3' FAKE_LOG = 'fake_log' _WIN_VERSION_6_3 = '6.3.0' _WIN_VERSION_10 = '10.0' ISO9660 = 'iso9660' _FAKE_CONFIGDRIVE_PATH = 'C:/fake_instance_dir/configdrive.vhd' def setUp(self): super(VMOpsTestCase, self).setUp() self.context = 'fake-context' self._vmops = vmops.VMOps(virtapi=mock.MagicMock()) self._vmops._vmutils = mock.MagicMock() self._vmops._metricsutils = mock.MagicMock() self._vmops._vhdutils = mock.MagicMock() self._vmops._pathutils = mock.MagicMock() self._vmops._hostutils = mock.MagicMock() self._vmops._serial_console_ops = mock.MagicMock() self._vmops._block_dev_man = mock.MagicMock() self._vmops._vif_driver = mock.MagicMock() def test_list_instances(self): mock_instance = mock.MagicMock() self._vmops._vmutils.list_instances.return_value = [mock_instance] response = self._vmops.list_instances() self._vmops._vmutils.list_instances.assert_called_once_with() self.assertEqual(response, [mock_instance]) def test_estimate_instance_overhead(self): instance_info = {'memory_mb': 512} overhead = self._vmops.estimate_instance_overhead(instance_info) self.assertEqual(0, overhead['memory_mb']) self.assertEqual(1, overhead['disk_gb']) instance_info = {'memory_mb': 500} overhead = self._vmops.estimate_instance_overhead(instance_info) self.assertEqual(0, overhead['disk_gb']) def _test_get_info(self, vm_exists): mock_instance = fake_instance.fake_instance_obj(self.context) mock_info = mock.MagicMock(spec_set=dict) fake_info = {'EnabledState': 2, 'MemoryUsage': mock.sentinel.FAKE_MEM_KB, 'NumberOfProcessors': mock.sentinel.FAKE_NUM_CPU, 'UpTime': mock.sentinel.FAKE_CPU_NS} def getitem(key): return fake_info[key] mock_info.__getitem__.side_effect = getitem expected = hardware.InstanceInfo(state=constants.HYPERV_POWER_STATE[2]) self._vmops._vmutils.vm_exists.return_value = vm_exists self._vmops._vmutils.get_vm_summary_info.return_value = mock_info if not vm_exists: self.assertRaises(exception.InstanceNotFound, self._vmops.get_info, mock_instance) else: response = self._vmops.get_info(mock_instance) self._vmops._vmutils.vm_exists.assert_called_once_with( mock_instance.name) self._vmops._vmutils.get_vm_summary_info.assert_called_once_with( mock_instance.name) self.assertEqual(response, expected) def test_get_info(self): self._test_get_info(vm_exists=True) def test_get_info_exception(self): self._test_get_info(vm_exists=False) @mock.patch.object(vmops.VMOps, 
'check_vm_image_type') @mock.patch.object(vmops.VMOps, '_create_root_vhd') def test_create_root_device_type_disk(self, mock_create_root_device, mock_check_vm_image_type): mock_instance = fake_instance.fake_instance_obj(self.context) mock_root_disk_info = {'type': constants.DISK} self._vmops._create_root_device(self.context, mock_instance, mock_root_disk_info, mock.sentinel.VM_GEN_1) mock_create_root_device.assert_called_once_with( self.context, mock_instance) mock_check_vm_image_type.assert_called_once_with( mock_instance.uuid, mock.sentinel.VM_GEN_1, mock_create_root_device.return_value) def _prepare_create_root_device_mocks(self, use_cow_images, vhd_format, vhd_size): mock_instance = fake_instance.fake_instance_obj(self.context) mock_instance.flavor.root_gb = self.FAKE_SIZE self.flags(use_cow_images=use_cow_images) self._vmops._vhdutils.get_vhd_info.return_value = {'VirtualSize': vhd_size * units.Gi} self._vmops._vhdutils.get_vhd_format.return_value = vhd_format root_vhd_internal_size = mock_instance.flavor.root_gb * units.Gi get_size = self._vmops._vhdutils.get_internal_vhd_size_by_file_size get_size.return_value = root_vhd_internal_size self._vmops._pathutils.exists.return_value = True return mock_instance @mock.patch('nova.virt.hyperv.imagecache.ImageCache.get_cached_image') def _test_create_root_vhd_exception(self, mock_get_cached_image, vhd_format): mock_instance = self._prepare_create_root_device_mocks( use_cow_images=False, vhd_format=vhd_format, vhd_size=(self.FAKE_SIZE + 1)) fake_vhd_path = self.FAKE_ROOT_PATH % vhd_format mock_get_cached_image.return_value = fake_vhd_path fake_root_path = self._vmops._pathutils.get_root_vhd_path.return_value self.assertRaises(exception.FlavorDiskSmallerThanImage, self._vmops._create_root_vhd, self.context, mock_instance) self.assertFalse(self._vmops._vhdutils.resize_vhd.called) self._vmops._pathutils.exists.assert_called_once_with( fake_root_path) self._vmops._pathutils.remove.assert_called_once_with( fake_root_path) @mock.patch('nova.virt.hyperv.imagecache.ImageCache.get_cached_image') def _test_create_root_vhd_qcow(self, mock_get_cached_image, vhd_format): mock_instance = self._prepare_create_root_device_mocks( use_cow_images=True, vhd_format=vhd_format, vhd_size=(self.FAKE_SIZE - 1)) fake_vhd_path = self.FAKE_ROOT_PATH % vhd_format mock_get_cached_image.return_value = fake_vhd_path fake_root_path = self._vmops._pathutils.get_root_vhd_path.return_value root_vhd_internal_size = mock_instance.flavor.root_gb * units.Gi get_size = self._vmops._vhdutils.get_internal_vhd_size_by_file_size response = self._vmops._create_root_vhd(context=self.context, instance=mock_instance) self.assertEqual(fake_root_path, response) self._vmops._pathutils.get_root_vhd_path.assert_called_with( mock_instance.name, vhd_format, False) differencing_vhd = self._vmops._vhdutils.create_differencing_vhd differencing_vhd.assert_called_with(fake_root_path, fake_vhd_path) self._vmops._vhdutils.get_vhd_info.assert_called_once_with( fake_vhd_path) if vhd_format is constants.DISK_FORMAT_VHD: self.assertFalse(get_size.called) self.assertFalse(self._vmops._vhdutils.resize_vhd.called) else: get_size.assert_called_once_with(fake_vhd_path, root_vhd_internal_size) self._vmops._vhdutils.resize_vhd.assert_called_once_with( fake_root_path, root_vhd_internal_size, is_file_max_size=False) @mock.patch('nova.virt.hyperv.imagecache.ImageCache.get_cached_image') def _test_create_root_vhd(self, mock_get_cached_image, vhd_format, is_rescue_vhd=False): mock_instance = 
self._prepare_create_root_device_mocks( use_cow_images=False, vhd_format=vhd_format, vhd_size=(self.FAKE_SIZE - 1)) fake_vhd_path = self.FAKE_ROOT_PATH % vhd_format mock_get_cached_image.return_value = fake_vhd_path rescue_image_id = ( mock.sentinel.rescue_image_id if is_rescue_vhd else None) fake_root_path = self._vmops._pathutils.get_root_vhd_path.return_value root_vhd_internal_size = mock_instance.flavor.root_gb * units.Gi get_size = self._vmops._vhdutils.get_internal_vhd_size_by_file_size response = self._vmops._create_root_vhd( context=self.context, instance=mock_instance, rescue_image_id=rescue_image_id) self.assertEqual(fake_root_path, response) mock_get_cached_image.assert_called_once_with(self.context, mock_instance, rescue_image_id) self._vmops._pathutils.get_root_vhd_path.assert_called_with( mock_instance.name, vhd_format, is_rescue_vhd) self._vmops._pathutils.copyfile.assert_called_once_with( fake_vhd_path, fake_root_path) get_size.assert_called_once_with(fake_vhd_path, root_vhd_internal_size) if is_rescue_vhd: self.assertFalse(self._vmops._vhdutils.resize_vhd.called) else: self._vmops._vhdutils.resize_vhd.assert_called_once_with( fake_root_path, root_vhd_internal_size, is_file_max_size=False) def test_create_root_vhd(self): self._test_create_root_vhd(vhd_format=constants.DISK_FORMAT_VHD) def test_create_root_vhdx(self): self._test_create_root_vhd(vhd_format=constants.DISK_FORMAT_VHDX) def test_create_root_vhd_use_cow_images_true(self): self._test_create_root_vhd_qcow(vhd_format=constants.DISK_FORMAT_VHD) def test_create_root_vhdx_use_cow_images_true(self): self._test_create_root_vhd_qcow(vhd_format=constants.DISK_FORMAT_VHDX) def test_create_rescue_vhd(self): self._test_create_root_vhd(vhd_format=constants.DISK_FORMAT_VHD, is_rescue_vhd=True) def test_create_root_vhdx_size_less_than_internal(self): self._test_create_root_vhd_exception( vhd_format=constants.DISK_FORMAT_VHD) def test_is_resize_needed_exception(self): inst = mock.MagicMock() self.assertRaises( exception.FlavorDiskSmallerThanImage, self._vmops._is_resize_needed, mock.sentinel.FAKE_PATH, self.FAKE_SIZE, self.FAKE_SIZE - 1, inst) def test_is_resize_needed_true(self): inst = mock.MagicMock() self.assertTrue(self._vmops._is_resize_needed( mock.sentinel.FAKE_PATH, self.FAKE_SIZE, self.FAKE_SIZE + 1, inst)) def test_is_resize_needed_false(self): inst = mock.MagicMock() self.assertFalse(self._vmops._is_resize_needed( mock.sentinel.FAKE_PATH, self.FAKE_SIZE, self.FAKE_SIZE, inst)) @mock.patch.object(vmops.VMOps, 'create_ephemeral_disk') def test_create_ephemerals(self, mock_create_ephemeral_disk): mock_instance = fake_instance.fake_instance_obj(self.context) fake_ephemerals = [dict(), dict()] self._vmops._vhdutils.get_best_supported_vhd_format.return_value = ( mock.sentinel.format) self._vmops._pathutils.get_ephemeral_vhd_path.side_effect = [ mock.sentinel.FAKE_PATH0, mock.sentinel.FAKE_PATH1] self._vmops._create_ephemerals(mock_instance, fake_ephemerals) self._vmops._pathutils.get_ephemeral_vhd_path.assert_has_calls( [mock.call(mock_instance.name, mock.sentinel.format, 'eph0'), mock.call(mock_instance.name, mock.sentinel.format, 'eph1')]) mock_create_ephemeral_disk.assert_has_calls( [mock.call(mock_instance.name, fake_ephemerals[0]), mock.call(mock_instance.name, fake_ephemerals[1])]) def test_create_ephemeral_disk(self): mock_instance = fake_instance.fake_instance_obj(self.context) mock_ephemeral_info = {'path': 'fake_eph_path', 'size': 10} self._vmops.create_ephemeral_disk(mock_instance.name, mock_ephemeral_info) 
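# The ephemeral disk size is specified in GB and must be converted to
# bytes when the dynamic VHD is created.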
mock_create_dynamic_vhd = self._vmops._vhdutils.create_dynamic_vhd mock_create_dynamic_vhd.assert_called_once_with('fake_eph_path', 10 * units.Gi) @mock.patch.object(vmops.objects, 'PCIDeviceBus') @mock.patch.object(vmops.objects, 'NetworkInterfaceMetadata') @mock.patch.object(vmops.objects.VirtualInterfaceList, 'get_by_instance_uuid') def test_get_vif_metadata(self, mock_get_by_inst_uuid, mock_NetworkInterfaceMetadata, mock_PCIDevBus): mock_vif = mock.MagicMock(tag='taggy') mock_vif.__contains__.side_effect = ( lambda attr: getattr(mock_vif, attr, None) is not None) mock_get_by_inst_uuid.return_value = [mock_vif, mock.MagicMock(tag=None)] vif_metadata = self._vmops._get_vif_metadata(self.context, mock.sentinel.instance_id) mock_get_by_inst_uuid.assert_called_once_with( self.context, mock.sentinel.instance_id) mock_NetworkInterfaceMetadata.assert_called_once_with( mac=mock_vif.address, bus=mock_PCIDevBus.return_value, tags=[mock_vif.tag]) self.assertEqual([mock_NetworkInterfaceMetadata.return_value], vif_metadata) @mock.patch.object(vmops.objects, 'InstanceDeviceMetadata') @mock.patch.object(vmops.VMOps, '_get_vif_metadata') def test_save_device_metadata(self, mock_get_vif_metadata, mock_InstanceDeviceMetadata): mock_instance = mock.MagicMock() mock_get_vif_metadata.return_value = [mock.sentinel.vif_metadata] self._vmops._block_dev_man.get_bdm_metadata.return_value = [ mock.sentinel.bdm_metadata] self._vmops._save_device_metadata(self.context, mock_instance, mock.sentinel.block_device_info) mock_get_vif_metadata.assert_called_once_with(self.context, mock_instance.uuid) self._vmops._block_dev_man.get_bdm_metadata.assert_called_once_with( self.context, mock_instance, mock.sentinel.block_device_info) expected_metadata = [mock.sentinel.vif_metadata, mock.sentinel.bdm_metadata] mock_InstanceDeviceMetadata.assert_called_once_with( devices=expected_metadata) self.assertEqual(mock_InstanceDeviceMetadata.return_value, mock_instance.device_metadata) def test_set_boot_order(self): self._vmops.set_boot_order(mock.sentinel.instance_name, mock.sentinel.vm_gen, mock.sentinel.bdi) mock_get_boot_order = self._vmops._block_dev_man.get_boot_order mock_get_boot_order.assert_called_once_with( mock.sentinel.vm_gen, mock.sentinel.bdi) self._vmops._vmutils.set_boot_order.assert_called_once_with( mock.sentinel.instance_name, mock_get_boot_order.return_value) @mock.patch('nova.virt.hyperv.vmops.VMOps.destroy') @mock.patch('nova.virt.hyperv.vmops.VMOps.power_on') @mock.patch('nova.virt.hyperv.vmops.VMOps.set_boot_order') @mock.patch('nova.virt.hyperv.vmops.VMOps.attach_config_drive') @mock.patch('nova.virt.hyperv.vmops.VMOps._create_config_drive') @mock.patch('nova.virt.configdrive.required_by') @mock.patch('nova.virt.hyperv.vmops.VMOps._save_device_metadata') @mock.patch('nova.virt.hyperv.vmops.VMOps.create_instance') @mock.patch('nova.virt.hyperv.vmops.VMOps.get_image_vm_generation') @mock.patch('nova.virt.hyperv.vmops.VMOps._create_ephemerals') @mock.patch('nova.virt.hyperv.vmops.VMOps._create_root_device') @mock.patch('nova.virt.hyperv.vmops.VMOps._delete_disk_files') @mock.patch('nova.virt.hyperv.vmops.VMOps._get_neutron_events', return_value=[]) def _test_spawn(self, mock_get_neutron_events, mock_delete_disk_files, mock_create_root_device, mock_create_ephemerals, mock_get_image_vm_gen, mock_create_instance, mock_save_device_metadata, mock_configdrive_required, mock_create_config_drive, mock_attach_config_drive, mock_set_boot_order, mock_power_on, mock_destroy, exists, configdrive_required, fail, 
fake_vm_gen=constants.VM_GEN_2): mock_instance = fake_instance.fake_instance_obj(self.context) mock_image_meta = mock.MagicMock() root_device_info = mock.sentinel.ROOT_DEV_INFO mock_get_image_vm_gen.return_value = fake_vm_gen fake_config_drive_path = mock_create_config_drive.return_value block_device_info = {'ephemerals': [], 'root_disk': root_device_info} self._vmops._vmutils.vm_exists.return_value = exists mock_configdrive_required.return_value = configdrive_required mock_create_instance.side_effect = fail if exists: self.assertRaises(exception.InstanceExists, self._vmops.spawn, self.context, mock_instance, mock_image_meta, [mock.sentinel.FILE], mock.sentinel.PASSWORD, mock.sentinel.network_info, block_device_info) elif fail is os_win_exc.HyperVException: self.assertRaises(os_win_exc.HyperVException, self._vmops.spawn, self.context, mock_instance, mock_image_meta, [mock.sentinel.FILE], mock.sentinel.PASSWORD, mock.sentinel.network_info, block_device_info) mock_destroy.assert_called_once_with(mock_instance, mock.sentinel.network_info, block_device_info) else: self._vmops.spawn(self.context, mock_instance, mock_image_meta, [mock.sentinel.FILE], mock.sentinel.PASSWORD, mock.sentinel.network_info, block_device_info) self._vmops._vmutils.vm_exists.assert_called_once_with( mock_instance.name) mock_delete_disk_files.assert_called_once_with( mock_instance.name) mock_validate_and_update_bdi = ( self._vmops._block_dev_man.validate_and_update_bdi) mock_validate_and_update_bdi.assert_called_once_with( mock_instance, mock_image_meta, fake_vm_gen, block_device_info) mock_create_root_device.assert_called_once_with(self.context, mock_instance, root_device_info, fake_vm_gen) mock_create_ephemerals.assert_called_once_with( mock_instance, block_device_info['ephemerals']) mock_get_neutron_events.assert_called_once_with( mock.sentinel.network_info) mock_get_image_vm_gen.assert_called_once_with(mock_instance.uuid, mock_image_meta) mock_create_instance.assert_called_once_with( mock_instance, mock.sentinel.network_info, root_device_info, block_device_info, fake_vm_gen, mock_image_meta) mock_save_device_metadata.assert_called_once_with( self.context, mock_instance, block_device_info) mock_configdrive_required.assert_called_once_with(mock_instance) if configdrive_required: mock_create_config_drive.assert_called_once_with( self.context, mock_instance, [mock.sentinel.FILE], mock.sentinel.PASSWORD, mock.sentinel.network_info) mock_attach_config_drive.assert_called_once_with( mock_instance, fake_config_drive_path, fake_vm_gen) mock_set_boot_order.assert_called_once_with( mock_instance.name, fake_vm_gen, block_device_info) mock_power_on.assert_called_once_with( mock_instance, network_info=mock.sentinel.network_info) def test_spawn(self): self._test_spawn(exists=False, configdrive_required=True, fail=None) def test_spawn_instance_exists(self): self._test_spawn(exists=True, configdrive_required=True, fail=None) def test_spawn_create_instance_exception(self): self._test_spawn(exists=False, configdrive_required=True, fail=os_win_exc.HyperVException) def test_spawn_not_required(self): self._test_spawn(exists=False, configdrive_required=False, fail=None) def test_spawn_no_admin_permissions(self): self._vmops._vmutils.check_admin_permissions.side_effect = ( os_win_exc.HyperVException) self.assertRaises(os_win_exc.HyperVException, self._vmops.spawn, self.context, mock.DEFAULT, mock.DEFAULT, [mock.sentinel.FILE], mock.sentinel.PASSWORD, mock.sentinel.INFO, mock.sentinel.DEV_INFO) @mock.patch.object(vmops.VMOps, 
'_get_neutron_events') def test_wait_vif_plug_events(self, mock_get_events): self._vmops._virtapi.wait_for_instance_event.side_effect = ( etimeout.Timeout) self.flags(vif_plugging_timeout=1) self.flags(vif_plugging_is_fatal=True) def _context_user(): with self._vmops.wait_vif_plug_events(mock.sentinel.instance, mock.sentinel.network_info): pass self.assertRaises(exception.VirtualInterfaceCreateException, _context_user) mock_get_events.assert_called_once_with(mock.sentinel.network_info) self._vmops._virtapi.wait_for_instance_event.assert_called_once_with( mock.sentinel.instance, mock_get_events.return_value, deadline=CONF.vif_plugging_timeout, error_callback=self._vmops._neutron_failed_callback) def test_neutron_failed_callback(self): self.flags(vif_plugging_is_fatal=True) self.assertRaises(exception.VirtualInterfaceCreateException, self._vmops._neutron_failed_callback, mock.sentinel.event_name, mock.sentinel.instance) @mock.patch.object(vmops.utils, 'is_neutron') def test_get_neutron_events(self, mock_is_neutron): network_info = [{'id': mock.sentinel.vif_id1, 'active': True}, {'id': mock.sentinel.vif_id2, 'active': False}, {'id': mock.sentinel.vif_id3}] events = self._vmops._get_neutron_events(network_info) self.assertEqual([('network-vif-plugged', mock.sentinel.vif_id2)], events) mock_is_neutron.assert_called_once_with() @mock.patch.object(vmops.utils, 'is_neutron') def test_get_neutron_events_no_timeout(self, mock_is_neutron): self.flags(vif_plugging_timeout=0) network_info = [{'id': mock.sentinel.vif_id1, 'active': True}] events = self._vmops._get_neutron_events(network_info) self.assertEqual([], events) mock_is_neutron.assert_called_once_with() @mock.patch.object(vmops.VMOps, '_attach_pci_devices') @mock.patch.object(vmops.VMOps, '_requires_secure_boot') @mock.patch.object(vmops.VMOps, '_requires_certificate') @mock.patch.object(vmops.VMOps, '_get_instance_vnuma_config') @mock.patch('nova.virt.hyperv.volumeops.VolumeOps' '.attach_volumes') @mock.patch.object(vmops.VMOps, '_set_instance_disk_qos_specs') @mock.patch.object(vmops.VMOps, '_create_vm_com_port_pipes') @mock.patch.object(vmops.VMOps, '_attach_ephemerals') @mock.patch.object(vmops.VMOps, '_attach_root_device') @mock.patch.object(vmops.VMOps, '_configure_remotefx') def _test_create_instance(self, mock_configure_remotefx, mock_attach_root_device, mock_attach_ephemerals, mock_create_pipes, mock_set_qos_specs, mock_attach_volumes, mock_get_vnuma_config, mock_requires_certificate, mock_requires_secure_boot, mock_attach_pci_devices, enable_instance_metrics, vm_gen=constants.VM_GEN_1, vnuma_enabled=False, pci_requests=None): self.flags(dynamic_memory_ratio=2.0, group='hyperv') self.flags(enable_instance_metrics_collection=enable_instance_metrics, group='hyperv') root_device_info = mock.sentinel.ROOT_DEV_INFO block_device_info = {'ephemerals': [], 'block_device_mapping': []} fake_network_info = {'id': mock.sentinel.ID, 'address': mock.sentinel.ADDRESS} mock_instance = fake_instance.fake_instance_obj(self.context) instance_path = os.path.join(CONF.instances_path, mock_instance.name) mock_requires_secure_boot.return_value = True flavor = flavor_obj.Flavor(**test_flavor.fake_flavor) mock_instance.flavor = flavor instance_pci_requests = objects.InstancePCIRequests( requests=pci_requests or [], instance_uuid=mock_instance.uuid) mock_instance.pci_requests = instance_pci_requests host_shutdown_action = (os_win_const.HOST_SHUTDOWN_ACTION_SHUTDOWN if pci_requests else None) if vnuma_enabled: mock_get_vnuma_config.return_value = ( 
mock.sentinel.mem_per_numa, mock.sentinel.cpus_per_numa) cpus_per_numa = mock.sentinel.cpus_per_numa mem_per_numa = mock.sentinel.mem_per_numa dynamic_memory_ratio = 1.0 else: mock_get_vnuma_config.return_value = (None, None) mem_per_numa, cpus_per_numa = (None, None) dynamic_memory_ratio = CONF.hyperv.dynamic_memory_ratio self._vmops.create_instance(instance=mock_instance, network_info=[fake_network_info], root_device=root_device_info, block_device_info=block_device_info, vm_gen=vm_gen, image_meta=mock.sentinel.image_meta) mock_get_vnuma_config.assert_called_once_with(mock_instance, mock.sentinel.image_meta) self._vmops._vmutils.create_vm.assert_called_once_with( mock_instance.name, vnuma_enabled, vm_gen, instance_path, [mock_instance.uuid]) self._vmops._vmutils.update_vm.assert_called_once_with( mock_instance.name, mock_instance.flavor.memory_mb, mem_per_numa, mock_instance.flavor.vcpus, cpus_per_numa, CONF.hyperv.limit_cpu_features, dynamic_memory_ratio, host_shutdown_action=host_shutdown_action) mock_configure_remotefx.assert_called_once_with(mock_instance, vm_gen) mock_create_scsi_ctrl = self._vmops._vmutils.create_scsi_controller mock_create_scsi_ctrl.assert_called_once_with(mock_instance.name) mock_attach_root_device.assert_called_once_with(mock_instance.name, root_device_info) mock_attach_ephemerals.assert_called_once_with(mock_instance.name, block_device_info['ephemerals']) mock_attach_volumes.assert_called_once_with( block_device_info['block_device_mapping'], mock_instance.name) self._vmops._vmutils.create_nic.assert_called_once_with( mock_instance.name, mock.sentinel.ID, mock.sentinel.ADDRESS) mock_enable = self._vmops._metricsutils.enable_vm_metrics_collection if enable_instance_metrics: mock_enable.assert_called_once_with(mock_instance.name) mock_set_qos_specs.assert_called_once_with(mock_instance) mock_requires_secure_boot.assert_called_once_with( mock_instance, mock.sentinel.image_meta, vm_gen) mock_requires_certificate.assert_called_once_with( mock.sentinel.image_meta) enable_secure_boot = self._vmops._vmutils.enable_secure_boot enable_secure_boot.assert_called_once_with( mock_instance.name, msft_ca_required=mock_requires_certificate.return_value) mock_attach_pci_devices.assert_called_once_with(mock_instance) def test_create_instance(self): self._test_create_instance(enable_instance_metrics=True) def test_create_instance_enable_instance_metrics_false(self): self._test_create_instance(enable_instance_metrics=False) def test_create_instance_gen2(self): self._test_create_instance(enable_instance_metrics=False, vm_gen=constants.VM_GEN_2) def test_create_instance_vnuma_enabled(self): self._test_create_instance(enable_instance_metrics=False, vnuma_enabled=True) def test_create_instance_pci_requested(self): vendor_id = 'fake_vendor_id' product_id = 'fake_product_id' spec = {'vendor_id': vendor_id, 'product_id': product_id} request = objects.InstancePCIRequest(count=1, spec=[spec]) self._test_create_instance(enable_instance_metrics=False, pci_requests=[request]) def test_attach_pci_devices(self): mock_instance = fake_instance.fake_instance_obj(self.context) vendor_id = 'fake_vendor_id' product_id = 'fake_product_id' spec = {'vendor_id': vendor_id, 'product_id': product_id} request = objects.InstancePCIRequest(count=2, spec=[spec]) instance_pci_requests = objects.InstancePCIRequests( requests=[request], instance_uuid=mock_instance.uuid) mock_instance.pci_requests = instance_pci_requests self._vmops._attach_pci_devices(mock_instance) 
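# add_pci_device is expected to be called once per unit in the PCI
# request count (two calls for count=2).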
self._vmops._vmutils.add_pci_device.assert_has_calls( [mock.call(mock_instance.name, vendor_id, product_id)] * 2) @mock.patch.object(vmops.hardware, 'numa_get_constraints') def _check_get_instance_vnuma_config_exception(self, mock_get_numa, numa_cells): flavor = {'extra_specs': {}} mock_instance = mock.MagicMock(flavor=flavor) image_meta = mock.MagicMock(properties={}) numa_topology = objects.InstanceNUMATopology(cells=numa_cells) mock_get_numa.return_value = numa_topology self.assertRaises(exception.InstanceUnacceptable, self._vmops._get_instance_vnuma_config, mock_instance, image_meta) def test_get_instance_vnuma_config_bad_cpuset(self): cell1 = objects.InstanceNUMACell(cpuset=set([0]), memory=1024) cell2 = objects.InstanceNUMACell(cpuset=set([1, 2]), memory=1024) self._check_get_instance_vnuma_config_exception( numa_cells=[cell1, cell2]) def test_get_instance_vnuma_config_bad_memory(self): cell1 = objects.InstanceNUMACell(cpuset=set([0]), memory=1024) cell2 = objects.InstanceNUMACell(cpuset=set([1]), memory=2048) self._check_get_instance_vnuma_config_exception( numa_cells=[cell1, cell2]) def test_get_instance_vnuma_config_cpu_pinning(self): cell1 = objects.InstanceNUMACell( cpuset=set([0]), memory=1024, cpu_policy=fields.CPUAllocationPolicy.DEDICATED) cell2 = objects.InstanceNUMACell( cpuset=set([1]), memory=1024, cpu_policy=fields.CPUAllocationPolicy.DEDICATED) self._check_get_instance_vnuma_config_exception( numa_cells=[cell1, cell2]) @mock.patch.object(vmops.hardware, 'numa_get_constraints') def _check_get_instance_vnuma_config( self, mock_get_numa, numa_topology=None, expected_mem_per_numa=None, expected_cpus_per_numa=None): mock_instance = mock.MagicMock() image_meta = mock.MagicMock() mock_get_numa.return_value = numa_topology result_memory_per_numa, result_cpus_per_numa = ( self._vmops._get_instance_vnuma_config(mock_instance, image_meta)) self.assertEqual(expected_cpus_per_numa, result_cpus_per_numa) self.assertEqual(expected_mem_per_numa, result_memory_per_numa) def test_get_instance_vnuma_config(self): cell1 = objects.InstanceNUMACell(cpuset=set([0]), memory=2048) cell2 = objects.InstanceNUMACell(cpuset=set([1]), memory=2048) numa_topology = objects.InstanceNUMATopology(cells=[cell1, cell2]) self._check_get_instance_vnuma_config(numa_topology=numa_topology, expected_cpus_per_numa=1, expected_mem_per_numa=2048) def test_get_instance_vnuma_config_no_topology(self): self._check_get_instance_vnuma_config() @mock.patch.object(vmops.volumeops.VolumeOps, 'attach_volume') def test_attach_root_device_volume(self, mock_attach_volume): mock_instance = fake_instance.fake_instance_obj(self.context) root_device_info = {'type': constants.VOLUME, 'connection_info': mock.sentinel.CONN_INFO, 'disk_bus': constants.CTRL_TYPE_IDE} self._vmops._attach_root_device(mock_instance.name, root_device_info) mock_attach_volume.assert_called_once_with( root_device_info['connection_info'], mock_instance.name, disk_bus=root_device_info['disk_bus']) @mock.patch.object(vmops.VMOps, '_attach_drive') def test_attach_root_device_disk(self, mock_attach_drive): mock_instance = fake_instance.fake_instance_obj(self.context) root_device_info = {'type': constants.DISK, 'boot_index': 0, 'disk_bus': constants.CTRL_TYPE_IDE, 'path': 'fake_path', 'drive_addr': 0, 'ctrl_disk_addr': 1} self._vmops._attach_root_device(mock_instance.name, root_device_info) mock_attach_drive.assert_called_once_with( mock_instance.name, root_device_info['path'], root_device_info['drive_addr'], root_device_info['ctrl_disk_addr'], 
    @mock.patch.object(vmops.VMOps, '_attach_drive')
    def test_attach_ephemerals(self, mock_attach_drive):
        mock_instance = fake_instance.fake_instance_obj(self.context)
        ephemerals = [{'path': mock.sentinel.PATH1,
                       'boot_index': 1,
                       'disk_bus': constants.CTRL_TYPE_IDE,
                       'device_type': 'disk',
                       'drive_addr': 0,
                       'ctrl_disk_addr': 1},
                      {'path': mock.sentinel.PATH2,
                       'boot_index': 2,
                       'disk_bus': constants.CTRL_TYPE_SCSI,
                       'device_type': 'disk',
                       'drive_addr': 0,
                       'ctrl_disk_addr': 0},
                      {'path': None}]

        self._vmops._attach_ephemerals(mock_instance.name, ephemerals)

        mock_attach_drive.assert_has_calls(
            [mock.call(mock_instance.name, mock.sentinel.PATH1, 0, 1,
                       constants.CTRL_TYPE_IDE, constants.DISK),
             mock.call(mock_instance.name, mock.sentinel.PATH2, 0, 0,
                       constants.CTRL_TYPE_SCSI, constants.DISK)
             ])

    def test_attach_drive_vm_to_scsi(self):
        self._vmops._attach_drive(
            mock.sentinel.FAKE_VM_NAME, mock.sentinel.FAKE_PATH,
            mock.sentinel.FAKE_DRIVE_ADDR, mock.sentinel.FAKE_CTRL_DISK_ADDR,
            constants.CTRL_TYPE_SCSI)

        self._vmops._vmutils.attach_scsi_drive.assert_called_once_with(
            mock.sentinel.FAKE_VM_NAME, mock.sentinel.FAKE_PATH,
            constants.DISK)

    def test_attach_drive_vm_to_ide(self):
        self._vmops._attach_drive(
            mock.sentinel.FAKE_VM_NAME, mock.sentinel.FAKE_PATH,
            mock.sentinel.FAKE_DRIVE_ADDR, mock.sentinel.FAKE_CTRL_DISK_ADDR,
            constants.CTRL_TYPE_IDE)

        self._vmops._vmutils.attach_ide_drive.assert_called_once_with(
            mock.sentinel.FAKE_VM_NAME, mock.sentinel.FAKE_PATH,
            mock.sentinel.FAKE_DRIVE_ADDR, mock.sentinel.FAKE_CTRL_DISK_ADDR,
            constants.DISK)

    def test_get_image_vm_generation_default(self):
        image_meta = objects.ImageMeta.from_dict({"properties": {}})
        self._vmops._hostutils.get_default_vm_generation.return_value = (
            constants.IMAGE_PROP_VM_GEN_1)
        self._vmops._hostutils.get_supported_vm_types.return_value = [
            constants.IMAGE_PROP_VM_GEN_1, constants.IMAGE_PROP_VM_GEN_2]

        response = self._vmops.get_image_vm_generation(
            mock.sentinel.instance_id, image_meta)

        self.assertEqual(constants.VM_GEN_1, response)

    def test_get_image_vm_generation_gen2(self):
        image_meta = objects.ImageMeta.from_dict(
            {"properties":
             {"hw_machine_type": constants.IMAGE_PROP_VM_GEN_2}})
        self._vmops._hostutils.get_supported_vm_types.return_value = [
            constants.IMAGE_PROP_VM_GEN_1, constants.IMAGE_PROP_VM_GEN_2]

        response = self._vmops.get_image_vm_generation(
            mock.sentinel.instance_id, image_meta)

        self.assertEqual(constants.VM_GEN_2, response)

    def test_check_vm_image_type_exception(self):
        self._vmops._vhdutils.get_vhd_format.return_value = (
            constants.DISK_FORMAT_VHD)

        self.assertRaises(exception.InstanceUnacceptable,
                          self._vmops.check_vm_image_type,
                          mock.sentinel.instance_id, constants.VM_GEN_2,
                          mock.sentinel.FAKE_PATH)

    def _check_requires_certificate(self, os_type):
        mock_image_meta = mock.MagicMock()
        mock_image_meta.properties = {'os_type': os_type}

        expected_result = os_type == fields.OSType.LINUX
        result = self._vmops._requires_certificate(mock_image_meta)
        self.assertEqual(expected_result, result)

    def test_requires_certificate_windows(self):
        self._check_requires_certificate(os_type=fields.OSType.WINDOWS)

    def test_requires_certificate_linux(self):
        self._check_requires_certificate(os_type=fields.OSType.LINUX)
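    # As the two tests above pin down, _requires_certificate is expected to
    # report that the Microsoft UEFI CA certificate is needed exactly when
    # the guest os_type is Linux: secure-booting non-Windows guests relies
    # on that certificate, while Windows guests boot with the default
    # template (see the msft_ca_required kwarg asserted in
    # _test_create_instance above).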
    def _check_requires_secure_boot(
            self, image_prop_os_type=fields.OSType.LINUX,
            image_prop_secure_boot=fields.SecureBoot.REQUIRED,
            flavor_secure_boot=fields.SecureBoot.REQUIRED,
            vm_gen=constants.VM_GEN_2, expected_exception=True):
        mock_instance = fake_instance.fake_instance_obj(self.context)
        if flavor_secure_boot:
            mock_instance.flavor.extra_specs = {
                constants.FLAVOR_SPEC_SECURE_BOOT: flavor_secure_boot}
        mock_image_meta = mock.MagicMock()
        mock_image_meta.properties = {'os_type': image_prop_os_type}
        if image_prop_secure_boot:
            mock_image_meta.properties['os_secure_boot'] = (
                image_prop_secure_boot)

        if expected_exception:
            self.assertRaises(exception.InstanceUnacceptable,
                              self._vmops._requires_secure_boot,
                              mock_instance, mock_image_meta, vm_gen)
        else:
            result = self._vmops._requires_secure_boot(mock_instance,
                                                       mock_image_meta,
                                                       vm_gen)

            requires_sb = fields.SecureBoot.REQUIRED in [
                flavor_secure_boot, image_prop_secure_boot]
            self.assertEqual(requires_sb, result)

    def test_requires_secure_boot_ok(self):
        self._check_requires_secure_boot(
            expected_exception=False)

    def test_requires_secure_boot_image_img_prop_none(self):
        self._check_requires_secure_boot(
            image_prop_secure_boot=None,
            expected_exception=False)

    def test_requires_secure_boot_image_extra_spec_none(self):
        self._check_requires_secure_boot(
            flavor_secure_boot=None,
            expected_exception=False)

    def test_requires_secure_boot_flavor_no_os_type(self):
        self._check_requires_secure_boot(
            image_prop_os_type=None)

    def test_requires_secure_boot_flavor_no_os_type_no_exc(self):
        self._check_requires_secure_boot(
            image_prop_os_type=None,
            image_prop_secure_boot=fields.SecureBoot.DISABLED,
            flavor_secure_boot=fields.SecureBoot.DISABLED,
            expected_exception=False)

    def test_requires_secure_boot_flavor_disabled(self):
        self._check_requires_secure_boot(
            flavor_secure_boot=fields.SecureBoot.DISABLED)

    def test_requires_secure_boot_image_disabled(self):
        self._check_requires_secure_boot(
            image_prop_secure_boot=fields.SecureBoot.DISABLED)

    def test_requires_secure_boot_generation_1(self):
        self._check_requires_secure_boot(vm_gen=constants.VM_GEN_1)
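    # The config drive helper below covers the three outcomes of
    # _create_config_drive: an unsupported format raises
    # ConfigDriveUnsupportedFormat (only iso9660 is accepted), a failing
    # qemu-img invocation propagates ProcessExecutionError, and on success
    # the ISO is either attached as-is (config_drive_cdrom=True) or
    # converted to a VHD via "qemu-img convert -f raw -O vpc", with the
    # intermediate ISO removed afterwards.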
    @mock.patch('nova.api.metadata.base.InstanceMetadata')
    @mock.patch('nova.virt.configdrive.ConfigDriveBuilder')
    @mock.patch('nova.utils.execute')
    def _test_create_config_drive(self, mock_execute,
                                  mock_ConfigDriveBuilder,
                                  mock_InstanceMetadata,
                                  config_drive_format,
                                  config_drive_cdrom,
                                  side_effect, rescue=False):
        mock_instance = fake_instance.fake_instance_obj(self.context)
        self.flags(config_drive_format=config_drive_format)
        self.flags(config_drive_cdrom=config_drive_cdrom, group='hyperv')
        self.flags(config_drive_inject_password=True, group='hyperv')
        mock_ConfigDriveBuilder().__enter__().make_drive.side_effect = [
            side_effect]

        path_iso = os.path.join(self.FAKE_DIR, self.FAKE_CONFIG_DRIVE_ISO)
        path_vhd = os.path.join(self.FAKE_DIR, self.FAKE_CONFIG_DRIVE_VHD)

        def fake_get_configdrive_path(instance_name, disk_format,
                                      rescue=False):
            return (path_iso
                    if disk_format == constants.DVD_FORMAT else path_vhd)

        mock_get_configdrive_path = self._vmops._pathutils.get_configdrive_path
        mock_get_configdrive_path.side_effect = fake_get_configdrive_path
        expected_get_configdrive_path_calls = [
            mock.call(mock_instance.name, constants.DVD_FORMAT,
                      rescue=rescue)]
        if not config_drive_cdrom:
            expected_call = mock.call(mock_instance.name,
                                      constants.DISK_FORMAT_VHD,
                                      rescue=rescue)
            expected_get_configdrive_path_calls.append(expected_call)

        if config_drive_format != self.ISO9660:
            self.assertRaises(exception.ConfigDriveUnsupportedFormat,
                              self._vmops._create_config_drive,
                              self.context, mock_instance,
                              [mock.sentinel.FILE],
                              mock.sentinel.PASSWORD,
                              mock.sentinel.NET_INFO,
                              rescue)
        elif side_effect is processutils.ProcessExecutionError:
            self.assertRaises(processutils.ProcessExecutionError,
                              self._vmops._create_config_drive,
                              self.context, mock_instance,
                              [mock.sentinel.FILE],
                              mock.sentinel.PASSWORD,
                              mock.sentinel.NET_INFO,
                              rescue)
        else:
            path = self._vmops._create_config_drive(self.context,
                                                    mock_instance,
                                                    [mock.sentinel.FILE],
                                                    mock.sentinel.PASSWORD,
                                                    mock.sentinel.NET_INFO,
                                                    rescue)

            mock_InstanceMetadata.assert_called_once_with(
                mock_instance, content=[mock.sentinel.FILE],
                extra_md={'admin_pass': mock.sentinel.PASSWORD},
                network_info=mock.sentinel.NET_INFO,
                request_context=self.context)
            mock_get_configdrive_path.assert_has_calls(
                expected_get_configdrive_path_calls)
            mock_ConfigDriveBuilder.assert_called_with(
                instance_md=mock_InstanceMetadata())
            mock_make_drive = mock_ConfigDriveBuilder().__enter__().make_drive
            mock_make_drive.assert_called_once_with(path_iso)

            if not CONF.hyperv.config_drive_cdrom:
                expected = path_vhd
                mock_execute.assert_called_once_with(
                    CONF.hyperv.qemu_img_cmd,
                    'convert', '-f', 'raw', '-O', 'vpc',
                    path_iso, path_vhd, attempts=1)
                self._vmops._pathutils.remove.assert_called_once_with(
                    os.path.join(self.FAKE_DIR, self.FAKE_CONFIG_DRIVE_ISO))
            else:
                expected = path_iso

            self.assertEqual(expected, path)

    def test_create_config_drive_cdrom(self):
        self._test_create_config_drive(config_drive_format=self.ISO9660,
                                       config_drive_cdrom=True,
                                       side_effect=None)

    def test_create_config_drive_vhd(self):
        self._test_create_config_drive(config_drive_format=self.ISO9660,
                                       config_drive_cdrom=False,
                                       side_effect=None)

    def test_create_rescue_config_drive_vhd(self):
        self._test_create_config_drive(config_drive_format=self.ISO9660,
                                       config_drive_cdrom=False,
                                       side_effect=None,
                                       rescue=True)

    def test_create_config_drive_execution_error(self):
        self._test_create_config_drive(
            config_drive_format=self.ISO9660,
            config_drive_cdrom=False,
            side_effect=processutils.ProcessExecutionError)

    def test_attach_config_drive_exception(self):
        instance = fake_instance.fake_instance_obj(self.context)
        self.assertRaises(exception.InvalidDiskFormat,
                          self._vmops.attach_config_drive,
                          instance, 'C:/fake_instance_dir/configdrive.xxx',
                          constants.VM_GEN_1)

    @mock.patch.object(vmops.VMOps, '_attach_drive')
    def test_attach_config_drive(self, mock_attach_drive):
        instance = fake_instance.fake_instance_obj(self.context)
        self._vmops.attach_config_drive(instance,
                                        self._FAKE_CONFIGDRIVE_PATH,
                                        constants.VM_GEN_1)
        mock_attach_drive.assert_called_once_with(
            instance.name, self._FAKE_CONFIGDRIVE_PATH,
            1, 0, constants.CTRL_TYPE_IDE, constants.DISK)

    @mock.patch.object(vmops.VMOps, '_attach_drive')
    def test_attach_config_drive_gen2(self, mock_attach_drive):
        instance = fake_instance.fake_instance_obj(self.context)
        self._vmops.attach_config_drive(instance,
                                        self._FAKE_CONFIGDRIVE_PATH,
                                        constants.VM_GEN_2)
        mock_attach_drive.assert_called_once_with(
            instance.name, self._FAKE_CONFIGDRIVE_PATH,
            1, 0, constants.CTRL_TYPE_SCSI, constants.DISK)

    def test_detach_config_drive(self):
        is_rescue_configdrive = True
        mock_lookup_configdrive = (
            self._vmops._pathutils.lookup_configdrive_path)
        mock_lookup_configdrive.return_value = mock.sentinel.configdrive_path

        self._vmops._detach_config_drive(mock.sentinel.instance_name,
                                         rescue=is_rescue_configdrive,
                                         delete=True)

        mock_lookup_configdrive.assert_called_once_with(
            mock.sentinel.instance_name, rescue=is_rescue_configdrive)
        self._vmops._vmutils.detach_vm_disk.assert_called_once_with(
            mock.sentinel.instance_name, mock.sentinel.configdrive_path,
            is_physical=False)
        self._vmops._pathutils.remove.assert_called_once_with(
            mock.sentinel.configdrive_path)
    def test_delete_disk_files(self):
        mock_instance = fake_instance.fake_instance_obj(self.context)

        self._vmops._delete_disk_files(mock_instance.name)

        stop_console_handler = (
            self._vmops._serial_console_ops.stop_console_handler_unsync)
        stop_console_handler.assert_called_once_with(mock_instance.name)
        self._vmops._pathutils.get_instance_dir.assert_called_once_with(
            mock_instance.name, create_dir=False, remove_dir=True)

    @ddt.data(True, False)
    @mock.patch('nova.virt.hyperv.volumeops.VolumeOps.disconnect_volumes')
    @mock.patch('nova.virt.hyperv.vmops.VMOps._delete_disk_files')
    @mock.patch('nova.virt.hyperv.vmops.VMOps.power_off')
    @mock.patch('nova.virt.hyperv.vmops.VMOps.unplug_vifs')
    def test_destroy(self, vm_exists, mock_unplug_vifs, mock_power_off,
                     mock_delete_disk_files, mock_disconnect_volumes):
        mock_instance = fake_instance.fake_instance_obj(self.context)
        self._vmops._vmutils.vm_exists.return_value = vm_exists

        self._vmops.destroy(instance=mock_instance,
                            block_device_info=mock.sentinel.FAKE_BD_INFO,
                            network_info=mock.sentinel.fake_network_info)

        if vm_exists:
            self._vmops._vmutils.stop_vm_jobs.assert_called_once_with(
                mock_instance.name)
            mock_power_off.assert_called_once_with(mock_instance)
            self._vmops._vmutils.destroy_vm.assert_called_once_with(
                mock_instance.name)
        else:
            self.assertFalse(mock_power_off.called)
            self.assertFalse(self._vmops._vmutils.destroy_vm.called)

        self._vmops._vmutils.vm_exists.assert_called_with(
            mock_instance.name)
        mock_unplug_vifs.assert_called_once_with(
            mock_instance, mock.sentinel.fake_network_info)
        mock_disconnect_volumes.assert_called_once_with(
            mock.sentinel.FAKE_BD_INFO)
        mock_delete_disk_files.assert_called_once_with(
            mock_instance.name)

    @mock.patch('nova.virt.hyperv.vmops.VMOps.power_off')
    def test_destroy_exception(self, mock_power_off):
        mock_instance = fake_instance.fake_instance_obj(self.context)
        self._vmops._vmutils.destroy_vm.side_effect = (
            os_win_exc.HyperVException)
        self._vmops._vmutils.vm_exists.return_value = True

        self.assertRaises(os_win_exc.HyperVException,
                          self._vmops.destroy, mock_instance,
                          mock.sentinel.network_info,
                          mock.sentinel.block_device_info)

    def test_reboot_hard(self):
        self._test_reboot(vmops.REBOOT_TYPE_HARD,
                          os_win_const.HYPERV_VM_STATE_REBOOT)

    @mock.patch("nova.virt.hyperv.vmops.VMOps._soft_shutdown")
    def test_reboot_soft(self, mock_soft_shutdown):
        mock_soft_shutdown.return_value = True
        self._test_reboot(vmops.REBOOT_TYPE_SOFT,
                          os_win_const.HYPERV_VM_STATE_ENABLED)

    @mock.patch("nova.virt.hyperv.vmops.VMOps._soft_shutdown")
    def test_reboot_soft_failed(self, mock_soft_shutdown):
        mock_soft_shutdown.return_value = False
        self._test_reboot(vmops.REBOOT_TYPE_SOFT,
                          os_win_const.HYPERV_VM_STATE_REBOOT)

    @mock.patch("nova.virt.hyperv.vmops.VMOps.power_on")
    @mock.patch("nova.virt.hyperv.vmops.VMOps._soft_shutdown")
    def test_reboot_soft_exception(self, mock_soft_shutdown, mock_power_on):
        mock_soft_shutdown.return_value = True
        mock_power_on.side_effect = os_win_exc.HyperVException(
            "Expected failure")
        instance = fake_instance.fake_instance_obj(self.context)

        self.assertRaises(os_win_exc.HyperVException, self._vmops.reboot,
                          instance, {}, vmops.REBOOT_TYPE_SOFT)

        mock_soft_shutdown.assert_called_once_with(instance)
        mock_power_on.assert_called_once_with(instance, network_info={})

    def _test_reboot(self, reboot_type, vm_state):
        instance = fake_instance.fake_instance_obj(self.context)
        with mock.patch.object(self._vmops, '_set_vm_state') as mock_set_state:
            self._vmops.reboot(instance, {}, reboot_type)
            mock_set_state.assert_called_once_with(instance, vm_state)
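    # The soft shutdown tests below pin down the retry loop semantics:
    # soft_shutdown_vm is (re)issued and _wait_for_power_off polled in
    # wait_time increments until the accumulated time reaches the timeout;
    # the helper is expected to return True as soon as the guest powers off
    # and False once the timeout budget is exhausted or the shutdown
    # request itself fails.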
    @mock.patch("nova.virt.hyperv.vmops.VMOps._wait_for_power_off")
    def test_soft_shutdown(self, mock_wait_for_power_off):
        instance = fake_instance.fake_instance_obj(self.context)
        mock_wait_for_power_off.return_value = True

        result = self._vmops._soft_shutdown(instance, self._FAKE_TIMEOUT)

        mock_shutdown_vm = self._vmops._vmutils.soft_shutdown_vm
        mock_shutdown_vm.assert_called_once_with(instance.name)
        mock_wait_for_power_off.assert_called_once_with(
            instance.name, self._FAKE_TIMEOUT)

        self.assertTrue(result)

    @mock.patch("time.sleep")
    def test_soft_shutdown_failed(self, mock_sleep):
        instance = fake_instance.fake_instance_obj(self.context)

        mock_shutdown_vm = self._vmops._vmutils.soft_shutdown_vm
        mock_shutdown_vm.side_effect = os_win_exc.HyperVException(
            "Expected failure.")

        result = self._vmops._soft_shutdown(instance, self._FAKE_TIMEOUT)

        mock_shutdown_vm.assert_called_once_with(instance.name)
        self.assertFalse(result)

    @mock.patch("nova.virt.hyperv.vmops.VMOps._wait_for_power_off")
    def test_soft_shutdown_wait(self, mock_wait_for_power_off):
        instance = fake_instance.fake_instance_obj(self.context)
        mock_wait_for_power_off.side_effect = [False, True]

        result = self._vmops._soft_shutdown(instance, self._FAKE_TIMEOUT, 1)

        calls = [mock.call(instance.name, 1),
                 mock.call(instance.name, self._FAKE_TIMEOUT - 1)]
        mock_shutdown_vm = self._vmops._vmutils.soft_shutdown_vm
        mock_shutdown_vm.assert_called_with(instance.name)
        mock_wait_for_power_off.assert_has_calls(calls)

        self.assertTrue(result)

    @mock.patch("nova.virt.hyperv.vmops.VMOps._wait_for_power_off")
    def test_soft_shutdown_wait_timeout(self, mock_wait_for_power_off):
        instance = fake_instance.fake_instance_obj(self.context)
        mock_wait_for_power_off.return_value = False

        result = self._vmops._soft_shutdown(instance,
                                            self._FAKE_TIMEOUT, 1.5)

        calls = [mock.call(instance.name, 1.5),
                 mock.call(instance.name, self._FAKE_TIMEOUT - 1.5)]
        mock_shutdown_vm = self._vmops._vmutils.soft_shutdown_vm
        mock_shutdown_vm.assert_called_with(instance.name)
        mock_wait_for_power_off.assert_has_calls(calls)

        self.assertFalse(result)

    @mock.patch('nova.virt.hyperv.vmops.VMOps._set_vm_state')
    def test_pause(self, mock_set_vm_state):
        mock_instance = fake_instance.fake_instance_obj(self.context)
        self._vmops.pause(instance=mock_instance)
        mock_set_vm_state.assert_called_once_with(
            mock_instance, os_win_const.HYPERV_VM_STATE_PAUSED)

    @mock.patch('nova.virt.hyperv.vmops.VMOps._set_vm_state')
    def test_unpause(self, mock_set_vm_state):
        mock_instance = fake_instance.fake_instance_obj(self.context)
        self._vmops.unpause(instance=mock_instance)
        mock_set_vm_state.assert_called_once_with(
            mock_instance, os_win_const.HYPERV_VM_STATE_ENABLED)

    @mock.patch('nova.virt.hyperv.vmops.VMOps._set_vm_state')
    def test_suspend(self, mock_set_vm_state):
        mock_instance = fake_instance.fake_instance_obj(self.context)
        self._vmops.suspend(instance=mock_instance)
        mock_set_vm_state.assert_called_once_with(
            mock_instance, os_win_const.HYPERV_VM_STATE_SUSPENDED)

    @mock.patch('nova.virt.hyperv.vmops.VMOps._set_vm_state')
    def test_resume(self, mock_set_vm_state):
        mock_instance = fake_instance.fake_instance_obj(self.context)
        self._vmops.resume(instance=mock_instance)
        mock_set_vm_state.assert_called_once_with(
            mock_instance, os_win_const.HYPERV_VM_STATE_ENABLED)

    def _test_power_off(self, timeout, set_state_expected=True):
        instance = fake_instance.fake_instance_obj(self.context)
        with mock.patch.object(self._vmops, '_set_vm_state') as mock_set_state:
            self._vmops.power_off(instance, timeout)

            serialops = self._vmops._serial_console_ops
            serialops.stop_console_handler.assert_called_once_with(
                instance.name)
            if set_state_expected:
                mock_set_state.assert_called_once_with(
                    instance, os_win_const.HYPERV_VM_STATE_DISABLED)

    def test_power_off_hard(self):
        self._test_power_off(timeout=0)

    @mock.patch("nova.virt.hyperv.vmops.VMOps._soft_shutdown")
    def test_power_off_exception(self, mock_soft_shutdown):
        mock_soft_shutdown.return_value = False
        self._test_power_off(timeout=1)

    @mock.patch("nova.virt.hyperv.vmops.VMOps._set_vm_state")
    @mock.patch("nova.virt.hyperv.vmops.VMOps._soft_shutdown")
    def test_power_off_soft(self, mock_soft_shutdown, mock_set_state):
        instance = fake_instance.fake_instance_obj(self.context)
        mock_soft_shutdown.return_value = True

        self._vmops.power_off(instance, 1, 0)

        serialops = self._vmops._serial_console_ops
        serialops.stop_console_handler.assert_called_once_with(
            instance.name)
        mock_soft_shutdown.assert_called_once_with(
            instance, 1, vmops.SHUTDOWN_TIME_INCREMENT)
        self.assertFalse(mock_set_state.called)

    @mock.patch("nova.virt.hyperv.vmops.VMOps._soft_shutdown")
    def test_power_off_unexisting_instance(self, mock_soft_shutdown):
        mock_soft_shutdown.side_effect = os_win_exc.HyperVVMNotFoundException(
            vm_name=mock.sentinel.vm_name)
        self._test_power_off(timeout=1, set_state_expected=False)

    @mock.patch('nova.virt.hyperv.vmops.VMOps._set_vm_state')
    def test_power_on(self, mock_set_vm_state):
        mock_instance = fake_instance.fake_instance_obj(self.context)

        self._vmops.power_on(mock_instance)

        mock_set_vm_state.assert_called_once_with(
            mock_instance, os_win_const.HYPERV_VM_STATE_ENABLED)

    @mock.patch('nova.virt.hyperv.volumeops.VolumeOps'
                '.fix_instance_volume_disk_paths')
    @mock.patch('nova.virt.hyperv.vmops.VMOps._set_vm_state')
    def test_power_on_having_block_devices(self, mock_set_vm_state,
                                           mock_fix_instance_vol_paths):
        mock_instance = fake_instance.fake_instance_obj(self.context)

        self._vmops.power_on(mock_instance, mock.sentinel.block_device_info)

        mock_fix_instance_vol_paths.assert_called_once_with(
            mock_instance.name, mock.sentinel.block_device_info)
        mock_set_vm_state.assert_called_once_with(
            mock_instance, os_win_const.HYPERV_VM_STATE_ENABLED)

    @mock.patch.object(vmops.VMOps, 'plug_vifs')
    def test_power_on_with_network_info(self, mock_plug_vifs):
        mock_instance = fake_instance.fake_instance_obj(self.context)

        self._vmops.power_on(mock_instance,
                             network_info=mock.sentinel.fake_network_info)

        mock_plug_vifs.assert_called_once_with(
            mock_instance, mock.sentinel.fake_network_info)

    def _test_set_vm_state(self, state):
        mock_instance = fake_instance.fake_instance_obj(self.context)

        self._vmops._set_vm_state(mock_instance, state)
        self._vmops._vmutils.set_vm_state.assert_called_once_with(
            mock_instance.name, state)

    def test_set_vm_state_disabled(self):
        self._test_set_vm_state(state=os_win_const.HYPERV_VM_STATE_DISABLED)

    def test_set_vm_state_enabled(self):
        self._test_set_vm_state(state=os_win_const.HYPERV_VM_STATE_ENABLED)

    def test_set_vm_state_reboot(self):
        self._test_set_vm_state(state=os_win_const.HYPERV_VM_STATE_REBOOT)

    def test_set_vm_state_exception(self):
        mock_instance = fake_instance.fake_instance_obj(self.context)
        self._vmops._vmutils.set_vm_state.side_effect = (
            os_win_exc.HyperVException)
        self.assertRaises(os_win_exc.HyperVException,
                          self._vmops._set_vm_state,
                          mock_instance, mock.sentinel.STATE)

    def test_get_vm_state(self):
        summary_info = {'EnabledState': os_win_const.HYPERV_VM_STATE_DISABLED}

        with mock.patch.object(self._vmops._vmutils,
                               'get_vm_summary_info') as mock_get_summary_info:
            mock_get_summary_info.return_value = summary_info

            response = self._vmops._get_vm_state(mock.sentinel.FAKE_VM_NAME)
            self.assertEqual(response,
                             os_win_const.HYPERV_VM_STATE_DISABLED)

    @mock.patch.object(vmops.VMOps, '_get_vm_state')
    def test_wait_for_power_off_true(self, mock_get_state):
        mock_get_state.return_value = os_win_const.HYPERV_VM_STATE_DISABLED
        result = self._vmops._wait_for_power_off(
            mock.sentinel.FAKE_VM_NAME, vmops.SHUTDOWN_TIME_INCREMENT)
        mock_get_state.assert_called_with(mock.sentinel.FAKE_VM_NAME)
        self.assertTrue(result)

    @mock.patch.object(vmops.etimeout, "with_timeout")
    def test_wait_for_power_off_false(self, mock_with_timeout):
        mock_with_timeout.side_effect = etimeout.Timeout()
        result = self._vmops._wait_for_power_off(
            mock.sentinel.FAKE_VM_NAME, vmops.SHUTDOWN_TIME_INCREMENT)
        self.assertFalse(result)

    def test_create_vm_com_port_pipes(self):
        mock_instance = fake_instance.fake_instance_obj(self.context)
        mock_serial_ports = {
            1: constants.SERIAL_PORT_TYPE_RO,
            2: constants.SERIAL_PORT_TYPE_RW
        }

        self._vmops._create_vm_com_port_pipes(mock_instance,
                                              mock_serial_ports)
        expected_calls = []
        for port_number, port_type in mock_serial_ports.items():
            expected_pipe = r'\\.\pipe\%s_%s' % (mock_instance.uuid,
                                                 port_type)
            expected_calls.append(mock.call(mock_instance.name,
                                            port_number,
                                            expected_pipe))

        mock_set_conn = self._vmops._vmutils.set_vm_serial_port_connection
        mock_set_conn.assert_has_calls(expected_calls)

    def test_list_instance_uuids(self):
        fake_uuid = '4f54fb69-d3a2-45b7-bb9b-b6e6b3d893b3'
        with mock.patch.object(self._vmops._vmutils,
                               'list_instance_notes') as mock_list_notes:
            mock_list_notes.return_value = [('fake_name', [fake_uuid])]

            response = self._vmops.list_instance_uuids()
            mock_list_notes.assert_called_once_with()

        self.assertEqual(response, [fake_uuid])

    def test_copy_vm_dvd_disks(self):
        fake_paths = [mock.sentinel.FAKE_DVD_PATH1,
                      mock.sentinel.FAKE_DVD_PATH2]
        mock_copy = self._vmops._pathutils.copyfile
        mock_get_dvd_disk_paths = self._vmops._vmutils.get_vm_dvd_disk_paths
        mock_get_dvd_disk_paths.return_value = fake_paths
        self._vmops._pathutils.get_instance_dir.return_value = (
            mock.sentinel.FAKE_DEST_PATH)

        self._vmops.copy_vm_dvd_disks(mock.sentinel.FAKE_VM_NAME,
                                      mock.sentinel.FAKE_DEST_HOST)

        mock_get_dvd_disk_paths.assert_called_with(mock.sentinel.FAKE_VM_NAME)
        self._vmops._pathutils.get_instance_dir.assert_called_once_with(
            mock.sentinel.FAKE_VM_NAME,
            remote_server=mock.sentinel.FAKE_DEST_HOST)
        # assert_has_calls is used here instead of the silently passing
        # has_calls mock attribute, so the copy calls are actually checked.
        mock_copy.assert_has_calls(
            [mock.call(mock.sentinel.FAKE_DVD_PATH1,
                       mock.sentinel.FAKE_DEST_PATH),
             mock.call(mock.sentinel.FAKE_DVD_PATH2,
                       mock.sentinel.FAKE_DEST_PATH)])

    def test_plug_vifs(self):
        mock_instance = fake_instance.fake_instance_obj(self.context)
        fake_vif1 = {'id': mock.sentinel.ID1,
                     'type': mock.sentinel.vif_type1}
        fake_vif2 = {'id': mock.sentinel.ID2,
                     'type': mock.sentinel.vif_type2}
        mock_network_info = [fake_vif1, fake_vif2]
        calls = [mock.call(mock_instance, fake_vif1),
                 mock.call(mock_instance, fake_vif2)]

        self._vmops.plug_vifs(mock_instance,
                              network_info=mock_network_info)
        self._vmops._vif_driver.plug.assert_has_calls(calls)

    def test_unplug_vifs(self):
        mock_instance = fake_instance.fake_instance_obj(self.context)
        fake_vif1 = {'id': mock.sentinel.ID1,
                     'type': mock.sentinel.vif_type1}
        fake_vif2 = {'id': mock.sentinel.ID2,
                     'type': mock.sentinel.vif_type2}
        mock_network_info = [fake_vif1, fake_vif2]
        calls = [mock.call(mock_instance, fake_vif1),
                 mock.call(mock_instance, fake_vif2)]

        self._vmops.unplug_vifs(mock_instance,
                                network_info=mock_network_info)
        self._vmops._vif_driver.unplug.assert_has_calls(calls)

    def _setup_remotefx_mocks(self):
        mock_instance = fake_instance.fake_instance_obj(self.context)
        mock_instance.flavor.extra_specs = {
            'os:resolution': os_win_const.REMOTEFX_MAX_RES_1920x1200,
            'os:monitors': '2',
            'os:vram': '256'}

        return mock_instance

    def test_configure_remotefx_not_required(self):
        self.flags(enable_remotefx=False, group='hyperv')
        mock_instance = fake_instance.fake_instance_obj(self.context)
        self._vmops._configure_remotefx(mock_instance, mock.sentinel.VM_GEN)

    def test_configure_remotefx_exception_enable_config(self):
        self.flags(enable_remotefx=False, group='hyperv')
        mock_instance = self._setup_remotefx_mocks()

        self.assertRaises(exception.InstanceUnacceptable,
                          self._vmops._configure_remotefx,
                          mock_instance, mock.sentinel.VM_GEN)

    def test_configure_remotefx_exception_server_feature(self):
        self.flags(enable_remotefx=True, group='hyperv')
        mock_instance = self._setup_remotefx_mocks()
        self._vmops._hostutils.check_server_feature.return_value = False

        self.assertRaises(exception.InstanceUnacceptable,
                          self._vmops._configure_remotefx,
                          mock_instance, mock.sentinel.VM_GEN)

    def test_configure_remotefx_exception_vm_gen(self):
        self.flags(enable_remotefx=True, group='hyperv')
        mock_instance = self._setup_remotefx_mocks()
        self._vmops._hostutils.check_server_feature.return_value = True
        self._vmops._vmutils.vm_gen_supports_remotefx.return_value = False

        self.assertRaises(exception.InstanceUnacceptable,
                          self._vmops._configure_remotefx,
                          mock_instance, mock.sentinel.VM_GEN)

    def test_configure_remotefx(self):
        self.flags(enable_remotefx=True, group='hyperv')
        mock_instance = self._setup_remotefx_mocks()
        self._vmops._hostutils.check_server_feature.return_value = True
        self._vmops._vmutils.vm_gen_supports_remotefx.return_value = True
        extra_specs = mock_instance.flavor.extra_specs

        self._vmops._configure_remotefx(mock_instance, constants.VM_GEN_1)

        mock_enable_remotefx = (
            self._vmops._vmutils.enable_remotefx_video_adapter)
        mock_enable_remotefx.assert_called_once_with(
            mock_instance.name, int(extra_specs['os:monitors']),
            extra_specs['os:resolution'],
            int(extra_specs['os:vram']) * units.Mi)
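    # Hot(un)plug availability, as exercised by the tests below: a
    # powered-off VM can always be reconfigured, while a running VM is
    # additionally expected to require a generation 2 VM and a host
    # passing check_min_windows_version(10, 0) -- i.e. kernel 10.0,
    # Windows 10 / Windows Server 2016 or newer.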
    @mock.patch.object(vmops.VMOps, '_get_vm_state')
    def test_check_hotplug_available_vm_disabled(self, mock_get_vm_state):
        fake_vm = fake_instance.fake_instance_obj(self.context)
        mock_get_vm_state.return_value = os_win_const.HYPERV_VM_STATE_DISABLED

        result = self._vmops._check_hotplug_available(fake_vm)

        self.assertTrue(result)
        mock_get_vm_state.assert_called_once_with(fake_vm.name)
        self.assertFalse(
            self._vmops._hostutils.check_min_windows_version.called)
        self.assertFalse(self._vmops._vmutils.get_vm_generation.called)

    @mock.patch.object(vmops.VMOps, '_get_vm_state')
    def _test_check_hotplug_available(
            self, mock_get_vm_state, expected_result=False,
            vm_gen=constants.VM_GEN_2,
            windows_version=_WIN_VERSION_10):
        fake_vm = fake_instance.fake_instance_obj(self.context)
        mock_get_vm_state.return_value = os_win_const.HYPERV_VM_STATE_ENABLED
        self._vmops._vmutils.get_vm_generation.return_value = vm_gen
        fake_check_win_vers = self._vmops._hostutils.check_min_windows_version
        fake_check_win_vers.return_value = (
            windows_version == self._WIN_VERSION_10)

        result = self._vmops._check_hotplug_available(fake_vm)

        self.assertEqual(expected_result, result)
        mock_get_vm_state.assert_called_once_with(fake_vm.name)
        fake_check_win_vers.assert_called_once_with(10, 0)

    def test_check_if_hotplug_available(self):
        self._test_check_hotplug_available(expected_result=True)

    def test_check_if_hotplug_available_gen1(self):
        self._test_check_hotplug_available(
            expected_result=False, vm_gen=constants.VM_GEN_1)

    def test_check_if_hotplug_available_win_6_3(self):
        self._test_check_hotplug_available(
            expected_result=False,
            windows_version=self._WIN_VERSION_6_3)

    @mock.patch.object(vmops.VMOps, '_check_hotplug_available')
    def test_attach_interface(self, mock_check_hotplug_available):
        mock_check_hotplug_available.return_value = True
        fake_vm = fake_instance.fake_instance_obj(self.context)
        fake_vif = test_virtual_interface.fake_vif

        self._vmops.attach_interface(fake_vm, fake_vif)

        mock_check_hotplug_available.assert_called_once_with(fake_vm)
        self._vmops._vif_driver.plug.assert_called_once_with(
            fake_vm, fake_vif)
        self._vmops._vmutils.create_nic.assert_called_once_with(
            fake_vm.name, fake_vif['id'], fake_vif['address'])

    @mock.patch.object(vmops.VMOps, '_check_hotplug_available')
    def test_attach_interface_failed(self, mock_check_hotplug_available):
        mock_check_hotplug_available.return_value = False
        self.assertRaises(exception.InterfaceAttachFailed,
                          self._vmops.attach_interface,
                          mock.MagicMock(), mock.sentinel.fake_vif)

    @mock.patch.object(vmops.VMOps, '_check_hotplug_available')
    def test_detach_interface(self, mock_check_hotplug_available):
        mock_check_hotplug_available.return_value = True
        fake_vm = fake_instance.fake_instance_obj(self.context)
        fake_vif = test_virtual_interface.fake_vif

        self._vmops.detach_interface(fake_vm, fake_vif)

        mock_check_hotplug_available.assert_called_once_with(fake_vm)
        self._vmops._vif_driver.unplug.assert_called_once_with(
            fake_vm, fake_vif)
        self._vmops._vmutils.destroy_nic.assert_called_once_with(
            fake_vm.name, fake_vif['id'])

    @mock.patch.object(vmops.VMOps, '_check_hotplug_available')
    def test_detach_interface_failed(self, mock_check_hotplug_available):
        mock_check_hotplug_available.return_value = False
        self.assertRaises(exception.InterfaceDetachFailed,
                          self._vmops.detach_interface,
                          mock.MagicMock(), mock.sentinel.fake_vif)

    @mock.patch.object(vmops.VMOps, '_check_hotplug_available')
    def test_detach_interface_missing_instance(self, mock_check_hotplug):
        mock_check_hotplug.side_effect = os_win_exc.HyperVVMNotFoundException(
            vm_name='fake_vm')
        self.assertRaises(exception.InterfaceDetachFailed,
                          self._vmops.detach_interface,
                          mock.MagicMock(), mock.sentinel.fake_vif)

    @mock.patch('nova.virt.configdrive.required_by')
    @mock.patch.object(vmops.VMOps, '_create_root_vhd')
    @mock.patch.object(vmops.VMOps, 'get_image_vm_generation')
    @mock.patch.object(vmops.VMOps, '_attach_drive')
    @mock.patch.object(vmops.VMOps, '_create_config_drive')
    @mock.patch.object(vmops.VMOps, 'attach_config_drive')
    @mock.patch.object(vmops.VMOps, '_detach_config_drive')
    @mock.patch.object(vmops.VMOps, 'power_on')
    def test_rescue_instance(self, mock_power_on,
                             mock_detach_config_drive,
                             mock_attach_config_drive,
                             mock_create_config_drive,
                             mock_attach_drive,
                             mock_get_image_vm_gen,
                             mock_create_root_vhd,
                             mock_configdrive_required):
        mock_image_meta = mock.MagicMock()
        mock_vm_gen = constants.VM_GEN_2
        mock_instance = fake_instance.fake_instance_obj(self.context)

        mock_configdrive_required.return_value = True
        mock_create_root_vhd.return_value = mock.sentinel.rescue_vhd_path
        mock_get_image_vm_gen.return_value = mock_vm_gen
        self._vmops._vmutils.get_vm_generation.return_value = mock_vm_gen
        self._vmops._pathutils.lookup_root_vhd_path.return_value = (
            mock.sentinel.root_vhd_path)
        mock_create_config_drive.return_value = (
            mock.sentinel.rescue_configdrive_path)

        self._vmops.rescue_instance(self.context,
                                    mock_instance,
                                    mock.sentinel.network_info,
                                    mock_image_meta,
                                    mock.sentinel.rescue_password)

        mock_get_image_vm_gen.assert_called_once_with(
            mock_instance.uuid, mock_image_meta)
        self._vmops._vmutils.detach_vm_disk.assert_called_once_with(
            mock_instance.name, mock.sentinel.root_vhd_path,
            is_physical=False)
        mock_attach_drive.assert_called_once_with(
            mock_instance.name, mock.sentinel.rescue_vhd_path, 0,
            self._vmops._ROOT_DISK_CTRL_ADDR,
            vmops.VM_GENERATIONS_CONTROLLER_TYPES[mock_vm_gen])
        self._vmops._vmutils.attach_scsi_drive.assert_called_once_with(
            mock_instance.name, mock.sentinel.root_vhd_path,
            drive_type=constants.DISK)
        mock_detach_config_drive.assert_called_once_with(mock_instance.name)
        mock_create_config_drive.assert_called_once_with(
            self.context, mock_instance,
            injected_files=None,
            admin_password=mock.sentinel.rescue_password,
            network_info=mock.sentinel.network_info,
            rescue=True)
        mock_attach_config_drive.assert_called_once_with(
            mock_instance, mock.sentinel.rescue_configdrive_path,
            mock_vm_gen)

    @mock.patch.object(vmops.VMOps, '_create_root_vhd')
    @mock.patch.object(vmops.VMOps, 'get_image_vm_generation')
    @mock.patch.object(vmops.VMOps, 'unrescue_instance')
    def _test_rescue_instance_exception(self, mock_unrescue,
                                        mock_get_image_vm_gen,
                                        mock_create_root_vhd,
                                        wrong_vm_gen=False,
                                        boot_from_volume=False,
                                        expected_exc=None):
        mock_vm_gen = constants.VM_GEN_1
        image_vm_gen = (mock_vm_gen
                        if not wrong_vm_gen else constants.VM_GEN_2)
        mock_image_meta = mock.MagicMock()

        mock_instance = fake_instance.fake_instance_obj(self.context)
        mock_get_image_vm_gen.return_value = image_vm_gen
        self._vmops._vmutils.get_vm_generation.return_value = mock_vm_gen
        self._vmops._pathutils.lookup_root_vhd_path.return_value = (
            mock.sentinel.root_vhd_path if not boot_from_volume else None)

        self.assertRaises(expected_exc,
                          self._vmops.rescue_instance,
                          self.context, mock_instance,
                          mock.sentinel.network_info,
                          mock_image_meta,
                          mock.sentinel.rescue_password)
        mock_unrescue.assert_called_once_with(mock_instance)

    def test_rescue_instance_wrong_vm_gen(self):
        # Test the case when the rescue image requires a different
        # vm generation than the actual rescued instance.
        self._test_rescue_instance_exception(
            wrong_vm_gen=True,
            expected_exc=exception.ImageUnacceptable)

    def test_rescue_instance_boot_from_volume(self):
        # Rescuing instances booted from volume is not supported.
        self._test_rescue_instance_exception(
            boot_from_volume=True,
            expected_exc=exception.InstanceNotRescuable)

    @mock.patch.object(fileutils, 'delete_if_exists')
    @mock.patch.object(vmops.VMOps, '_attach_drive')
    @mock.patch.object(vmops.VMOps, 'attach_config_drive')
    @mock.patch.object(vmops.VMOps, '_detach_config_drive')
    @mock.patch.object(vmops.VMOps, 'power_on')
    @mock.patch.object(vmops.VMOps, 'power_off')
    def test_unrescue_instance(self, mock_power_on, mock_power_off,
                               mock_detach_config_drive,
                               mock_attach_configdrive,
                               mock_attach_drive,
                               mock_delete_if_exists):
        mock_instance = fake_instance.fake_instance_obj(self.context)
        mock_vm_gen = constants.VM_GEN_2

        self._vmops._vmutils.get_vm_generation.return_value = mock_vm_gen
        self._vmops._vmutils.is_disk_attached.return_value = False
        self._vmops._pathutils.lookup_root_vhd_path.side_effect = (
            mock.sentinel.root_vhd_path, mock.sentinel.rescue_vhd_path)
        self._vmops._pathutils.lookup_configdrive_path.return_value = (
            mock.sentinel.configdrive_path)

        self._vmops.unrescue_instance(mock_instance)

        self._vmops._pathutils.lookup_root_vhd_path.assert_has_calls(
            [mock.call(mock_instance.name),
             mock.call(mock_instance.name, rescue=True)])
        self._vmops._vmutils.detach_vm_disk.assert_has_calls(
            [mock.call(mock_instance.name,
                       mock.sentinel.root_vhd_path,
                       is_physical=False),
             mock.call(mock_instance.name,
                       mock.sentinel.rescue_vhd_path,
                       is_physical=False)])
        mock_attach_drive.assert_called_once_with(
            mock_instance.name, mock.sentinel.root_vhd_path, 0,
            self._vmops._ROOT_DISK_CTRL_ADDR,
            vmops.VM_GENERATIONS_CONTROLLER_TYPES[mock_vm_gen])
        mock_detach_config_drive.assert_called_once_with(mock_instance.name,
                                                         rescue=True,
                                                         delete=True)
        mock_delete_if_exists.assert_called_once_with(
            mock.sentinel.rescue_vhd_path)
        self._vmops._vmutils.is_disk_attached.assert_called_once_with(
            mock.sentinel.configdrive_path, is_physical=False)
        mock_attach_configdrive.assert_called_once_with(
            mock_instance, mock.sentinel.configdrive_path, mock_vm_gen)
        mock_power_on.assert_called_once_with(mock_instance)

    @mock.patch.object(vmops.VMOps, 'power_off')
    def test_unrescue_instance_missing_root_image(self, mock_power_off):
        mock_instance = fake_instance.fake_instance_obj(self.context)
        mock_instance.vm_state = vm_states.RESCUED
        self._vmops._pathutils.lookup_root_vhd_path.return_value = None

        self.assertRaises(exception.InstanceNotRescuable,
                          self._vmops.unrescue_instance,
                          mock_instance)

    @mock.patch.object(volumeops.VolumeOps, 'bytes_per_sec_to_iops')
    @mock.patch.object(vmops.VMOps, '_get_scoped_flavor_extra_specs')
    @mock.patch.object(vmops.VMOps, '_get_instance_local_disks')
    def test_set_instance_disk_qos_specs(self, mock_get_local_disks,
                                         mock_get_scoped_specs,
                                         mock_bytes_per_sec_to_iops):
        fake_total_bytes_sec = 8
        fake_total_iops_sec = 1
        mock_instance = fake_instance.fake_instance_obj(self.context)
        mock_local_disks = [mock.sentinel.root_vhd_path,
                            mock.sentinel.eph_vhd_path]

        mock_get_local_disks.return_value = mock_local_disks
        mock_set_qos_specs = self._vmops._vmutils.set_disk_qos_specs
        mock_get_scoped_specs.return_value = dict(
            disk_total_bytes_sec=fake_total_bytes_sec)
        mock_bytes_per_sec_to_iops.return_value = fake_total_iops_sec

        self._vmops._set_instance_disk_qos_specs(mock_instance)

        mock_bytes_per_sec_to_iops.assert_called_once_with(
            fake_total_bytes_sec)
        mock_get_local_disks.assert_called_once_with(mock_instance.name)
        expected_calls = [mock.call(disk_path, fake_total_iops_sec)
                          for disk_path in mock_local_disks]
        mock_set_qos_specs.assert_has_calls(expected_calls)

    def test_get_instance_local_disks(self):
        fake_instance_dir = 'fake_instance_dir'
        fake_local_disks = [os.path.join(fake_instance_dir, disk_name)
                            for disk_name in ['root.vhd', 'configdrive.iso']]
        fake_instance_disks = ['fake_remote_disk'] + fake_local_disks

        mock_get_storage_paths = self._vmops._vmutils.get_vm_storage_paths
        mock_get_storage_paths.return_value = [fake_instance_disks, []]
        mock_get_instance_dir = self._vmops._pathutils.get_instance_dir
        mock_get_instance_dir.return_value = fake_instance_dir

        ret_val = self._vmops._get_instance_local_disks(
            mock.sentinel.instance_name)

        self.assertEqual(fake_local_disks, ret_val)

    def test_get_scoped_flavor_extra_specs(self):
        # The flavor extra specs dict contains only string values.
        fake_total_bytes_sec = '8'

        mock_instance = fake_instance.fake_instance_obj(self.context)
        mock_instance.flavor.extra_specs = {
            'spec_key': 'spec_value',
            'quota:total_bytes_sec': fake_total_bytes_sec}

        ret_val = self._vmops._get_scoped_flavor_extra_specs(
            mock_instance, scope='quota')

        expected_specs = {
            'total_bytes_sec': fake_total_bytes_sec
        }
        self.assertEqual(expected_specs, ret_val)

nova-17.0.1/nova/tests/unit/virt/hyperv/test_volumeops.py

# Copyright 2014 Cloudbase Solutions Srl
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock
from os_brick.initiator import connector
from oslo_config import cfg
from oslo_utils import units

from nova import exception
from nova import test
from nova.tests.unit import fake_block_device
from nova.tests.unit.virt.hyperv import test_base
from nova.virt.hyperv import constants
from nova.virt.hyperv import volumeops

CONF = cfg.CONF

connection_data = {'volume_id': 'fake_vol_id',
                   'target_lun': mock.sentinel.fake_lun,
                   'target_iqn': mock.sentinel.fake_iqn,
                   'target_portal': mock.sentinel.fake_portal,
                   'auth_method': 'chap',
                   'auth_username': mock.sentinel.fake_user,
                   'auth_password': mock.sentinel.fake_pass}


def get_fake_block_dev_info():
    return {'block_device_mapping': [
        fake_block_device.AnonFakeDbBlockDeviceDict({'source_type': 'volume'})]
    }


def get_fake_connection_info(**kwargs):
    return {'data': dict(connection_data, **kwargs),
            'serial': mock.sentinel.serial}


class VolumeOpsTestCase(test_base.HyperVBaseTestCase):
    """Unit tests for VolumeOps class."""

    def setUp(self):
        super(VolumeOpsTestCase, self).setUp()
        self._volumeops = volumeops.VolumeOps()
        self._volumeops._volutils = mock.MagicMock()
        self._volumeops._vmutils = mock.Mock()

    def test_get_volume_driver(self):
        fake_conn_info = {'driver_volume_type': mock.sentinel.fake_driver_type}
        self._volumeops.volume_drivers[mock.sentinel.fake_driver_type] = (
            mock.sentinel.fake_driver)

        result = self._volumeops._get_volume_driver(
            connection_info=fake_conn_info)
        self.assertEqual(mock.sentinel.fake_driver, result)

    def test_get_volume_driver_exception(self):
        fake_conn_info = {'driver_volume_type': 'fake_driver'}
        self.assertRaises(exception.VolumeDriverNotFound,
                          self._volumeops._get_volume_driver,
                          connection_info=fake_conn_info)

    @mock.patch.object(volumeops.VolumeOps, 'attach_volume')
    def test_attach_volumes(self, mock_attach_volume):
        block_device_info = get_fake_block_dev_info()

        self._volumeops.attach_volumes(
            block_device_info['block_device_mapping'],
            mock.sentinel.instance_name)

        mock_attach_volume.assert_called_once_with(
            block_device_info['block_device_mapping'][0]['connection_info'],
            mock.sentinel.instance_name)

    def test_fix_instance_volume_disk_paths_empty_bdm(self):
        self._volumeops.fix_instance_volume_disk_paths(
            mock.sentinel.instance_name,
            block_device_info={})
        self.assertFalse(
            self._volumeops._vmutils.get_vm_physical_disk_mapping.called)

    @mock.patch.object(volumeops.VolumeOps, 'get_disk_path_mapping')
    def test_fix_instance_volume_disk_paths(self, mock_get_disk_path_mapping):
        block_device_info = get_fake_block_dev_info()

        mock_disk1 = {
            'mounted_disk_path': mock.sentinel.mounted_disk1_path,
            'resource_path': mock.sentinel.resource1_path
        }
        mock_disk2 = {
            'mounted_disk_path': mock.sentinel.mounted_disk2_path,
            'resource_path': mock.sentinel.resource2_path
        }
        mock_vm_disk_mapping = {
            mock.sentinel.disk1_serial: mock_disk1,
            mock.sentinel.disk2_serial: mock_disk2
        }
        # In this case, only the first disk needs to be updated.
        mock_phys_disk_path_mapping = {
            mock.sentinel.disk1_serial: mock.sentinel.actual_disk1_path,
            mock.sentinel.disk2_serial: mock.sentinel.mounted_disk2_path
        }

        vmutils = self._volumeops._vmutils
        vmutils.get_vm_physical_disk_mapping.return_value = (
            mock_vm_disk_mapping)

        mock_get_disk_path_mapping.return_value = mock_phys_disk_path_mapping

        self._volumeops.fix_instance_volume_disk_paths(
            mock.sentinel.instance_name,
            block_device_info)

        vmutils.get_vm_physical_disk_mapping.assert_called_once_with(
            mock.sentinel.instance_name)
        mock_get_disk_path_mapping.assert_called_once_with(
            block_device_info)
        vmutils.set_disk_host_res.assert_called_once_with(
            mock.sentinel.resource1_path,
            mock.sentinel.actual_disk1_path)

    @mock.patch.object(volumeops.VolumeOps, '_get_volume_driver')
    def test_disconnect_volumes(self, mock_get_volume_driver):
        block_device_info = get_fake_block_dev_info()
        block_device_mapping = block_device_info['block_device_mapping']
        fake_volume_driver = mock_get_volume_driver.return_value

        self._volumeops.disconnect_volumes(block_device_info)
        fake_volume_driver.disconnect_volume.assert_called_once_with(
            block_device_mapping[0]['connection_info'])

    @mock.patch('time.sleep')
    @mock.patch.object(volumeops.VolumeOps, '_get_volume_driver')
    def _test_attach_volume(self, mock_get_volume_driver, mock_sleep,
                            attach_failed):
        fake_conn_info = get_fake_connection_info(
            qos_specs=mock.sentinel.qos_specs)
        fake_volume_driver = mock_get_volume_driver.return_value

        expected_try_count = 1
        if attach_failed:
            expected_try_count += CONF.hyperv.volume_attach_retry_count

            fake_volume_driver.set_disk_qos_specs.side_effect = (
                test.TestingException)

            self.assertRaises(exception.VolumeAttachFailed,
                              self._volumeops.attach_volume,
                              fake_conn_info,
                              mock.sentinel.inst_name,
                              mock.sentinel.disk_bus)
        else:
            self._volumeops.attach_volume(
                fake_conn_info,
                mock.sentinel.inst_name,
                mock.sentinel.disk_bus)

        mock_get_volume_driver.assert_any_call(
            fake_conn_info)
        fake_volume_driver.attach_volume.assert_has_calls(
            [mock.call(fake_conn_info,
                       mock.sentinel.inst_name,
                       mock.sentinel.disk_bus)] * expected_try_count)
        fake_volume_driver.set_disk_qos_specs.assert_has_calls(
            [mock.call(fake_conn_info,
                       mock.sentinel.qos_specs)] * expected_try_count)

        if attach_failed:
            fake_volume_driver.disconnect_volume.assert_called_once_with(
                fake_conn_info)
            mock_sleep.assert_has_calls(
                [mock.call(CONF.hyperv.volume_attach_retry_interval)] *
                CONF.hyperv.volume_attach_retry_count)
        else:
            mock_sleep.assert_not_called()

    def test_attach_volume(self):
        self._test_attach_volume(attach_failed=False)

    def test_attach_volume_exc(self):
        self._test_attach_volume(attach_failed=True)

    @mock.patch.object(volumeops.VolumeOps, '_get_volume_driver')
    def test_disconnect_volume(self, mock_get_volume_driver):
        fake_volume_driver = mock_get_volume_driver.return_value

        self._volumeops.disconnect_volume(mock.sentinel.conn_info)

        mock_get_volume_driver.assert_called_once_with(
            mock.sentinel.conn_info)
        fake_volume_driver.disconnect_volume.assert_called_once_with(
            mock.sentinel.conn_info)

    @mock.patch.object(volumeops.VolumeOps, '_get_volume_driver')
    def test_detach_volume(self, mock_get_volume_driver):
        fake_volume_driver = mock_get_volume_driver.return_value
        fake_conn_info = {'data': 'fake_conn_info_data'}

        self._volumeops.detach_volume(fake_conn_info,
                                      mock.sentinel.inst_name)

        mock_get_volume_driver.assert_called_once_with(
            fake_conn_info)
        fake_volume_driver.detach_volume.assert_called_once_with(
            fake_conn_info, mock.sentinel.inst_name)
        fake_volume_driver.disconnect_volume.assert_called_once_with(
            fake_conn_info)
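    # Note the asymmetry verified above: disconnect_volume(s) only tears
    # down the host-side connection, whereas detach_volume is expected to
    # first detach the disk from the VM and then disconnect it from the
    # host.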
    @mock.patch.object(connector, 'get_connector_properties')
    def test_get_volume_connector(self, mock_get_connector):
        conn = self._volumeops.get_volume_connector()

        mock_get_connector.assert_called_once_with(
            root_helper=None,
            my_ip=CONF.my_block_storage_ip,
            multipath=CONF.hyperv.use_multipath_io,
            enforce_multipath=True,
            host=CONF.host)
        self.assertEqual(mock_get_connector.return_value, conn)

    @mock.patch.object(volumeops.VolumeOps, '_get_volume_driver')
    def test_connect_volumes(self, mock_get_volume_driver):
        block_device_info = get_fake_block_dev_info()

        self._volumeops.connect_volumes(block_device_info)

        init_vol_conn = (
            mock_get_volume_driver.return_value.connect_volume)
        init_vol_conn.assert_called_once_with(
            block_device_info['block_device_mapping'][0]['connection_info'])

    @mock.patch.object(volumeops.VolumeOps, 'get_disk_resource_path')
    def test_get_disk_path_mapping(self, mock_get_disk_path):
        block_device_info = get_fake_block_dev_info()
        block_device_mapping = block_device_info['block_device_mapping']
        fake_conn_info = get_fake_connection_info()
        block_device_mapping[0]['connection_info'] = fake_conn_info

        mock_get_disk_path.return_value = mock.sentinel.disk_path

        resulted_disk_path_mapping = self._volumeops.get_disk_path_mapping(
            block_device_info)

        mock_get_disk_path.assert_called_once_with(fake_conn_info)
        expected_disk_path_mapping = {
            mock.sentinel.serial: mock.sentinel.disk_path
        }
        self.assertEqual(expected_disk_path_mapping,
                         resulted_disk_path_mapping)

    @mock.patch.object(volumeops.VolumeOps, '_get_volume_driver')
    def test_get_disk_resource_path(self, mock_get_volume_driver):
        fake_conn_info = get_fake_connection_info()
        fake_volume_driver = mock_get_volume_driver.return_value

        resulted_disk_path = self._volumeops.get_disk_resource_path(
            fake_conn_info)

        mock_get_volume_driver.assert_called_once_with(fake_conn_info)
        get_mounted_disk = fake_volume_driver.get_disk_resource_path
        get_mounted_disk.assert_called_once_with(fake_conn_info)
        self.assertEqual(get_mounted_disk.return_value,
                         resulted_disk_path)

    def test_bytes_per_sec_to_iops(self):
        no_bytes = 15 * units.Ki
        expected_iops = 2

        resulted_iops = self._volumeops.bytes_per_sec_to_iops(no_bytes)
        self.assertEqual(expected_iops, resulted_iops)
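    # The expected value above reflects Hyper-V's normalized I/O size:
    # byte-per-second QoS limits are converted to IOPS in 8 KB units,
    # rounding up, so with a 15 KiB/s limit:
    #
    #     ceil(15 * 1024 / (8 * 1024)) = ceil(1.875) = 2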
    @mock.patch.object(volumeops.LOG, 'warning')
    def test_validate_qos_specs(self, mock_warning):
        supported_qos_specs = [mock.sentinel.spec1, mock.sentinel.spec2]
        requested_qos_specs = {mock.sentinel.spec1: mock.sentinel.val,
                               mock.sentinel.spec3: mock.sentinel.val2}

        self._volumeops.validate_qos_specs(requested_qos_specs,
                                           supported_qos_specs)
        self.assertTrue(mock_warning.called)


class BaseVolumeDriverTestCase(test_base.HyperVBaseTestCase):
    """Unit tests for Hyper-V BaseVolumeDriver class."""

    def setUp(self):
        super(BaseVolumeDriverTestCase, self).setUp()

        self._base_vol_driver = volumeops.BaseVolumeDriver()

        self._base_vol_driver._diskutils = mock.Mock()
        self._base_vol_driver._vmutils = mock.Mock()
        self._base_vol_driver._conn = mock.Mock()
        self._vmutils = self._base_vol_driver._vmutils
        self._diskutils = self._base_vol_driver._diskutils
        self._conn = self._base_vol_driver._conn

    @mock.patch.object(connector.InitiatorConnector, 'factory')
    def test_connector(self, mock_conn_factory):
        self._base_vol_driver._conn = None
        self._base_vol_driver._protocol = mock.sentinel.protocol
        self._base_vol_driver._extra_connector_args = dict(
            fake_conn_arg=mock.sentinel.conn_val)

        conn = self._base_vol_driver._connector

        self.assertEqual(mock_conn_factory.return_value, conn)
        mock_conn_factory.assert_called_once_with(
            protocol=mock.sentinel.protocol,
            root_helper=None,
            use_multipath=CONF.hyperv.use_multipath_io,
            device_scan_attempts=CONF.hyperv.mounted_disk_query_retry_count,
            device_scan_interval=(
                CONF.hyperv.mounted_disk_query_retry_interval),
            **self._base_vol_driver._extra_connector_args)

    def test_connect_volume(self):
        conn_info = get_fake_connection_info()

        dev_info = self._base_vol_driver.connect_volume(conn_info)
        expected_dev_info = self._conn.connect_volume.return_value

        self.assertEqual(expected_dev_info, dev_info)
        self._conn.connect_volume.assert_called_once_with(
            conn_info['data'])

    def test_disconnect_volume(self):
        conn_info = get_fake_connection_info()

        self._base_vol_driver.disconnect_volume(conn_info)

        self._conn.disconnect_volume.assert_called_once_with(
            conn_info['data'])

    @mock.patch.object(volumeops.BaseVolumeDriver, '_get_disk_res_path')
    def _test_get_disk_resource_path_by_conn_info(self,
                                                  mock_get_disk_res_path,
                                                  disk_found=True):
        conn_info = get_fake_connection_info()
        mock_vol_paths = [mock.sentinel.disk_path] if disk_found else []
        self._conn.get_volume_paths.return_value = mock_vol_paths

        if disk_found:
            disk_res_path = self._base_vol_driver.get_disk_resource_path(
                conn_info)

            self._conn.get_volume_paths.assert_called_once_with(
                conn_info['data'])
            self.assertEqual(mock_get_disk_res_path.return_value,
                             disk_res_path)
            mock_get_disk_res_path.assert_called_once_with(
                mock.sentinel.disk_path)
        else:
            self.assertRaises(exception.DiskNotFound,
                              self._base_vol_driver.get_disk_resource_path,
                              conn_info)

    def test_get_existing_disk_res_path(self):
        self._test_get_disk_resource_path_by_conn_info()

    def test_get_unfound_disk_res_path(self):
        self._test_get_disk_resource_path_by_conn_info(disk_found=False)

    def test_get_block_dev_res_path(self):
        self._base_vol_driver._is_block_dev = True

        mock_get_dev_number = (
            self._diskutils.get_device_number_from_device_name)
        mock_get_dev_number.return_value = mock.sentinel.dev_number
        self._vmutils.get_mounted_disk_by_drive_number.return_value = (
            mock.sentinel.disk_path)

        disk_path = self._base_vol_driver._get_disk_res_path(
            mock.sentinel.dev_name)

        mock_get_dev_number.assert_called_once_with(mock.sentinel.dev_name)
        self._vmutils.get_mounted_disk_by_drive_number.assert_called_once_with(
            mock.sentinel.dev_number)

        self.assertEqual(mock.sentinel.disk_path, disk_path)

    def test_get_virt_disk_res_path(self):
        # For virtual disk images, we expect the resource path to be the
        # actual image path, as opposed to passthrough disks, in which case
        # we need the Msvm_DiskDrive resource path when attaching it to a VM.
        self._base_vol_driver._is_block_dev = False

        path = self._base_vol_driver._get_disk_res_path(
            mock.sentinel.disk_path)
        self.assertEqual(mock.sentinel.disk_path, path)

    @mock.patch.object(volumeops.BaseVolumeDriver, '_get_disk_res_path')
    @mock.patch.object(volumeops.BaseVolumeDriver, '_get_disk_ctrl_and_slot')
    @mock.patch.object(volumeops.BaseVolumeDriver, 'connect_volume')
    def _test_attach_volume(self, mock_connect_volume,
                            mock_get_disk_ctrl_and_slot,
                            mock_get_disk_res_path,
                            is_block_dev=True):
        connection_info = get_fake_connection_info()
        self._base_vol_driver._is_block_dev = is_block_dev
        mock_connect_volume.return_value = dict(path=mock.sentinel.raw_path)

        mock_get_disk_res_path.return_value = (
            mock.sentinel.disk_path)
        mock_get_disk_ctrl_and_slot.return_value = (
            mock.sentinel.ctrller_path,
            mock.sentinel.slot)

        self._base_vol_driver.attach_volume(
            connection_info=connection_info,
            instance_name=mock.sentinel.instance_name,
            disk_bus=mock.sentinel.disk_bus)

        if is_block_dev:
            self._vmutils.attach_volume_to_controller.assert_called_once_with(
                mock.sentinel.instance_name,
                mock.sentinel.ctrller_path,
                mock.sentinel.slot,
                mock.sentinel.disk_path,
                serial=connection_info['serial'])
        else:
            self._vmutils.attach_drive.assert_called_once_with(
                mock.sentinel.instance_name,
                mock.sentinel.disk_path,
                mock.sentinel.ctrller_path,
                mock.sentinel.slot)

        mock_get_disk_res_path.assert_called_once_with(
            mock.sentinel.raw_path)
        mock_get_disk_ctrl_and_slot.assert_called_once_with(
            mock.sentinel.instance_name, mock.sentinel.disk_bus)

    def test_attach_volume_image_file(self):
        self._test_attach_volume(is_block_dev=False)

    def test_attach_volume_block_dev(self):
        self._test_attach_volume(is_block_dev=True)

    @mock.patch.object(volumeops.BaseVolumeDriver,
                       'get_disk_resource_path')
    def test_detach_volume(self, mock_get_disk_resource_path):
        connection_info = get_fake_connection_info()

        self._base_vol_driver.detach_volume(connection_info,
                                            mock.sentinel.instance_name)

        mock_get_disk_resource_path.assert_called_once_with(
            connection_info)
        self._vmutils.detach_vm_disk.assert_called_once_with(
            mock.sentinel.instance_name,
            mock_get_disk_resource_path.return_value,
            is_physical=self._base_vol_driver._is_block_dev)

    def test_get_disk_ctrl_and_slot_ide(self):
        ctrl, slot = self._base_vol_driver._get_disk_ctrl_and_slot(
            mock.sentinel.instance_name,
            disk_bus=constants.CTRL_TYPE_IDE)

        expected_ctrl = self._vmutils.get_vm_ide_controller.return_value
        expected_slot = 0

        self._vmutils.get_vm_ide_controller.assert_called_once_with(
            mock.sentinel.instance_name, 0)

        self.assertEqual(expected_ctrl, ctrl)
        self.assertEqual(expected_slot, slot)

    def test_get_disk_ctrl_and_slot_scsi(self):
        ctrl, slot = self._base_vol_driver._get_disk_ctrl_and_slot(
            mock.sentinel.instance_name,
            disk_bus=constants.CTRL_TYPE_SCSI)

        expected_ctrl = self._vmutils.get_vm_scsi_controller.return_value
        expected_slot = (
            self._vmutils.get_free_controller_slot.return_value)

        self._vmutils.get_vm_scsi_controller.assert_called_once_with(
            mock.sentinel.instance_name)
        # Asserting the call here; the original bare mock invocation would
        # pass silently without checking anything.
        self._vmutils.get_free_controller_slot.assert_called_once_with(
            self._vmutils.get_vm_scsi_controller.return_value)

        self.assertEqual(expected_ctrl, ctrl)
        self.assertEqual(expected_slot, slot)

    def test_set_disk_qos_specs(self):
        # This base method is a noop, we'll just make sure
        # it doesn't error out.
        self._base_vol_driver.set_disk_qos_specs(
            mock.sentinel.conn_info, mock.sentinel.disk_qos_specs)


class ISCSIVolumeDriverTestCase(test_base.HyperVBaseTestCase):
    """Unit tests for the Hyper-V ISCSIVolumeDriver class."""

    def test_extra_conn_args(self):
        fake_iscsi_initiator = (
            'PCI\\VEN_1077&DEV_2031&SUBSYS_17E8103C&REV_02\\'
            '4&257301f0&0&0010_0')
        self.flags(iscsi_initiator_list=[fake_iscsi_initiator],
                   group='hyperv')
        expected_extra_conn_args = dict(
            initiator_list=[fake_iscsi_initiator])

        vol_driver = volumeops.ISCSIVolumeDriver()

        self.assertEqual(expected_extra_conn_args,
                         vol_driver._extra_connector_args)


class SMBFSVolumeDriverTestCase(test_base.HyperVBaseTestCase):
    """Unit tests for the Hyper-V SMBFSVolumeDriver class."""

    _FAKE_EXPORT_PATH = '//ip/share/'
    _FAKE_CONN_INFO = get_fake_connection_info(export=_FAKE_EXPORT_PATH)

    def setUp(self):
        super(SMBFSVolumeDriverTestCase, self).setUp()
        self._volume_driver = volumeops.SMBFSVolumeDriver()
        self._volume_driver._conn = mock.Mock()
        self._conn = self._volume_driver._conn

    def test_get_export_path(self):
        export_path = self._volume_driver._get_export_path(
            self._FAKE_CONN_INFO)
        expected_path = self._FAKE_EXPORT_PATH.replace('/', '\\')
        self.assertEqual(expected_path, export_path)

    @mock.patch.object(volumeops.BaseVolumeDriver, 'attach_volume')
    def test_attach_volume(self, mock_attach):
        # The tested method will just apply a lock before calling
        # the superclass method.
        self._volume_driver.attach_volume(
            self._FAKE_CONN_INFO,
            mock.sentinel.instance_name,
            disk_bus=mock.sentinel.disk_bus)

        mock_attach.assert_called_once_with(
            self._FAKE_CONN_INFO,
            mock.sentinel.instance_name,
            disk_bus=mock.sentinel.disk_bus)

    @mock.patch.object(volumeops.BaseVolumeDriver, 'detach_volume')
    def test_detach_volume(self, mock_detach):
        self._volume_driver.detach_volume(
            self._FAKE_CONN_INFO,
            instance_name=mock.sentinel.instance_name)

        mock_detach.assert_called_once_with(
            self._FAKE_CONN_INFO,
            instance_name=mock.sentinel.instance_name)

    @mock.patch.object(volumeops.VolumeOps, 'bytes_per_sec_to_iops')
    @mock.patch.object(volumeops.VolumeOps, 'validate_qos_specs')
    @mock.patch.object(volumeops.BaseVolumeDriver, 'get_disk_resource_path')
    def test_set_disk_qos_specs(self, mock_get_disk_path,
                                mock_validate_qos_specs,
                                mock_bytes_per_sec_to_iops):
        fake_total_bytes_sec = 8
        fake_total_iops_sec = 1

        storage_qos_specs = {'total_bytes_sec': fake_total_bytes_sec}
        expected_supported_specs = ['total_iops_sec', 'total_bytes_sec']

        mock_set_qos_specs = self._volume_driver._vmutils.set_disk_qos_specs
        mock_bytes_per_sec_to_iops.return_value = fake_total_iops_sec
        mock_get_disk_path.return_value = mock.sentinel.disk_path

        self._volume_driver.set_disk_qos_specs(self._FAKE_CONN_INFO,
                                               storage_qos_specs)

        mock_validate_qos_specs.assert_called_once_with(
            storage_qos_specs, expected_supported_specs)
        mock_bytes_per_sec_to_iops.assert_called_once_with(
            fake_total_bytes_sec)
        mock_get_disk_path.assert_called_once_with(self._FAKE_CONN_INFO)
        mock_set_qos_specs.assert_called_once_with(
            mock.sentinel.disk_path,
            fake_total_iops_sec)

nova-17.0.1/nova/tests/unit/virt/hyperv/__init__.py

nova-17.0.1/nova/tests/unit/virt/hyperv/test_serialconsoleops.py

# Copyright 2016 Cloudbase Solutions Srl
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from six.moves import builtins from nova import exception from nova.tests.unit.virt.hyperv import test_base from nova.virt.hyperv import serialconsolehandler from nova.virt.hyperv import serialconsoleops class SerialConsoleOpsTestCase(test_base.HyperVBaseTestCase): def setUp(self): super(SerialConsoleOpsTestCase, self).setUp() serialconsoleops._console_handlers = {} self._serialops = serialconsoleops.SerialConsoleOps() self._serialops._pathutils = mock.MagicMock() def _setup_console_handler_mock(self): mock_console_handler = mock.Mock() serialconsoleops._console_handlers = {mock.sentinel.instance_name: mock_console_handler} return mock_console_handler @mock.patch.object(serialconsolehandler, 'SerialConsoleHandler') @mock.patch.object(serialconsoleops.SerialConsoleOps, 'stop_console_handler_unsync') def _test_start_console_handler(self, mock_stop_handler, mock_console_handler, raise_exception=False): mock_handler = mock_console_handler.return_value if raise_exception: mock_handler.start.side_effect = Exception self._serialops.start_console_handler(mock.sentinel.instance_name) mock_stop_handler.assert_called_once_with(mock.sentinel.instance_name) mock_console_handler.assert_called_once_with( mock.sentinel.instance_name) if raise_exception: mock_handler.stop.assert_called_once_with() else: console_handler = serialconsoleops._console_handlers.get( mock.sentinel.instance_name) self.assertEqual(mock_handler, console_handler) def test_start_console_handler(self): self._test_start_console_handler() def test_start_console_handler_exception(self): self._test_start_console_handler(raise_exception=True) def test_stop_console_handler(self): mock_console_handler = self._setup_console_handler_mock() self._serialops.stop_console_handler(mock.sentinel.instance_name) mock_console_handler.stop.assert_called_once_with() handler = serialconsoleops._console_handlers.get( mock.sentinel.instance_name) self.assertIsNone(handler) def test_get_serial_console(self): mock_console_handler = self._setup_console_handler_mock() ret_val = self._serialops.get_serial_console( mock.sentinel.instance_name) self.assertEqual(mock_console_handler.get_serial_console(), ret_val) def test_get_serial_console_exception(self): self.assertRaises(exception.ConsoleTypeUnavailable, self._serialops.get_serial_console, mock.sentinel.instance_name) @mock.patch.object(builtins, 'open') @mock.patch("os.path.exists") def test_get_console_output_exception(self, fake_path_exists, fake_open): self._serialops._pathutils.get_vm_console_log_paths.return_value = [ mock.sentinel.log_path_1, mock.sentinel.log_path_2] fake_open.side_effect = IOError fake_path_exists.return_value = True self.assertRaises(exception.ConsoleLogOutputException, self._serialops.get_console_output, mock.sentinel.instance_name) fake_open.assert_called_once_with(mock.sentinel.log_path_2, 'rb') @mock.patch('os.path.exists') @mock.patch.object(serialconsoleops.SerialConsoleOps, 'start_console_handler') def test_start_console_handlers(self, mock_get_instance_dir, mock_exists): self._serialops._pathutils.get_instance_dir.return_value = [ mock.sentinel.nova_instance_name, 
mock.sentinel.other_instance_name] mock_exists.side_effect = [True, False] self._serialops.start_console_handlers() self._serialops._vmutils.get_active_instances.assert_called_once_with() nova-17.0.1/nova/tests/unit/virt/hyperv/test_pathutils.py0000666000175000017500000002302413250073126023602 0ustar zuulzuul00000000000000# Copyright 2014 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os import time import mock from six.moves import builtins from nova import exception from nova.tests.unit.virt.hyperv import test_base from nova.virt.hyperv import constants from nova.virt.hyperv import pathutils class PathUtilsTestCase(test_base.HyperVBaseTestCase): """Unit tests for the Hyper-V PathUtils class.""" def setUp(self): super(PathUtilsTestCase, self).setUp() self.fake_instance_dir = os.path.join('C:', 'fake_instance_dir') self.fake_instance_name = 'fake_instance_name' self._pathutils = pathutils.PathUtils() def _mock_lookup_configdrive_path(self, ext, rescue=False): self._pathutils.get_instance_dir = mock.MagicMock( return_value=self.fake_instance_dir) def mock_exists(*args, **kwargs): path = args[0] return True if path[(path.rfind('.') + 1):] == ext else False self._pathutils.exists = mock_exists configdrive_path = self._pathutils.lookup_configdrive_path( self.fake_instance_name, rescue) return configdrive_path def _test_lookup_configdrive_path(self, rescue=False): configdrive_name = 'configdrive' if rescue: configdrive_name += '-rescue' for format_ext in constants.DISK_FORMAT_MAP: configdrive_path = self._mock_lookup_configdrive_path(format_ext, rescue) expected_path = os.path.join(self.fake_instance_dir, configdrive_name + '.' + format_ext) self.assertEqual(expected_path, configdrive_path) def test_lookup_configdrive_path(self): self._test_lookup_configdrive_path() def test_lookup_rescue_configdrive_path(self): self._test_lookup_configdrive_path(rescue=True) def test_lookup_configdrive_path_non_exist(self): self._pathutils.get_instance_dir = mock.MagicMock( return_value=self.fake_instance_dir) self._pathutils.exists = mock.MagicMock(return_value=False) configdrive_path = self._pathutils.lookup_configdrive_path( self.fake_instance_name) self.assertIsNone(configdrive_path) def test_get_instances_dir_local(self): self.flags(instances_path=self.fake_instance_dir) instances_dir = self._pathutils.get_instances_dir() self.assertEqual(self.fake_instance_dir, instances_dir) def test_get_instances_dir_remote_instance_share(self): # The Hyper-V driver allows using a pre-configured share exporting # the instances dir. The same share name should be used across nodes. 
fake_instances_dir_share = 'fake_instances_dir_share' fake_remote = 'fake_remote' expected_instance_dir = r'\\%s\%s' % (fake_remote, fake_instances_dir_share) self.flags(instances_path_share=fake_instances_dir_share, group='hyperv') instances_dir = self._pathutils.get_instances_dir( remote_server=fake_remote) self.assertEqual(expected_instance_dir, instances_dir) def test_get_instances_dir_administrative_share(self): self.flags(instances_path=r'C:\fake_instance_dir') fake_remote = 'fake_remote' expected_instance_dir = r'\\fake_remote\C$\fake_instance_dir' instances_dir = self._pathutils.get_instances_dir( remote_server=fake_remote) self.assertEqual(expected_instance_dir, instances_dir) def test_get_instances_dir_unc_path(self): fake_instance_dir = r'\\fake_addr\fake_share\fake_instance_dir' self.flags(instances_path=fake_instance_dir) fake_remote = 'fake_remote' instances_dir = self._pathutils.get_instances_dir( remote_server=fake_remote) self.assertEqual(fake_instance_dir, instances_dir) @mock.patch('os.path.join') def test_get_instances_sub_dir(self, fake_path_join): class WindowsError(Exception): def __init__(self, winerror=None): self.winerror = winerror fake_dir_name = "fake_dir_name" fake_windows_error = WindowsError self._pathutils.check_create_dir = mock.MagicMock( side_effect=WindowsError(pathutils.ERROR_INVALID_NAME)) with mock.patch.object(builtins, 'WindowsError', fake_windows_error, create=True): self.assertRaises(exception.AdminRequired, self._pathutils._get_instances_sub_dir, fake_dir_name) def test_copy_vm_console_logs(self): fake_local_logs = [mock.sentinel.log_path, mock.sentinel.archived_log_path] fake_remote_logs = [mock.sentinel.remote_log_path, mock.sentinel.remote_archived_log_path] self._pathutils.exists = mock.Mock(return_value=True) self._pathutils.copy = mock.Mock() self._pathutils.get_vm_console_log_paths = mock.Mock( side_effect=[fake_local_logs, fake_remote_logs]) self._pathutils.copy_vm_console_logs(mock.sentinel.instance_name, mock.sentinel.dest_host) self._pathutils.get_vm_console_log_paths.assert_has_calls( [mock.call(mock.sentinel.instance_name), mock.call(mock.sentinel.instance_name, remote_server=mock.sentinel.dest_host)]) self._pathutils.copy.assert_has_calls([ mock.call(mock.sentinel.log_path, mock.sentinel.remote_log_path), mock.call(mock.sentinel.archived_log_path, mock.sentinel.remote_archived_log_path)]) @mock.patch.object(pathutils.PathUtils, 'get_base_vhd_dir') @mock.patch.object(pathutils.PathUtils, 'exists') def test_get_image_path(self, mock_exists, mock_get_base_vhd_dir): fake_image_name = 'fake_image_name' mock_exists.side_effect = [True, False] mock_get_base_vhd_dir.return_value = 'fake_base_dir' res = self._pathutils.get_image_path(fake_image_name) mock_get_base_vhd_dir.assert_called_once_with() self.assertEqual(res, os.path.join('fake_base_dir', 'fake_image_name.vhd')) @mock.patch('os.path.getmtime') @mock.patch.object(pathutils, 'time') def test_get_age_of_file(self, mock_time, mock_getmtime): mock_time.time.return_value = time.time() mock_getmtime.return_value = mock_time.time.return_value - 42 actual_age = self._pathutils.get_age_of_file(mock.sentinel.filename) self.assertEqual(42, actual_age) mock_time.time.assert_called_once_with() mock_getmtime.assert_called_once_with(mock.sentinel.filename) @mock.patch('os.path.exists') @mock.patch('tempfile.NamedTemporaryFile') def test_check_dirs_shared_storage(self, mock_named_tempfile, mock_exists): fake_src_dir = 'fake_src_dir' fake_dest_dir = 'fake_dest_dir' mock_exists.return_value = True 
mock_tmpfile = mock_named_tempfile.return_value.__enter__.return_value mock_tmpfile.name = 'fake_tmp_fname' expected_src_tmp_path = os.path.join(fake_src_dir, mock_tmpfile.name) self._pathutils.check_dirs_shared_storage( fake_src_dir, fake_dest_dir) mock_named_tempfile.assert_called_once_with(dir=fake_dest_dir) mock_exists.assert_called_once_with(expected_src_tmp_path) @mock.patch('os.path.exists') @mock.patch('tempfile.NamedTemporaryFile') def test_check_dirs_shared_storage_exception(self, mock_named_tempfile, mock_exists): fake_src_dir = 'fake_src_dir' fake_dest_dir = 'fake_dest_dir' mock_exists.return_value = True mock_named_tempfile.side_effect = OSError('not exist') self.assertRaises(exception.FileNotFound, self._pathutils.check_dirs_shared_storage, fake_src_dir, fake_dest_dir) @mock.patch.object(pathutils.PathUtils, 'check_dirs_shared_storage') @mock.patch.object(pathutils.PathUtils, 'get_instances_dir') def test_check_remote_instances_shared(self, mock_get_instances_dir, mock_check_dirs_shared_storage): mock_get_instances_dir.side_effect = [mock.sentinel.local_inst_dir, mock.sentinel.remote_inst_dir] shared_storage = self._pathutils.check_remote_instances_dir_shared( mock.sentinel.dest) self.assertEqual(mock_check_dirs_shared_storage.return_value, shared_storage) mock_get_instances_dir.assert_has_calls( [mock.call(), mock.call(mock.sentinel.dest)]) mock_check_dirs_shared_storage.assert_called_once_with( mock.sentinel.local_inst_dir, mock.sentinel.remote_inst_dir) nova-17.0.1/nova/tests/unit/virt/hyperv/test_snapshotops.py0000666000175000017500000001334113250073126024147 0ustar zuulzuul00000000000000# Copyright 2014 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
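# An illustrative aside (not part of the nova tree): the assertions in the
# tests below depend on using mock's real assert helpers. Because Mock
# auto-creates attributes on access, a misspelling such as 'has_calls' is
# silently accepted and verifies nothing, while 'assert_has_calls' actually
# checks the recorded calls. A minimal, self-contained sketch:
#
#     import mock
#
#     m = mock.Mock()
#     m.copyfile('src', 'dst')
#
#     # Real assertion: raises AssertionError if the call is missing.
#     m.copyfile.assert_has_calls([mock.call('src', 'dst')])
#
#     # Silent no-op: 'has_calls' is just another auto-created mock
#     # attribute, so this "passes" even for calls that never happened.
#     m.copyfile.has_calls([mock.call('never', 'happened')])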
import os

import mock

from nova.compute import task_states
from nova.tests.unit import fake_instance
from nova.tests.unit.virt.hyperv import test_base
from nova.virt.hyperv import snapshotops


class SnapshotOpsTestCase(test_base.HyperVBaseTestCase):
    """Unit tests for the Hyper-V SnapshotOps class."""

    def setUp(self):
        super(SnapshotOpsTestCase, self).setUp()
        self.context = 'fake_context'
        self._snapshotops = snapshotops.SnapshotOps()
        self._snapshotops._pathutils = mock.MagicMock()
        self._snapshotops._vmutils = mock.MagicMock()
        self._snapshotops._vhdutils = mock.MagicMock()

    @mock.patch('nova.image.glance.get_remote_image_service')
    def test_save_glance_image(self, mock_get_remote_image_service):
        image_metadata = {"is_public": False,
                          "disk_format": "vhd",
                          "container_format": "bare"}
        glance_image_service = mock.MagicMock()
        mock_get_remote_image_service.return_value = (glance_image_service,
                                                      mock.sentinel.IMAGE_ID)

        self._snapshotops._save_glance_image(
            context=self.context, image_id=mock.sentinel.IMAGE_ID,
            image_vhd_path=mock.sentinel.PATH)

        mock_get_remote_image_service.assert_called_once_with(
            self.context, mock.sentinel.IMAGE_ID)
        self._snapshotops._pathutils.open.assert_called_with(
            mock.sentinel.PATH, 'rb')
        glance_image_service.update.assert_called_once_with(
            self.context, mock.sentinel.IMAGE_ID, image_metadata,
            self._snapshotops._pathutils.open().__enter__(),
            purge_props=False)

    @mock.patch('nova.virt.hyperv.snapshotops.SnapshotOps._save_glance_image')
    def _test_snapshot(self, mock_save_glance_image, base_disk_path):
        mock_instance = fake_instance.fake_instance_obj(self.context)
        mock_update = mock.MagicMock()
        fake_src_path = os.path.join('fake', 'path')
        self._snapshotops._pathutils.lookup_root_vhd_path.return_value = (
            fake_src_path)
        fake_exp_dir = os.path.join(os.path.join('fake', 'exp'), 'dir')
        self._snapshotops._pathutils.get_export_dir.return_value = (
            fake_exp_dir)
        self._snapshotops._vhdutils.get_vhd_parent_path.return_value = (
            base_disk_path)
        fake_snapshot_path = (
            self._snapshotops._vmutils.take_vm_snapshot.return_value)

        self._snapshotops.snapshot(context=self.context,
                                   instance=mock_instance,
                                   image_id=mock.sentinel.IMAGE_ID,
                                   update_task_state=mock_update)

        self._snapshotops._vmutils.take_vm_snapshot.assert_called_once_with(
            mock_instance.name)
        mock_lookup_path = self._snapshotops._pathutils.lookup_root_vhd_path
        mock_lookup_path.assert_called_once_with(mock_instance.name)
        mock_get_vhd_path = self._snapshotops._vhdutils.get_vhd_parent_path
        mock_get_vhd_path.assert_called_once_with(fake_src_path)
        self._snapshotops._pathutils.get_export_dir.assert_called_once_with(
            mock_instance.name)

        expected = [mock.call(fake_src_path,
                              os.path.join(fake_exp_dir,
                                           os.path.basename(fake_src_path)))]
        dest_vhd_path = os.path.join(fake_exp_dir,
                                     os.path.basename(fake_src_path))
        if base_disk_path:
            basename = os.path.basename(base_disk_path)
            base_dest_disk_path = os.path.join(fake_exp_dir, basename)
            expected.append(mock.call(base_disk_path, base_dest_disk_path))
            mock_reconnect = self._snapshotops._vhdutils.reconnect_parent_vhd
            mock_reconnect.assert_called_once_with(dest_vhd_path,
                                                   base_dest_disk_path)
            self._snapshotops._vhdutils.merge_vhd.assert_called_once_with(
                dest_vhd_path)
            mock_save_glance_image.assert_called_once_with(
                self.context, mock.sentinel.IMAGE_ID, base_dest_disk_path)
        else:
            mock_save_glance_image.assert_called_once_with(
                self.context, mock.sentinel.IMAGE_ID, dest_vhd_path)
        # 'has_calls' would pass silently; assert_has_calls is the real check.
        self._snapshotops._pathutils.copyfile.assert_has_calls(expected)
        expected_update = [
            mock.call(task_state=task_states.IMAGE_PENDING_UPLOAD),
            mock.call(task_state=task_states.IMAGE_UPLOADING,
                      expected_state=task_states.IMAGE_PENDING_UPLOAD)]
        mock_update.assert_has_calls(expected_update)
        self._snapshotops._vmutils.remove_vm_snapshot.assert_called_once_with(
            fake_snapshot_path)
        self._snapshotops._pathutils.rmtree.assert_called_once_with(
            fake_exp_dir)

    def test_snapshot(self):
        base_disk_path = os.path.join('fake', 'disk')
        self._test_snapshot(base_disk_path=base_disk_path)

    def test_snapshot_no_base_disk(self):
        self._test_snapshot(base_disk_path=None)
nova-17.0.1/nova/tests/unit/virt/hyperv/test_rdpconsoleops.py0000666000175000017500000000335613250073126024465 0ustar zuulzuul00000000000000
# Copyright 2015 Cloudbase Solutions SRL
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

"""
Unit tests for the Hyper-V RDPConsoleOps.
"""

import mock

from nova.tests.unit.virt.hyperv import test_base
from nova.virt.hyperv import rdpconsoleops


class RDPConsoleOpsTestCase(test_base.HyperVBaseTestCase):

    def setUp(self):
        super(RDPConsoleOpsTestCase, self).setUp()
        self.rdpconsoleops = rdpconsoleops.RDPConsoleOps()
        self.rdpconsoleops._hostops = mock.MagicMock()
        self.rdpconsoleops._vmutils = mock.MagicMock()
        self.rdpconsoleops._rdpconsoleutils = mock.MagicMock()

    def test_get_rdp_console(self):
        mock_get_host_ip = self.rdpconsoleops._hostops.get_host_ip_addr
        mock_get_rdp_port = (
            self.rdpconsoleops._rdpconsoleutils.get_rdp_console_port)
        mock_get_vm_id = self.rdpconsoleops._vmutils.get_vm_id

        connect_info = self.rdpconsoleops.get_rdp_console(mock.DEFAULT)

        self.assertEqual(mock_get_host_ip.return_value, connect_info.host)
        self.assertEqual(mock_get_rdp_port.return_value, connect_info.port)
        self.assertEqual(mock_get_vm_id.return_value,
                         connect_info.internal_access_path)
nova-17.0.1/nova/tests/unit/virt/hyperv/test_vif.py0000666000175000017500000001142413250073126022352 0ustar zuulzuul00000000000000
# Copyright 2015 Cloudbase Solutions Srl
#
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import mock import nova.conf from nova import exception from nova.network import model from nova.tests.unit.virt.hyperv import test_base from nova.virt.hyperv import vif CONF = nova.conf.CONF class HyperVNovaNetworkVIFPluginTestCase(test_base.HyperVBaseTestCase): def setUp(self): super(HyperVNovaNetworkVIFPluginTestCase, self).setUp() self.vif_driver = vif.HyperVNovaNetworkVIFPlugin() def test_plug(self): self.flags(vswitch_name='fake_vswitch_name', group='hyperv') fake_vif = {'id': mock.sentinel.fake_id} self.vif_driver.plug(mock.sentinel.instance, fake_vif) netutils = self.vif_driver._netutils netutils.connect_vnic_to_vswitch.assert_called_once_with( 'fake_vswitch_name', mock.sentinel.fake_id) class HyperVVIFDriverTestCase(test_base.HyperVBaseTestCase): def setUp(self): super(HyperVVIFDriverTestCase, self).setUp() self.vif_driver = vif.HyperVVIFDriver() self.vif_driver._netutils = mock.MagicMock() self.vif_driver._vif_plugin = mock.MagicMock() @mock.patch.object(vif.nova.network, 'is_neutron') def test_init_neutron(self, mock_is_neutron): mock_is_neutron.return_value = True driver = vif.HyperVVIFDriver() self.assertIsInstance(driver._vif_plugin, vif.HyperVNeutronVIFPlugin) @mock.patch.object(vif.nova.network, 'is_neutron') def test_init_nova(self, mock_is_neutron): mock_is_neutron.return_value = False driver = vif.HyperVVIFDriver() self.assertIsInstance(driver._vif_plugin, vif.HyperVNovaNetworkVIFPlugin) def test_plug(self): vif = {'type': model.VIF_TYPE_HYPERV} self.vif_driver.plug(mock.sentinel.instance, vif) self.vif_driver._vif_plugin.plug.assert_called_once_with( mock.sentinel.instance, vif) @mock.patch.object(vif, 'os_vif') @mock.patch.object(vif.os_vif_util, 'nova_to_osvif_instance') @mock.patch.object(vif.os_vif_util, 'nova_to_osvif_vif') def test_plug_ovs(self, mock_nova_to_osvif_vif, mock_nova_to_osvif_instance, mock_os_vif): vif = {'type': model.VIF_TYPE_OVS} self.vif_driver.plug(mock.sentinel.instance, vif) mock_nova_to_osvif_vif.assert_called_once_with(vif) mock_nova_to_osvif_instance.assert_called_once_with( mock.sentinel.instance) connect_vnic = self.vif_driver._netutils.connect_vnic_to_vswitch connect_vnic.assert_called_once_with( CONF.hyperv.vswitch_name, mock_nova_to_osvif_vif.return_value.id) mock_os_vif.plug.assert_called_once_with( mock_nova_to_osvif_vif.return_value, mock_nova_to_osvif_instance.return_value) def test_plug_type_unknown(self): vif = {'type': mock.sentinel.vif_type} self.assertRaises(exception.VirtualInterfacePlugException, self.vif_driver.plug, mock.sentinel.instance, vif) def test_unplug(self): vif = {'type': model.VIF_TYPE_HYPERV} self.vif_driver.unplug(mock.sentinel.instance, vif) self.vif_driver._vif_plugin.unplug.assert_called_once_with( mock.sentinel.instance, vif) @mock.patch.object(vif, 'os_vif') @mock.patch.object(vif.os_vif_util, 'nova_to_osvif_instance') @mock.patch.object(vif.os_vif_util, 'nova_to_osvif_vif') def test_unplug_ovs(self, mock_nova_to_osvif_vif, mock_nova_to_osvif_instance, mock_os_vif): vif = {'type': model.VIF_TYPE_OVS} self.vif_driver.unplug(mock.sentinel.instance, vif) mock_nova_to_osvif_vif.assert_called_once_with(vif) mock_nova_to_osvif_instance.assert_called_once_with( mock.sentinel.instance) mock_os_vif.unplug.assert_called_once_with( mock_nova_to_osvif_vif.return_value, mock_nova_to_osvif_instance.return_value) def test_unplug_type_unknown(self): vif = {'type': mock.sentinel.vif_type} self.assertRaises(exception.VirtualInterfaceUnplugException, self.vif_driver.unplug, mock.sentinel.instance, vif) 
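# A note on the stacked @mock.patch decorators used throughout these
# Hyper-V tests: decorators apply bottom-up, so the mock created by the
# decorator closest to the method arrives as the first mock argument.
# A minimal sketch (the names here are illustrative, not from nova):
#
#     import mock
#
#     class Calc(object):
#         def add(self):
#             pass
#         def sub(self):
#             pass
#
#     @mock.patch.object(Calc, 'add')
#     @mock.patch.object(Calc, 'sub')
#     def show(mock_sub, mock_add):
#         # The bottom decorator ('sub') supplies the first argument.
#         print(mock_sub, mock_add)
#
#     show()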
nova-17.0.1/nova/tests/unit/virt/hyperv/test_serialproxy.py0000666000175000017500000001130613250073126024146 0ustar zuulzuul00000000000000
# Copyright 2016 Cloudbase Solutions Srl
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import socket

import mock

from nova import exception
from nova.tests.unit.virt.hyperv import test_base
from nova.virt.hyperv import serialproxy


class SerialProxyTestCase(test_base.HyperVBaseTestCase):

    @mock.patch.object(socket, 'socket')
    def setUp(self, mock_socket):
        super(SerialProxyTestCase, self).setUp()

        self._mock_socket = mock_socket
        self._mock_input_queue = mock.Mock()
        self._mock_output_queue = mock.Mock()
        self._mock_client_connected = mock.Mock()

        threading_patcher = mock.patch.object(serialproxy, 'threading')
        threading_patcher.start()
        self.addCleanup(threading_patcher.stop)

        self._proxy = serialproxy.SerialProxy(
            mock.sentinel.instance_name,
            mock.sentinel.host,
            mock.sentinel.port,
            self._mock_input_queue,
            self._mock_output_queue,
            self._mock_client_connected)

    @mock.patch.object(socket, 'socket')
    def test_setup_socket_exception(self, mock_socket):
        fake_socket = mock_socket.return_value
        fake_socket.listen.side_effect = socket.error

        self.assertRaises(exception.NovaException,
                          self._proxy._setup_socket)

        fake_socket.setsockopt.assert_called_once_with(socket.SOL_SOCKET,
                                                       socket.SO_REUSEADDR,
                                                       1)
        fake_socket.bind.assert_called_once_with((mock.sentinel.host,
                                                  mock.sentinel.port))

    def test_stop_serial_proxy(self):
        self._proxy._conn = mock.Mock()
        self._proxy._sock = mock.Mock()

        self._proxy.stop()

        self._proxy._stopped.set.assert_called_once_with()
        self._proxy._client_connected.clear.assert_called_once_with()
        self._proxy._conn.shutdown.assert_called_once_with(socket.SHUT_RDWR)
        self._proxy._conn.close.assert_called_once_with()
        self._proxy._sock.close.assert_called_once_with()

    @mock.patch.object(serialproxy.SerialProxy, '_accept_conn')
    @mock.patch.object(serialproxy.SerialProxy, '_setup_socket')
    def test_run(self, mock_setup_socket, mock_accept_con):
        self._proxy._stopped = mock.MagicMock()
        self._proxy._stopped.isSet.side_effect = [False, True]

        self._proxy.run()

        mock_setup_socket.assert_called_once_with()
        mock_accept_con.assert_called_once_with()

    def test_accept_connection(self):
        mock_conn = mock.Mock()
        self._proxy._sock = mock.Mock()
        self._proxy._sock.accept.return_value = [
            mock_conn, (mock.sentinel.client_addr,
                        mock.sentinel.client_port)]

        self._proxy._accept_conn()

        self._proxy._client_connected.set.assert_called_once_with()
        mock_conn.close.assert_called_once_with()
        self.assertIsNone(self._proxy._conn)

        thread = serialproxy.threading.Thread
        for job in [self._proxy._get_data,
                    self._proxy._send_data]:
            thread.assert_any_call(target=job)

    def test_get_data(self):
        self._mock_client_connected.isSet.return_value = True
        self._proxy._conn = mock.Mock()
        self._proxy._conn.recv.side_effect = [mock.sentinel.data, None]

        self._proxy._get_data()

        self._mock_client_connected.clear.assert_called_once_with()
        self._mock_input_queue.put.assert_called_once_with(mock.sentinel.data)

    def _test_send_data(self, exception=None):
        self._mock_client_connected.isSet.side_effect = [True, False]
        self._mock_output_queue.get_burst.return_value = mock.sentinel.data
        self._proxy._conn = mock.Mock()
        self._proxy._conn.sendall.side_effect = exception

        self._proxy._send_data()

        self._proxy._conn.sendall.assert_called_once_with(
            mock.sentinel.data)
        if exception:
            self._proxy._client_connected.clear.assert_called_once_with()

    def test_send_data(self):
        self._test_send_data()

    def test_send_data_exception(self):
        self._test_send_data(exception=socket.error)
nova-17.0.1/nova/tests/unit/virt/hyperv/test_serialconsolehandler.py0000666000175000017500000002426313250073126025773 0ustar zuulzuul00000000000000
# Copyright 2016 Cloudbase Solutions Srl
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock

from nova import exception
from nova.tests.unit.virt.hyperv import test_base
from nova.virt.hyperv import constants
from nova.virt.hyperv import pathutils
from nova.virt.hyperv import serialconsolehandler
from nova.virt.hyperv import serialproxy


class SerialConsoleHandlerTestCase(test_base.HyperVBaseTestCase):

    @mock.patch.object(pathutils.PathUtils, 'get_vm_console_log_paths')
    def setUp(self, mock_get_log_paths):
        super(SerialConsoleHandlerTestCase, self).setUp()

        mock_get_log_paths.return_value = [mock.sentinel.log_path]

        self._consolehandler = serialconsolehandler.SerialConsoleHandler(
            mock.sentinel.instance_name)
        self._consolehandler._log_path = mock.sentinel.log_path
        self._consolehandler._pathutils = mock.Mock()
        self._consolehandler._vmutils = mock.Mock()

    @mock.patch.object(serialconsolehandler.SerialConsoleHandler,
                       '_setup_handlers')
    def test_start(self, mock_setup_handlers):
        mock_workers = [mock.Mock(), mock.Mock()]
        self._consolehandler._workers = mock_workers

        self._consolehandler.start()

        mock_setup_handlers.assert_called_once_with()
        for worker in mock_workers:
            worker.start.assert_called_once_with()

    @mock.patch('nova.console.serial.release_port')
    def test_stop(self, mock_release_port):
        mock_serial_proxy = mock.Mock()
        mock_workers = [mock_serial_proxy, mock.Mock()]

        self._consolehandler._serial_proxy = mock_serial_proxy
        self._consolehandler._listen_host = mock.sentinel.host
        self._consolehandler._listen_port = mock.sentinel.port
        self._consolehandler._workers = mock_workers

        self._consolehandler.stop()

        mock_release_port.assert_called_once_with(mock.sentinel.host,
                                                  mock.sentinel.port)
        for worker in mock_workers:
            worker.stop.assert_called_once_with()

    @mock.patch.object(serialconsolehandler.SerialConsoleHandler,
                       '_setup_named_pipe_handlers')
    @mock.patch.object(serialconsolehandler.SerialConsoleHandler,
                       '_setup_serial_proxy_handler')
    def _test_setup_handlers(self, mock_setup_proxy,
                             mock_setup_pipe_handlers,
                             serial_console_enabled=True):
        self.flags(enabled=serial_console_enabled, group='serial_console')

        self._consolehandler._setup_handlers()

        self.assertEqual(serial_console_enabled, mock_setup_proxy.called)
        mock_setup_pipe_handlers.assert_called_once_with()

    def test_setup_handlers(self):
self._test_setup_handlers() def test_setup_handlers_console_disabled(self): self._test_setup_handlers(serial_console_enabled=False) @mock.patch.object(serialproxy, 'SerialProxy') @mock.patch('nova.console.serial.acquire_port') @mock.patch.object(serialconsolehandler.threading, 'Event') @mock.patch.object(serialconsolehandler.ioutils, 'IOQueue') def test_setup_serial_proxy_handler(self, mock_io_queue, mock_event, mock_acquire_port, mock_serial_proxy_class): mock_input_queue = mock.sentinel.input_queue mock_output_queue = mock.sentinel.output_queue mock_client_connected = mock_event.return_value mock_io_queue.side_effect = [mock_input_queue, mock_output_queue] mock_serial_proxy = mock_serial_proxy_class.return_value mock_acquire_port.return_value = mock.sentinel.port self.flags(proxyclient_address='127.0.0.3', group='serial_console') self._consolehandler._setup_serial_proxy_handler() mock_serial_proxy_class.assert_called_once_with( mock.sentinel.instance_name, '127.0.0.3', mock.sentinel.port, mock_input_queue, mock_output_queue, mock_client_connected) self.assertIn(mock_serial_proxy, self._consolehandler._workers) @mock.patch.object(serialconsolehandler.SerialConsoleHandler, '_get_named_pipe_handler') @mock.patch.object(serialconsolehandler.SerialConsoleHandler, '_get_vm_serial_port_mapping') def _mock_setup_named_pipe_handlers(self, mock_get_port_mapping, mock_get_pipe_handler, serial_port_mapping=None): mock_get_port_mapping.return_value = serial_port_mapping self._consolehandler._setup_named_pipe_handlers() expected_workers = [mock_get_pipe_handler.return_value for port in serial_port_mapping] self.assertEqual(expected_workers, self._consolehandler._workers) return mock_get_pipe_handler def test_setup_ro_pipe_handler(self): serial_port_mapping = { constants.SERIAL_PORT_TYPE_RW: mock.sentinel.pipe_path } mock_get_handler = self._mock_setup_named_pipe_handlers( serial_port_mapping=serial_port_mapping) mock_get_handler.assert_called_once_with( mock.sentinel.pipe_path, pipe_type=constants.SERIAL_PORT_TYPE_RW, enable_logging=True) def test_setup_pipe_handlers(self): serial_port_mapping = { constants.SERIAL_PORT_TYPE_RO: mock.sentinel.ro_pipe_path, constants.SERIAL_PORT_TYPE_RW: mock.sentinel.rw_pipe_path } mock_get_handler = self._mock_setup_named_pipe_handlers( serial_port_mapping=serial_port_mapping) expected_calls = [mock.call(mock.sentinel.ro_pipe_path, pipe_type=constants.SERIAL_PORT_TYPE_RO, enable_logging=True), mock.call(mock.sentinel.rw_pipe_path, pipe_type=constants.SERIAL_PORT_TYPE_RW, enable_logging=False)] mock_get_handler.assert_has_calls(expected_calls, any_order=True) @mock.patch.object(serialconsolehandler.utilsfactory, 'get_named_pipe_handler') def _test_get_named_pipe_handler(self, mock_get_pipe_handler, pipe_type=None, enable_logging=False): expected_args = {} if pipe_type == constants.SERIAL_PORT_TYPE_RW: self._consolehandler._input_queue = mock.sentinel.input_queue self._consolehandler._output_queue = mock.sentinel.output_queue self._consolehandler._client_connected = ( mock.sentinel.connect_event) expected_args.update({ 'input_queue': mock.sentinel.input_queue, 'output_queue': mock.sentinel.output_queue, 'connect_event': mock.sentinel.connect_event}) if enable_logging: expected_args['log_file'] = mock.sentinel.log_path ret_val = self._consolehandler._get_named_pipe_handler( mock.sentinel.pipe_path, pipe_type, enable_logging) self.assertEqual(mock_get_pipe_handler.return_value, ret_val) mock_get_pipe_handler.assert_called_once_with( mock.sentinel.pipe_path, 
**expected_args) def test_get_ro_named_pipe_handler(self): self._test_get_named_pipe_handler( pipe_type=constants.SERIAL_PORT_TYPE_RO, enable_logging=True) def test_get_rw_named_pipe_handler(self): self._test_get_named_pipe_handler( pipe_type=constants.SERIAL_PORT_TYPE_RW, enable_logging=False) def _mock_get_port_connections(self, port_connections): get_port_connections = ( self._consolehandler._vmutils.get_vm_serial_port_connections) get_port_connections.return_value = port_connections def test_get_vm_serial_port_mapping_having_tagged_pipes(self): ro_pipe_path = 'fake_pipe_ro' rw_pipe_path = 'fake_pipe_rw' self._mock_get_port_connections([ro_pipe_path, rw_pipe_path]) ret_val = self._consolehandler._get_vm_serial_port_mapping() expected_mapping = { constants.SERIAL_PORT_TYPE_RO: ro_pipe_path, constants.SERIAL_PORT_TYPE_RW: rw_pipe_path } self.assertEqual(expected_mapping, ret_val) def test_get_vm_serial_port_mapping_untagged_pipe(self): pipe_path = 'fake_pipe_path' self._mock_get_port_connections([pipe_path]) ret_val = self._consolehandler._get_vm_serial_port_mapping() expected_mapping = {constants.SERIAL_PORT_TYPE_RW: pipe_path} self.assertEqual(expected_mapping, ret_val) def test_get_vm_serial_port_mapping_exception(self): self._mock_get_port_connections([]) self.assertRaises(exception.NovaException, self._consolehandler._get_vm_serial_port_mapping) @mock.patch('nova.console.type.ConsoleSerial') def test_get_serial_console(self, mock_serial_console): self.flags(enabled=True, group='serial_console') self._consolehandler._listen_host = mock.sentinel.host self._consolehandler._listen_port = mock.sentinel.port ret_val = self._consolehandler.get_serial_console() self.assertEqual(mock_serial_console.return_value, ret_val) mock_serial_console.assert_called_once_with( host=mock.sentinel.host, port=mock.sentinel.port) def test_get_serial_console_disabled(self): self.flags(enabled=False, group='serial_console') self.assertRaises(exception.ConsoleTypeUnavailable, self._consolehandler.get_serial_console) nova-17.0.1/nova/tests/unit/virt/hyperv/test_hostops.py0000666000175000017500000002763013250073126023273 0ustar zuulzuul00000000000000# Copyright 2014 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
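# The host stats tests below convert between KB, MB and GB using the
# constants from oslo_utils.units. A quick commented sketch of the
# relationships they rely on:
#
#     from oslo_utils import units
#
#     assert units.Ki == 1024
#     assert units.Gi == 1024 ** 3
#     # e.g. 2 GB of capacity reported in bytes becomes 2 once divided:
#     assert (2 * units.Gi) // units.Gi == 2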
import datetime import mock from os_win import constants as os_win_const from oslo_config import cfg from oslo_serialization import jsonutils from oslo_utils import units from nova.objects import fields as obj_fields from nova.tests.unit.virt.hyperv import test_base from nova.virt.hyperv import constants from nova.virt.hyperv import hostops CONF = cfg.CONF class HostOpsTestCase(test_base.HyperVBaseTestCase): """Unit tests for the Hyper-V HostOps class.""" FAKE_ARCHITECTURE = 0 FAKE_NAME = 'fake_name' FAKE_MANUFACTURER = 'FAKE_MANUFACTURER' FAKE_NUM_CPUS = 1 FAKE_INSTANCE_DIR = "C:/fake/dir" FAKE_LOCAL_IP = '10.11.12.13' FAKE_TICK_COUNT = 1000000 def setUp(self): super(HostOpsTestCase, self).setUp() self._hostops = hostops.HostOps() self._hostops._hostutils = mock.MagicMock() self._hostops._pathutils = mock.MagicMock() self._hostops._diskutils = mock.MagicMock() def test_get_cpu_info(self): mock_processors = mock.MagicMock() info = {'Architecture': self.FAKE_ARCHITECTURE, 'Name': self.FAKE_NAME, 'Manufacturer': self.FAKE_MANUFACTURER, 'NumberOfCores': self.FAKE_NUM_CPUS, 'NumberOfLogicalProcessors': self.FAKE_NUM_CPUS} def getitem(key): return info[key] mock_processors.__getitem__.side_effect = getitem self._hostops._hostutils.get_cpus_info.return_value = [mock_processors] response = self._hostops._get_cpu_info() self._hostops._hostutils.get_cpus_info.assert_called_once_with() expected = [mock.call(fkey) for fkey in os_win_const.PROCESSOR_FEATURE.keys()] self._hostops._hostutils.is_cpu_feature_present.has_calls(expected) expected_response = self._get_mock_cpu_info() self.assertEqual(expected_response, response) def _get_mock_cpu_info(self): return {'vendor': self.FAKE_MANUFACTURER, 'model': self.FAKE_NAME, 'arch': constants.WMI_WIN32_PROCESSOR_ARCHITECTURE[ self.FAKE_ARCHITECTURE], 'features': list(os_win_const.PROCESSOR_FEATURE.values()), 'topology': {'cores': self.FAKE_NUM_CPUS, 'threads': self.FAKE_NUM_CPUS, 'sockets': self.FAKE_NUM_CPUS}} def _get_mock_gpu_info(self): return {'remotefx_total_video_ram': 4096, 'remotefx_available_video_ram': 2048, 'remotefx_gpu_info': mock.sentinel.FAKE_GPU_INFO} def test_get_memory_info(self): self._hostops._hostutils.get_memory_info.return_value = (2 * units.Ki, 1 * units.Ki) response = self._hostops._get_memory_info() self._hostops._hostutils.get_memory_info.assert_called_once_with() self.assertEqual((2, 1, 1), response) def test_get_storage_info_gb(self): self._hostops._pathutils.get_instances_dir.return_value = '' self._hostops._diskutils.get_disk_capacity.return_value = ( 2 * units.Gi, 1 * units.Gi) response = self._hostops._get_storage_info_gb() self._hostops._pathutils.get_instances_dir.assert_called_once_with() self._hostops._diskutils.get_disk_capacity.assert_called_once_with('') self.assertEqual((2, 1, 1), response) def test_get_hypervisor_version(self): self._hostops._hostutils.get_windows_version.return_value = '6.3.9600' response_lower = self._hostops._get_hypervisor_version() self._hostops._hostutils.get_windows_version.return_value = '10.1.0' response_higher = self._hostops._get_hypervisor_version() self.assertEqual(6003, response_lower) self.assertEqual(10001, response_higher) def test_get_remotefx_gpu_info(self): self.flags(enable_remotefx=True, group='hyperv') fake_gpus = [{'total_video_ram': '2048', 'available_video_ram': '1024'}, {'total_video_ram': '1024', 'available_video_ram': '1024'}] self._hostops._hostutils.get_remotefx_gpu_info.return_value = fake_gpus ret_val = self._hostops._get_remotefx_gpu_info() self.assertEqual(3072, 
ret_val['total_video_ram']) self.assertEqual(1024, ret_val['used_video_ram']) def test_get_remotefx_gpu_info_disabled(self): self.flags(enable_remotefx=False, group='hyperv') ret_val = self._hostops._get_remotefx_gpu_info() self.assertEqual(0, ret_val['total_video_ram']) self.assertEqual(0, ret_val['used_video_ram']) self._hostops._hostutils.get_remotefx_gpu_info.assert_not_called() @mock.patch.object(hostops.objects, 'NUMACell') @mock.patch.object(hostops.objects, 'NUMATopology') def test_get_host_numa_topology(self, mock_NUMATopology, mock_NUMACell): numa_node = {'id': mock.sentinel.id, 'memory': mock.sentinel.memory, 'memory_usage': mock.sentinel.memory_usage, 'cpuset': mock.sentinel.cpuset, 'cpu_usage': mock.sentinel.cpu_usage} self._hostops._hostutils.get_numa_nodes.return_value = [ numa_node.copy()] result = self._hostops._get_host_numa_topology() self.assertEqual(mock_NUMATopology.return_value, result) mock_NUMACell.assert_called_once_with( pinned_cpus=set([]), mempages=[], siblings=[], **numa_node) mock_NUMATopology.assert_called_once_with( cells=[mock_NUMACell.return_value]) @mock.patch.object(hostops.HostOps, '_get_pci_passthrough_devices') @mock.patch.object(hostops.HostOps, '_get_host_numa_topology') @mock.patch.object(hostops.HostOps, '_get_remotefx_gpu_info') @mock.patch.object(hostops.HostOps, '_get_cpu_info') @mock.patch.object(hostops.HostOps, '_get_memory_info') @mock.patch.object(hostops.HostOps, '_get_hypervisor_version') @mock.patch.object(hostops.HostOps, '_get_storage_info_gb') @mock.patch('platform.node') def test_get_available_resource(self, mock_node, mock_get_storage_info_gb, mock_get_hypervisor_version, mock_get_memory_info, mock_get_cpu_info, mock_get_gpu_info, mock_get_numa_topology, mock_get_pci_devices): mock_get_storage_info_gb.return_value = (mock.sentinel.LOCAL_GB, mock.sentinel.LOCAL_GB_FREE, mock.sentinel.LOCAL_GB_USED) mock_get_memory_info.return_value = (mock.sentinel.MEMORY_MB, mock.sentinel.MEMORY_MB_FREE, mock.sentinel.MEMORY_MB_USED) mock_cpu_info = self._get_mock_cpu_info() mock_get_cpu_info.return_value = mock_cpu_info mock_get_hypervisor_version.return_value = mock.sentinel.VERSION mock_get_numa_topology.return_value._to_json.return_value = ( mock.sentinel.numa_topology_json) mock_get_pci_devices.return_value = mock.sentinel.pcis mock_gpu_info = self._get_mock_gpu_info() mock_get_gpu_info.return_value = mock_gpu_info response = self._hostops.get_available_resource() mock_get_memory_info.assert_called_once_with() mock_get_cpu_info.assert_called_once_with() mock_get_hypervisor_version.assert_called_once_with() mock_get_pci_devices.assert_called_once_with() expected = {'supported_instances': [("i686", "hyperv", "hvm"), ("x86_64", "hyperv", "hvm")], 'hypervisor_hostname': mock_node(), 'cpu_info': jsonutils.dumps(mock_cpu_info), 'hypervisor_version': mock.sentinel.VERSION, 'memory_mb': mock.sentinel.MEMORY_MB, 'memory_mb_used': mock.sentinel.MEMORY_MB_USED, 'local_gb': mock.sentinel.LOCAL_GB, 'local_gb_used': mock.sentinel.LOCAL_GB_USED, 'disk_available_least': mock.sentinel.LOCAL_GB_FREE, 'vcpus': self.FAKE_NUM_CPUS, 'vcpus_used': 0, 'hypervisor_type': 'hyperv', 'numa_topology': mock.sentinel.numa_topology_json, 'remotefx_available_video_ram': 2048, 'remotefx_gpu_info': mock.sentinel.FAKE_GPU_INFO, 'remotefx_total_video_ram': 4096, 'pci_passthrough_devices': mock.sentinel.pcis, } self.assertEqual(expected, response) @mock.patch.object(hostops.jsonutils, 'dumps') def test_get_pci_passthrough_devices(self, mock_jsonutils_dumps): mock_pci_dev = 
{'vendor_id': 'fake_vendor_id', 'product_id': 'fake_product_id', 'dev_id': 'fake_dev_id', 'address': 'fake_address'} mock_get_pcis = self._hostops._hostutils.get_pci_passthrough_devices mock_get_pcis.return_value = [mock_pci_dev] expected_label = 'label_%(vendor_id)s_%(product_id)s' % { 'vendor_id': mock_pci_dev['vendor_id'], 'product_id': mock_pci_dev['product_id']} expected_pci_dev = mock_pci_dev.copy() expected_pci_dev.update(dev_type=obj_fields.PciDeviceType.STANDARD, label= expected_label, numa_node=None) result = self._hostops._get_pci_passthrough_devices() self.assertEqual(mock_jsonutils_dumps.return_value, result) mock_jsonutils_dumps.assert_called_once_with([expected_pci_dev]) def _test_host_power_action(self, action): self._hostops._hostutils.host_power_action = mock.Mock() self._hostops.host_power_action(action) self._hostops._hostutils.host_power_action.assert_called_with( action) def test_host_power_action_shutdown(self): self._test_host_power_action(constants.HOST_POWER_ACTION_SHUTDOWN) def test_host_power_action_reboot(self): self._test_host_power_action(constants.HOST_POWER_ACTION_REBOOT) def test_host_power_action_exception(self): self.assertRaises(NotImplementedError, self._hostops.host_power_action, constants.HOST_POWER_ACTION_STARTUP) def test_get_host_ip_addr(self): CONF.set_override('my_ip', None) self._hostops._hostutils.get_local_ips.return_value = [ self.FAKE_LOCAL_IP] response = self._hostops.get_host_ip_addr() self._hostops._hostutils.get_local_ips.assert_called_once_with() self.assertEqual(self.FAKE_LOCAL_IP, response) @mock.patch('time.strftime') def test_get_host_uptime(self, mock_time): self._hostops._hostutils.get_host_tick_count64.return_value = ( self.FAKE_TICK_COUNT) response = self._hostops.get_host_uptime() tdelta = datetime.timedelta(milliseconds=int(self.FAKE_TICK_COUNT)) expected = "%s up %s, 0 users, load average: 0, 0, 0" % ( str(mock_time()), str(tdelta)) self.assertEqual(expected, response) nova-17.0.1/nova/tests/unit/virt/test_driver.py0000666000175000017500000000271513250073126021547 0ustar zuulzuul00000000000000# Copyright (c) 2013 Citrix Systems, Inc. # Copyright 2013 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
from oslo_config import fixture as fixture_config

from nova import test
from nova.virt import driver


class DriverMethodTestCase(test.NoDBTestCase):

    def setUp(self):
        super(DriverMethodTestCase, self).setUp()
        self.CONF = self.useFixture(fixture_config.Config()).conf

    def test_is_xenapi_true(self):
        self.CONF.set_override('compute_driver', 'xenapi.XenAPIDriver')

        self.assertTrue(driver.is_xenapi())

    def test_is_xenapi_false(self):
        driver_names = ('libvirt.LibvirtDriver', 'fake.FakeDriver',
                        'ironic.IronicDriver', 'vmwareapi.VMwareVCDriver',
                        'hyperv.HyperVDriver', None)
        for driver_name in driver_names:
            self.CONF.set_override('compute_driver', driver_name)
            self.assertFalse(driver.is_xenapi())
nova-17.0.1/nova/tests/unit/virt/test_events.py0000666000175000017500000000223213250073126021552 0ustar zuulzuul00000000000000
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import time

from nova import test
from nova.virt import event


class TestEvents(test.NoDBTestCase):

    def test_event_repr(self):
        t = time.time()
        uuid = '1234'
        lifecycle = event.EVENT_LIFECYCLE_RESUMED

        e = event.Event(t)
        self.assertEqual(str(e), "<Event: %s>" % t)
        e = event.InstanceEvent(uuid, timestamp=t)
        self.assertEqual(str(e), "<InstanceEvent: %s, %s>" % (t, uuid))
        e = event.LifecycleEvent(uuid, lifecycle, timestamp=t)
        self.assertEqual(str(e),
                         "<LifecycleEvent: %s, %s => Resumed>" % (t, uuid))
nova-17.0.1/nova/tests/unit/virt/powervm/0000775000175000017500000000000013250073472020337 5ustar zuulzuul00000000000000
nova-17.0.1/nova/tests/unit/virt/powervm/tasks/0000775000175000017500000000000013250073472021464 5ustar zuulzuul00000000000000
nova-17.0.1/nova/tests/unit/virt/powervm/tasks/test_storage.py0000666000175000017500000001552113250073126024543 0ustar zuulzuul00000000000000
# Copyright 2015, 2017 IBM Corp.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
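# The fixtures.MockPatch pattern used in setUp() below patches a target for
# the lifetime of the test and hands back the created mock via the fixture's
# .mock attribute. A rough, self-contained sketch (the patch target here is
# illustrative, not from nova):
#
#     import fixtures
#     import testtools
#
#     class Demo(testtools.TestCase):
#         def test_patch(self):
#             mock_exists = self.useFixture(
#                 fixtures.MockPatch('os.path.exists')).mock
#             mock_exists.return_value = True
#
#             import os.path
#             self.assertTrue(os.path.exists('/surely/not/here'))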
import fixtures import mock from pypowervm import exceptions as pvm_exc from nova import test from nova.virt.powervm.tasks import storage as tf_stg class TestStorage(test.NoDBTestCase): def setUp(self): super(TestStorage, self).setUp() self.adapter = mock.Mock() self.disk_dvr = mock.MagicMock() self.mock_cfg_drv = self.useFixture(fixtures.MockPatch( 'nova.virt.powervm.media.ConfigDrivePowerVM')).mock self.mock_mb = self.mock_cfg_drv.return_value self.instance = mock.MagicMock() self.context = 'context' def test_create_and_connect_cfg_drive(self): # With a specified FeedTask task = tf_stg.CreateAndConnectCfgDrive( self.adapter, self.instance, 'injected_files', 'network_info', 'stg_ftsk', admin_pass='admin_pass') task.execute('mgmt_cna') self.mock_cfg_drv.assert_called_once_with(self.adapter) self.mock_mb.create_cfg_drv_vopt.assert_called_once_with( self.instance, 'injected_files', 'network_info', 'stg_ftsk', admin_pass='admin_pass', mgmt_cna='mgmt_cna') # Normal revert task.revert('mgmt_cna', 'result', 'flow_failures') self.mock_mb.dlt_vopt.assert_called_once_with(self.instance, 'stg_ftsk') self.mock_mb.reset_mock() # Revert when dlt_vopt fails self.mock_mb.dlt_vopt.side_effect = pvm_exc.Error('fake-exc') task.revert('mgmt_cna', 'result', 'flow_failures') self.mock_mb.dlt_vopt.assert_called_once() self.mock_mb.reset_mock() # Revert when media builder not created task.mb = None task.revert('mgmt_cna', 'result', 'flow_failures') self.mock_mb.assert_not_called() # Validate args on taskflow.task.Task instantiation with mock.patch('taskflow.task.Task.__init__') as tf: tf_stg.CreateAndConnectCfgDrive( self.adapter, self.instance, 'injected_files', 'network_info', 'stg_ftsk', admin_pass='admin_pass') tf.assert_called_once_with(name='cfg_drive', requires=['mgmt_cna']) def test_delete_vopt(self): # Test with no FeedTask task = tf_stg.DeleteVOpt(self.adapter, self.instance) task.execute() self.mock_cfg_drv.assert_called_once_with(self.adapter) self.mock_mb.dlt_vopt.assert_called_once_with( self.instance, stg_ftsk=None) self.mock_cfg_drv.reset_mock() self.mock_mb.reset_mock() # With a specified FeedTask task = tf_stg.DeleteVOpt(self.adapter, self.instance, stg_ftsk='ftsk') task.execute() self.mock_cfg_drv.assert_called_once_with(self.adapter) self.mock_mb.dlt_vopt.assert_called_once_with( self.instance, stg_ftsk='ftsk') # Validate args on taskflow.task.Task instantiation with mock.patch('taskflow.task.Task.__init__') as tf: tf_stg.DeleteVOpt(self.adapter, self.instance) tf.assert_called_once_with(name='vopt_delete') def test_delete_disk(self): stor_adpt_mappings = mock.Mock() task = tf_stg.DeleteDisk(self.disk_dvr) task.execute(stor_adpt_mappings) self.disk_dvr.delete_disks.assert_called_once_with(stor_adpt_mappings) # Validate args on taskflow.task.Task instantiation with mock.patch('taskflow.task.Task.__init__') as tf: tf_stg.DeleteDisk(self.disk_dvr) tf.assert_called_once_with( name='delete_disk', requires=['stor_adpt_mappings']) def test_detach_disk(self): task = tf_stg.DetachDisk(self.disk_dvr, self.instance) task.execute() self.disk_dvr.detach_disk.assert_called_once_with(self.instance) # Validate args on taskflow.task.Task instantiation with mock.patch('taskflow.task.Task.__init__') as tf: tf_stg.DetachDisk(self.disk_dvr, self.instance) tf.assert_called_once_with( name='detach_disk', provides='stor_adpt_mappings') def test_attach_disk(self): stg_ftsk = mock.Mock() disk_dev_info = mock.Mock() task = tf_stg.AttachDisk(self.disk_dvr, self.instance, stg_ftsk) task.execute(disk_dev_info) 
self.disk_dvr.attach_disk.assert_called_once_with( self.instance, disk_dev_info, stg_ftsk) task.revert(disk_dev_info, 'result', 'flow failures') self.disk_dvr.detach_disk.assert_called_once_with(self.instance) self.disk_dvr.detach_disk.reset_mock() # Revert failures are not raised self.disk_dvr.detach_disk.side_effect = pvm_exc.TimeoutError( "timed out") task.revert(disk_dev_info, 'result', 'flow failures') self.disk_dvr.detach_disk.assert_called_once_with(self.instance) # Validate args on taskflow.task.Task instantiation with mock.patch('taskflow.task.Task.__init__') as tf: tf_stg.AttachDisk(self.disk_dvr, self.instance, stg_ftsk) tf.assert_called_once_with( name='attach_disk', requires=['disk_dev_info']) def test_create_disk_for_img(self): image_meta = mock.Mock() task = tf_stg.CreateDiskForImg( self.disk_dvr, self.context, self.instance, image_meta) task.execute() self.disk_dvr.create_disk_from_image.assert_called_once_with( self.context, self.instance, image_meta) task.revert('result', 'flow failures') self.disk_dvr.delete_disks.assert_called_once_with(['result']) self.disk_dvr.delete_disks.reset_mock() # Delete not called if no result task.revert(None, None) self.disk_dvr.delete_disks.assert_not_called() # Delete exception doesn't raise self.disk_dvr.delete_disks.side_effect = pvm_exc.TimeoutError( "timed out") task.revert('result', 'flow failures') self.disk_dvr.delete_disks.assert_called_once_with(['result']) # Validate args on taskflow.task.Task instantiation with mock.patch('taskflow.task.Task.__init__') as tf: tf_stg.CreateDiskForImg( self.disk_dvr, self.context, self.instance, image_meta) tf.assert_called_once_with( name='create_disk_from_img', provides='disk_dev_info') nova-17.0.1/nova/tests/unit/virt/powervm/tasks/test_vm.py0000666000175000017500000001166013250073126023521 0ustar zuulzuul00000000000000# Copyright 2015, 2017 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
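# The revert tests below lean on taskflow's linear flow semantics: when a
# later task raises, the engine reverts the already-completed tasks before
# re-raising. A minimal sketch of that behavior, assuming only the taskflow
# library itself (names are illustrative):
#
#     from taskflow import engines
#     from taskflow.patterns import linear_flow
#     from taskflow import task
#
#     class Step(task.Task):
#         def execute(self):
#             print('%s: execute' % self.name)
#         def revert(self, *args, **kwargs):
#             print('%s: revert' % self.name)
#
#     class Fail(task.Task):
#         def execute(self):
#             raise RuntimeError('boom')
#
#     flow = linear_flow.Flow('demo')
#     flow.add(Step('first'), Fail('second'))
#     engines.run(flow)  # 'first: revert' prints before RuntimeError surfaces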
import mock
from taskflow import engines as tf_eng
from taskflow.patterns import linear_flow as tf_lf
from taskflow import task as tf_tsk

from nova import exception
from nova import test
from nova.virt.powervm.tasks import vm as tf_vm


class TestVMTasks(test.NoDBTestCase):

    def setUp(self):
        super(TestVMTasks, self).setUp()
        self.apt = mock.Mock()
        self.instance = mock.Mock()

    @mock.patch('pypowervm.tasks.storage.add_lpar_storage_scrub_tasks',
                autospec=True)
    @mock.patch('nova.virt.powervm.vm.create_lpar')
    def test_create(self, mock_vm_crt, mock_stg):
        lpar_entry = mock.Mock()

        # Test create with normal (non-recreate) ftsk
        crt = tf_vm.Create(self.apt, 'host_wrapper', self.instance, 'ftsk')
        mock_vm_crt.return_value = lpar_entry
        crt.execute()

        mock_vm_crt.assert_called_once_with(self.apt, 'host_wrapper',
                                            self.instance)
        mock_stg.assert_called_once_with(
            [lpar_entry.id], 'ftsk', lpars_exist=True)

        # Validate args on taskflow.task.Task instantiation
        with mock.patch('taskflow.task.Task.__init__') as tf:
            tf_vm.Create(self.apt, 'host_wrapper', self.instance, 'ftsk')
        tf.assert_called_once_with(name='crt_vm', provides='lpar_wrap')

    @mock.patch('nova.virt.powervm.vm.power_on')
    def test_power_on(self, mock_pwron):
        pwron = tf_vm.PowerOn(self.apt, self.instance)
        pwron.execute()
        mock_pwron.assert_called_once_with(self.apt, self.instance)

        # Validate args on taskflow.task.Task instantiation
        with mock.patch('taskflow.task.Task.__init__') as tf:
            tf_vm.PowerOn(self.apt, self.instance)
        tf.assert_called_once_with(name='pwr_vm')

    @mock.patch('nova.virt.powervm.vm.power_on')
    @mock.patch('nova.virt.powervm.vm.power_off')
    def test_power_on_revert(self, mock_pwroff, mock_pwron):
        flow = tf_lf.Flow('revert_power_on')
        pwron = tf_vm.PowerOn(self.apt, self.instance)
        flow.add(pwron)

        # Dummy Task that fails, triggering flow revert
        def failure(*a, **k):
            raise ValueError()
        flow.add(tf_tsk.FunctorTask(failure))

        # When PowerOn.execute doesn't fail, revert calls power_off
        self.assertRaises(ValueError, tf_eng.run, flow)
        mock_pwron.assert_called_once_with(self.apt, self.instance)
        mock_pwroff.assert_called_once_with(self.apt, self.instance,
                                            force_immediate=True)

        mock_pwron.reset_mock()
        mock_pwroff.reset_mock()

        # When PowerOn.execute fails, revert doesn't call power_off
        mock_pwron.side_effect = exception.NovaException()
        self.assertRaises(exception.NovaException, tf_eng.run, flow)
        mock_pwron.assert_called_once_with(self.apt, self.instance)
        mock_pwroff.assert_not_called()

    @mock.patch('nova.virt.powervm.vm.power_off')
    def test_power_off(self, mock_pwroff):
        # Default force_immediate
        pwroff = tf_vm.PowerOff(self.apt, self.instance)
        pwroff.execute()
        mock_pwroff.assert_called_once_with(self.apt, self.instance,
                                            force_immediate=False)

        mock_pwroff.reset_mock()

        # Explicit force_immediate
        pwroff = tf_vm.PowerOff(self.apt, self.instance,
                                force_immediate=True)
        pwroff.execute()
        mock_pwroff.assert_called_once_with(self.apt, self.instance,
                                            force_immediate=True)

        # Validate args on taskflow.task.Task instantiation
        with mock.patch('taskflow.task.Task.__init__') as tf:
            tf_vm.PowerOff(self.apt, self.instance)
        tf.assert_called_once_with(name='pwr_off_vm')

    @mock.patch('nova.virt.powervm.vm.delete_lpar')
    def test_delete(self, mock_dlt):
        delete = tf_vm.Delete(self.apt, self.instance)
        delete.execute()
        mock_dlt.assert_called_once_with(self.apt, self.instance)

        # Validate args on taskflow.task.Task instantiation
        with mock.patch('taskflow.task.Task.__init__') as tf:
tf_vm.Delete(self.apt, self.instance) tf.assert_called_once_with(name='dlt_vm') nova-17.0.1/nova/tests/unit/virt/powervm/tasks/test_network.py0000666000175000017500000003126113250073126024567 0ustar zuulzuul00000000000000# Copyright 2015, 2017 IBM Corp. # # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import copy import eventlet import mock from pypowervm.wrappers import network as pvm_net from nova import exception from nova import test from nova.tests.unit.virt import powervm from nova.virt.powervm.tasks import network as tf_net def cna(mac): """Builds a mock Client Network Adapter for unit tests.""" return mock.MagicMock(mac=mac, vswitch_uri='fake_href') class TestNetwork(test.NoDBTestCase): def setUp(self): super(TestNetwork, self).setUp() self.flags(host='host1') self.apt = mock.Mock() self.mock_lpar_wrap = mock.MagicMock() self.mock_lpar_wrap.can_modify_io.return_value = True, None @mock.patch('nova.virt.powervm.vm.get_instance_wrapper') @mock.patch('nova.virt.powervm.vif.unplug') @mock.patch('nova.virt.powervm.vm.get_cnas') def test_unplug_vifs(self, mock_vm_get, mock_unplug, mock_get_wrap): """Tests that a delete of the vif can be done.""" inst = powervm.TEST_INSTANCE # Mock up the CNA responses. cnas = [cna('AABBCCDDEEFF'), cna('AABBCCDDEE11'), cna('AABBCCDDEE22')] mock_vm_get.return_value = cnas # Mock up the network info. This also validates that they will be # sanitized to upper case. 
net_info = [ {'address': 'aa:bb:cc:dd:ee:ff'}, {'address': 'aa:bb:cc:dd:ee:22'}, {'address': 'aa:bb:cc:dd:ee:33'} ] # Mock out the instance wrapper mock_get_wrap.return_value = self.mock_lpar_wrap # Mock out the vif driver def validate_unplug(adapter, instance, vif, cna_w_list=None): self.assertEqual(adapter, self.apt) self.assertEqual(instance, inst) self.assertIn(vif, net_info) self.assertEqual(cna_w_list, cnas) mock_unplug.side_effect = validate_unplug # Run method p_vifs = tf_net.UnplugVifs(self.apt, inst, net_info) p_vifs.execute() # Make sure the unplug was invoked, so that we know that the validation # code was called self.assertEqual(3, mock_unplug.call_count) # Validate args on taskflow.task.Task instantiation with mock.patch('taskflow.task.Task.__init__') as tf: tf_net.UnplugVifs(self.apt, inst, net_info) tf.assert_called_once_with(name='unplug_vifs') @mock.patch('nova.virt.powervm.vm.get_instance_wrapper') def test_unplug_vifs_invalid_state(self, mock_get_wrap): """Tests that the delete raises an exception if bad VM state.""" inst = powervm.TEST_INSTANCE # Mock out the instance wrapper mock_get_wrap.return_value = self.mock_lpar_wrap # Mock that the state is incorrect self.mock_lpar_wrap.can_modify_io.return_value = False, 'bad' # Run method p_vifs = tf_net.UnplugVifs(self.apt, inst, mock.Mock()) self.assertRaises(exception.VirtualInterfaceUnplugException, p_vifs.execute) @mock.patch('nova.virt.powervm.vif.plug') @mock.patch('nova.virt.powervm.vm.get_cnas') def test_plug_vifs_rmc(self, mock_cna_get, mock_plug): """Tests that a crt vif can be done with secure RMC.""" inst = powervm.TEST_INSTANCE # Mock up the CNA response. One should already exist, the other # should not. pre_cnas = [cna('AABBCCDDEEFF'), cna('AABBCCDDEE11')] mock_cna_get.return_value = copy.deepcopy(pre_cnas) # Mock up the network info. This also validates that they will be # sanitized to upper case. net_info = [ {'address': 'aa:bb:cc:dd:ee:ff', 'vnic_type': 'normal'}, {'address': 'aa:bb:cc:dd:ee:22', 'vnic_type': 'normal'}, ] # First run the CNA update, then the CNA create. mock_new_cna = mock.Mock(spec=pvm_net.CNA) mock_plug.side_effect = ['upd_cna', mock_new_cna] # Run method p_vifs = tf_net.PlugVifs(mock.MagicMock(), self.apt, inst, net_info) all_cnas = p_vifs.execute(self.mock_lpar_wrap) # plug should be invoked once per vif: an update of the pre-existing # CNA (new_vif=False) and a create of the missing one (new_vif=True). mock_plug.assert_any_call(self.apt, inst, net_info[0], new_vif=False) mock_plug.assert_any_call(self.apt, inst, net_info[1], new_vif=True) # The Task provides the list of original CNAs plus only CNAs that were # created. self.assertEqual(pre_cnas + [mock_new_cna], all_cnas) # Validate args on taskflow.task.Task instantiation with mock.patch('taskflow.task.Task.__init__') as tf: tf_net.PlugVifs(mock.MagicMock(), self.apt, inst, net_info) tf.assert_called_once_with( name='plug_vifs', provides='vm_cnas', requires=['lpar_wrap']) @mock.patch('nova.virt.powervm.vif.plug') @mock.patch('nova.virt.powervm.vm.get_cnas') def test_plug_vifs_rmc_no_create(self, mock_vm_get, mock_plug): """Verifies that, if no creates are needed, none are done.""" inst = powervm.TEST_INSTANCE # Mock up the CNA response. Both should already exist. mock_vm_get.return_value = [cna('AABBCCDDEEFF'), cna('AABBCCDDEE11')] # Mock up the network info. This also validates that they will be # sanitized to upper case. This also validates that we don't call # get_vnics if no nets have vnic_type 'direct'.
net_info = [ {'address': 'aa:bb:cc:dd:ee:ff', 'vnic_type': 'normal'}, {'address': 'aa:bb:cc:dd:ee:11', 'vnic_type': 'normal'} ] # Run method p_vifs = tf_net.PlugVifs(mock.MagicMock(), self.apt, inst, net_info) p_vifs.execute(self.mock_lpar_wrap) # Both plugs should have been invoked with new_vif as False. mock_plug.assert_any_call(self.apt, inst, net_info[0], new_vif=False) mock_plug.assert_any_call(self.apt, inst, net_info[1], new_vif=False) @mock.patch('nova.virt.powervm.vif.plug') @mock.patch('nova.virt.powervm.vm.get_cnas') def test_plug_vifs_invalid_state(self, mock_vm_get, mock_plug): """Tests that a crt_vif fails when the LPAR state is bad.""" inst = powervm.TEST_INSTANCE # Mock up the CNA response. Only doing one for simplicity mock_vm_get.return_value = [] net_info = [{'address': 'aa:bb:cc:dd:ee:ff', 'vnic_type': 'normal'}] # Mock that the state is incorrect self.mock_lpar_wrap.can_modify_io.return_value = False, 'bad' # Run method p_vifs = tf_net.PlugVifs(mock.MagicMock(), self.apt, inst, net_info) self.assertRaises(exception.VirtualInterfaceCreateException, p_vifs.execute, self.mock_lpar_wrap) # The create should not have been invoked self.assertEqual(0, mock_plug.call_count) @mock.patch('nova.virt.powervm.vif.plug') @mock.patch('nova.virt.powervm.vm.get_cnas') def test_plug_vifs_timeout(self, mock_vm_get, mock_plug): """Tests crt vif failure via loss of the neutron callback.""" inst = powervm.TEST_INSTANCE # Mock up the CNA response. Only doing one for simplicity mock_vm_get.return_value = [cna('AABBCCDDEE11')] # Mock up the network info. net_info = [{'address': 'aa:bb:cc:dd:ee:ff', 'vnic_type': 'normal'}] # Ensure that an exception is raised by a timeout. mock_plug.side_effect = eventlet.timeout.Timeout() # Run method p_vifs = tf_net.PlugVifs(mock.MagicMock(), self.apt, inst, net_info) self.assertRaises(exception.VirtualInterfaceCreateException, p_vifs.execute, self.mock_lpar_wrap) # The create should have only been called once. self.assertEqual(1, mock_plug.call_count) @mock.patch('nova.virt.powervm.vif.unplug') @mock.patch('nova.virt.powervm.vif.plug') @mock.patch('nova.virt.powervm.vm.get_cnas') def test_plug_vifs_revert(self, mock_vm_get, mock_plug, mock_unplug): """Tests that the revert flow works properly.""" inst = powervm.TEST_INSTANCE # Fake CNA list. The one pre-existing VIF should *not* get reverted. cna_list = [cna('AABBCCDDEEFF'), cna('FFEEDDCCBBAA')] mock_vm_get.return_value = cna_list # Mock up the network info. Three roll backs. net_info = [ {'address': 'aa:bb:cc:dd:ee:ff', 'vnic_type': 'normal'}, {'address': 'aa:bb:cc:dd:ee:22', 'vnic_type': 'normal'}, {'address': 'aa:bb:cc:dd:ee:33', 'vnic_type': 'normal'} ] # Make sure we test raising an exception mock_unplug.side_effect = [exception.NovaException(), None] # Run method p_vifs = tf_net.PlugVifs(mock.MagicMock(), self.apt, inst, net_info) p_vifs.execute(self.mock_lpar_wrap) p_vifs.revert(self.mock_lpar_wrap, mock.Mock(), mock.Mock()) # The unplug should be called twice. The exception shouldn't stop the # second call. self.assertEqual(2, mock_unplug.call_count) # Make sure each call is invoked correctly. The first plug was not a # new vif, so it should not be reverted.
c2 = mock.call(self.apt, inst, net_info[1], cna_w_list=cna_list) c3 = mock.call(self.apt, inst, net_info[2], cna_w_list=cna_list) mock_unplug.assert_has_calls([c2, c3]) @mock.patch('pypowervm.tasks.cna.crt_cna') @mock.patch('pypowervm.wrappers.network.VSwitch.search') @mock.patch('nova.virt.powervm.vif.plug') @mock.patch('nova.virt.powervm.vm.get_cnas') def test_plug_mgmt_vif(self, mock_vm_get, mock_plug, mock_vs_search, mock_crt_cna): """Tests that a mgmt vif can be created.""" inst = powervm.TEST_INSTANCE # Mock up the rmc vswitch vswitch_w = mock.MagicMock() vswitch_w.href = 'fake_mgmt_uri' mock_vs_search.return_value = [vswitch_w] # Run method such that it triggers a fresh CNA search p_vifs = tf_net.PlugMgmtVif(self.apt, inst) p_vifs.execute(None) # With the default get_cnas mock (which returns a Mock()), we think we # found an existing management CNA. mock_crt_cna.assert_not_called() mock_vm_get.assert_called_once_with( self.apt, inst, vswitch_uri='fake_mgmt_uri') # Now mock get_cnas to return no hits mock_vm_get.reset_mock() mock_vm_get.return_value = [] p_vifs.execute(None) # Get was called; and since it didn't have the mgmt CNA, so was plug. self.assertEqual(1, mock_crt_cna.call_count) mock_vm_get.assert_called_once_with( self.apt, inst, vswitch_uri='fake_mgmt_uri') # Now pass CNAs, but not the mgmt vif, "from PlugVifs" cnas = [mock.Mock(vswitch_uri='uri1'), mock.Mock(vswitch_uri='uri2')] mock_crt_cna.reset_mock() mock_vm_get.reset_mock() p_vifs.execute(cnas) # Get wasn't called, since the CNAs were passed "from PlugVifs"; but # since the mgmt vif wasn't included, plug was called. mock_vm_get.assert_not_called() mock_crt_cna.assert_called() # Finally, pass CNAs including the mgmt. cnas.append(mock.Mock(vswitch_uri='fake_mgmt_uri')) mock_crt_cna.reset_mock() p_vifs.execute(cnas) # Neither get nor plug was called. mock_vm_get.assert_not_called() mock_crt_cna.assert_not_called() # Validate args on taskflow.task.Task instantiation with mock.patch('taskflow.task.Task.__init__') as tf: tf_net.PlugMgmtVif(self.apt, inst) tf.assert_called_once_with( name='plug_mgmt_vif', provides='mgmt_cna', requires=['vm_cnas']) def test_get_vif_events(self): # Set up common mocks. inst = powervm.TEST_INSTANCE net_info = [mock.MagicMock(), mock.MagicMock()] net_info[0]['id'] = 'a' net_info[0].get.return_value = False net_info[1]['id'] = 'b' net_info[1].get.return_value = True # Set up the runner. p_vifs = tf_net.PlugVifs(mock.MagicMock(), self.apt, inst, net_info) p_vifs.crt_network_infos = net_info resp = p_vifs._get_vif_events() # Only one should be returned since only one was active. self.assertEqual(1, len(resp)) nova-17.0.1/nova/tests/unit/virt/powervm/tasks/__init__.py0000666000175000017500000000000013250073126023561 0ustar zuulzuul00000000000000nova-17.0.1/nova/tests/unit/virt/powervm/disk/0000775000175000017500000000000013250073472021271 5ustar zuulzuul00000000000000nova-17.0.1/nova/tests/unit/virt/powervm/disk/__init__.py0000666000175000017500000000000013250073126023366 0ustar zuulzuul00000000000000nova-17.0.1/nova/tests/unit/virt/powervm/disk/test_ssp.py0000666000175000017500000002725513250073126023520 0ustar zuulzuul00000000000000# Copyright 2015, 2017 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from __future__ import absolute_import import fixtures import mock from oslo_utils import uuidutils from pypowervm import const as pvm_const from pypowervm import exceptions as pvm_exc from pypowervm.tasks import storage as tsk_stg from pypowervm.utils import transaction as pvm_tx from pypowervm.wrappers import cluster as pvm_clust from pypowervm.wrappers import storage as pvm_stg from pypowervm.wrappers import virtual_io_server as pvm_vios from nova import exception from nova import test from nova.tests.unit.virt import powervm from nova.virt.powervm.disk import ssp as ssp_dvr from nova.virt.powervm import vm FAKE_INST_UUID = uuidutils.generate_uuid(dashed=True) FAKE_INST_UUID_PVM = vm.get_pvm_uuid(mock.Mock(uuid=FAKE_INST_UUID)) class TestSSPDiskAdapter(test.NoDBTestCase): """Unit Tests for the SSP disk adapter.""" def setUp(self): super(TestSSPDiskAdapter, self).setUp() self.inst = powervm.TEST_INSTANCE self.apt = mock.Mock() self.host_uuid = 'host_uuid' self.ssp_wrap = mock.create_autospec(pvm_stg.SSP, instance=True) # SSP.refresh() returns itself self.ssp_wrap.refresh.return_value = self.ssp_wrap self.node1 = mock.create_autospec(pvm_clust.Node, instance=True) self.node2 = mock.create_autospec(pvm_clust.Node, instance=True) self.clust_wrap = mock.create_autospec( pvm_clust.Cluster, instance=True) self.clust_wrap.nodes = [self.node1, self.node2] self.clust_wrap.refresh.return_value = self.clust_wrap self.tier_wrap = mock.create_autospec(pvm_stg.Tier, instance=True) # Tier.refresh() returns itself self.tier_wrap.refresh.return_value = self.tier_wrap self.vio_wrap = mock.create_autospec(pvm_vios.VIOS, instance=True) # For _cluster self.mock_clust = self.useFixture(fixtures.MockPatch( 'pypowervm.wrappers.cluster.Cluster', autospec=True)).mock self.mock_clust.get.return_value = [self.clust_wrap] # For _ssp self.mock_ssp_gbhref = self.useFixture(fixtures.MockPatch( 'pypowervm.wrappers.storage.SSP.get_by_href')).mock self.mock_ssp_gbhref.return_value = self.ssp_wrap # For _tier self.mock_get_tier = self.useFixture(fixtures.MockPatch( 'pypowervm.tasks.storage.default_tier_for_ssp', autospec=True)).mock self.mock_get_tier.return_value = self.tier_wrap # A FeedTask self.mock_afs = self.useFixture(fixtures.MockPatch( 'pypowervm.utils.transaction.FeedTask.add_functor_subtask', autospec=True)).mock self.mock_wtsk = mock.create_autospec( pvm_tx.WrapperTask, instance=True) self.mock_wtsk.configure_mock(wrapper=self.vio_wrap) self.mock_ftsk = mock.create_autospec(pvm_tx.FeedTask, instance=True) self.mock_ftsk.configure_mock( wrapper_tasks={self.vio_wrap.uuid: self.mock_wtsk}) self.pvm_uuid = self.useFixture(fixtures.MockPatch( 'nova.virt.powervm.vm.get_pvm_uuid')).mock # The SSP disk adapter self.ssp_drv = ssp_dvr.SSPDiskAdapter(self.apt, self.host_uuid) def test_init(self): self.assertEqual(self.apt, self.ssp_drv._adapter) self.assertEqual(self.host_uuid, self.ssp_drv._host_uuid) self.mock_clust.get.assert_called_once_with(self.apt) self.assertEqual(self.mock_clust.get.return_value, [self.ssp_drv._clust]) self.mock_ssp_gbhref.assert_called_once_with( self.apt, self.clust_wrap.ssp_uri)
self.assertEqual(self.mock_ssp_gbhref.return_value, self.ssp_drv._ssp) self.mock_get_tier.assert_called_once_with(self.ssp_wrap) self.assertEqual(self.mock_get_tier.return_value, self.ssp_drv._tier) def test_init_error(self): # Do these in reverse order to verify we trap all of 'em for raiser in (self.mock_get_tier, self.mock_ssp_gbhref, self.mock_clust.get): raiser.side_effect = pvm_exc.TimeoutError("timed out") self.assertRaises(exception.NotFound, ssp_dvr.SSPDiskAdapter, self.apt, self.host_uuid) raiser.side_effect = ValueError self.assertRaises(ValueError, ssp_dvr.SSPDiskAdapter, self.apt, self.host_uuid) def test_capabilities(self): self.assertTrue(self.ssp_drv.capabilities.get('shared_storage')) @mock.patch('pypowervm.util.get_req_path_uuid', autospec=True) def test_vios_uuids(self, mock_rpu): mock_rpu.return_value = self.host_uuid vios_uuids = self.ssp_drv._vios_uuids self.assertEqual({self.node1.vios_uuid, self.node2.vios_uuid}, set(vios_uuids)) mock_rpu.assert_has_calls( [mock.call(node.vios_uri, preserve_case=True, root=True) for node in [self.node1, self.node2]]) mock_rpu.reset_mock() # Test VIOSes on other nodes, which won't have uuid or url node1 = mock.Mock(vios_uuid=None, vios_uri='uri1') node2 = mock.Mock(vios_uuid='2', vios_uri=None) # This mock is good and should be returned node3 = mock.Mock(vios_uuid='3', vios_uri='uri3') self.clust_wrap.nodes = [node1, node2, node3] self.assertEqual(['3'], self.ssp_drv._vios_uuids) # get_req_path_uuid was only called on the good one mock_rpu.assert_called_once_with('uri3', preserve_case=True, root=True) def test_capacity(self): self.tier_wrap.capacity = 10 self.assertAlmostEqual(10.0, self.ssp_drv.capacity) self.tier_wrap.refresh.assert_called_once_with() def test_capacity_used(self): self.ssp_wrap.capacity = 4.56 self.ssp_wrap.free_space = 1.23 self.assertAlmostEqual((4.56 - 1.23), self.ssp_drv.capacity_used) self.ssp_wrap.refresh.assert_called_once_with() @mock.patch('pypowervm.tasks.cluster_ssp.get_or_upload_image_lu', autospec=True) @mock.patch('nova.virt.powervm.disk.ssp.SSPDiskAdapter._vios_uuids', new_callable=mock.PropertyMock) @mock.patch('pypowervm.util.sanitize_file_name_for_api', autospec=True) @mock.patch('pypowervm.tasks.storage.crt_lu', autospec=True) @mock.patch('nova.image.api.API.download') @mock.patch('nova.virt.powervm.disk.ssp.IterableToFileAdapter') def test_create_disk_from_image(self, mock_it2f, mock_dl, mock_crt_lu, mock_san, mock_vuuid, mock_goru): img = powervm.TEST_IMAGE1 mock_crt_lu.return_value = self.ssp_drv._ssp, 'boot_lu' mock_san.return_value = 'disk_name' mock_vuuid.return_value = ['vuuid'] self.assertEqual('boot_lu', self.ssp_drv.create_disk_from_image( 'context', self.inst, img)) mock_dl.assert_called_once_with('context', img.id) mock_san.assert_has_calls([ mock.call(img.name, prefix='image_', suffix='_' + img.checksum), mock.call(self.inst.name, prefix='boot_')]) mock_it2f.assert_called_once_with(mock_dl.return_value) mock_goru.assert_called_once_with( self.ssp_drv._tier, 'disk_name', 'vuuid', mock_it2f.return_value, img.size, upload_type=tsk_stg.UploadType.IO_STREAM) mock_crt_lu.assert_called_once_with( self.mock_get_tier.return_value, mock_san.return_value, self.inst.flavor.root_gb, typ=pvm_stg.LUType.DISK, clone=mock_goru.return_value) @mock.patch('nova.virt.powervm.disk.ssp.SSPDiskAdapter._vios_uuids', new_callable=mock.PropertyMock) @mock.patch('pypowervm.tasks.scsi_mapper.add_map', autospec=True) @mock.patch('pypowervm.tasks.scsi_mapper.build_vscsi_mapping', autospec=True) 
@mock.patch('pypowervm.wrappers.storage.LU', autospec=True) def test_connect_disk(self, mock_lu, mock_bldmap, mock_addmap, mock_vio_uuids): disk_info = mock.Mock() disk_info.configure_mock(name='dname', udid='dudid') mock_vio_uuids.return_value = [self.vio_wrap.uuid] def test_afs(add_func): # Verify the internal add_func self.assertEqual(mock_addmap.return_value, add_func(self.vio_wrap)) mock_bldmap.assert_called_once_with( self.host_uuid, self.vio_wrap, self.pvm_uuid.return_value, mock_lu.bld_ref.return_value) mock_addmap.assert_called_once_with( self.vio_wrap, mock_bldmap.return_value) self.mock_wtsk.add_functor_subtask.side_effect = test_afs self.ssp_drv.attach_disk(self.inst, disk_info, self.mock_ftsk) mock_lu.bld_ref.assert_called_once_with(self.apt, 'dname', 'dudid') self.pvm_uuid.assert_called_once_with(self.inst) self.assertEqual(1, self.mock_wtsk.add_functor_subtask.call_count) @mock.patch('pypowervm.tasks.storage.rm_tier_storage', autospec=True) def test_delete_disks(self, mock_rm_tstor): self.ssp_drv.delete_disks(['disk1', 'disk2']) mock_rm_tstor.assert_called_once_with(['disk1', 'disk2'], tier=self.ssp_drv._tier) @mock.patch('nova.virt.powervm.disk.ssp.SSPDiskAdapter._vios_uuids', new_callable=mock.PropertyMock) @mock.patch('pypowervm.tasks.scsi_mapper.find_maps', autospec=True) @mock.patch('pypowervm.tasks.scsi_mapper.remove_maps', autospec=True) @mock.patch('pypowervm.tasks.scsi_mapper.gen_match_func', autospec=True) @mock.patch('pypowervm.tasks.partition.build_active_vio_feed_task', autospec=True) def test_disconnect_disk(self, mock_bld_ftsk, mock_gmf, mock_rmmaps, mock_findmaps, mock_vio_uuids): mock_vio_uuids.return_value = [self.vio_wrap.uuid] mock_bld_ftsk.return_value = self.mock_ftsk lu1, lu2 = [mock.create_autospec(pvm_stg.LU, instance=True) for _ in range(2)] # Two mappings have the same LU, to verify set behavior mock_findmaps.return_value = [ mock.Mock(spec=pvm_vios.VSCSIMapping, backing_storage=lu) for lu in (lu1, lu2, lu1)] def test_afs(rm_func): # verify the internal rm_func self.assertEqual(mock_rmmaps.return_value, rm_func(self.vio_wrap)) mock_rmmaps.assert_called_once_with( self.vio_wrap, self.pvm_uuid.return_value, match_func=mock_gmf.return_value) self.mock_wtsk.add_functor_subtask.side_effect = test_afs self.assertEqual( {lu1, lu2}, set(self.ssp_drv.detach_disk(self.inst))) mock_bld_ftsk.assert_called_once_with( self.apt, name='ssp', xag=[pvm_const.XAG.VIO_SMAP]) self.pvm_uuid.assert_called_once_with(self.inst) mock_gmf.assert_called_once_with(pvm_stg.LU) self.assertEqual(1, self.mock_wtsk.add_functor_subtask.call_count) mock_findmaps.assert_called_once_with( self.vio_wrap.scsi_mappings, client_lpar_id=self.pvm_uuid.return_value, match_func=mock_gmf.return_value) self.mock_ftsk.execute.assert_called_once_with() nova-17.0.1/nova/tests/unit/virt/powervm/test_driver.py0000666000175000017500000003527513250073126023253 0ustar zuulzuul00000000000000# Copyright 2016, 2017 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License.
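# Illustrative sketch (assumes only the documented fixtures.MockPatch and
# mock APIs) of the patching idiom the driver tests below rely on: patches
# applied in setUp() via useFixture stay active for the whole test and are
# torn down automatically, unlike per-method @mock.patch decorators:
#
#     self.pwron = self.useFixture(fixtures.MockPatch(
#         'nova.virt.powervm.vm.power_on')).mock
#     ...
#     self.pwron.assert_called_once_with(self.adp, self.inst)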
from __future__ import absolute_import import fixtures import mock from pypowervm import const as pvm_const from pypowervm import exceptions as pvm_exc from pypowervm.helpers import log_helper as pvm_hlp_log from pypowervm.helpers import vios_busy as pvm_hlp_vbusy from pypowervm.utils import transaction as pvm_tx from pypowervm.wrappers import virtual_io_server as pvm_vios from nova import exception from nova import test from nova.tests.unit.virt import powervm from nova.virt import hardware from nova.virt.powervm.disk import ssp from nova.virt.powervm import driver class TestPowerVMDriver(test.NoDBTestCase): def setUp(self): super(TestPowerVMDriver, self).setUp() self.drv = driver.PowerVMDriver('virtapi') self.adp = self.useFixture(fixtures.MockPatch( 'pypowervm.adapter.Adapter', autospec=True)).mock self.drv.adapter = self.adp self.sess = self.useFixture(fixtures.MockPatch( 'pypowervm.adapter.Session', autospec=True)).mock self.pwron = self.useFixture(fixtures.MockPatch( 'nova.virt.powervm.vm.power_on')).mock self.pwroff = self.useFixture(fixtures.MockPatch( 'nova.virt.powervm.vm.power_off')).mock # Create an instance to test with self.inst = powervm.TEST_INSTANCE @mock.patch('nova.image.API') @mock.patch('pypowervm.tasks.storage.ComprehensiveScrub', autospec=True) @mock.patch('nova.virt.powervm.disk.ssp.SSPDiskAdapter') @mock.patch('pypowervm.wrappers.managed_system.System', autospec=True) @mock.patch('pypowervm.tasks.partition.validate_vios_ready', autospec=True) def test_init_host(self, mock_vvr, mock_sys, mock_ssp, mock_scrub, mock_img): mock_hostw = mock.Mock(uuid='uuid') mock_sys.get.return_value = [mock_hostw] self.drv.init_host('host') self.sess.assert_called_once_with(conn_tries=60) self.adp.assert_called_once_with( self.sess.return_value, helpers=[ pvm_hlp_log.log_helper, pvm_hlp_vbusy.vios_busy_retry_helper]) mock_vvr.assert_called_once_with(self.drv.adapter) mock_sys.get.assert_called_once_with(self.drv.adapter) self.assertEqual(mock_hostw, self.drv.host_wrapper) mock_scrub.assert_called_once_with(self.drv.adapter) mock_scrub.return_value.execute.assert_called_once_with() mock_ssp.assert_called_once_with(self.drv.adapter, 'uuid') self.assertEqual(mock_ssp.return_value, self.drv.disk_dvr) mock_img.assert_called_once_with() self.assertEqual(mock_img.return_value, self.drv.image_api) @mock.patch('nova.virt.powervm.vm.get_pvm_uuid') @mock.patch('nova.virt.powervm.vm.get_vm_qp') @mock.patch('nova.virt.powervm.vm._translate_vm_state') def test_get_info(self, mock_tx_state, mock_qp, mock_uuid): mock_tx_state.return_value = 'fake-state' self.assertEqual(hardware.InstanceInfo('fake-state'), self.drv.get_info('inst')) mock_uuid.assert_called_once_with('inst') mock_qp.assert_called_once_with( self.drv.adapter, mock_uuid.return_value, 'PartitionState') mock_tx_state.assert_called_once_with(mock_qp.return_value) @mock.patch('nova.virt.powervm.vm.get_lpar_names') def test_list_instances(self, mock_names): mock_names.return_value = ['one', 'two', 'three'] self.assertEqual(['one', 'two', 'three'], self.drv.list_instances()) mock_names.assert_called_once_with(self.adp) def test_get_available_nodes(self): self.flags(host='hostname') self.assertEqual(['hostname'], self.drv.get_available_nodes('node')) @mock.patch('pypowervm.wrappers.managed_system.System', autospec=True) @mock.patch('nova.virt.powervm.host.build_host_resource_from_ms') def test_get_available_resource(self, mock_bhrfm, mock_sys): mock_sys.get.return_value = ['sys'] mock_bhrfm.return_value = {'foo': 'bar'} self.drv.disk_dvr = 
mock.create_autospec(ssp.SSPDiskAdapter, instance=True) self.assertEqual( {'foo': 'bar', 'local_gb': self.drv.disk_dvr.capacity, 'local_gb_used': self.drv.disk_dvr.capacity_used}, self.drv.get_available_resource('node')) mock_sys.get.assert_called_once_with(self.adp) mock_bhrfm.assert_called_once_with('sys') self.assertEqual('sys', self.drv.host_wrapper) @mock.patch('nova.virt.powervm.tasks.network.PlugMgmtVif.execute') @mock.patch('nova.virt.powervm.tasks.network.PlugVifs.execute') @mock.patch('nova.virt.powervm.media.ConfigDrivePowerVM') @mock.patch('nova.virt.configdrive.required_by') @mock.patch('nova.virt.powervm.vm.create_lpar') @mock.patch('pypowervm.tasks.partition.build_active_vio_feed_task', autospec=True) @mock.patch('pypowervm.tasks.storage.add_lpar_storage_scrub_tasks', autospec=True) def test_spawn_ops(self, mock_scrub, mock_bldftsk, mock_crt_lpar, mock_cdrb, mock_cfg_drv, mock_plug_vifs, mock_plug_mgmt_vif): """Validates the 'typical' spawn flow of an instance.""" mock_cdrb.return_value = True self.drv.host_wrapper = mock.Mock() self.drv.disk_dvr = mock.create_autospec(ssp.SSPDiskAdapter, instance=True) mock_ftsk = pvm_tx.FeedTask('fake', [mock.Mock(spec=pvm_vios.VIOS)]) mock_bldftsk.return_value = mock_ftsk self.drv.spawn('context', self.inst, 'img_meta', 'files', 'password', 'allocs', network_info='netinfo') mock_crt_lpar.assert_called_once_with( self.adp, self.drv.host_wrapper, self.inst) mock_bldftsk.assert_called_once_with( self.adp, xag={pvm_const.XAG.VIO_SMAP, pvm_const.XAG.VIO_FMAP}) self.assertTrue(mock_plug_vifs.called) self.assertTrue(mock_plug_mgmt_vif.called) mock_scrub.assert_called_once_with( [mock_crt_lpar.return_value.id], mock_ftsk, lpars_exist=True) self.drv.disk_dvr.create_disk_from_image.assert_called_once_with( 'context', self.inst, 'img_meta') self.drv.disk_dvr.attach_disk.assert_called_once_with( self.inst, self.drv.disk_dvr.create_disk_from_image.return_value, mock_ftsk) mock_cfg_drv.assert_called_once_with(self.adp) mock_cfg_drv.return_value.create_cfg_drv_vopt.assert_called_once_with( self.inst, 'files', 'netinfo', mock_ftsk, admin_pass='password', mgmt_cna=mock.ANY) self.pwron.assert_called_once_with(self.adp, self.inst) mock_cfg_drv.reset_mock() # No config drive mock_cdrb.return_value = False self.drv.spawn('context', self.inst, 'img_meta', 'files', 'password', 'allocs') mock_cfg_drv.assert_not_called() @mock.patch('nova.virt.powervm.tasks.network.UnplugVifs.execute') @mock.patch('nova.virt.powervm.vm.delete_lpar') @mock.patch('nova.virt.powervm.media.ConfigDrivePowerVM') @mock.patch('nova.virt.configdrive.required_by') @mock.patch('pypowervm.tasks.partition.build_active_vio_feed_task', autospec=True) def test_destroy(self, mock_bldftsk, mock_cdrb, mock_cfgdrv, mock_dlt_lpar, mock_unplug): """Validates PowerVM destroy.""" self.drv.host_wrapper = mock.Mock() self.drv.disk_dvr = mock.create_autospec(ssp.SSPDiskAdapter, instance=True) mock_ftsk = pvm_tx.FeedTask('fake', [mock.Mock(spec=pvm_vios.VIOS)]) mock_bldftsk.return_value = mock_ftsk # Good path, with config drive, destroy disks mock_cdrb.return_value = True self.drv.destroy('context', self.inst, [], block_device_info={}) self.pwroff.assert_called_once_with( self.adp, self.inst, force_immediate=True) mock_bldftsk.assert_called_once_with( self.adp, xag=[pvm_const.XAG.VIO_SMAP]) mock_unplug.assert_called_once() mock_cdrb.assert_called_once_with(self.inst) mock_cfgdrv.assert_called_once_with(self.adp) mock_cfgdrv.return_value.dlt_vopt.assert_called_once_with( self.inst,
stg_ftsk=mock_bldftsk.return_value) self.drv.disk_dvr.detach_disk.assert_called_once_with( self.inst) self.drv.disk_dvr.delete_disks.assert_called_once_with( self.drv.disk_dvr.detach_disk.return_value) mock_dlt_lpar.assert_called_once_with(self.adp, self.inst) self.pwroff.reset_mock() mock_bldftsk.reset_mock() mock_unplug.reset_mock() mock_cdrb.reset_mock() mock_cfgdrv.reset_mock() self.drv.disk_dvr.detach_disk.reset_mock() self.drv.disk_dvr.delete_disks.reset_mock() mock_dlt_lpar.reset_mock() # No config drive, preserve disks mock_cdrb.return_value = False self.drv.destroy('context', self.inst, [], block_device_info={}, destroy_disks=False) mock_cfgdrv.return_value.dlt_vopt.assert_not_called() self.drv.disk_dvr.delete_disks.assert_not_called() # Non-forced power_off, since preserving disks self.pwroff.assert_called_once_with( self.adp, self.inst, force_immediate=False) mock_bldftsk.assert_called_once_with( self.adp, xag=[pvm_const.XAG.VIO_SMAP]) mock_unplug.assert_called_once() mock_cdrb.assert_called_once_with(self.inst) mock_cfgdrv.assert_not_called() mock_cfgdrv.return_value.dlt_vopt.assert_not_called() self.drv.disk_dvr.detach_disk.assert_called_once_with( self.inst) self.drv.disk_dvr.delete_disks.assert_not_called() mock_dlt_lpar.assert_called_once_with(self.adp, self.inst) self.pwroff.reset_mock() mock_bldftsk.reset_mock() mock_unplug.reset_mock() mock_cdrb.reset_mock() mock_cfgdrv.reset_mock() self.drv.disk_dvr.detach_disk.reset_mock() self.drv.disk_dvr.delete_disks.reset_mock() mock_dlt_lpar.reset_mock() # InstanceNotFound exception, non-forced self.pwroff.side_effect = exception.InstanceNotFound( instance_id='something') self.drv.destroy('context', self.inst, [], block_device_info={}, destroy_disks=False) self.pwroff.assert_called_once_with( self.adp, self.inst, force_immediate=False) self.drv.disk_dvr.detach_disk.assert_not_called() mock_unplug.assert_not_called() self.drv.disk_dvr.delete_disks.assert_not_called() mock_dlt_lpar.assert_not_called() self.pwroff.reset_mock() self.pwroff.side_effect = None mock_unplug.reset_mock() # Convertible (PowerVM) exception mock_dlt_lpar.side_effect = pvm_exc.TimeoutError("Timed out") self.assertRaises(exception.InstanceTerminationFailure, self.drv.destroy, 'context', self.inst, [], block_device_info={}) # Everything got called self.pwroff.assert_called_once_with( self.adp, self.inst, force_immediate=True) mock_unplug.assert_called_once() self.drv.disk_dvr.detach_disk.assert_called_once_with(self.inst) self.drv.disk_dvr.delete_disks.assert_called_once_with( self.drv.disk_dvr.detach_disk.return_value) mock_dlt_lpar.assert_called_once_with(self.adp, self.inst) # Other random exception raises directly mock_dlt_lpar.side_effect = ValueError() self.assertRaises(ValueError, self.drv.destroy, 'context', self.inst, [], block_device_info={}) def test_power_on(self): self.drv.power_on('context', self.inst, 'network_info') self.pwron.assert_called_once_with(self.adp, self.inst) def test_power_off(self): self.drv.power_off(self.inst) self.pwroff.assert_called_once_with( self.adp, self.inst, force_immediate=True, timeout=None) def test_power_off_timeout(self): # Long timeout (retry interval means nothing on powervm) self.drv.power_off(self.inst, timeout=500, retry_interval=10) self.pwroff.assert_called_once_with( self.adp, self.inst, force_immediate=False, timeout=500) @mock.patch('nova.virt.powervm.vm.reboot') def test_reboot_soft(self, mock_reboot): inst = mock.Mock() self.drv.reboot('context', inst, 'network_info', 'SOFT') 
mock_reboot.assert_called_once_with(self.adp, inst, False) @mock.patch('nova.virt.powervm.vm.reboot') def test_reboot_hard(self, mock_reboot): inst = mock.Mock() self.drv.reboot('context', inst, 'network_info', 'HARD') mock_reboot.assert_called_once_with(self.adp, inst, True) @mock.patch('pypowervm.tasks.vterm.open_remotable_vnc_vterm', autospec=True) @mock.patch('nova.virt.powervm.vm.get_pvm_uuid', new=mock.Mock(return_value='uuid')) def test_get_vnc_console(self, mock_vterm): # Success mock_vterm.return_value = '10' resp = self.drv.get_vnc_console(mock.ANY, self.inst) self.assertEqual('127.0.0.1', resp.host) self.assertEqual('10', resp.port) self.assertEqual('uuid', resp.internal_access_path) mock_vterm.assert_called_once_with( mock.ANY, 'uuid', mock.ANY, vnc_path='uuid') # VNC failure - exception is raised directly mock_vterm.side_effect = pvm_exc.VNCBasedTerminalFailedToOpen(err='xx') self.assertRaises(pvm_exc.VNCBasedTerminalFailedToOpen, self.drv.get_vnc_console, mock.ANY, self.inst) # 404 mock_vterm.side_effect = pvm_exc.HttpError(mock.Mock(status=404)) self.assertRaises(exception.InstanceNotFound, self.drv.get_vnc_console, mock.ANY, self.inst) def test_deallocate_networks_on_reschedule(self): candeallocate = self.drv.deallocate_networks_on_reschedule(mock.Mock()) self.assertTrue(candeallocate) nova-17.0.1/nova/tests/unit/virt/powervm/test_host.py0000666000175000017500000000461013250073126022724 0ustar zuulzuul00000000000000# Copyright 2016 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
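# The expected values asserted in test_host_resources below follow from the
# mocked System wrapper (a reading of the assertions; the production
# formulas live in nova.virt.powervm.host.build_host_resource_from_ms):
#
#     vcpus          = proc_units_configurable              -> 500
#     vcpus_used     = proc_units_configurable
#                      - proc_units_avail                   -> 500 - 500 = 0
#     memory_mb      = memory_configurable                  -> 5242880
#     memory_mb_used = memory_configurable - memory_free    -> 5242880
#                                                              - 5242752
#                                                              = 128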
# import mock from pypowervm.wrappers import managed_system as pvm_ms from nova import test from nova.virt.powervm import host as pvm_host class TestPowerVMHost(test.NoDBTestCase): def test_host_resources(self): # Create objects to test with ms_wrapper = mock.create_autospec(pvm_ms.System, spec_set=True) asio = mock.create_autospec(pvm_ms.ASIOConfig, spec_set=True) ms_wrapper.configure_mock( proc_units_configurable=500, proc_units_avail=500, memory_configurable=5242880, memory_free=5242752, memory_region_size='big', asio_config=asio) self.flags(host='the_hostname') # Run the actual test stats = pvm_host.build_host_resource_from_ms(ms_wrapper) self.assertIsNotNone(stats) # Check for the presence of fields fields = (('vcpus', 500), ('vcpus_used', 0), ('memory_mb', 5242880), ('memory_mb_used', 128), 'hypervisor_type', 'hypervisor_version', ('hypervisor_hostname', 'the_hostname'), 'cpu_info', 'supported_instances', 'stats') for fld in fields: if isinstance(fld, tuple): value = stats.get(fld[0], None) self.assertEqual(value, fld[1]) else: value = stats.get(fld, None) self.assertIsNotNone(value) # Check for individual stats hstats = (('proc_units', '500.00'), ('proc_units_used', '0.00')) for stat in hstats: if isinstance(stat, tuple): value = stats['stats'].get(stat[0], None) self.assertEqual(value, stat[1]) else: value = stats['stats'].get(stat, None) self.assertIsNotNone(value) nova-17.0.1/nova/tests/unit/virt/powervm/test_vm.py0000666000175000017500000006155113250073126022400 0ustar zuulzuul00000000000000# Copyright 2014, 2017 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
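# Illustrative sketch (derived from the test_translate_vm_state assertions
# below; the real mapping lives in nova.virt.powervm.vm) of how PowerVM
# partition state strings translate to nova power states:
#
#     'running' / 'starting' / 'open firmware' / ...  -> power_state.RUNNING
#     'not activated' / 'migrating not active'        -> power_state.SHUTDOWN
#     'resuming' / 'suspended'                        -> power_state.SUSPENDED
#     'error'                                         -> power_state.CRASHED
#     anything unrecognized                           -> power_state.NOSTATE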
from __future__ import absolute_import import fixtures import mock from pypowervm import exceptions as pvm_exc from pypowervm.helpers import log_helper as pvm_log from pypowervm.tests import test_fixtures as pvm_fx from pypowervm.utils import lpar_builder as lpar_bld from pypowervm.utils import uuid as pvm_uuid from pypowervm.wrappers import base_partition as pvm_bp from pypowervm.wrappers import logical_partition as pvm_lpar from nova.compute import power_state from nova import exception from nova import test from nova.tests.unit.virt import powervm from nova.virt.powervm import vm LPAR_MAPPING = ( { 'z3-9-5-126-127-00000001': '089ffb20-5d19-4a8c-bb80-13650627d985', 'z3-9-5-126-208-000001f0': '668b0882-c24a-4ae9-91c8-297e95e3fe29' }) class TestVMBuilder(test.NoDBTestCase): def setUp(self): super(TestVMBuilder, self).setUp() self.adpt = mock.MagicMock() self.host_w = mock.MagicMock() self.lpar_b = vm.VMBuilder(self.host_w, self.adpt) self.san_lpar_name = self.useFixture(fixtures.MockPatch( 'pypowervm.util.sanitize_partition_name_for_api', autospec=True)).mock self.inst = powervm.TEST_INSTANCE @mock.patch('pypowervm.utils.lpar_builder.DefaultStandardize', autospec=True) @mock.patch('nova.virt.powervm.vm.get_pvm_uuid') @mock.patch('pypowervm.utils.lpar_builder.LPARBuilder', autospec=True) def test_vm_builder(self, mock_lpar_bldr, mock_uuid2pvm, mock_def_stdz): inst = mock.Mock() inst.configure_mock( name='lpar_name', uuid='lpar_uuid', flavor=mock.Mock(memory_mb='mem', vcpus='vcpus', extra_specs={})) vmb = vm.VMBuilder('host', 'adap') mock_def_stdz.assert_called_once_with('host') self.assertEqual(mock_lpar_bldr.return_value, vmb.lpar_builder(inst)) self.san_lpar_name.assert_called_once_with('lpar_name') mock_uuid2pvm.assert_called_once_with(inst) mock_lpar_bldr.assert_called_once_with( 'adap', {'name': self.san_lpar_name.return_value, 'uuid': mock_uuid2pvm.return_value, 'memory': 'mem', 'vcpu': 'vcpus', 'srr_capability': True}, mock_def_stdz.return_value) def test_format_flavor(self): """Perform tests against _format_flavor.""" # convert instance uuid to pypowervm uuid # LP 1561128, simplified remote restart is enabled by default lpar_attrs = {'memory': 2048, 'name': self.san_lpar_name.return_value, 'uuid': pvm_uuid.convert_uuid_to_pvm( self.inst.uuid).upper(), 'vcpu': 1, 'srr_capability': True} # Test dedicated procs self.inst.flavor.extra_specs = {'powervm:dedicated_proc': 'true'} test_attrs = dict(lpar_attrs, dedicated_proc='true') self.assertEqual(self.lpar_b._format_flavor(self.inst), test_attrs) self.san_lpar_name.assert_called_with(self.inst.name) self.san_lpar_name.reset_mock() # Test dedicated procs, min/max vcpu and sharing mode self.inst.flavor.extra_specs = {'powervm:dedicated_proc': 'true', 'powervm:dedicated_sharing_mode': 'share_idle_procs_active', 'powervm:min_vcpu': '1', 'powervm:max_vcpu': '3'} test_attrs = dict(lpar_attrs, dedicated_proc='true', sharing_mode='sre idle procs active', min_vcpu='1', max_vcpu='3') self.assertEqual(self.lpar_b._format_flavor(self.inst), test_attrs) self.san_lpar_name.assert_called_with(self.inst.name) self.san_lpar_name.reset_mock() # Test shared proc sharing mode self.inst.flavor.extra_specs = {'powervm:uncapped': 'true'} test_attrs = dict(lpar_attrs, sharing_mode='uncapped') self.assertEqual(self.lpar_b._format_flavor(self.inst), test_attrs) self.san_lpar_name.assert_called_with(self.inst.name) self.san_lpar_name.reset_mock() # Test availability priority self.inst.flavor.extra_specs = {'powervm:availability_priority': '150'} test_attrs = 
dict(lpar_attrs, avail_priority='150') self.assertEqual(self.lpar_b._format_flavor(self.inst), test_attrs) self.san_lpar_name.assert_called_with(self.inst.name) self.san_lpar_name.reset_mock() # Test processor compatibility self.inst.flavor.extra_specs = { 'powervm:processor_compatibility': 'POWER8'} test_attrs = dict(lpar_attrs, processor_compatibility='POWER8') self.assertEqual(self.lpar_b._format_flavor(self.inst), test_attrs) self.san_lpar_name.assert_called_with(self.inst.name) self.san_lpar_name.reset_mock() # Test min, max proc units self.inst.flavor.extra_specs = {'powervm:min_proc_units': '0.5', 'powervm:max_proc_units': '2.0'} test_attrs = dict(lpar_attrs, min_proc_units='0.5', max_proc_units='2.0') self.assertEqual(self.lpar_b._format_flavor(self.inst), test_attrs) self.san_lpar_name.assert_called_with(self.inst.name) self.san_lpar_name.reset_mock() # Test min, max mem self.inst.flavor.extra_specs = {'powervm:min_mem': '1024', 'powervm:max_mem': '4096'} test_attrs = dict(lpar_attrs, min_mem='1024', max_mem='4096') self.assertEqual(self.lpar_b._format_flavor(self.inst), test_attrs) self.san_lpar_name.assert_called_with(self.inst.name) self.san_lpar_name.reset_mock() # Test remote restart set to false self.inst.flavor.extra_specs = {'powervm:srr_capability': 'false'} test_attrs = dict(lpar_attrs, srr_capability=False) self.assertEqual(self.lpar_b._format_flavor(self.inst), test_attrs) # Unhandled powervm: key is ignored self.inst.flavor.extra_specs = {'powervm:srr_capability': 'false', 'powervm:something_new': 'foo'} test_attrs = dict(lpar_attrs, srr_capability=False) self.assertEqual(self.lpar_b._format_flavor(self.inst), test_attrs) # If we recognize a key, but don't handle it, we raise with mock.patch.object(self.lpar_b, '_is_pvm_valid_key', return_value=True): self.inst.flavor.extra_specs = {'powervm:srr_capability': 'false', 'powervm:something_new': 'foo'} self.assertRaises(KeyError, self.lpar_b._format_flavor, self.inst) @mock.patch('pypowervm.wrappers.shared_proc_pool.SharedProcPool.search') def test_spp_pool_id(self, mock_search): # The default pool is always zero. Validate the path. self.assertEqual(0, self.lpar_b._spp_pool_id('DefaultPool')) self.assertEqual(0, self.lpar_b._spp_pool_id(None)) # Further invocations require calls to the adapter. Build a minimal # mocked SPP wrapper spp = mock.MagicMock() spp.id = 1 # Three invocations. First has too many elems. Second has none. # Third is just right. 
:-) mock_search.side_effect = [[spp, spp], [], [spp]] self.assertRaises(exception.ValidationError, self.lpar_b._spp_pool_id, 'fake_name') self.assertRaises(exception.ValidationError, self.lpar_b._spp_pool_id, 'fake_name') self.assertEqual(1, self.lpar_b._spp_pool_id('fake_name')) class TestVM(test.NoDBTestCase): def setUp(self): super(TestVM, self).setUp() self.apt = self.useFixture(pvm_fx.AdapterFx( traits=pvm_fx.LocalPVMTraits)).adpt self.apt.helpers = [pvm_log.log_helper] self.san_lpar_name = self.useFixture(fixtures.MockPatch( 'pypowervm.util.sanitize_partition_name_for_api')).mock self.san_lpar_name.side_effect = lambda name: name mock_entries = [mock.Mock(), mock.Mock()] self.resp = mock.MagicMock() self.resp.feed = mock.MagicMock(entries=mock_entries) self.get_pvm_uuid = self.useFixture(fixtures.MockPatch( 'nova.virt.powervm.vm.get_pvm_uuid')).mock self.inst = powervm.TEST_INSTANCE def test_translate_vm_state(self): self.assertEqual(power_state.RUNNING, vm._translate_vm_state('running')) self.assertEqual(power_state.RUNNING, vm._translate_vm_state('migrating running')) self.assertEqual(power_state.RUNNING, vm._translate_vm_state('starting')) self.assertEqual(power_state.RUNNING, vm._translate_vm_state('open firmware')) self.assertEqual(power_state.RUNNING, vm._translate_vm_state('shutting down')) self.assertEqual(power_state.RUNNING, vm._translate_vm_state('suspending')) self.assertEqual(power_state.SHUTDOWN, vm._translate_vm_state('migrating not active')) self.assertEqual(power_state.SHUTDOWN, vm._translate_vm_state('not activated')) self.assertEqual(power_state.NOSTATE, vm._translate_vm_state('unknown')) self.assertEqual(power_state.NOSTATE, vm._translate_vm_state('hardware discovery')) self.assertEqual(power_state.NOSTATE, vm._translate_vm_state('not available')) self.assertEqual(power_state.SUSPENDED, vm._translate_vm_state('resuming')) self.assertEqual(power_state.SUSPENDED, vm._translate_vm_state('suspended')) self.assertEqual(power_state.CRASHED, vm._translate_vm_state('error')) @mock.patch('pypowervm.wrappers.logical_partition.LPAR', autospec=True) def test_get_lpar_names(self, mock_lpar): inst1 = mock.Mock() inst1.configure_mock(name='inst1') inst2 = mock.Mock() inst2.configure_mock(name='inst2') mock_lpar.search.return_value = [inst1, inst2] self.assertEqual({'inst1', 'inst2'}, set(vm.get_lpar_names('adap'))) mock_lpar.search.assert_called_once_with( 'adap', is_mgmt_partition=False) @mock.patch('pypowervm.tasks.vterm.close_vterm', autospec=True) def test_dlt_lpar(self, mock_vterm): """Performs a delete LPAR test.""" vm.delete_lpar(self.apt, 'inst') self.get_pvm_uuid.assert_called_once_with('inst') self.apt.delete.assert_called_once_with( pvm_lpar.LPAR.schema_type, root_id=self.get_pvm_uuid.return_value) self.assertEqual(1, mock_vterm.call_count) # Test Failure Path # build a mock response body with the expected HSCL msg resp = mock.Mock() resp.body = 'error msg: HSCL151B more text' self.apt.delete.side_effect = pvm_exc.Error( 'Mock Error Message', response=resp) # Reset counters self.apt.reset_mock() mock_vterm.reset_mock() self.assertRaises(pvm_exc.Error, vm.delete_lpar, self.apt, 'inst') self.assertEqual(1, mock_vterm.call_count) self.assertEqual(1, self.apt.delete.call_count) self.apt.reset_mock() mock_vterm.reset_mock() # Test HttpError 404 resp.status = 404 self.apt.delete.side_effect = pvm_exc.HttpError(resp=resp) vm.delete_lpar(self.apt, 'inst') self.assertEqual(1, mock_vterm.call_count) self.assertEqual(1, self.apt.delete.call_count) self.apt.reset_mock() 
mock_vterm.reset_mock() # Test Other HttpError resp.status = 111 self.apt.delete.side_effect = pvm_exc.HttpError(resp=resp) self.assertRaises(pvm_exc.HttpError, vm.delete_lpar, self.apt, 'inst') self.assertEqual(1, mock_vterm.call_count) self.assertEqual(1, self.apt.delete.call_count) self.apt.reset_mock() mock_vterm.reset_mock() # Test HttpError 404 closing vterm resp.status = 404 mock_vterm.side_effect = pvm_exc.HttpError(resp=resp) vm.delete_lpar(self.apt, 'inst') self.assertEqual(1, mock_vterm.call_count) self.assertEqual(0, self.apt.delete.call_count) self.apt.reset_mock() mock_vterm.reset_mock() # Test Other HttpError closing vterm resp.status = 111 mock_vterm.side_effect = pvm_exc.HttpError(resp=resp) self.assertRaises(pvm_exc.HttpError, vm.delete_lpar, self.apt, 'inst') self.assertEqual(1, mock_vterm.call_count) self.assertEqual(0, self.apt.delete.call_count) @mock.patch('nova.virt.powervm.vm.VMBuilder', autospec=True) @mock.patch('pypowervm.utils.validation.LPARWrapperValidator', autospec=True) def test_crt_lpar(self, mock_vld, mock_vmbldr): self.inst.flavor.extra_specs = {'powervm:dedicated_proc': 'true'} mock_bldr = mock.Mock(spec=lpar_bld.LPARBuilder) mock_vmbldr.return_value.lpar_builder.return_value = mock_bldr mock_pend_lpar = mock.create_autospec(pvm_lpar.LPAR, instance=True) mock_bldr.build.return_value = mock_pend_lpar vm.create_lpar(self.apt, 'host', self.inst) mock_vmbldr.assert_called_once_with('host', self.apt) mock_vmbldr.return_value.lpar_builder.assert_called_once_with( self.inst) mock_bldr.build.assert_called_once_with() mock_vld.assert_called_once_with(mock_pend_lpar, 'host') mock_vld.return_value.validate_all.assert_called_once_with() mock_pend_lpar.create.assert_called_once_with(parent='host') # Test to verify the LPAR Creation with invalid name specification mock_vmbldr.side_effect = lpar_bld.LPARBuilderException("Invalid Name") self.assertRaises(exception.BuildAbortException, vm.create_lpar, self.apt, 'host', self.inst) # HttpError mock_vmbldr.side_effect = pvm_exc.HttpError(mock.Mock()) self.assertRaises(exception.PowerVMAPIFailed, vm.create_lpar, self.apt, 'host', self.inst) @mock.patch('pypowervm.wrappers.logical_partition.LPAR', autospec=True) def test_get_instance_wrapper(self, mock_lpar): resp = mock.Mock(status=404) mock_lpar.get.side_effect = pvm_exc.Error('message', response=resp) self.assertRaises(exception.InstanceNotFound, vm.get_instance_wrapper, self.apt, self.inst) @mock.patch('pypowervm.tasks.power.power_on', autospec=True) @mock.patch('oslo_concurrency.lockutils.lock', autospec=True) @mock.patch('nova.virt.powervm.vm.get_instance_wrapper') def test_power_on(self, mock_wrap, mock_lock, mock_power_on): entry = mock.Mock(state=pvm_bp.LPARState.NOT_ACTIVATED) mock_wrap.return_value = entry vm.power_on(None, self.inst) mock_power_on.assert_called_once_with(entry, None) mock_lock.assert_called_once_with('power_%s' % self.inst.uuid) mock_power_on.reset_mock() mock_lock.reset_mock() stop_states = [ pvm_bp.LPARState.RUNNING, pvm_bp.LPARState.STARTING, pvm_bp.LPARState.OPEN_FIRMWARE, pvm_bp.LPARState.SHUTTING_DOWN, pvm_bp.LPARState.ERROR, pvm_bp.LPARState.RESUMING, pvm_bp.LPARState.SUSPENDING] for stop_state in stop_states: entry.state = stop_state vm.power_on(None, self.inst) mock_lock.assert_called_once_with('power_%s' % self.inst.uuid) mock_lock.reset_mock() self.assertEqual(0, mock_power_on.call_count) @mock.patch('pypowervm.tasks.power.power_on', autospec=True)
@mock.patch('nova.virt.powervm.vm.get_instance_wrapper') def test_power_on_negative(self, mock_wrp, mock_power_on): mock_wrp.return_value = mock.Mock(state=pvm_bp.LPARState.NOT_ACTIVATED) # Convertible (PowerVM) exception mock_power_on.side_effect = pvm_exc.VMPowerOnFailure( reason='Something bad', lpar_nm='TheLPAR') self.assertRaises(exception.InstancePowerOnFailure, vm.power_on, None, self.inst) # Non-pvm error raises directly mock_power_on.side_effect = ValueError() self.assertRaises(ValueError, vm.power_on, None, self.inst) @mock.patch('pypowervm.tasks.power.PowerOp', autospec=True) @mock.patch('pypowervm.tasks.power.power_off_progressive', autospec=True) @mock.patch('oslo_concurrency.lockutils.lock', autospec=True) @mock.patch('nova.virt.powervm.vm.get_instance_wrapper') def test_power_off(self, mock_wrap, mock_lock, mock_power_off, mock_pop): entry = mock.Mock(state=pvm_bp.LPARState.NOT_ACTIVATED) mock_wrap.return_value = entry vm.power_off(None, self.inst) self.assertEqual(0, mock_power_off.call_count) self.assertEqual(0, mock_pop.stop.call_count) mock_lock.assert_called_once_with('power_%s' % self.inst.uuid) stop_states = [ pvm_bp.LPARState.RUNNING, pvm_bp.LPARState.STARTING, pvm_bp.LPARState.OPEN_FIRMWARE, pvm_bp.LPARState.SHUTTING_DOWN, pvm_bp.LPARState.ERROR, pvm_bp.LPARState.RESUMING, pvm_bp.LPARState.SUSPENDING] for stop_state in stop_states: entry.state = stop_state mock_power_off.reset_mock() mock_pop.stop.reset_mock() mock_lock.reset_mock() vm.power_off(None, self.inst) mock_power_off.assert_called_once_with(entry) self.assertEqual(0, mock_pop.stop.call_count) mock_lock.assert_called_once_with('power_%s' % self.inst.uuid) mock_power_off.reset_mock() mock_lock.reset_mock() vm.power_off(None, self.inst, force_immediate=True, timeout=5) self.assertEqual(0, mock_power_off.call_count) mock_pop.stop.assert_called_once_with( entry, opts=mock.ANY, timeout=5) self.assertEqual('PowerOff(immediate=true, operation=shutdown)', str(mock_pop.stop.call_args[1]['opts'])) mock_lock.assert_called_once_with('power_%s' % self.inst.uuid) @mock.patch('pypowervm.tasks.power.power_off_progressive', autospec=True) @mock.patch('nova.virt.powervm.vm.get_instance_wrapper') def test_power_off_negative(self, mock_wrap, mock_power_off): """Negative tests.""" mock_wrap.return_value = mock.Mock(state=pvm_bp.LPARState.RUNNING) # Raise the expected pypowervm exception mock_power_off.side_effect = pvm_exc.VMPowerOffFailure( reason='Something bad.', lpar_nm='TheLPAR') # We should get a valid Nova exception that the compute manager expects self.assertRaises(exception.InstancePowerOffFailure, vm.power_off, None, self.inst) # Non-pvm error raises directly mock_power_off.side_effect = ValueError() self.assertRaises(ValueError, vm.power_off, None, self.inst) @mock.patch('pypowervm.tasks.power.power_on', autospec=True) @mock.patch('pypowervm.tasks.power.power_off_progressive', autospec=True) @mock.patch('pypowervm.tasks.power.PowerOp', autospec=True) @mock.patch('oslo_concurrency.lockutils.lock', autospec=True) @mock.patch('nova.virt.powervm.vm.get_instance_wrapper') def test_reboot(self, mock_wrap, mock_lock, mock_pop, mock_pwroff, mock_pwron): entry = mock.Mock(state=pvm_bp.LPARState.NOT_ACTIVATED) mock_wrap.return_value = entry # No power_off vm.reboot('adap', self.inst, False) mock_lock.assert_called_once_with('power_%s' % self.inst.uuid) mock_wrap.assert_called_once_with('adap', self.inst) mock_pwron.assert_called_once_with(entry, None) self.assertEqual(0, mock_pwroff.call_count) self.assertEqual(0, 
mock_pop.stop.call_count) mock_pwron.reset_mock() # power_off (no power_on) hard entry.state = pvm_bp.LPARState.RUNNING vm.reboot('adap', self.inst, True) self.assertEqual(0, mock_pwron.call_count) self.assertEqual(0, mock_pwroff.call_count) mock_pop.stop.assert_called_once_with(entry, opts=mock.ANY) self.assertEqual( 'PowerOff(immediate=true, operation=shutdown, restart=true)', str(mock_pop.stop.call_args[1]['opts'])) mock_pop.reset_mock() # power_off (no power_on) soft entry.state = pvm_bp.LPARState.RUNNING vm.reboot('adap', self.inst, False) self.assertEqual(0, mock_pwron.call_count) mock_pwroff.assert_called_once_with(entry, restart=True) self.assertEqual(0, mock_pop.stop.call_count) mock_pwroff.reset_mock() # PowerVM error is converted mock_pop.stop.side_effect = pvm_exc.TimeoutError("Timed out") self.assertRaises(exception.InstanceRebootFailure, vm.reboot, 'adap', self.inst, True) # Non-PowerVM error is raised directly mock_pwroff.side_effect = ValueError self.assertRaises(ValueError, vm.reboot, 'adap', self.inst, False) @mock.patch('oslo_serialization.jsonutils.loads') def test_get_vm_qp(self, mock_loads): self.apt.helpers = ['helper1', pvm_log.log_helper, 'helper3'] # Defaults self.assertEqual(mock_loads.return_value, vm.get_vm_qp(self.apt, 'lpar_uuid')) self.apt.read.assert_called_once_with( 'LogicalPartition', root_id='lpar_uuid', suffix_type='quick', suffix_parm=None) mock_loads.assert_called_once_with(self.apt.read.return_value.body) self.apt.read.reset_mock() mock_loads.reset_mock() # Specific qprop, no logging errors self.assertEqual(mock_loads.return_value, vm.get_vm_qp(self.apt, 'lpar_uuid', qprop='Prop', log_errors=False)) self.apt.read.assert_called_once_with( 'LogicalPartition', root_id='lpar_uuid', suffix_type='quick', suffix_parm='Prop', helpers=['helper1', 'helper3']) resp = mock.MagicMock() resp.status = 404 self.apt.read.side_effect = pvm_exc.HttpError(resp) self.assertRaises(exception.InstanceNotFound, vm.get_vm_qp, self.apt, 'lpar_uuid', log_errors=False) self.apt.read.side_effect = pvm_exc.Error("message", response=None) self.assertRaises(pvm_exc.Error, vm.get_vm_qp, self.apt, 'lpar_uuid', log_errors=False) resp.status = 500 self.apt.read.side_effect = pvm_exc.Error("message", response=resp) self.assertRaises(pvm_exc.Error, vm.get_vm_qp, self.apt, 'lpar_uuid', log_errors=False) @mock.patch('nova.virt.powervm.vm.get_pvm_uuid') @mock.patch('pypowervm.wrappers.network.CNA.search') @mock.patch('pypowervm.wrappers.network.CNA.get') def test_get_cnas(self, mock_get, mock_search, mock_uuid): # No kwargs: get self.assertEqual(mock_get.return_value, vm.get_cnas(self.apt, 'inst')) mock_uuid.assert_called_once_with('inst') mock_get.assert_called_once_with(self.apt, parent_type=pvm_lpar.LPAR, parent_uuid=mock_uuid.return_value) mock_search.assert_not_called() # With kwargs: search mock_get.reset_mock() mock_uuid.reset_mock() self.assertEqual(mock_search.return_value, vm.get_cnas( self.apt, 'inst', one=2, three=4)) mock_uuid.assert_called_once_with('inst') mock_search.assert_called_once_with( self.apt, parent_type=pvm_lpar.LPAR, parent_uuid=mock_uuid.return_value, one=2, three=4) mock_get.assert_not_called() def test_norm_mac(self): EXPECTED = "12:34:56:78:90:ab" self.assertEqual(EXPECTED, vm.norm_mac("12:34:56:78:90:ab")) self.assertEqual(EXPECTED, vm.norm_mac("1234567890ab")) self.assertEqual(EXPECTED, vm.norm_mac("12:34:56:78:90:AB")) self.assertEqual(EXPECTED, vm.norm_mac("1234567890AB")) 
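# A minimal sketch of the normalization test_norm_mac above expects (a
# hypothetical re-implementation for illustration; the real helper is
# nova.virt.powervm.vm.norm_mac):
#
#     def norm_mac(mac):
#         mac = mac.lower().replace(':', '')
#         return ':'.join(mac[i:i + 2] for i in range(0, 12, 2))
#
# e.g. norm_mac('1234567890AB') == '12:34:56:78:90:ab'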
nova-17.0.1/nova/tests/unit/virt/powervm/__init__.py0000666000175000017500000000306613250073126022453 0ustar zuulzuul00000000000000# Copyright 2014, 2017 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. # from nova.compute import power_state from nova.compute import vm_states from nova import objects from nova.tests import uuidsentinel TEST_FLAVOR = objects.flavor.Flavor( memory_mb=2048, swap=0, vcpu_weight=None, root_gb=10, id=2, name=u'm1.small', ephemeral_gb=0, rxtx_factor=1.0, flavorid=uuidsentinel.flav_id, vcpus=1) TEST_INSTANCE = objects.Instance( id=1, uuid=uuidsentinel.inst_id, display_name='Fake Instance', root_gb=10, ephemeral_gb=0, instance_type_id=TEST_FLAVOR.id, system_metadata={'image_os_distro': 'rhel'}, host='host1', flavor=TEST_FLAVOR, task_state=None, vm_state=vm_states.STOPPED, power_state=power_state.SHUTDOWN, ) IMAGE1 = { 'id': uuidsentinel.img_id, 'name': 'image1', 'size': 300, 'container_format': 'bare', 'disk_format': 'raw', 'checksum': 'b518a8ba2b152b5607aceb5703fac072', } TEST_IMAGE1 = objects.image_meta.ImageMeta.from_dict(IMAGE1) nova-17.0.1/nova/tests/unit/virt/powervm/test_media.py0000666000175000017500000002336113250073126023032 0ustar zuulzuul00000000000000# Copyright 2015, 2017 IBM Corp. # # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
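# ---------------------------------------------------------------------------
# Illustrative note (not part of the original module; kept as comments so the
# ``from __future__`` import below stays the first statement): the ISO naming
# rule exercised by test_crt_cfg_dr_iso. The expected values imply that the
# instance name is sanitized ('-' -> '_'), wrapped as 'cfg_<name>.iso', and
# truncated so the whole file name fits a 37-character vopt media limit. The
# limit and helper name here are assumptions for illustration only:
#
#     def _cfg_iso_name_sketch(instance_name, max_len=37):
#         name = 'cfg_%s.iso' % instance_name.replace('-', '_')
#         if len(name) > max_len:
#             name = name[:max_len - 4] + '.iso'
#         return name
#
#     _cfg_iso_name_sketch('fake-instance-with-name-that-is-too-long')
#     # -> 'cfg_fake_instance_with_name_that_.iso'
# ---------------------------------------------------------------------------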
from __future__ import absolute_import import fixtures import mock from pypowervm.tasks import scsi_mapper as tsk_map from pypowervm.tests import test_fixtures as pvm_fx from pypowervm.utils import transaction as pvm_tx from pypowervm.wrappers import network as pvm_net from pypowervm.wrappers import storage as pvm_stg from pypowervm.wrappers import virtual_io_server as pvm_vios import six if six.PY2: import __builtin__ as builtins elif six.PY3: import builtins from nova import test from nova.tests import uuidsentinel from nova.virt.powervm import media as m class TestConfigDrivePowerVM(test.NoDBTestCase): """Unit Tests for the ConfigDrivePowerVM class.""" def setUp(self): super(TestConfigDrivePowerVM, self).setUp() self.apt = self.useFixture(pvm_fx.AdapterFx()).adpt self.validate_vopt = self.useFixture(fixtures.MockPatch( 'pypowervm.tasks.vopt.validate_vopt_repo_exists', autospec=True)).mock self.validate_vopt.return_value = 'vios_uuid', 'vg_uuid' @mock.patch('nova.api.metadata.base.InstanceMetadata') @mock.patch('nova.virt.configdrive.ConfigDriveBuilder.make_drive') def test_crt_cfg_dr_iso(self, mock_mkdrv, mock_meta): """Validates that the image creation method works.""" cfg_dr_builder = m.ConfigDrivePowerVM(self.apt) mock_instance = mock.MagicMock() mock_instance.name = 'fake-instance' mock_instance.uuid = uuidsentinel.inst_id mock_files = mock.MagicMock() mock_net = mock.MagicMock() iso_path, file_name = cfg_dr_builder._create_cfg_dr_iso(mock_instance, mock_files, mock_net) self.assertEqual('cfg_fake_instance.iso', file_name) self.assertEqual('/tmp/cfgdrv/cfg_fake_instance.iso', iso_path) # Make sure the length is limited properly mock_instance.name = 'fake-instance-with-name-that-is-too-long' iso_path, file_name = cfg_dr_builder._create_cfg_dr_iso(mock_instance, mock_files, mock_net) self.assertEqual('cfg_fake_instance_with_name_that_.iso', file_name) self.assertEqual('/tmp/cfgdrv/cfg_fake_instance_with_name_that_.iso', iso_path) self.assertTrue(self.validate_vopt.called) mock_mkdrv.reset_mock() # Test retry vopt create mock_mkdrv.side_effect = [OSError, mock_mkdrv] mock_instance.name = 'fake-instance-2' iso_path, file_name = cfg_dr_builder._create_cfg_dr_iso(mock_instance, mock_files, mock_net) self.assertEqual('cfg_fake_instance_2.iso', file_name) self.assertEqual('/tmp/cfgdrv/cfg_fake_instance_2.iso', iso_path) self.assertTrue(self.validate_vopt.called) self.assertEqual(mock_mkdrv.call_count, 2) @mock.patch('nova.virt.powervm.vm.get_pvm_uuid') @mock.patch('pypowervm.tasks.scsi_mapper.build_vscsi_mapping') @mock.patch('pypowervm.tasks.scsi_mapper.add_map') @mock.patch('os.remove') @mock.patch('os.path.getsize') @mock.patch('pypowervm.tasks.storage.upload_vopt') @mock.patch.object(builtins, 'open') @mock.patch('nova.virt.powervm.media.ConfigDrivePowerVM.' 
'_create_cfg_dr_iso') def test_create_cfg_drv_vopt(self, mock_ccdi, mock_open, mock_upl, mock_getsize, mock_rm, mock_addmap, mock_bldmap, mock_vm_id): cfg_dr = m.ConfigDrivePowerVM(self.apt) mock_ccdi.return_value = 'iso_path', 'file_name' mock_upl.return_value = 'vopt', 'f_uuid' wtsk = mock.create_autospec(pvm_tx.WrapperTask, instance=True) ftsk = mock.create_autospec(pvm_tx.FeedTask, instance=True) ftsk.configure_mock(wrapper_tasks={'vios_uuid': wtsk}) def test_afs(add_func): # Validate the internal add_func vio = mock.create_autospec(pvm_vios.VIOS) self.assertEqual(mock_addmap.return_value, add_func(vio)) mock_vm_id.assert_called_once_with('inst') mock_bldmap.assert_called_once_with( None, vio, mock_vm_id.return_value, 'vopt') mock_addmap.assert_called_once_with(vio, mock_bldmap.return_value) wtsk.add_functor_subtask.side_effect = test_afs cfg_dr.create_cfg_drv_vopt( 'inst', 'files', 'netinfo', ftsk, admin_pass='pass') mock_ccdi.assert_called_once_with('inst', 'files', 'netinfo', admin_pass='pass') mock_open.assert_called_once_with('iso_path', 'rb') mock_getsize.assert_called_once_with('iso_path') mock_upl.assert_called_once_with( self.apt, 'vios_uuid', mock_open.return_value.__enter__.return_value, 'file_name', mock_getsize.return_value) mock_rm.assert_called_once_with('iso_path') wtsk.add_functor_subtask.assert_called_once() def test_sanitize_network_info(self): network_info = [{'type': 'lbr'}, {'type': 'pvm_sea'}, {'type': 'ovs'}] cfg_dr_builder = m.ConfigDrivePowerVM(self.apt) resp = cfg_dr_builder._sanitize_network_info(network_info) expected_ret = [{'type': 'vif'}, {'type': 'vif'}, {'type': 'ovs'}] self.assertEqual(resp, expected_ret) @mock.patch('pypowervm.wrappers.storage.VG', autospec=True) @mock.patch('pypowervm.tasks.storage.rm_vg_storage', autospec=True) @mock.patch('nova.virt.powervm.vm.get_pvm_uuid') @mock.patch('pypowervm.tasks.scsi_mapper.gen_match_func', autospec=True) @mock.patch('pypowervm.tasks.scsi_mapper.find_maps', autospec=True) @mock.patch('pypowervm.wrappers.virtual_io_server.VIOS', autospec=True) @mock.patch('taskflow.task.FunctorTask', autospec=True) def test_dlt_vopt(self, mock_functask, mock_vios, mock_find_maps, mock_gmf, mock_uuid, mock_rmstg, mock_vg): cfg_dr = m.ConfigDrivePowerVM(self.apt) wtsk = mock.create_autospec(pvm_tx.WrapperTask, instance=True) ftsk = mock.create_autospec(pvm_tx.FeedTask, instance=True) ftsk.configure_mock(wrapper_tasks={'vios_uuid': wtsk}) # Test with no media to remove mock_find_maps.return_value = [] cfg_dr.dlt_vopt('inst', ftsk) mock_uuid.assert_called_once_with('inst') mock_gmf.assert_called_once_with(pvm_stg.VOptMedia) wtsk.add_functor_subtask.assert_called_once_with( tsk_map.remove_maps, mock_uuid.return_value, match_func=mock_gmf.return_value) ftsk.get_wrapper.assert_called_once_with('vios_uuid') mock_find_maps.assert_called_once_with( ftsk.get_wrapper.return_value.scsi_mappings, client_lpar_id=mock_uuid.return_value, match_func=mock_gmf.return_value) mock_functask.assert_not_called() # Test with media to remove mock_find_maps.return_value = [mock.Mock(backing_storage=media) for media in ['m1', 'm2']] def test_functor_task(rm_vopt): # Validate internal rm_vopt function rm_vopt() mock_vg.get.assert_called_once_with( self.apt, uuid='vg_uuid', parent_type=pvm_vios.VIOS, parent_uuid='vios_uuid') mock_rmstg.assert_called_once_with( mock_vg.get.return_value, vopts=['m1', 'm2']) return 'functor_task' mock_functask.side_effect = test_functor_task cfg_dr.dlt_vopt('inst', ftsk) mock_functask.assert_called_once() 
ftsk.add_post_execute.assert_called_once_with('functor_task') def test_mgmt_cna_to_vif(self): mock_cna = mock.Mock(spec=pvm_net.CNA, mac="FAD4433ED120") # Run cfg_dr_builder = m.ConfigDrivePowerVM(self.apt) vif = cfg_dr_builder._mgmt_cna_to_vif(mock_cna) # Validate self.assertEqual(vif.get('address'), "fa:d4:43:3e:d1:20") self.assertEqual(vif.get('id'), 'mgmt_vif') self.assertIsNotNone(vif.get('network')) self.assertEqual(1, len(vif.get('network').get('subnets'))) subnet = vif.get('network').get('subnets')[0] self.assertEqual(6, subnet.get('version')) self.assertEqual('fe80::/64', subnet.get('cidr')) ip = subnet.get('ips')[0] self.assertEqual('fe80::f8d4:43ff:fe3e:d120', ip.get('address')) def test_mac_to_link_local(self): mac = 'fa:d4:43:3e:d1:20' self.assertEqual('fe80::f8d4:43ff:fe3e:d120', m.ConfigDrivePowerVM._mac_to_link_local(mac)) mac = '00:00:00:00:00:00' self.assertEqual('fe80::0200:00ff:fe00:0000', m.ConfigDrivePowerVM._mac_to_link_local(mac)) mac = 'ff:ff:ff:ff:ff:ff' self.assertEqual('fe80::fdff:ffff:feff:ffff', m.ConfigDrivePowerVM._mac_to_link_local(mac)) nova-17.0.1/nova/tests/unit/virt/powervm/test_vif.py0000666000175000017500000003307213250073126022537 0ustar zuulzuul00000000000000# Copyright 2017 IBM Corp. # # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
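# ---------------------------------------------------------------------------
# Illustrative sketch (not part of the original module): the vif-type
# dispatch exercised by TestVifFunctions.test_build_vif_driver below. The
# real _build_vif_driver maps a Neutron vif 'type' to a driver class and
# raises VirtualInterfacePlugException for a missing or unknown type; the
# mapping dict and stand-in exception here are assumptions for illustration.
# ---------------------------------------------------------------------------
class _PlugErrorSketch(Exception):
    """Stand-in for exception.VirtualInterfacePlugException."""


_VIF_DRIVERS_SKETCH = {'ovs': 'PvmOvsVifDriver'}  # vif type -> driver name


def _build_vif_driver_sketch(vif):
    """Return the driver name for a vif dict, or raise if unsupported."""
    vif_type = vif.get('type')
    if vif_type not in _VIF_DRIVERS_SKETCH:
        raise _PlugErrorSketch('No PowerVM vif driver for type %s' % vif_type)
    return _VIF_DRIVERS_SKETCH[vif_type]


assert _build_vif_driver_sketch({'type': 'ovs'}) == 'PvmOvsVifDriver'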
import mock from oslo_config import cfg from pypowervm import exceptions as pvm_ex from pypowervm.wrappers import network as pvm_net from nova import exception from nova.network import model from nova import test from nova.virt.powervm import vif CONF = cfg.CONF def cna(mac): """Builds a mock Client Network Adapter for unit tests.""" return mock.Mock(spec=pvm_net.CNA, mac=mac, vswitch_uri='fake_href') class TestVifFunctions(test.NoDBTestCase): def setUp(self): super(TestVifFunctions, self).setUp() self.adpt = mock.Mock() @mock.patch('nova.virt.powervm.vif.PvmOvsVifDriver') def test_build_vif_driver(self, mock_driver): # Valid vif type driver = vif._build_vif_driver(self.adpt, 'instance', {'type': 'ovs'}) self.assertEqual(mock_driver.return_value, driver) mock_driver.reset_mock() # Fail if no vif type self.assertRaises(exception.VirtualInterfacePlugException, vif._build_vif_driver, self.adpt, 'instance', {'type': None}) mock_driver.assert_not_called() # Fail if invalid vif type self.assertRaises(exception.VirtualInterfacePlugException, vif._build_vif_driver, self.adpt, 'instance', {'type': 'bad_type'}) mock_driver.assert_not_called() @mock.patch('oslo_serialization.jsonutils.dumps') @mock.patch('pypowervm.wrappers.event.Event') def test_push_vif_event(self, mock_event, mock_dumps): mock_vif = mock.Mock(mac='MAC', href='HREF') vif._push_vif_event(self.adpt, 'action', mock_vif, mock.Mock(), 'pvm_sea') mock_dumps.assert_called_once_with( {'provider': 'NOVA_PVM_VIF', 'action': 'action', 'mac': 'MAC', 'type': 'pvm_sea'}) mock_event.bld.assert_called_once_with(self.adpt, 'HREF', mock_dumps.return_value) mock_event.bld.return_value.create.assert_called_once_with() mock_dumps.reset_mock() mock_event.bld.reset_mock() mock_event.bld.return_value.create.reset_mock() # Exception reraises mock_event.bld.return_value.create.side_effect = IndexError self.assertRaises(IndexError, vif._push_vif_event, self.adpt, 'action', mock_vif, mock.Mock(), 'pvm_sea') mock_dumps.assert_called_once_with( {'provider': 'NOVA_PVM_VIF', 'action': 'action', 'mac': 'MAC', 'type': 'pvm_sea'}) mock_event.bld.assert_called_once_with(self.adpt, 'HREF', mock_dumps.return_value) mock_event.bld.return_value.create.assert_called_once_with() @mock.patch('nova.virt.powervm.vif._push_vif_event') @mock.patch('nova.virt.powervm.vif._build_vif_driver') def test_plug(self, mock_bld_drv, mock_event): """Test the top-level plug method.""" mock_vif = {'address': 'MAC', 'type': 'pvm_sea'} # 1) With new_vif=True (default) vnet = vif.plug(self.adpt, 'instance', mock_vif) mock_bld_drv.assert_called_once_with(self.adpt, 'instance', mock_vif) mock_bld_drv.return_value.plug.assert_called_once_with(mock_vif, new_vif=True) self.assertEqual(mock_bld_drv.return_value.plug.return_value, vnet) mock_event.assert_called_once_with(self.adpt, 'plug', vnet, mock.ANY, 'pvm_sea') # Clean up mock_bld_drv.reset_mock() mock_bld_drv.return_value.plug.reset_mock() mock_event.reset_mock() # 2) Plug returns None (which it should IRL whenever new_vif=False). 
mock_bld_drv.return_value.plug.return_value = None vnet = vif.plug(self.adpt, 'instance', mock_vif, new_vif=False) mock_bld_drv.assert_called_once_with(self.adpt, 'instance', mock_vif) mock_bld_drv.return_value.plug.assert_called_once_with(mock_vif, new_vif=False) self.assertIsNone(vnet) mock_event.assert_not_called() @mock.patch('nova.virt.powervm.vif._build_vif_driver') def test_plug_raises(self, mock_vif_drv): """HttpError is converted to VirtualInterfacePlugException.""" vif_drv = mock.Mock(plug=mock.Mock(side_effect=pvm_ex.HttpError( resp=mock.Mock()))) mock_vif_drv.return_value = vif_drv mock_vif = {'address': 'vifaddr'} self.assertRaises(exception.VirtualInterfacePlugException, vif.plug, 'adap', 'inst', mock_vif, new_vif='new_vif') mock_vif_drv.assert_called_once_with('adap', 'inst', mock_vif) vif_drv.plug.assert_called_once_with(mock_vif, new_vif='new_vif') @mock.patch('nova.virt.powervm.vif._push_vif_event') @mock.patch('nova.virt.powervm.vif._build_vif_driver') def test_unplug(self, mock_bld_drv, mock_event): """Test the top-level unplug method.""" mock_vif = {'address': 'MAC', 'type': 'pvm_sea'} # 1) With default cna_w_list mock_bld_drv.return_value.unplug.return_value = 'vnet_w' vif.unplug(self.adpt, 'instance', mock_vif) mock_bld_drv.assert_called_once_with(self.adpt, 'instance', mock_vif) mock_bld_drv.return_value.unplug.assert_called_once_with( mock_vif, cna_w_list=None) mock_event.assert_called_once_with(self.adpt, 'unplug', 'vnet_w', mock.ANY, 'pvm_sea') # Clean up mock_bld_drv.reset_mock() mock_bld_drv.return_value.unplug.reset_mock() mock_event.reset_mock() # 2) With specified cna_w_list mock_bld_drv.return_value.unplug.return_value = None vif.unplug(self.adpt, 'instance', mock_vif, cna_w_list='cnalist') mock_bld_drv.assert_called_once_with(self.adpt, 'instance', mock_vif) mock_bld_drv.return_value.unplug.assert_called_once_with( mock_vif, cna_w_list='cnalist') mock_event.assert_not_called() @mock.patch('nova.virt.powervm.vif._build_vif_driver') def test_unplug_raises(self, mock_vif_drv): """HttpError is converted to VirtualInterfacePlugException.""" vif_drv = mock.Mock(unplug=mock.Mock(side_effect=pvm_ex.HttpError( resp=mock.Mock()))) mock_vif_drv.return_value = vif_drv mock_vif = {'address': 'vifaddr'} self.assertRaises(exception.VirtualInterfaceUnplugException, vif.unplug, 'adap', 'inst', mock_vif, cna_w_list='cna_w_list') mock_vif_drv.assert_called_once_with('adap', 'inst', mock_vif) vif_drv.unplug.assert_called_once_with( mock_vif, cna_w_list='cna_w_list') class TestVifOvsDriver(test.NoDBTestCase): def setUp(self): super(TestVifOvsDriver, self).setUp() self.adpt = mock.Mock() self.inst = mock.MagicMock(uuid='inst_uuid') self.drv = vif.PvmOvsVifDriver(self.adpt, self.inst) @mock.patch('pypowervm.tasks.cna.crt_p2p_cna', autospec=True) @mock.patch('pypowervm.tasks.partition.get_this_partition', autospec=True) @mock.patch('nova.virt.powervm.vm.get_pvm_uuid') def test_plug(self, mock_pvm_uuid, mock_mgmt_lpar, mock_p2p_cna,): # Mock the data mock_pvm_uuid.return_value = 'lpar_uuid' mock_mgmt_lpar.return_value = mock.Mock(uuid='mgmt_uuid') # mock_trunk_dev_name.return_value = 'device' cna_w, trunk_wraps = mock.MagicMock(), [mock.MagicMock()] mock_p2p_cna.return_value = cna_w, trunk_wraps # Run the plug network_model = model.Model({'bridge': 'br0', 'meta': {'mtu': 1450}}) mock_vif = model.VIF(address='aa:bb:cc:dd:ee:ff', id='vif_id', network=network_model, devname='device') self.drv.plug(mock_vif) # Validate the calls ovs_ext_ids = ('iface-id=vif_id,iface-status=active,' 
'attached-mac=aa:bb:cc:dd:ee:ff,vm-uuid=inst_uuid') mock_p2p_cna.assert_called_once_with( self.adpt, None, 'lpar_uuid', ['mgmt_uuid'], 'NovaLinkVEABridge', configured_mtu=1450, crt_vswitch=True, mac_addr='aa:bb:cc:dd:ee:ff', dev_name='device', ovs_bridge='br0', ovs_ext_ids=ovs_ext_ids) @mock.patch('pypowervm.tasks.partition.get_this_partition', autospec=True) @mock.patch('nova.virt.powervm.vm.get_pvm_uuid') @mock.patch('nova.virt.powervm.vm.get_cnas') @mock.patch('pypowervm.tasks.cna.find_trunks', autospec=True) def test_plug_existing_vif(self, mock_find_trunks, mock_get_cnas, mock_pvm_uuid, mock_mgmt_lpar): # Mock the data t1, t2 = mock.MagicMock(), mock.MagicMock() mock_find_trunks.return_value = [t1, t2] mock_cna = mock.Mock(mac='aa:bb:cc:dd:ee:ff') mock_get_cnas.return_value = [mock_cna] mock_pvm_uuid.return_value = 'lpar_uuid' mock_mgmt_lpar.return_value = mock.Mock(uuid='mgmt_uuid') self.inst = mock.MagicMock(uuid='c2e7ff9f-b9b6-46fa-8716-93bbb795b8b4') self.drv = vif.PvmOvsVifDriver(self.adpt, self.inst) # Run the plug network_model = model.Model({'bridge': 'br0', 'meta': {'mtu': 1500}}) mock_vif = model.VIF(address='aa:bb:cc:dd:ee:ff', id='vif_id', network=network_model, devname='devname') resp = self.drv.plug(mock_vif, new_vif=False) self.assertIsNone(resp) # Validate if trunk.update got invoked for all trunks of CNA of vif self.assertTrue(t1.update.called) self.assertTrue(t2.update.called) @mock.patch('pypowervm.tasks.cna.find_trunks') @mock.patch('nova.virt.powervm.vm.get_cnas') def test_unplug(self, mock_get_cnas, mock_find_trunks): # Set up the mocks mock_cna = mock.Mock(mac='aa:bb:cc:dd:ee:ff') mock_get_cnas.return_value = [mock_cna] t1, t2 = mock.MagicMock(), mock.MagicMock() mock_find_trunks.return_value = [t1, t2] # Call the unplug mock_vif = {'address': 'aa:bb:cc:dd:ee:ff', 'network': {'bridge': 'br-int'}} self.drv.unplug(mock_vif) # The trunks and the cna should have been deleted self.assertTrue(t1.delete.called) self.assertTrue(t2.delete.called) self.assertTrue(mock_cna.delete.called) class TestVifSeaDriver(test.NoDBTestCase): def setUp(self): super(TestVifSeaDriver, self).setUp() self.adpt = mock.Mock() self.inst = mock.Mock() self.drv = vif.PvmSeaVifDriver(self.adpt, self.inst) @mock.patch('nova.virt.powervm.vm.get_pvm_uuid') @mock.patch('pypowervm.tasks.cna.crt_cna') def test_plug_from_neutron(self, mock_crt_cna, mock_pvm_uuid): """Tests that a VIF can be created. Mocks Neutron net""" # Set up the mocks. Look like Neutron fake_vif = {'details': {'vlan': 5}, 'network': {'meta': {}}, 'address': 'aabbccddeeff'} def validate_crt(adpt, host_uuid, lpar_uuid, vlan, mac_addr=None): self.assertIsNone(host_uuid) self.assertEqual(5, vlan) self.assertEqual('aabbccddeeff', mac_addr) return pvm_net.CNA.bld(self.adpt, 5, 'host_uuid', mac_addr=mac_addr) mock_crt_cna.side_effect = validate_crt # Invoke resp = self.drv.plug(fake_vif) # Validate (along with validate method above) self.assertEqual(1, mock_crt_cna.call_count) self.assertIsNotNone(resp) self.assertIsInstance(resp, pvm_net.CNA) def test_plug_existing_vif(self): """Tests that a VIF need not be created.""" # Set up the mocks fake_vif = {'network': {'meta': {'vlan': 5}}, 'address': 'aabbccddeeff'} # Invoke resp = self.drv.plug(fake_vif, new_vif=False) self.assertIsNone(resp) @mock.patch('nova.virt.powervm.vm.get_cnas') def test_unplug_vifs(self, mock_vm_get): """Tests that a delete of the vif can be done.""" # Mock up the CNA response. Two should already exist, the other # should not. 
cnas = [cna('AABBCCDDEEFF'), cna('AABBCCDDEE11'), cna('AABBCCDDEE22')] mock_vm_get.return_value = cnas # Run method. The AABBCCDDEE11 won't be unplugged (wasn't invoked # below) and the last unplug will also just no-op because its not on # the VM. self.drv.unplug({'address': 'aa:bb:cc:dd:ee:ff'}) self.drv.unplug({'address': 'aa:bb:cc:dd:ee:22'}) self.drv.unplug({'address': 'aa:bb:cc:dd:ee:33'}) # The delete should have only been called once for each applicable vif. # The second CNA didn't have a matching mac so it should be skipped. self.assertEqual(1, cnas[0].delete.call_count) self.assertEqual(0, cnas[1].delete.call_count) self.assertEqual(1, cnas[2].delete.call_count) nova-17.0.1/nova/tests/unit/virt/test_imagecache.py0000666000175000017500000001551213250073126022321 0ustar zuulzuul00000000000000# Copyright 2013 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from nova import block_device from nova.compute import vm_states import nova.conf from nova import context from nova import objects from nova.objects import block_device as block_device_obj from nova import test from nova.tests.unit import fake_instance from nova.tests import uuidsentinel as uuids from nova.virt import imagecache CONF = nova.conf.CONF swap_bdm_128 = [block_device.BlockDeviceDict( {'id': 1, 'instance_uuid': uuids.instance, 'device_name': '/dev/sdb1', 'source_type': 'blank', 'destination_type': 'local', 'delete_on_termination': True, 'guest_format': 'swap', 'disk_bus': 'scsi', 'volume_size': 128, 'boot_index': -1})] swap_bdm_256 = [block_device.BlockDeviceDict( {'id': 1, 'instance_uuid': uuids.instance, 'device_name': '/dev/sdb1', 'source_type': 'blank', 'destination_type': 'local', 'delete_on_termination': True, 'guest_format': 'swap', 'disk_bus': 'scsi', 'volume_size': 256, 'boot_index': -1})] class ImageCacheManagerTests(test.NoDBTestCase): def test_configurationi_defaults(self): self.assertEqual(2400, CONF.image_cache_manager_interval) self.assertEqual('_base', CONF.image_cache_subdirectory_name) self.assertTrue(CONF.remove_unused_base_images) self.assertEqual(24 * 3600, CONF.remove_unused_original_minimum_age_seconds) def test_cache_manager(self): cache_manager = imagecache.ImageCacheManager() self.assertTrue(cache_manager.remove_unused_base_images) self.assertRaises(NotImplementedError, cache_manager.update, None, []) self.assertRaises(NotImplementedError, cache_manager._get_base) self.assertRaises(NotImplementedError, cache_manager._scan_base_images, None) self.assertRaises(NotImplementedError, cache_manager._age_and_verify_cached_images, None, [], None) @mock.patch.object(objects.BlockDeviceMappingList, 'bdms_by_instance_uuid') def test_list_running_instances(self, mock_bdms_by_uuid): instances = [{'image_ref': '1', 'host': CONF.host, 'id': '1', 'uuid': uuids.instance_1, 'vm_state': '', 'task_state': ''}, {'image_ref': '2', 'host': CONF.host, 'id': '2', 'uuid': uuids.instance_2, 'vm_state': '', 'task_state': ''}, {'image_ref': '2', 'kernel_id': '21', 'ramdisk_id': '22', 'host': 'remotehost', 
'id': '3', 'uuid': uuids.instance_3, 'vm_state': '', 'task_state': ''}] all_instances = [fake_instance.fake_instance_obj(None, **instance) for instance in instances] image_cache_manager = imagecache.ImageCacheManager() ctxt = context.get_admin_context() swap_bdm_256_list = block_device_obj.block_device_make_list_from_dicts( ctxt, swap_bdm_256) swap_bdm_128_list = block_device_obj.block_device_make_list_from_dicts( ctxt, swap_bdm_128) mock_bdms_by_uuid.return_value = {uuids.instance_1: swap_bdm_256_list, uuids.instance_2: swap_bdm_128_list, uuids.instance_3: swap_bdm_128_list} # The argument here should be a context, but it's mocked out running = image_cache_manager._list_running_instances(ctxt, all_instances) mock_bdms_by_uuid.assert_called_once_with(ctxt, [uuids.instance_1, uuids.instance_2, uuids.instance_3]) self.assertEqual(4, len(running['used_images'])) self.assertEqual((1, 0, ['instance-00000001']), running['used_images']['1']) self.assertEqual((1, 1, ['instance-00000002', 'instance-00000003']), running['used_images']['2']) self.assertEqual((0, 1, ['instance-00000003']), running['used_images']['21']) self.assertEqual((0, 1, ['instance-00000003']), running['used_images']['22']) self.assertIn('instance-00000001', running['instance_names']) self.assertIn(uuids.instance_1, running['instance_names']) self.assertEqual(len(running['used_swap_images']), 2) self.assertIn('swap_128', running['used_swap_images']) self.assertIn('swap_256', running['used_swap_images']) @mock.patch.object(objects.BlockDeviceMappingList, 'bdms_by_instance_uuid') def test_list_resizing_instances(self, mock_bdms_by_uuid): instances = [{'image_ref': '1', 'host': CONF.host, 'id': '1', 'uuid': uuids.instance, 'vm_state': vm_states.RESIZED, 'task_state': None}] all_instances = [fake_instance.fake_instance_obj(None, **instance) for instance in instances] image_cache_manager = imagecache.ImageCacheManager() ctxt = context.get_admin_context() bdms = block_device_obj.block_device_make_list_from_dicts( ctxt, swap_bdm_256) mock_bdms_by_uuid.return_value = {uuids.instance: bdms} running = image_cache_manager._list_running_instances(ctxt, all_instances) mock_bdms_by_uuid.assert_called_once_with(ctxt, [uuids.instance]) self.assertEqual(1, len(running['used_images'])) self.assertEqual((1, 0, ['instance-00000001']), running['used_images']['1']) self.assertEqual(set(['instance-00000001', uuids.instance, 'instance-00000001_resize', '%s_resize' % uuids.instance]), running['instance_names']) nova-17.0.1/nova/tests/unit/virt/test_hardware.py0000666000175000017500000042235313250073126022055 0ustar zuulzuul00000000000000# Copyright 2014 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
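# ---------------------------------------------------------------------------
# Illustrative sketch (not part of the original module): the spec grammar
# exercised by CpuSetTestCase below. hw.parse_cpu_spec accepts ranges
# ('1-3'), single ids, and '^' exclusions. This simplified re-implementation
# mirrors the accepted syntax only; it does not reproduce all of the
# malformed-input cases the real parser rejects with exception.Invalid.
# ---------------------------------------------------------------------------
def _parse_cpu_spec_sketch(spec):
    """Expand a spec such as '1-3,5,^2' into set([1, 3, 5])."""
    include, exclude = set(), set()
    for part in spec.split(','):
        part = part.strip()
        if not part:
            continue
        target = include
        if part.startswith('^'):
            target = exclude
            part = part[1:].strip()
        if '-' in part:
            lo, hi = [int(p) for p in part.split('-')]
            target.update(range(lo, hi + 1))
        else:
            target.add(int(part))
    return include - exclude


assert _parse_cpu_spec_sketch('1-3,5,^2') == set([1, 3, 5])
assert _parse_cpu_spec_sketch('0-3,^1-2') == set([0, 3])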
import collections import copy import mock from oslo_serialization import jsonutils import six from nova import context from nova import exception from nova import objects from nova.objects import base as base_obj from nova.objects import fields from nova.pci import stats from nova import test from nova.tests.unit import fake_pci_device_pools as fake_pci from nova.tests import uuidsentinel as uuids from nova.virt import hardware as hw class InstanceInfoTests(test.NoDBTestCase): def test_instance_info_default(self): ii = hw.InstanceInfo('fake-state') self.assertEqual('fake-state', ii.state) self.assertIsNone(ii.internal_id) def test_instance_info(self): ii = hw.InstanceInfo(state='fake-state', internal_id='fake-id') self.assertEqual('fake-state', ii.state) self.assertEqual('fake-id', ii.internal_id) def test_instance_info_equals(self): ii1 = hw.InstanceInfo(state='fake-state', internal_id='fake-id') ii2 = hw.InstanceInfo(state='fake-state', internal_id='fake-id') ii3 = hw.InstanceInfo(state='fake-estat', internal_id='fake-di') self.assertEqual(ii1, ii2) self.assertNotEqual(ii1, ii3) class CpuSetTestCase(test.NoDBTestCase): def test_get_vcpu_pin_set(self): self.flags(vcpu_pin_set="1-3,5,^2") cpuset_ids = hw.get_vcpu_pin_set() self.assertEqual(set([1, 3, 5]), cpuset_ids) def test_parse_cpu_spec_none_returns_none(self): self.flags(vcpu_pin_set=None) cpuset_ids = hw.get_vcpu_pin_set() self.assertIsNone(cpuset_ids) def test_parse_cpu_spec_valid_syntax_works(self): cpuset_ids = hw.parse_cpu_spec("1") self.assertEqual(set([1]), cpuset_ids) cpuset_ids = hw.parse_cpu_spec("1,2") self.assertEqual(set([1, 2]), cpuset_ids) cpuset_ids = hw.parse_cpu_spec(", , 1 , ,, 2, ,") self.assertEqual(set([1, 2]), cpuset_ids) cpuset_ids = hw.parse_cpu_spec("1-1") self.assertEqual(set([1]), cpuset_ids) cpuset_ids = hw.parse_cpu_spec(" 1 - 1, 1 - 2 , 1 -3") self.assertEqual(set([1, 2, 3]), cpuset_ids) cpuset_ids = hw.parse_cpu_spec("1,^2") self.assertEqual(set([1]), cpuset_ids) cpuset_ids = hw.parse_cpu_spec("1-2, ^1") self.assertEqual(set([2]), cpuset_ids) cpuset_ids = hw.parse_cpu_spec("1-3,5,^2") self.assertEqual(set([1, 3, 5]), cpuset_ids) cpuset_ids = hw.parse_cpu_spec(" 1 - 3 , ^2, 5") self.assertEqual(set([1, 3, 5]), cpuset_ids) cpuset_ids = hw.parse_cpu_spec(" 1,1, ^1") self.assertEqual(set([]), cpuset_ids) cpuset_ids = hw.parse_cpu_spec("^0-1") self.assertEqual(set([]), cpuset_ids) cpuset_ids = hw.parse_cpu_spec("0-3,^1-2") self.assertEqual(set([0, 3]), cpuset_ids) def test_parse_cpu_spec_invalid_syntax_raises(self): self.assertRaises(exception.Invalid, hw.parse_cpu_spec, " -1-3,5,^2") self.assertRaises(exception.Invalid, hw.parse_cpu_spec, "1-3-,5,^2") self.assertRaises(exception.Invalid, hw.parse_cpu_spec, "-3,5,^2") self.assertRaises(exception.Invalid, hw.parse_cpu_spec, "1-,5,^2") self.assertRaises(exception.Invalid, hw.parse_cpu_spec, "1-3,5,^2^") self.assertRaises(exception.Invalid, hw.parse_cpu_spec, "1-3,5,^2-") self.assertRaises(exception.Invalid, hw.parse_cpu_spec, "--13,^^5,^2") self.assertRaises(exception.Invalid, hw.parse_cpu_spec, "a-3,5,^2") self.assertRaises(exception.Invalid, hw.parse_cpu_spec, "1-a,5,^2") self.assertRaises(exception.Invalid, hw.parse_cpu_spec, "1-3,b,^2") self.assertRaises(exception.Invalid, hw.parse_cpu_spec, "1-3,5,^c") self.assertRaises(exception.Invalid, hw.parse_cpu_spec, "3 - 1, 5 , ^ 2 ") def test_format_cpu_spec(self): cpus = set([]) spec = hw.format_cpu_spec(cpus) self.assertEqual("", spec) cpus = [] spec = hw.format_cpu_spec(cpus) self.assertEqual("", spec) cpus = 
set([1, 3]) spec = hw.format_cpu_spec(cpus) self.assertEqual("1,3", spec) cpus = [1, 3] spec = hw.format_cpu_spec(cpus) self.assertEqual("1,3", spec) cpus = set([1, 2, 4, 6]) spec = hw.format_cpu_spec(cpus) self.assertEqual("1-2,4,6", spec) cpus = [1, 2, 4, 6] spec = hw.format_cpu_spec(cpus) self.assertEqual("1-2,4,6", spec) cpus = set([10, 11, 13, 14, 15, 16, 19, 20, 40, 42, 48]) spec = hw.format_cpu_spec(cpus) self.assertEqual("10-11,13-16,19-20,40,42,48", spec) cpus = [10, 11, 13, 14, 15, 16, 19, 20, 40, 42, 48] spec = hw.format_cpu_spec(cpus) self.assertEqual("10-11,13-16,19-20,40,42,48", spec) cpus = set([1, 2, 4, 6]) spec = hw.format_cpu_spec(cpus, allow_ranges=False) self.assertEqual("1,2,4,6", spec) cpus = [1, 2, 4, 6] spec = hw.format_cpu_spec(cpus, allow_ranges=False) self.assertEqual("1,2,4,6", spec) cpus = set([10, 11, 13, 14, 15, 16, 19, 20, 40, 42, 48]) spec = hw.format_cpu_spec(cpus, allow_ranges=False) self.assertEqual("10,11,13,14,15,16,19,20,40,42,48", spec) cpus = [10, 11, 13, 14, 15, 16, 19, 20, 40, 42, 48] spec = hw.format_cpu_spec(cpus, allow_ranges=False) self.assertEqual("10,11,13,14,15,16,19,20,40,42,48", spec) class VCPUTopologyTest(test.NoDBTestCase): def test_validate_config(self): testdata = [ { # Flavor sets preferred topology only "flavor": objects.Flavor(vcpus=16, memory_mb=2048, extra_specs={ "hw:cpu_sockets": "8", "hw:cpu_cores": "2", "hw:cpu_threads": "1", }), "image": { "properties": {} }, "expect": ( 8, 2, 1, 65536, 65536, 65536 ) }, { # Image topology overrides flavor "flavor": objects.Flavor(vcpus=16, memory_mb=2048, extra_specs={ "hw:cpu_sockets": "8", "hw:cpu_cores": "2", "hw:cpu_threads": "1", "hw:cpu_max_threads": "2", }), "image": { "properties": { "hw_cpu_sockets": "4", "hw_cpu_cores": "2", "hw_cpu_threads": "2", } }, "expect": ( 4, 2, 2, 65536, 65536, 2, ) }, { # Partial image topology overrides flavor "flavor": objects.Flavor(vcpus=16, memory_mb=2048, extra_specs={ "hw:cpu_sockets": "8", "hw:cpu_cores": "2", "hw:cpu_threads": "1", }), "image": { "properties": { "hw_cpu_sockets": "2", } }, "expect": ( 2, -1, -1, 65536, 65536, 65536, ) }, { # Restrict use of threads "flavor": objects.Flavor(vcpus=16, memory_mb=2048, extra_specs={ "hw:cpu_max_threads": "2", }), "image": { "properties": { "hw_cpu_max_threads": "1", } }, "expect": ( -1, -1, -1, 65536, 65536, 1, ) }, { # Force use of at least two sockets "flavor": objects.Flavor(vcpus=16, memory_mb=2048, extra_specs={ "hw:cpu_max_cores": "8", "hw:cpu_max_threads": "1", }), "image": { "properties": {} }, "expect": ( -1, -1, -1, 65536, 8, 1 ) }, { # Image limits reduce flavor "flavor": objects.Flavor(vcpus=16, memory_mb=2048, extra_specs={ "hw:cpu_max_cores": "8", "hw:cpu_max_threads": "1", }), "image": { "properties": { "hw_cpu_max_cores": "4", } }, "expect": ( -1, -1, -1, 65536, 4, 1 ) }, { # Image limits kill flavor preferred "flavor": objects.Flavor(vcpus=16, memory_mb=2048, extra_specs={ "hw:cpu_sockets": "2", "hw:cpu_cores": "8", "hw:cpu_threads": "1", }), "image": { "properties": { "hw_cpu_max_cores": "4", } }, "expect": ( -1, -1, -1, 65536, 4, 65536 ) }, { # Image limits cannot exceed flavor "flavor": objects.Flavor(vcpus=16, memory_mb=2048, extra_specs={ "hw:cpu_max_cores": "8", "hw:cpu_max_threads": "1", }), "image": { "properties": { "hw_cpu_max_cores": "16", } }, "expect": exception.ImageVCPULimitsRangeExceeded, }, { # Image preferred cannot exceed flavor "flavor": objects.Flavor(vcpus=16, memory_mb=2048, extra_specs={ "hw:cpu_max_cores": "8", "hw:cpu_max_threads": "1", }), "image": { 
"properties": { "hw_cpu_cores": "16", } }, "expect": exception.ImageVCPUTopologyRangeExceeded, }, ] for topo_test in testdata: image_meta = objects.ImageMeta.from_dict(topo_test["image"]) if type(topo_test["expect"]) == tuple: (preferred, maximum) = hw._get_cpu_topology_constraints( topo_test["flavor"], image_meta) self.assertEqual(topo_test["expect"][0], preferred.sockets) self.assertEqual(topo_test["expect"][1], preferred.cores) self.assertEqual(topo_test["expect"][2], preferred.threads) self.assertEqual(topo_test["expect"][3], maximum.sockets) self.assertEqual(topo_test["expect"][4], maximum.cores) self.assertEqual(topo_test["expect"][5], maximum.threads) else: self.assertRaises(topo_test["expect"], hw._get_cpu_topology_constraints, topo_test["flavor"], image_meta) def test_possible_topologies(self): testdata = [ { "allow_threads": True, "vcpus": 8, "maxsockets": 8, "maxcores": 8, "maxthreads": 2, "expect": [ [8, 1, 1], [4, 2, 1], [2, 4, 1], [1, 8, 1], [4, 1, 2], [2, 2, 2], [1, 4, 2], ] }, { "allow_threads": False, "vcpus": 8, "maxsockets": 8, "maxcores": 8, "maxthreads": 2, "expect": [ [8, 1, 1], [4, 2, 1], [2, 4, 1], [1, 8, 1], ] }, { "allow_threads": True, "vcpus": 8, "maxsockets": 1024, "maxcores": 1024, "maxthreads": 2, "expect": [ [8, 1, 1], [4, 2, 1], [2, 4, 1], [1, 8, 1], [4, 1, 2], [2, 2, 2], [1, 4, 2], ] }, { "allow_threads": True, "vcpus": 8, "maxsockets": 1024, "maxcores": 1, "maxthreads": 2, "expect": [ [8, 1, 1], [4, 1, 2], ] }, { "allow_threads": True, "vcpus": 7, "maxsockets": 8, "maxcores": 8, "maxthreads": 2, "expect": [ [7, 1, 1], [1, 7, 1], ] }, { "allow_threads": True, "vcpus": 8, "maxsockets": 2, "maxcores": 1, "maxthreads": 1, "expect": exception.ImageVCPULimitsRangeImpossible, }, { "allow_threads": False, "vcpus": 8, "maxsockets": 2, "maxcores": 1, "maxthreads": 4, "expect": exception.ImageVCPULimitsRangeImpossible, }, ] for topo_test in testdata: if type(topo_test["expect"]) == list: actual = [] for topology in hw._get_possible_cpu_topologies( topo_test["vcpus"], objects.VirtCPUTopology( sockets=topo_test["maxsockets"], cores=topo_test["maxcores"], threads=topo_test["maxthreads"]), topo_test["allow_threads"]): actual.append([topology.sockets, topology.cores, topology.threads]) self.assertEqual(topo_test["expect"], actual) else: self.assertRaises(topo_test["expect"], hw._get_possible_cpu_topologies, topo_test["vcpus"], objects.VirtCPUTopology( sockets=topo_test["maxsockets"], cores=topo_test["maxcores"], threads=topo_test["maxthreads"]), topo_test["allow_threads"]) def test_sorting_topologies(self): testdata = [ { "allow_threads": True, "vcpus": 8, "maxsockets": 8, "maxcores": 8, "maxthreads": 2, "sockets": 4, "cores": 2, "threads": 1, "expect": [ [4, 2, 1], # score = 2 [8, 1, 1], # score = 1 [2, 4, 1], # score = 1 [1, 8, 1], # score = 1 [4, 1, 2], # score = 1 [2, 2, 2], # score = 1 [1, 4, 2], # score = 1 ] }, { "allow_threads": True, "vcpus": 8, "maxsockets": 1024, "maxcores": 1024, "maxthreads": 2, "sockets": -1, "cores": 4, "threads": -1, "expect": [ [2, 4, 1], # score = 1 [1, 4, 2], # score = 1 [8, 1, 1], # score = 0 [4, 2, 1], # score = 0 [1, 8, 1], # score = 0 [4, 1, 2], # score = 0 [2, 2, 2], # score = 0 ] }, { "allow_threads": True, "vcpus": 8, "maxsockets": 1024, "maxcores": 1, "maxthreads": 2, "sockets": -1, "cores": -1, "threads": 2, "expect": [ [4, 1, 2], # score = 1 [8, 1, 1], # score = 0 ] }, { "allow_threads": False, "vcpus": 8, "maxsockets": 1024, "maxcores": 1, "maxthreads": 2, "sockets": -1, "cores": -1, "threads": 2, "expect": [ [8, 1, 1], # 
score = 0 ] }, ] for topo_test in testdata: actual = [] possible = hw._get_possible_cpu_topologies( topo_test["vcpus"], objects.VirtCPUTopology(sockets=topo_test["maxsockets"], cores=topo_test["maxcores"], threads=topo_test["maxthreads"]), topo_test["allow_threads"]) tops = hw._sort_possible_cpu_topologies( possible, objects.VirtCPUTopology(sockets=topo_test["sockets"], cores=topo_test["cores"], threads=topo_test["threads"])) for topology in tops: actual.append([topology.sockets, topology.cores, topology.threads]) self.assertEqual(topo_test["expect"], actual) def test_best_config(self): testdata = [ { # Flavor sets preferred topology only "allow_threads": True, "flavor": objects.Flavor(vcpus=16, memory_mb=2048, extra_specs={ "hw:cpu_sockets": "8", "hw:cpu_cores": "2", "hw:cpu_threads": "1" }), "image": { "properties": {} }, "expect": [8, 2, 1], }, { # Image topology overrides flavor "allow_threads": True, "flavor": objects.Flavor(vcpus=16, memory_mb=2048, extra_specs={ "hw:cpu_sockets": "8", "hw:cpu_cores": "2", "hw:cpu_threads": "1", "hw:cpu_maxthreads": "2", }), "image": { "properties": { "hw_cpu_sockets": "4", "hw_cpu_cores": "2", "hw_cpu_threads": "2", } }, "expect": [4, 2, 2], }, { # Image topology overrides flavor "allow_threads": False, "flavor": objects.Flavor(vcpus=16, memory_mb=2048, extra_specs={ "hw:cpu_sockets": "8", "hw:cpu_cores": "2", "hw:cpu_threads": "1", "hw:cpu_maxthreads": "2", }), "image": { "properties": { "hw_cpu_sockets": "4", "hw_cpu_cores": "2", "hw_cpu_threads": "2", } }, "expect": [8, 2, 1], }, { # Partial image topology overrides flavor "allow_threads": True, "flavor": objects.Flavor(vcpus=16, memory_mb=2048, extra_specs={ "hw:cpu_sockets": "8", "hw:cpu_cores": "2", "hw:cpu_threads": "1" }), "image": { "properties": { "hw_cpu_sockets": "2" } }, "expect": [2, 8, 1], }, { # Restrict use of threads "allow_threads": True, "flavor": objects.Flavor(vcpus=16, memory_mb=2048, extra_specs={ "hw:cpu_max_threads": "1" }), "image": { "properties": {} }, "expect": [16, 1, 1] }, { # Force use of at least two sockets "allow_threads": True, "flavor": objects.Flavor(vcpus=16, memory_mb=2048, extra_specs={ "hw:cpu_max_cores": "8", "hw:cpu_max_threads": "1", }), "image": { "properties": {} }, "expect": [16, 1, 1] }, { # Image limits reduce flavor "allow_threads": True, "flavor": objects.Flavor(vcpus=16, memory_mb=2048, extra_specs={ "hw:cpu_max_sockets": "8", "hw:cpu_max_cores": "8", "hw:cpu_max_threads": "1", }), "image": { "properties": { "hw_cpu_max_sockets": 4, } }, "expect": [4, 4, 1] }, { # Image limits kill flavor preferred "allow_threads": True, "flavor": objects.Flavor(vcpus=16, memory_mb=2048, extra_specs={ "hw:cpu_sockets": "2", "hw:cpu_cores": "8", "hw:cpu_threads": "1", }), "image": { "properties": { "hw_cpu_max_cores": 4, } }, "expect": [16, 1, 1] }, { # NUMA needs threads, only cores requested by flavor "allow_threads": True, "flavor": objects.Flavor(vcpus=4, memory_mb=2048, extra_specs={ "hw:cpu_cores": "2", }), "image": { "properties": { "hw_cpu_max_cores": 2, } }, "numa_topology": objects.InstanceNUMATopology( cells=[ objects.InstanceNUMACell( id=0, cpuset=set([0, 1]), memory=1024, cpu_topology=objects.VirtCPUTopology( sockets=1, cores=1, threads=2)), objects.InstanceNUMACell( id=1, cpuset=set([2, 3]), memory=1024)]), "expect": [1, 2, 2] }, { # NUMA needs threads, but more than requested by flavor - the # least amount of threads wins "allow_threads": True, "flavor": objects.Flavor(vcpus=4, memory_mb=2048, extra_specs={ "hw:cpu_threads": "2", }), "image": { 
"properties": {} }, "numa_topology": objects.InstanceNUMATopology( cells=[ objects.InstanceNUMACell( id=0, cpuset=set([0, 1, 2, 3]), memory=2048, cpu_topology=objects.VirtCPUTopology( sockets=1, cores=1, threads=4))]), "expect": [2, 1, 2] }, { # NUMA needs threads, but more than limit in flavor - the # least amount of threads which divides into the vcpu # count wins. So with desired 4, max of 3, and # vcpu count of 4, we should get 2 threads. "allow_threads": True, "flavor": objects.Flavor(vcpus=4, memory_mb=2048, extra_specs={ "hw:cpu_max_sockets": "5", "hw:cpu_max_cores": "2", "hw:cpu_max_threads": "3", }), "image": { "properties": {} }, "numa_topology": objects.InstanceNUMATopology( cells=[ objects.InstanceNUMACell( id=0, cpuset=set([0, 1, 2, 3]), memory=2048, cpu_topology=objects.VirtCPUTopology( sockets=1, cores=1, threads=4))]), "expect": [2, 1, 2] }, { # NUMA needs threads, but thread count does not # divide into flavor vcpu count, so we must # reduce thread count to closest divisor "allow_threads": True, "flavor": objects.Flavor(vcpus=6, memory_mb=2048, extra_specs={ }), "image": { "properties": {} }, "numa_topology": objects.InstanceNUMATopology( cells=[ objects.InstanceNUMACell( id=0, cpuset=set([0, 1, 2, 3]), memory=2048, cpu_topology=objects.VirtCPUTopology( sockets=1, cores=1, threads=4))]), "expect": [2, 1, 3] }, { # NUMA needs different number of threads per cell - the least # amount of threads wins "allow_threads": True, "flavor": objects.Flavor(vcpus=8, memory_mb=2048, extra_specs={}), "image": { "properties": {} }, "numa_topology": objects.InstanceNUMATopology( cells=[ objects.InstanceNUMACell( id=0, cpuset=set([0, 1, 2, 3]), memory=1024, cpu_topology=objects.VirtCPUTopology( sockets=1, cores=2, threads=2)), objects.InstanceNUMACell( id=1, cpuset=set([4, 5, 6, 7]), memory=1024, cpu_topology=objects.VirtCPUTopology( sockets=1, cores=1, threads=4))]), "expect": [4, 1, 2] }, ] for topo_test in testdata: image_meta = objects.ImageMeta.from_dict(topo_test["image"]) topology = hw._get_desirable_cpu_topologies( topo_test["flavor"], image_meta, topo_test["allow_threads"], topo_test.get("numa_topology"))[0] self.assertEqual(topo_test["expect"][0], topology.sockets) self.assertEqual(topo_test["expect"][1], topology.cores) self.assertEqual(topo_test["expect"][2], topology.threads) class NUMATopologyTest(test.NoDBTestCase): def test_topology_constraints(self): testdata = [ { "flavor": objects.Flavor(vcpus=8, memory_mb=2048, extra_specs={ }), "image": { }, "expect": None, }, { "flavor": objects.Flavor(vcpus=8, memory_mb=2048, extra_specs={ "hw:numa_nodes": 2 }), "image": { }, "expect": objects.InstanceNUMATopology(cells= [ objects.InstanceNUMACell( id=0, cpuset=set([0, 1, 2, 3]), memory=1024), objects.InstanceNUMACell( id=1, cpuset=set([4, 5, 6, 7]), memory=1024), ]), }, { "flavor": objects.Flavor(vcpus=8, memory_mb=2048, extra_specs={ "hw:mem_page_size": 2048 }), "image": { }, "expect": objects.InstanceNUMATopology(cells=[ objects.InstanceNUMACell( id=0, cpuset=set([0, 1, 2, 3, 4, 5, 6, 7]), memory=2048, pagesize=2048) ]), }, { # a nodes number of zero should lead to an # exception "flavor": objects.Flavor(vcpus=8, memory_mb=2048, extra_specs={ "hw:numa_nodes": 0 }), "image": { }, "expect": exception.InvalidNUMANodesNumber, }, { # a negative nodes number should lead to an # exception "flavor": objects.Flavor(vcpus=8, memory_mb=2048, extra_specs={ "hw:numa_nodes": -1 }), "image": { }, "expect": exception.InvalidNUMANodesNumber, }, { # a nodes number not numeric should lead to an # 
exception "flavor": objects.Flavor(vcpus=8, memory_mb=2048, extra_specs={ "hw:numa_nodes": 'x' }), "image": { }, "expect": exception.InvalidNUMANodesNumber, }, { # vcpus is not a multiple of nodes, so it # is an error to not provide cpu/mem mapping "flavor": objects.Flavor(vcpus=8, memory_mb=2048, extra_specs={ "hw:numa_nodes": 3 }), "image": { }, "expect": exception.ImageNUMATopologyAsymmetric, }, { "flavor": objects.Flavor(vcpus=8, memory_mb=2048, extra_specs={ "hw:numa_nodes": 3, "hw:numa_cpus.0": "0-3", "hw:numa_mem.0": "1024", "hw:numa_cpus.1": "4,6", "hw:numa_mem.1": "512", "hw:numa_cpus.2": "5,7", "hw:numa_mem.2": "512", }), "image": { }, "expect": objects.InstanceNUMATopology(cells= [ objects.InstanceNUMACell( id=0, cpuset=set([0, 1, 2, 3]), memory=1024), objects.InstanceNUMACell( id=1, cpuset=set([4, 6]), memory=512), objects.InstanceNUMACell( id=2, cpuset=set([5, 7]), memory=512) ]), }, { "flavor": objects.Flavor(vcpus=8, memory_mb=2048, extra_specs={ }), "image": { "properties": { "hw_numa_nodes": 3, "hw_numa_cpus.0": "0-3", "hw_numa_mem.0": "1024", "hw_numa_cpus.1": "4,6", "hw_numa_mem.1": "512", "hw_numa_cpus.2": "5,7", "hw_numa_mem.2": "512", }, }, "expect": objects.InstanceNUMATopology(cells= [ objects.InstanceNUMACell( id=0, cpuset=set([0, 1, 2, 3]), memory=1024), objects.InstanceNUMACell( id=1, cpuset=set([4, 6]), memory=512), objects.InstanceNUMACell( id=2, cpuset=set([5, 7]), memory=512) ]), }, { # Request a CPU that is out of range # wrt vCPU count "flavor": objects.Flavor(vcpus=8, memory_mb=2048, extra_specs={ "hw:numa_nodes": 1, "hw:numa_cpus.0": "0-16", "hw:numa_mem.0": "2048", }), "image": { }, "expect": exception.ImageNUMATopologyCPUOutOfRange, }, { # Request the same CPU in two nodes "flavor": objects.Flavor(vcpus=8, memory_mb=2048, extra_specs={ "hw:numa_nodes": 2, "hw:numa_cpus.0": "0-7", "hw:numa_mem.0": "1024", "hw:numa_cpus.1": "0-7", "hw:numa_mem.1": "1024", }), "image": { }, "expect": exception.ImageNUMATopologyCPUDuplicates, }, { # Request with some CPUs not assigned "flavor": objects.Flavor(vcpus=8, memory_mb=2048, extra_specs={ "hw:numa_nodes": 2, "hw:numa_cpus.0": "0-2", "hw:numa_mem.0": "1024", "hw:numa_cpus.1": "3-4", "hw:numa_mem.1": "1024", }), "image": { }, "expect": exception.ImageNUMATopologyCPUsUnassigned, }, { # Request too little memory vs flavor total "flavor": objects.Flavor(vcpus=8, memory_mb=2048, extra_specs={ "hw:numa_nodes": 2, "hw:numa_cpus.0": "0-3", "hw:numa_mem.0": "512", "hw:numa_cpus.1": "4-7", "hw:numa_mem.1": "512", }), "image": { }, "expect": exception.ImageNUMATopologyMemoryOutOfRange, }, { # Request too much memory vs flavor total "flavor": objects.Flavor(vcpus=8, memory_mb=2048, extra_specs={ "hw:numa_nodes": 2, "hw:numa_cpus.0": "0-3", "hw:numa_mem.0": "1576", "hw:numa_cpus.1": "4-7", "hw:numa_mem.1": "1576", }), "image": { }, "expect": exception.ImageNUMATopologyMemoryOutOfRange, }, { # Request missing mem.0 "flavor": objects.Flavor(vcpus=8, memory_mb=2048, extra_specs={ "hw:numa_nodes": 2, "hw:numa_cpus.0": "0-3", "hw:numa_mem.1": "1576", }), "image": { }, "expect": exception.ImageNUMATopologyIncomplete, }, { # Request missing cpu.0 "flavor": objects.Flavor(vcpus=8, memory_mb=2048, extra_specs={ "hw:numa_nodes": 2, "hw:numa_mem.0": "1576", "hw:numa_cpus.1": "4-7", }), "image": { }, "expect": exception.ImageNUMATopologyIncomplete, }, { # Image attempts to override flavor "flavor": objects.Flavor(vcpus=8, memory_mb=2048, extra_specs={ "hw:numa_nodes": 2, }), "image": { "properties": { "hw_numa_nodes": 4} }, "expect": 
exception.ImageNUMATopologyForbidden, }, { # NUMA + CPU pinning requested in the flavor "flavor": objects.Flavor(vcpus=4, memory_mb=2048, extra_specs={ "hw:numa_nodes": 2, "hw:cpu_policy": fields.CPUAllocationPolicy.DEDICATED }), "image": { }, "expect": objects.InstanceNUMATopology(cells= [ objects.InstanceNUMACell( id=0, cpuset=set([0, 1]), memory=1024, cpu_policy=fields.CPUAllocationPolicy.DEDICATED), objects.InstanceNUMACell( id=1, cpuset=set([2, 3]), memory=1024, cpu_policy=fields.CPUAllocationPolicy.DEDICATED)]) }, { # no NUMA + CPU pinning requested in the flavor "flavor": objects.Flavor(vcpus=4, memory_mb=2048, extra_specs={ "hw:cpu_policy": fields.CPUAllocationPolicy.DEDICATED }), "image": { }, "expect": objects.InstanceNUMATopology(cells= [ objects.InstanceNUMACell( id=0, cpuset=set([0, 1, 2, 3]), memory=2048, cpu_policy=fields.CPUAllocationPolicy.DEDICATED)]) }, { # NUMA + CPU pinning requested in the image "flavor": objects.Flavor(vcpus=4, memory_mb=2048, extra_specs={ "hw:numa_nodes": 2 }), "image": { "properties": { "hw_cpu_policy": fields.CPUAllocationPolicy.DEDICATED }}, "expect": objects.InstanceNUMATopology(cells= [ objects.InstanceNUMACell( id=0, cpuset=set([0, 1]), memory=1024, cpu_policy=fields.CPUAllocationPolicy.DEDICATED), objects.InstanceNUMACell( id=1, cpuset=set([2, 3]), memory=1024, cpu_policy=fields.CPUAllocationPolicy.DEDICATED)]) }, { # no NUMA + CPU pinning requested in the image "flavor": objects.Flavor(vcpus=4, memory_mb=2048, extra_specs={}), "image": { "properties": { "hw_cpu_policy": fields.CPUAllocationPolicy.DEDICATED }}, "expect": objects.InstanceNUMATopology(cells= [ objects.InstanceNUMACell( id=0, cpuset=set([0, 1, 2, 3]), memory=2048, cpu_policy=fields.CPUAllocationPolicy.DEDICATED)]) }, { # Invalid CPU pinning override "flavor": objects.Flavor(vcpus=4, memory_mb=2048, extra_specs={ "hw:numa_nodes": 2, "hw:cpu_policy": fields.CPUAllocationPolicy.SHARED }), "image": { "properties": { "hw_cpu_policy": fields.CPUAllocationPolicy.DEDICATED} }, "expect": exception.ImageCPUPinningForbidden, }, { # Invalid CPU pinning policy with realtime "flavor": objects.Flavor(vcpus=4, memory_mb=2048, extra_specs={ "hw:cpu_policy": fields.CPUAllocationPolicy.SHARED, "hw:cpu_realtime": "yes", }), "image": { "properties": {} }, "expect": exception.RealtimeConfigurationInvalid, }, { # Invalid CPU thread pinning override "flavor": objects.Flavor(vcpus=4, memory_mb=2048, extra_specs={ "hw:numa_nodes": 2, "hw:cpu_policy": fields.CPUAllocationPolicy.DEDICATED, "hw:cpu_thread_policy": fields.CPUThreadAllocationPolicy.ISOLATE, }), "image": { "properties": { "hw_cpu_policy": fields.CPUAllocationPolicy.DEDICATED, "hw_cpu_thread_policy": fields.CPUThreadAllocationPolicy.REQUIRE, } }, "expect": exception.ImageCPUThreadPolicyForbidden, }, { # CPU thread pinning override set to default value "flavor": objects.Flavor(vcpus=4, memory_mb=2048, extra_specs={ "hw:numa_nodes": 1, "hw:cpu_policy": fields.CPUAllocationPolicy.DEDICATED, "hw:cpu_thread_policy": fields.CPUThreadAllocationPolicy.PREFER, }), "image": {}, "expect": objects.InstanceNUMATopology(cells= [ objects.InstanceNUMACell( id=0, cpuset=set([0, 1, 2, 3]), memory=2048, cpu_policy=fields.CPUAllocationPolicy.DEDICATED, cpu_thread_policy= fields.CPUThreadAllocationPolicy.PREFER)]) }, { # Invalid CPU pinning policy with CPU thread pinning "flavor": objects.Flavor(vcpus=4, memory_mb=2048, extra_specs={ "hw:cpu_policy": fields.CPUAllocationPolicy.SHARED, "hw:cpu_thread_policy": fields.CPUThreadAllocationPolicy.ISOLATE, }), "image": { 
"properties": {} }, "expect": exception.CPUThreadPolicyConfigurationInvalid, }, { # Invalid vCPUs mask with realtime "flavor": objects.Flavor(vcpus=4, memory_mb=2048, extra_specs={ "hw:cpu_policy": "dedicated", "hw:cpu_realtime": "yes", }), "image": { "properties": {} }, "expect": exception.RealtimeMaskNotFoundOrInvalid, }, { # We pass an invalid option "flavor": objects.Flavor(vcpus=16, memory_mb=2048, extra_specs={ "hw:emulator_threads_policy": "foo", }), "image": { "properties": {} }, "expect": exception.InvalidEmulatorThreadsPolicy, }, { # We request emulator threads option without numa topology "flavor": objects.Flavor(vcpus=16, memory_mb=2048, extra_specs={ "hw:emulator_threads_policy": "isolate", }), "image": { "properties": {} }, "expect": exception.BadRequirementEmulatorThreadsPolicy, }, { # We request a valid emulator threads options with # cpu_policy based from flavor "flavor": objects.Flavor(vcpus=4, memory_mb=2048, extra_specs={ "hw:emulator_threads_policy": "isolate", "hw:cpu_policy": "dedicated", }), "image": { "properties": {} }, "expect": objects.InstanceNUMATopology( emulator_threads_policy= fields.CPUEmulatorThreadsPolicy.ISOLATE, cells=[ objects.InstanceNUMACell( id=0, cpuset=set([0, 1, 2, 3]), memory=2048, cpu_policy=fields.CPUAllocationPolicy.DEDICATED, )]), }, { # We request a valid emulator threads options with cpu # policy based from image "flavor": objects.Flavor(vcpus=4, memory_mb=2048, extra_specs={ "hw:emulator_threads_policy": "isolate", }), "image": { "properties": { "hw_cpu_policy": "dedicated", } }, "expect": objects.InstanceNUMATopology( emulator_threads_policy= fields.CPUEmulatorThreadsPolicy.ISOLATE, cells=[ objects.InstanceNUMACell( id=0, cpuset=set([0, 1, 2, 3]), memory=2048, cpu_policy=fields.CPUAllocationPolicy.DEDICATED, )]), }, ] for testitem in testdata: image_meta = objects.ImageMeta.from_dict(testitem["image"]) if testitem["expect"] is None: topology = hw.numa_get_constraints( testitem["flavor"], image_meta) self.assertIsNone(topology) elif type(testitem["expect"]) == type: self.assertRaises(testitem["expect"], hw.numa_get_constraints, testitem["flavor"], image_meta) else: topology = hw.numa_get_constraints( testitem["flavor"], image_meta) self.assertIsNotNone(topology) self.assertEqual(len(testitem["expect"].cells), len(topology.cells)) self.assertEqual( testitem["expect"].emulator_threads_isolated, topology.emulator_threads_isolated) for i in range(len(topology.cells)): self.assertEqual(testitem["expect"].cells[i].id, topology.cells[i].id) self.assertEqual(testitem["expect"].cells[i].cpuset, topology.cells[i].cpuset) self.assertEqual(testitem["expect"].cells[i].memory, topology.cells[i].memory) self.assertEqual(testitem["expect"].cells[i].pagesize, topology.cells[i].pagesize) self.assertEqual(testitem["expect"].cells[i].cpu_pinning, topology.cells[i].cpu_pinning) def test_host_usage_contiguous(self): hpages0_4K = objects.NUMAPagesTopology(size_kb=4, total=256, used=0) hpages0_2M = objects.NUMAPagesTopology(size_kb=2048, total=0, used=1) hpages1_4K = objects.NUMAPagesTopology(size_kb=4, total=128, used=2) hpages1_2M = objects.NUMAPagesTopology(size_kb=2048, total=0, used=3) hosttopo = objects.NUMATopology(cells=[ objects.NUMACell(id=0, cpuset=set([0, 1, 2, 3]), memory=1024, cpu_usage=0, memory_usage=0, mempages=[ hpages0_4K, hpages0_2M], siblings=[], pinned_cpus=set([])), objects.NUMACell(id=1, cpuset=set([4, 6]), memory=512, cpu_usage=0, memory_usage=0, mempages=[ hpages1_4K, hpages1_2M], siblings=[], pinned_cpus=set([])), 
objects.NUMACell(id=2, cpuset=set([5, 7]), memory=512, cpu_usage=0, memory_usage=0, mempages=[], siblings=[], pinned_cpus=set([])), ]) instance1 = objects.InstanceNUMATopology(cells=[ objects.InstanceNUMACell(id=0, cpuset=set([0, 1, 2]), memory=256), objects.InstanceNUMACell(id=1, cpuset=set([4]), memory=256), ]) instance2 = objects.InstanceNUMATopology(cells=[ objects.InstanceNUMACell(id=0, cpuset=set([0, 1]), memory=256), objects.InstanceNUMACell(id=1, cpuset=set([5, 7]), memory=256), ]) hostusage = hw.numa_usage_from_instances( hosttopo, [instance1, instance2]) self.assertEqual(len(hosttopo), len(hostusage)) self.assertIsInstance(hostusage.cells[0], objects.NUMACell) self.assertEqual(hosttopo.cells[0].cpuset, hostusage.cells[0].cpuset) self.assertEqual(hosttopo.cells[0].memory, hostusage.cells[0].memory) self.assertEqual(hostusage.cells[0].cpu_usage, 5) self.assertEqual(hostusage.cells[0].memory_usage, 512) self.assertEqual(hostusage.cells[0].mempages, [ hpages0_4K, hpages0_2M]) self.assertIsInstance(hostusage.cells[1], objects.NUMACell) self.assertEqual(hosttopo.cells[1].cpuset, hostusage.cells[1].cpuset) self.assertEqual(hosttopo.cells[1].memory, hostusage.cells[1].memory) self.assertEqual(hostusage.cells[1].cpu_usage, 3) self.assertEqual(hostusage.cells[1].memory_usage, 512) self.assertEqual(hostusage.cells[1].mempages, [ hpages1_4K, hpages1_2M]) self.assertEqual(256, hpages0_4K.total) self.assertEqual(0, hpages0_4K.used) self.assertEqual(0, hpages0_2M.total) self.assertEqual(1, hpages0_2M.used) self.assertIsInstance(hostusage.cells[2], objects.NUMACell) self.assertEqual(hosttopo.cells[2].cpuset, hostusage.cells[2].cpuset) self.assertEqual(hosttopo.cells[2].memory, hostusage.cells[2].memory) self.assertEqual(hostusage.cells[2].cpu_usage, 0) self.assertEqual(hostusage.cells[2].memory_usage, 0) self.assertEqual(128, hpages1_4K.total) self.assertEqual(2, hpages1_4K.used) self.assertEqual(0, hpages1_2M.total) self.assertEqual(3, hpages1_2M.used) def test_host_usage_sparse(self): hosttopo = objects.NUMATopology(cells=[ objects.NUMACell(id=0, cpuset=set([0, 1, 2, 3]), memory=1024, cpu_usage=0, memory_usage=0, mempages=[], siblings=[], pinned_cpus=set([])), objects.NUMACell(id=5, cpuset=set([4, 6]), memory=512, cpu_usage=0, memory_usage=0, mempages=[], siblings=[], pinned_cpus=set([])), objects.NUMACell(id=6, cpuset=set([5, 7]), memory=512, cpu_usage=0, memory_usage=0, mempages=[], siblings=[], pinned_cpus=set([])), ]) instance1 = objects.InstanceNUMATopology(cells=[ objects.InstanceNUMACell(id=0, cpuset=set([0, 1, 2]), memory=256), objects.InstanceNUMACell(id=6, cpuset=set([4]), memory=256), ]) instance2 = objects.InstanceNUMATopology(cells=[ objects.InstanceNUMACell(id=0, cpuset=set([0, 1]), memory=256, cpu_usage=0, memory_usage=0, mempages=[]), objects.InstanceNUMACell(id=5, cpuset=set([5, 7]), memory=256, cpu_usage=0, memory_usage=0, mempages=[]), ]) hostusage = hw.numa_usage_from_instances( hosttopo, [instance1, instance2]) self.assertEqual(len(hosttopo), len(hostusage)) self.assertIsInstance(hostusage.cells[0], objects.NUMACell) self.assertEqual(hosttopo.cells[0].id, hostusage.cells[0].id) self.assertEqual(hosttopo.cells[0].cpuset, hostusage.cells[0].cpuset) self.assertEqual(hosttopo.cells[0].memory, hostusage.cells[0].memory) self.assertEqual(hostusage.cells[0].cpu_usage, 5) self.assertEqual(hostusage.cells[0].memory_usage, 512) self.assertIsInstance(hostusage.cells[1], objects.NUMACell) self.assertEqual(hosttopo.cells[1].id, hostusage.cells[1].id) 
self.assertEqual(hosttopo.cells[1].cpuset, hostusage.cells[1].cpuset) self.assertEqual(hosttopo.cells[1].memory, hostusage.cells[1].memory) self.assertEqual(hostusage.cells[1].cpu_usage, 2) self.assertEqual(hostusage.cells[1].memory_usage, 256) self.assertIsInstance(hostusage.cells[2], objects.NUMACell) self.assertEqual(hosttopo.cells[2].cpuset, hostusage.cells[2].cpuset) self.assertEqual(hosttopo.cells[2].memory, hostusage.cells[2].memory) self.assertEqual(hostusage.cells[2].cpu_usage, 1) self.assertEqual(hostusage.cells[2].memory_usage, 256) def test_host_usage_cumulative_with_free(self): hosttopo = objects.NUMATopology(cells=[ objects.NUMACell(id=0, cpuset=set([0, 1, 2, 3]), memory=1024, cpu_usage=2, memory_usage=512, mempages=[], siblings=[], pinned_cpus=set([])), objects.NUMACell(id=1, cpuset=set([4, 6]), memory=512, cpu_usage=1, memory_usage=512, mempages=[], siblings=[], pinned_cpus=set([])), objects.NUMACell(id=2, cpuset=set([5, 7]), memory=256, cpu_usage=0, memory_usage=0, mempages=[], siblings=[], pinned_cpus=set([])), ]) instance1 = objects.InstanceNUMATopology(cells=[ objects.InstanceNUMACell(id=0, cpuset=set([0, 1, 2]), memory=512), objects.InstanceNUMACell(id=1, cpuset=set([3]), memory=256), objects.InstanceNUMACell(id=2, cpuset=set([4]), memory=256)]) hostusage = hw.numa_usage_from_instances( hosttopo, [instance1]) self.assertIsInstance(hostusage.cells[0], objects.NUMACell) self.assertEqual(hostusage.cells[0].cpu_usage, 5) self.assertEqual(hostusage.cells[0].memory_usage, 1024) self.assertIsInstance(hostusage.cells[1], objects.NUMACell) self.assertEqual(hostusage.cells[1].cpu_usage, 2) self.assertEqual(hostusage.cells[1].memory_usage, 768) self.assertIsInstance(hostusage.cells[2], objects.NUMACell) self.assertEqual(hostusage.cells[2].cpu_usage, 1) self.assertEqual(hostusage.cells[2].memory_usage, 256) # Test freeing of resources hostusage = hw.numa_usage_from_instances( hostusage, [instance1], free=True) self.assertEqual(hostusage.cells[0].cpu_usage, 2) self.assertEqual(hostusage.cells[0].memory_usage, 512) self.assertEqual(hostusage.cells[1].cpu_usage, 1) self.assertEqual(hostusage.cells[1].memory_usage, 512) self.assertEqual(hostusage.cells[2].cpu_usage, 0) self.assertEqual(hostusage.cells[2].memory_usage, 0) def _topo_usage_reserved_page_size(self): reserved = hw.numa_get_reserved_huge_pages() hosttopo = objects.NUMATopology(cells=[ objects.NUMACell(id=0, cpuset=set([0, 1]), memory=512, cpu_usage=0, memory_usage=0, mempages=[ objects.NUMAPagesTopology( size_kb=2048, total=512, used=128, reserved=reserved[0][2048])], siblings=[], pinned_cpus=set([])), objects.NUMACell(id=1, cpuset=set([2, 3]), memory=512, cpu_usage=0, memory_usage=0, mempages=[ objects.NUMAPagesTopology( size_kb=1048576, total=5, used=2, reserved=reserved[1][1048576])], siblings=[], pinned_cpus=set([])), ]) instance1 = objects.InstanceNUMATopology(cells=[ objects.InstanceNUMACell( id=0, cpuset=set([0, 1]), memory=256, pagesize=2048), objects.InstanceNUMACell( id=1, cpuset=set([2, 3]), memory=1024, pagesize=1048576), ]) return hosttopo, instance1 def test_numa_get_reserved_huge_pages(self): reserved = hw.numa_get_reserved_huge_pages() self.assertEqual({}, reserved) self.flags(reserved_huge_pages=[ {'node': 3, 'size': 2048, 'count': 128}, {'node': 3, 'size': '1GB', 'count': 4}, {'node': 6, 'size': '2MB', 'count': 64}, {'node': 9, 'size': '1GB', 'count': 1}]) reserved = hw.numa_get_reserved_huge_pages() self.assertEqual({2048: 128, 1048576: 4}, reserved[3]) self.assertEqual({2048: 64}, reserved[6])
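# Suffixed sizes are normalized to KiB before being used as dict keys: '2MB' becomes 2048 and '1GB' becomes 1048576, which is why node 9's reservation below is keyed on 1048576.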
self.assertEqual({1048576: 1}, reserved[9]) def test_reserved_hugepages_success(self): self.flags(reserved_huge_pages=[ {'node': 0, 'size': 2048, 'count': 128}, {'node': 1, 'size': 1048576, 'count': 1}]) hosttopo, instance1 = self._topo_usage_reserved_page_size() hostusage = hw.numa_usage_from_instances( hosttopo, [instance1]) self.assertEqual(hostusage.cells[0].mempages[0].size_kb, 2048) self.assertEqual(hostusage.cells[0].mempages[0].total, 512) self.assertEqual(hostusage.cells[0].mempages[0].used, 256) # 128 already used + 128 used by instance + 128 reserved self.assertEqual(hostusage.cells[0].mempages[0].free, 128) self.assertEqual(hostusage.cells[1].mempages[0].size_kb, 1048576) self.assertEqual(hostusage.cells[1].mempages[0].total, 5) self.assertEqual(hostusage.cells[1].mempages[0].used, 3) # 2 already used + 1 used by instance + 1 reserved self.assertEqual(hostusage.cells[1].mempages[0].free, 1) def test_reserved_huge_pages_invalid_format(self): self.flags(reserved_huge_pages=[{'node': 0, 'size': 2048}]) self.assertRaises( exception.InvalidReservedMemoryPagesOption, self._topo_usage_reserved_page_size) def test_reserved_huge_pages_invalid_value(self): self.flags(reserved_huge_pages=["0:foo:bar"]) self.assertRaises( exception.InvalidReservedMemoryPagesOption, self._topo_usage_reserved_page_size) def test_topo_usage_none(self): hosttopo = objects.NUMATopology(cells=[ objects.NUMACell(id=0, cpuset=set([0, 1]), memory=512, cpu_usage=0, memory_usage=0, mempages=[], siblings=[], pinned_cpus=set([])), objects.NUMACell(id=1, cpuset=set([2, 3]), memory=512, cpu_usage=0, memory_usage=0, mempages=[], siblings=[], pinned_cpus=set([])), ]) instance1 = objects.InstanceNUMATopology(cells=[ objects.InstanceNUMACell(id=0, cpuset=set([0, 1]), memory=256), objects.InstanceNUMACell(id=2, cpuset=set([2]), memory=256), ]) hostusage = hw.numa_usage_from_instances( None, [instance1]) self.assertIsNone(hostusage) hostusage = hw.numa_usage_from_instances( hosttopo, []) self.assertEqual(hostusage.cells[0].cpu_usage, 0) self.assertEqual(hostusage.cells[0].memory_usage, 0) self.assertEqual(hostusage.cells[1].cpu_usage, 0) self.assertEqual(hostusage.cells[1].memory_usage, 0) hostusage = hw.numa_usage_from_instances( hosttopo, None) self.assertEqual(hostusage.cells[0].cpu_usage, 0) self.assertEqual(hostusage.cells[0].memory_usage, 0) self.assertEqual(hostusage.cells[1].cpu_usage, 0) self.assertEqual(hostusage.cells[1].memory_usage, 0) # Test the case where we have an instance with numa topology # and one without def test_topo_usage_mixed(self): hosttopo = objects.NUMATopology(cells=[ objects.NUMACell(id=0, cpuset=set([0, 1]), memory=512, cpu_usage=0, memory_usage=0, mempages=[], siblings=[], pinned_cpus=set([])), objects.NUMACell(id=1, cpuset=set([2, 3]), memory=512, cpu_usage=0, memory_usage=0, mempages=[], siblings=[], pinned_cpus=set([])), ]) instance1_topo = objects.InstanceNUMATopology(cells=[ objects.InstanceNUMACell(id=0, cpuset=set([0, 1]), memory=256), objects.InstanceNUMACell(id=1, cpuset=set([2]), memory=128), ]) instance2_topo = None hostusage = hw.numa_usage_from_instances(hosttopo, [instance1_topo]) self.assertEqual(hostusage.cells[0].cpu_usage, 2) self.assertEqual(hostusage.cells[0].memory_usage, 256) self.assertEqual(hostusage.cells[1].cpu_usage, 1) self.assertEqual(hostusage.cells[1].memory_usage, 128) # This is like processing an instance with no numa_topology hostusage = hw.numa_usage_from_instances(hostusage, instance2_topo) self.assertEqual(hostusage.cells[0].cpu_usage, 2)
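# Since instance2_topo is None it contributes nothing, so the usage figures must be unchanged from the instance1_topo-only pass above.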
self.assertEqual(hostusage.cells[0].memory_usage, 256) self.assertEqual(hostusage.cells[1].cpu_usage, 1) self.assertEqual(hostusage.cells[1].memory_usage, 128) def assertNUMACellMatches(self, expected_cell, got_cell): attrs = ('cpuset', 'memory', 'id') if isinstance(expected_cell, objects.NUMATopology): attrs += ('cpu_usage', 'memory_usage') for attr in attrs: self.assertEqual(getattr(expected_cell, attr), getattr(got_cell, attr)) def test_json(self): expected = objects.NUMATopology( cells=[ objects.NUMACell(id=1, cpuset=set([1, 2]), memory=1024, cpu_usage=0, memory_usage=0, mempages=[], siblings=[], pinned_cpus=set([])), objects.NUMACell(id=2, cpuset=set([3, 4]), memory=1024, cpu_usage=0, memory_usage=0, mempages=[], siblings=[], pinned_cpus=set([]))]) got = objects.NUMATopology.obj_from_db_obj(expected._to_json()) for exp_cell, got_cell in zip(expected.cells, got.cells): self.assertNUMACellMatches(exp_cell, got_cell) class VirtNUMATopologyCellUsageTestCase(test.NoDBTestCase): def test_fit_instance_cell_success_no_limit(self): host_cell = objects.NUMACell(id=4, cpuset=set([1, 2]), memory=1024, cpu_usage=0, memory_usage=0, mempages=[], siblings=[], pinned_cpus=set([])) instance_cell = objects.InstanceNUMACell( id=0, cpuset=set([1, 2]), memory=1024) fitted_cell = hw._numa_fit_instance_cell(host_cell, instance_cell) self.assertIsInstance(fitted_cell, objects.InstanceNUMACell) self.assertEqual(host_cell.id, fitted_cell.id) def test_fit_instance_cell_success_w_limit(self): host_cell = objects.NUMACell(id=4, cpuset=set([1, 2]), memory=1024, cpu_usage=2, memory_usage=1024, mempages=[], siblings=[], pinned_cpus=set([])) limit_cell = objects.NUMATopologyLimits( cpu_allocation_ratio=2, ram_allocation_ratio=2) instance_cell = objects.InstanceNUMACell( id=0, cpuset=set([1, 2]), memory=1024) fitted_cell = hw._numa_fit_instance_cell( host_cell, instance_cell, limit_cell=limit_cell) self.assertIsInstance(fitted_cell, objects.InstanceNUMACell) self.assertEqual(host_cell.id, fitted_cell.id) def test_fit_instance_cell_self_overcommit(self): host_cell = objects.NUMACell(id=4, cpuset=set([1, 2]), memory=1024, cpu_usage=0, memory_usage=0, mempages=[], siblings=[], pinned_cpus=set([])) limit_cell = objects.NUMATopologyLimits( cpu_allocation_ratio=2, ram_allocation_ratio=2) instance_cell = objects.InstanceNUMACell( id=0, cpuset=set([1, 2, 3]), memory=4096) fitted_cell = hw._numa_fit_instance_cell( host_cell, instance_cell, limit_cell=limit_cell) self.assertIsNone(fitted_cell) def test_fit_instance_cell_fail_w_limit(self): host_cell = objects.NUMACell(id=4, cpuset=set([1, 2]), memory=1024, cpu_usage=2, memory_usage=1024, mempages=[], siblings=[], pinned_cpus=set([])) instance_cell = objects.InstanceNUMACell( id=0, cpuset=set([1, 2]), memory=4096) limit_cell = objects.NUMATopologyLimits( cpu_allocation_ratio=2, ram_allocation_ratio=2) fitted_cell = hw._numa_fit_instance_cell( host_cell, instance_cell, limit_cell=limit_cell) self.assertIsNone(fitted_cell) instance_cell = objects.InstanceNUMACell( id=0, cpuset=set([1, 2, 3, 4, 5]), memory=1024) fitted_cell = hw._numa_fit_instance_cell( host_cell, instance_cell, limit_cell=limit_cell) self.assertIsNone(fitted_cell) class VirtNUMAHostTopologyTestCase(test.NoDBTestCase): def setUp(self): super(VirtNUMAHostTopologyTestCase, self).setUp() self.host = objects.NUMATopology( cells=[ objects.NUMACell(id=1, cpuset=set([1, 2]), memory=2048, cpu_usage=2, memory_usage=2048, mempages=[], siblings=[], pinned_cpus=set([])), objects.NUMACell(id=2, cpuset=set([3, 4]), memory=2048, 
cpu_usage=2, memory_usage=2048, mempages=[], siblings=[], pinned_cpus=set([]))]) self.limits = objects.NUMATopologyLimits( cpu_allocation_ratio=2, ram_allocation_ratio=2) self.instance1 = objects.InstanceNUMATopology( cells=[ objects.InstanceNUMACell( id=0, cpuset=set([1, 2]), memory=2048)]) self.instance2 = objects.InstanceNUMATopology( cells=[ objects.InstanceNUMACell( id=0, cpuset=set([1, 2, 3, 4]), memory=1024)]) self.instance3 = objects.InstanceNUMATopology( cells=[ objects.InstanceNUMACell( id=0, cpuset=set([1, 2]), memory=1024)]) def test_get_fitting_success_no_limits(self): fitted_instance1 = hw.numa_fit_instance_to_host( self.host, self.instance1) self.assertIsInstance(fitted_instance1, objects.InstanceNUMATopology) self.host = hw.numa_usage_from_instances(self.host, [fitted_instance1]) fitted_instance2 = hw.numa_fit_instance_to_host( self.host, self.instance3) self.assertIsInstance(fitted_instance2, objects.InstanceNUMATopology) def test_get_fitting_success_limits(self): fitted_instance = hw.numa_fit_instance_to_host( self.host, self.instance3, self.limits) self.assertIsInstance(fitted_instance, objects.InstanceNUMATopology) self.assertEqual(1, fitted_instance.cells[0].id) def test_get_fitting_fails_no_limits(self): fitted_instance = hw.numa_fit_instance_to_host( self.host, self.instance2, self.limits) self.assertIsNone(fitted_instance) def test_get_fitting_cumulative_fails_limits(self): fitted_instance1 = hw.numa_fit_instance_to_host( self.host, self.instance1, self.limits) self.assertIsInstance(fitted_instance1, objects.InstanceNUMATopology) self.assertEqual(1, fitted_instance1.cells[0].id) self.host = hw.numa_usage_from_instances(self.host, [fitted_instance1]) fitted_instance2 = hw.numa_fit_instance_to_host( self.host, self.instance2, self.limits) self.assertIsNone(fitted_instance2) def test_get_fitting_cumulative_success_limits(self): fitted_instance1 = hw.numa_fit_instance_to_host( self.host, self.instance1, self.limits) self.assertIsInstance(fitted_instance1, objects.InstanceNUMATopology) self.assertEqual(1, fitted_instance1.cells[0].id) self.host = hw.numa_usage_from_instances(self.host, [fitted_instance1]) fitted_instance2 = hw.numa_fit_instance_to_host( self.host, self.instance3, self.limits) self.assertIsInstance(fitted_instance2, objects.InstanceNUMATopology) self.assertEqual(2, fitted_instance2.cells[0].id) def test_get_fitting_pci_success(self): pci_request = objects.InstancePCIRequest(count=1, spec=[{'vendor_id': '8086'}]) pci_reqs = [pci_request] pci_stats = stats.PciDeviceStats() with mock.patch.object(stats.PciDeviceStats, 'support_requests', return_value= True): fitted_instance1 = hw.numa_fit_instance_to_host(self.host, self.instance1, pci_requests=pci_reqs, pci_stats=pci_stats) self.assertIsInstance(fitted_instance1, objects.InstanceNUMATopology) def test_get_fitting_pci_fail(self): pci_request = objects.InstancePCIRequest(count=1, spec=[{'vendor_id': '8086'}]) pci_reqs = [pci_request] pci_stats = stats.PciDeviceStats() with mock.patch.object(stats.PciDeviceStats, 'support_requests', return_value= False): fitted_instance1 = hw.numa_fit_instance_to_host( self.host, self.instance1, pci_requests=pci_reqs, pci_stats=pci_stats) self.assertIsNone(fitted_instance1) def test_get_fitting_pci_avoided(self): def _create_pci_stats(node): test_dict = copy.copy(fake_pci.fake_pool_dict) test_dict['numa_node'] = node return stats.PciDeviceStats( [objects.PciDevicePool.from_dict(test_dict)]) # the PCI device is found on host cell 1 pci_stats = _create_pci_stats(1) #
...therefore an instance without a PCI device should get host cell 2 instance_topology = hw.numa_fit_instance_to_host( self.host, self.instance1, pci_stats=pci_stats) self.assertIsInstance(instance_topology, objects.InstanceNUMATopology) # TODO(sfinucan): We should be comparing this against the HOST cell self.assertEqual(2, instance_topology.cells[0].id) # the PCI device is now found on host cell 2 pci_stats = _create_pci_stats(2) # ...therefore an instance without a PCI device should get host cell 1 instance_topology = hw.numa_fit_instance_to_host( self.host, self.instance1, pci_stats=pci_stats) self.assertIsInstance(instance_topology, objects.InstanceNUMATopology) self.assertEqual(1, instance_topology.cells[0].id) class NumberOfSerialPortsTest(test.NoDBTestCase): def test_flavor(self): flavor = objects.Flavor(vcpus=8, memory_mb=2048, extra_specs={"hw:serial_port_count": 3}) image_meta = objects.ImageMeta.from_dict({}) num_ports = hw.get_number_of_serial_ports(flavor, image_meta) self.assertEqual(3, num_ports) def test_image_meta(self): flavor = objects.Flavor(vcpus=8, memory_mb=2048, extra_specs={}) image_meta = objects.ImageMeta.from_dict( {"properties": {"hw_serial_port_count": 2}}) num_ports = hw.get_number_of_serial_ports(flavor, image_meta) self.assertEqual(2, num_ports) def test_flavor_invalid_value(self): flavor = objects.Flavor(vcpus=8, memory_mb=2048, extra_specs={"hw:serial_port_count": 'foo'}) image_meta = objects.ImageMeta.from_dict({}) self.assertRaises(exception.ImageSerialPortNumberInvalid, hw.get_number_of_serial_ports, flavor, image_meta) def test_image_meta_smaller_than_flavor(self): flavor = objects.Flavor(vcpus=8, memory_mb=2048, extra_specs={"hw:serial_port_count": 3}) image_meta = objects.ImageMeta.from_dict( {"properties": {"hw_serial_port_count": 2}}) num_ports = hw.get_number_of_serial_ports(flavor, image_meta) self.assertEqual(2, num_ports) def test_flavor_smaller_than_image_meta(self): flavor = objects.Flavor(vcpus=8, memory_mb=2048, extra_specs={"hw:serial_port_count": 3}) image_meta = objects.ImageMeta.from_dict( {"properties": {"hw_serial_port_count": 4}}) self.assertRaises(exception.ImageSerialPortNumberExceedFlavorValue, hw.get_number_of_serial_ports, flavor, image_meta) class HelperMethodsTestCase(test.NoDBTestCase): def setUp(self): super(HelperMethodsTestCase, self).setUp() self.hosttopo = objects.NUMATopology(cells=[ objects.NUMACell(id=0, cpuset=set([0, 1]), memory=512, memory_usage=0, cpu_usage=0, mempages=[], siblings=[], pinned_cpus=set([])), objects.NUMACell(id=1, cpuset=set([2, 3]), memory=512, memory_usage=0, cpu_usage=0, mempages=[], siblings=[], pinned_cpus=set([])), ]) self.instancetopo = objects.InstanceNUMATopology( instance_uuid=uuids.instance, cells=[ objects.InstanceNUMACell( id=0, cpuset=set([0, 1]), memory=256, pagesize=2048, cpu_pinning={0: 0, 1: 1}, cpu_topology=None), objects.InstanceNUMACell( id=1, cpuset=set([2]), memory=256, pagesize=2048, cpu_pinning={2: 3}, cpu_topology=None), ]) self.context = context.RequestContext('fake-user', 'fake-project') def _check_usage(self, host_usage): self.assertEqual(2, host_usage.cells[0].cpu_usage) self.assertEqual(256, host_usage.cells[0].memory_usage) self.assertEqual(1, host_usage.cells[1].cpu_usage) self.assertEqual(256, host_usage.cells[1].memory_usage) def test_dicts_json(self): host = {'numa_topology': self.hosttopo._to_json()} instance = {'numa_topology': self.instancetopo._to_json()} res = hw.get_host_numa_usage_from_instance(host, instance) self.assertIsInstance(res, six.string_types)
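# When the host topology is passed in serialized, the updated usage comes back serialized as well; deserialize it with obj_from_db_obj() before checking the counters.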
self._check_usage(objects.NUMATopology.obj_from_db_obj(res)) def test_dicts_instance_json(self): host = {'numa_topology': self.hosttopo} instance = {'numa_topology': self.instancetopo._to_json()} res = hw.get_host_numa_usage_from_instance(host, instance) self.assertIsInstance(res, objects.NUMATopology) self._check_usage(res) def test_dicts_instance_json_old(self): host = {'numa_topology': self.hosttopo} instance = {'numa_topology': jsonutils.dumps(self.instancetopo._to_dict())} res = hw.get_host_numa_usage_from_instance(host, instance) self.assertIsInstance(res, objects.NUMATopology) self._check_usage(res) def test_dicts_host_json(self): host = {'numa_topology': self.hosttopo._to_json()} instance = {'numa_topology': self.instancetopo} res = hw.get_host_numa_usage_from_instance(host, instance) self.assertIsInstance(res, six.string_types) self._check_usage(objects.NUMATopology.obj_from_db_obj(res)) def test_dicts_host_json_old(self): host = {'numa_topology': jsonutils.dumps( self.hosttopo._to_dict())} instance = {'numa_topology': self.instancetopo} res = hw.get_host_numa_usage_from_instance(host, instance) self.assertIsInstance(res, six.string_types) self._check_usage(objects.NUMATopology.obj_from_db_obj(res)) def test_object_host_instance_json(self): host = objects.ComputeNode(numa_topology=self.hosttopo._to_json()) instance = {'numa_topology': self.instancetopo._to_json()} res = hw.get_host_numa_usage_from_instance(host, instance) self.assertIsInstance(res, six.string_types) self._check_usage(objects.NUMATopology.obj_from_db_obj(res)) def test_object_host_instance(self): host = objects.ComputeNode(numa_topology=self.hosttopo._to_json()) instance = {'numa_topology': self.instancetopo} res = hw.get_host_numa_usage_from_instance(host, instance) self.assertIsInstance(res, six.string_types) self._check_usage(objects.NUMATopology.obj_from_db_obj(res)) def test_instance_with_fetch(self): host = objects.ComputeNode(numa_topology=self.hosttopo._to_json()) fake_uuid = uuids.fake instance = {'uuid': fake_uuid} with mock.patch.object(objects.InstanceNUMATopology, 'get_by_instance_uuid', return_value=None) as get_mock: res = hw.get_host_numa_usage_from_instance(host, instance) self.assertIsInstance(res, six.string_types) self.assertTrue(get_mock.called) def test_object_instance_with_load(self): host = objects.ComputeNode(numa_topology=self.hosttopo._to_json()) fake_uuid = uuids.fake instance = objects.Instance(context=self.context, uuid=fake_uuid) with mock.patch.object(objects.InstanceNUMATopology, 'get_by_instance_uuid', return_value=None) as get_mock: res = hw.get_host_numa_usage_from_instance(host, instance) self.assertIsInstance(res, six.string_types) self.assertTrue(get_mock.called) def test_instance_serialized_by_build_request_spec(self): host = objects.ComputeNode(numa_topology=self.hosttopo._to_json()) fake_uuid = uuids.fake instance = objects.Instance(context=self.context, id=1, uuid=fake_uuid, numa_topology=self.instancetopo) # NOTE (ndipanov): This emulates scheduler.utils.build_request_spec # We can remove this test once we no longer use that method. 
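# obj_to_primitive() recursively flattens the Instance, including its embedded numa_topology, into plain dicts and lists, much as the RPC layer would before handing it back to us.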
instance_raw = jsonutils.to_primitive( base_obj.obj_to_primitive(instance)) res = hw.get_host_numa_usage_from_instance(host, instance_raw) self.assertIsInstance(res, six.string_types) self._check_usage(objects.NUMATopology.obj_from_db_obj(res)) def test_attr_host(self): class Host(object): def __init__(obj): obj.numa_topology = self.hosttopo._to_json() host = Host() instance = {'numa_topology': self.instancetopo._to_json()} res = hw.get_host_numa_usage_from_instance(host, instance) self.assertIsInstance(res, six.string_types) self._check_usage(objects.NUMATopology.obj_from_db_obj(res)) def test_never_serialize_result(self): host = {'numa_topology': self.hosttopo._to_json()} instance = {'numa_topology': self.instancetopo} res = hw.get_host_numa_usage_from_instance(host, instance, never_serialize_result=True) self.assertIsInstance(res, objects.NUMATopology) self._check_usage(res) def test_dict_numa_topology_to_obj(self): fake_uuid = uuids.fake instance = objects.Instance(context=self.context, id=1, uuid=fake_uuid, numa_topology=self.instancetopo) instance_dict = base_obj.obj_to_primitive(instance) instance_numa_topo = hw.instance_topology_from_instance(instance_dict) for expected_cell, actual_cell in zip(self.instancetopo.cells, instance_numa_topo.cells): for k in expected_cell.fields: self.assertEqual(getattr(expected_cell, k), getattr(actual_cell, k)) class VirtMemoryPagesTestCase(test.NoDBTestCase): def test_cell_instance_pagesize(self): cell = objects.InstanceNUMACell( id=0, cpuset=set([0]), memory=1024, pagesize=2048) self.assertEqual(0, cell.id) self.assertEqual(set([0]), cell.cpuset) self.assertEqual(1024, cell.memory) self.assertEqual(2048, cell.pagesize) def test_numa_pagesize_usage_from_cell(self): instcell = objects.InstanceNUMACell( id=0, cpuset=set([0]), memory=512, pagesize=2048) hostcell = objects.NUMACell( id=0, cpuset=set([0]), memory=1024, cpu_usage=0, memory_usage=0, mempages=[objects.NUMAPagesTopology( size_kb=2048, total=512, used=0)], siblings=[], pinned_cpus=set([])) topo = hw._numa_pagesize_usage_from_cell(hostcell, instcell, 1) self.assertEqual(2048, topo[0].size_kb) self.assertEqual(512, topo[0].total) self.assertEqual(256, topo[0].used) def _test_get_requested_mempages_pagesize(self, spec=None, props=None): flavor = objects.Flavor(vcpus=16, memory_mb=2048, extra_specs=spec or {}) image_meta = objects.ImageMeta.from_dict({"properties": props or {}}) return hw._numa_get_pagesize_constraints(flavor, image_meta) def test_get_requested_mempages_pagesize_from_flavor_sweep(self): self.assertEqual( hw.MEMPAGES_SMALL, self._test_get_requested_mempages_pagesize( spec={"hw:mem_page_size": "small"})) self.assertEqual( hw.MEMPAGES_LARGE, self._test_get_requested_mempages_pagesize( spec={"hw:mem_page_size": "large"})) self.assertEqual( hw.MEMPAGES_ANY, self._test_get_requested_mempages_pagesize( spec={"hw:mem_page_size": "any"})) def test_get_requested_mempages_pagesize_from_flavor_specific(self): self.assertEqual( 2048, self._test_get_requested_mempages_pagesize( spec={"hw:mem_page_size": "2048"})) def test_get_requested_mempages_pagesize_from_flavor_invalid(self): self.assertRaises( exception.MemoryPageSizeInvalid, self._test_get_requested_mempages_pagesize, {"hw:mem_page_size": "foo"}) self.assertRaises( exception.MemoryPageSizeInvalid, self._test_get_requested_mempages_pagesize, {"hw:mem_page_size": "-42"}) def test_get_requested_mempages_pagesizes_from_flavor_suffix_sweep(self): self.assertEqual( 2048, self._test_get_requested_mempages_pagesize( spec={"hw:mem_page_size":
"2048KB"})) self.assertEqual( 2048, self._test_get_requested_mempages_pagesize( spec={"hw:mem_page_size": "2MB"})) self.assertEqual( 1048576, self._test_get_requested_mempages_pagesize( spec={"hw:mem_page_size": "1GB"})) def test_get_requested_mempages_pagesize_from_image_flavor_any(self): self.assertEqual( 2048, self._test_get_requested_mempages_pagesize( spec={"hw:mem_page_size": "any"}, props={"hw_mem_page_size": "2048"})) def test_get_requested_mempages_pagesize_from_image_flavor_large(self): self.assertEqual( 2048, self._test_get_requested_mempages_pagesize( spec={"hw:mem_page_size": "large"}, props={"hw_mem_page_size": "2048"})) def test_get_requested_mempages_pagesize_from_image_forbidden(self): self.assertRaises( exception.MemoryPageSizeForbidden, self._test_get_requested_mempages_pagesize, {"hw:mem_page_size": "small"}, {"hw_mem_page_size": "2048"}) def test_get_requested_mempages_pagesize_from_image_forbidden2(self): self.assertRaises( exception.MemoryPageSizeForbidden, self._test_get_requested_mempages_pagesize, {}, {"hw_mem_page_size": "2048"}) def test_cell_accepts_request_wipe(self): host_cell = objects.NUMACell( id=0, cpuset=set([0]), memory=1024, mempages=[ objects.NUMAPagesTopology(size_kb=4, total=262144, used=0), ], siblings=[], pinned_cpus=set([])) inst_cell = objects.InstanceNUMACell( id=0, cpuset=set([0]), memory=1024, pagesize=hw.MEMPAGES_SMALL) self.assertEqual( 4, hw._numa_cell_supports_pagesize_request(host_cell, inst_cell)) inst_cell = objects.InstanceNUMACell( id=0, cpuset=set([0]), memory=1024, pagesize=hw.MEMPAGES_ANY) self.assertEqual( 4, hw._numa_cell_supports_pagesize_request(host_cell, inst_cell)) inst_cell = objects.InstanceNUMACell( id=0, cpuset=set([0]), memory=1024, pagesize=hw.MEMPAGES_LARGE) self.assertIsNone(hw._numa_cell_supports_pagesize_request( host_cell, inst_cell)) def test_cell_accepts_request_large_pass(self): inst_cell = objects.InstanceNUMACell( id=0, cpuset=set([0]), memory=1024, pagesize=hw.MEMPAGES_LARGE) host_cell = objects.NUMACell( id=0, cpuset=set([0]), memory=1024, mempages=[ objects.NUMAPagesTopology(size_kb=4, total=256, used=0), objects.NUMAPagesTopology(size_kb=2048, total=512, used=0) ], siblings=[], pinned_cpus=set([])) self.assertEqual( 2048, hw._numa_cell_supports_pagesize_request(host_cell, inst_cell)) def test_cell_accepts_request_custom_pass(self): inst_cell = objects.InstanceNUMACell( id=0, cpuset=set([0]), memory=1024, pagesize=2048) host_cell = objects.NUMACell( id=0, cpuset=set([0]), memory=1024, mempages=[ objects.NUMAPagesTopology(size_kb=4, total=256, used=0), objects.NUMAPagesTopology(size_kb=2048, total=512, used=0) ], siblings=[], pinned_cpus=set([])) self.assertEqual( 2048, hw._numa_cell_supports_pagesize_request(host_cell, inst_cell)) def test_cell_accepts_request_remainder_memory(self): # Test memory can't be divided with no rem by mempage's size_kb inst_cell = objects.InstanceNUMACell( id=0, cpuset=set([0]), memory=1024 + 1, pagesize=2048) host_cell = objects.NUMACell( id=0, cpuset=set([0]), memory=1024, mempages=[ objects.NUMAPagesTopology(size_kb=4, total=256, used=0), objects.NUMAPagesTopology(size_kb=2048, total=512, used=0) ], siblings=[], pinned_cpus=set([])) self.assertIsNone(hw._numa_cell_supports_pagesize_request( host_cell, inst_cell)) def test_cell_accepts_request_host_mempages(self): # Test pagesize not in host's mempages inst_cell = objects.InstanceNUMACell( id=0, cpuset=set([0]), memory=1024, pagesize=4096) host_cell = objects.NUMACell( id=0, cpuset=set([0]), memory=1024, mempages=[ 
objects.NUMAPagesTopology(size_kb=4, total=256, used=0), objects.NUMAPagesTopology(size_kb=2048, total=512, used=0) ], siblings=[], pinned_cpus=set([])) self.assertRaises(exception.MemoryPageSizeNotSupported, hw._numa_cell_supports_pagesize_request, host_cell, inst_cell) class _CPUPinningTestCaseBase(object): def assertEqualTopology(self, expected, got): for attr in ('sockets', 'cores', 'threads'): self.assertEqual(getattr(expected, attr), getattr(got, attr), "Mismatch on %s" % attr) def assertInstanceCellPinned(self, instance_cell, cell_ids=None): default_cell_id = 0 self.assertIsNotNone(instance_cell) if cell_ids is None: self.assertEqual(default_cell_id, instance_cell.id) else: self.assertIn(instance_cell.id, cell_ids) self.assertEqual(len(instance_cell.cpuset), len(instance_cell.cpu_pinning)) def assertPinningPreferThreads(self, instance_cell, host_cell): """Make sure we are preferring threads. We do this by assessing that at least 2 CPUs went to the same core if that was even possible to begin with. """ max_free_siblings = max(map(len, host_cell.free_siblings)) if len(instance_cell) > 1 and max_free_siblings > 1: cpu_to_sib = {} for sib in host_cell.free_siblings: for cpu in sib: cpu_to_sib[cpu] = tuple(sorted(sib)) pins_per_sib = collections.defaultdict(int) for inst_p, host_p in instance_cell.cpu_pinning.items(): pins_per_sib[cpu_to_sib[host_p]] += 1 self.assertGreater(max(pins_per_sib.values()), 1, "Seems threads were not preferred by the " "pinning logic.") class CPUPinningCellTestCase(test.NoDBTestCase, _CPUPinningTestCaseBase): def test_get_pinning_inst_too_large_cpu(self): host_pin = objects.NUMACell(id=0, cpuset=set([0, 1, 2]), memory=2048, memory_usage=0, siblings=[], mempages=[], pinned_cpus=set([])) inst_pin = objects.InstanceNUMACell(cpuset=set([0, 1, 2, 3]), memory=2048) inst_pin = hw._numa_fit_instance_cell_with_pinning(host_pin, inst_pin) self.assertIsNone(inst_pin) def test_get_pinning_inst_too_large_mem(self): host_pin = objects.NUMACell(id=0, cpuset=set([0, 1, 2]), memory=2048, memory_usage=1024, siblings=[], mempages=[], pinned_cpus=set([])) inst_pin = objects.InstanceNUMACell(cpuset=set([0, 1, 2]), memory=2048) inst_pin = hw._numa_fit_instance_cell_with_pinning(host_pin, inst_pin) self.assertIsNone(inst_pin) def test_get_pinning_inst_not_avail(self): host_pin = objects.NUMACell(id=0, cpuset=set([0, 1, 2, 3]), memory=2048, memory_usage=0, pinned_cpus=set([0]), siblings=[], mempages=[]) inst_pin = objects.InstanceNUMACell(cpuset=set([0, 1, 2, 3]), memory=2048) inst_pin = hw._numa_fit_instance_cell_with_pinning(host_pin, inst_pin) self.assertIsNone(inst_pin) def test_get_pinning_no_sibling_fits_empty(self): host_pin = objects.NUMACell(id=0, cpuset=set([0, 1, 2]), memory=2048, memory_usage=0, siblings=[], mempages=[], pinned_cpus=set([])) inst_pin = objects.InstanceNUMACell(cpuset=set([0, 1, 2]), memory=2048) inst_pin = hw._numa_fit_instance_cell_with_pinning(host_pin, inst_pin) self.assertInstanceCellPinned(inst_pin) got_topo = objects.VirtCPUTopology(sockets=1, cores=3, threads=1) self.assertEqualTopology(got_topo, inst_pin.cpu_topology) got_pinning = {x: x for x in range(0, 3)} self.assertEqual(got_pinning, inst_pin.cpu_pinning) def test_get_pinning_no_sibling_fits_w_usage(self): host_pin = objects.NUMACell(id=0, cpuset=set([0, 1, 2, 3]), memory=2048, memory_usage=0, pinned_cpus=set([1]), mempages=[], siblings=[]) inst_pin = objects.InstanceNUMACell(cpuset=set([0, 1, 2]), memory=1024) inst_pin = hw._numa_fit_instance_cell_with_pinning(host_pin, inst_pin) 
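# Host CPU 1 is already pinned, so the three guest vCPUs must be pinned to the remaining free host CPUs 0, 2 and 3.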
self.assertInstanceCellPinned(inst_pin) got_pinning = {0: 0, 1: 2, 2: 3} self.assertEqual(got_pinning, inst_pin.cpu_pinning) def test_get_pinning_instance_siblings_fits(self): host_pin = objects.NUMACell(id=0, cpuset=set([0, 1, 2, 3]), memory=2048, memory_usage=0, siblings=[], mempages=[], pinned_cpus=set([])) inst_pin = objects.InstanceNUMACell( cpuset=set([0, 1, 2, 3]), memory=2048) inst_pin = hw._numa_fit_instance_cell_with_pinning(host_pin, inst_pin) self.assertInstanceCellPinned(inst_pin) got_topo = objects.VirtCPUTopology(sockets=1, cores=4, threads=1) self.assertEqualTopology(got_topo, inst_pin.cpu_topology) got_pinning = {x: x for x in range(0, 4)} self.assertEqual(got_pinning, inst_pin.cpu_pinning) def test_get_pinning_instance_siblings_host_siblings_fits_empty(self): host_pin = objects.NUMACell(id=0, cpuset=set([0, 1, 2, 3]), memory=2048, memory_usage=0, siblings=[set([0, 1]), set([2, 3])], mempages=[], pinned_cpus=set([])) inst_pin = objects.InstanceNUMACell( cpuset=set([0, 1, 2, 3]), memory=2048) inst_pin = hw._numa_fit_instance_cell_with_pinning(host_pin, inst_pin) self.assertInstanceCellPinned(inst_pin) got_topo = objects.VirtCPUTopology(sockets=1, cores=2, threads=2) self.assertEqualTopology(got_topo, inst_pin.cpu_topology) got_pinning = {x: x for x in range(0, 4)} self.assertEqual(got_pinning, inst_pin.cpu_pinning) def test_get_pinning_instance_siblings_host_siblings_fits_empty_2(self): host_pin = objects.NUMACell( id=0, cpuset=set([0, 1, 2, 3, 4, 5, 6, 7]), memory=4096, memory_usage=0, siblings=[set([0, 1]), set([2, 3]), set([4, 5]), set([6, 7])], mempages=[], pinned_cpus=set([])) inst_pin = objects.InstanceNUMACell( cpuset=set([0, 1, 2, 3, 4, 5, 6, 7]), memory=2048) inst_pin = hw._numa_fit_instance_cell_with_pinning(host_pin, inst_pin) self.assertInstanceCellPinned(inst_pin) got_topo = objects.VirtCPUTopology(sockets=1, cores=4, threads=2) self.assertEqualTopology(got_topo, inst_pin.cpu_topology) got_pinning = {x: x for x in range(0, 8)} self.assertEqual(got_pinning, inst_pin.cpu_pinning) def test_get_pinning_instance_siblings_host_siblings_fits_w_usage(self): host_pin = objects.NUMACell( id=0, cpuset=set([0, 1, 2, 3, 4, 5, 6, 7]), memory=4096, memory_usage=0, pinned_cpus=set([1, 2, 5, 6]), siblings=[set([0, 1, 2, 3]), set([4, 5, 6, 7])], mempages=[]) inst_pin = objects.InstanceNUMACell( cpuset=set([0, 1, 2, 3]), memory=2048) inst_pin = hw._numa_fit_instance_cell_with_pinning(host_pin, inst_pin) self.assertInstanceCellPinned(inst_pin) got_topo = objects.VirtCPUTopology(sockets=1, cores=2, threads=2) self.assertEqualTopology(got_topo, inst_pin.cpu_topology) got_pinning = {0: 0, 1: 3, 2: 4, 3: 7} self.assertEqual(got_pinning, inst_pin.cpu_pinning) def test_get_pinning_host_siblings_fit_single_core(self): host_pin = objects.NUMACell( id=0, cpuset=set([0, 1, 2, 3, 4, 5, 6, 7]), memory=4096, memory_usage=0, siblings=[set([0, 1, 2, 3]), set([4, 5, 6, 7])], mempages=[], pinned_cpus=set([])) inst_pin = objects.InstanceNUMACell(cpuset=set([0, 1, 2, 3]), memory=2048) inst_pin = hw._numa_fit_instance_cell_with_pinning(host_pin, inst_pin) self.assertInstanceCellPinned(inst_pin) got_topo = objects.VirtCPUTopology(sockets=1, cores=1, threads=4) self.assertEqualTopology(got_topo, inst_pin.cpu_topology) got_pinning = {x: x for x in range(0, 4)} self.assertEqual(got_pinning, inst_pin.cpu_pinning) def test_get_pinning_host_siblings_fit(self): host_pin = objects.NUMACell(id=0, cpuset=set([0, 1, 2, 3]), memory=4096, memory_usage=0, siblings=[set([0, 1]), set([2, 3])], mempages=[], 
pinned_cpus=set([])) inst_pin = objects.InstanceNUMACell(cpuset=set([0, 1, 2, 3]), memory=2048) inst_pin = hw._numa_fit_instance_cell_with_pinning(host_pin, inst_pin) self.assertInstanceCellPinned(inst_pin) got_topo = objects.VirtCPUTopology(sockets=1, cores=2, threads=2) self.assertEqualTopology(got_topo, inst_pin.cpu_topology) got_pinning = {x: x for x in range(0, 4)} self.assertEqual(got_pinning, inst_pin.cpu_pinning) def test_get_pinning_require_policy_no_siblings(self): host_pin = objects.NUMACell( id=0, cpuset=set([0, 1, 2, 3, 4, 5, 6, 7]), memory=4096, memory_usage=0, pinned_cpus=set([]), siblings=[], mempages=[]) inst_pin = objects.InstanceNUMACell( cpuset=set([0, 1, 2, 3]), memory=2048, cpu_policy=fields.CPUAllocationPolicy.DEDICATED, cpu_thread_policy=fields.CPUThreadAllocationPolicy.REQUIRE) inst_pin = hw._numa_fit_instance_cell_with_pinning(host_pin, inst_pin) self.assertIsNone(inst_pin) def test_get_pinning_require_policy_too_few_siblings(self): host_pin = objects.NUMACell( id=0, cpuset=set([0, 1, 2, 3, 4, 5, 6, 7]), memory=4096, memory_usage=0, pinned_cpus=set([0, 1, 2]), siblings=[set([0, 4]), set([1, 5]), set([2, 6]), set([3, 7])], mempages=[]) inst_pin = objects.InstanceNUMACell( cpuset=set([0, 1, 2, 3]), memory=2048, cpu_policy=fields.CPUAllocationPolicy.DEDICATED, cpu_thread_policy=fields.CPUThreadAllocationPolicy.REQUIRE) inst_pin = hw._numa_fit_instance_cell_with_pinning(host_pin, inst_pin) self.assertIsNone(inst_pin) def test_get_pinning_require_policy_fits(self): host_pin = objects.NUMACell(id=0, cpuset=set([0, 1, 2, 3]), memory=4096, memory_usage=0, siblings=[set([0, 1]), set([2, 3])], mempages=[], pinned_cpus=set([])) inst_pin = objects.InstanceNUMACell( cpuset=set([0, 1, 2, 3]), memory=2048, cpu_policy=fields.CPUAllocationPolicy.DEDICATED, cpu_thread_policy=fields.CPUThreadAllocationPolicy.REQUIRE) inst_pin = hw._numa_fit_instance_cell_with_pinning(host_pin, inst_pin) self.assertInstanceCellPinned(inst_pin) got_topo = objects.VirtCPUTopology(sockets=1, cores=2, threads=2) self.assertEqualTopology(got_topo, inst_pin.cpu_topology) def test_get_pinning_require_policy_fits_w_usage(self): host_pin = objects.NUMACell( id=0, cpuset=set([0, 1, 2, 3, 4, 5, 6, 7]), memory=4096, memory_usage=0, pinned_cpus=set([0, 1]), siblings=[set([0, 4]), set([1, 5]), set([2, 6]), set([3, 7])], mempages=[]) inst_pin = objects.InstanceNUMACell( cpuset=set([0, 1, 2, 3]), memory=2048, cpu_policy=fields.CPUAllocationPolicy.DEDICATED, cpu_thread_policy=fields.CPUThreadAllocationPolicy.REQUIRE) inst_pin = hw._numa_fit_instance_cell_with_pinning(host_pin, inst_pin) self.assertInstanceCellPinned(inst_pin) got_topo = objects.VirtCPUTopology(sockets=1, cores=2, threads=2) self.assertEqualTopology(got_topo, inst_pin.cpu_topology) def test_get_pinning_host_siblings_instance_odd_fit(self): host_pin = objects.NUMACell(id=0, cpuset=set([0, 1, 2, 3, 4, 5, 6, 7]), memory=4096, memory_usage=0, siblings=[set([0, 1]), set([2, 3]), set([4, 5]), set([6, 7])], mempages=[], pinned_cpus=set([])) inst_pin = objects.InstanceNUMACell(cpuset=set([0, 1, 2, 3, 4]), memory=2048) inst_pin = hw._numa_fit_instance_cell_with_pinning(host_pin, inst_pin) self.assertInstanceCellPinned(inst_pin) got_topo = objects.VirtCPUTopology(sockets=1, cores=5, threads=1) self.assertEqualTopology(got_topo, inst_pin.cpu_topology) def test_get_pinning_host_siblings_instance_fit_optimize_threads(self): host_pin = objects.NUMACell(id=0, cpuset=set([0, 1, 2, 3, 4, 5, 6, 7]), memory=4096, memory_usage=0, siblings=[set([0, 1, 2, 3]), set([4, 5, 
6, 7])], mempages=[], pinned_cpus=set([])) inst_pin = objects.InstanceNUMACell(cpuset=set([0, 1, 2, 3, 4, 5]), memory=2048) inst_pin = hw._numa_fit_instance_cell_with_pinning(host_pin, inst_pin) self.assertInstanceCellPinned(inst_pin) got_topo = objects.VirtCPUTopology(sockets=1, cores=3, threads=2) self.assertEqualTopology(got_topo, inst_pin.cpu_topology) def test_get_pinning_host_siblings_instance_odd_fit_w_usage(self): host_pin = objects.NUMACell(id=0, cpuset=set([0, 1, 2, 3, 4, 5, 6, 7]), memory=4096, memory_usage=0, siblings=[set([0, 1]), set([2, 3]), set([4, 5]), set([6, 7])], mempages=[], pinned_cpus=set([0, 2, 5])) inst_pin = objects.InstanceNUMACell(cpuset=set([0, 1, 2]), memory=2048) inst_pin = hw._numa_fit_instance_cell_with_pinning(host_pin, inst_pin) self.assertInstanceCellPinned(inst_pin) got_topo = objects.VirtCPUTopology(sockets=1, cores=3, threads=1) self.assertEqualTopology(got_topo, inst_pin.cpu_topology) def test_get_pinning_host_siblings_instance_mixed_siblings(self): host_pin = objects.NUMACell(id=0, cpuset=set([0, 1, 2, 3, 4, 5, 6, 7]), memory=4096, memory_usage=0, siblings=[set([0, 1]), set([2, 3]), set([4, 5]), set([6, 7])], mempages=[], pinned_cpus=set([0, 1, 2, 5])) inst_pin = objects.InstanceNUMACell(cpuset=set([0, 1, 2, 3]), memory=2048) inst_pin = hw._numa_fit_instance_cell_with_pinning(host_pin, inst_pin) self.assertInstanceCellPinned(inst_pin) got_topo = objects.VirtCPUTopology(sockets=1, cores=4, threads=1) self.assertEqualTopology(got_topo, inst_pin.cpu_topology) def test_get_pinning_host_siblings_instance_odd_fit_orphan_only(self): host_pin = objects.NUMACell(id=0, cpuset=set([0, 1, 2, 3, 4, 5, 6, 7]), memory=4096, memory_usage=0, siblings=[set([0, 1]), set([2, 3]), set([4, 5]), set([6, 7])], mempages=[], pinned_cpus=set([0, 2, 5, 6])) inst_pin = objects.InstanceNUMACell(cpuset=set([0, 1, 2, 3]), memory=2048) inst_pin = hw._numa_fit_instance_cell_with_pinning(host_pin, inst_pin) self.assertInstanceCellPinned(inst_pin) got_topo = objects.VirtCPUTopology(sockets=1, cores=4, threads=1) self.assertEqualTopology(got_topo, inst_pin.cpu_topology) def test_get_pinning_host_siblings_large_instance_odd_fit(self): host_pin = objects.NUMACell(id=0, cpuset=set([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]), memory=4096, memory_usage=0, siblings=[set([0, 8]), set([1, 9]), set([2, 10]), set([3, 11]), set([4, 12]), set([5, 13]), set([6, 14]), set([7, 15])], mempages=[], pinned_cpus=set([])) inst_pin = objects.InstanceNUMACell(cpuset=set([0, 1, 2, 3, 4]), memory=2048) inst_pin = hw._numa_fit_instance_cell_with_pinning(host_pin, inst_pin) self.assertInstanceCellPinned(inst_pin) self.assertPinningPreferThreads(inst_pin, host_pin) got_topo = objects.VirtCPUTopology(sockets=1, cores=5, threads=1) self.assertEqualTopology(got_topo, inst_pin.cpu_topology) def test_get_pinning_isolate_policy_too_few_fully_free_cores(self): host_pin = objects.NUMACell(id=0, cpuset=set([0, 1, 2, 3]), memory=4096, memory_usage=0, siblings=[set([0, 1]), set([2, 3])], mempages=[], pinned_cpus=set([1])) inst_pin = objects.InstanceNUMACell( cpuset=set([0, 1]), memory=2048, cpu_policy=fields.CPUAllocationPolicy.DEDICATED, cpu_thread_policy=fields.CPUThreadAllocationPolicy.ISOLATE) inst_pin = hw._numa_fit_instance_cell_with_pinning(host_pin, inst_pin) self.assertIsNone(inst_pin) def test_get_pinning_isolate_policy_no_fully_free_cores(self): host_pin = objects.NUMACell(id=0, cpuset=set([0, 1, 2, 3]), memory=4096, memory_usage=0, siblings=[set([0, 1]), set([2, 3])], mempages=[], 
pinned_cpus=set([1, 2])) inst_pin = objects.InstanceNUMACell( cpuset=set([0, 1]), memory=2048, cpu_policy=fields.CPUAllocationPolicy.DEDICATED, cpu_thread_policy=fields.CPUThreadAllocationPolicy.ISOLATE) inst_pin = hw._numa_fit_instance_cell_with_pinning(host_pin, inst_pin) self.assertIsNone(inst_pin) def test_get_pinning_isolate_policy_fits(self): host_pin = objects.NUMACell(id=0, cpuset=set([0, 1, 2, 3]), memory=4096, memory_usage=0, siblings=[], mempages=[], pinned_cpus=set([])) inst_pin = objects.InstanceNUMACell( cpuset=set([0, 1]), memory=2048, cpu_policy=fields.CPUAllocationPolicy.DEDICATED, cpu_thread_policy=fields.CPUThreadAllocationPolicy.ISOLATE) inst_pin = hw._numa_fit_instance_cell_with_pinning(host_pin, inst_pin) self.assertInstanceCellPinned(inst_pin) got_topo = objects.VirtCPUTopology(sockets=1, cores=2, threads=1) self.assertEqualTopology(got_topo, inst_pin.cpu_topology) def test_get_pinning_isolate_policy_fits_ht_host(self): host_pin = objects.NUMACell(id=0, cpuset=set([0, 1, 2, 3]), memory=4096, memory_usage=0, siblings=[set([0, 1]), set([2, 3])], mempages=[], pinned_cpus=set([])) inst_pin = objects.InstanceNUMACell( cpuset=set([0, 1]), memory=2048, cpu_policy=fields.CPUAllocationPolicy.DEDICATED, cpu_thread_policy=fields.CPUThreadAllocationPolicy.ISOLATE) inst_pin = hw._numa_fit_instance_cell_with_pinning(host_pin, inst_pin) self.assertInstanceCellPinned(inst_pin) got_topo = objects.VirtCPUTopology(sockets=1, cores=2, threads=1) self.assertEqualTopology(got_topo, inst_pin.cpu_topology) def test_get_pinning_isolate_policy_fits_w_usage(self): host_pin = objects.NUMACell( id=0, cpuset=set([0, 1, 2, 3, 4, 5, 6, 7]), memory=4096, memory_usage=0, pinned_cpus=set([0, 1]), siblings=[set([0, 4]), set([1, 5]), set([2, 6]), set([3, 7])], mempages=[]) inst_pin = objects.InstanceNUMACell( cpuset=set([0, 1]), memory=2048, cpu_policy=fields.CPUAllocationPolicy.DEDICATED, cpu_thread_policy=fields.CPUThreadAllocationPolicy.ISOLATE) inst_pin = hw._numa_fit_instance_cell_with_pinning(host_pin, inst_pin) self.assertInstanceCellPinned(inst_pin) got_topo = objects.VirtCPUTopology(sockets=1, cores=2, threads=1) self.assertEqualTopology(got_topo, inst_pin.cpu_topology) class CPUPinningTestCase(test.NoDBTestCase, _CPUPinningTestCaseBase): def test_host_numa_fit_instance_to_host_single_cell(self): host_topo = objects.NUMATopology( cells=[objects.NUMACell(id=0, cpuset=set([0, 1]), memory=2048, memory_usage=0, siblings=[], mempages=[], pinned_cpus=set([])), objects.NUMACell(id=1, cpuset=set([2, 3]), memory=2048, memory_usage=0, siblings=[], mempages=[], pinned_cpus=set([]))] ) inst_topo = objects.InstanceNUMATopology( cells=[objects.InstanceNUMACell( cpuset=set([0, 1]), memory=2048, cpu_policy=fields.CPUAllocationPolicy.DEDICATED)]) inst_topo = hw.numa_fit_instance_to_host(host_topo, inst_topo) for cell in inst_topo.cells: self.assertInstanceCellPinned(cell, cell_ids=(0, 1)) def test_host_numa_fit_instance_to_host_single_cell_w_usage(self): host_topo = objects.NUMATopology( cells=[objects.NUMACell(id=0, cpuset=set([0, 1]), pinned_cpus=set([0]), memory=2048, memory_usage=0, siblings=[], mempages=[]), objects.NUMACell(id=1, cpuset=set([2, 3]), memory=2048, memory_usage=0, siblings=[], mempages=[], pinned_cpus=set([]))]) inst_topo = objects.InstanceNUMATopology( cells=[objects.InstanceNUMACell( cpuset=set([0, 1]), memory=2048, cpu_policy=fields.CPUAllocationPolicy.DEDICATED)]) inst_topo = hw.numa_fit_instance_to_host(host_topo, inst_topo) for cell in inst_topo.cells: 
self.assertInstanceCellPinned(cell, cell_ids=(1,)) def test_host_numa_fit_instance_to_host_single_cell_fail(self): host_topo = objects.NUMATopology( cells=[objects.NUMACell(id=0, cpuset=set([0, 1]), memory=2048, pinned_cpus=set([0]), memory_usage=0, siblings=[], mempages=[]), objects.NUMACell(id=1, cpuset=set([2, 3]), memory=2048, pinned_cpus=set([2]), memory_usage=0, siblings=[], mempages=[])]) inst_topo = objects.InstanceNUMATopology( cells=[objects.InstanceNUMACell( cpuset=set([0, 1]), memory=2048, cpu_policy=fields.CPUAllocationPolicy.DEDICATED)]) inst_topo = hw.numa_fit_instance_to_host(host_topo, inst_topo) self.assertIsNone(inst_topo) def test_host_numa_fit_instance_to_host_fit(self): host_topo = objects.NUMATopology( cells=[objects.NUMACell(id=0, cpuset=set([0, 1, 2, 3]), memory=2048, memory_usage=0, siblings=[], mempages=[], pinned_cpus=set([])), objects.NUMACell(id=1, cpuset=set([4, 5, 6, 7]), memory=2048, memory_usage=0, siblings=[], mempages=[], pinned_cpus=set([]))]) inst_topo = objects.InstanceNUMATopology( cells=[objects.InstanceNUMACell( cpuset=set([0, 1]), memory=2048, cpu_policy=fields.CPUAllocationPolicy.DEDICATED), objects.InstanceNUMACell( cpuset=set([2, 3]), memory=2048, cpu_policy=fields.CPUAllocationPolicy.DEDICATED)]) inst_topo = hw.numa_fit_instance_to_host(host_topo, inst_topo) for cell in inst_topo.cells: self.assertInstanceCellPinned(cell, cell_ids=(0, 1)) def test_host_numa_fit_instance_to_host_barely_fit(self): host_topo = objects.NUMATopology( cells=[objects.NUMACell(id=0, cpuset=set([0, 1, 2, 3]), memory=2048, pinned_cpus=set([0]), siblings=[], mempages=[], memory_usage=0), objects.NUMACell(id=1, cpuset=set([4, 5, 6, 7]), memory=2048, memory_usage=0, siblings=[], mempages=[], pinned_cpus=set([4, 5, 6])), objects.NUMACell(id=2, cpuset=set([8, 9, 10, 11]), memory=2048, memory_usage=0, siblings=[], mempages=[], pinned_cpus=set([10, 11]))]) inst_topo = objects.InstanceNUMATopology( cells=[objects.InstanceNUMACell( cpuset=set([0, 1]), memory=2048, cpu_policy=fields.CPUAllocationPolicy.DEDICATED), objects.InstanceNUMACell( cpuset=set([2, 3]), memory=2048, cpu_policy=fields.CPUAllocationPolicy.DEDICATED)]) inst_topo = hw.numa_fit_instance_to_host(host_topo, inst_topo) for cell in inst_topo.cells: self.assertInstanceCellPinned(cell, cell_ids=(0, 2)) def test_host_numa_fit_instance_to_host_fail_capacity(self): host_topo = objects.NUMATopology( cells=[objects.NUMACell(id=0, cpuset=set([0, 1, 2, 3]), memory=4096, memory_usage=0, mempages=[], siblings=[], pinned_cpus=set([0])), objects.NUMACell(id=1, cpuset=set([4, 5, 6, 7]), memory=4096, memory_usage=0, siblings=[], mempages=[], pinned_cpus=set([4, 5, 6]))]) inst_topo = objects.InstanceNUMATopology( cells=[objects.InstanceNUMACell( cpuset=set([0, 1]), memory=2048, cpu_policy=fields.CPUAllocationPolicy.DEDICATED), objects.InstanceNUMACell( cpuset=set([2, 3]), memory=2048, cpu_policy=fields.CPUAllocationPolicy.DEDICATED)]) inst_topo = hw.numa_fit_instance_to_host(host_topo, inst_topo) self.assertIsNone(inst_topo) def test_host_numa_fit_instance_to_host_fail_topology(self): host_topo = objects.NUMATopology( cells=[objects.NUMACell(id=0, cpuset=set([0, 1, 2, 3]), memory=4096, memory_usage=0, siblings=[], mempages=[], pinned_cpus=set([])), objects.NUMACell(id=1, cpuset=set([4, 5, 6, 7]), memory=4096, memory_usage=0, siblings=[], mempages=[], pinned_cpus=set([]))]) inst_topo = objects.InstanceNUMATopology( cells=[objects.InstanceNUMACell( cpuset=set([0, 1]), memory=1024, cpu_policy=fields.CPUAllocationPolicy.DEDICATED), 
objects.InstanceNUMACell( cpuset=set([2, 3]), memory=1024, cpu_policy=fields.CPUAllocationPolicy.DEDICATED), objects.InstanceNUMACell( cpuset=set([4, 5]), memory=1024, cpu_policy=fields.CPUAllocationPolicy.DEDICATED)]) inst_topo = hw.numa_fit_instance_to_host(host_topo, inst_topo) self.assertIsNone(inst_topo) def test_cpu_pinning_usage_from_instances(self): host_pin = objects.NUMATopology( cells=[objects.NUMACell(id=0, cpuset=set([0, 1, 2, 3]), memory=4096, cpu_usage=0, memory_usage=0, siblings=[], mempages=[], pinned_cpus=set([]))]) inst_pin_1 = objects.InstanceNUMATopology( cells=[objects.InstanceNUMACell( cpuset=set([0, 1]), id=0, memory=2048, cpu_pinning={0: 0, 1: 3}, cpu_policy=fields.CPUAllocationPolicy.DEDICATED)]) inst_pin_2 = objects.InstanceNUMATopology( cells = [objects.InstanceNUMACell( cpuset=set([0, 1]), id=0, memory=2048, cpu_pinning={0: 1, 1: 2}, cpu_policy=fields.CPUAllocationPolicy.DEDICATED)]) host_pin = hw.numa_usage_from_instances( host_pin, [inst_pin_1, inst_pin_2]) self.assertEqual(set([0, 1, 2, 3]), host_pin.cells[0].pinned_cpus) def test_cpu_pinning_usage_from_instances_free(self): host_pin = objects.NUMATopology( cells=[objects.NUMACell(id=0, cpuset=set([0, 1, 2, 3]), memory=4096, cpu_usage=0, memory_usage=0, siblings=[], mempages=[], pinned_cpus=set([0, 1, 3]))]) inst_pin_1 = objects.InstanceNUMATopology( cells=[objects.InstanceNUMACell( cpuset=set([0]), memory=1024, cpu_pinning={0: 1}, id=0, cpu_policy=fields.CPUAllocationPolicy.DEDICATED)]) inst_pin_2 = objects.InstanceNUMATopology( cells=[objects.InstanceNUMACell( cpuset=set([0, 1]), memory=1024, id=0, cpu_pinning={0: 0, 1: 3}, cpu_policy=fields.CPUAllocationPolicy.DEDICATED)]) host_pin = hw.numa_usage_from_instances( host_pin, [inst_pin_1, inst_pin_2], free=True) self.assertEqual(set(), host_pin.cells[0].pinned_cpus) def test_host_usage_from_instances_fail(self): host_pin = objects.NUMATopology( cells=[objects.NUMACell(id=0, cpuset=set([0, 1, 2, 3]), memory=4096, cpu_usage=0, memory_usage=0, siblings=[], mempages=[], pinned_cpus=set([]))]) inst_pin_1 = objects.InstanceNUMATopology( cells=[objects.InstanceNUMACell( cpuset=set([0, 1]), memory=2048, id=0, cpu_pinning={0: 0, 1: 3}, cpu_policy=fields.CPUAllocationPolicy.DEDICATED)]) inst_pin_2 = objects.InstanceNUMATopology( cells = [objects.InstanceNUMACell( cpuset=set([0, 1]), id=0, memory=2048, cpu_pinning={0: 0, 1: 2}, cpu_policy=fields.CPUAllocationPolicy.DEDICATED)]) self.assertRaises(exception.CPUPinningInvalid, hw.numa_usage_from_instances, host_pin, [inst_pin_1, inst_pin_2]) def test_host_usage_from_instances_isolate(self): host_pin = objects.NUMATopology( cells=[objects.NUMACell(id=0, cpuset=set([0, 1, 2, 3]), memory=4096, cpu_usage=0, memory_usage=0, siblings=[set([0, 2]), set([1, 3])], mempages=[], pinned_cpus=set([]))]) inst_pin_1 = objects.InstanceNUMATopology( cells=[objects.InstanceNUMACell( cpuset=set([0, 1]), memory=2048, id=0, cpu_pinning={0: 0, 1: 1}, cpu_policy=fields.CPUAllocationPolicy.DEDICATED, cpu_thread_policy=fields.CPUThreadAllocationPolicy.ISOLATE )]) new_cell = hw.numa_usage_from_instances(host_pin, [inst_pin_1]) self.assertEqual(host_pin.cells[0].cpuset, new_cell.cells[0].pinned_cpus) self.assertEqual(new_cell.cells[0].cpu_usage, 4) def test_host_usage_from_instances_isolate_free(self): host_pin = objects.NUMATopology( cells=[objects.NUMACell(id=0, cpuset=set([0, 1, 2, 3]), memory=4096, cpu_usage=4, memory_usage=0, siblings=[set([0, 2]), set([1, 3])], mempages=[], pinned_cpus=set([0, 1, 2, 3]))]) inst_pin_1 = 
objects.InstanceNUMATopology( cells=[objects.InstanceNUMACell( cpuset=set([0, 1]), memory=2048, id=0, cpu_pinning={0: 0, 1: 1}, cpu_policy=fields.CPUAllocationPolicy.DEDICATED, cpu_thread_policy=fields.CPUThreadAllocationPolicy.ISOLATE )]) new_cell = hw.numa_usage_from_instances(host_pin, [inst_pin_1], free=True) self.assertEqual(set([]), new_cell.cells[0].pinned_cpus) self.assertEqual(new_cell.cells[0].cpu_usage, 0) def test_host_usage_from_instances_isolated_without_siblings(self): host_pin = objects.NUMATopology( cells=[objects.NUMACell(id=0, cpuset=set([0, 1, 2, 3]), memory=4096, cpu_usage=0, memory_usage=0, siblings=[], mempages=[], pinned_cpus=set([]))]) inst_pin = objects.InstanceNUMATopology( cells=[objects.InstanceNUMACell( cpuset=set([0, 1, 2]), memory=2048, id=0, cpu_pinning={0: 0, 1: 1, 2: 2}, cpu_policy=fields.CPUAllocationPolicy.DEDICATED, cpu_thread_policy=fields.CPUThreadAllocationPolicy.ISOLATE )]) new_cell = hw.numa_usage_from_instances(host_pin, [inst_pin]) self.assertEqual(inst_pin.cells[0].cpuset, new_cell.cells[0].pinned_cpus) self.assertEqual(new_cell.cells[0].cpu_usage, 3) def test_host_usage_from_instances_isolated_without_siblings_free(self): host_pin = objects.NUMATopology( cells=[objects.NUMACell(id=0, cpuset=set([0, 1, 2, 3]), memory=4096, cpu_usage=4, memory_usage=0, siblings=[], mempages=[], pinned_cpus=set([0, 1, 2, 3]))]) inst_pin = objects.InstanceNUMATopology( cells=[objects.InstanceNUMACell( cpuset=set([0, 1, 3]), memory=2048, id=0, cpu_pinning={0: 0, 1: 1, 2: 2}, cpu_policy=fields.CPUAllocationPolicy.DEDICATED, cpu_thread_policy=fields.CPUThreadAllocationPolicy.ISOLATE )]) new_cell = hw.numa_usage_from_instances(host_pin, [inst_pin], free=True) self.assertEqual(set([3]), new_cell.cells[0].pinned_cpus) self.assertEqual(new_cell.cells[0].cpu_usage, 1) class CPUSReservedCellTestCase(test.NoDBTestCase): def _test_reserved(self, reserved): host_cell = objects.NUMACell(id=0, cpuset=set([0, 1, 2]), memory=2048, memory_usage=0, siblings=[], mempages=[], pinned_cpus=set([])) inst_cell = objects.InstanceNUMACell(cpuset=set([0, 1]), memory=2048) return hw._numa_fit_instance_cell_with_pinning( host_cell, inst_cell, reserved) def test_no_reserved(self): inst_cell = self._test_reserved(reserved=0) self.assertEqual(set([0, 1]), inst_cell.cpuset) self.assertIsNone(inst_cell.cpuset_reserved) def test_reserved(self): inst_cell = self._test_reserved(reserved=1) self.assertEqual(set([0, 1]), inst_cell.cpuset) self.assertEqual(set([2]), inst_cell.cpuset_reserved) def test_reserved_exceeded(self): inst_cell = self._test_reserved(reserved=2) self.assertIsNone(inst_cell) class CPURealtimeTestCase(test.NoDBTestCase): def test_success_flavor(self): flavor = objects.Flavor(vcpus=3, memory_mb=2048, extra_specs={"hw:cpu_realtime_mask": "^1"}) image = objects.ImageMeta.from_dict({}) rt = hw.vcpus_realtime_topology(flavor, image) self.assertEqual(set([0, 2]), rt) def test_success_image(self): flavor = objects.Flavor(vcpus=3, memory_mb=2048, extra_specs={"hw:cpu_realtime_mask": "^1"}) image = objects.ImageMeta.from_dict( {"properties": {"hw_cpu_realtime_mask": "^0-1"}}) rt = hw.vcpus_realtime_topology(flavor, image) self.assertEqual(set([2]), rt) def test_no_mask_configured(self): flavor = objects.Flavor(vcpus=3, memory_mb=2048, extra_specs={}) image = objects.ImageMeta.from_dict({"properties": {}}) self.assertRaises( exception.RealtimeMaskNotFoundOrInvalid, hw.vcpus_realtime_topology, flavor, image) def test_mask_badly_configured(self): flavor = objects.Flavor(vcpus=3, 
memory_mb=2048, extra_specs={"hw:cpu_realtime_mask": "^0-2"}) image = objects.ImageMeta.from_dict({"properties": {}}) self.assertRaises( exception.RealtimeMaskNotFoundOrInvalid, hw.vcpus_realtime_topology, flavor, image) class EmulatorThreadsTestCase(test.NoDBTestCase): @staticmethod def _host_topology(): return objects.NUMATopology( cells=[objects.NUMACell(id=0, cpuset=set([0, 1]), memory=2048, cpu_usage=0, memory_usage=0, siblings=[], mempages=[], pinned_cpus=set([])), objects.NUMACell(id=1, cpuset=set([2, 3]), memory=2048, cpu_usage=0, memory_usage=0, siblings=[], mempages=[], pinned_cpus=set([]))]) def test_single_node_not_defined(self): host_topo = self._host_topology() inst_topo = objects.InstanceNUMATopology( cells=[objects.InstanceNUMACell( id=0, cpuset=set([0]), memory=2048, cpu_policy=fields.CPUAllocationPolicy.DEDICATED)]) inst_topo = hw.numa_fit_instance_to_host(host_topo, inst_topo) self.assertEqual({0: 0}, inst_topo.cells[0].cpu_pinning) self.assertIsNone(inst_topo.cells[0].cpuset_reserved) def test_single_node_shared(self): host_topo = self._host_topology() inst_topo = objects.InstanceNUMATopology( emulator_threads_policy=( fields.CPUEmulatorThreadsPolicy.SHARE), cells=[objects.InstanceNUMACell( id=0, cpuset=set([0]), memory=2048, cpu_policy=fields.CPUAllocationPolicy.DEDICATED)]) inst_topo = hw.numa_fit_instance_to_host(host_topo, inst_topo) self.assertEqual({0: 0}, inst_topo.cells[0].cpu_pinning) self.assertIsNone(inst_topo.cells[0].cpuset_reserved) def test_single_node_isolate(self): host_topo = self._host_topology() inst_topo = objects.InstanceNUMATopology( emulator_threads_policy=( fields.CPUEmulatorThreadsPolicy.ISOLATE), cells=[objects.InstanceNUMACell( id=0, cpuset=set([0]), memory=2048, cpu_policy=fields.CPUAllocationPolicy.DEDICATED)]) inst_topo = hw.numa_fit_instance_to_host(host_topo, inst_topo) self.assertEqual({0: 0}, inst_topo.cells[0].cpu_pinning) self.assertEqual(set([1]), inst_topo.cells[0].cpuset_reserved) def test_single_node_isolate_exceeded(self): host_topo = self._host_topology() inst_topo = objects.InstanceNUMATopology( emulator_threads_policy=( fields.CPUEmulatorThreadsPolicy.ISOLATE), cells=[objects.InstanceNUMACell( id=0, cpuset=set([0, 1, 2, 4]), memory=2048, cpu_policy=fields.CPUAllocationPolicy.DEDICATED)]) inst_topo = hw.numa_fit_instance_to_host(host_topo, inst_topo) self.assertIsNone(inst_topo) def test_multi_nodes_isolate(self): host_topo = self._host_topology() inst_topo = objects.InstanceNUMATopology( emulator_threads_policy=( fields.CPUEmulatorThreadsPolicy.ISOLATE), cells=[objects.InstanceNUMACell( id=0, cpuset=set([0]), memory=2048, cpu_policy=fields.CPUAllocationPolicy.DEDICATED), objects.InstanceNUMACell( id=1, cpuset=set([1]), memory=2048, cpu_policy=fields.CPUAllocationPolicy.DEDICATED)]) inst_topo = hw.numa_fit_instance_to_host(host_topo, inst_topo) self.assertEqual({0: 0}, inst_topo.cells[0].cpu_pinning) self.assertEqual(set([1]), inst_topo.cells[0].cpuset_reserved) self.assertEqual({1: 2}, inst_topo.cells[1].cpu_pinning) self.assertIsNone(inst_topo.cells[1].cpuset_reserved) def test_multi_nodes_isolate_exceeded(self): host_topo = self._host_topology() inst_topo = objects.InstanceNUMATopology( emulator_threads_policy=( fields.CPUEmulatorThreadsPolicy.ISOLATE), cells=[objects.InstanceNUMACell( id=0, cpuset=set([0, 1]), memory=2048, cpu_policy=fields.CPUAllocationPolicy.DEDICATED), objects.InstanceNUMACell( id=1, cpuset=set([2]), memory=2048, cpu_policy=fields.CPUAllocationPolicy.DEDICATED)]) inst_topo = 
hw.numa_fit_instance_to_host(host_topo, inst_topo) # The guest NUMA node 0 is requesting 2pCPUs + 1 additional # pCPU for emulator threads, the host can't handle the # request. self.assertIsNone(inst_topo) def test_multi_nodes_isolate_full_usage(self): host_topo = self._host_topology() inst_topo = objects.InstanceNUMATopology( emulator_threads_policy=( fields.CPUEmulatorThreadsPolicy.ISOLATE), cells=[objects.InstanceNUMACell( id=0, cpuset=set([0]), memory=2048, cpu_policy=fields.CPUAllocationPolicy.DEDICATED), objects.InstanceNUMACell( id=1, cpuset=set([1, 2]), memory=2048, cpu_policy=fields.CPUAllocationPolicy.DEDICATED)]) inst_topo = hw.numa_fit_instance_to_host(host_topo, inst_topo) self.assertEqual({0: 0}, inst_topo.cells[0].cpu_pinning) self.assertEqual(set([1]), inst_topo.cells[0].cpuset_reserved) self.assertEqual({1: 2, 2: 3}, inst_topo.cells[1].cpu_pinning) self.assertIsNone(inst_topo.cells[1].cpuset_reserved) def test_isolate_usage(self): host_topo = self._host_topology() inst_topo = objects.InstanceNUMATopology( emulator_threads_policy=( fields.CPUEmulatorThreadsPolicy.ISOLATE), cells=[objects.InstanceNUMACell( id=0, cpuset=set([0]), memory=2048, cpu_policy=fields.CPUAllocationPolicy.DEDICATED, cpu_pinning={0: 0}, cpuset_reserved=set([1]))]) host_topo = hw.numa_usage_from_instances( host_topo, [inst_topo]) self.assertEqual(2, host_topo.cells[0].cpu_usage) self.assertEqual(set([0, 1]), host_topo.cells[0].pinned_cpus) self.assertEqual(0, host_topo.cells[1].cpu_usage) self.assertEqual(set([]), host_topo.cells[1].pinned_cpus) def test_isolate_full_usage(self): host_topo = self._host_topology() inst_topo1 = objects.InstanceNUMATopology( emulator_threads_policy=( fields.CPUEmulatorThreadsPolicy.ISOLATE), cells=[objects.InstanceNUMACell( id=0, cpuset=set([0]), memory=2048, cpu_policy=fields.CPUAllocationPolicy.DEDICATED, cpu_pinning={0: 0}, cpuset_reserved=set([1]))]) inst_topo2 = objects.InstanceNUMATopology( emulator_threads_policy=( fields.CPUEmulatorThreadsPolicy.ISOLATE), cells=[objects.InstanceNUMACell( id=1, cpuset=set([0]), memory=2048, cpu_policy=fields.CPUAllocationPolicy.DEDICATED, cpu_pinning={0: 2}, cpuset_reserved=set([3]))]) host_topo = hw.numa_usage_from_instances( host_topo, [inst_topo1, inst_topo2]) self.assertEqual(2, host_topo.cells[0].cpu_usage) self.assertEqual(set([0, 1]), host_topo.cells[0].pinned_cpus) nova-17.0.1/nova/tests/unit/virt/test_firewall.py0000666000175000017500000006247013250073126022065 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
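# NOTE(editor): the following helper is an illustrative sketch added for clarity and is not part of the original nova test suite. Every test in this module follows the same pattern: patch the module-level nova.network.linux_net.iptables_manager with a MagicMock before (re-)constructing IptablesFirewallDriver, then assert on the recorded chain/rule calls instead of touching real iptables. A standalone, stripped-down version of that pattern (the assertions mirror test_constructor below):
def _sketch_iptables_mock_pattern():
    import mock
    from nova.virt import firewall

    with mock.patch('nova.network.linux_net.iptables_manager') as iptm:
        firewall.IptablesFirewallDriver()
        # Construction is expected to add the security-group fallback
        # chain with a default DROP rule on both address families.
        expected = [
            mock.call.add_chain('sg-fallback'),
            mock.call.add_rule('sg-fallback', '-j DROP'),
        ]
        iptm.ipv4.__getitem__.return_value.assert_has_calls(expected)
        iptm.ipv6.__getitem__.return_value.assert_has_calls(expected)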
import mock from nova import exception from nova import objects from nova import test from nova.virt import firewall _IPT_DRIVER_CLS = firewall.IptablesFirewallDriver _FN_INSTANCE_RULES = 'instance_rules' _FN_ADD_FILTERS = 'add_filters_for_instance' _FN_DO_BASIC_RULES = '_do_basic_rules' _FN_DO_DHCP_RULES = '_do_dhcp_rules' class TestIptablesFirewallDriver(test.NoDBTestCase): def setUp(self): super(TestIptablesFirewallDriver, self).setUp() self.driver = _IPT_DRIVER_CLS() @mock.patch('nova.network.linux_net.iptables_manager') def test_constructor(self, iptm_mock): self.driver.__init__() self.assertEqual({}, self.driver.instance_info) self.assertFalse(self.driver.dhcp_create) self.assertFalse(self.driver.dhcp_created) self.assertEqual(iptm_mock, self.driver.iptables) # NOTE(jaypipes): Here we are not testing the IptablesManager # constructor. We are only testing the calls made against the # IptablesManager singleton during initialization of the # IptablesFirewallDriver. expected = [ mock.call.add_chain('sg-fallback'), mock.call.add_rule('sg-fallback', '-j DROP'), ] iptm_mock.ipv4.__getitem__.return_value \ .assert_has_calls(expected) iptm_mock.ipv6.__getitem__.return_value \ .assert_has_calls(expected) def test_filter_defer_apply_on(self): with mock.patch.object(self.driver.iptables, 'defer_apply_on') as dao_mock: self.driver.filter_defer_apply_on() dao_mock.assert_called_once_with() def test_filter_defer_apply_off(self): with mock.patch.object(self.driver.iptables, 'defer_apply_off') as dao_mock: self.driver.filter_defer_apply_off() dao_mock.assert_called_once_with() @mock.patch.object(_IPT_DRIVER_CLS, 'remove_filters_for_instance') def test_unfilter_instance_valid(self, rfii_mock): with mock.patch.object(self.driver, 'instance_info') as ii_mock, \ mock.patch.object(self.driver, 'iptables') as ipt_mock: fake_instance = objects.Instance(id=123) ii_mock.pop.return_value = True self.driver.unfilter_instance(fake_instance, mock.sentinel.net_info) ii_mock.pop.assert_called_once_with(fake_instance.id, None) rfii_mock.assert_called_once_with(fake_instance) ipt_mock.apply.assert_called_once_with() @mock.patch.object(_IPT_DRIVER_CLS, 'remove_filters_for_instance') def test_unfilter_instance_invalid(self, rfii_mock): with mock.patch.object(self.driver, 'instance_info') as ii_mock, \ mock.patch.object(self.driver, 'iptables') as ipt_mock: fake_instance = objects.Instance(id=123) ii_mock.pop.return_value = False self.driver.unfilter_instance(fake_instance, mock.sentinel.net_info) ii_mock.pop.assert_called_once_with(fake_instance.id, None) self.assertFalse(rfii_mock.called) self.assertFalse(ipt_mock.apply.called) def setup_instance_filter(self, i_rules_mock): # NOTE(chenli) The IptablesFirewallDriver init method calls the # iptables manager, so we must reset here. 
self.driver.iptables = mock.MagicMock() i_mock = mock.MagicMock(spec=dict) i_mock.id = 'fake_id' i_rules_mock.return_value = (mock.sentinel.v4_rules, mock.sentinel.v6_rules) return i_mock @mock.patch.object(_IPT_DRIVER_CLS, _FN_ADD_FILTERS) @mock.patch.object(_IPT_DRIVER_CLS, _FN_INSTANCE_RULES) def test_prepare_instance_filter(self, i_rules_mock, add_filters_mock): i_mock = self.setup_instance_filter(i_rules_mock) self.driver.prepare_instance_filter(i_mock, mock.sentinel.net_info) i_rules_mock.assert_called_once_with(i_mock, mock.sentinel.net_info) add_filters_mock.assert_called_once_with( i_mock, mock.sentinel.net_info, mock.sentinel.v4_rules, mock.sentinel.v6_rules) self.driver.iptables.apply.assert_called_once_with() # When DHCP created flag is False, make sure we don't set any filters gi_mock = self.driver.iptables.ipv4.__getitem__.return_value self.assertFalse(gi_mock.called) @mock.patch.object(_IPT_DRIVER_CLS, _FN_ADD_FILTERS) @mock.patch.object(_IPT_DRIVER_CLS, _FN_INSTANCE_RULES) def test_prepare_instance_filter_with_dhcp_create(self, i_rules_mock, add_filters_mock): i_mock = self.setup_instance_filter(i_rules_mock) # add rules when DHCP create is set self.driver.dhcp_create = True self.driver.prepare_instance_filter(i_mock, mock.sentinel.net_info) expected = [ mock.call.add_rule( 'INPUT', '-s 0.0.0.0/32 -d 255.255.255.255/32 ' '-p udp -m udp --sport 68 --dport 67 -j ACCEPT'), mock.call.add_rule( 'FORWARD', '-s 0.0.0.0/32 -d 255.255.255.255/32 ' '-p udp -m udp --sport 68 --dport 67 -j ACCEPT') ] self.driver.iptables.ipv4.__getitem__.return_value.assert_has_calls( expected) @mock.patch.object(_IPT_DRIVER_CLS, _FN_ADD_FILTERS) @mock.patch.object(_IPT_DRIVER_CLS, _FN_INSTANCE_RULES) def test_prepare_instance_filter_recreate(self, i_rules_mock, add_filters_mock): i_mock = self.setup_instance_filter(i_rules_mock) # add rules when DHCP create is set and create the rule self.driver.dhcp_create = True self.driver.prepare_instance_filter(i_mock, mock.sentinel.net_info) # Check we don't recreate the DHCP rules if we've already # done so (there is a dhcp_created flag on the driver that is # set when prepare_instance_filters() first creates them) self.driver.iptables.ipv4.__getitem__.reset_mock() self.driver.prepare_instance_filter(i_mock, mock.sentinel.net_info) gi_mock = self.driver.iptables.ipv4.__getitem__.return_value self.assertFalse(gi_mock.called) def test_create_filter(self): filter = self.driver._create_filter(['myip', 'otherip'], 'mychain') self.assertEqual(filter, ['-d myip -j $mychain', '-d otherip -j $mychain']) def test_get_subnets(self): subnet1 = {'version': '1', 'foo': 1} subnet2 = {'version': '2', 'foo': 2} subnet3 = {'version': '1', 'foo': 3} network_info = [{'network': {'subnets': [subnet1, subnet2]}}, {'network': {'subnets': [subnet3]}}] subnets = self.driver._get_subnets(network_info, '1') self.assertEqual(subnets, [subnet1, subnet3]) def get_subnets_mock(self, network_info, version): if version == 4: return [{'ips': [{'address': '1.1.1.1'}, {'address': '2.2.2.2'}]}] if version == 6: return [{'ips': [{'address': '3.3.3.3'}]}] def create_filter_mock(self, ips, chain_name): if ips == ['1.1.1.1', '2.2.2.2']: return 'rule1' if ips == ['3.3.3.3']: return 'rule2' def test_filters_for_instance(self): self.flags(use_ipv6=True) chain_name = 'mychain' network_info = {'foo': 'bar'} self.driver._get_subnets = mock.Mock(side_effect=self.get_subnets_mock) self.driver._create_filter = \ mock.Mock(side_effect=self.create_filter_mock) ipv4_rules, ipv6_rules = \ 
self.driver._filters_for_instance(chain_name, network_info) self.assertEqual(self.driver._get_subnets.mock_calls, [mock.call(network_info, 4), mock.call(network_info, 6)]) self.assertEqual(self.driver._create_filter.mock_calls, [mock.call(['1.1.1.1', '2.2.2.2'], chain_name), mock.call(['3.3.3.3'], chain_name)]) self.assertEqual(ipv4_rules, 'rule1') self.assertEqual(ipv6_rules, 'rule2') def test_add_filters(self): self.flags(use_ipv6=True) self.driver.iptables.ipv4['filter'].add_rule = mock.Mock() self.driver.iptables.ipv6['filter'].add_rule = mock.Mock() chain_name = 'mychain' ipv4_rules = ['rule1', 'rule2'] ipv6_rules = ['rule3', 'rule4'] self.driver._add_filters(chain_name, ipv4_rules, ipv6_rules) self.assertEqual(self.driver.iptables.ipv4['filter'].add_rule. mock_calls, [mock.call(chain_name, 'rule1'), mock.call(chain_name, 'rule2')]) self.assertEqual(self.driver.iptables.ipv6['filter'].add_rule. mock_calls, [mock.call(chain_name, 'rule3'), mock.call(chain_name, 'rule4')]) @mock.patch.object(_IPT_DRIVER_CLS, '_instance_chain_name', return_value=mock.sentinel.mychain) @mock.patch.object(_IPT_DRIVER_CLS, '_filters_for_instance', return_value=[mock.sentinel.ipv4_rules, mock.sentinel.ipv6_rules]) @mock.patch.object(_IPT_DRIVER_CLS, '_add_filters') def test_add_filters_for_instance(self, add_filters_mock, ffi_mock, icn_mock): self.flags(use_ipv6=True) with mock.patch.object(self.driver.iptables.ipv6['filter'], 'add_chain') as ipv6_add_chain_mock, \ mock.patch.object(self.driver.iptables.ipv4['filter'], 'add_chain') as ipv4_add_chain_mock: self.driver.add_filters_for_instance( mock.sentinel.instance, mock.sentinel.network_info, mock.sentinel.inst_ipv4_rules, mock.sentinel.inst_ipv6_rules) ipv4_add_chain_mock.assert_called_with(mock.sentinel.mychain) ipv6_add_chain_mock.assert_called_with(mock.sentinel.mychain) icn_mock.assert_called_with(mock.sentinel.instance) ffi_mock.assert_called_with(mock.sentinel.mychain, mock.sentinel.network_info) self.assertEqual([mock.call('local', mock.sentinel.ipv4_rules, mock.sentinel.ipv6_rules), mock.call(mock.sentinel.mychain, mock.sentinel.inst_ipv4_rules, mock.sentinel.inst_ipv6_rules)], add_filters_mock.mock_calls) def test_remove_filters_for_instance(self): self.flags(use_ipv6=True) self.driver._instance_chain_name = \ mock.Mock(return_value='mychainname') self.driver.iptables.ipv4['filter'].remove_chain = mock.Mock() self.driver.iptables.ipv6['filter'].remove_chain = mock.Mock() self.driver.remove_filters_for_instance('myinstance') self.driver._instance_chain_name.assert_called_with('myinstance') self.driver.iptables.ipv4['filter'].remove_chain.assert_called_with( 'mychainname') self.driver.iptables.ipv6['filter'].remove_chain.assert_called_with( 'mychainname') def test_instance_chain_name(self): instance = mock.Mock() instance.id = "myinstanceid" instance_chain_name = self.driver._instance_chain_name(instance) self.assertEqual(instance_chain_name, 'inst-myinstanceid') def test_do_basic_rules(self): ipv4_rules = ['rule1'] ipv6_rules = ['rule2'] self.driver._do_basic_rules(ipv4_rules, ipv6_rules, mock.sentinel.net_info) self.assertEqual(ipv4_rules, ['rule1', '-m state --state INVALID -j DROP', '-m state --state ESTABLISHED,RELATED -j ACCEPT']) self.assertEqual(ipv6_rules, ['rule2', '-m state --state INVALID -j DROP', '-m state --state ESTABLISHED,RELATED -j ACCEPT']) def test_do_dhcp_rules(self): subnet1 = mock.Mock() subnet1.get_meta = mock.Mock(return_value='mydhcp') subnet2 = mock.Mock() subnet2.get_meta = mock.Mock(return_value=None) 
self.driver._get_subnets = mock.Mock(return_value=[subnet1, subnet2]) ipv4_rules = ['rule1'] self.driver._do_dhcp_rules(ipv4_rules, mock.sentinel.net_info) self.assertEqual(ipv4_rules, ['rule1', '-s mydhcp -p udp --sport 67 --dport 68 -j ACCEPT']) def test_do_project_network_rules(self): self.flags(use_ipv6=True) subnet1 = {'cidr': 'mycidr1'} subnet2 = {'cidr': 'mycidr2'} ipv4_rules = ['rule1'] ipv6_rules = ['rule2'] self.driver._get_subnets = mock.Mock(return_value=[subnet1, subnet2]) self.driver._do_project_network_rules(ipv4_rules, ipv6_rules, mock.sentinel.net_info) self.assertEqual(ipv4_rules, ['rule1', '-s mycidr1 -j ACCEPT', '-s mycidr2 -j ACCEPT']) self.assertEqual(ipv6_rules, ['rule2', '-s mycidr1 -j ACCEPT', '-s mycidr2 -j ACCEPT']) def test_do_ra_rules(self): subnet1 = {'gateway': {'address': 'myaddress1'}} subnet2 = {'gateway': {'address': 'myaddress2'}} self.driver._get_subnets = \ mock.Mock(return_value=[subnet1, subnet2]) ipv6_rules = ['rule1'] self.driver._do_ra_rules(ipv6_rules, mock.sentinel.net_info) self.assertEqual(ipv6_rules, ['rule1', '-s myaddress1/128 -p icmpv6 -j ACCEPT', '-s myaddress2/128 -p icmpv6 -j ACCEPT']) def test_build_icmp_rule(self): rule = mock.Mock() # invalid icmp type rule.from_port = -1 icmp_rule = self.driver._build_icmp_rule(rule, 4) self.assertEqual(icmp_rule, []) # version 4 invalid icmp code rule.from_port = 123 rule.to_port = -1 icmp_rule = self.driver._build_icmp_rule(rule, 4) self.assertEqual(icmp_rule, ['-m', 'icmp', '--icmp-type', '123']) # version 6 valid icmp code rule.from_port = 123 rule.to_port = 456 icmp_rule = self.driver._build_icmp_rule(rule, 6) self.assertEqual(icmp_rule, ['-m', 'icmp6', '--icmpv6-type', '123/456']) def test_build_tcp_udp_rule(self): rule = mock.Mock() # equal from and to port rule.from_port = 123 rule.to_port = 123 tu_rule = self.driver._build_tcp_udp_rule(rule, 42) self.assertEqual(tu_rule, ['--dport', '123']) # different from and to port rule.to_port = 456 tu_rule = self.driver._build_tcp_udp_rule(rule, 42) self.assertEqual(tu_rule, ['-m', 'multiport', '--dports', '123:456']) def setup_instance_rules(self, ins_obj_cls_mock): """Create necessary mock variables for instance_rules. The i_mock and ni_mock represent the instance_rules parameters instance and network_info. The i_obj_mock represents the return value for nova.objects.Instance. 
""" i_mock = mock.MagicMock(spec=dict) ni_mock = mock.MagicMock(spec=dict) i_obj_mock = mock.MagicMock() ins_obj_cls_mock._from_db_object.return_value = i_obj_mock driver = firewall.IptablesFirewallDriver() return i_mock, ni_mock, i_obj_mock, driver @mock.patch('nova.objects.SecurityGroupRuleList') @mock.patch.object(_IPT_DRIVER_CLS, _FN_DO_DHCP_RULES) @mock.patch.object(_IPT_DRIVER_CLS, _FN_DO_BASIC_RULES) @mock.patch('nova.objects.Instance') @mock.patch('nova.context.get_admin_context', return_value=mock.sentinel.ctx) @mock.patch('nova.network.linux_net.iptables_manager') def test_instance_rules_no_secgroups(self, _iptm_mock, ctx_mock, ins_obj_cls_mock, _do_basic_mock, _do_dhcp_mock, sec_grp_list_mock): i_mock, ni_mock, i_obj_mock, driver = self.setup_instance_rules( ins_obj_cls_mock) # Simple unit test that verifies that the fallback jump # is the only rule added to the returned list of rules if # no secgroups are found (we ignore the basic and DHCP # rule additions here) sec_grp_list_mock.get_by_instance.return_value = [] v4_rules, v6_rules = driver.instance_rules(i_mock, ni_mock) ins_obj_cls_mock._from_db_object.assert_called_once_with( mock.sentinel.ctx, mock.ANY, i_mock, mock.ANY) sec_grp_list_mock.get_by_instance.assert_called_once_with( mock.sentinel.ctx, i_obj_mock) expected = ['-j $sg-fallback'] self.assertEqual(expected, v4_rules) self.assertEqual(expected, v6_rules) @mock.patch('nova.objects.SecurityGroupRuleList') @mock.patch('nova.objects.SecurityGroupList') @mock.patch.object(_IPT_DRIVER_CLS, _FN_DO_DHCP_RULES) @mock.patch.object(_IPT_DRIVER_CLS, _FN_DO_BASIC_RULES) @mock.patch('nova.objects.Instance') @mock.patch('nova.context.get_admin_context', return_value=mock.sentinel.ctx) @mock.patch('nova.network.linux_net.iptables_manager') def test_instance_rules_cidr(self, _iptm_mock, ctx_mock, ins_obj_cls_mock, _do_basic_mock, _do_dhcp_mock, sec_grp_list_mock, sec_grp_rule_list_mock): i_mock, ni_mock, i_obj_mock, driver = self.setup_instance_rules( ins_obj_cls_mock) # Tests that sec group rules that contain a CIDR (i.e. the # rule does not contain a grantee group of instances) populates # the returned iptables rules with appropriate ingress and # egress filters. sec_grp_list_mock.get_by_instance.return_value = [ mock.sentinel.sec_grp ] sec_grp_rule_list_mock.get_by_security_group.return_value = [ { "cidr": "192.168.1.0/24", "protocol": "tcp", "to_port": "22", "from_port": "22" } ] v4_rules, v6_rules = driver.instance_rules(i_mock, ni_mock) expected = [ # '-j ACCEPT -p tcp --dport 22 -s 192.168.1.0/24', '-j $sg-fallback' ] self.assertEqual(expected, v4_rules) expected = ['-j $sg-fallback'] self.assertEqual(expected, v6_rules) def setup_grantee_group( self, ins_obj_cls_mock, sec_grp_list_mock, sec_grp_rule_list_mock, ins_list_mock): i_mock, ni_mock, i_obj_mock, driver = self.setup_instance_rules( ins_obj_cls_mock) # Tests that sec group rules that DO NOT contain a CIDR (i.e. the # rule contains a grantee group of instances) populates # the returned iptables rules with appropriate ingress and # egress filters after calling out to the network API for information # about the instances in the grantee group. 
sec_grp_list_mock.get_by_instance.return_value = [ mock.sentinel.sec_grp ] sec_grp_rule_list_mock.get_by_security_group.return_value = [ { "cidr": None, "grantee_group": mock.sentinel.gg, "protocol": "tcp", "to_port": "22", "from_port": "22" } ] i_obj_list_mock = mock.MagicMock() i_obj_list_mock.info_cache.return_value = { "deleted": False } ins_list_mock.get_by_security_group.return_value = [i_obj_list_mock] return i_mock, i_obj_mock, ni_mock, driver @mock.patch('nova.objects.Instance.get_network_info') @mock.patch('nova.objects.InstanceList') @mock.patch('nova.objects.SecurityGroupRuleList') @mock.patch('nova.objects.SecurityGroupList') @mock.patch.object(_IPT_DRIVER_CLS, _FN_DO_DHCP_RULES) @mock.patch.object(_IPT_DRIVER_CLS, _FN_DO_BASIC_RULES) @mock.patch('nova.objects.Instance') @mock.patch('nova.context.get_admin_context', return_value=mock.sentinel.ctx) @mock.patch('nova.network.linux_net.iptables_manager') def test_instance_rules_grantee_group(self, _iptm_mock, ctx_mock, ins_obj_cls_mock, _do_basic_mock, _do_dhcp_mock, sec_grp_list_mock, sec_grp_rule_list_mock, ins_list_mock, get_nw_info_mock): i_mock, i_obj_mock, ni_mock, driver = self.setup_grantee_group( ins_obj_cls_mock, sec_grp_list_mock, sec_grp_rule_list_mock, ins_list_mock) nw_info_mock = mock.MagicMock() nw_info_mock.fixed_ips.return_value = [ { "address": "10.0.1.4", "version": 4 } ] get_nw_info_mock.return_value = nw_info_mock v4_rules, v6_rules = driver.instance_rules(i_mock, ni_mock) expected = ['-j $sg-fallback'] self.assertEqual(expected, v4_rules) self.assertEqual(expected, v6_rules) @mock.patch('nova.objects.Instance.get_network_info') @mock.patch('nova.objects.InstanceList') @mock.patch('nova.objects.SecurityGroupRuleList') @mock.patch('nova.objects.SecurityGroupList') @mock.patch.object(_IPT_DRIVER_CLS, _FN_DO_DHCP_RULES) @mock.patch.object(_IPT_DRIVER_CLS, _FN_DO_BASIC_RULES) @mock.patch('nova.objects.Instance') @mock.patch('nova.context.get_admin_context', return_value=mock.sentinel.ctx) @mock.patch('nova.network.linux_net.iptables_manager') def test_instance_rules_grantee_group_instance_deleted( self, _iptm_mock, ctx_mock, ins_obj_cls_mock, _do_basic_mock, _do_dhcp_mock, sec_grp_list_mock, sec_grp_rule_list_mock, ins_list_mock, get_nw_info_mock): i_mock, i_obj_mock, ni_mock, driver = self.setup_grantee_group( ins_obj_cls_mock, sec_grp_list_mock, sec_grp_rule_list_mock, ins_list_mock) # Emulate one of the instances in the grantee group being deleted # in between when the spawn of this instance and when we set up # network for that instance, and ensure that we do not crash and # burn but just skip the deleted instance from the iptables filters get_nw_info_mock.side_effect = exception.InstanceNotFound( instance_id="_ignored") v4_rules, v6_rules = driver.instance_rules(i_mock, ni_mock) expected = ['-j $sg-fallback'] self.assertEqual(expected, v4_rules) self.assertEqual(expected, v6_rules) def test_refresh_security_group_rules(self): self.driver.do_refresh_security_group_rules = mock.Mock() self.driver.iptables.apply = mock.Mock() self.driver.refresh_security_group_rules('mysecgroup') self.driver.do_refresh_security_group_rules \ .assert_called_with('mysecgroup') self.driver.iptables.apply.assert_called() def test_refresh_instance_security_rules(self): self.driver.do_refresh_instance_rules = mock.Mock() self.driver.iptables.apply = mock.Mock() self.driver.refresh_instance_security_rules('myinstance') self.driver.do_refresh_instance_rules.assert_called_with('myinstance') self.driver.iptables.apply.assert_called() 
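# NOTE(editor): illustrative sketch, not an original test. The two do_refresh_* tests below exercise the driver's instance_info map, which stores instance_id -> (instance, network_info) pairs; a refresh recomputes the per-instance rules and hands them to _inner_do_refresh_rules. Roughly, in isolation:
def _sketch_refresh_flow(self):
    driver = _IPT_DRIVER_CLS()
    driver.iptables = mock.MagicMock()
    driver.instance_info = {'id-1': ('inst-1', 'netinfo-1')}
    driver.instance_rules = mock.Mock(return_value=('v4', 'v6'))
    driver._inner_do_refresh_rules = mock.Mock()
    driver.do_refresh_security_group_rules(mock.sentinel.secgroup)
    # Each entry in instance_info gets its rules recomputed and
    # re-applied via the inner refresh helper.
    driver._inner_do_refresh_rules.assert_called_once_with(
        'inst-1', 'netinfo-1', 'v4', 'v6')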
def test_do_refresh_security_group_rules(self): self.driver.instance_info = \ {'1': ['myinstance1', 'netinfo1'], '2': ['myinstance2', 'netinfo2']} self.driver.instance_rules = \ mock.Mock(return_value=['myipv4rules', 'myipv6rules']) self.driver._inner_do_refresh_rules = mock.Mock() self.driver.do_refresh_security_group_rules('mysecgroup') self.driver.instance_rules.assert_any_call('myinstance1', 'netinfo1') self.driver.instance_rules.assert_any_call('myinstance2', 'netinfo2') self.driver._inner_do_refresh_rules.assert_any_call( 'myinstance1', 'netinfo1', 'myipv4rules', 'myipv6rules') self.driver._inner_do_refresh_rules.assert_any_call( 'myinstance2', 'netinfo2', 'myipv4rules', 'myipv6rules') def test_do_refresh_instance_rules(self): instance = mock.Mock() instance.id = 'myid' self.driver.instance_info = {instance.id: ['myinstance', 'mynetinfo']} self.driver.instance_rules = \ mock.Mock(return_value=['myipv4rules', 'myipv6rules']) self.driver._inner_do_refresh_rules = mock.Mock() self.driver.do_refresh_instance_rules(instance) self.driver.instance_rules.assert_called_with(instance, 'mynetinfo') self.driver._inner_do_refresh_rules.assert_called_with( instance, 'mynetinfo', 'myipv4rules', 'myipv6rules') nova-17.0.1/nova/tests/unit/virt/test_configdrive.py0000666000175000017500000000373613250073126022557 0ustar zuulzuul00000000000000# Copyright 2014 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova import objects from nova import test from nova.virt import configdrive class ConfigDriveTestCase(test.NoDBTestCase): def test_instance_force(self): self.flags(force_config_drive=False) instance = objects.Instance( config_drive="yes", system_metadata={ "image_img_config_drive": "mandatory", } ) self.assertTrue(configdrive.required_by(instance)) def test_image_meta_force(self): self.flags(force_config_drive=False) instance = objects.Instance( config_drive=None, system_metadata={ "image_img_config_drive": "mandatory", } ) self.assertTrue(configdrive.required_by(instance)) def test_config_flag_force(self): self.flags(force_config_drive=True) instance = objects.Instance( config_drive=None, system_metadata={ "image_img_config_drive": "optional", } ) self.assertTrue(configdrive.required_by(instance)) def test_no_config_drive(self): self.flags(force_config_drive=False) instance = objects.Instance( config_drive=None, system_metadata={ "image_img_config_drive": "optional", } ) self.assertFalse(configdrive.required_by(instance)) nova-17.0.1/nova/tests/unit/virt/__init__.py0000666000175000017500000000000013250073126020735 0ustar zuulzuul00000000000000nova-17.0.1/nova/tests/unit/virt/test_fake.py0000666000175000017500000000161313250073126021156 0ustar zuulzuul00000000000000# # Copyright (c) 2015 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova import test from nova.virt import driver from nova.virt import fake class FakeDriverTest(test.NoDBTestCase): def test_public_api_signatures(self): baseinst = driver.ComputeDriver(None) inst = fake.FakeDriver(fake.FakeVirtAPI(), True) self.assertPublicAPISignatures(baseinst, inst) nova-17.0.1/nova/tests/unit/virt/test_virt_drivers.py0000666000175000017500000011773313250073136023006 0ustar zuulzuul00000000000000# Copyright 2010 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import base64 from collections import deque import sys import traceback import fixtures import mock import netaddr import os_vif from oslo_log import log as logging from oslo_serialization import jsonutils from oslo_utils import importutils from oslo_utils import timeutils import six from nova.compute import manager from nova.console import type as ctype from nova import context from nova import exception from nova import objects from nova import test from nova.tests import fixtures as nova_fixtures from nova.tests.unit import fake_block_device from nova.tests.unit.image import fake as fake_image from nova.tests.unit import utils as test_utils from nova.tests.unit.virt.libvirt import fake_libvirt_utils from nova.virt import block_device as driver_block_device from nova.virt import event as virtevent from nova.virt import fake from nova.virt import hardware from nova.virt import libvirt from nova.virt.libvirt import imagebackend LOG = logging.getLogger(__name__) def catch_notimplementederror(f): """Decorator to simplify catching drivers raising NotImplementedError If a particular call makes a driver raise NotImplementedError, we log it so that we can extract this information afterwards as needed. 
""" def wrapped_func(self, *args, **kwargs): try: return f(self, *args, **kwargs) except NotImplementedError: frame = traceback.extract_tb(sys.exc_info()[2])[-1] LOG.error("%(driver)s does not implement %(method)s " "required for test %(test)s", {'driver': type(self.connection), 'method': frame[2], 'test': f.__name__}) wrapped_func.__name__ = f.__name__ wrapped_func.__doc__ = f.__doc__ return wrapped_func class _FakeDriverBackendTestCase(object): def _setup_fakelibvirt(self): # So that the _supports_direct_io does the test based # on the current working directory, instead of the # default instances_path which doesn't exist self.flags(instances_path=self.useFixture(fixtures.TempDir()).path) # Put fakelibvirt in place if 'libvirt' in sys.modules: self.saved_libvirt = sys.modules['libvirt'] else: self.saved_libvirt = None import nova.tests.unit.virt.libvirt.fake_imagebackend as \ fake_imagebackend import nova.tests.unit.virt.libvirt.fake_libvirt_utils as \ fake_libvirt_utils import nova.tests.unit.virt.libvirt.fakelibvirt as fakelibvirt import nova.tests.unit.virt.libvirt.fake_os_brick_connector as \ fake_os_brick_connector self.useFixture(fake_imagebackend.ImageBackendFixture()) self.useFixture(fakelibvirt.FakeLibvirtFixture()) self.useFixture(fixtures.MonkeyPatch( 'nova.virt.libvirt.driver.libvirt_utils', fake_libvirt_utils)) self.useFixture(fixtures.MonkeyPatch( 'nova.virt.libvirt.imagebackend.libvirt_utils', fake_libvirt_utils)) self.useFixture(fixtures.MonkeyPatch( 'nova.virt.libvirt.driver.connector', fake_os_brick_connector)) self.useFixture(fixtures.MonkeyPatch( 'nova.virt.libvirt.host.Host._conn_event_thread', lambda *args: None)) self.flags(rescue_image_id="2", rescue_kernel_id="3", rescue_ramdisk_id=None, snapshots_directory='./', sysinfo_serial='none', group='libvirt') def fake_extend(image, size): pass def fake_migrate(_self, destination, migrate_uri=None, params=None, flags=0, domain_xml=None, bandwidth=0): pass def fake_make_drive(_self, _path): pass def fake_get_instance_disk_info_from_config( _self, guest_config, block_device_info): return [] def fake_delete_instance_files(_self, _instance): pass def fake_wait(): pass def fake_detach_device_with_retry(_self, get_device_conf_func, device, live, *args, **kwargs): # Still calling detach, but instead of returning function # that actually checks if device is gone from XML, just continue # because XML never gets updated in these tests _self.detach_device(get_device_conf_func(device), live=live) return fake_wait import nova.virt.libvirt.driver self.stubs.Set(nova.virt.libvirt.driver.LibvirtDriver, '_get_instance_disk_info_from_config', fake_get_instance_disk_info_from_config) self.stubs.Set(nova.virt.libvirt.driver.disk_api, 'extend', fake_extend) self.stubs.Set(nova.virt.libvirt.driver.LibvirtDriver, 'delete_instance_files', fake_delete_instance_files) self.stubs.Set(nova.virt.libvirt.guest.Guest, 'detach_device_with_retry', fake_detach_device_with_retry) self.stubs.Set(nova.virt.libvirt.guest.Guest, 'migrate', fake_migrate) # We can't actually make a config drive v2 because ensure_tree has # been faked out self.stubs.Set(nova.virt.configdrive.ConfigDriveBuilder, 'make_drive', fake_make_drive) def _teardown_fakelibvirt(self): # Restore libvirt if self.saved_libvirt: sys.modules['libvirt'] = self.saved_libvirt def setUp(self): super(_FakeDriverBackendTestCase, self).setUp() # TODO(sdague): it would be nice to do this in a way that only # the relevant backends where replaced for tests, though this # should not harm anything by doing 
it for all backends fake_image.stub_out_image_service(self) self._setup_fakelibvirt() def tearDown(self): fake_image.FakeImageService_reset() self._teardown_fakelibvirt() super(_FakeDriverBackendTestCase, self).tearDown() class VirtDriverLoaderTestCase(_FakeDriverBackendTestCase, test.TestCase): """Test that ComputeManager can successfully load both old style and new style drivers and end up with the correct final class. """ # if your driver supports being tested in a fake way, it can go here new_drivers = { 'fake.FakeDriver': 'FakeDriver', 'libvirt.LibvirtDriver': 'LibvirtDriver' } def test_load_new_drivers(self): for cls, driver in self.new_drivers.items(): self.flags(compute_driver=cls) # NOTE(sdague) the try block is to make it easier to debug a # failure by knowing which driver broke try: cm = manager.ComputeManager() except Exception as e: self.fail("Couldn't load driver %s - %s" % (cls, e)) self.assertEqual(cm.driver.__class__.__name__, driver, "Couldn't load driver %s" % cls) def test_fail_to_load_new_drivers(self): self.flags(compute_driver='nova.virt.amiga') def _fake_exit(error): raise test.TestingException() self.stubs.Set(sys, 'exit', _fake_exit) self.assertRaises(test.TestingException, manager.ComputeManager) class _VirtDriverTestCase(_FakeDriverBackendTestCase): def setUp(self): super(_VirtDriverTestCase, self).setUp() self.flags(instances_path=self.useFixture(fixtures.TempDir()).path) self.connection = importutils.import_object(self.driver_module, fake.FakeVirtAPI()) self.ctxt = test_utils.get_test_admin_context() self.image_service = fake_image.FakeImageService() # NOTE(dripton): resolve_driver_format does some file reading and # writing and chowning that complicate testing too much by requiring # using real directories with proper permissions. 
Just stub it out # here; we test it in test_imagebackend.py self.stubs.Set(imagebackend.Image, 'resolve_driver_format', imagebackend.Image._get_driver_format) os_vif.initialize() def _get_running_instance(self, obj=True): instance_ref = test_utils.get_test_instance(obj=obj) network_info = test_utils.get_test_network_info() network_info[0]['network']['subnets'][0]['meta']['dhcp_server'] = \ '1.1.1.1' image_meta = test_utils.get_test_image_object(None, instance_ref) self.connection.spawn(self.ctxt, instance_ref, image_meta, [], 'herp', {}, network_info=network_info) return instance_ref, network_info @catch_notimplementederror def test_init_host(self): self.connection.init_host('myhostname') @catch_notimplementederror def test_list_instances(self): self.connection.list_instances() @catch_notimplementederror def test_list_instance_uuids(self): self.connection.list_instance_uuids() @catch_notimplementederror def test_spawn(self): instance_ref, network_info = self._get_running_instance() domains = self.connection.list_instances() self.assertIn(instance_ref['name'], domains) num_instances = self.connection.get_num_instances() self.assertEqual(1, num_instances) @catch_notimplementederror def test_snapshot_not_running(self): instance_ref = test_utils.get_test_instance() img_ref = self.image_service.create(self.ctxt, {'name': 'snap-1'}) self.assertRaises(exception.InstanceNotRunning, self.connection.snapshot, self.ctxt, instance_ref, img_ref['id'], lambda *args, **kwargs: None) @catch_notimplementederror def test_snapshot_running(self): img_ref = self.image_service.create(self.ctxt, {'name': 'snap-1'}) instance_ref, network_info = self._get_running_instance() self.connection.snapshot(self.ctxt, instance_ref, img_ref['id'], lambda *args, **kwargs: None) @catch_notimplementederror def test_post_interrupted_snapshot_cleanup(self): instance_ref, network_info = self._get_running_instance() self.connection.post_interrupted_snapshot_cleanup(self.ctxt, instance_ref) @catch_notimplementederror def test_reboot(self): reboot_type = "SOFT" instance_ref, network_info = self._get_running_instance() self.connection.reboot(self.ctxt, instance_ref, network_info, reboot_type) @catch_notimplementederror def test_get_host_ip_addr(self): host_ip = self.connection.get_host_ip_addr() # Will raise an exception if it's not a valid IP at all ip = netaddr.IPAddress(host_ip) # For now, assume IPv4. 
self.assertEqual(ip.version, 4) @catch_notimplementederror def test_set_admin_password(self): instance, network_info = self._get_running_instance(obj=True) self.connection.set_admin_password(instance, 'p4ssw0rd') @catch_notimplementederror def test_inject_file(self): instance_ref, network_info = self._get_running_instance() self.connection.inject_file(instance_ref, base64.b64encode(b'/testfile'), base64.b64encode(b'testcontents')) @catch_notimplementederror def test_resume_state_on_host_boot(self): instance_ref, network_info = self._get_running_instance() self.connection.resume_state_on_host_boot(self.ctxt, instance_ref, network_info) @catch_notimplementederror def test_rescue(self): image_meta = objects.ImageMeta.from_dict({}) instance_ref, network_info = self._get_running_instance() self.connection.rescue(self.ctxt, instance_ref, network_info, image_meta, '') @catch_notimplementederror @mock.patch('os.unlink') def test_unrescue_unrescued_instance(self, mock_unlink): instance_ref, network_info = self._get_running_instance() self.connection.unrescue(instance_ref, network_info) @catch_notimplementederror @mock.patch('os.unlink') def test_unrescue_rescued_instance(self, mock_unlink): image_meta = objects.ImageMeta.from_dict({}) instance_ref, network_info = self._get_running_instance() self.connection.rescue(self.ctxt, instance_ref, network_info, image_meta, '') self.connection.unrescue(instance_ref, network_info) @catch_notimplementederror def test_poll_rebooting_instances(self): instances = [self._get_running_instance()] self.connection.poll_rebooting_instances(10, instances) @catch_notimplementederror def test_migrate_disk_and_power_off(self): instance_ref, network_info = self._get_running_instance() flavor_ref = test_utils.get_test_flavor() self.connection.migrate_disk_and_power_off( self.ctxt, instance_ref, 'dest_host', flavor_ref, network_info) @catch_notimplementederror def test_power_off(self): instance_ref, network_info = self._get_running_instance() self.connection.power_off(instance_ref) @catch_notimplementederror def test_power_on_running(self): instance_ref, network_info = self._get_running_instance() self.connection.power_on(self.ctxt, instance_ref, network_info, None) @catch_notimplementederror def test_power_on_powered_off(self): instance_ref, network_info = self._get_running_instance() self.connection.power_off(instance_ref) self.connection.power_on(self.ctxt, instance_ref, network_info, None) @catch_notimplementederror def test_trigger_crash_dump(self): instance_ref, network_info = self._get_running_instance() self.connection.trigger_crash_dump(instance_ref) @catch_notimplementederror def test_soft_delete(self): instance_ref, network_info = self._get_running_instance(obj=True) self.connection.soft_delete(instance_ref) @catch_notimplementederror def test_restore_running(self): instance_ref, network_info = self._get_running_instance() self.connection.restore(instance_ref) @catch_notimplementederror def test_restore_soft_deleted(self): instance_ref, network_info = self._get_running_instance() self.connection.soft_delete(instance_ref) self.connection.restore(instance_ref) @catch_notimplementederror def test_pause(self): instance_ref, network_info = self._get_running_instance() self.connection.pause(instance_ref) @catch_notimplementederror def test_unpause_unpaused_instance(self): instance_ref, network_info = self._get_running_instance() self.connection.unpause(instance_ref) @catch_notimplementederror def test_unpause_paused_instance(self): instance_ref, network_info = 
self._get_running_instance() self.connection.pause(instance_ref) self.connection.unpause(instance_ref) @catch_notimplementederror def test_suspend(self): instance_ref, network_info = self._get_running_instance() self.connection.suspend(self.ctxt, instance_ref) @catch_notimplementederror def test_resume_unsuspended_instance(self): instance_ref, network_info = self._get_running_instance() self.connection.resume(self.ctxt, instance_ref, network_info) @catch_notimplementederror def test_resume_suspended_instance(self): instance_ref, network_info = self._get_running_instance() self.connection.suspend(self.ctxt, instance_ref) self.connection.resume(self.ctxt, instance_ref, network_info) @catch_notimplementederror def test_destroy_instance_nonexistent(self): fake_instance = test_utils.get_test_instance(obj=True) network_info = test_utils.get_test_network_info() self.connection.destroy(self.ctxt, fake_instance, network_info) @catch_notimplementederror def test_destroy_instance(self): instance_ref, network_info = self._get_running_instance() self.assertIn(instance_ref['name'], self.connection.list_instances()) self.connection.destroy(self.ctxt, instance_ref, network_info) self.assertNotIn(instance_ref['name'], self.connection.list_instances()) @catch_notimplementederror def test_get_volume_connector(self): result = self.connection.get_volume_connector({'id': 'fake'}) self.assertIn('ip', result) self.assertIn('initiator', result) self.assertIn('host', result) return result @catch_notimplementederror def test_get_volume_connector_storage_ip(self): ip = 'my_ip' storage_ip = 'storage_ip' self.flags(my_block_storage_ip=storage_ip, my_ip=ip) result = self.connection.get_volume_connector({'id': 'fake'}) self.assertIn('ip', result) self.assertIn('initiator', result) self.assertIn('host', result) self.assertEqual(storage_ip, result['ip']) @catch_notimplementederror @mock.patch.object(libvirt.driver.LibvirtDriver, '_build_device_metadata', return_value=objects.InstanceDeviceMetadata()) def test_attach_detach_volume(self, _): instance_ref, network_info = self._get_running_instance() connection_info = { "driver_volume_type": "fake", "serial": "fake_serial", "data": {} } self.assertIsNone( self.connection.attach_volume(None, connection_info, instance_ref, '/dev/sda')) self.assertIsNone( self.connection.detach_volume(mock.sentinel.context, connection_info, instance_ref, '/dev/sda')) @catch_notimplementederror @mock.patch.object(libvirt.driver.LibvirtDriver, '_build_device_metadata', return_value=objects.InstanceDeviceMetadata()) def test_swap_volume(self, _): instance_ref, network_info = self._get_running_instance() self.assertIsNone( self.connection.attach_volume(None, {'driver_volume_type': 'fake', 'data': {}}, instance_ref, '/dev/sda')) self.assertIsNone( self.connection.swap_volume(None, {'driver_volume_type': 'fake', 'data': {}}, {'driver_volume_type': 'fake', 'data': {}}, instance_ref, '/dev/sda', 2)) @catch_notimplementederror @mock.patch.object(libvirt.driver.LibvirtDriver, '_build_device_metadata', return_value=objects.InstanceDeviceMetadata()) def test_attach_detach_different_power_states(self, _): instance_ref, network_info = self._get_running_instance() connection_info = { "driver_volume_type": "fake", "serial": "fake_serial", "data": {} } self.connection.power_off(instance_ref) self.connection.attach_volume(None, connection_info, instance_ref, '/dev/sda') bdm = { 'root_device_name': None, 'swap': None, 'ephemerals': [], 'block_device_mapping': driver_block_device.convert_volumes([ 
objects.BlockDeviceMapping( self.ctxt, **fake_block_device.FakeDbBlockDeviceDict( {'id': 1, 'instance_uuid': instance_ref['uuid'], 'device_name': '/dev/sda', 'source_type': 'volume', 'destination_type': 'volume', 'delete_on_termination': False, 'snapshot_id': None, 'volume_id': 'abcdedf', 'volume_size': None, 'no_device': None })), ]) } bdm['block_device_mapping'][0]['connection_info'] = ( {'driver_volume_type': 'fake', 'data': {}}) with mock.patch.object( driver_block_device.DriverVolumeBlockDevice, 'save'): self.connection.power_on( self.ctxt, instance_ref, network_info, bdm) self.connection.detach_volume(mock.sentinel.context, connection_info, instance_ref, '/dev/sda') @catch_notimplementederror def test_get_info(self): instance_ref, network_info = self._get_running_instance() info = self.connection.get_info(instance_ref) self.assertIsInstance(info, hardware.InstanceInfo) @catch_notimplementederror def test_get_info_for_unknown_instance(self): fake_instance = test_utils.get_test_instance(obj=True) self.assertRaises(exception.NotFound, self.connection.get_info, fake_instance) @catch_notimplementederror def test_get_diagnostics(self): instance_ref, network_info = self._get_running_instance(obj=True) self.connection.get_diagnostics(instance_ref) @catch_notimplementederror def test_get_instance_diagnostics(self): instance_ref, network_info = self._get_running_instance(obj=True) instance_ref['launched_at'] = timeutils.utcnow() self.connection.get_instance_diagnostics(instance_ref) @catch_notimplementederror def test_block_stats(self): instance_ref, network_info = self._get_running_instance() stats = self.connection.block_stats(instance_ref, 'someid') self.assertEqual(len(stats), 5) @catch_notimplementederror def test_get_console_output(self): fake_libvirt_utils.files['dummy.log'] = '' instance_ref, network_info = self._get_running_instance() console_output = self.connection.get_console_output(self.ctxt, instance_ref) self.assertIsInstance(console_output, six.string_types) @catch_notimplementederror def test_get_vnc_console(self): instance, network_info = self._get_running_instance(obj=True) vnc_console = self.connection.get_vnc_console(self.ctxt, instance) self.assertIsInstance(vnc_console, ctype.ConsoleVNC) @catch_notimplementederror def test_get_spice_console(self): instance_ref, network_info = self._get_running_instance() spice_console = self.connection.get_spice_console(self.ctxt, instance_ref) self.assertIsInstance(spice_console, ctype.ConsoleSpice) @catch_notimplementederror def test_get_rdp_console(self): instance_ref, network_info = self._get_running_instance() rdp_console = self.connection.get_rdp_console(self.ctxt, instance_ref) self.assertIsInstance(rdp_console, ctype.ConsoleRDP) @catch_notimplementederror def test_get_serial_console(self): instance_ref, network_info = self._get_running_instance() serial_console = self.connection.get_serial_console(self.ctxt, instance_ref) self.assertIsInstance(serial_console, ctype.ConsoleSerial) @catch_notimplementederror def test_get_mks_console(self): instance_ref, network_info = self._get_running_instance() mks_console = self.connection.get_mks_console(self.ctxt, instance_ref) self.assertIsInstance(mks_console, ctype.ConsoleMKS) @catch_notimplementederror def test_get_console_pool_info(self): instance_ref, network_info = self._get_running_instance() console_pool = self.connection.get_console_pool_info(instance_ref) self.assertIn('address', console_pool) self.assertIn('username', console_pool) self.assertIn('password', console_pool) 
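# NOTE(editor): illustrative sketch, not an original test. Each test in this class is wrapped in @catch_notimplementederror (defined near the top of this module), which converts a NotImplementedError raised by an optional driver API into a logged message instead of a test failure. Its effect in isolation:
def _sketch_catch_notimplementederror(self):
    @catch_notimplementederror
    def probe_optional_api(self):
        raise NotImplementedError()
    # Logs "<driver> does not implement probe_optional_api ..." and
    # returns None rather than letting the exception fail the run.
    probe_optional_api(self)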
@catch_notimplementederror def test_refresh_security_group_rules(self): # FIXME: Create security group and add the instance to it instance_ref, network_info = self._get_running_instance() self.connection.refresh_security_group_rules(1) @catch_notimplementederror def test_refresh_instance_security_rules(self): # FIXME: Create security group and add the instance to it instance_ref, network_info = self._get_running_instance() self.connection.refresh_instance_security_rules(instance_ref) @catch_notimplementederror def test_ensure_filtering_for_instance(self): instance = test_utils.get_test_instance(obj=True) network_info = test_utils.get_test_network_info() self.connection.ensure_filtering_rules_for_instance(instance, network_info) @catch_notimplementederror def test_unfilter_instance(self): instance_ref = test_utils.get_test_instance() network_info = test_utils.get_test_network_info() self.connection.unfilter_instance(instance_ref, network_info) def test_live_migration(self): instance_ref, network_info = self._get_running_instance() fake_context = context.RequestContext('fake', 'fake') migration = objects.Migration(context=fake_context, id=1) migrate_data = objects.LibvirtLiveMigrateData( migration=migration, bdms=[], block_migration=False, serial_listen_addr='127.0.0.1') self.connection.live_migration(self.ctxt, instance_ref, 'otherhost', lambda *a: None, lambda *a: None, migrate_data=migrate_data) @catch_notimplementederror def test_live_migration_force_complete(self): instance_ref, network_info = self._get_running_instance() self.connection.active_migrations[instance_ref.uuid] = deque() self.connection.live_migration_force_complete(instance_ref) @catch_notimplementederror def test_live_migration_abort(self): instance_ref, network_info = self._get_running_instance() self.connection.live_migration_abort(instance_ref) @catch_notimplementederror def _check_available_resource_fields(self, host_status): keys = ['vcpus', 'memory_mb', 'local_gb', 'vcpus_used', 'memory_mb_used', 'hypervisor_type', 'hypervisor_version', 'hypervisor_hostname', 'cpu_info', 'disk_available_least', 'supported_instances'] for key in keys: self.assertIn(key, host_status) self.assertIsInstance(host_status['hypervisor_version'], int) @catch_notimplementederror def test_get_available_resource(self): available_resource = self.connection.get_available_resource( 'myhostname') self._check_available_resource_fields(available_resource) @catch_notimplementederror def test_get_available_nodes(self): self.connection.get_available_nodes(False) @catch_notimplementederror def _check_host_cpu_status_fields(self, host_cpu_status): self.assertIn('kernel', host_cpu_status) self.assertIn('idle', host_cpu_status) self.assertIn('user', host_cpu_status) self.assertIn('iowait', host_cpu_status) self.assertIn('frequency', host_cpu_status) @catch_notimplementederror def test_get_host_cpu_stats(self): host_cpu_status = self.connection.get_host_cpu_stats() self._check_host_cpu_status_fields(host_cpu_status) @catch_notimplementederror def test_set_host_enabled(self): self.connection.set_host_enabled(True) @catch_notimplementederror def test_get_host_uptime(self): self.connection.get_host_uptime() @catch_notimplementederror def test_host_power_action_reboot(self): self.connection.host_power_action('reboot') @catch_notimplementederror def test_host_power_action_shutdown(self): self.connection.host_power_action('shutdown') @catch_notimplementederror def test_host_power_action_startup(self): self.connection.host_power_action('startup') 
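# NOTE(editor): illustrative sketch, not an original test. The event tests below (test_events and friends) rely on the virt driver's simple observer hook: register_event_listener() installs a callback and emit_event() dispatches LifecycleEvent objects to it synchronously. Minimal form:
def _sketch_event_dispatch(self):
    seen = []
    self.connection.register_event_listener(seen.append)
    started = virtevent.LifecycleEvent(
        'cef19ce0-0ca2-11df-855d-b19fbce37686',
        virtevent.EVENT_LIFECYCLE_STARTED)
    # The driver hands the event straight to the registered callback.
    self.connection.emit_event(started)
    self.assertEqual([started], seen)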
@catch_notimplementederror def test_add_to_aggregate(self): self.connection.add_to_aggregate(self.ctxt, 'aggregate', 'host') @catch_notimplementederror def test_remove_from_aggregate(self): self.connection.remove_from_aggregate(self.ctxt, 'aggregate', 'host') def test_events(self): got_events = [] def handler(event): got_events.append(event) self.connection.register_event_listener(handler) event1 = virtevent.LifecycleEvent( "cef19ce0-0ca2-11df-855d-b19fbce37686", virtevent.EVENT_LIFECYCLE_STARTED) event2 = virtevent.LifecycleEvent( "cef19ce0-0ca2-11df-855d-b19fbce37686", virtevent.EVENT_LIFECYCLE_PAUSED) self.connection.emit_event(event1) self.connection.emit_event(event2) want_events = [event1, event2] self.assertEqual(want_events, got_events) event3 = virtevent.LifecycleEvent( "cef19ce0-0ca2-11df-855d-b19fbce37686", virtevent.EVENT_LIFECYCLE_RESUMED) event4 = virtevent.LifecycleEvent( "cef19ce0-0ca2-11df-855d-b19fbce37686", virtevent.EVENT_LIFECYCLE_STOPPED) self.connection.emit_event(event3) self.connection.emit_event(event4) want_events = [event1, event2, event3, event4] self.assertEqual(want_events, got_events) def test_event_bad_object(self): # Passing in something which does not inherit # from virtevent.Event def handler(event): pass self.connection.register_event_listener(handler) badevent = { "foo": "bar" } self.assertRaises(ValueError, self.connection.emit_event, badevent) def test_event_bad_callback(self): # Check that if a callback raises an exception, # it does not propagate back out of the # 'emit_event' call def handler(event): raise Exception("Hit Me!") self.connection.register_event_listener(handler) event1 = virtevent.LifecycleEvent( "cef19ce0-0ca2-11df-855d-b19fbce37686", virtevent.EVENT_LIFECYCLE_STARTED) self.connection.emit_event(event1) def test_emit_unicode_event(self): """Tests that we do not fail on translated unicode events.""" started_event = virtevent.LifecycleEvent( "cef19ce0-0ca2-11df-855d-b19fbce37686", virtevent.EVENT_LIFECYCLE_STARTED) callback = mock.Mock() self.connection.register_event_listener(callback) with mock.patch.object(started_event, 'get_name', return_value=u'\xF0\x9F\x92\xA9'): self.connection.emit_event(started_event) callback.assert_called_once_with(started_event) def test_set_bootable(self): self.assertRaises(NotImplementedError, self.connection.set_bootable, 'instance', True) @catch_notimplementederror def test_get_instance_disk_info(self): # This should be implemented by any driver that supports live migrate. instance_ref, network_info = self._get_running_instance() self.connection.get_instance_disk_info(instance_ref, block_device_info={}) @catch_notimplementederror def test_get_device_name_for_instance(self): instance, _ = self._get_running_instance() self.connection.get_device_name_for_instance( instance, [], mock.Mock(spec=objects.BlockDeviceMapping)) def test_network_binding_host_id(self): # NOTE(jroll) self._get_running_instance calls spawn(), so we can't # use it to test this method. Make a simple object instead; we just # need instance.host. 
instance = objects.Instance(self.ctxt, host='somehost') self.assertEqual(instance.host, self.connection.network_binding_host_id(self.ctxt, instance)) class AbstractDriverTestCase(_VirtDriverTestCase, test.TestCase): def setUp(self): self.driver_module = "nova.virt.driver.ComputeDriver" super(AbstractDriverTestCase, self).setUp() def test_live_migration(self): self.skipTest('Live migration is not implemented in the base ' 'virt driver.') class FakeConnectionTestCase(_VirtDriverTestCase, test.TestCase): def setUp(self): self.driver_module = 'nova.virt.fake.FakeDriver' fake.set_nodes(['myhostname']) super(FakeConnectionTestCase, self).setUp() def _check_available_resource_fields(self, host_status): super(FakeConnectionTestCase, self)._check_available_resource_fields( host_status) hypervisor_type = host_status['hypervisor_type'] supported_instances = host_status['supported_instances'] try: # supported_instances could be JSON wrapped supported_instances = jsonutils.loads(supported_instances) except TypeError: pass self.assertTrue(any(hypervisor_type in x for x in supported_instances)) class LibvirtConnTestCase(_VirtDriverTestCase, test.TestCase): REQUIRES_LOCKING = True def setUp(self): # Point _VirtDriverTestCase at the right module self.driver_module = 'nova.virt.libvirt.LibvirtDriver' super(LibvirtConnTestCase, self).setUp() self.stubs.Set(self.connection, '_set_host_enabled', mock.MagicMock()) self.useFixture(fixtures.MonkeyPatch( 'nova.context.get_admin_context', self._fake_admin_context)) # This is needed for the live migration tests which spawn off the # operation for monitoring. self.useFixture(nova_fixtures.SpawnIsSynchronousFixture()) # When using CONF.use_neutron=True and destroying an instance os-vif # will try to execute some commands which hang tests so let's just # stub out the unplug call to os-vif since we don't care about it. self.stub_out('os_vif.unplug', lambda a, kw: None) def _fake_admin_context(self, *args, **kwargs): return self.ctxt def test_force_hard_reboot(self): self.flags(wait_soft_reboot_seconds=0, group='libvirt') self.test_reboot() def test_migrate_disk_and_power_off(self): # there is a lack of fake stuff to execute this method, so pass. 
self.skipTest("Test nothing, but this method" " needed to override superclass.") def test_internal_set_host_enabled(self): self.mox.UnsetStubs() service_mock = mock.MagicMock() # Previous status of the service: disabled: False service_mock.configure_mock(disabled_reason='None', disabled=False) with mock.patch.object(objects.Service, "get_by_compute_host", return_value=service_mock): self.connection._set_host_enabled(False, 'ERROR!') self.assertTrue(service_mock.disabled) self.assertEqual(service_mock.disabled_reason, 'AUTO: ERROR!') def test_set_host_enabled_when_auto_disabled(self): self.mox.UnsetStubs() service_mock = mock.MagicMock() # Previous status of the service: disabled: True, 'AUTO: ERROR' service_mock.configure_mock(disabled_reason='AUTO: ERROR', disabled=True) with mock.patch.object(objects.Service, "get_by_compute_host", return_value=service_mock): self.connection._set_host_enabled(True) self.assertFalse(service_mock.disabled) self.assertIsNone(service_mock.disabled_reason) def test_set_host_enabled_when_manually_disabled(self): self.mox.UnsetStubs() service_mock = mock.MagicMock() # Previous status of the service: disabled: True, 'Manually disabled' service_mock.configure_mock(disabled_reason='Manually disabled', disabled=True) with mock.patch.object(objects.Service, "get_by_compute_host", return_value=service_mock): self.connection._set_host_enabled(True) self.assertTrue(service_mock.disabled) self.assertEqual(service_mock.disabled_reason, 'Manually disabled') def test_set_host_enabled_dont_override_manually_disabled(self): self.mox.UnsetStubs() service_mock = mock.MagicMock() # Previous status of the service: disabled: True, 'Manually disabled' service_mock.configure_mock(disabled_reason='Manually disabled', disabled=True) with mock.patch.object(objects.Service, "get_by_compute_host", return_value=service_mock): self.connection._set_host_enabled(False, 'ERROR!') self.assertTrue(service_mock.disabled) self.assertEqual(service_mock.disabled_reason, 'Manually disabled') @catch_notimplementederror @mock.patch.object(libvirt.driver.LibvirtDriver, '_unplug_vifs') def test_unplug_vifs_with_destroy_vifs_false(self, unplug_vifs_mock): instance_ref, network_info = self._get_running_instance() self.connection.cleanup(self.ctxt, instance_ref, network_info, destroy_vifs=False) self.assertEqual(unplug_vifs_mock.call_count, 0) @catch_notimplementederror @mock.patch.object(libvirt.driver.LibvirtDriver, '_unplug_vifs') def test_unplug_vifs_with_destroy_vifs_true(self, unplug_vifs_mock): instance_ref, network_info = self._get_running_instance() self.connection.cleanup(self.ctxt, instance_ref, network_info, destroy_vifs=True) self.assertEqual(unplug_vifs_mock.call_count, 1) unplug_vifs_mock.assert_called_once_with(instance_ref, network_info, True) def test_get_device_name_for_instance(self): self.skipTest("Tested by the nova.tests.unit.virt.libvirt suite") @catch_notimplementederror @mock.patch('nova.utils.get_image_from_system_metadata') @mock.patch("nova.virt.libvirt.host.Host.has_min_version") def test_set_admin_password(self, ver, mock_image): self.flags(virt_type='kvm', group='libvirt') mock_image.return_value = {"properties": { "hw_qemu_guest_agent": "yes"}} instance, network_info = self._get_running_instance(obj=True) self.connection.set_admin_password(instance, 'p4ssw0rd') def test_get_volume_connector(self): for multipath in (True, False): self.flags(volume_use_multipath=multipath, group='libvirt') result = super(LibvirtConnTestCase, self).test_get_volume_connector() 
nova-17.0.1/nova/tests/unit/virt/test_virt.py

# Copyright 2011 Isaku Yamahata
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import io

import mock
import six

from nova import test
from nova.virt.disk import api as disk_api
from nova.virt import driver

PROC_MOUNTS_CONTENTS = """rootfs / rootfs rw 0 0
sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
udev /dev devtmpfs rw,relatime,size=1013160k,nr_inodes=253290,mode=755 0 0
devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620 0 0
tmpfs /run tmpfs rw,nosuid,relatime,size=408904k,mode=755 0 0"""


class TestVirtDriver(test.NoDBTestCase):
    def test_block_device(self):
        swap = {'device_name': '/dev/sdb', 'swap_size': 1}
        ephemerals = [{'num': 0,
                       'virtual_name': 'ephemeral0',
                       'device_name': '/dev/sdc1',
                       'size': 1}]
        block_device_mapping = [{'mount_device': '/dev/sde',
                                 'device_path': 'fake_device'}]
        block_device_info = {
                'root_device_name': '/dev/sda',
                'swap': swap,
                'ephemerals': ephemerals,
                'block_device_mapping': block_device_mapping}

        empty_block_device_info = {}

        self.assertEqual(
            driver.block_device_info_get_root_device(block_device_info),
            '/dev/sda')
        self.assertIsNone(
            driver.block_device_info_get_root_device(empty_block_device_info))
        self.assertIsNone(driver.block_device_info_get_root_device(None))

        self.assertEqual(
            driver.block_device_info_get_swap(block_device_info), swap)
        self.assertIsNone(driver.block_device_info_get_swap(
            empty_block_device_info)['device_name'])
        self.assertEqual(driver.block_device_info_get_swap(
            empty_block_device_info)['swap_size'], 0)
        self.assertIsNone(
            driver.block_device_info_get_swap({'swap': None})['device_name'])
        self.assertEqual(
            driver.block_device_info_get_swap({'swap': None})['swap_size'], 0)
        self.assertIsNone(
            driver.block_device_info_get_swap(None)['device_name'])
        self.assertEqual(
            driver.block_device_info_get_swap(None)['swap_size'], 0)

        self.assertEqual(
            driver.block_device_info_get_ephemerals(block_device_info),
            ephemerals)
        self.assertEqual(
            driver.block_device_info_get_ephemerals(empty_block_device_info),
            [])
        self.assertEqual(
            driver.block_device_info_get_ephemerals(None),
            [])

    def test_swap_is_usable(self):
        self.assertFalse(driver.swap_is_usable(None))
        self.assertFalse(driver.swap_is_usable({'device_name': None}))
        self.assertFalse(driver.swap_is_usable({'device_name': '/dev/sdb',
                                                'swap_size': 0}))
        self.assertTrue(driver.swap_is_usable({'device_name': '/dev/sdb',
                                               'swap_size': 1}))


class FakeMount(object):
    def __init__(self, image, mount_dir, partition=None, device=None):
        self.image = image
        self.partition = partition
        self.mount_dir = mount_dir
        self.linked = self.mapped = self.mounted = False
        self.device = device

    def do_mount(self):
        self.linked = True
        self.mapped = True
        self.mounted = True
        self.device = '/dev/fake'
        return True

    def do_umount(self):
        self.linked = True
        self.mounted = False
    def do_teardown(self):
        self.linked = False
        self.mapped = False
        self.mounted = False
        self.device = None


class TestDiskImage(test.NoDBTestCase):
    def mock_proc_mounts(self, mock_open):
        response = io.StringIO(six.text_type(PROC_MOUNTS_CONTENTS))
        mock_open.return_value = response

    @mock.patch.object(six.moves.builtins, 'open')
    def test_mount(self, mock_open):
        self.mock_proc_mounts(mock_open)
        image = '/tmp/fake-image'
        mountdir = '/mnt/fake_rootfs'
        fakemount = FakeMount(image, mountdir, None)

        def fake_instance_for_format(image, mountdir, partition):
            return fakemount

        self.stub_out('nova.virt.disk.mount.api.Mount.instance_for_format',
                      staticmethod(fake_instance_for_format))
        diskimage = disk_api._DiskImage(image=image, mount_dir=mountdir)
        dev = diskimage.mount()
        self.assertEqual(diskimage._mounter, fakemount)
        self.assertEqual(dev, '/dev/fake')

    @mock.patch.object(six.moves.builtins, 'open')
    def test_umount(self, mock_open):
        self.mock_proc_mounts(mock_open)

        image = '/tmp/fake-image'
        mountdir = '/mnt/fake_rootfs'
        fakemount = FakeMount(image, mountdir, None)

        def fake_instance_for_format(image, mountdir, partition):
            return fakemount

        self.stub_out('nova.virt.disk.mount.api.Mount.instance_for_format',
                      staticmethod(fake_instance_for_format))
        diskimage = disk_api._DiskImage(image=image, mount_dir=mountdir)
        dev = diskimage.mount()
        self.assertEqual(diskimage._mounter, fakemount)
        self.assertEqual(dev, '/dev/fake')
        diskimage.umount()
        self.assertIsNone(diskimage._mounter)

    @mock.patch.object(six.moves.builtins, 'open')
    def test_teardown(self, mock_open):
        self.mock_proc_mounts(mock_open)

        image = '/tmp/fake-image'
        mountdir = '/mnt/fake_rootfs'
        fakemount = FakeMount(image, mountdir, None)

        def fake_instance_for_format(image, mountdir, partition):
            return fakemount

        self.stub_out('nova.virt.disk.mount.api.Mount.instance_for_format',
                      staticmethod(fake_instance_for_format))
        diskimage = disk_api._DiskImage(image=image, mount_dir=mountdir)
        dev = diskimage.mount()
        self.assertEqual(diskimage._mounter, fakemount)
        self.assertEqual(dev, '/dev/fake')
        diskimage.teardown()
        self.assertIsNone(diskimage._mounter)


class TestVirtDisk(test.NoDBTestCase):
    def setUp(self):
        super(TestVirtDisk, self).setUp()
        # TODO(mikal): this can probably be removed post privsep cleanup.
        self.executes = []

        def fake_execute(*cmd, **kwargs):
            self.executes.append(cmd)
            return None, None

        self.stub_out('nova.utils.execute', fake_execute)

    def test_lxc_setup_container(self):
        image = '/tmp/fake-image'
        container_dir = '/mnt/fake_rootfs/'

        def proc_mounts(mount_point):
            return None

        def fake_instance_for_format(image, mountdir, partition):
            return FakeMount(image, mountdir, partition)

        self.stub_out('os.path.exists', lambda _: True)
        self.stub_out('nova.virt.disk.api._DiskImage._device_for_path',
                      proc_mounts)
        self.stub_out('nova.virt.disk.mount.api.Mount.instance_for_format',
                      staticmethod(fake_instance_for_format))

        self.assertEqual(disk_api.setup_container(image, container_dir),
                         '/dev/fake')

    @mock.patch('os.path.exists', return_value=True)
    @mock.patch('nova.privsep.fs.loopremove')
    @mock.patch('nova.privsep.fs.umount')
    @mock.patch('nova.privsep.fs.nbd_disconnect')
    @mock.patch('nova.privsep.fs.remove_device_maps')
    @mock.patch('nova.privsep.fs.blockdev_flush')
    def test_lxc_teardown_container(
            self, mock_blockdev_flush, mock_remove_maps, mock_nbd_disconnect,
            mock_umount, mock_loopremove, mock_exist):

        def proc_mounts(mount_point):
            mount_points = {
                '/mnt/loop/nopart': '/dev/loop0',
                '/mnt/loop/part': '/dev/mapper/loop0p1',
                '/mnt/nbd/nopart': '/dev/nbd15',
                '/mnt/nbd/part': '/dev/mapper/nbd15p1',
            }
            return mount_points[mount_point]

        self.stub_out('nova.virt.disk.api._DiskImage._device_for_path',
                      proc_mounts)

        disk_api.teardown_container('/mnt/loop/nopart')
        mock_loopremove.assert_has_calls([mock.call('/dev/loop0')])
        mock_loopremove.reset_mock()
        mock_umount.assert_has_calls([mock.call('/dev/loop0')])
        mock_umount.reset_mock()

        disk_api.teardown_container('/mnt/loop/part')
        mock_loopremove.assert_has_calls([mock.call('/dev/loop0')])
        mock_loopremove.reset_mock()
        mock_umount.assert_has_calls([mock.call('/dev/mapper/loop0p1')])
        mock_umount.reset_mock()
        mock_remove_maps.assert_has_calls([mock.call('/dev/loop0')])
        mock_remove_maps.reset_mock()

        disk_api.teardown_container('/mnt/nbd/nopart')
        mock_nbd_disconnect.assert_has_calls([mock.call('/dev/nbd15')])
        mock_umount.assert_has_calls([mock.call('/dev/nbd15')])
        mock_blockdev_flush.assert_has_calls([mock.call('/dev/nbd15')])
        mock_nbd_disconnect.reset_mock()
        mock_umount.reset_mock()
        mock_blockdev_flush.reset_mock()

        disk_api.teardown_container('/mnt/nbd/part')
        mock_nbd_disconnect.assert_has_calls([mock.call('/dev/nbd15')])
        mock_umount.assert_has_calls([mock.call('/dev/mapper/nbd15p1')])
        mock_blockdev_flush.assert_has_calls([mock.call('/dev/nbd15')])
        mock_nbd_disconnect.reset_mock()
        mock_umount.reset_mock()
        mock_remove_maps.assert_has_calls([mock.call('/dev/nbd15')])
        mock_remove_maps.reset_mock()
        mock_blockdev_flush.reset_mock()

        # NOTE(thomasem): Not adding any commands in this case, because we're
        # not expecting an additional umount for LocalBlockImages. This is to
        # assert that no additional commands are run in this case.
        disk_api.teardown_container('/dev/volume-group/uuid_disk')
        mock_umount.assert_not_called()

    @mock.patch('os.path.exists', return_value=True)
    @mock.patch('nova.virt.disk.api._DiskImage._device_for_path',
                return_value=None)
    @mock.patch('nova.privsep.fs.loopremove')
    @mock.patch('nova.privsep.fs.nbd_disconnect')
    def test_lxc_teardown_container_with_namespace_cleaned(
            self, mock_nbd_disconnect, mock_loopremove, mock_device_for_path,
            mock_exists):
        disk_api.teardown_container('/mnt/loop/nopart', '/dev/loop0')
        mock_loopremove.assert_has_calls([mock.call('/dev/loop0')])
        mock_loopremove.reset_mock()

        disk_api.teardown_container('/mnt/loop/part', '/dev/loop0')
        mock_loopremove.assert_has_calls([mock.call('/dev/loop0')])
        mock_loopremove.reset_mock()

        disk_api.teardown_container('/mnt/nbd/nopart', '/dev/nbd15')
        mock_nbd_disconnect.assert_has_calls([mock.call('/dev/nbd15')])
        mock_nbd_disconnect.reset_mock()

        disk_api.teardown_container('/mnt/nbd/part', '/dev/nbd15')
        mock_nbd_disconnect.assert_has_calls([mock.call('/dev/nbd15')])
        mock_nbd_disconnect.reset_mock()

nova-17.0.1/nova/tests/unit/virt/fakelibosinfo.py

# Copyright 2016 Red Hat, Inc
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.


def match_item(obj, fltr):
    key, val = list(fltr._filter.items())[0]
    if key == 'class':
        key = '_class'
    elif key == 'short-id':
        key = 'short_id'
    return getattr(obj, key, None) == val


class Loader(object):
    def process_default_path(self):
        pass

    def get_db(self):
        return Db()


class Db(object):
    def __init__(self):
        # Generate test devices
        self.devs = []
        self.oslist = None

        net = Device()
        net._class = 'net'
        net.name = 'virtio-net'
        self.devs.append(net)

        net = Device()
        net._class = 'block'
        net.name = 'virtio-block'
        self.devs.append(net)

        devlist = DeviceList()
        devlist.devices = self.devs

        fedora = Os()
        fedora.name = 'Fedora 22'
        fedora.id = 'http://fedoraproject.org/fedora/22'
        fedora.short_id = 'fedora22'
        fedora.dev_list = devlist

        self.oslist = OsList()
        self.oslist.os_list = [fedora]

    def get_os_list(self):
        return self.oslist


class Filter(object):
    def __init__(self):
        self._filter = {}

    @classmethod
    def new(cls):
        return cls()

    def add_constraint(self, flt_key, val):
        self._filter[flt_key] = val


class OsList(object):
    def __init__(self):
        self.os_list = []

    def new_filtered(self, fltr):
        new_list = OsList()
        new_list.os_list = [os for os in self.os_list
                            if match_item(os, fltr)]
        return new_list

    def get_length(self):
        return len(self.os_list)

    def get_nth(self, index):
        return self.os_list[index]


class Os(object):
    def __init__(self):
        self.name = None
        self.short_id = None
        self.id = None
        self.dev_list = None

    def get_all_devices(self, fltr):
        new_list = DeviceList()
        new_list.devices = [dev for dev in self.dev_list.devices
                            if match_item(dev, fltr)]
        return new_list

    def get_name(self):
        return self.name


class DeviceList(object):
    def __init__(self):
        self.devices = []

    def get_length(self):
        return len(self.devices)

    def get_nth(self, index):
        return self.devices[index]
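
# A quick illustration of how these fakes fit together: tests build a Filter,
# constrain it on a libosinfo-style key, and run it through the list types
# above, which delegate the comparison to match_item(). A hypothetical usage
# sketch (not part of the fake itself):
#
#     loader = Loader()
#     loader.process_default_path()
#     fltr = Filter.new()
#     fltr.add_constraint('short-id', 'fedora22')
#     matches = loader.get_db().get_os_list().new_filtered(fltr)
#     assert matches.get_length() == 1
#     assert matches.get_nth(0).get_name() == 'Fedora 22'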
class Device(object):
    def __init__(self):
        self.name = None
        self._class = None

    def get_name(self):
        return self.name

nova-17.0.1/nova/tests/unit/virt/libvirt/test_migration.py

# Copyright (c) 2016 Red Hat, Inc
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

from collections import deque

from lxml import etree
import mock
from oslo_utils import units

from nova.compute import power_state
from nova import objects
from nova import test
from nova.tests.unit import matchers
from nova.tests.unit.virt.libvirt import fakelibvirt
from nova.tests import uuidsentinel as uuids
from nova.virt.libvirt import config as vconfig
from nova.virt.libvirt import guest as libvirt_guest
from nova.virt.libvirt import host
from nova.virt.libvirt import migration


class UtilityMigrationTestCase(test.NoDBTestCase):

    def test_graphics_listen_addrs(self):
        data = objects.LibvirtLiveMigrateData(
            graphics_listen_addr_vnc='127.0.0.1',
            graphics_listen_addr_spice='127.0.0.2')
        addrs = migration.graphics_listen_addrs(data)
        self.assertEqual({
            'vnc': '127.0.0.1',
            'spice': '127.0.0.2'}, addrs)

    def test_graphics_listen_addrs_empty(self):
        data = objects.LibvirtLiveMigrateData()
        addrs = migration.graphics_listen_addrs(data)
        self.assertIsNone(addrs)

    def test_graphics_listen_addrs_spice(self):
        data = objects.LibvirtLiveMigrateData(
            graphics_listen_addr_spice='127.0.0.2')
        addrs = migration.graphics_listen_addrs(data)
        self.assertEqual({
            'vnc': None,
            'spice': '127.0.0.2'}, addrs)

    def test_graphics_listen_addrs_vnc(self):
        data = objects.LibvirtLiveMigrateData(
            graphics_listen_addr_vnc='127.0.0.1')
        addrs = migration.graphics_listen_addrs(data)
        self.assertEqual({
            'vnc': '127.0.0.1',
            'spice': None}, addrs)

    def test_serial_listen_addr(self):
        data = objects.LibvirtLiveMigrateData(
            serial_listen_addr='127.0.0.1')
        addr = migration.serial_listen_addr(data)
        self.assertEqual('127.0.0.1', addr)

    def test_serial_listen_addr_empty(self):
        data = objects.LibvirtLiveMigrateData()
        addr = migration.serial_listen_addr(data)
        self.assertIsNone(addr)

    def test_serial_listen_addr_None(self):
        data = objects.LibvirtLiveMigrateData()
        data.serial_listen_addr = None
        addr = migration.serial_listen_addr(data)
        self.assertIsNone(addr)

    def test_serial_listen_ports(self):
        data = objects.LibvirtLiveMigrateData(
            serial_listen_ports=[1, 2, 3])
        ports = migration.serial_listen_ports(data)
        self.assertEqual([1, 2, 3], ports)

    def test_serial_listen_ports_empty(self):
        data = objects.LibvirtLiveMigrateData()
        ports = migration.serial_listen_ports(data)
        self.assertEqual([], ports)

    @mock.patch('lxml.etree.tostring')
    @mock.patch.object(migration, '_update_perf_events_xml')
    @mock.patch.object(migration, '_update_graphics_xml')
    @mock.patch.object(migration, '_update_serial_xml')
    @mock.patch.object(migration, '_update_volume_xml')
    def test_get_updated_guest_xml(
            self, mock_volume, mock_serial, mock_graphics,
            mock_perf_events_xml, mock_tostring):
        data = objects.LibvirtLiveMigrateData()
        mock_guest = mock.Mock(spec=libvirt_guest.Guest)
        get_volume_config = mock.MagicMock()
        mock_guest.get_xml_desc.return_value = ''

        migration.get_updated_guest_xml(mock_guest, data, get_volume_config)
        mock_graphics.assert_called_once_with(mock.ANY, data)
        mock_serial.assert_called_once_with(mock.ANY, data)
        mock_volume.assert_called_once_with(mock.ANY, data,
                                            get_volume_config)
        mock_perf_events_xml.assert_called_once_with(mock.ANY, data)
        self.assertTrue(mock_tostring.called)

    def test_update_serial_xml_serial(self):
        data = objects.LibvirtLiveMigrateData(
            serial_listen_addr='127.0.0.100',
            serial_listen_ports=[2001])
        xml = """ """
        doc = etree.fromstring(xml)
        res = etree.tostring(migration._update_serial_xml(doc, data),
                             encoding='unicode')
        new_xml = xml.replace("127.0.0.1", "127.0.0.100").replace(
            "2000", "2001")
        self.assertThat(res, matchers.XMLMatches(new_xml))

    def test_update_serial_xml_console(self):
        data = objects.LibvirtLiveMigrateData(
            serial_listen_addr='127.0.0.100',
            serial_listen_ports=[299, 300])
        xml = """ """
        doc = etree.fromstring(xml)
        res = etree.tostring(migration._update_serial_xml(doc, data),
                             encoding='unicode')
        new_xml = xml.replace("127.0.0.1", "127.0.0.100").replace(
            "2001", "299").replace("2002", "300")
        self.assertThat(res, matchers.XMLMatches(new_xml))

    def test_update_serial_xml_without_ports(self):
        # This test is for backwards compatibility when we don't
        # get the serial ports from the target node.
        data = objects.LibvirtLiveMigrateData(
            serial_listen_addr='127.0.0.100',
            serial_listen_ports=[])
        xml = """ """
        doc = etree.fromstring(xml)
        res = etree.tostring(migration._update_serial_xml(doc, data),
                             encoding='unicode')
        new_xml = xml.replace("127.0.0.1", "127.0.0.100")
        self.assertThat(res, matchers.XMLMatches(new_xml))

    def test_update_graphics(self):
        data = objects.LibvirtLiveMigrateData(
            graphics_listen_addr_vnc='127.0.0.100',
            graphics_listen_addr_spice='127.0.0.200')
        xml = """ """
        doc = etree.fromstring(xml)
        res = etree.tostring(migration._update_graphics_xml(doc, data),
                             encoding='unicode')
        new_xml = xml.replace("127.0.0.1", "127.0.0.100")
        new_xml = new_xml.replace("127.0.0.2", "127.0.0.200")
        self.assertThat(res, matchers.XMLMatches(new_xml))

    def test_update_volume_xml(self):
        connection_info = {
            'driver_volume_type': 'iscsi',
            'serial': '58a84f6d-3f0c-4e19-a0af-eb657b790657',
            'data': {
                'access_mode': 'rw',
                'target_discovered': False,
                'target_iqn': 'ip-1.2.3.4:3260-iqn.cde.67890.opst-lun-Z',
                'volume_id': '58a84f6d-3f0c-4e19-a0af-eb657b790657',
                'device_path':
                '/dev/disk/by-path/ip-1.2.3.4:3260-iqn.cde.67890.opst-lun-Z'}}
        bdm = objects.LibvirtLiveMigrateBDMInfo(
            serial='58a84f6d-3f0c-4e19-a0af-eb657b790657',
            bus='virtio', type='disk', dev='vdb',
            connection_info=connection_info)
        data = objects.LibvirtLiveMigrateData(
            target_connect_addr='127.0.0.1',
            bdms=[bdm],
            block_migration=False)
        xml = """
            58a84f6d-3f0c-4e19-a0af-eb657b790657
            """
""" conf = vconfig.LibvirtConfigGuestDisk() conf.source_device = bdm.type conf.driver_name = "qemu" conf.driver_format = "raw" conf.driver_cache = "none" conf.target_dev = bdm.dev conf.target_bus = bdm.bus conf.serial = bdm.connection_info.get('serial') conf.source_type = "block" conf.source_path = bdm.connection_info['data'].get('device_path') get_volume_config = mock.MagicMock(return_value=conf) doc = etree.fromstring(xml) res = etree.tostring(migration._update_volume_xml( doc, data, get_volume_config), encoding='unicode') new_xml = xml.replace('ip-1.2.3.4:3260-iqn.abc.12345.opst-lun-X', 'ip-1.2.3.4:3260-iqn.cde.67890.opst-lun-Z') self.assertThat(res, matchers.XMLMatches(new_xml)) def test_update_volume_xml_keeps_address(self): # Now test to make sure address isn't altered for virtio-scsi and rbd connection_info = { 'driver_volume_type': 'rbd', 'serial': 'd299a078-f0db-4993-bf03-f10fe44fd192', 'data': { 'access_mode': 'rw', 'secret_type': 'ceph', 'name': 'cinder-volumes/volume-d299a078', 'encrypted': False, 'discard': True, 'cluster_name': 'ceph', 'secret_uuid': '1a790a26-dd49-4825-8d16-3dd627cf05a9', 'qos_specs': None, 'auth_enabled': True, 'volume_id': 'd299a078-f0db-4993-bf03-f10fe44fd192', 'hosts': ['172.16.128.101', '172.16.128.121'], 'auth_username': 'cinder', 'ports': ['6789', '6789', '6789']}} bdm = objects.LibvirtLiveMigrateBDMInfo( serial='d299a078-f0db-4993-bf03-f10fe44fd192', bus='scsi', type='disk', dev='sdc', connection_info=connection_info) data = objects.LibvirtLiveMigrateData( target_connect_addr=None, bdms=[bdm], block_migration=False) xml = """ d299a078-f0db-4993-bf03-f10fe44fd192
""" conf = vconfig.LibvirtConfigGuestDisk() conf.source_device = bdm.type conf.driver_name = "qemu" conf.driver_format = "raw" conf.driver_cache = "writeback" conf.target_dev = bdm.dev conf.target_bus = bdm.bus conf.serial = bdm.connection_info.get('serial') conf.source_type = "network" conf.driver_discard = 'unmap' conf.device_addr = vconfig.LibvirtConfigGuestDeviceAddressDrive() conf.device_addr.controller = 0 get_volume_config = mock.MagicMock(return_value=conf) doc = etree.fromstring(xml) res = etree.tostring(migration._update_volume_xml( doc, data, get_volume_config), encoding='unicode') new_xml = xml.replace('sdb', 'sdc') self.assertThat(res, matchers.XMLMatches(new_xml)) def test_update_volume_xml_add_encryption(self): connection_info = { 'driver_volume_type': 'rbd', 'serial': 'd299a078-f0db-4993-bf03-f10fe44fd192', 'data': { 'access_mode': 'rw', 'secret_type': 'ceph', 'name': 'cinder-volumes/volume-d299a078', 'encrypted': False, 'discard': True, 'cluster_name': 'ceph', 'secret_uuid': '1a790a26-dd49-4825-8d16-3dd627cf05a9', 'qos_specs': None, 'auth_enabled': True, 'volume_id': 'd299a078-f0db-4993-bf03-f10fe44fd192', 'hosts': ['172.16.128.101', '172.16.128.121'], 'auth_username': 'cinder', 'ports': ['6789', '6789', '6789']}} bdm = objects.LibvirtLiveMigrateBDMInfo( serial='d299a078-f0db-4993-bf03-f10fe44fd192', bus='scsi', type='disk', dev='sdb', connection_info=connection_info, encryption_secret_uuid=uuids.encryption_secret_uuid) data = objects.LibvirtLiveMigrateData( target_connect_addr=None, bdms=[bdm], block_migration=False) xml = """ d299a078-f0db-4993-bf03-f10fe44fd192
""" new_xml = """ d299a078-f0db-4993-bf03-f10fe44fd192
""" % {'encryption_secret_uuid': uuids.encryption_secret_uuid} conf = vconfig.LibvirtConfigGuestDisk() conf.source_device = bdm.type conf.driver_name = "qemu" conf.driver_format = "raw" conf.driver_cache = "writeback" conf.target_dev = bdm.dev conf.target_bus = bdm.bus conf.serial = bdm.connection_info.get('serial') conf.source_type = "network" conf.driver_discard = 'unmap' conf.device_addr = vconfig.LibvirtConfigGuestDeviceAddressDrive() conf.device_addr.controller = 0 get_volume_config = mock.MagicMock(return_value=conf) doc = etree.fromstring(xml) res = etree.tostring(migration._update_volume_xml( doc, data, get_volume_config), encoding='unicode') self.assertThat(res, matchers.XMLMatches(new_xml)) def test_update_volume_xml_update_encryption(self): connection_info = { 'driver_volume_type': 'rbd', 'serial': 'd299a078-f0db-4993-bf03-f10fe44fd192', 'data': { 'access_mode': 'rw', 'secret_type': 'ceph', 'name': 'cinder-volumes/volume-d299a078', 'encrypted': False, 'discard': True, 'cluster_name': 'ceph', 'secret_uuid': '1a790a26-dd49-4825-8d16-3dd627cf05a9', 'qos_specs': None, 'auth_enabled': True, 'volume_id': 'd299a078-f0db-4993-bf03-f10fe44fd192', 'hosts': ['172.16.128.101', '172.16.128.121'], 'auth_username': 'cinder', 'ports': ['6789', '6789', '6789']}} bdm = objects.LibvirtLiveMigrateBDMInfo( serial='d299a078-f0db-4993-bf03-f10fe44fd192', bus='scsi', type='disk', dev='sdb', connection_info=connection_info, encryption_secret_uuid=uuids.encryption_secret_uuid_new) data = objects.LibvirtLiveMigrateData( target_connect_addr=None, bdms=[bdm], block_migration=False) xml = """ d299a078-f0db-4993-bf03-f10fe44fd192
""" % {'encryption_secret_uuid': uuids.encryption_secret_uuid_old} conf = vconfig.LibvirtConfigGuestDisk() conf.source_device = bdm.type conf.driver_name = "qemu" conf.driver_format = "raw" conf.driver_cache = "writeback" conf.target_dev = bdm.dev conf.target_bus = bdm.bus conf.serial = bdm.connection_info.get('serial') conf.source_type = "network" conf.driver_discard = 'unmap' conf.device_addr = vconfig.LibvirtConfigGuestDeviceAddressDrive() conf.device_addr.controller = 0 get_volume_config = mock.MagicMock(return_value=conf) doc = etree.fromstring(xml) res = etree.tostring(migration._update_volume_xml( doc, data, get_volume_config), encoding='unicode') new_xml = xml.replace(uuids.encryption_secret_uuid_old, uuids.encryption_secret_uuid_new) self.assertThat(res, matchers.XMLMatches(new_xml)) def test_update_perf_events_xml(self): data = objects.LibvirtLiveMigrateData( supported_perf_events=['cmt']) xml = """ """ doc = etree.fromstring(xml) res = etree.tostring(migration._update_perf_events_xml(doc, data), encoding='unicode') self.assertThat(res, matchers.XMLMatches(""" """)) def test_update_perf_events_xml_add_new_events(self): data = objects.LibvirtLiveMigrateData( supported_perf_events=['cmt']) xml = """ """ doc = etree.fromstring(xml) res = etree.tostring(migration._update_perf_events_xml(doc, data), encoding='unicode') self.assertThat(res, matchers.XMLMatches(""" """)) def test_update_perf_events_xml_add_new_events1(self): data = objects.LibvirtLiveMigrateData( supported_perf_events=['cmt', 'mbml']) xml = """ """ doc = etree.fromstring(xml) res = etree.tostring(migration._update_perf_events_xml(doc, data), encoding='unicode') self.assertThat(res, matchers.XMLMatches(""" """)) def test_update_perf_events_xml_remove_all_events(self): data = objects.LibvirtLiveMigrateData( supported_perf_events=[]) xml = """ """ doc = etree.fromstring(xml) res = etree.tostring(migration._update_perf_events_xml(doc, data), encoding='unicode') self.assertThat(res, matchers.XMLMatches(""" """)) class MigrationMonitorTestCase(test.NoDBTestCase): def setUp(self): super(MigrationMonitorTestCase, self).setUp() self.useFixture(fakelibvirt.FakeLibvirtFixture()) flavor = objects.Flavor(memory_mb=2048, swap=0, vcpu_weight=None, root_gb=1, id=2, name=u'm1.small', ephemeral_gb=0, rxtx_factor=1.0, flavorid=u'1', vcpus=1, extra_specs={}) instance = { 'id': 1, 'uuid': '32dfcb37-5af1-552b-357c-be8c3aa38310', 'memory_kb': '1024000', 'basepath': '/some/path', 'bridge_name': 'br100', 'display_name': "Acme webserver", 'vcpus': 2, 'project_id': 'fake', 'bridge': 'br101', 'image_ref': '155d900f-4e14-4e4c-a73d-069cbf4541e6', 'root_gb': 10, 'ephemeral_gb': 20, 'instance_type_id': '5', # m1.small 'extra_specs': {}, 'system_metadata': { 'image_disk_format': 'raw', }, 'flavor': flavor, 'new_flavor': None, 'old_flavor': None, 'pci_devices': objects.PciDeviceList(), 'numa_topology': None, 'config_drive': None, 'vm_mode': None, 'kernel_id': None, 'ramdisk_id': None, 'os_type': 'linux', 'user_id': '838a72b0-0d54-4827-8fd6-fb1227633ceb', 'ephemeral_key_uuid': None, 'vcpu_model': None, 'host': 'fake-host', 'task_state': None, } self.instance = objects.Instance(**instance) self.conn = fakelibvirt.Connection("qemu:///system") self.dom = fakelibvirt.Domain(self.conn, "", True) self.host = host.Host("qemu:///system") self.guest = libvirt_guest.Guest(self.dom) @mock.patch.object(libvirt_guest.Guest, "is_active", return_value=True) def test_live_migration_find_type_active(self, mock_active): self.assertEqual(migration.find_job_type(self.guest, 
    @mock.patch.object(libvirt_guest.Guest, "is_active", return_value=False)
    def test_live_migration_find_type_inactive(self, mock_active):
        self.assertEqual(migration.find_job_type(self.guest, self.instance),
                         fakelibvirt.VIR_DOMAIN_JOB_COMPLETED)

    @mock.patch.object(libvirt_guest.Guest, "is_active")
    def test_live_migration_find_type_no_domain(self, mock_active):
        mock_active.side_effect = fakelibvirt.make_libvirtError(
            fakelibvirt.libvirtError,
            "No domain with ID",
            error_code=fakelibvirt.VIR_ERR_NO_DOMAIN)

        self.assertEqual(migration.find_job_type(self.guest, self.instance),
                         fakelibvirt.VIR_DOMAIN_JOB_COMPLETED)

    @mock.patch.object(libvirt_guest.Guest, "is_active")
    def test_live_migration_find_type_bad_err(self, mock_active):
        mock_active.side_effect = fakelibvirt.make_libvirtError(
            fakelibvirt.libvirtError,
            "Something weird happened",
            error_code=fakelibvirt.VIR_ERR_INTERNAL_ERROR)

        self.assertEqual(migration.find_job_type(self.guest, self.instance),
                         fakelibvirt.VIR_DOMAIN_JOB_FAILED)

    def test_live_migration_abort_stuck(self):
        # Progress time exceeds progress timeout
        self.assertTrue(migration.should_abort(self.instance,
                                               5000, 1000, 2000,
                                               4500, 9000, "running"))

    def test_live_migration_abort_no_prog_timeout(self):
        # Progress timeout is disabled
        self.assertFalse(migration.should_abort(self.instance,
                                                5000, 1000, 0,
                                                4500, 9000, "running"))

    def test_live_migration_abort_not_stuck(self):
        # Progress time is less than progress timeout
        self.assertFalse(migration.should_abort(self.instance,
                                                5000, 4500, 2000,
                                                4500, 9000, "running"))

    def test_live_migration_abort_too_long(self):
        # Elapsed time is over completion timeout
        self.assertTrue(migration.should_abort(self.instance,
                                               5000, 4500, 2000,
                                               4500, 2000, "running"))

    def test_live_migration_abort_no_comp_timeout(self):
        # Completion timeout is disabled
        self.assertFalse(migration.should_abort(self.instance,
                                                5000, 4500, 2000,
                                                4500, 0, "running"))

    def test_live_migration_abort_still_working(self):
        # Elapsed time is less than completion timeout
        self.assertFalse(migration.should_abort(self.instance,
                                                5000, 4500, 2000,
                                                4500, 9000, "running"))

    def test_live_migration_postcopy_switch(self):
        # Migration progress is not fast enough
        self.assertTrue(migration.should_switch_to_postcopy(
            2, 100, 105, "running"))

    def test_live_migration_postcopy_switch_already_switched(self):
        # Migration already running in postcopy mode
        self.assertFalse(migration.should_switch_to_postcopy(
            2, 100, 105, "running (post-copy)"))

    def test_live_migration_postcopy_switch_too_soon(self):
        # First memory iteration not completed yet
        self.assertFalse(migration.should_switch_to_postcopy(
            1, 100, 105, "running"))

    def test_live_migration_postcopy_switch_fast_progress(self):
        # Migration progress is good
        self.assertFalse(migration.should_switch_to_postcopy(
            2, 100, 155, "running"))

    @mock.patch.object(libvirt_guest.Guest,
                       "migrate_configure_max_downtime")
    def test_live_migration_update_downtime_no_steps(self, mock_dt):
        steps = []
        newdt = migration.update_downtime(self.guest, self.instance,
                                          None, steps, 5000)
        self.assertIsNone(newdt)
        self.assertFalse(mock_dt.called)

    @mock.patch.object(libvirt_guest.Guest,
                       "migrate_configure_max_downtime")
    def test_live_migration_update_downtime_too_early(self, mock_dt):
        steps = [
            (9000, 50),
            (18000, 200),
        ]
        # We shouldn't change downtime since we haven't hit the first time
        newdt = migration.update_downtime(self.guest, self.instance,
                                          None, steps, 5000)
        self.assertIsNone(newdt)
        self.assertFalse(mock_dt.called)

    @mock.patch.object(libvirt_guest.Guest,
                       "migrate_configure_max_downtime")
    def test_live_migration_update_downtime_step1(self, mock_dt):
        steps = [
            (9000, 50),
            (18000, 200),
        ]
        # We should pick the first downtime entry
        newdt = migration.update_downtime(self.guest, self.instance,
                                          None, steps, 11000)
        self.assertEqual(newdt, 50)
        mock_dt.assert_called_once_with(50)

    @mock.patch.object(libvirt_guest.Guest,
                       "migrate_configure_max_downtime")
    def test_live_migration_update_downtime_nostep1(self, mock_dt):
        steps = [
            (9000, 50),
            (18000, 200),
        ]
        # We shouldn't change downtime, since it's already set
        newdt = migration.update_downtime(self.guest, self.instance,
                                          50, steps, 11000)
        self.assertEqual(newdt, 50)
        self.assertFalse(mock_dt.called)

    @mock.patch.object(libvirt_guest.Guest,
                       "migrate_configure_max_downtime")
    def test_live_migration_update_downtime_step2(self, mock_dt):
        steps = [
            (9000, 50),
            (18000, 200),
        ]
        newdt = migration.update_downtime(self.guest, self.instance,
                                          50, steps, 22000)
        self.assertEqual(newdt, 200)
        mock_dt.assert_called_once_with(200)

    @mock.patch.object(libvirt_guest.Guest,
                       "migrate_configure_max_downtime")
    def test_live_migration_update_downtime_err(self, mock_dt):
        steps = [
            (9000, 50),
            (18000, 200),
        ]
        mock_dt.side_effect = fakelibvirt.make_libvirtError(
            fakelibvirt.libvirtError,
            "Failed to set downtime",
            error_code=fakelibvirt.VIR_ERR_INTERNAL_ERROR)
        newdt = migration.update_downtime(self.guest, self.instance,
                                          50, steps, 22000)
        self.assertEqual(newdt, 200)
        mock_dt.assert_called_once_with(200)

    @mock.patch.object(objects.Instance, "save")
    @mock.patch.object(objects.Migration, "save")
    def test_live_migration_save_stats(self, mock_isave, mock_msave):
        mig = objects.Migration()

        info = libvirt_guest.JobInfo(
            memory_total=1 * units.Gi,
            memory_processed=5 * units.Gi,
            memory_remaining=500 * units.Mi,
            disk_total=15 * units.Gi,
            disk_processed=10 * units.Gi,
            disk_remaining=14 * units.Gi)

        migration.save_stats(self.instance, mig, info, 75)

        self.assertEqual(mig.memory_total, 1 * units.Gi)
        self.assertEqual(mig.memory_processed, 5 * units.Gi)
        self.assertEqual(mig.memory_remaining, 500 * units.Mi)
        self.assertEqual(mig.disk_total, 15 * units.Gi)
        self.assertEqual(mig.disk_processed, 10 * units.Gi)
        self.assertEqual(mig.disk_remaining, 14 * units.Gi)
        self.assertEqual(self.instance.progress, 25)

        mock_msave.assert_called_once_with()
        mock_isave.assert_called_once_with()

    @mock.patch.object(libvirt_guest.Guest, "migrate_start_postcopy")
    @mock.patch.object(libvirt_guest.Guest, "pause")
    def test_live_migration_run_tasks_empty_tasks(self, mock_pause,
                                                  mock_postcopy):
        tasks = deque()
        active_migrations = {self.instance.uuid: tasks}
        on_migration_failure = deque()

        mig = objects.Migration(id=1, status="running")

        migration.run_tasks(self.guest, self.instance,
                            active_migrations, on_migration_failure,
                            mig, False)

        self.assertFalse(mock_pause.called)
        self.assertFalse(mock_postcopy.called)
        self.assertEqual(len(on_migration_failure), 0)

    @mock.patch.object(libvirt_guest.Guest, "migrate_start_postcopy")
    @mock.patch.object(libvirt_guest.Guest, "pause")
    def test_live_migration_run_tasks_no_tasks(self, mock_pause,
                                               mock_postcopy):
        active_migrations = {}
        on_migration_failure = deque()

        mig = objects.Migration(id=1, status="running")

        migration.run_tasks(self.guest, self.instance,
                            active_migrations, on_migration_failure,
                            mig, False)

        self.assertFalse(mock_pause.called)
        self.assertFalse(mock_postcopy.called)
        self.assertEqual(len(on_migration_failure), 0)

    @mock.patch.object(libvirt_guest.Guest, "migrate_start_postcopy")
    @mock.patch.object(libvirt_guest.Guest, "pause")
    def test_live_migration_run_tasks_no_force_complete(self, mock_pause,
                                                        mock_postcopy):
        tasks = deque()
        # Test to ensure unknown tasks are ignored
        tasks.append("wibble")
        active_migrations = {self.instance.uuid: tasks}
        on_migration_failure = deque()

        mig = objects.Migration(id=1, status="running")

        migration.run_tasks(self.guest, self.instance,
                            active_migrations, on_migration_failure,
                            mig, False)

        self.assertFalse(mock_pause.called)
        self.assertFalse(mock_postcopy.called)
        self.assertEqual(len(on_migration_failure), 0)

    @mock.patch.object(libvirt_guest.Guest, "migrate_start_postcopy")
    @mock.patch.object(libvirt_guest.Guest, "pause")
    def test_live_migration_run_tasks_force_complete(self, mock_pause,
                                                     mock_postcopy):
        tasks = deque()
        tasks.append("force-complete")
        active_migrations = {self.instance.uuid: tasks}
        on_migration_failure = deque()

        mig = objects.Migration(id=1, status="running")

        migration.run_tasks(self.guest, self.instance,
                            active_migrations, on_migration_failure,
                            mig, False)

        mock_pause.assert_called_once_with()
        self.assertFalse(mock_postcopy.called)
        self.assertEqual(len(on_migration_failure), 1)
        self.assertEqual(on_migration_failure.pop(), "unpause")

    @mock.patch.object(libvirt_guest.Guest, "migrate_start_postcopy")
    @mock.patch.object(libvirt_guest.Guest, "pause")
    def test_live_migration_run_tasks_force_complete_postcopy_running(
            self, mock_pause, mock_postcopy):
        tasks = deque()
        tasks.append("force-complete")
        active_migrations = {self.instance.uuid: tasks}
        on_migration_failure = deque()

        mig = objects.Migration(id=1, status="running (post-copy)")

        migration.run_tasks(self.guest, self.instance,
                            active_migrations, on_migration_failure,
                            mig, True)

        self.assertFalse(mock_pause.called)
        self.assertFalse(mock_postcopy.called)
        self.assertEqual(len(on_migration_failure), 0)

    @mock.patch.object(objects.Migration, "save")
    @mock.patch.object(libvirt_guest.Guest, "migrate_start_postcopy")
    @mock.patch.object(libvirt_guest.Guest, "pause")
    def test_live_migration_run_tasks_force_complete_postcopy(
            self, mock_pause, mock_postcopy, mock_msave):
        tasks = deque()
        tasks.append("force-complete")
        active_migrations = {self.instance.uuid: tasks}
        on_migration_failure = deque()

        mig = objects.Migration(id=1, status="running")

        migration.run_tasks(self.guest, self.instance,
                            active_migrations, on_migration_failure,
                            mig, True)

        mock_postcopy.assert_called_once_with()
        self.assertFalse(mock_pause.called)
        self.assertEqual(len(on_migration_failure), 0)

    @mock.patch.object(libvirt_guest.Guest, "resume")
    @mock.patch.object(libvirt_guest.Guest, "get_power_state",
                       return_value=power_state.PAUSED)
    def test_live_migration_recover_tasks_resume(self, mock_ps, mock_resume):
        tasks = deque()
        tasks.append("unpause")

        migration.run_recover_tasks(self.host, self.guest, self.instance,
                                    tasks)

        mock_resume.assert_called_once_with()

    @mock.patch.object(libvirt_guest.Guest, "resume")
    @mock.patch.object(libvirt_guest.Guest, "get_power_state",
                       return_value=power_state.RUNNING)
    def test_live_migration_recover_tasks_no_resume(self, mock_ps,
                                                    mock_resume):
        tasks = deque()
        tasks.append("unpause")

        migration.run_recover_tasks(self.host, self.guest, self.instance,
                                    tasks)

        self.assertFalse(mock_resume.called)

    @mock.patch.object(libvirt_guest.Guest, "resume")
    def test_live_migration_recover_tasks_empty_tasks(self, mock_resume):
        tasks = deque()

        migration.run_recover_tasks(self.host, self.guest, self.instance,
                                    tasks)

        self.assertFalse(mock_resume.called)
    @mock.patch.object(libvirt_guest.Guest, "resume")
    def test_live_migration_recover_tasks_no_pause(self, mock_resume):
        tasks = deque()
        # Test to ensure unknown tasks are ignored
        tasks.append("wibble")

        migration.run_recover_tasks(self.host, self.guest, self.instance,
                                    tasks)

        self.assertFalse(mock_resume.called)

    def test_live_migration_downtime_steps(self):
        self.flags(live_migration_downtime=400, group='libvirt')
        self.flags(live_migration_downtime_steps=10, group='libvirt')
        self.flags(live_migration_downtime_delay=30, group='libvirt')

        steps = migration.downtime_steps(3.0)

        self.assertEqual([
            (0, 40),
            (90, 76),
            (180, 112),
            (270, 148),
            (360, 184),
            (450, 220),
            (540, 256),
            (630, 292),
            (720, 328),
            (810, 364),
            (900, 400),
        ], list(steps))
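
# The expected tuples above follow a simple linear schedule: with
# live_migration_downtime=400, live_migration_downtime_steps=10 and
# live_migration_downtime_delay=30 scaled by 3.0 GB of data, the wait
# between increases is 90s, the first allowed downtime is 400 / 10 = 40ms,
# and each step adds (400 - 40) / 10 = 36ms. A minimal standalone sketch of
# that schedule, reconstructed from the expectations (illustrative, not the
# driver's actual implementation):
def _downtime_steps_sketch(data_gb, downtime=400, steps=10, delay=30):
    delay = int(delay * data_gb)
    base = downtime / steps
    offset = (downtime - base) / steps
    for i in range(steps + 1):
        yield (int(delay * i), int(base + offset * i))

# list(_downtime_steps_sketch(3.0)) reproduces [(0, 40), (90, 76), ...,
# (900, 400)] from the assertion above.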
nova-17.0.1/nova/tests/unit/virt/libvirt/volume/test_volume.py

# Copyright 2010 OpenStack Foundation
# Copyright 2012 University Of Minho
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import mock

from nova import exception
from nova import test
from nova.tests.unit.virt.libvirt import fakelibvirt
from nova.tests import uuidsentinel as uuids
from nova.virt import fake
from nova.virt.libvirt import driver
from nova.virt.libvirt import host
from nova.virt.libvirt.volume import volume

SECRET_UUID = '2a0a0d6c-babf-454d-b93e-9ac9957b95e0'


class FakeSecret(object):

    def __init__(self):
        self.uuid = SECRET_UUID

    def getUUIDString(self):
        return self.uuid

    def UUIDString(self):
        return self.uuid

    def setValue(self, value):
        self.value = value
        return 0

    def getValue(self, value):
        return self.value

    def undefine(self):
        self.value = None
        return 0


class LibvirtBaseVolumeDriverSubclassSignatureTestCase(
        test.SubclassSignatureTestCase):
    def _get_base_class(self):
        # We do this because it has the side-effect of loading all the
        # volume drivers
        self.useFixture(fakelibvirt.FakeLibvirtFixture())
        driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        return volume.LibvirtBaseVolumeDriver


class LibvirtVolumeBaseTestCase(test.NoDBTestCase):
    """Contains common setup and helper methods for libvirt volume tests."""

    def setUp(self):
        super(LibvirtVolumeBaseTestCase, self).setUp()
        self.executes = []

        def fake_execute(*cmd, **kwargs):
            self.executes.append(cmd)
            return None, None

        self.stub_out('nova.utils.execute', fake_execute)

        self.useFixture(fakelibvirt.FakeLibvirtFixture())
        self.fake_host = host.Host("qemu:///system")

        self.connr = {
            'ip': '127.0.0.1',
            'initiator': 'fake_initiator',
            'host': 'fake_host'
        }
        self.disk_info = {
            "bus": "virtio",
            "dev": "vde",
            "type": "disk",
        }
        self.name = 'volume-00000001'
        self.location = '10.0.2.15:3260'
        self.iqn = 'iqn.2010-10.org.openstack:%s' % self.name
        self.vol = {'id': 1, 'name': self.name}
        self.uuid = '875a8070-d0b9-4949-8b31-104d125c9a64'
        self.user = 'foo'

    def _assertFileTypeEquals(self, tree, file_path):
        self.assertEqual('file', tree.get('type'))
        self.assertEqual(file_path, tree.find('./source').get('file'))


class LibvirtISCSIVolumeBaseTestCase(LibvirtVolumeBaseTestCase):
    """Contains common setup and helper methods for iSCSI volume tests."""

    def iscsi_connection(self, volume, location, iqn, auth=False,
                         transport=None):
        dev_name = 'ip-%s-iscsi-%s-lun-1' % (location, iqn)
        if transport is not None:
            dev_name = 'pci-0000:00:00.0-' + dev_name
        dev_path = '/dev/disk/by-path/%s' % (dev_name)
        ret = {
            'driver_volume_type': 'iscsi',
            'data': {
                'volume_id': volume['id'],
                'target_portal': location,
                'target_iqn': iqn,
                'target_lun': 1,
                'device_path': dev_path,
                'qos_specs': {
                    'total_bytes_sec': '102400',
                    'read_iops_sec': '200',
                }
            }
        }
        if auth:
            ret['data']['auth_method'] = 'CHAP'
            ret['data']['auth_username'] = 'foo'
            ret['data']['auth_password'] = 'bar'
        return ret


class LibvirtVolumeTestCase(LibvirtISCSIVolumeBaseTestCase):

    def _assertDiskInfoEquals(self, tree, disk_info):
        self.assertEqual(disk_info['type'], tree.get('device'))
        self.assertEqual(disk_info['bus'], tree.find('./target').get('bus'))
        self.assertEqual(disk_info['dev'], tree.find('./target').get('dev'))

    def _test_libvirt_volume_driver_disk_info(self):
        libvirt_driver = volume.LibvirtVolumeDriver(self.fake_host)
        connection_info = {
            'driver_volume_type': 'fake',
            'data': {
                'device_path': '/foo',
            },
            'serial': 'fake_serial',
        }
        conf = libvirt_driver.get_config(connection_info, self.disk_info)
        tree = conf.format_dom()
        self._assertDiskInfoEquals(tree, self.disk_info)

    def test_libvirt_volume_disk_info_type(self):
        self.disk_info['type'] = 'cdrom'
        self._test_libvirt_volume_driver_disk_info()

    def test_libvirt_volume_disk_info_dev(self):
        self.disk_info['dev'] = 'hdc'
        self._test_libvirt_volume_driver_disk_info()

    def test_libvirt_volume_disk_info_bus(self):
        self.disk_info['bus'] = 'scsi'
        self._test_libvirt_volume_driver_disk_info()

    def test_libvirt_volume_driver_serial(self):
        libvirt_driver = volume.LibvirtVolumeDriver(self.fake_host)
        connection_info = {
            'driver_volume_type': 'fake',
            'data': {
                'device_path': '/foo',
            },
            'serial': 'fake_serial',
        }
        conf = libvirt_driver.get_config(connection_info, self.disk_info)
        tree = conf.format_dom()
        self.assertEqual('block', tree.get('type'))
        self.assertEqual('fake_serial', tree.find('./serial').text)
        self.assertIsNone(tree.find('./blockio'))
        self.assertIsNone(tree.find("driver[@discard]"))

    def test_libvirt_volume_driver_blockio(self):
        libvirt_driver = volume.LibvirtVolumeDriver(self.fake_host)
        connection_info = {
            'driver_volume_type': 'fake',
            'data': {
                'device_path': '/foo',
                'logical_block_size': '4096',
                'physical_block_size': '4096',
            },
            'serial': 'fake_serial',
        }
        disk_info = {
            "bus": "virtio",
            "dev": "vde",
            "type": "disk",
        }
        conf = libvirt_driver.get_config(connection_info, disk_info)
        tree = conf.format_dom()
        blockio = tree.find('./blockio')
        self.assertEqual('4096', blockio.get('logical_block_size'))
        self.assertEqual('4096', blockio.get('physical_block_size'))

    def test_libvirt_volume_driver_iotune(self):
        libvirt_driver = volume.LibvirtVolumeDriver(self.fake_host)
        connection_info = {
            'driver_volume_type': 'fake',
            'data': {
                "device_path": "/foo",
                'qos_specs': 'bar',
            },
        }
        disk_info = {
            "bus": "virtio",
            "dev": "vde",
            "type": "disk",
        }
        conf = libvirt_driver.get_config(connection_info, disk_info)
        tree = conf.format_dom()
        iotune = tree.find('./iotune')
        # ensure invalid qos_specs is ignored
        self.assertIsNone(iotune)

        specs = {
            'total_bytes_sec': '102400',
            'read_bytes_sec': '51200',
            'write_bytes_sec': '0',
            'total_iops_sec': '0',
            'read_iops_sec': '200',
            'write_iops_sec': '200',
        }
        del connection_info['data']['qos_specs']
        connection_info['data'].update(dict(qos_specs=specs))
        conf = libvirt_driver.get_config(connection_info, disk_info)
        tree = conf.format_dom()
        self.assertEqual('102400', tree.find('./iotune/total_bytes_sec').text)
        self.assertEqual('51200', tree.find('./iotune/read_bytes_sec').text)
        self.assertEqual('0', tree.find('./iotune/write_bytes_sec').text)
        self.assertEqual('0', tree.find('./iotune/total_iops_sec').text)
        self.assertEqual('200', tree.find('./iotune/read_iops_sec').text)
        self.assertEqual('200', tree.find('./iotune/write_iops_sec').text)

    def test_libvirt_volume_driver_readonly(self):
        libvirt_driver = volume.LibvirtVolumeDriver(self.fake_host)
        connection_info = {
            'driver_volume_type': 'fake',
            'data': {
                "device_path": "/foo",
                'access_mode': 'bar',
            },
        }
        disk_info = {
            "bus": "virtio",
            "dev": "vde",
            "type": "disk",
        }
        self.assertRaises(exception.InvalidVolumeAccessMode,
                          libvirt_driver.get_config,
                          connection_info, self.disk_info)

        connection_info['data']['access_mode'] = 'rw'
        conf = libvirt_driver.get_config(connection_info, disk_info)
        tree = conf.format_dom()
        readonly = tree.find('./readonly')
        self.assertIsNone(readonly)

        connection_info['data']['access_mode'] = 'ro'
        conf = libvirt_driver.get_config(connection_info, disk_info)
        tree = conf.format_dom()
        readonly = tree.find('./readonly')
        self.assertIsNotNone(readonly)

    def test_libvirt_volume_multiattach(self):
        libvirt_driver = volume.LibvirtVolumeDriver(self.fake_host)
        connection_info = {
            'driver_volume_type': 'fake',
            'data': {
                "device_path": "/foo",
                'access_mode': 'rw',
            },
            'multiattach': True,
        }
        disk_info = {
            "bus": "virtio",
            "dev": "vde",
            "type": "disk",
        }
        conf = libvirt_driver.get_config(connection_info, disk_info)
        tree = conf.format_dom()
        shareable = tree.find('./shareable')
        self.assertIsNotNone(shareable)

        connection_info['multiattach'] = False
        conf = libvirt_driver.get_config(connection_info, disk_info)
        tree = conf.format_dom()
        shareable = tree.find('./shareable')
        self.assertIsNone(shareable)

    @mock.patch('nova.virt.libvirt.host.Host.has_min_version')
    def test_libvirt_volume_driver_discard_true(self, mock_has_min_version):
        # Check the discard attrib is present in driver section
        mock_has_min_version.return_value = True
        libvirt_driver = volume.LibvirtVolumeDriver(self.fake_host)
        connection_info = {
            'driver_volume_type': 'fake',
            'data': {
                'device_path': '/foo',
                'discard': True,
            },
            'serial': 'fake_serial',
        }
        conf = libvirt_driver.get_config(connection_info, self.disk_info)
        tree = conf.format_dom()
        driver_node = tree.find("driver[@discard]")
        self.assertIsNotNone(driver_node)
        self.assertEqual('unmap', driver_node.attrib['discard'])

    def test_libvirt_volume_driver_discard_false(self):
        # Check the discard attrib is not present in driver section
        libvirt_driver = volume.LibvirtVolumeDriver(self.fake_host)
        connection_info = {
            'driver_volume_type': 'fake',
            'data': {
                'device_path': '/foo',
                'discard': False,
            },
            'serial': 'fake_serial',
        }
        conf = libvirt_driver.get_config(connection_info, self.disk_info)
        tree = conf.format_dom()
        self.assertIsNone(tree.find("driver[@discard]"))
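
    # The two encryption tests below pin down the <encryption> element that
    # get_config() is expected to emit when the host knows a secret for the
    # volume. Reconstructed from the assertions (illustrative only):
    #
    #   <encryption format='luks'>
    #     <secret type='passphrase'
    #             uuid='2a0a0d6c-babf-454d-b93e-9ac9957b95e0'/>
    #   </encryption>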
tree.find("encryption") secret = encryption.find("secret") self.assertEqual('luks', encryption.attrib['format']) self.assertEqual('passphrase', secret.attrib['type']) self.assertEqual(SECRET_UUID, secret.attrib['uuid']) def test_libvirt_volume_driver_encryption_missing_secret(self): fake_host = mock.Mock(spec=host.Host) fake_host.find_secret.return_value = None libvirt_driver = volume.LibvirtVolumeDriver(fake_host) connection_info = { 'driver_volume_type': 'fake', 'data': { 'volume_id': uuids.volume_id, 'device_path': '/foo', 'discard': False, }, 'serial': 'fake_serial', } conf = libvirt_driver.get_config(connection_info, self.disk_info) tree = conf.format_dom() self.assertIsNone(tree.find("encryption")) nova-17.0.1/nova/tests/unit/virt/libvirt/volume/test_vzstorage.py0000666000175000017500000001177213250073126025265 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os import mock from os_brick.initiator import connector from nova import exception from nova.tests.unit.virt.libvirt.volume import test_volume from nova import utils from nova.virt.libvirt.volume import vzstorage class LibvirtVZStorageTestCase(test_volume.LibvirtVolumeBaseTestCase): """Tests the libvirt vzstorage volume driver.""" def setUp(self): super(LibvirtVZStorageTestCase, self).setUp() self.mnt_base = '/mnt' self.flags(vzstorage_mount_point_base=self.mnt_base, group='libvirt') self.flags(vzstorage_cache_path="/tmp/ssd-cache/%(cluster_name)s", group='libvirt') def test_libvirt_vzstorage_driver(self): libvirt_driver = vzstorage.LibvirtVZStorageVolumeDriver(self.fake_host) self.assertIsInstance(libvirt_driver.connector, connector.RemoteFsConnector) def test_libvirt_vzstorage_driver_opts_negative(self): """Test that custom options cannot duplicate the configured""" bad_opts = [ ["-c", "clus111", "-v"], ["-l", "/var/log/vstorage.log", "-L", "5x5"], ["-u", "user1", "-p", "pass1"], ["-v", "-R", "100", "-C", "/ssd"], ] for opts in bad_opts: self.flags(vzstorage_mount_opts=opts, group='libvirt') self.assertRaises(exception.NovaException, vzstorage.LibvirtVZStorageVolumeDriver, self.fake_host) def test_libvirt_vzstorage_driver_share_fmt_neg(self): drv = vzstorage.LibvirtVZStorageVolumeDriver(self.fake_host) wrong_export_string = 'mds1, mds2:/testcluster:passwd12111' connection_info = {'data': {'export': wrong_export_string, 'name': self.name}} err_pattern = ("^Valid share format is " "\[mds\[,mds1\[\.\.\.\]\]:/\]clustername\[:password\]$") self.assertRaisesRegex(exception.InvalidVolume, err_pattern, drv.connect_volume, connection_info, mock.sentinel.instance) @mock.patch.object(vzstorage.utils, 'synchronized', return_value=lambda f: f) def test_libvirt_vzstorage_driver_connect(self, mock_synchronized): def brick_conn_vol(data): return {'path': 'vstorage://testcluster'} drv = vzstorage.LibvirtVZStorageVolumeDriver(self.fake_host) drv.connector.connect_volume = brick_conn_vol export_string = 'testcluster' connection_info = {'data': {'export': export_string, 'name': self.name}} drv.connect_volume(connection_info, 
mock.sentinel.instance) self.assertEqual('vstorage://testcluster', connection_info['data']['device_path']) self.assertEqual('-u stack -g qemu -m 0770 ' '-l /var/log/vstorage/testcluster/nova.log.gz ' '-C /tmp/ssd-cache/testcluster', connection_info['data']['options']) mock_synchronized.assert_called_once_with('vz_share-testcluster') def test_libvirt_vzstorage_driver_disconnect(self): drv = vzstorage.LibvirtVZStorageVolumeDriver(self.fake_host) drv.connector.disconnect_volume = mock.MagicMock() conn = {'data': mock.sentinel.conn_data} drv.disconnect_volume(conn, mock.sentinel.instance) drv.connector.disconnect_volume.assert_called_once_with( mock.sentinel.conn_data, None) def test_libvirt_vzstorage_driver_get_config(self): libvirt_driver = vzstorage.LibvirtVZStorageVolumeDriver(self.fake_host) export_string = 'vzstorage' export_mnt_base = os.path.join(self.mnt_base, utils.get_hash_str(export_string)) file_path = os.path.join(export_mnt_base, self.name) connection_info = {'data': {'export': export_string, 'name': self.name, 'device_path': file_path}} conf = libvirt_driver.get_config(connection_info, self.disk_info) self.assertEqual('file', conf.source_type) self.assertEqual(file_path, conf.source_path) self.assertEqual('raw', conf.driver_format) self.assertEqual('writethrough', conf.driver_cache) nova-17.0.1/nova/tests/unit/virt/libvirt/volume/test_iser.py0000666000175000017500000000167613250073126024205 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. from nova.tests.unit.virt.libvirt.volume import test_volume from nova.virt.libvirt.volume import iser class LibvirtISERVolumeDriverTestCase(test_volume.LibvirtVolumeBaseTestCase): """Tests the libvirt iSER volume driver.""" def test_get_transport(self): driver = iser.LibvirtISERVolumeDriver(self.fake_host) self.assertEqual('iser', driver._get_transport()) nova-17.0.1/nova/tests/unit/virt/libvirt/volume/test_nfs.py0000666000175000017500000001204313250073126024017 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
import os import mock from nova.tests.unit.virt.libvirt.volume import test_volume from nova.tests import uuidsentinel as uuids from nova import utils from nova.virt.libvirt.volume import mount from nova.virt.libvirt.volume import nfs class LibvirtNFSVolumeDriverTestCase(test_volume.LibvirtVolumeBaseTestCase): """Tests the libvirt NFS volume driver.""" def setUp(self): super(LibvirtNFSVolumeDriverTestCase, self).setUp() m = mount.get_manager() m._reset_state() self.mnt_base = '/mnt' m.host_up(self.fake_host) self.flags(nfs_mount_point_base=self.mnt_base, group='libvirt') @mock.patch('oslo_utils.fileutils.ensure_tree') @mock.patch('nova.privsep.fs.mount') @mock.patch('os.path.ismount', side_effect=[False, True, False]) @mock.patch('nova.privsep.fs.umount') @mock.patch('nova.privsep.path.rmdir') def test_libvirt_nfs_driver(self, mock_rmdir, mock_umount, mock_ismount, mock_mount, mock_ensure_tree): libvirt_driver = nfs.LibvirtNFSVolumeDriver(self.fake_host) export_string = '192.168.1.1:/nfs/share1' export_mnt_base = os.path.join(self.mnt_base, utils.get_hash_str(export_string)) connection_info = {'data': {'export': export_string, 'name': self.name}} instance = mock.sentinel.instance instance.uuid = uuids.instance libvirt_driver.connect_volume(connection_info, instance) libvirt_driver.disconnect_volume(connection_info, mock.sentinel.instance) device_path = os.path.join(export_mnt_base, connection_info['data']['name']) self.assertEqual(connection_info['data']['device_path'], device_path) mock_ensure_tree.assert_has_calls([mock.call(export_mnt_base)]) mock_mount.assert_has_calls([mock.call('nfs', export_string, export_mnt_base, [])]) mock_umount.assert_has_calls([mock.call(export_mnt_base)]) mock_rmdir.assert_has_calls([mock.call(export_mnt_base)]) def test_libvirt_nfs_driver_get_config(self): libvirt_driver = nfs.LibvirtNFSVolumeDriver(self.fake_host) export_string = '192.168.1.1:/nfs/share1' export_mnt_base = os.path.join(self.mnt_base, utils.get_hash_str(export_string)) file_path = os.path.join(export_mnt_base, self.name) connection_info = {'data': {'export': export_string, 'name': self.name, 'device_path': file_path}} conf = libvirt_driver.get_config(connection_info, self.disk_info) tree = conf.format_dom() self._assertFileTypeEquals(tree, file_path) self.assertEqual('raw', tree.find('./driver').get('type')) self.assertEqual('native', tree.find('./driver').get('io')) @mock.patch('oslo_utils.fileutils.ensure_tree') @mock.patch('nova.privsep.fs.mount') @mock.patch('os.path.ismount', side_effect=[False, True, False]) @mock.patch('nova.privsep.fs.umount') @mock.patch('nova.privsep.path.rmdir') def test_libvirt_nfs_driver_with_opts(self, mock_rmdir, mock_umount, mock_ismount, mock_mount, mock_ensure_tree): libvirt_driver = nfs.LibvirtNFSVolumeDriver(self.fake_host) export_string = '192.168.1.1:/nfs/share1' options = '-o intr,nfsvers=3' export_mnt_base = os.path.join(self.mnt_base, utils.get_hash_str(export_string)) connection_info = {'data': {'export': export_string, 'name': self.name, 'options': options}} instance = mock.sentinel.instance instance.uuid = uuids.instance libvirt_driver.connect_volume(connection_info, instance) libvirt_driver.disconnect_volume(connection_info, mock.sentinel.instance) mock_ensure_tree.assert_has_calls([mock.call(export_mnt_base)]) mock_mount.assert_has_calls([mock.call('nfs', export_string, export_mnt_base, ['-o', 'intr,nfsvers=3'])]) mock_umount.assert_has_calls([mock.call(export_mnt_base)]) mock_rmdir.assert_has_calls([mock.call(export_mnt_base)]) 
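
# Both connect/disconnect tests above expect the guest-visible device path
# to be derived as <nfs_mount_point_base>/<hash of the export string>/<name>.
# A minimal illustration of that derivation using nova's own helper; the
# function name is hypothetical, for illustration only:
def _example_nfs_device_path(mnt_base, export_string, name):
    # e.g. ('/mnt', '192.168.1.1:/nfs/share1', 'volume-00000001')
    return os.path.join(mnt_base, utils.get_hash_str(export_string), name)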
nova-17.0.1/nova/tests/unit/virt/libvirt/volume/test_fs.py0000666000175000017500000000363113250073126023644 0ustar zuulzuul00000000000000# Copyright 2015 IBM Corp. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os import mock from nova import test from nova import utils from nova.virt.libvirt.volume import fs FAKE_MOUNT_POINT = '/var/lib/nova/fake-mount' FAKE_SHARE = 'fake-share' NORMALIZED_SHARE = FAKE_SHARE + '-normalized' HASHED_SHARE = utils.get_hash_str(NORMALIZED_SHARE) FAKE_DEVICE_NAME = 'fake-device' class FakeFileSystemVolumeDriver(fs.LibvirtBaseFileSystemVolumeDriver): def _get_mount_point_base(self): return FAKE_MOUNT_POINT def _normalize_export(self, export): return NORMALIZED_SHARE class LibvirtBaseFileSystemVolumeDriverTestCase(test.NoDBTestCase): """Tests the basic behavior of the LibvirtBaseFileSystemVolumeDriver""" def setUp(self): super(LibvirtBaseFileSystemVolumeDriverTestCase, self).setUp() self.connection = mock.Mock() self.driver = FakeFileSystemVolumeDriver(self.connection) self.connection_info = { 'data': { 'export': FAKE_SHARE, 'name': FAKE_DEVICE_NAME, } } def test_get_device_path(self): path = self.driver._get_device_path(self.connection_info) expected_path = os.path.join(FAKE_MOUNT_POINT, HASHED_SHARE, FAKE_DEVICE_NAME) self.assertEqual(expected_path, path) nova-17.0.1/nova/tests/unit/virt/libvirt/volume/test_hgst.py0000666000175000017500000000470613250073126024205 0ustar zuulzuul00000000000000# Copyright 2015 HGST # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from os_brick.initiator import connector from nova.tests.unit.virt.libvirt.volume import test_volume from nova.virt.libvirt.volume import hgst # Actual testing of the os_brick HGST driver done in the os_brick testcases # Here we're concerned only with the small API shim that connects Nova # so these will be pretty simple cases. 
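#
# As a rough sketch (not the actual driver source), the shim pattern under
# test looks like this, with all real work delegated to os-brick:
#
#     class LibvirtHGSTVolumeDriver(libvirt_volume.LibvirtVolumeDriver):
#         def __init__(self, host):
#             super(LibvirtHGSTVolumeDriver, self).__init__(host)
#             self.connector = connector.InitiatorConnector.factory(
#                 'HGST', utils.get_root_helper())
#
# which is why the tests below only stub out the connector's
# connect_volume/disconnect_volume methods.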
class LibvirtHGSTVolumeDriverTestCase(test_volume.LibvirtVolumeBaseTestCase): def test_libvirt_hgst_driver_type(self): drvr = hgst.LibvirtHGSTVolumeDriver(self.fake_host) self.assertIsInstance(drvr.connector, connector.HGSTConnector) def test_libvirt_hgst_driver_connect(self): def brick_conn_vol(data): return {'path': '/dev/space01'} drvr = hgst.LibvirtHGSTVolumeDriver(self.fake_host) drvr.connector.connect_volume = brick_conn_vol di = {'path': '/dev/space01', 'name': 'space01'} ci = {'data': di} drvr.connect_volume(ci, mock.sentinel.instance) self.assertEqual('/dev/space01', ci['data']['device_path']) def test_libvirt_hgst_driver_get_config(self): drvr = hgst.LibvirtHGSTVolumeDriver(self.fake_host) di = {'path': '/dev/space01', 'name': 'space01', 'type': 'raw', 'dev': 'vda1', 'bus': 'pci0', 'device_path': '/dev/space01'} ci = {'data': di} conf = drvr.get_config(ci, di) self.assertEqual('block', conf.source_type) self.assertEqual('/dev/space01', conf.source_path) def test_libvirt_hgst_driver_disconnect(self): drvr = hgst.LibvirtHGSTVolumeDriver(self.fake_host) drvr.connector.disconnect_volume = mock.MagicMock() ci = {'data': mock.sentinel.conn_data} drvr.disconnect_volume(ci, mock.sentinel.instance) drvr.connector.disconnect_volume.assert_called_once_with( mock.sentinel.conn_data, None) nova-17.0.1/nova/tests/unit/virt/libvirt/volume/test_scaleio.py0000666000175000017500000000467113250073126024660 0ustar zuulzuul00000000000000# Copyright (c) 2015 EMC Corporation # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
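# Like the HGST tests above, these cover only the thin shim around the
# os-brick ScaleIOConnector. The get_config() assertions below correspond
# to a libvirt disk source of roughly this shape (illustrative only; the
# exact XML is produced by LibvirtConfigGuestDisk):
#
#     <disk type="block" device="disk">
#       <source dev="/dev/vol01"/>
#     </disk>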
import mock from os_brick.initiator import connector from nova.tests.unit.virt.libvirt.volume import test_volume from nova.virt.libvirt.volume import scaleio class LibvirtScaleIOVolumeDriverTestCase( test_volume.LibvirtVolumeBaseTestCase): def test_libvirt_scaleio_driver(self): libvirt_driver = scaleio.LibvirtScaleIOVolumeDriver( self.fake_host) self.assertIsInstance(libvirt_driver.connector, connector.ScaleIOConnector) def test_libvirt_scaleio_driver_connect(self): def brick_conn_vol(data): return {'path': '/dev/vol01'} sio = scaleio.LibvirtScaleIOVolumeDriver(self.fake_host) sio.connector.connect_volume = brick_conn_vol disk_info = {'path': '/dev/vol01', 'name': 'vol01'} conn = {'data': disk_info} sio.connect_volume(conn, mock.sentinel.instance) self.assertEqual('/dev/vol01', conn['data']['device_path']) def test_libvirt_scaleio_driver_get_config(self): sio = scaleio.LibvirtScaleIOVolumeDriver(self.fake_host) disk_info = {'path': '/dev/vol01', 'name': 'vol01', 'type': 'raw', 'dev': 'vda1', 'bus': 'pci0', 'device_path': '/dev/vol01'} conn = {'data': disk_info} conf = sio.get_config(conn, disk_info) self.assertEqual('block', conf.source_type) self.assertEqual('/dev/vol01', conf.source_path) def test_libvirt_scaleio_driver_disconnect(self): sio = scaleio.LibvirtScaleIOVolumeDriver(self.fake_host) sio.connector.disconnect_volume = mock.MagicMock() conn = {'data': mock.sentinel.conn_data} sio.disconnect_volume(conn, mock.sentinel.instance) sio.connector.disconnect_volume.assert_called_once_with( mock.sentinel.conn_data, None) nova-17.0.1/nova/tests/unit/virt/libvirt/volume/test_disco.py0000666000175000017500000000533413250073126024337 0ustar zuulzuul00000000000000# Copyright (c) 2015 Industrial Technology Research Institute. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
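# Two stubbing styles appear below: connect_volume is wrapped with a
# mock.patch.object() context manager, which restores the real attribute on
# exit, while disconnect_volume is replaced by assigning a MagicMock
# directly (which persists on that driver instance). A minimal illustration
# of the difference:
#
#     with mock.patch.object(obj, 'meth', return_value=1):
#         obj.meth()    # returns 1; the original is restored afterwards
#
#     obj.meth = mock.MagicMock(return_value=1)  # sticks until reassigned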
import mock from os_brick.initiator import connector from nova.tests.unit.virt.libvirt.volume import test_volume from nova.virt.libvirt.volume import disco class LibvirtDISCOVolumeDriverTestCase( test_volume.LibvirtVolumeBaseTestCase): def test_libvirt_disco_driver(self): libvirt_driver = disco.LibvirtDISCOVolumeDriver( self.fake_host) self.assertIsInstance(libvirt_driver.connector, connector.DISCOConnector) def test_libvirt_disco_driver_connect(self): dcon = disco.LibvirtDISCOVolumeDriver(self.fake_host) conf = {'server_ip': '127.0.0.1', 'server_port': 9898} disk_info = {'disco_id': '1234567', 'name': 'aDiscoVolume', 'conf': conf} conn = {'data': disk_info} with mock.patch.object(dcon.connector, 'connect_volume', return_value={'path': '/dev/dms1234567'}): dcon.connect_volume(conn, mock.sentinel.instance) self.assertEqual('/dev/dms1234567', conn['data']['device_path']) def test_libvirt_disco_driver_get_config(self): dcon = disco.LibvirtDISCOVolumeDriver(self.fake_host) disk_info = {'path': '/dev/dms1234567', 'name': 'aDiscoVolume', 'type': 'raw', 'dev': 'vda1', 'bus': 'pci0', 'device_path': '/dev/dms1234567'} conn = {'data': disk_info} conf = dcon.get_config(conn, disk_info) self.assertEqual('file', conf.source_type) self.assertEqual('/dev/dms1234567', conf.source_path) self.assertEqual('disco', conf.source_protocol) def test_libvirt_disco_driver_disconnect(self): dcon = disco.LibvirtDISCOVolumeDriver(self.fake_host) dcon.connector.disconnect_volume = mock.MagicMock() conn = {'data': mock.sentinel.conn_data} dcon.disconnect_volume(conn, mock.sentinel.instance) dcon.connector.disconnect_volume.assert_called_once_with( mock.sentinel.conn_data, None) nova-17.0.1/nova/tests/unit/virt/libvirt/volume/test_quobyte.py0000666000175000017500000004730413250073126024731 0ustar zuulzuul00000000000000# Copyright (c) 2015 Quobyte Inc. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Unit tests for the Quobyte volume driver module.""" import os import traceback import mock from oslo_concurrency import processutils from oslo_utils import fileutils import psutil import six from nova import exception as nova_exception from nova import test from nova.tests.unit.virt.libvirt.volume import test_volume from nova import utils from nova.virt.libvirt import utils as libvirt_utils from nova.virt.libvirt.volume import quobyte class QuobyteTestCase(test.NoDBTestCase): """Tests the nova.virt.libvirt.volume.quobyte module utilities.""" TEST_MNT_POINT = mock.sentinel.TEST_MNT_POINT def assertRaisesAndMessageMatches( self, excClass, msg, callableObj, *args, **kwargs): """Ensure that the specified exception was raised. 
""" caught = False try: callableObj(*args, **kwargs) except Exception as exc: caught = True self.assertIsInstance(exc, excClass, 'Wrong exception caught: %s Stacktrace: %s' % (exc, traceback.format_exc())) self.assertIn(msg, six.text_type(exc)) if not caught: self.fail('Expected raised exception but nothing caught.') def get_mock_partitions(self): mypart = mock.Mock() mypart.device = "quobyte@" mypart.mountpoint = self.TEST_MNT_POINT return [mypart] @mock.patch.object(os.path, "exists", return_value=False) @mock.patch.object(fileutils, "ensure_tree") @mock.patch.object(utils, "execute") def test_quobyte_mount_volume_not_systemd(self, mock_execute, mock_ensure_tree, mock_exists): mnt_base = '/mnt' quobyte_volume = '192.168.1.1/volume-00001' export_mnt_base = os.path.join(mnt_base, utils.get_hash_str(quobyte_volume)) quobyte.mount_volume(quobyte_volume, export_mnt_base) mock_ensure_tree.assert_called_once_with(export_mnt_base) expected_commands = [mock.call('mount.quobyte', '--disable-xattrs', quobyte_volume, export_mnt_base) ] mock_execute.assert_has_calls(expected_commands) mock_exists.assert_called_once_with(" /run/systemd/system") @mock.patch.object(os.path, "exists", return_value=True) @mock.patch.object(fileutils, "ensure_tree") @mock.patch.object(utils, "execute") def test_quobyte_mount_volume_systemd(self, mock_execute, mock_ensure_tree, mock_exists): mnt_base = '/mnt' quobyte_volume = '192.168.1.1/volume-00001' export_mnt_base = os.path.join(mnt_base, utils.get_hash_str(quobyte_volume)) quobyte.mount_volume(quobyte_volume, export_mnt_base) mock_ensure_tree.assert_called_once_with(export_mnt_base) expected_commands = [mock.call('systemd-run', '--scope', '--user', 'mount.quobyte', '--disable-xattrs', quobyte_volume, export_mnt_base) ] mock_execute.assert_has_calls(expected_commands) mock_exists.assert_called_once_with(" /run/systemd/system") @mock.patch.object(os.path, "exists", return_value=False) @mock.patch.object(fileutils, "ensure_tree") @mock.patch.object(utils, "execute") def test_quobyte_mount_volume_with_config(self, mock_execute, mock_ensure_tree, mock_exists): mnt_base = '/mnt' quobyte_volume = '192.168.1.1/volume-00001' export_mnt_base = os.path.join(mnt_base, utils.get_hash_str(quobyte_volume)) config_file_dummy = "/etc/quobyte/dummy.conf" quobyte.mount_volume(quobyte_volume, export_mnt_base, config_file_dummy) mock_ensure_tree.assert_called_once_with(export_mnt_base) expected_commands = [mock.call('mount.quobyte', '--disable-xattrs', quobyte_volume, export_mnt_base, '-c', config_file_dummy) ] mock_execute.assert_has_calls(expected_commands) mock_exists.assert_called_once_with(" /run/systemd/system") @mock.patch.object(fileutils, "ensure_tree") @mock.patch.object(utils, "execute", side_effect=(processutils. 
ProcessExecutionError)) def test_quobyte_mount_volume_fails(self, mock_execute, mock_ensure_tree): mnt_base = '/mnt' quobyte_volume = '192.168.1.1/volume-00001' export_mnt_base = os.path.join(mnt_base, utils.get_hash_str(quobyte_volume)) self.assertRaises(processutils.ProcessExecutionError, quobyte.mount_volume, quobyte_volume, export_mnt_base) @mock.patch.object(utils, "execute") def test_quobyte_umount_volume(self, mock_execute): mnt_base = '/mnt' quobyte_volume = '192.168.1.1/volume-00001' export_mnt_base = os.path.join(mnt_base, utils.get_hash_str(quobyte_volume)) quobyte.umount_volume(export_mnt_base) mock_execute.assert_called_once_with('umount.quobyte', export_mnt_base) @mock.patch.object(quobyte.LOG, "error") @mock.patch.object(utils, "execute") def test_quobyte_umount_volume_warns(self, mock_execute, mock_error): mnt_base = '/mnt' quobyte_volume = '192.168.1.1/volume-00001' export_mnt_base = os.path.join(mnt_base, utils.get_hash_str(quobyte_volume)) def exec_side_effect(*cmd, **kwargs): exerror = processutils.ProcessExecutionError( "Device or resource busy") raise exerror mock_execute.side_effect = exec_side_effect quobyte.umount_volume(export_mnt_base) (mock_error. assert_called_once_with("The Quobyte volume at %s is still in use.", export_mnt_base)) @mock.patch.object(quobyte.LOG, "exception") @mock.patch.object(utils, "execute", side_effect=(processutils.ProcessExecutionError)) def test_quobyte_umount_volume_fails(self, mock_execute, mock_exception): mnt_base = '/mnt' quobyte_volume = '192.168.1.1/volume-00001' export_mnt_base = os.path.join(mnt_base, utils.get_hash_str(quobyte_volume)) quobyte.umount_volume(export_mnt_base) (mock_exception. assert_called_once_with("Couldn't unmount " "the Quobyte Volume at %s", export_mnt_base)) @mock.patch.object(psutil, "disk_partitions") @mock.patch.object(os, "stat") def test_validate_volume_all_good(self, stat_mock, part_mock): part_mock.return_value = self.get_mock_partitions() drv = quobyte def statMockCall(*args): if args[0] == self.TEST_MNT_POINT: stat_result = mock.Mock() stat_result.st_size = 0 return stat_result return os.stat(args) stat_mock.side_effect = statMockCall drv.validate_volume(self.TEST_MNT_POINT) stat_mock.assert_called_once_with(self.TEST_MNT_POINT) part_mock.assert_called_once_with(all=True) @mock.patch.object(psutil, "disk_partitions") @mock.patch.object(os, "stat") def test_validate_volume_mount_not_working(self, stat_mock, part_mock): part_mock.return_value = self.get_mock_partitions() drv = quobyte def statMockCall(*args): if args[0] == self.TEST_MNT_POINT: raise nova_exception.InvalidVolume() stat_mock.side_effect = [os.stat, statMockCall] self.assertRaises( excClass=nova_exception.InvalidVolume, callableObj=drv.validate_volume, mount_path=self.TEST_MNT_POINT) stat_mock.assert_called_with(self.TEST_MNT_POINT) part_mock.assert_called_once_with(all=True) @mock.patch.object(psutil, "disk_partitions") def test_validate_volume_no_mtab_entry(self, part_mock): part_mock.return_value = [] # no quobyte@ devices msg = ("No matching Quobyte mount entry for %(mpt)s" " could be found for validation in partition list."
% {'mpt': self.TEST_MNT_POINT}) self.assertRaisesAndMessageMatches( nova_exception.InvalidVolume, msg, quobyte.validate_volume, self.TEST_MNT_POINT) @mock.patch.object(psutil, "disk_partitions") def test_validate_volume_wrong_mount_type(self, part_mock): mypart = mock.Mock() mypart.device = "not-quobyte" mypart.mountpoint = self.TEST_MNT_POINT part_mock.return_value = [mypart] msg = ("The mount %(mpt)s is not a valid" " Quobyte volume according to partition list." % {'mpt': self.TEST_MNT_POINT}) self.assertRaisesAndMessageMatches( nova_exception.InvalidVolume, msg, quobyte.validate_volume, self.TEST_MNT_POINT) part_mock.assert_called_once_with(all=True) @mock.patch.object(os, "stat") @mock.patch.object(psutil, "disk_partitions") def test_validate_volume_stale_mount(self, part_mock, stat_mock): part_mock.return_value = self.get_mock_partitions() def statMockCall(*args): if args[0] == self.TEST_MNT_POINT: stat_result = mock.Mock() stat_result.st_size = 1 return stat_result return os.stat(args) stat_mock.side_effect = statMockCall # As this uses a dir size >0, it raises an exception self.assertRaises( nova_exception.InvalidVolume, quobyte.validate_volume, self.TEST_MNT_POINT) part_mock.assert_called_once_with(all=True) class LibvirtQuobyteVolumeDriverTestCase( test_volume.LibvirtVolumeBaseTestCase): """Tests the LibvirtQuobyteVolumeDriver class.""" @mock.patch.object(quobyte, 'validate_volume') @mock.patch.object(quobyte, 'mount_volume') @mock.patch.object(libvirt_utils, 'is_mounted', return_value=False) def test_libvirt_quobyte_driver_mount(self, mock_is_mounted, mock_mount_volume, mock_validate_volume ): mnt_base = '/mnt' self.flags(quobyte_mount_point_base=mnt_base, group='libvirt') libvirt_driver = quobyte.LibvirtQuobyteVolumeDriver(self.fake_host) export_string = 'quobyte://192.168.1.1/volume-00001' quobyte_volume = '192.168.1.1/volume-00001' export_mnt_base = os.path.join(mnt_base, utils.get_hash_str(quobyte_volume)) file_path = os.path.join(export_mnt_base, self.name) connection_info = {'data': {'export': export_string, 'name': self.name}} libvirt_driver.connect_volume(connection_info, mock.sentinel.instance) conf = libvirt_driver.get_config(connection_info, self.disk_info) tree = conf.format_dom() self._assertFileTypeEquals(tree, file_path) mock_mount_volume.assert_called_once_with(quobyte_volume, export_mnt_base, mock.ANY) mock_validate_volume.assert_called_with(export_mnt_base) @mock.patch.object(quobyte, 'validate_volume') @mock.patch.object(quobyte, 'umount_volume') @mock.patch.object(libvirt_utils, 'is_mounted', return_value=True) def test_libvirt_quobyte_driver_umount(self, mock_is_mounted, mock_umount_volume, mock_validate_volume): mnt_base = '/mnt' self.flags(quobyte_mount_point_base=mnt_base, group='libvirt') libvirt_driver = quobyte.LibvirtQuobyteVolumeDriver(self.fake_host) export_string = 'quobyte://192.168.1.1/volume-00001' quobyte_volume = '192.168.1.1/volume-00001' export_mnt_base = os.path.join(mnt_base, utils.get_hash_str(quobyte_volume)) file_path = os.path.join(export_mnt_base, self.name) connection_info = {'data': {'export': export_string, 'name': self.name}} libvirt_driver.connect_volume(connection_info, mock.sentinel.instance) conf = libvirt_driver.get_config(connection_info, self.disk_info) tree = conf.format_dom() self._assertFileTypeEquals(tree, file_path) libvirt_driver.disconnect_volume(connection_info, mock.sentinel.instance) mock_validate_volume.assert_called_once_with(export_mnt_base) mock_umount_volume.assert_called_once_with(export_mnt_base) 
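    # The next case covers an already-mounted export: is_mounted is patched
    # to return True, so connect_volume() should skip mounting and only
    # validate the existing mount. In sketch form, the driver logic these
    # tests depend on is (paraphrased, not the driver source):
    #
    #     if not libvirt_utils.is_mounted(mnt_base, export):
    #         mount_volume(quobyte_volume, mnt_base, configfile)
    #     validate_volume(mnt_base)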
@mock.patch.object(quobyte, 'validate_volume') @mock.patch.object(quobyte, 'umount_volume') @mock.patch.object(libvirt_utils, 'is_mounted', return_value=True) def test_libvirt_quobyte_driver_already_mounted(self, mock_is_mounted, mock_umount_volume, mock_validate_volume ): mnt_base = '/mnt' self.flags(quobyte_mount_point_base=mnt_base, group='libvirt') libvirt_driver = quobyte.LibvirtQuobyteVolumeDriver(self.fake_host) export_string = 'quobyte://192.168.1.1/volume-00001' quobyte_volume = '192.168.1.1/volume-00001' export_mnt_base = os.path.join(mnt_base, utils.get_hash_str(quobyte_volume)) file_path = os.path.join(export_mnt_base, self.name) connection_info = {'data': {'export': export_string, 'name': self.name}} libvirt_driver.connect_volume(connection_info, mock.sentinel.instance) conf = libvirt_driver.get_config(connection_info, self.disk_info) tree = conf.format_dom() self._assertFileTypeEquals(tree, file_path) libvirt_driver.disconnect_volume(connection_info, mock.sentinel.instance) mock_umount_volume.assert_called_once_with(export_mnt_base) mock_validate_volume.assert_called_once_with(export_mnt_base) @mock.patch.object(quobyte, 'validate_volume') @mock.patch.object(quobyte, 'mount_volume') @mock.patch.object(libvirt_utils, 'is_mounted', return_value=False) def test_libvirt_quobyte_driver_qcow2(self, mock_is_mounted, mock_mount_volume, mock_validate_volume ): mnt_base = '/mnt' self.flags(quobyte_mount_point_base=mnt_base, group='libvirt') libvirt_driver = quobyte.LibvirtQuobyteVolumeDriver(self.fake_host) export_string = 'quobyte://192.168.1.1/volume-00001' name = 'volume-00001' image_format = 'qcow2' quobyte_volume = '192.168.1.1/volume-00001' connection_info = {'data': {'export': export_string, 'name': name, 'format': image_format}} export_mnt_base = os.path.join(mnt_base, utils.get_hash_str(quobyte_volume)) libvirt_driver.connect_volume(connection_info, mock.sentinel.instance) conf = libvirt_driver.get_config(connection_info, self.disk_info) tree = conf.format_dom() self.assertEqual('file', tree.get('type')) self.assertEqual('qcow2', tree.find('./driver').get('type')) (mock_mount_volume. 
assert_called_once_with('192.168.1.1/volume-00001', export_mnt_base, mock.ANY)) mock_validate_volume.assert_called_with(export_mnt_base) libvirt_driver.disconnect_volume(connection_info, mock.sentinel.instance) @mock.patch.object(libvirt_utils, 'is_mounted', return_value=True) def test_libvirt_quobyte_driver_mount_non_quobyte_volume(self, mock_is_mounted): mnt_base = '/mnt' self.flags(quobyte_mount_point_base=mnt_base, group='libvirt') libvirt_driver = quobyte.LibvirtQuobyteVolumeDriver(self.fake_host) export_string = 'quobyte://192.168.1.1/volume-00001' connection_info = {'data': {'export': export_string, 'name': self.name}} def exe_side_effect(*cmd, **kwargs): if cmd == mock.ANY: raise nova_exception.NovaException() with mock.patch.object(quobyte, 'validate_volume') as mock_execute: mock_execute.side_effect = exe_side_effect self.assertRaises(nova_exception.NovaException, libvirt_driver.connect_volume, connection_info, mock.sentinel.instance) def test_libvirt_quobyte_driver_normalize_export_with_protocol(self): mnt_base = '/mnt' self.flags(quobyte_mount_point_base=mnt_base, group='libvirt') libvirt_driver = quobyte.LibvirtQuobyteVolumeDriver(self.fake_host) export_string = 'quobyte://192.168.1.1/volume-00001' self.assertEqual("192.168.1.1/volume-00001", libvirt_driver._normalize_export(export_string)) def test_libvirt_quobyte_driver_normalize_export_without_protocol(self): mnt_base = '/mnt' self.flags(quobyte_mount_point_base=mnt_base, group='libvirt') libvirt_driver = quobyte.LibvirtQuobyteVolumeDriver(self.fake_host) export_string = '192.168.1.1/volume-00001' self.assertEqual("192.168.1.1/volume-00001", libvirt_driver._normalize_export(export_string)) nova-17.0.1/nova/tests/unit/virt/libvirt/volume/test_iscsi.py0000666000175000017500000000640113250073126024344 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
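# The first test below flips CONF.libvirt.volume_use_multipath and checks
# that the setting reaches the os-brick ISCSIConnector. The tests use
# self.flags(), nova's test helper around oslo.config overrides; a minimal
# equivalent using oslo.config directly would be:
#
#     from oslo_config import cfg
#     cfg.CONF.set_override('volume_use_multipath', True, group='libvirt')
#
# with the override reverted automatically on test cleanup when done via
# self.flags().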
import mock from os_brick import exception as os_brick_exception from os_brick.initiator import connector from nova.tests.unit.virt.libvirt.volume import test_volume from nova.virt.libvirt.volume import iscsi class LibvirtISCSIVolumeDriverTestCase( test_volume.LibvirtISCSIVolumeBaseTestCase): def test_libvirt_iscsi_driver(self, transport=None): for multipath in (True, False): self.flags(volume_use_multipath=multipath, group='libvirt') libvirt_driver = iscsi.LibvirtISCSIVolumeDriver(self.fake_host) self.assertIsInstance(libvirt_driver.connector, connector.ISCSIConnector) if hasattr(libvirt_driver.connector, 'use_multipath'): self.assertEqual( multipath, libvirt_driver.connector.use_multipath) def test_libvirt_iscsi_driver_get_config(self): libvirt_driver = iscsi.LibvirtISCSIVolumeDriver(self.fake_host) device_path = '/dev/fake-dev' connection_info = {'data': {'device_path': device_path}} conf = libvirt_driver.get_config(connection_info, self.disk_info) tree = conf.format_dom() self.assertEqual('block', tree.get('type')) self.assertEqual(device_path, tree.find('./source').get('dev')) self.assertEqual('raw', tree.find('./driver').get('type')) self.assertEqual('native', tree.find('./driver').get('io')) @mock.patch.object(iscsi.LOG, 'warning') def test_libvirt_iscsi_driver_disconnect_volume_with_devicenotfound(self, mock_LOG_warning): device_path = '/dev/fake-dev' connection_info = {'data': {'device_path': device_path}} libvirt_driver = iscsi.LibvirtISCSIVolumeDriver(self.fake_host) libvirt_driver.connector.disconnect_volume = mock.MagicMock( side_effect=os_brick_exception.VolumeDeviceNotFound( device=device_path)) libvirt_driver.disconnect_volume(connection_info, mock.sentinel.instance) msg = mock_LOG_warning.call_args_list[0] self.assertIn('Ignoring VolumeDeviceNotFound', msg[0][0]) def test_extend_volume(self): device_path = '/dev/fake-dev' connection_info = {'data': {'device_path': device_path}} libvirt_driver = iscsi.LibvirtISCSIVolumeDriver(self.fake_host) libvirt_driver.connector.extend_volume = mock.MagicMock(return_value=1) new_size = libvirt_driver.extend_volume(connection_info, mock.sentinel.instance) self.assertEqual(1, new_size) libvirt_driver.connector.extend_volume.assert_called_once_with( connection_info['data']) nova-17.0.1/nova/tests/unit/virt/libvirt/volume/test_gpfs.py0000666000175000017500000000242613250073126024174 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
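# The single GPFS test below inspects the generated <disk> element; its
# assertions correspond to XML of roughly this shape (only the asserted
# attributes are guaranteed; the rest is illustrative):
#
#     <disk type="file" device="disk">
#       <source file="/gpfs/foo"/>
#       <serial>fake_serial</serial>
#     </disk>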
from nova.tests.unit.virt.libvirt.volume import test_volume from nova.virt.libvirt.volume import gpfs class LibvirtGPFSVolumeDriverTestCase(test_volume.LibvirtVolumeBaseTestCase): def test_libvirt_gpfs_driver_get_config(self): libvirt_driver = gpfs.LibvirtGPFSVolumeDriver(self.fake_host) connection_info = { 'driver_volume_type': 'gpfs', 'data': { 'device_path': '/gpfs/foo', }, 'serial': 'fake_serial', } conf = libvirt_driver.get_config(connection_info, self.disk_info) tree = conf.format_dom() self.assertEqual('file', tree.get('type')) self.assertEqual('fake_serial', tree.find('./serial').text) nova-17.0.1/nova/tests/unit/virt/libvirt/volume/__init__.py0000666000175000017500000000000013250073126023717 0ustar zuulzuul00000000000000nova-17.0.1/nova/tests/unit/virt/libvirt/volume/test_mount.py0000666000175000017500000006124413250073136024403 0ustar zuulzuul00000000000000# Copyright 2017 Red Hat, Inc. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os.path import threading import time import eventlet import fixtures import mock from oslo_concurrency import processutils from nova import exception from nova import test from nova.tests import uuidsentinel as uuids from nova.virt.libvirt import config as libvirt_config from nova.virt.libvirt import guest as libvirt_guest from nova.virt.libvirt import host as libvirt_host from nova.virt.libvirt.volume import mount # We wait on events in a few cases. In normal execution the wait period should # be in the order of fractions of a millisecond. However, if we've hit a bug we # might deadlock and never return. To be nice to our test environment, we cut # this short at MAX_WAIT seconds. This should be large enough that normal # jitter won't trigger it, but not so long that it's annoying to wait for. MAX_WAIT = 2 class TestThreadController(object): """Helper class for executing a test thread incrementally by waiting at named waitpoints. def test(ctl): things() ctl.waitpoint('foo') more_things() ctl.waitpoint('bar') final_things() ctl = TestThreadController(test) ctl.runto('foo') assert(things) ctl.runto('bar') assert(more_things) ctl.finish() assert(final_things) This gets more interesting when the waitpoints are mocked into non-test code. """ # A map of threads to controllers all_threads = {} def __init__(self, fn): """Create a TestThreadController. :param fn: A test function which takes a TestThreadController as its only argument """ # All updates to wait_at and waiting are guarded by wait_lock self.wait_lock = threading.Condition() # The name of the next wait point self.wait_at = None # True when waiting at a waitpoint self.waiting = False # Incremented every time we continue from a waitpoint self.epoch = 1 # The last epoch we waited at self.last_epoch = 0 self.start_event = eventlet.event.Event() self.running = False self.complete = False # We must not execute fn() until the thread has been registered in # all_threads. 
eventlet doesn't give us an API to do this directly, # so we defer with an Event def deferred_start(): self.start_event.wait() fn() with self.wait_lock: self.complete = True self.wait_lock.notify_all() self.thread = eventlet.greenthread.spawn(deferred_start) self.all_threads[self.thread] = self @classmethod def current(cls): return cls.all_threads.get(eventlet.greenthread.getcurrent()) def _ensure_running(self): if not self.running: self.running = True self.start_event.send() def waitpoint(self, name): """Called by the test thread. Wait at a waitpoint called name""" with self.wait_lock: wait_since = time.time() while name == self.wait_at: self.waiting = True self.wait_lock.notify_all() self.wait_lock.wait(1) assert(time.time() - wait_since < MAX_WAIT) self.epoch += 1 self.waiting = False self.wait_lock.notify_all() def runto(self, name): """Called by the control thread. Cause the test thread to run until reaching a waitpoint called name. When runto() exits, the test thread is guaranteed to have reached this waitpoint. """ with self.wait_lock: # Set a new wait point self.wait_at = name self.wait_lock.notify_all() # We deliberately don't do this first to avoid a race the first # time we call runto() self._ensure_running() # Wait until the test thread is at the wait point wait_since = time.time() while self.epoch == self.last_epoch or not self.waiting: self.wait_lock.wait(1) assert(time.time() - wait_since < MAX_WAIT) self.last_epoch = self.epoch def start(self): """Called by the control thread. Cause the test thread to start running, but to not wait for it to complete. """ self._ensure_running() def finish(self): """Called by the control thread. Cause the test thread to run to completion. When finish() exits, the test thread is guaranteed to have completed. 
""" self._ensure_running() wait_since = time.time() with self.wait_lock: self.wait_at = None self.wait_lock.notify_all() while not self.complete: self.wait_lock.wait(1) assert(time.time() - wait_since < MAX_WAIT) self.thread.wait() class HostMountStateTestCase(test.NoDBTestCase): def setUp(self): super(HostMountStateTestCase, self).setUp() @mock.patch('os.path.ismount', side_effect=[False, True, True, True, True]) def test_init(self, mock_ismount): # Test that we initialise the state of MountManager correctly at # startup def fake_disk(disk): libvirt_disk = libvirt_config.LibvirtConfigGuestDisk() libvirt_disk.source_type = disk[0] libvirt_disk.source_path = os.path.join(*disk[1]) return libvirt_disk def mock_guest(uuid, disks): guest = mock.create_autospec(libvirt_guest.Guest) guest.uuid = uuid guest.get_all_disks.return_value = map(fake_disk, disks) return guest local_dir = '/local' mountpoint_a = '/mnt/a' mountpoint_b = '/mnt/b' guests = map(mock_guest, [uuids.instance_a, uuids.instance_b], [ # Local file root disk and a volume on each of mountpoints a and b [ ('file', (local_dir, uuids.instance_a, 'disk')), ('file', (mountpoint_a, 'vola1')), ('file', (mountpoint_b, 'volb1')), ], # Local LVM root disk and a volume on each of mountpoints a and b [ ('block', ('/dev', 'vg', uuids.instance_b + '_disk')), ('file', (mountpoint_a, 'vola2')), ('file', (mountpoint_b, 'volb2')), ] ]) host = mock.create_autospec(libvirt_host.Host) host.list_guests.return_value = guests m = mount._HostMountState(host, 0) self.assertEqual([mountpoint_a, mountpoint_b], sorted(m.mountpoints.keys())) self.assertSetEqual(set([('vola1', uuids.instance_a), ('vola2', uuids.instance_b)]), m.mountpoints[mountpoint_a].attachments) self.assertSetEqual(set([('volb1', uuids.instance_a), ('volb2', uuids.instance_b)]), m.mountpoints[mountpoint_b].attachments) @staticmethod def _get_clean_hostmountstate(): # list_guests returns no guests: _HostMountState initial state is # clean. host = mock.create_autospec(libvirt_host.Host) host.list_guests.return_value = [] return mount._HostMountState(host, 0) def _sentinel_mount(self, m, vol, mountpoint=mock.sentinel.mountpoint, instance=None): if instance is None: instance = mock.sentinel.instance instance.uuid = uuids.instance m.mount(mock.sentinel.fstype, mock.sentinel.export, vol, mountpoint, instance, [mock.sentinel.option1, mock.sentinel.option2]) def _sentinel_umount(self, m, vol, mountpoint=mock.sentinel.mountpoint, instance=mock.sentinel.instance): m.umount(vol, mountpoint, instance) @mock.patch('oslo_utils.fileutils.ensure_tree') @mock.patch('nova.privsep.fs.mount') @mock.patch('nova.privsep.fs.umount') @mock.patch('os.path.ismount', side_effect=[False, True, True, True]) def test_mount_umount(self, mock_ismount, mock_umount, mock_mount, mock_ensure_tree): # Mount 2 different volumes from the same export. Test that we only # mount and umount once. m = self._get_clean_hostmountstate() # Mount vol_a from export self._sentinel_mount(m, mock.sentinel.vol_a) mock_ensure_tree.assert_has_calls([ mock.call(mock.sentinel.mountpoint)]) mock_mount.assert_has_calls([ mock.call(mock.sentinel.fstype, mock.sentinel.export, mock.sentinel.mountpoint, [mock.sentinel.option1, mock.sentinel.option2])]) # Mount vol_b from export. 
We shouldn't have mounted again self._sentinel_mount(m, mock.sentinel.vol_b) mock_ensure_tree.assert_has_calls([ mock.call(mock.sentinel.mountpoint)]) mock_mount.assert_has_calls([ mock.call(mock.sentinel.fstype, mock.sentinel.export, mock.sentinel.mountpoint, [mock.sentinel.option1, mock.sentinel.option2])]) # Unmount vol_a. We shouldn't have unmounted self._sentinel_umount(m, mock.sentinel.vol_a) mock_ensure_tree.assert_has_calls([ mock.call(mock.sentinel.mountpoint)]) mock_mount.assert_has_calls([ mock.call(mock.sentinel.fstype, mock.sentinel.export, mock.sentinel.mountpoint, [mock.sentinel.option1, mock.sentinel.option2])]) # Unmount vol_b. We should have umounted. self._sentinel_umount(m, mock.sentinel.vol_b) mock_ensure_tree.assert_has_calls([ mock.call(mock.sentinel.mountpoint)]) mock_mount.assert_has_calls([ mock.call(mock.sentinel.fstype, mock.sentinel.export, mock.sentinel.mountpoint, [mock.sentinel.option1, mock.sentinel.option2])]) mock_umount.assert_has_calls([ mock.call(mock.sentinel.mountpoint)]) @mock.patch('oslo_utils.fileutils.ensure_tree') @mock.patch('nova.privsep.fs.mount') @mock.patch('nova.privsep.fs.umount') @mock.patch('os.path.ismount', side_effect=[False, True, True, True]) @mock.patch('nova.privsep.path.rmdir') def test_mount_umount_multi_attach(self, mock_rmdir, mock_ismount, mock_umount, mock_mount, mock_ensure_tree): # Mount a volume from a single export for 2 different instances. Test # that we only mount and umount once. m = self._get_clean_hostmountstate() instance_a = mock.sentinel.instance_a instance_a.uuid = uuids.instance_a instance_b = mock.sentinel.instance_b instance_b.uuid = uuids.instance_b # Mount vol_a for instance_a self._sentinel_mount(m, mock.sentinel.vol_a, instance=instance_a) mock_mount.assert_has_calls([ mock.call(mock.sentinel.fstype, mock.sentinel.export, mock.sentinel.mountpoint, [mock.sentinel.option1, mock.sentinel.option2])]) mock_mount.reset_mock() # Mount vol_a for instance_b. We shouldn't have mounted again self._sentinel_mount(m, mock.sentinel.vol_a, instance=instance_b) mock_mount.assert_not_called() # Unmount vol_a for instance_a. We shouldn't have unmounted self._sentinel_umount(m, mock.sentinel.vol_a, instance=instance_a) mock_umount.assert_not_called() # Unmount vol_a for instance_b. We should have umounted. self._sentinel_umount(m, mock.sentinel.vol_a, instance=instance_b) mock_umount.assert_has_calls([mock.call(mock.sentinel.mountpoint)]) @mock.patch('oslo_utils.fileutils.ensure_tree') @mock.patch('nova.privsep.fs.mount') @mock.patch('nova.privsep.fs.umount') @mock.patch('os.path.ismount', side_effect=[False, False, True, False, False, True]) @mock.patch('nova.privsep.path.rmdir') def test_mount_concurrent(self, mock_rmdir, mock_ismount, mock_umount, mock_mount, mock_ensure_tree): # This is 2 tests in 1, because the first test is the precondition # for the second. # The first test is that if 2 threads call mount simultaneously, # only one of them will call mount # The second test is that we correctly handle the case where we # delete a lock after umount. During the umount of the first test, # which will delete the lock when it completes, we start 2 more # threads which both call mount. These threads are holding a lock # which is about to be deleted. We test that they still don't race, # and only one of them calls mount. 
m = self._get_clean_hostmountstate() def mount_a(): # Mount vol_a from export self._sentinel_mount(m, mock.sentinel.vol_a) TestThreadController.current().waitpoint('mounted') self._sentinel_umount(m, mock.sentinel.vol_a) def mount_b(): # Mount vol_b from export self._sentinel_mount(m, mock.sentinel.vol_b) self._sentinel_umount(m, mock.sentinel.vol_b) def mount_c(): self._sentinel_mount(m, mock.sentinel.vol_c) def mount_d(): self._sentinel_mount(m, mock.sentinel.vol_d) ctl_a = TestThreadController(mount_a) ctl_b = TestThreadController(mount_b) ctl_c = TestThreadController(mount_c) ctl_d = TestThreadController(mount_d) def trap_mount(*args, **kwargs): # Conditionally wait at a waitpoint named after the command # we're executing TestThreadController.current().waitpoint('mount') def trap_umount(*args, **kwargs): # Conditionally wait at a waitpoint named after the command # we're executing TestThreadController.current().waitpoint('umount') mock_mount.side_effect = trap_mount mock_umount.side_effect = trap_umount # Run the first thread until it's blocked while calling mount ctl_a.runto('mount') mock_ensure_tree.assert_has_calls([ mock.call(mock.sentinel.mountpoint)]) mock_mount.assert_has_calls([ mock.call(mock.sentinel.fstype, mock.sentinel.export, mock.sentinel.mountpoint, [mock.sentinel.option1, mock.sentinel.option2])]) # Start the second mount, and ensure it's got plenty of opportunity # to race. ctl_b.start() time.sleep(0.01) mock_ensure_tree.assert_has_calls([ mock.call(mock.sentinel.mountpoint)]) mock_mount.assert_has_calls([ mock.call(mock.sentinel.fstype, mock.sentinel.export, mock.sentinel.mountpoint, [mock.sentinel.option1, mock.sentinel.option2])]) mock_umount.assert_not_called() # Allow ctl_a to complete its mount ctl_a.runto('mounted') mock_ensure_tree.assert_has_calls([ mock.call(mock.sentinel.mountpoint)]) mock_mount.assert_has_calls([ mock.call(mock.sentinel.fstype, mock.sentinel.export, mock.sentinel.mountpoint, [mock.sentinel.option1, mock.sentinel.option2])]) mock_umount.assert_not_called() # Allow ctl_b to finish. We should not have done a umount ctl_b.finish() mock_ensure_tree.assert_has_calls([ mock.call(mock.sentinel.mountpoint)]) mock_mount.assert_has_calls([ mock.call(mock.sentinel.fstype, mock.sentinel.export, mock.sentinel.mountpoint, [mock.sentinel.option1, mock.sentinel.option2])]) mock_umount.assert_not_called() # Allow ctl_a to start umounting. 
We haven't executed rmdir yet, # because we've blocked during umount ctl_a.runto('umount') mock_ensure_tree.assert_has_calls([ mock.call(mock.sentinel.mountpoint)]) mock_mount.assert_has_calls([ mock.call(mock.sentinel.fstype, mock.sentinel.export, mock.sentinel.mountpoint, [mock.sentinel.option1, mock.sentinel.option2])]) mock_umount.assert_has_calls([mock.call(mock.sentinel.mountpoint)]) mock_rmdir.assert_not_called() # While ctl_a is umounting, simultaneously start both ctl_c and # ctl_d, and ensure they have an opportunity to race ctl_c.start() ctl_d.start() time.sleep(0.01) # Allow a, c, and d to complete for ctl in (ctl_a, ctl_c, ctl_d): ctl.finish() # We should have completed the previous umount, then remounted # exactly once mock_ensure_tree.assert_has_calls([ mock.call(mock.sentinel.mountpoint)]) mock_mount.assert_has_calls([ mock.call(mock.sentinel.fstype, mock.sentinel.export, mock.sentinel.mountpoint, [mock.sentinel.option1, mock.sentinel.option2]), mock.call(mock.sentinel.fstype, mock.sentinel.export, mock.sentinel.mountpoint, [mock.sentinel.option1, mock.sentinel.option2])]) mock_umount.assert_has_calls([mock.call(mock.sentinel.mountpoint)]) @mock.patch('oslo_utils.fileutils.ensure_tree') @mock.patch('nova.privsep.fs.mount') @mock.patch('nova.privsep.fs.umount') @mock.patch('os.path.ismount', side_effect=[False, False, True, True, True, False]) @mock.patch('nova.privsep.path.rmdir') def test_mount_concurrent_no_interfere(self, mock_rmdir, mock_ismount, mock_umount, mock_mount, mock_ensure_tree): # Test that concurrent calls to mount volumes in different exports # run concurrently m = self._get_clean_hostmountstate() def mount_a(): # Mount vol on mountpoint a self._sentinel_mount(m, mock.sentinel.vol, mock.sentinel.mountpoint_a) TestThreadController.current().waitpoint('mounted') self._sentinel_umount(m, mock.sentinel.vol, mock.sentinel.mountpoint_a) def mount_b(): # Mount vol on mountpoint b self._sentinel_mount(m, mock.sentinel.vol, mock.sentinel.mountpoint_b) self._sentinel_umount(m, mock.sentinel.vol, mock.sentinel.mountpoint_b) ctl_a = TestThreadController(mount_a) ctl_b = TestThreadController(mount_b) ctl_a.runto('mounted') mock_mount.assert_has_calls([ mock.call(mock.sentinel.fstype, mock.sentinel.export, mock.sentinel.mountpoint_a, [mock.sentinel.option1, mock.sentinel.option2])]) mock_mount.reset_mock() ctl_b.finish() mock_mount.assert_has_calls([ mock.call(mock.sentinel.fstype, mock.sentinel.export, mock.sentinel.mountpoint_b, [mock.sentinel.option1, mock.sentinel.option2])]) mock_umount.assert_has_calls([mock.call(mock.sentinel.mountpoint_b)]) mock_umount.reset_mock() ctl_a.finish() mock_umount.assert_has_calls([mock.call(mock.sentinel.mountpoint_a)]) @mock.patch('oslo_utils.fileutils.ensure_tree') @mock.patch('nova.privsep.fs.mount') @mock.patch('nova.privsep.fs.umount', side_effect=processutils.ProcessExecutionError()) @mock.patch('os.path.ismount', side_effect=[False, True, True, True, False]) @mock.patch('nova.privsep.path.rmdir') def test_mount_after_failed_umount(self, mock_rmdir, mock_ismount, mock_umount, mock_mount, mock_ensure_tree): # Test that MountManager correctly tracks state when umount fails. # Test that when umount fails a subsequent mount doesn't try to # remount it. 
m = self._get_clean_hostmountstate() # Mount vol_a self._sentinel_mount(m, mock.sentinel.vol_a) mock_mount.assert_has_calls([ mock.call(mock.sentinel.fstype, mock.sentinel.export, mock.sentinel.mountpoint, [mock.sentinel.option1, mock.sentinel.option2])]) mock_mount.reset_mock() # Umount vol_a. The umount command will fail. self._sentinel_umount(m, mock.sentinel.vol_a) mock_umount.assert_has_calls([mock.call(mock.sentinel.mountpoint)]) # We should not have called rmdir, because umount failed mock_rmdir.assert_not_called() # Mount vol_a again. We should not have called mount, because umount # failed. self._sentinel_mount(m, mock.sentinel.vol_a) mock_mount.assert_not_called() # Prevent future failure of umount mock_umount.side_effect = None # Umount vol_a successfully self._sentinel_umount(m, mock.sentinel.vol_a) mock_umount.assert_has_calls([mock.call(mock.sentinel.mountpoint)]) @mock.patch.object(mount.LOG, 'error') @mock.patch('oslo_utils.fileutils.ensure_tree') @mock.patch('nova.privsep.fs.mount') @mock.patch('os.path.ismount') @mock.patch('nova.privsep.fs.umount') def test_umount_log_failure(self, mock_umount, mock_ismount, mock_mount, mock_ensure_tree, mock_LOG_error): mock_umount.side_effect = mount.processutils.ProcessExecutionError( None, None, None, 'umount', 'umount: device is busy.') mock_ismount.side_effect = [False, True, True] m = self._get_clean_hostmountstate() self._sentinel_mount(m, mock.sentinel.vol_a) self._sentinel_umount(m, mock.sentinel.vol_a) mock_LOG_error.assert_called() class MountManagerTestCase(test.NoDBTestCase): class FakeHostMountState(object): def __init__(self, host, generation): self.host = host self.generation = generation ctl = TestThreadController.current() if ctl is not None: ctl.waitpoint('init') def setUp(self): super(MountManagerTestCase, self).setUp() self.useFixture(fixtures.MonkeyPatch( 'nova.virt.libvirt.volume.mount._HostMountState', self.FakeHostMountState)) self.m = mount.get_manager() self.m._reset_state() def _get_state(self): with self.m.get_state() as state: return state def test_host_up_down(self): self.m.host_up(mock.sentinel.host) state = self._get_state() self.assertEqual(state.host, mock.sentinel.host) self.assertEqual(state.generation, 0) self.m.host_down() self.assertRaises(exception.HypervisorUnavailable, self._get_state) def test_host_up_waits_for_completion(self): self.m.host_up(mock.sentinel.host) def txn(): with self.m.get_state(): TestThreadController.current().waitpoint('running') # Start a thread which blocks holding a state object ctl = TestThreadController(txn) ctl.runto('running') # Host goes down self.m.host_down() # Call host_up in a separate thread because it will block, and give # it plenty of time to race host_up = eventlet.greenthread.spawn(self.m.host_up, mock.sentinel.host) time.sleep(0.01) # Assert that we haven't instantiated a new state while there's an # ongoing operation from the previous state self.assertRaises(exception.HypervisorUnavailable, self._get_state) # Allow the previous ongoing operation and host_up to complete ctl.finish() host_up.wait() # Assert that we've got a new state generation state = self._get_state() self.assertEqual(1, state.generation) nova-17.0.1/nova/tests/unit/virt/libvirt/volume/test_smbfs.py0000666000175000017500000001226613250073126024352 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. 
You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import os import mock from nova.tests.unit.virt.libvirt.volume import test_volume from nova import utils from nova.virt.libvirt import utils as libvirt_utils from nova.virt.libvirt.volume import smbfs class LibvirtSMBFSVolumeDriverTestCase(test_volume.LibvirtVolumeBaseTestCase): """Tests the libvirt SMBFS volume driver.""" def setUp(self): super(LibvirtSMBFSVolumeDriverTestCase, self).setUp() self.mnt_base = '/mnt' self.flags(smbfs_mount_point_base=self.mnt_base, group='libvirt') @mock.patch.object(libvirt_utils, 'is_mounted') @mock.patch('oslo_utils.fileutils.ensure_tree') @mock.patch('nova.privsep.fs.mount') @mock.patch('nova.privsep.fs.umount') def test_libvirt_smbfs_driver(self, mock_umount, mock_mount, mock_ensure_tree, mock_is_mounted): mock_is_mounted.return_value = False libvirt_driver = smbfs.LibvirtSMBFSVolumeDriver(self.fake_host) export_string = '//192.168.1.1/volumes' export_mnt_base = os.path.join(self.mnt_base, utils.get_hash_str(export_string)) connection_info = {'data': {'export': export_string, 'name': self.name, 'options': None}} libvirt_driver.connect_volume(connection_info, mock.sentinel.instance) libvirt_driver.disconnect_volume(connection_info, mock.sentinel.instance) mock_ensure_tree.assert_has_calls([mock.call(export_mnt_base)]) mock_mount.assert_has_calls( [mock.call('cifs', export_string, export_mnt_base, ['-o', 'username=guest'])]) mock_umount.assert_has_calls([mock.call(export_mnt_base)]) @mock.patch.object(libvirt_utils, 'is_mounted', return_value=True) @mock.patch('nova.privsep.fs.umount') def test_libvirt_smbfs_driver_already_mounted(self, mock_umount, mock_is_mounted): libvirt_driver = smbfs.LibvirtSMBFSVolumeDriver(self.fake_host) export_string = '//192.168.1.1/volumes' export_mnt_base = os.path.join(self.mnt_base, utils.get_hash_str(export_string)) connection_info = {'data': {'export': export_string, 'name': self.name}} libvirt_driver.connect_volume(connection_info, mock.sentinel.instance) libvirt_driver.disconnect_volume(connection_info, mock.sentinel.instance) mock_umount.assert_has_calls([mock.call(export_mnt_base)]) def test_libvirt_smbfs_driver_get_config(self): libvirt_driver = smbfs.LibvirtSMBFSVolumeDriver(self.fake_host) export_string = '//192.168.1.1/volumes' export_mnt_base = os.path.join(self.mnt_base, utils.get_hash_str(export_string)) file_path = os.path.join(export_mnt_base, self.name) connection_info = {'data': {'export': export_string, 'name': self.name, 'device_path': file_path}} conf = libvirt_driver.get_config(connection_info, self.disk_info) tree = conf.format_dom() self._assertFileTypeEquals(tree, file_path) @mock.patch.object(libvirt_utils, 'is_mounted') @mock.patch('oslo_utils.fileutils.ensure_tree') @mock.patch('nova.privsep.fs.mount') @mock.patch('nova.privsep.fs.umount') def test_libvirt_smbfs_driver_with_opts(self, mock_umount, mock_mount, mock_ensure_tree, mock_is_mounted): mock_is_mounted.return_value = False libvirt_driver = smbfs.LibvirtSMBFSVolumeDriver(self.fake_host) export_string = '//192.168.1.1/volumes' options = '-o user=guest,uid=107,gid=105' export_mnt_base = os.path.join(self.mnt_base, 
utils.get_hash_str(export_string)) connection_info = {'data': {'export': export_string, 'name': self.name, 'options': options}} libvirt_driver.connect_volume(connection_info, mock.sentinel.instance) libvirt_driver.disconnect_volume(connection_info, mock.sentinel.instance) mock_ensure_tree.assert_has_calls([mock.call(export_mnt_base)]) mock_mount.assert_has_calls( [mock.call('cifs', export_string, export_mnt_base, ['-o', 'user=guest,uid=107,gid=105'])]) mock_umount.assert_has_calls([mock.call(export_mnt_base)]) nova-17.0.1/nova/tests/unit/virt/libvirt/volume/test_net.py0000666000175000017500000002676113250073126024033 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock import nova.conf from nova.tests.unit.virt.libvirt.volume import test_volume from nova.virt.libvirt import host from nova.virt.libvirt.volume import net CONF = nova.conf.CONF class LibvirtNetVolumeDriverTestCase( test_volume.LibvirtISCSIVolumeBaseTestCase): """Tests the libvirt network volume driver.""" def _assertNetworkAndProtocolEquals(self, tree): self.assertEqual('network', tree.get('type')) self.assertEqual('rbd', tree.find('./source').get('protocol')) rbd_name = '%s/%s' % ('rbd', self.name) self.assertEqual(rbd_name, tree.find('./source').get('name')) def _assertISCSINetworkAndProtocolEquals(self, tree): self.assertEqual('network', tree.get('type')) self.assertEqual('iscsi', tree.find('./source').get('protocol')) iscsi_name = '%s/%s' % (self.iqn, self.vol['id']) self.assertEqual(iscsi_name, tree.find('./source').get('name')) def sheepdog_connection(self, volume): return { 'driver_volume_type': 'sheepdog', 'data': { 'name': volume['name'] } } def test_libvirt_sheepdog_driver(self): libvirt_driver = net.LibvirtNetVolumeDriver(self.fake_host) connection_info = self.sheepdog_connection(self.vol) conf = libvirt_driver.get_config(connection_info, self.disk_info) tree = conf.format_dom() self.assertEqual('network', tree.get('type')) self.assertEqual('sheepdog', tree.find('./source').get('protocol')) self.assertEqual(self.name, tree.find('./source').get('name')) libvirt_driver.disconnect_volume(connection_info, mock.sentinel.instance) def rbd_connection(self, volume): return { 'driver_volume_type': 'rbd', 'data': { 'name': '%s/%s' % ('rbd', volume['name']), 'auth_enabled': CONF.libvirt.rbd_user is not None, 'auth_username': CONF.libvirt.rbd_user, 'secret_type': 'ceph', 'secret_uuid': CONF.libvirt.rbd_secret_uuid, 'qos_specs': { 'total_bytes_sec': '1048576', 'read_iops_sec': '500', } } } def test_libvirt_rbd_driver(self): libvirt_driver = net.LibvirtNetVolumeDriver(self.fake_host) connection_info = self.rbd_connection(self.vol) conf = libvirt_driver.get_config(connection_info, self.disk_info) tree = conf.format_dom() self._assertNetworkAndProtocolEquals(tree) self.assertIsNone(tree.find('./source/auth')) self.assertEqual('1048576', tree.find('./iotune/total_bytes_sec').text) self.assertEqual('500', tree.find('./iotune/read_iops_sec').text) libvirt_driver.disconnect_volume(connection_info, 
mock.sentinel.instance) def test_libvirt_rbd_driver_hosts(self): libvirt_driver = net.LibvirtNetVolumeDriver(self.fake_host) connection_info = self.rbd_connection(self.vol) hosts = ['example.com', '1.2.3.4', '::1'] ports = [None, '6790', '6791'] connection_info['data']['hosts'] = hosts connection_info['data']['ports'] = ports conf = libvirt_driver.get_config(connection_info, self.disk_info) tree = conf.format_dom() self._assertNetworkAndProtocolEquals(tree) self.assertIsNone(tree.find('./source/auth')) found_hosts = tree.findall('./source/host') self.assertEqual(hosts, [host.get('name') for host in found_hosts]) self.assertEqual(ports, [host.get('port') for host in found_hosts]) libvirt_driver.disconnect_volume(connection_info, mock.sentinel.instance) def test_libvirt_rbd_driver_auth_enabled(self): libvirt_driver = net.LibvirtNetVolumeDriver(self.fake_host) connection_info = self.rbd_connection(self.vol) secret_type = 'ceph' connection_info['data']['auth_enabled'] = True connection_info['data']['auth_username'] = self.user connection_info['data']['secret_type'] = secret_type connection_info['data']['secret_uuid'] = self.uuid conf = libvirt_driver.get_config(connection_info, self.disk_info) tree = conf.format_dom() self._assertNetworkAndProtocolEquals(tree) self.assertEqual(self.user, tree.find('./auth').get('username')) self.assertEqual(secret_type, tree.find('./auth/secret').get('type')) self.assertEqual(self.uuid, tree.find('./auth/secret').get('uuid')) libvirt_driver.disconnect_volume(connection_info, mock.sentinel.instance) def test_libvirt_rbd_driver_auth_enabled_flags(self): # The values from the cinder connection_info take precedence over # nova.conf values. libvirt_driver = net.LibvirtNetVolumeDriver(self.fake_host) connection_info = self.rbd_connection(self.vol) secret_type = 'ceph' connection_info['data']['auth_enabled'] = True connection_info['data']['auth_username'] = self.user connection_info['data']['secret_type'] = secret_type connection_info['data']['secret_uuid'] = self.uuid flags_uuid = '37152720-1785-11e2-a740-af0c1d8b8e4b' flags_user = 'bar' self.flags(rbd_user=flags_user, rbd_secret_uuid=flags_uuid, group='libvirt') conf = libvirt_driver.get_config(connection_info, self.disk_info) tree = conf.format_dom() self._assertNetworkAndProtocolEquals(tree) self.assertEqual(self.user, tree.find('./auth').get('username')) self.assertEqual(secret_type, tree.find('./auth/secret').get('type')) self.assertEqual(self.uuid, tree.find('./auth/secret').get('uuid')) libvirt_driver.disconnect_volume(connection_info, mock.sentinel.instance) def test_libvirt_rbd_driver_auth_enabled_flags_secret_uuid_fallback(self): """The values from the cinder connection_info take precedence over nova.conf values, unless it's old connection data where the secret_uuid wasn't set on the cinder side for the original connection which is now persisted in the nova.block_device_mappings.connection_info column and used here. In this case we fallback to use the local config for secret_uuid. """ libvirt_driver = net.LibvirtNetVolumeDriver(self.fake_host) connection_info = self.rbd_connection(self.vol) secret_type = 'ceph' connection_info['data']['auth_enabled'] = True connection_info['data']['auth_username'] = self.user connection_info['data']['secret_type'] = secret_type # Fake out cinder not setting the secret_uuid in the old connection. 
connection_info['data']['secret_uuid'] = None flags_uuid = '37152720-1785-11e2-a740-af0c1d8b8e4b' flags_user = 'bar' self.flags(rbd_user=flags_user, rbd_secret_uuid=flags_uuid, group='libvirt') conf = libvirt_driver.get_config(connection_info, self.disk_info) tree = conf.format_dom() self._assertNetworkAndProtocolEquals(tree) self.assertEqual(self.user, tree.find('./auth').get('username')) self.assertEqual(secret_type, tree.find('./auth/secret').get('type')) # Assert that the secret_uuid comes from CONF.libvirt.rbd_secret_uuid. self.assertEqual(flags_uuid, tree.find('./auth/secret').get('uuid')) libvirt_driver.disconnect_volume(connection_info, mock.sentinel.instance) def test_libvirt_rbd_driver_auth_disabled(self): libvirt_driver = net.LibvirtNetVolumeDriver(self.fake_host) connection_info = self.rbd_connection(self.vol) secret_type = 'ceph' connection_info['data']['auth_enabled'] = False connection_info['data']['auth_username'] = self.user connection_info['data']['secret_type'] = secret_type connection_info['data']['secret_uuid'] = self.uuid conf = libvirt_driver.get_config(connection_info, self.disk_info) tree = conf.format_dom() self._assertNetworkAndProtocolEquals(tree) self.assertIsNone(tree.find('./auth')) libvirt_driver.disconnect_volume(connection_info, mock.sentinel.instance) def test_libvirt_rbd_driver_auth_disabled_flags_override(self): libvirt_driver = net.LibvirtNetVolumeDriver(self.fake_host) connection_info = self.rbd_connection(self.vol) secret_type = 'ceph' connection_info['data']['auth_enabled'] = False connection_info['data']['auth_username'] = self.user connection_info['data']['secret_type'] = secret_type connection_info['data']['secret_uuid'] = self.uuid # NOTE: Supplying the rbd_secret_uuid will enable authentication # locally in nova-compute even if not enabled in nova-volume/cinder flags_uuid = '37152720-1785-11e2-a740-af0c1d8b8e4b' flags_user = 'bar' self.flags(rbd_user=flags_user, rbd_secret_uuid=flags_uuid, group='libvirt') conf = libvirt_driver.get_config(connection_info, self.disk_info) tree = conf.format_dom() self._assertNetworkAndProtocolEquals(tree) self.assertEqual(flags_user, tree.find('./auth').get('username')) self.assertEqual(secret_type, tree.find('./auth/secret').get('type')) self.assertEqual(flags_uuid, tree.find('./auth/secret').get('uuid')) libvirt_driver.disconnect_volume(connection_info, mock.sentinel.instance) @mock.patch.object(host.Host, 'find_secret') @mock.patch.object(host.Host, 'create_secret') @mock.patch.object(host.Host, 'delete_secret') def test_libvirt_iscsi_net_driver(self, mock_delete, mock_create, mock_find): mock_find.return_value = test_volume.FakeSecret() mock_create.return_value = test_volume.FakeSecret() libvirt_driver = net.LibvirtNetVolumeDriver(self.fake_host) connection_info = self.iscsi_connection(self.vol, self.location, self.iqn, auth=True) secret_type = 'iscsi' flags_user = connection_info['data']['auth_username'] conf = libvirt_driver.get_config(connection_info, self.disk_info) tree = conf.format_dom() self._assertISCSINetworkAndProtocolEquals(tree) self.assertEqual(flags_user, tree.find('./auth').get('username')) self.assertEqual(secret_type, tree.find('./auth/secret').get('type')) self.assertEqual(test_volume.SECRET_UUID, tree.find('./auth/secret').get('uuid')) libvirt_driver.disconnect_volume(connection_info, mock.sentinel.instance) nova-17.0.1/nova/tests/unit/virt/libvirt/volume/test_drbd.py0000666000175000017500000000533113250073126024146 0ustar zuulzuul00000000000000# Copyright (c) 2015 LINBIT HA-Solutions 
GmbH. # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. """Unit tests for the DRBD volume driver module.""" import mock from os_brick.initiator import connector from nova import context as nova_context from nova.tests.unit import fake_instance from nova.tests.unit.virt.libvirt.volume import test_volume from nova.virt.libvirt.volume import drbd class LibvirtDRBDVolumeDriverTestCase( test_volume.LibvirtVolumeBaseTestCase): """Tests the LibvirtDRBDVolumeDriver class.""" def test_libvirt_drbd_driver(self): drbd_driver = drbd.LibvirtDRBDVolumeDriver(self.fake_host) self.assertIsInstance(drbd_driver.connector, connector.DRBDConnector) # connect a fake volume connection_info = { 'data': { 'device': '/path/to/fake-device' } } ctxt = nova_context.RequestContext('fake-user', 'fake-project') instance = fake_instance.fake_instance_obj(ctxt) device_info = { 'type': 'block', 'path': connection_info['data']['device'], } with mock.patch.object(connector.DRBDConnector, 'connect_volume', return_value=device_info): drbd_driver.connect_volume(connection_info, instance) # assert that the device_path was set self.assertIn('device_path', connection_info['data']) self.assertEqual('/path/to/fake-device', connection_info['data']['device_path']) # now get the config using the updated connection_info conf = drbd_driver.get_config(connection_info, self.disk_info) # assert things were passed through to the parent class self.assertEqual('block', conf.source_type) self.assertEqual('/path/to/fake-device', conf.source_path) # now disconnect the volume with mock.patch.object(connector.DRBDConnector, 'disconnect_volume') as mock_disconnect: drbd_driver.disconnect_volume(connection_info, instance) # disconnect is all passthrough so just assert the call mock_disconnect.assert_called_once_with(connection_info['data'], None) nova-17.0.1/nova/tests/unit/virt/libvirt/volume/test_fibrechannel.py0000666000175000017500000000677013250073126025663 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License.
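# NOTE(editorial): the DRBD test above exercises the pattern shared by most
# libvirt volume drivers: wrap an os-brick connector, let connect_volume()
# write the device path returned by os-brick back into
# connection_info['data'], and have get_config() turn that path into a
# block-device guest config while disconnect_volume() passes straight
# through to the connector. The sketch below is a minimal illustration of
# that flow; the class and helper names are assumptions for this example,
# not nova's actual implementation.


class _SketchVolumeDriver(object):
    """Hypothetical driver showing the connector-wrapping pattern."""

    def __init__(self, connector):
        # nova builds this via os-brick's InitiatorConnector.factory();
        # here it is injected to keep the sketch self-contained.
        self.connector = connector

    def connect_volume(self, connection_info, instance):
        # os-brick returns {'type': 'block', 'path': ...}; persisting the
        # path lets get_config() and a later detach find the device.
        device_info = self.connector.connect_volume(connection_info['data'])
        connection_info['data']['device_path'] = device_info['path']

    def get_config(self, connection_info, disk_info):
        # A real driver returns a LibvirtConfigGuestDisk; a dict stands in
        # for it in this sketch.
        return {'source_type': 'block',
                'source_path': connection_info['data']['device_path']}

    def disconnect_volume(self, connection_info, instance):
        # Pure passthrough, mirroring what the test above asserts.
        self.connector.disconnect_volume(connection_info['data'], None)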
import platform import mock from os_brick.initiator import connector from nova.objects import fields as obj_fields from nova.tests.unit.virt.libvirt.volume import test_volume from nova.virt.libvirt.volume import fibrechannel class LibvirtFibreChannelVolumeDriverTestCase( test_volume.LibvirtVolumeBaseTestCase): def test_libvirt_fibrechan_driver(self): for multipath in (True, False): self.flags(volume_use_multipath=multipath, group='libvirt') libvirt_driver = fibrechannel.LibvirtFibreChannelVolumeDriver( self.fake_host) self.assertIsInstance(libvirt_driver.connector, connector.FibreChannelConnector) if hasattr(libvirt_driver.connector, 'use_multipath'): self.assertEqual( multipath, libvirt_driver.connector.use_multipath) def _test_libvirt_fibrechan_driver_s390(self): libvirt_driver = fibrechannel.LibvirtFibreChannelVolumeDriver( self.fake_host) self.assertIsInstance(libvirt_driver.connector, connector.FibreChannelConnectorS390X) @mock.patch.object(platform, 'machine', return_value=obj_fields.Architecture.S390) def test_libvirt_fibrechan_driver_s390(self, mock_machine): self._test_libvirt_fibrechan_driver_s390() @mock.patch.object(platform, 'machine', return_value=obj_fields.Architecture.S390X) def test_libvirt_fibrechan_driver_s390x(self, mock_machine): self._test_libvirt_fibrechan_driver_s390() def test_libvirt_fibrechan_driver_get_config(self): libvirt_driver = fibrechannel.LibvirtFibreChannelVolumeDriver( self.fake_host) device_path = '/dev/fake-dev' connection_info = {'data': {'device_path': device_path}} conf = libvirt_driver.get_config(connection_info, self.disk_info) tree = conf.format_dom() self.assertEqual('block', tree.get('type')) self.assertEqual(device_path, tree.find('./source').get('dev')) self.assertEqual('raw', tree.find('./driver').get('type')) self.assertEqual('native', tree.find('./driver').get('io')) def test_extend_volume(self): device_path = '/dev/fake-dev' connection_info = {'data': {'device_path': device_path}} libvirt_driver = fibrechannel.LibvirtFibreChannelVolumeDriver( self.fake_host) libvirt_driver.connector.extend_volume = mock.MagicMock(return_value=1) new_size = libvirt_driver.extend_volume(connection_info, mock.sentinel.instance) self.assertEqual(1, new_size) libvirt_driver.connector.extend_volume.assert_called_once_with( connection_info['data']) nova-17.0.1/nova/tests/unit/virt/libvirt/volume/test_vrtshyperscale.py0000666000175000017500000000605413250073126026314 0ustar zuulzuul00000000000000# Copyright (c) 2017 Veritas Technologies LLC # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
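# NOTE(editorial): the fibre channel tests above pin down two bits of driver
# initialisation: the os-brick connector must honour the configured
# volume_use_multipath flag, and s390/s390x hosts must get the
# S390X-specific connector. A self-contained sketch of the architecture
# check follows; _pick_fc_connector_class is an illustrative assumption,
# not a helper that exists in nova or os-brick.

import platform


def _pick_fc_connector_class(machine=None):
    """Return the connector class name for the host architecture."""
    machine = machine or platform.machine()
    if machine in ('s390', 's390x'):
        # zFCP attachment needs its own connector on System z hosts.
        return 'FibreChannelConnectorS390X'
    return 'FibreChannelConnector'


assert _pick_fc_connector_class('s390x') == 'FibreChannelConnectorS390X'
assert _pick_fc_connector_class('x86_64') == 'FibreChannelConnector'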
import mock from os_brick.initiator import connector from nova.tests.unit.virt.libvirt.volume import test_volume from nova.virt.libvirt.volume import vrtshyperscale DEVICE_NAME = '{8ee71c33-dcd0-4267-8f2b-e0742ecabe9f}' DEVICE_PATH = '/dev/8ee71c33-dcd0-4267-8f2b-e0742ec' class LibvirtHyperScaleVolumeDriverTestCase( test_volume.LibvirtVolumeBaseTestCase): def test_driver_init(self): hs = vrtshyperscale.LibvirtHyperScaleVolumeDriver(self.fake_host) self.assertIsInstance(hs.connector, connector.HyperScaleConnector) def test_get_config(self): hs = vrtshyperscale.LibvirtHyperScaleVolumeDriver(self.fake_host) # expect valid conf is returned if called with proper arguments disk_info = {'name': DEVICE_NAME, 'type': None, 'dev': None, 'bus': None, 'device_path': DEVICE_PATH, } conn = {'data': disk_info} conf = hs.get_config(conn, disk_info) self.assertEqual("block", conf.source_type) self.assertEqual(DEVICE_PATH, conf.source_path) @mock.patch('os_brick.initiator.connectors.vrtshyperscale' '.HyperScaleConnector.connect_volume') def test_connect_volume(self, mock_brick_connect_volume): mock_brick_connect_volume.return_value = {'path': DEVICE_PATH} hs = vrtshyperscale.LibvirtHyperScaleVolumeDriver(self.fake_host) # dummy arguments are just passed through to mock connector disk_info = {'name': DEVICE_NAME} connection_info = {'data': disk_info} hs.connect_volume(connection_info, mock.sentinel.instance) # expect connect_volume to add device_path to connection_info: self.assertEqual(connection_info['data']['device_path'], DEVICE_PATH) @mock.patch('os_brick.initiator.connectors.vrtshyperscale' '.HyperScaleConnector.disconnect_volume') def test_disconnect_volume(self, mock_brick_disconnect_volume): mock_brick_disconnect_volume.return_value = None hs = vrtshyperscale.LibvirtHyperScaleVolumeDriver(self.fake_host) # dummy arguments are just passed through to mock connector conn_data = {'name': DEVICE_NAME} connection_info = {'data': conn_data} hs.disconnect_volume(connection_info, mock.sentinel.instance) hs.connector.disconnect_volume.assert_called_once_with(conn_data, None) nova-17.0.1/nova/tests/unit/virt/libvirt/volume/test_aoe.py0000666000175000017500000000207613250073126024002 0ustar zuulzuul00000000000000# Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from os_brick.initiator import connector from nova.tests.unit.virt.libvirt.volume import test_volume from nova.virt.libvirt.volume import aoe class LibvirtAOEVolumeDriverTestCase(test_volume.LibvirtVolumeBaseTestCase): @mock.patch('os.path.exists', return_value=True) def test_libvirt_aoe_driver(self, exists): libvirt_driver = aoe.LibvirtAOEVolumeDriver(self.fake_host) self.assertIsInstance(libvirt_driver.connector, connector.AoEConnector) nova-17.0.1/nova/tests/unit/virt/libvirt/volume/test_storpool.py0000666000175000017500000001377013250073126025122 0ustar zuulzuul00000000000000# (c) Copyright 2015 - 2018 StorPool # All Rights Reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import mock from os_brick import initiator from nova.tests.unit.virt.libvirt.volume import test_volume from nova.virt.libvirt.volume import storpool as vol_sp test_attached = {} class MockStorPoolExc(Exception): def __init__(self, msg): super(MockStorPoolExc, self).__init__(msg) def storpoolVolumeName(vid): return 'os--volume--{id}'.format(id=vid) def storpoolVolumePath(vid): return '/dev/storpool/' + storpoolVolumeName(vid) class MockStorPoolConnector(object): def __init__(self, inst): self.inst = inst def connect_volume(self, connection_info): self.inst.assertIn('client_id', connection_info) self.inst.assertIn('volume', connection_info) self.inst.assertIn('access_mode', connection_info) v = connection_info['volume'] if v in test_attached: raise MockStorPoolExc('Duplicate volume attachment') test_attached[v] = { 'info': connection_info, 'path': storpoolVolumePath(v) } return {'type': 'block', 'path': test_attached[v]['path']} def disconnect_volume(self, connection_info, device_info): self.inst.assertIn('client_id', connection_info) self.inst.assertIn('volume', connection_info) v = connection_info['volume'] if v not in test_attached: raise MockStorPoolExc('Unknown volume to detach') self.inst.assertIs(test_attached[v]['info'], connection_info) del test_attached[v] def extend_volume(self, connection_info): self.inst.assertIn('volume', connection_info) self.inst.assertIn('real_size', connection_info) v = connection_info['volume'] if v not in test_attached: raise MockStorPoolExc('Extending a volume that is not attached') return connection_info['real_size'] class MockStorPoolInitiator(object): def __init__(self, inst): self.inst = inst def factory(self, proto, helper): self.inst.assertEqual(proto, initiator.STORPOOL) self.inst.assertIsNotNone(helper) return MockStorPoolConnector(self.inst) class LibvirtStorPoolVolumeDriverTestCase( test_volume.LibvirtVolumeBaseTestCase): def mock_storpool(f): def _config_inner_inner1(inst, *args, **kwargs): @mock.patch( 'os_brick.initiator.connector.InitiatorConnector', new=MockStorPoolInitiator(inst)) def _config_inner_inner2(): return f(inst, *args, **kwargs) return _config_inner_inner2() return _config_inner_inner1 def assertStorpoolAttached(self, names): self.assertListEqual(sorted(test_attached.keys()), sorted(names)) def conn_info(self, volume_id): return { 'data': { 'access_mode': 'rw', 'client_id': '1', 'volume': volume_id, 'real_size': 42 if volume_id == '1' else 616 }, 'serial': volume_id } @mock_storpool def test_storpool_config(self): libvirt_driver = vol_sp.LibvirtStorPoolVolumeDriver(self.fake_host) ci = self.conn_info('1') ci['data']['device_path'] = '/dev/storpool/something' c = libvirt_driver.get_config(ci, self.disk_info) self.assertEqual('block', c.source_type) self.assertEqual('/dev/storpool/something', c.source_path) @mock_storpool def test_storpool_attach_detach_extend(self): libvirt_driver = vol_sp.LibvirtStorPoolVolumeDriver(self.fake_host) self.assertDictEqual({}, test_attached) ci_1 = 
self.conn_info('1') ci_2 = self.conn_info('2') self.assertRaises(MockStorPoolExc, libvirt_driver.extend_volume, ci_1, mock.sentinel.instance) self.assertRaises(MockStorPoolExc, libvirt_driver.extend_volume, ci_2, mock.sentinel.instance) libvirt_driver.connect_volume(ci_1, mock.sentinel.instance) self.assertStorpoolAttached(('1',)) ns_1 = libvirt_driver.extend_volume(ci_1, mock.sentinel.instance) self.assertEqual(ci_1['data']['real_size'], ns_1) self.assertRaises(MockStorPoolExc, libvirt_driver.extend_volume, ci_2, mock.sentinel.instance) libvirt_driver.connect_volume(ci_2, mock.sentinel.instance) self.assertStorpoolAttached(('1', '2')) ns_1 = libvirt_driver.extend_volume(ci_1, mock.sentinel.instance) self.assertEqual(ci_1['data']['real_size'], ns_1) ns_2 = libvirt_driver.extend_volume(ci_2, mock.sentinel.instance) self.assertEqual(ci_2['data']['real_size'], ns_2) self.assertRaises(MockStorPoolExc, libvirt_driver.connect_volume, ci_2, mock.sentinel.instance) libvirt_driver.disconnect_volume(ci_1, mock.sentinel.instance) self.assertStorpoolAttached(('2',)) self.assertRaises(MockStorPoolExc, libvirt_driver.disconnect_volume, ci_1, mock.sentinel.instance) self.assertRaises(MockStorPoolExc, libvirt_driver.extend_volume, ci_1, mock.sentinel.instance) libvirt_driver.disconnect_volume(ci_2, mock.sentinel.instance) self.assertDictEqual({}, test_attached) nova-17.0.1/nova/tests/unit/virt/libvirt/volume/test_remotefs.py0000666000175000017500000003044013250073126025056 0ustar zuulzuul00000000000000# Copyright 2014 Cloudbase Solutions Srl # All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
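# NOTE(editorial): the StorPool tests above get their strictness from a
# stateful mock connector (the module-level test_attached dict), so double
# attaches, detaches of unknown volumes and extends of unattached volumes
# all raise instead of silently passing. The reduction below shows the same
# state machine in isolation; _AttachTracker is an illustrative name, not
# nova code.


class _AttachTracker(object):
    """Track attached volume ids and reject inconsistent transitions."""

    def __init__(self):
        self._attached = set()

    def attach(self, volume_id):
        if volume_id in self._attached:
            raise RuntimeError('Duplicate volume attachment')
        self._attached.add(volume_id)

    def detach(self, volume_id):
        if volume_id not in self._attached:
            raise RuntimeError('Unknown volume to detach')
        self._attached.remove(volume_id)

    def extend(self, volume_id, new_size):
        if volume_id not in self._attached:
            raise RuntimeError('Extending a volume that is not attached')
        return new_size


# Quick usage check: attach, extend and detach must succeed in that order.
_tracker = _AttachTracker()
_tracker.attach('1')
assert _tracker.extend('1', 42) == 42
_tracker.detach('1')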
import mock from oslo_concurrency import processutils from nova import test from nova.virt.libvirt.volume import remotefs class RemoteFSTestCase(test.NoDBTestCase): """Remote filesystem operations test case.""" @mock.patch('oslo_utils.fileutils.ensure_tree') @mock.patch('nova.privsep.fs.mount') def _test_mount_share(self, mock_mount, mock_ensure_tree, already_mounted=False): if already_mounted: err_msg = 'Device or resource busy' mock_mount.side_effect = [ None, processutils.ProcessExecutionError(err_msg)] remotefs.mount_share( mock.sentinel.mount_path, mock.sentinel.export_path, mock.sentinel.export_type, options=[mock.sentinel.mount_options]) mock_ensure_tree.assert_any_call(mock.sentinel.mount_path) mock_mount.assert_has_calls( [mock.call(mock.sentinel.export_type, mock.sentinel.export_path, mock.sentinel.mount_path, [mock.sentinel.mount_options])]) def test_mount_new_share(self): self._test_mount_share() def test_mount_already_mounted_share(self): self._test_mount_share(already_mounted=True) @mock.patch('nova.privsep.fs.umount') def test_unmount_share(self, mock_umount): remotefs.unmount_share( mock.sentinel.mount_path, mock.sentinel.export_path) mock_umount.assert_has_calls( [mock.call(mock.sentinel.mount_path)]) @mock.patch('tempfile.mkdtemp', return_value='/tmp/Mercury') @mock.patch('nova.utils.execute') def test_remove_remote_file_rsync(self, mock_execute, mock_mkdtemp): remotefs.RsyncDriver().remove_file('host', 'dest', None, None) rsync_call_args = mock.call('rsync', '--archive', '--delete', '--include', 'dest', '--exclude', '*', '/tmp/Mercury/', 'host:', on_completion=None, on_execute=None) self.assertEqual(mock_execute.mock_calls[0], rsync_call_args) rm_call_args = mock.call('rm', '-rf', '/tmp/Mercury') self.assertEqual(mock_execute.mock_calls[1], rm_call_args) self.assertEqual(2, mock_execute.call_count) self.assertEqual(1, mock_mkdtemp.call_count) @mock.patch('nova.utils.ssh_execute') def test_remove_remote_file_ssh(self, mock_ssh_execute): remotefs.SshDriver().remove_file('host', 'dest', None, None) mock_ssh_execute.assert_called_once_with( 'host', 'rm', 'dest', on_completion=None, on_execute=None) @mock.patch('tempfile.mkdtemp', return_value='/tmp/Venus') @mock.patch('nova.utils.execute') def test_remove_remote_dir_rsync(self, mock_execute, mock_mkdtemp): remotefs.RsyncDriver().remove_dir('host', 'dest', None, None) rsync_call_args = mock.call('rsync', '--archive', '--delete-excluded', '/tmp/Venus/', 'host:dest', on_completion=None, on_execute=None) self.assertEqual(mock_execute.mock_calls[0], rsync_call_args) rsync_call_args = mock.call('rsync', '--archive', '--delete', '--include', 'dest', '--exclude', '*', '/tmp/Venus/', 'host:', on_completion=None, on_execute=None) self.assertEqual(mock_execute.mock_calls[1], rsync_call_args) rm_call_args = mock.call('rm', '-rf', '/tmp/Venus') self.assertEqual(mock_execute.mock_calls[2], rm_call_args) self.assertEqual(3, mock_execute.call_count) self.assertEqual(1, mock_mkdtemp.call_count) @mock.patch('nova.utils.ssh_execute') def test_remove_remote_dir_ssh(self, mock_ssh_execute): remotefs.SshDriver().remove_dir('host', 'dest', None, None) mock_ssh_execute.assert_called_once_with( 'host', 'rm', '-rf', 'dest', on_completion=None, on_execute=None) @mock.patch('tempfile.mkdtemp', return_value='/tmp/Mars') @mock.patch('nova.utils.execute') def test_create_remote_file_rsync(self, mock_execute, mock_mkdtemp): remotefs.RsyncDriver().create_file('host', 'dest_dir', None, None) mkdir_call_args = mock.call('mkdir', '-p', '/tmp/Mars/', 
on_completion=None, on_execute=None) self.assertEqual(mock_execute.mock_calls[0], mkdir_call_args) touch_call_args = mock.call('touch', '/tmp/Mars/dest_dir', on_completion=None, on_execute=None) self.assertEqual(mock_execute.mock_calls[1], touch_call_args) rsync_call_args = mock.call('rsync', '--archive', '--relative', '--no-implied-dirs', '/tmp/Mars/./dest_dir', 'host:/', on_completion=None, on_execute=None) self.assertEqual(mock_execute.mock_calls[2], rsync_call_args) rm_call_args = mock.call('rm', '-rf', '/tmp/Mars') self.assertEqual(mock_execute.mock_calls[3], rm_call_args) self.assertEqual(4, mock_execute.call_count) self.assertEqual(1, mock_mkdtemp.call_count) @mock.patch('nova.utils.ssh_execute') def test_create_remote_file_ssh(self, mock_ssh_execute): remotefs.SshDriver().create_file('host', 'dest_dir', None, None) mock_ssh_execute.assert_called_once_with('host', 'touch', 'dest_dir', on_completion=None, on_execute=None) @mock.patch('tempfile.mkdtemp', return_value='/tmp/Jupiter') @mock.patch('nova.utils.execute') def test_create_remote_dir_rsync(self, mock_execute, mock_mkdtemp): remotefs.RsyncDriver().create_dir('host', 'dest_dir', None, None) mkdir_call_args = mock.call('mkdir', '-p', '/tmp/Jupiter/dest_dir', on_completion=None, on_execute=None) self.assertEqual(mock_execute.mock_calls[0], mkdir_call_args) rsync_call_args = mock.call('rsync', '--archive', '--relative', '--no-implied-dirs', '/tmp/Jupiter/./dest_dir', 'host:/', on_completion=None, on_execute=None) self.assertEqual(mock_execute.mock_calls[1], rsync_call_args) rm_call_args = mock.call('rm', '-rf', '/tmp/Jupiter') self.assertEqual(mock_execute.mock_calls[2], rm_call_args) self.assertEqual(3, mock_execute.call_count) self.assertEqual(1, mock_mkdtemp.call_count) @mock.patch('nova.utils.ssh_execute') def test_create_remote_dir_ssh(self, mock_ssh_execute): remotefs.SshDriver().create_dir('host', 'dest_dir', None, None) mock_ssh_execute.assert_called_once_with('host', 'mkdir', '-p', 'dest_dir', on_completion=None, on_execute=None) @mock.patch('nova.utils.execute') def test_remote_copy_file_rsync(self, mock_execute): remotefs.RsyncDriver().copy_file('1.2.3.4:/home/star_wars', '/home/favourite', None, None, compression=True) mock_execute.assert_called_once_with('rsync', '-r', '--sparse', '1.2.3.4:/home/star_wars', '/home/favourite', '--compress', on_completion=None, on_execute=None) @mock.patch('nova.utils.execute') def test_remote_copy_file_rsync_without_compression(self, mock_execute): remotefs.RsyncDriver().copy_file('1.2.3.4:/home/star_wars', '/home/favourite', None, None, compression=False) mock_execute.assert_called_once_with('rsync', '-r', '--sparse', '1.2.3.4:/home/star_wars', '/home/favourite', on_completion=None, on_execute=None) @mock.patch('nova.utils.execute') def test_remote_copy_file_ssh(self, mock_execute): remotefs.SshDriver().copy_file('1.2.3.4:/home/SpaceOdyssey', '/home/favourite', None, None, True) mock_execute.assert_called_once_with('scp', '-r', '1.2.3.4:/home/SpaceOdyssey', '/home/favourite', on_completion=None, on_execute=None) @mock.patch('tempfile.mkdtemp', return_value='/tmp/Saturn') def test_rsync_driver_ipv6(self, mock_mkdtemp): with mock.patch('nova.utils.execute') as mock_execute: remotefs.RsyncDriver().create_file('2600::', 'dest_dir', None, None) rsync_call_args = mock.call('rsync', '--archive', '--relative', '--no-implied-dirs', '/tmp/Saturn/./dest_dir', '[2600::]:/', on_completion=None, on_execute=None) self.assertEqual(mock_execute.mock_calls[2], rsync_call_args) with 
mock.patch('nova.utils.execute') as mock_execute: remotefs.RsyncDriver().create_dir('2600::', 'dest_dir', None, None) rsync_call_args = mock.call('rsync', '--archive', '--relative', '--no-implied-dirs', '/tmp/Saturn/./dest_dir', '[2600::]:/', on_completion=None, on_execute=None) self.assertEqual(mock_execute.mock_calls[1], rsync_call_args) with mock.patch('nova.utils.execute') as mock_execute: remotefs.RsyncDriver().remove_file('2600::', 'dest', None, None) rsync_call_args = mock.call('rsync', '--archive', '--delete', '--include', 'dest', '--exclude', '*', '/tmp/Saturn/', '[2600::]:', on_completion=None, on_execute=None) self.assertEqual(mock_execute.mock_calls[0], rsync_call_args) with mock.patch('nova.utils.execute') as mock_execute: remotefs.RsyncDriver().remove_dir('2600::', 'dest', None, None) rsync_call_args = mock.call('rsync', '--archive', '--delete-excluded', '/tmp/Saturn/', '[2600::]:dest', on_completion=None, on_execute=None) self.assertEqual(mock_execute.mock_calls[0], rsync_call_args) rsync_call_args = mock.call('rsync', '--archive', '--delete', '--include', 'dest', '--exclude', '*', '/tmp/Saturn/', '[2600::]:', on_completion=None, on_execute=None) self.assertEqual(mock_execute.mock_calls[1], rsync_call_args) nova-17.0.1/nova/tests/unit/virt/libvirt/test_driver.py0000666000175000017500000347130713250073136023235 0ustar zuulzuul00000000000000# Copyright 2010 OpenStack Foundation # Copyright 2012 University Of Minho # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. 
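# NOTE(editorial): test_remotefs.py above encodes two easy-to-miss
# behaviours: mount_share() must treat a 'Device or resource busy' failure
# as "already mounted" rather than as an error, and the rsync driver must
# bracket bare IPv6 literals so the host:path argument stays parseable
# ('[2600::]:dest'). Both are sketched below with assumed helper names;
# neither function exists under these names in nova.


def _format_remote_path(host, path):
    """Bracket bare IPv6 literals for rsync/scp style host:path args."""
    if ':' in host and not host.startswith('['):
        host = '[%s]' % host
    return '%s:%s' % (host, path)


def _mount_share_tolerant(mount, export_type, export, mount_path, options):
    """Call mount() but swallow only the 'already mounted' failure."""
    try:
        mount(export_type, export, mount_path, options)
    except Exception as exc:  # ProcessExecutionError in the real code
        if 'Device or resource busy' not in str(exc):
            raise


assert _format_remote_path('2600::', 'dest') == '[2600::]:dest'
assert _format_remote_path('1.2.3.4', 'dest') == '1.2.3.4:dest'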
import binascii from collections import deque from collections import OrderedDict import contextlib import copy import datetime import errno import glob import os import random import re import shutil import signal import threading import time from castellan import key_manager import ddt import eventlet from eventlet import greenthread import fixtures from lxml import etree import mock from mox3 import mox from os_brick import encryptors from os_brick import exception as brick_exception from os_brick.initiator import connector import os_vif from oslo_concurrency import lockutils from oslo_concurrency import processutils from oslo_config import cfg from oslo_serialization import jsonutils from oslo_service import loopingcall from oslo_utils import encodeutils from oslo_utils import fileutils from oslo_utils import fixture as utils_fixture from oslo_utils import units from oslo_utils import uuidutils from oslo_utils import versionutils import six from six.moves import builtins from six.moves import range from nova.api.metadata import base as instance_metadata from nova.compute import manager from nova.compute import power_state from nova.compute import task_states from nova.compute import vm_states import nova.conf from nova import context from nova import db from nova import exception from nova.network import model as network_model from nova import objects from nova.objects import block_device as block_device_obj from nova.objects import fields from nova.objects import migrate_data as migrate_data_obj from nova.objects import virtual_interface as obj_vif from nova.pci import manager as pci_manager from nova.pci import utils as pci_utils import nova.privsep.libvirt from nova import test from nova.tests.unit import fake_block_device from nova.tests.unit import fake_diagnostics from nova.tests.unit import fake_instance from nova.tests.unit import fake_network import nova.tests.unit.image.fake from nova.tests.unit import matchers from nova.tests.unit.objects import test_diagnostics from nova.tests.unit.objects import test_pci_device from nova.tests.unit.objects import test_vcpu_model from nova.tests.unit.virt.libvirt import fake_imagebackend from nova.tests.unit.virt.libvirt import fake_libvirt_utils from nova.tests.unit.virt.libvirt import fakelibvirt from nova.tests import uuidsentinel as uuids from nova import utils from nova import version from nova.virt import block_device as driver_block_device from nova.virt.disk import api as disk_api from nova.virt import driver from nova.virt import fake from nova.virt import firewall as base_firewall from nova.virt import hardware from nova.virt.image import model as imgmodel from nova.virt import images from nova.virt.libvirt import blockinfo from nova.virt.libvirt import config as vconfig from nova.virt.libvirt import driver as libvirt_driver from nova.virt.libvirt import firewall from nova.virt.libvirt import guest as libvirt_guest from nova.virt.libvirt import host from nova.virt.libvirt import imagebackend from nova.virt.libvirt import imagecache from nova.virt.libvirt import migration as libvirt_migrate from nova.virt.libvirt.storage import dmcrypt from nova.virt.libvirt.storage import lvm from nova.virt.libvirt.storage import rbd_utils from nova.virt.libvirt import utils as libvirt_utils from nova.virt.libvirt.volume import volume as volume_drivers CONF = nova.conf.CONF _fake_network_info = fake_network.fake_get_instance_nw_info _fake_NodeDevXml = \ {"pci_0000_04_00_3": """ pci_0000_04_00_3 pci_0000_00_01_1 igb 0 4 0 3 I350 Gigabit Network 
Connection Intel Corporation
""", "pci_0000_04_10_7": """ pci_0000_04_10_7 pci_0000_00_01_1 igbvf 0 4 16 7 I350 Ethernet Controller Virtual Function Intel Corporation
""", "pci_0000_04_11_7": """ pci_0000_04_11_7 pci_0000_00_01_1 igbvf 0 4 17 7 I350 Ethernet Controller Virtual Function Intel Corporation
""", "pci_0000_04_00_1": """ pci_0000_04_00_1 /sys/devices/pci0000:00/0000:00:02.0/0000:04:00.1 pci_0000_00_02_0 mlx5_core 0 4 0 1 MT27700 Family [ConnectX-4] Mellanox Technologies
""", # libvirt >= 1.3.0 nodedev-dumpxml "pci_0000_03_00_0": """ pci_0000_03_00_0 /sys/devices/pci0000:00/0000:00:02.0/0000:03:00.0 pci_0000_00_02_0 mlx5_core 0 3 0 0 MT27700 Family [ConnectX-4] Mellanox Technologies
""", "pci_0000_03_00_1": """ pci_0000_03_00_1 /sys/devices/pci0000:00/0000:00:02.0/0000:03:00.1 pci_0000_00_02_0 mlx5_core 0 3 0 1 MT27700 Family [ConnectX-4] Mellanox Technologies
""", "net_enp2s2_02_9a_a1_37_be_54": """ net_enp2s2_02_9a_a1_37_be_54 /sys/devices/pci0000:00/0000:00:02.0/0000:02:02.0/net/enp2s2 pci_0000_04_11_7 enp2s2
02:9a:a1:37:be:54
""", "pci_0000_06_00_0": """ pci_0000_06_00_0 /sys/devices/pci0000:00/0000:00:06.0 nvidia 0 10 1 5 GRID M60-0B Nvidia GRID M60-0B vfio-pci 16 """, "mdev_4b20d080_1b54_4048_85b3_a6a62d165c01": """ mdev_4b20d080_1b54_4048_85b3_a6a62d165c01 /sys/devices/pci0000:00/0000:00:02.0/4b20d080-1b54-4048-85b3-a6a62d165c01 pci_0000_00_02_0 vfio_mdev """, } _fake_cpu_info = { "arch": "test_arch", "model": "test_model", "vendor": "test_vendor", "topology": { "sockets": 1, "cores": 8, "threads": 16 }, "features": ["feature1", "feature2"] } eph_default_ext = utils.get_hash_str(disk_api._DEFAULT_FILE_SYSTEM)[:7] def eph_name(size): return ('ephemeral_%(size)s_%(ext)s' % {'size': size, 'ext': eph_default_ext}) def fake_disk_info_byname(instance, type='qcow2'): """Return instance_disk_info corresponding accurately to the properties of the given Instance object. The info is returned as an OrderedDict of name->disk_info for each disk. :param instance: The instance we're generating fake disk_info for. :param type: libvirt's disk type. :return: disk_info :rtype: OrderedDict """ instance_dir = os.path.join(CONF.instances_path, instance.uuid) def instance_path(name): return os.path.join(instance_dir, name) disk_info = OrderedDict() # root disk if (instance.image_ref is not None and instance.image_ref != uuids.fake_volume_backed_image_ref): cache_name = imagecache.get_cache_fname(instance.image_ref) disk_info['disk'] = { 'type': type, 'path': instance_path('disk'), 'virt_disk_size': instance.flavor.root_gb * units.Gi, 'backing_file': cache_name, 'disk_size': instance.flavor.root_gb * units.Gi, 'over_committed_disk_size': 0} swap_mb = instance.flavor.swap if swap_mb > 0: disk_info['disk.swap'] = { 'type': type, 'path': instance_path('disk.swap'), 'virt_disk_size': swap_mb * units.Mi, 'backing_file': 'swap_%s' % swap_mb, 'disk_size': swap_mb * units.Mi, 'over_committed_disk_size': 0} eph_gb = instance.flavor.ephemeral_gb if eph_gb > 0: disk_info['disk.local'] = { 'type': type, 'path': instance_path('disk.local'), 'virt_disk_size': eph_gb * units.Gi, 'backing_file': eph_name(eph_gb), 'disk_size': eph_gb * units.Gi, 'over_committed_disk_size': 0} if instance.config_drive: disk_info['disk.config'] = { 'type': 'raw', 'path': instance_path('disk.config'), 'virt_disk_size': 1024, 'backing_file': '', 'disk_size': 1024, 'over_committed_disk_size': 0} return disk_info def fake_diagnostics_object(with_cpus=False, with_disks=False, with_nic=False): diag_dict = {'config_drive': False, 'driver': 'libvirt', 'hypervisor': 'kvm', 'hypervisor_os': 'linux', 'memory_details': {'maximum': 2048, 'used': 1234}, 'state': 'running', 'uptime': 10} if with_cpus: diag_dict['cpu_details'] = [] for id, t in enumerate([15340000000, 1640000000, 3040000000, 1420000000]): diag_dict['cpu_details'].append({'id': id, 'time': t}) if with_disks: diag_dict['disk_details'] = [] for i in range(2): diag_dict['disk_details'].append( {'read_bytes': 688640, 'read_requests': 169, 'write_bytes': 0, 'write_requests': 0, 'errors_count': 1}) if with_nic: diag_dict['nic_details'] = [ {'mac_address': '52:54:00:a4:38:38', 'rx_drop': 0, 'rx_errors': 0, 'rx_octets': 4408, 'rx_packets': 82, 'tx_drop': 0, 'tx_errors': 0, 'tx_octets': 0, 'tx_packets': 0}] return fake_diagnostics.fake_diagnostics_obj(**diag_dict) def fake_disk_info_json(instance, type='qcow2'): """Return fake instance_disk_info corresponding accurately to the properties of the given Instance object. :param instance: The instance we're generating fake disk_info for. :param type: libvirt's disk type. 
:return: JSON representation of instance_disk_info for all disks. :rtype: str """ disk_info = fake_disk_info_byname(instance, type) return jsonutils.dumps(disk_info.values()) def get_injection_info(network_info=None, admin_pass=None, files=None): return libvirt_driver.InjectionInfo( network_info=network_info, admin_pass=admin_pass, files=files) def _concurrency(signal, wait, done, target, is_block_dev=False): signal.send() wait.wait() done.send() class FakeVirtDomain(object): def __init__(self, fake_xml=None, uuidstr=None, id=None, name=None, info=None): if uuidstr is None: uuidstr = uuids.fake self.uuidstr = uuidstr self.id = id self.domname = name self._info = info or ( [power_state.RUNNING, 2048 * units.Mi, 1234 * units.Mi, None, None]) if fake_xml: self._fake_dom_xml = fake_xml else: self._fake_dom_xml = """ testinstance1 """ def name(self): if self.domname is None: return "fake-domain %s" % self else: return self.domname def ID(self): return self.id def info(self): return self._info def create(self): pass def managedSave(self, *args): pass def createWithFlags(self, launch_flags): pass def XMLDesc(self, flags): return self._fake_dom_xml def UUIDString(self): return self.uuidstr def attachDeviceFlags(self, xml, flags): pass def attachDevice(self, xml): pass def detachDeviceFlags(self, xml, flags): pass def snapshotCreateXML(self, xml, flags): pass def blockCommit(self, disk, base, top, bandwidth=0, flags=0): pass def blockRebase(self, disk, base, bandwidth=0, flags=0): pass def blockJobInfo(self, path, flags): pass def blockJobAbort(self, path, flags): pass def resume(self): pass def destroy(self): pass def fsFreeze(self, disks=None, flags=0): pass def fsThaw(self, disks=None, flags=0): pass def isActive(self): return True def isPersistent(self): return True def undefine(self): return True class CacheConcurrencyTestCase(test.NoDBTestCase): def setUp(self): super(CacheConcurrencyTestCase, self).setUp() self.flags(instances_path=self.useFixture(fixtures.TempDir()).path) # utils.synchronized() will create the lock_path for us if it # doesn't already exist. It will also delete it when it's done, # which can cause race conditions with the multiple threads we # use for tests. So, create the path here so utils.synchronized() # won't delete it out from under one of the threads. self.lock_path = os.path.join(CONF.instances_path, 'locks') fileutils.ensure_tree(self.lock_path) def fake_exists(fname): basedir = os.path.join(CONF.instances_path, CONF.image_cache_subdirectory_name) if fname == basedir or fname == self.lock_path: return True return False def fake_execute(*args, **kwargs): pass def fake_extend(image, size, use_cow=False): pass self.stub_out('os.path.exists', fake_exists) self.stubs.Set(utils, 'execute', fake_execute) self.stubs.Set(imagebackend.disk, 'extend', fake_extend) self.useFixture(fixtures.MonkeyPatch( 'nova.virt.libvirt.imagebackend.libvirt_utils', fake_libvirt_utils)) def _fake_instance(self, uuid): return objects.Instance(id=1, uuid=uuid) def test_same_fname_concurrency(self): # Ensures that the same fname cache runs sequentially. uuid = uuids.fake backend = imagebackend.Backend(False) wait1 = eventlet.event.Event() done1 = eventlet.event.Event() sig1 = eventlet.event.Event() thr1 = eventlet.spawn(backend.by_name(self._fake_instance(uuid), 'name').cache, _concurrency, 'fname', None, signal=sig1, wait=wait1, done=done1) eventlet.sleep(0) # Thread 1 should run before thread 2.
sig1.wait() wait2 = eventlet.event.Event() done2 = eventlet.event.Event() sig2 = eventlet.event.Event() thr2 = eventlet.spawn(backend.by_name(self._fake_instance(uuid), 'name').cache, _concurrency, 'fname', None, signal=sig2, wait=wait2, done=done2) wait2.send() eventlet.sleep(0) try: self.assertFalse(done2.ready()) finally: wait1.send() done1.wait() eventlet.sleep(0) self.assertTrue(done2.ready()) # Wait on greenthreads to assert they didn't raise exceptions # during execution thr1.wait() thr2.wait() def test_different_fname_concurrency(self): # Ensures that two different fname caches are concurrent. uuid = uuids.fake backend = imagebackend.Backend(False) wait1 = eventlet.event.Event() done1 = eventlet.event.Event() sig1 = eventlet.event.Event() thr1 = eventlet.spawn(backend.by_name(self._fake_instance(uuid), 'name').cache, _concurrency, 'fname2', None, signal=sig1, wait=wait1, done=done1) eventlet.sleep(0) # Thread 1 should run before thread 2. sig1.wait() wait2 = eventlet.event.Event() done2 = eventlet.event.Event() sig2 = eventlet.event.Event() thr2 = eventlet.spawn(backend.by_name(self._fake_instance(uuid), 'name').cache, _concurrency, 'fname1', None, signal=sig2, wait=wait2, done=done2) eventlet.sleep(0) # Wait for thread 2 to start. sig2.wait() wait2.send() tries = 0 while not done2.ready() and tries < 10: eventlet.sleep(0) tries += 1 try: self.assertTrue(done2.ready()) finally: wait1.send() eventlet.sleep(0) # Wait on greenthreads to assert they didn't raise exceptions # during execution thr1.wait() thr2.wait() class FakeInvalidVolumeDriver(object): def __init__(self, *args, **kwargs): raise brick_exception.InvalidConnectorProtocol('oops!') class FakeConfigGuestDisk(object): def __init__(self, *args, **kwargs): self.source_type = None self.driver_cache = None class FakeConfigGuest(object): def __init__(self, *args, **kwargs): self.driver_cache = None class FakeNodeDevice(object): def __init__(self, fakexml): self.xml = fakexml def XMLDesc(self, flags): return self.xml def _create_test_instance(): flavor = objects.Flavor(memory_mb=2048, swap=0, vcpu_weight=None, root_gb=10, id=2, name=u'm1.small', ephemeral_gb=20, rxtx_factor=1.0, flavorid=u'1', vcpus=2, extra_specs={}) return { 'id': 1, 'uuid': '32dfcb37-5af1-552b-357c-be8c3aa38310', 'memory_kb': '1024000', 'basepath': '/some/path', 'bridge_name': 'br100', 'display_name': "Acme webserver", 'vcpus': 2, 'project_id': 'fake', 'bridge': 'br101', 'image_ref': '155d900f-4e14-4e4c-a73d-069cbf4541e6', 'root_gb': 10, 'ephemeral_gb': 20, 'instance_type_id': '5', # m1.small 'extra_specs': {}, 'system_metadata': { 'image_disk_format': 'raw' }, 'flavor': flavor, 'new_flavor': None, 'old_flavor': None, 'pci_devices': objects.PciDeviceList(), 'numa_topology': None, 'config_drive': None, 'vm_mode': None, 'kernel_id': None, 'ramdisk_id': None, 'os_type': 'linux', 'user_id': '838a72b0-0d54-4827-8fd6-fb1227633ceb', 'ephemeral_key_uuid': None, 'vcpu_model': None, 'host': 'fake-host', 'task_state': None, } @ddt.ddt class LibvirtConnTestCase(test.NoDBTestCase, test_diagnostics.DiagnosticsComparisonMixin): REQUIRES_LOCKING = True _EPHEMERAL_20_DEFAULT = eph_name(20) def setUp(self): super(LibvirtConnTestCase, self).setUp() self.user_id = 'fake' self.project_id = 'fake' self.context = context.get_admin_context() temp_dir = self.useFixture(fixtures.TempDir()).path self.flags(instances_path=temp_dir, firewall_driver=None) self.flags(snapshots_directory=temp_dir, group='libvirt') self.useFixture(fixtures.MonkeyPatch( 'nova.virt.libvirt.driver.libvirt_utils', 
fake_libvirt_utils)) self.flags(sysinfo_serial="hardware", group="libvirt") # normally loaded during nova-compute startup os_vif.initialize() self.useFixture(fixtures.MonkeyPatch( 'nova.virt.libvirt.imagebackend.libvirt_utils', fake_libvirt_utils)) def fake_extend(image, size, use_cow=False): pass self.stubs.Set(libvirt_driver.disk_api, 'extend', fake_extend) self.stubs.Set(imagebackend.Image, 'resolve_driver_format', imagebackend.Image._get_driver_format) self.useFixture(fakelibvirt.FakeLibvirtFixture()) self.test_instance = _create_test_instance() self.test_image_meta = { "disk_format": "raw", } self.image_service = nova.tests.unit.image.fake.stub_out_image_service( self) self.device_xml_tmpl = """ 58a84f6d-3f0c-4e19-a0af-eb657b790657
""" def relpath(self, path): return os.path.relpath(path, CONF.instances_path) def tearDown(self): nova.tests.unit.image.fake.FakeImageService_reset() super(LibvirtConnTestCase, self).tearDown() def test_driver_capabilities(self): drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) self.assertTrue(drvr.capabilities['has_imagecache'], 'Driver capabilities for \'has_imagecache\' ' 'is invalid') self.assertTrue(drvr.capabilities['supports_recreate'], 'Driver capabilities for \'supports_recreate\' ' 'is invalid') self.assertFalse(drvr.capabilities['supports_migrate_to_same_host'], 'Driver capabilities for ' '\'supports_migrate_to_same_host\' is invalid') self.assertTrue(drvr.capabilities['supports_attach_interface'], 'Driver capabilities for ' '\'supports_attach_interface\' ' 'is invalid') self.assertTrue(drvr.capabilities['supports_extend_volume'], 'Driver capabilities for ' '\'supports_extend_volume\' ' 'is invalid') self.assertFalse(drvr.requires_allocation_refresh, 'Driver does not need allocation refresh') def create_fake_libvirt_mock(self, **kwargs): """Defining mocks for LibvirtDriver(libvirt is not used).""" # A fake libvirt.virConnect class FakeLibvirtDriver(object): def defineXML(self, xml): return FakeVirtDomain() # Creating mocks fake = FakeLibvirtDriver() # Customizing above fake if necessary for key, val in kwargs.items(): fake.__setattr__(key, val) self.stubs.Set(libvirt_driver.LibvirtDriver, '_conn', fake) self.stubs.Set(host.Host, 'get_connection', lambda x: fake) def fake_lookup(self, instance_name): return FakeVirtDomain() def fake_execute(self, *args, **kwargs): open(args[-1], "a").close() def _create_service(self, **kwargs): service_ref = {'host': kwargs.get('host', 'dummy'), 'disabled': kwargs.get('disabled', False), 'binary': 'nova-compute', 'topic': 'compute', 'report_count': 0} return objects.Service(**service_ref) def _get_pause_flag(self, drvr, network_info, power_on=True, vifs_already_plugged=False): timeout = CONF.vif_plugging_timeout events = [] if (drvr._conn_supports_start_paused and utils.is_neutron() and not vifs_already_plugged and power_on and timeout): events = drvr._get_neutron_events(network_info) return bool(events) def test_public_api_signatures(self): baseinst = driver.ComputeDriver(None) inst = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) self.assertPublicAPISignatures(baseinst, inst) def test_legacy_block_device_info(self): drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) self.assertFalse(drvr.need_legacy_block_device_info) @mock.patch.object(host.Host, "has_min_version") def test_min_version_start_ok(self, mock_version): mock_version.return_value = True drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) drvr.init_host("dummyhost") @mock.patch.object(host.Host, "has_min_version") def test_min_version_start_abort(self, mock_version): mock_version.return_value = False drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) self.assertRaises(exception.NovaException, drvr.init_host, "dummyhost") @mock.patch.object(fakelibvirt.Connection, 'getLibVersion', return_value=versionutils.convert_version_to_int( libvirt_driver.NEXT_MIN_LIBVIRT_VERSION) - 1) @mock.patch.object(libvirt_driver.LOG, 'warning') def test_next_min_version_deprecation_warning(self, mock_warning, mock_get_libversion): # Skip test if there's no currently planned new min version if (versionutils.convert_version_to_int( libvirt_driver.NEXT_MIN_LIBVIRT_VERSION) == versionutils.convert_version_to_int( libvirt_driver.MIN_LIBVIRT_VERSION)): 
self.skipTest("NEXT_MIN_LIBVIRT_VERSION == MIN_LIBVIRT_VERSION") # Test that a warning is logged if the libvirt version is less than # the next required minimum version. drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) drvr.init_host("dummyhost") # assert that the next min version is in a warning message expected_arg = {'version': versionutils.convert_version_to_str( versionutils.convert_version_to_int( libvirt_driver.NEXT_MIN_LIBVIRT_VERSION))} version_arg_found = False for call in mock_warning.call_args_list: if call[0][1] == expected_arg: version_arg_found = True break self.assertTrue(version_arg_found) @mock.patch.object(fakelibvirt.Connection, 'getVersion', return_value=versionutils.convert_version_to_int( libvirt_driver.NEXT_MIN_QEMU_VERSION) - 1) @mock.patch.object(libvirt_driver.LOG, 'warning') def test_next_min_qemu_version_deprecation_warning(self, mock_warning, mock_get_libversion): # Skip test if there's no currently planned new min version if (versionutils.convert_version_to_int( libvirt_driver.NEXT_MIN_QEMU_VERSION) == versionutils.convert_version_to_int( libvirt_driver.MIN_QEMU_VERSION)): self.skipTest("NEXT_MIN_QEMU_VERSION == MIN_QEMU_VERSION") # Test that a warning is logged if the libvirt version is less than # the next required minimum version. drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) drvr.init_host("dummyhost") # assert that the next min version is in a warning message expected_arg = {'version': versionutils.convert_version_to_str( versionutils.convert_version_to_int( libvirt_driver.NEXT_MIN_QEMU_VERSION))} version_arg_found = False for call in mock_warning.call_args_list: if call[0][1] == expected_arg: version_arg_found = True break self.assertTrue(version_arg_found) @mock.patch.object(fakelibvirt.Connection, 'getLibVersion', return_value=versionutils.convert_version_to_int( libvirt_driver.NEXT_MIN_LIBVIRT_VERSION)) @mock.patch.object(libvirt_driver.LOG, 'warning') def test_next_min_version_ok(self, mock_warning, mock_get_libversion): # Skip test if there's no currently planned new min version if (versionutils.convert_version_to_int( libvirt_driver.NEXT_MIN_LIBVIRT_VERSION) == versionutils.convert_version_to_int( libvirt_driver.MIN_LIBVIRT_VERSION)): self.skipTest("NEXT_MIN_LIBVIRT_VERSION == MIN_LIBVIRT_VERSION") # Test that a warning is not logged if the libvirt version is greater # than or equal to NEXT_MIN_LIBVIRT_VERSION. drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) drvr.init_host("dummyhost") # assert that the next min version is in a warning message expected_arg = {'version': versionutils.convert_version_to_str( versionutils.convert_version_to_int( libvirt_driver.NEXT_MIN_LIBVIRT_VERSION))} version_arg_found = False for call in mock_warning.call_args_list: if call[0][1] == expected_arg: version_arg_found = True break self.assertFalse(version_arg_found) @mock.patch.object(fakelibvirt.Connection, 'getVersion', return_value=versionutils.convert_version_to_int( libvirt_driver.NEXT_MIN_QEMU_VERSION)) @mock.patch.object(libvirt_driver.LOG, 'warning') def test_next_min_qemu_version_ok(self, mock_warning, mock_get_libversion): # Skip test if there's no currently planned new min version if (versionutils.convert_version_to_int( libvirt_driver.NEXT_MIN_QEMU_VERSION) == versionutils.convert_version_to_int( libvirt_driver.MIN_QEMU_VERSION)): self.skipTest("NEXT_MIN_QEMU_VERSION == MIN_QEMU_VERSION") # Test that a warning is not logged if the libvirt version is greater # than or equal to NEXT_MIN_QEMU_VERSION. 
drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) drvr.init_host("dummyhost") # assert that the next min version is in a warning message expected_arg = {'version': versionutils.convert_version_to_str( versionutils.convert_version_to_int( libvirt_driver.NEXT_MIN_QEMU_VERSION))} version_arg_found = False for call in mock_warning.call_args_list: if call[0][1] == expected_arg: version_arg_found = True break self.assertFalse(version_arg_found) # NOTE(sdague): python2.7 and python3.5 have different behaviors # when it comes to comparing against the sentinel, so # has_min_version is needed to pass python3.5. @mock.patch.object(nova.virt.libvirt.host.Host, "has_min_version", return_value=True) @mock.patch.object(fakelibvirt.Connection, 'getVersion', return_value=mock.sentinel.qemu_version) def test_qemu_image_version(self, mock_get_libversion, min_ver): """Test that init_host sets qemu image version A sentinel is used here so that we aren't chasing this value against minimums that get raised over time. """ drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) drvr.init_host("dummyhost") self.assertEqual(images.QEMU_VERSION, mock.sentinel.qemu_version) @mock.patch.object(fakelibvirt.Connection, 'getLibVersion', return_value=versionutils.convert_version_to_int( libvirt_driver.MIN_LIBVIRT_OTHER_ARCH.get( fields.Architecture.PPC64)) - 1) @mock.patch.object(fields.Architecture, "from_host", return_value=fields.Architecture.PPC64) def test_min_version_ppc_old_libvirt(self, mock_libv, mock_arch): drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) self.assertRaises(exception.NovaException, drvr.init_host, "dummyhost") @mock.patch.object(fakelibvirt.Connection, 'getLibVersion', return_value=versionutils.convert_version_to_int( libvirt_driver.MIN_LIBVIRT_OTHER_ARCH.get( fields.Architecture.PPC64))) @mock.patch.object(fields.Architecture, "from_host", return_value=fields.Architecture.PPC64) def test_min_version_ppc_ok(self, mock_libv, mock_arch): drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) drvr.init_host("dummyhost") @mock.patch.object(fakelibvirt.Connection, 'getLibVersion', return_value=versionutils.convert_version_to_int( libvirt_driver.MIN_LIBVIRT_OTHER_ARCH.get( fields.Architecture.S390X)) - 1) @mock.patch.object(fakelibvirt.Connection, 'getVersion', return_value=versionutils.convert_version_to_int( libvirt_driver.MIN_QEMU_OTHER_ARCH.get( fields.Architecture.S390X))) @mock.patch.object(fields.Architecture, "from_host", return_value=fields.Architecture.S390X) def test_min_version_s390_old_libvirt(self, mock_libv, mock_qemu, mock_arch): drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) self.assertRaises(exception.NovaException, drvr.init_host, "dummyhost") @mock.patch.object(fakelibvirt.Connection, 'getLibVersion', return_value=versionutils.convert_version_to_int( libvirt_driver.MIN_LIBVIRT_OTHER_ARCH.get( fields.Architecture.S390X))) @mock.patch.object(fakelibvirt.Connection, 'getVersion', return_value=versionutils.convert_version_to_int( libvirt_driver.MIN_QEMU_OTHER_ARCH.get( fields.Architecture.S390X)) - 1) @mock.patch.object(fields.Architecture, "from_host", return_value=fields.Architecture.S390X) def test_min_version_s390_old_qemu(self, mock_libv, mock_qemu, mock_arch): drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) self.assertRaises(exception.NovaException, drvr.init_host, "dummyhost") @mock.patch.object(fakelibvirt.Connection, 'getLibVersion', return_value=versionutils.convert_version_to_int( 
libvirt_driver.MIN_LIBVIRT_OTHER_ARCH.get( fields.Architecture.S390X))) @mock.patch.object(fakelibvirt.Connection, 'getVersion', return_value=versionutils.convert_version_to_int( libvirt_driver.MIN_QEMU_OTHER_ARCH.get( fields.Architecture.S390X))) @mock.patch.object(fields.Architecture, "from_host", return_value=fields.Architecture.S390X) def test_min_version_s390_ok(self, mock_libv, mock_qemu, mock_arch): drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) drvr.init_host("dummyhost") def _do_test_parse_migration_flags(self, lm_expected=None, bm_expected=None): drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) drvr._parse_migration_flags() if lm_expected is not None: self.assertEqual(lm_expected, drvr._live_migration_flags) if bm_expected is not None: self.assertEqual(bm_expected, drvr._block_migration_flags) def test_parse_live_migration_flags_default(self): self._do_test_parse_migration_flags( lm_expected=(libvirt_driver.libvirt.VIR_MIGRATE_UNDEFINE_SOURCE | libvirt_driver.libvirt.VIR_MIGRATE_PERSIST_DEST | libvirt_driver.libvirt.VIR_MIGRATE_PEER2PEER | libvirt_driver.libvirt.VIR_MIGRATE_LIVE)) def test_parse_live_migration_flags(self): self._do_test_parse_migration_flags( lm_expected=(libvirt_driver.libvirt.VIR_MIGRATE_UNDEFINE_SOURCE | libvirt_driver.libvirt.VIR_MIGRATE_PERSIST_DEST | libvirt_driver.libvirt.VIR_MIGRATE_PEER2PEER | libvirt_driver.libvirt.VIR_MIGRATE_LIVE)) def test_parse_block_migration_flags_default(self): self._do_test_parse_migration_flags( bm_expected=(libvirt_driver.libvirt.VIR_MIGRATE_UNDEFINE_SOURCE | libvirt_driver.libvirt.VIR_MIGRATE_PERSIST_DEST | libvirt_driver.libvirt.VIR_MIGRATE_PEER2PEER | libvirt_driver.libvirt.VIR_MIGRATE_LIVE | libvirt_driver.libvirt.VIR_MIGRATE_NON_SHARED_INC)) def test_parse_block_migration_flags(self): self._do_test_parse_migration_flags( bm_expected=(libvirt_driver.libvirt.VIR_MIGRATE_UNDEFINE_SOURCE | libvirt_driver.libvirt.VIR_MIGRATE_PERSIST_DEST | libvirt_driver.libvirt.VIR_MIGRATE_PEER2PEER | libvirt_driver.libvirt.VIR_MIGRATE_LIVE | libvirt_driver.libvirt.VIR_MIGRATE_NON_SHARED_INC)) def test_parse_migration_flags_p2p_xen(self): self.flags(virt_type='xen', group='libvirt') self._do_test_parse_migration_flags( lm_expected=(libvirt_driver.libvirt.VIR_MIGRATE_UNDEFINE_SOURCE | libvirt_driver.libvirt.VIR_MIGRATE_PERSIST_DEST | libvirt_driver.libvirt.VIR_MIGRATE_LIVE), bm_expected=(libvirt_driver.libvirt.VIR_MIGRATE_UNDEFINE_SOURCE | libvirt_driver.libvirt.VIR_MIGRATE_PERSIST_DEST | libvirt_driver.libvirt.VIR_MIGRATE_LIVE | libvirt_driver.libvirt.VIR_MIGRATE_NON_SHARED_INC)) def test_live_migration_tunnelled_none(self): self.flags(live_migration_tunnelled=None, group='libvirt') self._do_test_parse_migration_flags( lm_expected=(libvirt_driver.libvirt.VIR_MIGRATE_LIVE | libvirt_driver.libvirt.VIR_MIGRATE_PEER2PEER | libvirt_driver.libvirt.VIR_MIGRATE_UNDEFINE_SOURCE | libvirt_driver.libvirt.VIR_MIGRATE_PERSIST_DEST | libvirt_driver.libvirt.VIR_MIGRATE_TUNNELLED), bm_expected=(libvirt_driver.libvirt.VIR_MIGRATE_LIVE | libvirt_driver.libvirt.VIR_MIGRATE_PEER2PEER | libvirt_driver.libvirt.VIR_MIGRATE_UNDEFINE_SOURCE | libvirt_driver.libvirt.VIR_MIGRATE_PERSIST_DEST | libvirt_driver.libvirt.VIR_MIGRATE_NON_SHARED_INC | libvirt_driver.libvirt.VIR_MIGRATE_TUNNELLED)) def test_live_migration_tunnelled_true(self): self.flags(live_migration_tunnelled=True, group='libvirt') self._do_test_parse_migration_flags( lm_expected=(libvirt_driver.libvirt.VIR_MIGRATE_LIVE |
libvirt_driver.libvirt.VIR_MIGRATE_PEER2PEER | libvirt_driver.libvirt.VIR_MIGRATE_UNDEFINE_SOURCE | libvirt_driver.libvirt.VIR_MIGRATE_PERSIST_DEST | libvirt_driver.libvirt.VIR_MIGRATE_TUNNELLED), bm_expected=(libvirt_driver.libvirt.VIR_MIGRATE_LIVE | libvirt_driver.libvirt.VIR_MIGRATE_PEER2PEER | libvirt_driver.libvirt.VIR_MIGRATE_UNDEFINE_SOURCE | libvirt_driver.libvirt.VIR_MIGRATE_PERSIST_DEST | libvirt_driver.libvirt.VIR_MIGRATE_NON_SHARED_INC | libvirt_driver.libvirt.VIR_MIGRATE_TUNNELLED)) @mock.patch.object(host.Host, 'has_min_version', return_value=True) def test_live_migration_permit_postcopy_true(self, host): self.flags(live_migration_permit_post_copy=True, group='libvirt') self._do_test_parse_migration_flags( lm_expected=(libvirt_driver.libvirt.VIR_MIGRATE_UNDEFINE_SOURCE | libvirt_driver.libvirt.VIR_MIGRATE_PERSIST_DEST | libvirt_driver.libvirt.VIR_MIGRATE_PEER2PEER | libvirt_driver.libvirt.VIR_MIGRATE_LIVE | libvirt_driver.libvirt.VIR_MIGRATE_POSTCOPY), bm_expected=(libvirt_driver.libvirt.VIR_MIGRATE_UNDEFINE_SOURCE | libvirt_driver.libvirt.VIR_MIGRATE_PERSIST_DEST | libvirt_driver.libvirt.VIR_MIGRATE_PEER2PEER | libvirt_driver.libvirt.VIR_MIGRATE_LIVE | libvirt_driver.libvirt.VIR_MIGRATE_NON_SHARED_INC | libvirt_driver.libvirt.VIR_MIGRATE_POSTCOPY)) @mock.patch.object(host.Host, 'has_min_version', return_value=True) def test_live_migration_permit_auto_converge_true(self, host): self.flags(live_migration_permit_auto_converge=True, group='libvirt') self._do_test_parse_migration_flags( lm_expected=(libvirt_driver.libvirt.VIR_MIGRATE_UNDEFINE_SOURCE | libvirt_driver.libvirt.VIR_MIGRATE_PERSIST_DEST | libvirt_driver.libvirt.VIR_MIGRATE_PEER2PEER | libvirt_driver.libvirt.VIR_MIGRATE_LIVE | libvirt_driver.libvirt.VIR_MIGRATE_AUTO_CONVERGE), bm_expected=(libvirt_driver.libvirt.VIR_MIGRATE_UNDEFINE_SOURCE | libvirt_driver.libvirt.VIR_MIGRATE_PERSIST_DEST | libvirt_driver.libvirt.VIR_MIGRATE_PEER2PEER | libvirt_driver.libvirt.VIR_MIGRATE_LIVE | libvirt_driver.libvirt.VIR_MIGRATE_NON_SHARED_INC | libvirt_driver.libvirt.VIR_MIGRATE_AUTO_CONVERGE)) @mock.patch.object(host.Host, 'has_min_version', return_value=True) def test_live_migration_permit_auto_converge_and_post_copy_true(self, host): self.flags(live_migration_permit_auto_converge=True, group='libvirt') self.flags(live_migration_permit_post_copy=True, group='libvirt') self._do_test_parse_migration_flags( lm_expected=(libvirt_driver.libvirt.VIR_MIGRATE_UNDEFINE_SOURCE | libvirt_driver.libvirt.VIR_MIGRATE_PERSIST_DEST | libvirt_driver.libvirt.VIR_MIGRATE_PEER2PEER | libvirt_driver.libvirt.VIR_MIGRATE_LIVE | libvirt_driver.libvirt.VIR_MIGRATE_POSTCOPY), bm_expected=(libvirt_driver.libvirt.VIR_MIGRATE_UNDEFINE_SOURCE | libvirt_driver.libvirt.VIR_MIGRATE_PERSIST_DEST | libvirt_driver.libvirt.VIR_MIGRATE_PEER2PEER | libvirt_driver.libvirt.VIR_MIGRATE_LIVE | libvirt_driver.libvirt.VIR_MIGRATE_NON_SHARED_INC | libvirt_driver.libvirt.VIR_MIGRATE_POSTCOPY)) @mock.patch.object(host.Host, 'has_min_version') def test_live_migration_auto_converge_and_post_copy_true_old_libvirt( self, mock_host): self.flags(live_migration_permit_auto_converge=True, group='libvirt') self.flags(live_migration_permit_post_copy=True, group='libvirt') def fake_has_min_version(lv_ver=None, hv_ver=None, hv_type=None): if (lv_ver == libvirt_driver.MIN_LIBVIRT_POSTCOPY_VERSION and hv_ver == libvirt_driver.MIN_QEMU_POSTCOPY_VERSION): return False return True mock_host.side_effect = fake_has_min_version self._do_test_parse_migration_flags( 
lm_expected=(libvirt_driver.libvirt.VIR_MIGRATE_UNDEFINE_SOURCE | libvirt_driver.libvirt.VIR_MIGRATE_PERSIST_DEST | libvirt_driver.libvirt.VIR_MIGRATE_PEER2PEER | libvirt_driver.libvirt.VIR_MIGRATE_LIVE | libvirt_driver.libvirt.VIR_MIGRATE_AUTO_CONVERGE), bm_expected=(libvirt_driver.libvirt.VIR_MIGRATE_UNDEFINE_SOURCE | libvirt_driver.libvirt.VIR_MIGRATE_PERSIST_DEST | libvirt_driver.libvirt.VIR_MIGRATE_PEER2PEER | libvirt_driver.libvirt.VIR_MIGRATE_LIVE | libvirt_driver.libvirt.VIR_MIGRATE_NON_SHARED_INC | libvirt_driver.libvirt.VIR_MIGRATE_AUTO_CONVERGE)) @mock.patch.object(host.Host, 'has_min_version', return_value=False) def test_live_migration_permit_postcopy_true_old_libvirt(self, host): self.flags(live_migration_permit_post_copy=True, group='libvirt') self._do_test_parse_migration_flags( lm_expected=(libvirt_driver.libvirt.VIR_MIGRATE_UNDEFINE_SOURCE | libvirt_driver.libvirt.VIR_MIGRATE_PERSIST_DEST | libvirt_driver.libvirt.VIR_MIGRATE_PEER2PEER | libvirt_driver.libvirt.VIR_MIGRATE_LIVE), bm_expected=(libvirt_driver.libvirt.VIR_MIGRATE_UNDEFINE_SOURCE | libvirt_driver.libvirt.VIR_MIGRATE_PERSIST_DEST | libvirt_driver.libvirt.VIR_MIGRATE_PEER2PEER | libvirt_driver.libvirt.VIR_MIGRATE_LIVE | libvirt_driver.libvirt.VIR_MIGRATE_NON_SHARED_INC)) def test_live_migration_permit_postcopy_false(self): self._do_test_parse_migration_flags( lm_expected=(libvirt_driver.libvirt.VIR_MIGRATE_UNDEFINE_SOURCE | libvirt_driver.libvirt.VIR_MIGRATE_PERSIST_DEST | libvirt_driver.libvirt.VIR_MIGRATE_PEER2PEER | libvirt_driver.libvirt.VIR_MIGRATE_LIVE), bm_expected=(libvirt_driver.libvirt.VIR_MIGRATE_UNDEFINE_SOURCE | libvirt_driver.libvirt.VIR_MIGRATE_PERSIST_DEST | libvirt_driver.libvirt.VIR_MIGRATE_PEER2PEER | libvirt_driver.libvirt.VIR_MIGRATE_LIVE | libvirt_driver.libvirt.VIR_MIGRATE_NON_SHARED_INC)) def test_live_migration_permit_autoconverge_false(self): self._do_test_parse_migration_flags( lm_expected=(libvirt_driver.libvirt.VIR_MIGRATE_UNDEFINE_SOURCE | libvirt_driver.libvirt.VIR_MIGRATE_PERSIST_DEST | libvirt_driver.libvirt.VIR_MIGRATE_PEER2PEER | libvirt_driver.libvirt.VIR_MIGRATE_LIVE), bm_expected=(libvirt_driver.libvirt.VIR_MIGRATE_UNDEFINE_SOURCE | libvirt_driver.libvirt.VIR_MIGRATE_PERSIST_DEST | libvirt_driver.libvirt.VIR_MIGRATE_PEER2PEER | libvirt_driver.libvirt.VIR_MIGRATE_LIVE | libvirt_driver.libvirt.VIR_MIGRATE_NON_SHARED_INC)) @mock.patch('nova.utils.get_image_from_system_metadata') @mock.patch.object(host.Host, 'has_min_version', return_value=True) @mock.patch('nova.virt.libvirt.host.Host.get_guest') def test_set_admin_password(self, mock_get_guest, ver, mock_image): self.flags(virt_type='kvm', group='libvirt') instance = objects.Instance(**self.test_instance) mock_image.return_value = {"properties": { "hw_qemu_guest_agent": "yes"}} mock_guest = mock.Mock(spec=libvirt_guest.Guest) mock_get_guest.return_value = mock_guest drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) drvr.set_admin_password(instance, "123") mock_guest.set_user_password.assert_called_once_with("root", "123") @mock.patch.object(host.Host, 'has_min_version', return_value=True) @mock.patch('nova.virt.libvirt.host.Host.get_guest') def test_set_admin_password_parallels(self, mock_get_guest, ver): self.flags(virt_type='parallels', group='libvirt') instance = objects.Instance(**self.test_instance) mock_guest = mock.Mock(spec=libvirt_guest.Guest) mock_get_guest.return_value = mock_guest drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) drvr.set_admin_password(instance, "123") 
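        # The driver is expected to proxy the password change straight to
        # Guest.set_user_password() for the default "root" account; unlike
        # the kvm case above, no image-metadata stub is set up here, which
        # suggests the parallels virt_type does not gate this on the
        # hw_qemu_guest_agent image property.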
mock_guest.set_user_password.assert_called_once_with("root", "123") @mock.patch('nova.utils.get_image_from_system_metadata') @mock.patch.object(host.Host, 'has_min_version', return_value=True) @mock.patch('nova.virt.libvirt.host.Host.get_guest') def test_set_admin_password_windows(self, mock_get_guest, ver, mock_image): self.flags(virt_type='kvm', group='libvirt') instance = objects.Instance(**self.test_instance) instance.os_type = "windows" mock_image.return_value = {"properties": { "hw_qemu_guest_agent": "yes"}} mock_guest = mock.Mock(spec=libvirt_guest.Guest) mock_get_guest.return_value = mock_guest drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) drvr.set_admin_password(instance, "123") mock_guest.set_user_password.assert_called_once_with( "Administrator", "123") @mock.patch('nova.utils.get_image_from_system_metadata') @mock.patch.object(host.Host, 'has_min_version', return_value=True) @mock.patch('nova.virt.libvirt.host.Host.get_guest') def test_set_admin_password_image(self, mock_get_guest, ver, mock_image): self.flags(virt_type='kvm', group='libvirt') instance = objects.Instance(**self.test_instance) mock_image.return_value = {"properties": { "hw_qemu_guest_agent": "yes", "os_admin_user": "foo" }} mock_guest = mock.Mock(spec=libvirt_guest.Guest) mock_get_guest.return_value = mock_guest drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) drvr.set_admin_password(instance, "123") mock_guest.set_user_password.assert_called_once_with("foo", "123") @mock.patch('nova.utils.get_image_from_system_metadata') @mock.patch.object(host.Host, 'has_min_version', return_value=False) def test_set_admin_password_bad_version(self, mock_svc, mock_image): instance = objects.Instance(**self.test_instance) mock_image.return_value = {"properties": { "hw_qemu_guest_agent": "yes"}} for hyp in ('kvm', 'parallels'): self.flags(virt_type=hyp, group='libvirt') drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) self.assertRaises(exception.SetAdminPasswdNotSupported, drvr.set_admin_password, instance, "123") @mock.patch('nova.utils.get_image_from_system_metadata') @mock.patch.object(host.Host, 'has_min_version', return_value=True) def test_set_admin_password_bad_hyp(self, mock_svc, mock_image): self.flags(virt_type='lxc', group='libvirt') instance = objects.Instance(**self.test_instance) mock_image.return_value = {"properties": { "hw_qemu_guest_agent": "yes"}} drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) self.assertRaises(exception.SetAdminPasswdNotSupported, drvr.set_admin_password, instance, "123") @mock.patch.object(host.Host, 'has_min_version', return_value=True) def test_set_admin_password_guest_agent_not_running(self, mock_svc): self.flags(virt_type='kvm', group='libvirt') instance = objects.Instance(**self.test_instance) drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) self.assertRaises(exception.QemuGuestAgentNotEnabled, drvr.set_admin_password, instance, "123") @mock.patch('nova.utils.get_image_from_system_metadata') @mock.patch.object(host.Host, 'has_min_version', return_value=True) @mock.patch('nova.virt.libvirt.host.Host.get_guest') def test_set_admin_password_error(self, mock_get_guest, ver, mock_image): self.flags(virt_type='kvm', group='libvirt') instance = objects.Instance(**self.test_instance) mock_image.return_value = {"properties": { "hw_qemu_guest_agent": "yes"}} mock_guest = mock.Mock(spec=libvirt_guest.Guest) mock_guest.set_user_password.side_effect = ( fakelibvirt.libvirtError("error")) mock_get_guest.return_value = mock_guest 
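        # A generic libvirtError raised while setting the password is
        # expected to surface from the driver as a NovaException, as the
        # assertion below verifies.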
drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) self.assertRaises(exception.NovaException, drvr.set_admin_password, instance, "123") @mock.patch('nova.utils.get_image_from_system_metadata') @mock.patch.object(host.Host, 'has_min_version', return_value=True) @mock.patch('nova.virt.libvirt.host.Host.get_guest') def test_set_admin_password_error_with_unicode( self, mock_get_guest, ver, mock_image): self.flags(virt_type='kvm', group='libvirt') instance = objects.Instance(**self.test_instance) mock_image.return_value = {"properties": { "hw_qemu_guest_agent": "yes"}} mock_guest = mock.Mock(spec=libvirt_guest.Guest) mock_guest.set_user_password.side_effect = ( fakelibvirt.libvirtError( b"failed: \xe9\x94\x99\xe8\xaf\xaf\xe3\x80\x82")) mock_get_guest.return_value = mock_guest drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) self.assertRaises(exception.NovaException, drvr.set_admin_password, instance, "123") @mock.patch('nova.utils.get_image_from_system_metadata') @mock.patch.object(host.Host, 'has_min_version', return_value=True) @mock.patch('nova.virt.libvirt.host.Host.get_guest') def test_set_admin_password_not_implemented( self, mock_get_guest, ver, mock_image): self.flags(virt_type='kvm', group='libvirt') instance = objects.Instance(**self.test_instance) mock_image.return_value = {"properties": { "hw_qemu_guest_agent": "yes"}} mock_guest = mock.Mock(spec=libvirt_guest.Guest) not_implemented = fakelibvirt.make_libvirtError( fakelibvirt.libvirtError, "Guest agent disappeared while executing command", error_code=fakelibvirt.VIR_ERR_AGENT_UNRESPONSIVE) mock_guest.set_user_password.side_effect = not_implemented mock_get_guest.return_value = mock_guest drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) self.assertRaises(NotImplementedError, drvr.set_admin_password, instance, "123") @mock.patch.object(objects.Service, 'save') @mock.patch.object(objects.Service, 'get_by_compute_host') def test_set_host_enabled_with_disable(self, mock_svc, mock_save): # Tests disabling an enabled host. drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) svc = self._create_service(host='fake-mini') mock_svc.return_value = svc drvr._set_host_enabled(False) self.assertTrue(svc.disabled) mock_save.assert_called_once_with() @mock.patch.object(objects.Service, 'save') @mock.patch.object(objects.Service, 'get_by_compute_host') def test_set_host_enabled_with_enable(self, mock_svc, mock_save): # Tests enabling a disabled host. drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) svc = self._create_service(disabled=True, host='fake-mini') mock_svc.return_value = svc drvr._set_host_enabled(True) # since disabled_reason is not set and not prefixed with "AUTO:", # service should not be enabled. mock_save.assert_not_called() self.assertTrue(svc.disabled) @mock.patch.object(objects.Service, 'save') @mock.patch.object(objects.Service, 'get_by_compute_host') def test_set_host_enabled_with_enable_state_enabled(self, mock_svc, mock_save): # Tests enabling an enabled host. drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) svc = self._create_service(disabled=False, host='fake-mini') mock_svc.return_value = svc drvr._set_host_enabled(True) self.assertFalse(svc.disabled) mock_save.assert_not_called() @mock.patch.object(objects.Service, 'save') @mock.patch.object(objects.Service, 'get_by_compute_host') def test_set_host_enabled_with_disable_state_disabled(self, mock_svc, mock_save): # Tests disabling a disabled host. 
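        # Disabling a service that is already disabled should be a no-op:
        # the service record must not be saved again and it stays disabled.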
drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) svc = self._create_service(disabled=True, host='fake-mini') mock_svc.return_value = svc drvr._set_host_enabled(False) mock_save.assert_not_called() self.assertTrue(svc.disabled) def test_set_host_enabled_swallows_exceptions(self): # Tests that set_host_enabled will swallow exceptions coming from the # db_api code so they don't break anything calling it, e.g. the # _get_new_connection method. drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) with mock.patch.object(db, 'service_get_by_compute_host') as db_mock: # Make db.service_get_by_compute_host raise NovaException; this # is more robust than just raising ComputeHostNotFound. db_mock.side_effect = exception.NovaException drvr._set_host_enabled(False) @mock.patch.object(fakelibvirt.virConnect, "nodeDeviceLookupByName") def test_prepare_pci_device(self, mock_lookup): pci_devices = [dict(hypervisor_name='xxx')] self.flags(virt_type='xen', group='libvirt') drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) conn = drvr._host.get_connection() mock_lookup.side_effect = lambda x: fakelibvirt.NodeDevice(conn) drvr._prepare_pci_devices_for_use(pci_devices) @mock.patch.object(fakelibvirt.virConnect, "nodeDeviceLookupByName") @mock.patch.object(fakelibvirt.virNodeDevice, "dettach") def test_prepare_pci_device_exception(self, mock_detach, mock_lookup): pci_devices = [dict(hypervisor_name='xxx', id='id1', instance_uuid='uuid')] self.flags(virt_type='xen', group='libvirt') drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) conn = drvr._host.get_connection() mock_lookup.side_effect = lambda x: fakelibvirt.NodeDevice(conn) mock_detach.side_effect = fakelibvirt.libvirtError("xxxx") self.assertRaises(exception.PciDevicePrepareFailed, drvr._prepare_pci_devices_for_use, pci_devices) @mock.patch.object(host.Host, "has_min_version", return_value=False) def test_device_metadata(self, mock_version): xml = """ dummy 32dfcb37-5af1-552b-357c-be8c3aa38310 1048576 1 hvm
""" drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) dom = fakelibvirt.Domain(drvr._get_connection(), xml, False) guest = libvirt_guest.Guest(dom) instance_ref = objects.Instance(**self.test_instance) bdms = block_device_obj.block_device_make_list_from_dicts( self.context, [ fake_block_device.FakeDbBlockDeviceDict( {'id': 1, 'source_type': 'volume', 'destination_type': 'volume', 'device_name': '/dev/sda', 'tag': "db", 'volume_id': uuids.volume_1}), fake_block_device.FakeDbBlockDeviceDict( {'id': 2, 'source_type': 'volume', 'destination_type': 'volume', 'device_name': '/dev/hda', 'tag': "nfvfunc1", 'volume_id': uuids.volume_2}), fake_block_device.FakeDbBlockDeviceDict( {'id': 3, 'source_type': 'volume', 'destination_type': 'volume', 'device_name': '/dev/sdb', 'tag': "nfvfunc2", 'volume_id': uuids.volume_3}), fake_block_device.FakeDbBlockDeviceDict( {'id': 4, 'source_type': 'volume', 'destination_type': 'volume', 'device_name': '/dev/hdb', 'volume_id': uuids.volume_4}), fake_block_device.FakeDbBlockDeviceDict( {'id': 5, 'source_type': 'volume', 'destination_type': 'volume', 'device_name': '/dev/vda', 'tag': "nfvfunc3", 'volume_id': uuids.volume_5}), fake_block_device.FakeDbBlockDeviceDict( {'id': 6, 'source_type': 'volume', 'destination_type': 'volume', 'device_name': '/dev/vdb', 'tag': "nfvfunc4", 'volume_id': uuids.volume_6}), fake_block_device.FakeDbBlockDeviceDict( {'id': 7, 'source_type': 'volume', 'destination_type': 'volume', 'device_name': '/dev/vdc', 'tag': "nfvfunc5", 'volume_id': uuids.volume_7}), ] ) vif = obj_vif.VirtualInterface(context=self.context) vif.address = '52:54:00:f6:35:8f' vif.network_id = 123 vif.instance_uuid = '32dfcb37-5af1-552b-357c-be8c3aa38310' vif.uuid = '12ec4b21-ef22-6c21-534b-ba3e3ab3a311' vif.tag = 'mytag1' vif1 = obj_vif.VirtualInterface(context=self.context) vif1.address = '51:5a:2c:a4:5e:1b' vif1.network_id = 123 vif1.instance_uuid = '32dfcb37-5af1-552b-357c-be8c3aa38310' vif1.uuid = 'abec4b21-ef22-6c21-534b-ba3e3ab3a312' vif2 = obj_vif.VirtualInterface(context=self.context) vif2.address = 'fa:16:3e:d1:28:e4' vif2.network_id = 123 vif2.instance_uuid = '32dfcb37-5af1-552b-357c-be8c3aa38310' vif2.uuid = '645686e4-7086-4eab-8c2f-c41f017a1b16' vif2.tag = 'mytag2' vif3 = obj_vif.VirtualInterface(context=self.context) vif3.address = '52:54:00:14:6f:50' vif3.network_id = 123 vif3.instance_uuid = '32dfcb37-5af1-552b-357c-be8c3aa38310' vif3.uuid = '99cc3604-782d-4a32-a27c-bc33ac56ce86' vif3.tag = 'mytag3' vifs = [vif, vif1, vif2, vif3] network_info = _fake_network_info(self, 4) network_info[0]['vnic_type'] = network_model.VNIC_TYPE_DIRECT_PHYSICAL network_info[0]['address'] = "51:5a:2c:a4:5e:1b" network_info[0]['details'] = dict(vlan='2145') instance_ref.info_cache = objects.InstanceInfoCache( network_info=network_info) with test.nested( mock.patch('nova.objects.VirtualInterfaceList' '.get_by_instance_uuid', return_value=vifs), mock.patch('nova.objects.BlockDeviceMappingList' '.get_by_instance_uuid', return_value=bdms), mock.patch('nova.virt.libvirt.host.Host.get_guest', return_value=guest), mock.patch.object(nova.virt.libvirt.guest.Guest, 'get_xml_desc', return_value=xml)): metadata_obj = drvr._build_device_metadata(self.context, instance_ref) metadata = metadata_obj.devices self.assertEqual(10, len(metadata)) self.assertIsInstance(metadata[0], objects.DiskMetadata) self.assertIsInstance(metadata[0].bus, objects.SCSIDeviceBus) self.assertEqual(['db'], metadata[0].tags) self.assertEqual(uuids.volume_1, metadata[0].serial) 
            self.assertFalse(metadata[0].bus.obj_attr_is_set('address'))
            self.assertIsInstance(metadata[1], objects.DiskMetadata)
            self.assertIsInstance(metadata[1].bus, objects.IDEDeviceBus)
            self.assertEqual(['nfvfunc1'], metadata[1].tags)
            self.assertEqual(uuids.volume_2, metadata[1].serial)
            self.assertFalse(metadata[1].bus.obj_attr_is_set('address'))
            self.assertIsInstance(metadata[2], objects.DiskMetadata)
            self.assertIsInstance(metadata[2].bus, objects.USBDeviceBus)
            self.assertEqual(['nfvfunc2'], metadata[2].tags)
            self.assertEqual(uuids.volume_3, metadata[2].serial)
            self.assertFalse(metadata[2].bus.obj_attr_is_set('address'))
            self.assertIsInstance(metadata[3], objects.DiskMetadata)
            self.assertIsInstance(metadata[3].bus, objects.PCIDeviceBus)
            self.assertEqual(['nfvfunc3'], metadata[3].tags)
            # NOTE(artom) We're not checking volume 4 because it's not tagged
            # and only tagged devices appear in the metadata
            self.assertEqual(uuids.volume_5, metadata[3].serial)
            self.assertEqual('0000:00:09.0', metadata[3].bus.address)
            self.assertIsInstance(metadata[4], objects.DiskMetadata)
            self.assertEqual(['nfvfunc4'], metadata[4].tags)
            self.assertEqual(uuids.volume_6, metadata[4].serial)
            self.assertIsInstance(metadata[5], objects.DiskMetadata)
            self.assertEqual(['nfvfunc5'], metadata[5].tags)
            self.assertEqual(uuids.volume_7, metadata[5].serial)
            self.assertIsInstance(metadata[6],
                                  objects.NetworkInterfaceMetadata)
            self.assertIsInstance(metadata[6].bus, objects.PCIDeviceBus)
            self.assertEqual(['mytag1'], metadata[6].tags)
            self.assertEqual('0000:00:03.0', metadata[6].bus.address)
            # Make sure that interface with vlan is exposed to the metadata
            self.assertIsInstance(metadata[7],
                                  objects.NetworkInterfaceMetadata)
            self.assertEqual('51:5a:2c:a4:5e:1b', metadata[7].mac)
            self.assertEqual(2145, metadata[7].vlan)
            self.assertIsInstance(metadata[8],
                                  objects.NetworkInterfaceMetadata)
            self.assertEqual(['mytag2'], metadata[8].tags)
            self.assertIsInstance(metadata[9],
                                  objects.NetworkInterfaceMetadata)
            self.assertEqual(['mytag3'], metadata[9].tags)

    @mock.patch.object(host.Host, 'get_connection')
    @mock.patch.object(nova.virt.libvirt.guest.Guest, 'get_xml_desc')
    def test_detach_pci_devices(self, mocked_get_xml_desc, mock_conn):
        fake_domXML1_with_pci = (
            """
""") fake_domXML1_without_pci = ( """
""") pci_device_info = {'compute_node_id': 1, 'instance_uuid': 'uuid', 'address': '0001:04:10.1'} pci_device = objects.PciDevice(**pci_device_info) pci_devices = [pci_device] mocked_get_xml_desc.return_value = fake_domXML1_without_pci drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) dom = fakelibvirt.Domain( drvr._get_connection(), fake_domXML1_with_pci, False) guest = libvirt_guest.Guest(dom) drvr._detach_pci_devices(guest, pci_devices) @mock.patch.object(host.Host, 'get_connection') @mock.patch.object(nova.virt.libvirt.guest.Guest, 'get_xml_desc') def test_detach_pci_devices_timeout(self, mocked_get_xml_desc, mock_conn): fake_domXML1_with_pci = ( """
""") pci_device_info = {'compute_node_id': 1, 'instance_uuid': 'uuid', 'address': '0001:04:10.1'} pci_device = objects.PciDevice(**pci_device_info) pci_devices = [pci_device] mocked_get_xml_desc.return_value = fake_domXML1_with_pci drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) dom = fakelibvirt.Domain( drvr._get_connection(), fake_domXML1_with_pci, False) guest = libvirt_guest.Guest(dom) self.assertRaises(exception.PciDeviceDetachFailed, drvr._detach_pci_devices, guest, pci_devices) @mock.patch.object(connector, 'get_connector_properties') def test_get_connector(self, fake_get_connector): initiator = 'fake.initiator.iqn' ip = 'fakeip' host = 'fakehost' wwpns = ['100010604b019419'] wwnns = ['200010604b019419'] self.flags(my_ip=ip) self.flags(host=host) expected = { 'ip': ip, 'initiator': initiator, 'host': host, 'wwpns': wwpns, 'wwnns': wwnns } volume = { 'id': 'fake' } # TODO(walter-boring) add the fake in os-brick fake_get_connector.return_value = expected drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) result = drvr.get_volume_connector(volume) self.assertThat(expected, matchers.DictMatches(result)) @mock.patch.object(connector, 'get_connector_properties') def test_get_connector_storage_ip(self, fake_get_connector): ip = '100.100.100.100' storage_ip = '101.101.101.101' self.flags(my_block_storage_ip=storage_ip, my_ip=ip) volume = { 'id': 'fake' } expected = { 'ip': storage_ip } # TODO(walter-boring) add the fake in os-brick fake_get_connector.return_value = expected drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) result = drvr.get_volume_connector(volume) self.assertEqual(storage_ip, result['ip']) def test_lifecycle_event_registration(self): calls = [] def fake_registerErrorHandler(*args, **kwargs): calls.append('fake_registerErrorHandler') def fake_get_host_capabilities(**args): cpu = vconfig.LibvirtConfigGuestCPU() cpu.arch = fields.Architecture.ARMV7 caps = vconfig.LibvirtConfigCaps() caps.host = vconfig.LibvirtConfigCapsHost() caps.host.cpu = cpu calls.append('fake_get_host_capabilities') return caps @mock.patch.object(fakelibvirt, 'registerErrorHandler', side_effect=fake_registerErrorHandler) @mock.patch.object(host.Host, "get_capabilities", side_effect=fake_get_host_capabilities) def test_init_host(get_host_capabilities, register_error_handler): drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) drvr.init_host("test_host") test_init_host() # NOTE(dkliban): Will fail if get_host_capabilities is called before # registerErrorHandler self.assertEqual(['fake_registerErrorHandler', 'fake_get_host_capabilities'], calls) def test_sanitize_log_to_xml(self): # setup fake data data = {'auth_password': 'scrubme'} bdm = [{'connection_info': {'data': data}}] bdi = {'block_device_mapping': bdm} # Tests that the parameters to the _get_guest_xml method # are sanitized for passwords when logged. 
def fake_debug(*args, **kwargs): if 'auth_password' in args[0]: self.assertNotIn('scrubme', args[0]) drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) conf = mock.Mock() with test.nested( mock.patch.object(libvirt_driver.LOG, 'debug', side_effect=fake_debug), mock.patch.object(drvr, '_get_guest_config', return_value=conf) ) as ( debug_mock, conf_mock ): drvr._get_guest_xml(self.context, self.test_instance, network_info={}, disk_info={}, image_meta={}, block_device_info=bdi) # we don't care what the log message is, we just want to make sure # our stub method is called which asserts the password is scrubbed self.assertTrue(debug_mock.called) @mock.patch.object(time, "time") def test_get_guest_config(self, time_mock): time_mock.return_value = 1234567.89 drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) test_instance = copy.deepcopy(self.test_instance) test_instance["display_name"] = "purple tomatoes" test_instance['system_metadata']['owner_project_name'] = 'sweetshop' test_instance['system_metadata']['owner_user_name'] = 'cupcake' ctxt = context.RequestContext(project_id=123, project_name="aubergine", user_id=456, user_name="pie") flavor = objects.Flavor(name='m1.small', memory_mb=6, vcpus=28, root_gb=496, ephemeral_gb=8128, swap=33550336, extra_specs={}) instance_ref = objects.Instance(**test_instance) instance_ref.flavor = flavor image_meta = objects.ImageMeta.from_dict(self.test_image_meta) disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type, instance_ref, image_meta) cfg = drvr._get_guest_config(instance_ref, _fake_network_info(self, 1), image_meta, disk_info, context=ctxt) self.assertEqual(cfg.uuid, instance_ref["uuid"]) self.assertEqual(2, len(cfg.features)) self.assertIsInstance(cfg.features[0], vconfig.LibvirtConfigGuestFeatureACPI) self.assertIsInstance(cfg.features[1], vconfig.LibvirtConfigGuestFeatureAPIC) self.assertEqual(cfg.memory, 6 * units.Ki) self.assertEqual(cfg.vcpus, 28) self.assertEqual(cfg.os_type, fields.VMMode.HVM) self.assertEqual(cfg.os_boot_dev, ["hd"]) self.assertIsNone(cfg.os_root) self.assertEqual(len(cfg.devices), 10) self.assertIsInstance(cfg.devices[0], vconfig.LibvirtConfigGuestDisk) self.assertIsInstance(cfg.devices[1], vconfig.LibvirtConfigGuestDisk) self.assertIsInstance(cfg.devices[2], vconfig.LibvirtConfigGuestDisk) self.assertIsInstance(cfg.devices[3], vconfig.LibvirtConfigGuestInterface) self.assertIsInstance(cfg.devices[4], vconfig.LibvirtConfigGuestSerial) self.assertIsInstance(cfg.devices[5], vconfig.LibvirtConfigGuestSerial) self.assertIsInstance(cfg.devices[6], vconfig.LibvirtConfigGuestInput) self.assertIsInstance(cfg.devices[7], vconfig.LibvirtConfigGuestGraphics) self.assertIsInstance(cfg.devices[8], vconfig.LibvirtConfigGuestVideo) self.assertIsInstance(cfg.devices[9], vconfig.LibvirtConfigMemoryBalloon) self.assertEqual(len(cfg.metadata), 1) self.assertIsInstance(cfg.metadata[0], vconfig.LibvirtConfigGuestMetaNovaInstance) self.assertEqual(version.version_string_with_package(), cfg.metadata[0].package) self.assertEqual("purple tomatoes", cfg.metadata[0].name) self.assertEqual(1234567.89, cfg.metadata[0].creationTime) self.assertEqual("image", cfg.metadata[0].roottype) self.assertEqual(str(instance_ref["image_ref"]), cfg.metadata[0].rootid) self.assertIsInstance(cfg.metadata[0].owner, vconfig.LibvirtConfigGuestMetaNovaOwner) self.assertEqual("838a72b0-0d54-4827-8fd6-fb1227633ceb", cfg.metadata[0].owner.userid) self.assertEqual("cupcake", cfg.metadata[0].owner.username) self.assertEqual("fake", 
cfg.metadata[0].owner.projectid) self.assertEqual("sweetshop", cfg.metadata[0].owner.projectname) self.assertIsInstance(cfg.metadata[0].flavor, vconfig.LibvirtConfigGuestMetaNovaFlavor) self.assertEqual("m1.small", cfg.metadata[0].flavor.name) self.assertEqual(6, cfg.metadata[0].flavor.memory) self.assertEqual(28, cfg.metadata[0].flavor.vcpus) self.assertEqual(496, cfg.metadata[0].flavor.disk) self.assertEqual(8128, cfg.metadata[0].flavor.ephemeral) self.assertEqual(33550336, cfg.metadata[0].flavor.swap) def test_get_guest_config_missing_ownership_info(self): drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) test_instance = copy.deepcopy(self.test_instance) ctxt = context.RequestContext(project_id=123, project_name="aubergine", user_id=456, user_name="pie") flavor = objects.Flavor(name='m1.small', memory_mb=6, vcpus=28, root_gb=496, ephemeral_gb=8128, swap=33550336, extra_specs={}) instance_ref = objects.Instance(**test_instance) instance_ref.flavor = flavor image_meta = objects.ImageMeta.from_dict(self.test_image_meta) disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type, instance_ref, image_meta) cfg = drvr._get_guest_config(instance_ref, _fake_network_info(self, 1), image_meta, disk_info, context=ctxt) self.assertEqual("N/A", cfg.metadata[0].owner.username) self.assertEqual("N/A", cfg.metadata[0].owner.projectname) def test_get_guest_config_lxc(self): self.flags(virt_type='lxc', group='libvirt') drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) instance_ref = objects.Instance(**self.test_instance) image_meta = objects.ImageMeta.from_dict(self.test_image_meta) cfg = drvr._get_guest_config(instance_ref, _fake_network_info(self, 1), image_meta, {'mapping': {}}) self.assertEqual(instance_ref["uuid"], cfg.uuid) self.assertEqual(instance_ref.flavor.memory_mb * units.Ki, cfg.memory) self.assertEqual(instance_ref.flavor.vcpus, cfg.vcpus) self.assertEqual(fields.VMMode.EXE, cfg.os_type) self.assertEqual("/sbin/init", cfg.os_init_path) self.assertEqual("console=tty0 console=ttyS0 console=hvc0", cfg.os_cmdline) self.assertIsNone(cfg.os_root) self.assertEqual(3, len(cfg.devices)) self.assertIsInstance(cfg.devices[0], vconfig.LibvirtConfigGuestFilesys) self.assertIsInstance(cfg.devices[1], vconfig.LibvirtConfigGuestInterface) self.assertIsInstance(cfg.devices[2], vconfig.LibvirtConfigGuestConsole) def test_get_guest_config_lxc_with_id_maps(self): self.flags(virt_type='lxc', group='libvirt') self.flags(uid_maps=['0:1000:100'], group='libvirt') self.flags(gid_maps=['0:1000:100'], group='libvirt') drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) instance_ref = objects.Instance(**self.test_instance) image_meta = objects.ImageMeta.from_dict(self.test_image_meta) cfg = drvr._get_guest_config(instance_ref, _fake_network_info(self, 1), image_meta, {'mapping': {}}) self.assertEqual(instance_ref["uuid"], cfg.uuid) self.assertEqual(instance_ref.flavor.memory_mb * units.Ki, cfg.memory) self.assertEqual(instance_ref.vcpus, cfg.vcpus) self.assertEqual(fields.VMMode.EXE, cfg.os_type) self.assertEqual("/sbin/init", cfg.os_init_path) self.assertEqual("console=tty0 console=ttyS0 console=hvc0", cfg.os_cmdline) self.assertIsNone(cfg.os_root) self.assertEqual(3, len(cfg.devices)) self.assertIsInstance(cfg.devices[0], vconfig.LibvirtConfigGuestFilesys) self.assertIsInstance(cfg.devices[1], vconfig.LibvirtConfigGuestInterface) self.assertIsInstance(cfg.devices[2], vconfig.LibvirtConfigGuestConsole) self.assertEqual(len(cfg.idmaps), 2) self.assertIsInstance(cfg.idmaps[0], 
vconfig.LibvirtConfigGuestUIDMap) self.assertIsInstance(cfg.idmaps[1], vconfig.LibvirtConfigGuestGIDMap) @mock.patch.object( host.Host, "is_cpu_control_policy_capable", return_value=True) def test_get_guest_config_numa_host_instance_fits(self, is_able): instance_ref = objects.Instance(**self.test_instance) image_meta = objects.ImageMeta.from_dict(self.test_image_meta) flavor = objects.Flavor(memory_mb=1, vcpus=2, root_gb=496, ephemeral_gb=8128, swap=33550336, name='fake', extra_specs={}) instance_ref.flavor = flavor caps = vconfig.LibvirtConfigCaps() caps.host = vconfig.LibvirtConfigCapsHost() caps.host.cpu = vconfig.LibvirtConfigCPU() caps.host.cpu.arch = fields.Architecture.X86_64 caps.host.topology = fakelibvirt.NUMATopology() drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type, instance_ref, image_meta) with test.nested( mock.patch.object(host.Host, 'has_min_version', return_value=True), mock.patch.object(host.Host, "get_capabilities", return_value=caps)): cfg = drvr._get_guest_config(instance_ref, [], image_meta, disk_info) self.assertIsNone(cfg.cpuset) self.assertEqual(0, len(cfg.cputune.vcpupin)) self.assertIsNone(cfg.cpu.numa) @mock.patch.object( host.Host, "is_cpu_control_policy_capable", return_value=True) def test_get_guest_config_numa_host_instance_no_fit(self, is_able): instance_ref = objects.Instance(**self.test_instance) image_meta = objects.ImageMeta.from_dict(self.test_image_meta) flavor = objects.Flavor(memory_mb=4096, vcpus=4, root_gb=496, ephemeral_gb=8128, swap=33550336, name='fake', extra_specs={}) instance_ref.flavor = flavor caps = vconfig.LibvirtConfigCaps() caps.host = vconfig.LibvirtConfigCapsHost() caps.host.cpu = vconfig.LibvirtConfigCPU() caps.host.cpu.arch = fields.Architecture.X86_64 caps.host.topology = fakelibvirt.NUMATopology() drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type, instance_ref, image_meta) with test.nested( mock.patch.object(host.Host, "get_capabilities", return_value=caps), mock.patch.object( hardware, 'get_vcpu_pin_set', return_value=set([3])), mock.patch.object(random, 'choice'), mock.patch.object(drvr, '_has_numa_support', return_value=False) ) as (get_host_cap_mock, get_vcpu_pin_set_mock, choice_mock, _has_numa_support_mock): cfg = drvr._get_guest_config(instance_ref, [], image_meta, disk_info) self.assertFalse(choice_mock.called) self.assertEqual(set([3]), cfg.cpuset) self.assertEqual(0, len(cfg.cputune.vcpupin)) self.assertIsNone(cfg.cpu.numa) def _test_get_guest_memory_backing_config( self, host_topology, inst_topology, numatune): drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) with mock.patch.object( drvr, "_get_host_numa_topology", return_value=host_topology): return drvr._get_guest_memory_backing_config( inst_topology, numatune, {}) @mock.patch.object(host.Host, 'has_min_version', return_value=True) def test_get_guest_memory_backing_config_large_success(self, mock_version): host_topology = objects.NUMATopology( cells=[ objects.NUMACell( id=3, cpuset=set([1]), memory=1024, mempages=[ objects.NUMAPagesTopology(size_kb=4, total=2000, used=0), objects.NUMAPagesTopology(size_kb=2048, total=512, used=0), objects.NUMAPagesTopology(size_kb=1048576, total=0, used=0), ])]) inst_topology = objects.InstanceNUMATopology(cells=[ objects.InstanceNUMACell( id=3, cpuset=set([0, 1]), memory=1024, pagesize=2048)]) numa_tune = vconfig.LibvirtConfigGuestNUMATune() numa_tune.memnodes = 
[vconfig.LibvirtConfigGuestNUMATuneMemNode()] numa_tune.memnodes[0].cellid = 0 numa_tune.memnodes[0].nodeset = [3] result = self._test_get_guest_memory_backing_config( host_topology, inst_topology, numa_tune) self.assertEqual(1, len(result.hugepages)) self.assertEqual(2048, result.hugepages[0].size_kb) self.assertEqual([0], result.hugepages[0].nodeset) @mock.patch.object(host.Host, 'has_min_version', return_value=True) def test_get_guest_memory_backing_config_smallest(self, mock_version): host_topology = objects.NUMATopology( cells=[ objects.NUMACell( id=3, cpuset=set([1]), memory=1024, mempages=[ objects.NUMAPagesTopology(size_kb=4, total=2000, used=0), objects.NUMAPagesTopology(size_kb=2048, total=512, used=0), objects.NUMAPagesTopology(size_kb=1048576, total=0, used=0), ])]) inst_topology = objects.InstanceNUMATopology(cells=[ objects.InstanceNUMACell( id=3, cpuset=set([0, 1]), memory=1024, pagesize=4)]) numa_tune = vconfig.LibvirtConfigGuestNUMATune() numa_tune.memnodes = [vconfig.LibvirtConfigGuestNUMATuneMemNode()] numa_tune.memnodes[0].cellid = 0 numa_tune.memnodes[0].nodeset = [3] result = self._test_get_guest_memory_backing_config( host_topology, inst_topology, numa_tune) self.assertIsNone(result) def test_get_guest_memory_backing_config_realtime(self): flavor = {"extra_specs": { "hw:cpu_realtime": "yes", "hw:cpu_policy": "dedicated" }} drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) membacking = drvr._get_guest_memory_backing_config( None, None, flavor) self.assertTrue(membacking.locked) self.assertFalse(membacking.sharedpages) @mock.patch.object( host.Host, "is_cpu_control_policy_capable", return_value=True) def test_get_guest_config_numa_host_instance_pci_no_numa_info( self, is_able): instance_ref = objects.Instance(**self.test_instance) image_meta = objects.ImageMeta.from_dict(self.test_image_meta) flavor = objects.Flavor(memory_mb=1, vcpus=2, root_gb=496, ephemeral_gb=8128, swap=33550336, name='fake', extra_specs={}) instance_ref.flavor = flavor caps = vconfig.LibvirtConfigCaps() caps.host = vconfig.LibvirtConfigCapsHost() caps.host.cpu = vconfig.LibvirtConfigCPU() caps.host.cpu.arch = fields.Architecture.X86_64 caps.host.topology = fakelibvirt.NUMATopology() conn = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type, instance_ref, image_meta) pci_device_info = dict(test_pci_device.fake_db_dev) pci_device_info.update(compute_node_id=1, label='fake', status=fields.PciDeviceStatus.AVAILABLE, address='0000:00:00.1', instance_uuid=None, request_id=None, extra_info={}, numa_node=None) pci_device = objects.PciDevice(**pci_device_info) with test.nested( mock.patch.object(host.Host, 'has_min_version', return_value=True), mock.patch.object( host.Host, "get_capabilities", return_value=caps), mock.patch.object( hardware, 'get_vcpu_pin_set', return_value=set([3])), mock.patch.object(host.Host, 'get_online_cpus', return_value=set(range(8))), mock.patch.object(pci_manager, "get_instance_pci_devs", return_value=[pci_device])): cfg = conn._get_guest_config(instance_ref, [], image_meta, disk_info) self.assertEqual(set([3]), cfg.cpuset) self.assertEqual(0, len(cfg.cputune.vcpupin)) self.assertIsNone(cfg.cpu.numa) @mock.patch.object( host.Host, "is_cpu_control_policy_capable", return_value=True) def test_get_guest_config_numa_host_instance_2pci_no_fit(self, is_able): instance_ref = objects.Instance(**self.test_instance) image_meta = objects.ImageMeta.from_dict(self.test_image_meta) flavor = 
objects.Flavor(memory_mb=4096, vcpus=4, root_gb=496, ephemeral_gb=8128, swap=33550336, name='fake', extra_specs={}) instance_ref.flavor = flavor caps = vconfig.LibvirtConfigCaps() caps.host = vconfig.LibvirtConfigCapsHost() caps.host.cpu = vconfig.LibvirtConfigCPU() caps.host.cpu.arch = fields.Architecture.X86_64 caps.host.topology = fakelibvirt.NUMATopology() conn = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type, instance_ref, image_meta) pci_device_info = dict(test_pci_device.fake_db_dev) pci_device_info.update(compute_node_id=1, label='fake', status=fields.PciDeviceStatus.AVAILABLE, address='0000:00:00.1', instance_uuid=None, request_id=None, extra_info={}, numa_node=1) pci_device = objects.PciDevice(**pci_device_info) pci_device_info.update(numa_node=0, address='0000:00:00.2') pci_device2 = objects.PciDevice(**pci_device_info) with test.nested( mock.patch.object( host.Host, "get_capabilities", return_value=caps), mock.patch.object( hardware, 'get_vcpu_pin_set', return_value=set([3])), mock.patch.object(random, 'choice'), mock.patch.object(pci_manager, "get_instance_pci_devs", return_value=[pci_device, pci_device2]), mock.patch.object(conn, '_has_numa_support', return_value=False) ) as (get_host_cap_mock, get_vcpu_pin_set_mock, choice_mock, pci_mock, _has_numa_support_mock): cfg = conn._get_guest_config(instance_ref, [], image_meta, disk_info) self.assertFalse(choice_mock.called) self.assertEqual(set([3]), cfg.cpuset) self.assertEqual(0, len(cfg.cputune.vcpupin)) self.assertIsNone(cfg.cpu.numa) @mock.patch.object(fakelibvirt.Connection, 'getType') @mock.patch.object(fakelibvirt.Connection, 'getVersion') @mock.patch.object(fakelibvirt.Connection, 'getLibVersion') @mock.patch.object(host.Host, 'get_capabilities') @mock.patch.object(libvirt_driver.LibvirtDriver, '_set_host_enabled') def _test_get_guest_config_numa_unsupported(self, fake_lib_version, fake_version, fake_type, fake_arch, exception_class, pagesize, mock_host, mock_caps, mock_lib_version, mock_version, mock_type): instance_topology = objects.InstanceNUMATopology( cells=[objects.InstanceNUMACell( id=0, cpuset=set([0]), memory=1024, pagesize=pagesize)]) instance_ref = objects.Instance(**self.test_instance) instance_ref.numa_topology = instance_topology image_meta = objects.ImageMeta.from_dict(self.test_image_meta) flavor = objects.Flavor(memory_mb=1, vcpus=2, root_gb=496, ephemeral_gb=8128, swap=33550336, name='fake', extra_specs={}) instance_ref.flavor = flavor caps = vconfig.LibvirtConfigCaps() caps.host = vconfig.LibvirtConfigCapsHost() caps.host.cpu = vconfig.LibvirtConfigCPU() caps.host.cpu.arch = fake_arch caps.host.topology = fakelibvirt.NUMATopology() mock_type.return_value = fake_type mock_version.return_value = fake_version mock_lib_version.return_value = fake_lib_version mock_caps.return_value = caps drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type, instance_ref, image_meta) self.assertRaises(exception_class, drvr._get_guest_config, instance_ref, [], image_meta, disk_info) def test_get_guest_config_numa_old_version_libvirt_ppc(self): self.flags(virt_type='kvm', group='libvirt') self._test_get_guest_config_numa_unsupported( versionutils.convert_version_to_int( libvirt_driver.MIN_LIBVIRT_NUMA_VERSION_PPC) - 1, versionutils.convert_version_to_int( libvirt_driver.MIN_QEMU_VERSION), host.HV_DRIVER_QEMU, fields.Architecture.PPC64LE, exception.NUMATopologyUnsupported, None) def 
test_get_guest_config_numa_bad_version_libvirt(self):
        self.flags(virt_type='kvm', group='libvirt')
        self._test_get_guest_config_numa_unsupported(
            versionutils.convert_version_to_int(
                libvirt_driver.BAD_LIBVIRT_NUMA_VERSIONS[0]),
            versionutils.convert_version_to_int(
                libvirt_driver.MIN_QEMU_VERSION),
            host.HV_DRIVER_QEMU,
            fields.Architecture.X86_64,
            exception.NUMATopologyUnsupported,
            None)

    @mock.patch.object(libvirt_driver.LOG, 'warning')
    def test_has_numa_support_bad_version_libvirt_log(self, mock_warn):
        # Tests that a warning is logged once, and only once, when a libvirt
        # version listed in BAD_LIBVIRT_NUMA_VERSIONS is detected.
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        self.assertFalse(hasattr(drvr, '_bad_libvirt_numa_version_warn'))
        with mock.patch.object(drvr._host, 'has_version', return_value=True):
            for i in range(2):
                self.assertFalse(drvr._has_numa_support())
        self.assertTrue(drvr._bad_libvirt_numa_version_warn)
        self.assertEqual(1, mock_warn.call_count)
        # assert the version is logged properly
        self.assertEqual('1.2.9.2', mock_warn.call_args[0][1])

    def test_get_guest_config_numa_other_arch_qemu(self):
        self.flags(virt_type='kvm', group='libvirt')
        self._test_get_guest_config_numa_unsupported(
            versionutils.convert_version_to_int(
                libvirt_driver.MIN_LIBVIRT_VERSION),
            versionutils.convert_version_to_int(
                libvirt_driver.MIN_QEMU_VERSION),
            host.HV_DRIVER_QEMU,
            fields.Architecture.S390,
            exception.NUMATopologyUnsupported,
            None)

    def test_get_guest_config_numa_xen(self):
        self.flags(virt_type='xen', group='libvirt')
        self._test_get_guest_config_numa_unsupported(
            versionutils.convert_version_to_int(
                libvirt_driver.MIN_LIBVIRT_VERSION),
            versionutils.convert_version_to_int((4, 5, 0)),
            'XEN',
            fields.Architecture.X86_64,
            exception.NUMATopologyUnsupported,
            None)

    @mock.patch.object(
        host.Host, "is_cpu_control_policy_capable", return_value=True)
    def test_get_guest_config_numa_host_instance_fit_w_cpu_pinset(
            self, is_able):
        instance_ref = objects.Instance(**self.test_instance)
        image_meta = objects.ImageMeta.from_dict(self.test_image_meta)
        flavor = objects.Flavor(memory_mb=1024, vcpus=2, root_gb=496,
                                ephemeral_gb=8128, swap=33550336, name='fake',
                                extra_specs={})
        instance_ref.flavor = flavor

        caps = vconfig.LibvirtConfigCaps()
        caps.host = vconfig.LibvirtConfigCapsHost()
        caps.host.cpu = vconfig.LibvirtConfigCPU()
        caps.host.cpu.arch = fields.Architecture.X86_64
        caps.host.topology = fakelibvirt.NUMATopology(kb_mem=4194304)

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance_ref, image_meta)

        with test.nested(
                mock.patch.object(host.Host, 'has_min_version',
                                  return_value=True),
                mock.patch.object(host.Host, "get_capabilities",
                                  return_value=caps),
                mock.patch.object(
                    hardware, 'get_vcpu_pin_set', return_value=set([2, 3])),
                mock.patch.object(host.Host, 'get_online_cpus',
                                  return_value=set(range(8)))
                ) as (has_min_version_mock, get_host_cap_mock,
                      get_vcpu_pin_set_mock, get_online_cpus_mock):
            cfg = drvr._get_guest_config(instance_ref, [],
                                         image_meta, disk_info)
            # NOTE(ndipanov): we make sure that pin_set was taken into
            # account when choosing viable cells
            self.assertEqual(set([2, 3]), cfg.cpuset)
            self.assertEqual(0, len(cfg.cputune.vcpupin))
            self.assertIsNone(cfg.cpu.numa)

    @mock.patch.object(
        host.Host, "is_cpu_control_policy_capable", return_value=True)
    def test_get_guest_config_non_numa_host_instance_topo(self, is_able):
        instance_topology = objects.InstanceNUMATopology(
            cells=[objects.InstanceNUMACell(
                id=0, cpuset=set([0]), memory=1024),
objects.InstanceNUMACell( id=1, cpuset=set([2]), memory=1024)]) instance_ref = objects.Instance(**self.test_instance) instance_ref.numa_topology = instance_topology image_meta = objects.ImageMeta.from_dict(self.test_image_meta) flavor = objects.Flavor(memory_mb=2048, vcpus=2, root_gb=496, ephemeral_gb=8128, swap=33550336, name='fake', extra_specs={}) instance_ref.flavor = flavor caps = vconfig.LibvirtConfigCaps() caps.host = vconfig.LibvirtConfigCapsHost() caps.host.cpu = vconfig.LibvirtConfigCPU() caps.host.cpu.arch = fields.Architecture.X86_64 caps.host.topology = None drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type, instance_ref, image_meta) with test.nested( mock.patch.object( objects.InstanceNUMATopology, "get_by_instance_uuid", return_value=instance_topology), mock.patch.object(host.Host, 'has_min_version', return_value=True), mock.patch.object(host.Host, "get_capabilities", return_value=caps)): cfg = drvr._get_guest_config(instance_ref, [], image_meta, disk_info) self.assertIsNone(cfg.cpuset) self.assertEqual(0, len(cfg.cputune.vcpupin)) self.assertIsNone(cfg.numatune) self.assertIsNotNone(cfg.cpu.numa) for instance_cell, numa_cfg_cell in zip( instance_topology.cells, cfg.cpu.numa.cells): self.assertEqual(instance_cell.id, numa_cfg_cell.id) self.assertEqual(instance_cell.cpuset, numa_cfg_cell.cpus) self.assertEqual(instance_cell.memory * units.Ki, numa_cfg_cell.memory) @mock.patch.object( host.Host, "is_cpu_control_policy_capable", return_value=True) def test_get_guest_config_numa_host_instance_topo(self, is_able): instance_topology = objects.InstanceNUMATopology( cells=[objects.InstanceNUMACell( id=1, cpuset=set([0, 1]), memory=1024, pagesize=None), objects.InstanceNUMACell( id=2, cpuset=set([2, 3]), memory=1024, pagesize=None)]) instance_ref = objects.Instance(**self.test_instance) instance_ref.numa_topology = instance_topology image_meta = objects.ImageMeta.from_dict(self.test_image_meta) flavor = objects.Flavor(memory_mb=2048, vcpus=4, root_gb=496, ephemeral_gb=8128, swap=33550336, name='fake', extra_specs={}) instance_ref.flavor = flavor caps = vconfig.LibvirtConfigCaps() caps.host = vconfig.LibvirtConfigCapsHost() caps.host.cpu = vconfig.LibvirtConfigCPU() caps.host.cpu.arch = fields.Architecture.X86_64 caps.host.topology = fakelibvirt.NUMATopology() drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type, instance_ref, image_meta) with test.nested( mock.patch.object( objects.InstanceNUMATopology, "get_by_instance_uuid", return_value=instance_topology), mock.patch.object(host.Host, 'has_min_version', return_value=True), mock.patch.object(host.Host, "get_capabilities", return_value=caps), mock.patch.object( hardware, 'get_vcpu_pin_set', return_value=set([2, 3, 4, 5])), mock.patch.object(host.Host, 'get_online_cpus', return_value=set(range(8))), ): cfg = drvr._get_guest_config(instance_ref, [], image_meta, disk_info) self.assertIsNone(cfg.cpuset) # Test that the pinning is correct and limited to allowed only self.assertEqual(0, cfg.cputune.vcpupin[0].id) self.assertEqual(set([2, 3]), cfg.cputune.vcpupin[0].cpuset) self.assertEqual(1, cfg.cputune.vcpupin[1].id) self.assertEqual(set([2, 3]), cfg.cputune.vcpupin[1].cpuset) self.assertEqual(2, cfg.cputune.vcpupin[2].id) self.assertEqual(set([4, 5]), cfg.cputune.vcpupin[2].cpuset) self.assertEqual(3, cfg.cputune.vcpupin[3].id) self.assertEqual(set([4, 5]), cfg.cputune.vcpupin[3].cpuset) 
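            # Each guest vCPU floats over the host CPUs of the NUMA cell its
            # instance cell was fitted to, restricted to the allowed
            # vcpu_pin_set: cell 1's vCPUs (0-1) over host CPUs {2, 3} and
            # cell 2's vCPUs (2-3) over host CPUs {4, 5}.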
self.assertIsNotNone(cfg.cpu.numa) self.assertIsInstance(cfg.cputune.emulatorpin, vconfig.LibvirtConfigGuestCPUTuneEmulatorPin) self.assertEqual(set([2, 3, 4, 5]), cfg.cputune.emulatorpin.cpuset) for instance_cell, numa_cfg_cell, index in zip( instance_topology.cells, cfg.cpu.numa.cells, range(len(instance_topology.cells))): self.assertEqual(index, numa_cfg_cell.id) self.assertEqual(instance_cell.cpuset, numa_cfg_cell.cpus) self.assertEqual(instance_cell.memory * units.Ki, numa_cfg_cell.memory) allnodes = [cell.id for cell in instance_topology.cells] self.assertEqual(allnodes, cfg.numatune.memory.nodeset) self.assertEqual("strict", cfg.numatune.memory.mode) for instance_cell, memnode, index in zip( instance_topology.cells, cfg.numatune.memnodes, range(len(instance_topology.cells))): self.assertEqual(index, memnode.cellid) self.assertEqual([instance_cell.id], memnode.nodeset) self.assertEqual("strict", memnode.mode) def test_get_guest_config_numa_host_instance_topo_reordered(self): instance_topology = objects.InstanceNUMATopology( cells=[objects.InstanceNUMACell( id=3, cpuset=set([0, 1]), memory=1024), objects.InstanceNUMACell( id=0, cpuset=set([2, 3]), memory=1024)]) instance_ref = objects.Instance(**self.test_instance) instance_ref.numa_topology = instance_topology image_meta = objects.ImageMeta.from_dict(self.test_image_meta) flavor = objects.Flavor(memory_mb=2048, vcpus=4, root_gb=496, ephemeral_gb=8128, swap=33550336, name='fake', extra_specs={}) instance_ref.flavor = flavor caps = vconfig.LibvirtConfigCaps() caps.host = vconfig.LibvirtConfigCapsHost() caps.host.cpu = vconfig.LibvirtConfigCPU() caps.host.cpu.arch = fields.Architecture.X86_64 caps.host.topology = fakelibvirt.NUMATopology() drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type, instance_ref, image_meta) with test.nested( mock.patch.object( objects.InstanceNUMATopology, "get_by_instance_uuid", return_value=instance_topology), mock.patch.object(host.Host, 'has_min_version', return_value=True), mock.patch.object(host.Host, "get_capabilities", return_value=caps), mock.patch.object(host.Host, 'get_online_cpus', return_value=set(range(8))), ): cfg = drvr._get_guest_config(instance_ref, [], image_meta, disk_info) self.assertIsNone(cfg.cpuset) # Test that the pinning is correct and limited to allowed only self.assertEqual(0, cfg.cputune.vcpupin[0].id) self.assertEqual(set([6, 7]), cfg.cputune.vcpupin[0].cpuset) self.assertEqual(1, cfg.cputune.vcpupin[1].id) self.assertEqual(set([6, 7]), cfg.cputune.vcpupin[1].cpuset) self.assertEqual(2, cfg.cputune.vcpupin[2].id) self.assertEqual(set([0, 1]), cfg.cputune.vcpupin[2].cpuset) self.assertEqual(3, cfg.cputune.vcpupin[3].id) self.assertEqual(set([0, 1]), cfg.cputune.vcpupin[3].cpuset) self.assertIsNotNone(cfg.cpu.numa) self.assertIsInstance(cfg.cputune.emulatorpin, vconfig.LibvirtConfigGuestCPUTuneEmulatorPin) self.assertEqual(set([0, 1, 6, 7]), cfg.cputune.emulatorpin.cpuset) for index, (instance_cell, numa_cfg_cell) in enumerate(zip( instance_topology.cells, cfg.cpu.numa.cells)): self.assertEqual(index, numa_cfg_cell.id) self.assertEqual(instance_cell.cpuset, numa_cfg_cell.cpus) self.assertEqual(instance_cell.memory * units.Ki, numa_cfg_cell.memory) self.assertIsNone(numa_cfg_cell.memAccess) allnodes = set([cell.id for cell in instance_topology.cells]) self.assertEqual(allnodes, set(cfg.numatune.memory.nodeset)) self.assertEqual("strict", cfg.numatune.memory.mode) for index, (instance_cell, memnode) in enumerate(zip( 
instance_topology.cells, cfg.numatune.memnodes)): self.assertEqual(index, memnode.cellid) self.assertEqual([instance_cell.id], memnode.nodeset) self.assertEqual("strict", memnode.mode) def test_get_guest_config_numa_host_instance_topo_cpu_pinning(self): instance_topology = objects.InstanceNUMATopology( cells=[objects.InstanceNUMACell( id=1, cpuset=set([0, 1]), memory=1024, cpu_pinning={0: 24, 1: 25}), objects.InstanceNUMACell( id=0, cpuset=set([2, 3]), memory=1024, cpu_pinning={2: 0, 3: 1})]) instance_ref = objects.Instance(**self.test_instance) instance_ref.numa_topology = instance_topology image_meta = objects.ImageMeta.from_dict(self.test_image_meta) flavor = objects.Flavor(memory_mb=2048, vcpus=2, root_gb=496, ephemeral_gb=8128, swap=33550336, name='fake', extra_specs={}) instance_ref.flavor = flavor caps = vconfig.LibvirtConfigCaps() caps.host = vconfig.LibvirtConfigCapsHost() caps.host.cpu = vconfig.LibvirtConfigCPU() caps.host.cpu.arch = fields.Architecture.X86_64 caps.host.topology = fakelibvirt.NUMATopology( sockets_per_cell=4, cores_per_socket=3, threads_per_core=2) conn = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type, instance_ref, image_meta) with test.nested( mock.patch.object( objects.InstanceNUMATopology, "get_by_instance_uuid", return_value=instance_topology), mock.patch.object(host.Host, 'has_min_version', return_value=True), mock.patch.object(host.Host, "get_capabilities", return_value=caps), mock.patch.object(host.Host, 'get_online_cpus', return_value=set(range(8))), ): cfg = conn._get_guest_config(instance_ref, [], image_meta, disk_info) self.assertIsNone(cfg.cpuset) # Test that the pinning is correct and limited to allowed only self.assertEqual(0, cfg.cputune.vcpupin[0].id) self.assertEqual(set([24]), cfg.cputune.vcpupin[0].cpuset) self.assertEqual(1, cfg.cputune.vcpupin[1].id) self.assertEqual(set([25]), cfg.cputune.vcpupin[1].cpuset) self.assertEqual(2, cfg.cputune.vcpupin[2].id) self.assertEqual(set([0]), cfg.cputune.vcpupin[2].cpuset) self.assertEqual(3, cfg.cputune.vcpupin[3].id) self.assertEqual(set([1]), cfg.cputune.vcpupin[3].cpuset) self.assertIsNotNone(cfg.cpu.numa) # Emulator must be pinned to union of cfg.cputune.vcpupin[*].cpuset self.assertIsInstance(cfg.cputune.emulatorpin, vconfig.LibvirtConfigGuestCPUTuneEmulatorPin) self.assertEqual(set([0, 1, 24, 25]), cfg.cputune.emulatorpin.cpuset) for i, (instance_cell, numa_cfg_cell) in enumerate(zip( instance_topology.cells, cfg.cpu.numa.cells)): self.assertEqual(i, numa_cfg_cell.id) self.assertEqual(instance_cell.cpuset, numa_cfg_cell.cpus) self.assertEqual(instance_cell.memory * units.Ki, numa_cfg_cell.memory) self.assertIsNone(numa_cfg_cell.memAccess) allnodes = set([cell.id for cell in instance_topology.cells]) self.assertEqual(allnodes, set(cfg.numatune.memory.nodeset)) self.assertEqual("strict", cfg.numatune.memory.mode) for i, (instance_cell, memnode) in enumerate(zip( instance_topology.cells, cfg.numatune.memnodes)): self.assertEqual(i, memnode.cellid) self.assertEqual([instance_cell.id], memnode.nodeset) self.assertEqual("strict", memnode.mode) def test_get_guest_config_numa_host_mempages_shared(self): instance_topology = objects.InstanceNUMATopology( cells=[ objects.InstanceNUMACell( id=1, cpuset=set([0, 1]), memory=1024, pagesize=2048), objects.InstanceNUMACell( id=2, cpuset=set([2, 3]), memory=1024, pagesize=2048)]) instance_ref = objects.Instance(**self.test_instance) instance_ref.numa_topology = instance_topology image_meta = 
    def test_get_guest_config_numa_host_mempages_shared(self):
        instance_topology = objects.InstanceNUMATopology(
            cells=[
                objects.InstanceNUMACell(
                    id=1, cpuset=set([0, 1]), memory=1024, pagesize=2048),
                objects.InstanceNUMACell(
                    id=2, cpuset=set([2, 3]), memory=1024, pagesize=2048)])
        instance_ref = objects.Instance(**self.test_instance)
        instance_ref.numa_topology = instance_topology
        image_meta = objects.ImageMeta.from_dict(self.test_image_meta)
        flavor = objects.Flavor(memory_mb=2048, vcpus=4, root_gb=496,
                                ephemeral_gb=8128, swap=33550336,
                                name='fake', extra_specs={})
        instance_ref.flavor = flavor

        caps = vconfig.LibvirtConfigCaps()
        caps.host = vconfig.LibvirtConfigCapsHost()
        caps.host.cpu = vconfig.LibvirtConfigCPU()
        caps.host.cpu.arch = fields.Architecture.X86_64
        caps.host.topology = fakelibvirt.NUMATopology()
        for i, cell in enumerate(caps.host.topology.cells):
            cell.mempages = fakelibvirt.create_mempages(
                [(4, 1024 * i), (2048, i)])

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)

        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance_ref,
                                            image_meta)

        with test.nested(
                mock.patch.object(
                    objects.InstanceNUMATopology, "get_by_instance_uuid",
                    return_value=instance_topology),
                mock.patch.object(host.Host, 'has_min_version',
                                  return_value=True),
                mock.patch.object(host.Host, "get_capabilities",
                                  return_value=caps),
                mock.patch.object(
                    hardware, 'get_vcpu_pin_set',
                    return_value=set([2, 3, 4, 5])),
                mock.patch.object(host.Host, 'get_online_cpus',
                                  return_value=set(range(8))),
                ):
            cfg = drvr._get_guest_config(instance_ref, [],
                                         image_meta, disk_info)

            for instance_cell, numa_cfg_cell, index in zip(
                    instance_topology.cells,
                    cfg.cpu.numa.cells,
                    range(len(instance_topology.cells))):
                self.assertEqual(index, numa_cfg_cell.id)
                self.assertEqual(instance_cell.cpuset, numa_cfg_cell.cpus)
                self.assertEqual(instance_cell.memory * units.Ki,
                                 numa_cfg_cell.memory)
                self.assertEqual("shared", numa_cfg_cell.memAccess)

            allnodes = [cell.id for cell in instance_topology.cells]
            self.assertEqual(allnodes, cfg.numatune.memory.nodeset)
            self.assertEqual("strict", cfg.numatune.memory.mode)

            for instance_cell, memnode, index in zip(
                    instance_topology.cells,
                    cfg.numatune.memnodes,
                    range(len(instance_topology.cells))):
                self.assertEqual(index, memnode.cellid)
                self.assertEqual([instance_cell.id], memnode.nodeset)
                self.assertEqual("strict", memnode.mode)

            self.assertEqual(0, len(cfg.cputune.vcpusched))
            self.assertEqual(set([2, 3, 4, 5]),
                             cfg.cputune.emulatorpin.cpuset)
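    # The next test exercises the realtime path. For reference, the vCPU
    # split it asserts can be sketched as follows (a hand-worked example
    # of the "hw:cpu_realtime_mask" exclusion syntax, nothing
    # driver-specific):
    #
    #   vcpus = {0, 1, 2, 3}
    #   excluded = {0, 1}                  # parsed from mask "^0-1"
    #   realtime_vcpus = vcpus - excluded  # {2, 3} -> fifo scheduler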
    def test_get_guest_config_numa_host_instance_cpu_pinning_realtime(self):
        instance_topology = objects.InstanceNUMATopology(
            cells=[
                objects.InstanceNUMACell(
                    id=2, cpuset=set([0, 1]), memory=1024, pagesize=2048),
                objects.InstanceNUMACell(
                    id=3, cpuset=set([2, 3]), memory=1024, pagesize=2048)])
        instance_ref = objects.Instance(**self.test_instance)
        instance_ref.numa_topology = instance_topology
        image_meta = objects.ImageMeta.from_dict(self.test_image_meta)
        flavor = objects.Flavor(memory_mb=2048, vcpus=4, root_gb=496,
                                ephemeral_gb=8128, swap=33550336,
                                name='fake', extra_specs={
                                    "hw:cpu_realtime": "yes",
                                    "hw:cpu_policy": "dedicated",
                                    "hw:cpu_realtime_mask": "^0-1"
                                })
        instance_ref.flavor = flavor

        caps = vconfig.LibvirtConfigCaps()
        caps.host = vconfig.LibvirtConfigCapsHost()
        caps.host.cpu = vconfig.LibvirtConfigCPU()
        caps.host.cpu.arch = fields.Architecture.X86_64
        caps.host.topology = fakelibvirt.NUMATopology()
        for i, cell in enumerate(caps.host.topology.cells):
            cell.mempages = fakelibvirt.create_mempages(
                [(4, 1024 * i), (2048, i)])

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)

        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance_ref,
                                            image_meta)

        with test.nested(
                mock.patch.object(
                    objects.InstanceNUMATopology, "get_by_instance_uuid",
                    return_value=instance_topology),
                mock.patch.object(host.Host, 'has_min_version',
                                  return_value=True),
                mock.patch.object(host.Host, "get_capabilities",
                                  return_value=caps),
                mock.patch.object(
                    hardware, 'get_vcpu_pin_set',
                    return_value=set([4, 5, 6, 7])),
                mock.patch.object(host.Host, 'get_online_cpus',
                                  return_value=set(range(8))),
                ):
            cfg = drvr._get_guest_config(instance_ref, [],
                                         image_meta, disk_info)

            for instance_cell, numa_cfg_cell, index in zip(
                    instance_topology.cells,
                    cfg.cpu.numa.cells,
                    range(len(instance_topology.cells))):
                self.assertEqual(index, numa_cfg_cell.id)
                self.assertEqual(instance_cell.cpuset, numa_cfg_cell.cpus)
                self.assertEqual(instance_cell.memory * units.Ki,
                                 numa_cfg_cell.memory)
                self.assertEqual("shared", numa_cfg_cell.memAccess)

            allnodes = [cell.id for cell in instance_topology.cells]
            self.assertEqual(allnodes, cfg.numatune.memory.nodeset)
            self.assertEqual("strict", cfg.numatune.memory.mode)

            for instance_cell, memnode, index in zip(
                    instance_topology.cells,
                    cfg.numatune.memnodes,
                    range(len(instance_topology.cells))):
                self.assertEqual(index, memnode.cellid)
                self.assertEqual([instance_cell.id], memnode.nodeset)
                self.assertEqual("strict", memnode.mode)

            self.assertEqual(1, len(cfg.cputune.vcpusched))
            self.assertEqual("fifo", cfg.cputune.vcpusched[0].scheduler)

            # Ensure vCPUs 0-1 are pinned on host CPUs 4-5 and vCPUs 2-3
            # are pinned on host CPUs 6-7, according to the realtime mask
            # ^0-1
            self.assertEqual(set([4, 5]), cfg.cputune.vcpupin[0].cpuset)
            self.assertEqual(set([4, 5]), cfg.cputune.vcpupin[1].cpuset)
            self.assertEqual(set([6, 7]), cfg.cputune.vcpupin[2].cpuset)
            self.assertEqual(set([6, 7]), cfg.cputune.vcpupin[3].cpuset)

            # Ensure the emulator threads are pinned on host CPUs 4-5,
            # which serve the "normal" (non-realtime) vCPUs
            self.assertEqual(set([4, 5]), cfg.cputune.emulatorpin.cpuset)

            # Ensure the realtime vCPUs are 2-3, i.e. the ones pinned to
            # host CPUs 6-7
            self.assertEqual(set([2, 3]), cfg.cputune.vcpusched[0].vcpus)
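    # The isolate emulator-thread policy tested below reserves host CPUs
    # per guest NUMA cell via cpuset_reserved; the emulator is then
    # pinned to exactly that reserved set rather than floating over the
    # vCPU pins. Sketch of the expectation, taken from the cells defined
    # in the test:
    #
    #   reserved = set()
    #   for cell in cells:
    #       reserved |= cell.cpuset_reserved    # {6} | set() -> {6}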
    def test_get_guest_config_numa_host_instance_isolated_emulator_threads(
            self):
        instance_topology = objects.InstanceNUMATopology(
            emulator_threads_policy=(
                fields.CPUEmulatorThreadsPolicy.ISOLATE),
            cells=[
                objects.InstanceNUMACell(
                    id=0, cpuset=set([0, 1]),
                    memory=1024, pagesize=2048,
                    cpu_policy=fields.CPUAllocationPolicy.DEDICATED,
                    cpu_pinning={0: 4, 1: 5},
                    cpuset_reserved=set([6])),
                objects.InstanceNUMACell(
                    id=1, cpuset=set([2, 3]),
                    memory=1024, pagesize=2048,
                    cpu_policy=fields.CPUAllocationPolicy.DEDICATED,
                    cpu_pinning={2: 7, 3: 8},
                    cpuset_reserved=set([]))])
        instance_ref = objects.Instance(**self.test_instance)
        instance_ref.numa_topology = instance_topology
        image_meta = objects.ImageMeta.from_dict(self.test_image_meta)

        caps = vconfig.LibvirtConfigCaps()
        caps.host = vconfig.LibvirtConfigCapsHost()
        caps.host.cpu = vconfig.LibvirtConfigCPU()
        caps.host.cpu.arch = "x86_64"
        caps.host.topology = fakelibvirt.NUMATopology()

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)

        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance_ref,
                                            image_meta)

        with test.nested(
                mock.patch.object(
                    objects.InstanceNUMATopology, "get_by_instance_uuid",
                    return_value=instance_topology),
                mock.patch.object(host.Host, 'has_min_version',
                                  return_value=True),
                mock.patch.object(host.Host, "get_capabilities",
                                  return_value=caps),
                mock.patch.object(
                    hardware, 'get_vcpu_pin_set',
                    return_value=set([4, 5, 6, 7, 8])),
                mock.patch.object(host.Host, 'get_online_cpus',
                                  return_value=set(range(10))),
                ):
            cfg = drvr._get_guest_config(instance_ref, [],
                                         image_meta, disk_info)

            self.assertEqual(set([6]), cfg.cputune.emulatorpin.cpuset)
            self.assertEqual(set([4]), cfg.cputune.vcpupin[0].cpuset)
            self.assertEqual(set([5]), cfg.cputune.vcpupin[1].cpuset)
            self.assertEqual(set([7]), cfg.cputune.vcpupin[2].cpuset)
            self.assertEqual(set([8]), cfg.cputune.vcpupin[3].cpuset)

    def test_get_cpu_numa_config_from_instance(self):
        topology = objects.InstanceNUMATopology(cells=[
            objects.InstanceNUMACell(id=0, cpuset=set([1, 2]), memory=128),
            objects.InstanceNUMACell(id=1, cpuset=set([3, 4]), memory=128),
        ])

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        conf = drvr._get_cpu_numa_config_from_instance(topology, True)

        self.assertIsInstance(conf, vconfig.LibvirtConfigGuestCPUNUMA)
        self.assertEqual(0, conf.cells[0].id)
        self.assertEqual(set([1, 2]), conf.cells[0].cpus)
        self.assertEqual(131072, conf.cells[0].memory)
        self.assertEqual("shared", conf.cells[0].memAccess)
        self.assertEqual(1, conf.cells[1].id)
        self.assertEqual(set([3, 4]), conf.cells[1].cpus)
        self.assertEqual(131072, conf.cells[1].memory)
        self.assertEqual("shared", conf.cells[1].memAccess)

    def test_get_cpu_numa_config_from_instance_none(self):
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        conf = drvr._get_cpu_numa_config_from_instance(None, False)
        self.assertIsNone(conf)

    @mock.patch.object(libvirt_driver.LibvirtDriver, "_has_numa_support",
                       return_value=True)
    def test_get_memnode_numa_config_from_instance(self, mock_numa):
        instance_topology = objects.InstanceNUMATopology(cells=[
            objects.InstanceNUMACell(id=0, cpuset=set([1, 2]), memory=128),
            objects.InstanceNUMACell(id=1, cpuset=set([3, 4]), memory=128),
            objects.InstanceNUMACell(id=16, cpuset=set([5, 6]), memory=128)
        ])

        host_topology = objects.NUMATopology(
            cells=[
                objects.NUMACell(
                    id=0, cpuset=set([1, 2]), memory=1024, mempages=[]),
                objects.NUMACell(
                    id=1, cpuset=set([3, 4]), memory=1024, mempages=[]),
                objects.NUMACell(
                    id=16, cpuset=set([5, 6]), memory=1024, mempages=[])])

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        with test.nested(
                mock.patch.object(drvr, "_get_host_numa_topology",
                                  return_value=host_topology)):
            guest_numa_config = drvr._get_guest_numa_config(
                instance_topology, flavor={},
                allowed_cpus=[1, 2, 3, 4, 5, 6], image_meta={})
            self.assertEqual(
                2, guest_numa_config.numatune.memnodes[2].cellid)
            self.assertEqual(
                [16], guest_numa_config.numatune.memnodes[2].nodeset)
            self.assertEqual(
                set([5, 6]), guest_numa_config.numaconfig.cells[2].cpus)

    @mock.patch.object(host.Host, 'has_version', return_value=True)
    def test_has_cpu_policy_support(self, mock_has_version):
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        self.assertRaises(exception.CPUPinningNotSupported,
                          drvr._has_cpu_policy_support)

    @mock.patch.object(libvirt_driver.LibvirtDriver, "_has_numa_support",
                       return_value=True)
    @mock.patch.object(host.Host, "get_capabilities")
    def test_does_not_want_hugepages(self, mock_caps, mock_numa):
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        instance_topology = objects.InstanceNUMATopology(
            cells=[
                objects.InstanceNUMACell(
                    id=1, cpuset=set([0, 1]), memory=1024, pagesize=4),
                objects.InstanceNUMACell(
                    id=2, cpuset=set([2, 3]), memory=1024, pagesize=4)])

        caps = vconfig.LibvirtConfigCaps()
        caps.host = vconfig.LibvirtConfigCapsHost()
        caps.host.cpu = vconfig.LibvirtConfigCPU()
        caps.host.cpu.arch = fields.Architecture.X86_64
        caps.host.topology = fakelibvirt.NUMATopology()

        mock_caps.return_value = caps

        host_topology = drvr._get_host_numa_topology()

        self.assertFalse(drvr._wants_hugepages(None, None))
        self.assertFalse(drvr._wants_hugepages(host_topology, None))
        self.assertFalse(drvr._wants_hugepages(None, instance_topology))
        self.assertFalse(drvr._wants_hugepages(host_topology,
                                               instance_topology))
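    # The reserved_huge_pages accounting exercised below maps each
    # {'node': N, 'size': S, 'count': C} config entry onto the matching
    # mempages bucket of host NUMA cell N. A minimal sketch of the
    # bookkeeping the assertions imply (illustrative only):
    #
    #   reserved = {(0, 2048): 128, (1, 2048): 1, (3, 2048): 64}
    #   for cell in host_topology.cells:
    #       for pages in cell.mempages:
    #           pages.reserved = reserved.get((cell.id, pages.size), 0)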
    @mock.patch.object(libvirt_driver.LibvirtDriver, "_has_numa_support",
                       return_value=True)
    @mock.patch.object(host.Host, "get_capabilities")
    def test_does_want_hugepages(self, mock_caps, mock_numa):
        for arch in [fields.Architecture.I686,
                     fields.Architecture.X86_64,
                     fields.Architecture.AARCH64,
                     fields.Architecture.PPC64LE,
                     fields.Architecture.PPC64]:
            self._test_does_want_hugepages(mock_caps, mock_numa, arch)

    def _test_does_want_hugepages(self, mock_caps, mock_numa,
                                  architecture):
        self.flags(reserved_huge_pages=[
            {'node': 0, 'size': 2048, 'count': 128},
            {'node': 1, 'size': 2048, 'count': 1},
            {'node': 3, 'size': 2048, 'count': 64}])
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        instance_topology = objects.InstanceNUMATopology(
            cells=[
                objects.InstanceNUMACell(
                    id=1, cpuset=set([0, 1]), memory=1024, pagesize=2048),
                objects.InstanceNUMACell(
                    id=2, cpuset=set([2, 3]), memory=1024, pagesize=2048)])

        caps = vconfig.LibvirtConfigCaps()
        caps.host = vconfig.LibvirtConfigCapsHost()
        caps.host.cpu = vconfig.LibvirtConfigCPU()
        caps.host.cpu.arch = architecture
        caps.host.topology = fakelibvirt.NUMATopology()
        for i, cell in enumerate(caps.host.topology.cells):
            cell.mempages = fakelibvirt.create_mempages(
                [(4, 1024 * i), (2048, i)])

        mock_caps.return_value = caps

        host_topology = drvr._get_host_numa_topology()
        self.assertEqual(128, host_topology.cells[0].mempages[1].reserved)
        self.assertEqual(1, host_topology.cells[1].mempages[1].reserved)
        self.assertEqual(0, host_topology.cells[2].mempages[1].reserved)
        self.assertEqual(64, host_topology.cells[3].mempages[1].reserved)

        self.assertTrue(drvr._wants_hugepages(host_topology,
                                              instance_topology))

    def test_get_guest_config_clock(self):
        self.flags(virt_type='kvm', group='libvirt')
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        instance_ref = objects.Instance(**self.test_instance)
        image_meta = objects.ImageMeta.from_dict(self.test_image_meta)
        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance_ref,
                                            image_meta)
        hpet_map = {
            fields.Architecture.X86_64: True,
            fields.Architecture.I686: True,
            fields.Architecture.PPC: False,
            fields.Architecture.PPC64: False,
            fields.Architecture.ARMV7: False,
            fields.Architecture.AARCH64: False,
        }

        for guestarch, expect_hpet in hpet_map.items():
            with mock.patch.object(libvirt_driver.libvirt_utils,
                                   'get_arch',
                                   return_value=guestarch):
                cfg = drvr._get_guest_config(instance_ref, [],
                                             image_meta, disk_info)
                self.assertIsInstance(cfg.clock,
                                      vconfig.LibvirtConfigGuestClock)
                self.assertEqual(cfg.clock.offset, "utc")
                self.assertIsInstance(cfg.clock.timers[0],
                                      vconfig.LibvirtConfigGuestTimer)
                self.assertIsInstance(cfg.clock.timers[1],
                                      vconfig.LibvirtConfigGuestTimer)
                self.assertEqual(cfg.clock.timers[0].name, "pit")
                self.assertEqual(cfg.clock.timers[0].tickpolicy, "delay")
                self.assertEqual(cfg.clock.timers[1].name, "rtc")
                self.assertEqual(cfg.clock.timers[1].tickpolicy, "catchup")
                if expect_hpet:
                    self.assertEqual(3, len(cfg.clock.timers))
                    self.assertIsInstance(cfg.clock.timers[2],
                                          vconfig.LibvirtConfigGuestTimer)
                    self.assertEqual('hpet', cfg.clock.timers[2].name)
                    self.assertFalse(cfg.clock.timers[2].present)
                else:
                    self.assertEqual(2, len(cfg.clock.timers))
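    # Windows guests get a "localtime" clock plus the Hyper-V
    # enlightenment timer; the test below pins the expected timer order.
    # For reference, the XML this corresponds to is roughly:
    #
    #   <clock offset='localtime'>
    #     <timer name='pit'/> <timer name='rtc'/>
    #     <timer name='hpet' present='no'/>
    #     <timer name='hypervclock' present='yes'/>
    #   </clock>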
    @mock.patch.object(libvirt_utils, 'get_arch')
    def test_get_guest_config_windows_timer(self, mock_get_arch):
        mock_get_arch.return_value = fields.Architecture.I686
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        instance_ref = objects.Instance(**self.test_instance)
        instance_ref['os_type'] = 'windows'
        image_meta = objects.ImageMeta.from_dict(self.test_image_meta)

        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance_ref,
                                            image_meta)
        cfg = drvr._get_guest_config(instance_ref,
                                     _fake_network_info(self, 1),
                                     image_meta, disk_info)

        self.assertIsInstance(cfg.clock,
                              vconfig.LibvirtConfigGuestClock)
        self.assertEqual(cfg.clock.offset, "localtime")

        self.assertEqual(4, len(cfg.clock.timers), cfg.clock.timers)
        self.assertEqual("pit", cfg.clock.timers[0].name)
        self.assertEqual("rtc", cfg.clock.timers[1].name)
        self.assertEqual("hpet", cfg.clock.timers[2].name)
        self.assertFalse(cfg.clock.timers[2].present)
        self.assertEqual("hypervclock", cfg.clock.timers[3].name)
        self.assertTrue(cfg.clock.timers[3].present)

        self.assertEqual(3, len(cfg.features))
        self.assertIsInstance(cfg.features[0],
                              vconfig.LibvirtConfigGuestFeatureACPI)
        self.assertIsInstance(cfg.features[1],
                              vconfig.LibvirtConfigGuestFeatureAPIC)
        self.assertIsInstance(cfg.features[2],
                              vconfig.LibvirtConfigGuestFeatureHyperV)

    @mock.patch.object(host.Host, 'has_min_version')
    def test_get_guest_config_windows_hyperv_feature2(self, mock_version):
        mock_version.return_value = True
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        instance_ref = objects.Instance(**self.test_instance)
        instance_ref['os_type'] = 'windows'
        image_meta = objects.ImageMeta.from_dict(self.test_image_meta)

        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance_ref,
                                            image_meta)
        cfg = drvr._get_guest_config(instance_ref,
                                     _fake_network_info(self, 1),
                                     image_meta, disk_info)

        self.assertIsInstance(cfg.clock,
                              vconfig.LibvirtConfigGuestClock)
        self.assertEqual(cfg.clock.offset, "localtime")

        self.assertEqual(3, len(cfg.features))
        self.assertIsInstance(cfg.features[0],
                              vconfig.LibvirtConfigGuestFeatureACPI)
        self.assertIsInstance(cfg.features[1],
                              vconfig.LibvirtConfigGuestFeatureAPIC)
        self.assertIsInstance(cfg.features[2],
                              vconfig.LibvirtConfigGuestFeatureHyperV)

        self.assertTrue(cfg.features[2].relaxed)
        self.assertTrue(cfg.features[2].spinlocks)
        self.assertEqual(8191, cfg.features[2].spinlock_retries)
        self.assertTrue(cfg.features[2].vapic)
    def test_get_guest_config_with_two_nics(self):
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        instance_ref = objects.Instance(**self.test_instance)
        image_meta = objects.ImageMeta.from_dict(self.test_image_meta)

        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance_ref,
                                            image_meta)
        cfg = drvr._get_guest_config(instance_ref,
                                     _fake_network_info(self, 2),
                                     image_meta, disk_info)
        self.assertEqual(2, len(cfg.features))
        self.assertIsInstance(cfg.features[0],
                              vconfig.LibvirtConfigGuestFeatureACPI)
        self.assertIsInstance(cfg.features[1],
                              vconfig.LibvirtConfigGuestFeatureAPIC)
        self.assertEqual(cfg.memory, instance_ref.flavor.memory_mb * units.Ki)
        self.assertEqual(cfg.vcpus, instance_ref.flavor.vcpus)
        self.assertEqual(cfg.os_type, fields.VMMode.HVM)
        self.assertEqual(cfg.os_boot_dev, ["hd"])
        self.assertIsNone(cfg.os_root)
        self.assertEqual(len(cfg.devices), 10)
        self.assertIsInstance(cfg.devices[0],
                              vconfig.LibvirtConfigGuestDisk)
        self.assertIsInstance(cfg.devices[1],
                              vconfig.LibvirtConfigGuestDisk)
        self.assertIsInstance(cfg.devices[2],
                              vconfig.LibvirtConfigGuestInterface)
        self.assertIsInstance(cfg.devices[3],
                              vconfig.LibvirtConfigGuestInterface)
        self.assertIsInstance(cfg.devices[4],
                              vconfig.LibvirtConfigGuestSerial)
        self.assertIsInstance(cfg.devices[5],
                              vconfig.LibvirtConfigGuestSerial)
        self.assertIsInstance(cfg.devices[6],
                              vconfig.LibvirtConfigGuestInput)
        self.assertIsInstance(cfg.devices[7],
                              vconfig.LibvirtConfigGuestGraphics)
        self.assertIsInstance(cfg.devices[8],
                              vconfig.LibvirtConfigGuestVideo)
        self.assertIsInstance(cfg.devices[9],
                              vconfig.LibvirtConfigMemoryBalloon)

    def test_get_guest_config_bug_1118829(self):
        self.flags(virt_type='uml', group='libvirt')
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        instance_ref = objects.Instance(**self.test_instance)

        disk_info = {'disk_bus': 'virtio',
                     'cdrom_bus': 'ide',
                     'mapping': {u'vda': {'bus': 'virtio',
                                          'type': 'disk',
                                          'dev': u'vda'},
                                 'root': {'bus': 'virtio',
                                          'type': 'disk',
                                          'dev': 'vda'}}}

        # NOTE(jdg): For this specific test leave this blank
        # This will exercise the failed code path still,
        # and won't require fakes and stubs of the iscsi discovery
        block_device_info = {}
        image_meta = objects.ImageMeta.from_dict(self.test_image_meta)
        drvr._get_guest_config(instance_ref, [], image_meta, disk_info,
                               None, block_device_info)
        self.assertEqual(instance_ref['root_device_name'], '/dev/vda')

    def test_get_guest_config_with_root_device_name(self):
        self.flags(virt_type='uml', group='libvirt')
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        instance_ref = objects.Instance(**self.test_instance)
        image_meta = objects.ImageMeta.from_dict(self.test_image_meta)

        block_device_info = {'root_device_name': '/dev/vdb'}
        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance_ref,
                                            image_meta,
                                            block_device_info)
        cfg = drvr._get_guest_config(instance_ref, [], image_meta,
                                     disk_info, None, block_device_info)
        self.assertEqual(0, len(cfg.features))
        self.assertEqual(cfg.memory, instance_ref.flavor.memory_mb * units.Ki)
        self.assertEqual(cfg.vcpus, instance_ref.flavor.vcpus)
        self.assertEqual(cfg.os_type, "uml")
        self.assertEqual(cfg.os_boot_dev, [])
        self.assertEqual(cfg.os_root, '/dev/vdb')
        self.assertEqual(len(cfg.devices), 3)
        self.assertIsInstance(cfg.devices[0],
                              vconfig.LibvirtConfigGuestDisk)
        self.assertIsInstance(cfg.devices[1],
                              vconfig.LibvirtConfigGuestDisk)
        self.assertIsInstance(cfg.devices[2],
                              vconfig.LibvirtConfigGuestConsole)

    def test_has_uefi_support_not_supported_arch(self):
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        self._stub_host_capabilities_cpu_arch(fields.Architecture.ALPHA)
        self.assertFalse(drvr._has_uefi_support())

    @mock.patch('os.path.exists', return_value=False)
    def test_has_uefi_support_with_no_loader_existed(self, mock_exist):
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        self.assertFalse(drvr._has_uefi_support())

    @mock.patch('os.path.exists', return_value=True)
    def test_has_uefi_support(self, mock_has_version):
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)

        self._stub_host_capabilities_cpu_arch(fields.Architecture.X86_64)

        with mock.patch.object(drvr._host,
                               'has_min_version', return_value=True):
            self.assertTrue(drvr._has_uefi_support())

    def test_get_guest_config_with_uefi(self):
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)

        image_meta = objects.ImageMeta.from_dict({
            "disk_format": "raw",
            "properties": {"hw_firmware_type": "uefi"}})
        instance_ref = objects.Instance(**self.test_instance)

        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance_ref,
                                            image_meta)
        with mock.patch.object(drvr, "_has_uefi_support",
                               return_value=True) as mock_support:
            cfg = drvr._get_guest_config(instance_ref, [],
                                         image_meta, disk_info)
            mock_support.assert_called_once_with()
            self.assertEqual(cfg.os_loader_type, "pflash")
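    # The block-device tests below feed pre-built BDM objects straight
    # into _get_guest_config; the point they pin down is that each
    # mapped volume shows up as a LibvirtConfigGuestDisk after the image
    # disks, and that connection_info changes are persisted via save().
    # The minimal BDM input shape is just:
    #
    #   {'id': 1, 'source_type': 'volume', 'destination_type': 'volume',
    #    'device_name': '/dev/vdc'}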
    def test_get_guest_config_with_block_device(self):
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)

        instance_ref = objects.Instance(**self.test_instance)
        image_meta = objects.ImageMeta.from_dict(self.test_image_meta)
        conn_info = {'driver_volume_type': 'fake', 'data': {}}
        bdms = block_device_obj.block_device_make_list_from_dicts(
            self.context, [
                fake_block_device.FakeDbBlockDeviceDict(
                    {'id': 1,
                     'source_type': 'volume', 'destination_type': 'volume',
                     'device_name': '/dev/vdc'}),
                fake_block_device.FakeDbBlockDeviceDict(
                    {'id': 2,
                     'source_type': 'volume', 'destination_type': 'volume',
                     'device_name': '/dev/vdd'}),
            ]
        )
        info = {'block_device_mapping': driver_block_device.convert_volumes(
            bdms
        )}
        info['block_device_mapping'][0]['connection_info'] = conn_info
        info['block_device_mapping'][1]['connection_info'] = conn_info

        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance_ref,
                                            image_meta,
                                            info)
        with mock.patch.object(
                driver_block_device.DriverVolumeBlockDevice, 'save'
        ) as mock_save:
            cfg = drvr._get_guest_config(instance_ref, [],
                                         image_meta, disk_info,
                                         None, info)
            self.assertIsInstance(cfg.devices[2],
                                  vconfig.LibvirtConfigGuestDisk)
            self.assertEqual(cfg.devices[2].target_dev, 'vdc')
            self.assertIsInstance(cfg.devices[3],
                                  vconfig.LibvirtConfigGuestDisk)
            self.assertEqual(cfg.devices[3].target_dev, 'vdd')
            mock_save.assert_called_with()

    def test_get_guest_config_lxc_with_attached_volume(self):
        self.flags(virt_type='lxc', group='libvirt')
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)

        instance_ref = objects.Instance(**self.test_instance)
        image_meta = objects.ImageMeta.from_dict(self.test_image_meta)
        conn_info = {'driver_volume_type': 'fake', 'data': {}}
        bdms = block_device_obj.block_device_make_list_from_dicts(
            self.context, [
                fake_block_device.FakeDbBlockDeviceDict(
                    {'id': 1,
                     'source_type': 'volume', 'destination_type': 'volume',
                     'boot_index': 0}),
                fake_block_device.FakeDbBlockDeviceDict(
                    {'id': 2,
                     'source_type': 'volume', 'destination_type': 'volume',
                     }),
                fake_block_device.FakeDbBlockDeviceDict(
                    {'id': 3,
                     'source_type': 'volume', 'destination_type': 'volume',
                     }),
            ]
        )
        info = {'block_device_mapping': driver_block_device.convert_volumes(
            bdms
        )}

        info['block_device_mapping'][0]['connection_info'] = conn_info
        info['block_device_mapping'][1]['connection_info'] = conn_info
        info['block_device_mapping'][2]['connection_info'] = conn_info
        info['block_device_mapping'][0]['mount_device'] = '/dev/vda'
        info['block_device_mapping'][1]['mount_device'] = '/dev/vdc'
        info['block_device_mapping'][2]['mount_device'] = '/dev/vdd'
        with mock.patch.object(
                driver_block_device.DriverVolumeBlockDevice, 'save'
        ) as mock_save:
            disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                                instance_ref,
                                                image_meta,
                                                info)
            cfg = drvr._get_guest_config(instance_ref, [],
                                         image_meta, disk_info,
                                         None, info)
            self.assertIsInstance(cfg.devices[1],
                                  vconfig.LibvirtConfigGuestDisk)
            self.assertEqual(cfg.devices[1].target_dev, 'vdc')
            self.assertIsInstance(cfg.devices[2],
                                  vconfig.LibvirtConfigGuestDisk)
            self.assertEqual(cfg.devices[2].target_dev, 'vdd')
            mock_save.assert_called_with()
    def test_get_guest_config_with_configdrive(self):
        # It's necessary to check if the architecture is power, because
        # power doesn't support IDE, so libvirt translates all IDE calls
        # to SCSI
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        instance_ref = objects.Instance(**self.test_instance)
        image_meta = objects.ImageMeta.from_dict(self.test_image_meta)

        # make configdrive.required_by() return True
        instance_ref['config_drive'] = True

        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance_ref,
                                            image_meta)
        cfg = drvr._get_guest_config(instance_ref, [], image_meta, disk_info)

        # The config drive gets the first drive letter on the bus that
        # is still available; on power and aarch64 that bus is SCSI, so
        # "sda" is expected there instead of "hda"
        expect = {"ppc": "sda", "ppc64": "sda",
                  "ppc64le": "sda", "aarch64": "sda"}
        disk = expect.get(blockinfo.libvirt_utils.get_arch({}), "hda")
        self.assertIsInstance(cfg.devices[2],
                              vconfig.LibvirtConfigGuestDisk)
        self.assertEqual(cfg.devices[2].target_dev, disk)

    def test_get_guest_config_default_with_virtio_scsi_bus(self):
        self._test_get_guest_config_with_virtio_scsi_bus()

    @mock.patch.object(rbd_utils.RBDDriver, 'get_mon_addrs')
    @mock.patch.object(rbd_utils, 'rbd')
    def test_get_guest_config_rbd_with_virtio_scsi_bus(
            self, mock_rdb, mock_get_mon_addrs):
        self.flags(images_type='rbd', group='libvirt')
        mock_get_mon_addrs.return_value = ("host", 9876)
        self._test_get_guest_config_with_virtio_scsi_bus()

    def _test_get_guest_config_with_virtio_scsi_bus(self):
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)

        image_meta = objects.ImageMeta.from_dict({
            "disk_format": "raw",
            "properties": {"hw_scsi_model": "virtio-scsi",
                           "hw_disk_bus": "scsi"}})
        instance_ref = objects.Instance(**self.test_instance)

        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance_ref,
                                            image_meta,
                                            [])
        cfg = drvr._get_guest_config(instance_ref, [], image_meta, disk_info)
        self.assertIsInstance(cfg.devices[0],
                              vconfig.LibvirtConfigGuestDisk)
        self.assertEqual(0, cfg.devices[0].device_addr.unit)
        self.assertIsInstance(cfg.devices[1],
                              vconfig.LibvirtConfigGuestDisk)
        self.assertEqual(1, cfg.devices[1].device_addr.unit)
        self.assertIsInstance(cfg.devices[2],
                              vconfig.LibvirtConfigGuestController)
        self.assertEqual(cfg.devices[2].model, 'virtio-scsi')
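    # SCSI unit numbers are allocated sequentially across the whole
    # virtio-scsi controller, so with the root and ephemeral image disks
    # occupying units 0 and 1, the two attached volumes in the next test
    # are expected at units 2 and 3; informally:
    #
    #   unit = number of disks already placed on the controller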
    def test_get_guest_config_with_virtio_scsi_bus_bdm(self):
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)

        image_meta = objects.ImageMeta.from_dict({
            "disk_format": "raw",
            "properties": {"hw_scsi_model": "virtio-scsi",
                           "hw_disk_bus": "scsi"}})
        instance_ref = objects.Instance(**self.test_instance)
        conn_info = {'driver_volume_type': 'fake', 'data': {}}
        bdms = block_device_obj.block_device_make_list_from_dicts(
            self.context, [
                fake_block_device.FakeDbBlockDeviceDict(
                    {'id': 1,
                     'source_type': 'volume', 'destination_type': 'volume',
                     'device_name': '/dev/sdc', 'disk_bus': 'scsi'}),
                fake_block_device.FakeDbBlockDeviceDict(
                    {'id': 2,
                     'source_type': 'volume', 'destination_type': 'volume',
                     'device_name': '/dev/sdd', 'disk_bus': 'scsi'}),
            ]
        )
        bd_info = {
            'block_device_mapping': driver_block_device.convert_volumes(
                bdms)}
        bd_info['block_device_mapping'][0]['connection_info'] = conn_info
        bd_info['block_device_mapping'][1]['connection_info'] = conn_info

        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance_ref,
                                            image_meta,
                                            bd_info)
        with mock.patch.object(
                driver_block_device.DriverVolumeBlockDevice, 'save'
        ) as mock_save:
            cfg = drvr._get_guest_config(instance_ref, [], image_meta,
                                         disk_info, [], bd_info)
            self.assertIsInstance(cfg.devices[2],
                                  vconfig.LibvirtConfigGuestDisk)
            self.assertEqual(cfg.devices[2].target_dev, 'sdc')
            self.assertEqual(cfg.devices[2].target_bus, 'scsi')
            self.assertEqual(2, cfg.devices[2].device_addr.unit)
            self.assertIsInstance(cfg.devices[3],
                                  vconfig.LibvirtConfigGuestDisk)
            self.assertEqual(cfg.devices[3].target_dev, 'sdd')
            self.assertEqual(cfg.devices[3].target_bus, 'scsi')
            self.assertEqual(3, cfg.devices[3].device_addr.unit)
            self.assertIsInstance(cfg.devices[4],
                                  vconfig.LibvirtConfigGuestController)
            self.assertEqual(cfg.devices[4].model, 'virtio-scsi')
            mock_save.assert_called_with()

    def _get_guest_config_with_graphics(self):
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)

        instance_ref = objects.Instance(**self.test_instance)
        image_meta = objects.ImageMeta.from_dict(self.test_image_meta)

        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance_ref,
                                            image_meta)
        cfg = drvr._get_guest_config(instance_ref, [],
                                     image_meta, disk_info)
        return cfg

    def test_get_guest_config_with_vnc(self):
        self.flags(enabled=True, server_listen='10.0.0.1',
                   keymap='en-ie', group='vnc')
        self.flags(virt_type='kvm', group='libvirt')
        self.flags(pointer_model='ps2mouse')
        self.flags(enabled=False, group='spice')

        cfg = self._get_guest_config_with_graphics()

        self.assertEqual(len(cfg.devices), 7)
        self.assertIsInstance(cfg.devices[0],
                              vconfig.LibvirtConfigGuestDisk)
        self.assertIsInstance(cfg.devices[1],
                              vconfig.LibvirtConfigGuestDisk)
        self.assertIsInstance(cfg.devices[2],
                              vconfig.LibvirtConfigGuestSerial)
        self.assertIsInstance(cfg.devices[3],
                              vconfig.LibvirtConfigGuestSerial)
        self.assertIsInstance(cfg.devices[4],
                              vconfig.LibvirtConfigGuestGraphics)
        self.assertIsInstance(cfg.devices[5],
                              vconfig.LibvirtConfigGuestVideo)
        self.assertIsInstance(cfg.devices[6],
                              vconfig.LibvirtConfigMemoryBalloon)

        self.assertEqual(cfg.devices[4].type, 'vnc')
        self.assertEqual(cfg.devices[4].keymap, 'en-ie')
        self.assertEqual(cfg.devices[4].listen, '10.0.0.1')

    def test_get_guest_config_with_vnc_and_tablet(self):
        self.flags(enabled=True, group='vnc')
        self.flags(virt_type='kvm',
                   use_usb_tablet=True,
                   group='libvirt')
        self.flags(enabled=False, group='spice')

        cfg = self._get_guest_config_with_graphics()

        self.assertEqual(len(cfg.devices), 8)
        self.assertIsInstance(cfg.devices[0],
                              vconfig.LibvirtConfigGuestDisk)
        self.assertIsInstance(cfg.devices[1],
                              vconfig.LibvirtConfigGuestDisk)
        self.assertIsInstance(cfg.devices[2],
                              vconfig.LibvirtConfigGuestSerial)
        self.assertIsInstance(cfg.devices[3],
                              vconfig.LibvirtConfigGuestSerial)
        self.assertIsInstance(cfg.devices[4],
                              vconfig.LibvirtConfigGuestInput)
        self.assertIsInstance(cfg.devices[5],
                              vconfig.LibvirtConfigGuestGraphics)
        self.assertIsInstance(cfg.devices[6],
                              vconfig.LibvirtConfigGuestVideo)
        self.assertIsInstance(cfg.devices[7],
                              vconfig.LibvirtConfigMemoryBalloon)

        self.assertEqual(cfg.devices[4].type, "tablet")
        self.assertEqual(cfg.devices[5].type, "vnc")

    def test_get_guest_config_with_spice_and_tablet(self):
        self.flags(enabled=False, group='vnc')
        self.flags(virt_type='kvm',
                   use_usb_tablet=True,
                   group='libvirt')
        self.flags(enabled=True,
                   agent_enabled=False,
                   server_listen='10.0.0.1',
                   keymap='en-ie',
                   group='spice')

        cfg = self._get_guest_config_with_graphics()

        self.assertEqual(len(cfg.devices), 8)
        self.assertIsInstance(cfg.devices[0],
                              vconfig.LibvirtConfigGuestDisk)
        self.assertIsInstance(cfg.devices[1],
                              vconfig.LibvirtConfigGuestDisk)
        self.assertIsInstance(cfg.devices[2],
                              vconfig.LibvirtConfigGuestSerial)
        self.assertIsInstance(cfg.devices[3],
                              vconfig.LibvirtConfigGuestSerial)
        self.assertIsInstance(cfg.devices[4],
                              vconfig.LibvirtConfigGuestInput)
        self.assertIsInstance(cfg.devices[5],
                              vconfig.LibvirtConfigGuestGraphics)
        self.assertIsInstance(cfg.devices[6],
                              vconfig.LibvirtConfigGuestVideo)
        self.assertIsInstance(cfg.devices[7],
                              vconfig.LibvirtConfigMemoryBalloon)

        self.assertEqual(cfg.devices[4].type, 'tablet')
        self.assertEqual(cfg.devices[5].type, 'spice')
        self.assertEqual(cfg.devices[5].keymap, 'en-ie')
        self.assertEqual(cfg.devices[5].listen, '10.0.0.1')
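    # With the SPICE agent enabled the USB tablet is dropped in favour
    # of a spicevmc channel, and the default video model depends on the
    # guest architecture (qxl on x86, vga on ppc*, virtio on aarch64),
    # which is exactly the lookup table the next test builds.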
    def test_get_guest_config_with_spice_and_agent(self):
        self.flags(enabled=False, group='vnc')
        self.flags(virt_type='kvm',
                   use_usb_tablet=True,
                   group='libvirt')
        self.flags(enabled=True,
                   agent_enabled=True,
                   group='spice')

        cfg = self._get_guest_config_with_graphics()

        expect = {"ppc": "vga", "ppc64": "vga",
                  "ppc64le": "vga", "aarch64": "virtio"}
        video_type = expect.get(blockinfo.libvirt_utils.get_arch({}), "qxl")
        self.assertEqual(len(cfg.devices), 8)
        self.assertIsInstance(cfg.devices[0],
                              vconfig.LibvirtConfigGuestDisk)
        self.assertIsInstance(cfg.devices[1],
                              vconfig.LibvirtConfigGuestDisk)
        self.assertIsInstance(cfg.devices[2],
                              vconfig.LibvirtConfigGuestSerial)
        self.assertIsInstance(cfg.devices[3],
                              vconfig.LibvirtConfigGuestSerial)
        self.assertIsInstance(cfg.devices[4],
                              vconfig.LibvirtConfigGuestChannel)
        self.assertIsInstance(cfg.devices[5],
                              vconfig.LibvirtConfigGuestGraphics)
        self.assertIsInstance(cfg.devices[6],
                              vconfig.LibvirtConfigGuestVideo)
        self.assertIsInstance(cfg.devices[7],
                              vconfig.LibvirtConfigMemoryBalloon)

        self.assertEqual(cfg.devices[4].target_name, "com.redhat.spice.0")
        self.assertEqual(cfg.devices[4].type, 'spicevmc')
        self.assertEqual(cfg.devices[5].type, "spice")
        self.assertEqual(cfg.devices[6].type, video_type)

    def test_get_guest_config_with_vnc_no_keymap(self):
        self.flags(virt_type='kvm', group='libvirt')
        self.flags(enabled=True, keymap=None, group='vnc')
        self.flags(enabled=False, group='spice')

        cfg = self._get_guest_config_with_graphics()

        for device in cfg.devices:
            if device.root_name == 'graphics':
                self.assertIsInstance(device,
                                      vconfig.LibvirtConfigGuestGraphics)
                self.assertEqual('vnc', device.type)
                self.assertIsNone(device.keymap)

    def test_get_guest_config_with_spice_no_keymap(self):
        self.flags(virt_type='kvm', group='libvirt')
        self.flags(enabled=True, keymap=None, group='spice')
        self.flags(enabled=False, group='vnc')

        cfg = self._get_guest_config_with_graphics()

        for device in cfg.devices:
            if device.root_name == 'graphics':
                self.assertIsInstance(device,
                                      vconfig.LibvirtConfigGuestGraphics)
                self.assertEqual('spice', device.type)
                self.assertIsNone(device.keymap)

    @mock.patch.object(host.Host, 'get_guest')
    @mock.patch.object(libvirt_driver.LibvirtDriver,
                       '_get_serial_ports_from_guest')
    @mock.patch('nova.console.serial.acquire_port')
    @mock.patch('nova.virt.hardware.get_number_of_serial_ports',
                return_value=1)
    @mock.patch.object(libvirt_driver.libvirt_utils, 'get_arch',)
    def test_create_serial_console_devices_based_on_arch(
            self, mock_get_arch, mock_get_port_number, mock_acquire_port,
            mock_ports, mock_guest):
        self.flags(enabled=True, group='serial_console')
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        instance = objects.Instance(**self.test_instance)

        expected = {
            fields.Architecture.X86_64: vconfig.LibvirtConfigGuestSerial,
            fields.Architecture.S390: vconfig.LibvirtConfigGuestConsole,
            fields.Architecture.S390X: vconfig.LibvirtConfigGuestConsole}

        for guest_arch, device_type in expected.items():
            mock_get_arch.return_value = guest_arch
            guest = vconfig.LibvirtConfigGuest()

            drvr._create_consoles(virt_type="kvm", guest_cfg=guest,
                                  instance=instance, flavor={},
                                  image_meta={})
            self.assertEqual(2, len(guest.devices))
            console_device = guest.devices[0]
            self.assertIsInstance(console_device, device_type)
            self.assertEqual("tcp", console_device.type)
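    # The port-limit test that follows calls _create_consoles three
    # times with get_arch side-effects of X86_64, S390 and S390X in
    # turn: only the first (x86) call is wrapped in assertRaises, so the
    # expectation encoded here is that the serial-port cap trips
    # SerialPortNumberLimitExceeded on the x86 serial-device path while
    # the s390/s390x console paths complete without raising.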
    @mock.patch.object(host.Host, 'get_guest')
    @mock.patch.object(libvirt_driver.LibvirtDriver,
                       '_get_serial_ports_from_guest')
    @mock.patch('nova.virt.hardware.get_number_of_serial_ports',
                return_value=4)
    @mock.patch.object(libvirt_driver.libvirt_utils, 'get_arch',
                       side_effect=[fields.Architecture.X86_64,
                                    fields.Architecture.S390,
                                    fields.Architecture.S390X])
    def test_create_serial_console_devices_with_limit_exceeded_based_on_arch(
            self, mock_get_arch, mock_get_port_number, mock_ports,
            mock_guest):
        self.flags(enabled=True, group='serial_console')
        self.flags(virt_type="qemu", group='libvirt')
        flavor = 'fake_flavor'
        image_meta = objects.ImageMeta()
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        guest = vconfig.LibvirtConfigGuest()
        instance = objects.Instance(**self.test_instance)
        self.assertRaises(exception.SerialPortNumberLimitExceeded,
                          drvr._create_consoles,
                          "kvm", guest, instance, flavor, image_meta)
        mock_get_arch.assert_called_with(image_meta)
        mock_get_port_number.assert_called_with(flavor, image_meta)

        drvr._create_consoles("kvm", guest, instance, flavor, image_meta)
        mock_get_arch.assert_called_with(image_meta)
        mock_get_port_number.assert_called_with(flavor, image_meta)

        drvr._create_consoles("kvm", guest, instance, flavor, image_meta)
        mock_get_arch.assert_called_with(image_meta)
        mock_get_port_number.assert_called_with(flavor, image_meta)

    @mock.patch('nova.console.serial.acquire_port')
    def test_get_guest_config_serial_console(self, acquire_port):
        self.flags(enabled=True, group='serial_console')

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        instance_ref = objects.Instance(**self.test_instance)
        image_meta = objects.ImageMeta.from_dict(self.test_image_meta)

        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance_ref,
                                            image_meta)

        acquire_port.return_value = 11111

        cfg = drvr._get_guest_config(instance_ref, [],
                                     image_meta, disk_info)
        self.assertEqual(8, len(cfg.devices))
        self.assertIsInstance(cfg.devices[0],
                              vconfig.LibvirtConfigGuestDisk)
        self.assertIsInstance(cfg.devices[1],
                              vconfig.LibvirtConfigGuestDisk)
        self.assertIsInstance(cfg.devices[2],
                              vconfig.LibvirtConfigGuestSerial)
        self.assertIsInstance(cfg.devices[3],
                              vconfig.LibvirtConfigGuestSerial)
        self.assertIsInstance(cfg.devices[4],
                              vconfig.LibvirtConfigGuestInput)
        self.assertIsInstance(cfg.devices[5],
                              vconfig.LibvirtConfigGuestGraphics)
        self.assertIsInstance(cfg.devices[6],
                              vconfig.LibvirtConfigGuestVideo)
        self.assertIsInstance(cfg.devices[7],
                              vconfig.LibvirtConfigMemoryBalloon)

        self.assertEqual("tcp", cfg.devices[2].type)
        self.assertEqual(11111, cfg.devices[2].listen_port)
    def test_get_guest_config_serial_console_through_flavor(self):
        self.flags(enabled=True, group='serial_console')

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        instance_ref = objects.Instance(**self.test_instance)
        instance_ref.flavor.extra_specs = {'hw:serial_port_count': 3}
        image_meta = objects.ImageMeta.from_dict(self.test_image_meta)

        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance_ref,
                                            image_meta)

        cfg = drvr._get_guest_config(instance_ref, [],
                                     image_meta, disk_info)
        self.assertEqual(10, len(cfg.devices))
        self.assertIsInstance(cfg.devices[0],
                              vconfig.LibvirtConfigGuestDisk)
        self.assertIsInstance(cfg.devices[1],
                              vconfig.LibvirtConfigGuestDisk)
        self.assertIsInstance(cfg.devices[2],
                              vconfig.LibvirtConfigGuestSerial)
        self.assertIsInstance(cfg.devices[3],
                              vconfig.LibvirtConfigGuestSerial)
        self.assertIsInstance(cfg.devices[4],
                              vconfig.LibvirtConfigGuestSerial)
        self.assertIsInstance(cfg.devices[5],
                              vconfig.LibvirtConfigGuestSerial)
        self.assertIsInstance(cfg.devices[6],
                              vconfig.LibvirtConfigGuestInput)
        self.assertIsInstance(cfg.devices[7],
                              vconfig.LibvirtConfigGuestGraphics)
        self.assertIsInstance(cfg.devices[8],
                              vconfig.LibvirtConfigGuestVideo)
        self.assertIsInstance(cfg.devices[9],
                              vconfig.LibvirtConfigMemoryBalloon)

        self.assertEqual("tcp", cfg.devices[2].type)
        self.assertEqual("tcp", cfg.devices[3].type)
        self.assertEqual("tcp", cfg.devices[4].type)

    def test_get_guest_config_serial_console_invalid_flavor(self):
        self.flags(enabled=True, group='serial_console')

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        instance_ref = objects.Instance(**self.test_instance)
        instance_ref.flavor.extra_specs = {'hw:serial_port_count': "a"}
        image_meta = objects.ImageMeta.from_dict(self.test_image_meta)

        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance_ref,
                                            image_meta)

        self.assertRaises(
            exception.ImageSerialPortNumberInvalid,
            drvr._get_guest_config, instance_ref, [],
            image_meta, disk_info)

    def test_get_guest_config_serial_console_image_and_flavor(self):
        self.flags(enabled=True, group='serial_console')

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)

        image_meta = objects.ImageMeta.from_dict({
            "disk_format": "raw",
            "properties": {"hw_serial_port_count": "3"}})
        instance_ref = objects.Instance(**self.test_instance)
        instance_ref.flavor.extra_specs = {'hw:serial_port_count': 4}
        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance_ref,
                                            image_meta)
        cfg = drvr._get_guest_config(instance_ref, [], image_meta,
                                     disk_info)
        self.assertEqual(10, len(cfg.devices), cfg.devices)
        self.assertIsInstance(cfg.devices[0],
                              vconfig.LibvirtConfigGuestDisk)
        self.assertIsInstance(cfg.devices[1],
                              vconfig.LibvirtConfigGuestDisk)
        self.assertIsInstance(cfg.devices[2],
                              vconfig.LibvirtConfigGuestSerial)
        self.assertIsInstance(cfg.devices[3],
                              vconfig.LibvirtConfigGuestSerial)
        self.assertIsInstance(cfg.devices[4],
                              vconfig.LibvirtConfigGuestSerial)
        self.assertIsInstance(cfg.devices[5],
                              vconfig.LibvirtConfigGuestSerial)
        self.assertIsInstance(cfg.devices[6],
                              vconfig.LibvirtConfigGuestInput)
        self.assertIsInstance(cfg.devices[7],
                              vconfig.LibvirtConfigGuestGraphics)
        self.assertIsInstance(cfg.devices[8],
                              vconfig.LibvirtConfigGuestVideo)
        self.assertIsInstance(cfg.devices[9],
                              vconfig.LibvirtConfigMemoryBalloon)

        self.assertEqual("tcp", cfg.devices[2].type)
        self.assertEqual("tcp", cfg.devices[3].type)
        self.assertEqual("tcp", cfg.devices[4].type)
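    # _create_consoles attaches a <log> sub-element to every char device
    # it builds, pointing at the instance's console.log with append
    # disabled; the parametrised helper in the next test walks the
    # arch/virt_type matrix and checks the resulting device class and
    # type ("tcp" when the serial console is enabled, "pty" otherwise).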
    @mock.patch.object(host.Host, 'has_min_version', return_value=True)
    @mock.patch('nova.console.serial.acquire_port')
    @mock.patch('nova.virt.hardware.get_number_of_serial_ports',
                return_value=1)
    @mock.patch.object(libvirt_driver.libvirt_utils, 'get_arch',)
    def test_guest_config_char_device_logd(self, mock_get_arch,
                                           mock_get_number_serial_ports,
                                           mock_acquire_port,
                                           mock_host_has_min_version):
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)

        def _test_consoles(arch_to_mock, serial_enabled,
                           expected_device_type, expected_device_cls,
                           virt_type='qemu'):
            guest_cfg = vconfig.LibvirtConfigGuest()
            mock_get_arch.return_value = arch_to_mock
            self.flags(enabled=serial_enabled, group='serial_console')
            instance = objects.Instance(**self.test_instance)

            drvr._create_consoles(virt_type, guest_cfg, instance=instance,
                                  flavor=None, image_meta=None)

            self.assertEqual(1, len(guest_cfg.devices))
            device = guest_cfg.devices[0]
            self.assertEqual(expected_device_type, device.type)
            self.assertIsInstance(device, expected_device_cls)
            self.assertIsInstance(device.log,
                                  vconfig.LibvirtConfigGuestCharDeviceLog)
            self.assertEqual("off", device.log.append)
            self.assertIsNotNone(device.log.file)
            self.assertTrue(device.log.file.endswith("console.log"))

        _test_consoles(fields.Architecture.X86_64, True,
                       "tcp", vconfig.LibvirtConfigGuestSerial)
        _test_consoles(fields.Architecture.X86_64, False,
                       "pty", vconfig.LibvirtConfigGuestSerial)
        _test_consoles(fields.Architecture.S390, True,
                       "tcp", vconfig.LibvirtConfigGuestConsole)
        _test_consoles(fields.Architecture.S390X, False,
                       "pty", vconfig.LibvirtConfigGuestConsole)
        _test_consoles(fields.Architecture.X86_64, False,
                       "pty", vconfig.LibvirtConfigGuestConsole, 'xen')

    @mock.patch('nova.console.serial.acquire_port')
    def test_get_guest_config_serial_console_through_port_rng_exhausted(
            self, acquire_port):
        self.flags(enabled=True, group='serial_console')

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        instance_ref = objects.Instance(**self.test_instance)
        image_meta = objects.ImageMeta.from_dict(self.test_image_meta)

        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance_ref,
                                            image_meta)

        acquire_port.side_effect = exception.SocketPortRangeExhaustedException(
            '127.0.0.1')
        self.assertRaises(
            exception.SocketPortRangeExhaustedException,
            drvr._get_guest_config, instance_ref, [],
            image_meta, disk_info)

    @mock.patch('nova.console.serial.release_port')
    @mock.patch.object(libvirt_driver.LibvirtDriver, 'get_info')
    @mock.patch.object(host.Host, 'get_guest')
    @mock.patch.object(libvirt_driver.LibvirtDriver,
                       '_get_serial_ports_from_guest')
    def test_serial_console_release_port(
            self, mock_get_serial_ports_from_guest, mock_get_guest,
            mock_get_info, mock_release_port):
        self.flags(enabled="True", group='serial_console')

        guest = libvirt_guest.Guest(FakeVirtDomain())
        guest.power_off = mock.Mock()
        mock_get_info.return_value = hardware.InstanceInfo(
            state=power_state.SHUTDOWN)
        mock_get_guest.return_value = guest
        mock_get_serial_ports_from_guest.return_value = iter([
            ('127.0.0.1', 10000), ('127.0.0.1', 10001)])

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        drvr._destroy(objects.Instance(**self.test_instance))
        mock_release_port.assert_has_calls(
            [mock.call(host='127.0.0.1', port=10000),
             mock.call(host='127.0.0.1', port=10001)])

    @mock.patch('os.path.getsize', return_value=0)  # size doesn't matter
    @mock.patch('nova.virt.libvirt.storage.lvm.get_volume_size',
                return_value='fake-size')
    def test_detach_encrypted_volumes(self, mock_get_volume_size,
                                      mock_getsize):
        """Test that unencrypted volumes are not disconnected with dmcrypt."""
        instance = objects.Instance(**self.test_instance)
        # NOTE: the disk sources below are deliberately plain (non
        # "crypt-" dm-crypt) paths, so none of them may be treated as
        # encrypted
        xml = """
              <domain type='kvm'>
                  <devices>
                      <disk type='file'>
                          <driver name='fake-driver' type='fake-type' />
                          <source file='filename'/>
                          <target dev='vdc' bus='virtio'/>
                      </disk>
                      <disk type='block'>
                          <driver name='fake-driver' type='fake-type' />
                          <source dev='/dev/mapper/disk'/>
                          <target dev='vda'/>
                      </disk>
                      <disk type='block'>
                          <driver name='fake-driver' type='fake-type' />
                          <source dev='/dev/mapper/swap'/>
                          <target dev='vdb'/>
                      </disk>
                  </devices>
              </domain>
              """
        dom = FakeVirtDomain(fake_xml=xml)
        instance.ephemeral_key_uuid = uuids.ephemeral_key_uuid  # encrypted

        conn = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)

        @mock.patch.object(dmcrypt, 'delete_volume')
        @mock.patch.object(conn._host, '_get_domain', return_value=dom)
        def detach_encrypted_volumes(block_device_info, mock_get_domain,
                                     mock_delete_volume):
            conn._detach_encrypted_volumes(instance, block_device_info)

            mock_get_domain.assert_called_once_with(instance)
            self.assertFalse(mock_delete_volume.called)

        block_device_info = {'root_device_name': '/dev/vda',
                             'ephemerals': [],
                             'block_device_mapping': []}

        detach_encrypted_volumes(block_device_info)
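    # The four tests below parse the same four-port domain XML through
    # _get_serial_ports_from_guest with different mode filters; the
    # helper yields (host, port) tuples, so filtering on mode="bind" or
    # mode="connect" selects only the matching <source> elements.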
    @mock.patch.object(libvirt_guest.Guest, "get_xml_desc")
    def test_get_serial_ports_from_guest(self, mock_get_xml_desc):
        i = self._test_get_serial_ports_from_guest(None,
                                                   mock_get_xml_desc)
        self.assertEqual([
            ('127.0.0.1', 100),
            ('127.0.0.1', 101),
            ('127.0.0.2', 100),
            ('127.0.0.2', 101)], list(i))

    @mock.patch.object(libvirt_guest.Guest, "get_xml_desc")
    def test_get_serial_ports_from_guest_bind_only(self,
                                                   mock_get_xml_desc):
        i = self._test_get_serial_ports_from_guest('bind',
                                                   mock_get_xml_desc)
        self.assertEqual([
            ('127.0.0.1', 101),
            ('127.0.0.2', 100)], list(i))

    @mock.patch.object(libvirt_guest.Guest, "get_xml_desc")
    def test_get_serial_ports_from_guest_connect_only(self,
                                                      mock_get_xml_desc):
        i = self._test_get_serial_ports_from_guest('connect',
                                                   mock_get_xml_desc)
        self.assertEqual([
            ('127.0.0.1', 100),
            ('127.0.0.2', 101)], list(i))

    @mock.patch.object(libvirt_guest.Guest, "get_xml_desc")
    def test_get_serial_ports_from_guest_on_s390(self, mock_get_xml_desc):
        i = self._test_get_serial_ports_from_guest(None,
                                                   mock_get_xml_desc,
                                                   'console')
        self.assertEqual([
            ('127.0.0.1', 100),
            ('127.0.0.1', 101),
            ('127.0.0.2', 100),
            ('127.0.0.2', 101)], list(i))

    def _test_get_serial_ports_from_guest(self, mode, mock_get_xml_desc,
                                          dev_name='serial'):
        xml = """
        <domain type='kvm'>
          <devices>
            <%(dev_name)s type="tcp">
              <source host="127.0.0.1" service="100" mode="connect"/>
            </%(dev_name)s>
            <%(dev_name)s type="tcp">
              <source host="127.0.0.1" service="101" mode="bind"/>
            </%(dev_name)s>
            <%(dev_name)s type="tcp">
              <source host="127.0.0.2" service="100" mode="bind"/>
            </%(dev_name)s>
            <%(dev_name)s type="tcp">
              <source host="127.0.0.2" service="101" mode="connect"/>
            </%(dev_name)s>
          </devices>
        </domain>""" % {'dev_name': dev_name}

        mock_get_xml_desc.return_value = xml

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        guest = libvirt_guest.Guest(FakeVirtDomain())
        return drvr._get_serial_ports_from_guest(guest, mode=mode)

    def test_get_guest_config_with_type_xen(self):
        self.flags(enabled=True, group='vnc')
        self.flags(virt_type='xen',
                   use_usb_tablet=False,
                   group='libvirt')
        self.flags(enabled=False, group='spice')

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        instance_ref = objects.Instance(**self.test_instance)
        image_meta = objects.ImageMeta.from_dict(self.test_image_meta)

        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance_ref,
                                            image_meta)
        cfg = drvr._get_guest_config(instance_ref, [],
                                     image_meta, disk_info)
        self.assertEqual(len(cfg.devices), 6)
        self.assertIsInstance(cfg.devices[0],
                              vconfig.LibvirtConfigGuestDisk)
        self.assertIsInstance(cfg.devices[1],
                              vconfig.LibvirtConfigGuestDisk)
        self.assertIsInstance(cfg.devices[2],
                              vconfig.LibvirtConfigGuestConsole)
        self.assertIsInstance(cfg.devices[3],
                              vconfig.LibvirtConfigGuestGraphics)
        self.assertIsInstance(cfg.devices[4],
                              vconfig.LibvirtConfigGuestVideo)
        self.assertIsInstance(cfg.devices[5],
                              vconfig.LibvirtConfigMemoryBalloon)

        self.assertEqual(cfg.devices[3].type, "vnc")
        self.assertEqual(cfg.devices[4].type, "xen")

    @mock.patch.object(libvirt_driver.libvirt_utils, 'get_arch',
                       return_value=fields.Architecture.S390X)
    def test_get_guest_config_with_type_kvm_on_s390(self, mock_get_arch):
        self.flags(enabled=False, group='vnc')
        self.flags(virt_type='kvm',
                   use_usb_tablet=False,
                   group='libvirt')

        self._stub_host_capabilities_cpu_arch(fields.Architecture.S390X)

        instance_ref = objects.Instance(**self.test_instance)

        cfg = self._get_guest_config_via_fake_api(instance_ref)

        self.assertIsInstance(cfg.devices[0],
                              vconfig.LibvirtConfigGuestDisk)
        self.assertIsInstance(cfg.devices[1],
                              vconfig.LibvirtConfigGuestDisk)
        log_file_device = cfg.devices[2]
        self.assertIsInstance(log_file_device,
                              vconfig.LibvirtConfigGuestConsole)
        self.assertEqual("sclplm", log_file_device.target_type)
        self.assertEqual("file", log_file_device.type)
        terminal_device = cfg.devices[3]
        self.assertIsInstance(terminal_device,
                              vconfig.LibvirtConfigGuestConsole)
        self.assertEqual("sclp", terminal_device.target_type)
        self.assertEqual("pty", terminal_device.type)
        self.assertEqual("s390-ccw-virtio", cfg.os_mach_type)
    def _stub_host_capabilities_cpu_arch(self, cpu_arch):
        def get_host_capabilities_stub(self):
            cpu = vconfig.LibvirtConfigGuestCPU()
            cpu.arch = cpu_arch

            caps = vconfig.LibvirtConfigCaps()
            caps.host = vconfig.LibvirtConfigCapsHost()
            caps.host.cpu = cpu
            return caps

        self.stubs.Set(host.Host, "get_capabilities",
                       get_host_capabilities_stub)

    def _get_guest_config_via_fake_api(self, instance):
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        image_meta = objects.ImageMeta.from_dict(self.test_image_meta)

        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance,
                                            image_meta)
        return drvr._get_guest_config(instance, [], image_meta, disk_info)

    def test_get_guest_config_with_type_xen_pae_hvm(self):
        self.flags(enabled=True, group='vnc')
        self.flags(virt_type='xen',
                   use_usb_tablet=False,
                   group='libvirt')
        self.flags(enabled=False, group='spice')

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        instance_ref = objects.Instance(**self.test_instance)
        instance_ref['vm_mode'] = fields.VMMode.HVM
        image_meta = objects.ImageMeta.from_dict(self.test_image_meta)

        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance_ref,
                                            image_meta)

        cfg = drvr._get_guest_config(instance_ref, [],
                                     image_meta, disk_info)

        self.assertEqual(cfg.os_type, fields.VMMode.HVM)
        self.assertEqual(cfg.os_loader, CONF.libvirt.xen_hvmloader_path)
        self.assertEqual(3, len(cfg.features))
        self.assertIsInstance(cfg.features[0],
                              vconfig.LibvirtConfigGuestFeaturePAE)
        self.assertIsInstance(cfg.features[1],
                              vconfig.LibvirtConfigGuestFeatureACPI)
        self.assertIsInstance(cfg.features[2],
                              vconfig.LibvirtConfigGuestFeatureAPIC)

    def test_get_guest_config_with_type_xen_pae_pvm(self):
        self.flags(enabled=True, group='vnc')
        self.flags(virt_type='xen',
                   use_usb_tablet=False,
                   group='libvirt')
        self.flags(enabled=False, group='spice')

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        instance_ref = objects.Instance(**self.test_instance)
        image_meta = objects.ImageMeta.from_dict(self.test_image_meta)

        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance_ref,
                                            image_meta)

        cfg = drvr._get_guest_config(instance_ref, [],
                                     image_meta, disk_info)

        self.assertEqual(cfg.os_type, fields.VMMode.XEN)
        self.assertEqual(1, len(cfg.features))
        self.assertIsInstance(cfg.features[0],
                              vconfig.LibvirtConfigGuestFeaturePAE)
    def test_get_guest_config_with_vnc_and_spice(self):
        self.flags(enabled=True, group='vnc')
        self.flags(virt_type='kvm',
                   use_usb_tablet=True,
                   group='libvirt')
        self.flags(enabled=True,
                   agent_enabled=True,
                   group='spice')

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        instance_ref = objects.Instance(**self.test_instance)
        image_meta = objects.ImageMeta.from_dict(self.test_image_meta)

        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance_ref,
                                            image_meta)
        cfg = drvr._get_guest_config(instance_ref, [],
                                     image_meta, disk_info)
        self.assertEqual(len(cfg.devices), 10)
        self.assertIsInstance(cfg.devices[0],
                              vconfig.LibvirtConfigGuestDisk)
        self.assertIsInstance(cfg.devices[1],
                              vconfig.LibvirtConfigGuestDisk)
        self.assertIsInstance(cfg.devices[2],
                              vconfig.LibvirtConfigGuestSerial)
        self.assertIsInstance(cfg.devices[3],
                              vconfig.LibvirtConfigGuestSerial)
        self.assertIsInstance(cfg.devices[4],
                              vconfig.LibvirtConfigGuestInput)
        self.assertIsInstance(cfg.devices[5],
                              vconfig.LibvirtConfigGuestChannel)
        self.assertIsInstance(cfg.devices[6],
                              vconfig.LibvirtConfigGuestGraphics)
        self.assertIsInstance(cfg.devices[7],
                              vconfig.LibvirtConfigGuestGraphics)
        self.assertIsInstance(cfg.devices[8],
                              vconfig.LibvirtConfigGuestVideo)
        self.assertIsInstance(cfg.devices[9],
                              vconfig.LibvirtConfigMemoryBalloon)

        self.assertEqual(cfg.devices[4].type, "tablet")
        self.assertEqual(cfg.devices[5].target_name, "com.redhat.spice.0")
        self.assertEqual(cfg.devices[5].type, 'spicevmc')
        self.assertEqual(cfg.devices[6].type, "vnc")
        self.assertEqual(cfg.devices[7].type, "spice")

    def test_get_guest_config_with_watchdog_action_image_meta(self):
        self.flags(virt_type='kvm', group='libvirt')

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        instance_ref = objects.Instance(**self.test_instance)
        image_meta = objects.ImageMeta.from_dict({
            "disk_format": "raw",
            "properties": {"hw_watchdog_action": "none"}})

        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance_ref,
                                            image_meta)

        cfg = drvr._get_guest_config(instance_ref, [],
                                     image_meta, disk_info)
        self.assertEqual(len(cfg.devices), 9)
        self.assertIsInstance(cfg.devices[0],
                              vconfig.LibvirtConfigGuestDisk)
        self.assertIsInstance(cfg.devices[1],
                              vconfig.LibvirtConfigGuestDisk)
        self.assertIsInstance(cfg.devices[2],
                              vconfig.LibvirtConfigGuestSerial)
        self.assertIsInstance(cfg.devices[3],
                              vconfig.LibvirtConfigGuestSerial)
        self.assertIsInstance(cfg.devices[4],
                              vconfig.LibvirtConfigGuestInput)
        self.assertIsInstance(cfg.devices[5],
                              vconfig.LibvirtConfigGuestGraphics)
        self.assertIsInstance(cfg.devices[6],
                              vconfig.LibvirtConfigGuestVideo)
        self.assertIsInstance(cfg.devices[7],
                              vconfig.LibvirtConfigGuestWatchdog)
        self.assertIsInstance(cfg.devices[8],
                              vconfig.LibvirtConfigMemoryBalloon)

        self.assertEqual("none", cfg.devices[7].action)

    def _test_get_guest_usb_tablet(self, vnc_enabled, spice_enabled,
                                   os_type, agent_enabled=False,
                                   image_meta=None):
        self.flags(enabled=vnc_enabled, group='vnc')
        self.flags(enabled=spice_enabled,
                   agent_enabled=agent_enabled, group='spice')

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        image_meta = objects.ImageMeta.from_dict(image_meta)
        return drvr._get_guest_pointer_model(os_type, image_meta)

    def test_use_ps2_mouse(self):
        self.flags(pointer_model='ps2mouse')

        tablet = self._test_get_guest_usb_tablet(
            True, True, fields.VMMode.HVM)
        self.assertIsNone(tablet)

    def test_get_guest_usb_tablet_wipe(self):
        self.flags(use_usb_tablet=True, group='libvirt')

        tablet = self._test_get_guest_usb_tablet(
            True, True, fields.VMMode.HVM)
        self.assertIsNotNone(tablet)

        tablet = self._test_get_guest_usb_tablet(
            True, False, fields.VMMode.HVM)
        self.assertIsNotNone(tablet)

        tablet = self._test_get_guest_usb_tablet(
            False, True, fields.VMMode.HVM)
        self.assertIsNotNone(tablet)

        tablet = self._test_get_guest_usb_tablet(
            False, False, fields.VMMode.HVM)
        self.assertIsNone(tablet)

        tablet = self._test_get_guest_usb_tablet(
            True, True, "foo")
        self.assertIsNone(tablet)

        tablet = self._test_get_guest_usb_tablet(
            False, True, fields.VMMode.HVM, True)
        self.assertIsNone(tablet)

    def test_get_guest_usb_tablet_image_meta(self):
        self.flags(use_usb_tablet=True, group='libvirt')
        image_meta = {"properties": {"hw_pointer_model": "usbtablet"}}

        tablet = self._test_get_guest_usb_tablet(
            True, True, fields.VMMode.HVM, image_meta=image_meta)
        self.assertIsNotNone(tablet)

        tablet = self._test_get_guest_usb_tablet(
            True, False, fields.VMMode.HVM, image_meta=image_meta)
        self.assertIsNotNone(tablet)

        tablet = self._test_get_guest_usb_tablet(
            False, True, fields.VMMode.HVM, image_meta=image_meta)
        self.assertIsNotNone(tablet)

        tablet = self._test_get_guest_usb_tablet(
            False, False, fields.VMMode.HVM, image_meta=image_meta)
        self.assertIsNone(tablet)

        tablet = self._test_get_guest_usb_tablet(
            True, True, "foo", image_meta=image_meta)
        self.assertIsNone(tablet)

        tablet = self._test_get_guest_usb_tablet(
            False, True, fields.VMMode.HVM, True, image_meta=image_meta)
        self.assertIsNone(tablet)
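    # Pointer-model resolution, as pinned down by the surrounding tests:
    # an image hw_pointer_model=usbtablet (or the legacy use_usb_tablet
    # flag) only yields a tablet when the guest is HVM and some graphics
    # console (VNC, or SPICE without the agent) is enabled; otherwise
    # the request is dropped or, where it cannot be honoured at all,
    # UnsupportedPointerModelRequested is raised.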
    def test_get_guest_usb_tablet_image_meta_no_vnc(self):
        self.flags(use_usb_tablet=False, group='libvirt')
        self.flags(pointer_model=None)

        image_meta = {"properties": {"hw_pointer_model": "usbtablet"}}
        self.assertRaises(
            exception.UnsupportedPointerModelRequested,
            self._test_get_guest_usb_tablet,
            False, False, fields.VMMode.HVM, True,
            image_meta=image_meta)

    def test_get_guest_no_pointer_model_usb_tablet_set(self):
        self.flags(use_usb_tablet=True, group='libvirt')
        self.flags(pointer_model=None)

        tablet = self._test_get_guest_usb_tablet(True, True,
                                                 fields.VMMode.HVM)
        self.assertIsNotNone(tablet)

    def test_get_guest_no_pointer_model_usb_tablet_not_set(self):
        self.flags(use_usb_tablet=False, group='libvirt')
        self.flags(pointer_model=None)

        tablet = self._test_get_guest_usb_tablet(True, True,
                                                 fields.VMMode.HVM)
        self.assertIsNone(tablet)

    def test_get_guest_pointer_model_usb_tablet(self):
        self.flags(use_usb_tablet=False, group='libvirt')
        self.flags(pointer_model='usbtablet')

        tablet = self._test_get_guest_usb_tablet(True, True,
                                                 fields.VMMode.HVM)
        self.assertIsNotNone(tablet)

    def test_get_guest_pointer_model_usb_tablet_image(self):
        image_meta = {"properties": {"hw_pointer_model": "usbtablet"}}
        tablet = self._test_get_guest_usb_tablet(
            True, True, fields.VMMode.HVM, image_meta=image_meta)
        self.assertIsNotNone(tablet)

    def test_get_guest_pointer_model_usb_tablet_image_no_HVM(self):
        self.flags(pointer_model=None)
        self.flags(use_usb_tablet=False, group='libvirt')
        image_meta = {"properties": {"hw_pointer_model": "usbtablet"}}
        self.assertRaises(
            exception.UnsupportedPointerModelRequested,
            self._test_get_guest_usb_tablet,
            True, True, fields.VMMode.XEN,
            image_meta=image_meta)

    def test_get_guest_config_with_watchdog_action_flavor(self):
        self.flags(virt_type='kvm', group='libvirt')

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        instance_ref = objects.Instance(**self.test_instance)
        instance_ref.flavor.extra_specs = {"hw:watchdog_action": 'none'}
        image_meta = objects.ImageMeta.from_dict(self.test_image_meta)

        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance_ref,
                                            image_meta)

        cfg = drvr._get_guest_config(instance_ref, [],
                                     image_meta, disk_info)

        self.assertEqual(9, len(cfg.devices))
        self.assertIsInstance(cfg.devices[0],
                              vconfig.LibvirtConfigGuestDisk)
        self.assertIsInstance(cfg.devices[1],
                              vconfig.LibvirtConfigGuestDisk)
        self.assertIsInstance(cfg.devices[2],
                              vconfig.LibvirtConfigGuestSerial)
        self.assertIsInstance(cfg.devices[3],
                              vconfig.LibvirtConfigGuestSerial)
        self.assertIsInstance(cfg.devices[4],
                              vconfig.LibvirtConfigGuestInput)
        self.assertIsInstance(cfg.devices[5],
                              vconfig.LibvirtConfigGuestGraphics)
        self.assertIsInstance(cfg.devices[6],
                              vconfig.LibvirtConfigGuestVideo)
        self.assertIsInstance(cfg.devices[7],
                              vconfig.LibvirtConfigGuestWatchdog)
        self.assertIsInstance(cfg.devices[8],
                              vconfig.LibvirtConfigMemoryBalloon)

        self.assertEqual("none", cfg.devices[7].action)
        self.assertIsInstance(cfg.devices[2], vconfig.LibvirtConfigGuestSerial)
        self.assertIsInstance(cfg.devices[3], vconfig.LibvirtConfigGuestSerial)
        self.assertIsInstance(cfg.devices[4], vconfig.LibvirtConfigGuestInput)
        self.assertIsInstance(cfg.devices[5],
                              vconfig.LibvirtConfigGuestGraphics)
        self.assertIsInstance(cfg.devices[6], vconfig.LibvirtConfigGuestVideo)
        self.assertIsInstance(cfg.devices[7],
                              vconfig.LibvirtConfigGuestWatchdog)
        self.assertIsInstance(cfg.devices[8],
                              vconfig.LibvirtConfigMemoryBalloon)

        self.assertEqual("pause", cfg.devices[7].action)

    def test_get_guest_config_with_video_driver_image_meta(self):
        self.flags(virt_type='kvm', group='libvirt')

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        instance_ref = objects.Instance(**self.test_instance)
        image_meta = objects.ImageMeta.from_dict({
            "disk_format": "raw",
            "properties": {"hw_video_model": "vmvga"}})

        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance_ref, image_meta)
        cfg = drvr._get_guest_config(instance_ref, [], image_meta, disk_info)
        self.assertEqual(len(cfg.devices), 8)
        self.assertIsInstance(cfg.devices[0], vconfig.LibvirtConfigGuestDisk)
        self.assertIsInstance(cfg.devices[1], vconfig.LibvirtConfigGuestDisk)
        self.assertIsInstance(cfg.devices[2], vconfig.LibvirtConfigGuestSerial)
        self.assertIsInstance(cfg.devices[3], vconfig.LibvirtConfigGuestSerial)
        self.assertIsInstance(cfg.devices[4], vconfig.LibvirtConfigGuestInput)
        self.assertIsInstance(cfg.devices[5],
                              vconfig.LibvirtConfigGuestGraphics)
        self.assertIsInstance(cfg.devices[6], vconfig.LibvirtConfigGuestVideo)
        self.assertIsInstance(cfg.devices[7],
                              vconfig.LibvirtConfigMemoryBalloon)

        self.assertEqual(cfg.devices[5].type, "vnc")
        self.assertEqual(cfg.devices[6].type, "vmvga")

    def test_get_guest_config_with_qga_through_image_meta(self):
        self.flags(virt_type='kvm', group='libvirt')

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        instance_ref = objects.Instance(**self.test_instance)
        image_meta = objects.ImageMeta.from_dict({
            "disk_format": "raw",
            "properties": {"hw_qemu_guest_agent": "yes"}})

        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance_ref, image_meta)
        cfg = drvr._get_guest_config(instance_ref, [], image_meta, disk_info)
        self.assertEqual(len(cfg.devices), 9)
        self.assertIsInstance(cfg.devices[0], vconfig.LibvirtConfigGuestDisk)
        self.assertIsInstance(cfg.devices[1], vconfig.LibvirtConfigGuestDisk)
        self.assertIsInstance(cfg.devices[2], vconfig.LibvirtConfigGuestSerial)
        self.assertIsInstance(cfg.devices[3], vconfig.LibvirtConfigGuestSerial)
        self.assertIsInstance(cfg.devices[4], vconfig.LibvirtConfigGuestInput)
        self.assertIsInstance(cfg.devices[5],
                              vconfig.LibvirtConfigGuestGraphics)
        self.assertIsInstance(cfg.devices[6], vconfig.LibvirtConfigGuestVideo)
        self.assertIsInstance(cfg.devices[7],
                              vconfig.LibvirtConfigGuestChannel)
        self.assertIsInstance(cfg.devices[8],
                              vconfig.LibvirtConfigMemoryBalloon)

        self.assertEqual(cfg.devices[4].type, "tablet")
        self.assertEqual(cfg.devices[5].type, "vnc")
        self.assertEqual(cfg.devices[7].type, "unix")
        self.assertEqual(cfg.devices[7].target_name,
                         "org.qemu.guest_agent.0")

    def test_get_guest_config_with_video_driver_vram(self):
        self.flags(enabled=False, group='vnc')
        self.flags(virt_type='kvm', group='libvirt')
        self.flags(enabled=True,
                   agent_enabled=True,
                   group='spice')

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        instance_ref = objects.Instance(**self.test_instance)
        instance_ref.flavor.extra_specs = {'hw_video:ram_max_mb': "100"}
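        # The flavor caps video RAM at 100 MB while the image requests
        # 64 MB, which is within the limit.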
        image_meta = objects.ImageMeta.from_dict({
            "disk_format": "raw",
            "properties": {"hw_video_model": "qxl",
                           "hw_video_ram": "64"}})

        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance_ref, image_meta)
        cfg = drvr._get_guest_config(instance_ref, [], image_meta, disk_info)
        self.assertEqual(len(cfg.devices), 8)
        self.assertIsInstance(cfg.devices[0], vconfig.LibvirtConfigGuestDisk)
        self.assertIsInstance(cfg.devices[1], vconfig.LibvirtConfigGuestDisk)
        self.assertIsInstance(cfg.devices[2], vconfig.LibvirtConfigGuestSerial)
        self.assertIsInstance(cfg.devices[3], vconfig.LibvirtConfigGuestSerial)
        self.assertIsInstance(cfg.devices[4],
                              vconfig.LibvirtConfigGuestChannel)
        self.assertIsInstance(cfg.devices[5],
                              vconfig.LibvirtConfigGuestGraphics)
        self.assertIsInstance(cfg.devices[6], vconfig.LibvirtConfigGuestVideo)
        self.assertIsInstance(cfg.devices[7],
                              vconfig.LibvirtConfigMemoryBalloon)

        self.assertEqual(cfg.devices[5].type, "spice")
        self.assertEqual(cfg.devices[6].type, "qxl")
        self.assertEqual(cfg.devices[6].vram, 64 * units.Mi / units.Ki)

    @mock.patch('nova.virt.disk.api.teardown_container')
    @mock.patch('nova.virt.libvirt.driver.LibvirtDriver.get_info')
    @mock.patch('nova.virt.disk.api.setup_container')
    @mock.patch('oslo_utils.fileutils.ensure_tree')
    @mock.patch.object(fake_libvirt_utils, 'get_instance_path')
    def test_unmount_fs_if_error_during_lxc_create_domain(self,
            mock_get_inst_path, mock_ensure_tree, mock_setup_container,
            mock_get_info, mock_teardown):
        """If we hit an error during a `_create_domain` call to `libvirt+lxc`
        we need to ensure the guest FS is unmounted from the host so that any
        future `lvremove` calls will work.
        """
        self.flags(virt_type='lxc', group='libvirt')

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        mock_instance = mock.MagicMock()
        mock_get_inst_path.return_value = '/tmp/'
        mock_image_backend = mock.MagicMock()
        drvr.image_backend = mock_image_backend
        mock_image = mock.MagicMock()
        mock_image.path = '/tmp/test.img'
        drvr.image_backend.by_name.return_value = mock_image
        mock_setup_container.return_value = '/dev/nbd0'
        mock_get_info.side_effect = exception.InstanceNotFound(
            instance_id='foo')
        drvr._conn.defineXML = mock.Mock()
        drvr._conn.defineXML.side_effect = ValueError('somethingbad')
        with test.nested(
                mock.patch.object(drvr, '_is_booted_from_volume',
                                  return_value=False),
                mock.patch.object(drvr, 'plug_vifs'),
                mock.patch.object(drvr, 'firewall_driver'),
                mock.patch.object(drvr, 'cleanup')):
            self.assertRaises(ValueError,
                              drvr._create_domain_and_network,
                              self.context, 'xml', mock_instance, None)

            mock_teardown.assert_called_with(container_dir='/tmp/rootfs')

    def test_video_driver_flavor_limit_not_set(self):
        self.flags(virt_type='kvm', group='libvirt')
        self.flags(enabled=True,
                   agent_enabled=True,
                   group='spice')

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        instance_ref = objects.Instance(**self.test_instance)
        image_meta = objects.ImageMeta.from_dict({
            "disk_format": "raw",
            "properties": {"hw_video_model": "qxl",
                           "hw_video_ram": "64"}})

        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance_ref, image_meta)
        with mock.patch.object(objects.Instance, 'save'):
            self.assertRaises(exception.RequestedVRamTooHigh,
                              drvr._get_guest_config,
                              instance_ref, [], image_meta, disk_info)

    def test_video_driver_ram_above_flavor_limit(self):
        self.flags(virt_type='kvm', group='libvirt')
        self.flags(enabled=True,
                   agent_enabled=True,
                   group='spice')

        instance_ref = objects.Instance(**self.test_instance)
        instance_type = instance_ref.get_flavor()
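        # Cap video RAM at 50 MB while the image asks for 64 MB, which
        # must be rejected.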
        instance_type.extra_specs = {'hw_video:ram_max_mb': "50"}
        image_meta = objects.ImageMeta.from_dict({
            "disk_format": "raw",
            "properties": {"hw_video_model": "qxl",
                           "hw_video_ram": "64"}})

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance_ref, image_meta)
        with mock.patch.object(objects.Instance, 'save'):
            self.assertRaises(exception.RequestedVRamTooHigh,
                              drvr._get_guest_config,
                              instance_ref, [], image_meta, disk_info)

    def test_get_guest_config_without_qga_through_image_meta(self):
        self.flags(virt_type='kvm', group='libvirt')

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        instance_ref = objects.Instance(**self.test_instance)
        image_meta = objects.ImageMeta.from_dict({
            "disk_format": "raw",
            "properties": {"hw_qemu_guest_agent": "no"}})

        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance_ref, image_meta)
        cfg = drvr._get_guest_config(instance_ref, [], image_meta, disk_info)
        self.assertEqual(len(cfg.devices), 8)
        self.assertIsInstance(cfg.devices[0], vconfig.LibvirtConfigGuestDisk)
        self.assertIsInstance(cfg.devices[1], vconfig.LibvirtConfigGuestDisk)
        self.assertIsInstance(cfg.devices[2], vconfig.LibvirtConfigGuestSerial)
        self.assertIsInstance(cfg.devices[3], vconfig.LibvirtConfigGuestSerial)
        self.assertIsInstance(cfg.devices[4], vconfig.LibvirtConfigGuestInput)
        self.assertIsInstance(cfg.devices[5],
                              vconfig.LibvirtConfigGuestGraphics)
        self.assertIsInstance(cfg.devices[6], vconfig.LibvirtConfigGuestVideo)
        self.assertIsInstance(cfg.devices[7],
                              vconfig.LibvirtConfigMemoryBalloon)

        self.assertEqual(cfg.devices[4].type, "tablet")
        self.assertEqual(cfg.devices[5].type, "vnc")

    def test_get_guest_config_with_rng_device(self):
        self.flags(virt_type='kvm', group='libvirt')
        self.flags(pointer_model='ps2mouse')

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        instance_ref = objects.Instance(**self.test_instance)
        instance_ref.flavor.extra_specs = {'hw_rng:allowed': 'True'}
        image_meta = objects.ImageMeta.from_dict({
            "disk_format": "raw",
            "properties": {"hw_rng_model": "virtio"}})

        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance_ref, image_meta)
        cfg = drvr._get_guest_config(instance_ref, [], image_meta, disk_info)
        self.assertEqual(len(cfg.devices), 8)
        self.assertIsInstance(cfg.devices[0], vconfig.LibvirtConfigGuestDisk)
        self.assertIsInstance(cfg.devices[1], vconfig.LibvirtConfigGuestDisk)
        self.assertIsInstance(cfg.devices[2], vconfig.LibvirtConfigGuestSerial)
        self.assertIsInstance(cfg.devices[3], vconfig.LibvirtConfigGuestSerial)
        self.assertIsInstance(cfg.devices[4],
                              vconfig.LibvirtConfigGuestGraphics)
        self.assertIsInstance(cfg.devices[5], vconfig.LibvirtConfigGuestVideo)
        self.assertIsInstance(cfg.devices[6], vconfig.LibvirtConfigGuestRng)
        self.assertIsInstance(cfg.devices[7],
                              vconfig.LibvirtConfigMemoryBalloon)

        self.assertEqual(cfg.devices[6].model, 'random')
        self.assertIsNone(cfg.devices[6].backend)
        self.assertIsNone(cfg.devices[6].rate_bytes)
        self.assertIsNone(cfg.devices[6].rate_period)

    def test_get_guest_config_with_rng_not_allowed(self):
        self.flags(virt_type='kvm', group='libvirt')
        self.flags(pointer_model='ps2mouse')

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        instance_ref = objects.Instance(**self.test_instance)
        image_meta = objects.ImageMeta.from_dict({
            "disk_format": "raw",
            "properties": {"hw_rng_model": "virtio"}})

        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance_ref, image_meta)
        cfg = drvr._get_guest_config(instance_ref, [], image_meta, disk_info)
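        # hw_rng:allowed is not set in the flavor, so no RNG device is
        # added and only seven devices are expected.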
        self.assertEqual(len(cfg.devices), 7)
        self.assertIsInstance(cfg.devices[0], vconfig.LibvirtConfigGuestDisk)
        self.assertIsInstance(cfg.devices[1], vconfig.LibvirtConfigGuestDisk)
        self.assertIsInstance(cfg.devices[2], vconfig.LibvirtConfigGuestSerial)
        self.assertIsInstance(cfg.devices[3], vconfig.LibvirtConfigGuestSerial)
        self.assertIsInstance(cfg.devices[4],
                              vconfig.LibvirtConfigGuestGraphics)
        self.assertIsInstance(cfg.devices[5], vconfig.LibvirtConfigGuestVideo)
        self.assertIsInstance(cfg.devices[6],
                              vconfig.LibvirtConfigMemoryBalloon)

    def test_get_guest_config_with_rng_limits(self):
        self.flags(virt_type='kvm', group='libvirt')
        self.flags(pointer_model='ps2mouse')

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        instance_ref = objects.Instance(**self.test_instance)
        instance_ref.flavor.extra_specs = {'hw_rng:allowed': 'True',
                                           'hw_rng:rate_bytes': '1024',
                                           'hw_rng:rate_period': '2'}
        image_meta = objects.ImageMeta.from_dict({
            "disk_format": "raw",
            "properties": {"hw_rng_model": "virtio"}})

        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance_ref, image_meta)
        cfg = drvr._get_guest_config(instance_ref, [], image_meta, disk_info)
        self.assertEqual(len(cfg.devices), 8)
        self.assertIsInstance(cfg.devices[0], vconfig.LibvirtConfigGuestDisk)
        self.assertIsInstance(cfg.devices[1], vconfig.LibvirtConfigGuestDisk)
        self.assertIsInstance(cfg.devices[2], vconfig.LibvirtConfigGuestSerial)
        self.assertIsInstance(cfg.devices[3], vconfig.LibvirtConfigGuestSerial)
        self.assertIsInstance(cfg.devices[4],
                              vconfig.LibvirtConfigGuestGraphics)
        self.assertIsInstance(cfg.devices[5], vconfig.LibvirtConfigGuestVideo)
        self.assertIsInstance(cfg.devices[6], vconfig.LibvirtConfigGuestRng)
        self.assertIsInstance(cfg.devices[7],
                              vconfig.LibvirtConfigMemoryBalloon)

        self.assertEqual(cfg.devices[6].model, 'random')
        self.assertIsNone(cfg.devices[6].backend)
        self.assertEqual(cfg.devices[6].rate_bytes, 1024)
        self.assertEqual(cfg.devices[6].rate_period, 2)

    @mock.patch('nova.virt.libvirt.driver.os.path.exists')
    def test_get_guest_config_with_rng_backend(self, mock_path):
        self.flags(virt_type='kvm',
                   rng_dev_path='/dev/hw_rng', group='libvirt')
        self.flags(pointer_model='ps2mouse')
        mock_path.return_value = True

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        instance_ref = objects.Instance(**self.test_instance)
        instance_ref.flavor.extra_specs = {'hw_rng:allowed': 'True'}
        image_meta = objects.ImageMeta.from_dict({
            "disk_format": "raw",
            "properties": {"hw_rng_model": "virtio"}})

        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance_ref, image_meta)
        cfg = drvr._get_guest_config(instance_ref, [], image_meta, disk_info)
        self.assertEqual(len(cfg.devices), 8)
        self.assertIsInstance(cfg.devices[0], vconfig.LibvirtConfigGuestDisk)
        self.assertIsInstance(cfg.devices[1], vconfig.LibvirtConfigGuestDisk)
        self.assertIsInstance(cfg.devices[2], vconfig.LibvirtConfigGuestSerial)
        self.assertIsInstance(cfg.devices[3], vconfig.LibvirtConfigGuestSerial)
        self.assertIsInstance(cfg.devices[4],
                              vconfig.LibvirtConfigGuestGraphics)
        self.assertIsInstance(cfg.devices[5], vconfig.LibvirtConfigGuestVideo)
        self.assertIsInstance(cfg.devices[6], vconfig.LibvirtConfigGuestRng)
        self.assertIsInstance(cfg.devices[7],
                              vconfig.LibvirtConfigMemoryBalloon)

        self.assertEqual(cfg.devices[6].model, 'random')
        self.assertEqual(cfg.devices[6].backend, '/dev/hw_rng')
        self.assertIsNone(cfg.devices[6].rate_bytes)
        self.assertIsNone(cfg.devices[6].rate_period)

    @mock.patch('nova.virt.libvirt.driver.os.path.exists')
    def test_get_guest_config_with_rng_dev_not_present(self, mock_path):
        self.flags(virt_type='kvm',
                   use_usb_tablet=False,
                   rng_dev_path='/dev/hw_rng', group='libvirt')
        mock_path.return_value = False

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        instance_ref = objects.Instance(**self.test_instance)
        instance_ref.flavor.extra_specs = {'hw_rng:allowed': 'True'}
        image_meta = objects.ImageMeta.from_dict({
            "disk_format": "raw",
            "properties": {"hw_rng_model": "virtio"}})

        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance_ref, image_meta)
        self.assertRaises(exception.RngDeviceNotExist,
                          drvr._get_guest_config,
                          instance_ref, [], image_meta, disk_info)

    @mock.patch.object(
        host.Host, "is_cpu_control_policy_capable", return_value=True)
    def test_guest_cpu_shares_with_multi_vcpu(self, is_able):
        self.flags(virt_type='kvm', group='libvirt')

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        instance_ref = objects.Instance(**self.test_instance)
        instance_ref.flavor.vcpus = 4
        image_meta = objects.ImageMeta.from_dict(self.test_image_meta)

        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance_ref, image_meta)
        cfg = drvr._get_guest_config(instance_ref, [], image_meta, disk_info)

        self.assertEqual(4096, cfg.cputune.shares)

    @mock.patch.object(
        host.Host, "is_cpu_control_policy_capable", return_value=True)
    def test_get_guest_config_with_cpu_quota(self, is_able):
        self.flags(virt_type='kvm', group='libvirt')

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        instance_ref = objects.Instance(**self.test_instance)
        instance_ref.flavor.extra_specs = {'quota:cpu_shares': '10000',
                                           'quota:cpu_period': '20000'}
        image_meta = objects.ImageMeta.from_dict(self.test_image_meta)

        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance_ref, image_meta)
        cfg = drvr._get_guest_config(instance_ref, [], image_meta, disk_info)

        self.assertEqual(10000, cfg.cputune.shares)
        self.assertEqual(20000, cfg.cputune.period)

    def test_get_guest_config_with_hiding_hypervisor_id(self):
        self.flags(virt_type='kvm', group='libvirt')

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        instance_ref = objects.Instance(**self.test_instance)
        image_meta = objects.ImageMeta.from_dict({
            "disk_format": "raw",
            "properties": {"img_hide_hypervisor_id": "true"}})

        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance_ref, image_meta)

        cfg = drvr._get_guest_config(instance_ref, [], image_meta, disk_info)

        self.assertTrue(
            any(isinstance(feature,
                           vconfig.LibvirtConfigGuestFeatureKvmHidden)
                for feature in cfg.features))

    def test_get_guest_config_without_hiding_hypervisor_id(self):
        self.flags(virt_type='kvm', group='libvirt')

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        instance_ref = objects.Instance(**self.test_instance)
        image_meta = objects.ImageMeta.from_dict({
            "disk_format": "raw",
            "properties": {"img_hide_hypervisor_id": "false"}})

        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance_ref, image_meta)

        cfg = drvr._get_guest_config(instance_ref, [], image_meta, disk_info)

        self.assertFalse(
            any(isinstance(feature,
                           vconfig.LibvirtConfigGuestFeatureKvmHidden)
                for feature in cfg.features))

    def _test_get_guest_config_disk_cachemodes(self, images_type):
        # Verify that the configured cachemodes are propagated to the device
        # configurations.
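        # Each cachemode entry has the form '<source_type>=<cache mode>'.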
        if images_type == 'flat':
            cachemode = 'file=directsync'
        elif images_type == 'lvm':
            cachemode = 'block=writethrough'
        elif images_type == 'rbd':
            cachemode = 'network=writeback'
        self.flags(disk_cachemodes=[cachemode], group='libvirt')

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        instance_ref = objects.Instance(**self.test_instance)
        image_meta = objects.ImageMeta.from_dict(self.test_image_meta)

        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance_ref, image_meta)
        cfg = drvr._get_guest_config(instance_ref, [], image_meta, disk_info)
        for d in cfg.devices:
            if isinstance(d, vconfig.LibvirtConfigGuestDisk):
                expected = cachemode.split('=')
                self.assertEqual(expected[0], d.source_type)
                self.assertEqual(expected[1], d.driver_cache)

    def test_get_guest_config_disk_cachemodes_file(self):
        self.flags(images_type='flat', group='libvirt')
        self._test_get_guest_config_disk_cachemodes('flat')

    def test_get_guest_config_disk_cachemodes_block(self):
        self.flags(images_type='lvm', group='libvirt')
        self.flags(images_volume_group='vols', group='libvirt')
        self._test_get_guest_config_disk_cachemodes('lvm')

    @mock.patch.object(rbd_utils, 'rbd')
    @mock.patch.object(rbd_utils, 'rados')
    @mock.patch.object(rbd_utils.RBDDriver, 'get_mon_addrs',
                       return_value=(mock.Mock(), mock.Mock()))
    def test_get_guest_config_disk_cachemodes_network(
            self, mock_get_mon_addrs, mock_rados, mock_rbd):
        self.flags(images_type='rbd', group='libvirt')
        self._test_get_guest_config_disk_cachemodes('rbd')

    @mock.patch.object(
        host.Host, "is_cpu_control_policy_capable", return_value=True)
    def test_get_guest_config_with_bogus_cpu_quota(self, is_able):
        self.flags(virt_type='kvm', group='libvirt')

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        instance_ref = objects.Instance(**self.test_instance)
        instance_ref.flavor.extra_specs = {'quota:cpu_shares': 'fishfood',
                                           'quota:cpu_period': '20000'}
        image_meta = objects.ImageMeta.from_dict(self.test_image_meta)

        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance_ref, image_meta)
        self.assertRaises(ValueError,
                          drvr._get_guest_config,
                          instance_ref, [], image_meta, disk_info)

    @mock.patch.object(
        host.Host, "is_cpu_control_policy_capable", return_value=False)
    def test_get_update_guest_cputune(self, is_able):
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        instance_ref = objects.Instance(**self.test_instance)
        instance_ref.flavor.extra_specs = {'quota:cpu_shares': '10000',
                                           'quota:cpu_period': '20000'}
        self.assertRaises(
            exception.UnsupportedHostCPUControlPolicy,
            drvr._update_guest_cputune, {}, instance_ref.flavor, "kvm")

    def _test_get_guest_config_sysinfo_serial(self, expected_serial):
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        instance_ref = objects.Instance(**self.test_instance)

        cfg = drvr._get_guest_config_sysinfo(instance_ref)

        self.assertIsInstance(cfg, vconfig.LibvirtConfigGuestSysinfo)
        self.assertEqual(version.vendor_string(),
                         cfg.system_manufacturer)
        self.assertEqual(version.product_string(),
                         cfg.system_product)
        self.assertEqual(version.version_string_with_package(),
                         cfg.system_version)
        self.assertEqual(expected_serial,
                         cfg.system_serial)
        self.assertEqual(instance_ref['uuid'],
                         cfg.system_uuid)
        self.assertEqual("Virtual Machine",
                         cfg.system_family)

    def test_get_guest_config_sysinfo_serial_none(self):
        self.flags(sysinfo_serial="none", group="libvirt")
        self._test_get_guest_config_sysinfo_serial(None)

    @mock.patch.object(libvirt_driver.LibvirtDriver,
                       "_get_host_sysinfo_serial_hardware")
    def test_get_guest_config_sysinfo_serial_hardware(self, mock_uuid):
        self.flags(sysinfo_serial="hardware", group="libvirt")

        theuuid = "56b40135-a973-4eb3-87bb-a2382a3e6dbc"
        mock_uuid.return_value = theuuid

        self._test_get_guest_config_sysinfo_serial(theuuid)

    @contextlib.contextmanager
    def patch_exists(self, result):
        real_exists = os.path.exists

        def fake_exists(filename):
            if filename == "/etc/machine-id":
                return result
            return real_exists(filename)

        with mock.patch.object(os.path, "exists") as mock_exists:
            mock_exists.side_effect = fake_exists
            yield mock_exists

    def test_get_guest_config_sysinfo_serial_os(self):
        self.flags(sysinfo_serial="os", group="libvirt")
        theuuid = "56b40135-a973-4eb3-87bb-a2382a3e6dbc"
        with test.nested(
                mock.patch.object(six.moves.builtins, "open",
                                  mock.mock_open(read_data=theuuid)),
                self.patch_exists(True)):
            self._test_get_guest_config_sysinfo_serial(theuuid)

    def test_get_guest_config_sysinfo_serial_os_empty_machine_id(self):
        self.flags(sysinfo_serial="os", group="libvirt")
        with test.nested(
                mock.patch.object(six.moves.builtins, "open",
                                  mock.mock_open(read_data="")),
                self.patch_exists(True)):
            self.assertRaises(exception.NovaException,
                              self._test_get_guest_config_sysinfo_serial,
                              None)

    def test_get_guest_config_sysinfo_serial_os_no_machine_id_file(self):
        self.flags(sysinfo_serial="os", group="libvirt")
        with self.patch_exists(False):
            self.assertRaises(exception.NovaException,
                              self._test_get_guest_config_sysinfo_serial,
                              None)

    def test_get_guest_config_sysinfo_serial_auto_hardware(self):
        self.flags(sysinfo_serial="auto", group="libvirt")

        real_exists = os.path.exists
        with test.nested(
                mock.patch.object(os.path, "exists"),
                mock.patch.object(libvirt_driver.LibvirtDriver,
                                  "_get_host_sysinfo_serial_hardware")
        ) as (mock_exists, mock_uuid):
            def fake_exists(filename):
                if filename == "/etc/machine-id":
                    return False
                return real_exists(filename)

            mock_exists.side_effect = fake_exists

            theuuid = "56b40135-a973-4eb3-87bb-a2382a3e6dbc"
            mock_uuid.return_value = theuuid

            self._test_get_guest_config_sysinfo_serial(theuuid)

    def test_get_guest_config_sysinfo_serial_auto_os(self):
        self.flags(sysinfo_serial="auto", group="libvirt")

        real_exists = os.path.exists
        real_open = builtins.open
        with test.nested(
                mock.patch.object(os.path, "exists"),
                mock.patch.object(builtins, "open"),
        ) as (mock_exists, mock_open):
            def fake_exists(filename):
                if filename == "/etc/machine-id":
                    return True
                return real_exists(filename)

            mock_exists.side_effect = fake_exists

            theuuid = "56b40135-a973-4eb3-87bb-a2382a3e6dbc"

            def fake_open(filename, *args, **kwargs):
                if filename == "/etc/machine-id":
                    h = mock.MagicMock()
                    h.read.return_value = theuuid
                    h.__enter__.return_value = h
                    return h
                return real_open(filename, *args, **kwargs)

            mock_open.side_effect = fake_open

            self._test_get_guest_config_sysinfo_serial(theuuid)

    def _create_fake_service_compute(self):
        service_info = {
            'id': 1729,
            'host': 'fake',
            'report_count': 0
        }
        service_ref = objects.Service(**service_info)

        compute_info = {
            'id': 1729,
            'vcpus': 2,
            'memory_mb': 1024,
            'local_gb': 2048,
            'vcpus_used': 0,
            'memory_mb_used': 0,
            'local_gb_used': 0,
            'free_ram_mb': 1024,
            'free_disk_gb': 2048,
            'hypervisor_type': 'xen',
            'hypervisor_version': 1,
            'running_vms': 0,
            'cpu_info': '',
            'current_workload': 0,
            'service_id': service_ref['id'],
            'host': service_ref['host']
        }
        compute_ref = objects.ComputeNode(**compute_info)
        return (service_ref, compute_ref)

    def test_get_guest_config_with_pci_passthrough_kvm(self):
        self.flags(virt_type='kvm', group='libvirt')
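        # Under kvm the passed-through PCI device is expected to be
        # libvirt-managed (managed='yes'), unlike the xen case below.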
        service_ref, compute_ref = self._create_fake_service_compute()

        instance = objects.Instance(**self.test_instance)
        image_meta = objects.ImageMeta.from_dict(self.test_image_meta)

        pci_device_info = dict(test_pci_device.fake_db_dev)
        pci_device_info.update(compute_node_id=1,
                               label='fake',
                               status=fields.PciDeviceStatus.ALLOCATED,
                               address='0000:00:00.1',
                               compute_id=compute_ref.id,
                               instance_uuid=instance.uuid,
                               request_id=None,
                               extra_info={})
        pci_device = objects.PciDevice(**pci_device_info)
        pci_list = objects.PciDeviceList()
        pci_list.objects.append(pci_device)
        instance.pci_devices = pci_list

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)

        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance, image_meta)
        cfg = drvr._get_guest_config(instance, [], image_meta, disk_info)

        had_pci = 0
        # care only about the PCI devices
        for dev in cfg.devices:
            if type(dev) == vconfig.LibvirtConfigGuestHostdevPCI:
                had_pci += 1
                self.assertEqual(dev.type, 'pci')
                self.assertEqual(dev.managed, 'yes')
                self.assertEqual(dev.mode, 'subsystem')

                self.assertEqual(dev.domain, "0000")
                self.assertEqual(dev.bus, "00")
                self.assertEqual(dev.slot, "00")
                self.assertEqual(dev.function, "1")
        self.assertEqual(had_pci, 1)

    def test_get_guest_config_with_pci_passthrough_xen(self):
        self.flags(virt_type='xen', group='libvirt')
        service_ref, compute_ref = self._create_fake_service_compute()

        instance = objects.Instance(**self.test_instance)
        image_meta = objects.ImageMeta.from_dict(self.test_image_meta)

        pci_device_info = dict(test_pci_device.fake_db_dev)
        pci_device_info.update(compute_node_id=1,
                               label='fake',
                               status=fields.PciDeviceStatus.ALLOCATED,
                               address='0000:00:00.2',
                               compute_id=compute_ref.id,
                               instance_uuid=instance.uuid,
                               request_id=None,
                               extra_info={})
        pci_device = objects.PciDevice(**pci_device_info)
        pci_list = objects.PciDeviceList()
        pci_list.objects.append(pci_device)
        instance.pci_devices = pci_list

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)

        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance, image_meta)
        cfg = drvr._get_guest_config(instance, [], image_meta, disk_info)
        had_pci = 0
        # care only about the PCI devices
        for dev in cfg.devices:
            if type(dev) == vconfig.LibvirtConfigGuestHostdevPCI:
                had_pci += 1
                self.assertEqual(dev.type, 'pci')
                self.assertEqual(dev.managed, 'no')
                self.assertEqual(dev.mode, 'subsystem')

                self.assertEqual(dev.domain, "0000")
                self.assertEqual(dev.bus, "00")
                self.assertEqual(dev.slot, "00")
                self.assertEqual(dev.function, "2")
        self.assertEqual(had_pci, 1)

    def test_get_guest_config_os_command_line_through_image_meta(self):
        self.flags(virt_type="kvm",
                   cpu_mode='none',
                   group='libvirt')

        self.test_instance['kernel_id'] = "fake_kernel_id"

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        instance_ref = objects.Instance(**self.test_instance)
        image_meta = objects.ImageMeta.from_dict({
            "disk_format": "raw",
            "properties": {"os_command_line": "fake_os_command_line"}})

        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance_ref, image_meta)

        cfg = drvr._get_guest_config(instance_ref,
                                     _fake_network_info(self, 1),
                                     image_meta, disk_info)
        self.assertEqual(cfg.os_cmdline, "fake_os_command_line")

    def test_get_guest_config_os_command_line_without_kernel_id(self):
        self.flags(virt_type="kvm",
                   cpu_mode='none',
                   group='libvirt')

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        instance_ref = objects.Instance(**self.test_instance)
        image_meta = objects.ImageMeta.from_dict({
            "disk_format": "raw",
            "properties": {"os_command_line": "fake_os_command_line"}})
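        # Without a kernel_id the os_command_line image property has no
        # effect on the generated config.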
        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance_ref, image_meta)

        cfg = drvr._get_guest_config(instance_ref,
                                     _fake_network_info(self, 1),
                                     image_meta, disk_info)
        self.assertIsNone(cfg.os_cmdline)

    def test_get_guest_config_os_command_empty(self):
        self.flags(virt_type="kvm",
                   cpu_mode='none',
                   group='libvirt')

        self.test_instance['kernel_id'] = "fake_kernel_id"

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        instance_ref = objects.Instance(**self.test_instance)
        image_meta = objects.ImageMeta.from_dict({
            "disk_format": "raw",
            "properties": {"os_command_line": ""}})

        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance_ref, image_meta)

        # the instance has 'root=/dev/vda console=tty0 console=ttyS0
        # console=hvc0' set by default, so testing an empty string and None
        # value in the os_command_line image property must pass
        cfg = drvr._get_guest_config(instance_ref,
                                     _fake_network_info(self, 1),
                                     image_meta, disk_info)
        self.assertNotEqual(cfg.os_cmdline, "")

    @mock.patch.object(libvirt_driver.LibvirtDriver,
                       "_get_guest_storage_config")
    @mock.patch.object(libvirt_driver.LibvirtDriver, "_has_numa_support")
    def test_get_guest_config_armv7(self, mock_numa, mock_storage):
        def get_host_capabilities_stub(self):
            cpu = vconfig.LibvirtConfigGuestCPU()
            cpu.arch = fields.Architecture.ARMV7

            caps = vconfig.LibvirtConfigCaps()
            caps.host = vconfig.LibvirtConfigCapsHost()
            caps.host.cpu = cpu
            return caps

        self.flags(virt_type="kvm", group="libvirt")

        instance_ref = objects.Instance(**self.test_instance)
        image_meta = objects.ImageMeta.from_dict(self.test_image_meta)

        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance_ref, image_meta)

        self.stubs.Set(host.Host, "get_capabilities",
                       get_host_capabilities_stub)

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        cfg = drvr._get_guest_config(instance_ref,
                                     _fake_network_info(self, 1),
                                     image_meta, disk_info)
        self.assertEqual(cfg.os_mach_type, "vexpress-a15")

    @mock.patch.object(libvirt_driver.LibvirtDriver,
                       "_get_guest_storage_config")
    @mock.patch.object(libvirt_driver.LibvirtDriver, "_has_numa_support")
    @mock.patch('os.path.exists', return_value=True)
    def test_get_guest_config_aarch64(self, mock_path_exists,
                                      mock_numa, mock_storage):
        def get_host_capabilities_stub(self):
            cpu = vconfig.LibvirtConfigGuestCPU()
            cpu.arch = fields.Architecture.AARCH64

            caps = vconfig.LibvirtConfigCaps()
            caps.host = vconfig.LibvirtConfigCapsHost()
            caps.host.cpu = cpu
            return caps

        self.flags(virt_type="kvm", group="libvirt")

        instance_ref = objects.Instance(**self.test_instance)
        image_meta = objects.ImageMeta.from_dict(self.test_image_meta)

        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance_ref, image_meta)

        self.stubs.Set(host.Host, "get_capabilities",
                       get_host_capabilities_stub)

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        cfg = drvr._get_guest_config(instance_ref,
                                     _fake_network_info(self, 1),
                                     image_meta, disk_info)
        self.assertTrue(mock_path_exists.called)
        mock_path_exists.assert_called_with(
            libvirt_driver.DEFAULT_UEFI_LOADER_PATH['aarch64'])
        self.assertEqual(cfg.os_mach_type, "virt")

    @mock.patch.object(libvirt_driver.LibvirtDriver,
                       "_get_guest_storage_config")
    @mock.patch.object(libvirt_driver.LibvirtDriver, "_has_numa_support")
    @mock.patch('os.path.exists', return_value=True)
    def test_get_guest_config_aarch64_with_graphics(self, mock_path_exists,
                                                    mock_numa, mock_storage):
        def get_host_capabilities_stub(self):
            cpu = vconfig.LibvirtConfigGuestCPU()
            cpu.arch = fields.Architecture.AARCH64
            caps = vconfig.LibvirtConfigCaps()
            caps.host = vconfig.LibvirtConfigCapsHost()
            caps.host.cpu = cpu
            return caps

        self.stubs.Set(host.Host, "get_capabilities",
                       get_host_capabilities_stub)
        self.flags(enabled=True,
                   server_listen='10.0.0.1',
                   keymap='en-ie',
                   group='vnc')
        self.flags(virt_type='kvm', group='libvirt')
        self.flags(enabled=False, group='spice')

        cfg = self._get_guest_config_with_graphics()

        self.assertTrue(mock_path_exists.called)
        mock_path_exists.assert_called_with(
            libvirt_driver.DEFAULT_UEFI_LOADER_PATH['aarch64'])
        self.assertEqual(cfg.os_mach_type, "virt")

        usbhost_exists = False
        keyboard_exists = False
        for device in cfg.devices:
            if device.root_name == 'controller' and device.type == 'usb':
                usbhost_exists = True
            if device.root_name == 'input' and device.type == 'keyboard':
                keyboard_exists = True
        self.assertTrue(usbhost_exists)
        self.assertTrue(keyboard_exists)

    def test_get_guest_config_machine_type_s390(self):
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)

        caps = vconfig.LibvirtConfigCaps()
        caps.host = vconfig.LibvirtConfigCapsHost()
        caps.host.cpu = vconfig.LibvirtConfigGuestCPU()

        image_meta = objects.ImageMeta.from_dict(self.test_image_meta)

        host_cpu_archs = (fields.Architecture.S390,
                          fields.Architecture.S390X)
        for host_cpu_arch in host_cpu_archs:
            caps.host.cpu.arch = host_cpu_arch
            os_mach_type = drvr._get_machine_type(image_meta, caps)
            self.assertEqual('s390-ccw-virtio', os_mach_type)

    def test_get_guest_config_machine_type_through_image_meta(self):
        self.flags(virt_type="kvm", group='libvirt')

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        instance_ref = objects.Instance(**self.test_instance)
        image_meta = objects.ImageMeta.from_dict({
            "disk_format": "raw",
            "properties": {"hw_machine_type": "fake_machine_type"}})

        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance_ref, image_meta)

        cfg = drvr._get_guest_config(instance_ref,
                                     _fake_network_info(self, 1),
                                     image_meta, disk_info)
        self.assertEqual(cfg.os_mach_type, "fake_machine_type")

    def test_get_guest_config_machine_type_from_config(self):
        self.flags(virt_type='kvm', group='libvirt')
        self.flags(hw_machine_type=['x86_64=fake_machine_type'],
                   group='libvirt')

        def fake_getCapabilities():
            return """
            <capabilities>
                <host>
                    <uuid>cef19ce0-0ca2-11df-855d-b19fbce37686</uuid>
                    <cpu>
                        <arch>x86_64</arch>
                        <model>Penryn</model>
                        <vendor>Intel</vendor>
                        <topology sockets='1' cores='2' threads='1'/>
                        <feature name='xtpr'/>
                    </cpu>
                </host>
            </capabilities>
            """

        def fake_baselineCPU(cpu, flag):
            return """<cpu mode='custom' match='exact'>
                        <model fallback='allow'>Penryn</model>
                        <vendor>Intel</vendor>
                        <feature policy='require' name='xtpr'/>
                      </cpu>
                   """

        # Make sure the host arch is mocked as x86_64
        self.create_fake_libvirt_mock(getCapabilities=fake_getCapabilities,
                                      baselineCPU=fake_baselineCPU,
                                      getVersion=lambda: 1005001)

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        instance_ref = objects.Instance(**self.test_instance)
        image_meta = objects.ImageMeta.from_dict(self.test_image_meta)

        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance_ref, image_meta)

        cfg = drvr._get_guest_config(instance_ref,
                                     _fake_network_info(self, 1),
                                     image_meta, disk_info)
        self.assertEqual(cfg.os_mach_type, "fake_machine_type")

    def _test_get_guest_config_ppc64(self, device_index):
        """Test for nova.virt.libvirt.driver.LibvirtDriver._get_guest_config.
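
        device_index gives the expected position of the VGA video device
        in cfg.devices.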
""" self.flags(virt_type='kvm', group='libvirt') drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) instance_ref = objects.Instance(**self.test_instance) image_meta = objects.ImageMeta.from_dict(self.test_image_meta) disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type, instance_ref, image_meta) image_meta = objects.ImageMeta.from_dict(self.test_image_meta) expected = (fields.Architecture.PPC64, fields.Architecture.PPC) for guestarch in expected: with mock.patch.object(libvirt_driver.libvirt_utils, 'get_arch', return_value=guestarch): cfg = drvr._get_guest_config(instance_ref, [], image_meta, disk_info) self.assertIsInstance(cfg.devices[device_index], vconfig.LibvirtConfigGuestVideo) self.assertEqual(cfg.devices[device_index].type, 'vga') def test_get_guest_config_ppc64_through_image_meta_vnc_enabled(self): self.flags(enabled=True, group='vnc') self._test_get_guest_config_ppc64(6) def test_get_guest_config_ppc64_through_image_meta_spice_enabled(self): self.flags(enabled=True, agent_enabled=True, group='spice') self._test_get_guest_config_ppc64(8) def _test_get_guest_config_bootmenu(self, image_meta, extra_specs): self.flags(virt_type='kvm', group='libvirt') conn = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) instance_ref = objects.Instance(**self.test_instance) instance_ref.flavor.extra_specs = extra_specs disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type, instance_ref, image_meta) conf = conn._get_guest_config(instance_ref, [], image_meta, disk_info) self.assertTrue(conf.os_bootmenu) def test_get_guest_config_bootmenu_via_image_meta(self): image_meta = objects.ImageMeta.from_dict( {"disk_format": "raw", "properties": {"hw_boot_menu": "True"}}) self._test_get_guest_config_bootmenu(image_meta, {}) def test_get_guest_config_bootmenu_via_extra_specs(self): image_meta = objects.ImageMeta.from_dict( self.test_image_meta) self._test_get_guest_config_bootmenu(image_meta, {'hw:boot_menu': 'True'}) def test_get_guest_cpu_config_none(self): self.flags(cpu_mode="none", group='libvirt') drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) instance_ref = objects.Instance(**self.test_instance) image_meta = objects.ImageMeta.from_dict(self.test_image_meta) disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type, instance_ref, image_meta) conf = drvr._get_guest_config(instance_ref, _fake_network_info(self, 1), image_meta, disk_info) self.assertIsInstance(conf.cpu, vconfig.LibvirtConfigGuestCPU) self.assertIsNone(conf.cpu.mode) self.assertIsNone(conf.cpu.model) self.assertEqual(conf.cpu.sockets, instance_ref.flavor.vcpus) self.assertEqual(conf.cpu.cores, 1) self.assertEqual(conf.cpu.threads, 1) def test_get_guest_cpu_config_default_kvm(self): self.flags(virt_type="kvm", cpu_mode='none', group='libvirt') drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) instance_ref = objects.Instance(**self.test_instance) image_meta = objects.ImageMeta.from_dict(self.test_image_meta) disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type, instance_ref, image_meta) conf = drvr._get_guest_config(instance_ref, _fake_network_info(self, 1), image_meta, disk_info) self.assertIsInstance(conf.cpu, vconfig.LibvirtConfigGuestCPU) self.assertIsNone(conf.cpu.mode) self.assertIsNone(conf.cpu.model) self.assertEqual(conf.cpu.sockets, instance_ref.flavor.vcpus) self.assertEqual(conf.cpu.cores, 1) self.assertEqual(conf.cpu.threads, 1) def test_get_guest_cpu_config_default_uml(self): self.flags(virt_type="uml", cpu_mode='none', group='libvirt') drvr = 
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        instance_ref = objects.Instance(**self.test_instance)
        image_meta = objects.ImageMeta.from_dict(self.test_image_meta)

        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance_ref, image_meta)
        conf = drvr._get_guest_config(instance_ref,
                                      _fake_network_info(self, 1),
                                      image_meta, disk_info)
        self.assertIsNone(conf.cpu)

    def test_get_guest_cpu_config_default_lxc(self):
        self.flags(virt_type="lxc",
                   cpu_mode='none',
                   group='libvirt')

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        instance_ref = objects.Instance(**self.test_instance)
        image_meta = objects.ImageMeta.from_dict(self.test_image_meta)

        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance_ref, image_meta)
        conf = drvr._get_guest_config(instance_ref,
                                      _fake_network_info(self, 1),
                                      image_meta, disk_info)
        self.assertIsNone(conf.cpu)

    def test_get_guest_cpu_config_host_passthrough(self):
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        instance_ref = objects.Instance(**self.test_instance)
        image_meta = objects.ImageMeta.from_dict(self.test_image_meta)

        self.flags(cpu_mode="host-passthrough", group='libvirt')
        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance_ref, image_meta)
        conf = drvr._get_guest_config(instance_ref,
                                      _fake_network_info(self, 1),
                                      image_meta, disk_info)
        self.assertIsInstance(conf.cpu, vconfig.LibvirtConfigGuestCPU)
        self.assertEqual(conf.cpu.mode, "host-passthrough")
        self.assertIsNone(conf.cpu.model)
        self.assertEqual(conf.cpu.sockets, instance_ref.flavor.vcpus)
        self.assertEqual(conf.cpu.cores, 1)
        self.assertEqual(conf.cpu.threads, 1)

    def test_get_guest_cpu_config_host_passthrough_aarch64(self):
        expected = {
            fields.Architecture.X86_64: "host-model",
            fields.Architecture.I686: "host-model",
            fields.Architecture.PPC: "host-model",
            fields.Architecture.PPC64: "host-model",
            fields.Architecture.ARMV7: "host-model",
            fields.Architecture.AARCH64: "host-passthrough",
        }
        for guestarch, expect_mode in expected.items():
            caps = vconfig.LibvirtConfigCaps()
            caps.host = vconfig.LibvirtConfigCapsHost()
            caps.host.cpu = vconfig.LibvirtConfigCPU()
            caps.host.cpu.arch = guestarch
            with mock.patch.object(host.Host, "get_capabilities",
                                   return_value=caps):
                drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(),
                                                    True)
                if caps.host.cpu.arch == fields.Architecture.AARCH64:
                    drvr._has_uefi_support = mock.Mock(return_value=True)
                instance_ref = objects.Instance(**self.test_instance)
                image_meta = objects.ImageMeta.from_dict(
                    self.test_image_meta)

                disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                                    instance_ref,
                                                    image_meta)
                conf = drvr._get_guest_config(instance_ref,
                                              _fake_network_info(self, 1),
                                              image_meta, disk_info)
                self.assertIsInstance(conf.cpu,
                                      vconfig.LibvirtConfigGuestCPU)
                self.assertEqual(conf.cpu.mode, expect_mode)

    def test_get_guest_cpu_config_host_model(self):
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        instance_ref = objects.Instance(**self.test_instance)
        image_meta = objects.ImageMeta.from_dict(self.test_image_meta)

        self.flags(cpu_mode="host-model", group='libvirt')
        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance_ref, image_meta)
        conf = drvr._get_guest_config(instance_ref,
                                      _fake_network_info(self, 1),
                                      image_meta, disk_info)
        self.assertIsInstance(conf.cpu, vconfig.LibvirtConfigGuestCPU)
        self.assertEqual(conf.cpu.mode, "host-model")
        self.assertIsNone(conf.cpu.model)
        self.assertEqual(conf.cpu.sockets, instance_ref.flavor.vcpus)
        self.assertEqual(conf.cpu.cores, 1)
        self.assertEqual(conf.cpu.threads, 1)

    def test_get_guest_cpu_config_custom(self):
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        instance_ref = objects.Instance(**self.test_instance)
        image_meta = objects.ImageMeta.from_dict(self.test_image_meta)

        self.flags(cpu_mode="custom",
                   cpu_model="Penryn",
                   group='libvirt')
        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance_ref, image_meta)
        conf = drvr._get_guest_config(instance_ref,
                                      _fake_network_info(self, 1),
                                      image_meta, disk_info)
        self.assertIsInstance(conf.cpu, vconfig.LibvirtConfigGuestCPU)
        self.assertEqual(conf.cpu.mode, "custom")
        self.assertEqual(conf.cpu.model, "Penryn")
        self.assertEqual(conf.cpu.sockets, instance_ref.flavor.vcpus)
        self.assertEqual(conf.cpu.cores, 1)
        self.assertEqual(conf.cpu.threads, 1)

    def test_get_guest_cpu_topology(self):
        instance_ref = objects.Instance(**self.test_instance)
        instance_ref.flavor.vcpus = 8
        instance_ref.flavor.extra_specs = {'hw:cpu_max_sockets': '4'}

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        image_meta = objects.ImageMeta.from_dict(self.test_image_meta)

        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance_ref, image_meta)
        conf = drvr._get_guest_config(instance_ref,
                                      _fake_network_info(self, 1),
                                      image_meta, disk_info)
        self.assertIsInstance(conf.cpu, vconfig.LibvirtConfigGuestCPU)
        self.assertEqual(conf.cpu.mode, "host-model")
        self.assertEqual(conf.cpu.sockets, 4)
        self.assertEqual(conf.cpu.cores, 2)
        self.assertEqual(conf.cpu.threads, 1)

    def test_get_guest_memory_balloon_config_by_default(self):
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        instance_ref = objects.Instance(**self.test_instance)
        image_meta = objects.ImageMeta.from_dict(self.test_image_meta)

        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance_ref, image_meta)
        cfg = drvr._get_guest_config(instance_ref, [], image_meta, disk_info)
        for device in cfg.devices:
            if device.root_name == 'memballoon':
                self.assertIsInstance(device,
                                      vconfig.LibvirtConfigMemoryBalloon)
                self.assertEqual('virtio', device.model)
                self.assertEqual(10, device.period)

    def test_get_guest_memory_balloon_config_disable(self):
        self.flags(mem_stats_period_seconds=0, group='libvirt')
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        instance_ref = objects.Instance(**self.test_instance)
        image_meta = objects.ImageMeta.from_dict(self.test_image_meta)

        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance_ref, image_meta)
        cfg = drvr._get_guest_config(instance_ref, [], image_meta, disk_info)
        no_exist = True
        for device in cfg.devices:
            if device.root_name == 'memballoon':
                no_exist = False
                break
        self.assertTrue(no_exist)

    def test_get_guest_memory_balloon_config_period_value(self):
        self.flags(mem_stats_period_seconds=21, group='libvirt')
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        instance_ref = objects.Instance(**self.test_instance)
        image_meta = objects.ImageMeta.from_dict(self.test_image_meta)

        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance_ref, image_meta)
        cfg = drvr._get_guest_config(instance_ref, [], image_meta, disk_info)
        for device in cfg.devices:
            if device.root_name == 'memballoon':
                self.assertIsInstance(device,
                                      vconfig.LibvirtConfigMemoryBalloon)
                self.assertEqual('virtio', device.model)
                self.assertEqual(21, device.period)

    def test_get_guest_memory_balloon_config_qemu(self):
        self.flags(virt_type='qemu', group='libvirt')
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
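        # qemu guests get the same virtio balloon as kvm guests.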
        instance_ref = objects.Instance(**self.test_instance)
        image_meta = objects.ImageMeta.from_dict(self.test_image_meta)

        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance_ref, image_meta)
        cfg = drvr._get_guest_config(instance_ref, [], image_meta, disk_info)
        for device in cfg.devices:
            if device.root_name == 'memballoon':
                self.assertIsInstance(device,
                                      vconfig.LibvirtConfigMemoryBalloon)
                self.assertEqual('virtio', device.model)
                self.assertEqual(10, device.period)

    def test_get_guest_memory_balloon_config_xen(self):
        self.flags(virt_type='xen', group='libvirt')
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        instance_ref = objects.Instance(**self.test_instance)
        image_meta = objects.ImageMeta.from_dict(self.test_image_meta)

        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance_ref, image_meta)
        cfg = drvr._get_guest_config(instance_ref, [], image_meta, disk_info)
        for device in cfg.devices:
            if device.root_name == 'memballoon':
                self.assertIsInstance(device,
                                      vconfig.LibvirtConfigMemoryBalloon)
                self.assertEqual('xen', device.model)
                self.assertEqual(10, device.period)

    def test_get_guest_memory_balloon_config_lxc(self):
        self.flags(virt_type='lxc', group='libvirt')
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        instance_ref = objects.Instance(**self.test_instance)
        image_meta = objects.ImageMeta.from_dict(self.test_image_meta)

        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance_ref, image_meta)
        cfg = drvr._get_guest_config(instance_ref, [], image_meta, disk_info)
        no_exist = True
        for device in cfg.devices:
            if device.root_name == 'memballoon':
                no_exist = False
                break
        self.assertTrue(no_exist)

    @mock.patch('nova.virt.libvirt.driver.LOG.warning')
    @mock.patch.object(host.Host, 'has_min_version', return_value=True)
    @mock.patch.object(host.Host, "get_capabilities")
    def test_get_supported_perf_events_foo(self, mock_get_caps,
                                           mock_min_version,
                                           mock_warn):
        self.flags(enabled_perf_events=['foo'], group='libvirt')

        caps = vconfig.LibvirtConfigCaps()
        caps.host = vconfig.LibvirtConfigCapsHost()
        caps.host.cpu = vconfig.LibvirtConfigCPU()
        caps.host.cpu.arch = fields.Architecture.X86_64
        caps.host.topology = fakelibvirt.NUMATopology()

        mock_get_caps.return_value = caps
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        events = drvr._get_supported_perf_events()

        self.assertTrue(mock_warn.called)
        self.assertEqual([], events)

    @mock.patch.object(host.Host, "get_capabilities")
    def _test_get_guest_with_perf(self, caps, events, mock_get_caps):
        mock_get_caps.return_value = caps

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        drvr.init_host('test_perf')
        instance_ref = objects.Instance(**self.test_instance)
        image_meta = objects.ImageMeta.from_dict(self.test_image_meta)

        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance_ref, image_meta)
        cfg = drvr._get_guest_config(instance_ref, [],
                                     image_meta, disk_info)

        self.assertEqual(events, cfg.perf_events)

    @mock.patch.object(fakelibvirt, 'VIR_PERF_PARAM_CMT', True,
                       create=True)
    @mock.patch.object(fakelibvirt, 'VIR_PERF_PARAM_MBMT', True,
                       create=True)
    @mock.patch.object(fakelibvirt, 'VIR_PERF_PARAM_MBML', True,
                       create=True)
    @mock.patch.object(host.Host, 'has_min_version', return_value=True)
    def test_get_guest_with_perf_supported(self,
                                           mock_min_version):
        self.flags(enabled_perf_events=['cmt', 'mbml', 'mbmt'],
                   group='libvirt')
        caps = vconfig.LibvirtConfigCaps()
        caps.host = vconfig.LibvirtConfigCapsHost()
        caps.host.cpu = vconfig.LibvirtConfigCPU()
        caps.host.cpu.arch = fields.Architecture.X86_64
        caps.host.topology = fakelibvirt.NUMATopology()

        features = []
        for f in ('cmt', 'mbm_local', 'mbm_total'):
            feature = vconfig.LibvirtConfigGuestCPUFeature()
            feature.name = f
            feature.policy = fields.CPUFeaturePolicy.REQUIRE
            features.append(feature)

        caps.host.cpu.features = set(features)

        self._test_get_guest_with_perf(caps, ['cmt', 'mbml', 'mbmt'])

    @mock.patch.object(host.Host, 'has_min_version')
    def test_get_guest_with_perf_libvirt_unsupported(self,
                                                     mock_min_version):

        def fake_has_min_version(lv_ver=None, hv_ver=None, hv_type=None):
            if lv_ver == libvirt_driver.MIN_LIBVIRT_PERF_VERSION:
                return False
            return True

        mock_min_version.side_effect = fake_has_min_version
        self.flags(enabled_perf_events=['cmt'], group='libvirt')

        caps = vconfig.LibvirtConfigCaps()
        caps.host = vconfig.LibvirtConfigCapsHost()
        caps.host.cpu = vconfig.LibvirtConfigCPU()
        caps.host.cpu.arch = fields.Architecture.X86_64

        self._test_get_guest_with_perf(caps, [])

    @mock.patch.object(fakelibvirt, 'VIR_PERF_PARAM_CMT', True,
                       create=True)
    @mock.patch.object(host.Host, 'has_min_version', return_value=True)
    def test_get_guest_with_perf_host_unsupported(self,
                                                  mock_min_version):
        self.flags(enabled_perf_events=['cmt'], group='libvirt')
        caps = vconfig.LibvirtConfigCaps()
        caps.host = vconfig.LibvirtConfigCapsHost()
        caps.host.cpu = vconfig.LibvirtConfigCPU()
        caps.host.cpu.arch = fields.Architecture.X86_64
        caps.host.topology = fakelibvirt.NUMATopology()

        self._test_get_guest_with_perf(caps, [])

    def test_xml_and_uri_no_ramdisk_no_kernel(self):
        instance_data = dict(self.test_instance)
        self._check_xml_and_uri(instance_data,
                                expect_kernel=False,
                                expect_ramdisk=False)

    def test_xml_and_uri_no_ramdisk_no_kernel_xen_hvm(self):
        instance_data = dict(self.test_instance)
        instance_data.update({'vm_mode': fields.VMMode.HVM})
        self._check_xml_and_uri(instance_data,
                                expect_kernel=False,
                                expect_ramdisk=False,
                                expect_xen_hvm=True)

    def test_xml_and_uri_no_ramdisk_no_kernel_xen_pv(self):
        instance_data = dict(self.test_instance)
        instance_data.update({'vm_mode': fields.VMMode.XEN})
        self._check_xml_and_uri(instance_data,
                                expect_kernel=False,
                                expect_ramdisk=False,
                                expect_xen_hvm=False,
                                xen_only=True)

    def test_xml_and_uri_no_ramdisk(self):
        instance_data = dict(self.test_instance)
        instance_data['kernel_id'] = 'aki-deadbeef'
        self._check_xml_and_uri(instance_data,
                                expect_kernel=True,
                                expect_ramdisk=False)

    def test_xml_and_uri_no_kernel(self):
        instance_data = dict(self.test_instance)
        instance_data['ramdisk_id'] = 'ari-deadbeef'
        self._check_xml_and_uri(instance_data,
                                expect_kernel=False,
                                expect_ramdisk=False)

    def test_xml_and_uri(self):
        instance_data = dict(self.test_instance)
        instance_data['ramdisk_id'] = 'ari-deadbeef'
        instance_data['kernel_id'] = 'aki-deadbeef'
        self._check_xml_and_uri(instance_data,
                                expect_kernel=True,
                                expect_ramdisk=True)

    def test_xml_and_uri_rescue(self):
        instance_data = dict(self.test_instance)
        instance_data['ramdisk_id'] = 'ari-deadbeef'
        instance_data['kernel_id'] = 'aki-deadbeef'
        self._check_xml_and_uri(instance_data,
                                expect_kernel=True,
                                expect_ramdisk=True,
                                rescue=instance_data)

    def test_xml_and_uri_rescue_no_kernel_no_ramdisk(self):
        instance_data = dict(self.test_instance)
        self._check_xml_and_uri(instance_data,
                                expect_kernel=False,
                                expect_ramdisk=False,
                                rescue=instance_data)

    def test_xml_and_uri_rescue_no_kernel(self):
        instance_data = dict(self.test_instance)
        instance_data['ramdisk_id'] = 'ari-deadbeef'
        self._check_xml_and_uri(instance_data,
                                expect_kernel=False,
                                expect_ramdisk=True,
                                rescue=instance_data)

    def test_xml_and_uri_rescue_no_ramdisk(self):
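        # Rescue with a kernel image but no ramdisk.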
        instance_data = dict(self.test_instance)
        instance_data['kernel_id'] = 'aki-deadbeef'
        self._check_xml_and_uri(instance_data,
                                expect_kernel=True,
                                expect_ramdisk=False,
                                rescue=instance_data)

    def test_xml_uuid(self):
        self._check_xml_and_uuid(self.test_image_meta)

    def test_lxc_container_and_uri(self):
        instance_data = dict(self.test_instance)
        self._check_xml_and_container(instance_data)

    def test_xml_disk_prefix(self):
        instance_data = dict(self.test_instance)
        self._check_xml_and_disk_prefix(instance_data, None)

    def test_xml_user_specified_disk_prefix(self):
        instance_data = dict(self.test_instance)
        self._check_xml_and_disk_prefix(instance_data, 'sd')

    def test_xml_disk_driver(self):
        instance_data = dict(self.test_instance)
        self._check_xml_and_disk_driver(instance_data)

    def test_xml_disk_bus_virtio(self):
        image_meta = objects.ImageMeta.from_dict(self.test_image_meta)
        self._check_xml_and_disk_bus(image_meta,
                                     None,
                                     (("disk", "virtio", "vda"),))

    def test_xml_disk_bus_ide(self):
        # It's necessary to check whether the architecture is power,
        # because power doesn't support ide, so libvirt translates all
        # ide calls to scsi.
        expected = {fields.Architecture.PPC: ("cdrom", "scsi", "sda"),
                    fields.Architecture.PPC64: ("cdrom", "scsi", "sda"),
                    fields.Architecture.PPC64LE: ("cdrom", "scsi", "sda"),
                    fields.Architecture.AARCH64: ("cdrom", "scsi", "sda")}

        expec_val = expected.get(blockinfo.libvirt_utils.get_arch({}),
                                 ("cdrom", "ide", "hda"))
        image_meta = objects.ImageMeta.from_dict({
            "disk_format": "iso"})
        self._check_xml_and_disk_bus(image_meta,
                                     None,
                                     (expec_val,))

    def test_xml_disk_bus_ide_and_virtio(self):
        # It's necessary to check whether the architecture is power,
        # because power doesn't support ide, so libvirt translates all
        # ide calls to scsi.
        expected = {fields.Architecture.PPC: ("cdrom", "scsi", "sda"),
                    fields.Architecture.PPC64: ("cdrom", "scsi", "sda"),
                    fields.Architecture.PPC64LE: ("cdrom", "scsi", "sda"),
                    fields.Architecture.AARCH64: ("cdrom", "scsi", "sda")}

        swap = {'device_name': '/dev/vdc',
                'swap_size': 1}
        ephemerals = [{'device_type': 'disk',
                       'disk_bus': 'virtio',
                       'device_name': '/dev/vdb',
                       'size': 1}]
        block_device_info = {
            'swap': swap,
            'ephemerals': ephemerals}
        expec_val = expected.get(blockinfo.libvirt_utils.get_arch({}),
                                 ("cdrom", "ide", "hda"))
        image_meta = objects.ImageMeta.from_dict({
            "disk_format": "iso"})
        self._check_xml_and_disk_bus(image_meta,
                                     block_device_info,
                                     (expec_val,
                                      ("disk", "virtio", "vdb"),
                                      ("disk", "virtio", "vdc")))

    @mock.patch.object(host.Host, 'get_guest')
    def test_instance_exists(self, mock_get_guest):
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        self.assertTrue(drvr.instance_exists(None))

        mock_get_guest.side_effect = exception.InstanceNotFound(
            instance_id='something')
        self.assertFalse(drvr.instance_exists(None))

        mock_get_guest.side_effect = exception.InternalError(err='something')
        self.assertFalse(drvr.instance_exists(None))

    def test_estimate_instance_overhead_spawn(self):
        # Test the method when called with an instance ref.
        instance_topology = objects.InstanceNUMATopology(
            emulator_threads_policy=(
                fields.CPUEmulatorThreadsPolicy.ISOLATE),
            cells=[objects.InstanceNUMACell(
                id=0, cpuset=set([0]), memory=1024)])
        instance_info = objects.Instance(**self.test_instance)
        instance_info.numa_topology = instance_topology

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        overhead = drvr.estimate_instance_overhead(instance_info)
        self.assertEqual(1, overhead['vcpus'])

    def test_estimate_instance_overhead_spawn_no_overhead(self):
        # Test the method when called with an instance ref, no overhead.
        instance_topology = objects.InstanceNUMATopology(
            emulator_threads_policy=(
                fields.CPUEmulatorThreadsPolicy.SHARE),
            cells=[objects.InstanceNUMACell(
                id=0, cpuset=set([0]), memory=1024)])
        instance_info = objects.Instance(**self.test_instance)
        instance_info.numa_topology = instance_topology

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        overhead = drvr.estimate_instance_overhead(instance_info)
        self.assertEqual(0, overhead['vcpus'])

    def test_estimate_instance_overhead_migrate(self):
        # Test the method when called with a flavor ref.
        instance_info = objects.Flavor(extra_specs={
            'hw:emulator_threads_policy': (
                fields.CPUEmulatorThreadsPolicy.ISOLATE),
            'hw:cpu_policy': fields.CPUAllocationPolicy.DEDICATED,
        })
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        overhead = drvr.estimate_instance_overhead(instance_info)
        self.assertEqual(1, overhead['vcpus'])

    def test_estimate_instance_overhead_migrate_no_overhead(self):
        # Test the method when called with a flavor ref, no overhead.
        instance_info = objects.Flavor(extra_specs={
            'hw:emulator_threads_policy': (
                fields.CPUEmulatorThreadsPolicy.SHARE),
            'hw:cpu_policy': fields.CPUAllocationPolicy.DEDICATED,
        })
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        overhead = drvr.estimate_instance_overhead(instance_info)
        self.assertEqual(0, overhead['vcpus'])

    def test_estimate_instance_overhead_usage(self):
        # Test the method when called with a usage dict.
        instance_info = objects.Flavor(extra_specs={
            'hw:emulator_threads_policy': (
                fields.CPUEmulatorThreadsPolicy.ISOLATE),
            'hw:cpu_policy': fields.CPUAllocationPolicy.DEDICATED,
        })
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        overhead = drvr.estimate_instance_overhead(instance_info)
        self.assertEqual(1, overhead['vcpus'])

    def test_estimate_instance_overhead_usage_no_overhead(self):
        # Test the method when called with a usage dict, no overhead.
        instance_info = objects.Flavor(extra_specs={
            'hw:emulator_threads_policy': (
                fields.CPUEmulatorThreadsPolicy.SHARE),
            'hw:cpu_policy': fields.CPUAllocationPolicy.DEDICATED,
        })
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        overhead = drvr.estimate_instance_overhead(instance_info)
        self.assertEqual(0, overhead['vcpus'])

    @mock.patch.object(host.Host, "list_instance_domains")
    def test_list_instances(self, mock_list):
        vm1 = FakeVirtDomain(id=3, name="instance00000001")
        vm2 = FakeVirtDomain(id=17, name="instance00000002")
        vm3 = FakeVirtDomain(name="instance00000003")
        vm4 = FakeVirtDomain(name="instance00000004")

        mock_list.return_value = [vm1, vm2, vm3, vm4]

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        names = drvr.list_instances()
        self.assertEqual(names[0], vm1.name())
        self.assertEqual(names[1], vm2.name())
        self.assertEqual(names[2], vm3.name())
        self.assertEqual(names[3], vm4.name())
        mock_list.assert_called_with(only_guests=True, only_running=False)

    @mock.patch.object(host.Host, "list_instance_domains")
    def test_list_instance_uuids(self, mock_list):
        vm1 = FakeVirtDomain(id=3, name="instance00000001")
        vm2 = FakeVirtDomain(id=17, name="instance00000002")
        vm3 = FakeVirtDomain(name="instance00000003")
        vm4 = FakeVirtDomain(name="instance00000004")

        mock_list.return_value = [vm1, vm2, vm3, vm4]
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        uuids = drvr.list_instance_uuids()
        self.assertEqual(len(uuids), 4)
        self.assertEqual(uuids[0], vm1.UUIDString())
        self.assertEqual(uuids[1], vm2.UUIDString())
        self.assertEqual(uuids[2], vm3.UUIDString())
    @mock.patch('nova.virt.libvirt.host.Host.get_online_cpus',
                return_value=None)
    @mock.patch('nova.virt.libvirt.host.Host.get_cpu_count',
                return_value=4)
    def test_get_host_vcpus_is_empty(self, get_cpu_count, get_online_cpus):
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        self.flags(vcpu_pin_set="")
        vcpus = drvr._get_vcpu_total()
        self.assertEqual(4, vcpus)

    @mock.patch('nova.virt.libvirt.host.Host.get_online_cpus')
    def test_get_host_vcpus(self, get_online_cpus):
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        self.flags(vcpu_pin_set="4-5")
        get_online_cpus.return_value = set([4, 5, 6])
        expected_vcpus = 2
        vcpus = drvr._get_vcpu_total()
        self.assertEqual(expected_vcpus, vcpus)

    @mock.patch('nova.virt.libvirt.host.Host.get_online_cpus')
    def test_get_host_vcpus_out_of_range(self, get_online_cpus):
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        self.flags(vcpu_pin_set="4-6")
        get_online_cpus.return_value = set([4, 5])
        self.assertRaises(exception.Invalid, drvr._get_vcpu_total)

    @mock.patch('nova.virt.libvirt.host.Host.get_online_cpus')
    def test_get_host_vcpus_libvirt_error(self, get_online_cpus):
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        not_supported_exc = fakelibvirt.make_libvirtError(
            fakelibvirt.libvirtError,
            'this function is not supported by the connection driver:'
            ' virNodeNumOfDevices',
            error_code=fakelibvirt.VIR_ERR_NO_SUPPORT)
        self.flags(vcpu_pin_set="4-6")
        get_online_cpus.side_effect = not_supported_exc
        self.assertRaises(exception.Invalid, drvr._get_vcpu_total)

    @mock.patch('nova.virt.libvirt.host.Host.get_online_cpus')
    def test_get_host_vcpus_libvirt_error_success(self, get_online_cpus):
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        not_supported_exc = fakelibvirt.make_libvirtError(
            fakelibvirt.libvirtError,
            'this function is not supported by the connection driver:'
            ' virNodeNumOfDevices',
            error_code=fakelibvirt.VIR_ERR_NO_SUPPORT)
        self.flags(vcpu_pin_set="1")
        get_online_cpus.side_effect = not_supported_exc
        expected_vcpus = 1
        vcpus = drvr._get_vcpu_total()
        self.assertEqual(expected_vcpus, vcpus)

    @mock.patch('nova.virt.libvirt.host.Host.get_cpu_count')
    def test_get_host_vcpus_after_hotplug(self, get_cpu_count):
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        get_cpu_count.return_value = 2
        expected_vcpus = 2
        vcpus = drvr._get_vcpu_total()
        self.assertEqual(expected_vcpus, vcpus)
        get_cpu_count.return_value = 3
        expected_vcpus = 3
        vcpus = drvr._get_vcpu_total()
        self.assertEqual(expected_vcpus, vcpus)

    @mock.patch.object(host.Host, "has_min_version", return_value=True)
    def test_quiesce(self, mock_has_min_version):
        self.create_fake_libvirt_mock(lookupByUUIDString=self.fake_lookup)
        with mock.patch.object(FakeVirtDomain, "fsFreeze") as mock_fsfreeze:
            drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI())
            instance = objects.Instance(**self.test_instance)
            image_meta = objects.ImageMeta.from_dict(
                {"properties": {"hw_qemu_guest_agent": "yes"}})
            self.assertIsNone(drvr.quiesce(self.context, instance,
                                           image_meta))
            mock_fsfreeze.assert_called_once_with()

    @mock.patch.object(host.Host, "has_min_version", return_value=True)
    def test_unquiesce(self, mock_has_min_version):
        self.create_fake_libvirt_mock(getLibVersion=lambda: 1002005,
                                      lookupByUUIDString=self.fake_lookup)
        with mock.patch.object(FakeVirtDomain, "fsThaw") as mock_fsthaw:
            drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI())
            instance = objects.Instance(**self.test_instance)
            image_meta = objects.ImageMeta.from_dict(
                {"properties": {"hw_qemu_guest_agent": "yes"}})
            self.assertIsNone(drvr.unquiesce(self.context, instance,
                                             image_meta))
            mock_fsthaw.assert_called_once_with()

    def test_create_snapshot_metadata(self):
        base = objects.ImageMeta.from_dict(
            {'disk_format': 'raw'})
        instance_data = {'kernel_id': 'kernel',
                         'project_id': 'prj_id',
                         'ramdisk_id': 'ram_id',
                         'os_type': None}
        instance = objects.Instance(**instance_data)
        img_fmt = 'raw'
        snp_name = 'snapshot_name'
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        ret = drvr._create_snapshot_metadata(base, instance, img_fmt,
                                             snp_name)
        expected = {'is_public': False,
                    'status': 'active',
                    'name': snp_name,
                    'properties': {
                        'kernel_id': instance['kernel_id'],
                        'image_location': 'snapshot',
                        'image_state': 'available',
                        'owner_id': instance['project_id'],
                        'ramdisk_id': instance['ramdisk_id'],
                    },
                    'disk_format': img_fmt,
                    'container_format': 'bare',
                    }
        self.assertEqual(ret, expected)

        # simulate an instance with the os_type field defined, a disk
        # format equal to ami and a container format other than bare
        instance['os_type'] = 'linux'
        base = objects.ImageMeta.from_dict(
            {'disk_format': 'ami',
             'container_format': 'test_container'})
        expected['properties']['os_type'] = instance['os_type']
        expected['disk_format'] = base.disk_format
        expected['container_format'] = base.container_format
        ret = drvr._create_snapshot_metadata(base, instance, img_fmt,
                                             snp_name)
        self.assertEqual(ret, expected)

    def test_get_volume_driver(self):
        conn = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        connection_info = {'driver_volume_type': 'fake',
                           'data': {'device_path': '/fake',
                                    'access_mode': 'rw'}}
        driver = conn._get_volume_driver(connection_info)
        result = isinstance(driver, volume_drivers.LibvirtFakeVolumeDriver)
        self.assertTrue(result)

    def test_get_volume_driver_unknown(self):
        conn = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        connection_info = {'driver_volume_type': 'unknown',
                           'data': {'device_path': '/fake',
                                    'access_mode': 'rw'}}
        self.assertRaises(
            exception.VolumeDriverNotFound,
            conn._get_volume_driver,
            connection_info
        )

    def _fake_libvirt_config_guest_disk(self):
        fake_config = vconfig.LibvirtConfigGuestDisk()
        fake_config.source_type = "network"
        fake_config.source_device = "fake-type"
        fake_config.driver_name = "qemu"
        fake_config.driver_format = "raw"
        fake_config.driver_cache = "none"
        fake_config.source_protocol = "fake"
        fake_config.source_name = "fake"
        fake_config.target_bus = "fake-bus"
        fake_config.target_dev = "vdb"
        return fake_config

    @mock.patch.object(volume_drivers.LibvirtFakeVolumeDriver, 'get_config')
    @mock.patch.object(libvirt_driver.LibvirtDriver, '_set_cache_mode')
    def test_get_volume_config(self, _set_cache_mode, get_config):
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        connection_info = {'driver_volume_type': 'fake',
                           'data': {'device_path': '/fake',
                                    'access_mode': 'rw'}}
        disk_info = {'bus': 'fake-bus', 'type': 'fake-type',
                     'dev': 'vdb'}
        config_guest_disk = self._fake_libvirt_config_guest_disk()
        get_config.return_value = copy.deepcopy(config_guest_disk)

        config = drvr._get_volume_config(connection_info, disk_info)
        get_config.assert_called_once_with(connection_info, disk_info)
        _set_cache_mode.assert_called_once_with(config)
        self.assertEqual(config_guest_disk.to_xml(), config.to_xml())

    @mock.patch.object(key_manager, 'API')
    @mock.patch.object(libvirt_driver.LibvirtDriver, '_get_volume_encryption')
    @mock.patch.object(libvirt_driver.LibvirtDriver, '_use_native_luks')
    @mock.patch.object(libvirt_driver.LibvirtDriver, '_get_volume_encryptor')
    @mock.patch('nova.virt.libvirt.host.Host')
    @mock.patch('os_brick.encryptors.luks.is_luks')
    def test_connect_volume_native_luks(self, mock_is_luks, mock_host,
            mock_get_volume_encryptor, mock_use_native_luks,
            mock_get_volume_encryption, mock_get_key_mgr):
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        connection_info = {'driver_volume_type': 'fake',
                           'data': {'device_path': '/fake',
                                    'access_mode': 'rw',
                                    'volume_id': uuids.volume_id}}
        encryption = {'provider': encryptors.LUKS,
                      'encryption_key_id': uuids.encryption_key_id}
        instance = mock.sentinel.instance

        # Mock out the encryptors
        mock_encryptor = mock.Mock()
        mock_get_volume_encryptor.return_value = mock_encryptor
        mock_is_luks.return_value = True

        # Mock out the key manager
        key = u'3734363537333734'
        key_encoded = binascii.unhexlify(key)
        mock_key = mock.Mock()
        mock_key_mgr = mock.Mock()
        mock_get_key_mgr.return_value = mock_key_mgr
        mock_key_mgr.get.return_value = mock_key
        mock_key.get_encoded.return_value = key_encoded

        # assert that the secret is created for the encrypted volume during
        # _connect_volume when use_native_luks is True
        mock_get_volume_encryption.return_value = encryption
        mock_use_native_luks.return_value = True
        drvr._connect_volume(self.context, connection_info, instance,
                             encryption=encryption)
        drvr._host.create_secret.assert_called_once_with('volume',
                                                         uuids.volume_id,
                                                         password=key)
        mock_encryptor.attach_volume.assert_not_called()

        # assert that the encryptor is used if use_native_luks is False
        drvr._host.create_secret.reset_mock()
        mock_get_volume_encryption.reset_mock()
        mock_use_native_luks.return_value = False
        drvr._connect_volume(self.context, connection_info, instance,
                             encryption=encryption)
        drvr._host.create_secret.assert_not_called()
        mock_encryptor.attach_volume.assert_called_once_with(self.context,
                                                             **encryption)

        # assert that we format the volume if is_luks is False
        mock_use_native_luks.return_value = True
        mock_is_luks.return_value = False
        drvr._connect_volume(self.context, connection_info, instance,
                             encryption=encryption)
        mock_encryptor._format_volume.assert_called_once_with(key,
                                                              **encryption)

        # assert that os-brick is used when allow_native_luks is False
        mock_encryptor.attach_volume.reset_mock()
        mock_is_luks.return_value = True
        drvr._connect_volume(self.context, connection_info, instance,
                             encryption=encryption, allow_native_luks=False)
        mock_encryptor.attach_volume.assert_called_once_with(self.context,
                                                             **encryption)
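    # NOTE: illustrative sketch, not part of the original test module.
    # The connect/disconnect tests around this point pin down the
    # decision made in _connect_volume(): with QEMU-native LUKS the key
    # is pushed into a libvirt secret keyed on the volume id and the
    # os-brick encryptor is bypassed; otherwise the encryptor is
    # attached as before. Shaped roughly like this (the helper name and
    # parameters are assumptions of the sketch):
    #
    #     def connect_encrypted(host, encryptor, ctxt, volume_id, key,
    #                           encryption, use_native_luks):
    #         if use_native_luks:
    #             # libvirt/QEMU decrypts the volume itself via the secret
    #             host.create_secret('volume', volume_id, password=key)
    #         else:
    #             # fall back to a dm-crypt attach through os-brick
    #             encryptor.attach_volume(ctxt, **encryption)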
    @mock.patch.object(libvirt_driver.LibvirtDriver, '_get_volume_encryptor')
    def test_disconnect_volume_native_luks(self, mock_get_volume_encryptor):
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        drvr._host = mock.Mock()
        drvr._host.find_secret.return_value = mock.Mock()
        connection_info = {'driver_volume_type': 'fake',
                           'data': {'device_path': '/fake',
                                    'access_mode': 'rw',
                                    'volume_id': uuids.volume_id}}
        encryption = {'provider': encryptors.LUKS,
                      'encryption_key_id': uuids.encryption_key_id}
        instance = mock.sentinel.instance

        # Mock out the encryptors
        mock_encryptor = mock.Mock()
        mock_get_volume_encryptor.return_value = mock_encryptor

        # assert that a secret is deleted if found
        drvr._disconnect_volume(self.context, connection_info, instance)
        drvr._host.delete_secret.assert_called_once_with('volume',
                                                         uuids.volume_id)
        mock_encryptor.detach_volume.assert_not_called()

        # assert that the encryptor is used if no secret is found
        drvr._host.find_secret.reset_mock()
        drvr._host.delete_secret.reset_mock()
        drvr._host.find_secret.return_value = None
        drvr._disconnect_volume(self.context, connection_info, instance,
                                encryption=encryption)
        drvr._host.delete_secret.assert_not_called()
        mock_encryptor.detach_volume.assert_called_once_with(**encryption)

    @mock.patch.object(libvirt_driver.LibvirtDriver, '_detach_encryptor')
    @mock.patch('nova.objects.InstanceList.get_uuids_by_host')
    @mock.patch('nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver')
    @mock.patch('nova.volume.cinder.API.get')
    def test_disconnect_multiattach_single_connection(
            self, mock_volume_get, mock_get_volume_driver,
            mock_get_instances, mock_detach_encryptor):
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        mock_volume_driver = mock.MagicMock(
            spec=volume_drivers.LibvirtBaseVolumeDriver)
        mock_get_volume_driver.return_value = mock_volume_driver

        attachments = (
            [('70ab645f-6ffc-406a-b3d2-5007a0c01b82',
              {'mountpoint': u'/dev/vdb',
               'attachment_id': u'9402c249-99df-4f72-89e7-fd611493ee5d'}),
             ('00803490-f768-4049-aa7d-151f54e6311e',
              {'mountpoint': u'/dev/vdb',
               'attachment_id': u'd6128a7b-19c8-4a3e-8036-011396df95ac'})])
        mock_volume_get.return_value = (
            {'attachments': OrderedDict(attachments), 'multiattach': True,
             'id': 'd30559cf-f092-4693-8589-0d0a1e7d9b1f'})

        fake_connection_info = {
            'multiattach': True,
            'volume_id': 'd30559cf-f092-4693-8589-0d0a1e7d9b1f'}
        fake_instance_1 = fake_instance.fake_instance_obj(
            self.context, host='fake-host-1')

        mock_get_instances.return_value = (
            ['00803490-f768-4049-aa7d-151f54e6311e'])
        drvr._disconnect_volume(
            self.context, fake_connection_info, fake_instance_1)
        mock_volume_driver.disconnect_volume.assert_called_once_with(
            fake_connection_info, fake_instance_1)

    @mock.patch.object(libvirt_driver.LibvirtDriver, '_detach_encryptor')
    @mock.patch('nova.objects.InstanceList.get_uuids_by_host')
    @mock.patch('nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver')
    @mock.patch('nova.volume.cinder.API.get')
    def test_disconnect_multiattach_multi_connection(
            self, mock_volume_get, mock_get_volume_driver,
            mock_get_instances, mock_detach_encryptor):
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        mock_volume_driver = mock.MagicMock(
            spec=volume_drivers.LibvirtBaseVolumeDriver)
        mock_get_volume_driver.return_value = mock_volume_driver

        attachments = (
            [('70ab645f-6ffc-406a-b3d2-5007a0c01b82',
              {'mountpoint': u'/dev/vdb',
               'attachment_id': u'9402c249-99df-4f72-89e7-fd611493ee5d'}),
             ('00803490-f768-4049-aa7d-151f54e6311e',
              {'mountpoint': u'/dev/vdb',
               'attachment_id': u'd6128a7b-19c8-4a3e-8036-011396df95ac'})])
        mock_volume_get.return_value = (
            {'attachments': OrderedDict(attachments), 'multiattach': True,
             'id': 'd30559cf-f092-4693-8589-0d0a1e7d9b1f'})

        fake_connection_info = {
            'multiattach': True,
            'volume_id': 'd30559cf-f092-4693-8589-0d0a1e7d9b1f'}
        fake_instance_1 = fake_instance.fake_instance_obj(
            self.context, host='fake-host-1')

        mock_get_instances.return_value = (
            ['00803490-f768-4049-aa7d-151f54e6311e',
             '70ab645f-6ffc-406a-b3d2-5007a0c01b82'])
        drvr._disconnect_volume(
            self.context, fake_connection_info, fake_instance_1)
        mock_volume_driver.disconnect_volume.assert_not_called()

    def test_attach_invalid_volume_type(self):
        self.create_fake_libvirt_mock()
        libvirt_driver.LibvirtDriver._conn.lookupByUUIDString \
            = self.fake_lookup
        instance = objects.Instance(**self.test_instance)
        self.mox.ReplayAll()
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        self.assertRaises(exception.VolumeDriverNotFound,
                          drvr.attach_volume,
                          None, {"driver_volume_type": "badtype"},
                          instance,
                          "/dev/sda")

    def test_attach_blockio_invalid_hypervisor(self):
        self.flags(virt_type='lxc', group='libvirt')
        self.create_fake_libvirt_mock()
        libvirt_driver.LibvirtDriver._conn.lookupByUUIDString \
            = self.fake_lookup
        instance = objects.Instance(**self.test_instance)
        self.mox.ReplayAll()
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        self.assertRaises(exception.InvalidHypervisorType,
                          drvr.attach_volume,
                          None, {"driver_volume_type": "fake",
                                 "data": {"logical_block_size": "4096",
                                          "physical_block_size": "4096"}
                                 },
                          instance,
                          "/dev/sda")

    def _test_check_discard(self, mock_log, driver_discard=None,
                            bus=None, should_log=False):
        mock_config = mock.Mock()
        mock_config.driver_discard = driver_discard
        mock_config.target_bus = bus
        mock_instance = mock.Mock()
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        drvr._check_discard_for_attach_volume(mock_config, mock_instance)
        self.assertEqual(should_log, mock_log.called)

    @mock.patch('nova.virt.libvirt.driver.LOG.debug')
    def test_check_discard_for_attach_volume_no_unmap(self, mock_log):
        self._test_check_discard(mock_log, driver_discard=None,
                                 bus='scsi', should_log=False)

    @mock.patch('nova.virt.libvirt.driver.LOG.debug')
    def test_check_discard_for_attach_volume_blk_controller(self, mock_log):
        self._test_check_discard(mock_log, driver_discard='unmap',
                                 bus='virtio', should_log=True)

    @mock.patch('nova.virt.libvirt.driver.LOG.debug')
    def test_check_discard_for_attach_volume_valid_controller(self, mock_log):
        self._test_check_discard(mock_log, driver_discard='unmap',
                                 bus='scsi', should_log=False)

    @mock.patch('nova.virt.libvirt.driver.LOG.debug')
    def test_check_discard_for_attach_volume_blk_controller_no_unmap(
            self, mock_log):
        self._test_check_discard(mock_log, driver_discard=None,
                                 bus='virtio', should_log=False)

    @mock.patch('nova.utils.get_image_from_system_metadata')
    @mock.patch('nova.virt.libvirt.blockinfo.get_info_from_bdm')
    @mock.patch('nova.virt.libvirt.host.Host._get_domain')
    def test_attach_volume_with_vir_domain_affect_live_flag(self,
            mock_get_domain, mock_get_info, get_image):
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        instance = objects.Instance(**self.test_instance)
        image_meta = {}
        get_image.return_value = image_meta
        mock_dom = mock.MagicMock()
        mock_get_domain.return_value = mock_dom

        connection_info = {"driver_volume_type": "fake",
                           "data": {"device_path": "/fake",
                                    "access_mode": "rw"}}
        bdm = {'device_name': 'vdb',
               'disk_bus': 'fake-bus',
               'device_type': 'fake-type'}
        disk_info = {'bus': bdm['disk_bus'], 'type': bdm['device_type'],
                     'dev': 'vdb'}
        mock_get_info.return_value = disk_info
        mock_conf = mock.MagicMock()
        flags = (fakelibvirt.VIR_DOMAIN_AFFECT_CONFIG |
                 fakelibvirt.VIR_DOMAIN_AFFECT_LIVE)

        with test.nested(
            mock.patch.object(drvr, '_connect_volume'),
            mock.patch.object(drvr, '_get_volume_config',
                              return_value=mock_conf),
            mock.patch.object(drvr, '_check_discard_for_attach_volume'),
            mock.patch.object(drvr, '_build_device_metadata'),
            mock.patch.object(objects.Instance, 'save')
        ) as (mock_connect_volume, mock_get_volume_config,
              mock_check_discard, mock_build_metadata, mock_save):
            for state in (power_state.RUNNING, power_state.PAUSED):
                mock_dom.info.return_value = [state, 512, 512, 2, 1234, 5678]
                mock_build_metadata.return_value = \
                    objects.InstanceDeviceMetadata()

                drvr.attach_volume(self.context, connection_info, instance,
                                   "/dev/vdb", disk_bus=bdm['disk_bus'],
                                   device_type=bdm['device_type'])

                mock_get_domain.assert_called_with(instance)
                mock_get_info.assert_called_with(
                    instance,
                    CONF.libvirt.virt_type,
                    test.MatchType(objects.ImageMeta),
                    bdm)
                mock_connect_volume.assert_called_with(
                    self.context, connection_info, instance, encryption=None)
                mock_get_volume_config.assert_called_with(
                    connection_info, disk_info)
                mock_dom.attachDeviceFlags.assert_called_with(
                    mock_conf.to_xml(), flags=flags)
                mock_check_discard.assert_called_with(mock_conf, instance)
                mock_build_metadata.assert_called_with(self.context, instance)
                mock_save.assert_called_with()

    @mock.patch('nova.virt.libvirt.host.Host._get_domain')
    def test_detach_volume_with_vir_domain_affect_live_flag(self,
            mock_get_domain):
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        instance = objects.Instance(**self.test_instance)
        # Domain XML with the vdc disk attached, and without it
        mock_xml_with_disk = """<domain>
  <devices>
    <disk type='file'>
      <source file='/path/to/fake-volume'/>
      <target dev='vdc' bus='virtio'/>
    </disk>
  </devices>
</domain>"""
        mock_xml_without_disk = """<domain>
  <devices>
  </devices>
</domain>"""
        mock_dom = mock.MagicMock()

        # The second call returns nothing about disk vdc, so it looks removed
        return_list = [mock_xml_with_disk, mock_xml_without_disk]
        # Doubling the size of the return list because we test with two
        # guest power states
        mock_dom.XMLDesc.side_effect = return_list + return_list

        connection_info = {"driver_volume_type": "fake",
                           "data": {"device_path": "/fake",
                                    "access_mode": "rw"}}
        flags = (fakelibvirt.VIR_DOMAIN_AFFECT_CONFIG |
                 fakelibvirt.VIR_DOMAIN_AFFECT_LIVE)

        with mock.patch.object(drvr, '_disconnect_volume') as \
                mock_disconnect_volume:
            for state in (power_state.RUNNING, power_state.PAUSED):
                mock_dom.info.return_value = [state, 512, 512, 2, 1234, 5678]
                mock_get_domain.return_value = mock_dom
                drvr.detach_volume(
                    self.context, connection_info, instance, '/dev/vdc')

                mock_get_domain.assert_called_with(instance)
                mock_dom.detachDeviceFlags.assert_called_with(
                    """<disk type="file">
  <source file="/path/to/fake-volume"/>
  <target bus="virtio" dev="vdc"/>
</disk>
""", flags=flags)
                mock_disconnect_volume.assert_called_with(
                    self.context, connection_info, instance, encryption=None)

    @mock.patch('nova.virt.libvirt.driver.LibvirtDriver._disconnect_volume')
    @mock.patch('nova.virt.libvirt.host.Host._get_domain')
    def test_detach_volume_disk_not_found(self, mock_get_domain,
                                          mock_disconnect_volume):
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        instance = objects.Instance(**self.test_instance)
        mock_xml_without_disk = """<domain>
  <devices>
  </devices>
</domain>"""
        mock_dom = mock.MagicMock(return_value=mock_xml_without_disk)

        connection_info = {"driver_volume_type": "fake",
                           "data": {"device_path": "/fake",
                                    "access_mode": "rw"}}
        mock_dom.info.return_value = [power_state.RUNNING, 512, 512, 2, 1234,
                                      5678]
        mock_get_domain.return_value = mock_dom
        drvr.detach_volume(
            self.context, connection_info, instance, '/dev/vdc')

        mock_get_domain.assert_called_once_with(instance)
        mock_disconnect_volume.assert_called_once_with(
            self.context, connection_info, instance, encryption=None)

    @mock.patch('nova.virt.libvirt.driver.LibvirtDriver._get_volume_encryptor')
    @mock.patch('nova.virt.libvirt.driver.LibvirtDriver._disconnect_volume')
    @mock.patch('nova.virt.libvirt.host.Host._get_domain')
    def test_detach_volume_disk_not_found_encryption(self, mock_get_domain,
                                                     mock_disconnect_volume,
                                                     mock_get_encryptor):
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        instance = objects.Instance(**self.test_instance)
        mock_xml_without_disk = """<domain>
  <devices>
  </devices>
</domain>"""
        mock_dom = mock.MagicMock(return_value=mock_xml_without_disk)
        encryption = mock.MagicMock()

        connection_info = {"driver_volume_type": "fake",
                           "data": {"device_path": "/fake",
                                    "access_mode": "rw"}}
        mock_dom.info.return_value = [power_state.RUNNING, 512, 512, 2, 1234,
                                      5678]
        mock_get_domain.return_value = mock_dom
        drvr.detach_volume(self.context, connection_info, instance,
                           '/dev/vdc', encryption)
        mock_disconnect_volume.assert_called_once_with(
            self.context,
            connection_info, instance, encryption=encryption)

    @mock.patch('nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver')
    @mock.patch('nova.virt.libvirt.driver.LibvirtDriver._get_volume_encryptor')
    @mock.patch('nova.virt.libvirt.host.Host.get_guest')
    def test_detach_volume_order_with_encryptors(self, mock_get_guest,
            mock_get_encryptor, mock_get_volume_driver):
        mock_volume_driver = mock.MagicMock(
            spec=volume_drivers.LibvirtBaseVolumeDriver)
        mock_get_volume_driver.return_value = mock_volume_driver
        mock_guest = mock.MagicMock(spec=libvirt_guest.Guest)
        mock_guest.get_power_state.return_value = power_state.RUNNING
        mock_get_guest.return_value = mock_guest
        mock_encryptor = mock.MagicMock(
            spec=encryptors.nop.NoOpEncryptor)
        mock_get_encryptor.return_value = mock_encryptor

        mock_order = mock.Mock()
        mock_order.attach_mock(mock_volume_driver.disconnect_volume,
                               'disconnect_volume')
        mock_order.attach_mock(mock_guest.detach_device_with_retry(),
                               'detach_volume')
        mock_order.attach_mock(mock_encryptor.detach_volume,
                               'detach_encryptor')

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        instance = objects.Instance(**self.test_instance)
        connection_info = {"driver_volume_type": "fake",
                           "data": {"device_path": "/fake",
                                    "access_mode": "rw"}}
        encryption = {"provider": "NoOpEncryptor"}
        drvr.detach_volume(
            self.context, connection_info, instance, '/dev/vdc',
            encryption=encryption)

        mock_order.assert_has_calls([
            mock.call.detach_volume(),
            mock.call.detach_encryptor(**encryption),
            mock.call.disconnect_volume(connection_info, instance)])

    def test_extend_volume(self):
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        instance = objects.Instance(**self.test_instance)
        connection_info = {
            'driver_volume_type': 'fake',
            'data': {'device_path': '/fake', 'access_mode': 'rw'}
        }
        new_size_in_kb = 20 * 1024 * 1024

        guest = mock.Mock(spec='nova.virt.libvirt.guest.Guest')
        # block_device
        block_device = mock.Mock(
            spec='nova.virt.libvirt.guest.BlockDevice')
        block_device.resize = mock.Mock()
        guest.get_block_device = mock.Mock(return_value=block_device)
        drvr._host.get_guest = mock.Mock(return_value=guest)
        drvr._extend_volume = mock.Mock(return_value=new_size_in_kb)

        for state in (power_state.RUNNING, power_state.PAUSED):
            guest.get_power_state = mock.Mock(return_value=state)
            drvr.extend_volume(connection_info, instance)
            drvr._extend_volume.assert_called_with(connection_info,
                                                   instance)
            guest.get_block_device.assert_called_with('/fake')
            block_device.resize.assert_called_with(20480)

    def test_extend_volume_with_volume_driver_without_support(self):
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        instance = objects.Instance(**self.test_instance)

        with mock.patch.object(drvr, '_extend_volume',
                               side_effect=NotImplementedError()):
            connection_info = {'driver_volume_type': 'fake'}
            self.assertRaises(exception.ExtendVolumeNotSupported,
                              drvr.extend_volume,
                              connection_info, instance)

    def test_extend_volume_disk_not_found(self):
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        instance = objects.Instance(**self.test_instance)
        connection_info = {
            'driver_volume_type': 'fake',
            'data': {'device_path': '/fake', 'access_mode': 'rw'}
        }
        new_size_in_kb = 20 * 1024 * 1024

        xml_no_disk = "<domain><name>instance-00000001</name></domain>"
        dom = fakelibvirt.Domain(drvr._get_connection(), xml_no_disk, False)
        guest = libvirt_guest.Guest(dom)
        guest.get_power_state = mock.Mock(return_value=power_state.RUNNING)
        drvr._host.get_guest = mock.Mock(return_value=guest)
        drvr._extend_volume = mock.Mock(return_value=new_size_in_kb)

        drvr.extend_volume(connection_info, instance)
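    # NOTE: illustrative sketch, not part of the original test module.
    # The encryptor tests below all exercise the same lookup ladder in
    # _attach_encryptor()/_detach_encryptor(): bail out early when the
    # connection_info carries no volume id, fetch the encryption
    # metadata from Cinder only when the caller did not provide it, and
    # only touch an encryptor when that metadata is non-empty. Roughly
    # (helper names are assumptions of the sketch; the real code can
    # also fall back to connection_info['serial']):
    #
    #     def attach_encryptor(ctxt, connection_info, encryption):
    #         volume_id = connection_info['data'].get('volume_id')
    #         if volume_id is None:
    #             return                      # incomplete connection_info
    #         if encryption is None:
    #             encryption = get_encryption_metadata(ctxt, volume_id)
    #         if not encryption:
    #             return                      # unencrypted volume
    #         encryptor = get_volume_encryptor(connection_info, encryption)
    #         encryptor.attach_volume(ctxt, **encryption)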
    def test_extend_volume_with_instance_not_found(self):
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        instance = objects.Instance(**self.test_instance)

        with test.nested(
            mock.patch.object(host.Host, '_get_domain',
                              side_effect=exception.InstanceNotFound(
                                  instance_id=instance.uuid)),
            mock.patch.object(drvr, '_extend_volume')
        ) as (_get_domain, _extend_volume):
            connection_info = {'driver_volume_type': 'fake'}
            self.assertRaises(exception.InstanceNotFound,
                              drvr.extend_volume,
                              connection_info, instance)

    def test_extend_volume_with_libvirt_error(self):
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        instance = objects.Instance(**self.test_instance)
        connection_info = {
            'driver_volume_type': 'fake',
            'data': {'device_path': '/fake', 'access_mode': 'rw'}
        }
        new_size_in_kb = 20 * 1024 * 1024

        guest = mock.Mock(spec='nova.virt.libvirt.guest.Guest')
        guest.get_power_state = mock.Mock(return_value=power_state.RUNNING)
        # block_device
        block_device = mock.Mock(
            spec='nova.virt.libvirt.guest.BlockDevice')
        block_device.resize = mock.Mock(
            side_effect=fakelibvirt.libvirtError('ERR'))
        guest.get_block_device = mock.Mock(return_value=block_device)
        drvr._host.get_guest = mock.Mock(return_value=guest)
        drvr._extend_volume = mock.Mock(return_value=new_size_in_kb)

        self.assertRaises(fakelibvirt.libvirtError,
                          drvr.extend_volume,
                          connection_info, instance)

    @mock.patch('os_brick.encryptors.get_encryption_metadata')
    @mock.patch('nova.virt.libvirt.driver.LibvirtDriver._get_volume_encryptor')
    def test_use_encryptor_connection_info_incomplete(self,
            mock_get_encryptor, mock_get_metadata):
        """Assert that no attach attempt is made given incomplete
        connection_info.
        """
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        connection_info = {'data': {}}

        drvr._attach_encryptor(self.context, connection_info, None, False)

        mock_get_metadata.assert_not_called()
        mock_get_encryptor.assert_not_called()

    @mock.patch('os_brick.encryptors.get_encryption_metadata')
    @mock.patch('nova.virt.libvirt.driver.LibvirtDriver._get_volume_encryptor')
    def test_attach_encryptor_unencrypted_volume_meta_missing(self,
            mock_get_encryptor, mock_get_metadata):
        """Assert that, if not provided, encryption metadata is fetched even
        if the volume is ultimately unencrypted, and that no attempt to
        attach is made.
        """
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        encryption = {}
        connection_info = {'data': {'volume_id': uuids.volume_id}}
        mock_get_metadata.return_value = encryption

        drvr._attach_encryptor(self.context, connection_info, None, False)

        mock_get_metadata.assert_called_once_with(self.context,
                drvr._volume_api, uuids.volume_id, connection_info)
        mock_get_encryptor.assert_not_called()

    @mock.patch('os_brick.encryptors.get_encryption_metadata')
    @mock.patch('nova.virt.libvirt.driver.LibvirtDriver._get_volume_encryptor')
    def test_attach_encryptor_unencrypted_volume_meta_provided(self,
            mock_get_encryptor, mock_get_metadata):
        """Assert that if an empty encryption metadata dict is provided
        there is no additional attempt to look up the metadata or attach
        the encryptor.
""" drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) encryption = {} connection_info = {'data': {'volume_id': uuids.volume_id}} drvr._attach_encryptor(self.context, connection_info, encryption, False) mock_get_metadata.assert_not_called() mock_get_encryptor.assert_not_called() @mock.patch('os_brick.encryptors.get_encryption_metadata') @mock.patch('nova.virt.libvirt.driver.LibvirtDriver._get_volume_encryptor') def test_attach_encryptor_encrypted_volume_meta_missing(self, mock_get_encryptor, mock_get_metadata): """Assert that if missing the encryption metadata of an encrypted volume is fetched and then used to attach the encryptor for the volume. """ drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) mock_encryptor = mock.MagicMock() mock_get_encryptor.return_value = mock_encryptor encryption = {'provider': 'luks', 'control_location': 'front-end'} mock_get_metadata.return_value = encryption connection_info = {'data': {'volume_id': uuids.volume_id}} drvr._attach_encryptor(self.context, connection_info, None, False) mock_get_metadata.assert_called_once_with(self.context, drvr._volume_api, uuids.volume_id, connection_info) mock_get_encryptor.assert_called_once_with(connection_info, encryption) mock_encryptor.attach_volume.assert_called_once_with(self.context, **encryption) @mock.patch('os_brick.encryptors.get_encryption_metadata') @mock.patch('nova.virt.libvirt.driver.LibvirtDriver._get_volume_encryptor') def test_attach_encryptor_encrypted_volume_meta_provided(self, mock_get_encryptor, mock_get_metadata): """Assert that when provided there are no further attempts to fetch the encryption metadata for the volume and that the provided metadata is then used to attach the volume. """ drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) mock_encryptor = mock.MagicMock() mock_get_encryptor.return_value = mock_encryptor encryption = {'provider': 'luks', 'control_location': 'front-end'} connection_info = {'data': {'volume_id': uuids.volume_id}} drvr._attach_encryptor(self.context, connection_info, encryption, False) mock_get_metadata.assert_not_called() mock_get_encryptor.assert_called_once_with(connection_info, encryption) mock_encryptor.attach_volume.assert_called_once_with(self.context, **encryption) @mock.patch.object(key_manager, 'API') @mock.patch('os_brick.encryptors.get_encryption_metadata') @mock.patch('nova.virt.libvirt.driver.LibvirtDriver._get_volume_encryptor') def test_attach_encryptor_encrypted_native_luks_serial(self, mock_get_encryptor, mock_get_metadata, mock_get_key_mgr): """Uses native luks encryption with a provider encryptor and the connection_info has a serial but not volume_id in the 'data' sub-dict. 
""" drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) mock_encryptor = mock.MagicMock() mock_get_encryptor.return_value = mock_encryptor encryption = {'provider': 'luks', 'control_location': 'front-end', 'encryption_key_id': uuids.encryption_key_id} connection_info = {'serial': uuids.serial, 'data': {}} # Mock out the key manager key = u'3734363537333734' key_encoded = binascii.unhexlify(key) mock_key = mock.Mock() mock_key_mgr = mock.Mock() mock_get_key_mgr.return_value = mock_key_mgr mock_key_mgr.get.return_value = mock_key mock_key.get_encoded.return_value = key_encoded with mock.patch.object(drvr, '_use_native_luks', return_value=True): with mock.patch.object(drvr._host, 'create_secret') as crt_scrt: drvr._attach_encryptor(self.context, connection_info, encryption, allow_native_luks=True) mock_get_metadata.assert_not_called() mock_get_encryptor.assert_not_called() crt_scrt.assert_called_once_with( 'volume', uuids.serial, password=key) @mock.patch('os_brick.encryptors.get_encryption_metadata') @mock.patch('nova.virt.libvirt.driver.LibvirtDriver._get_volume_encryptor') def test_detach_encryptor_connection_info_incomplete(self, mock_get_encryptor, mock_get_metadata): """Assert no detach attempt is made given incomplete connection_info. """ drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) connection_info = {'data': {}} drvr._detach_encryptor(self.context, connection_info, None) mock_get_metadata.assert_not_called() mock_get_encryptor.assert_not_called() @mock.patch('os_brick.encryptors.get_encryption_metadata') @mock.patch('nova.virt.libvirt.driver.LibvirtDriver._get_volume_encryptor') def test_detach_encryptor_unencrypted_volume_meta_missing(self, mock_get_encryptor, mock_get_metadata): """Assert that if not provided encryption metadata is fetched even if the volume is ultimately unencrypted and no attempt to detach is made. """ drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) encryption = {} connection_info = {'data': {'volume_id': uuids.volume_id}} mock_get_metadata.return_value = encryption drvr._detach_encryptor(self.context, connection_info, None) mock_get_metadata.assert_called_once_with(self.context, drvr._volume_api, uuids.volume_id, connection_info) mock_get_encryptor.assert_not_called() @mock.patch('os_brick.encryptors.get_encryption_metadata') @mock.patch('nova.virt.libvirt.driver.LibvirtDriver._get_volume_encryptor') def test_detach_encryptor_unencrypted_volume_meta_provided(self, mock_get_encryptor, mock_get_metadata): """Assert that if an empty encryption metadata dict is provided that there is no additional attempt to lookup the metadata or detach the encryptor. """ drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) encryption = {} connection_info = {'data': {'volume_id': uuids.volume_id}} drvr._detach_encryptor(self.context, connection_info, encryption) mock_get_metadata.assert_not_called() mock_get_encryptor.assert_not_called() @mock.patch('os_brick.encryptors.get_encryption_metadata') @mock.patch('nova.virt.libvirt.driver.LibvirtDriver._get_volume_encryptor') def test_detach_encryptor_encrypted_volume_meta_missing(self, mock_get_encryptor, mock_get_metadata): """Assert that if missing the encryption metadata of an encrypted volume is fetched and then used to detach the encryptor for the volume. 
""" drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) mock_encryptor = mock.MagicMock() mock_get_encryptor.return_value = mock_encryptor encryption = {'provider': 'luks', 'control_location': 'front-end'} mock_get_metadata.return_value = encryption connection_info = {'data': {'volume_id': uuids.volume_id}} drvr._detach_encryptor(self.context, connection_info, None) mock_get_metadata.assert_called_once_with(self.context, drvr._volume_api, uuids.volume_id, connection_info) mock_get_encryptor.assert_called_once_with(connection_info, encryption) mock_encryptor.detach_volume.assert_called_once_with(**encryption) @mock.patch('os_brick.encryptors.get_encryption_metadata') @mock.patch('nova.virt.libvirt.driver.LibvirtDriver._get_volume_encryptor') def test_detach_encryptor_encrypted_volume_meta_provided(self, mock_get_encryptor, mock_get_metadata): """Assert that when provided there are no further attempts to fetch the encryption metadata for the volume and that the provided metadata is then used to detach the volume. """ drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) mock_encryptor = mock.MagicMock() mock_get_encryptor.return_value = mock_encryptor encryption = {'provider': 'luks', 'control_location': 'front-end'} connection_info = {'data': {'volume_id': uuids.volume_id}} drvr._detach_encryptor(self.context, connection_info, encryption) mock_get_metadata.assert_not_called() mock_get_encryptor.assert_called_once_with(connection_info, encryption) mock_encryptor.detach_volume.assert_called_once_with(**encryption) @mock.patch.object(host.Host, "has_min_version") def test_use_native_luks(self, mock_has_min_version): drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) # True only when the required QEMU and Libvirt versions are available # on the host and a valid LUKS provider is present within the # encryption metadata dict. mock_has_min_version.return_value = True self.assertFalse(drvr._use_native_luks({})) self.assertFalse(drvr._use_native_luks({ 'provider': 'nova.volume.encryptors.cryptsetup.CryptSetupEncryptor' })) self.assertFalse(drvr._use_native_luks({ 'provider': 'CryptSetupEncryptor'})) self.assertFalse(drvr._use_native_luks({ 'provider': encryptors.PLAIN})) self.assertTrue(drvr._use_native_luks({ 'provider': 'nova.volume.encryptors.luks.LuksEncryptor'})) self.assertTrue(drvr._use_native_luks({ 'provider': 'LuksEncryptor'})) self.assertTrue(drvr._use_native_luks({ 'provider': encryptors.LUKS})) # Always False when the required QEMU and Libvirt versions are not # available on the host. 
    def test_multi_nic(self):
        network_info = _fake_network_info(self, 2)
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        instance_ref = objects.Instance(**self.test_instance)
        image_meta = objects.ImageMeta.from_dict(self.test_image_meta)
        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance_ref,
                                            image_meta)
        xml = drvr._get_guest_xml(self.context, instance_ref,
                                  network_info, disk_info,
                                  image_meta)
        tree = etree.fromstring(xml)
        interfaces = tree.findall("./devices/interface")
        self.assertEqual(len(interfaces), 2)
        self.assertEqual(interfaces[0].get('type'), 'bridge')

    def _check_xml_and_container(self, instance):
        instance_ref = objects.Instance(**instance)
        image_meta = objects.ImageMeta.from_dict(self.test_image_meta)

        self.flags(virt_type='lxc', group='libvirt')
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)

        self.assertEqual(drvr._uri(), 'lxc:///')

        network_info = _fake_network_info(self, 1)
        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance_ref,
                                            image_meta)
        xml = drvr._get_guest_xml(self.context, instance_ref,
                                  network_info, disk_info,
                                  image_meta)
        tree = etree.fromstring(xml)

        check = [
            (lambda t: t.find('.').get('type'), 'lxc'),
            (lambda t: t.find('./os/type').text, 'exe'),
            (lambda t: t.find('./devices/filesystem/target').get('dir'), '/')]

        for i, (check, expected_result) in enumerate(check):
            self.assertEqual(check(tree),
                             expected_result,
                             '%s failed common check %d' % (xml, i))

        target = tree.find('./devices/filesystem/source').get('dir')
        self.assertGreater(len(target), 0)

    def _check_xml_and_disk_prefix(self, instance, prefix):
        instance_ref = objects.Instance(**instance)
        image_meta = objects.ImageMeta.from_dict(self.test_image_meta)

        def _get_prefix(p, default):
            if p:
                return p + 'a'
            return default

        type_disk_map = {
            'qemu': [
                (lambda t: t.find('.').get('type'), 'qemu'),
                (lambda t: t.find('./devices/disk/target').get('dev'),
                 _get_prefix(prefix, 'vda'))],
            'xen': [
                (lambda t: t.find('.').get('type'), 'xen'),
                (lambda t: t.find('./devices/disk/target').get('dev'),
                 _get_prefix(prefix, 'xvda'))],
            'kvm': [
                (lambda t: t.find('.').get('type'), 'kvm'),
                (lambda t: t.find('./devices/disk/target').get('dev'),
                 _get_prefix(prefix, 'vda'))],
            'uml': [
                (lambda t: t.find('.').get('type'), 'uml'),
                (lambda t: t.find('./devices/disk/target').get('dev'),
                 _get_prefix(prefix, 'ubda'))]
            }

        for (virt_type, checks) in type_disk_map.items():
            self.flags(virt_type=virt_type, group='libvirt')
            if prefix:
                self.flags(disk_prefix=prefix, group='libvirt')
            drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)

            network_info = _fake_network_info(self, 1)
            disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                                instance_ref,
                                                image_meta)
            xml = drvr._get_guest_xml(self.context, instance_ref,
                                      network_info, disk_info,
                                      image_meta)
            tree = etree.fromstring(xml)

            for i, (check, expected_result) in enumerate(checks):
                self.assertEqual(check(tree),
                                 expected_result,
                                 '%s != %s failed check %d' %
                                 (check(tree), expected_result, i))

    def _check_xml_and_disk_driver(self, image_meta):
        os_open = os.open
        directio_supported = True

        def os_open_stub(path, flags, *args, **kwargs):
            if flags & os.O_DIRECT:
                if not directio_supported:
                    raise OSError(errno.EINVAL,
                                  '%s: %s' % (os.strerror(errno.EINVAL),
                                              path))
                flags &= ~os.O_DIRECT
            return os_open(path, flags, *args, **kwargs)

        self.stub_out('os.open', os_open_stub)

        def connection_supports_direct_io_stub(dirpath):
            return directio_supported

        self.stub_out('nova.utils.supports_direct_io',
                      connection_supports_direct_io_stub)

        instance_ref = objects.Instance(**self.test_instance)
        image_meta = objects.ImageMeta.from_dict(self.test_image_meta)
        network_info = _fake_network_info(self, 1)

        drv = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance_ref,
                                            image_meta)
        xml = drv._get_guest_xml(self.context, instance_ref,
                                 network_info, disk_info, image_meta)
        tree = etree.fromstring(xml)
        disks = tree.findall('./devices/disk/driver')
        for guest_disk in disks:
            self.assertEqual(guest_disk.get("cache"), "none")

        directio_supported = False

        # The O_DIRECT availability is cached on first use in
        # LibvirtDriver, hence we re-create it here
        drv = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance_ref,
                                            image_meta)
        xml = drv._get_guest_xml(self.context, instance_ref,
                                 network_info, disk_info, image_meta)
        tree = etree.fromstring(xml)
        disks = tree.findall('./devices/disk/driver')
        for guest_disk in disks:
            self.assertEqual(guest_disk.get("cache"), "writethrough")

    def _check_xml_and_disk_bus(self, image_meta,
                                block_device_info, wantConfig):
        instance_ref = objects.Instance(**self.test_instance)
        network_info = _fake_network_info(self, 1)

        drv = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance_ref,
                                            image_meta,
                                            block_device_info)

        xml = drv._get_guest_xml(self.context, instance_ref,
                                 network_info, disk_info, image_meta,
                                 block_device_info=block_device_info)
        tree = etree.fromstring(xml)

        got_disks = tree.findall('./devices/disk')
        got_disk_targets = tree.findall('./devices/disk/target')
        for i in range(len(wantConfig)):
            want_device_type = wantConfig[i][0]
            want_device_bus = wantConfig[i][1]
            want_device_dev = wantConfig[i][2]

            got_device_type = got_disks[i].get('device')
            got_device_bus = got_disk_targets[i].get('bus')
            got_device_dev = got_disk_targets[i].get('dev')

            self.assertEqual(got_device_type, want_device_type)
            self.assertEqual(got_device_bus, want_device_bus)
            self.assertEqual(got_device_dev, want_device_dev)

    def _check_xml_and_uuid(self, image_meta):
        instance_ref = objects.Instance(**self.test_instance)
        image_meta = objects.ImageMeta.from_dict(self.test_image_meta)
        network_info = _fake_network_info(self, 1)

        drv = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance_ref,
                                            image_meta)
        xml = drv._get_guest_xml(self.context, instance_ref,
                                 network_info, disk_info, image_meta)
        tree = etree.fromstring(xml)
        self.assertEqual(tree.find('./uuid').text,
                         instance_ref['uuid'])

    @mock.patch.object(libvirt_driver.LibvirtDriver,
                       "_get_host_sysinfo_serial_hardware",)
    def _check_xml_and_uri(self, instance, mock_serial,
                           expect_ramdisk=False, expect_kernel=False,
                           rescue=None, expect_xen_hvm=False,
                           xen_only=False):
        mock_serial.return_value = "cef19ce0-0ca2-11df-855d-b19fbce37686"
        instance_ref = objects.Instance(**instance)
        image_meta = objects.ImageMeta.from_dict(self.test_image_meta)

        xen_vm_mode = fields.VMMode.XEN
        if expect_xen_hvm:
            xen_vm_mode = fields.VMMode.HVM

        type_uri_map = {'qemu': ('qemu:///system',
                             [(lambda t: t.find('.').get('type'), 'qemu'),
                              (lambda t: t.find('./os/type').text,
                               fields.VMMode.HVM),
                              (lambda t: t.find('./devices/emulator'), None)]),
                        'kvm': ('qemu:///system',
                             [(lambda t: t.find('.').get('type'), 'kvm'),
                              (lambda t: t.find('./os/type').text,
                               fields.VMMode.HVM),
                              (lambda t: t.find('./devices/emulator'), None)]),
                        'uml': ('uml:///system',
                             [(lambda t: t.find('.').get('type'), 'uml'),
                              (lambda t: t.find('./os/type').text,
                               fields.VMMode.UML)]),
                        'xen': ('xen:///',
                             [(lambda t: t.find('.').get('type'), 'xen'),
                              (lambda t: t.find('./os/type').text,
                               xen_vm_mode)])}

        if expect_xen_hvm or xen_only:
            hypervisors_to_check = ['xen']
        else:
            hypervisors_to_check = ['qemu', 'kvm', 'xen']

        for hypervisor_type in hypervisors_to_check:
            check_list = type_uri_map[hypervisor_type][1]

            if rescue:
                suffix = '.rescue'
            else:
                suffix = ''
            if expect_kernel:
                check = (lambda t: self.relpath(t.find('./os/kernel').text).
                         split('/')[1], 'kernel' + suffix)
            else:
                check = (lambda t: t.find('./os/kernel'), None)
            check_list.append(check)

            if expect_kernel:
                check = (lambda t: "no_timer_check" in t.find('./os/cmdline').
                         text, hypervisor_type == "qemu")
                check_list.append(check)
            # Hypervisors that only support vm_mode.HVM should not produce
            # configuration that results in kernel arguments
            if not expect_kernel and (hypervisor_type in ['qemu', 'kvm']):
                check = (lambda t: t.find('./os/root'), None)
                check_list.append(check)
                check = (lambda t: t.find('./os/cmdline'), None)
                check_list.append(check)

            if expect_ramdisk:
                check = (lambda t: self.relpath(t.find('./os/initrd').text).
                         split('/')[1], 'ramdisk' + suffix)
            else:
                check = (lambda t: t.find('./os/initrd'), None)
            check_list.append(check)

            if hypervisor_type in ['qemu', 'kvm']:
                xpath = "./sysinfo/system/entry"
                check = (lambda t: t.findall(xpath)[0].get("name"),
                         "manufacturer")
                check_list.append(check)
                check = (lambda t: t.findall(xpath)[0].text,
                         version.vendor_string())
                check_list.append(check)

                check = (lambda t: t.findall(xpath)[1].get("name"),
                         "product")
                check_list.append(check)
                check = (lambda t: t.findall(xpath)[1].text,
                         version.product_string())
                check_list.append(check)

                check = (lambda t: t.findall(xpath)[2].get("name"),
                         "version")
                check_list.append(check)
                # NOTE(sirp): empty strings don't roundtrip in lxml (they are
                # converted to None), so we need an `or ''` to correct for
                # that
                check = (lambda t: t.findall(xpath)[2].text or '',
                         version.version_string_with_package())
                check_list.append(check)

                check = (lambda t: t.findall(xpath)[3].get("name"),
                         "serial")
                check_list.append(check)
                check = (lambda t: t.findall(xpath)[3].text,
                         "cef19ce0-0ca2-11df-855d-b19fbce37686")
                check_list.append(check)

                check = (lambda t: t.findall(xpath)[4].get("name"),
                         "uuid")
                check_list.append(check)
                check = (lambda t: t.findall(xpath)[4].text,
                         instance['uuid'])
                check_list.append(check)

            if hypervisor_type in ['qemu', 'kvm']:
                check = (lambda t: t.findall('./devices/serial')[0].get(
                    'type'), 'file')
                check_list.append(check)
                check = (lambda t: t.findall('./devices/serial')[1].get(
                    'type'), 'pty')
                check_list.append(check)
                check = (lambda t: self.relpath(t.findall(
                    './devices/serial/source')[0].get('path')).
                         split('/')[1], 'console.log')
                check_list.append(check)
            else:
                check = (lambda t: t.find('./devices/console').get(
                    'type'), 'pty')
                check_list.append(check)

        common_checks = [
            (lambda t: t.find('.').tag, 'domain'),
            (lambda t: t.find('./memory').text, '2097152')]
        if rescue:
            common_checks += [
                (lambda t: self.relpath(t.findall('./devices/disk/source')[0].
                                        get('file')).split('/')[1],
                 'disk.rescue'),
                (lambda t: self.relpath(t.findall('./devices/disk/source')[1].
                                        get('file')).split('/')[1], 'disk')]
        else:
            common_checks += [(lambda t: self.relpath(t.findall(
                './devices/disk/source')[0].get('file')).split('/')[1],
                               'disk')]
            common_checks += [(lambda t: self.relpath(t.findall(
                './devices/disk/source')[1].get('file')).split('/')[1],
                               'disk.local')]

        for virt_type in hypervisors_to_check:
            expected_uri = type_uri_map[virt_type][0]
            checks = type_uri_map[virt_type][1]
            self.flags(virt_type=virt_type, group='libvirt')

            with mock.patch('nova.virt.libvirt.driver.libvirt') as old_virt:
                del old_virt.VIR_CONNECT_BASELINE_CPU_EXPAND_FEATURES

                drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)

                self.assertEqual(drvr._uri(), expected_uri)

                network_info = _fake_network_info(self, 1)
                disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                                    instance_ref,
                                                    image_meta,
                                                    rescue=rescue)
                xml = drvr._get_guest_xml(self.context, instance_ref,
                                          network_info, disk_info,
                                          image_meta,
                                          rescue=rescue)
                tree = etree.fromstring(xml)
                for i, (check, expected_result) in enumerate(checks):
                    self.assertEqual(check(tree),
                                     expected_result,
                                     '%s != %s failed check %d' %
                                     (check(tree), expected_result, i))

                for i, (check, expected_result) in enumerate(common_checks):
                    self.assertEqual(check(tree),
                                     expected_result,
                                     '%s != %s failed common check %d' %
                                     (check(tree), expected_result, i))

                filterref = './devices/interface/filterref'
                vif = network_info[0]
                nic_id = vif['address'].lower().replace(':', '')
                fw = firewall.NWFilterFirewall(drvr)
                instance_filter_name = fw._instance_filter_name(instance_ref,
                                                                nic_id)
                self.assertEqual(tree.find(filterref).get('filter'),
                                 instance_filter_name)

        # This test is supposed to make sure we don't override a
        # specifically set URI.
        #
        # Deliberately not just assigning this string to
        # CONF.connection_uri and checking against that later on. This
        # way we make sure the implementation doesn't fiddle around
        # with the CONF.
        testuri = 'something completely different'
        self.flags(connection_uri=testuri, group='libvirt')
        for virt_type in type_uri_map:
            self.flags(virt_type=virt_type, group='libvirt')
            drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
            self.assertEqual(drvr._uri(), testuri)

    def test_ensure_filtering_rules_for_instance_timeout(self):
        # ensure_filtering_rules_for_instance() finishes with a timeout.

        # Preparing mocks
        def fake_none(self, *args):
            return

        class FakeTime(object):
            def __init__(self):
                self.counter = 0

            def sleep(self, t):
                self.counter += t

        fake_timer = FakeTime()

        def fake_sleep(t):
            fake_timer.sleep(t)

        # _fake_network_info must be called before
        # create_fake_libvirt_mock(), as _fake_network_info calls
        # importutils.import_class() and create_fake_libvirt_mock()
        # mocks importutils.import_class().
        network_info = _fake_network_info(self, 1)
        self.create_fake_libvirt_mock()
        instance_ref = objects.Instance(**self.test_instance)

        # Start test
        self.mox.ReplayAll()
        try:
            drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
            self.stubs.Set(drvr.firewall_driver,
                           'setup_basic_filtering',
                           fake_none)
            self.stubs.Set(drvr.firewall_driver,
                           'prepare_instance_filter',
                           fake_none)
            self.stubs.Set(drvr.firewall_driver,
                           'instance_filter_exists',
                           fake_none)
            self.stubs.Set(greenthread,
                           'sleep',
                           fake_sleep)
            drvr.ensure_filtering_rules_for_instance(instance_ref,
                                                     network_info)
        except exception.NovaException as e:
            msg = ('The firewall filter for %s does not exist' %
                   instance_ref['name'])
            c1 = (0 <= six.text_type(e).find(msg))
        self.assertTrue(c1)

        self.assertEqual(29, fake_timer.counter, "Didn't wait the expected "
                                                 "amount of time")
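    # NOTE: illustrative sketch, not part of the original test module.
    # The expected disk_available_mb values asserted in the destination
    # checks below are plain GiB-to-MiB conversions: without over-commit
    # the free figure comes from disk_available_least, with over-commit
    # it comes from the total local_gb instead.
    #
    #     def disk_available_mb(disk_available_least_gb, local_gb,
    #                           disk_over_commit):
    #         gb = local_gb if disk_over_commit else disk_available_least_gb
    #         return gb * 1024
    #
    #     assert disk_available_mb(400, 100, False) == 409600
    #     assert disk_available_mb(-1000, 100, True) == 102400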
    @mock.patch.object(objects.Service, 'get_by_compute_host')
    @mock.patch.object(libvirt_driver.LibvirtDriver,
                       '_create_shared_storage_test_file')
    @mock.patch.object(fakelibvirt.Connection, 'compareCPU')
    def test_check_can_live_migrate_dest_all_pass_with_block_migration(
            self, mock_cpu, mock_test_file, mock_svc):
        instance_ref = objects.Instance(**self.test_instance)
        instance_ref.vcpu_model = test_vcpu_model.fake_vcpumodel
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        compute_info = {'disk_available_least': 400,
                        'cpu_info': 'asdf',
                        }
        filename = "file"

        # _check_cpu_match
        mock_cpu.return_value = 1
        # mounted_on_same_shared_storage
        mock_test_file.return_value = filename
        # No need for the src_compute_info
        return_value = drvr.check_can_live_migrate_destination(
            self.context, instance_ref, None, compute_info, True)
        return_value.is_volume_backed = False
        self.assertThat({"filename": "file",
                         'image_type': 'default',
                         'disk_available_mb': 409600,
                         "disk_over_commit": False,
                         "block_migration": True,
                         "is_volume_backed": False},
                        matchers.DictMatches(return_value.to_legacy_dict()))

    @mock.patch.object(objects.Service, 'get_by_compute_host')
    @mock.patch.object(libvirt_driver.LibvirtDriver,
                       '_create_shared_storage_test_file')
    @mock.patch.object(fakelibvirt.Connection, 'compareCPU')
    def test_check_can_live_migrate_dest_all_pass_with_over_commit(
            self, mock_cpu, mock_test_file, mock_svc):
        instance_ref = objects.Instance(**self.test_instance)
        instance_ref.vcpu_model = test_vcpu_model.fake_vcpumodel
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        compute_info = {'disk_available_least': -1000,
                        'local_gb': 100,
                        'cpu_info': 'asdf',
                        }
        filename = "file"

        # _check_cpu_match
        mock_cpu.return_value = 1
        # mounted_on_same_shared_storage
        mock_test_file.return_value = filename
        # No need for the src_compute_info
        return_value = drvr.check_can_live_migrate_destination(
            self.context, instance_ref, None, compute_info, True, True)
        return_value.is_volume_backed = False
        self.assertThat({"filename": "file",
                         'image_type': 'default',
                         'disk_available_mb': 102400,
                         "disk_over_commit": True,
                         "block_migration": True,
                         "is_volume_backed": False},
                        matchers.DictMatches(return_value.to_legacy_dict()))

    @mock.patch.object(objects.Service, 'get_by_compute_host')
    @mock.patch.object(libvirt_driver.LibvirtDriver,
                       '_create_shared_storage_test_file')
    @mock.patch.object(fakelibvirt.Connection, 'compareCPU')
    def test_check_can_live_migrate_dest_all_pass_no_block_migration(
            self, mock_cpu, mock_test_file, mock_svc):
        instance_ref = objects.Instance(**self.test_instance)
        instance_ref.vcpu_model = test_vcpu_model.fake_vcpumodel
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        compute_info = {'disk_available_least': 400,
                        'cpu_info': 'asdf',
                        }
        filename = "file"

        # _check_cpu_match
        mock_cpu.return_value = 1
        # mounted_on_same_shared_storage
        mock_test_file.return_value = filename
        # No need for the src_compute_info
        return_value = drvr.check_can_live_migrate_destination(
            self.context, instance_ref, None, compute_info, False)
        return_value.is_volume_backed = False
        self.assertThat({"filename": "file",
                         "image_type": 'default',
                         "block_migration": False,
                         "disk_over_commit": False,
                         "disk_available_mb": 409600,
                         "is_volume_backed": False},
                        matchers.DictMatches(return_value.to_legacy_dict()))

    @mock.patch.object(objects.Service, 'get_by_compute_host')
    @mock.patch.object(libvirt_driver.LibvirtDriver,
                       '_create_shared_storage_test_file',
                       return_value='fake')
    @mock.patch.object(fakelibvirt.Connection, 'compareCPU')
    def test_check_can_live_migrate_dest_fills_listen_addrs(
            self, mock_cpu, mock_test_file, mock_svc):
        # Tests that check_can_live_migrate_destination returns the listen
        # addresses required by check_can_live_migrate_source.
        self.flags(server_listen='192.0.2.12', group='vnc')
        self.flags(server_listen='198.51.100.34', group='spice')
        self.flags(proxyclient_address='203.0.113.56',
                   group='serial_console')
        self.flags(enabled=True, group='serial_console')
        mock_cpu.return_value = 1

        instance_ref = objects.Instance(**self.test_instance)
        instance_ref.vcpu_model = test_vcpu_model.fake_vcpumodel
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        compute_info = {'cpu_info': 'asdf', 'disk_available_least': 1}

        result = drvr.check_can_live_migrate_destination(
            self.context, instance_ref, compute_info, compute_info)
        self.assertEqual('192.0.2.12', str(result.graphics_listen_addr_vnc))
        self.assertEqual('198.51.100.34',
                         str(result.graphics_listen_addr_spice))
        self.assertEqual('203.0.113.56', str(result.serial_listen_addr))

    @mock.patch.object(objects.Service, 'get_by_compute_host')
    @mock.patch.object(libvirt_driver.LibvirtDriver,
                       '_create_shared_storage_test_file',
                       return_value='fake')
    @mock.patch.object(fakelibvirt.Connection, 'compareCPU', return_value=1)
    def test_check_can_live_migrate_dest_ensure_serial_adds_not_set(
            self, mock_cpu, mock_test_file, mock_svc):
        self.flags(proxyclient_address='127.0.0.1', group='serial_console')
        self.flags(enabled=False, group='serial_console')
        instance_ref = objects.Instance(**self.test_instance)
        instance_ref.vcpu_model = test_vcpu_model.fake_vcpumodel
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        compute_info = {'cpu_info': 'asdf', 'disk_available_least': 1}

        result = drvr.check_can_live_migrate_destination(
            self.context, instance_ref, compute_info, compute_info)
        self.assertIsNone(result.serial_listen_addr)
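    # NOTE: illustrative sketch, not part of the original test module.
    # The _compare_cpu tests further below encode libvirt's compareCPU
    # contract: any positive return value means the CPUs are compatible,
    # zero (VIR_CPU_COMPARE_INCOMPATIBLE) is reported as invalid CPU
    # info, and a libvirtError surfaces as a migration pre-check
    # failure. A rough decision table (helper name assumed):
    #
    #     def interpret_compare_cpu(result):
    #         if isinstance(result, Exception):
    #             return 'MigrationPreCheckError'
    #         if result > 0:
    #             return None          # compatible, nothing to report
    #         return 'InvalidCPUInfo'  # result == 0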
    @mock.patch.object(libvirt_driver.LibvirtDriver,
                       '_create_shared_storage_test_file',
                       return_value='fake')
    @mock.patch.object(libvirt_driver.LibvirtDriver, '_compare_cpu')
    def test_check_can_live_migrate_guest_cpu_none_model(
            self, mock_cpu, mock_test_file):
        # Tests that when instance.vcpu_model.model is None, the host cpu
        # model is used for live migration.
        instance_ref = objects.Instance(**self.test_instance)
        instance_ref.vcpu_model = test_vcpu_model.fake_vcpumodel
        instance_ref.vcpu_model.model = None
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        compute_info = {'cpu_info': 'asdf', 'disk_available_least': 1}
        result = drvr.check_can_live_migrate_destination(
            self.context, instance_ref, compute_info, compute_info)
        result.is_volume_backed = False
        mock_cpu.assert_called_once_with(None, 'asdf', instance_ref)
        expected_result = {"filename": 'fake',
                           "image_type": CONF.libvirt.images_type,
                           "block_migration": False,
                           "disk_over_commit": False,
                           "disk_available_mb": 1024,
                           "is_volume_backed": False}
        self.assertEqual(expected_result, result.to_legacy_dict())

    @mock.patch.object(objects.Service, 'get_by_compute_host')
    @mock.patch.object(libvirt_driver.LibvirtDriver,
                       '_create_shared_storage_test_file')
    @mock.patch.object(fakelibvirt.Connection, 'compareCPU')
    def test_check_can_live_migrate_dest_no_instance_cpu_info(
            self, mock_cpu, mock_test_file, mock_svc):
        instance_ref = objects.Instance(**self.test_instance)
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        compute_info = {'cpu_info': jsonutils.dumps({
            "vendor": "AMD",
            "arch": fields.Architecture.I686,
            "features": ["sse3"],
            "model": "Opteron_G3",
            "topology": {"cores": 2, "threads": 1, "sockets": 4}
        }), 'disk_available_least': 1}
        filename = "file"

        # _check_cpu_match
        mock_cpu.return_value = 1
        # mounted_on_same_shared_storage
        mock_test_file.return_value = filename

        return_value = drvr.check_can_live_migrate_destination(
            self.context, instance_ref, compute_info, compute_info, False)
        # NOTE(danms): Compute manager would have set this, so set it here
        return_value.is_volume_backed = False
        self.assertThat({"filename": "file",
                         "image_type": 'default',
                         "block_migration": False,
                         "disk_over_commit": False,
                         "disk_available_mb": 1024,
                         "is_volume_backed": False},
                        matchers.DictMatches(return_value.to_legacy_dict()))

    @mock.patch.object(objects.Service, 'get_by_compute_host')
    @mock.patch.object(fakelibvirt.Connection, 'compareCPU')
    def test_check_can_live_migrate_dest_incompatible_cpu_raises(
            self, mock_cpu, mock_svc):
        instance_ref = objects.Instance(**self.test_instance)
        instance_ref.vcpu_model = test_vcpu_model.fake_vcpumodel
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)

        compute_info = {'cpu_info': 'asdf', 'disk_available_least': 1}
        mock_cpu.side_effect = exception.InvalidCPUInfo(reason='foo')
        self.assertRaises(exception.InvalidCPUInfo,
                          drvr.check_can_live_migrate_destination,
                          self.context, instance_ref,
                          compute_info, compute_info, False)

    @mock.patch.object(host.Host, 'compare_cpu')
    @mock.patch.object(nova.virt.libvirt, 'config')
    def test_compare_cpu_compatible_host_cpu(self, mock_vconfig,
                                             mock_compare):
        instance = objects.Instance(**self.test_instance)
        mock_compare.return_value = 5
        conn = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        ret = conn._compare_cpu(None, jsonutils.dumps(_fake_cpu_info),
                                instance)
        self.assertIsNone(ret)

    @mock.patch.object(host.Host, 'compare_cpu')
    @mock.patch.object(nova.virt.libvirt, 'config')
    def test_compare_cpu_handles_not_supported_error_gracefully(
            self, mock_vconfig, mock_compare):
        instance = objects.Instance(**self.test_instance)
        not_supported_exc = fakelibvirt.make_libvirtError(
            fakelibvirt.libvirtError,
            'this function is not supported by the connection driver:'
            ' virCompareCPU',
            error_code=fakelibvirt.VIR_ERR_NO_SUPPORT)
        mock_compare.side_effect = not_supported_exc
        conn = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        ret = conn._compare_cpu(None, jsonutils.dumps(_fake_cpu_info),
                                instance)
        self.assertIsNone(ret)
conn._compare_cpu(None, jsonutils.dumps(_fake_cpu_info), instance) self.assertIsNone(ret) @mock.patch.object(host.Host, 'compare_cpu') @mock.patch.object(nova.virt.libvirt.LibvirtDriver, '_vcpu_model_to_cpu_config') def test_compare_cpu_compatible_guest_cpu(self, mock_vcpu_to_cpu, mock_compare): instance = objects.Instance(**self.test_instance) mock_compare.return_value = 6 conn = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) ret = conn._compare_cpu(jsonutils.dumps(_fake_cpu_info), None, instance) self.assertIsNone(ret) def test_compare_cpu_virt_type_xen(self): instance = objects.Instance(**self.test_instance) self.flags(virt_type='xen', group='libvirt') conn = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) ret = conn._compare_cpu(None, None, instance) self.assertIsNone(ret) def test_compare_cpu_virt_type_qemu(self): instance = objects.Instance(**self.test_instance) self.flags(virt_type='qemu', group='libvirt') conn = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) ret = conn._compare_cpu(None, None, instance) self.assertIsNone(ret) @mock.patch.object(host.Host, 'compare_cpu') @mock.patch.object(nova.virt.libvirt, 'config') def test_compare_cpu_invalid_cpuinfo_raises(self, mock_vconfig, mock_compare): instance = objects.Instance(**self.test_instance) mock_compare.return_value = 0 conn = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) self.assertRaises(exception.InvalidCPUInfo, conn._compare_cpu, None, jsonutils.dumps(_fake_cpu_info), instance) @mock.patch.object(host.Host, 'compare_cpu') @mock.patch.object(nova.virt.libvirt, 'config') def test_compare_cpu_incompatible_cpu_raises(self, mock_vconfig, mock_compare): instance = objects.Instance(**self.test_instance) mock_compare.side_effect = fakelibvirt.libvirtError('cpu') conn = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) self.assertRaises(exception.MigrationPreCheckError, conn._compare_cpu, None, jsonutils.dumps(_fake_cpu_info), instance) def test_check_can_live_migrate_dest_cleanup_works_correctly(self): objects.Instance(**self.test_instance) dest_check_data = objects.LibvirtLiveMigrateData( filename="file", block_migration=True, disk_over_commit=False, disk_available_mb=1024) drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) self.mox.StubOutWithMock(drvr, '_cleanup_shared_storage_test_file') drvr._cleanup_shared_storage_test_file("file") self.mox.ReplayAll() drvr.cleanup_live_migration_destination_check(self.context, dest_check_data) @mock.patch('os.path.exists', return_value=True) @mock.patch('os.utime') def test_check_shared_storage_test_file_exists(self, mock_utime, mock_path_exists): tmpfile_path = os.path.join(CONF.instances_path, 'tmp123') drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) self.assertTrue(drvr._check_shared_storage_test_file( 'tmp123', mock.sentinel.instance)) mock_utime.assert_called_once_with(CONF.instances_path, None) mock_path_exists.assert_called_once_with(tmpfile_path) @mock.patch('os.path.exists', return_value=False) @mock.patch('os.utime') def test_check_shared_storage_test_file_does_not_exist(self, mock_utime, mock_path_exists): tmpfile_path = os.path.join(CONF.instances_path, 'tmp123') drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) self.assertFalse(drvr._check_shared_storage_test_file( 'tmp123', mock.sentinel.instance)) mock_utime.assert_called_once_with(CONF.instances_path, None) mock_path_exists.assert_called_once_with(tmpfile_path) def _mock_can_live_migrate_source(self, block_migration=False, 
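                                      # The three flags below drive what the
                                      # stubbed helpers report back:
                                      # _is_shared_block_storage() returns
                                      # is_shared_block_storage and
                                      # _check_shared_storage_test_file()
                                      # returns is_shared_instance_path, so
                                      # each test can model a different
                                      # storage topology without real storage.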
is_shared_block_storage=False, is_shared_instance_path=False, disk_available_mb=1024): instance = objects.Instance(**self.test_instance) dest_check_data = objects.LibvirtLiveMigrateData( filename='file', image_type='default', block_migration=block_migration, disk_over_commit=False, disk_available_mb=disk_available_mb) drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) self.mox.StubOutWithMock(drvr, '_is_shared_block_storage') drvr._is_shared_block_storage(instance, dest_check_data, None).AndReturn(is_shared_block_storage) self.mox.StubOutWithMock(drvr, '_check_shared_storage_test_file') drvr._check_shared_storage_test_file('file', instance).AndReturn( is_shared_instance_path) return (instance, dest_check_data, drvr) def test_check_can_live_migrate_source_block_migration(self): instance, dest_check_data, drvr = self._mock_can_live_migrate_source( block_migration=True) self.mox.StubOutWithMock(drvr, "_assert_dest_node_has_enough_disk") drvr._assert_dest_node_has_enough_disk( self.context, instance, dest_check_data.disk_available_mb, False, None) self.mox.ReplayAll() ret = drvr.check_can_live_migrate_source(self.context, instance, dest_check_data) self.assertIsInstance(ret, objects.LibvirtLiveMigrateData) self.assertIn('is_shared_block_storage', ret) self.assertFalse(ret.is_shared_block_storage) self.assertIn('is_shared_instance_path', ret) self.assertFalse(ret.is_shared_instance_path) def test_check_can_live_migrate_source_shared_block_storage(self): instance, dest_check_data, drvr = self._mock_can_live_migrate_source( is_shared_block_storage=True) self.mox.ReplayAll() ret = drvr.check_can_live_migrate_source(self.context, instance, dest_check_data) self.assertTrue(ret.is_shared_block_storage) def test_check_can_live_migrate_source_shared_instance_path(self): instance, dest_check_data, drvr = self._mock_can_live_migrate_source( is_shared_instance_path=True) self.mox.ReplayAll() ret = drvr.check_can_live_migrate_source(self.context, instance, dest_check_data) self.assertTrue(ret.is_shared_instance_path) def test_check_can_live_migrate_source_non_shared_fails(self): instance, dest_check_data, drvr = self._mock_can_live_migrate_source() self.mox.ReplayAll() self.assertRaises(exception.InvalidSharedStorage, drvr.check_can_live_migrate_source, self.context, instance, dest_check_data) def test_check_can_live_migrate_source_shared_block_migration_fails(self): instance, dest_check_data, drvr = self._mock_can_live_migrate_source( block_migration=True, is_shared_block_storage=True) self.mox.ReplayAll() self.assertRaises(exception.InvalidLocalStorage, drvr.check_can_live_migrate_source, self.context, instance, dest_check_data) def test_check_can_live_migrate_shared_path_block_migration_fails(self): instance, dest_check_data, drvr = self._mock_can_live_migrate_source( block_migration=True, is_shared_instance_path=True) self.mox.ReplayAll() self.assertRaises(exception.InvalidLocalStorage, drvr.check_can_live_migrate_source, self.context, instance, dest_check_data, None) def test_check_can_live_migrate_non_shared_non_block_migration_fails(self): instance, dest_check_data, drvr = self._mock_can_live_migrate_source() self.mox.ReplayAll() self.assertRaises(exception.InvalidSharedStorage, drvr.check_can_live_migrate_source, self.context, instance, dest_check_data) @mock.patch('nova.virt.libvirt.driver.LibvirtDriver.' 
'_get_instance_disk_info') def test_check_can_live_migrate_source_with_dest_not_enough_disk( self, mock_get_bdi): mock_get_bdi.return_value = [{"virt_disk_size": 2}] instance, dest_check_data, drvr = self._mock_can_live_migrate_source( block_migration=True, disk_available_mb=0) self.mox.ReplayAll() self.assertRaises(exception.MigrationError, drvr.check_can_live_migrate_source, self.context, instance, dest_check_data) mock_get_bdi.assert_called_once_with(instance, None) @mock.patch.object(host.Host, 'has_min_version', return_value=False) @mock.patch('nova.virt.libvirt.driver.LibvirtDriver.' '_assert_dest_node_has_enough_disk') @mock.patch('nova.virt.libvirt.driver.LibvirtDriver.' '_is_shared_block_storage', return_value=False) @mock.patch('nova.virt.libvirt.driver.LibvirtDriver.' '_check_shared_storage_test_file', return_value=False) def test_check_can_live_migrate_source_block_migration_with_bdm_error( self, mock_check, mock_shared_block, mock_enough, mock_min_version): bdi = {'block_device_mapping': ['bdm']} instance = objects.Instance(**self.test_instance) drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) dest_check_data = objects.LibvirtLiveMigrateData( filename='file', image_type='default', block_migration=True, disk_over_commit=False, disk_available_mb=100) self.assertRaises(exception.MigrationPreCheckError, drvr.check_can_live_migrate_source, self.context, instance, dest_check_data, block_device_info=bdi) @mock.patch.object(host.Host, 'has_min_version', return_value=True) @mock.patch('nova.virt.libvirt.driver.LibvirtDriver.' '_assert_dest_node_has_enough_disk') @mock.patch('nova.virt.libvirt.driver.LibvirtDriver.' '_is_shared_block_storage', return_value=False) @mock.patch('nova.virt.libvirt.driver.LibvirtDriver.' '_check_shared_storage_test_file', return_value=False) def test_check_can_live_migrate_source_bm_with_bdm_tunnelled_error( self, mock_check, mock_shared_block, mock_enough, mock_min_version): self.flags(live_migration_tunnelled=True, group='libvirt') bdi = {'block_device_mapping': ['bdm']} instance = objects.Instance(**self.test_instance) drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) dest_check_data = objects.LibvirtLiveMigrateData( filename='file', image_type='default', block_migration=True, disk_over_commit=False, disk_available_mb=100) drvr._parse_migration_flags() self.assertRaises(exception.MigrationPreCheckError, drvr.check_can_live_migrate_source, self.context, instance, dest_check_data, block_device_info=bdi) @mock.patch.object(host.Host, 'has_min_version', return_value=True) @mock.patch('nova.virt.libvirt.driver.LibvirtDriver.' '_assert_dest_node_has_enough_disk') @mock.patch('nova.virt.libvirt.driver.LibvirtDriver.' '_is_shared_block_storage') @mock.patch('nova.virt.libvirt.driver.LibvirtDriver.' 
                '_check_shared_storage_test_file')
    def _test_check_can_live_migrate_source_block_migration_none(
            self, block_migrate, is_shared_instance_path, is_share_block,
            mock_check, mock_shared_block, mock_enough, mock_version):

        mock_check.return_value = is_shared_instance_path
        mock_shared_block.return_value = is_share_block
        instance = objects.Instance(**self.test_instance)
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        dest_check_data = objects.LibvirtLiveMigrateData(
            filename='file',
            image_type='default',
            disk_over_commit=False,
            disk_available_mb=100)
        dest_check_data_ret = drvr.check_can_live_migrate_source(
            self.context, instance, dest_check_data)
        self.assertEqual(block_migrate, dest_check_data_ret.block_migration)

    def test_check_can_live_migrate_source_block_migration_none_shared1(self):
        self._test_check_can_live_migrate_source_block_migration_none(
            False, True, False)

    def test_check_can_live_migrate_source_block_migration_none_shared2(self):
        self._test_check_can_live_migrate_source_block_migration_none(
            False, False, True)

    def test_check_can_live_migrate_source_block_migration_none_no_share(self):
        self._test_check_can_live_migrate_source_block_migration_none(
            True, False, False)

    @mock.patch('nova.virt.libvirt.driver.LibvirtDriver.'
                '_assert_dest_node_has_enough_disk')
    @mock.patch('nova.virt.libvirt.driver.LibvirtDriver.'
                '_assert_dest_node_has_enough_disk')
    @mock.patch('nova.virt.libvirt.driver.LibvirtDriver.'
                '_is_shared_block_storage')
    @mock.patch('nova.virt.libvirt.driver.LibvirtDriver.'
                '_check_shared_storage_test_file')
    def test_check_can_live_migration_source_disk_over_commit_none(self,
            mock_check, mock_shared_block, mock_enough, mock_disk_check):
        mock_check.return_value = False
        mock_shared_block.return_value = False
        instance = objects.Instance(**self.test_instance)
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        dest_check_data = objects.LibvirtLiveMigrateData(
            filename='file',
            image_type='default',
            disk_available_mb=100)
        drvr.check_can_live_migrate_source(
            self.context, instance, dest_check_data)
        self.assertFalse(mock_disk_check.called)

    def _is_shared_block_storage_test_create_mocks(self, disks):
        # Test data
        instance_xml = ("<domain><name>instance-0000000a</name>"
                        "<devices>{}</devices></domain>")
        disks_xml = ''
        for dsk in disks:
            if dsk['type'] != 'network':
                disks_xml = ''.join([disks_xml,
                                "<disk type='{type}'>"
                                "<driver name='qemu' type='{driver}'/>"
                                "<source {source}='{source_path}'/>"
                                "<target dev='{target_dev}' bus='virtio'/>"
                                "</disk>".format(**dsk)])
            else:
                disks_xml = ''.join([disks_xml,
                                "<disk type='{type}'>"
                                "<driver name='qemu' type='{driver}'/>"
                                "<source protocol='{source_prot}'"
                                " name='{source_name}'/>"
                                "<host name='hostname' port='7000'/>"
                                "<config file='/path/to/file'/>"
                                "<target dev='{target_dev}' bus='virtio'/>"
                                "</disk>".format(**dsk)])

        # Preparing mocks
        mock_virDomain = mock.Mock(fakelibvirt.virDomain)
        mock_virDomain.XMLDesc = mock.Mock()
        mock_virDomain.XMLDesc.return_value = (instance_xml.format(disks_xml))

        mock_lookup = mock.Mock()

        def mock_lookup_side_effect(name):
            return mock_virDomain
        mock_lookup.side_effect = mock_lookup_side_effect

        mock_getsize = mock.Mock()
        mock_getsize.return_value = "10737418240"

        return (mock_getsize, mock_lookup)

    def test_is_shared_block_storage_rbd(self):
        self.flags(images_type='rbd', group='libvirt')
        bdi = {'block_device_mapping': []}
        instance = objects.Instance(**self.test_instance)
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        mock_get_instance_disk_info = mock.Mock()
        data = objects.LibvirtLiveMigrateData(image_type='rbd')
        with mock.patch.object(drvr, '_get_instance_disk_info',
                               mock_get_instance_disk_info):
            drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
            self.assertTrue(drvr._is_shared_block_storage(instance, data,
                                block_device_info=bdi))
        self.assertEqual(0, mock_get_instance_disk_info.call_count)
        self.assertTrue(drvr._is_storage_shared_with('foo', 'bar'))

    def
test_is_shared_block_storage_lvm(self): self.flags(images_type='lvm', group='libvirt') bdi = {'block_device_mapping': []} instance = objects.Instance(**self.test_instance) mock_get_instance_disk_info = mock.Mock() drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) data = objects.LibvirtLiveMigrateData(image_type='lvm', is_volume_backed=False, is_shared_instance_path=False) with mock.patch.object(drvr, '_get_instance_disk_info', mock_get_instance_disk_info): drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) self.assertFalse(drvr._is_shared_block_storage( instance, data, block_device_info=bdi)) self.assertEqual(0, mock_get_instance_disk_info.call_count) def test_is_shared_block_storage_qcow2(self): self.flags(images_type='qcow2', group='libvirt') bdi = {'block_device_mapping': []} instance = objects.Instance(**self.test_instance) mock_get_instance_disk_info = mock.Mock() drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) data = objects.LibvirtLiveMigrateData(image_type='qcow2', is_volume_backed=False, is_shared_instance_path=False) with mock.patch.object(drvr, '_get_instance_disk_info', mock_get_instance_disk_info): drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) self.assertFalse(drvr._is_shared_block_storage( instance, data, block_device_info=bdi)) self.assertEqual(0, mock_get_instance_disk_info.call_count) def test_is_shared_block_storage_rbd_only_source(self): self.flags(images_type='rbd', group='libvirt') bdi = {'block_device_mapping': []} instance = objects.Instance(**self.test_instance) mock_get_instance_disk_info = mock.Mock() drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) data = objects.LibvirtLiveMigrateData(is_shared_instance_path=False, is_volume_backed=False) with mock.patch.object(drvr, '_get_instance_disk_info', mock_get_instance_disk_info): drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) self.assertFalse(drvr._is_shared_block_storage( instance, data, block_device_info=bdi)) self.assertEqual(0, mock_get_instance_disk_info.call_count) def test_is_shared_block_storage_rbd_only_dest(self): bdi = {'block_device_mapping': []} instance = objects.Instance(**self.test_instance) mock_get_instance_disk_info = mock.Mock() drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) data = objects.LibvirtLiveMigrateData(image_type='rbd', is_volume_backed=False, is_shared_instance_path=False) with mock.patch.object(drvr, '_get_instance_disk_info', mock_get_instance_disk_info): drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) self.assertFalse(drvr._is_shared_block_storage( instance, data, block_device_info=bdi)) self.assertEqual(0, mock_get_instance_disk_info.call_count) def test_is_shared_block_storage_volume_backed(self): disks = [{'type': 'block', 'driver': 'raw', 'source': 'dev', 'source_path': '/dev/disk', 'target_dev': 'vda'}] bdi = {'block_device_mapping': [ {'connection_info': 'info', 'mount_device': '/dev/vda'}]} instance = objects.Instance(**self.test_instance) drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) (mock_getsize, mock_lookup) =\ self._is_shared_block_storage_test_create_mocks(disks) data = objects.LibvirtLiveMigrateData(is_volume_backed=True, is_shared_instance_path=False) with mock.patch.object(host.Host, '_get_domain', mock_lookup): self.assertTrue(drvr._is_shared_block_storage(instance, data, block_device_info = bdi)) mock_lookup.assert_called_once_with(instance) def test_is_shared_block_storage_volume_backed_with_disk(self): disks = [{'type': 'block', 
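                  # Each dict below is expanded into guest disk XML by
                  # _is_shared_block_storage_test_create_mocks via
                  # .format(**dsk); 'source' names the source attribute
                  # ('dev' or 'file') and 'source_path' supplies its value.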
                  'driver': 'raw',
                  'source': 'dev',
                  'source_path': '/dev/disk',
                  'target_dev': 'vda'},
                 {'type': 'file',
                  'driver': 'raw',
                  'source': 'file',
                  'source_path': '/instance/disk.local',
                  'target_dev': 'vdb'}]
        bdi = {'block_device_mapping': [
                  {'connection_info': 'info', 'mount_device': '/dev/vda'}]}
        instance = objects.Instance(**self.test_instance)
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        (mock_getsize, mock_lookup) = \
            self._is_shared_block_storage_test_create_mocks(disks)
        data = objects.LibvirtLiveMigrateData(is_volume_backed=True,
                                              is_shared_instance_path=False)
        with test.nested(
                mock.patch.object(os.path, 'getsize', mock_getsize),
                mock.patch.object(host.Host, '_get_domain', mock_lookup)):
            self.assertFalse(drvr._is_shared_block_storage(
                instance, data, block_device_info=bdi))
        mock_getsize.assert_called_once_with('/instance/disk.local')
        mock_lookup.assert_called_once_with(instance)

    def test_is_shared_block_storage_nfs(self):
        bdi = {'block_device_mapping': []}
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        mock_image_backend = mock.MagicMock()
        drvr.image_backend = mock_image_backend
        mock_backend = mock.MagicMock()
        mock_image_backend.backend.return_value = mock_backend
        mock_backend.is_file_in_instance_path.return_value = True
        mock_get_instance_disk_info = mock.Mock()
        data = objects.LibvirtLiveMigrateData(
            is_shared_instance_path=True,
            image_type='foo')
        with mock.patch.object(drvr, '_get_instance_disk_info',
                               mock_get_instance_disk_info):
            self.assertTrue(drvr._is_shared_block_storage(
                'instance', data, block_device_info=bdi))
        self.assertEqual(0, mock_get_instance_disk_info.call_count)

    def test_live_migration_update_graphics_xml(self):
        self.compute = manager.ComputeManager()
        instance_dict = dict(self.test_instance)
        instance_dict.update({'host': 'fake',
                              'power_state': power_state.RUNNING,
                              'vm_state': vm_states.ACTIVE})
        instance_ref = objects.Instance(**instance_dict)

        xml_tmpl = ("<domain type='kvm'>"
                    "<devices>"
                    "<graphics type='vnc' listen='{vnc}'>"
                    "<listen address='{vnc}'/>"
                    "</graphics>"
                    "<graphics type='spice' listen='{spice}'>"
                    "<listen address='{spice}'/>"
                    "</graphics>"
                    "</devices>"
                    "</domain>")
        initial_xml = xml_tmpl.format(vnc='1.2.3.4', spice='5.6.7.8')
        target_xml = xml_tmpl.format(vnc='10.0.0.1', spice='10.0.0.2')
        target_xml = etree.tostring(etree.fromstring(target_xml),
                                    encoding='unicode')

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)

        # Preparing mocks
        vdmock = self.mox.CreateMock(fakelibvirt.virDomain)
        guest = libvirt_guest.Guest(vdmock)
        self.mox.StubOutWithMock(vdmock, "migrateToURI2")
        _bandwidth = CONF.libvirt.live_migration_bandwidth
        vdmock.XMLDesc(flags=fakelibvirt.VIR_DOMAIN_XML_MIGRATABLE).AndReturn(
            initial_xml)
        vdmock.migrateToURI2(drvr._live_migration_uri('dest'),
                             miguri=None,
                             dxml=target_xml,
                             flags=mox.IgnoreArg(),
                             bandwidth=_bandwidth).AndRaise(
            fakelibvirt.libvirtError("ERR"))

        # start test
        migrate_data = objects.LibvirtLiveMigrateData(
            graphics_listen_addr_vnc='10.0.0.1',
            graphics_listen_addr_spice='10.0.0.2',
            serial_listen_addr='127.0.0.1',
            target_connect_addr=None,
            bdms=[],
            block_migration=False)
        self.mox.ReplayAll()
        self.assertRaises(fakelibvirt.libvirtError,
                          drvr._live_migration_operation,
                          self.context, instance_ref, 'dest',
                          False, migrate_data, guest, [])

    def test_live_migration_parallels_no_new_xml(self):
        self.flags(virt_type='parallels', group='libvirt')
        self.flags(enabled=False, group='vnc')
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI())
        instance_dict = dict(self.test_instance)
        instance_dict.update({'host': 'fake',
                              'power_state': power_state.RUNNING,
                              'vm_state': vm_states.ACTIVE})
        instance = objects.Instance(**instance_dict)
        migrate_data = objects.LibvirtLiveMigrateData(
            block_migration=False)
        dom_mock =
mock.MagicMock() guest = libvirt_guest.Guest(dom_mock) drvr._live_migration_operation(self.context, instance, 'dest', False, migrate_data, guest, []) # when new xml is not passed we fall back to migrateToURI dom_mock.migrateToURI.assert_called_once_with( drvr._live_migration_uri('dest'), flags=0, bandwidth=0) @mock.patch.object(utils, 'spawn') @mock.patch.object(host.Host, 'get_guest') @mock.patch.object(fakelibvirt.Connection, '_mark_running') @mock.patch.object(libvirt_driver.LibvirtDriver, '_live_migration_monitor') @mock.patch.object(libvirt_driver.LibvirtDriver, '_live_migration_copy_disk_paths') def test_live_migration_parallels_no_migrate_disks(self, mock_copy_disk_paths, mock_monitor, mock_running, mock_guest, mock_thread): self.flags(virt_type='parallels', group='libvirt') drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI()) instance_dict = dict(self.test_instance) instance_dict.update({'host': 'fake', 'power_state': power_state.RUNNING, 'vm_state': vm_states.ACTIVE}) instance = objects.Instance(**instance_dict) migrate_data = objects.LibvirtLiveMigrateData( block_migration=True) dom = fakelibvirt.Domain(drvr._get_connection(), '', True) guest = libvirt_guest.Guest(dom) mock_guest.return_value = guest drvr._live_migration(self.context, instance, 'dest', lambda: None, lambda: None, True, migrate_data) self.assertFalse(mock_copy_disk_paths.called) mock_thread.assert_called_once_with( drvr._live_migration_operation, self.context, instance, 'dest', True, migrate_data, guest, []) def test_live_migration_update_volume_xml(self): self.compute = manager.ComputeManager() instance_dict = dict(self.test_instance) instance_dict.update({'host': 'fake', 'power_state': power_state.RUNNING, 'vm_state': vm_states.ACTIVE}) instance_ref = objects.Instance(**instance_dict) target_xml = self.device_xml_tmpl.format( device_path='/dev/disk/by-path/' 'ip-1.2.3.4:3260-iqn.' 
'cde.67890.opst-lun-Z') # start test connection_info = { u'driver_volume_type': u'iscsi', u'serial': u'58a84f6d-3f0c-4e19-a0af-eb657b790657', u'data': { u'access_mode': u'rw', u'target_discovered': False, u'target_iqn': u'ip-1.2.3.4:3260-iqn.cde.67890.opst-lun-Z', u'volume_id': u'58a84f6d-3f0c-4e19-a0af-eb657b790657', 'device_path': u'/dev/disk/by-path/ip-1.2.3.4:3260-iqn.cde.67890.opst-lun-Z', }, } bdm = objects.LibvirtLiveMigrateBDMInfo( serial='58a84f6d-3f0c-4e19-a0af-eb657b790657', bus='virtio', type='disk', dev='vdb', connection_info=connection_info) migrate_data = objects.LibvirtLiveMigrateData( serial_listen_addr='', target_connect_addr=None, bdms=[bdm], block_migration=False) drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) test_mock = mock.MagicMock() guest = libvirt_guest.Guest(test_mock) with mock.patch.object(libvirt_driver.LibvirtDriver, 'get_info') as \ mget_info,\ mock.patch.object(drvr._host, '_get_domain') as mget_domain,\ mock.patch.object(fakelibvirt.virDomain, 'migrateToURI2'),\ mock.patch.object( libvirt_migrate, 'get_updated_guest_xml') as mupdate: mget_info.side_effect = exception.InstanceNotFound( instance_id='foo') mget_domain.return_value = test_mock test_mock.XMLDesc.return_value = target_xml self.assertFalse(drvr._live_migration_operation( self.context, instance_ref, 'dest', False, migrate_data, guest, [])) mupdate.assert_called_once_with( guest, migrate_data, mock.ANY) def test_live_migration_with_valid_target_connect_addr(self): self.compute = manager.ComputeManager() instance_dict = dict(self.test_instance) instance_dict.update({'host': 'fake', 'power_state': power_state.RUNNING, 'vm_state': vm_states.ACTIVE}) instance_ref = objects.Instance(**instance_dict) target_xml = self.device_xml_tmpl.format( device_path='/dev/disk/by-path/' 'ip-1.2.3.4:3260-iqn.' 'cde.67890.opst-lun-Z') # start test connection_info = { u'driver_volume_type': u'iscsi', u'serial': u'58a84f6d-3f0c-4e19-a0af-eb657b790657', u'data': { u'access_mode': u'rw', u'target_discovered': False, u'target_iqn': u'ip-1.2.3.4:3260-iqn.cde.67890.opst-lun-Z', u'volume_id': u'58a84f6d-3f0c-4e19-a0af-eb657b790657', 'device_path': u'/dev/disk/by-path/ip-1.2.3.4:3260-iqn.cde.67890.opst-lun-Z', }, } bdm = objects.LibvirtLiveMigrateBDMInfo( serial='58a84f6d-3f0c-4e19-a0af-eb657b790657', bus='virtio', type='disk', dev='vdb', connection_info=connection_info) migrate_data = objects.LibvirtLiveMigrateData( serial_listen_addr='', target_connect_addr='127.0.0.2', bdms=[bdm], block_migration=False) drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) test_mock = mock.MagicMock() guest = libvirt_guest.Guest(test_mock) with mock.patch.object(libvirt_migrate, 'get_updated_guest_xml') as mupdate: test_mock.XMLDesc.return_value = target_xml drvr._live_migration_operation(self.context, instance_ref, 'dest', False, migrate_data, guest, []) test_mock.migrateToURI2.assert_called_once_with( 'qemu+tcp://127.0.0.2/system', miguri='tcp://127.0.0.2', dxml=mupdate(), flags=0, bandwidth=0) def test_update_volume_xml(self): drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) initial_xml = self.device_xml_tmpl.format( device_path='/dev/disk/by-path/' 'ip-1.2.3.4:3260-iqn.' 'abc.12345.opst-lun-X') target_xml = self.device_xml_tmpl.format( device_path='/dev/disk/by-path/' 'ip-1.2.3.4:3260-iqn.' 
'cde.67890.opst-lun-Z') target_xml = etree.tostring(etree.fromstring(target_xml), encoding='unicode') serial = "58a84f6d-3f0c-4e19-a0af-eb657b790657" bdmi = objects.LibvirtLiveMigrateBDMInfo(serial=serial, bus='virtio', type='disk', dev='vdb') bdmi.connection_info = {u'driver_volume_type': u'iscsi', 'serial': u'58a84f6d-3f0c-4e19-a0af-eb657b790657', u'data': {u'access_mode': u'rw', u'target_discovered': False, u'target_iqn': u'ip-1.2.3.4:3260-iqn.cde.67890.opst-lun-Z', u'volume_id': u'58a84f6d-3f0c-4e19-a0af-eb657b790657', 'device_path': u'/dev/disk/by-path/ip-1.2.3.4:3260-iqn.cde.67890.opst-lun-Z'}} conf = vconfig.LibvirtConfigGuestDisk() conf.source_device = bdmi.type conf.driver_name = "qemu" conf.driver_format = "raw" conf.driver_cache = "none" conf.target_dev = bdmi.dev conf.target_bus = bdmi.bus conf.serial = bdmi.connection_info.get('serial') conf.source_type = "block" conf.source_path = bdmi.connection_info['data'].get('device_path') guest = libvirt_guest.Guest(mock.MagicMock()) with test.nested( mock.patch.object(drvr, '_get_volume_config', return_value=conf), mock.patch.object(guest, 'get_xml_desc', return_value=initial_xml)): config = libvirt_migrate.get_updated_guest_xml(guest, objects.LibvirtLiveMigrateData(bdms=[bdmi]), drvr._get_volume_config) parser = etree.XMLParser(remove_blank_text=True) config = etree.fromstring(config, parser) target_xml = etree.fromstring(target_xml, parser) self.assertEqual(etree.tostring(target_xml, encoding='unicode'), etree.tostring(config, encoding='unicode')) def test_live_migration_uri(self): hypervisor_uri_map = ( ('xen', 'xenmigr://%s/system'), ('kvm', 'qemu+tcp://%s/system'), ('qemu', 'qemu+tcp://%s/system'), ('parallels', 'parallels+tcp://%s/system'), # anything else will return None ('lxc', None), ) dest = 'destination' for hyperv, uri in hypervisor_uri_map: self.flags(virt_type=hyperv, group='libvirt') drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) if uri is not None: uri = uri % dest self.assertEqual(uri, drvr._live_migration_uri(dest)) else: self.assertRaises(exception.LiveMigrationURINotAvailable, drvr._live_migration_uri, dest) def test_live_migration_uri_forced(self): dest = 'destination' for hyperv in ('kvm', 'xen'): self.flags(virt_type=hyperv, group='libvirt') forced_uri = 'foo://%s/bar' self.flags(live_migration_uri=forced_uri, group='libvirt') drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) self.assertEqual(forced_uri % dest, drvr._live_migration_uri(dest)) def test_live_migration_scheme(self): self.flags(live_migration_scheme='ssh', group='libvirt') dest = 'destination' uri = 'qemu+ssh://%s/system' drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) self.assertEqual(uri % dest, drvr._live_migration_uri(dest)) def test_live_migration_scheme_does_not_override_uri(self): forced_uri = 'qemu+ssh://%s/system' self.flags(live_migration_uri=forced_uri, group='libvirt') self.flags(live_migration_scheme='tcp', group='libvirt') dest = 'destination' drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) self.assertEqual(forced_uri % dest, drvr._live_migration_uri(dest)) def test_migrate_uri(self): hypervisor_uri_map = ( ('xen', None), ('kvm', 'tcp://%s'), ('qemu', 'tcp://%s'), ) dest = 'destination' for hyperv, uri in hypervisor_uri_map: self.flags(virt_type=hyperv, group='libvirt') drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) if uri is not None: uri = uri % dest self.assertEqual(uri, drvr._migrate_uri(dest)) def test_migrate_uri_forced_live_migration_uri(self): dest = 
        'destination'
        self.flags(virt_type='kvm', group='libvirt')
        forced_uri = 'qemu+tcp://user:pass@%s/system'
        self.flags(live_migration_uri=forced_uri, group='libvirt')
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        self.assertEqual('tcp://%s' % dest, drvr._migrate_uri(dest))

    def test_migrate_uri_forced_live_migration_inbound_addr(self):
        self.flags(virt_type='kvm', group='libvirt')
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        addresses = ('127.0.0.1', '127.0.0.1:4444', '0:0:0:0:0:0:0:1',
                     '[0:0:0:0:0:0:0:1]:4444', u'127.0.0.1', u'destination')
        for dest in addresses:
            result = drvr._migrate_uri(dest)
            self.assertEqual('tcp://%s' % dest, result)
            self.assertIsInstance(result, str)

    def test_update_volume_xml_no_serial(self):
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        xml_tmpl = """
        <domain type="qemu">
            <devices>
                <disk type="block" device="disk">
                    <driver name="qemu" type="raw" cache="none"/>
                    <source dev="{device_path}"/>
                    <target bus="virtio" dev="vdb"/>
                </disk>
            </devices>
        </domain>
""" initial_xml = xml_tmpl.format(device_path='/dev/disk/by-path/' 'ip-1.2.3.4:3260-iqn.' 'abc.12345.opst-lun-X') target_xml = xml_tmpl.format(device_path='/dev/disk/by-path/' 'ip-1.2.3.4:3260-iqn.' 'abc.12345.opst-lun-X') target_xml = etree.tostring(etree.fromstring(target_xml), encoding='unicode') serial = "58a84f6d-3f0c-4e19-a0af-eb657b790657" connection_info = { u'driver_volume_type': u'iscsi', 'serial': u'58a84f6d-3f0c-4e19-a0af-eb657b790657', u'data': { u'access_mode': u'rw', u'target_discovered': False, u'target_iqn': u'ip-1.2.3.4:3260-iqn.cde.67890.opst-lun-Z', u'volume_id': u'58a84f6d-3f0c-4e19-a0af-eb657b790657', u'device_path': u'/dev/disk/by-path/ip-1.2.3.4:3260-iqn.cde.67890.opst-lun-Z', }, } bdmi = objects.LibvirtLiveMigrateBDMInfo(serial=serial, bus='virtio', dev='vdb', type='disk') bdmi.connection_info = connection_info conf = vconfig.LibvirtConfigGuestDisk() conf.source_device = bdmi.type conf.driver_name = "qemu" conf.driver_format = "raw" conf.driver_cache = "none" conf.target_dev = bdmi.dev conf.target_bus = bdmi.bus conf.serial = bdmi.connection_info.get('serial') conf.source_type = "block" conf.source_path = bdmi.connection_info['data'].get('device_path') guest = libvirt_guest.Guest(mock.MagicMock()) with test.nested( mock.patch.object(drvr, '_get_volume_config', return_value=conf), mock.patch.object(guest, 'get_xml_desc', return_value=initial_xml)): config = libvirt_migrate.get_updated_guest_xml(guest, objects.LibvirtLiveMigrateData(bdms=[bdmi]), drvr._get_volume_config) self.assertEqual(target_xml, config) def test_update_volume_xml_no_connection_info(self): drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) initial_xml = self.device_xml_tmpl.format( device_path='/dev/disk/by-path/' 'ip-1.2.3.4:3260-iqn.' 'abc.12345.opst-lun-X') target_xml = self.device_xml_tmpl.format( device_path='/dev/disk/by-path/' 'ip-1.2.3.4:3260-iqn.' 
            'abc.12345.opst-lun-X')
        target_xml = etree.tostring(etree.fromstring(target_xml),
                                    encoding='unicode')
        serial = "58a84f6d-3f0c-4e19-a0af-eb657b790657"
        bdmi = objects.LibvirtLiveMigrateBDMInfo(serial=serial,
                                                 dev='vdb',
                                                 type='disk',
                                                 bus='scsi',
                                                 format='qcow')
        bdmi.connection_info = {}
        conf = vconfig.LibvirtConfigGuestDisk()
        guest = libvirt_guest.Guest(mock.MagicMock())
        with test.nested(
                mock.patch.object(drvr, '_get_volume_config',
                                  return_value=conf),
                mock.patch.object(guest, 'get_xml_desc',
                                  return_value=initial_xml)):
            config = libvirt_migrate.get_updated_guest_xml(
                guest, objects.LibvirtLiveMigrateData(bdms=[bdmi]),
                drvr._get_volume_config)
            self.assertEqual(target_xml, config)

    @mock.patch.object(libvirt_driver.LibvirtDriver,
                       '_get_serial_ports_from_guest')
    @mock.patch.object(fakelibvirt.virDomain, "migrateToURI2")
    @mock.patch.object(fakelibvirt.virDomain, "XMLDesc")
    def test_live_migration_update_serial_console_xml(self, mock_xml,
                                                      mock_migrate, mock_get):
        self.compute = manager.ComputeManager()
        instance_ref = self.test_instance

        xml_tmpl = ("<domain>"
                    "<devices>"
                    "<console type='tcp'>"
                    "<source mode='bind' host='{addr}' service='{port}'/>"
                    "<target type='serial' port='0'/>"
                    "</console>"
                    "</devices>"
                    "</domain>")
        initial_xml = xml_tmpl.format(addr='9.0.0.1', port='10100')

        target_xml = xml_tmpl.format(addr='9.0.0.12', port='10200')
        target_xml = etree.tostring(etree.fromstring(target_xml),
                                    encoding='unicode')

        # Preparing mocks
        mock_xml.return_value = initial_xml
        mock_migrate.side_effect = fakelibvirt.libvirtError("ERR")

        # start test
        bandwidth = CONF.libvirt.live_migration_bandwidth
        migrate_data = objects.LibvirtLiveMigrateData(
            graphics_listen_addr_vnc='10.0.0.1',
            graphics_listen_addr_spice='10.0.0.2',
            serial_listen_addr='9.0.0.12',
            target_connect_addr=None,
            bdms=[],
            block_migration=False,
            serial_listen_ports=[10200])
        dom = fakelibvirt.virDomain
        guest = libvirt_guest.Guest(dom)
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        self.assertRaises(fakelibvirt.libvirtError,
                          drvr._live_migration_operation,
                          self.context, instance_ref, 'dest',
                          False, migrate_data, guest, [])
        mock_xml.assert_called_once_with(
            flags=fakelibvirt.VIR_DOMAIN_XML_MIGRATABLE)
        mock_migrate.assert_called_once_with(
            drvr._live_migration_uri('dest'),
            miguri=None, dxml=target_xml,
            flags=mock.ANY, bandwidth=bandwidth)

    def test_live_migration_fails_without_serial_console_address(self):
        self.compute = manager.ComputeManager()
        self.flags(enabled=True, group='serial_console')
        self.flags(proxyclient_address='', group='serial_console')
        instance_dict = dict(self.test_instance)
        instance_dict.update({'host': 'fake',
                              'power_state': power_state.RUNNING,
                              'vm_state': vm_states.ACTIVE})
        instance_ref = objects.Instance(**instance_dict)
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)

        # Preparing mocks
        dom = fakelibvirt.virDomain
        guest = libvirt_guest.Guest(dom)

        # start test
        migrate_data = objects.LibvirtLiveMigrateData(
            serial_listen_addr='',
            target_connect_addr=None,
            bdms=[],
            block_migration=False)
        self.assertRaises(exception.MigrationError,
                          drvr._live_migration_operation,
                          self.context, instance_ref, 'dest',
                          False, migrate_data, guest, [])

    @mock.patch.object(host.Host, 'has_min_version', return_value=True)
    @mock.patch.object(fakelibvirt.virDomain, "migrateToURI3")
    @mock.patch('nova.virt.libvirt.migration.get_updated_guest_xml',
                return_value='')
    @mock.patch('nova.virt.libvirt.guest.Guest.get_xml_desc',
                return_value='')
    def test_live_migration_uses_migrateToURI3(
            self, mock_old_xml, mock_new_xml, mock_migrateToURI3,
            mock_min_version):
        # Preparing mocks
        disk_paths = ['vda', 'vdb']
        params = {
            'migrate_disks': ['vda', 'vdb'],
            'bandwidth':
CONF.libvirt.live_migration_bandwidth, 'destination_xml': '', } mock_migrateToURI3.side_effect = fakelibvirt.libvirtError("ERR") # Start test migrate_data = objects.LibvirtLiveMigrateData( graphics_listen_addr_vnc='0.0.0.0', graphics_listen_addr_spice='0.0.0.0', serial_listen_addr='127.0.0.1', target_connect_addr=None, bdms=[], block_migration=False) dom = fakelibvirt.virDomain guest = libvirt_guest.Guest(dom) drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) instance = objects.Instance(**self.test_instance) self.assertRaises(fakelibvirt.libvirtError, drvr._live_migration_operation, self.context, instance, 'dest', False, migrate_data, guest, disk_paths) mock_migrateToURI3.assert_called_once_with( drvr._live_migration_uri('dest'), params=params, flags=0) @mock.patch.object(fakelibvirt.virDomain, "migrateToURI3") @mock.patch.object(host.Host, 'has_min_version', return_value=True) @mock.patch('nova.virt.libvirt.guest.Guest.get_xml_desc', return_value='') def _test_live_migration_block_migration_flags(self, device_names, expected_flags, mock_old_xml, mock_min_version, mock_migrateToURI3): migrate_data = objects.LibvirtLiveMigrateData( graphics_listen_addr_vnc='0.0.0.0', graphics_listen_addr_spice='0.0.0.0', serial_listen_addr='127.0.0.1', target_connect_addr=None, bdms=[], block_migration=True) dom = fakelibvirt.virDomain guest = libvirt_guest.Guest(dom) drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) drvr._parse_migration_flags() instance = objects.Instance(**self.test_instance) drvr._live_migration_operation(self.context, instance, 'dest', True, migrate_data, guest, device_names) params = { 'migrate_disks': device_names, 'bandwidth': CONF.libvirt.live_migration_bandwidth, 'destination_xml': '', } mock_migrateToURI3.assert_called_once_with( drvr._live_migration_uri('dest'), params=params, flags=expected_flags) def test_live_migration_block_migration_with_devices(self): device_names = ['vda'] expected_flags = (fakelibvirt.VIR_MIGRATE_NON_SHARED_INC | fakelibvirt.VIR_MIGRATE_UNDEFINE_SOURCE | fakelibvirt.VIR_MIGRATE_PERSIST_DEST | fakelibvirt.VIR_MIGRATE_PEER2PEER | fakelibvirt.VIR_MIGRATE_LIVE) self._test_live_migration_block_migration_flags(device_names, expected_flags) def test_live_migration_block_migration_all_filtered(self): device_names = [] expected_flags = (fakelibvirt.VIR_MIGRATE_UNDEFINE_SOURCE | fakelibvirt.VIR_MIGRATE_PERSIST_DEST | fakelibvirt.VIR_MIGRATE_PEER2PEER | fakelibvirt.VIR_MIGRATE_LIVE) self._test_live_migration_block_migration_flags(device_names, expected_flags) @mock.patch.object(host.Host, 'has_min_version', return_value=True) @mock.patch.object(fakelibvirt.virDomain, "migrateToURI3") @mock.patch('nova.virt.libvirt.migration.get_updated_guest_xml', return_value='') @mock.patch('nova.virt.libvirt.guest.Guest.get_xml_desc', return_value='') def test_block_live_migration_tunnelled_migrateToURI3( self, mock_old_xml, mock_new_xml, mock_migrateToURI3, mock_min_version): self.flags(live_migration_tunnelled=True, group='libvirt') # Preparing mocks disk_paths = [] params = { 'bandwidth': CONF.libvirt.live_migration_bandwidth, 'destination_xml': '', } # Start test migrate_data = objects.LibvirtLiveMigrateData( graphics_listen_addr_vnc='0.0.0.0', graphics_listen_addr_spice='0.0.0.0', serial_listen_addr='127.0.0.1', target_connect_addr=None, bdms=[], block_migration=True) dom = fakelibvirt.virDomain guest = libvirt_guest.Guest(dom) drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) drvr._parse_migration_flags() instance = 
objects.Instance(**self.test_instance) drvr._live_migration_operation(self.context, instance, 'dest', True, migrate_data, guest, disk_paths) expected_flags = (fakelibvirt.VIR_MIGRATE_UNDEFINE_SOURCE | fakelibvirt.VIR_MIGRATE_PERSIST_DEST | fakelibvirt.VIR_MIGRATE_TUNNELLED | fakelibvirt.VIR_MIGRATE_PEER2PEER | fakelibvirt.VIR_MIGRATE_LIVE) mock_migrateToURI3.assert_called_once_with( drvr._live_migration_uri('dest'), params=params, flags=expected_flags) def test_live_migration_raises_exception(self): # Confirms recover method is called when exceptions are raised. # Preparing data self.compute = manager.ComputeManager() instance_dict = dict(self.test_instance) instance_dict.update({'host': 'fake', 'power_state': power_state.RUNNING, 'vm_state': vm_states.ACTIVE}) instance_ref = objects.Instance(**instance_dict) drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) # Preparing mocks vdmock = self.mox.CreateMock(fakelibvirt.virDomain) guest = libvirt_guest.Guest(vdmock) self.mox.StubOutWithMock(vdmock, "migrateToURI2") _bandwidth = CONF.libvirt.live_migration_bandwidth vdmock.XMLDesc(flags=fakelibvirt.VIR_DOMAIN_XML_MIGRATABLE ).AndReturn(FakeVirtDomain().XMLDesc(flags=0)) vdmock.migrateToURI2(drvr._live_migration_uri('dest'), miguri=None, dxml=mox.IgnoreArg(), flags=mox.IgnoreArg(), bandwidth=_bandwidth).AndRaise( fakelibvirt.libvirtError('ERR')) # start test migrate_data = objects.LibvirtLiveMigrateData( graphics_listen_addr_vnc='127.0.0.1', graphics_listen_addr_spice='127.0.0.1', serial_listen_addr='127.0.0.1', target_connect_addr=None, bdms=[], block_migration=False) self.mox.ReplayAll() self.assertRaises(fakelibvirt.libvirtError, drvr._live_migration_operation, self.context, instance_ref, 'dest', False, migrate_data, guest, []) self.assertEqual(vm_states.ACTIVE, instance_ref.vm_state) self.assertEqual(power_state.RUNNING, instance_ref.power_state) @mock.patch('shutil.rmtree') @mock.patch('os.path.exists', return_value=True) @mock.patch('nova.virt.libvirt.utils.get_instance_path_at_destination') @mock.patch('nova.virt.libvirt.driver.LibvirtDriver.destroy') def test_rollback_live_migration_at_dest_not_shared(self, mock_destroy, mock_get_instance_path, mock_exist, mock_shutil ): # destroy method may raise InstanceTerminationFailure or # InstancePowerOffFailure, here use their base class Invalid. mock_destroy.side_effect = exception.Invalid(reason='just test') fake_instance_path = os.path.join(cfg.CONF.instances_path, '/fake_instance_uuid') mock_get_instance_path.return_value = fake_instance_path drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) migrate_data = objects.LibvirtLiveMigrateData( is_shared_instance_path=False, instance_relative_path=False) self.assertRaises(exception.Invalid, drvr.rollback_live_migration_at_destination, "context", "instance", [], None, True, migrate_data) mock_exist.assert_called_once_with(fake_instance_path) mock_shutil.assert_called_once_with(fake_instance_path) @mock.patch('shutil.rmtree') @mock.patch('os.path.exists') @mock.patch('nova.virt.libvirt.utils.get_instance_path_at_destination') @mock.patch('nova.virt.libvirt.driver.LibvirtDriver.destroy') def test_rollback_live_migration_at_dest_shared(self, mock_destroy, mock_get_instance_path, mock_exist, mock_shutil ): def fake_destroy(ctxt, instance, network_info, block_device_info=None, destroy_disks=True): # This is just here to test the signature. Seems there should # be a better way to do this with mock and autospec. 
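            # (A hedged aside, not from the original: passing
            # autospec=True to mock.patch would enforce the real
            # destroy() signature automatically; the hand-written fake
            # is kept here to make the expected parameters explicit.)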
            pass
        mock_destroy.side_effect = fake_destroy

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        migrate_data = objects.LibvirtLiveMigrateData(
            is_shared_instance_path=True,
            instance_relative_path=False)
        drvr.rollback_live_migration_at_destination("context", "instance", [],
                                                    None, True, migrate_data)
        mock_destroy.assert_called_once_with("context", "instance", [],
                                             None, True)
        self.assertFalse(mock_get_instance_path.called)
        self.assertFalse(mock_exist.called)
        self.assertFalse(mock_shutil.called)

    @mock.patch.object(host.Host, "get_connection")
    @mock.patch.object(host.Host, "has_min_version", return_value=False)
    @mock.patch.object(fakelibvirt.Domain, "XMLDesc")
    def test_live_migration_copy_disk_paths(self, mock_xml, mock_version,
                                            mock_conn):
        xml = """
        <domain>
          <name>dummy</name>
          <uuid>d4e13113-918e-42fe-9fc9-861693ffd432</uuid>
          <devices>
            <disk type="file">
              <source file="/var/lib/nova/instance/123/disk.root"/>
              <target dev="vda"/>
            </disk>
            <disk type="block">
              <source dev="/dev/mapper/somevol"/>
              <target dev="vdd"/>
            </disk>
          </devices>
        </domain>"""
        mock_xml.return_value = xml

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        dom = fakelibvirt.Domain(drvr._get_connection(), xml, False)
        guest = libvirt_guest.Guest(dom)

        paths = drvr._live_migration_copy_disk_paths(None, None, guest)
        self.assertEqual((["/var/lib/nova/instance/123/disk.root",
                           "/dev/mapper/somevol"], ['vda', 'vdd']),
                         paths)

    @mock.patch.object(fakelibvirt.Domain, "XMLDesc")
    def test_live_migration_copy_disk_paths_tunnelled(self, mock_xml):
        self.flags(live_migration_tunnelled=True, group='libvirt')
        xml = """
        <domain>
          <name>dummy</name>
          <uuid>d4e13113-918e-42fe-9fc9-861693ffd432</uuid>
          <devices>
            <disk type="file">
              <source file="/var/lib/nova/instance/123/disk.root"/>
              <target dev="vda"/>
            </disk>
            <disk type="block">
              <source dev="/dev/mapper/somevol"/>
              <target dev="vdd"/>
            </disk>
          </devices>
        </domain>"""
        mock_xml.return_value = xml

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        drvr._parse_migration_flags()
        dom = fakelibvirt.Domain(drvr._get_connection(), xml, False)
        guest = libvirt_guest.Guest(dom)

        paths = drvr._live_migration_copy_disk_paths(None, None, guest)
        self.assertEqual((["/var/lib/nova/instance/123/disk.root",
                           "/dev/mapper/somevol"], ['vda', 'vdd']),
                         paths)

    @mock.patch.object(host.Host, "get_connection")
    @mock.patch.object(host.Host, "has_min_version", return_value=True)
    @mock.patch('nova.virt.driver.get_block_device_info')
    @mock.patch('nova.objects.BlockDeviceMappingList.get_by_instance_uuid')
    @mock.patch.object(fakelibvirt.Domain, "XMLDesc")
    def test_live_migration_copy_disk_paths_selective_block_migration(
            self, mock_xml, mock_get_instance, mock_block_device_info,
            mock_version, mock_conn):
        xml = """
        <domain>
          <name>dummy</name>
          <uuid>d4e13113-918e-42fe-9fc9-861693ffd432</uuid>
          <devices>
            <disk type="file">
              <source file="/var/lib/nova/instance/123/disk.root"/>
              <target dev="vda"/>
            </disk>
            <disk type="file">
              <source file="/var/lib/nova/instance/123/disk.shared"/>
              <target dev="vdb"/>
            </disk>
            <disk type="file">
              <source file="/var/lib/nova/instance/123/disk.config"/>
              <target dev="vdc"/>
            </disk>
            <disk type="block">
              <source dev="/dev/disk/by-path/ip-10.102.44.141:3260-iscsi-iqn.2010-10.org.openstack:volume-147df29f-aec2-4851-b3fe-f68dad151834-lun-1"/>
              <target dev="vdd"/>
            </disk>
          </devices>
        </domain>"""
        mock_xml.return_value = xml
        instance = objects.Instance(**self.test_instance)
        instance.root_device_name = '/dev/vda'
        block_device_info = {
            'swap': {
                'disk_bus': u'virtio',
                'swap_size': 10,
                'device_name': u'/dev/vdc'
            },
            'root_device_name': u'/dev/vda',
            'ephemerals': [{
                'guest_format': u'ext3',
                'device_name': u'/dev/vdb',
                'disk_bus': u'virtio',
                'device_type': u'disk',
                'size': 1
            }],
            'block_device_mapping': [{
                'guest_format': None,
                'boot_index': None,
                'mount_device': u'/dev/vdd',
                'connection_info': {
                    u'driver_volume_type': u'iscsi',
                    'serial': u'147df29f-aec2-4851-b3fe-f68dad151834',
                    u'data': {
                        u'access_mode': u'rw',
                        u'target_discovered': False,
                        u'encrypted': False,
                        u'qos_specs': None,
                        u'target_iqn': u'iqn.2010-10.org.openstack:'
                                       u'volume-147df29f-aec2-4851-b3fe-'
                                       u'f68dad151834',
                        u'target_portal': u'10.102.44.141:3260',
                        u'volume_id':
                            u'147df29f-aec2-4851-b3fe-f68dad151834',
                        u'target_lun': 1,
                        u'auth_password': u'cXELT66FngwzTwpf',
                        u'auth_username': u'QbQQjj445uWgeQkFKcVw',
                        u'auth_method': u'CHAP'
                    }
                },
                'disk_bus': None,
                'device_type': None,
                'delete_on_termination': False
            }]
        }
        mock_block_device_info.return_value = block_device_info
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        dom = fakelibvirt.Domain(drvr._get_connection(), xml,
False) guest = libvirt_guest.Guest(dom) return_value = drvr._live_migration_copy_disk_paths(self.context, instance, guest) expected = (['/var/lib/nova/instance/123/disk.root', '/var/lib/nova/instance/123/disk.shared', '/var/lib/nova/instance/123/disk.config'], ['vda', 'vdb', 'vdc']) self.assertEqual(expected, return_value) @mock.patch.object(libvirt_driver.LibvirtDriver, "_live_migration_copy_disk_paths") def test_live_migration_data_gb_plain(self, mock_paths): drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) instance = objects.Instance(**self.test_instance) data_gb = drvr._live_migration_data_gb(instance, []) self.assertEqual(2, data_gb) self.assertEqual(0, mock_paths.call_count) def test_live_migration_data_gb_block(self): drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) instance = objects.Instance(**self.test_instance) def fake_stat(path): class StatResult(object): def __init__(self, size): self._size = size @property def st_size(self): return self._size if path == "/var/lib/nova/instance/123/disk.root": return StatResult(10 * units.Gi) elif path == "/dev/mapper/somevol": return StatResult(1.5 * units.Gi) else: raise Exception("Should not be reached") disk_paths = ["/var/lib/nova/instance/123/disk.root", "/dev/mapper/somevol"] with mock.patch.object(os, "stat") as mock_stat: mock_stat.side_effect = fake_stat data_gb = drvr._live_migration_data_gb(instance, disk_paths) # Expecting 2 GB for RAM, plus 10 GB for disk.root # and 1.5 GB rounded to 2 GB for somevol, so 14 GB self.assertEqual(14, data_gb) EXPECT_SUCCESS = 1 EXPECT_FAILURE = 2 EXPECT_ABORT = 3 @mock.patch.object(libvirt_guest.Guest, "migrate_start_postcopy") @mock.patch.object(time, "time") @mock.patch.object(time, "sleep", side_effect=lambda x: eventlet.sleep(0)) @mock.patch.object(host.Host, "get_connection") @mock.patch.object(libvirt_guest.Guest, "get_job_info") @mock.patch.object(objects.Instance, "save") @mock.patch.object(objects.Migration, "save") @mock.patch.object(fakelibvirt.Connection, "_mark_running") @mock.patch.object(fakelibvirt.virDomain, "abortJob") @mock.patch.object(libvirt_guest.Guest, "pause") def _test_live_migration_monitoring(self, job_info_records, time_records, expect_result, mock_pause, mock_abort, mock_running, mock_save, mock_mig_save, mock_job_info, mock_conn, mock_sleep, mock_time, mock_postcopy_switch, current_mig_status=None, expected_mig_status=None, scheduled_action=None, scheduled_action_executed=False, block_migration=False, expected_switch=False): drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) instance = objects.Instance(**self.test_instance) drvr.active_migrations[instance.uuid] = deque() dom = fakelibvirt.Domain(drvr._get_connection(), "", True) guest = libvirt_guest.Guest(dom) finish_event = eventlet.event.Event() def fake_job_info(): while True: self.assertGreater(len(job_info_records), 0) rec = job_info_records.pop(0) if type(rec) == str: if rec == "thread-finish": finish_event.send() elif rec == "domain-stop": dom.destroy() elif rec == "force_complete": drvr.active_migrations[instance.uuid].append( "force-complete") else: if len(time_records) > 0: time_records.pop(0) return rec return rec def fake_time(): if len(time_records) > 0: return time_records[0] else: return int( datetime.datetime(2001, 1, 20, 20, 1, 0) .strftime('%s')) mock_job_info.side_effect = fake_job_info mock_time.side_effect = fake_time dest = mock.sentinel.migrate_dest migration = objects.Migration(context=self.context, id=1) migrate_data = objects.LibvirtLiveMigrateData( 
migration=migration, block_migration=block_migration) if current_mig_status: migrate_data.migration.status = current_mig_status else: migrate_data.migration.status = "unset" migrate_data.migration.save() fake_post_method = mock.MagicMock() fake_recover_method = mock.MagicMock() drvr._live_migration_monitor(self.context, instance, guest, dest, fake_post_method, fake_recover_method, False, migrate_data, finish_event, []) if scheduled_action_executed: if scheduled_action == 'pause': self.assertTrue(mock_pause.called) if scheduled_action == 'postcopy_switch': self.assertTrue(mock_postcopy_switch.called) else: if scheduled_action == 'pause': self.assertFalse(mock_pause.called) if scheduled_action == 'postcopy_switch': self.assertFalse(mock_postcopy_switch.called) mock_mig_save.assert_called_with() if expect_result == self.EXPECT_SUCCESS: self.assertFalse(fake_recover_method.called, 'Recover method called when success expected') self.assertFalse(mock_abort.called, 'abortJob not called when success expected') if expected_switch: self.assertTrue(mock_postcopy_switch.called) fake_post_method.assert_called_once_with( self.context, instance, dest, False, migrate_data) else: if expect_result == self.EXPECT_ABORT: self.assertTrue(mock_abort.called, 'abortJob called when abort expected') else: self.assertFalse(mock_abort.called, 'abortJob not called when failure expected') self.assertFalse(fake_post_method.called, 'Post method called when success not expected') if expected_mig_status: fake_recover_method.assert_called_once_with( self.context, instance, dest, migrate_data, migration_status=expected_mig_status) else: fake_recover_method.assert_called_once_with( self.context, instance, dest, migrate_data) self.assertNotIn(instance.uuid, drvr.active_migrations) def test_live_migration_monitor_success(self): # A normal sequence where see all the normal job states domain_info_records = [ libvirt_guest.JobInfo( type=fakelibvirt.VIR_DOMAIN_JOB_NONE), libvirt_guest.JobInfo( type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED), libvirt_guest.JobInfo( type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED), libvirt_guest.JobInfo( type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED), "thread-finish", "domain-stop", libvirt_guest.JobInfo( type=fakelibvirt.VIR_DOMAIN_JOB_COMPLETED), ] self._test_live_migration_monitoring(domain_info_records, [], self.EXPECT_SUCCESS) def test_live_migration_handle_pause_normal(self): # A normal sequence where see all the normal job states, and pause # scheduled in between VIR_DOMAIN_JOB_UNBOUNDED domain_info_records = [ libvirt_guest.JobInfo( type=fakelibvirt.VIR_DOMAIN_JOB_NONE), libvirt_guest.JobInfo( type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED), libvirt_guest.JobInfo( type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED), "force_complete", libvirt_guest.JobInfo( type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED), "thread-finish", "domain-stop", libvirt_guest.JobInfo( type=fakelibvirt.VIR_DOMAIN_JOB_COMPLETED), ] self._test_live_migration_monitoring(domain_info_records, [], self.EXPECT_SUCCESS, current_mig_status="running", scheduled_action="pause", scheduled_action_executed=True) def test_live_migration_handle_pause_on_start(self): # A normal sequence where see all the normal job states, and pause # scheduled in case of job type VIR_DOMAIN_JOB_NONE and finish_event is # not ready yet domain_info_records = [ "force_complete", libvirt_guest.JobInfo( type=fakelibvirt.VIR_DOMAIN_JOB_NONE), libvirt_guest.JobInfo( type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED), libvirt_guest.JobInfo( type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED), 
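            # The string entries in these record lists are sentinels consumed
            # by fake_job_info() above: "thread-finish" fires finish_event,
            # "domain-stop" destroys the fake domain, and "force_complete"
            # queues a force-complete request on drvr.active_migrations.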
libvirt_guest.JobInfo( type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED), "thread-finish", "domain-stop", libvirt_guest.JobInfo( type=fakelibvirt.VIR_DOMAIN_JOB_COMPLETED), ] self._test_live_migration_monitoring(domain_info_records, [], self.EXPECT_SUCCESS, current_mig_status="preparing", scheduled_action="pause", scheduled_action_executed=True) def test_live_migration_handle_pause_on_finish(self): # A normal sequence where see all the normal job states, and pause # scheduled in case of job type VIR_DOMAIN_JOB_NONE and finish_event is # ready domain_info_records = [ libvirt_guest.JobInfo( type=fakelibvirt.VIR_DOMAIN_JOB_NONE), libvirt_guest.JobInfo( type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED), libvirt_guest.JobInfo( type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED), libvirt_guest.JobInfo( type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED), "thread-finish", "domain-stop", "force_complete", libvirt_guest.JobInfo( type=fakelibvirt.VIR_DOMAIN_JOB_COMPLETED), ] self._test_live_migration_monitoring(domain_info_records, [], self.EXPECT_SUCCESS, current_mig_status="completed", scheduled_action="pause", scheduled_action_executed=False) def test_live_migration_handle_pause_on_cancel(self): # A normal sequence where see all the normal job states, and pause # scheduled in case of job type VIR_DOMAIN_JOB_CANCELLED domain_info_records = [ libvirt_guest.JobInfo( type=fakelibvirt.VIR_DOMAIN_JOB_NONE), libvirt_guest.JobInfo( type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED), libvirt_guest.JobInfo( type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED), libvirt_guest.JobInfo( type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED), "thread-finish", "domain-stop", "force_complete", libvirt_guest.JobInfo( type=fakelibvirt.VIR_DOMAIN_JOB_CANCELLED), ] self._test_live_migration_monitoring(domain_info_records, [], self.EXPECT_FAILURE, current_mig_status="cancelled", expected_mig_status='cancelled', scheduled_action="pause", scheduled_action_executed=False) def test_live_migration_handle_pause_on_failure(self): # A normal sequence where see all the normal job states, and pause # scheduled in case of job type VIR_DOMAIN_JOB_FAILED domain_info_records = [ libvirt_guest.JobInfo( type=fakelibvirt.VIR_DOMAIN_JOB_NONE), libvirt_guest.JobInfo( type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED), libvirt_guest.JobInfo( type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED), libvirt_guest.JobInfo( type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED), "thread-finish", "domain-stop", "force_complete", libvirt_guest.JobInfo( type=fakelibvirt.VIR_DOMAIN_JOB_FAILED), ] self._test_live_migration_monitoring(domain_info_records, [], self.EXPECT_FAILURE, scheduled_action="pause", scheduled_action_executed=False) @mock.patch.object(libvirt_driver.LibvirtDriver, "_is_post_copy_enabled") def test_live_migration_handle_postcopy_normal(self, mock_postcopy_enabled): # A normal sequence where see all the normal job states, and postcopy # switch scheduled in between VIR_DOMAIN_JOB_UNBOUNDED mock_postcopy_enabled.return_value = True domain_info_records = [ libvirt_guest.JobInfo( type=fakelibvirt.VIR_DOMAIN_JOB_NONE), libvirt_guest.JobInfo( type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED), libvirt_guest.JobInfo( type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED), "force_complete", libvirt_guest.JobInfo( type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED), "thread-finish", "domain-stop", libvirt_guest.JobInfo( type=fakelibvirt.VIR_DOMAIN_JOB_COMPLETED), ] self._test_live_migration_monitoring(domain_info_records, [], self.EXPECT_SUCCESS, current_mig_status="running", scheduled_action="postcopy_switch", scheduled_action_executed=True) 
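    # The force-complete handling exercised by these tests can be summarised
    # as a sketch (not the actual driver code; the names follow the mocks
    # used here):
    #
    #   while drvr.active_migrations[instance.uuid]:
    #       action = drvr.active_migrations[instance.uuid].popleft()
    #       if action == "force-complete":
    #           if drvr._is_post_copy_enabled(...):
    #               guest.migrate_start_postcopy()
    #           else:
    #               guest.pause()
    #
    # which is why each test asserts on mock_postcopy_switch versus
    # mock_pause depending on what _is_post_copy_enabled reports.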
@mock.patch.object(libvirt_driver.LibvirtDriver, "_is_post_copy_enabled") def test_live_migration_handle_postcopy_on_start(self, mock_postcopy_enabled): # A normal sequence where see all the normal job states, and postcopy # switch scheduled in case of job type VIR_DOMAIN_JOB_NONE and # finish_event is not ready yet mock_postcopy_enabled.return_value = True domain_info_records = [ "force_complete", libvirt_guest.JobInfo( type=fakelibvirt.VIR_DOMAIN_JOB_NONE), libvirt_guest.JobInfo( type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED), libvirt_guest.JobInfo( type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED), libvirt_guest.JobInfo( type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED), "thread-finish", "domain-stop", libvirt_guest.JobInfo( type=fakelibvirt.VIR_DOMAIN_JOB_COMPLETED), ] self._test_live_migration_monitoring(domain_info_records, [], self.EXPECT_SUCCESS, current_mig_status="preparing", scheduled_action="postcopy_switch", scheduled_action_executed=True) @mock.patch.object(libvirt_driver.LibvirtDriver, "_is_post_copy_enabled") def test_live_migration_handle_postcopy_on_finish(self, mock_postcopy_enabled): # A normal sequence where see all the normal job states, and postcopy # switch scheduled in case of job type VIR_DOMAIN_JOB_NONE and # finish_event is ready mock_postcopy_enabled.return_value = True domain_info_records = [ libvirt_guest.JobInfo( type=fakelibvirt.VIR_DOMAIN_JOB_NONE), libvirt_guest.JobInfo( type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED), libvirt_guest.JobInfo( type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED), libvirt_guest.JobInfo( type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED), "thread-finish", "domain-stop", "force_complete", libvirt_guest.JobInfo( type=fakelibvirt.VIR_DOMAIN_JOB_COMPLETED), ] self._test_live_migration_monitoring(domain_info_records, [], self.EXPECT_SUCCESS, current_mig_status="completed", scheduled_action="postcopy_switch", scheduled_action_executed=False) @mock.patch.object(libvirt_driver.LibvirtDriver, "_is_post_copy_enabled") def test_live_migration_handle_postcopy_on_cancel(self, mock_postcopy_enabled): # A normal sequence where see all the normal job states, and postcopy # scheduled in case of job type VIR_DOMAIN_JOB_CANCELLED mock_postcopy_enabled.return_value = True domain_info_records = [ libvirt_guest.JobInfo( type=fakelibvirt.VIR_DOMAIN_JOB_NONE), libvirt_guest.JobInfo( type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED), libvirt_guest.JobInfo( type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED), libvirt_guest.JobInfo( type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED), "thread-finish", "domain-stop", "force_complete", libvirt_guest.JobInfo( type=fakelibvirt.VIR_DOMAIN_JOB_CANCELLED), ] self._test_live_migration_monitoring(domain_info_records, [], self.EXPECT_FAILURE, current_mig_status="cancelled", expected_mig_status='cancelled', scheduled_action="postcopy_switch", scheduled_action_executed=False) @mock.patch.object(libvirt_driver.LibvirtDriver, "_is_post_copy_enabled") def test_live_migration_handle_pause_on_postcopy(self, mock_postcopy_enabled): # A normal sequence where see all the normal job states, and pause # scheduled after migration switched to postcopy mock_postcopy_enabled.return_value = True domain_info_records = [ libvirt_guest.JobInfo( type=fakelibvirt.VIR_DOMAIN_JOB_NONE), libvirt_guest.JobInfo( type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED), libvirt_guest.JobInfo( type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED), "force_complete", libvirt_guest.JobInfo( type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED), "thread-finish", "domain-stop", libvirt_guest.JobInfo( type=fakelibvirt.VIR_DOMAIN_JOB_COMPLETED), ] 
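        # Once the migration reports "running (post-copy)" the scheduled
        # pause must not be executed: the guest is already executing on the
        # destination, so pausing the source would not help completion.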
        self._test_live_migration_monitoring(
            domain_info_records, [], self.EXPECT_SUCCESS,
            current_mig_status="running (post-copy)",
            scheduled_action="pause",
            scheduled_action_executed=False)

    @mock.patch.object(libvirt_driver.LibvirtDriver,
                       "_is_post_copy_enabled")
    def test_live_migration_handle_postcopy_on_postcopy(self,
            mock_postcopy_enabled):
        # A normal sequence where we see all the normal job states, and a
        # second postcopy switch scheduled after the migration has already
        # switched to post-copy; the switch must not be executed again
        mock_postcopy_enabled.return_value = True
        domain_info_records = [
            libvirt_guest.JobInfo(
                type=fakelibvirt.VIR_DOMAIN_JOB_NONE),
            libvirt_guest.JobInfo(
                type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED),
            libvirt_guest.JobInfo(
                type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED),
            "force_complete",
            libvirt_guest.JobInfo(
                type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED),
            "thread-finish",
            "domain-stop",
            libvirt_guest.JobInfo(
                type=fakelibvirt.VIR_DOMAIN_JOB_COMPLETED),
        ]

        self._test_live_migration_monitoring(
            domain_info_records, [], self.EXPECT_SUCCESS,
            current_mig_status="running (post-copy)",
            scheduled_action="postcopy_switch",
            scheduled_action_executed=False)

    @mock.patch.object(libvirt_driver.LibvirtDriver,
                       "_is_post_copy_enabled")
    def test_live_migration_handle_postcopy_on_failure(self,
            mock_postcopy_enabled):
        # A normal sequence where we see all the normal job states, and a
        # postcopy switch scheduled in the VIR_DOMAIN_JOB_FAILED case
        mock_postcopy_enabled.return_value = True
        domain_info_records = [
            libvirt_guest.JobInfo(
                type=fakelibvirt.VIR_DOMAIN_JOB_NONE),
            libvirt_guest.JobInfo(
                type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED),
            libvirt_guest.JobInfo(
                type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED),
            libvirt_guest.JobInfo(
                type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED),
            "thread-finish",
            "domain-stop",
            "force_complete",
            libvirt_guest.JobInfo(
                type=fakelibvirt.VIR_DOMAIN_JOB_FAILED),
        ]

        self._test_live_migration_monitoring(
            domain_info_records, [], self.EXPECT_FAILURE,
            scheduled_action="postcopy_switch",
            scheduled_action_executed=False)

    def test_live_migration_monitor_success_race(self):
        # A normalish sequence but we're too slow to see the
        # completed job state
        domain_info_records = [
            libvirt_guest.JobInfo(
                type=fakelibvirt.VIR_DOMAIN_JOB_NONE),
            libvirt_guest.JobInfo(
                type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED),
            libvirt_guest.JobInfo(
                type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED),
            libvirt_guest.JobInfo(
                type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED),
            "thread-finish",
            "domain-stop",
            libvirt_guest.JobInfo(
                type=fakelibvirt.VIR_DOMAIN_JOB_NONE),
        ]

        self._test_live_migration_monitoring(domain_info_records, [],
                                             self.EXPECT_SUCCESS)

    def test_live_migration_monitor_failed(self):
        # A failed sequence where we see all the expected events
        domain_info_records = [
            libvirt_guest.JobInfo(
                type=fakelibvirt.VIR_DOMAIN_JOB_NONE),
            libvirt_guest.JobInfo(
                type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED),
            libvirt_guest.JobInfo(
                type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED),
            libvirt_guest.JobInfo(
                type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED),
            "thread-finish",
            libvirt_guest.JobInfo(
                type=fakelibvirt.VIR_DOMAIN_JOB_FAILED),
        ]

        self._test_live_migration_monitoring(domain_info_records, [],
                                             self.EXPECT_FAILURE)

    def test_live_migration_monitor_failed_race(self):
        # A failed sequence where we are too slow to see the
        # failed event
        domain_info_records = [
            libvirt_guest.JobInfo(
                type=fakelibvirt.VIR_DOMAIN_JOB_NONE),
            libvirt_guest.JobInfo(
                type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED),
            libvirt_guest.JobInfo(
                type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED),
            libvirt_guest.JobInfo(
                type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED),
            "thread-finish",
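            # In this race the monitor polls too late to observe the
            # transient VIR_DOMAIN_JOB_FAILED state: libvirt already
            # reports JOB_NONE below. Unlike the success race above there
            # is no "domain-stop" event, so the guest never stopped on the
            # source and the monitor is expected to infer failure, not
            # success, from the final JOB_NONE.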
            libvirt_guest.JobInfo(
                type=fakelibvirt.VIR_DOMAIN_JOB_NONE),
        ]

        self._test_live_migration_monitoring(domain_info_records, [],
                                             self.EXPECT_FAILURE)

    def test_live_migration_monitor_cancelled(self):
        # A cancelled sequence where we see all the events
        domain_info_records = [
            libvirt_guest.JobInfo(
                type=fakelibvirt.VIR_DOMAIN_JOB_NONE),
            libvirt_guest.JobInfo(
                type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED),
            libvirt_guest.JobInfo(
                type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED),
            libvirt_guest.JobInfo(
                type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED),
            "thread-finish",
            "domain-stop",
            libvirt_guest.JobInfo(
                type=fakelibvirt.VIR_DOMAIN_JOB_CANCELLED),
        ]

        self._test_live_migration_monitoring(domain_info_records, [],
                                             self.EXPECT_FAILURE,
                                             expected_mig_status='cancelled')

    @mock.patch.object(fakelibvirt.virDomain, "migrateSetMaxDowntime")
    @mock.patch("nova.virt.libvirt.migration.downtime_steps")
    def test_live_migration_monitor_downtime(self, mock_downtime_steps,
                                             mock_set_downtime):
        self.flags(live_migration_completion_timeout=1000000,
                   live_migration_progress_timeout=1000000,
                   group='libvirt')
        # We've set up four fake downtime steps - the first value is the
        # time delay, the second is the downtime value
        downtime_steps = [
            (90, 10),
            (180, 50),
            (270, 200),
            (500, 300),
        ]
        mock_downtime_steps.return_value = downtime_steps

        # Each one of these fake times is used for time.time()
        # when a new domain_info_records entry is consumed.
        # Times are chosen so that only the first 3 downtime
        # steps are needed: 95 >= 90 applies downtime 10,
        # 200 >= 180 applies 50 and 300 >= 270 applies 200, while
        # the last step's delay of 500 is never reached.
        fake_times = [0, 1, 30, 95, 150, 200, 300]

        # A normal sequence where we see all the normal job states
        domain_info_records = [
            libvirt_guest.JobInfo(
                type=fakelibvirt.VIR_DOMAIN_JOB_NONE),
            libvirt_guest.JobInfo(
                type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED),
            libvirt_guest.JobInfo(
                type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED),
            libvirt_guest.JobInfo(
                type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED),
            libvirt_guest.JobInfo(
                type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED),
            libvirt_guest.JobInfo(
                type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED),
            "thread-finish",
            "domain-stop",
            libvirt_guest.JobInfo(
                type=fakelibvirt.VIR_DOMAIN_JOB_COMPLETED),
        ]

        self._test_live_migration_monitoring(domain_info_records,
                                             fake_times,
                                             self.EXPECT_SUCCESS)

        mock_set_downtime.assert_has_calls([mock.call(10),
                                            mock.call(50),
                                            mock.call(200)])

    def test_live_migration_monitor_completion(self):
        self.flags(live_migration_completion_timeout=100,
                   live_migration_progress_timeout=1000000,
                   group='libvirt')
        # Each one of these fake times is used for time.time()
        # when a new domain_info_records entry is consumed.
        fake_times = [0, 40, 80, 120, 160, 200, 240, 280, 320]

        # A normal sequence where we see all the normal job states
        domain_info_records = [
            libvirt_guest.JobInfo(
                type=fakelibvirt.VIR_DOMAIN_JOB_NONE),
            libvirt_guest.JobInfo(
                type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED),
            libvirt_guest.JobInfo(
                type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED),
            libvirt_guest.JobInfo(
                type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED),
            libvirt_guest.JobInfo(
                type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED),
            libvirt_guest.JobInfo(
                type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED),
            "thread-finish",
            "domain-stop",
            libvirt_guest.JobInfo(
                type=fakelibvirt.VIR_DOMAIN_JOB_CANCELLED),
        ]

        self._test_live_migration_monitoring(domain_info_records,
                                             fake_times, self.EXPECT_ABORT,
                                             expected_mig_status='cancelled')

    def test_live_migration_monitor_progress(self):
        self.flags(live_migration_completion_timeout=1000000,
                   live_migration_progress_timeout=150,
                   group='libvirt')
        # Each one of these fake times is used for time.time()
        # when a new domain_info_records entry is consumed.
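        # With live_migration_progress_timeout=150 and data_remaining stuck
        # at 90 for every sample below, the monitor sees no forward
        # progress for more than 150 seconds of fake time and aborts the
        # job, hence EXPECT_ABORT with a 'cancelled' migration status.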
        fake_times = [0, 40, 80, 120, 160, 200, 240, 280, 320]

        # A normal sequence where we see all the normal job states
        domain_info_records = [
            libvirt_guest.JobInfo(
                type=fakelibvirt.VIR_DOMAIN_JOB_NONE),
            libvirt_guest.JobInfo(
                type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED,
                data_remaining=90),
            libvirt_guest.JobInfo(
                type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED,
                data_remaining=90),
            libvirt_guest.JobInfo(
                type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED,
                data_remaining=90),
            libvirt_guest.JobInfo(
                type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED,
                data_remaining=90),
            libvirt_guest.JobInfo(
                type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED,
                data_remaining=90),
            "thread-finish",
            "domain-stop",
            libvirt_guest.JobInfo(
                type=fakelibvirt.VIR_DOMAIN_JOB_CANCELLED),
        ]

        self._test_live_migration_monitoring(domain_info_records,
                                             fake_times, self.EXPECT_ABORT,
                                             expected_mig_status='cancelled')

    def test_live_migration_monitor_progress_zero_data_remaining(self):
        self.flags(live_migration_completion_timeout=1000000,
                   live_migration_progress_timeout=150,
                   group='libvirt')
        # Each one of these fake times is used for time.time()
        # when a new domain_info_records entry is consumed.
        fake_times = [0, 40, 80, 120, 160, 200, 240, 280, 320]

        # A normal sequence where we see all the normal job states.
        # data_remaining keeps falling between samples here, so the
        # progress timeout should not fire even though the run exceeds
        # 150 seconds of fake time; the job then fails on its own.
        domain_info_records = [
            libvirt_guest.JobInfo(
                type=fakelibvirt.VIR_DOMAIN_JOB_NONE),
            libvirt_guest.JobInfo(
                type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED,
                data_remaining=0),
            libvirt_guest.JobInfo(
                type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED,
                data_remaining=90),
            libvirt_guest.JobInfo(
                type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED,
                data_remaining=70),
            libvirt_guest.JobInfo(
                type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED,
                data_remaining=50),
            libvirt_guest.JobInfo(
                type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED,
                data_remaining=30),
            libvirt_guest.JobInfo(
                type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED,
                data_remaining=10),
            libvirt_guest.JobInfo(
                type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED,
                data_remaining=0),
            "thread-finish",
            "domain-stop",
            libvirt_guest.JobInfo(
                type=fakelibvirt.VIR_DOMAIN_JOB_FAILED),
        ]

        self._test_live_migration_monitoring(domain_info_records,
                                             fake_times,
                                             self.EXPECT_FAILURE)

    @mock.patch('nova.virt.libvirt.migration.should_switch_to_postcopy')
    @mock.patch.object(libvirt_driver.LibvirtDriver,
                       "_is_post_copy_enabled")
    def test_live_migration_monitor_postcopy_switch(self,
            mock_postcopy_enabled, mock_should_switch):
        # A normal sequence where migration is switched to postcopy mode
        mock_postcopy_enabled.return_value = True
        switch_values = [False, False, True]
        mock_should_switch.return_value = switch_values
        domain_info_records = [
            libvirt_guest.JobInfo(
                type=fakelibvirt.VIR_DOMAIN_JOB_NONE),
            libvirt_guest.JobInfo(
                type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED),
            libvirt_guest.JobInfo(
                type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED),
            libvirt_guest.JobInfo(
                type=fakelibvirt.VIR_DOMAIN_JOB_UNBOUNDED),
            "thread-finish",
            "domain-stop",
            libvirt_guest.JobInfo(
                type=fakelibvirt.VIR_DOMAIN_JOB_COMPLETED),
        ]

        self._test_live_migration_monitoring(domain_info_records, [],
                                             self.EXPECT_SUCCESS,
                                             expected_switch=True)

    @mock.patch.object(host.Host, "get_connection")
    @mock.patch.object(utils, "spawn")
    @mock.patch.object(libvirt_driver.LibvirtDriver,
                       "_live_migration_monitor")
    @mock.patch.object(host.Host, "get_guest")
    @mock.patch.object(fakelibvirt.Connection, "_mark_running")
    @mock.patch.object(libvirt_driver.LibvirtDriver,
                       "_live_migration_copy_disk_paths")
    def test_live_migration_main(self, mock_copy_disk_path, mock_running,
                                 mock_guest, mock_monitor, mock_thread,
                                 mock_conn):
        drvr =
libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) instance = objects.Instance(**self.test_instance) dom = fakelibvirt.Domain(drvr._get_connection(), "demo", True) guest = libvirt_guest.Guest(dom) migrate_data = objects.LibvirtLiveMigrateData(block_migration=True) disks_to_copy = (['/some/path/one', '/test/path/two'], ['vda', 'vdb']) mock_copy_disk_path.return_value = disks_to_copy mock_guest.return_value = guest def fake_post(): pass def fake_recover(): pass drvr._live_migration(self.context, instance, "fakehost", fake_post, fake_recover, True, migrate_data) mock_copy_disk_path.assert_called_once_with(self.context, instance, guest) class AnyEventletEvent(object): def __eq__(self, other): return type(other) == eventlet.event.Event mock_thread.assert_called_once_with( drvr._live_migration_operation, self.context, instance, "fakehost", True, migrate_data, guest, disks_to_copy[1]) mock_monitor.assert_called_once_with( self.context, instance, guest, "fakehost", fake_post, fake_recover, True, migrate_data, AnyEventletEvent(), disks_to_copy[0]) def _do_test_create_images_and_backing(self, disk_type): instance = objects.Instance(**self.test_instance) drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) self.mox.StubOutWithMock(drvr, '_fetch_instance_kernel_ramdisk') self.mox.StubOutWithMock(libvirt_driver.libvirt_utils, 'create_image') disk_info = {'path': 'foo', 'type': disk_type, 'disk_size': 1 * 1024 ** 3, 'virt_disk_size': 20 * 1024 ** 3, 'backing_file': None} libvirt_driver.libvirt_utils.create_image( disk_info['type'], mox.IgnoreArg(), disk_info['virt_disk_size']) drvr._fetch_instance_kernel_ramdisk(self.context, instance, fallback_from_host=None) self.mox.ReplayAll() self.stub_out('os.path.exists', lambda *args: False) drvr._create_images_and_backing(self.context, instance, "/fake/instance/dir", [disk_info]) def test_create_images_and_backing_qcow2(self): self._do_test_create_images_and_backing('qcow2') def test_create_images_and_backing_raw(self): self._do_test_create_images_and_backing('raw') def test_create_images_and_backing_images_not_exist_no_fallback(self): conn = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) self.test_instance.update({'user_id': 'fake-user', 'os_type': None, 'project_id': 'fake-project'}) instance = objects.Instance(**self.test_instance) backing_file = imagecache.get_cache_fname(instance.image_ref) disk_info = [ {u'backing_file': backing_file, u'disk_size': 10747904, u'path': u'disk_path', u'type': u'qcow2', u'virt_disk_size': 25165824}] with mock.patch.object(libvirt_driver.libvirt_utils, 'fetch_image', side_effect=exception.ImageNotFound( image_id="fake_id")): self.assertRaises(exception.ImageNotFound, conn._create_images_and_backing, self.context, instance, "/fake/instance/dir", disk_info) @mock.patch('nova.privsep.path.utime') def test_create_images_and_backing_images_not_exist_fallback(self, mock_utime): conn = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) base_dir = os.path.join(CONF.instances_path, CONF.image_cache_subdirectory_name) self.test_instance.update({'user_id': 'fake-user', 'os_type': None, 'kernel_id': uuids.kernel_id, 'ramdisk_id': uuids.ramdisk_id, 'project_id': 'fake-project'}) instance = objects.Instance(**self.test_instance) backing_file = imagecache.get_cache_fname(instance.image_ref) disk_info = [ {u'backing_file': backing_file, u'disk_size': 10747904, u'path': u'disk_path', u'type': u'qcow2', u'virt_disk_size': 25165824}] with test.nested( mock.patch.object(libvirt_driver.libvirt_utils, 'copy_image'), 
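                # fetch_image below is mocked to raise ImageNotFound, so the
                # driver must fall back to fetching the base image, kernel
                # and ramdisk from fallback_from_host ('fake_host') instead
                # of Glance; the resulting copy_image calls are asserted
                # afterwards.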
mock.patch.object(libvirt_driver.libvirt_utils, 'fetch_image', side_effect=exception.ImageNotFound( image_id=uuids.fake_id)), ) as (copy_image_mock, fetch_image_mock): conn._create_images_and_backing(self.context, instance, "/fake/instance/dir", disk_info, fallback_from_host="fake_host") backfile_path = os.path.join(base_dir, backing_file) kernel_path = os.path.join(CONF.instances_path, self.test_instance['uuid'], 'kernel') ramdisk_path = os.path.join(CONF.instances_path, self.test_instance['uuid'], 'ramdisk') copy_image_mock.assert_has_calls([ mock.call(dest=backfile_path, src=backfile_path, host='fake_host', receive=True), mock.call(dest=kernel_path, src=kernel_path, host='fake_host', receive=True), mock.call(dest=ramdisk_path, src=ramdisk_path, host='fake_host', receive=True) ]) fetch_image_mock.assert_has_calls([ mock.call(context=self.context, target=backfile_path, image_id=self.test_instance['image_ref']), mock.call(self.context, kernel_path, instance.kernel_id), mock.call(self.context, ramdisk_path, instance.ramdisk_id) ]) mock_utime.assert_called() @mock.patch.object(libvirt_driver.libvirt_utils, 'fetch_image') def test_create_images_and_backing_images_exist(self, mock_fetch_image): conn = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) self.test_instance.update({'user_id': 'fake-user', 'os_type': None, 'kernel_id': 'fake_kernel_id', 'ramdisk_id': 'fake_ramdisk_id', 'project_id': 'fake-project'}) instance = objects.Instance(**self.test_instance) disk_info = [ {u'backing_file': imagecache.get_cache_fname(instance.image_ref), u'disk_size': 10747904, u'path': u'disk_path', u'type': u'qcow2', u'virt_disk_size': 25165824}] with test.nested( mock.patch.object(imagebackend.Image, 'get_disk_size', return_value=0), mock.patch.object(os.path, 'exists', return_value=True) ): conn._create_images_and_backing(self.context, instance, '/fake/instance/dir', disk_info) self.assertFalse(mock_fetch_image.called) @mock.patch('nova.privsep.path.utime') def test_create_images_and_backing_ephemeral_gets_created(self, mock_utime): drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) base_dir = os.path.join(CONF.instances_path, CONF.image_cache_subdirectory_name) instance = objects.Instance(**self.test_instance) disk_info_byname = fake_disk_info_byname(instance) disk_info_byname['disk.local']['backing_file'] = 'ephemeral_foo' disk_info_byname['disk.local']['virt_disk_size'] = 1 * units.Gi disk_info = disk_info_byname.values() with test.nested( mock.patch.object(libvirt_driver.libvirt_utils, 'fetch_image'), mock.patch.object(drvr, '_create_ephemeral'), mock.patch.object(imagebackend.Image, 'verify_base_size'), mock.patch.object(imagebackend.Image, 'get_disk_size') ) as (fetch_image_mock, create_ephemeral_mock, verify_base_size_mock, disk_size_mock): disk_size_mock.return_value = 0 drvr._create_images_and_backing(self.context, instance, CONF.instances_path, disk_info) self.assertEqual(len(create_ephemeral_mock.call_args_list), 1) root_backing, ephemeral_backing = [ os.path.join(base_dir, name) for name in (disk_info_byname['disk']['backing_file'], 'ephemeral_foo') ] create_ephemeral_mock.assert_called_once_with( ephemeral_size=1, fs_label='ephemeral_foo', os_type='linux', target=ephemeral_backing) fetch_image_mock.assert_called_once_with( context=self.context, image_id=instance.image_ref, target=root_backing) verify_base_size_mock.assert_has_calls([ mock.call(root_backing, instance.flavor.root_gb * units.Gi), mock.call(ephemeral_backing, 1 * units.Gi) ]) mock_utime.assert_called() def 
test_create_images_and_backing_disk_info_none(self): instance = objects.Instance(**self.test_instance) drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) fake_backend = self.useFixture(fake_imagebackend.ImageBackendFixture()) drvr._create_images_and_backing(self.context, instance, "/fake/instance/dir", None) # Assert that we did nothing self.assertEqual({}, fake_backend.created_disks) @mock.patch.object(libvirt_driver.LibvirtDriver, '_fetch_instance_kernel_ramdisk') def test_create_images_and_backing_parallels(self, mock_fetch): self.flags(virt_type='parallels', group='libvirt') instance = objects.Instance(**self.test_instance) instance.vm_mode = fields.VMMode.EXE drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI()) drvr._create_images_and_backing(self.context, instance, '/fake/instance/dir', None) self.assertFalse(mock_fetch.called) def _generate_target_ret(self, target_connect_addr=None): target_ret = { 'graphics_listen_addrs': {'spice': '127.0.0.1', 'vnc': '127.0.0.1'}, 'target_connect_addr': target_connect_addr, 'serial_listen_addr': '127.0.0.1', 'volume': { '12345': {'connection_info': {u'data': {'device_path': u'/dev/disk/by-path/ip-1.2.3.4:3260-iqn.abc.12345.opst-lun-X'}, 'serial': '12345'}, 'disk_info': {'bus': 'scsi', 'dev': 'sda', 'type': 'disk'}}, '67890': {'connection_info': {u'data': {'device_path': u'/dev/disk/by-path/ip-1.2.3.4:3260-iqn.cde.67890.opst-lun-Z'}, 'serial': '67890'}, 'disk_info': {'bus': 'scsi', 'dev': 'sdb', 'type': 'disk'}}}} return target_ret def test_pre_live_migration_works_correctly_mocked(self): self._test_pre_live_migration_works_correctly_mocked() def test_pre_live_migration_with_transport_ip(self): self.flags(live_migration_inbound_addr='127.0.0.2', group='libvirt') target_ret = self._generate_target_ret('127.0.0.2') self._test_pre_live_migration_works_correctly_mocked(target_ret) def test_pre_live_migration_only_dest_supports_native_luks(self): # Assert that allow_native_luks is False when src_supports_native_luks # is missing from migrate data during a P to Q LM. self._test_pre_live_migration_works_correctly_mocked( src_supports_native_luks=None, dest_supports_native_luks=True, allow_native_luks=False) def test_pre_live_migration_only_src_supports_native_luks(self): # Assert that allow_native_luks is False when dest_supports_native_luks # is False due to unmet QEMU and Libvirt deps on the dest compute. 
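        # Taken together with the test above, this shows that native LUKS
        # use is effectively the AND of both sides: the source must
        # advertise src_supports_native_luks in migrate_data and the
        # destination's own _is_native_luks_available() check must pass
        # before allow_native_luks is True.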
self._test_pre_live_migration_works_correctly_mocked( src_supports_native_luks=True, dest_supports_native_luks=False, allow_native_luks=False) def _test_pre_live_migration_works_correctly_mocked(self, target_ret=None, src_supports_native_luks=True, dest_supports_native_luks=True, allow_native_luks=True): # Creating testdata c = context.get_admin_context() instance = objects.Instance(root_device_name='/dev/vda', **self.test_instance) bdms = objects.BlockDeviceMappingList(objects=[ fake_block_device.fake_bdm_object(c, { 'connection_info': jsonutils.dumps({ 'serial': '12345', 'data': { 'device_path': '/dev/disk/by-path/ip-1.2.3.4:3260' '-iqn.abc.12345.opst-lun-X' } }), 'device_name': '/dev/sda', 'volume_id': uuids.volume1, 'source_type': 'volume', 'destination_type': 'volume' }), fake_block_device.fake_bdm_object(c, { 'connection_info': jsonutils.dumps({ 'serial': '67890', 'data': { 'device_path': '/dev/disk/by-path/ip-1.2.3.4:3260' '-iqn.cde.67890.opst-lun-Z' } }), 'device_name': '/dev/sdb', 'volume_id': uuids.volume2, 'source_type': 'volume', 'destination_type': 'volume' }) ]) # We go through get_block_device_info to simulate what the # ComputeManager sends to the driver (make sure we're using the # correct type of BDM objects since there are many of them and # they are super confusing). block_device_info = driver.get_block_device_info(instance, bdms) drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) class FakeNetworkInfo(object): def fixed_ips(self): return ["test_ip_addr"] def fake_none(*args, **kwargs): return self.stubs.Set(drvr, '_create_images_and_backing', fake_none) self.stubs.Set(drvr, '_is_native_luks_available', lambda: dest_supports_native_luks) nw_info = FakeNetworkInfo() # Creating mocks self.mox.StubOutWithMock(drvr, "_connect_volume") for v in block_device_info['block_device_mapping']: drvr._connect_volume(c, v['connection_info'], instance, allow_native_luks=allow_native_luks) self.mox.StubOutWithMock(drvr, 'plug_vifs') drvr.plug_vifs(mox.IsA(instance), nw_info) self.mox.ReplayAll() migrate_data = migrate_data_obj.LibvirtLiveMigrateData( block_migration=False, instance_relative_path='foo', is_shared_block_storage=False, is_shared_instance_path=False, graphics_listen_addr_vnc='127.0.0.1', graphics_listen_addr_spice='127.0.0.1', serial_listen_addr='127.0.0.1', ) if src_supports_native_luks: migrate_data.src_supports_native_luks = True result = drvr.pre_live_migration( c, instance, block_device_info, nw_info, None, migrate_data=migrate_data) if not target_ret: target_ret = self._generate_target_ret() self.assertEqual( target_ret, result.to_legacy_dict( pre_migration_result=True)['pre_live_migration_result']) @mock.patch.object(os, 'mkdir') @mock.patch('nova.virt.libvirt.utils.get_instance_path_at_destination') @mock.patch('nova.virt.libvirt.driver.remotefs.' 
'RemoteFilesystem.copy_file') @mock.patch('nova.virt.driver.block_device_info_get_mapping') @mock.patch('nova.virt.configdrive.required_by', return_value=True) def test_pre_live_migration_block_with_config_drive_success( self, mock_required_by, block_device_info_get_mapping, mock_copy_file, mock_get_instance_path, mock_mkdir): self.flags(config_drive_format='iso9660') vol = {'block_device_mapping': [ {'connection_info': 'dummy', 'mount_device': '/dev/sda'}, {'connection_info': 'dummy', 'mount_device': '/dev/sdb'}]} fake_instance_path = os.path.join(cfg.CONF.instances_path, '/fake_instance_uuid') mock_get_instance_path.return_value = fake_instance_path drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) instance = objects.Instance(**self.test_instance) migrate_data = objects.LibvirtLiveMigrateData() migrate_data.is_shared_instance_path = False migrate_data.is_shared_block_storage = False migrate_data.block_migration = True migrate_data.instance_relative_path = 'foo' src = "%s:%s/disk.config" % (instance.host, fake_instance_path) result = drvr.pre_live_migration( self.context, instance, vol, [], None, migrate_data) block_device_info_get_mapping.assert_called_once_with( {'block_device_mapping': [ {'connection_info': 'dummy', 'mount_device': '/dev/sda'}, {'connection_info': 'dummy', 'mount_device': '/dev/sdb'} ]} ) mock_copy_file.assert_called_once_with(src, fake_instance_path) migrate_data.graphics_listen_addrs_vnc = '127.0.0.1' migrate_data.graphics_listen_addrs_spice = '127.0.0.1' migrate_data.serial_listen_addr = '127.0.0.1' self.assertEqual(migrate_data, result) @mock.patch('nova.virt.driver.block_device_info_get_mapping', return_value=()) def test_pre_live_migration_block_with_config_drive_mocked_with_vfat( self, block_device_info_get_mapping): self.flags(config_drive_format='vfat') # Creating testdata vol = {'block_device_mapping': [ {'connection_info': 'dummy', 'mount_device': '/dev/sda'}, {'connection_info': 'dummy', 'mount_device': '/dev/sdb'}]} drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) instance = objects.Instance(**self.test_instance) instance.config_drive = 'True' migrate_data = migrate_data_obj.LibvirtLiveMigrateData( is_shared_instance_path=False, is_shared_block_storage=False, block_migration=False, instance_relative_path='foo', ) res_data = drvr.pre_live_migration( self.context, instance, vol, [], None, migrate_data) res_data = res_data.to_legacy_dict(pre_migration_result=True) block_device_info_get_mapping.assert_called_once_with( {'block_device_mapping': [ {'connection_info': 'dummy', 'mount_device': '/dev/sda'}, {'connection_info': 'dummy', 'mount_device': '/dev/sdb'} ]} ) self.assertEqual({'graphics_listen_addrs': {'spice': None, 'vnc': None}, 'target_connect_addr': None, 'serial_listen_addr': None, 'volume': {}}, res_data['pre_live_migration_result']) def test_pre_live_migration_vol_backed_works_correctly_mocked(self): # Creating testdata, using temp dir. 
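        # A volume-backed instance has no local root disk to migrate, so
        # this variant checks that pre_live_migration still creates the
        # instance directory under instances_path (asserted via
        # os.path.exists at the end) and returns per-volume
        # connection_info/disk_info in the legacy result dict.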
with utils.tempdir() as tmpdir: self.flags(instances_path=tmpdir) c = context.get_admin_context() inst_ref = objects.Instance(root_device_name='/dev/vda', **self.test_instance) bdms = objects.BlockDeviceMappingList(objects=[ fake_block_device.fake_bdm_object(c, { 'connection_info': jsonutils.dumps({ 'serial': '12345', 'data': { 'device_path': '/dev/disk/by-path/ip-1.2.3.4:3260' '-iqn.abc.12345.opst-lun-X' } }), 'device_name': '/dev/sda', 'volume_id': uuids.volume1, 'source_type': 'volume', 'destination_type': 'volume' }), fake_block_device.fake_bdm_object(c, { 'connection_info': jsonutils.dumps({ 'serial': '67890', 'data': { 'device_path': '/dev/disk/by-path/ip-1.2.3.4:3260' '-iqn.cde.67890.opst-lun-Z' } }), 'device_name': '/dev/sdb', 'volume_id': uuids.volume2, 'source_type': 'volume', 'destination_type': 'volume' }) ]) # We go through get_block_device_info to simulate what the # ComputeManager sends to the driver (make sure we're using the # correct type of BDM objects since there are many of them and # they are super confusing). block_device_info = driver.get_block_device_info(inst_ref, bdms) drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) def fake_none(*args, **kwargs): return self.stubs.Set(drvr, '_create_images_and_backing', fake_none) self.stubs.Set(drvr, '_is_native_luks_available', lambda: True) class FakeNetworkInfo(object): def fixed_ips(self): return ["test_ip_addr"] nw_info = FakeNetworkInfo() # Creating mocks self.mox.StubOutWithMock(drvr, "_connect_volume") for v in block_device_info['block_device_mapping']: drvr._connect_volume(c, v['connection_info'], inst_ref, allow_native_luks=True) self.mox.StubOutWithMock(drvr, 'plug_vifs') drvr.plug_vifs(mox.IsA(inst_ref), nw_info) self.mox.ReplayAll() migrate_data = migrate_data_obj.LibvirtLiveMigrateData( is_shared_instance_path=False, is_shared_block_storage=False, is_volume_backed=True, block_migration=False, instance_relative_path=inst_ref['name'], disk_over_commit=False, disk_available_mb=123, image_type='qcow2', filename='foo', src_supports_native_luks=True, ) ret = drvr.pre_live_migration( c, inst_ref, block_device_info, nw_info, None, migrate_data) expected_result = { 'graphics_listen_addrs': {'spice': None, 'vnc': None}, 'target_connect_addr': None, 'serial_listen_addr': None, 'volume': { '12345': {'connection_info': {u'data': {'device_path': u'/dev/disk/by-path/ip-1.2.3.4:3260-iqn.abc.12345.opst-lun-X'}, 'serial': '12345'}, 'disk_info': {'bus': 'scsi', 'dev': 'sda', 'type': 'disk'}}, '67890': {'connection_info': {u'data': {'device_path': u'/dev/disk/by-path/ip-1.2.3.4:3260-iqn.cde.67890.opst-lun-Z'}, 'serial': '67890'}, 'disk_info': {'bus': 'scsi', 'dev': 'sdb', 'type': 'disk'}}}} self.assertEqual( expected_result, ret.to_legacy_dict(True)['pre_live_migration_result']) self.assertTrue(os.path.exists('%s/%s/' % (tmpdir, inst_ref['name']))) def test_pre_live_migration_plug_vifs_retry_fails(self): self.flags(live_migration_retry_count=3) instance = objects.Instance(**self.test_instance) def fake_plug_vifs(instance, network_info): raise processutils.ProcessExecutionError() drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) self.stubs.Set(drvr, 'plug_vifs', fake_plug_vifs) self.stubs.Set(eventlet.greenthread, 'sleep', lambda x: eventlet.sleep(0)) disk_info_json = jsonutils.dumps({}) migrate_data = migrate_data_obj.LibvirtLiveMigrateData( is_shared_block_storage=True, is_shared_instance_path=True, block_migration=False, ) self.assertRaises(processutils.ProcessExecutionError, drvr.pre_live_migration, 
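                          # plug_vifs raises on every attempt here, so once
                          # the configured live_migration_retry_count (3)
                          # attempts are exhausted, pre_live_migration
                          # re-raises the ProcessExecutionError to the
                          # caller.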
self.context, instance, block_device_info=None, network_info=[], disk_info=disk_info_json, migrate_data=migrate_data) def test_pre_live_migration_plug_vifs_retry_works(self): self.flags(live_migration_retry_count=3) called = {'count': 0} instance = objects.Instance(**self.test_instance) def fake_plug_vifs(instance, network_info): called['count'] += 1 if called['count'] < CONF.live_migration_retry_count: raise processutils.ProcessExecutionError() else: return drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) self.stubs.Set(drvr, 'plug_vifs', fake_plug_vifs) self.stubs.Set(eventlet.greenthread, 'sleep', lambda x: eventlet.sleep(0)) disk_info_json = jsonutils.dumps({}) migrate_data = migrate_data_obj.LibvirtLiveMigrateData( is_shared_block_storage=True, is_shared_instance_path=True, block_migration=False, ) drvr.pre_live_migration(self.context, instance, block_device_info=None, network_info=[], disk_info=disk_info_json, migrate_data=migrate_data) def test_pre_live_migration_image_not_created_with_shared_storage(self): migrate_data_set = [{'is_shared_block_storage': False, 'is_shared_instance_path': True, 'is_volume_backed': False, 'filename': 'foo', 'instance_relative_path': 'bar', 'disk_over_commit': False, 'disk_available_mb': 123, 'image_type': 'qcow2', 'block_migration': False}, {'is_shared_block_storage': True, 'is_shared_instance_path': True, 'is_volume_backed': False, 'filename': 'foo', 'instance_relative_path': 'bar', 'disk_over_commit': False, 'disk_available_mb': 123, 'image_type': 'qcow2', 'block_migration': False}, {'is_shared_block_storage': False, 'is_shared_instance_path': True, 'is_volume_backed': False, 'filename': 'foo', 'instance_relative_path': 'bar', 'disk_over_commit': False, 'disk_available_mb': 123, 'image_type': 'qcow2', 'block_migration': True}] def _to_obj(d): return migrate_data_obj.LibvirtLiveMigrateData(**d) migrate_data_set = map(_to_obj, migrate_data_set) drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) instance = objects.Instance(**self.test_instance) # creating mocks with test.nested( mock.patch.object(drvr, '_create_images_and_backing'), mock.patch.object(drvr, 'ensure_filtering_rules_for_instance'), mock.patch.object(drvr, 'plug_vifs'), ) as ( create_image_mock, rules_mock, plug_mock, ): disk_info_json = jsonutils.dumps({}) for migrate_data in migrate_data_set: res = drvr.pre_live_migration(self.context, instance, block_device_info=None, network_info=[], disk_info=disk_info_json, migrate_data=migrate_data) self.assertFalse(create_image_mock.called) self.assertIsInstance(res, objects.LibvirtLiveMigrateData) def test_pre_live_migration_with_not_shared_instance_path(self): migrate_data = migrate_data_obj.LibvirtLiveMigrateData( is_shared_block_storage=False, is_shared_instance_path=False, block_migration=False, instance_relative_path='foo', ) drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) instance = objects.Instance(**self.test_instance) def check_instance_dir(context, instance, instance_dir, disk_info, fallback_from_host=False): self.assertTrue(instance_dir) # creating mocks with test.nested( mock.patch.object(drvr, '_create_images_and_backing', side_effect=check_instance_dir), mock.patch.object(drvr, 'ensure_filtering_rules_for_instance'), mock.patch.object(drvr, 'plug_vifs'), ) as ( create_image_mock, rules_mock, plug_mock, ): disk_info_json = jsonutils.dumps({}) res = drvr.pre_live_migration(self.context, instance, block_device_info=None, network_info=[], disk_info=disk_info_json, migrate_data=migrate_data) 
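            # With a non-shared instance path the destination has to
            # recreate the disks locally; the assertion below checks that
            # _create_images_and_backing is invoked with fallback_from_host
            # set to the source host so any missing base images can be
            # pulled from there.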
create_image_mock.assert_has_calls( [mock.call(self.context, instance, mock.ANY, {}, fallback_from_host=instance.host)]) self.assertIsInstance(res, objects.LibvirtLiveMigrateData) def test_pre_live_migration_recreate_disk_info(self): migrate_data = migrate_data_obj.LibvirtLiveMigrateData( is_shared_block_storage=False, is_shared_instance_path=False, block_migration=True, instance_relative_path='/some/path/', ) disk_info = [{'disk_size': 5368709120, 'type': 'raw', 'virt_disk_size': 5368709120, 'path': '/some/path/disk', 'backing_file': '', 'over_committed_disk_size': 0}, {'disk_size': 1073741824, 'type': 'raw', 'virt_disk_size': 1073741824, 'path': '/some/path/disk.eph0', 'backing_file': '', 'over_committed_disk_size': 0}] image_disk_info = {'/some/path/disk': 'raw', '/some/path/disk.eph0': 'raw'} drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) instance = objects.Instance(**self.test_instance) instance_path = os.path.dirname(disk_info[0]['path']) disk_info_path = os.path.join(instance_path, 'disk.info') with test.nested( mock.patch.object(os, 'mkdir'), mock.patch.object(fake_libvirt_utils, 'write_to_file'), mock.patch.object(drvr, '_create_images_and_backing') ) as ( mkdir, write_to_file, create_images_and_backing ): drvr.pre_live_migration(self.context, instance, block_device_info=None, network_info=[], disk_info=jsonutils.dumps(disk_info), migrate_data=migrate_data) write_to_file.assert_called_with(disk_info_path, jsonutils.dumps(image_disk_info)) def test_pre_live_migration_with_perf_events(self): drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) drvr._supported_perf_events = ['cmt'] migrate_data = migrate_data_obj.LibvirtLiveMigrateData( is_shared_block_storage=False, is_shared_instance_path=False, block_migration=False, instance_relative_path='foo', ) instance = objects.Instance(**self.test_instance) res = drvr.pre_live_migration(self.context, instance, block_device_info=None, network_info=[], disk_info=None, migrate_data=migrate_data) self.assertEqual(['cmt'], res.supported_perf_events) def test_get_instance_disk_info_works_correctly(self): # Test data instance = objects.Instance(**self.test_instance) dummyxml = ("instance-0000000a" "" "" "" "" "" "" "" "") # Preparing mocks vdmock = self.mox.CreateMock(fakelibvirt.virDomain) self.mox.StubOutWithMock(vdmock, "XMLDesc") vdmock.XMLDesc(0).AndReturn(dummyxml) def fake_lookup(_uuid): if _uuid == instance.uuid: return vdmock self.create_fake_libvirt_mock(lookupByUUIDString=fake_lookup) fake_libvirt_utils.disk_sizes['/test/disk'] = 10 * units.Gi fake_libvirt_utils.disk_sizes['/test/disk.local'] = 20 * units.Gi fake_libvirt_utils.disk_backing_files['/test/disk.local'] = 'file' self.mox.StubOutWithMock(os.path, "getsize") os.path.getsize('/test/disk').AndReturn((10737418240)) os.path.getsize('/test/disk.local').AndReturn((3328599655)) ret = ("image: /test/disk.local\n" "file format: qcow2\n" "virtual size: 20G (21474836480 bytes)\n" "disk size: 3.1G\n" "cluster_size: 2097152\n" "backing file: /test/dummy (actual path: /backing/file)\n") self.mox.StubOutWithMock(os.path, "exists") os.path.exists('/test/disk.local').AndReturn(True) self.mox.StubOutWithMock(utils, "execute") utils.execute('env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info', '/test/disk.local', prlimit = images.QEMU_IMG_LIMITS, ).AndReturn((ret, '')) self.mox.ReplayAll() drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) info = drvr.get_instance_disk_info(instance) info = jsonutils.loads(info) self.assertEqual(info[0]['type'], 'raw') 
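        # over_committed_disk_size is virt_disk_size minus the bytes
        # actually allocated on disk; for the qcow2 disk below that is
        # 21474836480 - 3328599655 = 18146236825, while the raw disk is
        # fully allocated and over-commits nothing.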
self.assertEqual(info[0]['path'], '/test/disk') self.assertEqual(info[0]['disk_size'], 10737418240) self.assertEqual(info[0]['backing_file'], "") self.assertEqual(info[0]['over_committed_disk_size'], 0) self.assertEqual(info[1]['type'], 'qcow2') self.assertEqual(info[1]['path'], '/test/disk.local') self.assertEqual(info[1]['virt_disk_size'], 21474836480) self.assertEqual(info[1]['backing_file'], "file") self.assertEqual(info[1]['over_committed_disk_size'], 18146236825) def test_post_live_migration(self): vol = {'block_device_mapping': [ {'attachment_id': None, 'connection_info': { 'data': {'multipath_id': 'dummy1'}, 'serial': 'fake_serial1'}, 'mount_device': '/dev/sda', }, {'attachment_id': None, 'connection_info': { 'data': {}, 'serial': 'fake_serial2'}, 'mount_device': '/dev/sdb', }]} def fake_initialize_connection(context, volume_id, connector): return {'data': {}} drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) fake_connector = {'host': 'fake'} inst_ref = {'id': 'foo'} cntx = context.get_admin_context() # Set up the mock expectations with test.nested( mock.patch.object(driver, 'block_device_info_get_mapping', return_value=vol['block_device_mapping']), mock.patch.object(drvr, "get_volume_connector", return_value=fake_connector), mock.patch.object(drvr._volume_api, "initialize_connection", side_effect=fake_initialize_connection), mock.patch.object(drvr, '_disconnect_volume') ) as (block_device_info_get_mapping, get_volume_connector, initialize_connection, _disconnect_volume): drvr.post_live_migration(cntx, inst_ref, vol) block_device_info_get_mapping.assert_has_calls([ mock.call(vol)]) get_volume_connector.assert_has_calls([ mock.call(inst_ref)]) _disconnect_volume.assert_has_calls([ mock.call(cntx, {'data': {'multipath_id': 'dummy1'}}, inst_ref), mock.call(cntx, {'data': {}}, inst_ref)]) def test_post_live_migration_cinder_v3(self): cntx = context.get_admin_context() drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) instance = fake_instance.fake_instance_obj(cntx, uuid=uuids.instance) vol_id = uuids.volume old_attachment_id = uuids.attachment disk_dev = 'sda' connection_info = { 'data': {'multipath_id': 'dummy1'}, 'serial': vol_id} block_device_mapping = [ {'attachment_id': uuids.attachment, 'mount_device': '/dev/%s' % disk_dev, 'connection_info': connection_info}] old_attachment = { 'connection_info': { 'data': {'multipath_id': 'dummy1'}, 'serial': vol_id}} migrate_data = objects.LibvirtLiveMigrateData( is_shared_block_storage=True, old_vol_attachment_ids={vol_id: old_attachment_id}) @mock.patch.object(drvr, '_disconnect_volume') @mock.patch.object(drvr._volume_api, 'attachment_get') @mock.patch.object(driver, 'block_device_info_get_mapping') def _test(mock_get_bdms, mock_attachment_get, mock_disconnect): mock_get_bdms.return_value = block_device_mapping mock_attachment_get.return_value = old_attachment drvr.post_live_migration(cntx, instance, None, migrate_data=migrate_data) mock_attachment_get.assert_called_once_with(cntx, old_attachment_id) mock_disconnect.assert_called_once_with(cntx, connection_info, instance) _test() def test_get_instance_disk_info_excludes_volumes(self): # Test data instance = objects.Instance(**self.test_instance) dummyxml = ("instance-0000000a" "" "" "" "" "" "" "" "" "" "" "" "" "" "") # Preparing mocks vdmock = self.mox.CreateMock(fakelibvirt.virDomain) self.mox.StubOutWithMock(vdmock, "XMLDesc") vdmock.XMLDesc(0).AndReturn(dummyxml) def fake_lookup(_uuid): if _uuid == instance.uuid: return vdmock 
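        # The two volume-backed devices (/dev/vdc and /dev/vdd) passed via
        # block_device_info below must be excluded from the result: only
        # the local image-backed disks are reported, which is what the
        # final assertions verify.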
self.create_fake_libvirt_mock(lookupByUUIDString=fake_lookup) fake_libvirt_utils.disk_sizes['/test/disk'] = 10 * units.Gi fake_libvirt_utils.disk_sizes['/test/disk.local'] = 20 * units.Gi fake_libvirt_utils.disk_backing_files['/test/disk.local'] = 'file' self.mox.StubOutWithMock(os.path, "getsize") os.path.getsize('/test/disk').AndReturn((10737418240)) os.path.getsize('/test/disk.local').AndReturn((3328599655)) ret = ("image: /test/disk.local\n" "file format: qcow2\n" "virtual size: 20G (21474836480 bytes)\n" "disk size: 3.1G\n" "cluster_size: 2097152\n" "backing file: /test/dummy (actual path: /backing/file)\n") self.mox.StubOutWithMock(os.path, "exists") os.path.exists('/test/disk.local').AndReturn(True) self.mox.StubOutWithMock(utils, "execute") utils.execute('env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info', '/test/disk.local', prlimit = images.QEMU_IMG_LIMITS, ).AndReturn((ret, '')) self.mox.ReplayAll() conn_info = {'driver_volume_type': 'fake'} info = {'block_device_mapping': [ {'connection_info': conn_info, 'mount_device': '/dev/vdc'}, {'connection_info': conn_info, 'mount_device': '/dev/vdd'}]} drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) info = drvr.get_instance_disk_info(instance, block_device_info=info) info = jsonutils.loads(info) self.assertEqual(info[0]['type'], 'raw') self.assertEqual(info[0]['path'], '/test/disk') self.assertEqual(info[0]['disk_size'], 10737418240) self.assertEqual(info[0]['backing_file'], "") self.assertEqual(info[0]['over_committed_disk_size'], 0) self.assertEqual(info[1]['type'], 'qcow2') self.assertEqual(info[1]['path'], '/test/disk.local') self.assertEqual(info[1]['virt_disk_size'], 21474836480) self.assertEqual(info[1]['backing_file'], "file") self.assertEqual(info[1]['over_committed_disk_size'], 18146236825) def test_get_instance_disk_info_no_bdinfo_passed(self): # NOTE(ndipanov): _get_disk_overcomitted_size_total calls this method # without access to Nova's block device information. We want to make # sure that we guess volumes mostly correctly in that case as well instance = objects.Instance(**self.test_instance) dummyxml = ("instance-0000000a" "" "" "" "" "" "" "" "") # Preparing mocks vdmock = self.mox.CreateMock(fakelibvirt.virDomain) self.mox.StubOutWithMock(vdmock, "XMLDesc") vdmock.XMLDesc(0).AndReturn(dummyxml) def fake_lookup(_uuid): if _uuid == instance.uuid: return vdmock self.create_fake_libvirt_mock(lookupByUUIDString=fake_lookup) fake_libvirt_utils.disk_sizes['/test/disk'] = 10 * units.Gi self.mox.StubOutWithMock(os.path, "getsize") os.path.getsize('/test/disk').AndReturn((10737418240)) self.mox.ReplayAll() drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) info = drvr.get_instance_disk_info(instance) info = jsonutils.loads(info) self.assertEqual(1, len(info)) self.assertEqual(info[0]['type'], 'raw') self.assertEqual(info[0]['path'], '/test/disk') self.assertEqual(info[0]['disk_size'], 10737418240) self.assertEqual(info[0]['backing_file'], "") self.assertEqual(info[0]['over_committed_disk_size'], 0) def test_spawn_with_network_info(self): def fake_getLibVersion(): return fakelibvirt.FAKE_LIBVIRT_VERSION def fake_getCapabilities(): return """ cef19ce0-0ca2-11df-855d-b19fbce37686 x86_64 Penryn Intel """ def fake_baselineCPU(cpu, flag): return """ Penryn Intel """ # _fake_network_info must be called before create_fake_libvirt_mock(), # as _fake_network_info calls importutils.import_class() and # create_fake_libvirt_mock() mocks importutils.import_class(). 
network_info = _fake_network_info(self, 1) self.create_fake_libvirt_mock(getLibVersion=fake_getLibVersion, getCapabilities=fake_getCapabilities, getVersion=lambda: 1005001, baselineCPU=fake_baselineCPU) instance = objects.Instance(**self.test_instance) instance.image_ref = uuids.image_ref instance.config_drive = '' image_meta = objects.ImageMeta.from_dict(self.test_image_meta) self.useFixture(fake_imagebackend.ImageBackendFixture()) drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) with test.nested( utils.tempdir(), mock.patch('nova.virt.libvirt.driver.libvirt'), mock.patch.object(drvr, '_build_device_metadata'), mock.patch.object(drvr, 'get_info'), mock.patch.object(drvr.firewall_driver, 'setup_basic_filtering'), mock.patch.object(drvr.firewall_driver, 'prepare_instance_filter') ) as ( tmpdir, mock_orig_libvirt, mock_build_device_metadata, mock_get_info, mock_ignored, mock_ignored ): self.flags(instances_path=tmpdir) hw_running = hardware.InstanceInfo(state=power_state.RUNNING) mock_get_info.return_value = hw_running mock_build_device_metadata.return_value = None del mock_orig_libvirt.VIR_CONNECT_BASELINE_CPU_EXPAND_FEATURES drvr.spawn(self.context, instance, image_meta, [], 'herp', {}, network_info=network_info) mock_get_info.assert_called_once_with(instance) mock_build_device_metadata.assert_called_once_with(self.context, instance) # Methods called directly by spawn() @mock.patch.object(libvirt_driver.LibvirtDriver, '_get_guest_xml') @mock.patch.object(libvirt_driver.LibvirtDriver, '_create_domain_and_network') @mock.patch.object(libvirt_driver.LibvirtDriver, 'get_info') # Methods called by _create_configdrive via post_xml_callback @mock.patch('nova.virt.configdrive.ConfigDriveBuilder._make_iso9660') @mock.patch.object(libvirt_driver.LibvirtDriver, '_build_device_metadata') @mock.patch.object(instance_metadata, 'InstanceMetadata') def test_spawn_with_config_drive(self, mock_instance_metadata, mock_build_device_metadata, mock_mkisofs, mock_get_info, mock_create_domain_and_network, mock_get_guest_xml): drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) instance = objects.Instance(**self.test_instance) instance.config_drive = 'True' image_meta = objects.ImageMeta.from_dict(self.test_image_meta) instance_info = hardware.InstanceInfo(state=power_state.RUNNING) mock_build_device_metadata.return_value = None def fake_create_domain_and_network( context, xml, instance, network_info, block_device_info=None, power_on=True, vifs_already_plugged=False, post_xml_callback=None, destroy_disks_on_failure=False): # The config disk should be created by this callback, so we need # to execute it. 
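            # (A sketch of the flow being simulated here: the driver builds
            # the config drive only after the guest XML has been defined,
            # so _create_domain_and_network hands control back through
            # post_xml_callback rather than spawn() calling
            # _create_configdrive directly.)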
post_xml_callback() fake_backend = self.useFixture( fake_imagebackend.ImageBackendFixture(exists=lambda _: False)) mock_get_info.return_value = instance_info mock_create_domain_and_network.side_effect = \ fake_create_domain_and_network drvr.spawn(self.context, instance, image_meta, [], None, {}) # We should have imported 'disk.config' config_disk = fake_backend.disks['disk.config'] config_disk.import_file.assert_called_once_with(instance, mock.ANY, 'disk.config') def test_spawn_without_image_meta(self): def fake_none(*args, **kwargs): return def fake_get_info(instance): return hardware.InstanceInfo(state=power_state.RUNNING) instance_ref = self.test_instance instance_ref['image_ref'] = 1 instance = objects.Instance(**instance_ref) drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) self.stubs.Set(drvr, '_get_guest_xml', fake_none) self.stubs.Set(drvr, '_create_domain_and_network', fake_none) self.stubs.Set(drvr, 'get_info', fake_get_info) image_meta = objects.ImageMeta.from_dict(self.test_image_meta) fake_backend = self.useFixture(fake_imagebackend.ImageBackendFixture()) drvr.spawn(self.context, instance, image_meta, [], None, {}) # We should have created a root disk and an ephemeral disk self.assertEqual(['disk', 'disk.local'], sorted(fake_backend.created_disks.keys())) def _test_spawn_disks(self, image_ref, block_device_info): drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) image_meta = objects.ImageMeta.from_dict(self.test_image_meta) # Volume-backed instance created without image instance = objects.Instance(**self.test_instance) instance.image_ref = image_ref instance.root_device_name = '/dev/vda' instance.uuid = uuids.instance_uuid backend = self.useFixture(fake_imagebackend.ImageBackendFixture()) with test.nested( mock.patch.object(drvr, '_get_guest_xml'), mock.patch.object(drvr, '_create_domain_and_network'), mock.patch.object(drvr, 'get_info') ) as ( mock_get_guest_xml, mock_create_domain_and_network, mock_get_info ): hw_running = hardware.InstanceInfo(state=power_state.RUNNING) mock_get_info.return_value = hw_running drvr.spawn(self.context, instance, image_meta, [], None, {}, block_device_info=block_device_info) # Return a sorted list of created disks return sorted(backend.created_disks.keys()) def test_spawn_from_volume_no_image_ref(self): block_device_info = {'root_device_name': '/dev/vda', 'block_device_mapping': [ {'mount_device': 'vda', 'boot_index': 0}]} disks_created = self._test_spawn_disks(None, block_device_info) # We should have created the ephemeral disk, and nothing else self.assertEqual(['disk.local'], disks_created) def test_spawn_from_volume_with_image_ref(self): block_device_info = {'root_device_name': '/dev/vda', 'block_device_mapping': [ {'mount_device': 'vda', 'boot_index': 0}]} disks_created = self._test_spawn_disks(uuids.image_ref, block_device_info) # We should have created the ephemeral disk, and nothing else self.assertEqual(['disk.local'], disks_created) def test_spawn_from_image(self): disks_created = self._test_spawn_disks(uuids.image_ref, None) # We should have created the root and ephemeral disks self.assertEqual(['disk', 'disk.local'], disks_created) def test_start_lxc_from_volume(self): self.flags(virt_type="lxc", group='libvirt') def check_setup_container(image, container_dir=None): self.assertIsInstance(image, imgmodel.LocalBlockImage) self.assertEqual(image.path, '/dev/path/to/dev') return '/dev/nbd1' bdm = { 'guest_format': None, 'boot_index': 0, 'mount_device': '/dev/sda', 'connection_info': { 'driver_volume_type': 
'iscsi', 'serial': 'afc1', 'data': { 'access_mode': 'rw', 'target_discovered': False, 'encrypted': False, 'qos_specs': None, 'target_iqn': 'iqn: volume-afc1', 'target_portal': 'ip: 3260', 'volume_id': 'afc1', 'target_lun': 1, 'auth_password': 'uj', 'auth_username': '47', 'auth_method': 'CHAP' } }, 'disk_bus': 'scsi', 'device_type': 'disk', 'delete_on_termination': False } def _connect_volume_side_effect(ctxt, connection_info, instance): bdm['connection_info']['data']['device_path'] = '/dev/path/to/dev' def _get(key, opt=None): return bdm.get(key, opt) def getitem(key): return bdm[key] def setitem(key, val): bdm[key] = val bdm_mock = mock.MagicMock() bdm_mock.__getitem__.side_effect = getitem bdm_mock.__setitem__.side_effect = setitem bdm_mock.get = _get disk_mock = mock.MagicMock() disk_mock.source_path = '/dev/path/to/dev' block_device_info = {'block_device_mapping': [bdm_mock], 'root_device_name': '/dev/sda'} # Volume-backed instance created without image instance_ref = self.test_instance instance_ref['image_ref'] = '' instance_ref['root_device_name'] = '/dev/sda' instance_ref['ephemeral_gb'] = 0 instance_ref['uuid'] = uuids.fake inst_obj = objects.Instance(**instance_ref) image_meta = objects.ImageMeta.from_dict({}) drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) with test.nested( mock.patch.object(drvr, 'plug_vifs'), mock.patch.object(drvr.firewall_driver, 'setup_basic_filtering'), mock.patch.object(drvr.firewall_driver, 'prepare_instance_filter'), mock.patch.object(drvr.firewall_driver, 'apply_instance_filter'), mock.patch.object(drvr, '_create_domain'), mock.patch.object(drvr, '_connect_volume', side_effect=_connect_volume_side_effect), mock.patch.object(drvr, '_get_volume_config', return_value=disk_mock), mock.patch.object(drvr, 'get_info', return_value=hardware.InstanceInfo( state=power_state.RUNNING)), mock.patch('nova.virt.disk.api.setup_container', side_effect=check_setup_container), mock.patch('nova.virt.disk.api.teardown_container'), mock.patch.object(objects.Instance, 'save')): drvr.spawn(self.context, inst_obj, image_meta, [], None, {}, network_info=[], block_device_info=block_device_info) self.assertEqual('/dev/nbd1', inst_obj.system_metadata.get( 'rootfs_device_name')) def test_spawn_with_pci_devices(self): def fake_none(*args, **kwargs): return None def fake_get_info(instance): return hardware.InstanceInfo(state=power_state.RUNNING) class FakeLibvirtPciDevice(object): def dettach(self): return None def reset(self): return None def fake_node_device_lookup_by_name(address): pattern = ("pci_%(hex)s{4}_%(hex)s{2}_%(hex)s{2}_%(oct)s{1}" % dict(hex='[\da-f]', oct='[0-8]')) pattern = re.compile(pattern) if pattern.match(address) is None: raise fakelibvirt.libvirtError() return FakeLibvirtPciDevice() drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) self.stubs.Set(drvr, '_get_guest_xml', fake_none) self.stubs.Set(drvr, '_create_domain_and_network', fake_none) self.stubs.Set(drvr, 'get_info', fake_get_info) mock_connection = mock.MagicMock( nodeDeviceLookupByName=fake_node_device_lookup_by_name) instance_ref = self.test_instance instance_ref['image_ref'] = 'my_fake_image' instance = objects.Instance(**instance_ref) instance['pci_devices'] = objects.PciDeviceList( objects=[objects.PciDevice(address='0000:00:00.0')]) image_meta = objects.ImageMeta.from_dict(self.test_image_meta) self.useFixture(fake_imagebackend.ImageBackendFixture()) with mock.patch.object(drvr, '_get_connection', return_value=mock_connection): drvr.spawn(self.context, instance, 
image_meta, [], None, {}) def _test_create_image_plain(self, os_type='', filename='', mkfs=False): gotFiles = [] def fake_none(*args, **kwargs): return def fake_get_info(instance): return hardware.InstanceInfo(state=power_state.RUNNING) instance_ref = self.test_instance instance_ref['image_ref'] = 1 instance = objects.Instance(**instance_ref) instance['os_type'] = os_type drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) self.stubs.Set(drvr, '_get_guest_xml', fake_none) self.stubs.Set(drvr, '_create_domain_and_network', fake_none) self.stubs.Set(drvr, 'get_info', fake_get_info) if mkfs: self.stubs.Set(nova.virt.disk.api, '_MKFS_COMMAND', {os_type: 'mkfs.ext4 --label %(fs_label)s %(target)s'}) image_meta = objects.ImageMeta.from_dict(self.test_image_meta) disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type, instance, image_meta) self.useFixture( fake_imagebackend.ImageBackendFixture(got_files=gotFiles)) drvr._create_image(self.context, instance, disk_info['mapping']) drvr._get_guest_xml(self.context, instance, None, disk_info, image_meta) wantFiles = [ {'filename': '356a192b7913b04c54574d18c28d46e6395428ab', 'size': 10 * units.Gi}, {'filename': filename, 'size': 20 * units.Gi}, ] self.assertEqual(gotFiles, wantFiles) def test_create_image_plain_os_type_blank(self): self._test_create_image_plain(os_type='', filename=self._EPHEMERAL_20_DEFAULT, mkfs=False) def test_create_image_plain_os_type_none(self): self._test_create_image_plain(os_type=None, filename=self._EPHEMERAL_20_DEFAULT, mkfs=False) def test_create_image_plain_os_type_set_no_fs(self): self._test_create_image_plain(os_type='test', filename=self._EPHEMERAL_20_DEFAULT, mkfs=False) def test_create_image_plain_os_type_set_with_fs(self): ephemeral_file_name = ('ephemeral_20_%s' % utils.get_hash_str( 'mkfs.ext4 --label %(fs_label)s %(target)s')[:7]) self._test_create_image_plain(os_type='test', filename=ephemeral_file_name, mkfs=True) def test_create_image_initrd(self): kernel_id = uuids.kernel_id ramdisk_id = uuids.ramdisk_id kernel_fname = imagecache.get_cache_fname(kernel_id) ramdisk_fname = imagecache.get_cache_fname(ramdisk_id) filename = self._EPHEMERAL_20_DEFAULT gotFiles = [] instance_ref = self.test_instance instance_ref['image_ref'] = uuids.instance_id instance_ref['kernel_id'] = uuids.kernel_id instance_ref['ramdisk_id'] = uuids.ramdisk_id instance_ref['os_type'] = 'test' instance = objects.Instance(**instance_ref) driver = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) fake_backend = self.useFixture( fake_imagebackend.ImageBackendFixture(got_files=gotFiles)) with test.nested( mock.patch.object(driver, '_get_guest_xml'), mock.patch.object(driver, '_create_domain_and_network'), mock.patch.object(driver, 'get_info', return_value=[hardware.InstanceInfo(state=power_state.RUNNING)]) ): image_meta = objects.ImageMeta.from_dict(self.test_image_meta) disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type, instance, image_meta) driver._create_image(self.context, instance, disk_info['mapping']) # Assert that kernel and ramdisk were fetched with fetch_raw_image # and no size for name, disk in fake_backend.disks.items(): cache = disk.cache if name in ('kernel', 'ramdisk'): cache.assert_called_once_with( context=self.context, filename=mock.ANY, image_id=mock.ANY, fetch_func=fake_libvirt_utils.fetch_raw_image) wantFiles = [ {'filename': kernel_fname, 'size': None}, {'filename': ramdisk_fname, 'size': None}, {'filename': imagecache.get_cache_fname(uuids.instance_id), 'size': 10 * units.Gi}, {'filename': 
filename, 'size': 20 * units.Gi}, ] self.assertEqual(wantFiles, gotFiles) @mock.patch( 'nova.virt.libvirt.driver.LibvirtDriver._build_device_metadata') @mock.patch('nova.api.metadata.base.InstanceMetadata') @mock.patch('nova.virt.configdrive.ConfigDriveBuilder.make_drive') def test_create_configdrive(self, mock_make_drive, mock_instance_metadata, mock_build_device_metadata): instance = objects.Instance(**self.test_instance) instance.config_drive = 'True' backend = self.useFixture( fake_imagebackend.ImageBackendFixture(exists=lambda path: False)) mock_build_device_metadata.return_value = None injection_info = get_injection_info( network_info=mock.sentinel.network_info, admin_pass=mock.sentinel.admin_pass, files=mock.sentinel.files ) drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) drvr._create_configdrive(self.context, instance, injection_info) expected_config_drive_path = os.path.join( CONF.instances_path, instance.uuid, 'disk.config') mock_make_drive.assert_called_once_with(expected_config_drive_path) mock_instance_metadata.assert_called_once_with(instance, request_context=self.context, network_info=mock.sentinel.network_info, content=mock.sentinel.files, extra_md={'admin_pass': mock.sentinel.admin_pass}) backend.disks['disk.config'].import_file.assert_called_once_with( instance, mock.ANY, 'disk.config') @ddt.unpack @ddt.data({'expected': 200, 'flavor_size': 200}, {'expected': 100, 'flavor_size': 200, 'bdi_size': 100}, {'expected': 200, 'flavor_size': 200, 'bdi_size': 100, 'legacy': True}) def test_create_image_with_swap(self, expected, flavor_size=None, bdi_size=None, legacy=False): # Test the precedence of swap disk size specified in both the bdm and # the flavor. drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) instance_ref = self.test_instance instance_ref['image_ref'] = '' instance = objects.Instance(**instance_ref) if flavor_size is not None: instance.flavor.swap = flavor_size bdi = {'block_device_mapping': [{'boot_index': 0}]} if bdi_size is not None: bdi['swap'] = {'swap_size': bdi_size, 'device_name': '/dev/vdb'} create_image_kwargs = {} if legacy: create_image_kwargs['ignore_bdi_for_swap'] = True image_meta = objects.ImageMeta.from_dict(self.test_image_meta) disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type, instance, image_meta, block_device_info=bdi) backend = self.useFixture(fake_imagebackend.ImageBackendFixture()) drvr._create_image(self.context, instance, disk_info['mapping'], block_device_info=bdi, **create_image_kwargs) backend.mock_create_swap.assert_called_once_with( target='swap_%i' % expected, swap_mb=expected, context=self.context) backend.disks['disk.swap'].cache.assert_called_once_with( fetch_func=mock.ANY, filename='swap_%i' % expected, size=expected * units.Mi, context=self.context, swap_mb=expected) @mock.patch.object(nova.virt.libvirt.imagebackend.Image, 'cache') def test_create_vz_container_with_swap(self, mock_cache): self.flags(virt_type='parallels', group='libvirt') drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI()) instance_ref = copy.deepcopy(self.test_instance) instance_ref['vm_mode'] = fields.VMMode.EXE instance_ref['flavor'].swap = 1024 instance = objects.Instance(**instance_ref) image_meta = objects.ImageMeta.from_dict(self.test_image_meta) disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type, instance, image_meta) self.assertRaises(exception.Invalid, drvr._create_image, self.context, instance, disk_info['mapping']) @mock.patch.object(nova.virt.libvirt.imagebackend.Image, 'cache', 

    @mock.patch.object(nova.virt.libvirt.imagebackend.Image, 'cache',
                       side_effect=exception.ImageNotFound(
                           image_id='fake-id'))
    def test_create_image_not_exist_no_fallback(self, mock_cache):
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        instance = objects.Instance(**self.test_instance)
        image_meta = objects.ImageMeta.from_dict(self.test_image_meta)
        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance, image_meta)
        self.assertRaises(exception.ImageNotFound,
                          drvr._create_image,
                          self.context, instance, disk_info['mapping'])

    @mock.patch.object(nova.virt.libvirt.imagebackend.Image, 'cache')
    def test_create_image_not_exist_fallback(self, mock_cache):

        def side_effect(fetch_func, filename, size=None, *args, **kwargs):
            def second_call(fetch_func, filename, size=None, *args,
                            **kwargs):
                # call copy_from_host ourselves because we mocked
                # image.cache()
                fetch_func('fake-target')
                # further calls have no side effect
                mock_cache.side_effect = None
            mock_cache.side_effect = second_call
            # raise an error only on the first call
            raise exception.ImageNotFound(image_id='fake-id')

        mock_cache.side_effect = side_effect
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        instance = objects.Instance(**self.test_instance)
        image_meta = objects.ImageMeta.from_dict(self.test_image_meta)
        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance, image_meta)

        with mock.patch.object(libvirt_driver.libvirt_utils,
                               'copy_image') as mock_copy:
            drvr._create_image(self.context, instance, disk_info['mapping'],
                               fallback_from_host='fake-source-host')
            mock_copy.assert_called_once_with(src='fake-target',
                                              dest='fake-target',
                                              host='fake-source-host',
                                              receive=True)

    @mock.patch('nova.virt.disk.api.get_file_extension_for_os_type')
    def test_create_image_with_ephemerals(self, mock_get_ext):
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        instance_ref = self.test_instance
        instance_ref['image_ref'] = ''
        instance = objects.Instance(**instance_ref)
        image_meta = objects.ImageMeta.from_dict(self.test_image_meta)
        bdi = {'ephemerals': [{'size': 100}],
               'block_device_mapping': [{'boot_index': 0}]}
        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance, image_meta,
                                            block_device_info=bdi)
        mock_get_ext.return_value = mock.sentinel.file_ext
        backend = self.useFixture(fake_imagebackend.ImageBackendFixture())

        drvr._create_image(self.context, instance, disk_info['mapping'],
                           block_device_info=bdi)

        filename = 'ephemeral_100_%s' % mock.sentinel.file_ext
        backend.mock_create_ephemeral.assert_called_once_with(
            target=filename, ephemeral_size=100, fs_label='ephemeral0',
            is_block_dev=mock.sentinel.is_block_dev, os_type='linux',
            specified_fs=None, context=self.context, vm_mode=None)
        backend.disks['disk.eph0'].cache.assert_called_once_with(
            fetch_func=mock.ANY, context=self.context, filename=filename,
            size=100 * units.Gi, ephemeral_size=mock.ANY, specified_fs=None)

    @mock.patch.object(nova.virt.libvirt.imagebackend.Image, 'cache')
    def test_create_image_resize_snap_backend(self, mock_cache):
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        instance = objects.Instance(**self.test_instance)
        instance.task_state = task_states.RESIZE_FINISH
        image_meta = objects.ImageMeta.from_dict(self.test_image_meta)
        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance, image_meta)
        fake_backend = self.useFixture(
            fake_imagebackend.ImageBackendFixture())

        drvr._create_image(self.context, instance, disk_info['mapping'])

        # Assert we called create_snap on the root disk
        fake_backend.disks['disk'].create_snap.assert_called_once_with(
            libvirt_utils.RESIZE_SNAPSHOT_NAME)

    @mock.patch.object(utils, 'execute')
    def test_create_ephemeral_specified_fs(self, mock_exec):
        self.flags(default_ephemeral_format='ext3')
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        drvr._create_ephemeral('/dev/something', 20, 'myVol', 'linux',
                               is_block_dev=True, specified_fs='ext4')
        mock_exec.assert_called_once_with(
            'mkfs', '-t', 'ext4', '-F', '-L', 'myVol', '/dev/something',
            run_as_root=True)

    @mock.patch('nova.privsep.path.utime')
    def test_create_ephemeral_specified_fs_not_valid(self, mock_utime):
        CONF.set_override('default_ephemeral_format', 'ext4')
        ephemerals = [{'device_type': 'disk',
                       'disk_bus': 'virtio',
                       'device_name': '/dev/vdb',
                       'guest_format': 'dummy',
                       'size': 1}]
        block_device_info = {
            'ephemerals': ephemerals}
        instance_ref = self.test_instance
        instance_ref['image_ref'] = 1
        instance = objects.Instance(**instance_ref)
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        image_meta = objects.ImageMeta.from_dict({'disk_format': 'raw'})
        disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                            instance, image_meta)
        disk_info['mapping'].pop('disk.local')

        with test.nested(
            mock.patch.object(utils, 'execute'),
            mock.patch.object(drvr, 'get_info'),
            mock.patch.object(drvr, '_create_domain_and_network'),
            mock.patch.object(imagebackend.Image, 'verify_base_size'),
            mock.patch.object(imagebackend.Image, 'get_disk_size')
        ) as (execute_mock, get_info_mock, create_mock,
              verify_base_size_mock, disk_size_mock):
            disk_size_mock.return_value = 0
            # NOTE: pass the request context, not the context module
            self.assertRaises(exception.InvalidBDMFormat, drvr._create_image,
                              self.context, instance, disk_info['mapping'],
                              block_device_info=block_device_info)

    def test_create_ephemeral_default(self):
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        self.mox.StubOutWithMock(utils, 'execute')
        utils.execute('mkfs', '-t', 'ext4', '-F', '-L', 'myVol',
                      '/dev/something', run_as_root=True)
        self.mox.ReplayAll()
        drvr._create_ephemeral('/dev/something', 20, 'myVol', 'linux',
                               is_block_dev=True)

    def test_create_ephemeral_with_conf(self):
        CONF.set_override('default_ephemeral_format', 'ext4')
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        self.mox.StubOutWithMock(utils, 'execute')
        utils.execute('mkfs', '-t', 'ext4', '-F', '-L', 'myVol',
                      '/dev/something', run_as_root=True)
        self.mox.ReplayAll()
        drvr._create_ephemeral('/dev/something', 20, 'myVol', 'linux',
                               is_block_dev=True)

    def test_create_ephemeral_with_arbitrary(self):
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        self.stubs.Set(nova.virt.disk.api, '_MKFS_COMMAND',
                       {'linux': 'mkfs.ext4 --label %(fs_label)s '
                                 '%(target)s'})
        self.mox.StubOutWithMock(utils, 'execute')
        utils.execute('mkfs.ext4', '--label', 'myVol', '/dev/something',
                      run_as_root=True)
        self.mox.ReplayAll()
        drvr._create_ephemeral('/dev/something', 20, 'myVol', 'linux',
                               is_block_dev=True)

    def test_create_ephemeral_with_ext3(self):
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        self.stubs.Set(nova.virt.disk.api, '_MKFS_COMMAND',
                       {'linux': 'mkfs.ext3 --label %(fs_label)s '
                                 '%(target)s'})
        self.mox.StubOutWithMock(utils, 'execute')
        utils.execute('mkfs.ext3', '--label', 'myVol', '/dev/something',
                      run_as_root=True)
        self.mox.ReplayAll()
        drvr._create_ephemeral('/dev/something', 20, 'myVol', 'linux',
                               is_block_dev=True)
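
    # NOTE: the mkfs tests above use the record/replay style of mox: the
    # expected utils.execute() call is recorded before ReplayAll(), and
    # verification happens implicitly when the recorded call is (or is not)
    # made by _create_ephemeral() during the test.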

    @mock.patch.object(fake_libvirt_utils, 'create_ploop_image')
    def test_create_ephemeral_parallels(self, mock_create_ploop):
        self.flags(virt_type='parallels', group='libvirt')
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        drvr._create_ephemeral('/dev/something', 20, 'myVol', 'linux',
                               is_block_dev=False,
                               specified_fs='fs_format',
                               vm_mode=fields.VMMode.EXE)
        mock_create_ploop.assert_called_once_with('expanded',
                                                  '/dev/something',
                                                  '20G', 'fs_format')

    def test_create_swap_default(self):
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        self.mox.StubOutWithMock(utils, 'execute')
        utils.execute('mkswap', '/dev/something', run_as_root=False)
        self.mox.ReplayAll()
        drvr._create_swap('/dev/something', 1)

    def test_ensure_console_log_for_instance_pass(self):
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        with test.nested(
                mock.patch.object(drvr, '_get_console_log_path'),
                mock.patch.object(fake_libvirt_utils, 'file_open')
        ) as (mock_path, mock_open):
            drvr._ensure_console_log_for_instance(mock.ANY)
            mock_path.assert_called_once()
            mock_open.assert_called_once()

    def test_ensure_console_log_for_instance_pass_w_permissions(self):
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        with test.nested(
                mock.patch.object(drvr, '_get_console_log_path'),
                mock.patch.object(fake_libvirt_utils, 'file_open',
                                  side_effect=IOError(errno.EACCES, 'exc'))
        ) as (mock_path, mock_open):
            drvr._ensure_console_log_for_instance(mock.ANY)
            mock_path.assert_called_once()
            mock_open.assert_called_once()

    def test_ensure_console_log_for_instance_fail(self):
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        with test.nested(
                mock.patch.object(drvr, '_get_console_log_path'),
                mock.patch.object(fake_libvirt_utils, 'file_open',
                                  side_effect=IOError(errno.EREMOTE, 'exc'))
        ) as (mock_path, mock_open):
            self.assertRaises(
                IOError, drvr._ensure_console_log_for_instance, mock.ANY)

    @mock.patch('nova.privsep.path.last_bytes',
                return_value=(b'67890', 0))
    def test_get_console_output_file(self, mock_last_bytes):
        with utils.tempdir() as tmpdir:
            self.flags(instances_path=tmpdir)

            instance_ref = self.test_instance
            instance_ref['image_ref'] = 123456
            instance = objects.Instance(**instance_ref)

            console_dir = os.path.join(tmpdir, instance['name'])
            console_log = '%s/console.log' % console_dir
            # minimal domain XML with a file-backed console at console_log
            # (the markup was lost in extraction; only the console element
            # matters to get_console_output)
            fake_dom_xml = """
                <domain type='kvm'>
                    <devices>
                        <console type='file'>
                            <source path='%s'/>
                            <target port='0'/>
                        </console>
                    </devices>
                </domain>
            """ % console_log

            def fake_lookup(id):
                return FakeVirtDomain(fake_dom_xml)

            self.create_fake_libvirt_mock()
            libvirt_driver.LibvirtDriver._conn.lookupByUUIDString = \
                fake_lookup

            drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)

            try:
                prev_max = libvirt_driver.MAX_CONSOLE_BYTES
                libvirt_driver.MAX_CONSOLE_BYTES = 5
                with mock.patch('os.path.exists', return_value=True):
                    output = drvr.get_console_output(self.context, instance)
            finally:
                libvirt_driver.MAX_CONSOLE_BYTES = prev_max

            self.assertEqual(b'67890', output)

    def test_get_console_output_file_missing(self):
        with utils.tempdir() as tmpdir:
            self.flags(instances_path=tmpdir)

            instance_ref = self.test_instance
            instance_ref['image_ref'] = 123456
            instance = objects.Instance(**instance_ref)

            console_log = os.path.join(tmpdir, instance['name'],
                                       'non-existent.log')
            # same minimal file-console XML, pointing at a missing log
            fake_dom_xml = """
                <domain type='kvm'>
                    <devices>
                        <console type='file'>
                            <source path='%s'/>
                            <target port='0'/>
                        </console>
                    </devices>
                </domain>
            """ % console_log

            def fake_lookup(id):
                return FakeVirtDomain(fake_dom_xml)

            self.create_fake_libvirt_mock()
            libvirt_driver.LibvirtDriver._conn.lookupByUUIDString = \
                fake_lookup

            drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)

            with mock.patch('os.path.exists', return_value=False):
                output = drvr.get_console_output(self.context, instance)

            self.assertEqual('', output)

    @mock.patch('os.path.exists', return_value=True)
    @mock.patch('nova.privsep.path.last_bytes',
                return_value=(b'67890', 0))
    @mock.patch('nova.privsep.path.writefile')
    @mock.patch('nova.privsep.libvirt.readpty')
    def test_get_console_output_pty(self, mocked_readfile,
                                    mocked_writefile, mocked_last_bytes,
                                    mocked_path_exists):
        with utils.tempdir() as tmpdir:
            self.flags(instances_path=tmpdir)

            instance_ref = self.test_instance
            instance_ref['image_ref'] = 123456
            instance = objects.Instance(**instance_ref)

            console_dir = os.path.join(tmpdir, instance['name'])
            pty_file = '%s/fake_pty' % console_dir
            # minimal domain XML with a pty console at pty_file
            # (reconstructed; only the console element matters here)
            fake_dom_xml = """
                <domain type='kvm'>
                    <devices>
                        <console type='pty'>
                            <source path='%s'/>
                            <target port='0'/>
                        </console>
                    </devices>
                </domain>
            """ % pty_file

            def fake_lookup(id):
                return FakeVirtDomain(fake_dom_xml)

            mocked_readfile.return_value = 'foo'

            self.create_fake_libvirt_mock()
            libvirt_driver.LibvirtDriver._conn.lookupByUUIDString = \
                fake_lookup

            drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)

            try:
                prev_max = libvirt_driver.MAX_CONSOLE_BYTES
                libvirt_driver.MAX_CONSOLE_BYTES = 5
                output = drvr.get_console_output(self.context, instance)
            finally:
                libvirt_driver.MAX_CONSOLE_BYTES = prev_max

            self.assertEqual(b'67890', output)

    def test_get_console_output_pty_not_available(self):
        instance = objects.Instance(**self.test_instance)
        # a pty console without a source path (reconstructed), so no
        # console output can be read
        fake_dom_xml = """
            <domain type='kvm'>
                <devices>
                    <console type='pty'>
                        <target port='0'/>
                    </console>
                </devices>
            </domain>
        """

        def fake_lookup(id):
            return FakeVirtDomain(fake_dom_xml)

        self.create_fake_libvirt_mock()
        libvirt_driver.LibvirtDriver._conn.lookupByUUIDString = fake_lookup

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        self.assertRaises(exception.ConsoleNotAvailable,
                          drvr.get_console_output, self.context, instance)

    @mock.patch('nova.virt.libvirt.host.Host._get_domain')
    @mock.patch.object(libvirt_guest.Guest, "get_xml_desc")
    def test_get_console_output_not_available(self, mock_get_xml,
                                              get_domain):
        # a console type the driver cannot read from (reconstructed;
        # neither 'file' nor 'pty')
        xml = """
            <domain type='kvm'>
                <devices>
                    <console type='tcp'>
                        <target port='0'/>
                    </console>
                </devices>
            </domain>
        """
        mock_get_xml.return_value = xml
        get_domain.return_value = mock.MagicMock()

        instance = objects.Instance(**self.test_instance)
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        self.assertRaises(exception.ConsoleNotAvailable,
                          drvr.get_console_output, self.context, instance)

    @mock.patch('nova.virt.libvirt.host.Host._get_domain')
    @mock.patch.object(libvirt_guest.Guest, 'get_xml_desc')
    def test_get_console_output_logrotate(self, mock_get_xml, get_domain):
        fake_libvirt_utils.files['console.log'] = b'uvwxyz'
        fake_libvirt_utils.files['console.log.0'] = b'klmnopqrst'
        fake_libvirt_utils.files['console.log.1'] = b'abcdefghij'

        def mock_path_exists(path):
            return os.path.basename(path) in fake_libvirt_utils.files

        def mock_last_bytes(path, count):
            with fake_libvirt_utils.file_open(path) as flo:
                return nova.privsep.path._last_bytes_inner(flo, count)

        # minimal file-console XML pointing at 'console.log'
        # (reconstructed)
        xml = """
            <domain type='kvm'>
                <devices>
                    <console type='file'>
                        <source path='console.log'/>
                        <target port='0'/>
                    </console>
                </devices>
            </domain>
        """
        mock_get_xml.return_value = xml
        get_domain.return_value = mock.MagicMock()

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        instance = objects.Instance(**self.test_instance)

        def _get_logd_output(bytes_to_read):
            with utils.tempdir() as tmp_dir:
                self.flags(instances_path=tmp_dir)
                log_data = ""
                try:
                    prev_max = libvirt_driver.MAX_CONSOLE_BYTES
                    libvirt_driver.MAX_CONSOLE_BYTES = bytes_to_read
                    with mock.patch('os.path.exists',
                                    side_effect=mock_path_exists):
                        with mock.patch('nova.privsep.path.last_bytes',
                                        side_effect=mock_last_bytes):
                            log_data = drvr.get_console_output(self.context,
                                                               instance)
                finally:
                    libvirt_driver.MAX_CONSOLE_BYTES = prev_max
                return log_data

        # span across only 1 file (with remaining bytes)
        self.assertEqual(b'wxyz', _get_logd_output(4))
        # span across only 1 file (exact bytes)
        self.assertEqual(b'uvwxyz', _get_logd_output(6))
        # span across 2 files (with remaining bytes)
        self.assertEqual(b'opqrstuvwxyz', _get_logd_output(12))
        # span across all files (exact bytes)
        self.assertEqual(b'abcdefghijklmnopqrstuvwxyz', _get_logd_output(26))
        # span across all files with more bytes than available
        self.assertEqual(b'abcdefghijklmnopqrstuvwxyz', _get_logd_output(30))
        # files are not available
        fake_libvirt_utils.files = {}
        self.assertEqual('', _get_logd_output(30))
        # reset the file for other tests
        fake_libvirt_utils.files['console.log'] = b'01234567890'

    def test_get_host_ip_addr(self):
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        ip = drvr.get_host_ip_addr()
        self.assertEqual(ip, CONF.my_ip)

    @mock.patch.object(libvirt_driver.LOG, 'warning')
    @mock.patch('nova.compute.utils.get_machine_ips')
    def test_get_host_ip_addr_failure(self, mock_ips, mock_log):
        mock_ips.return_value = ['8.8.8.8', '75.75.75.75']
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        drvr.get_host_ip_addr()
        mock_log.assert_called_once_with(u'my_ip address (%(my_ip)s) was '
                                         u'not found on any of the '
                                         u'interfaces: %(ifaces)s',
                                         {'ifaces': '8.8.8.8, 75.75.75.75',
                                          'my_ip': mock.ANY})

    def test_conn_event_handler(self):
        self.mox.UnsetStubs()
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        service_mock = mock.MagicMock()
        service_mock.disabled.return_value = False
        with test.nested(
            mock.patch.object(drvr._host, "_connect",
                              side_effect=fakelibvirt.make_libvirtError(
                                  fakelibvirt.libvirtError,
                                  "Failed to connect to host",
                                  error_code=
                                  fakelibvirt.VIR_ERR_INTERNAL_ERROR)),
            mock.patch.object(drvr._host, "_init_events",
                              return_value=None),
            mock.patch.object(objects.Service, "get_by_compute_host",
                              return_value=service_mock)):

            # verify that the driver registers for the close callback
            # and re-connects after receiving the callback
            self.assertRaises(exception.HypervisorUnavailable,
                              drvr.init_host, "wibble")
            self.assertTrue(service_mock.disabled)

    def test_command_with_broken_connection(self):
        self.mox.UnsetStubs()
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        service_mock = mock.MagicMock()
        service_mock.disabled.return_value = False
        with test.nested(
            mock.patch.object(drvr._host, "_connect",
                              side_effect=fakelibvirt.make_libvirtError(
                                  fakelibvirt.libvirtError,
                                  "Failed to connect to host",
                                  error_code=
                                  fakelibvirt.VIR_ERR_INTERNAL_ERROR)),
            mock.patch.object(drvr._host, "_init_events",
                              return_value=None),
            mock.patch.object(host.Host, "has_min_version",
                              return_value=True),
            mock.patch.object(drvr, "_do_quality_warnings",
                              return_value=None),
            mock.patch.object(objects.Service, "get_by_compute_host",
                              return_value=service_mock),
            mock.patch.object(host.Host, "get_capabilities")):

            self.assertRaises(exception.HypervisorUnavailable,
                              drvr.init_host, ("wibble",))
            self.assertTrue(service_mock.disabled)

    def test_service_resume_after_broken_connection(self):
        self.mox.UnsetStubs()
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        service_mock = mock.MagicMock()
        service_mock.disabled.return_value = True
        with test.nested(
            mock.patch.object(drvr._host, "_connect",
                              return_value=mock.MagicMock()),
            mock.patch.object(drvr._host, "_init_events",
                              return_value=None),
            mock.patch.object(host.Host, "has_min_version",
                              return_value=True),
            mock.patch.object(drvr, "_do_quality_warnings",
                              return_value=None),
            mock.patch.object(objects.Service, "get_by_compute_host",
                              return_value=service_mock),
            mock.patch.object(host.Host, "get_capabilities")):
            drvr.init_host("wibble")
            drvr.get_num_instances()
            drvr._host._dispatch_conn_event()
            self.assertFalse(service_mock.disabled)
            self.assertIsNone(service_mock.disabled_reason)
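
    # NOTE: when the initial libvirt connection fails, init_host() raises
    # HypervisorUnavailable and the nova-compute service record is disabled;
    # once _dispatch_conn_event() signals a working connection again the
    # service is re-enabled and its disabled_reason cleared, which is what
    # the three connection tests above assert.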

    @mock.patch.object(objects.Instance, 'save')
    def test_immediate_delete(self, mock_save):
        def fake_get_domain(instance):
            raise exception.InstanceNotFound(instance_id=instance.uuid)

        def fake_delete_instance_files(instance):
            pass

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        self.stubs.Set(drvr._host, '_get_domain', fake_get_domain)
        self.stubs.Set(drvr, 'delete_instance_files',
                       fake_delete_instance_files)

        instance = objects.Instance(self.context, **self.test_instance)
        drvr.destroy(self.context, instance, {})
        mock_save.assert_called_once_with()

    @mock.patch.object(objects.Instance, 'get_by_uuid')
    @mock.patch.object(objects.Instance, 'obj_load_attr', autospec=True)
    @mock.patch.object(objects.Instance, 'save', autospec=True)
    @mock.patch.object(libvirt_driver.LibvirtDriver, '_destroy')
    @mock.patch.object(libvirt_driver.LibvirtDriver, 'delete_instance_files')
    @mock.patch.object(libvirt_driver.LibvirtDriver, '_disconnect_volume')
    @mock.patch.object(driver, 'block_device_info_get_mapping')
    @mock.patch.object(libvirt_driver.LibvirtDriver, '_undefine_domain')
    def _test_destroy_removes_disk(self, mock_undefine_domain, mock_mapping,
                                   mock_disconnect_volume,
                                   mock_delete_instance_files, mock_destroy,
                                   mock_inst_save, mock_inst_obj_load_attr,
                                   mock_get_by_uuid, volume_fail=False):
        instance = objects.Instance(self.context, **self.test_instance)
        vol = {'block_device_mapping': [
               {'connection_info': 'dummy', 'mount_device': '/dev/sdb'}]}

        mock_mapping.return_value = vol['block_device_mapping']
        mock_delete_instance_files.return_value = True
        mock_get_by_uuid.return_value = instance
        if volume_fail:
            # make _disconnect_volume actually raise so the failure path
            # in cleanup() is exercised
            mock_disconnect_volume.side_effect = (
                exception.VolumeNotFound('vol'))

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        drvr.destroy(self.context, instance, [], vol)

    def test_destroy_removes_disk(self):
        self._test_destroy_removes_disk(volume_fail=False)

    def test_destroy_removes_disk_volume_fails(self):
        self._test_destroy_removes_disk(volume_fail=True)

    @mock.patch.object(libvirt_driver.LibvirtDriver, 'unplug_vifs')
    @mock.patch.object(libvirt_driver.LibvirtDriver, '_destroy')
    @mock.patch.object(libvirt_driver.LibvirtDriver, '_undefine_domain')
    def test_destroy_not_removes_disk(self, mock_undefine_domain,
                                      mock_destroy, mock_unplug_vifs):
        instance = fake_instance.fake_instance_obj(
            None, name='instancename', id=1,
            uuid='875a8070-d0b9-4949-8b31-104d125c9a64')

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        drvr.destroy(self.context, instance, [], None, False)

    @mock.patch.object(libvirt_driver.LibvirtDriver, 'cleanup')
    @mock.patch.object(libvirt_driver.LibvirtDriver, '_teardown_container')
    @mock.patch.object(host.Host, '_get_domain')
    def test_destroy_lxc_calls_teardown_container(self, mock_get_domain,
                                                  mock_teardown_container,
                                                  mock_cleanup):
        self.flags(virt_type='lxc', group='libvirt')
        fake_domain = FakeVirtDomain()

        def destroy_side_effect(*args, **kwargs):
            fake_domain._info[0] = power_state.SHUTDOWN

        with mock.patch.object(fake_domain, 'destroy',
                side_effect=destroy_side_effect) as mock_domain_destroy:
            mock_get_domain.return_value = fake_domain
            instance = objects.Instance(**self.test_instance)

            drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
            network_info = []
            drvr.destroy(self.context, instance, network_info, None, False)

            mock_get_domain.assert_has_calls([mock.call(instance),
                                              mock.call(instance)])
            mock_domain_destroy.assert_called_once_with()
            mock_teardown_container.assert_called_once_with(instance)
            mock_cleanup.assert_called_once_with(self.context, instance,
                                                 network_info, None, False)

    @mock.patch.object(libvirt_driver.LibvirtDriver, 'cleanup')
    @mock.patch.object(libvirt_driver.LibvirtDriver, '_teardown_container')
    @mock.patch.object(host.Host, '_get_domain')
    def test_destroy_lxc_calls_teardown_container_when_no_domain(self,
            mock_get_domain, mock_teardown_container, mock_cleanup):
        self.flags(virt_type='lxc', group='libvirt')
        instance = objects.Instance(**self.test_instance)
        inf_exception = exception.InstanceNotFound(
            instance_id=instance.uuid)
        mock_get_domain.side_effect = inf_exception

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        network_info = []
        drvr.destroy(self.context, instance, network_info, None, False)

        mock_get_domain.assert_has_calls([mock.call(instance),
                                          mock.call(instance)])
        mock_teardown_container.assert_called_once_with(instance)
        mock_cleanup.assert_called_once_with(self.context, instance,
                                             network_info, None, False)

    def test_reboot_different_ids(self):
        class FakeLoopingCall(object):
            def start(self, *a, **k):
                return self

            def wait(self):
                return None

        self.flags(wait_soft_reboot_seconds=1, group='libvirt')
        info_tuple = ('fake', 'fake', 'fake', 'also_fake')
        self.reboot_create_called = False

        # Mock domain
        mock_domain = self.mox.CreateMock(fakelibvirt.virDomain)
        mock_domain.info().AndReturn(
            (libvirt_guest.VIR_DOMAIN_RUNNING,) + info_tuple)
        mock_domain.ID().AndReturn('some_fake_id')
        mock_domain.ID().AndReturn('some_fake_id')
        mock_domain.shutdown()
        mock_domain.info().AndReturn(
            (libvirt_guest.VIR_DOMAIN_CRASHED,) + info_tuple)
        mock_domain.ID().AndReturn('some_other_fake_id')
        mock_domain.ID().AndReturn('some_other_fake_id')

        self.mox.ReplayAll()

        def fake_get_domain(instance):
            return mock_domain

        def fake_create_domain(**kwargs):
            self.reboot_create_called = True

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        instance = objects.Instance(**self.test_instance)
        self.stubs.Set(drvr._host, '_get_domain', fake_get_domain)
        self.stubs.Set(drvr, '_create_domain', fake_create_domain)
        self.stubs.Set(loopingcall, 'FixedIntervalLoopingCall',
                       lambda *a, **k: FakeLoopingCall())
        self.stubs.Set(pci_manager, 'get_instance_pci_devs', lambda *a: [])
        drvr.reboot(None, instance, [], 'SOFT')
        self.assertTrue(self.reboot_create_called)

    @mock.patch.object(pci_manager, 'get_instance_pci_devs')
    @mock.patch.object(loopingcall, 'FixedIntervalLoopingCall')
    @mock.patch.object(greenthread, 'sleep')
    @mock.patch.object(libvirt_driver.LibvirtDriver, '_hard_reboot')
    @mock.patch.object(host.Host, '_get_domain')
    def test_reboot_same_ids(self, mock_get_domain, mock_hard_reboot,
                             mock_sleep, mock_loopingcall,
                             mock_get_instance_pci_devs):
        class FakeLoopingCall(object):
            def start(self, *a, **k):
                return self

            def wait(self):
                return None

        self.flags(wait_soft_reboot_seconds=1, group='libvirt')
        info_tuple = ('fake', 'fake', 'fake', 'also_fake')
        self.reboot_hard_reboot_called = False

        # Mock domain
        mock_domain = mock.Mock(fakelibvirt.virDomain)
        return_values = [(libvirt_guest.VIR_DOMAIN_RUNNING,) + info_tuple,
                         (libvirt_guest.VIR_DOMAIN_CRASHED,) + info_tuple]
        mock_domain.info.side_effect = return_values
        mock_domain.ID.return_value = 'some_fake_id'
        mock_domain.shutdown.side_effect = mock.Mock()

        def fake_hard_reboot(*args, **kwargs):
            self.reboot_hard_reboot_called = True

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        instance = objects.Instance(**self.test_instance)
        mock_get_domain.return_value = mock_domain
        mock_hard_reboot.side_effect = fake_hard_reboot
        mock_loopingcall.return_value = FakeLoopingCall()
        mock_get_instance_pci_devs.return_value = []
        drvr.reboot(None, instance, [], 'SOFT')
        self.assertTrue(self.reboot_hard_reboot_called)

    @mock.patch.object(libvirt_driver.LibvirtDriver, '_hard_reboot')
    @mock.patch.object(host.Host, '_get_domain')
    def test_soft_reboot_libvirt_exception(self, mock_get_domain,
                                           mock_hard_reboot):
        # Tests that a hard reboot is performed when a soft reboot results
        # in raising a libvirtError.
        info_tuple = ('fake', 'fake', 'fake', 'also_fake')

        # setup mocks
        mock_virDomain = mock.Mock(fakelibvirt.virDomain)
        mock_virDomain.info.return_value = (
            (libvirt_guest.VIR_DOMAIN_RUNNING,) + info_tuple)
        mock_virDomain.ID.return_value = 'some_fake_id'
        mock_virDomain.shutdown.side_effect = fakelibvirt.libvirtError('Err')

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        context = None
        instance = objects.Instance(**self.test_instance)
        network_info = []
        mock_get_domain.return_value = mock_virDomain

        drvr.reboot(context, instance, network_info, 'SOFT')

    @mock.patch.object(libvirt_driver.LibvirtDriver, '_hard_reboot')
    @mock.patch.object(host.Host, '_get_domain')
    def _test_resume_state_on_host_boot_with_state(self, state,
                                                   mock_get_domain,
                                                   mock_hard_reboot):
        mock_virDomain = mock.Mock(fakelibvirt.virDomain)
        mock_virDomain.info.return_value = ([state, None, None, None, None])

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        mock_get_domain.return_value = mock_virDomain
        instance = objects.Instance(**self.test_instance)
        network_info = _fake_network_info(self, 1)

        drvr.resume_state_on_host_boot(self.context, instance, network_info,
                                       block_device_info=None)

        ignored_states = (power_state.RUNNING,
                          power_state.SUSPENDED,
                          power_state.NOSTATE,
                          power_state.PAUSED)
        self.assertEqual(mock_hard_reboot.called,
                         state not in ignored_states)

    def test_resume_state_on_host_boot_with_running_state(self):
        self._test_resume_state_on_host_boot_with_state(power_state.RUNNING)

    def test_resume_state_on_host_boot_with_suspended_state(self):
        self._test_resume_state_on_host_boot_with_state(
            power_state.SUSPENDED)

    def test_resume_state_on_host_boot_with_paused_state(self):
        self._test_resume_state_on_host_boot_with_state(power_state.PAUSED)

    def test_resume_state_on_host_boot_with_nostate(self):
        self._test_resume_state_on_host_boot_with_state(power_state.NOSTATE)

    def test_resume_state_on_host_boot_with_shutdown_state(self):
        # pass the state the test name advertises, not RUNNING
        self._test_resume_state_on_host_boot_with_state(power_state.SHUTDOWN)

    def test_resume_state_on_host_boot_with_crashed_state(self):
        self._test_resume_state_on_host_boot_with_state(power_state.CRASHED)

    @mock.patch.object(libvirt_driver.LibvirtDriver, '_hard_reboot')
    @mock.patch.object(host.Host, '_get_domain')
    def test_resume_state_on_host_boot_with_instance_not_found_on_driver(
            self, mock_get_domain, mock_hard_reboot):
        instance = objects.Instance(**self.test_instance)

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        mock_get_domain.side_effect = exception.InstanceNotFound(
            instance_id='fake')
        drvr.resume_state_on_host_boot(self.context, instance,
                                       network_info=[],
                                       block_device_info=None)

        mock_hard_reboot.assert_called_once_with(self.context,
                                                 instance, [], None)
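
    # NOTE: resume_state_on_host_boot() only hard-reboots guests whose power
    # state is not already RUNNING, SUSPENDED, PAUSED or NOSTATE; the
    # ignored_states tuple in the helper above encodes exactly that rule,
    # and InstanceNotFound from the driver is treated like a stopped guest.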

    @mock.patch('nova.virt.libvirt.LibvirtDriver._get_neutron_events')
    @mock.patch('nova.virt.libvirt.LibvirtDriver.plug_vifs')
    @mock.patch('nova.virt.libvirt.LibvirtDriver._lxc_disk_handler')
    @mock.patch('nova.virt.libvirt.LibvirtDriver._create_domain')
    def test_create_domain_and_network_reboot(self, mock_create,
                                              mock_handler, mock_plug,
                                              mock_events):
        # Verify that we call get_neutron_events with reboot=True if
        # create_domain_and_network was called with reboot=True
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        instance = objects.Instance(**self.test_instance)
        network_info = _fake_network_info(self, 1)

        @mock.patch.object(drvr.firewall_driver, 'setup_basic_filtering')
        @mock.patch.object(drvr.firewall_driver, 'prepare_instance_filter')
        @mock.patch.object(drvr.firewall_driver, 'apply_instance_filter')
        def _do_create(mock_apply, mock_prepare, mock_setup):
            drvr._create_domain_and_network(self.context, mock.sentinel.xml,
                                            instance, network_info,
                                            reboot=True)

        _do_create()
        mock_events.assert_called_once_with(network_info, reboot=True)

    @mock.patch('nova.virt.libvirt.LibvirtDriver.get_info')
    @mock.patch('nova.virt.libvirt.LibvirtDriver._create_domain_and_network')
    @mock.patch('nova.virt.libvirt.LibvirtDriver._get_guest_xml')
    @mock.patch('nova.virt.libvirt.LibvirtDriver.'
                '_get_instance_disk_info_from_config')
    @mock.patch('nova.virt.libvirt.LibvirtDriver.destroy')
    @mock.patch('nova.virt.libvirt.LibvirtDriver.'
                '_get_all_assigned_mediated_devices')
    def test_hard_reboot(self, mock_get_mdev, mock_destroy,
                         mock_get_disk_info, mock_get_guest_xml,
                         mock_create_domain_and_network, mock_get_info):
        self.context.auth_token = True  # any non-None value will suffice
        instance = objects.Instance(**self.test_instance)
        network_info = _fake_network_info(self, 1)
        block_device_info = None

        # minimal domain XML with a root disk and an ephemeral disk; the
        # element markup is a reconstruction (it was lost in extraction) and
        # only flows through mocks here
        dummyxml = ("<domain type='kvm'><name>instance-0000000a</name>"
                    "<devices>"
                    "<disk type='file'><driver name='qemu' type='raw'/>"
                    "<source file='/test/disk'/>"
                    "<target dev='vda' bus='virtio'/></disk>"
                    "<disk type='file'><driver name='qemu' type='qcow2'/>"
                    "<source file='/test/disk.local'/>"
                    "<target dev='vdb' bus='virtio'/></disk>"
                    "</devices></domain>")

        mock_get_mdev.return_value = {uuids.mdev1: uuids.inst1}

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)

        return_values = [hardware.InstanceInfo(state=power_state.SHUTDOWN),
                         hardware.InstanceInfo(state=power_state.RUNNING)]
        mock_get_info.side_effect = return_values

        mock_get_guest_xml.return_value = dummyxml
        mock_get_disk_info.return_value = \
            fake_disk_info_byname(instance).values()

        backend = self.useFixture(fake_imagebackend.ImageBackendFixture())

        with mock.patch('os.path.exists', return_value=True):
            drvr._hard_reboot(self.context, instance, network_info,
                              block_device_info)

        disks = backend.disks

        # NOTE(mdbooth): _create_images_and_backing() passes a full path in
        # 'disk_name' when creating a disk. This is wrong, but happens to
        # work due to handling by each individual backend. This will be
        # fixed in a subsequent commit.
        #
        # We translate all the full paths into disk names here to make the
        # test readable
        disks = {os.path.basename(name): value
                 for name, value in disks.items()}

        # We should have called cache() on the root and ephemeral disks
        for name in ('disk', 'disk.local'):
            self.assertTrue(disks[name].cache.called)

        mock_get_mdev.assert_called_once_with(instance)
        mock_destroy.assert_called_once_with(self.context, instance,
                                             network_info,
                                             destroy_disks=False,
                                             block_device_info=
                                             block_device_info)
        mock_get_guest_xml.assert_called_once_with(self.context, instance,
            network_info, mock.ANY, mock.ANY,
            block_device_info=block_device_info, mdevs=[uuids.mdev1])
        mock_create_domain_and_network.assert_called_once_with(self.context,
            dummyxml, instance, network_info,
            block_device_info=block_device_info, reboot=True)

    @mock.patch('oslo_utils.fileutils.ensure_tree')
    @mock.patch('oslo_service.loopingcall.FixedIntervalLoopingCall')
    @mock.patch('nova.pci.manager.get_instance_pci_devs')
    @mock.patch('nova.virt.libvirt.LibvirtDriver.'
                '_prepare_pci_devices_for_use')
    @mock.patch('nova.virt.libvirt.LibvirtDriver.'
                '_create_domain_and_network')
    @mock.patch('nova.virt.libvirt.LibvirtDriver.'
                '_create_images_and_backing')
    @mock.patch('nova.virt.libvirt.LibvirtDriver.'
                '_get_instance_disk_info_from_config')
    @mock.patch('nova.virt.libvirt.utils.write_to_file')
    @mock.patch('nova.virt.libvirt.utils.get_instance_path')
    @mock.patch('nova.virt.libvirt.LibvirtDriver._get_guest_config')
    @mock.patch('nova.virt.libvirt.blockinfo.get_disk_info')
    @mock.patch('nova.virt.libvirt.LibvirtDriver._destroy')
    @mock.patch('nova.virt.libvirt.LibvirtDriver.'
                '_get_all_assigned_mediated_devices')
    def test_hard_reboot_does_not_call_glance_show(self,
            mock_get_mdev, mock_destroy, mock_get_disk_info,
            mock_get_guest_config, mock_get_instance_path,
            mock_write_to_file,
            mock_get_instance_disk_info, mock_create_images_and_backing,
            mock_create_domand_and_network,
            mock_prepare_pci_devices_for_use, mock_get_instance_pci_devs,
            mock_looping_call, mock_ensure_tree):
        """For a hard reboot, we shouldn't need an additional call to glance
        to get the image metadata.

        This is important for automatically spinning up instances on a
        host-reboot, since we won't have a user request context that'll
        allow the Glance request to go through. We have to rely on the
        cached image metadata, instead.

        https://bugs.launchpad.net/nova/+bug/1339386
        """
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        instance = objects.Instance(**self.test_instance)

        mock_get_mdev.return_value = {}

        network_info = mock.MagicMock()
        block_device_info = mock.MagicMock()
        mock_get_disk_info.return_value = {}
        mock_get_guest_config.return_value = mock.MagicMock()
        mock_get_instance_path.return_value = '/foo'
        mock_looping_call.return_value = mock.MagicMock()
        drvr._image_api = mock.MagicMock()

        drvr._hard_reboot(self.context, instance, network_info,
                          block_device_info)

        self.assertFalse(drvr._image_api.get.called)
        mock_ensure_tree.assert_called_once_with('/foo')

    def test_suspend(self):
        guest = libvirt_guest.Guest(FakeVirtDomain(id=1))
        dom = guest._domain

        instance = objects.Instance(**self.test_instance)
        instance.ephemeral_key_uuid = None

        conn = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)

        @mock.patch.object(dmcrypt, 'delete_volume')
        @mock.patch.object(conn, '_get_instance_disk_info_from_config',
                           return_value=[])
        @mock.patch.object(conn, '_detach_mediated_devices')
        @mock.patch.object(conn, '_detach_direct_passthrough_ports')
        @mock.patch.object(conn, '_detach_pci_devices')
        @mock.patch.object(pci_manager, 'get_instance_pci_devs',
                           return_value='pci devs')
        @mock.patch.object(conn._host, 'get_guest', return_value=guest)
        def suspend(mock_get_guest, mock_get_instance_pci_devs,
                    mock_detach_pci_devices,
                    mock_detach_direct_passthrough_ports,
                    mock_detach_mediated_devices,
                    mock_get_instance_disk_info,
                    mock_delete_volume):
            mock_managedSave = mock.Mock()
            dom.managedSave = mock_managedSave

            conn.suspend(self.context, instance)

            mock_managedSave.assert_called_once_with(0)
            self.assertFalse(mock_get_instance_disk_info.called)
            mock_delete_volume.assert_has_calls([mock.call(disk['path'])
                for disk in mock_get_instance_disk_info.return_value],
                False)

        suspend()
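
    # NOTE: suspend() is implemented as virDomainManagedSave(0) after the
    # PCI/SR-IOV and mediated devices are detached; with ephemeral_key_uuid
    # set to None there are no dmcrypt volumes to clean up, so the disk-info
    # lookup must not be made, which is what the nested suspend() helper
    # above asserts.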

    @mock.patch.object(time, 'sleep')
    @mock.patch.object(libvirt_driver.LibvirtDriver, '_create_domain')
    @mock.patch.object(host.Host, '_get_domain')
    def _test_clean_shutdown(self, mock_get_domain, mock_create_domain,
                             mock_sleep, seconds_to_shutdown, timeout,
                             retry_interval, shutdown_attempts, succeeds):
        info_tuple = ('fake', 'fake', 'fake', 'also_fake')
        shutdown_count = []

        # Mock domain
        mock_domain = mock.Mock(fakelibvirt.virDomain)
        return_infos = [(libvirt_guest.VIR_DOMAIN_RUNNING,) + info_tuple]
        return_shutdowns = [shutdown_count.append("shutdown")]
        retry_countdown = retry_interval
        for x in range(min(seconds_to_shutdown, timeout)):
            return_infos.append(
                (libvirt_guest.VIR_DOMAIN_RUNNING,) + info_tuple)
            if retry_countdown == 0:
                return_shutdowns.append(shutdown_count.append("shutdown"))
                retry_countdown = retry_interval
            else:
                retry_countdown -= 1

        if seconds_to_shutdown < timeout:
            return_infos.append(
                (libvirt_guest.VIR_DOMAIN_SHUTDOWN,) + info_tuple)

        mock_domain.info.side_effect = return_infos
        mock_domain.shutdown.side_effect = return_shutdowns

        def fake_create_domain(**kwargs):
            self.reboot_create_called = True

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        instance = objects.Instance(**self.test_instance)
        mock_get_domain.return_value = mock_domain
        mock_create_domain.side_effect = fake_create_domain
        result = drvr._clean_shutdown(instance, timeout, retry_interval)
        self.assertEqual(succeeds, result)
        self.assertEqual(shutdown_attempts, len(shutdown_count))

    def test_clean_shutdown_first_time(self):
        self._test_clean_shutdown(seconds_to_shutdown=2,
                                  timeout=5,
                                  retry_interval=3,
                                  shutdown_attempts=1,
                                  succeeds=True)

    def test_clean_shutdown_with_retry(self):
        self._test_clean_shutdown(seconds_to_shutdown=4,
                                  timeout=5,
                                  retry_interval=3,
                                  shutdown_attempts=2,
                                  succeeds=True)

    def test_clean_shutdown_failure(self):
        self._test_clean_shutdown(seconds_to_shutdown=6,
                                  timeout=5,
                                  retry_interval=3,
                                  shutdown_attempts=2,
                                  succeeds=False)

    def test_clean_shutdown_no_wait(self):
        self._test_clean_shutdown(seconds_to_shutdown=6,
                                  timeout=0,
                                  retry_interval=3,
                                  shutdown_attempts=1,
                                  succeeds=False)

    @mock.patch.object(FakeVirtDomain, 'attachDeviceFlags')
    @mock.patch.object(FakeVirtDomain, 'ID', return_value=1)
    @mock.patch.object(utils, 'get_image_from_system_metadata',
                       return_value=None)
    def test_attach_direct_passthrough_ports(self,
            mock_get_image_metadata, mock_ID, mock_attachDevice):
        instance = objects.Instance(**self.test_instance)

        network_info = _fake_network_info(self, 1)
        network_info[0]['vnic_type'] = network_model.VNIC_TYPE_DIRECT
        guest = libvirt_guest.Guest(FakeVirtDomain())
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)

        drvr._attach_direct_passthrough_ports(
            self.context, instance, guest, network_info)
        mock_get_image_metadata.assert_called_once_with(
            instance.system_metadata)
        self.assertTrue(mock_attachDevice.called)

    @mock.patch.object(FakeVirtDomain, 'attachDeviceFlags')
    @mock.patch.object(FakeVirtDomain, 'ID', return_value=1)
    @mock.patch.object(utils, 'get_image_from_system_metadata',
                       return_value=None)
    def test_attach_direct_physical_passthrough_ports(self,
            mock_get_image_metadata, mock_ID, mock_attachDevice):
        instance = objects.Instance(**self.test_instance)

        network_info = _fake_network_info(self, 1)
        network_info[0]['vnic_type'] = \
            network_model.VNIC_TYPE_DIRECT_PHYSICAL
        guest = libvirt_guest.Guest(FakeVirtDomain())
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)

        drvr._attach_direct_passthrough_ports(
            self.context, instance, guest, network_info)
        mock_get_image_metadata.assert_called_once_with(
            instance.system_metadata)
        self.assertTrue(mock_attachDevice.called)

    @mock.patch.object(FakeVirtDomain, 'attachDeviceFlags')
    @mock.patch.object(FakeVirtDomain, 'ID', return_value=1)
    @mock.patch.object(utils, 'get_image_from_system_metadata',
                       return_value=None)
    def test_attach_direct_passthrough_ports_with_info_cache(self,
            mock_get_image_metadata, mock_ID, mock_attachDevice):
        instance = objects.Instance(**self.test_instance)

        network_info = _fake_network_info(self, 1)
        network_info[0]['vnic_type'] = network_model.VNIC_TYPE_DIRECT
        instance.info_cache = objects.InstanceInfoCache(
            network_info=network_info)
        guest = libvirt_guest.Guest(FakeVirtDomain())
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)

        drvr._attach_direct_passthrough_ports(
            self.context, instance, guest, None)
        mock_get_image_metadata.assert_called_once_with(
            instance.system_metadata)
        self.assertTrue(mock_attachDevice.called)

    @mock.patch.object(host.Host, 'has_min_version', return_value=True)
    def _test_detach_direct_passthrough_ports(self,
            mock_has_min_version, vif_type):
        instance = objects.Instance(**self.test_instance)

        expected_pci_slot = "0000:00:00.0"
        network_info = _fake_network_info(self, 1)
        network_info[0]['vnic_type'] = network_model.VNIC_TYPE_DIRECT
        # some more adjustments for the fake network_info so that
        # the correct get_config function will be executed (vif's
        # get_config_hw_veb - which is according to the real SRIOV vif)
        # and most importantly the pci_slot which is translated to
        # cfg.source_dev, then to PciDevice.address and sent to
        # _detach_pci_devices
        network_info[0]['profile'] = dict(pci_slot=expected_pci_slot)
        network_info[0]['type'] = vif_type
        network_info[0]['details'] = dict(vlan="2145")
        instance.info_cache = objects.InstanceInfoCache(
            network_info=network_info)
        # fill the pci_devices of the instance so that
        # pci_manager.get_instance_pci_devs will not return an empty list
        # which will eventually fail the assertion for detachDeviceFlags
        expected_pci_device_obj = (
            objects.PciDevice(address=expected_pci_slot, request_id=None))
        instance.pci_devices = objects.PciDeviceList()
        instance.pci_devices.objects = [expected_pci_device_obj]

        domain = FakeVirtDomain()
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        guest = libvirt_guest.Guest(domain)

        with mock.patch.object(drvr,
                               '_detach_pci_devices') as mock_detach_pci:
            drvr._detach_direct_passthrough_ports(
                self.context, instance, guest)
            mock_detach_pci.assert_called_once_with(
                guest, [expected_pci_device_obj])

    def test_detach_direct_passthrough_ports_interface_interface_hostdev(
            self):
        # Note: test detach_direct_passthrough_ports method for vif with
        # config LibvirtConfigGuestInterface
        self._test_detach_direct_passthrough_ports(vif_type="hw_veb")

    def test_detach_direct_passthrough_ports_interface_pci_hostdev(self):
        # Note: test detach_direct_passthrough_ports method for vif with
        # config LibvirtConfigGuestHostdevPCI
        self._test_detach_direct_passthrough_ports(vif_type="ib_hostdev")

    @mock.patch.object(host.Host, 'has_min_version', return_value=True)
    @mock.patch.object(FakeVirtDomain, 'detachDeviceFlags')
    def test_detach_duplicate_mac_direct_passthrough_ports(
            self, mock_detachDeviceFlags, mock_has_min_version):
        instance = objects.Instance(**self.test_instance)

        network_info = _fake_network_info(self, 2)

        for network_info_inst in network_info:
            network_info_inst['vnic_type'] = network_model.VNIC_TYPE_DIRECT
            network_info_inst['type'] = "hw_veb"
            network_info_inst['details'] = dict(vlan="2145")
            network_info_inst['address'] = "fa:16:3e:96:2a:48"

        network_info[0]['profile'] = dict(pci_slot="0000:00:00.0")
        network_info[1]['profile'] = dict(pci_slot="0000:00:00.1")

        instance.info_cache = objects.InstanceInfoCache(
            network_info=network_info)
        # fill the pci_devices of the instance so that
        # pci_manager.get_instance_pci_devs will not return an empty list
        # which will eventually fail the assertion for detachDeviceFlags
        instance.pci_devices = objects.PciDeviceList()
        instance.pci_devices.objects = [
            objects.PciDevice(address='0000:00:00.0', request_id=None),
            objects.PciDevice(address='0000:00:00.1', request_id=None)
        ]

        domain = FakeVirtDomain()
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        guest = libvirt_guest.Guest(domain)

        drvr._detach_direct_passthrough_ports(self.context, instance, guest)

        # the two hostdev XML documents expected to be detached, one per VF;
        # the element content below is a reconstruction (the original markup
        # was lost in extraction): a <hostdev> wrapping a <source>/<address>
        # that names each PCI slot
        expected_xml = [
            ('<hostdev mode="subsystem" type="pci" managed="yes">\n'
             '  <source>\n'
             '    <address bus="0x00" domain="0x0000" function="0x0" '
             'slot="0x00"/>\n'
             '  </source>\n'
             '</hostdev>\n'),
            ('<hostdev mode="subsystem" type="pci" managed="yes">\n'
             '  <source>\n'
             '    <address bus="0x00" domain="0x0000" function="0x1" '
             'slot="0x00"/>\n'
             '  </source>\n'
             '</hostdev>\n')
        ]

        mock_detachDeviceFlags.assert_has_calls([
            mock.call(expected_xml[0], flags=1),
            mock.call(expected_xml[1], flags=1)
        ])

    def test_resume(self):
        # same reconstructed two-disk domain XML as in test_hard_reboot; it
        # only flows through mocks
        dummyxml = ("<domain type='kvm'><name>instance-0000000a</name>"
                    "<devices>"
                    "<disk type='file'><driver name='qemu' type='raw'/>"
                    "<source file='/test/disk'/>"
                    "<target dev='vda' bus='virtio'/></disk>"
                    "<disk type='file'><driver name='qemu' type='qcow2'/>"
                    "<source file='/test/disk.local'/>"
                    "<target dev='vdb' bus='virtio'/></disk>"
                    "</devices></domain>")
        instance = objects.Instance(**self.test_instance)
        network_info = _fake_network_info(self, 1)
        block_device_info = None
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        guest = libvirt_guest.Guest('fake_dom')
        with test.nested(
            mock.patch.object(drvr, '_get_existing_domain_xml',
                              return_value=dummyxml),
            mock.patch.object(drvr, '_create_domain_and_network',
                              return_value=guest),
            mock.patch.object(drvr, '_attach_pci_devices'),
            mock.patch.object(pci_manager, 'get_instance_pci_devs',
                              return_value='fake_pci_devs'),
            mock.patch.object(utils, 'get_image_from_system_metadata'),
            mock.patch.object(guest, 'sync_guest_time'),
            mock.patch.object(drvr, '_wait_for_running',
                              side_effect=loopingcall.LoopingCallDone()),
        ) as (_get_existing_domain_xml, _create_domain_and_network,
              _attach_pci_devices, get_instance_pci_devs,
              get_image_metadata, mock_sync_time, mock_wait):
            get_image_metadata.return_value = {'bar': 234}

            drvr.resume(self.context, instance, network_info,
                        block_device_info)

            _get_existing_domain_xml.assert_has_calls([mock.call(instance,
                network_info, block_device_info)])
            _create_domain_and_network.assert_has_calls([mock.call(
                self.context, dummyxml, instance, network_info,
                block_device_info=block_device_info,
                vifs_already_plugged=True)])
            self.assertTrue(mock_sync_time.called)
            _attach_pci_devices.assert_has_calls([mock.call(guest,
                'fake_pci_devs')])

    @mock.patch.object(host.Host, '_get_domain')
    @mock.patch.object(libvirt_driver.LibvirtDriver, 'get_info')
    @mock.patch.object(libvirt_driver.LibvirtDriver,
                       'delete_instance_files')
    @mock.patch.object(objects.Instance, 'save')
    def test_destroy_undefines(self, mock_save, mock_delete_instance_files,
                               mock_get_info, mock_get_domain):
        dom_mock = mock.MagicMock()
        dom_mock.undefineFlags.return_value = 1

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        mock_get_domain.return_value = dom_mock
        mock_get_info.return_value = hardware.InstanceInfo(
            state=power_state.SHUTDOWN, internal_id=-1)
        mock_delete_instance_files.return_value = None

        instance = objects.Instance(self.context, **self.test_instance)
        drvr.destroy(self.context, instance, [])
        mock_save.assert_called_once_with()

    @mock.patch.object(rbd_utils.RBDDriver, '_destroy_volume')
    @mock.patch.object(rbd_utils.RBDDriver, '_disconnect_from_rados')
    @mock.patch.object(rbd_utils.RBDDriver, '_connect_to_rados')
    @mock.patch.object(rbd_utils, 'rbd')
    @mock.patch.object(rbd_utils, 'rados')
    def test_cleanup_rbd(self, mock_rados, mock_rbd, mock_connect,
                         mock_disconnect, mock_destroy_volume):
        mock_connect.return_value = mock.MagicMock(), mock.MagicMock()
        instance = objects.Instance(**self.test_instance)
        all_volumes = [uuids.other_instance + '_disk',
                       uuids.other_instance + '_disk.swap',
                       instance.uuid + '_disk',
                       instance.uuid + '_disk.swap']
        mock_rbd.RBD.return_value.list.return_value = all_volumes
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        drvr._cleanup_rbd(instance)
        calls = [mock.call(mock.ANY, instance.uuid + '_disk'),
                 mock.call(mock.ANY, instance.uuid + '_disk.swap')]
        mock_destroy_volume.assert_has_calls(calls)
        self.assertEqual(2, mock_destroy_volume.call_count)
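
    # NOTE: _cleanup_rbd() filters the pool listing by the instance uuid
    # prefix, so only '<uuid>_disk*' volumes are destroyed and volumes that
    # belong to other instances in the same pool are left alone; when a
    # resize is being reverted only the '_disk.local' volume is removed.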

    @mock.patch.object(rbd_utils.RBDDriver, '_destroy_volume')
    @mock.patch.object(rbd_utils.RBDDriver, '_disconnect_from_rados')
    @mock.patch.object(rbd_utils.RBDDriver, '_connect_to_rados')
    @mock.patch.object(rbd_utils, 'rbd')
    @mock.patch.object(rbd_utils, 'rados')
    def test_cleanup_rbd_resize_reverting(self, mock_rados, mock_rbd,
                                          mock_connect, mock_disconnect,
                                          mock_destroy_volume):
        mock_connect.return_value = mock.MagicMock(), mock.MagicMock()
        instance = objects.Instance(**self.test_instance)
        instance.task_state = task_states.RESIZE_REVERTING
        all_volumes = [uuids.other_instance + '_disk',
                       uuids.other_instance + '_disk.local',
                       instance.uuid + '_disk',
                       instance.uuid + '_disk.local']
        mock_rbd.RBD.return_value.list.return_value = all_volumes
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        drvr._cleanup_rbd(instance)
        mock_destroy_volume.assert_called_once_with(
            mock.ANY, instance.uuid + '_disk.local')

    @mock.patch.object(objects.Instance, 'save')
    def test_destroy_undefines_no_undefine_flags(self, mock_save):
        mock_domain = mock.Mock(fakelibvirt.virDomain)
        mock_domain.undefineFlags.side_effect = \
            fakelibvirt.libvirtError('Err')
        mock_domain.ID.return_value = 123

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        drvr._host._get_domain = mock.Mock(return_value=mock_domain)
        drvr._has_uefi_support = mock.Mock(return_value=False)
        drvr.delete_instance_files = mock.Mock(return_value=None)
        drvr.get_info = mock.Mock(return_value=hardware.InstanceInfo(
            state=power_state.SHUTDOWN, internal_id=-1))

        instance = objects.Instance(self.context, **self.test_instance)
        drvr.destroy(self.context, instance, [])

        self.assertEqual(2, mock_domain.ID.call_count)
        mock_domain.destroy.assert_called_once_with()
        mock_domain.undefineFlags.assert_called_once_with(1)
        mock_domain.undefine.assert_called_once_with()
        mock_save.assert_called_once_with()

    @mock.patch.object(objects.Instance, 'save')
    def test_destroy_undefines_no_attribute_with_managed_save(self,
                                                              mock_save):
        mock_domain = mock.Mock(fakelibvirt.virDomain)
        mock_domain.undefineFlags.side_effect = AttributeError()
        mock_domain.ID.return_value = 123

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        drvr._host._get_domain = mock.Mock(return_value=mock_domain)
        drvr._has_uefi_support = mock.Mock(return_value=False)
        drvr.delete_instance_files = mock.Mock(return_value=None)
        drvr.get_info = mock.Mock(return_value=hardware.InstanceInfo(
            state=power_state.SHUTDOWN, internal_id=-1))

        instance = objects.Instance(self.context, **self.test_instance)
        drvr.destroy(self.context, instance, [])

        self.assertEqual(1, mock_domain.ID.call_count)
        mock_domain.destroy.assert_called_once_with()
        mock_domain.undefineFlags.assert_called_once_with(1)
        mock_domain.hasManagedSaveImage.assert_has_calls([mock.call(0)])
        mock_domain.managedSaveRemove.assert_called_once_with(0)
        mock_domain.undefine.assert_called_once_with()
        mock_save.assert_called_once_with()

    @mock.patch.object(objects.Instance, 'save')
    def test_destroy_undefines_no_attribute_no_managed_save(self,
                                                            mock_save):
        mock_domain = mock.Mock(fakelibvirt.virDomain)
        mock_domain.undefineFlags.side_effect = AttributeError()
        mock_domain.hasManagedSaveImage.side_effect = AttributeError()
        mock_domain.ID.return_value = 123

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        drvr._host._get_domain = mock.Mock(return_value=mock_domain)
        drvr._has_uefi_support = mock.Mock(return_value=False)
        drvr.delete_instance_files = mock.Mock(return_value=None)
        drvr.get_info = mock.Mock(return_value=hardware.InstanceInfo(
            state=power_state.SHUTDOWN, internal_id=-1))

        instance = objects.Instance(self.context, **self.test_instance)
        drvr.destroy(self.context, instance, [])

        self.assertEqual(1, mock_domain.ID.call_count)
        mock_domain.destroy.assert_called_once_with()
        mock_domain.undefineFlags.assert_called_once_with(1)
        mock_domain.hasManagedSaveImage.assert_has_calls([mock.call(0)])
        mock_domain.undefine.assert_called_once_with()
        mock_save.assert_called_once_with()

    @mock.patch.object(objects.Instance, 'save')
    def test_destroy_removes_nvram(self, mock_save):
        mock_domain = mock.Mock(fakelibvirt.virDomain)
        mock_domain.ID.return_value = 123

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        drvr._host._get_domain = mock.Mock(return_value=mock_domain)
        drvr._has_uefi_support = mock.Mock(return_value=True)
        drvr.delete_instance_files = mock.Mock(return_value=None)
        drvr.get_info = mock.Mock(return_value=hardware.InstanceInfo(
            state=power_state.SHUTDOWN, internal_id=-1))

        instance = objects.Instance(self.context, **self.test_instance)
        drvr.destroy(self.context, instance, [])

        self.assertEqual(1, mock_domain.ID.call_count)
        mock_domain.destroy.assert_called_once_with()
        # undefineFlags should now be called with 5 as uefi is supported
        mock_domain.undefineFlags.assert_called_once_with(
            fakelibvirt.VIR_DOMAIN_UNDEFINE_MANAGED_SAVE |
            fakelibvirt.VIR_DOMAIN_UNDEFINE_NVRAM
        )
        mock_domain.undefine.assert_not_called()
        mock_save.assert_called_once_with()

    def test_destroy_timed_out(self):
        mock = self.mox.CreateMock(fakelibvirt.virDomain)
        mock.ID()
        mock.destroy().AndRaise(fakelibvirt.libvirtError("timed out"))
        self.mox.ReplayAll()

        def fake_get_domain(self, instance):
            return mock

        def fake_get_error_code(self):
            return fakelibvirt.VIR_ERR_OPERATION_TIMEOUT

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        self.stubs.Set(host.Host, '_get_domain', fake_get_domain)
        self.stubs.Set(fakelibvirt.libvirtError, 'get_error_code',
                       fake_get_error_code)
        instance = objects.Instance(**self.test_instance)
        self.assertRaises(exception.InstancePowerOffFailure,
                          drvr.destroy, self.context, instance, [])

    def test_private_destroy_not_found(self):
        ex = fakelibvirt.make_libvirtError(
            fakelibvirt.libvirtError,
            "No such domain",
            error_code=fakelibvirt.VIR_ERR_NO_DOMAIN)
        mock = self.mox.CreateMock(fakelibvirt.virDomain)
        mock.ID()
        mock.destroy().AndRaise(ex)
        mock.info().AndRaise(ex)
        mock.UUIDString()
        self.mox.ReplayAll()

        def fake_get_domain(instance):
            return mock

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        self.stubs.Set(drvr._host, '_get_domain', fake_get_domain)
        instance = objects.Instance(**self.test_instance)
        # NOTE(vish): verifies destroy doesn't raise if the instance
        # disappears
        drvr._destroy(instance)

    def test_private_destroy_lxc_processes_refused_to_die(self):
        self.flags(virt_type='lxc', group='libvirt')
        ex = fakelibvirt.make_libvirtError(
            fakelibvirt.libvirtError, "",
            error_message="internal error: Some processes refused to die",
            error_code=fakelibvirt.VIR_ERR_INTERNAL_ERROR)

        conn = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)

        with mock.patch.object(conn._host, '_get_domain') \
                as mock_get_domain, \
                mock.patch.object(conn, 'get_info') as mock_get_info:
            mock_domain = mock.MagicMock()
            mock_domain.ID.return_value = 1
            mock_get_domain.return_value = mock_domain
            mock_domain.destroy.side_effect = ex

            mock_info = mock.MagicMock()
            mock_info.internal_id = 1
            mock_info.state = power_state.SHUTDOWN
            mock_get_info.return_value = mock_info

            instance = objects.Instance(**self.test_instance)
            conn._destroy(instance)
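
    # NOTE: the undefine tests above walk the fallback chain used when
    # tearing down a domain: undefineFlags() is tried first; if the libvirt
    # binding is too old to support flags (AttributeError), any managed save
    # image is removed and a plain undefine() is issued; with UEFI support
    # the NVRAM flag is OR-ed into the undefineFlags() value.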

    def test_private_destroy_processes_refused_to_die_still_raises(self):
        ex = fakelibvirt.make_libvirtError(
            fakelibvirt.libvirtError, "",
            error_message="internal error: Some processes refused to die",
            error_code=fakelibvirt.VIR_ERR_INTERNAL_ERROR)

        conn = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)

        with mock.patch.object(conn._host,
                               '_get_domain') as mock_get_domain:
            mock_domain = mock.MagicMock()
            mock_domain.ID.return_value = 1
            mock_get_domain.return_value = mock_domain
            mock_domain.destroy.side_effect = ex

            instance = objects.Instance(**self.test_instance)
            self.assertRaises(fakelibvirt.libvirtError, conn._destroy,
                              instance)

    def test_private_destroy_ebusy_timeout(self):
        # Tests that _destroy will retry 3 times to destroy the guest when
        # an EBUSY is raised, but eventually times out and raises the
        # libvirtError
        ex = fakelibvirt.make_libvirtError(
            fakelibvirt.libvirtError,
            ("Failed to terminate process 26425 with SIGKILL: "
             "Device or resource busy"),
            error_code=fakelibvirt.VIR_ERR_SYSTEM_ERROR,
            int1=errno.EBUSY)

        mock_guest = mock.Mock(libvirt_guest.Guest, id=1)
        mock_guest.poweroff = mock.Mock(side_effect=ex)

        instance = objects.Instance(**self.test_instance)
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)

        with mock.patch.object(drvr._host, 'get_guest',
                               return_value=mock_guest):
            self.assertRaises(fakelibvirt.libvirtError, drvr._destroy,
                              instance)

        self.assertEqual(3, mock_guest.poweroff.call_count)

    def test_private_destroy_ebusy_multiple_attempt_ok(self):
        # Tests that the _destroy attempt loop is broken when EBUSY is no
        # longer raised.
        ex = fakelibvirt.make_libvirtError(
            fakelibvirt.libvirtError,
            ("Failed to terminate process 26425 with SIGKILL: "
             "Device or resource busy"),
            error_code=fakelibvirt.VIR_ERR_SYSTEM_ERROR,
            int1=errno.EBUSY)

        mock_guest = mock.Mock(libvirt_guest.Guest, id=1)
        mock_guest.poweroff = mock.Mock(side_effect=[ex, None])

        inst_info = hardware.InstanceInfo(power_state.SHUTDOWN,
                                          internal_id=1)
        instance = objects.Instance(**self.test_instance)
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)

        with mock.patch.object(drvr._host, 'get_guest',
                               return_value=mock_guest):
            with mock.patch.object(drvr, 'get_info',
                                   return_value=inst_info):
                drvr._destroy(instance)

        self.assertEqual(2, mock_guest.poweroff.call_count)

    def test_undefine_domain_with_not_found_instance(self):
        def fake_get_domain(self, instance):
            raise exception.InstanceNotFound(instance_id=instance.uuid)

        self.stubs.Set(host.Host, '_get_domain', fake_get_domain)
        self.mox.StubOutWithMock(fakelibvirt.libvirtError,
                                 "get_error_code")

        self.mox.ReplayAll()
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        instance = objects.Instance(**self.test_instance)

        # NOTE(wenjianhn): verifies undefine doesn't raise if the
        # instance disappears
        drvr._undefine_domain(instance)

    @mock.patch.object(libvirt_driver.LibvirtDriver, "_has_uefi_support")
    @mock.patch.object(host.Host, "get_guest")
    def test_undefine_domain_handles_libvirt_errors(self, mock_get,
                                                    mock_has_uefi):
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        instance = objects.Instance(**self.test_instance)
        fake_guest = mock.Mock()
        mock_get.return_value = fake_guest

        unexpected = fakelibvirt.make_libvirtError(
            fakelibvirt.libvirtError, "Random", error_code=1)
        fake_guest.delete_configuration.side_effect = unexpected

        # ensure raise unexpected error code
        self.assertRaises(type(unexpected), drvr._undefine_domain,
                          instance)

        ignored = fakelibvirt.make_libvirtError(
            fakelibvirt.libvirtError, "No such domain",
            error_code=fakelibvirt.VIR_ERR_NO_DOMAIN)
        fake_guest.delete_configuration.side_effect = ignored

        # ensure no raise for no such domain
        drvr._undefine_domain(instance)

    @mock.patch.object(host.Host, "list_instance_domains")
    @mock.patch.object(objects.BlockDeviceMappingList,
                       "bdms_by_instance_uuid")
    @mock.patch.object(objects.InstanceList, "get_by_filters")
    def test_disk_over_committed_size_total(self, mock_get, mock_bdms,
                                            mock_list):
        # Ensure the over-committed disk sizes of all domains are summed
        # correctly.
        class DiagFakeDomain(object):
            def __init__(self, name):
                self._name = name
                self._uuid = uuids.fake

            def ID(self):
                return 1

            def name(self):
                return self._name

            def UUIDString(self):
                return self._uuid

            def XMLDesc(self, flags):
                return "<domain><name>%s</name></domain>" % self._name

        instance_domains = [
            DiagFakeDomain("instance0000001"),
            DiagFakeDomain("instance0000002")]
        mock_list.return_value = instance_domains

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)

        fake_disks = {'instance0000001':
                      [{'type': 'qcow2', 'path': '/somepath/disk1',
                        'virt_disk_size': '10737418240',
                        'backing_file': '/somepath/disk1',
                        'disk_size': '83886080',
                        'over_committed_disk_size': '10653532160'}],
                      'instance0000002':
                      [{'type': 'raw', 'path': '/somepath/disk2',
                        'virt_disk_size': '0',
                        'backing_file': '/somepath/disk2',
                        'disk_size': '10737418240',
                        'over_committed_disk_size': '0'}]}

        def get_info(cfg, block_device_info):
            return fake_disks.get(cfg.name)

        instance_uuids = [dom.UUIDString() for dom in instance_domains]
        instances = [objects.Instance(
            uuid=instance_uuids[0],
            root_device_name='/dev/vda'),
            objects.Instance(
            uuid=instance_uuids[1],
            root_device_name='/dev/vdb')
        ]
        mock_get.return_value = instances

        with mock.patch.object(
                drvr, "_get_instance_disk_info_from_config") as mock_info:
            mock_info.side_effect = get_info

            result = drvr._get_disk_over_committed_size_total()
            self.assertEqual(result, 10653532160)
            mock_list.assert_called_once_with(only_running=False)
            self.assertEqual(2, mock_info.call_count)

        filters = {'uuid': instance_uuids}
        mock_get.assert_called_once_with(mock.ANY, filters, use_slave=True)
        mock_bdms.assert_called_with(mock.ANY, instance_uuids)
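
    # NOTE: 'over_committed_disk_size' is the difference between a disk's
    # virtual size and the bytes actually allocated on the host, i.e.
    # roughly how much a sparse or qcow2 image can still grow; the driver
    # sums it over every domain returned by
    # list_instance_domains(only_running=False).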
class DiagFakeDomain(object): def __init__(self, name): self._name = name self._uuid = uuidutils.generate_uuid() def ID(self): return 1 def name(self): return self._name def UUIDString(self): return self._uuid def XMLDesc(self, flags): return "%s" % self._name instance_domains = [ DiagFakeDomain("instance0000001"), DiagFakeDomain("instance0000002"), DiagFakeDomain("instance0000003"), DiagFakeDomain("instance0000004"), DiagFakeDomain("instance0000005")] mock_list.return_value = instance_domains drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) fake_disks = {'instance0000001': [{'type': 'qcow2', 'path': '/somepath/disk1', 'virt_disk_size': '10737418240', 'backing_file': '/somepath/disk1', 'disk_size': '83886080', 'over_committed_disk_size': '10653532160'}], 'instance0000002': [{'type': 'raw', 'path': '/somepath/disk2', 'virt_disk_size': '0', 'backing_file': '/somepath/disk2', 'disk_size': '10737418240', 'over_committed_disk_size': '21474836480'}], 'instance0000003': [{'type': 'raw', 'path': '/somepath/disk3', 'virt_disk_size': '0', 'backing_file': '/somepath/disk3', 'disk_size': '21474836480', 'over_committed_disk_size': '32212254720'}], 'instance0000004': [{'type': 'raw', 'path': '/somepath/disk4', 'virt_disk_size': '0', 'backing_file': '/somepath/disk4', 'disk_size': '32212254720', 'over_committed_disk_size': '42949672960'}]} def side_effect(cfg, block_device_info): if cfg.name == 'instance0000001': self.assertEqual('/dev/vda', block_device_info['root_device_name']) raise OSError(errno.ENOENT, 'No such file or directory') if cfg.name == 'instance0000002': self.assertEqual('/dev/vdb', block_device_info['root_device_name']) raise OSError(errno.ESTALE, 'Stale NFS file handle') if cfg.name == 'instance0000003': self.assertEqual('/dev/vdc', block_device_info['root_device_name']) raise OSError(errno.EACCES, 'Permission denied') if cfg.name == 'instance0000004': self.assertEqual('/dev/vdd', block_device_info['root_device_name']) return fake_disks.get(cfg.name) get_disk_info = mock.Mock() get_disk_info.side_effect = side_effect drvr._get_instance_disk_info_from_config = get_disk_info instance_uuids = [dom.UUIDString() for dom in instance_domains] instances = [objects.Instance( uuid=instance_uuids[0], root_device_name='/dev/vda'), objects.Instance( uuid=instance_uuids[1], root_device_name='/dev/vdb'), objects.Instance( uuid=instance_uuids[2], root_device_name='/dev/vdc'), objects.Instance( uuid=instance_uuids[3], root_device_name='/dev/vdd'), ] mock_get.return_value = instances # NOTE(danms): We need to have found bdms for our instances, # but we don't really need them to be complete as we just need # to make it to our side_effect above. Exclude the last domain # to simulate the case where we have an instance with no BDMs. 
mock_bdms.return_value = {uuid: [] for uuid in instance_uuids
                                  if uuid != instance_domains[-1].UUIDString()}

        result = drvr._get_disk_over_committed_size_total()
        self.assertEqual(42949672960, result)
        mock_list.assert_called_once_with(only_running=False)
        self.assertEqual(5, get_disk_info.call_count)
        filters = {'uuid': instance_uuids}
        mock_get.assert_called_once_with(mock.ANY, filters, use_slave=True)
        mock_bdms.assert_called_with(mock.ANY, instance_uuids)

    @mock.patch.object(host.Host, "list_instance_domains")
    @mock.patch.object(libvirt_driver.LibvirtDriver,
                       "_get_instance_disk_info_from_config",
                       side_effect=exception.VolumeBDMPathNotFound(path='bar'))
    @mock.patch.object(objects.BlockDeviceMappingList, "bdms_by_instance_uuid")
    @mock.patch.object(objects.InstanceList, "get_by_filters")
    def test_disk_over_committed_size_total_bdm_not_found(self,
                                                          mock_get,
                                                          mock_bdms,
                                                          mock_get_disk_info,
                                                          mock_list_domains):
        mock_dom = mock.Mock()
        mock_dom.XMLDesc.return_value = "<domain/>"
        mock_list_domains.return_value = [mock_dom]
        # Tests that we handle VolumeBDMPathNotFound gracefully.
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        self.assertEqual(0, drvr._get_disk_over_committed_size_total())

    def test_cpu_info(self):
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)

        def get_host_capabilities_stub(self):
            cpu = vconfig.LibvirtConfigCPU()
            cpu.model = "Opteron_G4"
            cpu.vendor = "AMD"
            cpu.arch = fields.Architecture.X86_64

            cpu.cells = 1
            cpu.cores = 2
            cpu.threads = 1
            cpu.sockets = 4

            cpu.add_feature(vconfig.LibvirtConfigCPUFeature("extapic"))
            cpu.add_feature(vconfig.LibvirtConfigCPUFeature("3dnow"))

            caps = vconfig.LibvirtConfigCaps()
            caps.host = vconfig.LibvirtConfigCapsHost()
            caps.host.cpu = cpu

            guest = vconfig.LibvirtConfigGuest()
            guest.ostype = fields.VMMode.HVM
            guest.arch = fields.Architecture.X86_64
            guest.domtype = ["kvm"]
            caps.guests.append(guest)

            guest = vconfig.LibvirtConfigGuest()
            guest.ostype = fields.VMMode.HVM
            guest.arch = fields.Architecture.I686
            guest.domtype = ["kvm"]
            caps.guests.append(guest)

            return caps

        self.stubs.Set(host.Host, "get_capabilities",
                       get_host_capabilities_stub)

        want = {"vendor": "AMD",
                "features": set(["extapic", "3dnow"]),
                "model": "Opteron_G4",
                "arch": fields.Architecture.X86_64,
                "topology": {"cells": 1, "cores": 2, "threads": 1,
                             "sockets": 4}}
        got = drvr._get_cpu_info()
        self.assertEqual(want, got)

    def test_get_pcinet_info(self):
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        dev_name = "net_enp2s2_02_9a_a1_37_be_54"
        parent_address = "pci_0000_04_11_7"
        node_dev = FakeNodeDevice(_fake_NodeDevXml[dev_name])

        with mock.patch.object(pci_utils, 'get_net_name_by_vf_pci_address',
                               return_value=dev_name) as mock_get_net_name, \
                mock.patch.object(drvr._host, 'device_lookup_by_name',
                                  return_value=node_dev) as mock_dev_lookup:
            actualvf = drvr._get_pcinet_info(parent_address)
            expect_vf = {
                "name": dev_name,
                "capabilities": ["rx", "tx", "sg", "tso", "gso", "gro",
                                 "rxvlan", "txvlan"]
            }
            self.assertEqual(expect_vf, actualvf)
            mock_get_net_name.assert_called_once_with(parent_address)
            mock_dev_lookup.assert_called_once_with(dev_name)

    def test_get_pcidev_info(self):

        def fake_nodeDeviceLookupByName(self, name):
            return FakeNodeDevice(_fake_NodeDevXml[name])

        self.mox.StubOutWithMock(host.Host, 'device_lookup_by_name')
        host.Host.device_lookup_by_name = fake_nodeDeviceLookupByName

        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False)
        with mock.patch.object(
                fakelibvirt.Connection, 'getLibVersion') as mock_lib_version:
            mock_lib_version.return_value = (
versionutils.convert_version_to_int( libvirt_driver.MIN_LIBVIRT_PF_WITH_NO_VFS_CAP_VERSION) - 1) actualvf = drvr._get_pcidev_info("pci_0000_04_00_3") expect_vf = { "dev_id": "pci_0000_04_00_3", "address": "0000:04:00.3", "product_id": '1521', "numa_node": None, "vendor_id": '8086', "label": 'label_8086_1521', "dev_type": fields.PciDeviceType.SRIOV_PF, } self.assertEqual(expect_vf, actualvf) actualvf = drvr._get_pcidev_info("pci_0000_04_10_7") expect_vf = { "dev_id": "pci_0000_04_10_7", "address": "0000:04:10.7", "product_id": '1520', "numa_node": None, "vendor_id": '8086', "label": 'label_8086_1520', "dev_type": fields.PciDeviceType.SRIOV_VF, "parent_addr": '0000:04:00.3', } self.assertEqual(expect_vf, actualvf) with mock.patch.object(pci_utils, 'get_net_name_by_vf_pci_address', return_value="net_enp2s2_02_9a_a1_37_be_54"): actualvf = drvr._get_pcidev_info("pci_0000_04_11_7") expect_vf = { "dev_id": "pci_0000_04_11_7", "address": "0000:04:11.7", "product_id": '1520', "vendor_id": '8086', "numa_node": 0, "label": 'label_8086_1520', "dev_type": fields.PciDeviceType.SRIOV_VF, "parent_addr": '0000:04:00.3', "capabilities": { "network": ["rx", "tx", "sg", "tso", "gso", "gro", "rxvlan", "txvlan"]}, } self.assertEqual(expect_vf, actualvf) with mock.patch.object( pci_utils, 'is_physical_function', return_value=True): actualvf = drvr._get_pcidev_info("pci_0000_04_00_1") expect_vf = { "dev_id": "pci_0000_04_00_1", "address": "0000:04:00.1", "product_id": '1013', "numa_node": 0, "vendor_id": '15b3', "label": 'label_15b3_1013', "dev_type": fields.PciDeviceType.SRIOV_PF, } self.assertEqual(expect_vf, actualvf) with mock.patch.object( pci_utils, 'is_physical_function', return_value=False): actualvf = drvr._get_pcidev_info("pci_0000_04_00_1") expect_vf = { "dev_id": "pci_0000_04_00_1", "address": "0000:04:00.1", "product_id": '1013', "numa_node": 0, "vendor_id": '15b3', "label": 'label_15b3_1013', "dev_type": fields.PciDeviceType.STANDARD, } self.assertEqual(expect_vf, actualvf) with mock.patch.object( fakelibvirt.Connection, 'getLibVersion') as mock_lib_version: mock_lib_version.return_value = ( versionutils.convert_version_to_int( libvirt_driver.MIN_LIBVIRT_PF_WITH_NO_VFS_CAP_VERSION)) actualvf = drvr._get_pcidev_info("pci_0000_03_00_0") expect_vf = { "dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": '1013', "numa_node": 0, "vendor_id": '15b3', "label": 'label_15b3_1013', "dev_type": fields.PciDeviceType.SRIOV_PF, } self.assertEqual(expect_vf, actualvf) actualvf = drvr._get_pcidev_info("pci_0000_03_00_1") expect_vf = { "dev_id": "pci_0000_03_00_1", "address": "0000:03:00.1", "product_id": '1013', "numa_node": 0, "vendor_id": '15b3', "label": 'label_15b3_1013', "dev_type": fields.PciDeviceType.SRIOV_PF, } self.assertEqual(expect_vf, actualvf) def test_list_devices_not_supported(self): drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) # Handle just the NO_SUPPORT error not_supported_exc = fakelibvirt.make_libvirtError( fakelibvirt.libvirtError, 'this function is not supported by the connection driver:' ' virNodeNumOfDevices', error_code=fakelibvirt.VIR_ERR_NO_SUPPORT) with mock.patch.object(drvr._conn, 'listDevices', side_effect=not_supported_exc): self.assertEqual('[]', drvr._get_pci_passthrough_devices()) # We cache not supported status to avoid emitting too many logging # messages. Clear this value to test the other exception case. 
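# Aside: the "del drvr._list_devices_supported" below works because the
# driver memoizes its not-supported discovery as an instance attribute, so
# the warning is only logged once; deleting the attribute forces the next
# call to probe again. A minimal sketch of that memoization idiom (the class
# and names here are illustrative, not the real driver):
def _example_cached_capability_probe():
    class Driver(object):
        def _probe(self):
            return False  # pretend the hypervisor lacks the API

        def get_devices(self):
            # Probe once and reuse the cached verdict on later calls.
            if not hasattr(self, '_list_devices_supported'):
                self._list_devices_supported = self._probe()
            return ['dev0'] if self._list_devices_supported else []

    drv = Driver()
    assert drv.get_devices() == []
    # A test can clear the cached verdict to exercise the other branch:
    del drv._list_devices_supported
    drv._probe = lambda: True
    assert drv.get_devices() == ['dev0']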
del drvr._list_devices_supported # Other errors should not be caught other_exc = fakelibvirt.make_libvirtError( fakelibvirt.libvirtError, 'other exc', error_code=fakelibvirt.VIR_ERR_NO_DOMAIN) with mock.patch.object(drvr._conn, 'listDevices', side_effect=other_exc): self.assertRaises(fakelibvirt.libvirtError, drvr._get_pci_passthrough_devices) def test_get_pci_passthrough_devices(self): def fakelistDevices(caps, fakeargs=0): return ['pci_0000_04_00_3', 'pci_0000_04_10_7', 'pci_0000_04_11_7'] self.mox.StubOutWithMock(libvirt_driver.LibvirtDriver, '_conn') libvirt_driver.LibvirtDriver._conn.listDevices = fakelistDevices def fake_nodeDeviceLookupByName(self, name): return FakeNodeDevice(_fake_NodeDevXml[name]) self.mox.StubOutWithMock(host.Host, 'device_lookup_by_name') host.Host.device_lookup_by_name = fake_nodeDeviceLookupByName drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) actjson = drvr._get_pci_passthrough_devices() expectvfs = [ { "dev_id": "pci_0000_04_00_3", "address": "0000:04:00.3", "product_id": '1521', "vendor_id": '8086', "dev_type": fields.PciDeviceType.SRIOV_PF, "phys_function": None, "numa_node": None}, { "dev_id": "pci_0000_04_10_7", "domain": 0, "address": "0000:04:10.7", "product_id": '1520', "vendor_id": '8086', "numa_node": None, "dev_type": fields.PciDeviceType.SRIOV_VF, "phys_function": [('0x0000', '0x04', '0x00', '0x3')]}, { "dev_id": "pci_0000_04_11_7", "domain": 0, "address": "0000:04:11.7", "product_id": '1520', "vendor_id": '8086', "numa_node": 0, "dev_type": fields.PciDeviceType.SRIOV_VF, "phys_function": [('0x0000', '0x04', '0x00', '0x3')], } ] actualvfs = jsonutils.loads(actjson) for dev in range(len(actualvfs)): for key in actualvfs[dev].keys(): if key not in ['phys_function', 'virt_functions', 'label']: self.assertEqual(expectvfs[dev][key], actualvfs[dev][key]) def _test_get_host_numa_topology(self, mempages): caps = vconfig.LibvirtConfigCaps() caps.host = vconfig.LibvirtConfigCapsHost() caps.host.cpu = vconfig.LibvirtConfigCPU() caps.host.cpu.arch = fields.Architecture.X86_64 caps.host.topology = fakelibvirt.NUMATopology() if mempages: for i, cell in enumerate(caps.host.topology.cells): cell.mempages = fakelibvirt.create_mempages( [(4, 1024 * i), (2048, i)]) drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) expected_topo_dict = {'cells': [ {'cpus': '0,1', 'cpu_usage': 0, 'mem': {'total': 256, 'used': 0}, 'id': 0}, {'cpus': '3', 'cpu_usage': 0, 'mem': {'total': 256, 'used': 0}, 'id': 1}, {'cpus': '', 'cpu_usage': 0, 'mem': {'total': 256, 'used': 0}, 'id': 2}, {'cpus': '', 'cpu_usage': 0, 'mem': {'total': 256, 'used': 0}, 'id': 3}]} with test.nested( mock.patch.object(host.Host, "get_capabilities", return_value=caps), mock.patch.object( hardware, 'get_vcpu_pin_set', return_value=set([0, 1, 3, 4, 5])), mock.patch.object(host.Host, 'get_online_cpus', return_value=set([0, 1, 2, 3, 6])), ): got_topo = drvr._get_host_numa_topology() got_topo_dict = got_topo._to_dict() self.assertThat( expected_topo_dict, matchers.DictMatches(got_topo_dict)) if mempages: # cells 0 self.assertEqual(4, got_topo.cells[0].mempages[0].size_kb) self.assertEqual(0, got_topo.cells[0].mempages[0].total) self.assertEqual(2048, got_topo.cells[0].mempages[1].size_kb) self.assertEqual(0, got_topo.cells[0].mempages[1].total) # cells 1 self.assertEqual(4, got_topo.cells[1].mempages[0].size_kb) self.assertEqual(1024, got_topo.cells[1].mempages[0].total) self.assertEqual(2048, got_topo.cells[1].mempages[1].size_kb) self.assertEqual(1, got_topo.cells[1].mempages[1].total) 
else: self.assertEqual([], got_topo.cells[0].mempages) self.assertEqual([], got_topo.cells[1].mempages) self.assertEqual(expected_topo_dict, got_topo_dict) self.assertEqual(set([]), got_topo.cells[0].pinned_cpus) self.assertEqual(set([]), got_topo.cells[1].pinned_cpus) self.assertEqual(set([]), got_topo.cells[2].pinned_cpus) self.assertEqual(set([]), got_topo.cells[3].pinned_cpus) self.assertEqual([set([0, 1])], got_topo.cells[0].siblings) self.assertEqual([], got_topo.cells[1].siblings) @mock.patch.object(host.Host, 'has_min_version', return_value=True) def test_get_host_numa_topology(self, mock_version): self._test_get_host_numa_topology(mempages=True) def test_get_host_numa_topology_empty(self): caps = vconfig.LibvirtConfigCaps() caps.host = vconfig.LibvirtConfigCapsHost() caps.host.cpu = vconfig.LibvirtConfigCPU() caps.host.cpu.arch = fields.Architecture.X86_64 caps.host.topology = None drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) with test.nested( mock.patch.object(host.Host, 'has_min_version', return_value=True), mock.patch.object(host.Host, "get_capabilities", return_value=caps) ) as (has_min_version, get_caps): self.assertIsNone(drvr._get_host_numa_topology()) self.assertEqual(2, get_caps.call_count) @mock.patch.object(fakelibvirt.Connection, 'getType') @mock.patch.object(fakelibvirt.Connection, 'getVersion') @mock.patch.object(fakelibvirt.Connection, 'getLibVersion') def test_get_host_numa_topology_xen(self, mock_lib_version, mock_version, mock_type): self.flags(virt_type='xen', group='libvirt') drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) mock_lib_version.return_value = versionutils.convert_version_to_int( libvirt_driver.MIN_LIBVIRT_VERSION) mock_version.return_value = versionutils.convert_version_to_int( libvirt_driver.MIN_QEMU_VERSION) mock_type.return_value = host.HV_DRIVER_XEN self.assertIsNone(drvr._get_host_numa_topology()) def test_diagnostic_vcpus_exception(self): xml = """ """ class DiagFakeDomain(FakeVirtDomain): def __init__(self): super(DiagFakeDomain, self).__init__(fake_xml=xml) def vcpus(self): raise fakelibvirt.libvirtError('vcpus missing') def blockStats(self, path): return (169, 688640, 0, 0, 1) def interfaceStats(self, path): return (4408, 82, 0, 0, 0, 0, 0, 0) def memoryStats(self): return {'actual': 220160, 'rss': 200164} def maxMemory(self): return 280160 def fake_get_domain(self, instance): return DiagFakeDomain() self.stubs.Set(host.Host, '_get_domain', fake_get_domain) drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) instance = objects.Instance(**self.test_instance) actual = drvr.get_diagnostics(instance) expect = {'vda_read': 688640, 'vda_read_req': 169, 'vda_write': 0, 'vda_write_req': 0, 'vda_errors': 1, 'vdb_read': 688640, 'vdb_read_req': 169, 'vdb_write': 0, 'vdb_write_req': 0, 'vdb_errors': 1, 'memory': 280160, 'memory-actual': 220160, 'memory-rss': 200164, 'vnet0_rx': 4408, 'vnet0_rx_drop': 0, 'vnet0_rx_errors': 0, 'vnet0_rx_packets': 82, 'vnet0_tx': 0, 'vnet0_tx_drop': 0, 'vnet0_tx_errors': 0, 'vnet0_tx_packets': 0, } self.assertEqual(actual, expect) lt = datetime.datetime(2012, 11, 22, 12, 00, 00) diags_time = datetime.datetime(2012, 11, 22, 12, 00, 10) self.useFixture(utils_fixture.TimeFixture(diags_time)) instance.launched_at = lt actual = drvr.get_instance_diagnostics(instance) expected = fake_diagnostics_object(with_disks=True, with_nic=True) self.assertDiagnosticsEqual(expected, actual) def test_diagnostic_blockstats_exception(self): xml = """ """ class DiagFakeDomain(FakeVirtDomain): def 
__init__(self): super(DiagFakeDomain, self).__init__(fake_xml=xml) def vcpus(self): return ([(0, 1, 15340000000, 0), (1, 1, 1640000000, 0), (2, 1, 3040000000, 0), (3, 1, 1420000000, 0)], [(True, False), (True, False), (True, False), (True, False)]) def blockStats(self, path): raise fakelibvirt.libvirtError('blockStats missing') def interfaceStats(self, path): return (4408, 82, 0, 0, 0, 0, 0, 0) def memoryStats(self): return {'actual': 220160, 'rss': 200164} def maxMemory(self): return 280160 def fake_get_domain(self, instance): return DiagFakeDomain() self.stubs.Set(host.Host, '_get_domain', fake_get_domain) drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) instance = objects.Instance(**self.test_instance) actual = drvr.get_diagnostics(instance) expect = {'cpu0_time': 15340000000, 'cpu1_time': 1640000000, 'cpu2_time': 3040000000, 'cpu3_time': 1420000000, 'memory': 280160, 'memory-actual': 220160, 'memory-rss': 200164, 'vnet0_rx': 4408, 'vnet0_rx_drop': 0, 'vnet0_rx_errors': 0, 'vnet0_rx_packets': 82, 'vnet0_tx': 0, 'vnet0_tx_drop': 0, 'vnet0_tx_errors': 0, 'vnet0_tx_packets': 0, } self.assertEqual(actual, expect) lt = datetime.datetime(2012, 11, 22, 12, 00, 00) diags_time = datetime.datetime(2012, 11, 22, 12, 00, 10) self.useFixture(utils_fixture.TimeFixture(diags_time)) instance.launched_at = lt actual = drvr.get_instance_diagnostics(instance) expected = fake_diagnostics_object(with_cpus=True, with_nic=True) self.assertDiagnosticsEqual(expected, actual) def test_diagnostic_interfacestats_exception(self): xml = """ """ class DiagFakeDomain(FakeVirtDomain): def __init__(self): super(DiagFakeDomain, self).__init__(fake_xml=xml) def vcpus(self): return ([(0, 1, 15340000000, 0), (1, 1, 1640000000, 0), (2, 1, 3040000000, 0), (3, 1, 1420000000, 0)], [(True, False), (True, False), (True, False), (True, False)]) def blockStats(self, path): return (169, 688640, 0, 0, 1) def interfaceStats(self, path): raise fakelibvirt.libvirtError('interfaceStat missing') def memoryStats(self): return {'actual': 220160, 'rss': 200164} def maxMemory(self): return 280160 def fake_get_domain(self, instance): return DiagFakeDomain() self.stubs.Set(host.Host, '_get_domain', fake_get_domain) drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) instance = objects.Instance(**self.test_instance) actual = drvr.get_diagnostics(instance) expect = {'cpu0_time': 15340000000, 'cpu1_time': 1640000000, 'cpu2_time': 3040000000, 'cpu3_time': 1420000000, 'vda_read': 688640, 'vda_read_req': 169, 'vda_write': 0, 'vda_write_req': 0, 'vda_errors': 1, 'vdb_read': 688640, 'vdb_read_req': 169, 'vdb_write': 0, 'vdb_write_req': 0, 'vdb_errors': 1, 'memory': 280160, 'memory-actual': 220160, 'memory-rss': 200164, } self.assertEqual(actual, expect) lt = datetime.datetime(2012, 11, 22, 12, 00, 00) diags_time = datetime.datetime(2012, 11, 22, 12, 00, 10) self.useFixture(utils_fixture.TimeFixture(diags_time)) instance.launched_at = lt actual = drvr.get_instance_diagnostics(instance) expected = fake_diagnostics_object(with_cpus=True, with_disks=True) self.assertDiagnosticsEqual(expected, actual) def test_diagnostic_memorystats_exception(self): xml = """ """ class DiagFakeDomain(FakeVirtDomain): def __init__(self): super(DiagFakeDomain, self).__init__(fake_xml=xml) def vcpus(self): return ([(0, 1, 15340000000, 0), (1, 1, 1640000000, 0), (2, 1, 3040000000, 0), (3, 1, 1420000000, 0)], [(True, False), (True, False), (True, False), (True, False)]) def blockStats(self, path): return (169, 688640, 0, 0, 1) def interfaceStats(self, 
path): return (4408, 82, 0, 0, 0, 0, 0, 0) def memoryStats(self): raise fakelibvirt.libvirtError('memoryStats missing') def maxMemory(self): return 280160 def fake_get_domain(self, instance): return DiagFakeDomain() self.stubs.Set(host.Host, '_get_domain', fake_get_domain) drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) instance = objects.Instance(**self.test_instance) actual = drvr.get_diagnostics(instance) expect = {'cpu0_time': 15340000000, 'cpu1_time': 1640000000, 'cpu2_time': 3040000000, 'cpu3_time': 1420000000, 'vda_read': 688640, 'vda_read_req': 169, 'vda_write': 0, 'vda_write_req': 0, 'vda_errors': 1, 'vdb_read': 688640, 'vdb_read_req': 169, 'vdb_write': 0, 'vdb_write_req': 0, 'vdb_errors': 1, 'memory': 280160, 'vnet0_rx': 4408, 'vnet0_rx_drop': 0, 'vnet0_rx_errors': 0, 'vnet0_rx_packets': 82, 'vnet0_tx': 0, 'vnet0_tx_drop': 0, 'vnet0_tx_errors': 0, 'vnet0_tx_packets': 0, } self.assertEqual(actual, expect) lt = datetime.datetime(2012, 11, 22, 12, 00, 00) diags_time = datetime.datetime(2012, 11, 22, 12, 00, 10) self.useFixture(utils_fixture.TimeFixture(diags_time)) instance.launched_at = lt actual = drvr.get_instance_diagnostics(instance) expected = fake_diagnostics_object(with_cpus=True, with_disks=True, with_nic=True) self.assertDiagnosticsEqual(expected, actual) def test_diagnostic_full(self): xml = """ """ class DiagFakeDomain(FakeVirtDomain): def __init__(self): super(DiagFakeDomain, self).__init__(fake_xml=xml) def vcpus(self): return ([(0, 1, 15340000000, 0), (1, 1, 1640000000, 0), (2, 1, 3040000000, 0), (3, 1, 1420000000, 0)], [(True, False), (True, False), (True, False), (True, False)]) def blockStats(self, path): return (169, 688640, 0, 0, 1) def interfaceStats(self, path): return (4408, 82, 0, 0, 0, 0, 0, 0) def memoryStats(self): return {'actual': 220160, 'rss': 200164} def maxMemory(self): return 280160 def fake_get_domain(self, instance): return DiagFakeDomain() self.stubs.Set(host.Host, '_get_domain', fake_get_domain) drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) instance = objects.Instance(**self.test_instance) actual = drvr.get_diagnostics(instance) expect = {'cpu0_time': 15340000000, 'cpu1_time': 1640000000, 'cpu2_time': 3040000000, 'cpu3_time': 1420000000, 'vda_read': 688640, 'vda_read_req': 169, 'vda_write': 0, 'vda_write_req': 0, 'vda_errors': 1, 'vdb_read': 688640, 'vdb_read_req': 169, 'vdb_write': 0, 'vdb_write_req': 0, 'vdb_errors': 1, 'memory': 280160, 'memory-actual': 220160, 'memory-rss': 200164, 'vnet0_rx': 4408, 'vnet0_rx_drop': 0, 'vnet0_rx_errors': 0, 'vnet0_rx_packets': 82, 'vnet0_tx': 0, 'vnet0_tx_drop': 0, 'vnet0_tx_errors': 0, 'vnet0_tx_packets': 0, } self.assertEqual(actual, expect) lt = datetime.datetime(2012, 11, 22, 12, 00, 00) diags_time = datetime.datetime(2012, 11, 22, 12, 00, 10) self.useFixture(utils_fixture.TimeFixture(diags_time)) instance.launched_at = lt actual = drvr.get_instance_diagnostics(instance) expected = fake_diagnostics_object(with_cpus=True, with_disks=True, with_nic=True) self.assertDiagnosticsEqual(expected, actual) @mock.patch.object(host.Host, '_get_domain') def test_diagnostic_full_with_multiple_interfaces(self, mock_get_domain): xml = """ """ class DiagFakeDomain(FakeVirtDomain): def __init__(self): super(DiagFakeDomain, self).__init__(fake_xml=xml) def vcpus(self): return ([(0, 1, 15340000000, 0), (1, 1, 1640000000, 0), (2, 1, 3040000000, 0), (3, 1, 1420000000, 0)], [(True, False), (True, False), (True, False), (True, False)]) def blockStats(self, path): return (169, 688640, 0, 0, 1) 
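# Aside: each test_diagnostic_* case above overrides exactly one stats
# method to raise and then expects only the matching keys to be missing
# from the result. A self-contained sketch of that collect-what-you-can
# approach (FakeDomain and collect_diags are illustrative names only):
def _example_partial_diagnostics():
    class FakeDomain(object):
        def blockStats(self, dev):
            raise RuntimeError('blockStats missing')

        def memoryStats(self):
            return {'actual': 220160, 'rss': 200164}

    def collect_diags(dom):
        diags = {}
        try:
            reads, read_bytes = dom.blockStats('vda')[:2]
            diags['vda_read_req'] = reads
            diags['vda_read'] = read_bytes
        except Exception:
            pass  # disk stats unavailable; keep whatever else we can get
        try:
            for key, val in dom.memoryStats().items():
                diags['memory-%s' % key] = val
        except Exception:
            pass
        return diags

    diags = collect_diags(FakeDomain())
    assert 'vda_read' not in diags
    assert diags['memory-rss'] == 200164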
def interfaceStats(self, path): return (4408, 82, 0, 0, 0, 0, 0, 0) def memoryStats(self): return {'actual': 220160, 'rss': 200164} def maxMemory(self): return 280160 def fake_get_domain(self): return DiagFakeDomain() mock_get_domain.side_effect = fake_get_domain drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) instance = objects.Instance(**self.test_instance) actual = drvr.get_diagnostics(instance) expect = {'cpu0_time': 15340000000, 'cpu1_time': 1640000000, 'cpu2_time': 3040000000, 'cpu3_time': 1420000000, 'vda_read': 688640, 'vda_read_req': 169, 'vda_write': 0, 'vda_write_req': 0, 'vda_errors': 1, 'vdb_read': 688640, 'vdb_read_req': 169, 'vdb_write': 0, 'vdb_write_req': 0, 'vdb_errors': 1, 'memory': 280160, 'memory-actual': 220160, 'memory-rss': 200164, 'vnet0_rx': 4408, 'vnet0_rx_drop': 0, 'vnet0_rx_errors': 0, 'vnet0_rx_packets': 82, 'vnet0_tx': 0, 'vnet0_tx_drop': 0, 'vnet0_tx_errors': 0, 'vnet0_tx_packets': 0, 'br0_rx': 4408, 'br0_rx_drop': 0, 'br0_rx_errors': 0, 'br0_rx_packets': 82, 'br0_tx': 0, 'br0_tx_drop': 0, 'br0_tx_errors': 0, 'br0_tx_packets': 0, } self.assertEqual(actual, expect) lt = datetime.datetime(2012, 11, 22, 12, 00, 00) diags_time = datetime.datetime(2012, 11, 22, 12, 00, 10) self.useFixture(utils_fixture.TimeFixture(diags_time)) instance.launched_at = lt actual = drvr.get_instance_diagnostics(instance) expected = fake_diagnostics_object(with_cpus=True, with_disks=True, with_nic=True) expected.add_nic(mac_address='53:55:00:a5:39:39', rx_drop=0, rx_errors=0, rx_octets=4408, rx_packets=82, tx_drop=0, tx_errors=0, tx_octets=0, tx_packets=0) self.assertDiagnosticsEqual(expected, actual) @mock.patch.object(host.Host, "list_instance_domains") def test_failing_vcpu_count(self, mock_list): """Domain can fail to return the vcpu description in case it's just starting up or shutting down. Make sure None is handled gracefully. 
""" class DiagFakeDomain(object): def __init__(self, vcpus): self._vcpus = vcpus def vcpus(self): if self._vcpus is None: raise fakelibvirt.libvirtError("fake-error") else: return ([[1, 2, 3, 4]] * self._vcpus, [True] * self._vcpus) def ID(self): return 1 def name(self): return "instance000001" def UUIDString(self): return "19479fee-07a5-49bb-9138-d3738280d63c" mock_list.return_value = [ DiagFakeDomain(None), DiagFakeDomain(5)] drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) self.assertEqual(6, drvr._get_vcpu_used()) mock_list.assert_called_with(only_guests=True, only_running=True) def test_get_instance_capabilities(self): drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) def get_host_capabilities_stub(self): caps = vconfig.LibvirtConfigCaps() guest = vconfig.LibvirtConfigGuest() guest.ostype = 'hvm' guest.arch = fields.Architecture.X86_64 guest.domtype = ['kvm', 'qemu'] caps.guests.append(guest) guest = vconfig.LibvirtConfigGuest() guest.ostype = 'hvm' guest.arch = fields.Architecture.I686 guest.domtype = ['kvm'] caps.guests.append(guest) return caps self.stubs.Set(host.Host, "get_capabilities", get_host_capabilities_stub) want = [(fields.Architecture.X86_64, 'kvm', 'hvm'), (fields.Architecture.X86_64, 'qemu', 'hvm'), (fields.Architecture.I686, 'kvm', 'hvm')] got = drvr._get_instance_capabilities() self.assertEqual(want, got) def test_set_cache_mode(self): self.flags(disk_cachemodes=['file=directsync'], group='libvirt') drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) fake_conf = FakeConfigGuestDisk() fake_conf.source_type = 'file' drvr._set_cache_mode(fake_conf) self.assertEqual(fake_conf.driver_cache, 'directsync') def test_set_cache_mode_shareable(self): """Tests that when conf.shareable is True, the configuration is ignored and the driver_cache is forced to 'none'. 
""" self.flags(disk_cachemodes=['block=writethrough'], group='libvirt') drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) fake_conf = FakeConfigGuestDisk() fake_conf.shareable = True fake_conf.source_type = 'block' drvr._set_cache_mode(fake_conf) self.assertEqual('none', fake_conf.driver_cache) def test_set_cache_mode_invalid_mode(self): self.flags(disk_cachemodes=['file=FAKE'], group='libvirt') drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) fake_conf = FakeConfigGuestDisk() fake_conf.source_type = 'file' drvr._set_cache_mode(fake_conf) self.assertIsNone(fake_conf.driver_cache) def test_set_cache_mode_invalid_object(self): self.flags(disk_cachemodes=['file=directsync'], group='libvirt') drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) fake_conf = FakeConfigGuest() fake_conf.driver_cache = 'fake' drvr._set_cache_mode(fake_conf) self.assertEqual(fake_conf.driver_cache, 'fake') @mock.patch('os.unlink') @mock.patch.object(os.path, 'exists') def _test_shared_storage_detection(self, is_same, mock_exists, mock_unlink): drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) drvr.get_host_ip_addr = mock.MagicMock(return_value='bar') mock_exists.return_value = is_same with test.nested( mock.patch.object(drvr._remotefs, 'create_file'), mock.patch.object(drvr._remotefs, 'remove_file') ) as (mock_rem_fs_create, mock_rem_fs_remove): result = drvr._is_storage_shared_with('host', '/path') mock_rem_fs_create.assert_any_call('host', mock.ANY) create_args, create_kwargs = mock_rem_fs_create.call_args self.assertTrue(create_args[1].startswith('/path')) if is_same: mock_unlink.assert_called_once_with(mock.ANY) else: mock_rem_fs_remove.assert_called_with('host', mock.ANY) remove_args, remove_kwargs = mock_rem_fs_remove.call_args self.assertTrue(remove_args[1].startswith('/path')) return result def test_shared_storage_detection_same_host(self): self.assertTrue(self._test_shared_storage_detection(True)) def test_shared_storage_detection_different_host(self): self.assertFalse(self._test_shared_storage_detection(False)) def test_shared_storage_detection_easy(self): drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) self.mox.StubOutWithMock(drvr, 'get_host_ip_addr') self.mox.StubOutWithMock(utils, 'execute') self.mox.StubOutWithMock(os.path, 'exists') self.mox.StubOutWithMock(os, 'unlink') drvr.get_host_ip_addr().AndReturn('foo') self.mox.ReplayAll() self.assertTrue(drvr._is_storage_shared_with('foo', '/path')) self.mox.UnsetStubs() def test_store_pid_remove_pid(self): instance = objects.Instance(**self.test_instance) drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) popen = mock.Mock(pid=3) drvr.job_tracker.add_job(instance, popen.pid) self.assertIn(3, drvr.job_tracker.jobs[instance.uuid]) drvr.job_tracker.remove_job(instance, popen.pid) self.assertNotIn(instance.uuid, drvr.job_tracker.jobs) @mock.patch('nova.virt.libvirt.host.Host._get_domain') def test_get_domain_info_with_more_return(self, mock_get_domain): instance = objects.Instance(**self.test_instance) dom_mock = mock.MagicMock() dom_mock.info.return_value = [ 1, 2048, 737, 8, 12345, 888888 ] dom_mock.ID.return_value = mock.sentinel.instance_id mock_get_domain.return_value = dom_mock drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) info = drvr.get_info(instance) self.assertEqual(1, info.state) self.assertEqual(mock.sentinel.instance_id, info.internal_id) dom_mock.info.assert_called_once_with() dom_mock.ID.assert_called_once_with() 
mock_get_domain.assert_called_once_with(instance) def test_create_domain(self): drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) mock_domain = mock.MagicMock() guest = drvr._create_domain(domain=mock_domain) self.assertEqual(mock_domain, guest._domain) mock_domain.createWithFlags.assert_has_calls([mock.call(0)]) @mock.patch('nova.virt.disk.api.clean_lxc_namespace') @mock.patch('nova.virt.libvirt.driver.LibvirtDriver.get_info') @mock.patch('nova.virt.disk.api.setup_container') @mock.patch('oslo_utils.fileutils.ensure_tree') @mock.patch.object(fake_libvirt_utils, 'get_instance_path') def test_create_domain_lxc(self, mock_get_inst_path, mock_ensure_tree, mock_setup_container, mock_get_info, mock_clean): self.flags(virt_type='lxc', group='libvirt') drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) mock_instance = mock.MagicMock() inst_sys_meta = dict() mock_instance.system_metadata = inst_sys_meta mock_get_inst_path.return_value = '/tmp/' mock_image_backend = mock.MagicMock() drvr.image_backend = mock_image_backend mock_image = mock.MagicMock() mock_image.path = '/tmp/test.img' drvr.image_backend.by_name.return_value = mock_image mock_setup_container.return_value = '/dev/nbd0' mock_get_info.return_value = hardware.InstanceInfo( state=power_state.RUNNING) with test.nested( mock.patch.object(drvr, '_is_booted_from_volume', return_value=False), mock.patch.object(drvr, '_create_domain'), mock.patch.object(drvr, 'plug_vifs'), mock.patch.object(drvr.firewall_driver, 'setup_basic_filtering'), mock.patch.object(drvr.firewall_driver, 'prepare_instance_filter'), mock.patch.object(drvr.firewall_driver, 'apply_instance_filter')): drvr._create_domain_and_network(self.context, 'xml', mock_instance, []) self.assertEqual('/dev/nbd0', inst_sys_meta['rootfs_device_name']) self.assertFalse(mock_instance.called) mock_get_inst_path.assert_has_calls([mock.call(mock_instance)]) mock_ensure_tree.assert_has_calls([mock.call('/tmp/rootfs')]) drvr.image_backend.by_name.assert_has_calls([mock.call(mock_instance, 'disk')]) setup_container_call = mock.call( mock_image.get_model(), container_dir='/tmp/rootfs') mock_setup_container.assert_has_calls([setup_container_call]) mock_get_info.assert_has_calls([mock.call(mock_instance)]) mock_clean.assert_has_calls([mock.call(container_dir='/tmp/rootfs')]) @mock.patch('nova.virt.disk.api.clean_lxc_namespace') @mock.patch('nova.virt.libvirt.driver.LibvirtDriver.get_info') @mock.patch.object(fake_libvirt_utils, 'chown_for_id_maps') @mock.patch('nova.virt.disk.api.setup_container') @mock.patch('oslo_utils.fileutils.ensure_tree') @mock.patch.object(fake_libvirt_utils, 'get_instance_path') def test_create_domain_lxc_id_maps(self, mock_get_inst_path, mock_ensure_tree, mock_setup_container, mock_chown, mock_get_info, mock_clean): self.flags(virt_type='lxc', uid_maps=["0:1000:100"], gid_maps=["0:1000:100"], group='libvirt') def chown_side_effect(path, id_maps): self.assertEqual('/tmp/rootfs', path) self.assertIsInstance(id_maps[0], vconfig.LibvirtConfigGuestUIDMap) self.assertEqual(0, id_maps[0].start) self.assertEqual(1000, id_maps[0].target) self.assertEqual(100, id_maps[0].count) self.assertIsInstance(id_maps[1], vconfig.LibvirtConfigGuestGIDMap) self.assertEqual(0, id_maps[1].start) self.assertEqual(1000, id_maps[1].target) self.assertEqual(100, id_maps[1].count) drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) mock_instance = mock.MagicMock() inst_sys_meta = dict() mock_instance.system_metadata = inst_sys_meta mock_get_inst_path.return_value = 
'/tmp/' mock_image_backend = mock.MagicMock() drvr.image_backend = mock_image_backend mock_image = mock.MagicMock() mock_image.path = '/tmp/test.img' drvr.image_backend.by_name.return_value = mock_image mock_setup_container.return_value = '/dev/nbd0' mock_chown.side_effect = chown_side_effect mock_get_info.return_value = hardware.InstanceInfo( state=power_state.RUNNING) with test.nested( mock.patch.object(drvr, '_is_booted_from_volume', return_value=False), mock.patch.object(drvr, '_create_domain'), mock.patch.object(drvr, 'plug_vifs'), mock.patch.object(drvr.firewall_driver, 'setup_basic_filtering'), mock.patch.object(drvr.firewall_driver, 'prepare_instance_filter'), mock.patch.object(drvr.firewall_driver, 'apply_instance_filter') ) as ( mock_is_booted_from_volume, mock_create_domain, mock_plug_vifs, mock_setup_basic_filtering, mock_prepare_instance_filter, mock_apply_instance_filter ): drvr._create_domain_and_network(self.context, 'xml', mock_instance, []) self.assertEqual('/dev/nbd0', inst_sys_meta['rootfs_device_name']) self.assertFalse(mock_instance.called) mock_get_inst_path.assert_has_calls([mock.call(mock_instance)]) mock_ensure_tree.assert_has_calls([mock.call('/tmp/rootfs')]) drvr.image_backend.by_name.assert_has_calls([mock.call(mock_instance, 'disk')]) setup_container_call = mock.call( mock_image.get_model(), container_dir='/tmp/rootfs') mock_setup_container.assert_has_calls([setup_container_call]) mock_get_info.assert_has_calls([mock.call(mock_instance)]) mock_clean.assert_has_calls([mock.call(container_dir='/tmp/rootfs')]) @mock.patch('nova.virt.disk.api.teardown_container') @mock.patch('nova.virt.libvirt.driver.LibvirtDriver.get_info') @mock.patch('nova.virt.disk.api.setup_container') @mock.patch('oslo_utils.fileutils.ensure_tree') @mock.patch.object(fake_libvirt_utils, 'get_instance_path') def test_create_domain_lxc_not_running(self, mock_get_inst_path, mock_ensure_tree, mock_setup_container, mock_get_info, mock_teardown): self.flags(virt_type='lxc', group='libvirt') drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) mock_instance = mock.MagicMock() inst_sys_meta = dict() mock_instance.system_metadata = inst_sys_meta mock_get_inst_path.return_value = '/tmp/' mock_image_backend = mock.MagicMock() drvr.image_backend = mock_image_backend mock_image = mock.MagicMock() mock_image.path = '/tmp/test.img' drvr.image_backend.by_name.return_value = mock_image mock_setup_container.return_value = '/dev/nbd0' mock_get_info.return_value = hardware.InstanceInfo( state=power_state.SHUTDOWN) with test.nested( mock.patch.object(drvr, '_is_booted_from_volume', return_value=False), mock.patch.object(drvr, '_create_domain'), mock.patch.object(drvr, 'plug_vifs'), mock.patch.object(drvr.firewall_driver, 'setup_basic_filtering'), mock.patch.object(drvr.firewall_driver, 'prepare_instance_filter'), mock.patch.object(drvr.firewall_driver, 'apply_instance_filter')): drvr._create_domain_and_network(self.context, 'xml', mock_instance, []) self.assertEqual('/dev/nbd0', inst_sys_meta['rootfs_device_name']) self.assertFalse(mock_instance.called) mock_get_inst_path.assert_has_calls([mock.call(mock_instance)]) mock_ensure_tree.assert_has_calls([mock.call('/tmp/rootfs')]) drvr.image_backend.by_name.assert_has_calls([mock.call(mock_instance, 'disk')]) setup_container_call = mock.call( mock_image.get_model(), container_dir='/tmp/rootfs') mock_setup_container.assert_has_calls([setup_container_call]) mock_get_info.assert_has_calls([mock.call(mock_instance)]) teardown_call = 
mock.call(container_dir='/tmp/rootfs')
        mock_teardown.assert_has_calls([teardown_call])

    def test_create_domain_define_xml_fails(self):
        """Tests that the xml is logged when defining the domain fails."""
        fake_xml = "this is a test"

        def fake_defineXML(xml):
            # In a py2 env the xml is encoded by write_instance_config using
            # encodeutils.safe_encode, which decodes the text before
            # encoding it.
            if six.PY2:
                self.assertEqual(fake_safe_decode(fake_xml), xml)
            else:
                self.assertEqual(fake_xml, xml)
            raise fakelibvirt.libvirtError('virDomainDefineXML() failed')

        def fake_safe_decode(text, *args, **kwargs):
            return text + 'safe decoded'

        self.log_error_called = False

        def fake_error(msg, *args, **kwargs):
            self.log_error_called = True
            self.assertIn(fake_xml, msg % args)
            self.assertIn('safe decoded', msg % args)

        self.stubs.Set(encodeutils, 'safe_decode', fake_safe_decode)
        self.stubs.Set(nova.virt.libvirt.guest.LOG, 'error', fake_error)

        self.create_fake_libvirt_mock(defineXML=fake_defineXML)
        self.mox.ReplayAll()
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        self.assertRaises(fakelibvirt.libvirtError, drvr._create_domain,
                          fake_xml)
        self.assertTrue(self.log_error_called)

    def test_create_domain_with_flags_fails(self):
        """Tests that the xml is logged when creating the domain with flags
        fails
        """
        fake_xml = "this is a test"
        fake_domain = FakeVirtDomain(fake_xml)

        def fake_createWithFlags(launch_flags):
            raise fakelibvirt.libvirtError('virDomainCreateWithFlags() failed')

        self.log_error_called = False

        def fake_error(msg, *args, **kwargs):
            self.log_error_called = True
            self.assertIn(fake_xml, msg % args)

        self.stubs.Set(fake_domain, 'createWithFlags', fake_createWithFlags)
        self.stubs.Set(nova.virt.libvirt.guest.LOG, 'error', fake_error)

        self.create_fake_libvirt_mock()
        self.mox.ReplayAll()
        drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True)
        self.assertRaises(fakelibvirt.libvirtError, drvr._create_domain,
                          domain=fake_domain)
        self.assertTrue(self.log_error_called)

    @mock.patch('nova.privsep.libvirt.enable_hairpin')
    def test_create_domain_enable_hairpin_fails(self, mock_writefile):
        """Tests that the xml is logged when enabling hairpin mode for the
        domain fails.
        """
        # Guest.enable_hairpin is only called for nova-network.
# TODO(mikal): remove this test when nova-net goes away self.flags(use_neutron=False) fake_xml = "this is a test" fake_domain = FakeVirtDomain(fake_xml) mock_writefile.side_effect = IOError def fake_get_interfaces(*args): return ["dev"] self.log_error_called = False def fake_error(msg, *args, **kwargs): self.log_error_called = True self.assertIn(fake_xml, msg % args) self.stubs.Set(nova.virt.libvirt.guest.LOG, 'error', fake_error) self.create_fake_libvirt_mock() self.mox.ReplayAll() drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) self.stubs.Set( nova.virt.libvirt.guest.Guest, 'get_interfaces', fake_get_interfaces) self.assertRaises(IOError, drvr._create_domain, domain=fake_domain, power_on=False) self.assertTrue(self.log_error_called) def test_get_vnc_console(self): instance = objects.Instance(**self.test_instance) dummyxml = ("instance-0000000a" "" "" "") vdmock = self.mox.CreateMock(fakelibvirt.virDomain) self.mox.StubOutWithMock(vdmock, "XMLDesc") vdmock.XMLDesc(flags=0).AndReturn(dummyxml) def fake_lookup(_uuid): if _uuid == instance['uuid']: return vdmock self.create_fake_libvirt_mock(lookupByUUIDString=fake_lookup) self.mox.ReplayAll() drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) vnc_dict = drvr.get_vnc_console(self.context, instance) self.assertEqual(vnc_dict.port, '5900') def test_get_vnc_console_unavailable(self): instance = objects.Instance(**self.test_instance) dummyxml = ("instance-0000000a" "") vdmock = self.mox.CreateMock(fakelibvirt.virDomain) self.mox.StubOutWithMock(vdmock, "XMLDesc") vdmock.XMLDesc(flags=0).AndReturn(dummyxml) def fake_lookup(_uuid): if _uuid == instance['uuid']: return vdmock self.create_fake_libvirt_mock(lookupByUUIDString=fake_lookup) self.mox.ReplayAll() drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) self.assertRaises(exception.ConsoleTypeUnavailable, drvr.get_vnc_console, self.context, instance) def test_get_spice_console(self): instance = objects.Instance(**self.test_instance) dummyxml = ("instance-0000000a" "" "" "") vdmock = self.mox.CreateMock(fakelibvirt.virDomain) self.mox.StubOutWithMock(vdmock, "XMLDesc") vdmock.XMLDesc(flags=0).AndReturn(dummyxml) def fake_lookup(_uuid): if _uuid == instance['uuid']: return vdmock self.create_fake_libvirt_mock(lookupByUUIDString=fake_lookup) self.mox.ReplayAll() drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) spice_dict = drvr.get_spice_console(self.context, instance) self.assertEqual(spice_dict.port, '5950') def test_get_spice_console_unavailable(self): instance = objects.Instance(**self.test_instance) dummyxml = ("instance-0000000a" "") vdmock = self.mox.CreateMock(fakelibvirt.virDomain) self.mox.StubOutWithMock(vdmock, "XMLDesc") vdmock.XMLDesc(flags=0).AndReturn(dummyxml) def fake_lookup(_uuid): if _uuid == instance['uuid']: return vdmock self.create_fake_libvirt_mock(lookupByUUIDString=fake_lookup) self.mox.ReplayAll() drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) self.assertRaises(exception.ConsoleTypeUnavailable, drvr.get_spice_console, self.context, instance) def test_detach_volume_with_instance_not_found(self): # Test that detach_volume() method does not raise exception, # if the instance does not exist. 
instance = objects.Instance(**self.test_instance) drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) with test.nested( mock.patch.object(host.Host, '_get_domain', side_effect=exception.InstanceNotFound( instance_id=instance.uuid)), mock.patch.object(drvr, '_disconnect_volume') ) as (_get_domain, _disconnect_volume): connection_info = {'driver_volume_type': 'fake'} drvr.detach_volume( self.context, connection_info, instance, '/dev/sda') _get_domain.assert_called_once_with(instance) _disconnect_volume.assert_called_once_with( self.context, connection_info, instance, encryption=None) def _test_attach_detach_interface_get_config(self, method_name): """Tests that the get_config() method is properly called in attach_interface() and detach_interface(). method_name: either \"attach_interface\" or \"detach_interface\" depending on the method to test. """ self.stubs.Set(host.Host, '_get_domain', lambda a, b: FakeVirtDomain()) instance = objects.Instance(**self.test_instance) network_info = _fake_network_info(self, 1) drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) fake_image_meta = objects.ImageMeta.from_dict( {'id': instance['image_ref']}) if method_name == "attach_interface": self.mox.StubOutWithMock(drvr.firewall_driver, 'setup_basic_filtering') drvr.firewall_driver.setup_basic_filtering(instance, network_info) self.mox.StubOutWithMock(drvr, '_build_device_metadata') drvr._build_device_metadata(self.context, instance).AndReturn( objects.InstanceDeviceMetadata()) self.mox.StubOutWithMock(objects.Instance, 'save') objects.Instance.save() expected = drvr.vif_driver.get_config(instance, network_info[0], fake_image_meta, instance.get_flavor(), CONF.libvirt.virt_type, drvr._host) self.mox.StubOutWithMock(drvr.vif_driver, 'get_config') drvr.vif_driver.get_config(instance, network_info[0], mox.IsA(objects.ImageMeta), mox.IsA(objects.Flavor), CONF.libvirt.virt_type, drvr._host).\ AndReturn(expected) self.mox.ReplayAll() if method_name == "attach_interface": drvr.attach_interface(self.context, instance, fake_image_meta, network_info[0]) elif method_name == "detach_interface": drvr.detach_interface(self.context, instance, network_info[0]) else: raise ValueError("Unhandled method %s" % method_name) @mock.patch.object(lockutils, "external_lock") def test_attach_interface_get_config(self, mock_lock): """Tests that the get_config() method is properly called in attach_interface(). """ mock_lock.return_value = threading.Semaphore() self._test_attach_detach_interface_get_config("attach_interface") def test_detach_interface_get_config(self): """Tests that the get_config() method is properly called in detach_interface(). 
""" self._test_attach_detach_interface_get_config("detach_interface") def test_default_root_device_name(self): instance = {'uuid': 'fake_instance'} image_meta = objects.ImageMeta.from_dict({'id': uuids.image_id}) root_bdm = {'source_type': 'image', 'destination_type': 'volume', 'image_id': 'fake_id'} self.flags(virt_type='qemu', group='libvirt') self.mox.StubOutWithMock(blockinfo, 'get_disk_bus_for_device_type') self.mox.StubOutWithMock(blockinfo, 'get_root_info') blockinfo.get_disk_bus_for_device_type(instance, 'qemu', image_meta, 'disk').InAnyOrder().\ AndReturn('virtio') blockinfo.get_disk_bus_for_device_type(instance, 'qemu', image_meta, 'cdrom').InAnyOrder().\ AndReturn('ide') blockinfo.get_root_info(instance, 'qemu', image_meta, root_bdm, 'virtio', 'ide').AndReturn({'dev': 'vda'}) self.mox.ReplayAll() drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) self.assertEqual(drvr.default_root_device_name(instance, image_meta, root_bdm), '/dev/vda') @mock.patch.object(objects.BlockDeviceMapping, "save") def test_default_device_names_for_instance(self, save_mock): instance = objects.Instance(**self.test_instance) instance.root_device_name = '/dev/vda' ephemerals = [objects.BlockDeviceMapping( **fake_block_device.AnonFakeDbBlockDeviceDict( {'device_name': 'vdb', 'source_type': 'blank', 'volume_size': 2, 'destination_type': 'local'}))] swap = [objects.BlockDeviceMapping( **fake_block_device.AnonFakeDbBlockDeviceDict( {'device_name': 'vdg', 'source_type': 'blank', 'volume_size': 512, 'guest_format': 'swap', 'destination_type': 'local'}))] block_device_mapping = [ objects.BlockDeviceMapping( **fake_block_device.AnonFakeDbBlockDeviceDict( {'source_type': 'volume', 'destination_type': 'volume', 'volume_id': 'fake-image-id', 'device_name': '/dev/vdxx', 'disk_bus': 'scsi'}))] drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) drvr.default_device_names_for_instance(instance, instance.root_device_name, ephemerals, swap, block_device_mapping) # Ephemeral device name was correct so no changes self.assertEqual('/dev/vdb', ephemerals[0].device_name) # Swap device name was incorrect so it was changed self.assertEqual('/dev/vdc', swap[0].device_name) # Volume device name was changed too, taking the bus into account self.assertEqual('/dev/sda', block_device_mapping[0].device_name) self.assertEqual(3, save_mock.call_count) def _test_get_device_name_for_instance(self, new_bdm, expected_dev): instance = objects.Instance(**self.test_instance) instance.root_device_name = '/dev/vda' instance.ephemeral_gb = 0 drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) got_dev = drvr.get_device_name_for_instance( instance, [], new_bdm) self.assertEqual(expected_dev, got_dev) def test_get_device_name_for_instance_simple(self): new_bdm = objects.BlockDeviceMapping( context=context, source_type='volume', destination_type='volume', boot_index=-1, volume_id='fake-id', device_name=None, guest_format=None, disk_bus=None, device_type=None) self._test_get_device_name_for_instance(new_bdm, '/dev/vdb') def test_get_device_name_for_instance_suggested(self): new_bdm = objects.BlockDeviceMapping( context=context, source_type='volume', destination_type='volume', boot_index=-1, volume_id='fake-id', device_name='/dev/vdg', guest_format=None, disk_bus=None, device_type=None) self._test_get_device_name_for_instance(new_bdm, '/dev/vdb') def test_get_device_name_for_instance_bus(self): new_bdm = objects.BlockDeviceMapping( context=context, source_type='volume', destination_type='volume', boot_index=-1, 
volume_id='fake-id', device_name=None, guest_format=None, disk_bus='scsi', device_type=None) self._test_get_device_name_for_instance(new_bdm, '/dev/sda') def test_get_device_name_for_instance_device_type(self): new_bdm = objects.BlockDeviceMapping( context=context, source_type='volume', destination_type='volume', boot_index=-1, volume_id='fake-id', device_name=None, guest_format=None, disk_bus=None, device_type='floppy') self._test_get_device_name_for_instance(new_bdm, '/dev/fda') def test_is_supported_fs_format(self): supported_fs = [disk_api.FS_FORMAT_EXT2, disk_api.FS_FORMAT_EXT3, disk_api.FS_FORMAT_EXT4, disk_api.FS_FORMAT_XFS] drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) for fs in supported_fs: self.assertTrue(drvr.is_supported_fs_format(fs)) supported_fs = ['', 'dummy'] drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) for fs in supported_fs: self.assertFalse(drvr.is_supported_fs_format(fs)) @mock.patch('nova.virt.libvirt.host.Host.write_instance_config') @mock.patch('nova.virt.libvirt.host.Host.get_guest') def test_post_live_migration_at_destination( self, mock_get_guest, mock_write_instance_config): instance = objects.Instance(id=1, uuid=uuids.instance) dom = mock.MagicMock() guest = libvirt_guest.Guest(dom) mock_get_guest.return_value = guest drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) drvr.post_live_migration_at_destination(mock.ANY, instance, mock.ANY) # Assert that we don't try to write anything to the destination node # since the source live migrated with the VIR_MIGRATE_PERSIST_DEST flag mock_write_instance_config.assert_not_called() def test_create_propagates_exceptions(self): self.flags(virt_type='lxc', group='libvirt') drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) instance = objects.Instance(id=1, uuid=uuids.instance, image_ref='my_fake_image') with test.nested( mock.patch.object(drvr, '_create_domain_setup_lxc'), mock.patch.object(drvr, '_create_domain_cleanup_lxc'), mock.patch.object(drvr, '_is_booted_from_volume', return_value=False), mock.patch.object(drvr, 'plug_vifs'), mock.patch.object(drvr, 'firewall_driver'), mock.patch.object(drvr, '_create_domain', side_effect=exception.NovaException), mock.patch.object(drvr, 'cleanup')): self.assertRaises(exception.NovaException, drvr._create_domain_and_network, self.context, 'xml', instance, None) def test_create_without_pause(self): self.flags(virt_type='lxc', group='libvirt') @contextlib.contextmanager def fake_lxc_disk_handler(*args, **kwargs): yield drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) instance = objects.Instance(**self.test_instance) with test.nested( mock.patch.object(drvr, '_lxc_disk_handler', side_effect=fake_lxc_disk_handler), mock.patch.object(drvr, 'plug_vifs'), mock.patch.object(drvr, 'firewall_driver'), mock.patch.object(drvr, '_create_domain'), mock.patch.object(drvr, 'cleanup')) as ( _handler, cleanup, firewall_driver, create, plug_vifs): domain = drvr._create_domain_and_network(self.context, 'xml', instance, None) self.assertEqual(0, create.call_args_list[0][1]['pause']) self.assertEqual(0, domain.resume.call_count) def _test_create_with_network_events(self, neutron_failure=None, power_on=True): generated_events = [] def wait_timeout(): event = mock.MagicMock() if neutron_failure == 'timeout': raise eventlet.timeout.Timeout() elif neutron_failure == 'error': event.status = 'failed' else: event.status = 'completed' return event def fake_prepare(instance, event_name): m = mock.MagicMock() m.instance = instance 
m.event_name = event_name m.wait.side_effect = wait_timeout generated_events.append(m) return m virtapi = manager.ComputeVirtAPI(mock.MagicMock()) prepare = virtapi._compute.instance_events.prepare_for_instance_event prepare.side_effect = fake_prepare drvr = libvirt_driver.LibvirtDriver(virtapi, False) instance = objects.Instance(vm_state=vm_states.BUILDING, **self.test_instance) vifs = [{'id': 'vif1', 'active': False}, {'id': 'vif2', 'active': False}] @mock.patch.object(drvr, 'plug_vifs') @mock.patch.object(drvr, 'firewall_driver') @mock.patch.object(drvr, '_create_domain') @mock.patch.object(drvr, 'cleanup') def test_create(cleanup, create, fw_driver, plug_vifs): domain = drvr._create_domain_and_network(self.context, 'xml', instance, vifs, power_on=power_on) plug_vifs.assert_called_with(instance, vifs) pause = self._get_pause_flag(drvr, vifs, power_on=power_on) self.assertEqual(pause, create.call_args_list[0][1]['pause']) if pause: domain.resume.assert_called_once_with() if neutron_failure and CONF.vif_plugging_is_fatal: cleanup.assert_called_once_with(self.context, instance, network_info=vifs, block_device_info=None) test_create() if utils.is_neutron() and CONF.vif_plugging_timeout and power_on: prepare.assert_has_calls([ mock.call(instance, 'network-vif-plugged-vif1'), mock.call(instance, 'network-vif-plugged-vif2')]) for event in generated_events: if neutron_failure and generated_events.index(event) != 0: self.assertEqual(0, event.call_count) elif (neutron_failure == 'error' and not CONF.vif_plugging_is_fatal): event.wait.assert_called_once_with() else: self.assertEqual(0, prepare.call_count) @mock.patch('nova.utils.is_neutron', return_value=True) def test_create_with_network_events_neutron(self, is_neutron): self._test_create_with_network_events() @mock.patch('nova.utils.is_neutron', return_value=True) def test_create_with_network_events_neutron_power_off(self, is_neutron): # Tests that we don't wait for events if we don't start the instance. 
self._test_create_with_network_events(power_on=False) @mock.patch('nova.utils.is_neutron', return_value=True) def test_create_with_network_events_neutron_nowait(self, is_neutron): self.flags(vif_plugging_timeout=0) self._test_create_with_network_events() @mock.patch('nova.utils.is_neutron', return_value=True) def test_create_with_network_events_neutron_failed_nonfatal_timeout( self, is_neutron): self.flags(vif_plugging_is_fatal=False) self._test_create_with_network_events(neutron_failure='timeout') @mock.patch('nova.utils.is_neutron', return_value=True) def test_create_with_network_events_neutron_failed_fatal_timeout( self, is_neutron): self.assertRaises(exception.VirtualInterfaceCreateException, self._test_create_with_network_events, neutron_failure='timeout') @mock.patch('nova.utils.is_neutron', return_value=True) def test_create_with_network_events_neutron_failed_nonfatal_error( self, is_neutron): self.flags(vif_plugging_is_fatal=False) self._test_create_with_network_events(neutron_failure='error') @mock.patch('nova.utils.is_neutron', return_value=True) def test_create_with_network_events_neutron_failed_fatal_error( self, is_neutron): self.assertRaises(exception.VirtualInterfaceCreateException, self._test_create_with_network_events, neutron_failure='error') @mock.patch('nova.utils.is_neutron', return_value=False) def test_create_with_network_events_non_neutron(self, is_neutron): self._test_create_with_network_events() def test_create_with_other_error(self): drvr = libvirt_driver.LibvirtDriver(mock.MagicMock(), False) @mock.patch.object(drvr, 'plug_vifs') @mock.patch.object(drvr, 'firewall_driver') @mock.patch.object(drvr, '_create_domain') @mock.patch.object(drvr, '_cleanup_failed_start') def the_test(mock_cleanup, mock_create, mock_fw, mock_plug): instance = objects.Instance(**self.test_instance) mock_create.side_effect = test.TestingException self.assertRaises(test.TestingException, drvr._create_domain_and_network, self.context, 'xml', instance, [], None) mock_cleanup.assert_called_once_with(self.context, instance, [], None, None, False) # destroy_disks_on_failure=True, used only by spawn() mock_cleanup.reset_mock() self.assertRaises(test.TestingException, drvr._create_domain_and_network, self.context, 'xml', instance, [], None, destroy_disks_on_failure=True) mock_cleanup.assert_called_once_with(self.context, instance, [], None, None, True) the_test() def test_cleanup_failed_start_no_guest(self): drvr = libvirt_driver.LibvirtDriver(mock.MagicMock(), False) with mock.patch.object(drvr, 'cleanup') as mock_cleanup: drvr._cleanup_failed_start(None, None, None, None, None, False) self.assertTrue(mock_cleanup.called) def test_cleanup_failed_start_inactive_guest(self): drvr = libvirt_driver.LibvirtDriver(mock.MagicMock(), False) guest = mock.MagicMock() guest.is_active.return_value = False with mock.patch.object(drvr, 'cleanup') as mock_cleanup: drvr._cleanup_failed_start(None, None, None, None, guest, False) self.assertTrue(mock_cleanup.called) self.assertFalse(guest.poweroff.called) def test_cleanup_failed_start_active_guest(self): drvr = libvirt_driver.LibvirtDriver(mock.MagicMock(), False) guest = mock.MagicMock() guest.is_active.return_value = True with mock.patch.object(drvr, 'cleanup') as mock_cleanup: drvr._cleanup_failed_start(None, None, None, None, guest, False) self.assertTrue(mock_cleanup.called) self.assertTrue(guest.poweroff.called) def test_cleanup_failed_start_failed_poweroff(self): drvr = libvirt_driver.LibvirtDriver(mock.MagicMock(), False) guest = mock.MagicMock() 
guest.is_active.return_value = True guest.poweroff.side_effect = test.TestingException with mock.patch.object(drvr, 'cleanup') as mock_cleanup: self.assertRaises(test.TestingException, drvr._cleanup_failed_start, None, None, None, None, guest, False) self.assertTrue(mock_cleanup.called) self.assertTrue(guest.poweroff.called) def test_cleanup_failed_start_failed_poweroff_destroy_disks(self): drvr = libvirt_driver.LibvirtDriver(mock.MagicMock(), False) guest = mock.MagicMock() guest.is_active.return_value = True guest.poweroff.side_effect = test.TestingException with mock.patch.object(drvr, 'cleanup') as mock_cleanup: self.assertRaises(test.TestingException, drvr._cleanup_failed_start, None, None, None, None, guest, True) mock_cleanup.assert_called_once_with(None, None, network_info=None, block_device_info=None, destroy_disks=True) self.assertTrue(guest.poweroff.called) @mock.patch('os_brick.encryptors.get_encryption_metadata') @mock.patch('nova.virt.libvirt.blockinfo.get_info_from_bdm') def test_create_with_bdm(self, get_info_from_bdm, get_encryption_metadata): drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) instance = objects.Instance(**self.test_instance) mock_dom = mock.MagicMock() mock_encryption_meta = mock.MagicMock() get_encryption_metadata.return_value = mock_encryption_meta fake_xml = """<domain><name>instance-00000001</name><memory>1048576</memory><vcpu>1</vcpu></domain>""" fake_volume_id = "fake-volume-id" connection_info = {"driver_volume_type": "fake", "data": {"access_mode": "rw", "volume_id": fake_volume_id}} def fake_getitem(*args, **kwargs): fake_bdm = {'connection_info': connection_info, 'mount_device': '/dev/vda'} return fake_bdm.get(args[0]) mock_volume = mock.MagicMock() mock_volume.__getitem__.side_effect = fake_getitem block_device_info = {'block_device_mapping': [mock_volume]} network_info = [network_model.VIF(id='1'), network_model.VIF(id='2', active=True)] with test.nested( mock.patch.object(drvr, 'plug_vifs'), mock.patch.object(drvr.firewall_driver, 'setup_basic_filtering'), mock.patch.object(drvr.firewall_driver, 'prepare_instance_filter'), mock.patch.object(drvr, '_create_domain'), mock.patch.object(drvr.firewall_driver, 'apply_instance_filter'), ) as (plug_vifs, setup_basic_filtering, prepare_instance_filter, create_domain, apply_instance_filter): create_domain.return_value = libvirt_guest.Guest(mock_dom) guest = drvr._create_domain_and_network( self.context, fake_xml, instance, network_info, block_device_info=block_device_info) plug_vifs.assert_called_once_with(instance, network_info) setup_basic_filtering.assert_called_once_with(instance, network_info) prepare_instance_filter.assert_called_once_with(instance, network_info) pause = self._get_pause_flag(drvr, network_info) create_domain.assert_called_once_with( fake_xml, pause=pause, power_on=True, post_xml_callback=None) self.assertEqual(mock_dom, guest._domain) def test_get_guest_storage_config(self): drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) test_instance = copy.deepcopy(self.test_instance) test_instance["default_swap_device"] = None instance = objects.Instance(**test_instance) image_meta = objects.ImageMeta.from_dict(self.test_image_meta) flavor = instance.get_flavor() conn_info = {'driver_volume_type': 'fake', 'data': {}} bdm = objects.BlockDeviceMapping( self.context, **fake_block_device.FakeDbBlockDeviceDict({ 'id': 1, 'source_type': 'volume', 'destination_type': 'volume', 'device_name': '/dev/vdc'})) bdi = {'block_device_mapping': driver_block_device.convert_volumes([bdm])} bdm = bdi['block_device_mapping'][0]
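# Seed the converted driver BDM with fake connection info before asking
# blockinfo for the disk layout, so the volume is treated as attached.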
bdm['connection_info'] = conn_info disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type, instance, image_meta, bdi) mock_conf = mock.MagicMock(source_path='fake') with test.nested( mock.patch.object(driver_block_device.DriverVolumeBlockDevice, 'save'), mock.patch.object(drvr, '_connect_volume'), mock.patch.object(drvr, '_get_volume_config', return_value=mock_conf) ) as (volume_save, connect_volume, get_volume_config): devices = drvr._get_guest_storage_config(self.context, instance, image_meta, disk_info, False, bdi, flavor, "hvm") self.assertEqual(3, len(devices)) self.assertEqual('/dev/vdb', instance.default_ephemeral_device) self.assertIsNone(instance.default_swap_device) connect_volume.assert_called_with(self.context, bdm['connection_info'], instance) get_volume_config.assert_called_with(bdm['connection_info'], {'bus': 'virtio', 'type': 'disk', 'dev': 'vdc'}) volume_save.assert_called_once_with() def test_get_neutron_events(self): drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) network_info = [network_model.VIF(id='1'), network_model.VIF(id='2', active=True)] events = drvr._get_neutron_events(network_info) self.assertEqual([('network-vif-plugged', '1')], events) def test_get_neutron_events_reboot(self): drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) bridge = network_model.VIF_TYPE_BRIDGE ovs = network_model.VIF_TYPE_OVS network_info = [network_model.VIF(id='1'), network_model.VIF(id='2', active=True), network_model.VIF(id='3', type=bridge), network_model.VIF(id='4', type=ovs)] events = drvr._get_neutron_events(network_info, reboot=True) self.assertEqual([('network-vif-plugged', '1'), ('network-vif-plugged', '2'), ('network-vif-plugged', '4')], events) def test_unplug_vifs_ignores_errors(self): drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI()) with mock.patch.object(drvr, 'vif_driver') as vif_driver: vif_driver.unplug.side_effect = exception.AgentError( method='unplug') drvr._unplug_vifs('inst', [1], ignore_errors=True) vif_driver.unplug.assert_called_once_with('inst', 1) def test_unplug_vifs_reports_errors(self): drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI()) with mock.patch.object(drvr, 'vif_driver') as vif_driver: vif_driver.unplug.side_effect = exception.AgentError( method='unplug') self.assertRaises(exception.AgentError, drvr.unplug_vifs, 'inst', [1]) vif_driver.unplug.assert_called_once_with('inst', 1) @mock.patch('nova.virt.libvirt.driver.LibvirtDriver._unplug_vifs') @mock.patch('nova.virt.libvirt.driver.LibvirtDriver._undefine_domain') def test_cleanup_pass_with_no_mount_device(self, undefine, unplug): drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI()) drvr.firewall_driver = mock.Mock() drvr._disconnect_volume = mock.Mock() fake_inst = {'name': 'foo'} fake_bdms = [{'connection_info': 'foo', 'mount_device': None}] with mock.patch('nova.virt.driver' '.block_device_info_get_mapping', return_value=fake_bdms): drvr.cleanup('ctxt', fake_inst, 'netinfo', destroy_disks=False) self.assertTrue(drvr._disconnect_volume.called) @mock.patch('nova.virt.libvirt.driver.LibvirtDriver._unplug_vifs') @mock.patch('nova.virt.libvirt.driver.LibvirtDriver._undefine_domain') def test_cleanup_wants_vif_errors_ignored(self, undefine, unplug): drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI()) fake_inst = {'name': 'foo'} with mock.patch.object(drvr._conn, 'lookupByUUIDString') as lookup: lookup.return_value = fake_inst # NOTE(danms): Make unplug cause us to bail early, since # we only care about how it was called unplug.side_effect = 
test.TestingException self.assertRaises(test.TestingException, drvr.cleanup, 'ctxt', fake_inst, 'netinfo') unplug.assert_called_once_with(fake_inst, 'netinfo', True) @mock.patch.object(libvirt_driver.LibvirtDriver, 'unfilter_instance') @mock.patch.object(libvirt_driver.LibvirtDriver, 'delete_instance_files', return_value=True) @mock.patch.object(objects.Instance, 'save') @mock.patch.object(libvirt_driver.LibvirtDriver, '_undefine_domain') def test_cleanup_migrate_data_shared_block_storage(self, _undefine_domain, save, delete_instance_files, unfilter_instance): # Tests the cleanup method when migrate_data has # is_shared_block_storage=True and destroy_disks=False. instance = objects.Instance(self.context, **self.test_instance) migrate_data = objects.LibvirtLiveMigrateData( is_shared_block_storage=True) drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI()) drvr.cleanup( self.context, instance, network_info={}, destroy_disks=False, migrate_data=migrate_data, destroy_vifs=False) delete_instance_files.assert_called_once_with(instance) self.assertEqual(1, int(instance.system_metadata['clean_attempts'])) self.assertTrue(instance.cleaned) save.assert_called_once_with() @mock.patch.object(libvirt_driver.LibvirtDriver, '_get_volume_encryption') @mock.patch.object(libvirt_driver.LibvirtDriver, '_use_native_luks') def test_swap_volume_native_luks_blocked(self, mock_use_native_luks, mock_get_encryption): drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI()) mock_get_encryption.return_value = {'provider': 'luks'} mock_use_native_luks.return_value = True self.assertRaises(NotImplementedError, drvr.swap_volume, self.context, {}, {}, None, None, None) @mock.patch('nova.virt.libvirt.guest.BlockDevice.is_job_complete', return_value=True) def _test_swap_volume(self, mock_is_job_complete, source_type, resize=False, fail=False): drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI()) mock_dom = mock.MagicMock() guest = libvirt_guest.Guest(mock_dom) with mock.patch.object(drvr._conn, 'defineXML', create=True) as mock_define: srcfile = "/first/path" dstfile = "/second/path" orig_xml = six.text_type(mock.sentinel.orig_xml) new_xml = six.text_type(mock.sentinel.new_xml) mock_dom.XMLDesc.return_value = orig_xml mock_dom.isPersistent.return_value = True def fake_rebase_success(*args, **kwargs): # Make sure the XML is set after the rebase so we know # get_xml_desc was called after the update. mock_dom.XMLDesc.return_value = new_xml if not fail: mock_dom.blockRebase.side_effect = fake_rebase_success # If the swap succeeds, make sure we use the new XML to # redefine the domain. expected_xml = new_xml else: if resize: mock_dom.blockResize.side_effect = test.TestingException() expected_exception = test.TestingException else: mock_dom.blockRebase.side_effect = test.TestingException() expected_exception = exception.VolumeRebaseFailed # If the swap fails, make sure we use the original domain XML # to redefine the domain. expected_xml = orig_xml # Run the swap volume code. mock_conf = mock.MagicMock(source_type=source_type, source_path=dstfile) if not fail: drvr._swap_volume(guest, srcfile, mock_conf, 1) else: self.assertRaises(expected_exception, drvr._swap_volume, guest, srcfile, mock_conf, 1) # Verify we read the original persistent config. expected_call_count = 1 expected_calls = [mock.call( flags=(fakelibvirt.VIR_DOMAIN_XML_INACTIVE | fakelibvirt.VIR_DOMAIN_XML_SECURE))] if not fail: # Verify we read the updated live config. 
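# (On success XMLDesc is read twice: once with INACTIVE|SECURE for the
# persistent config and once with SECURE for the live config; on failure
# only the first, persistent read happens.)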
expected_call_count = 2 expected_calls.append( mock.call(flags=fakelibvirt.VIR_DOMAIN_XML_SECURE)) self.assertEqual(expected_call_count, mock_dom.XMLDesc.call_count) mock_dom.XMLDesc.assert_has_calls(expected_calls) # Verify we called with the correct flags. expected_flags = (fakelibvirt.VIR_DOMAIN_BLOCK_REBASE_COPY | fakelibvirt.VIR_DOMAIN_BLOCK_REBASE_REUSE_EXT) if source_type == 'block': expected_flags = (expected_flags | fakelibvirt.VIR_DOMAIN_BLOCK_REBASE_COPY_DEV) mock_dom.blockRebase.assert_called_once_with(srcfile, dstfile, 0, flags=expected_flags) # Verify we defined the expected XML. mock_define.assert_called_once_with(expected_xml) # Verify we called resize with the correct args. if resize: mock_dom.blockResize.assert_called_once_with( srcfile, 1 * units.Gi / units.Ki) def test_swap_volume_file(self): self._test_swap_volume('file') def test_swap_volume_block(self): """If the swapped volume is type="block", make sure that we give libvirt the correct VIR_DOMAIN_BLOCK_REBASE_COPY_DEV flag to ensure the correct type="block" XML is generated (bug 1691195) """ self._test_swap_volume('block') def test_swap_volume_rebase_fail(self): self._test_swap_volume('block', fail=True) def test_swap_volume_resize_fail(self): self._test_swap_volume('file', resize=True, fail=True) @mock.patch('nova.virt.libvirt.driver.LibvirtDriver._disconnect_volume') @mock.patch('nova.virt.libvirt.driver.LibvirtDriver._swap_volume') @mock.patch('nova.virt.libvirt.driver.LibvirtDriver._get_volume_config') @mock.patch('nova.virt.libvirt.driver.LibvirtDriver._connect_volume') @mock.patch('nova.virt.libvirt.host.Host.get_guest') def test_swap_volume(self, get_guest, connect_volume, get_volume_config, swap_volume, disconnect_volume): conn = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI()) instance = objects.Instance(**self.test_instance) old_connection_info = {'driver_volume_type': 'fake', 'serial': 'old-volume-id', 'data': {'device_path': '/fake-old-volume', 'access_mode': 'rw'}} new_connection_info = {'driver_volume_type': 'fake', 'serial': 'new-volume-id', 'data': {'device_path': '/fake-new-volume', 'access_mode': 'rw'}} mock_dom = mock.MagicMock() guest = libvirt_guest.Guest(mock_dom) mock_dom.XMLDesc.return_value = """<domain><devices><disk type='file'><source file='/fake-old-volume'/><target dev='vdb' bus='virtio'/></disk></devices></domain>""" mock_dom.name.return_value = 'inst' mock_dom.UUIDString.return_value = 'uuid' get_guest.return_value = guest conf = mock.MagicMock(source_path='/fake-new-volume') get_volume_config.return_value = conf conn.swap_volume(self.context, old_connection_info, new_connection_info, instance, '/dev/vdb', 1) get_guest.assert_called_once_with(instance) connect_volume.assert_called_once_with(self.context, new_connection_info, instance) swap_volume.assert_called_once_with(guest, 'vdb', conf, 1) disconnect_volume.assert_called_once_with(self.context, old_connection_info, instance) @mock.patch.object(libvirt_driver.LibvirtDriver, '_get_volume_encryption') @mock.patch('nova.virt.libvirt.guest.BlockDevice.rebase') @mock.patch('nova.virt.libvirt.driver.LibvirtDriver._disconnect_volume') @mock.patch('nova.virt.libvirt.driver.LibvirtDriver._connect_volume') @mock.patch('nova.virt.libvirt.driver.LibvirtDriver._get_volume_config') @mock.patch('nova.virt.libvirt.guest.Guest.get_disk') @mock.patch('nova.virt.libvirt.host.Host.get_guest') @mock.patch('nova.virt.libvirt.host.Host.write_instance_config') def test_swap_volume_disconnect_new_volume_on_rebase_error(self, write_config, get_guest, get_disk, get_volume_config, connect_volume, disconnect_volume, rebase, get_volume_encryption): """Assert that disconnect_volume is
called for the new volume if an error is encountered while rebasing """ conn = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI()) instance = objects.Instance(**self.test_instance) guest = libvirt_guest.Guest(mock.MagicMock()) get_guest.return_value = guest get_volume_encryption.return_value = {} exc = fakelibvirt.make_libvirtError(fakelibvirt.libvirtError, 'internal error', error_code=fakelibvirt.VIR_ERR_INTERNAL_ERROR) rebase.side_effect = exc self.assertRaises(exception.VolumeRebaseFailed, conn.swap_volume, self.context, mock.sentinel.old_connection_info, mock.sentinel.new_connection_info, instance, '/dev/vdb', 0) connect_volume.assert_called_once_with(self.context, mock.sentinel.new_connection_info, instance) disconnect_volume.assert_called_once_with(self.context, mock.sentinel.new_connection_info, instance) @mock.patch.object(libvirt_driver.LibvirtDriver, '_get_volume_encryption') @mock.patch('nova.virt.libvirt.guest.BlockDevice.is_job_complete') @mock.patch('nova.virt.libvirt.guest.BlockDevice.abort_job') @mock.patch('nova.virt.libvirt.driver.LibvirtDriver._disconnect_volume') @mock.patch('nova.virt.libvirt.driver.LibvirtDriver._connect_volume') @mock.patch('nova.virt.libvirt.driver.LibvirtDriver._get_volume_config') @mock.patch('nova.virt.libvirt.guest.Guest.get_disk') @mock.patch('nova.virt.libvirt.host.Host.get_guest') @mock.patch('nova.virt.libvirt.host.Host.write_instance_config') def test_swap_volume_disconnect_new_volume_on_pivot_error(self, write_config, get_guest, get_disk, get_volume_config, connect_volume, disconnect_volume, abort_job, is_job_complete, get_volume_encryption): """Assert that disconnect_volume is called for the new volume if an error is encountered while pivoting to the new volume """ conn = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI()) instance = objects.Instance(**self.test_instance) guest = libvirt_guest.Guest(mock.MagicMock()) get_guest.return_value = guest get_volume_encryption.return_value = {} exc = fakelibvirt.make_libvirtError(fakelibvirt.libvirtError, 'internal error', error_code=fakelibvirt.VIR_ERR_INTERNAL_ERROR) is_job_complete.return_value = True abort_job.side_effect = [None, exc] self.assertRaises(exception.VolumeRebaseFailed, conn.swap_volume, self.context, mock.sentinel.old_connection_info, mock.sentinel.new_connection_info, instance, '/dev/vdb', 0) connect_volume.assert_called_once_with(self.context, mock.sentinel.new_connection_info, instance) disconnect_volume.assert_called_once_with(self.context, mock.sentinel.new_connection_info, instance) @mock.patch('nova.virt.libvirt.guest.BlockDevice.is_job_complete') @mock.patch('nova.privsep.path.chown') def _test_live_snapshot(self, mock_chown, mock_is_job_complete, can_quiesce=False, require_quiesce=False): drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI()) mock_dom = mock.MagicMock() test_image_meta = self.test_image_meta.copy() if require_quiesce: test_image_meta = {'properties': {'os_require_quiesce': 'yes'}} with test.nested( mock.patch.object(drvr._conn, 'defineXML', create=True), mock.patch.object(fake_libvirt_utils, 'get_disk_size'), mock.patch.object(fake_libvirt_utils, 'get_disk_backing_file'), mock.patch.object(fake_libvirt_utils, 'create_cow_image'), mock.patch.object(fake_libvirt_utils, 'extract_snapshot'), mock.patch.object(drvr, '_set_quiesced') ) as (mock_define, mock_size, mock_backing, mock_create_cow, mock_snapshot, mock_quiesce): xmldoc = "<domain/>" srcfile = "/first/path" dstfile = "/second/path" bckfile = "/other/path" dltfile = dstfile + ".delta" mock_dom.XMLDesc.return_value
= xmldoc mock_dom.isPersistent.return_value = True mock_size.return_value = 1004009 mock_backing.return_value = bckfile guest = libvirt_guest.Guest(mock_dom) if not can_quiesce: mock_quiesce.side_effect = ( exception.InstanceQuiesceNotSupported( instance_id=self.test_instance['id'], reason='test')) image_meta = objects.ImageMeta.from_dict(test_image_meta) mock_is_job_complete.return_value = True drvr._live_snapshot(self.context, self.test_instance, guest, srcfile, dstfile, "qcow2", "qcow2", image_meta) mock_dom.XMLDesc.assert_called_once_with(flags=( fakelibvirt.VIR_DOMAIN_XML_INACTIVE | fakelibvirt.VIR_DOMAIN_XML_SECURE)) mock_dom.blockRebase.assert_called_once_with( srcfile, dltfile, 0, flags=( fakelibvirt.VIR_DOMAIN_BLOCK_REBASE_COPY | fakelibvirt.VIR_DOMAIN_BLOCK_REBASE_REUSE_EXT | fakelibvirt.VIR_DOMAIN_BLOCK_REBASE_SHALLOW)) mock_size.assert_called_once_with(srcfile, format="qcow2") mock_backing.assert_called_once_with(srcfile, basename=False, format="qcow2") mock_create_cow.assert_called_once_with(bckfile, dltfile, 1004009) mock_chown.assert_called_once_with(dltfile, uid=os.getuid()) mock_snapshot.assert_called_once_with(dltfile, "qcow2", dstfile, "qcow2") mock_define.assert_called_once_with(xmldoc) mock_quiesce.assert_any_call(mock.ANY, self.test_instance, mock.ANY, True) if can_quiesce: mock_quiesce.assert_any_call(mock.ANY, self.test_instance, mock.ANY, False) def test_live_snapshot(self): self._test_live_snapshot() def test_live_snapshot_with_quiesce(self): self._test_live_snapshot(can_quiesce=True) def test_live_snapshot_with_require_quiesce(self): self._test_live_snapshot(can_quiesce=True, require_quiesce=True) def test_live_snapshot_with_require_quiesce_fails(self): self.assertRaises(exception.InstanceQuiesceNotSupported, self._test_live_snapshot, can_quiesce=False, require_quiesce=True) @mock.patch.object(libvirt_driver.LibvirtDriver, "_live_migration") def test_live_migration_hostname_valid(self, mock_lm): drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) drvr.live_migration(self.context, self.test_instance, "host1.example.com", lambda x: x, lambda x: x) self.assertEqual(1, mock_lm.call_count) @mock.patch.object(libvirt_driver.LibvirtDriver, "_live_migration") @mock.patch.object(fake_libvirt_utils, "is_valid_hostname") def test_live_migration_hostname_invalid(self, mock_hostname, mock_lm): drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) mock_hostname.return_value = False self.assertRaises(exception.InvalidHostname, drvr.live_migration, self.context, self.test_instance, "foo/?com=/bin/sh", lambda x: x, lambda x: x) def test_live_migration_force_complete(self): drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) instance = fake_instance.fake_instance_obj( None, name='instancename', id=1, uuid='c83a75d4-4d53-4be5-9a40-04d9c0389ff8') drvr.active_migrations[instance.uuid] = deque() drvr.live_migration_force_complete(instance) self.assertEqual( 1, drvr.active_migrations[instance.uuid].count("force-complete")) @mock.patch.object(host.Host, "get_connection") @mock.patch.object(fakelibvirt.virDomain, "abortJob") def test_live_migration_abort(self, mock_abort, mock_conn): drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) dom = fakelibvirt.Domain(drvr._get_connection(), "", False) guest = libvirt_guest.Guest(dom) with mock.patch.object(nova.virt.libvirt.host.Host, 'get_guest', return_value=guest): drvr.live_migration_abort(self.test_instance) self.assertTrue(mock_abort.called) @mock.patch('os.path.exists', return_value=True) 
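# The mkstemp/close/exists patches below fake the temporary-file handshake
# that check_instance_shared_storage_local() performs under instances_path
# to probe whether the destination can see the same storage.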
@mock.patch('tempfile.mkstemp') @mock.patch('os.close', return_value=None) def test_check_instance_shared_storage_local_raw(self, mock_close, mock_mkstemp, mock_exists): instance_uuid = uuids.fake self.flags(images_type='raw', group='libvirt') self.flags(instances_path='/tmp') mock_mkstemp.return_value = (-1, '/tmp/{0}/file'.format(instance_uuid)) driver = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) instance = objects.Instance(**self.test_instance) temp_file = driver.check_instance_shared_storage_local(self.context, instance) self.assertEqual('/tmp/{0}/file'.format(instance_uuid), temp_file['filename']) def test_check_instance_shared_storage_local_rbd(self): self.flags(images_type='rbd', group='libvirt') driver = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) instance = objects.Instance(**self.test_instance) self.assertIsNone(driver. check_instance_shared_storage_local(self.context, instance)) def test_version_to_string(self): driver = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) string_ver = driver._version_to_string((4, 33, 173)) self.assertEqual("4.33.173", string_ver) def test_virtuozzo_min_version_fail(self): self.flags(virt_type='parallels', group='libvirt') driver = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) with test.nested( mock.patch.object( driver._conn, 'getVersion'), mock.patch.object( driver._conn, 'getLibVersion'))\ as (mock_getver, mock_getlibver): mock_getver.return_value = \ versionutils.convert_version_to_int( libvirt_driver.MIN_VIRTUOZZO_VERSION) - 1 mock_getlibver.return_value = \ versionutils.convert_version_to_int( libvirt_driver.MIN_LIBVIRT_VIRTUOZZO_VERSION) self.assertRaises(exception.NovaException, driver.init_host, 'wibble') mock_getver.return_value = \ versionutils.convert_version_to_int( libvirt_driver.MIN_VIRTUOZZO_VERSION) mock_getlibver.return_value = \ versionutils.convert_version_to_int( libvirt_driver.MIN_LIBVIRT_VIRTUOZZO_VERSION) - 1 self.assertRaises(exception.NovaException, driver.init_host, 'wibble') def test_virtuozzo_min_version_ok(self): self.flags(virt_type='parallels', group='libvirt') driver = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) with test.nested( mock.patch.object( driver._conn, 'getVersion'), mock.patch.object( driver._conn, 'getLibVersion'))\ as (mock_getver, mock_getlibver): mock_getver.return_value = \ versionutils.convert_version_to_int( libvirt_driver.MIN_VIRTUOZZO_VERSION) mock_getlibver.return_value = \ versionutils.convert_version_to_int( libvirt_driver.MIN_LIBVIRT_VIRTUOZZO_VERSION) driver.init_host('wibble') def test_get_guest_config_parallels_vm(self): self.flags(virt_type='parallels', group='libvirt') self.flags(images_type='ploop', group='libvirt') drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) instance_ref = objects.Instance(**self.test_instance) image_meta = objects.ImageMeta.from_dict(self.test_image_meta) disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type, instance_ref, image_meta) cfg = drvr._get_guest_config(instance_ref, _fake_network_info(self, 1), image_meta, disk_info) self.assertEqual("parallels", cfg.virt_type) self.assertEqual(instance_ref["uuid"], cfg.uuid) self.assertEqual(instance_ref.flavor.memory_mb * units.Ki, cfg.memory) self.assertEqual(instance_ref.flavor.vcpus, cfg.vcpus) self.assertEqual(fields.VMMode.HVM, cfg.os_type) self.assertIsNone(cfg.os_root) self.assertEqual(6, len(cfg.devices)) self.assertIsInstance(cfg.devices[0], vconfig.LibvirtConfigGuestDisk) self.assertEqual(cfg.devices[0].driver_format, "ploop") 
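# Devices 1-5 should follow the standard parallels VM layout asserted in
# order below: second disk, interface, input, graphics, video.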
self.assertIsInstance(cfg.devices[1], vconfig.LibvirtConfigGuestDisk) self.assertIsInstance(cfg.devices[2], vconfig.LibvirtConfigGuestInterface) self.assertIsInstance(cfg.devices[3], vconfig.LibvirtConfigGuestInput) self.assertIsInstance(cfg.devices[4], vconfig.LibvirtConfigGuestGraphics) self.assertIsInstance(cfg.devices[5], vconfig.LibvirtConfigGuestVideo) def test_get_guest_config_parallels_ct_rescue(self): self._test_get_guest_config_parallels_ct(rescue=True) def test_get_guest_config_parallels_ct(self): self._test_get_guest_config_parallels_ct(rescue=False) def _test_get_guest_config_parallels_ct(self, rescue=False): self.flags(virt_type='parallels', group='libvirt') drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) ct_instance = self.test_instance.copy() ct_instance["vm_mode"] = fields.VMMode.EXE instance_ref = objects.Instance(**ct_instance) image_meta = objects.ImageMeta.from_dict(self.test_image_meta) if rescue: rescue_data = ct_instance else: rescue_data = None cfg = drvr._get_guest_config(instance_ref, _fake_network_info(self, 1), image_meta, {'mapping': {'disk': {}}}, rescue_data) self.assertEqual("parallels", cfg.virt_type) self.assertEqual(instance_ref["uuid"], cfg.uuid) self.assertEqual(instance_ref.flavor.memory_mb * units.Ki, cfg.memory) self.assertEqual(instance_ref.flavor.vcpus, cfg.vcpus) self.assertEqual(fields.VMMode.EXE, cfg.os_type) self.assertEqual("/sbin/init", cfg.os_init_path) self.assertIsNone(cfg.os_root) if rescue: self.assertEqual(5, len(cfg.devices)) else: self.assertEqual(4, len(cfg.devices)) self.assertIsInstance(cfg.devices[0], vconfig.LibvirtConfigGuestFilesys) device_index = 0 fs = cfg.devices[device_index] self.assertEqual(fs.source_type, "file") self.assertEqual(fs.driver_type, "ploop") self.assertEqual(fs.target_dir, "/") if rescue: device_index = 1 fs = cfg.devices[device_index] self.assertEqual(fs.source_type, "file") self.assertEqual(fs.driver_type, "ploop") self.assertEqual(fs.target_dir, "/mnt/rescue") self.assertIsInstance(cfg.devices[device_index + 1], vconfig.LibvirtConfigGuestInterface) self.assertIsInstance(cfg.devices[device_index + 2], vconfig.LibvirtConfigGuestGraphics) self.assertIsInstance(cfg.devices[device_index + 3], vconfig.LibvirtConfigGuestVideo) def _test_get_guest_config_parallels_volume(self, vmmode, devices): self.flags(virt_type='parallels', group='libvirt') drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) ct_instance = self.test_instance.copy() ct_instance["vm_mode"] = vmmode instance_ref = objects.Instance(**ct_instance) image_meta = objects.ImageMeta.from_dict(self.test_image_meta) conn_info = {'driver_volume_type': 'fake', 'data': {}} bdm = objects.BlockDeviceMapping( self.context, **fake_block_device.FakeDbBlockDeviceDict( {'id': 0, 'source_type': 'volume', 'destination_type': 'volume', 'device_name': '/dev/sda'})) info = {'block_device_mapping': driver_block_device.convert_volumes( [bdm])} info['block_device_mapping'][0]['connection_info'] = conn_info disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type, instance_ref, image_meta, info) with mock.patch.object( driver_block_device.DriverVolumeBlockDevice, 'save' ) as mock_save: cfg = drvr._get_guest_config(instance_ref, _fake_network_info(self, 1), image_meta, disk_info, None, info) mock_save.assert_called_once_with() self.assertEqual("parallels", cfg.virt_type) self.assertEqual(instance_ref["uuid"], cfg.uuid) self.assertEqual(instance_ref.flavor.memory_mb * units.Ki, cfg.memory) self.assertEqual(instance_ref.flavor.vcpus, cfg.vcpus) 
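# os_type must mirror the requested vm_mode (EXE for containers, HVM for
# VMs), and no Filesys device may appear when the root disk is a volume.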
self.assertEqual(vmmode, cfg.os_type) self.assertIsNone(cfg.os_root) self.assertEqual(devices, len(cfg.devices)) disk_found = False for dev in cfg.devices: result = isinstance(dev, vconfig.LibvirtConfigGuestFilesys) self.assertFalse(result) if (isinstance(dev, vconfig.LibvirtConfigGuestDisk) and (dev.source_path is None or 'disk.local' not in dev.source_path)): self.assertEqual("disk", dev.source_device) self.assertEqual("sda", dev.target_dev) disk_found = True self.assertTrue(disk_found) def test_get_guest_config_parallels_volume(self): self._test_get_guest_config_parallels_volume(fields.VMMode.EXE, 4) self._test_get_guest_config_parallels_volume(fields.VMMode.HVM, 6) def test_get_guest_disk_config_rbd_older_config_drive_fall_back(self): # New config drives are stored in rbd but existing instances have # config drives in the old location under the instances path. # Test that the driver falls back to 'flat' for config drive if it # doesn't exist in rbd. self.flags(images_type='rbd', group='libvirt') drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) drvr.image_backend = mock.Mock() mock_rbd_image = mock.Mock() mock_flat_image = mock.Mock() mock_flat_image.libvirt_info.return_value = mock.sentinel.diskconfig drvr.image_backend.by_name.side_effect = [mock_rbd_image, mock_flat_image] mock_rbd_image.exists.return_value = False instance = objects.Instance() disk_mapping = {'disk.config': {'bus': 'ide', 'dev': 'hda', 'type': 'file'}} flavor = objects.Flavor(extra_specs={}) diskconfig = drvr._get_guest_disk_config( instance, 'disk.config', disk_mapping, flavor, drvr._get_disk_config_image_type()) self.assertEqual(2, drvr.image_backend.by_name.call_count) call1 = mock.call(instance, 'disk.config', 'rbd') call2 = mock.call(instance, 'disk.config', 'flat') drvr.image_backend.by_name.assert_has_calls([call1, call2]) self.assertEqual(mock.sentinel.diskconfig, diskconfig) def _test_prepare_domain_for_snapshot(self, live_snapshot, state): drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) instance_ref = objects.Instance(**self.test_instance) with mock.patch.object(drvr, "suspend") as mock_suspend: drvr._prepare_domain_for_snapshot( self.context, live_snapshot, state, instance_ref) return mock_suspend.called def test_prepare_domain_for_snapshot(self): # Ensure that suspend() is only called on RUNNING or PAUSED instances for test_power_state in power_state.STATE_MAP.keys(): if test_power_state in (power_state.RUNNING, power_state.PAUSED): self.assertTrue(self._test_prepare_domain_for_snapshot( False, test_power_state)) else: self.assertFalse(self._test_prepare_domain_for_snapshot( False, test_power_state)) def test_prepare_domain_for_snapshot_lxc(self): self.flags(virt_type='lxc', group='libvirt') # Ensure that suspend() is never called with LXC for test_power_state in power_state.STATE_MAP.keys(): self.assertFalse(self._test_prepare_domain_for_snapshot( False, test_power_state)) def test_prepare_domain_for_snapshot_live_snapshots(self): # Ensure that suspend() is never called for live snapshots for test_power_state in power_state.STATE_MAP.keys(): self.assertFalse(self._test_prepare_domain_for_snapshot( True, test_power_state)) @mock.patch('os.walk') @mock.patch('os.path.exists') @mock.patch('os.path.getsize') @mock.patch('os.path.isdir') @mock.patch('nova.utils.execute') @mock.patch.object(host.Host, '_get_domain') def test_get_instance_disk_info_parallels_ct(self, mock_get_domain, mock_execute, mock_isdir, mock_getsize, mock_exists, mock_walk): dummyxml = ("<domain type='parallels'><name>instance-0000000a</name>" "<os><type>exe</type></os>" "<devices>" "<filesystem type='file'>" "<driver format='ploop' type='ploop'/>" "<source file='/test/disk'/>" "<target dir='/'/>" "</filesystem></devices></domain>") ret = ("image: /test/disk/root.hds\n" "file format: parallels\n" "virtual size: 20G (21474836480 bytes)\n" "disk size: 789M\n") self.flags(virt_type='parallels', group='libvirt') instance = objects.Instance(**self.test_instance) instance.vm_mode = fields.VMMode.EXE fake_dom = FakeVirtDomain(fake_xml=dummyxml) drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) mock_get_domain.return_value = fake_dom mock_walk.return_value = [('/test/disk', [], ['DiskDescriptor.xml', 'root.hds'])] def getsize_sideeffect(*args, **kwargs): if args[0] == '/test/disk/DiskDescriptor.xml': return 790 if args[0] == '/test/disk/root.hds': return 827326464 mock_getsize.side_effect = getsize_sideeffect mock_exists.return_value = True mock_isdir.return_value = True mock_execute.return_value = (ret, '') info = drvr.get_instance_disk_info(instance) info = jsonutils.loads(info) self.assertEqual(info[0]['type'], 'ploop') self.assertEqual(info[0]['path'], '/test/disk') self.assertEqual(info[0]['disk_size'], 827327254) self.assertEqual(info[0]['over_committed_disk_size'], 20647509226) self.assertEqual(info[0]['virt_disk_size'], 21474836480) def test_get_guest_config_with_mdevs(self): mdevs = [uuids.mdev1] drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) instance_ref = objects.Instance(**self.test_instance) image_meta = objects.ImageMeta.from_dict(self.test_image_meta) cfg = drvr._get_guest_config(instance_ref, _fake_network_info(self, 1), image_meta, {'mapping': {}}, mdevs=mdevs) # Loop over all devices to make sure we have at least one mediated one. for device in cfg.devices: if isinstance(device, vconfig.LibvirtConfigGuestHostdevMDEV): # Make sure we use the provided UUID self.assertEqual(uuids.mdev1, device.uuid) break else: assert False, "Unable to find any mediated device for the guest."
class HostStateTestCase(test.NoDBTestCase): cpu_info = {"vendor": "Intel", "model": "pentium", "arch": "i686", "features": ["ssse3", "monitor", "pni", "sse2", "sse", "fxsr", "clflush", "pse36", "pat", "cmov", "mca", "pge", "mtrr", "sep", "apic"], "topology": {"cores": "1", "threads": "1", "sockets": "1"}} instance_caps = [(fields.Architecture.X86_64, "kvm", "hvm"), (fields.Architecture.I686, "kvm", "hvm")] pci_devices = [{ "dev_id": "pci_0000_04_00_3", "address": "0000:04:10.3", "product_id": '1521', "vendor_id": '8086', "dev_type": fields.PciDeviceType.SRIOV_PF, "phys_function": None}] numa_topology = objects.NUMATopology( cells=[objects.NUMACell( id=1, cpuset=set([1, 2]), memory=1024, cpu_usage=0, memory_usage=0, mempages=[], siblings=[], pinned_cpus=set([])), objects.NUMACell( id=2, cpuset=set([3, 4]), memory=1024, cpu_usage=0, memory_usage=0, mempages=[], siblings=[], pinned_cpus=set([]))]) class FakeConnection(libvirt_driver.LibvirtDriver): """Fake connection object.""" def __init__(self): super(HostStateTestCase.FakeConnection, self).__init__(fake.FakeVirtAPI(), True) self._host = host.Host("qemu:///system") def _get_memory_mb_total(): return 497 def _get_memory_mb_used(): return 88 self._host.get_memory_mb_total = _get_memory_mb_total self._host.get_memory_mb_used = _get_memory_mb_used def _get_vcpu_total(self): return 1 def _get_vcpu_used(self): return 0 def _get_vgpu_total(self): return 0 def _get_cpu_info(self): return HostStateTestCase.cpu_info def _get_disk_over_committed_size_total(self): return 0 def _get_local_gb_info(self): return {'total': 100, 'used': 20, 'free': 80} def get_host_uptime(self): return ('10:01:16 up 1:36, 6 users, ' 'load average: 0.21, 0.16, 0.19') def _get_disk_available_least(self): return 13091 def _get_instance_capabilities(self): return HostStateTestCase.instance_caps def _get_pci_passthrough_devices(self): return jsonutils.dumps(HostStateTestCase.pci_devices) def _get_mdev_capable_devices(self, types=None): return [] def _get_mediated_devices(self, types=None): return [] def _get_host_numa_topology(self): return HostStateTestCase.numa_topology def setUp(self): super(HostStateTestCase, self).setUp() self.useFixture(fakelibvirt.FakeLibvirtFixture()) @mock.patch.object(fakelibvirt, "openAuth") def test_update_status(self, mock_open): mock_open.return_value = fakelibvirt.Connection("qemu:///system") drvr = HostStateTestCase.FakeConnection() stats = drvr.get_available_resource("compute1") self.assertEqual(stats["vcpus"], 1) self.assertEqual(stats["memory_mb"], 497) self.assertEqual(stats["local_gb"], 100) self.assertEqual(stats["vcpus_used"], 0) self.assertEqual(stats["memory_mb_used"], 88) self.assertEqual(stats["local_gb_used"], 20) self.assertEqual(stats["hypervisor_type"], 'QEMU') self.assertEqual(stats["hypervisor_version"], fakelibvirt.FAKE_QEMU_VERSION) self.assertEqual(stats["hypervisor_hostname"], 'compute1') cpu_info = jsonutils.loads(stats["cpu_info"]) self.assertEqual(cpu_info, {"vendor": "Intel", "model": "pentium", "arch": fields.Architecture.I686, "features": ["ssse3", "monitor", "pni", "sse2", "sse", "fxsr", "clflush", "pse36", "pat", "cmov", "mca", "pge", "mtrr", "sep", "apic"], "topology": {"cores": "1", "threads": "1", "sockets": "1"} }) self.assertEqual(stats["disk_available_least"], 80) self.assertEqual(jsonutils.loads(stats["pci_passthrough_devices"]), HostStateTestCase.pci_devices) self.assertThat(objects.NUMATopology.obj_from_db_obj( stats['numa_topology'])._to_dict(), matchers.DictMatches( 
HostStateTestCase.numa_topology._to_dict())) class TestGetInventory(test.NoDBTestCase): def setUp(self): super(TestGetInventory, self).setUp() self.useFixture(fakelibvirt.FakeLibvirtFixture()) self.driver = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) @mock.patch('nova.virt.libvirt.driver.LibvirtDriver._get_vgpu_total') @mock.patch('nova.virt.libvirt.driver.LibvirtDriver._get_local_gb_info', return_value={'total': 200}) @mock.patch('nova.virt.libvirt.host.Host.get_memory_mb_total', return_value=1024) @mock.patch('nova.virt.libvirt.driver.LibvirtDriver._get_vcpu_total', return_value=24) def _test_get_inventory(self, mock_vcpu, mock_mem, mock_disk, mock_vgpus, total_vgpus=0): mock_vgpus.return_value = total_vgpus expected_inv = { fields.ResourceClass.VCPU: { 'total': 24, 'min_unit': 1, 'max_unit': 24, 'step_size': 1, }, fields.ResourceClass.MEMORY_MB: { 'total': 1024, 'min_unit': 1, 'max_unit': 1024, 'step_size': 1, }, fields.ResourceClass.DISK_GB: { 'total': 200, 'min_unit': 1, 'max_unit': 200, 'step_size': 1, }, } if total_vgpus > 0: expected_inv.update({ fields.ResourceClass.VGPU: { 'total': total_vgpus, 'min_unit': 1, 'max_unit': total_vgpus, 'step_size': 1, } }) inv = self.driver.get_inventory(mock.sentinel.nodename) self.assertEqual(expected_inv, inv) def test_get_inventory(self): self._test_get_inventory() def test_get_inventory_with_vgpus(self): self._test_get_inventory(total_vgpus=8) class LibvirtDriverTestCase(test.NoDBTestCase): """Test for nova.virt.libvirt.libvirt_driver.LibvirtDriver.""" def setUp(self): super(LibvirtDriverTestCase, self).setUp() self.flags(sysinfo_serial="none", group="libvirt") self.flags(instances_path=self.useFixture(fixtures.TempDir()).path) self.useFixture(fakelibvirt.FakeLibvirtFixture()) os_vif.initialize() self.drvr = libvirt_driver.LibvirtDriver( fake.FakeVirtAPI(), read_only=True) self.context = context.get_admin_context() self.test_image_meta = { "disk_format": "raw", } def _create_instance(self, params=None): """Create a test instance.""" if not params: params = {} flavor = objects.Flavor(memory_mb=512, swap=0, vcpu_weight=None, root_gb=10, id=2, name=u'm1.tiny', ephemeral_gb=20, rxtx_factor=1.0, flavorid=u'1', vcpus=1, extra_specs={}) flavor.update(params.pop('flavor', {})) inst = {} inst['id'] = 1 inst['uuid'] = uuids.fake_instance_id inst['os_type'] = 'linux' inst['image_ref'] = uuids.fake_image_ref inst['reservation_id'] = 'r-fakeres' inst['user_id'] = 'fake' inst['project_id'] = 'fake' inst['instance_type_id'] = 2 inst['ami_launch_index'] = 0 inst['host'] = 'host1' inst['root_gb'] = flavor.root_gb inst['ephemeral_gb'] = flavor.ephemeral_gb inst['config_drive'] = True inst['kernel_id'] = 2 inst['ramdisk_id'] = 3 inst['key_data'] = 'ABCDEFG' inst['system_metadata'] = {} inst['metadata'] = {} inst['task_state'] = None inst.update(params) instance = fake_instance.fake_instance_obj( self.context, expected_attrs=['metadata', 'system_metadata', 'pci_devices'], flavor=flavor, **inst) # Attributes which we need to be set so they don't touch the db, # but it's not worth the effort to fake properly for field in ['numa_topology', 'vcpu_model']: setattr(instance, field, None) return instance def test_migrate_disk_and_power_off_exception(self): """Test for nova.virt.libvirt.libvirt_driver.LibvirtConnection .migrate_disk_and_power_off.
""" self.counter = 0 self.checked_shared_storage = False def fake_get_instance_disk_info(instance, block_device_info): return [] def fake_destroy(instance): pass def fake_get_host_ip_addr(): return '10.0.0.1' def fake_execute(*args, **kwargs): self.counter += 1 if self.counter == 1: assert False, "intentional failure" def fake_os_path_exists(path): return True def fake_is_storage_shared(dest, inst_base): self.checked_shared_storage = True return False self.stubs.Set(self.drvr, '_get_instance_disk_info', fake_get_instance_disk_info) self.stubs.Set(self.drvr, '_destroy', fake_destroy) self.stubs.Set(self.drvr, 'get_host_ip_addr', fake_get_host_ip_addr) self.stubs.Set(self.drvr, '_is_storage_shared_with', fake_is_storage_shared) self.stubs.Set(utils, 'execute', fake_execute) self.stub_out('os.path.exists', fake_os_path_exists) ins_ref = self._create_instance() flavor = {'root_gb': 10, 'ephemeral_gb': 20} flavor_obj = objects.Flavor(**flavor) self.assertRaises(AssertionError, self.drvr.migrate_disk_and_power_off, context.get_admin_context(), ins_ref, '10.0.0.2', flavor_obj, None) def _test_migrate_disk_and_power_off(self, ctxt, flavor_obj, block_device_info=None, params_for_instance=None): """Test for nova.virt.libvirt.libvirt_driver.LivirtConnection .migrate_disk_and_power_off. """ instance = self._create_instance(params=params_for_instance) disk_info = list(fake_disk_info_byname(instance).values()) disk_info_text = jsonutils.dumps(disk_info) def fake_get_instance_disk_info(instance, block_device_info): return disk_info def fake_destroy(instance): pass def fake_get_host_ip_addr(): return '10.0.0.1' def fake_execute(*args, **kwargs): pass def fake_copy_image(src, dest, host=None, receive=False, on_execute=None, on_completion=None, compression=True): self.assertIsNotNone(on_execute) self.assertIsNotNone(on_completion) self.stubs.Set(self.drvr, '_get_instance_disk_info', fake_get_instance_disk_info) self.stubs.Set(self.drvr, '_destroy', fake_destroy) self.stubs.Set(self.drvr, 'get_host_ip_addr', fake_get_host_ip_addr) self.stubs.Set(utils, 'execute', fake_execute) self.stubs.Set(libvirt_utils, 'copy_image', fake_copy_image) # dest is different host case out = self.drvr.migrate_disk_and_power_off( ctxt, instance, '10.0.0.2', flavor_obj, None, block_device_info=block_device_info) self.assertEqual(out, disk_info_text) # dest is same host case out = self.drvr.migrate_disk_and_power_off( ctxt, instance, '10.0.0.1', flavor_obj, None, block_device_info=block_device_info) self.assertEqual(out, disk_info_text) def test_migrate_disk_and_power_off(self): flavor = {'root_gb': 10, 'ephemeral_gb': 20} flavor_obj = objects.Flavor(**flavor) self._test_migrate_disk_and_power_off(self.context, flavor_obj) @mock.patch('nova.virt.libvirt.driver.LibvirtDriver._disconnect_volume') def test_migrate_disk_and_power_off_boot_from_volume(self, disconnect_volume): info = { 'block_device_mapping': [ {'boot_index': None, 'mount_device': '/dev/vdd', 'connection_info': mock.sentinel.conn_info_vdd}, {'boot_index': 0, 'mount_device': '/dev/vda', 'connection_info': mock.sentinel.conn_info_vda}]} flavor = {'root_gb': 1, 'ephemeral_gb': 0} flavor_obj = objects.Flavor(**flavor) # Note(Mike_D): The size of instance's ephemeral_gb is 0 gb. 
self._test_migrate_disk_and_power_off(self.context, flavor_obj, block_device_info=info, params_for_instance={'image_ref': None, 'root_gb': 10, 'ephemeral_gb': 0, 'flavor': {'root_gb': 10, 'ephemeral_gb': 0}}) disconnect_volume.assert_called_with(self.context, mock.sentinel.conn_info_vda, mock.ANY) @mock.patch('nova.virt.libvirt.driver.LibvirtDriver._disconnect_volume') def test_migrate_disk_and_power_off_boot_from_volume_backed_snapshot( self, disconnect_volume): # Such an instance has a non-empty image_ref but must still be # considered as booted from volume. info = { 'block_device_mapping': [ {'boot_index': None, 'mount_device': '/dev/vdd', 'connection_info': mock.sentinel.conn_info_vdd}, {'boot_index': 0, 'mount_device': '/dev/vda', 'connection_info': mock.sentinel.conn_info_vda}]} flavor = {'root_gb': 1, 'ephemeral_gb': 0} flavor_obj = objects.Flavor(**flavor) self._test_migrate_disk_and_power_off(self.context, flavor_obj, block_device_info=info, params_for_instance={ 'image_ref': uuids.fake_volume_backed_image_ref, 'root_gb': 10, 'ephemeral_gb': 0, 'flavor': {'root_gb': 10, 'ephemeral_gb': 0}}) disconnect_volume.assert_called_with(self.context, mock.sentinel.conn_info_vda, mock.ANY) @mock.patch('nova.utils.execute') @mock.patch('nova.virt.libvirt.utils.copy_image') @mock.patch('nova.virt.libvirt.driver.LibvirtDriver._destroy') @mock.patch('nova.virt.libvirt.driver.LibvirtDriver.get_host_ip_addr') @mock.patch('nova.virt.libvirt.driver.LibvirtDriver' '._get_instance_disk_info') def test_migrate_disk_and_power_off_swap(self, mock_get_disk_info, get_host_ip_addr, mock_destroy, mock_copy_image, mock_execute): """Test for nova.virt.libvirt.libvirt_driver.LibvirtConnection .migrate_disk_and_power_off. """ self.copy_or_move_swap_called = False # Original instance config instance = self._create_instance({'flavor': {'root_gb': 10, 'ephemeral_gb': 0}}) disk_info = list(fake_disk_info_byname(instance).values()) mock_get_disk_info.return_value = disk_info get_host_ip_addr.return_value = '10.0.0.1' def fake_copy_image(*args, **kwargs): # disk.swap should not be touched since it is skipped over if '/test/disk.swap' in list(args): self.copy_or_move_swap_called = True def fake_execute(*args, **kwargs): # disk.swap should not be touched since it is skipped over if set(['mv', '/test/disk.swap']).issubset(list(args)): self.copy_or_move_swap_called = True mock_copy_image.side_effect = fake_copy_image mock_execute.side_effect = fake_execute drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) # Re-size fake instance to 20G root and 1024M swap disk flavor = {'root_gb': 20, 'ephemeral_gb': 0, 'swap': 1024} flavor_obj = objects.Flavor(**flavor) # Destination is same host out = drvr.migrate_disk_and_power_off(context.get_admin_context(), instance, '10.0.0.1', flavor_obj, None) mock_get_disk_info.assert_called_once_with(instance, None) self.assertTrue(get_host_ip_addr.called) mock_destroy.assert_called_once_with(instance) self.assertFalse(self.copy_or_move_swap_called) disk_info_text = jsonutils.dumps(disk_info) self.assertEqual(disk_info_text, out) def _test_migrate_disk_and_power_off_resize_check(self, expected_exc): """Test for nova.virt.libvirt.libvirt_driver.LibvirtConnection .migrate_disk_and_power_off.
""" instance = self._create_instance() disk_info = list(fake_disk_info_byname(instance).values()) def fake_get_instance_disk_info(instance, xml=None, block_device_info=None): return disk_info def fake_destroy(instance): pass def fake_get_host_ip_addr(): return '10.0.0.1' self.stubs.Set(self.drvr, '_get_instance_disk_info', fake_get_instance_disk_info) self.stubs.Set(self.drvr, '_destroy', fake_destroy) self.stubs.Set(self.drvr, 'get_host_ip_addr', fake_get_host_ip_addr) flavor = {'root_gb': 10, 'ephemeral_gb': 20} flavor_obj = objects.Flavor(**flavor) # Migration is not implemented for LVM backed instances self.assertRaises(expected_exc, self.drvr.migrate_disk_and_power_off, None, instance, '10.0.0.1', flavor_obj, None) @mock.patch('nova.utils.execute') @mock.patch('nova.virt.libvirt.driver.LibvirtDriver._destroy') @mock.patch('nova.virt.libvirt.driver.LibvirtDriver' '._get_instance_disk_info') @mock.patch('nova.virt.libvirt.driver.LibvirtDriver' '._is_storage_shared_with') def _test_migrate_disk_and_power_off_backing_file(self, shared_storage, mock_is_shared_storage, mock_get_disk_info, mock_destroy, mock_execute): self.convert_file_called = False flavor = {'root_gb': 20, 'ephemeral_gb': 30, 'swap': 0} flavor_obj = objects.Flavor(**flavor) disk_info = [{'type': 'qcow2', 'path': '/test/disk', 'virt_disk_size': '10737418240', 'backing_file': '/base/disk', 'disk_size': '83886080'}] mock_get_disk_info.return_value = disk_info mock_is_shared_storage.return_value = shared_storage def fake_execute(*args, **kwargs): self.assertNotEqual(args[0:2], ['qemu-img', 'convert']) mock_execute.side_effect = fake_execute instance = self._create_instance() out = self.drvr.migrate_disk_and_power_off( context.get_admin_context(), instance, '10.0.0.2', flavor_obj, None) self.assertTrue(mock_is_shared_storage.called) mock_destroy.assert_called_once_with(instance) disk_info_text = jsonutils.dumps(disk_info) self.assertEqual(out, disk_info_text) def test_migrate_disk_and_power_off_shared_storage(self): self._test_migrate_disk_and_power_off_backing_file(True) def test_migrate_disk_and_power_off_non_shared_storage(self): self._test_migrate_disk_and_power_off_backing_file(False) def test_migrate_disk_and_power_off_lvm(self): self.flags(images_type='lvm', group='libvirt') def fake_execute(*args, **kwargs): pass self.stubs.Set(utils, 'execute', fake_execute) expected_exc = exception.InstanceFaultRollback self._test_migrate_disk_and_power_off_resize_check(expected_exc) def test_migrate_disk_and_power_off_resize_cannot_ssh(self): def fake_execute(*args, **kwargs): raise processutils.ProcessExecutionError() def fake_is_storage_shared(dest, inst_base): self.checked_shared_storage = True return False self.stubs.Set(self.drvr, '_is_storage_shared_with', fake_is_storage_shared) self.stubs.Set(utils, 'execute', fake_execute) expected_exc = exception.InstanceFaultRollback self._test_migrate_disk_and_power_off_resize_check(expected_exc) @mock.patch('nova.virt.libvirt.driver.LibvirtDriver' '._get_instance_disk_info') def test_migrate_disk_and_power_off_resize_error(self, mock_get_disk_info): instance = self._create_instance() flavor = {'root_gb': 5, 'ephemeral_gb': 10} flavor_obj = objects.Flavor(**flavor) mock_get_disk_info.return_value = fake_disk_info_json(instance) self.assertRaises( exception.InstanceFaultRollback, self.drvr.migrate_disk_and_power_off, 'ctx', instance, '10.0.0.1', flavor_obj, None) @mock.patch('nova.virt.libvirt.driver.LibvirtDriver' '._get_instance_disk_info') def 
test_migrate_disk_and_power_off_resize_error_rbd(self, mock_get_disk_info): # Check error on resize root disk down for rbd. # The difference is that get_instance_disk_info always returns # an empty list for rbd. # Ephemeral size is not changed in this case (otherwise other check # will raise the same error). self.flags(images_type='rbd', group='libvirt') instance = self._create_instance() flavor = {'root_gb': 5, 'ephemeral_gb': 20} flavor_obj = objects.Flavor(**flavor) mock_get_disk_info.return_value = [] self.assertRaises( exception.InstanceFaultRollback, self.drvr.migrate_disk_and_power_off, 'ctx', instance, '10.0.0.1', flavor_obj, None) @mock.patch('nova.virt.libvirt.driver.LibvirtDriver' '._get_instance_disk_info') def test_migrate_disk_and_power_off_resize_error_default_ephemeral( self, mock_get_disk_info): # Note(Mike_D): The size of this instance's ephemeral_gb is 20 gb. instance = self._create_instance() flavor = {'root_gb': 10, 'ephemeral_gb': 0} flavor_obj = objects.Flavor(**flavor) mock_get_disk_info.return_value = fake_disk_info_json(instance) self.assertRaises(exception.InstanceFaultRollback, self.drvr.migrate_disk_and_power_off, 'ctx', instance, '10.0.0.1', flavor_obj, None) @mock.patch('nova.virt.libvirt.driver.LibvirtDriver' '._get_instance_disk_info') @mock.patch('nova.virt.driver.block_device_info_get_ephemerals') def test_migrate_disk_and_power_off_resize_error_eph(self, mock_get, mock_get_disk_info): mappings = [ { 'device_name': '/dev/sdb4', 'source_type': 'blank', 'destination_type': 'local', 'device_type': 'disk', 'guest_format': 'swap', 'boot_index': -1, 'volume_size': 1 }, { 'device_name': '/dev/sda1', 'source_type': 'volume', 'destination_type': 'volume', 'device_type': 'disk', 'volume_id': 1, 'guest_format': None, 'boot_index': 1, 'volume_size': 6 }, { 'device_name': '/dev/sda2', 'source_type': 'snapshot', 'destination_type': 'volume', 'snapshot_id': 1, 'device_type': 'disk', 'guest_format': None, 'boot_index': 0, 'volume_size': 4 }, { 'device_name': '/dev/sda3', 'source_type': 'blank', 'destination_type': 'local', 'device_type': 'disk', 'guest_format': None, 'boot_index': -1, 'volume_size': 3 } ] mock_get.return_value = mappings instance = self._create_instance() # Old flavor, eph is 20, real disk is 3, target is 2, fail flavor = {'root_gb': 10, 'ephemeral_gb': 2} flavor_obj = objects.Flavor(**flavor) mock_get_disk_info.return_value = fake_disk_info_json(instance) self.assertRaises( exception.InstanceFaultRollback, self.drvr.migrate_disk_and_power_off, 'ctx', instance, '10.0.0.1', flavor_obj, None) # Old flavor, eph is 20, real disk is 3, target is 4 flavor = {'root_gb': 10, 'ephemeral_gb': 4} flavor_obj = objects.Flavor(**flavor) self._test_migrate_disk_and_power_off(self.context, flavor_obj) @mock.patch('nova.utils.execute') @mock.patch('nova.virt.libvirt.utils.copy_image') @mock.patch('nova.virt.libvirt.driver.LibvirtDriver._destroy') @mock.patch('nova.virt.libvirt.utils.get_instance_path') @mock.patch('nova.virt.libvirt.driver.LibvirtDriver' '._is_storage_shared_with') @mock.patch('nova.virt.libvirt.driver.LibvirtDriver' '._get_instance_disk_info') def test_migrate_disk_and_power_off_resize_copy_disk_info(self, mock_disk_info, mock_shared, mock_path, mock_destroy, mock_copy, mock_execute): instance = self._create_instance() disk_info = list(fake_disk_info_byname(instance).values()) instance_base = os.path.dirname(disk_info[0]['path']) flavor = {'root_gb': 10, 'ephemeral_gb': 25} flavor_obj = objects.Flavor(**flavor) mock_disk_info.return_value = disk_info
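# Shared storage is reported absent, so the resize must copy disk.info
# from the source's <instance_path>_resize directory to the destination
# instance path.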
mock_path.return_value = instance_base mock_shared.return_value = False src_disk_info_path = os.path.join(instance_base + '_resize', 'disk.info') with mock.patch.object(os.path, 'exists', autospec=True) \ as mock_exists: # disk.info exists on the source mock_exists.side_effect = \ lambda path: path == src_disk_info_path self.drvr.migrate_disk_and_power_off(context.get_admin_context(), instance, mock.sentinel, flavor_obj, None) self.assertTrue(mock_exists.called) dst_disk_info_path = os.path.join(instance_base, 'disk.info') mock_copy.assert_any_call(src_disk_info_path, dst_disk_info_path, host=mock.sentinel, on_execute=mock.ANY, on_completion=mock.ANY) def test_wait_for_running(self): def fake_get_info(instance): if instance['name'] == "not_found": raise exception.InstanceNotFound(instance_id=instance['uuid']) elif instance['name'] == "running": return hardware.InstanceInfo(state=power_state.RUNNING) else: return hardware.InstanceInfo(state=power_state.SHUTDOWN) self.stubs.Set(self.drvr, 'get_info', fake_get_info) # instance not found case self.assertRaises(exception.InstanceNotFound, self.drvr._wait_for_running, {'name': 'not_found', 'uuid': 'not_found_uuid'}) # instance is running case self.assertRaises(loopingcall.LoopingCallDone, self.drvr._wait_for_running, {'name': 'running', 'uuid': 'running_uuid'}) # else case self.drvr._wait_for_running({'name': 'else', 'uuid': 'other_uuid'}) @mock.patch('nova.utils.execute') def test_disk_raw_to_qcow2(self, mock_execute): path = '/test/disk' _path_qcow = path + '_qcow' self.drvr._disk_raw_to_qcow2(path) mock_execute.assert_has_calls([ mock.call('qemu-img', 'convert', '-f', 'raw', '-O', 'qcow2', path, _path_qcow), mock.call('mv', _path_qcow, path)]) @mock.patch('nova.utils.execute') def test_disk_qcow2_to_raw(self, mock_execute): path = '/test/disk' _path_raw = path + '_raw' self.drvr._disk_qcow2_to_raw(path) mock_execute.assert_has_calls([ mock.call('qemu-img', 'convert', '-f', 'qcow2', '-O', 'raw', path, _path_raw), mock.call('mv', _path_raw, path)]) @mock.patch.object(libvirt_driver.LibvirtDriver, '_inject_data') @mock.patch.object(libvirt_driver.LibvirtDriver, 'get_info') @mock.patch.object(libvirt_driver.LibvirtDriver, '_create_domain_and_network') @mock.patch.object(libvirt_driver.LibvirtDriver, '_disk_raw_to_qcow2') # Don't write libvirt xml to disk @mock.patch.object(libvirt_utils, 'write_to_file') # NOTE(mdbooth): The following 4 mocks are required to execute # get_guest_xml(). @mock.patch.object(libvirt_driver.LibvirtDriver, '_set_host_enabled') @mock.patch.object(libvirt_driver.LibvirtDriver, '_build_device_metadata') @mock.patch('nova.utils.supports_direct_io') @mock.patch('nova.api.metadata.base.InstanceMetadata') def _test_finish_migration(self, mock_instance_metadata, mock_supports_direct_io, mock_build_device_metadata, mock_set_host_enabled, mock_write_to_file, mock_raw_to_qcow2, mock_create_domain_and_network, mock_get_info, mock_inject_data, power_on=True, resize_instance=False): """Test for nova.virt.libvirt.libvirt_driver.LibvirtConnection .finish_migration. """ self.flags(use_cow_images=True) if power_on: state = power_state.RUNNING else: state = power_state.SHUTDOWN mock_get_info.return_value = hardware.InstanceInfo(state=state) instance = self._create_instance( {'config_drive': str(True), 'task_state': task_states.RESIZE_FINISH, 'flavor': {'swap': 500}}) bdi = {'block_device_mapping': []} migration = objects.Migration() migration.source_compute = 'fake-source-compute' migration.dest_compute = 'fake-dest-compute' migration.source_node = 'fake-source-node' migration.dest_node = 'fake-dest-node' image_meta = objects.ImageMeta.from_dict(self.test_image_meta) # Source disks are raw to test conversion disk_info = list(fake_disk_info_byname(instance, type='raw').values()) disk_info_text = jsonutils.dumps(disk_info) backend = self.useFixture(fake_imagebackend.ImageBackendFixture()) mock_create_domain_and_network.return_value = \ libvirt_guest.Guest('fake_dom') self.drvr.finish_migration( context.get_admin_context(), migration, instance, disk_info_text, [], image_meta, resize_instance, bdi, power_on) # Assert that we converted the root, ephemeral, and swap disks instance_path = libvirt_utils.get_instance_path(instance) convert_calls = [mock.call(os.path.join(instance_path, name)) for name in ('disk', 'disk.local', 'disk.swap')] mock_raw_to_qcow2.assert_has_calls(convert_calls, any_order=True) # Implicitly assert that we did not convert the config disk self.assertEqual(len(convert_calls), mock_raw_to_qcow2.call_count) disks = backend.disks # Assert that we called cache() on kernel, ramdisk, disk, # and disk.local. # This results in creation of kernel, ramdisk, and disk.swap. # This results in backing file check and resize of disk and disk.local. for name in ('kernel', 'ramdisk', 'disk', 'disk.local', 'disk.swap'): self.assertTrue(disks[name].cache.called, 'cache() not called for %s' % name) # Assert that we created a snapshot for the root disk root_disk = disks['disk'] self.assertTrue(root_disk.create_snap.called) # Assert that we didn't import a config disk # Note that some path currently creates a config disk object, # but only uses it for an exists() check. Therefore the object may # exist, but shouldn't have been imported. if 'disk.config' in disks: self.assertFalse(disks['disk.config'].import_file.called) # We shouldn't be injecting data during migration self.assertFalse(mock_inject_data.called) # NOTE(mdbooth): If we wanted to check the generated xml, we could # insert a hook here mock_create_domain_and_network.assert_called_once_with( mock.ANY, mock.ANY, instance, [], block_device_info=bdi, power_on=power_on, vifs_already_plugged=True, post_xml_callback=mock.ANY) def test_finish_migration_resize(self): with mock.patch('nova.virt.libvirt.guest.Guest.sync_guest_time' ) as mock_guest_time: self._test_finish_migration(resize_instance=True) self.assertTrue(mock_guest_time.called) def test_finish_migration_power_on(self): with mock.patch('nova.virt.libvirt.guest.Guest.sync_guest_time' ) as mock_guest_time: self._test_finish_migration() self.assertTrue(mock_guest_time.called) def test_finish_migration_power_off(self): self._test_finish_migration(power_on=False) def _test_finish_revert_migration(self, power_on): """Test for nova.virt.libvirt.libvirt_driver.LibvirtConnection .finish_revert_migration.
""" powered_on = power_on self.fake_create_domain_called = False def fake_execute(*args, **kwargs): pass def fake_plug_vifs(instance, network_info): pass def fake_create_domain(context, xml, instance, network_info, block_device_info=None, power_on=None, vifs_already_plugged=None): self.fake_create_domain_called = True self.assertEqual(powered_on, power_on) self.assertTrue(vifs_already_plugged) return mock.MagicMock() def fake_enable_hairpin(): pass def fake_get_info(instance): if powered_on: return hardware.InstanceInfo(state=power_state.RUNNING) else: return hardware.InstanceInfo(state=power_state.SHUTDOWN) def fake_to_xml(context, instance, network_info, disk_info, image_meta=None, rescue=None, block_device_info=None): return "" self.stubs.Set(self.drvr, '_get_guest_xml', fake_to_xml) self.stubs.Set(self.drvr, 'plug_vifs', fake_plug_vifs) self.stubs.Set(utils, 'execute', fake_execute) fw = base_firewall.NoopFirewallDriver() self.stubs.Set(self.drvr, 'firewall_driver', fw) self.stubs.Set(self.drvr, '_create_domain_and_network', fake_create_domain) self.stubs.Set(nova.virt.libvirt.guest.Guest, 'enable_hairpin', fake_enable_hairpin) self.stubs.Set(self.drvr, 'get_info', fake_get_info) self.stubs.Set(utils, 'get_image_from_system_metadata', lambda *a: self.test_image_meta) with utils.tempdir() as tmpdir: self.flags(instances_path=tmpdir) ins_ref = self._create_instance() os.mkdir(os.path.join(tmpdir, ins_ref['name'])) libvirt_xml_path = os.path.join(tmpdir, ins_ref['name'], 'libvirt.xml') f = open(libvirt_xml_path, 'w') f.close() self.drvr.finish_revert_migration( context.get_admin_context(), ins_ref, [], None, power_on) self.assertTrue(self.fake_create_domain_called) def test_finish_revert_migration_power_on(self): self._test_finish_revert_migration(True) def test_finish_revert_migration_power_off(self): self._test_finish_revert_migration(False) def _test_finish_revert_migration_after_crash(self, backup_made=True, del_inst_failed=False): drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) drvr.image_backend = mock.Mock() drvr.image_backend.by_name.return_value = drvr.image_backend context = 'fake_context' ins_ref = self._create_instance() with test.nested( mock.patch.object(os.path, 'exists', return_value=backup_made), mock.patch.object(libvirt_utils, 'get_instance_path'), mock.patch.object(utils, 'execute'), mock.patch.object(drvr, '_create_domain_and_network'), mock.patch.object(drvr, '_get_guest_xml'), mock.patch.object(shutil, 'rmtree'), mock.patch.object(loopingcall, 'FixedIntervalLoopingCall'), ) as (mock_stat, mock_path, mock_exec, mock_cdn, mock_ggx, mock_rmtree, mock_looping_call): mock_path.return_value = '/fake/foo' if del_inst_failed: mock_rmtree.side_effect = OSError(errno.ENOENT, 'test exception') drvr.finish_revert_migration(context, ins_ref, []) if backup_made: mock_exec.assert_called_once_with('mv', '/fake/foo_resize', '/fake/foo') else: self.assertFalse(mock_exec.called) def test_finish_revert_migration_after_crash(self): self._test_finish_revert_migration_after_crash(backup_made=True) def test_finish_revert_migration_after_crash_before_new(self): self._test_finish_revert_migration_after_crash(backup_made=True) def test_finish_revert_migration_after_crash_before_backup(self): self._test_finish_revert_migration_after_crash(backup_made=False) def test_finish_revert_migration_after_crash_delete_failed(self): self._test_finish_revert_migration_after_crash(backup_made=True, del_inst_failed=True) def test_finish_revert_migration_preserves_disk_bus(self): def 
fake_get_guest_xml(context, instance, network_info, disk_info, image_meta, block_device_info=None): self.assertEqual('ide', disk_info['disk_bus']) image_meta = {"disk_format": "raw", "properties": {"hw_disk_bus": "ide"}} instance = self._create_instance() drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) with test.nested( mock.patch.object(drvr, 'image_backend'), mock.patch.object(drvr, '_create_domain_and_network'), mock.patch.object(utils, 'get_image_from_system_metadata', return_value=image_meta), mock.patch.object(drvr, '_get_guest_xml', side_effect=fake_get_guest_xml)): drvr.finish_revert_migration('', instance, None, power_on=False) def test_finish_revert_migration_snap_backend(self): drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) drvr.image_backend = mock.Mock() drvr.image_backend.by_name.return_value = drvr.image_backend ins_ref = self._create_instance() with test.nested( mock.patch.object(utils, 'get_image_from_system_metadata'), mock.patch.object(drvr, '_create_domain_and_network'), mock.patch.object(drvr, '_get_guest_xml')) as ( mock_image, mock_cdn, mock_ggx): mock_image.return_value = {'disk_format': 'raw'} drvr.finish_revert_migration('', ins_ref, None, power_on=False) drvr.image_backend.rollback_to_snap.assert_called_once_with( libvirt_utils.RESIZE_SNAPSHOT_NAME) drvr.image_backend.remove_snap.assert_called_once_with( libvirt_utils.RESIZE_SNAPSHOT_NAME, ignore_errors=True) def test_finish_revert_migration_snap_backend_snapshot_not_found(self): drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) drvr.image_backend = mock.Mock() drvr.image_backend.by_name.return_value = drvr.image_backend ins_ref = self._create_instance() with test.nested( mock.patch.object(rbd_utils, 'RBDDriver'), mock.patch.object(utils, 'get_image_from_system_metadata'), mock.patch.object(drvr, '_create_domain_and_network'), mock.patch.object(drvr, '_get_guest_xml')) as ( mock_rbd, mock_image, mock_cdn, mock_ggx): mock_image.return_value = {'disk_format': 'raw'} mock_rbd.rollback_to_snap.side_effect = exception.SnapshotNotFound( snapshot_id='testing') drvr.finish_revert_migration('', ins_ref, None, power_on=False) drvr.image_backend.remove_snap.assert_called_once_with( libvirt_utils.RESIZE_SNAPSHOT_NAME, ignore_errors=True) def test_finish_revert_migration_snap_backend_image_does_not_exist(self): drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) drvr.image_backend = mock.Mock() drvr.image_backend.by_name.return_value = drvr.image_backend drvr.image_backend.exists.return_value = False ins_ref = self._create_instance() with test.nested( mock.patch.object(rbd_utils, 'RBDDriver'), mock.patch.object(utils, 'get_image_from_system_metadata'), mock.patch.object(drvr, '_create_domain_and_network'), mock.patch.object(drvr, '_get_guest_xml')) as ( mock_rbd, mock_image, mock_cdn, mock_ggx): mock_image.return_value = {'disk_format': 'raw'} drvr.finish_revert_migration('', ins_ref, None, power_on=False) self.assertFalse(drvr.image_backend.rollback_to_snap.called) self.assertFalse(drvr.image_backend.remove_snap.called) def test_cleanup_failed_migration(self): self.mox.StubOutWithMock(shutil, 'rmtree') shutil.rmtree('/fake/inst') self.mox.ReplayAll() self.drvr._cleanup_failed_migration('/fake/inst') def test_confirm_migration(self): ins_ref = self._create_instance() self.mox.StubOutWithMock(self.drvr, "_cleanup_resize") self.drvr._cleanup_resize(self.context, ins_ref, _fake_network_info(self, 1)) self.mox.ReplayAll() self.drvr.confirm_migration(self.context, 
"migration_ref", ins_ref, _fake_network_info(self, 1)) def test_cleanup_resize_same_host(self): CONF.set_override('policy_dirs', [], group='oslo_policy') ins_ref = self._create_instance({'host': CONF.host}) drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) drvr.image_backend = mock.Mock() drvr.image_backend.by_name.return_value = drvr.image_backend with test.nested( mock.patch.object(os.path, 'exists'), mock.patch.object(libvirt_utils, 'get_instance_path'), mock.patch.object(utils, 'execute'), mock.patch.object(shutil, 'rmtree')) as ( mock_exists, mock_get_path, mock_exec, mock_rmtree): mock_exists.return_value = True mock_get_path.return_value = '/fake/inst' drvr._cleanup_resize( self.context, ins_ref, _fake_network_info(self, 1)) mock_get_path.assert_called_once_with(ins_ref) mock_exec.assert_called_once_with('rm', '-rf', '/fake/inst_resize', delay_on_retry=True, attempts=5) mock_rmtree.assert_not_called() def test_cleanup_resize_not_same_host(self): CONF.set_override('policy_dirs', [], group='oslo_policy') host = 'not' + CONF.host ins_ref = self._create_instance({'host': host}) fake_net = _fake_network_info(self, 1) drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) drvr.image_backend = mock.Mock() drvr.image_backend.by_name.return_value = drvr.image_backend drvr.image_backend.exists.return_value = False with test.nested( mock.patch('nova.compute.utils.is_volume_backed_instance', return_value=False), mock.patch.object(os.path, 'exists'), mock.patch.object(libvirt_utils, 'get_instance_path'), mock.patch.object(utils, 'execute'), mock.patch.object(shutil, 'rmtree'), mock.patch.object(drvr, '_undefine_domain'), mock.patch.object(drvr, 'unplug_vifs'), mock.patch.object(drvr, 'unfilter_instance') ) as (mock_volume_backed, mock_exists, mock_get_path, mock_exec, mock_rmtree, mock_undef, mock_unplug, mock_unfilter): mock_exists.return_value = True mock_get_path.return_value = '/fake/inst' drvr._cleanup_resize(self.context, ins_ref, fake_net) mock_get_path.assert_called_once_with(ins_ref) mock_exec.assert_called_once_with('rm', '-rf', '/fake/inst_resize', delay_on_retry=True, attempts=5) mock_rmtree.assert_called_once_with('/fake/inst') mock_undef.assert_called_once_with(ins_ref) mock_unplug.assert_called_once_with(ins_ref, fake_net) mock_unfilter.assert_called_once_with(ins_ref, fake_net) def test_cleanup_resize_not_same_host_volume_backed(self): """Tests cleaning up after a resize is confirmed with a volume-backed instance. The key point is that the instance base directory should not be removed for volume-backed instances. 
""" CONF.set_override('policy_dirs', [], group='oslo_policy') host = 'not' + CONF.host ins_ref = self._create_instance({'host': host}) fake_net = _fake_network_info(self, 1) drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) drvr.image_backend = mock.Mock() drvr.image_backend.by_name.return_value = drvr.image_backend drvr.image_backend.exists.return_value = False with test.nested( mock.patch('nova.compute.utils.is_volume_backed_instance', return_value=True), mock.patch.object(os.path, 'exists'), mock.patch.object(libvirt_utils, 'get_instance_path'), mock.patch.object(utils, 'execute'), mock.patch.object(shutil, 'rmtree'), mock.patch.object(drvr, '_undefine_domain'), mock.patch.object(drvr, 'unplug_vifs'), mock.patch.object(drvr, 'unfilter_instance') ) as (mock_volume_backed, mock_exists, mock_get_path, mock_exec, mock_rmtree, mock_undef, mock_unplug, mock_unfilter): mock_exists.return_value = True mock_get_path.return_value = '/fake/inst' drvr._cleanup_resize(self.context, ins_ref, fake_net) mock_get_path.assert_called_once_with(ins_ref) mock_exec.assert_called_once_with('rm', '-rf', '/fake/inst_resize', delay_on_retry=True, attempts=5) mock_rmtree.assert_not_called() mock_undef.assert_called_once_with(ins_ref) mock_unplug.assert_called_once_with(ins_ref, fake_net) mock_unfilter.assert_called_once_with(ins_ref, fake_net) def test_cleanup_resize_snap_backend(self): CONF.set_override('policy_dirs', [], group='oslo_policy') ins_ref = self._create_instance({'host': CONF.host}) drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) drvr.image_backend = mock.Mock() drvr.image_backend.by_name.return_value = drvr.image_backend with test.nested( mock.patch.object(os.path, 'exists'), mock.patch.object(libvirt_utils, 'get_instance_path'), mock.patch.object(utils, 'execute'), mock.patch.object(shutil, 'rmtree'), mock.patch.object(drvr.image_backend, 'remove_snap')) as ( mock_exists, mock_get_path, mock_exec, mock_rmtree, mock_remove): mock_exists.return_value = True mock_get_path.return_value = '/fake/inst' drvr._cleanup_resize( self.context, ins_ref, _fake_network_info(self, 1)) mock_get_path.assert_called_once_with(ins_ref) mock_exec.assert_called_once_with('rm', '-rf', '/fake/inst_resize', delay_on_retry=True, attempts=5) mock_remove.assert_called_once_with( libvirt_utils.RESIZE_SNAPSHOT_NAME, ignore_errors=True) self.assertFalse(mock_rmtree.called) def test_cleanup_resize_snap_backend_image_does_not_exist(self): CONF.set_override('policy_dirs', [], group='oslo_policy') ins_ref = self._create_instance({'host': CONF.host}) drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) drvr.image_backend = mock.Mock() drvr.image_backend.by_name.return_value = drvr.image_backend drvr.image_backend.exists.return_value = False with test.nested( mock.patch('nova.compute.utils.is_volume_backed_instance', return_value=False), mock.patch.object(os.path, 'exists'), mock.patch.object(libvirt_utils, 'get_instance_path'), mock.patch.object(utils, 'execute'), mock.patch.object(shutil, 'rmtree'), mock.patch.object(drvr.image_backend, 'remove_snap')) as ( mock_volume_backed, mock_exists, mock_get_path, mock_exec, mock_rmtree, mock_remove): mock_exists.return_value = True mock_get_path.return_value = '/fake/inst' drvr._cleanup_resize( self.context, ins_ref, _fake_network_info(self, 1)) mock_get_path.assert_called_once_with(ins_ref) mock_exec.assert_called_once_with('rm', '-rf', '/fake/inst_resize', delay_on_retry=True, attempts=5) self.assertFalse(mock_remove.called) 
self.assertFalse(mock_rmtree.called) def test_get_instance_disk_info_exception(self): instance = self._create_instance() class FakeExceptionDomain(FakeVirtDomain): def __init__(self): super(FakeExceptionDomain, self).__init__() def XMLDesc(self, flags): raise fakelibvirt.libvirtError("Libvirt error") def fake_get_domain(self, instance): return FakeExceptionDomain() self.stubs.Set(host.Host, '_get_domain', fake_get_domain) self.assertRaises(exception.InstanceNotFound, self.drvr.get_instance_disk_info, instance) @mock.patch('os.path.exists') @mock.patch.object(lvm, 'list_volumes') def test_lvm_disks(self, listlvs, exists): instance = objects.Instance(uuid=uuids.instance, id=1) self.flags(images_volume_group='vols', group='libvirt') exists.return_value = True listlvs.return_value = ['%s_foo' % uuids.instance, 'other-uuid_foo'] disks = self.drvr._lvm_disks(instance) self.assertEqual(['/dev/vols/%s_foo' % uuids.instance], disks) def test_is_booted_from_volume(self): func = libvirt_driver.LibvirtDriver._is_booted_from_volume bdm = [] bdi = {'block_device_mapping': bdm} self.assertFalse(func(bdi)) bdm.append({'boot_index': -1}) self.assertFalse(func(bdi)) bdm.append({'boot_index': None}) self.assertFalse(func(bdi)) bdm.append({'boot_index': 1}) self.assertFalse(func(bdi)) bdm.append({'boot_index': 0}) self.assertTrue(func(bdi)) @mock.patch('nova.virt.libvirt.driver.imagebackend') @mock.patch('nova.virt.libvirt.driver.LibvirtDriver._inject_data') @mock.patch('nova.virt.libvirt.driver.imagecache') def test_data_not_injects_with_configdrive(self, mock_image, mock_inject, mock_backend): self.flags(inject_partition=-1, group='libvirt') drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) # config_drive is True by default, configdrive.required_by() # returns True instance_ref = self._create_instance() disk_images = {'image_id': None} drvr._create_and_inject_local_root(self.context, instance_ref, False, '', disk_images, get_injection_info(), None) self.assertFalse(mock_inject.called) @mock.patch('nova.virt.netutils.get_injected_network_template') @mock.patch('nova.virt.disk.api.inject_data') @mock.patch.object(libvirt_driver.LibvirtDriver, "_conn") def _test_inject_data(self, instance, injection_info, path, disk_params, mock_conn, disk_inject_data, inj_network, called=True): class ImageBackend(object): path = '/path' def get_model(self, connection): return imgmodel.LocalFileImage(self.path, imgmodel.FORMAT_RAW) def fake_inj_network(*args, **kwds): return args[0] or None inj_network.side_effect = fake_inj_network image_backend = ImageBackend() image_backend.path = path with mock.patch.object(self.drvr.image_backend, 'by_name', return_value=image_backend): self.flags(inject_partition=0, group='libvirt') self.drvr._inject_data(image_backend, instance, injection_info) if called: disk_inject_data.assert_called_once_with( mock.ANY, *disk_params, partition=None, mandatory=('files',)) self.assertEqual(disk_inject_data.called, called) def test_inject_data_adminpass(self): self.flags(inject_password=True, group='libvirt') instance = self._create_instance() injection_info = get_injection_info(admin_pass='foobar') disk_params = [ None, # key None, # net {}, # metadata 'foobar', # admin_pass None, # files ] self._test_inject_data(instance, injection_info, "/path", disk_params) # Test with the configuration set to False.
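# With inject_password disabled, _inject_data should leave the disk untouched (called=False).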
self.flags(inject_password=False, group='libvirt') self._test_inject_data(instance, injection_info, "/path", disk_params, called=False) def test_inject_data_key(self): instance = self._create_instance(params={'key_data': 'key-content'}) injection_info = get_injection_info() self.flags(inject_key=True, group='libvirt') disk_params = [ 'key-content', # key None, # net {}, # metadata None, # admin_pass None, # files ] self._test_inject_data(instance, injection_info, "/path", disk_params) # Test with the configuration set to False. self.flags(inject_key=False, group='libvirt') self._test_inject_data(instance, injection_info, "/path", disk_params, called=False) def test_inject_data_metadata(self): instance = self._create_instance(params={'metadata': {'data': 'foo'}}) injection_info = get_injection_info() disk_params = [ None, # key None, # net {'data': 'foo'}, # metadata None, # admin_pass None, # files ] self._test_inject_data(instance, injection_info, "/path", disk_params) def test_inject_data_files(self): instance = self._create_instance() injection_info = get_injection_info(files=['file1', 'file2']) disk_params = [ None, # key None, # net {}, # metadata None, # admin_pass ['file1', 'file2'], # files ] self._test_inject_data(instance, injection_info, "/path", disk_params) def test_inject_data_net(self): instance = self._create_instance() injection_info = get_injection_info(network_info={'net': 'eno1'}) disk_params = [ None, # key {'net': 'eno1'}, # net {}, # metadata None, # admin_pass None, # files ] self._test_inject_data(instance, injection_info, "/path", disk_params) def test_inject_not_exist_image(self): instance = self._create_instance() injection_info = get_injection_info() disk_params = [ 'key-content', # key None, # net None, # metadata None, # admin_pass None, # files ] self._test_inject_data(instance, injection_info, "/fail/path", disk_params, called=False) def test_attach_interface_build_metadata_fails(self): instance = self._create_instance() network_info = _fake_network_info(self, 1) domain = FakeVirtDomain(fake_xml="""
""") fake_image_meta = objects.ImageMeta.from_dict( {'id': instance.image_ref}) expected = self.drvr.vif_driver.get_config( instance, network_info[0], fake_image_meta, instance.flavor, CONF.libvirt.virt_type, self.drvr._host) with test.nested( mock.patch.object(host.Host, '_get_domain', return_value=domain), mock.patch.object(self.drvr.firewall_driver, 'setup_basic_filtering'), mock.patch.object(domain, 'attachDeviceFlags'), mock.patch.object(domain, 'info', return_value=[power_state.RUNNING, 1, 2, 3, 4]), mock.patch.object(self.drvr.vif_driver, 'get_config', return_value=expected), mock.patch.object(self.drvr, '_build_device_metadata', side_effect=exception.NovaException), mock.patch.object(self.drvr, 'detach_interface'), ) as ( mock_get_domain, mock_setup_basic_filtering, mock_attach_device_flags, mock_info, mock_get_config, mock_build_device_metadata, mock_detach_interface ): self.assertRaises(exception.InterfaceAttachFailed, self.drvr.attach_interface, self.context, instance, fake_image_meta, network_info[0]) mock_get_domain.assert_called_with(instance) mock_info.assert_called_with() mock_setup_basic_filtering.assert_called_with( instance, [network_info[0]]) mock_get_config.assert_called_with( instance, network_info[0], fake_image_meta, instance.flavor, CONF.libvirt.virt_type, self.drvr._host) mock_build_device_metadata.assert_called_with(self.context, instance) mock_attach_device_flags.assert_called_with( expected.to_xml(), flags=(fakelibvirt.VIR_DOMAIN_AFFECT_CONFIG | fakelibvirt.VIR_DOMAIN_AFFECT_LIVE)) mock_detach_interface.assert_called_with(self.context, instance, network_info[0]) def _test_attach_interface(self, power_state, expected_flags): instance = self._create_instance() network_info = _fake_network_info(self, 1) domain = FakeVirtDomain(fake_xml="""
""") self.mox.StubOutWithMock(host.Host, '_get_domain') self.mox.StubOutWithMock(self.drvr.firewall_driver, 'setup_basic_filtering') self.mox.StubOutWithMock(domain, 'attachDeviceFlags') self.mox.StubOutWithMock(domain, 'info') host.Host._get_domain(instance).AndReturn(domain) domain.info().AndReturn([power_state, 1, 2, 3, 4]) self.drvr.firewall_driver.setup_basic_filtering( instance, [network_info[0]]) fake_image_meta = objects.ImageMeta.from_dict( {'id': instance.image_ref}) expected = self.drvr.vif_driver.get_config( instance, network_info[0], fake_image_meta, instance.flavor, CONF.libvirt.virt_type, self.drvr._host) self.mox.StubOutWithMock(self.drvr.vif_driver, 'get_config') self.drvr.vif_driver.get_config( instance, network_info[0], mox.IsA(objects.ImageMeta), mox.IsA(objects.Flavor), CONF.libvirt.virt_type, self.drvr._host).AndReturn(expected) self.mox.StubOutWithMock(self.drvr, '_build_device_metadata') self.drvr._build_device_metadata(self.context, instance).AndReturn( objects.InstanceDeviceMetadata()) self.mox.StubOutWithMock(objects.Instance, 'save') objects.Instance.save() domain.attachDeviceFlags(expected.to_xml(), flags=expected_flags) self.mox.ReplayAll() self.drvr.attach_interface( self.context, instance, fake_image_meta, network_info[0]) self.mox.VerifyAll() def test_attach_interface_with_running_instance(self): self._test_attach_interface( power_state.RUNNING, expected_flags=(fakelibvirt.VIR_DOMAIN_AFFECT_CONFIG | fakelibvirt.VIR_DOMAIN_AFFECT_LIVE)) def test_attach_interface_with_pause_instance(self): self._test_attach_interface( power_state.PAUSED, expected_flags=(fakelibvirt.VIR_DOMAIN_AFFECT_CONFIG | fakelibvirt.VIR_DOMAIN_AFFECT_LIVE)) def test_attach_interface_with_shutdown_instance(self): self._test_attach_interface( power_state.SHUTDOWN, expected_flags=(fakelibvirt.VIR_DOMAIN_AFFECT_CONFIG)) def _test_detach_interface(self, power_state, expected_flags, device_not_found=False): # setup some mocks instance = self._create_instance() network_info = _fake_network_info(self, 1) domain = FakeVirtDomain(fake_xml="""
""", info=[power_state, 1, 2, 3, 4]) guest = libvirt_guest.Guest(domain) expected_cfg = vconfig.LibvirtConfigGuestInterface() expected_cfg.parse_str(""" """) if device_not_found: # This will trigger detach_device_with_retry to raise # DeviceNotFound get_interface_calls = [expected_cfg, None] else: get_interface_calls = [expected_cfg, expected_cfg, None] with test.nested( mock.patch.object(host.Host, 'get_guest', return_value=guest), mock.patch.object(self.drvr.vif_driver, 'get_config', return_value=expected_cfg), # This is called multiple times in a retry loop so we use a # side_effect to simulate the calls to stop the loop. mock.patch.object(guest, 'get_interface_by_cfg', side_effect=get_interface_calls), mock.patch.object(domain, 'detachDeviceFlags'), mock.patch('nova.virt.libvirt.driver.LOG.warning') ) as ( mock_get_guest, mock_get_config, mock_get_interface, mock_detach_device_flags, mock_warning ): # run the detach method self.drvr.detach_interface(self.context, instance, network_info[0]) # make our assertions mock_get_guest.assert_called_once_with(instance) mock_get_config.assert_called_once_with( instance, network_info[0], test.MatchType(objects.ImageMeta), test.MatchType(objects.Flavor), CONF.libvirt.virt_type, self.drvr._host) mock_get_interface.assert_has_calls( [mock.call(expected_cfg) for x in range(len(get_interface_calls))]) if device_not_found: mock_detach_device_flags.assert_not_called() self.assertTrue(mock_warning.called) else: mock_detach_device_flags.assert_called_once_with( expected_cfg.to_xml(), flags=expected_flags) mock_warning.assert_not_called() def test_detach_interface_with_running_instance(self): self._test_detach_interface( power_state.RUNNING, expected_flags=(fakelibvirt.VIR_DOMAIN_AFFECT_CONFIG | fakelibvirt.VIR_DOMAIN_AFFECT_LIVE)) def test_detach_interface_with_running_instance_device_not_found(self): """Tests that the interface is detached before we try to detach it. """ self._test_detach_interface( power_state.RUNNING, expected_flags=(fakelibvirt.VIR_DOMAIN_AFFECT_CONFIG | fakelibvirt.VIR_DOMAIN_AFFECT_LIVE), device_not_found=True) def test_detach_interface_with_pause_instance(self): self._test_detach_interface( power_state.PAUSED, expected_flags=(fakelibvirt.VIR_DOMAIN_AFFECT_CONFIG | fakelibvirt.VIR_DOMAIN_AFFECT_LIVE)) def test_detach_interface_with_shutdown_instance(self): self._test_detach_interface( power_state.SHUTDOWN, expected_flags=(fakelibvirt.VIR_DOMAIN_AFFECT_CONFIG)) @mock.patch('nova.virt.libvirt.driver.LOG') def test_detach_interface_device_not_found(self, mock_log): # Asserts that we don't log an error when the interface device is not # found on the guest after a libvirt error during detach. 
instance = self._create_instance() vif = _fake_network_info(self, 1)[0] guest = mock.Mock(spec='nova.virt.libvirt.guest.Guest') guest.get_power_state = mock.Mock() self.drvr._host.get_guest = mock.Mock(return_value=guest) error = fakelibvirt.libvirtError( 'no matching network device was found') error.err = (fakelibvirt.VIR_ERR_OPERATION_FAILED,) guest.detach_device = mock.Mock(side_effect=error) # mock out that get_interface_by_cfg doesn't find the interface guest.get_interface_by_cfg = mock.Mock(return_value=None) self.drvr.detach_interface(self.context, instance, vif) # an error shouldn't be logged, but a warning should be logged self.assertFalse(mock_log.error.called) self.assertEqual(1, mock_log.warning.call_count) self.assertIn('the device is no longer found on the guest', six.text_type(mock_log.warning.call_args[0])) def test_detach_interface_device_with_same_mac_address(self): instance = self._create_instance() network_info = _fake_network_info(self, 1) domain = FakeVirtDomain(fake_xml="""
""") self.mox.StubOutWithMock(host.Host, '_get_domain') self.mox.StubOutWithMock(self.drvr.firewall_driver, 'setup_basic_filtering') self.mox.StubOutWithMock(domain, 'detachDeviceFlags') self.mox.StubOutWithMock(domain, 'info') host.Host._get_domain(instance).AndReturn(domain) domain.info().AndReturn([power_state.RUNNING, 1, 2, 3, 4]) expected = vconfig.LibvirtConfigGuestInterface() expected.parse_str(""" """) self.mox.StubOutWithMock(self.drvr.vif_driver, 'get_config') self.drvr.vif_driver.get_config( instance, network_info[0], mox.IsA(objects.ImageMeta), mox.IsA(objects.Flavor), CONF.libvirt.virt_type, self.drvr._host).AndReturn(expected) expected_flags = (fakelibvirt.VIR_DOMAIN_AFFECT_CONFIG | fakelibvirt.VIR_DOMAIN_AFFECT_LIVE) domain.detachDeviceFlags(expected.to_xml(), flags=expected_flags) self.mox.ReplayAll() with mock.patch.object(libvirt_guest.Guest, 'get_interface_by_cfg', side_effect=[expected, expected, None]): self.drvr.detach_interface(self.context, instance, network_info[0]) self.mox.VerifyAll() @mock.patch('nova.virt.libvirt.utils.write_to_file') # NOTE(mdbooth): The following 4 mocks are required to execute # get_guest_xml(). @mock.patch.object(libvirt_driver.LibvirtDriver, '_set_host_enabled') @mock.patch.object(libvirt_driver.LibvirtDriver, '_build_device_metadata') @mock.patch('nova.utils.supports_direct_io') @mock.patch('nova.api.metadata.base.InstanceMetadata') def _test_rescue(self, instance, mock_instance_metadata, mock_supports_direct_io, mock_build_device_metadata, mock_set_host_enabled, mock_write_to_file, exists=None): self.flags(instances_path=self.useFixture(fixtures.TempDir()).path) mock_build_device_metadata.return_value = None mock_supports_direct_io.return_value = True backend = self.useFixture( fake_imagebackend.ImageBackendFixture(exists=exists)) image_meta = objects.ImageMeta.from_dict( {'id': uuids.image_id, 'name': 'fake'}) network_info = _fake_network_info(self, 1) rescue_password = 'fake_password' domain_xml = [None] def fake_create_domain(xml=None, domain=None, power_on=True, pause=False, post_xml_callback=None): domain_xml[0] = xml if post_xml_callback is not None: post_xml_callback() with mock.patch.object( self.drvr, '_create_domain', side_effect=fake_create_domain) as mock_create_domain: self.drvr.rescue(self.context, instance, network_info, image_meta, rescue_password) self.assertTrue(mock_create_domain.called) return backend, etree.fromstring(domain_xml[0]) def test_rescue(self): instance = self._create_instance({'config_drive': None}) backend, doc = self._test_rescue(instance) # Assert that we created the expected set of disks, and no others self.assertEqual(['disk.rescue', 'kernel.rescue', 'ramdisk.rescue'], sorted(backend.created_disks.keys())) disks = backend.disks kernel_ramdisk = [disks[name + '.rescue'] for name in ('kernel', 'ramdisk')] # Assert that kernel and ramdisk were both created as raw for disk in kernel_ramdisk: self.assertEqual('raw', disk.image_type) # Assert that the root rescue disk was created as the default type self.assertIsNone(disks['disk.rescue'].image_type) # We expect the generated domain to contain disk.rescue and # disk, in that order expected_domain_disk_paths = [disks[name].path for name in ('disk.rescue', 'disk')] domain_disk_paths = doc.xpath('devices/disk/source/@file') self.assertEqual(expected_domain_disk_paths, domain_disk_paths) # The generated domain xml should contain the rescue kernel # and ramdisk expected_kernel_ramdisk_paths = [os.path.join(CONF.instances_path, disk.path) for disk in 
kernel_ramdisk] kernel_ramdisk_paths = \ doc.xpath('os/*[self::initrd|self::kernel]/text()') self.assertEqual(expected_kernel_ramdisk_paths, kernel_ramdisk_paths) @mock.patch('nova.virt.configdrive.ConfigDriveBuilder._make_iso9660') def test_rescue_config_drive(self, mock_mkisofs): instance = self._create_instance({'config_drive': str(True)}) backend, doc = self._test_rescue( instance, exists=lambda name: name != 'disk.config.rescue') # Assert that we created the expected set of disks, and no others self.assertEqual(['disk.config.rescue', 'disk.rescue', 'kernel.rescue', 'ramdisk.rescue'], sorted(backend.created_disks.keys())) disks = backend.disks config_disk = disks['disk.config.rescue'] kernel_ramdisk = [disks[name + '.rescue'] for name in ('kernel', 'ramdisk')] # Assert that we imported the config disk self.assertTrue(config_disk.import_file.called) # Assert that the config disk, kernel and ramdisk were created as raw for disk in [config_disk] + kernel_ramdisk: self.assertEqual('raw', disk.image_type) # Assert that the root rescue disk was created as the default type self.assertIsNone(disks['disk.rescue'].image_type) # We expect the generated domain to contain disk.rescue, disk, and # disk.config.rescue in that order expected_domain_disk_paths = [disks[name].path for name in ('disk.rescue', 'disk', 'disk.config.rescue')] domain_disk_paths = doc.xpath('devices/disk/source/@file') self.assertEqual(expected_domain_disk_paths, domain_disk_paths) # The generated domain xml should contain the rescue kernel # and ramdisk expected_kernel_ramdisk_paths = [os.path.join(CONF.instances_path, disk.path) for disk in kernel_ramdisk] kernel_ramdisk_paths = \ doc.xpath('os/*[self::initrd|self::kernel]/text()') self.assertEqual(expected_kernel_ramdisk_paths, kernel_ramdisk_paths) @mock.patch.object(libvirt_utils, 'get_instance_path') @mock.patch.object(libvirt_utils, 'load_file') @mock.patch.object(host.Host, '_get_domain') def _test_unrescue(self, instance, mock_get_domain, mock_load_file, mock_get_instance_path): dummyxml = ("instance-0000000a" "" "" "" "" "") mock_get_instance_path.return_value = '/path' fake_dom = FakeVirtDomain(fake_xml=dummyxml) mock_get_domain.return_value = fake_dom mock_load_file.return_value = "fake_unrescue_xml" unrescue_xml_path = os.path.join('/path', 'unrescue.xml') rescue_file = os.path.join('/path', 'rescue.file') rescue_dir = os.path.join('/path', 'rescue.dir') def isdir_sideeffect(*args, **kwargs): if args[0] == '/path/rescue.file': return False if args[0] == '/path/rescue.dir': return True drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) with test.nested( mock.patch.object(libvirt_utils, 'write_to_file'), mock.patch.object(drvr, '_destroy'), mock.patch.object(drvr, '_create_domain'), mock.patch.object(os, 'unlink'), mock.patch.object(shutil, 'rmtree'), mock.patch.object(os.path, "isdir", side_effect=isdir_sideeffect), mock.patch.object(drvr, '_lvm_disks', return_value=['lvm.rescue']), mock.patch.object(lvm, 'remove_volumes'), mock.patch.object(glob, 'iglob', return_value=[rescue_file, rescue_dir]) ) as (mock_write, mock_destroy, mock_create, mock_del, mock_rmtree, mock_isdir, mock_lvm_disks, mock_remove_volumes, mock_glob): drvr.unrescue(instance, None) mock_destroy.assert_called_once_with(instance) mock_create.assert_called_once_with("fake_unrescue_xml", fake_dom) self.assertEqual(2, mock_del.call_count) self.assertEqual(unrescue_xml_path, mock_del.call_args_list[0][0][0]) self.assertEqual(1, mock_rmtree.call_count) self.assertEqual(rescue_dir, 
mock_rmtree.call_args_list[0][0][0]) self.assertEqual(rescue_file, mock_del.call_args_list[1][0][0]) mock_remove_volumes.assert_called_once_with(['lvm.rescue']) def test_unrescue(self): instance = objects.Instance(uuid=uuids.instance, id=1) self._test_unrescue(instance) @mock.patch.object(rbd_utils.RBDDriver, '_destroy_volume') @mock.patch.object(rbd_utils.RBDDriver, '_disconnect_from_rados') @mock.patch.object(rbd_utils.RBDDriver, '_connect_to_rados') @mock.patch.object(rbd_utils, 'rbd') @mock.patch.object(rbd_utils, 'rados') def test_unrescue_rbd(self, mock_rados, mock_rbd, mock_connect, mock_disconnect, mock_destroy_volume): self.flags(images_type='rbd', group='libvirt') mock_connect.return_value = mock.MagicMock(), mock.MagicMock() instance = objects.Instance(uuid=uuids.instance, id=1) all_volumes = [uuids.other_instance + '_disk', uuids.other_instance + '_disk.rescue', instance.uuid + '_disk', instance.uuid + '_disk.rescue'] mock_rbd.RBD.return_value.list.return_value = all_volumes self._test_unrescue(instance) mock_destroy_volume.assert_called_once_with( mock.ANY, instance.uuid + '_disk.rescue') @mock.patch('shutil.rmtree') @mock.patch('nova.utils.execute') @mock.patch('os.path.exists') @mock.patch('nova.virt.libvirt.utils.get_instance_path') def test_delete_instance_files(self, get_instance_path, exists, exe, shutil): get_instance_path.return_value = '/path' instance = objects.Instance(uuid=uuids.instance, id=1) exists.side_effect = [False, False, True, False] result = self.drvr.delete_instance_files(instance) get_instance_path.assert_called_with(instance) exe.assert_called_with('mv', '/path', '/path_del') shutil.assert_called_with('/path_del') self.assertTrue(result) @mock.patch('shutil.rmtree') @mock.patch('nova.utils.execute') @mock.patch('os.path.exists') @mock.patch('os.kill') @mock.patch('nova.virt.libvirt.utils.get_instance_path') def test_delete_instance_files_kill_running( self, get_instance_path, kill, exists, exe, shutil): get_instance_path.return_value = '/path' instance = objects.Instance(uuid=uuids.instance, id=1) self.drvr.job_tracker.jobs[instance.uuid] = [3, 4] exists.side_effect = [False, False, True, False] result = self.drvr.delete_instance_files(instance) get_instance_path.assert_called_with(instance) exe.assert_called_with('mv', '/path', '/path_del') kill.assert_has_calls([mock.call(3, signal.SIGKILL), mock.call(3, 0), mock.call(4, signal.SIGKILL), mock.call(4, 0)]) shutil.assert_called_with('/path_del') self.assertTrue(result) self.assertNotIn(instance.uuid, self.drvr.job_tracker.jobs) @mock.patch('shutil.rmtree') @mock.patch('nova.utils.execute') @mock.patch('os.path.exists') @mock.patch('nova.virt.libvirt.utils.get_instance_path') def test_delete_instance_files_resize(self, get_instance_path, exists, exe, shutil): get_instance_path.return_value = '/path' instance = objects.Instance(uuid=uuids.instance, id=1) nova.utils.execute.side_effect = [Exception(), None] exists.side_effect = [False, False, True, False] result = self.drvr.delete_instance_files(instance) get_instance_path.assert_called_with(instance) expected = [mock.call('mv', '/path', '/path_del'), mock.call('mv', '/path_resize', '/path_del')] self.assertEqual(expected, exe.mock_calls) shutil.assert_called_with('/path_del') self.assertTrue(result) @mock.patch('shutil.rmtree') @mock.patch('nova.utils.execute') @mock.patch('os.path.exists') @mock.patch('nova.virt.libvirt.utils.get_instance_path') def test_delete_instance_files_failed(self, get_instance_path, exists, exe, shutil): 
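# Same flow as the success cases above, but the final os.path.exists check returns True, i.e. the moved '/path_del' directory survived the delete attempt, so delete_instance_files reports failure.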
get_instance_path.return_value = '/path' instance = objects.Instance(uuid=uuids.instance, id=1) exists.side_effect = [False, False, True, True] result = self.drvr.delete_instance_files(instance) get_instance_path.assert_called_with(instance) exe.assert_called_with('mv', '/path', '/path_del') shutil.assert_called_with('/path_del') self.assertFalse(result) @mock.patch('shutil.rmtree') @mock.patch('nova.utils.execute') @mock.patch('os.path.exists') @mock.patch('nova.virt.libvirt.utils.get_instance_path') def test_delete_instance_files_mv_failed(self, get_instance_path, exists, exe, shutil): get_instance_path.return_value = '/path' instance = objects.Instance(uuid=uuids.instance, id=1) nova.utils.execute.side_effect = Exception() exists.side_effect = [True, True] result = self.drvr.delete_instance_files(instance) get_instance_path.assert_called_with(instance) expected = [mock.call('mv', '/path', '/path_del'), mock.call('mv', '/path_resize', '/path_del')] * 2 self.assertEqual(expected, exe.mock_calls) self.assertFalse(result) @mock.patch('shutil.rmtree') @mock.patch('nova.utils.execute') @mock.patch('os.path.exists') @mock.patch('nova.virt.libvirt.utils.get_instance_path') def test_delete_instance_files_resume(self, get_instance_path, exists, exe, shutil): get_instance_path.return_value = '/path' instance = objects.Instance(uuid=uuids.instance, id=1) nova.utils.execute.side_effect = Exception() exists.side_effect = [False, False, True, False] result = self.drvr.delete_instance_files(instance) get_instance_path.assert_called_with(instance) expected = [mock.call('mv', '/path', '/path_del'), mock.call('mv', '/path_resize', '/path_del')] * 2 self.assertEqual(expected, exe.mock_calls) self.assertTrue(result) @mock.patch('shutil.rmtree') @mock.patch('nova.utils.execute') @mock.patch('os.path.exists') @mock.patch('nova.virt.libvirt.utils.get_instance_path') def test_delete_instance_files_none(self, get_instance_path, exists, exe, shutil): get_instance_path.return_value = '/path' instance = objects.Instance(uuid=uuids.instance, id=1) nova.utils.execute.side_effect = Exception() exists.side_effect = [False, False, False, False] result = self.drvr.delete_instance_files(instance) get_instance_path.assert_called_with(instance) expected = [mock.call('mv', '/path', '/path_del'), mock.call('mv', '/path_resize', '/path_del')] * 2 self.assertEqual(expected, exe.mock_calls) self.assertEqual(0, len(shutil.mock_calls)) self.assertTrue(result) @mock.patch('shutil.rmtree') @mock.patch('nova.utils.execute') @mock.patch('os.path.exists') @mock.patch('nova.virt.libvirt.utils.get_instance_path') def test_delete_instance_files_concurrent(self, get_instance_path, exists, exe, shutil): get_instance_path.return_value = '/path' instance = objects.Instance(uuid=uuids.instance, id=1) nova.utils.execute.side_effect = [Exception(), Exception(), None] exists.side_effect = [False, False, True, False] result = self.drvr.delete_instance_files(instance) get_instance_path.assert_called_with(instance) expected = [mock.call('mv', '/path', '/path_del'), mock.call('mv', '/path_resize', '/path_del')] expected.append(expected[0]) self.assertEqual(expected, exe.mock_calls) shutil.assert_called_with('/path_del') self.assertTrue(result) def _assert_on_id_map(self, idmap, klass, start, target, count): self.assertIsInstance(idmap, klass) self.assertEqual(start, idmap.start) self.assertEqual(target, idmap.target) self.assertEqual(count, idmap.count) def test_get_id_maps(self): self.flags(virt_type="lxc", group="libvirt") CONF.libvirt.virt_type = 
"lxc" CONF.libvirt.uid_maps = ["0:10000:1", "1:20000:10"] CONF.libvirt.gid_maps = ["0:10000:1", "1:20000:10"] idmaps = self.drvr._get_guest_idmaps() self.assertEqual(len(idmaps), 4) self._assert_on_id_map(idmaps[0], vconfig.LibvirtConfigGuestUIDMap, 0, 10000, 1) self._assert_on_id_map(idmaps[1], vconfig.LibvirtConfigGuestUIDMap, 1, 20000, 10) self._assert_on_id_map(idmaps[2], vconfig.LibvirtConfigGuestGIDMap, 0, 10000, 1) self._assert_on_id_map(idmaps[3], vconfig.LibvirtConfigGuestGIDMap, 1, 20000, 10) def test_get_id_maps_not_lxc(self): CONF.libvirt.uid_maps = ["0:10000:1", "1:20000:10"] CONF.libvirt.gid_maps = ["0:10000:1", "1:20000:10"] idmaps = self.drvr._get_guest_idmaps() self.assertEqual(0, len(idmaps)) def test_get_id_maps_only_uid(self): self.flags(virt_type="lxc", group="libvirt") CONF.libvirt.uid_maps = ["0:10000:1", "1:20000:10"] CONF.libvirt.gid_maps = [] idmaps = self.drvr._get_guest_idmaps() self.assertEqual(2, len(idmaps)) self._assert_on_id_map(idmaps[0], vconfig.LibvirtConfigGuestUIDMap, 0, 10000, 1) self._assert_on_id_map(idmaps[1], vconfig.LibvirtConfigGuestUIDMap, 1, 20000, 10) def test_get_id_maps_only_gid(self): self.flags(virt_type="lxc", group="libvirt") CONF.libvirt.uid_maps = [] CONF.libvirt.gid_maps = ["0:10000:1", "1:20000:10"] idmaps = self.drvr._get_guest_idmaps() self.assertEqual(2, len(idmaps)) self._assert_on_id_map(idmaps[0], vconfig.LibvirtConfigGuestGIDMap, 0, 10000, 1) self._assert_on_id_map(idmaps[1], vconfig.LibvirtConfigGuestGIDMap, 1, 20000, 10) def test_instance_on_disk(self): drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) instance = objects.Instance(uuid=uuids.instance, id=1) self.assertFalse(drvr.instance_on_disk(instance)) def test_instance_on_disk_rbd(self): self.flags(images_type='rbd', group='libvirt') drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) instance = objects.Instance(uuid=uuids.instance, id=1) self.assertTrue(drvr.instance_on_disk(instance)) def test_get_disk_xml(self): dom_xml = """ 0e38683e-f0af-418f-a3f1-6b67ea0f919d """ diska_xml = """ 0e38683e-f0af-418f-a3f1-6b67ea0f919d """ diskb_xml = """ """ dom = mock.MagicMock() dom.XMLDesc.return_value = dom_xml guest = libvirt_guest.Guest(dom) # NOTE(gcb): etree.tostring(node) returns an extra line with # some white spaces, need to strip it. 
actual_diska_xml = guest.get_disk('vda').to_xml() self.assertEqual(diska_xml.strip(), actual_diska_xml.strip()) actual_diskb_xml = guest.get_disk('vdb').to_xml() self.assertEqual(diskb_xml.strip(), actual_diskb_xml.strip()) self.assertIsNone(guest.get_disk('vdc')) def test_vcpu_model_from_config(self): drv = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) vcpu_model = drv._cpu_config_to_vcpu_model(None, None) self.assertIsNone(vcpu_model) cpu = vconfig.LibvirtConfigGuestCPU() feature1 = vconfig.LibvirtConfigGuestCPUFeature() feature2 = vconfig.LibvirtConfigGuestCPUFeature() feature1.name = 'sse' feature1.policy = fields.CPUFeaturePolicy.REQUIRE feature2.name = 'aes' feature2.policy = fields.CPUFeaturePolicy.REQUIRE cpu.features = set([feature1, feature2]) cpu.mode = fields.CPUMode.CUSTOM cpu.sockets = 1 cpu.cores = 2 cpu.threads = 4 vcpu_model = drv._cpu_config_to_vcpu_model(cpu, None) self.assertEqual(fields.CPUMatch.EXACT, vcpu_model.match) self.assertEqual(fields.CPUMode.CUSTOM, vcpu_model.mode) self.assertEqual(4, vcpu_model.topology.threads) self.assertEqual(set(['sse', 'aes']), set([f.name for f in vcpu_model.features])) cpu.mode = fields.CPUMode.HOST_MODEL vcpu_model_1 = drv._cpu_config_to_vcpu_model(cpu, vcpu_model) self.assertEqual(fields.CPUMode.HOST_MODEL, vcpu_model.mode) self.assertEqual(vcpu_model, vcpu_model_1) @mock.patch.object(lvm, 'get_volume_size', return_value=10) @mock.patch.object(host.Host, "get_guest") @mock.patch.object(dmcrypt, 'delete_volume') @mock.patch('nova.virt.libvirt.driver.LibvirtDriver.unfilter_instance') @mock.patch('nova.virt.libvirt.driver.LibvirtDriver._undefine_domain') @mock.patch.object(objects.Instance, 'save') def test_cleanup_lvm_encrypted(self, mock_save, mock_undefine_domain, mock_unfilter, mock_delete_volume, mock_get_guest, mock_get_size): drv = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) instance = objects.Instance( uuid=uuids.instance, id=1, ephemeral_key_uuid=uuids.ephemeral_key_uuid) instance.system_metadata = {} block_device_info = {'root_device_name': '/dev/vda', 'ephemerals': [], 'block_device_mapping': []} self.flags(images_type="lvm", group='libvirt') dom_xml = """ """ dom = mock.MagicMock() dom.XMLDesc.return_value = dom_xml guest = libvirt_guest.Guest(dom) mock_get_guest.return_value = guest drv.cleanup(self.context, instance, 'fake_network', destroy_vifs=False, block_device_info=block_device_info) mock_delete_volume.assert_called_once_with('/dev/mapper/fake-dmcrypt') @mock.patch.object(lvm, 'get_volume_size', return_value=10) @mock.patch.object(host.Host, "get_guest") @mock.patch.object(dmcrypt, 'delete_volume') def _test_cleanup_lvm(self, mock_delete_volume, mock_get_guest, mock_size, encrypted=False): drv = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) instance = objects.Instance( uuid=uuids.instance, id=1, ephemeral_key_uuid=uuids.ephemeral_key_uuid) block_device_info = {'root_device_name': '/dev/vda', 'ephemerals': [], 'block_device_mapping': []} dev_name = 'fake-dmcrypt' if encrypted else 'fake' dom_xml = """ """ % dev_name dom = mock.MagicMock() dom.XMLDesc.return_value = dom_xml guest = libvirt_guest.Guest(dom) mock_get_guest.return_value = guest drv._cleanup_lvm(instance, block_device_info) if encrypted: mock_delete_volume.assert_called_once_with( '/dev/mapper/fake-dmcrypt') else: self.assertFalse(mock_delete_volume.called) def test_cleanup_lvm(self): self._test_cleanup_lvm() def test_cleanup_encrypted_lvm(self): self._test_cleanup_lvm(encrypted=True) def test_vcpu_model_to_config(self): drv = 
libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) feature = objects.VirtCPUFeature( policy=fields.CPUFeaturePolicy.REQUIRE, name='sse') feature_1 = objects.VirtCPUFeature( policy=fields.CPUFeaturePolicy.FORBID, name='aes') topo = objects.VirtCPUTopology(sockets=1, cores=2, threads=4) vcpu_model = objects.VirtCPUModel(mode=fields.CPUMode.HOST_MODEL, features=[feature, feature_1], topology=topo) cpu = drv._vcpu_model_to_cpu_config(vcpu_model) self.assertEqual(fields.CPUMode.HOST_MODEL, cpu.mode) self.assertEqual(1, cpu.sockets) self.assertEqual(4, cpu.threads) self.assertEqual(2, len(cpu.features)) self.assertEqual(set(['sse', 'aes']), set([f.name for f in cpu.features])) self.assertEqual(set([fields.CPUFeaturePolicy.REQUIRE, fields.CPUFeaturePolicy.FORBID]), set([f.policy for f in cpu.features])) def test_trigger_crash_dump(self): mock_guest = mock.Mock(libvirt_guest.Guest, id=1) instance = objects.Instance(uuid=uuids.instance, id=1) with mock.patch.object(self.drvr._host, 'get_guest', return_value=mock_guest): self.drvr.trigger_crash_dump(instance) def test_trigger_crash_dump_not_running(self): ex = fakelibvirt.make_libvirtError( fakelibvirt.libvirtError, 'Requested operation is not valid: domain is not running', error_code=fakelibvirt.VIR_ERR_OPERATION_INVALID) mock_guest = mock.Mock(libvirt_guest.Guest, id=1) mock_guest.inject_nmi = mock.Mock(side_effect=ex) instance = objects.Instance(uuid=uuids.instance, id=1) with mock.patch.object(self.drvr._host, 'get_guest', return_value=mock_guest): self.assertRaises(exception.InstanceNotRunning, self.drvr.trigger_crash_dump, instance) def test_trigger_crash_dump_not_supported(self): ex = fakelibvirt.make_libvirtError( fakelibvirt.libvirtError, '', error_code=fakelibvirt.VIR_ERR_NO_SUPPORT) mock_guest = mock.Mock(libvirt_guest.Guest, id=1) mock_guest.inject_nmi = mock.Mock(side_effect=ex) instance = objects.Instance(uuid=uuids.instance, id=1) with mock.patch.object(self.drvr._host, 'get_guest', return_value=mock_guest): self.assertRaises(exception.TriggerCrashDumpNotSupported, self.drvr.trigger_crash_dump, instance) def test_trigger_crash_dump_unexpected_error(self): ex = fakelibvirt.make_libvirtError( fakelibvirt.libvirtError, 'UnexpectedError', error_code=fakelibvirt.VIR_ERR_SYSTEM_ERROR) mock_guest = mock.Mock(libvirt_guest.Guest, id=1) mock_guest.inject_nmi = mock.Mock(side_effect=ex) instance = objects.Instance(uuid=uuids.instance, id=1) with mock.patch.object(self.drvr._host, 'get_guest', return_value=mock_guest): self.assertRaises(fakelibvirt.libvirtError, self.drvr.trigger_crash_dump, instance) @mock.patch.object(libvirt_driver.LOG, 'debug') def test_get_volume_driver_invalid_connector_exception(self, mock_debug): """Tests that the driver doesn't fail to initialize if one of the imported volume drivers raises InvalidConnectorProtocol from os-brick. """ # make a copy of the normal list and add a volume driver that raises # the handled os-brick exception when imported. libvirt_volume_drivers_copy = copy.copy( libvirt_driver.libvirt_volume_drivers) libvirt_volume_drivers_copy.append( 'invalid=nova.tests.unit.virt.libvirt.test_driver.' 'FakeInvalidVolumeDriver' ) with mock.patch.object(libvirt_driver, 'libvirt_volume_drivers', libvirt_volume_drivers_copy): drvr = libvirt_driver.LibvirtDriver( fake.FakeVirtAPI(), read_only=True ) # make sure we didn't register the invalid volume driver self.assertNotIn('invalid', drvr.volume_drivers) # make sure we logged something mock_debug.assert_called_with( ('Unable to load volume driver %s. 
' 'It is not supported on this host.'), 'nova.tests.unit.virt.libvirt.test_driver.FakeInvalidVolumeDriver' ) @mock.patch('nova.virt.libvirt.driver.LibvirtDriver' '._get_mediated_devices') @mock.patch('nova.virt.libvirt.driver.LibvirtDriver' '._get_mdev_capable_devices') def test_get_vgpu_total(self, get_mdev_devs, get_mdevs): get_mdev_devs.return_value = [ {'dev_id': 'pci_0000_84_00_0', 'types': {'nvidia-11': {'availableInstances': 14, 'name': 'GRID M60-0B', 'deviceAPI': 'vfio-pci'}, }}] get_mdevs.return_value = [ {'dev_id': 'mdev_4b20d080_1b54_4048_85b3_a6a62d165c01', 'uuid': "4b20d080-1b54-4048-85b3-a6a62d165c01", 'type': 'nvidia-11', 'iommuGroup': 1 }, {'dev_id': 'mdev_4b20d080_1b54_4048_85b3_a6a62d165c02', 'uuid': "4b20d080-1b54-4048-85b3-a6a62d165c02", 'type': 'nvidia-11', 'iommuGroup': 1 }, ] # By default, no specific types are supported self.assertEqual(0, self.drvr._get_vgpu_total()) # Now, ask for only one self.flags(enabled_vgpu_types=['nvidia-11'], group='devices') # We have 14 available for nvidia-11. We also have 2 mdevs of the type. # So, as a total, we have 14+2, hence 16. self.assertEqual(16, self.drvr._get_vgpu_total()) @mock.patch.object(host.Host, 'device_lookup_by_name') @mock.patch.object(host.Host, 'list_mdev_capable_devices') @mock.patch.object(fakelibvirt.Connection, 'getLibVersion', return_value=versionutils.convert_version_to_int( libvirt_driver.MIN_LIBVIRT_MDEV_SUPPORT)) def test_get_mdev_capable_devices(self, _get_libvirt_version, list_mdev_capable_devs, device_lookup_by_name): list_mdev_capable_devs.return_value = ['pci_0000_06_00_0'] def fake_nodeDeviceLookupByName(name): return FakeNodeDevice(_fake_NodeDevXml[name]) device_lookup_by_name.side_effect = fake_nodeDeviceLookupByName drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) expected = [{"dev_id": "pci_0000_06_00_0", "types": {'nvidia-11': {'availableInstances': 16, 'name': 'GRID M60-0B', 'deviceAPI': 'vfio-pci'}, } }] self.assertEqual(expected, drvr._get_mdev_capable_devices()) @mock.patch.object(host.Host, 'device_lookup_by_name') @mock.patch.object(host.Host, 'list_mdev_capable_devices') @mock.patch.object(fakelibvirt.Connection, 'getLibVersion', return_value=versionutils.convert_version_to_int( libvirt_driver.MIN_LIBVIRT_MDEV_SUPPORT)) def test_get_mdev_capable_devices_filtering(self, _get_libvirt_version, list_mdev_capable_devs, device_lookup_by_name): list_mdev_capable_devs.return_value = ['pci_0000_06_00_0'] def fake_nodeDeviceLookupByName(name): return FakeNodeDevice(_fake_NodeDevXml[name]) device_lookup_by_name.side_effect = fake_nodeDeviceLookupByName drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) # Since we filter by a type not supported by the physical device, # we don't get results. 
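# The fixture device pci_0000_06_00_0 only advertises the 'nvidia-11' type, so filtering on 'nvidia-12' yields nothing: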
self.assertEqual([], drvr._get_mdev_capable_devices(types=['nvidia-12'])) @mock.patch.object(host.Host, 'device_lookup_by_name') @mock.patch.object(host.Host, 'list_mediated_devices') @mock.patch.object(fakelibvirt.Connection, 'getLibVersion', return_value=versionutils.convert_version_to_int( libvirt_driver.MIN_LIBVIRT_MDEV_SUPPORT)) def test_get_mediated_devices(self, _get_libvirt_version, list_mediated_devices, device_lookup_by_name): list_mediated_devices.return_value = [ 'mdev_4b20d080_1b54_4048_85b3_a6a62d165c01'] def fake_nodeDeviceLookupByName(name): return FakeNodeDevice(_fake_NodeDevXml[name]) device_lookup_by_name.side_effect = fake_nodeDeviceLookupByName drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) expected = [{"dev_id": "mdev_4b20d080_1b54_4048_85b3_a6a62d165c01", "uuid": "4b20d080-1b54-4048-85b3-a6a62d165c01", "type": "nvidia-11", "iommu_group": 12 }] self.assertEqual(expected, drvr._get_mediated_devices()) @mock.patch.object(host.Host, 'device_lookup_by_name') @mock.patch.object(host.Host, 'list_mediated_devices') @mock.patch.object(fakelibvirt.Connection, 'getLibVersion', return_value=versionutils.convert_version_to_int( libvirt_driver.MIN_LIBVIRT_MDEV_SUPPORT)) def test_get_mediated_devices_filtering(self, _get_libvirt_version, list_mediated_devices, device_lookup_by_name): list_mediated_devices.return_value = [ 'mdev_4b20d080_1b54_4048_85b3_a6a62d165c01'] def fake_nodeDeviceLookupByName(name): return FakeNodeDevice(_fake_NodeDevXml[name]) device_lookup_by_name.side_effect = fake_nodeDeviceLookupByName drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) # Since we filter by a type not supported by the physical device, # we don't get results. self.assertEqual([], drvr._get_mediated_devices(types=['nvidia-12'])) @mock.patch.object(host.Host, 'list_guests') def test_get_all_assigned_mediated_devices(self, list_guests): dom_with_vgpu = """
""" % uuids.mdev guest1 = libvirt_guest.Guest(FakeVirtDomain()) guest2 = libvirt_guest.Guest(FakeVirtDomain(fake_xml=dom_with_vgpu)) list_guests.return_value = [guest1, guest2] drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) self.assertEqual({uuids.mdev: guest2.uuid}, drvr._get_all_assigned_mediated_devices()) @mock.patch.object(host.Host, 'get_guest') def test_get_all_assigned_mediated_devices_for_an_instance(self, get_guest): dom_with_vgpu = """
""" % uuids.mdev guest = libvirt_guest.Guest(FakeVirtDomain(fake_xml=dom_with_vgpu)) get_guest.return_value = guest drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) fake_inst = objects.Instance() self.assertEqual({uuids.mdev: guest.uuid}, drvr._get_all_assigned_mediated_devices(fake_inst)) get_guest.assert_called_once_with(fake_inst) def test_allocate_mdevs_with_no_vgpu_allocations(self): allocations = { 'rp1': { 'resources': { # Just any resource class but VGPU fields.ResourceClass.VCPU: 1, } } } drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) self.assertIsNone(drvr._allocate_mdevs(allocations=allocations)) @mock.patch.object(libvirt_driver.LibvirtDriver, '_get_existing_mdevs_not_assigned') def test_allocate_mdevs_with_available_mdevs(self, get_unassigned_mdevs): allocations = { 'rp1': { 'resources': { fields.ResourceClass.VGPU: 1, } } } get_unassigned_mdevs.return_value = set([uuids.mdev1]) drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) self.assertEqual([uuids.mdev1], drvr._allocate_mdevs(allocations=allocations)) @mock.patch.object(nova.privsep.libvirt, 'create_mdev') @mock.patch.object(libvirt_driver.LibvirtDriver, '_get_mdev_capable_devices') @mock.patch.object(libvirt_driver.LibvirtDriver, '_get_existing_mdevs_not_assigned') def test_allocate_mdevs_with_no_mdevs_but_capacity(self, unallocated_mdevs, get_mdev_capable_devs, privsep_create_mdev): self.flags(enabled_vgpu_types=['nvidia-11'], group='devices') allocations = { 'rp1': { 'resources': { fields.ResourceClass.VGPU: 1, } } } unallocated_mdevs.return_value = set() get_mdev_capable_devs.return_value = [ {"dev_id": "pci_0000_06_00_0", "types": {'nvidia-11': {'availableInstances': 16, 'name': 'GRID M60-0B', 'deviceAPI': 'vfio-pci'}, } }] privsep_create_mdev.return_value = uuids.mdev1 drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) self.assertEqual([uuids.mdev1], drvr._allocate_mdevs(allocations=allocations)) privsep_create_mdev.assert_called_once_with("0000:06:00.0", 'nvidia-11', uuid=None) @mock.patch.object(nova.privsep.libvirt, 'create_mdev') @mock.patch.object(libvirt_driver.LibvirtDriver, '_get_mdev_capable_devices') @mock.patch.object(libvirt_driver.LibvirtDriver, '_get_existing_mdevs_not_assigned') def test_allocate_mdevs_with_no_gpu_capacity(self, unallocated_mdevs, get_mdev_capable_devs, privsep_create_mdev): self.flags(enabled_vgpu_types=['nvidia-11'], group='devices') allocations = { 'rp1': { 'resources': { fields.ResourceClass.VGPU: 1, } } } unallocated_mdevs.return_value = set() # Mock the fact all possible mediated devices are created and all of # them being assigned get_mdev_capable_devs.return_value = [ {"dev_id": "pci_0000_06_00_0", "types": {'nvidia-11': {'availableInstances': 0, 'name': 'GRID M60-0B', 'deviceAPI': 'vfio-pci'}, } }] drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) self.assertRaises(exception.ComputeResourcesUnavailable, drvr._allocate_mdevs, allocations=allocations) @mock.patch.object(libvirt_driver.LibvirtDriver, '_get_mediated_devices') @mock.patch.object(libvirt_driver.LibvirtDriver, '_get_all_assigned_mediated_devices') def test_get_existing_mdevs_not_assigned(self, get_all_assigned_mdevs, get_mediated_devices): # mdev2 is assigned to instance1 get_all_assigned_mdevs.return_value = {uuids.mdev2: uuids.inst1} # there is a total of 2 mdevs, mdev1 and mdev2 get_mediated_devices.return_value = [{'dev_id': 'mdev_some_uuid1', 'uuid': uuids.mdev1, 'type': 'nvidia-11', 'iommu_group': 1}, {'dev_id': 'mdev_some_uuid2', 'uuid': 
uuids.mdev2, 'type': 'nvidia-11', 'iommu_group': 1}] drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) # Since mdev2 is assigned to inst1, only mdev1 is available self.assertEqual(set([uuids.mdev1]), drvr._get_existing_mdevs_not_assigned()) @mock.patch.object(nova.privsep.libvirt, 'create_mdev') @mock.patch.object(libvirt_driver.LibvirtDriver, '_get_mdev_capable_devices') @mock.patch.object(os.path, 'exists') @mock.patch.object(libvirt_driver.LibvirtDriver, '_get_all_assigned_mediated_devices') @mock.patch.object(fakelibvirt.Connection, 'getLibVersion', return_value=versionutils.convert_version_to_int( libvirt_driver.MIN_LIBVIRT_MDEV_SUPPORT)) def test_recreate_mediated_device_on_init_host( self, _get_libvirt_version, get_all_assigned_mdevs, exists, get_mdev_capable_devs, privsep_create_mdev): self.flags(enabled_vgpu_types=['nvidia-11'], group='devices') get_all_assigned_mdevs.return_value = {uuids.mdev1: uuids.inst1, uuids.mdev2: uuids.inst2} # Fake the fact that mdev1 is existing but mdev2 not def _exists(path): # Just verify what we ask self.assertIn('/sys/bus/mdev/devices/', path) return True if uuids.mdev1 in path else False exists.side_effect = _exists get_mdev_capable_devs.return_value = [ {"dev_id": "pci_0000_06_00_0", "types": {'nvidia-11': {'availableInstances': 16, 'name': 'GRID M60-0B', 'deviceAPI': 'vfio-pci'}, } }] drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) drvr.init_host(host='foo') privsep_create_mdev.assert_called_once_with( "0000:06:00.0", 'nvidia-11', uuid=uuids.mdev2) @mock.patch.object(libvirt_guest.Guest, 'detach_device') def _test_detach_mediated_devices(self, side_effect, detach_device): dom_with_vgpu = ( """
    <domain>
      <devices>
        <disk type='file' device='disk'>
          <driver name='qemu' type='qcow2' cache='none'/>
          <source file='xxx'/>
          <target dev='vda' bus='virtio'/>
        </disk>
        <hostdev mode='subsystem' type='mdev' model='vfio-pci'>
          <source>
            <address uuid='81db2898-f114-4267-aa8f-375f52c48bcc'/>
          </source>
        </hostdev>
      </devices>
    </domain>
""") detach_device.side_effect = side_effect drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) guest = libvirt_guest.Guest(FakeVirtDomain(fake_xml=dom_with_vgpu)) drvr._detach_mediated_devices(guest) return detach_device def test_detach_mediated_devices(self): def fake_detach_device(cfg_obj, **kwargs): self.assertIsInstance(cfg_obj, vconfig.LibvirtConfigGuestHostdevMDEV) detach_mock = self._test_detach_mediated_devices(fake_detach_device) detach_mock.assert_called_once_with(mock.ANY, live=True) def test_detach_mediated_devices_raises_exc_unsupported(self): exc = fakelibvirt.make_libvirtError( fakelibvirt.libvirtError, 'virDomainDetachDeviceFlags() failed', error_code=fakelibvirt.VIR_ERR_CONFIG_UNSUPPORTED) self.assertRaises(exception.InstanceFaultRollback, self._test_detach_mediated_devices, exc) def test_detach_mediated_devices_raises_exc(self): exc = test.TestingException() self.assertRaises(test.TestingException, self._test_detach_mediated_devices, exc) class LibvirtVolumeUsageTestCase(test.NoDBTestCase): """Test for LibvirtDriver.get_all_volume_usage.""" def setUp(self): super(LibvirtVolumeUsageTestCase, self).setUp() self.useFixture(fakelibvirt.FakeLibvirtFixture()) self.drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) self.c = context.get_admin_context() self.ins_ref = objects.Instance( id=1729, uuid='875a8070-d0b9-4949-8b31-104d125c9a64' ) # verify bootable volume device path also self.bdms = [{'volume_id': 1, 'device_name': '/dev/vde'}, {'volume_id': 2, 'device_name': 'vda'}] def test_get_all_volume_usage(self): def fake_block_stats(instance_name, disk): return (169, 688640, 0, 0, -1) self.stubs.Set(self.drvr, 'block_stats', fake_block_stats) vol_usage = self.drvr.get_all_volume_usage(self.c, [dict(instance=self.ins_ref, instance_bdms=self.bdms)]) expected_usage = [{'volume': 1, 'instance': self.ins_ref, 'rd_bytes': 688640, 'wr_req': 0, 'rd_req': 169, 'wr_bytes': 0}, {'volume': 2, 'instance': self.ins_ref, 'rd_bytes': 688640, 'wr_req': 0, 'rd_req': 169, 'wr_bytes': 0}] self.assertEqual(vol_usage, expected_usage) def test_get_all_volume_usage_device_not_found(self): def fake_get_domain(self, instance): raise exception.InstanceNotFound(instance_id="fakedom") self.stubs.Set(host.Host, '_get_domain', fake_get_domain) vol_usage = self.drvr.get_all_volume_usage(self.c, [dict(instance=self.ins_ref, instance_bdms=self.bdms)]) self.assertEqual(vol_usage, []) class LibvirtNonblockingTestCase(test.NoDBTestCase): """Test libvirtd calls are nonblocking.""" def setUp(self): super(LibvirtNonblockingTestCase, self).setUp() self.useFixture(fakelibvirt.FakeLibvirtFixture()) self.flags(connection_uri="test:///default", group='libvirt') def test_connection_to_primitive(self): # Test bug 962840. 
import nova.virt.libvirt.driver as libvirt_driver drvr = libvirt_driver.LibvirtDriver('') drvr.set_host_enabled = mock.Mock() jsonutils.to_primitive(drvr._conn, convert_instances=True) @mock.patch.object(objects.Service, 'get_by_compute_host') def test_tpool_execute_calls_libvirt(self, mock_svc): conn = fakelibvirt.virConnect() conn.is_expected = True self.mox.StubOutWithMock(eventlet.tpool, 'execute') eventlet.tpool.execute( fakelibvirt.openAuth, 'test:///default', mox.IgnoreArg(), mox.IgnoreArg()).AndReturn(conn) eventlet.tpool.execute( conn.domainEventRegisterAny, None, fakelibvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE, mox.IgnoreArg(), mox.IgnoreArg()) if hasattr(fakelibvirt.virConnect, 'registerCloseCallback'): eventlet.tpool.execute( conn.registerCloseCallback, mox.IgnoreArg(), mox.IgnoreArg()) self.mox.ReplayAll() driver = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) c = driver._get_connection() self.assertTrue(c.is_expected) class LibvirtVolumeSnapshotTestCase(test.NoDBTestCase): """Tests for libvirtDriver.volume_snapshot_create/delete.""" def setUp(self): super(LibvirtVolumeSnapshotTestCase, self).setUp() self.useFixture(fakelibvirt.FakeLibvirtFixture()) self.drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) self.c = context.get_admin_context() self.flags(instance_name_template='instance-%s') # creating instance self.inst = {} self.inst['uuid'] = uuids.fake self.inst['id'] = '1' # system_metadata is needed for objects.Instance.image_meta conversion self.inst['system_metadata'] = {} # create domain info self.dom_xml = """ 0e38683e-f0af-418f-a3f1-6b67ea0f919d """ # alternate domain info with network-backed snapshot chain self.dom_netdisk_xml = """ 0e38683e-f0af-418f-a3f1-6b67eaffffff 0e38683e-f0af-418f-a3f1-6b67ea0f919d """ # XML with netdisk attached, and 1 snapshot taken self.dom_netdisk_xml_2 = """ 0e38683e-f0af-418f-a3f1-6b67eaffffff 0e38683e-f0af-418f-a3f1-6b67ea0f919d """ self.create_info = {'type': 'qcow2', 'snapshot_id': '1234-5678', 'new_file': 'new-file'} self.volume_uuid = '0e38683e-f0af-418f-a3f1-6b67ea0f919d' self.snapshot_id = '9c3ca9f4-9f4e-4dba-bedd-5c5e4b52b162' self.delete_info_1 = {'type': 'qcow2', 'file_to_merge': 'snap.img', 'merge_target_file': None} self.delete_info_2 = {'type': 'qcow2', 'file_to_merge': 'snap.img', 'merge_target_file': 'other-snap.img'} self.delete_info_3 = {'type': 'qcow2', 'file_to_merge': None, 'merge_target_file': None} self.delete_info_netdisk = {'type': 'qcow2', 'file_to_merge': 'snap.img', 'merge_target_file': 'root.img'} self.delete_info_invalid_type = {'type': 'made_up_type', 'file_to_merge': 'some_file', 'merge_target_file': 'some_other_file'} @mock.patch('nova.virt.block_device.DriverVolumeBlockDevice.' 'refresh_connection_info') @mock.patch('nova.objects.block_device.BlockDeviceMapping.' 
'get_by_volume_and_instance') def test_volume_refresh_connection_info(self, mock_get_by_volume_and_instance, mock_refresh_connection_info): instance = objects.Instance(**self.inst) fake_bdm = fake_block_device.FakeDbBlockDeviceDict({ 'id': 123, 'instance_uuid': uuids.instance, 'device_name': '/dev/sdb', 'source_type': 'volume', 'destination_type': 'volume', 'volume_id': 'fake-volume-id-1', 'connection_info': '{"fake": "connection_info"}'}) fake_bdm = objects.BlockDeviceMapping(self.c, **fake_bdm) mock_get_by_volume_and_instance.return_value = fake_bdm self.drvr._volume_refresh_connection_info(self.c, instance, self.volume_uuid) mock_get_by_volume_and_instance.assert_called_once_with( self.c, self.volume_uuid, instance.uuid) mock_refresh_connection_info.assert_called_once_with(self.c, instance, self.drvr._volume_api, self.drvr) def _test_volume_snapshot_create(self, quiesce=True, can_quiesce=True, quiesce_required=False): """Test snapshot creation with file-based disk.""" self.flags(instance_name_template='instance-%s') self.mox.StubOutWithMock(self.drvr._host, '_get_domain') self.mox.StubOutWithMock(self.drvr, '_volume_api') if quiesce_required: self.inst['system_metadata']['image_os_require_quiesce'] = True instance = objects.Instance(**self.inst) new_file = 'new-file' domain = FakeVirtDomain(fake_xml=self.dom_xml) self.mox.StubOutWithMock(domain, 'XMLDesc') self.mox.StubOutWithMock(domain, 'snapshotCreateXML') domain.XMLDesc(flags=0).AndReturn(self.dom_xml) snap_xml_src = ( '\n' ' \n' ' \n' ' \n' ' \n' ' \n' ' \n' '\n') # Older versions of libvirt may be missing these. fakelibvirt.VIR_DOMAIN_SNAPSHOT_CREATE_REUSE_EXT = 32 fakelibvirt.VIR_DOMAIN_SNAPSHOT_CREATE_QUIESCE = 64 snap_flags = (fakelibvirt.VIR_DOMAIN_SNAPSHOT_CREATE_DISK_ONLY | fakelibvirt.VIR_DOMAIN_SNAPSHOT_CREATE_NO_METADATA | fakelibvirt.VIR_DOMAIN_SNAPSHOT_CREATE_REUSE_EXT) snap_flags_q = (snap_flags | fakelibvirt.VIR_DOMAIN_SNAPSHOT_CREATE_QUIESCE) can_quiesce_mock = mock.Mock() if can_quiesce: can_quiesce_mock.return_value = None if quiesce: domain.snapshotCreateXML(snap_xml_src, flags=snap_flags_q) else: # we can quiesce but snapshot with quiesce fails domain.snapshotCreateXML(snap_xml_src, flags=snap_flags_q).\ AndRaise(fakelibvirt.libvirtError( 'quiescing failed, no qemu-ga')) if not quiesce_required: # quiesce is not required so try snapshot again without it domain.snapshotCreateXML(snap_xml_src, flags=snap_flags) else: can_quiesce_mock.side_effect = exception.QemuGuestAgentNotEnabled if not quiesce_required: # quiesce is not required so try snapshot again without it domain.snapshotCreateXML(snap_xml_src, flags=snap_flags) self.drvr._can_quiesce = can_quiesce_mock self.mox.ReplayAll() guest = libvirt_guest.Guest(domain) if quiesce_required and (not quiesce or not can_quiesce): # If we can't quiesce but it's required by the image then we should # fail. if not quiesce: # snapshot + quiesce failed which is a libvirtError expected_error = fakelibvirt.libvirtError else: # quiesce is required but we can't do it expected_error = exception.QemuGuestAgentNotEnabled self.assertRaises(expected_error, self.drvr._volume_snapshot_create, self.c, instance, guest, self.volume_uuid, new_file) else: self.drvr._volume_snapshot_create(self.c, instance, guest, self.volume_uuid, new_file) # instance.image_meta generates a new objects.ImageMeta object each # time it's called so just use a mock.ANY for the image_meta arg. 
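        # (A freshly built ImageMeta does not compare equal to another one
        # built from the same metadata, so an exact-argument assertion could
        # never match.)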
can_quiesce_mock.assert_called_once_with(instance, mock.ANY) self.mox.VerifyAll() def test_volume_snapshot_create_libgfapi(self): """Test snapshot creation with libgfapi network disk.""" self.flags(instance_name_template = 'instance-%s') self.mox.StubOutWithMock(self.drvr._host, '_get_domain') self.mox.StubOutWithMock(self.drvr, '_volume_api') self.dom_xml = """ 0e38683e-f0af-418f-a3f1-6b67ea0f919d """ instance = objects.Instance(**self.inst) new_file = 'new-file' domain = FakeVirtDomain(fake_xml=self.dom_xml) self.mox.StubOutWithMock(domain, 'XMLDesc') self.mox.StubOutWithMock(domain, 'snapshotCreateXML') domain.XMLDesc(flags=0).AndReturn(self.dom_xml) snap_xml_src = ( '\n' ' \n' ' \n' ' \n' ' \n' ' \n' ' \n' '\n') # Older versions of libvirt may be missing these. fakelibvirt.VIR_DOMAIN_SNAPSHOT_CREATE_REUSE_EXT = 32 fakelibvirt.VIR_DOMAIN_SNAPSHOT_CREATE_QUIESCE = 64 snap_flags = (fakelibvirt.VIR_DOMAIN_SNAPSHOT_CREATE_DISK_ONLY | fakelibvirt.VIR_DOMAIN_SNAPSHOT_CREATE_NO_METADATA | fakelibvirt.VIR_DOMAIN_SNAPSHOT_CREATE_REUSE_EXT) snap_flags_q = (snap_flags | fakelibvirt.VIR_DOMAIN_SNAPSHOT_CREATE_QUIESCE) domain.snapshotCreateXML(snap_xml_src, flags=snap_flags_q) self.mox.ReplayAll() guest = libvirt_guest.Guest(domain) with mock.patch.object(self.drvr, '_can_quiesce', return_value=None): self.drvr._volume_snapshot_create(self.c, instance, guest, self.volume_uuid, new_file) self.mox.VerifyAll() def test_volume_snapshot_create_cannot_quiesce(self): # We can't quiesce so we don't try. self._test_volume_snapshot_create(can_quiesce=False) def test_volume_snapshot_create_cannot_quiesce_quiesce_required(self): # We can't quiesce but it's required so we fail. self._test_volume_snapshot_create(can_quiesce=False, quiesce_required=True) def test_volume_snapshot_create_can_quiesce_quiesce_required_fails(self): # We can quiesce but it fails and it's required so we fail. self._test_volume_snapshot_create( quiesce=False, can_quiesce=True, quiesce_required=True) def test_volume_snapshot_create_noquiesce(self): # We can quiesce but it fails but it's not required so we don't fail. self._test_volume_snapshot_create(quiesce=False) def test_volume_snapshot_create_noquiesce_cannot_quiesce(self): # We can't quiesce so we don't try, and if we did we'd fail. self._test_volume_snapshot_create(quiesce=False, can_quiesce=False) @mock.patch.object(host.Host, 'has_min_version', return_value=True) def test_can_quiesce(self, ver): self.flags(virt_type='kvm', group='libvirt') instance = objects.Instance(**self.inst) image_meta = objects.ImageMeta.from_dict( {"properties": { "hw_qemu_guest_agent": "yes"}}) self.assertIsNone(self.drvr._can_quiesce(instance, image_meta)) @mock.patch.object(host.Host, 'has_min_version', return_value=True) def test_can_quiesce_bad_hyp(self, ver): self.flags(virt_type='lxc', group='libvirt') instance = objects.Instance(**self.inst) image_meta = objects.ImageMeta.from_dict( {"properties": { "hw_qemu_guest_agent": "yes"}}) self.assertRaises(exception.InstanceQuiesceNotSupported, self.drvr._can_quiesce, instance, image_meta) @mock.patch.object(host.Host, 'has_min_version', return_value=True) def test_can_quiesce_agent_not_enable(self, ver): self.flags(virt_type='kvm', group='libvirt') instance = objects.Instance(**self.inst) image_meta = objects.ImageMeta.from_dict({}) self.assertRaises(exception.QemuGuestAgentNotEnabled, self.drvr._can_quiesce, instance, image_meta) @mock.patch('oslo_service.loopingcall.FixedIntervalLoopingCall') @mock.patch('nova.virt.libvirt.driver.LibvirtDriver.' 
'_volume_snapshot_create') @mock.patch('nova.virt.libvirt.driver.LibvirtDriver.' '_volume_refresh_connection_info') def test_volume_snapshot_create_outer_success(self, mock_refresh, mock_snap_create, mock_loop): class FakeLoopingCall(object): def __init__(self, func): self.func = func def start(self, *a, **k): try: self.func() except loopingcall.LoopingCallDone: pass return self def wait(self): return None mock_loop.side_effect = FakeLoopingCall instance = objects.Instance(**self.inst) domain = FakeVirtDomain(fake_xml=self.dom_xml, id=1) guest = libvirt_guest.Guest(domain) @mock.patch.object(self.drvr, '_volume_api') @mock.patch.object(self.drvr._host, 'get_guest') def _test(mock_get_guest, mock_vol_api): mock_get_guest.return_value = guest mock_vol_api.get_snapshot.return_value = {'status': 'available'} self.drvr.volume_snapshot_create(self.c, instance, self.volume_uuid, self.create_info) mock_get_guest.assert_called_once_with(instance) mock_snap_create.assert_called_once_with( self.c, instance, guest, self.volume_uuid, self.create_info['new_file']) mock_vol_api.update_snapshot_status.assert_called_once_with( self.c, self.create_info['snapshot_id'], 'creating') mock_vol_api.get_snapshot.assert_called_once_with( self.c, self.create_info['snapshot_id']) mock_refresh.assert_called_once_with( self.c, instance, self.volume_uuid) _test() def test_volume_snapshot_create_outer_failure(self): instance = objects.Instance(**self.inst) domain = FakeVirtDomain(fake_xml=self.dom_xml, id=1) guest = libvirt_guest.Guest(domain) self.mox.StubOutWithMock(self.drvr._host, 'get_guest') self.mox.StubOutWithMock(self.drvr, '_volume_api') self.mox.StubOutWithMock(self.drvr, '_volume_snapshot_create') self.drvr._host.get_guest(instance).AndReturn(guest) self.drvr._volume_snapshot_create(self.c, instance, guest, self.volume_uuid, self.create_info['new_file']).\ AndRaise(exception.NovaException('oops')) self.drvr._volume_api.update_snapshot_status( self.c, self.create_info['snapshot_id'], 'error') self.mox.ReplayAll() self.assertRaises(exception.NovaException, self.drvr.volume_snapshot_create, self.c, instance, self.volume_uuid, self.create_info) @mock.patch('nova.virt.libvirt.guest.BlockDevice.is_job_complete') def test_volume_snapshot_delete_1(self, mock_is_job_complete): """Deleting newest snapshot -- blockRebase.""" # libvirt lib doesn't have VIR_DOMAIN_BLOCK_REBASE_RELATIVE flag fakelibvirt.__dict__.pop('VIR_DOMAIN_BLOCK_REBASE_RELATIVE') self.stubs.Set(libvirt_driver, 'libvirt', fakelibvirt) instance = objects.Instance(**self.inst) snapshot_id = 'snapshot-1234' domain = FakeVirtDomain(fake_xml=self.dom_xml) self.mox.StubOutWithMock(domain, 'XMLDesc') domain.XMLDesc(flags=0).AndReturn(self.dom_xml) self.mox.StubOutWithMock(self.drvr._host, '_get_domain') self.mox.StubOutWithMock(domain, 'blockRebase') self.mox.StubOutWithMock(domain, 'blockCommit') self.drvr._host._get_domain(instance).AndReturn(domain) domain.blockRebase('vda', 'snap.img', 0, flags=0) self.mox.ReplayAll() # is_job_complete returns False when initially called, then True mock_is_job_complete.side_effect = (False, True) self.drvr._volume_snapshot_delete(self.c, instance, self.volume_uuid, snapshot_id, self.delete_info_1) self.mox.VerifyAll() self.assertEqual(2, mock_is_job_complete.call_count) fakelibvirt.__dict__.update({'VIR_DOMAIN_BLOCK_REBASE_RELATIVE': 8}) @mock.patch('nova.virt.libvirt.guest.BlockDevice.is_job_complete') def test_volume_snapshot_delete_relative_1(self, mock_is_job_complete): """Deleting newest snapshot -- blockRebase using 
relative flag""" self.stubs.Set(libvirt_driver, 'libvirt', fakelibvirt) instance = objects.Instance(**self.inst) snapshot_id = 'snapshot-1234' domain = FakeVirtDomain(fake_xml=self.dom_xml) guest = libvirt_guest.Guest(domain) self.mox.StubOutWithMock(domain, 'XMLDesc') domain.XMLDesc(flags=0).AndReturn(self.dom_xml) self.mox.StubOutWithMock(self.drvr._host, 'get_guest') self.mox.StubOutWithMock(domain, 'blockRebase') self.mox.StubOutWithMock(domain, 'blockCommit') self.drvr._host.get_guest(instance).AndReturn(guest) domain.blockRebase('vda', 'snap.img', 0, flags=fakelibvirt.VIR_DOMAIN_BLOCK_REBASE_RELATIVE) self.mox.ReplayAll() # is_job_complete returns False when initially called, then True mock_is_job_complete.side_effect = (False, True) self.drvr._volume_snapshot_delete(self.c, instance, self.volume_uuid, snapshot_id, self.delete_info_1) self.mox.VerifyAll() self.assertEqual(2, mock_is_job_complete.call_count) def _setup_block_rebase_domain_and_guest_mocks(self, dom_xml): mock_domain = mock.Mock(spec=fakelibvirt.virDomain) mock_domain.XMLDesc.return_value = dom_xml guest = libvirt_guest.Guest(mock_domain) exc = fakelibvirt.make_libvirtError( fakelibvirt.libvirtError, 'virDomainBlockRebase() failed', error_code=fakelibvirt.VIR_ERR_OPERATION_INVALID) mock_domain.blockRebase.side_effect = exc return mock_domain, guest @mock.patch.object(host.Host, "has_min_version", mock.Mock(return_value=True)) @mock.patch("nova.virt.libvirt.guest.Guest.is_active", mock.Mock(return_value=False)) @mock.patch('nova.virt.images.qemu_img_info', return_value=mock.Mock(file_format="fake_fmt")) @mock.patch('nova.utils.execute') def test_volume_snapshot_delete_when_dom_not_running(self, mock_execute, mock_qemu_img_info): """Deleting newest snapshot of a file-based image when the domain is not running should trigger a blockRebase using qemu-img not libvirt. In this test, we rebase the image with another image as backing file. """ mock_domain, guest = self._setup_block_rebase_domain_and_guest_mocks( self.dom_xml) instance = objects.Instance(**self.inst) snapshot_id = 'snapshot-1234' with mock.patch.object(self.drvr._host, 'get_guest', return_value=guest): self.drvr._volume_snapshot_delete(self.c, instance, self.volume_uuid, snapshot_id, self.delete_info_1) mock_qemu_img_info.assert_called_once_with("snap.img") mock_execute.assert_called_once_with('qemu-img', 'rebase', '-b', 'snap.img', '-F', 'fake_fmt', 'disk1_file') @mock.patch.object(host.Host, "has_min_version", mock.Mock(return_value=True)) @mock.patch("nova.virt.libvirt.guest.Guest.is_active", mock.Mock(return_value=False)) @mock.patch('nova.virt.images.qemu_img_info', return_value=mock.Mock(file_format="fake_fmt")) @mock.patch('nova.utils.execute') def test_volume_snapshot_delete_when_dom_not_running_and_no_rebase_base( self, mock_execute, mock_qemu_img_info): """Deleting newest snapshot of a file-based image when the domain is not running should trigger a blockRebase using qemu-img not libvirt. In this test, the image is rebased onto no backing file (i.e. 
it will exist independently of any backing file) """ mock_domain, mock_guest = ( self._setup_block_rebase_domain_and_guest_mocks(self.dom_xml)) instance = objects.Instance(**self.inst) snapshot_id = 'snapshot-1234' with mock.patch.object(self.drvr._host, 'get_guest', return_value=mock_guest): self.drvr._volume_snapshot_delete(self.c, instance, self.volume_uuid, snapshot_id, self.delete_info_3) self.assertEqual(0, mock_qemu_img_info.call_count) mock_execute.assert_called_once_with('qemu-img', 'rebase', '-b', '', 'disk1_file') @mock.patch.object(host.Host, "has_min_version", mock.Mock(return_value=True)) @mock.patch("nova.virt.libvirt.guest.Guest.is_active", mock.Mock(return_value=False)) def test_volume_snapshot_delete_when_dom_with_nw_disk_not_running(self): """Deleting newest snapshot of a network disk when the domain is not running should raise a NovaException. """ mock_domain, mock_guest = ( self._setup_block_rebase_domain_and_guest_mocks( self.dom_netdisk_xml)) instance = objects.Instance(**self.inst) snapshot_id = 'snapshot-1234' with mock.patch.object(self.drvr._host, 'get_guest', return_value=mock_guest): ex = self.assertRaises(exception.NovaException, self.drvr._volume_snapshot_delete, self.c, instance, self.volume_uuid, snapshot_id, self.delete_info_1) self.assertIn('has not been fully tested', six.text_type(ex)) @mock.patch('nova.virt.libvirt.guest.BlockDevice.is_job_complete') def test_volume_snapshot_delete_relative_2(self, mock_is_job_complete): """Deleting older snapshot -- blockCommit using relative flag""" self.stubs.Set(libvirt_driver, 'libvirt', fakelibvirt) instance = objects.Instance(**self.inst) snapshot_id = 'snapshot-1234' domain = FakeVirtDomain(fake_xml=self.dom_xml) self.mox.StubOutWithMock(domain, 'XMLDesc') domain.XMLDesc(flags=0).AndReturn(self.dom_xml) self.mox.StubOutWithMock(self.drvr._host, '_get_domain') self.mox.StubOutWithMock(domain, 'blockRebase') self.mox.StubOutWithMock(domain, 'blockCommit') self.drvr._host._get_domain(instance).AndReturn(domain) domain.blockCommit('vda', 'other-snap.img', 'snap.img', 0, flags=fakelibvirt.VIR_DOMAIN_BLOCK_COMMIT_RELATIVE) self.mox.ReplayAll() # is_job_complete returns False when initially called, then True mock_is_job_complete.side_effect = (False, True) self.drvr._volume_snapshot_delete(self.c, instance, self.volume_uuid, snapshot_id, self.delete_info_2) self.mox.VerifyAll() self.assertEqual(2, mock_is_job_complete.call_count) @mock.patch('nova.virt.libvirt.guest.BlockDevice.is_job_complete') def test_volume_snapshot_delete_nonrelative_null_base( self, mock_is_job_complete): # Deleting newest and last snapshot of a volume # with blockRebase. So base of the new image will be null. 
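        # Rebasing onto a null base pulls the backing data into the active
        # image so it stands alone; the offline analogue asserted earlier in
        # this class is roughly `qemu-img rebase -b '' disk1_file`.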
instance = objects.Instance(**self.inst) snapshot_id = 'snapshot-1234' domain = FakeVirtDomain(fake_xml=self.dom_xml) guest = libvirt_guest.Guest(domain) mock_is_job_complete.return_value = True with test.nested( mock.patch.object(domain, 'XMLDesc', return_value=self.dom_xml), mock.patch.object(self.drvr._host, 'get_guest', return_value=guest), mock.patch.object(domain, 'blockRebase'), ) as (mock_xmldesc, mock_get_guest, mock_rebase): self.drvr._volume_snapshot_delete(self.c, instance, self.volume_uuid, snapshot_id, self.delete_info_3) mock_xmldesc.assert_called_once_with(flags=0) mock_get_guest.assert_called_once_with(instance) mock_rebase.assert_called_once_with('vda', None, 0, flags=0) mock_is_job_complete.assert_called() @mock.patch('nova.virt.libvirt.guest.BlockDevice.is_job_complete') def test_volume_snapshot_delete_netdisk_nonrelative_null_base( self, mock_is_job_complete): # Deleting newest and last snapshot of a network attached volume # with blockRebase. So base of the new image will be null. instance = objects.Instance(**self.inst) snapshot_id = 'snapshot-1234' domain = FakeVirtDomain(fake_xml=self.dom_netdisk_xml_2) guest = libvirt_guest.Guest(domain) mock_is_job_complete.return_value = True with test.nested( mock.patch.object(domain, 'XMLDesc', return_value=self.dom_netdisk_xml_2), mock.patch.object(self.drvr._host, 'get_guest', return_value=guest), mock.patch.object(domain, 'blockRebase'), ) as (mock_xmldesc, mock_get_guest, mock_rebase): self.drvr._volume_snapshot_delete(self.c, instance, self.volume_uuid, snapshot_id, self.delete_info_3) mock_xmldesc.assert_called_once_with(flags=0) mock_get_guest.assert_called_once_with(instance) mock_rebase.assert_called_once_with('vdb', None, 0, flags=0) mock_is_job_complete.assert_called() def test_volume_snapshot_delete_outer_success(self): instance = objects.Instance(**self.inst) snapshot_id = 'snapshot-1234' FakeVirtDomain(fake_xml=self.dom_xml) self.mox.StubOutWithMock(self.drvr._host, '_get_domain') self.mox.StubOutWithMock(self.drvr, '_volume_api') self.mox.StubOutWithMock(self.drvr, '_volume_snapshot_delete') self.drvr._volume_snapshot_delete(self.c, instance, self.volume_uuid, snapshot_id, delete_info=self.delete_info_1) self.drvr._volume_api.update_snapshot_status( self.c, snapshot_id, 'deleting') self.mox.StubOutWithMock(self.drvr, '_volume_refresh_connection_info') self.drvr._volume_refresh_connection_info(self.c, instance, self.volume_uuid) self.mox.ReplayAll() self.drvr.volume_snapshot_delete(self.c, instance, self.volume_uuid, snapshot_id, self.delete_info_1) self.mox.VerifyAll() def test_volume_snapshot_delete_outer_failure(self): instance = objects.Instance(**self.inst) snapshot_id = '1234-9876' FakeVirtDomain(fake_xml=self.dom_xml) self.mox.StubOutWithMock(self.drvr._host, '_get_domain') self.mox.StubOutWithMock(self.drvr, '_volume_api') self.mox.StubOutWithMock(self.drvr, '_volume_snapshot_delete') self.drvr._volume_snapshot_delete(self.c, instance, self.volume_uuid, snapshot_id, delete_info=self.delete_info_1).\ AndRaise(exception.NovaException('oops')) self.drvr._volume_api.update_snapshot_status( self.c, snapshot_id, 'error_deleting') self.mox.ReplayAll() self.assertRaises(exception.NovaException, self.drvr.volume_snapshot_delete, self.c, instance, self.volume_uuid, snapshot_id, self.delete_info_1) self.mox.VerifyAll() def test_volume_snapshot_delete_invalid_type(self): instance = objects.Instance(**self.inst) FakeVirtDomain(fake_xml=self.dom_xml) self.mox.StubOutWithMock(self.drvr._host, '_get_domain') 
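        # _volume_snapshot_delete is deliberately left unstubbed: the real
        # code must reject the made-up delete_info type, marking the snapshot
        # 'error_deleting' as the NovaException propagates.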
self.mox.StubOutWithMock(self.drvr, '_volume_api') self.drvr._volume_api.update_snapshot_status( self.c, self.snapshot_id, 'error_deleting') self.mox.ReplayAll() self.assertRaises(exception.NovaException, self.drvr.volume_snapshot_delete, self.c, instance, self.volume_uuid, self.snapshot_id, self.delete_info_invalid_type) @mock.patch('nova.virt.libvirt.guest.BlockDevice.is_job_complete') def test_volume_snapshot_delete_netdisk_1(self, mock_is_job_complete): """Delete newest snapshot -- blockRebase for libgfapi/network disk.""" class FakeNetdiskDomain(FakeVirtDomain): def __init__(self, *args, **kwargs): super(FakeNetdiskDomain, self).__init__(*args, **kwargs) def XMLDesc(self, flags): return self.dom_netdisk_xml # libvirt lib doesn't have VIR_DOMAIN_BLOCK_REBASE_RELATIVE fakelibvirt.__dict__.pop('VIR_DOMAIN_BLOCK_REBASE_RELATIVE') self.stubs.Set(libvirt_driver, 'libvirt', fakelibvirt) instance = objects.Instance(**self.inst) snapshot_id = 'snapshot-1234' domain = FakeNetdiskDomain(fake_xml=self.dom_netdisk_xml) self.mox.StubOutWithMock(domain, 'XMLDesc') domain.XMLDesc(flags=0).AndReturn(self.dom_netdisk_xml) self.mox.StubOutWithMock(self.drvr._host, '_get_domain') self.mox.StubOutWithMock(domain, 'blockRebase') self.mox.StubOutWithMock(domain, 'blockCommit') self.drvr._host._get_domain(instance).AndReturn(domain) domain.blockRebase('vdb', 'vdb[1]', 0, flags=0) self.mox.ReplayAll() # is_job_complete returns False when initially called, then True mock_is_job_complete.side_effect = (False, True) self.drvr._volume_snapshot_delete(self.c, instance, self.volume_uuid, snapshot_id, self.delete_info_1) self.mox.VerifyAll() self.assertEqual(2, mock_is_job_complete.call_count) fakelibvirt.__dict__.update({'VIR_DOMAIN_BLOCK_REBASE_RELATIVE': 8}) @mock.patch('nova.virt.libvirt.guest.BlockDevice.is_job_complete') def test_volume_snapshot_delete_netdisk_relative_1( self, mock_is_job_complete): """Delete newest snapshot -- blockRebase for libgfapi/network disk.""" class FakeNetdiskDomain(FakeVirtDomain): def __init__(self, *args, **kwargs): super(FakeNetdiskDomain, self).__init__(*args, **kwargs) def XMLDesc(self, flags): return self.dom_netdisk_xml self.stubs.Set(libvirt_driver, 'libvirt', fakelibvirt) instance = objects.Instance(**self.inst) snapshot_id = 'snapshot-1234' domain = FakeNetdiskDomain(fake_xml=self.dom_netdisk_xml) self.mox.StubOutWithMock(domain, 'XMLDesc') domain.XMLDesc(flags=0).AndReturn(self.dom_netdisk_xml) self.mox.StubOutWithMock(self.drvr._host, '_get_domain') self.mox.StubOutWithMock(domain, 'blockRebase') self.mox.StubOutWithMock(domain, 'blockCommit') self.drvr._host._get_domain(instance).AndReturn(domain) domain.blockRebase('vdb', 'vdb[1]', 0, flags=fakelibvirt.VIR_DOMAIN_BLOCK_REBASE_RELATIVE) self.mox.ReplayAll() # is_job_complete returns False when initially called, then True mock_is_job_complete.side_effect = (False, True) self.drvr._volume_snapshot_delete(self.c, instance, self.volume_uuid, snapshot_id, self.delete_info_1) self.mox.VerifyAll() self.assertEqual(2, mock_is_job_complete.call_count) @mock.patch('nova.virt.libvirt.guest.BlockDevice.is_job_complete') def test_volume_snapshot_delete_netdisk_relative_2( self, mock_is_job_complete): """Delete older snapshot -- blockCommit for libgfapi/network disk.""" class FakeNetdiskDomain(FakeVirtDomain): def __init__(self, *args, **kwargs): super(FakeNetdiskDomain, self).__init__(*args, **kwargs) def XMLDesc(self, flags): return self.dom_netdisk_xml self.stubs.Set(libvirt_driver, 'libvirt', fakelibvirt) instance = 
objects.Instance(**self.inst) snapshot_id = 'snapshot-1234' domain = FakeNetdiskDomain(fake_xml=self.dom_netdisk_xml) self.mox.StubOutWithMock(domain, 'XMLDesc') domain.XMLDesc(flags=0).AndReturn(self.dom_netdisk_xml) self.mox.StubOutWithMock(self.drvr._host, '_get_domain') self.mox.StubOutWithMock(domain, 'blockRebase') self.mox.StubOutWithMock(domain, 'blockCommit') self.drvr._host._get_domain(instance).AndReturn(domain) domain.blockCommit('vdb', 'vdb[0]', 'vdb[1]', 0, flags=fakelibvirt.VIR_DOMAIN_BLOCK_COMMIT_RELATIVE) self.mox.ReplayAll() # is_job_complete returns False when initially called, then True mock_is_job_complete.side_effect = (False, True) self.drvr._volume_snapshot_delete(self.c, instance, self.volume_uuid, snapshot_id, self.delete_info_netdisk) self.mox.VerifyAll() self.assertEqual(2, mock_is_job_complete.call_count) def _fake_convert_image(source, dest, in_format, out_format, run_as_root=True): libvirt_driver.libvirt_utils.files[dest] = b'' class _BaseSnapshotTests(test.NoDBTestCase): def setUp(self): super(_BaseSnapshotTests, self).setUp() self.flags(snapshots_directory='./', group='libvirt') self.context = context.get_admin_context() self.useFixture(fixtures.MonkeyPatch( 'nova.virt.libvirt.driver.libvirt_utils', fake_libvirt_utils)) self.useFixture(fixtures.MonkeyPatch( 'nova.virt.libvirt.imagebackend.libvirt_utils', fake_libvirt_utils)) self.useFixture(fakelibvirt.FakeLibvirtFixture()) self.image_service = nova.tests.unit.image.fake.stub_out_image_service( self) self.mock_update_task_state = mock.Mock() test_instance = _create_test_instance() self.instance_ref = objects.Instance(**test_instance) self.instance_ref.info_cache = objects.InstanceInfoCache( network_info=None) def _assert_snapshot(self, snapshot, disk_format, expected_properties=None): self.mock_update_task_state.assert_has_calls([ mock.call(task_state=task_states.IMAGE_PENDING_UPLOAD), mock.call(task_state=task_states.IMAGE_UPLOADING, expected_state=task_states.IMAGE_PENDING_UPLOAD)]) props = snapshot['properties'] self.assertEqual(props['image_state'], 'available') self.assertEqual(snapshot['status'], 'active') self.assertEqual(snapshot['disk_format'], disk_format) self.assertEqual(snapshot['name'], 'test-snap') if expected_properties: for expected_key, expected_value in \ expected_properties.items(): self.assertEqual(expected_value, props[expected_key]) def _create_image(self, extra_properties=None): properties = {'instance_id': self.instance_ref['id'], 'user_id': str(self.context.user_id)} if extra_properties: properties.update(extra_properties) sent_meta = {'name': 'test-snap', 'is_public': False, 'status': 'creating', 'properties': properties} # Create new image. 
It will be updated in snapshot method # To work with it from snapshot, the single image_service is needed recv_meta = self.image_service.create(self.context, sent_meta) return recv_meta @mock.patch.object(host.Host, 'has_min_version') @mock.patch.object(imagebackend.Image, 'resolve_driver_format') @mock.patch.object(host.Host, '_get_domain') def _snapshot(self, image_id, mock_get_domain, mock_resolve, mock_version): mock_get_domain.return_value = FakeVirtDomain() driver = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) driver.snapshot(self.context, self.instance_ref, image_id, self.mock_update_task_state) snapshot = self.image_service.show(self.context, image_id) return snapshot def _test_snapshot(self, disk_format, extra_properties=None): recv_meta = self._create_image(extra_properties=extra_properties) snapshot = self._snapshot(recv_meta['id']) self._assert_snapshot(snapshot, disk_format=disk_format, expected_properties=extra_properties) class LibvirtSnapshotTests(_BaseSnapshotTests): def setUp(self): super(LibvirtSnapshotTests, self).setUp() # All paths through livesnapshot trigger a chown behind privsep self.privsep_chown = mock.patch.object(nova.privsep.path, 'chown') self.addCleanup(self.privsep_chown.stop) self.privsep_chown.start() def test_ami(self): # Assign different image_ref from nova/images/fakes for testing ami self.instance_ref.image_ref = 'c905cedb-7281-47e4-8a62-f26bc5fc4c77' self.instance_ref.system_metadata = \ utils.get_system_metadata_from_image( {'disk_format': 'ami'}) self._test_snapshot(disk_format='ami') @mock.patch.object(fake_libvirt_utils, 'disk_type', new='raw') @mock.patch.object(libvirt_driver.imagebackend.images, 'convert_image', side_effect=_fake_convert_image) def test_raw(self, mock_convert_image): self._test_snapshot(disk_format='raw') def test_qcow2(self): self._test_snapshot(disk_format='qcow2') @mock.patch.object(fake_libvirt_utils, 'disk_type', new='ploop') @mock.patch.object(libvirt_driver.imagebackend.images, 'convert_image', side_effect=_fake_convert_image) def test_ploop(self, mock_convert_image): self._test_snapshot(disk_format='ploop') def test_no_image_architecture(self): self.instance_ref.image_ref = '76fa36fc-c930-4bf3-8c8a-ea2a2420deb6' self._test_snapshot(disk_format='qcow2') def test_no_original_image(self): self.instance_ref.image_ref = '661122aa-1234-dede-fefe-babababababa' self._test_snapshot(disk_format='qcow2') def test_snapshot_metadata_image(self): # Assign an image with an architecture defined (x86_64) self.instance_ref.image_ref = 'a440c04b-79fa-479c-bed1-0b816eaec379' extra_properties = {'architecture': 'fake_arch', 'key_a': 'value_a', 'key_b': 'value_b', 'os_type': 'linux'} self._test_snapshot(disk_format='qcow2', extra_properties=extra_properties) @mock.patch.object(rbd_utils, 'RBDDriver') @mock.patch.object(rbd_utils, 'rbd') def test_raw_with_rbd_clone(self, mock_rbd, mock_driver): self.flags(images_type='rbd', group='libvirt') rbd = mock_driver.return_value rbd.parent_info = mock.Mock(return_value=['test-pool', '', '']) rbd.parse_url = mock.Mock(return_value=['a', 'b', 'c', 'd']) with mock.patch.object(fake_libvirt_utils, 'find_disk', return_value=('rbd://some/fake/rbd/image', 'raw')): with mock.patch.object(fake_libvirt_utils, 'disk_type', new='rbd'): self._test_snapshot(disk_format='raw') rbd.clone.assert_called_with(mock.ANY, mock.ANY, dest_pool='test-pool') rbd.flatten.assert_called_with(mock.ANY, pool='test-pool') @mock.patch.object(rbd_utils, 'RBDDriver') @mock.patch.object(rbd_utils, 'rbd') def 
test_raw_with_rbd_clone_graceful_fallback(self, mock_rbd, mock_driver): self.flags(images_type='rbd', group='libvirt') rbd = mock_driver.return_value rbd.parent_info = mock.Mock(side_effect=exception.ImageUnacceptable( image_id='fake_id', reason='rbd testing')) with test.nested( mock.patch.object(libvirt_driver.imagebackend.images, 'convert_image', side_effect=_fake_convert_image), mock.patch.object(fake_libvirt_utils, 'find_disk', return_value=('rbd://some/fake/rbd/image', 'raw')), mock.patch.object(fake_libvirt_utils, 'disk_type', new='rbd')): self._test_snapshot(disk_format='raw') self.assertFalse(rbd.clone.called) @mock.patch.object(rbd_utils, 'RBDDriver') @mock.patch.object(rbd_utils, 'rbd') def test_raw_with_rbd_clone_eperm(self, mock_rbd, mock_driver): self.flags(images_type='rbd', group='libvirt') rbd = mock_driver.return_value rbd.parent_info = mock.Mock(return_value=['test-pool', '', '']) rbd.parse_url = mock.Mock(return_value=['a', 'b', 'c', 'd']) rbd.clone = mock.Mock(side_effect=exception.Forbidden( image_id='fake_id', reason='rbd testing')) with test.nested( mock.patch.object(libvirt_driver.imagebackend.images, 'convert_image', side_effect=_fake_convert_image), mock.patch.object(fake_libvirt_utils, 'find_disk', return_value=('rbd://some/fake/rbd/image', 'raw')), mock.patch.object(fake_libvirt_utils, 'disk_type', new='rbd')): self._test_snapshot(disk_format='raw') # Ensure that the direct_snapshot attempt was cleaned up rbd.remove_snap.assert_called_with('c', 'd', ignore_errors=False, pool='b', force=True) @mock.patch.object(rbd_utils, 'RBDDriver') @mock.patch.object(rbd_utils, 'rbd') def test_raw_with_rbd_clone_post_process_fails(self, mock_rbd, mock_driver): self.flags(images_type='rbd', group='libvirt') rbd = mock_driver.return_value rbd.parent_info = mock.Mock(return_value=['test-pool', '', '']) rbd.parse_url = mock.Mock(return_value=['a', 'b', 'c', 'd']) with test.nested( mock.patch.object(fake_libvirt_utils, 'find_disk', return_value=('rbd://some/fake/rbd/image', 'raw')), mock.patch.object(fake_libvirt_utils, 'disk_type', new='rbd'), mock.patch.object(self.image_service, 'update', side_effect=test.TestingException)): self.assertRaises(test.TestingException, self._test_snapshot, disk_format='raw') rbd.clone.assert_called_with(mock.ANY, mock.ANY, dest_pool='test-pool') rbd.flatten.assert_called_with(mock.ANY, pool='test-pool') # Ensure that the direct_snapshot attempt was cleaned up rbd.remove_snap.assert_called_with('c', 'd', ignore_errors=True, pool='b', force=True) @mock.patch.object(imagebackend.Image, 'direct_snapshot') @mock.patch.object(imagebackend.Image, 'resolve_driver_format') @mock.patch.object(host.Host, 'has_min_version', return_value=True) @mock.patch.object(host.Host, 'get_guest') def test_raw_with_rbd_clone_is_live_snapshot(self, mock_get_guest, mock_version, mock_resolve, mock_snapshot): self.flags(disable_libvirt_livesnapshot=False, group='workarounds') self.flags(images_type='rbd', group='libvirt') mock_guest = mock.Mock(spec=libvirt_guest.Guest) mock_guest._domain = mock.Mock() mock_get_guest.return_value = mock_guest driver = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) recv_meta = self._create_image() with mock.patch.object(driver, "suspend") as mock_suspend: driver.snapshot(self.context, self.instance_ref, recv_meta['id'], self.mock_update_task_state) self.assertFalse(mock_suspend.called) @mock.patch.object(libvirt_driver.imagebackend.images, 'convert_image', side_effect=_fake_convert_image) @mock.patch.object(fake_libvirt_utils, 
'find_disk') @mock.patch.object(imagebackend.Image, 'resolve_driver_format') @mock.patch.object(host.Host, 'has_min_version', return_value=True) @mock.patch.object(host.Host, 'get_guest') @mock.patch.object(rbd_utils, 'RBDDriver') @mock.patch.object(rbd_utils, 'rbd') def test_raw_with_rbd_clone_failure_does_cold_snapshot(self, mock_rbd, mock_driver, mock_get_guest, mock_version, mock_resolve, mock_find_disk, mock_convert): self.flags(disable_libvirt_livesnapshot=False, group='workarounds') self.flags(images_type='rbd', group='libvirt') rbd = mock_driver.return_value rbd.parent_info = mock.Mock(side_effect=exception.ImageUnacceptable( image_id='fake_id', reason='rbd testing')) mock_find_disk.return_value = ('rbd://some/fake/rbd/image', 'raw') mock_guest = mock.Mock(spec=libvirt_guest.Guest) mock_guest.get_power_state.return_value = power_state.RUNNING mock_guest._domain = mock.Mock() mock_get_guest.return_value = mock_guest driver = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), False) recv_meta = self._create_image() with mock.patch.object(fake_libvirt_utils, 'disk_type', new='rbd'): with mock.patch.object(driver, "suspend") as mock_suspend: driver.snapshot(self.context, self.instance_ref, recv_meta['id'], self.mock_update_task_state) self.assertTrue(mock_suspend.called) @mock.patch.object(host.Host, 'get_guest') @mock.patch.object(host.Host, 'has_min_version', return_value=True) def test_cold_snapshot_based_on_power_state( self, mock_version, mock_get_guest): """Tests that a cold snapshot is attempted because the guest power state is SHUTDOWN or PAUSED. """ drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI()) image = self._create_image() for p_state in (power_state.SHUTDOWN, power_state.PAUSED): mock_guest = mock.Mock(spec=libvirt_guest.Guest) mock_guest.get_power_state.return_value = p_state mock_guest._domain = mock.Mock() mock_get_guest.return_value = mock_guest # Make _prepare_domain_for_snapshot short-circuit and fail, we just # want to know that it was called with the correct live_snapshot # argument based on the power_state. 
with mock.patch.object( drvr, '_prepare_domain_for_snapshot', side_effect=test.TestingException) as mock_prep: self.assertRaises(test.TestingException, drvr.snapshot, self.context, self.instance_ref, image['id'], self.mock_update_task_state) mock_prep.assert_called_once_with( self.context, False, p_state, self.instance_ref) class LXCSnapshotTests(LibvirtSnapshotTests): """Repeat all of the Libvirt snapshot tests, but with LXC enabled""" def setUp(self): super(LXCSnapshotTests, self).setUp() self.flags(virt_type='lxc', group='libvirt') def test_raw_with_rbd_clone_failure_does_cold_snapshot(self): self.skipTest("managedSave is not supported with LXC") class LVMSnapshotTests(_BaseSnapshotTests): @mock.patch.object(fake_libvirt_utils, 'disk_type', new='lvm') @mock.patch.object(libvirt_driver.imagebackend.images, 'convert_image', side_effect=_fake_convert_image) @mock.patch.object(libvirt_driver.imagebackend.lvm, 'volume_info') def _test_lvm_snapshot(self, disk_format, mock_volume_info, mock_convert_image): self.flags(images_type='lvm', images_volume_group='nova-vg', group='libvirt') self._test_snapshot(disk_format=disk_format) mock_volume_info.assert_has_calls([mock.call('/dev/nova-vg/lv')]) mock_convert_image.assert_called_once_with( '/dev/nova-vg/lv', mock.ANY, 'raw', disk_format, run_as_root=True) def test_raw(self): self._test_lvm_snapshot('raw') def test_qcow2(self): self.flags(snapshot_image_format='qcow2', group='libvirt') self._test_lvm_snapshot('qcow2') class TestLibvirtMultiattach(test.NoDBTestCase): """Libvirt driver tests for volume multiattach support.""" def setUp(self): super(TestLibvirtMultiattach, self).setUp() self.useFixture(fakelibvirt.FakeLibvirtFixture()) @mock.patch('nova.virt.libvirt.host.Host.has_min_version', return_value=True) def test_init_host_supports_multiattach_new_enough_libvirt(self, min_ver): """Tests that the driver supports multiattach because libvirt>=3.10. """ drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) drvr._set_multiattach_support() self.assertTrue(drvr.capabilities['supports_multiattach']) min_ver.assert_called_once_with( lv_ver=libvirt_driver.MIN_LIBVIRT_MULTIATTACH) @mock.patch('nova.virt.libvirt.host.Host.has_min_version', side_effect=[False, False]) def test_init_host_supports_multiattach_old_enough_qemu(self, min_ver): """Tests that the driver supports multiattach because qemu<2.10. """ drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) drvr._set_multiattach_support() self.assertTrue(drvr.capabilities['supports_multiattach']) calls = [mock.call(lv_ver=libvirt_driver.MIN_LIBVIRT_MULTIATTACH), mock.call(hv_ver=(2, 10, 0))] min_ver.assert_has_calls(calls) # FIXME(mriedem): This test intermittently fails when run at the same time # as LibvirtConnTestCase, presumably because of shared global state on the # version check. # @mock.patch('nova.virt.libvirt.host.Host.has_min_version', # side_effect=[False, True]) # def test_init_host_supports_multiattach_no_support(self, # has_min_version): # """Tests that the driver does not support multiattach because # qemu>=2.10 and libvirt<3.10. 
# """ # drvr = libvirt_driver.LibvirtDriver(fake.FakeVirtAPI(), True) # drvr._set_multiattach_support() # self.assertFalse(drvr.capabilities['supports_multiattach']) # calls = [mock.call(lv_ver=libvirt_driver.MIN_LIBVIRT_MULTIATTACH), # mock.call(hv_ver=(2, 10, 0))] # has_min_version.assert_has_calls(calls) nova-17.0.1/nova/tests/unit/virt/libvirt/test_host.py0000666000175000017500000010400413250073126022676 0ustar zuulzuul00000000000000# Copyright 2010 OpenStack Foundation # Copyright 2012 University Of Minho # Copyright 2014 Red Hat, Inc # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import eventlet from eventlet import greenthread import mock from oslo_utils import uuidutils import six import testtools from nova.compute import vm_states from nova import exception from nova import objects from nova.objects import fields as obj_fields from nova import test from nova.tests.unit.virt.libvirt import fakelibvirt from nova.tests import uuidsentinel as uuids from nova.virt import event from nova.virt.libvirt import config as vconfig from nova.virt.libvirt import driver as libvirt_driver from nova.virt.libvirt import guest as libvirt_guest from nova.virt.libvirt import host class StringMatcher(object): def __eq__(self, other): return isinstance(other, six.string_types) class FakeVirtDomain(object): def __init__(self, id=-1, name=None): self._id = id self._name = name self._uuid = uuidutils.generate_uuid() def name(self): return self._name def ID(self): return self._id def UUIDString(self): return self._uuid class HostTestCase(test.NoDBTestCase): def setUp(self): super(HostTestCase, self).setUp() self.useFixture(fakelibvirt.FakeLibvirtFixture()) self.host = host.Host("qemu:///system") @mock.patch("nova.virt.libvirt.host.Host._init_events") def test_repeat_initialization(self, mock_init_events): for i in range(3): self.host.initialize() mock_init_events.assert_called_once_with() @mock.patch.object(fakelibvirt.virConnect, "registerCloseCallback") def test_close_callback(self, mock_close): self.close_callback = None def set_close_callback(cb, opaque): self.close_callback = cb mock_close.side_effect = set_close_callback # verify that the driver registers for the close callback self.host.get_connection() self.assertTrue(self.close_callback) @mock.patch.object(fakelibvirt.virConnect, "getLibVersion") def test_broken_connection(self, mock_ver): for (error, domain) in ( (fakelibvirt.VIR_ERR_SYSTEM_ERROR, fakelibvirt.VIR_FROM_REMOTE), (fakelibvirt.VIR_ERR_SYSTEM_ERROR, fakelibvirt.VIR_FROM_RPC), (fakelibvirt.VIR_ERR_INTERNAL_ERROR, fakelibvirt.VIR_FROM_RPC)): conn = self.host._connect("qemu:///system", False) mock_ver.side_effect = fakelibvirt.make_libvirtError( fakelibvirt.libvirtError, "Connection broken", error_code=error, error_domain=domain) self.assertFalse(self.host._test_connection(conn)) @mock.patch.object(host, 'LOG') def test_connect_auth_cb_exception(self, log_mock): creds = dict(authname='nova', password='verybadpass') self.assertRaises(exception.NovaException, self.host._connect_auth_cb, creds, False) 
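    # Identity accessors only: the Host domain-listing tests below rely on
    # just name(), ID() and UUIDString().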
self.assertEqual(0, len(log_mock.method_calls), 'LOG should not be used in _connect_auth_cb.') @mock.patch.object(greenthread, 'spawn_after') def test_event_dispatch(self, mock_spawn_after): # Validate that the libvirt self-pipe for forwarding # events between threads is working sanely def handler(event): got_events.append(event) hostimpl = host.Host("qemu:///system", lifecycle_event_handler=handler) got_events = [] hostimpl._init_events_pipe() event1 = event.LifecycleEvent( "cef19ce0-0ca2-11df-855d-b19fbce37686", event.EVENT_LIFECYCLE_STARTED) event2 = event.LifecycleEvent( "cef19ce0-0ca2-11df-855d-b19fbce37686", event.EVENT_LIFECYCLE_PAUSED) hostimpl._queue_event(event1) hostimpl._queue_event(event2) hostimpl._dispatch_events() want_events = [event1, event2] self.assertEqual(want_events, got_events) event3 = event.LifecycleEvent( "cef19ce0-0ca2-11df-855d-b19fbce37686", event.EVENT_LIFECYCLE_RESUMED) event4 = event.LifecycleEvent( "cef19ce0-0ca2-11df-855d-b19fbce37686", event.EVENT_LIFECYCLE_STOPPED) hostimpl._queue_event(event3) hostimpl._queue_event(event4) hostimpl._dispatch_events() want_events = [event1, event2, event3] self.assertEqual(want_events, got_events) # STOPPED is delayed so it's handled separately mock_spawn_after.assert_called_once_with( hostimpl._lifecycle_delay, hostimpl._event_emit, event4) def test_event_lifecycle(self): got_events = [] # Validate that libvirt events are correctly translated # to Nova events def spawn_after(seconds, func, *args, **kwargs): got_events.append(args[0]) return mock.Mock(spec=greenthread.GreenThread) greenthread.spawn_after = mock.Mock(side_effect=spawn_after) hostimpl = host.Host("qemu:///system", lifecycle_event_handler=lambda e: None) conn = hostimpl.get_connection() hostimpl._init_events_pipe() fake_dom_xml = """ cef19ce0-0ca2-11df-855d-b19fbce37686 """ dom = fakelibvirt.Domain(conn, fake_dom_xml, False) hostimpl._event_lifecycle_callback( conn, dom, fakelibvirt.VIR_DOMAIN_EVENT_STOPPED, 0, hostimpl) hostimpl._dispatch_events() self.assertEqual(len(got_events), 1) self.assertIsInstance(got_events[0], event.LifecycleEvent) self.assertEqual(got_events[0].uuid, "cef19ce0-0ca2-11df-855d-b19fbce37686") self.assertEqual(got_events[0].transition, event.EVENT_LIFECYCLE_STOPPED) def test_event_emit_delayed_call_delayed(self): ev = event.LifecycleEvent( "cef19ce0-0ca2-11df-855d-b19fbce37686", event.EVENT_LIFECYCLE_STOPPED) for uri in ("qemu:///system", "xen:///"): spawn_after_mock = mock.Mock() greenthread.spawn_after = spawn_after_mock hostimpl = host.Host(uri, lifecycle_event_handler=lambda e: None) hostimpl._event_emit_delayed(ev) spawn_after_mock.assert_called_once_with( 15, hostimpl._event_emit, ev) @mock.patch.object(greenthread, 'spawn_after') def test_event_emit_delayed_call_delayed_pending(self, spawn_after_mock): hostimpl = host.Host("xen:///", lifecycle_event_handler=lambda e: None) uuid = "cef19ce0-0ca2-11df-855d-b19fbce37686" gt_mock = mock.Mock() hostimpl._events_delayed[uuid] = gt_mock ev = event.LifecycleEvent( uuid, event.EVENT_LIFECYCLE_STOPPED) hostimpl._event_emit_delayed(ev) gt_mock.cancel.assert_called_once_with() self.assertTrue(spawn_after_mock.called) def test_event_delayed_cleanup(self): hostimpl = host.Host("xen:///", lifecycle_event_handler=lambda e: None) uuid = "cef19ce0-0ca2-11df-855d-b19fbce37686" ev = event.LifecycleEvent( uuid, event.EVENT_LIFECYCLE_STARTED) gt_mock = mock.Mock() hostimpl._events_delayed[uuid] = gt_mock hostimpl._event_emit_delayed(ev) gt_mock.cancel.assert_called_once_with() 
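        # The STARTED event for the same UUID must both cancel the pending
        # delayed STOPPED emission and drop its bookkeeping entry.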
self.assertNotIn(uuid, hostimpl._events_delayed.keys()) @mock.patch.object(fakelibvirt.virConnect, "domainEventRegisterAny") @mock.patch.object(host.Host, "_connect") def test_get_connection_serial(self, mock_conn, mock_event): def get_conn_currency(host): host.get_connection().getLibVersion() def connect_with_block(*a, **k): # enough to allow another connect to run eventlet.sleep(0) self.connect_calls += 1 return fakelibvirt.openAuth("qemu:///system", [[], lambda: 1, None], 0) def fake_register(*a, **k): self.register_calls += 1 self.connect_calls = 0 self.register_calls = 0 mock_conn.side_effect = connect_with_block mock_event.side_effect = fake_register # call serially get_conn_currency(self.host) get_conn_currency(self.host) self.assertEqual(self.connect_calls, 1) self.assertEqual(self.register_calls, 1) @mock.patch.object(fakelibvirt.virConnect, "domainEventRegisterAny") @mock.patch.object(host.Host, "_connect") def test_get_connection_concurrency(self, mock_conn, mock_event): def get_conn_currency(host): host.get_connection().getLibVersion() def connect_with_block(*a, **k): # enough to allow another connect to run eventlet.sleep(0) self.connect_calls += 1 return fakelibvirt.openAuth("qemu:///system", [[], lambda: 1, None], 0) def fake_register(*a, **k): self.register_calls += 1 self.connect_calls = 0 self.register_calls = 0 mock_conn.side_effect = connect_with_block mock_event.side_effect = fake_register # call concurrently thr1 = eventlet.spawn(get_conn_currency, self.host) thr2 = eventlet.spawn(get_conn_currency, self.host) # let threads run eventlet.sleep(0) thr1.wait() thr2.wait() self.assertEqual(self.connect_calls, 1) self.assertEqual(self.register_calls, 1) @mock.patch.object(host.Host, "_connect") def test_conn_event(self, mock_conn): handler = mock.MagicMock() h = host.Host("qemu:///system", conn_event_handler=handler) h.get_connection() h._dispatch_conn_event() handler.assert_called_once_with(True, None) @mock.patch.object(host.Host, "_connect") def test_conn_event_fail(self, mock_conn): handler = mock.MagicMock() h = host.Host("qemu:///system", conn_event_handler=handler) mock_conn.side_effect = fakelibvirt.libvirtError('test') self.assertRaises(exception.HypervisorUnavailable, h.get_connection) h._dispatch_conn_event() handler.assert_called_once_with(False, StringMatcher()) # Attempt to get a second connection, and assert that we don't add # queue a second callback. Note that we can't call # _dispatch_conn_event() and assert no additional call to the handler # here as above. This is because we haven't added an event, so it would # block. We mock the helper method which queues an event for callback # instead. 
with mock.patch.object(h, '_queue_conn_event_handler') as mock_queue: self.assertRaises(exception.HypervisorUnavailable, h.get_connection) mock_queue.assert_not_called() @mock.patch.object(host.Host, "_test_connection") @mock.patch.object(host.Host, "_connect") def test_conn_event_up_down(self, mock_conn, mock_test_conn): handler = mock.MagicMock() h = host.Host("qemu:///system", conn_event_handler=handler) mock_conn.side_effect = (mock.MagicMock(), fakelibvirt.libvirtError('test')) mock_test_conn.return_value = False h.get_connection() self.assertRaises(exception.HypervisorUnavailable, h.get_connection) h._dispatch_conn_event() h._dispatch_conn_event() handler.assert_has_calls([ mock.call(True, None), mock.call(False, StringMatcher()) ]) @mock.patch.object(host.Host, "_connect") def test_conn_event_thread(self, mock_conn): event = eventlet.event.Event() h = host.Host("qemu:///system", conn_event_handler=event.send) h.initialize() h.get_connection() event.wait() # This test will timeout if it fails. Success is implicit in a # timely return from wait(), indicating that the connection event # handler was called. @mock.patch.object(fakelibvirt.virConnect, "getLibVersion") @mock.patch.object(fakelibvirt.virConnect, "getVersion") @mock.patch.object(fakelibvirt.virConnect, "getType") def test_has_min_version(self, fake_hv_type, fake_hv_ver, fake_lv_ver): fake_lv_ver.return_value = 1002003 fake_hv_ver.return_value = 4005006 fake_hv_type.return_value = 'xyz' lv_ver = (1, 2, 3) hv_ver = (4, 5, 6) hv_type = 'xyz' self.assertTrue(self.host.has_min_version(lv_ver, hv_ver, hv_type)) self.assertFalse(self.host.has_min_version(lv_ver, hv_ver, 'abc')) self.assertFalse(self.host.has_min_version(lv_ver, (4, 5, 7), hv_type)) self.assertFalse(self.host.has_min_version((1, 3, 3), hv_ver, hv_type)) self.assertTrue(self.host.has_min_version(lv_ver, hv_ver, None)) self.assertTrue(self.host.has_min_version(lv_ver, None, hv_type)) self.assertTrue(self.host.has_min_version(None, hv_ver, hv_type)) @mock.patch.object(fakelibvirt.virConnect, "getLibVersion") @mock.patch.object(fakelibvirt.virConnect, "getVersion") @mock.patch.object(fakelibvirt.virConnect, "getType") def test_has_version(self, fake_hv_type, fake_hv_ver, fake_lv_ver): fake_lv_ver.return_value = 1002003 fake_hv_ver.return_value = 4005006 fake_hv_type.return_value = 'xyz' lv_ver = (1, 2, 3) hv_ver = (4, 5, 6) hv_type = 'xyz' self.assertTrue(self.host.has_version(lv_ver, hv_ver, hv_type)) for lv_ver_ in [(1, 2, 2), (1, 2, 4)]: self.assertFalse(self.host.has_version(lv_ver_, hv_ver, hv_type)) for hv_ver_ in [(4, 4, 6), (4, 6, 6)]: self.assertFalse(self.host.has_version(lv_ver, hv_ver_, hv_type)) self.assertFalse(self.host.has_version(lv_ver, hv_ver, 'abc')) self.assertTrue(self.host.has_version(lv_ver, hv_ver, None)) self.assertTrue(self.host.has_version(lv_ver, None, hv_type)) self.assertTrue(self.host.has_version(None, hv_ver, hv_type)) @mock.patch.object(fakelibvirt.virConnect, "lookupByUUIDString") def test_get_domain(self, fake_lookup): uuid = uuidutils.generate_uuid() dom = fakelibvirt.virDomain(self.host.get_connection(), "") instance = objects.Instance(id="124", uuid=uuid) fake_lookup.return_value = dom self.assertEqual(dom, self.host._get_domain(instance)) fake_lookup.assert_called_once_with(uuid) @mock.patch.object(fakelibvirt.virConnect, "lookupByUUIDString") def test_get_domain_raises(self, fake_lookup): instance = objects.Instance(uuid=uuids.instance, vm_state=vm_states.ACTIVE) fake_lookup.side_effect = fakelibvirt.make_libvirtError( 
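            # NOTE: for illustration only -- the semantics of the version
            # helpers exercised above; passing None skips that check:
            #
            #     self.host.has_min_version((1, 2, 3), None, None)
            #         -> True iff libvirt >= 1.2.3
            #     self.host.has_version((1, 2, 3), (4, 5, 6), 'xyz')
            #         -> True iff the versions and hypervisor type match
            #            exactly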
fakelibvirt.libvirtError, 'Domain not found: no domain with matching name', error_code=fakelibvirt.VIR_ERR_NO_DOMAIN, error_domain=fakelibvirt.VIR_FROM_QEMU) with testtools.ExpectedException(exception.InstanceNotFound): self.host._get_domain(instance) fake_lookup.assert_called_once_with(uuids.instance) @mock.patch.object(fakelibvirt.virConnect, "lookupByUUIDString") def test_get_guest(self, fake_lookup): uuid = uuidutils.generate_uuid() dom = fakelibvirt.virDomain(self.host.get_connection(), "") fake_lookup.return_value = dom instance = objects.Instance(id="124", uuid=uuid) guest = self.host.get_guest(instance) self.assertEqual(dom, guest._domain) self.assertIsInstance(guest, libvirt_guest.Guest) fake_lookup.assert_called_once_with(uuid) @mock.patch.object(fakelibvirt.Connection, "listAllDomains") def test_list_instance_domains(self, mock_list_all): vm0 = FakeVirtDomain(id=0, name="Domain-0") # Xen dom-0 vm1 = FakeVirtDomain(id=3, name="instance00000001") vm2 = FakeVirtDomain(id=17, name="instance00000002") vm3 = FakeVirtDomain(name="instance00000003") vm4 = FakeVirtDomain(name="instance00000004") def fake_list_all(flags): vms = [vm0] if flags & fakelibvirt.VIR_CONNECT_LIST_DOMAINS_ACTIVE: vms.extend([vm1, vm2]) if flags & fakelibvirt.VIR_CONNECT_LIST_DOMAINS_INACTIVE: vms.extend([vm3, vm4]) return vms mock_list_all.side_effect = fake_list_all doms = self.host.list_instance_domains() mock_list_all.assert_called_once_with( fakelibvirt.VIR_CONNECT_LIST_DOMAINS_ACTIVE) mock_list_all.reset_mock() self.assertEqual(len(doms), 2) self.assertEqual(doms[0].name(), vm1.name()) self.assertEqual(doms[1].name(), vm2.name()) doms = self.host.list_instance_domains(only_running=False) mock_list_all.assert_called_once_with( fakelibvirt.VIR_CONNECT_LIST_DOMAINS_ACTIVE | fakelibvirt.VIR_CONNECT_LIST_DOMAINS_INACTIVE) mock_list_all.reset_mock() self.assertEqual(len(doms), 4) self.assertEqual(doms[0].name(), vm1.name()) self.assertEqual(doms[1].name(), vm2.name()) self.assertEqual(doms[2].name(), vm3.name()) self.assertEqual(doms[3].name(), vm4.name()) doms = self.host.list_instance_domains(only_guests=False) mock_list_all.assert_called_once_with( fakelibvirt.VIR_CONNECT_LIST_DOMAINS_ACTIVE) mock_list_all.reset_mock() self.assertEqual(len(doms), 3) self.assertEqual(doms[0].name(), vm0.name()) self.assertEqual(doms[1].name(), vm1.name()) self.assertEqual(doms[2].name(), vm2.name()) @mock.patch.object(host.Host, "list_instance_domains") def test_list_guests(self, mock_list_domains): dom0 = mock.Mock(spec=fakelibvirt.virDomain) dom1 = mock.Mock(spec=fakelibvirt.virDomain) mock_list_domains.return_value = [ dom0, dom1] result = self.host.list_guests(True, False) mock_list_domains.assert_called_once_with( only_running=True, only_guests=False) self.assertEqual(dom0, result[0]._domain) self.assertEqual(dom1, result[1]._domain) def test_cpu_features_bug_1217630(self): self.host.get_connection() # Test old version of libvirt, it shouldn't see the `aes' feature with mock.patch('nova.virt.libvirt.host.libvirt') as mock_libvirt: del mock_libvirt.VIR_CONNECT_BASELINE_CPU_EXPAND_FEATURES caps = self.host.get_capabilities() self.assertNotIn('aes', [x.name for x in caps.host.cpu.features]) # Cleanup the capabilities cache firstly self.host._caps = None # Test new version of libvirt, should find the `aes' feature with mock.patch('nova.virt.libvirt.host.libvirt') as mock_libvirt: mock_libvirt['VIR_CONNECT_BASELINE_CPU_EXPAND_FEATURES'] = 1 caps = self.host.get_capabilities() self.assertIn('aes', [x.name for x in 
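        # NOTE: a sketch of the flag arithmetic asserted on by the
        # list_instance_domains tests above (constants from fakelibvirt):
        #
        #     VIR_CONNECT_LIST_DOMAINS_ACTIVE             -> 1
        #     VIR_CONNECT_LIST_DOMAINS_INACTIVE           -> 2
        #     ACTIVE | INACTIVE                           -> 3 (both lists)
        #
        # only_running=False adds the INACTIVE bit; only_guests=False keeps
        # the Xen dom-0 ("Domain-0", id 0) in the result.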
caps.host.cpu.features]) def test_cpu_features_are_not_duplicated(self): self.host.get_connection() # Test old version of libvirt. Should return single 'hypervisor' with mock.patch('nova.virt.libvirt.host.libvirt') as mock_libvirt: del mock_libvirt.VIR_CONNECT_BASELINE_CPU_EXPAND_FEATURES caps = self.host.get_capabilities() cnt = [x.name for x in caps.host.cpu.features].count('xtpr') self.assertEqual(1, cnt) # Cleanup the capabilities cache firstly self.host._caps = None # Test new version of libvirt. Should still return single 'hypervisor' with mock.patch('nova.virt.libvirt.host.libvirt') as mock_libvirt: mock_libvirt['VIR_CONNECT_BASELINE_CPU_EXPAND_FEATURES'] = 1 caps = self.host.get_capabilities() cnt = [x.name for x in caps.host.cpu.features].count('xtpr') self.assertEqual(1, cnt) def test_baseline_cpu_not_supported(self): # Handle just the NO_SUPPORT error not_supported_exc = fakelibvirt.make_libvirtError( fakelibvirt.libvirtError, 'this function is not supported by the connection driver:' ' virConnectBaselineCPU', error_code=fakelibvirt.VIR_ERR_NO_SUPPORT) with mock.patch.object(fakelibvirt.virConnect, 'baselineCPU', side_effect=not_supported_exc): caps = self.host.get_capabilities() self.assertEqual(vconfig.LibvirtConfigCaps, type(caps)) self.assertNotIn('aes', [x.name for x in caps.host.cpu.features]) # Clear cached result so we can test again... self.host._caps = None # Other errors should not be caught other_exc = fakelibvirt.make_libvirtError( fakelibvirt.libvirtError, 'other exc', error_code=fakelibvirt.VIR_ERR_NO_DOMAIN) with mock.patch.object(fakelibvirt.virConnect, 'baselineCPU', side_effect=other_exc): self.assertRaises(fakelibvirt.libvirtError, self.host.get_capabilities) def test_get_capabilities_no_host_cpu_model(self): """Tests that cpu features are not retrieved when the host cpu model is not in the capabilities. 
""" fake_caps_xml = ''' cef19ce0-0ca2-11df-855d-b19fbce37686 x86_64 Intel ''' with mock.patch.object(fakelibvirt.virConnect, 'getCapabilities', return_value=fake_caps_xml): caps = self.host.get_capabilities() self.assertEqual(vconfig.LibvirtConfigCaps, type(caps)) self.assertIsNone(caps.host.cpu.model) self.assertEqual(0, len(caps.host.cpu.features)) @mock.patch.object(fakelibvirt.virConnect, "getHostname") def test_get_hostname_caching(self, mock_hostname): mock_hostname.return_value = "foo" self.assertEqual('foo', self.host.get_hostname()) mock_hostname.assert_called_with() mock_hostname.reset_mock() mock_hostname.return_value = "bar" self.assertEqual('foo', self.host.get_hostname()) mock_hostname.assert_called_with() @mock.patch.object(fakelibvirt.virConnect, "getType") def test_get_driver_type(self, mock_type): mock_type.return_value = "qemu" self.assertEqual("qemu", self.host.get_driver_type()) mock_type.assert_called_once_with() @mock.patch.object(fakelibvirt.virConnect, "getVersion") def test_get_version(self, mock_version): mock_version.return_value = 1005001 self.assertEqual(1005001, self.host.get_version()) mock_version.assert_called_once_with() @mock.patch.object(fakelibvirt.virConnect, "secretLookupByUsage") def test_find_secret(self, mock_sec): """finding secrets with various usage_type.""" expected = [ mock.call(fakelibvirt.VIR_SECRET_USAGE_TYPE_CEPH, 'rbdvol'), mock.call(fakelibvirt.VIR_SECRET_USAGE_TYPE_CEPH, 'cephvol'), mock.call(fakelibvirt.VIR_SECRET_USAGE_TYPE_ISCSI, 'iscsivol'), mock.call(fakelibvirt.VIR_SECRET_USAGE_TYPE_VOLUME, 'vol')] self.host.find_secret('rbd', 'rbdvol') self.host.find_secret('ceph', 'cephvol') self.host.find_secret('iscsi', 'iscsivol') self.host.find_secret('volume', 'vol') self.assertEqual(expected, mock_sec.mock_calls) self.assertRaises(exception.NovaException, self.host.find_secret, "foo", "foovol") mock_sec.side_effect = fakelibvirt.libvirtError("") mock_sec.side_effect.err = (66, ) self.assertIsNone(self.host.find_secret('rbd', 'rbdvol')) @mock.patch.object(fakelibvirt.virConnect, "secretDefineXML") def test_create_secret(self, mock_sec): """creating secrets with various usage_type.""" self.host.create_secret('rbd', 'rbdvol') self.host.create_secret('ceph', 'cephvol') self.host.create_secret('iscsi', 'iscsivol') self.host.create_secret('volume', 'vol') self.assertRaises(exception.NovaException, self.host.create_secret, "foo", "foovol") secret = mock.MagicMock() mock_sec.return_value = secret self.host.create_secret('iscsi', 'iscsivol', password="foo") secret.setValue.assert_called_once_with("foo") @mock.patch('nova.virt.libvirt.host.Host.find_secret') def test_delete_secret(self, mock_find_secret): """deleting secret.""" secret = mock.MagicMock() mock_find_secret.return_value = secret expected = [mock.call('rbd', 'rbdvol'), mock.call().undefine()] self.host.delete_secret('rbd', 'rbdvol') self.assertEqual(expected, mock_find_secret.mock_calls) mock_find_secret.return_value = None self.host.delete_secret("rbd", "rbdvol") def test_get_cpu_count(self): with mock.patch.object(host.Host, "get_connection") as mock_conn: mock_conn().getInfo.return_value = ['zero', 'one', 'two'] self.assertEqual('two', self.host.get_cpu_count()) def test_get_memory_total(self): with mock.patch.object(host.Host, "get_connection") as mock_conn: mock_conn().getInfo.return_value = ['zero', 'one', 'two'] self.assertEqual('one', self.host.get_memory_mb_total()) def test_get_memory_used(self): m = mock.mock_open(read_data=""" MemTotal: 16194180 kB MemFree: 233092 kB 
MemAvailable: 8892356 kB Buffers: 567708 kB Cached: 8362404 kB SwapCached: 0 kB Active: 8381604 kB """) with test.nested( mock.patch.object(six.moves.builtins, "open", m, create=True), mock.patch.object(host.Host, "get_connection"), mock.patch('sys.platform', 'linux2'), ) as (mock_file, mock_conn, mock_platform): mock_conn().getInfo.return_value = [ obj_fields.Architecture.X86_64, 15814, 8, 1208, 1, 1, 4, 2] self.assertEqual(6866, self.host.get_memory_mb_used()) def test_get_memory_used_xen(self): self.flags(virt_type='xen', group='libvirt') class DiagFakeDomain(object): def __init__(self, id, memmb): self.id = id self.memmb = memmb def info(self): return [0, 0, self.memmb * 1024] def ID(self): return self.id def name(self): return "instance000001" def UUIDString(self): return uuids.fake m = mock.mock_open(read_data=""" MemTotal: 16194180 kB MemFree: 233092 kB MemAvailable: 8892356 kB Buffers: 567708 kB Cached: 8362404 kB SwapCached: 0 kB Active: 8381604 kB """) with test.nested( mock.patch.object(six.moves.builtins, "open", m, create=True), mock.patch.object(host.Host, "list_guests"), mock.patch.object(libvirt_driver.LibvirtDriver, "_conn"), mock.patch('sys.platform', 'linux2'), ) as (mock_file, mock_list, mock_conn, mock_platform): mock_list.return_value = [ libvirt_guest.Guest(DiagFakeDomain(0, 15814)), libvirt_guest.Guest(DiagFakeDomain(1, 750)), libvirt_guest.Guest(DiagFakeDomain(2, 1042))] mock_conn.getInfo.return_value = [ obj_fields.Architecture.X86_64, 15814, 8, 1208, 1, 1, 4, 2] self.assertEqual(8657, self.host.get_memory_mb_used()) mock_list.assert_called_with(only_guests=False) def test_get_cpu_stats(self): stats = self.host.get_cpu_stats() self.assertEqual( {'kernel': 5664160000000, 'idle': 1592705190000000, 'frequency': 800, 'user': 26728850000000, 'iowait': 6121490000000}, stats) @mock.patch.object(fakelibvirt.virConnect, "defineXML") def test_write_instance_config(self, mock_defineXML): fake_dom_xml = """ cef19ce0-0ca2-11df-855d-b19fbce37686 """ conn = self.host.get_connection() dom = fakelibvirt.Domain(conn, fake_dom_xml, False) mock_defineXML.return_value = dom guest = self.host.write_instance_config(fake_dom_xml) mock_defineXML.assert_called_once_with(fake_dom_xml) self.assertIsInstance(guest, libvirt_guest.Guest) def test_write_instance_config_unicode(self): fake_dom_xml = u""" cef19ce0-0ca2-11df-855d-b19fbce37686 """ def emulate_defineXML(xml): conn = self.host.get_connection() # Emulate the decoding behavior of defineXML in Python2 if six.PY2: xml = xml.decode("utf-8") dom = fakelibvirt.Domain(conn, xml, False) return dom with mock.patch.object(fakelibvirt.virConnect, "defineXML" ) as mock_defineXML: mock_defineXML.side_effect = emulate_defineXML guest = self.host.write_instance_config(fake_dom_xml) self.assertIsInstance(guest, libvirt_guest.Guest) @mock.patch.object(fakelibvirt.virConnect, "nodeDeviceLookupByName") def test_device_lookup_by_name(self, mock_nodeDeviceLookupByName): self.host.device_lookup_by_name("foo") mock_nodeDeviceLookupByName.assert_called_once_with("foo") @mock.patch.object(fakelibvirt.virConnect, "listDevices") def test_list_pci_devices(self, mock_listDevices): self.host.list_pci_devices(8) mock_listDevices.assert_called_once_with('pci', 8) def test_list_mdev_capable_devices(self): with mock.patch.object(self.host, "_list_devices") as mock_listDevices: self.host.list_mdev_capable_devices(8) mock_listDevices.assert_called_once_with('mdev_types', flags=8) def test_list_mediated_devices(self): with mock.patch.object(self.host, "_list_devices") as 
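        # NOTE: the arithmetic behind the two memory-usage assertions above,
        # worked through from the fixture values (a sketch for readers):
        #
        #   qemu: used = MemTotal - (MemFree + Buffers + Cached)
        #             = (16194180 - (233092 + 567708 + 8362404)) kB
        #             = 7030976 kB // 1024 = 6866 MiB
        #
        #   xen:  dom0 is charged its allocation minus the host's available
        #         memory, the other guests their full allocations:
        #             (15814 * 1024 - 9163204) kB + (750 + 1042) * 1024 kB
        #             = 8865340 kB // 1024 = 8657 MiB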
mock_listDevices: self.host.list_mediated_devices(8) mock_listDevices.assert_called_once_with('mdev', flags=8) @mock.patch.object(fakelibvirt.virConnect, "listDevices") def test_list_devices(self, mock_listDevices): self.host._list_devices('mdev', 8) mock_listDevices.assert_called_once_with('mdev', 8) @mock.patch.object(fakelibvirt.virConnect, "listDevices") def test_list_devices_unsupported(self, mock_listDevices): not_supported_exc = fakelibvirt.make_libvirtError( fakelibvirt.libvirtError, 'this function is not supported by the connection driver:' ' listDevices', error_code=fakelibvirt.VIR_ERR_NO_SUPPORT) mock_listDevices.side_effect = not_supported_exc self.assertEqual([], self.host._list_devices('mdev', 8)) @mock.patch.object(fakelibvirt.virConnect, "listDevices") def test_list_devices_other_exc(self, mock_listDevices): mock_listDevices.side_effect = fakelibvirt.libvirtError('test') self.assertRaises(fakelibvirt.libvirtError, self.host._list_devices, 'mdev', 8) @mock.patch.object(fakelibvirt.virConnect, "compareCPU") def test_compare_cpu(self, mock_compareCPU): self.host.compare_cpu("cpuxml") mock_compareCPU.assert_called_once_with("cpuxml", 0) def test_is_cpu_control_policy_capable_ok(self): m = mock.mock_open( read_data="""cg /cgroup/cpu,cpuacct cg opt1,cpu,opt3 0 0 cg /cgroup/memory cg opt1,opt2 0 0 """) with mock.patch( "six.moves.builtins.open", m, create=True): self.assertTrue(self.host.is_cpu_control_policy_capable()) def test_is_cpu_control_policy_capable_ko(self): m = mock.mock_open( read_data="""cg /cgroup/cpu,cpuacct cg opt1,opt2,opt3 0 0 cg /cgroup/memory cg opt1,opt2 0 0 """) with mock.patch( "six.moves.builtins.open", m, create=True): self.assertFalse(self.host.is_cpu_control_policy_capable()) @mock.patch('six.moves.builtins.open', side_effect=IOError) def test_is_cpu_control_policy_capable_ioerror(self, mock_open): self.assertFalse(self.host.is_cpu_control_policy_capable()) nova-17.0.1/nova/tests/unit/virt/libvirt/fakelibvirt.py0000666000175000017500000013751113250073126023175 0ustar zuulzuul00000000000000# Copyright 2010 OpenStack Foundation # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the # License for the specific language governing permissions and limitations # under the License. import sys import time import fixtures from lxml import etree from nova.objects import fields as obj_fields from nova.tests import uuidsentinel as uuids from nova.virt.libvirt import config as vconfig # Allow passing None to the various connect methods # (i.e. 
allow the client to rely on default URLs) allow_default_uri_connection = True # Has libvirt connection been used at least once connection_used = False def _reset(): global allow_default_uri_connection allow_default_uri_connection = True # virDomainState VIR_DOMAIN_NOSTATE = 0 VIR_DOMAIN_RUNNING = 1 VIR_DOMAIN_BLOCKED = 2 VIR_DOMAIN_PAUSED = 3 VIR_DOMAIN_SHUTDOWN = 4 VIR_DOMAIN_SHUTOFF = 5 VIR_DOMAIN_CRASHED = 6 # NOTE(mriedem): These values come from include/libvirt/libvirt-domain.h VIR_DOMAIN_XML_SECURE = 1 VIR_DOMAIN_XML_INACTIVE = 2 VIR_DOMAIN_XML_UPDATE_CPU = 4 VIR_DOMAIN_XML_MIGRATABLE = 8 VIR_DOMAIN_BLOCK_REBASE_SHALLOW = 1 VIR_DOMAIN_BLOCK_REBASE_REUSE_EXT = 2 VIR_DOMAIN_BLOCK_REBASE_COPY = 8 VIR_DOMAIN_BLOCK_REBASE_COPY_DEV = 32 VIR_DOMAIN_BLOCK_JOB_ABORT_ASYNC = 1 VIR_DOMAIN_BLOCK_JOB_ABORT_PIVOT = 2 VIR_DOMAIN_EVENT_ID_LIFECYCLE = 0 VIR_DOMAIN_EVENT_DEFINED = 0 VIR_DOMAIN_EVENT_UNDEFINED = 1 VIR_DOMAIN_EVENT_STARTED = 2 VIR_DOMAIN_EVENT_SUSPENDED = 3 VIR_DOMAIN_EVENT_RESUMED = 4 VIR_DOMAIN_EVENT_STOPPED = 5 VIR_DOMAIN_EVENT_SHUTDOWN = 6 VIR_DOMAIN_EVENT_PMSUSPENDED = 7 VIR_DOMAIN_UNDEFINE_MANAGED_SAVE = 1 VIR_DOMAIN_UNDEFINE_NVRAM = 4 VIR_DOMAIN_AFFECT_CURRENT = 0 VIR_DOMAIN_AFFECT_LIVE = 1 VIR_DOMAIN_AFFECT_CONFIG = 2 VIR_CPU_COMPARE_ERROR = -1 VIR_CPU_COMPARE_INCOMPATIBLE = 0 VIR_CPU_COMPARE_IDENTICAL = 1 VIR_CPU_COMPARE_SUPERSET = 2 VIR_CRED_USERNAME = 1 VIR_CRED_AUTHNAME = 2 VIR_CRED_LANGUAGE = 3 VIR_CRED_CNONCE = 4 VIR_CRED_PASSPHRASE = 5 VIR_CRED_ECHOPROMPT = 6 VIR_CRED_NOECHOPROMPT = 7 VIR_CRED_REALM = 8 VIR_CRED_EXTERNAL = 9 VIR_MIGRATE_LIVE = 1 VIR_MIGRATE_PEER2PEER = 2 VIR_MIGRATE_TUNNELLED = 4 VIR_MIGRATE_PERSIST_DEST = 8 VIR_MIGRATE_UNDEFINE_SOURCE = 16 VIR_MIGRATE_NON_SHARED_INC = 128 VIR_MIGRATE_AUTO_CONVERGE = 8192 VIR_MIGRATE_POSTCOPY = 32768 VIR_NODE_CPU_STATS_ALL_CPUS = -1 VIR_DOMAIN_START_PAUSED = 1 # libvirtError enums # (Intentionally different from what's in libvirt. 
We do this to check, # that consumers of the library are using the symbolic names rather than # hardcoding the numerical values) VIR_FROM_QEMU = 100 VIR_FROM_DOMAIN = 200 VIR_FROM_NWFILTER = 330 VIR_FROM_REMOTE = 340 VIR_FROM_RPC = 345 VIR_FROM_NODEDEV = 666 VIR_ERR_INVALID_ARG = 8 VIR_ERR_NO_SUPPORT = 3 VIR_ERR_XML_DETAIL = 350 VIR_ERR_NO_DOMAIN = 420 VIR_ERR_OPERATION_FAILED = 510 VIR_ERR_OPERATION_INVALID = 55 VIR_ERR_OPERATION_TIMEOUT = 68 VIR_ERR_NO_NWFILTER = 620 VIR_ERR_SYSTEM_ERROR = 900 VIR_ERR_INTERNAL_ERROR = 950 VIR_ERR_CONFIG_UNSUPPORTED = 951 VIR_ERR_NO_NODE_DEVICE = 667 VIR_ERR_NO_SECRET = 66 VIR_ERR_AGENT_UNRESPONSIVE = 86 VIR_ERR_ARGUMENT_UNSUPPORTED = 74 # Readonly VIR_CONNECT_RO = 1 # virConnectBaselineCPU flags VIR_CONNECT_BASELINE_CPU_EXPAND_FEATURES = 1 # snapshotCreateXML flags VIR_DOMAIN_SNAPSHOT_CREATE_NO_METADATA = 4 VIR_DOMAIN_SNAPSHOT_CREATE_DISK_ONLY = 16 VIR_DOMAIN_SNAPSHOT_CREATE_REUSE_EXT = 32 VIR_DOMAIN_SNAPSHOT_CREATE_QUIESCE = 64 # blockCommit flags VIR_DOMAIN_BLOCK_COMMIT_RELATIVE = 4 # blockRebase flags VIR_DOMAIN_BLOCK_REBASE_RELATIVE = 8 VIR_CONNECT_LIST_DOMAINS_ACTIVE = 1 VIR_CONNECT_LIST_DOMAINS_INACTIVE = 2 # secret type VIR_SECRET_USAGE_TYPE_NONE = 0 VIR_SECRET_USAGE_TYPE_VOLUME = 1 VIR_SECRET_USAGE_TYPE_CEPH = 2 VIR_SECRET_USAGE_TYPE_ISCSI = 3 # Libvirt version to match MIN_LIBVIRT_VERSION in driver.py FAKE_LIBVIRT_VERSION = 1002009 # Libvirt version to match MIN_QEMU_VERSION in driver.py FAKE_QEMU_VERSION = 2001000 PF_CAP_TYPE = 'virt_functions' VF_CAP_TYPE = 'phys_function' PF_PROD_NAME = 'Ethernet Controller 10-Gigabit X540-AT2' VF_PROD_NAME = 'X540 Ethernet Controller Virtual Function' PF_DRIVER_NAME = 'ixgbe' VF_DRIVER_NAME = 'ixgbevf' VF_SLOT = '10' PF_SLOT = '00' class FakePciDevice(object): pci_dev_template = """ pci_0000_81_%(slot)02x_%(dev)d /sys/devices/pci0000:80/0000:80:01.0/0000:81:%(slot)02x.%(dev)d pci_0000_80_01_0 %(driver)s 0 129 %(slot)d %(dev)d %(prod_name)s Intel Corporation %(functions)s
""" def __init__(self, dev_type, vf_ratio, group, dev, product_id, numa_node): """Populate pci devices :param dev_type: (string) Indicates the type of the device (PF, VF) :param vf_ratio: (int) Ratio of Virtual Functions on Physical :param group: (int) iommu group id :param dev: (int) function number of the device :param product_id: (int) Device product ID :param numa_node: (int) NUMA node of the device """ addr_templ = ("
") self.pci_dev = None if dev_type == 'PF': pf_caps = [addr_templ % {'dev': x, 'slot': VF_SLOT} for x in range(dev * vf_ratio, (dev + 1) * vf_ratio)] slot = int(str(PF_SLOT), 16) self.pci_dev = self.pci_dev_template % {'dev': dev, 'prod': product_id, 'group_id': group, 'functions': '\n'.join(pf_caps), 'slot': slot, 'cap_type': PF_CAP_TYPE, 'prod_name': PF_PROD_NAME, 'driver': PF_DRIVER_NAME, 'numa_node': numa_node} elif dev_type == 'VF': vf_caps = [addr_templ % {'dev': int(dev / vf_ratio), 'slot': PF_SLOT}] slot = int(str(VF_SLOT), 16) self.pci_dev = self.pci_dev_template % {'dev': dev, 'prod': product_id, 'group_id': group, 'functions': '\n'.join(vf_caps), 'slot': slot, 'cap_type': VF_CAP_TYPE, 'prod_name': VF_PROD_NAME, 'driver': VF_DRIVER_NAME, 'numa_node': numa_node} def XMLDesc(self, flags): return self.pci_dev class HostPciSRIOVDevicesInfo(object): """Represent a pool of host SR-IOV devices.""" def __init__(self, vf_product_id=1515, pf_product_id=1528, num_pfs=2, num_vfs=8, group=47, numa_node=None, total_numa_nodes=2): """Create a new HostPciSRIOVDevicesInfo object. :param vf_product_id: (int) Product ID of the Virtual Functions :param pf_product_id=1528: (int) Product ID of the Physical Functions :param num_pfs: (int) The number of the Physical Functions :param num_vfs: (int) The number of the Virtual Functions :param group: (int) Initial group id :param numa_node: (int) NUMA node of the device, if set all of the device will be created in the provided node :param total_numa_nodes: (int) total number of NUMA nodes """ def _calc_numa_node(dev): return dev % total_numa_nodes if numa_node is None else numa_node self.devices = {} if num_vfs and not num_pfs: raise ValueError('Cannot create VFs without PFs') vf_ratio = num_vfs // num_pfs if num_pfs else 0 # Generate PFs for dev in range(num_pfs): dev_group = group + dev + 1 pci_dev_name = 'pci_0000_81_%(slot)s_%(dev)d' % {'slot': PF_SLOT, 'dev': dev} self.devices[pci_dev_name] = FakePciDevice('PF', vf_ratio, dev_group, dev, pf_product_id, _calc_numa_node(dev)) # Generate VFs for dev in range(num_vfs): dev_group = group + dev + 1 pci_dev_name = 'pci_0000_81_%(slot)s_%(dev)d' % {'slot': VF_SLOT, 'dev': dev} self.devices[pci_dev_name] = FakePciDevice('VF', vf_ratio, dev_group, dev, vf_product_id, _calc_numa_node(dev)) def get_all_devices(self): return self.devices.keys() def get_device_by_name(self, device_name): pci_dev = self.devices.get(device_name) return pci_dev class HostInfo(object): def __init__(self, arch=obj_fields.Architecture.X86_64, kB_mem=4096, cpus=2, cpu_mhz=800, cpu_nodes=1, cpu_sockets=1, cpu_cores=2, cpu_threads=1, cpu_model="Penryn", cpu_vendor="Intel", numa_topology='', cpu_disabled=None): """Create a new Host Info object :param arch: (string) indicating the CPU arch (eg 'i686' or whatever else uname -m might return) :param kB_mem: (int) memory size in KBytes :param cpus: (int) the number of active CPUs :param cpu_mhz: (int) expected CPU frequency :param cpu_nodes: (int) the number of NUMA cell, 1 for unusual NUMA topologies or uniform :param cpu_sockets: (int) number of CPU sockets per node if nodes > 1, total number of CPU sockets otherwise :param cpu_cores: (int) number of cores per socket :param cpu_threads: (int) number of threads per core :param cpu_model: CPU model :param cpu_vendor: CPU vendor :param numa_topology: Numa topology :param cpu_disabled: List of disabled cpus """ self.arch = arch self.kB_mem = kB_mem self.cpus = cpus self.cpu_mhz = cpu_mhz self.cpu_nodes = cpu_nodes self.cpu_cores = cpu_cores 
        self.cpu_threads = cpu_threads
        self.cpu_sockets = cpu_sockets
        self.cpu_model = cpu_model
        self.cpu_vendor = cpu_vendor
        self.numa_topology = numa_topology
        self.disabled_cpus_list = cpu_disabled or []


class NUMAHostInfo(HostInfo):
    """A NUMA-by-default variant of HostInfo."""

    def __init__(self, **kwargs):
        super(NUMAHostInfo, self).__init__(**kwargs)
        if not self.numa_topology:
            topology = NUMATopology(self.cpu_nodes, self.cpu_sockets,
                                    self.cpu_cores, self.cpu_threads,
                                    self.kB_mem)
            self.numa_topology = topology

            # update number of active cpus
            cpu_count = len(topology.cells) * len(topology.cells[0].cpus)
            self.cpus = cpu_count - len(self.disabled_cpus_list)


class NUMATopology(vconfig.LibvirtConfigCapsNUMATopology):
    """A batteries-included variant of LibvirtConfigCapsNUMATopology.

    Provides sane defaults for LibvirtConfigCapsNUMATopology that can be used
    in tests as is, or overridden where necessary.
    """

    def __init__(self, cpu_nodes=4, cpu_sockets=1, cpu_cores=1, cpu_threads=2,
                 kb_mem=1048576, mempages=None, **kwargs):
        super(NUMATopology, self).__init__(**kwargs)

        cpu_count = 0
        for cell_count in range(cpu_nodes):
            cell = vconfig.LibvirtConfigCapsNUMACell()
            cell.id = cell_count
            cell.memory = kb_mem // cpu_nodes
            for socket_count in range(cpu_sockets):
                for cpu_num in range(cpu_cores * cpu_threads):
                    cpu = vconfig.LibvirtConfigCapsNUMACPU()
                    cpu.id = cpu_count
                    cpu.socket_id = cell_count
                    cpu.core_id = cpu_num // cpu_threads
                    cpu.siblings = set([cpu_threads *
                                        (cpu_count // cpu_threads) + thread
                                        for thread in range(cpu_threads)])
                    cell.cpus.append(cpu)
                    cpu_count += 1

            # If no mempages are provided, use only the default 4K pages
            if mempages:
                cell.mempages = mempages[cell_count]
            else:
                cell.mempages = create_mempages([(4, cell.memory // 4)])

            self.cells.append(cell)


def create_mempages(mappings):
    """Generate a list of LibvirtConfigCapsNUMAPages objects.

    :param mappings: a list of (page_size, page_qty) tuples, each mapping
        a page size to the quantity of pages of that size
    :returns: [LibvirtConfigCapsNUMAPages, ...]
    """
    mempages = []

    for page_size, page_qty in mappings:
        mempage = vconfig.LibvirtConfigCapsNUMAPages()
        mempage.size = page_size
        mempage.total = page_qty
        mempages.append(mempage)

    return mempages


VIR_DOMAIN_JOB_NONE = 0
VIR_DOMAIN_JOB_BOUNDED = 1
VIR_DOMAIN_JOB_UNBOUNDED = 2
VIR_DOMAIN_JOB_COMPLETED = 3
VIR_DOMAIN_JOB_FAILED = 4
VIR_DOMAIN_JOB_CANCELLED = 5


def _parse_disk_info(element):
    disk_info = {}
    disk_info['type'] = element.get('type', 'file')
    disk_info['device'] = element.get('device', 'disk')

    driver = element.find('./driver')
    if driver is not None:
        disk_info['driver_name'] = driver.get('name')
        disk_info['driver_type'] = driver.get('type')

    source = element.find('./source')
    if source is not None:
        disk_info['source'] = source.get('file')
        if not disk_info['source']:
            disk_info['source'] = source.get('dev')

        if not disk_info['source']:
            disk_info['source'] = source.get('path')

    target = element.find('./target')
    if target is not None:
        disk_info['target_dev'] = target.get('dev')
        disk_info['target_bus'] = target.get('bus')

    return disk_info


def disable_event_thread(self):
    """Disable nova libvirt driver event thread.

    The Nova libvirt driver includes a native thread which monitors
    the libvirt event channel. In a testing environment this becomes
    problematic because it means we've got a floating thread calling
    sleep(1) over the life of the unit test. Seems harmless? It's not,
    because we sometimes want to test things like retry loops that
    should have specific sleep patterns. An unlucky firing of the
    libvirt thread will cause a test failure.
""" # because we are patching a method in a class MonkeyPatch doesn't # auto import correctly. Import explicitly otherwise the patching # may silently fail. import nova.virt.libvirt.host # noqa def evloop(*args, **kwargs): pass self.useFixture(fixtures.MonkeyPatch( 'nova.virt.libvirt.host.Host._init_events', evloop)) class libvirtError(Exception): """This class was copied and slightly modified from `libvirt-python:libvirt-override.py`. Since a test environment will use the real `libvirt-python` version of `libvirtError` if it's installed and not this fake, we need to maintain strict compatibility with the original class, including `__init__` args and instance-attributes. To create a libvirtError instance you should: # Create an unsupported error exception exc = libvirtError('my message') exc.err = (libvirt.VIR_ERR_NO_SUPPORT,) self.err is a tuple of form: (error_code, error_domain, error_message, error_level, str1, str2, str3, int1, int2) Alternatively, you can use the `make_libvirtError` convenience function to allow you to specify these attributes in one shot. """ def __init__(self, defmsg, conn=None, dom=None, net=None, pool=None, vol=None): Exception.__init__(self, defmsg) self.err = None def get_error_code(self): if self.err is None: return None return self.err[0] def get_error_domain(self): if self.err is None: return None return self.err[1] def get_error_message(self): if self.err is None: return None return self.err[2] def get_error_level(self): if self.err is None: return None return self.err[3] def get_str1(self): if self.err is None: return None return self.err[4] def get_str2(self): if self.err is None: return None return self.err[5] def get_str3(self): if self.err is None: return None return self.err[6] def get_int1(self): if self.err is None: return None return self.err[7] def get_int2(self): if self.err is None: return None return self.err[8] class NWFilter(object): def __init__(self, connection, xml): self._connection = connection self._xml = xml self._parse_xml(xml) def _parse_xml(self, xml): tree = etree.fromstring(xml) root = tree.find('.') self._name = root.get('name') def undefine(self): self._connection._remove_filter(self) class NodeDevice(object): def __init__(self, connection, xml=None): self._connection = connection self._xml = xml if xml is not None: self._parse_xml(xml) def _parse_xml(self, xml): tree = etree.fromstring(xml) root = tree.find('.') self._name = root.get('name') def attach(self): pass def dettach(self): pass def reset(self): pass class Domain(object): def __init__(self, connection, xml, running=False, transient=False): self._connection = connection if running: connection._mark_running(self) self._state = running and VIR_DOMAIN_RUNNING or VIR_DOMAIN_SHUTOFF self._transient = transient self._def = self._parse_definition(xml) self._has_saved_state = False self._snapshots = {} self._id = self._connection._id_counter def _parse_definition(self, xml): try: tree = etree.fromstring(xml) except etree.ParseError: raise make_libvirtError( libvirtError, "Invalid XML.", error_code=VIR_ERR_XML_DETAIL, error_domain=VIR_FROM_DOMAIN) definition = {} name = tree.find('./name') if name is not None: definition['name'] = name.text uuid_elem = tree.find('./uuid') if uuid_elem is not None: definition['uuid'] = uuid_elem.text else: definition['uuid'] = uuids.fake vcpu = tree.find('./vcpu') if vcpu is not None: definition['vcpu'] = int(vcpu.text) memory = tree.find('./memory') if memory is not None: definition['memory'] = int(memory.text) os = {} os_type = 
tree.find('./os/type') if os_type is not None: os['type'] = os_type.text os['arch'] = os_type.get('arch', self._connection.host_info.arch) os_kernel = tree.find('./os/kernel') if os_kernel is not None: os['kernel'] = os_kernel.text os_initrd = tree.find('./os/initrd') if os_initrd is not None: os['initrd'] = os_initrd.text os_cmdline = tree.find('./os/cmdline') if os_cmdline is not None: os['cmdline'] = os_cmdline.text os_boot = tree.find('./os/boot') if os_boot is not None: os['boot_dev'] = os_boot.get('dev') definition['os'] = os features = {} acpi = tree.find('./features/acpi') if acpi is not None: features['acpi'] = True definition['features'] = features devices = {} device_nodes = tree.find('./devices') if device_nodes is not None: disks_info = [] disks = device_nodes.findall('./disk') for disk in disks: disks_info += [_parse_disk_info(disk)] devices['disks'] = disks_info nics_info = [] nics = device_nodes.findall('./interface') for nic in nics: nic_info = {} nic_info['type'] = nic.get('type') mac = nic.find('./mac') if mac is not None: nic_info['mac'] = mac.get('address') source = nic.find('./source') if source is not None: if nic_info['type'] == 'network': nic_info['source'] = source.get('network') elif nic_info['type'] == 'bridge': nic_info['source'] = source.get('bridge') nics_info += [nic_info] devices['nics'] = nics_info definition['devices'] = devices return definition def create(self): self.createWithFlags(0) def createWithFlags(self, flags): # FIXME: Not handling flags at the moment self._state = VIR_DOMAIN_RUNNING self._connection._mark_running(self) self._has_saved_state = False def isActive(self): return int(self._state == VIR_DOMAIN_RUNNING) def undefine(self): self._connection._undefine(self) def isPersistent(self): return True def undefineFlags(self, flags): self.undefine() if flags & VIR_DOMAIN_UNDEFINE_MANAGED_SAVE: if self.hasManagedSaveImage(0): self.managedSaveRemove() def destroy(self): self._state = VIR_DOMAIN_SHUTOFF self._connection._mark_not_running(self) def ID(self): return self._id def name(self): return self._def['name'] def UUIDString(self): return self._def['uuid'] def interfaceStats(self, device): return [10000242400, 1234, 0, 2, 213412343233, 34214234, 23, 3] def blockStats(self, device): return [2, 10000242400, 234, 2343424234, 34] def setTime(self, time=None, flags=0): pass def suspend(self): self._state = VIR_DOMAIN_PAUSED def shutdown(self): self._state = VIR_DOMAIN_SHUTDOWN self._connection._mark_not_running(self) def reset(self, flags): # FIXME: Not handling flags at the moment self._state = VIR_DOMAIN_RUNNING self._connection._mark_running(self) def info(self): return [self._state, int(self._def['memory']), int(self._def['memory']), self._def['vcpu'], 123456789] def migrateToURI(self, desturi, flags, dname, bandwidth): raise make_libvirtError( libvirtError, "Migration always fails for fake libvirt!", error_code=VIR_ERR_INTERNAL_ERROR, error_domain=VIR_FROM_QEMU) def migrateToURI2(self, dconnuri, miguri, dxml, flags, dname, bandwidth): raise make_libvirtError( libvirtError, "Migration always fails for fake libvirt!", error_code=VIR_ERR_INTERNAL_ERROR, error_domain=VIR_FROM_QEMU) def migrateToURI3(self, dconnuri, params, logical_sum): raise make_libvirtError( libvirtError, "Migration always fails for fake libvirt!", error_code=VIR_ERR_INTERNAL_ERROR, error_domain=VIR_FROM_QEMU) def migrateSetMaxDowntime(self, downtime): pass def migrateSetMaxSpeed(self, bandwidth): pass def attachDevice(self, xml): disk_info = 
_parse_disk_info(etree.fromstring(xml))
        disk_info['_attached'] = True
        self._def['devices']['disks'] += [disk_info]
        return True

    def attachDeviceFlags(self, xml, flags):
        if (flags & VIR_DOMAIN_AFFECT_LIVE and
                self._state != VIR_DOMAIN_RUNNING):
            raise make_libvirtError(
                libvirtError,
                "AFFECT_LIVE only allowed for running domains!",
                error_code=VIR_ERR_INTERNAL_ERROR,
                error_domain=VIR_FROM_QEMU)
        self.attachDevice(xml)

    def detachDevice(self, xml):
        disk_info = _parse_disk_info(etree.fromstring(xml))
        disk_info['_attached'] = True
        return disk_info in self._def['devices']['disks']

    def detachDeviceFlags(self, xml, flags):
        self.detachDevice(xml)

    def setUserPassword(self, user, password, flags=0):
        pass

    def XMLDesc(self, flags):
        disks = ''
        for disk in self._def['devices']['disks']:
            if disk['type'] == 'file':
                source_attr = 'file'
            else:
                source_attr = 'dev'
            disks += '''<disk type='%(type)s' device='%(device)s'>
      <driver name='%(driver_name)s' type='%(driver_type)s'/>
      <source %(source_attr)s='%(source)s'/>
      <target dev='%(target_dev)s' bus='%(target_bus)s'/>
    </disk>''' % dict(source_attr=source_attr, **disk)

        nics = ''
        for nic in self._def['devices']['nics']:
            nics += '''<interface type='%(type)s'>
      <mac address='%(mac)s'/>
      <source %(type)s='%(source)s'/>
    </interface>''' % nic

        return '''<domain type='kvm'>
  <name>%(name)s</name>
  <uuid>%(uuid)s</uuid>
  <memory>%(memory)s</memory>
  <currentMemory>%(memory)s</currentMemory>
  <vcpu>%(vcpu)s</vcpu>
  <os>
    <type>hvm</type>
  </os>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/bin/kvm</emulator>
    %(disks)s
    %(nics)s